Like they know what they're doing now. They combine 1-5TB of tokens with random numbers. "Losing" understanding implies they ever had it, which they didn't.
This is the sort of propaganda these companies leak as if it were a problem, when what they really want is to perpetuate the idea that their autocomplete machines are magical and mysterious.
So, this is where the researchers are acknowledging that they are choking on the smell of their own farts.
"However, there are still a lot of questions about how these advanced models are actually working. Some research has suggested that reasoning models may even be misleading users through their chain-of-thought processes."
Bull fucking shit. Making something complex enough to be unpredictable doesn't mean it is intentional. Reasoning models are not reasoning; they are outputting something that looks like reasoning. There is no intent behind it to mislead or do anything on purpose. It just looks that way because the output is in a format that resembles a person writing down their thoughts.
Mage: The Ascension did permanent damage to the programming community.
"Some research has suggested that reasoning models may even be misleading users through their chain-of-thought processes."
Sure, let’s put this loose into the public’s hands. It’ll be fine…
“losing”?!?!
Meaningless grifter nonsense.
On the one hand, it's relatively simple to understand how data is processed.
On the other hand, it's impossible for these grifters to "understand" a function developed by processing endless stolen data.
“may be losing funding for explainable AI”
They actually believe their own press and think their “reasoning” models say anything meaningful when they “explain” their “reasoning”?
Wow! I could be a top AI researcher too, and I don't even know how to begin programming a computer!