The ARC Prize organization designs benchmarks specifically crafted around tasks that humans complete easily but that remain difficult for AIs such as LLMs, “reasoning” models, and agentic frameworks.
ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. ARC-AGI-3 represents hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what’s next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.
You can try the tasks yourself here: https://arcprize.org/arc-agi/3
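To make “fully interactive” concrete: an agent gets raw observations and a small action space, and nothing else. Below is a minimal toy sketch of that explore-act-observe loop; every name in it is hypothetical, and it is not the actual ARC-AGI-3 API, just the shape of the problem.

```python
import random

# Hypothetical action set; the real ARC-AGI-3 harness defines its own
# (and richer) action space.
ACTIONS = ["up", "down", "left", "right"]

class ToyEnvironment:
    """Stand-in environment: the (unstated) goal is to reach cell (2, 2)
    on a 3x3 grid. Purely illustrative, not a real ARC-AGI-3 task."""
    def __init__(self):
        self.pos = [0, 0]

    def act(self, action):
        dr, dc = {"up": (-1, 0), "down": (1, 0),
                  "left": (0, -1), "right": (0, 1)}[action]
        self.pos[0] = min(2, max(0, self.pos[0] + dr))
        self.pos[1] = min(2, max(0, self.pos[1] + dc))
        # Returns (observation, solved?); the agent is never told the goal.
        return tuple(self.pos), self.pos == [2, 2]

def explore(env, max_actions=100):
    # No instructions, no rules, no stated goal: all the agent can do is
    # try actions and watch what changes. Random policy here for brevity;
    # a real agent would build a model of the environment instead.
    for _ in range(max_actions):
        _, solved = env.act(random.choice(ACTIONS))
        if solved:
            return True
    return False

print(explore(ToyEnvironment()))  # usually True within 100 random moves
```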
Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:
- OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
- Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
- Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
- xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

(Logarithmic cost on the horizontal axis. Note that the vertical scale goes from 0% to 3% in this graph. If human scores were included, they would be at 100%, at the cost of approximately $250.)
https://arcprize.org/leaderboard
Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf
In order for an environment to be included in ARC-AGI-3, it needs to pass the minimum “easy for humans” threshold. Each environment was attempted by 10 people. Only environments that could be fully solved by at least two human participants (independently) were considered for inclusion in the public, semi-private, and fully-private sets. Many environments were solved by six or more people. As a reminder, an environment is considered solved only if the test taker was able to complete all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.
I can’t see AI actually being intelligent until it no longer needs to send a built-up prompt of guides and skills and the chat history on every submission.
It’s no different from Alexa 15 years ago with skills. Just a better protocol and interface and ability to parse the current user prompt.
In my opinion of course.
Right? I have a Google Home Mini in our kitchen and if we ask it a question it just pulls a source from a website and tells us. That’s it. Nothing intelligent about it.
AI now is no different. It’s just pulling more complex wording from more sources (albeit illegally obtained ones) to give a better (albeit sometimes incorrect) description of the question asked.
AI is just as stupid as Alexa is/was 15 years ago. It just has more information to pull from and still fucks it up.
I tend to be anti-AI because it doesn’t seem to me to be anything other than a super fast regurgitator of data. If a database can be searched for an answer, AI can do that faster than a human. However, it doesn’t seem to be able to take some portion of that database, understand it, and then use that information to solve a novel problem.
Well… It cannot even search databases without errors.
LLMs just produce plausible replies in natural language very quickly, and this is useful in certain situations. Sometimes it helps humans get started with a task, but as it is now, it cannot replace them, however much the capital class wants it to and sinks our money into it.
The better setups generate “semantic embeddings” that try to map how stored pieces of data relate to each other (by mapping how each relates within the model’s own weights and biases). That, plus knowledge-graph lookups, in which the links between different articles of data are evaluated in the same way.
The very expensive LLM portion of that setup really does just give a rough approximation of the information in natural language.
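To make the embedding part concrete: the idea is to turn each piece of text into a vector so that “related” means “nearby”. A minimal sketch, with made-up three-dimensional vectors standing in for real embedding-model output:

```python
import numpy as np

# Toy document vectors; real embedding models produce hundreds or
# thousands of dimensions. These numbers are invented for illustration.
docs = {
    "cats are small felines":      np.array([0.9, 0.1, 0.0]),
    "dogs are loyal canines":      np.array([0.8, 0.2, 0.1]),
    "the stock market fell today": np.array([0.0, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way, ~0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, k=1):
    # Rank stored documents by similarity to the query vector;
    # in a full setup an LLM is then prompted with the top matches.
    ranked = sorted(docs, key=lambda d: cosine_similarity(query_vec, docs[d]),
                    reverse=True)
    return ranked[:k]

# A query vector "about pets" lands nearest the animal documents.
print(retrieve(np.array([0.85, 0.15, 0.05])))
```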
Yes, the key thing is it might have extracted useful info from otherwise confusing data, it might have mixed up info from the data incorrectly or it might have just made it up.
So it can be useful, if you can then validate the info provided in more traditional means, but it’s dubious as a first pass, and sometimes surprisingly bad when it’s a scenario you thought it would work well at.
I know lemmy’s very anti-ai but this is really fascinating stuff.
We’re anti-AI because AI is fucking stupid. Both literally and figuratively.
Tell me again how AGI is just around the corner, Sam
just when he had to shut down sora, because making ai videos is too expensive.
When Sammy fuck says “we’re so close to AGI, I can just feel it. Like a tingle on the tip of my shrimpdick it’s getting so close to blossoming into something guys”, just ignore him. He’s crazy man!
"Like a tingle on the tip of my shrimpdick it’s getting so close to blossoming into something guys”
Wow, that really is something. XD
to be fair, he’s not human so he’s just guessing based on his observations of earth as a demon
machines will be able to ‘think like humans’ when it happens
Maybe AGI is just a brain-destroying pandemic?
Try spelling things phonetically (example: faux net tick alley), that’s one of my benchmarks that AI fails almost every time.
If the input is at all long, or purposefully includes a lot of words about a specific unrelated theme to the coded message, it’s impossible.
Wait, I thought spelling phonetically (example: papa hotel oscar november echo tango india charlie alfa lima lima yankee) meant using a phonetic alphabet, not using word(s) with the same Soundex encoding.
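For reference, Soundex is a real (and very old) phonetic-hashing algorithm: words that sound alike get the same short code. A compact sketch of classic American Soundex, slightly simplified (it skips the full spec’s special handling of H and W):

```python
def soundex(word):
    """Classic American Soundex, simplified: keep the first letter, map
    remaining consonants to digits, drop vowels, collapse adjacent
    duplicate codes, pad/truncate to 4 characters."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    result, prev = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            result += code
        prev = code
    return (result + "000")[:4]

# Words that sound alike hash alike:
print(soundex("phonetically"))  # P532
print(soundex("fonetikally"))   # F532 -- differs only in the kept first letter
```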
Yeah, there was some phonics in my primary school education, and I continue to approach new words in that way sometimes. But, they said Phonetically.
Oh that’s an interesting challenge.
I hear some LLMs now have solutions for the classic “how many Rs in ‘strawberry’” problem (related to the tokenization process), but I have no idea how they might solve the phonetic thing. I’m sure some smart people will eventually find a way though.
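The tokenization point is easy to see directly with OpenAI’s tiktoken library; the exact splits depend on the encoding, so treat the output as illustrative:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
# The model never sees individual letters, only these chunks,
# which is part of why letter-counting questions trip it up.
print([enc.decode([t]) for t in tokens])  # e.g. ['str', 'aw', 'berry']
```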
Grok Reasoning: 0%
Hilarious
Grok isn’t designed to solve problems. It’s designed to create sexually explicit images of children for Republicans…
Reasoning is woke propaganda anyway.
Well, yeah, it’s very good at making weird porn clips though. If anyone wants some very odd entertainment, go to /gif/ on 4chan and look at the recurring “/gg/ grok gens” threads. There’s everything from actually impressive and hot videos to the weirdest and most fucked up shit ever, it’s weirdly fun. Never seen anything really bad there, like CP etc., so I can comfortably recommend it for the lols.
This replay is the funniest shit lmao. Keep building that bridge Claude.
https://arcprize.org/replay/0964128b-a2f5-4c5b-886e-497d893f429d
Interesting that it seems to be perceiving the environment mostly accurately, and is just completely wrong about the purpose of all the game objects.
I couldn’t find replays. Are there more? Also, it is a bit funny that “building the bridge” which at one point seems to be Claude’s “chosen goal” is just “running out of moves” and failing the task.
Task failed successfully, Claude. Task failed, successfully.
There’s a column linking to replays in the table of tasks here: https://arcprize.org/tasks
Here’s another replay where the model mistakes running out of time/moves for making progress.
Its reasoning log is so fucking funny.
It’s almost as if a chatbot isn’t actually thinking.
I can thoroughly recommend “A Brief History of Intelligence” (by Max Bennett), which explains how intelligence developed in steps through evolution, what those steps were, etc.
Spatial intelligence requires spatial understanding and it’s not something that can be solved through a large language model, IMHO.
I’m excited to see how these are solved. And I’m terrified to see how these will be solved.
It’s fun to point at the crappy performance of current technology. But all I can think about is the amount of power and hardware the AI bros are going to burn through trying to improve their results.
Funnier yet will be if they continue to just train the model on that particular kind of test, invalidating its results in the process.
As a psychiatrist, I have a theory about what’s missing in AI. First, it lacks childhood dependency and attachments. Second, it struggles to overcome repeated pain and suffering. Third, it lacks regular eating and restroom breaks. Fourth, it struggles to accept loss in everyday situations. Finally, it lacks the concept of our inevitable death. Without these nagging memories and concepts, machines will simply revert to the simpler concepts we use them for in our recent times, such as stealing cryptocurrency. After all, we live in a world run by capitalism, so it’s only logical. ¯\_(ツ)_/¯
Here is a way of describing what I see as ‘the problem’:
An LLM cannot forget things in its base training data set.
Its permanent memory… is totally permanent.
And this memory has a bunch of wrong ideas, a bunch of nonsensical associations, a bunch of false facts, a bunch of meaningless gibberish.
It has no way of evaluating its own knowledge set for consistency, coherence, and stability.
It literally cannot learn and grow, because it cannot realize why it made mistakes, and it cannot permanently discard or amend concepts that are incoherent, faulty ways of reasoning about (associating) things.
Seriously, ask an LLM a trick question, then tell it it was wrong, explain the correct answer, then ask it to determine why it was wrong.
Then give it another similar category of trick question, but that is specifically different, repeat.
The closer you try to get it toward reworking a fundamental axiom it holds that is flawed, the closer it gets to responding in totally paradoxical, illogical gibberish, or getting stuck in some kind of repetitive loop.
… Learning is as much building new ideas and experiences, as it is reevaluating your old ideas and experiences, and discarding concepts that are wrong or insufficient.
Biological brains have neuroplasticity.
So far, silicon ones do not.
As a technologist, I have to remind everyone that AI is not intelligence. It’s a word prediction/statistical machine. It’s guessing at a surprisingly good rate what words follow the words before it.
It’s math. All the way down.
We as humans have simply taken these words and have said that it is “intelligence”.
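That “statistical machine” description can be made concrete: at each step the model assigns a score to every word in its vocabulary, converts the scores to probabilities, and samples one. A toy sketch with invented numbers:

```python
import numpy as np

# Invented vocabulary and scores (logits) for the context
# "The cat sat on the ..."; a real model has ~100k entries here.
vocab  = ["mat", "moon", "dog", "roof"]
logits = np.array([4.0, 0.5, 1.0, 2.5])

# Softmax: exponentiate and normalize so the scores become probabilities.
probs = np.exp(logits) / np.exp(logits).sum()

# Sample the next word in proportion to its probability.
next_word = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```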
A few of the countless dictionary definitions of intelligence:
- The ability to acquire, understand, and use knowledge.
- The ability to learn or understand or to deal with new or trying situations
- The ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests)
- The act of understanding
- The ability to learn, understand, and make judgments or have opinions that are based on reason
- It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.
There isn’t even consensus on what intelligence actually means, yet here you are declaring “AI is not intelligence”, whatever that even means.
Artificial intelligence is a term in computer science that describes a system able to perform tasks that would normally require human intelligence. An Atari chess engine is an intelligent system. It’s narrowly intelligent, as opposed to humans, who are generally intelligent, but it’s intelligent nevertheless.
As a therapist, I can tell you the only thing holding LLMs back from true intelligence is having to pee and poop. Peeing and pooping is the foundation of all higher level operations. I poured water on my PC and the LLM I was running said “I think” right before committing suicide
I was arguing against it being an intelligence because it lacked the suffering and past experiences that define intelligence. Without pain and suffering, what are we? Not for it being intelligent.
I think you’re conflating intelligence and consciousness. Pain and suffering requires consciousness but intelligence does not imply pain or suffering or happiness. LLMs are already “intelligent” to a certain degree in some aspects, though not generally intelligent like humans. But there is no reason to believe that you couldn’t have a generally intelligent artificial agent that lacks consciousness and thus can feel no pain or suffering.
As another technologist, I have to remind everyone that unless you subscribe to some rather fringe theories, humans are also based on standard physics.
Which is math. All the way down.
As a philosopher, I have to remind you that humans invented math and physics to model reality.
Humans are not based on physics or math. That would be like saying the earth is based on a globe.
As a mathematician, it should be noted that the mathematics of physics aren’t laws of the universe, they are models of the laws of the universe. They’re useful for understanding and predicting, but are purely descriptive, not prescriptive. And as they say, all models are wrong, but some are useful
As a random person on the Internet I don’t actually have anything to add but felt it would be nice to jump in.
That’s true, but that doesn’t contradict the above comment. Unless you believe in something like a spirit or soul, you must concede that human intelligence ultimately arises from physical matter (whatever your model of physics is). From what we know of science right now, there are no direct reasons for thinking that true intelligence or even consciousness is limited to biological organisms based on carbon and could not arise in silicon.
It could also be that it lacks the machinery to feel any emotions at all. You don’t (normally) have to train people to be afraid of bears or heights or loneliness or boredom. You also don’t (normally) have to train people to have empathy or compassion.
I argue that our obsession with AI is, itself, a misalignment with our environment; it disproportionately tickles psychological reward centers which evolved under unrecognizably different circumstances.
> You don’t (normally) have to train people to be afraid of bears or heights or loneliness or boredom. You also don’t (normally) have to train people to have empathy or compassion.
So what are you implying about people who don’t experience these?
What am I implying? That their machinery is abnormal and they likely need assistance to live normal, healthy lives. That’s literally why the fields of psychiatry and psychology exist: healthy people don’t need doctors and therapists. Do you disagree?
Introverts exist, and are… very often fine with solitude, prefer it generally over socializing.
But they are generally fine at participating in society and living normal lives.
Healthy people… do need doctors … and therapists.
A person can outwardly appear to be healthy… and actually not be.
Preventative medicine, regular checkups, your body changes as you grow, and habits you develop in your youth may need significant reworking.
Therapy can give otherwise healthy people a method of exploring their inner selves more fully or more consistently… they can teach them frameworks for understanding and dealing with other kinds of people, for being better able to deal with kinds of trauma they have not yet experienced.
Also… same with physical health… people with some nascent mental problems or patterns forming… probably won’t be obvious to a non-specialist until it gets more severe.
> Introverts exist, and are… very often fine with solitude, prefer it generally over socializing.
Definitely! I am one :) but I still desire the presence of friends from time to time (and usually in small groups).
> A person can outwardly appear to be healthy… and actually not be.
Yup! There’s always a nonzero chance you’re not as healthy as you think you are (let’s call it the quantum theory of health: everyone is in a superposition of being both healthy and unhealthy at the same time), especially as we change due to age, making us unfamiliar with our own bodies… I’d tell you about my own challenges here, but that’d be TMI.
And, yes, that’s why we go to regular checkups with someone who has a better perspective to judge “healthiness” (side note: doctors aren’t perfect, so visiting them too frequently can be worse than never at all; there’s a “healthy” cadence to checkups).
> Therapy can give otherwise healthy people a method of exploring their inner selves more fully or more consistently…
This boils down to the definition of “healthy”. It even becomes a philosophical question that’s really hard to answer… Is it healthy to live a sedentary lifestyle? Is it healthy to exercise too much? Is it healthy to not know TIPP, in case you (or a loved one) get a panic attack? Is it healthy to ignore yourself? Ignore others? Is it healthy to mention quantum superposition in a conversation about health? ;)
But, yes, I agree. Life’s as messy and diverse and as hard to sum up as everybody who’s ever lived, and yet we carry on… I hope that’s healthy.
Edit: typo, and missing a hint that I’m making a joke about me over-generalizing physics concepts
My entire point is that you are just overgeneralizing, in general, and saying rather silly things.
Fair enough; the Internet is a silly place full of distracted, armchair philosophers. However, my entire point was that an LLM doesn’t rely on machinery in the same way that a human brain does. That doesn’t make AI “worse” or “better” overall, but it does make it an awful replacement for humans.
Boring game…
Not the point
The humans literally didn’t score 100% though. Why lie?
You can really only judge the fairness of the score if you understand the scoring criteria. It is a relative score where the baseline is 100% for humans, i.e., a task was only included in the challenge if at least two people in the panel of humans were able to solve it completely, and their action count is a measure of efficiency. This baseline is used as the point of comparison.
From the Technical Report:
The procedure can be summarized as follows:
• “Score the AI test taker by its per-level action efficiency” - For each level that the test taker completes, count the number of actions that it took.
• “As compared to human baseline” - For each level that is counted, compare the AI agent’s action count to a human baseline, which we define as the second-best human action count. Ex: If the second-best human completed a level in only 10 actions, but the AI agent took 100 to complete it, then the AI agent scores (10/100)^2 for that level, which gets reported as 1%. Note that level scoring is calculated using the square of efficiency.
• “Normalized per environment” - Each level is scored in isolation. Each individual level will get a score between 0% (very inefficient) and 100% (matches or surpasses human-level efficiency). The environment score will be a weighted average of level scores across all levels of that environment.
• “Across all environments” - The total score will be the sum of individual environment scores divided by the total number of environments. This will be a score between 0% and 100%.

So the humans “scored 100%” because that is the baseline by definition, and the AIs are evaluated on how close they got to human correctness and efficiency. A score of 0.26% is 1/0.0026 ≈ 385 times less efficient (and correct) compared to humans.
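Put as code, the per-level rule quoted above looks roughly like this (my own sketch of the report’s formula; the report says level scores are weight-averaged per environment without giving the weights, so equal weighting is assumed here):

```python
def level_score(human_actions, agent_actions):
    # Per the report: efficiency is the second-best human action count
    # divided by the agent's, capped at 100%, then squared.
    efficiency = min(1.0, human_actions / agent_actions)
    return efficiency ** 2

def environment_score(levels):
    # levels: list of (second_best_human_actions, agent_actions) pairs;
    # unfinished levels would count as 0. Equal weighting assumed.
    return sum(level_score(h, a) for h, a in levels) / len(levels)

# The report's worked example: humans needed 10 actions, the agent 100.
print(level_score(10, 100))  # 0.01, i.e. the 1% from the report
```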
ARC-AGI-3
What happened to ARC-AGI-1 and -2?
https://arcprize.org/arc-agi/1
https://arcprize.org/arc-agi/2
(They were more static, but yes, eventually frontier models got good at them.)
It’s true that frontier models got better at the previous challenges, but it’s worth noting that they’re still not quite at human level even with those simpler tasks.
Also, each generation of the challenge tries to close loopholes that newer models would exploit, like brute-forcing the training with tons of synthesized tasks and solutions, over-fitting to these particular kinds of tasks, and issues with the similarities between the tasks in the challenge.
A common strategy in past challenges was to generate thousands of similar tasks, and you can imagine the big AI companies were able to do that at massive scale for their frontier models.
AI won them
Ok. I’m sure AGI-1 and AGI-2 weren’t real AGI. What number is the real AGI? Is it AGI-3?
Probably AGI-42
> If human scores were included, they would be at 100%, at the cost of approximately $250
Wait, why did it cost real humans $250 to pass the test?
It’s also an odd metric since only 20-60% of the humans completed each task. Very “60% of the time, it works every time” energy.
Ideally they’d run the bots through multiple times (with no context or training from previous runs), but I guess that is cost-prohibitive?
Yeah, this is what I was going to call out. Calling it “100% solvable by humans” and saying “if human scores were included, they would be at 100%” when 20-60% of humans solved each task seems kinda misleading. The AI scores are so low that I don’t think this kind of hyperbole is necessary; I assume there are some humans that scored 100%, but I would find it a lot more useful if they said something like “the worst-performing human in our sample was able to solve 45% of the tasks” or whatever. Given that the AIs are still scoring below 1%, that’s still pretty dark.
I assume it’s an hourly wage or something. Just because humans can work for free if they choose, doesn’t mean they have no cost associated with them. Just like a company could choose to give away unlimited tokens, those tokens still have a standard cost.