Chapter 3. Intelligence
-
Book Excerpt from "The Sweetest Lesson: Your Brain vs AI"
-
by David Spuler, Ph.D.
“Software is eating the world,
but AI is going to eat software.”
— Jensen Huang, May 2017.
Unintelligent AI
The big goal in AI research is called “Artificial General Intelligence,” or “AGI” for short. This refers to having an AI engine that is smart enough to match the general level of human intelligence. Estimates vary as to how many years it will take to reach AGI. But they all agree on one thing:
We’re not there yet.
Sometimes, it seems like we are already at that level. The AIs are so great at mimicking human-like text that it often feels like we’re talking to a real human. But it’s just human nature to “project” ourselves onto a non-human algorithm. In reality, an AI model has these properties:
- LLMs have no “feelings” or “empathy” (it’s just fake words).
- LLMs don’t “care” if they make mistakes (even when you point them out!).
Your AI buddy doesn’t have real feelings for you, but can certainly write words that make it seem that way, because that’s how it’s been trained. This type of training is called “alignment” or “conversational AI.”
An LLM also feels no real “embarrassment” if you point out its mistakes, although, again, it has been trained to handle this type of input situation by saying words like “oops” or “sorry” or whatever. Your challenge to an AI engine about its answer feels no different to it from any other input prompt because, well, it feels nothing either way. Also, it doesn’t matter if you say “please” and “thank you” to an AI engine. Lots of people do, but it’s just extra words to the LLM.
It’s a machine, and there’s literally no feelings anywhere. AI engines really are an “alien intelligence” and not very much like us.
Similarities
How are brains and LLMs similar? At a high level, there is a great deal of functional and structural similarity in how they perform tasks:
- Fast thinking and slow thinking modes
- Great at pattern recognition
- Nobody understands how they work!
A lot of the limitations of LLMs read like they were written about humans:
- Bad at arithmetic and math (without help from a calculator).
- Have biases, toxicity, and other unwelcome aspects.
- Can’t do crossword puzzles well (especially cryptics!).
- Don’t like anagrams and word scrambles.
- Forget stuff they were told earlier.
- Need training to be able to work with Windows.
It’s like an endless series of memes. LLMs are so much like us!
Differences
Sure, one is carbon-based and one is silicon-based. Other than that, they’re the same? Well, no, there are other distinctions between your brain and AI.
The differences show up more in the broader structure of how a brain and an LLM are used, since the low-level structure of both is a neural network.
How are they different in this higher-level functioning? Here are some areas of distinction between you and an LLM:
- LLMs are more specialized (e.g., at writing).
- Fast LLMs can process vast reams of information.
- LLMs are integrated digitally to other systems.
- Brains have a human body attached.
You might think it an advantage to have a calculator attached rather than a glob of pulsating cells, but there’s a whole research area that says AIs won’t achieve human-level intelligence without having a body attached. It’s called “embodied AI,” and it predicts that AIs will need to learn about 3D environments by exploring them physically and being trained on that experience.
But if you want to know the main difference between LLMs and human brains: they’re dumber than us. They can perform some impressive brute-force calculations, and they’re great at writing documents or creating images, but they also make some simple mistakes.
AI Thinking Limitations
There’s a long list of AI limitations, including some I mentioned above, but let’s focus on things that humans can do but LLMs cannot. Despite all the PR hype, the AIs are not that great at:
- Thinking generally (“generalization”)
- Conceptual thinking
- Learning on-the-fly
- Continual, incremental learning
Humans can do this stuff in their sleep, literally. Children are just sponges absorbing information. To understand how poor LLMs are at learning, consider a recent study at Apple by Shojaee et al. (June 2025). They asked some of the best frontier models to solve some abstract puzzles and got some human-like results: as the puzzles got harder, the LLMs failed more. This is actually worse than it sounds, because the failures showed that the models weren’t using very general reasoning methods, but were mostly relying on pattern recognition.
Let’s give them the benefit of the doubt. We’ll call it a tie.
Anyway, here’s the real kicker: the folks at Apple also did another test where they told the AI exactly how to solve the puzzle. They put the answer in the question, with detailed instructions on how the puzzle can be solved. All the AI needed to do was read the answer, follow the instructions, and it would be done.
It made no difference.
The LLMs were completely unable to use the answer to help solve the puzzle. They just couldn’t map it to their thought pattern, and were completely unable to learn. Zero on-the-fly learning capability. Were they just being stubborn?
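For context, one of the puzzle families in the Apple study was the classic Tower of Hanoi. The entire solution procedure is tiny; here is a minimal Python sketch of the standard recursive algorithm, roughly the kind of step-by-step recipe that was handed to the models (the code itself is my own illustration, not taken from the paper):

```python
def solve_hanoi(n, source, target, spare, moves):
    """Standard Tower of Hanoi recursion: move n disks from source to target."""
    if n == 0:
        return
    solve_hanoi(n - 1, source, spare, target, moves)  # park n-1 disks on the spare peg
    moves.append((source, target))                    # move the biggest disk across
    solve_hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top

moves = []
solve_hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)  # 7 moves for 3 disks; 2**n - 1 moves in general
```

A mechanical recipe like this solves any number of disks, which is what makes the result so striking: even with the procedure spelled out in the prompt, the models still collapsed as the puzzles grew.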
I mean, there are humans that don’t read the instructions when building flat-packed furniture, and others who won’t ask the grocery store clerk where to find the I Can’t Believe It’s Not Buffer on the reach-in cooler shelves. But I think anyone else would be happy to take the answer in front of them and use it to solve a complex question. Teachers who put the answers in every question on their surprise math quiz would be immensely popular with their students, although perhaps less so with the School Board.
Weird Problems
There are some very weird things going on inside the brain of your average LLM. Here are some of the things you might see:
- Making stuff up that looks plausible but is false (“hallucinations” or “confabulations”).
- Repeating the same things again that it already told you.
- Never saying it’s run out of ideas.
- Never answering “I don’t know” ever, ever, ever.
It’s like a good friend who can’t ever admit they’re wrong. They make up stuff rather than backing down in an argument. But even in that case, your human friend is better than the LLM, because the AI:
Doesn’t know it’s lying.
In fact, it has no feelings about it at all. The LLM is just doing its best to spit out the words that look the most like a good answer. You can challenge it, in which case it can often spot its own mistakes, but it isn’t embarrassed by this. I mean, it might be trained to output a few sheepish words when a mistake is found, but it doesn’t really know what that means either. It’s all just meaningless sequences of words.
Specific Thought Problems
The above issues concern very deep and general aspects of thinking, and of what thinking even means. Going deeper, there are specific types of thinking that LLMs are poor at:
- Common sense
- Empathy (the real kind, not the smarmy parroting)
- Understanding the “human condition”
- Senses, textures, and touch
- 2D spatial understanding
- 3D environment understanding
- Large search spaces (e.g., chess games)
Common sense is about a thousand little things. For example, there was a cyclone coming toward my hometown recently, so I asked an AI for an update. It came back with a news report from about two years ago and pleasantly summarized it as an update for me.
That’s not common sense.
If you know what a cyclone is, then you understand that I want recent information only. It’s like if I ask for the Super Bowl score, I don’t want one from 1957. In this case, I think you’ll find that the AI gets it correct for the Super Bowl, because this is a fixable problem. It’s been trained a lot on Super Bowls, and on what sorts of questions people ask, but hasn’t been trained much about cyclones, because there are “hurricanes” in the Northern Hemisphere rather than the “cyclones” we have in Australia (they spin in opposite directions).
I was going to put “reasoning” on that list of things AI is poor at, but maybe it’s no longer very true. It’s still partially true, because LLMs remain bad at “temporal reasoning” about time and cause-and-effect, and also at “spatial reasoning” in 2D or 3D environments. However, they’re now amazing at mathematical proofs and other scientific reasoning, not to mention that they’ve gotten better at word puzzles and other meta-cognition about language. There has been an immense amount of research into “reasoning models,” starting with the OpenAI o1 “Strawberry” release in September 2024. So the AI industry is moving fast, and some of these limitations can quickly become solved problems.
Solved Problems
Some capabilities are no longer being mentioned as limitations of AI outputs:
- Basic grammar and punctuation
- Instruction following
- Conversational capabilities
- Foreign language outputs
Some of the more technical capabilities include:
- Image file formats (e.g., JPEG).
- Output formatting correctness (e.g., tables or HTML).
- Character-encoding issues (e.g., emojis and other Unicode characters in UTF-8; see the short example after this list).
- Programming language outputs (e.g., Python or C++).
- Processing columns of numbers (like Excel).
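To make the encoding point concrete, here is a tiny illustrative Python snippet (my own example, not taken from any particular AI system) showing why emojis used to cause trouble: one visible character becomes several bytes once encoded as UTF-8:

```python
s = "🙂"                                       # one Unicode character (code point U+1F642)
print(len(s))                                  # 1 code point
print(s.encode("utf-8"))                       # b'\xf0\x9f\x99\x82' -- four bytes in UTF-8
print(s.encode("utf-8").decode("utf-8") == s)  # True: encoding and decoding round-trips cleanly
```

Confusing characters, code points, and bytes is exactly the sort of slip that used to produce mangled emojis in AI outputs, and it is now rarely an issue.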
Meta-cognition problems in AI that are largely solved:
- Tone of writing (e.g., optimistic versus negative, casual versus formal).
- Style of writing (whatever that may mean).
- Knowing when to stop (e.g., using “stop tokens”; a decoding-loop sketch follows this list).
- Following meta-requests about outputs (e.g., word counts, paragraph lengths, etc.).
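On the “knowing when to stop” point, here is a minimal sketch of how stop tokens work: a decoding loop that repeatedly asks the model for its next token and halts when a designated end-of-sequence token appears. The model object and the token values are hypothetical placeholders for illustration, not any specific library’s API:

```python
EOS_TOKEN_ID = 2        # hypothetical end-of-sequence ("stop") token ID
MAX_NEW_TOKENS = 200    # safety cap in case the model never emits a stop token

def generate(model, prompt_tokens):
    """Greedy decoding loop that halts when the model emits the stop token."""
    tokens = list(prompt_tokens)
    for _ in range(MAX_NEW_TOKENS):
        next_token = model.predict_next_token(tokens)  # hypothetical next-token call
        if next_token == EOS_TOKEN_ID:
            break                                      # the model has "decided" it is finished
        tokens.append(next_token)
    return tokens
```

Modern LLMs reliably emit these stop tokens at sensible points, which is why runaway, never-ending outputs are mostly a thing of the past.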
The list of solved problems is getting longer with every major release.
References on AI Intelligence
Research on the nature of intelligence and its relationship to AI:
- Maryville University Online, June 6, 2024, Artificial Intelligence vs. Human Intelligence, https://online.maryville.edu/blog/ai-vs-human-intelligence/
- David De Cremer and Garry Kasparov, March 18, 2021, AI Should Augment Human Intelligence, Not Replace It, https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it
- Korteling JEH, van de Boer-Visschedijk GC, Blankendaal RAM, Boonekamp RC, Eikelboom AR, 2021, Human- versus Artificial Intelligence, Front Artif Intell. 2021 Mar 25;4:622364. doi: 10.3389/frai.2021.622364. PMID: 33981990; PMCID: PMC8108480, https://pmc.ncbi.nlm.nih.gov/articles/PMC8108480/, PDF: https://pmc.ncbi.nlm.nih.gov/articles/PMC8108480/pdf/frai-04-622364.pdf (Good article on the nature of intelligence.)
- Shneiderman, B., 2020, Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy, International Journal of Human–Computer Interaction, 36(6), 495–504, https://doi.org/10.1080/10447318.2020.1741118, https://www.tandfonline.com/doi/full/10.1080/10447318.2020.1741118
Research on the limitations in AI’s version of intelligence:
- Eli Amdur, Nov 25, 2023, Jobs AI Just Can’t Do, Forbes https://www.forbes.com/sites/eliamdur/2023/11/25/jobs-ai-just-cant-do/
- Bernard Marr, Nov 28, 2024, AI Won’t Replace Humans – Here’s The Surprising Reason Why, Forbes, https://www.forbes.com/sites/bernardmarr/2024/11/28/ai-wont-replace-humans--heres-the-surprising-reason-why/
- Yash Sorout, October 2023, Exploring the Boundaries: Unveiling the Limitations and Challenges of Artificial Intelligence, IJRAR October 2023, Volume 10, Issue 4, https://ijrar.org/papers/IJRAR23D1171.pdf (Issues like lack of common sense and boundaries to creativity.)
- Cao, X., 2025, The Boundaries of AI Capabilities, In: Modern Business Management. Palgrave Macmillan, Singapore, https://doi.org/10.1007/978-981-96-0594-1_10, https://link.springer.com/chapter/10.1007/978-981-96-0594-1_10
- Rob Toews, June 1st, 2021, What Artificial Intelligence Still Can’t Do, https://www.forbes.com/sites/robtoews/2021/06/01/what-artificial-intelligence-still-cant-do/ (AI lacks: common sense, learning on-the-fly, understand cause-and-effect, reason ethically.)
- Cade Metz, March 24, 2016, One Genius’ Lonely Crusade to Teach a Computer Common Sense https://www.wired.com/2016/03/doug-lenat-artificial-intelligence-common-sense-engine/ ("For decades, as the tech world passed him by, Doug Lenat has fed computers millions of rules for daily life. Is this the way to artificial common sense?")
- German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, Stefan Wermter, 11 Feb 2019 (v4), Continual Lifelong Learning with Neural Networks: A Review, https://arxiv.org/abs/1802.07569
- James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, Raia Hadsell, 25 Jan 2017 (v2), Overcoming catastrophic forgetting in neural networks, https://arxiv.org/abs/1612.00796
- Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon, 2020, Continual Learning with Node-Importance based Adaptive Group Sparse Regularization, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada, https://papers.nips.cc/paper/2020/file/258be18e31c8188555c2ff05b4d542c3-Paper.pdf
Research papers on Artificial General Intelligence (AGI), which refers to AIs with human-level intelligence:
- Tao Feng, Chuanyang Jin, Jingyu Liu, Kunlun Zhu, Haoqin Tu, Zirui Cheng, Guanyu Lin, Jiaxuan You, 16 May 2024, How Far Are We From AGI, https://arxiv.org/abs/2405.10313
- Nathan Lambert, Apr 18, 2024, Llama 3: Scaling open LLMs to AGI, https://www.interconnects.ai/p/llama-3-and-scaling-open-llms
- jbetke, June 3, 2024, General Intelligence (2024), https://nonint.com/2024/06/03/general-intelligence-2024/
- Ethan Mollick, May 12, 2024, Superhuman? What does it mean for AI to be better than a human? And how can we tell? https://www.oneusefulthing.org/p/superhuman
- Rohin Shah, Seb Farquhar, Anca Dragan, 21st Aug 2024, AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work, https://www.alignmentforum.org/posts/79BPxvSsjzBkiSyTq/agi-safety-and-alignment-at-google-deepmind-a-summary-of
- Vishal Rajput, Jul 8, 2024, Why LLMs Can’t Plan And Unlikely To Reach AGI? https://medium.com/aiguys/why-llms-cant-plan-and-unlikely-to-reach-agi-642bda3e0aa3
- David Gilmore, Sep 2024, When will AI outthink humans? https://davidvgilmore.com/writings/outthinking-ai (Interesting analysis of all the GPUs in the world and when they will “out-think” all the human knowledge workers, predicting a range of years from 2028 to 2035, depending on assumptions.)
- Chloe Berger, October 2, 2024, Mark Cuban says his puppy is ‘smarter than AI is today’, https://fortune.com/2024/10/01/mark-cuban-dog-puppy-smarter-than-ai/
- Samantha Kelly, Sept. 29, 2024, “Superintelligent” AI Is Only a Few Thousand Days Away: OpenAI CEO Sam Altman, https://www.cnet.com/tech/services-and-software/superintelligent-ai-is-only-a-few-thousand-days-away-openai-ceo-sam-altman/
- Brian Merchant, Dec 2024, AI Generated Business: The Rise of AGI and the Rush to Find a Working Business Model, https://ainowinstitute.org/general/ai-generated-business
- Alhassan Mumuni, Fuseini Mumuni, 6 Jan 2025, Large language models for artificial general intelligence (AGI): A survey of foundational principles and approaches, https://arxiv.org/abs/2501.03151
- Jeffrey Anthony, Jan 2025, No GPT-5 in 2025 and No AGI — Ever. The Triadic Nature of Meaning-Making and the Fallacy of AI’s Understanding, https://medium.com/@WeWillNotBeFlattened/no-gpt-5-in-2025-and-no-agi-ever-aa9384efdbe5
- Mohit Sewak, Ph.D., January 29, 2025, Achieving General Intelligence (AGI) and Super Intelligence (ASI): Pathways, Uncertainties, and Ethical Concerns, https://towardsai.net/p/l/achieving-general-intelligence-agi-and-super-intelligence-asi-pathways-uncertainties-and-ethical-concerns
- Alberto Romero, Feb 06, 2025, AGI Is Already Here—It’s Just Not Evenly Distributed: Or: why you should learn to prompt AI models, https://open.substack.com/pub/thealgorithmicbridge/p/agi-is-already-hereits-just-not-evenly
- Apoorv Agrawal, May 23, 2025, Why Cars Drive Themselves Before Computers Do: Robocars are ready; robot secretaries aren’t… yet, https://apoorv03.com/p/autonomy
- Parshin Shojaee, Maxwell Horton, Iman Mirzadeh, Samy Bengio, Keivan Alizadeh, June 2025, The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity, Apple, https://machinelearning.apple.com/research/illusion-of-thinking https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
- Dr. Ashish Bamania, June 2025, Apple’s New Research Shows That LLM Reasoning Is Completely Broken: A deep dive into Apple research that exposes the flawed thinking process in state-of-the-art Reasoning LLMs, https://ai.gopubby.com/apples-new-research-shows-that-llm-reasoning-is-completely-broken-47b5be71a06a