Book Excerpt from "The Sweetest Lesson: Your Brain vs AI"
by David Spuler, Ph.D.
Chapter 1. Your Brain Versus AI
“You are more powerful than you think.”
— Tim Cook.
Your Brain is Bigger
Your brain is big, and it’s definitely a lot bigger than any AI engine. If you talk to an AI chatterbox for a while, you’ll probably start to realize that it’s not as intelligent as you might expect, and here’s the main reason: it’s not big enough.
The human brain is so large that it’s difficult to quantify with an exact number, but the prevailing theory is that you have around 100 trillion synapses connecting about 86 billion neurons. We’ll examine the evidence for those numbers in a later chapter, but for now, note that this works out to more than 1,000 synapse connections per neuron, although some estimates put the ratio as high as 7,000, which would mean around 600 trillion human synapses. A trillion is a one followed by 12 zeros, so these numbers are not small. You can’t count that many synapses with a microscope and a hand-clicker, but feel free to try.
AI engines vary in size, but none are anywhere near that big. Instead of 86 billion neurons, they usually have a “model size” or “hidden dimension” of about 7,268. That’s about 11 million times smaller, if you wanted an ego boost.
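If you want to check this back-of-the-envelope arithmetic yourself, here is a minimal sketch in Python using the rough figures quoted above (all of them are estimates, not precise measurements):

    # Back-of-envelope comparison of brain size versus an LLM's hidden dimension.
    # All figures are rough estimates quoted in the text above.

    NEURONS = 86e9        # ~86 billion neurons in a human brain
    SYNAPSES = 100e12     # ~100 trillion synapses (mid-range estimate)
    HIGH_RATIO = 7000     # high-end estimate of synapses per neuron
    HIDDEN_DIM = 7268     # typical "model size" (hidden dimension) of an LLM

    print(f"Synapses per neuron: {SYNAPSES / NEURONS:,.0f}")        # ~1,163
    print(f"High-end synapse count: {HIGH_RATIO * NEURONS:.2e}")    # ~6e14, i.e. ~600 trillion
    print(f"Neurons vs hidden dimension: {NEURONS / HIDDEN_DIM:,.0f}x")  # ~11.8 million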
The equivalent of a human synapse in an AI model is called a “weight” and it’s just one number. Like synapses, these weights can either intensify or reduce a “signal” in the silicon brain, depending on whether they’re large numbers or small fractions.
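To make that concrete, here is a tiny sketch of an artificial neuron in Python: each incoming signal is multiplied by a weight, the results are summed, and the total is squashed through an activation function. The weights and signals here are made-up illustration values, not taken from any real model:

    import math

    def artificial_neuron(signals, weights, bias=0.0):
        """One artificial neuron: weighted sum of inputs, then a squashing activation."""
        total = sum(s * w for s, w in zip(signals, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

    # A large weight amplifies its signal; a small fraction (or a negative) dampens it.
    signals = [0.9, 0.5, 0.1]
    weights = [2.0, 0.1, -1.5]   # illustrative values only
    print(artificial_neuron(signals, weights))   # a single output "signal" between 0 and 1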
All AI engines are smaller in size than the human brain. OpenAI’s GPT-4 was reportedly about 1.76 trillion weights, or let’s say 2 trillion to be kind, which is still 50 times smaller than 100 trillion carbon-based synapses. Also, GPT-4 really only used 200 billion weights at a time in what’s called a “Mixture-of-Experts” architecture, which is abbreviated as “MoE” (to remember this, think of Moe’s Tavern in The Simpsons, which probably has a few “experts”). So, that’s 500 times smaller than a brain, if you’re keeping count.
But GPT-4 is over a year old and not state-of-the-art as I write this. Not all of the top frontier models disclose their sizes, but I can tell you a few. The Google Gemini 1.5 Pro was about one trillion weights (100 times smaller). The DeepSeek R1 model, which caused a huge flutter in the industry when released, had multiple models ranging in size up to 671 billion weights, about a third the size of GPT-4, so that’s roughly 150 times smaller. DeepSeek R1 also used the “MoE” architecture, at 37 billion weights per invocation, which is 2,700 times smaller. The latest one this week as I write this, and these leaderboards seem to change every week, is the Kimi K2 model from Moonshot AI, which has around 1 trillion weights (100 times smaller) with a 32B MoE architecture (3,125 times smaller than a brain).
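If you’d like to redo these ratios yourself, here is a quick Python sketch using the publicly reported sizes quoted above (the reported figures are rough and, in some cases, unofficial):

    # Rough brain-versus-model ratios, using the sizes quoted in the text.
    SYNAPSES = 100e12   # ~100 trillion human synapses

    models = {
        "GPT-4 (total)":        1.76e12,
        "GPT-4 (MoE active)":   200e9,
        "Gemini 1.5 Pro":       1e12,
        "DeepSeek R1 (total)":  671e9,
        "DeepSeek R1 (active)": 37e9,
        "Kimi K2 (total)":      1e12,
        "Kimi K2 (active)":     32e9,
    }

    for name, weights in models.items():
        print(f"{name}: the brain is ~{SYNAPSES / weights:,.0f}x bigger")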
You can pick any of these numbers that make you feel good, even up to the fun fact that AI has 11 million times fewer neurons. That’s probably wishful thinking and it also seems unfair to compare our total synapse counts against the smaller “MoE” subsets of activated weights in those models, rather than their full weight counts. Humans also don’t “activate” all of their brain, or haven’t you seen any of the Hollywood movies with this premise?
So, let’s compare human synapses to AI weights. Personally, I think the most realistic number is this one: based on 100 trillion synapses versus two trillion weights, you are at least 50 times smarter than an AI engine.
Your Brain is More Efficient
Despite being massively bigger, your brain is far more energy-efficient to run. The typical comparison is that your brain is like a 20-watt lightbulb, except that nobody ever turns it off, because you need it to keep breathing. Inside your head, you’re always “burning the midnight oil,” even while you sleep. For comparison, a single high-end GPU chip in a data center uses hundreds of watts.
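As a rough worked example (the GPU wattage here is my own assumption; the exact draw varies by chip and workload):

    BRAIN_WATTS = 20    # the classic "20-watt lightbulb" estimate
    GPU_WATTS = 700     # assumed draw for one high-end data-center GPU chip

    print(f"One GPU burns roughly {GPU_WATTS / BRAIN_WATTS:.0f}x the power of a brain")
    print(f"Brain energy per day: {BRAIN_WATTS * 24 / 1000:.2f} kWh")   # ~0.5 kWh
    print(f"GPU energy per day:   {GPU_WATTS * 24 / 1000:.2f} kWh")     # ~17 kWh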
Your brain actually runs hotter than the brains of most other animal species. The brain consumes 20% of the human body’s total energy budget (roughly 100 watts), despite being only 2% of our body weight. You’ve probably heard one of these theories about what spurred the advancement of human intelligence:
1. Our ability to stand upright.
2. A larger brain space in the skull (also somewhat related: being taller).
3. The invention of fire, which allowed cooking to unlock enough nutrients to feed our brains.
But there’s another one that gets less attention:
The invention of sweating.
Our brain runs so hot that we would overheat without sweating, because even 20 watts is quite a heat load in an enclosed space. Sweating works through “evaporative cooling”: exposed skin with sweat glands lets the sweat evaporate, carrying the heat away with it. Most other animal species can’t sweat enough to shed that much heat. Hence, evolutionary brain size was limited by the inability to sweat.
In order to sweat, we needed to lose most of our hair, too. Animal fur interferes with sweating in most species, which is why your pet dog hangs its tongue out the car window all the time, although maybe it’s also fun. Hence, if you’re balding in your old age, you can now say it’s a sign of high intelligence.
All of those AI GPU chips run hot too, so they need “liquid cooling.” The coolant runs in pipes under “raised floors” inside massive data centers. That’s another way your brain is better than AI: it also uses a lot less water.
Your Brain Learns Like AI
No, that’s wrong. Your brain is not like AI, because your intelligence came first, and the AIs are a recent invention. AI is like your brain, because it was designed by scientists to use the same brain-like methods.
AI is a copy.
The idea is called a “neural network” and yours uses chemicals and carbon-based molecules, whereas AI uses GPU chips that run on electricity in silicon.
AI learns like your brain, because the “training” of LLMs is based on your neural network. In the brain, the mappings between neurons are adjusted via “synaptic plasticity,” which doesn’t mean they’re made of plastic, but that they are soft and changeable. AI mappings are numbers called “weights,” and they are changed by nudging those numbers up or down by small amounts as the LLM training process scans through reams and reams of pre-written text.
The technical term for how AI models get better weights is “backpropagation,” because the “forward pass” reads the data, and then the “backward pass” propagates small changes back to the weights, so that they’ll be better next time. This is a slow process, sometimes taking weeks or months for a big model, modifying the LLM weights by a tiny amount with each document it reads.
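Here is a deliberately tiny sketch of that forward-then-backward loop in Python, fitting a single weight by gradient descent; real LLM training does the same kind of nudging, just across billions of weights and trillions of words (the data and learning rate here are made up purely for illustration):

    # Minimal forward/backward training loop for a single weight w,
    # learning the made-up mapping y = 3 * x from a few example pairs.
    data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]   # illustrative training pairs
    w = 0.0                                        # start with an untrained weight
    learning_rate = 0.01

    for epoch in range(200):
        for x, y_true in data:
            y_pred = w * x                  # forward pass: read the data, make a prediction
            error = y_pred - y_true         # how wrong was it?
            gradient = 2 * error * x        # backward pass: derivative of the squared error
            w -= learning_rate * gradient   # nudge the weight a tiny amount

    print(w)   # converges close to 3.0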
They say that humans need 10,000 hours of practice to master a task. Well, AI engines need a lot more than 10,000 GPU-hours of computation for their weights to converge to numbers that make them smart. Hence, another point: your brain learns faster than AI.
References
References on human brain size and sweating abilities:
- Kováč L., January 2010, The 20 W sleep-walkers, EMBO Reports, Vol. 11, No. 1, p. 2, doi: 10.1038/embor.2009.266, https://www.embopress.org/doi/full/10.1038/embor.2009.266, https://pmc.ncbi.nlm.nih.gov/articles/PMC2816633/
- Kandel E.R., Schwartz J.H., and Jessell T.M., January 2000, Principles of Neural Science (4th Ed.), New York, McGraw-Hill, https://www.amazon.com/Principles-Neural-Science-Eric-Kandel/dp/0838577016 (Early source for the 100 trillion synapse estimate.)
- Carl Zimmer, January 2011, 100 Trillion Connections: New Efforts Probe and Map the Brain’s Detailed Architecture, Scientific American Magazine, Vol. 304, No. 1, https://www.scientificamerican.com/article/100-trillion-connections/
- Vybarr Cregan-Reid, July 20, 2016, From perspiration to world domination – the extraordinary science of sweat, The Conversation, https://theconversation.com/from-perspiration-to-world-domination-the-extraordinary-science-of-sweat-62753
- National Science Foundation, April 20, 2021, How humans evolved a super-high cooling capacity: Discovery illuminates human sweat gland evolution, https://www.nsf.gov/news/how-humans-evolved-super-high-cooling-capacity
- Rafe Brena, May 24, 2024, 3 Key Differences Between Human and Machine Intelligence You Need to Know: AI is an alien intelligence, https://pub.towardsai.net/3-key-differences-between-human-and-machine-intelligence-you-need-to-know-7a34dcee2cd3 (Good article about how LLMs don’t have “emotions” or “intelligence” and they don’t “pause”.)
References on AI engine weight sizes:
- Seifeur Guizeni, 2024, Decoding the Enormous Scale of GPT-4: An In-Depth Exploration of the Model’s Size and Abilities, https://seifeur.com/chat-gpt-4-data-size/
- Toolify.ai, Mar 01, 2024, Inside GPT-4: Leaked Parameters, Weights, and Costs, https://www.toolify.ai/ai-news/inside-gpt4-leaked-parameters-weights-and-costs-2438865
- Gemini Team Google: Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, (many more authors not shown), 16 Dec 2024 (v5), Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context, https://arxiv.org/abs/2403.05530
- DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, (over 100 more authors not shown), 22 Jan 2025, DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning, https://arxiv.org/abs/2501.12948
- Kimi Team, July 2025, Kimi K2: Open Agentic Intelligence, https://github.com/MoonshotAI/Kimi-K2/blob/main/tech_report.pdf, https://github.com/MoonshotAI/Kimi-K2