
WARNING: The following contains MANY spoilers for the show Pantheon (2022) from AMC. You probably haven’t seen it yet, but reading this article will make you want to see it, so I highly suggest you watch it first if you care about spoilers. If not, let us proceed.

On the 30th of November 2022, the technological landscape tilted on its axis. OpenAI released ChatGPT, and suddenly Generative AI was the only thing anyone could talk about. We became obsessed with Large Language Models, transformers, and the eerie ability of a machine to predict the next token in a sentence.

Just two months earlier, in September 2022, a show called Pantheon aired on AMC. While the rest of the world was about to lose its mind over a chatbot, Pantheon was quietly laying out a roadmap for something far more profound: Computer-Simulated Human Consciousness, or “Uploaded Intelligence” (UI).

It contains some of the most realistic depictions of mind uploading, and of its potential effects on society, that I have ever seen. It grapples head-on with the hardest philosophical questions: the true meaning of consciousness, personality cults, the consequences of immortality and digital death, self-copying, and the universe-as-simulation theory.

I don’t think many people saw it when it came out. (I didn’t.)

Yet there are so many fantastic, mind-bending, life-changing things about AMC’s Pantheon: not just its philosophy, but also the way it depicts cyber-security, software engineering, and big-data concepts, not to mention the cut-throat world of big tech both within and beyond Silicon Valley. It’s not perfect, but it’s easily the most accurate depiction I’ve ever seen from a show of this kind.

Watching it today, it feels somewhat prophetic, but that was likely the result of extremely good footwork by the producers and their team, who captured the most up-to-date picture of the inner world of big tech at the time, between 2021 and 2022. Facebook became Meta at the end of 2021 as part of Zuckerberg’s Metaverse strategy, and you can see a lot of those ideas depicted in Pantheon.

We also see clear depictions of things Generative AI is doing today: chat interfaces with digital intelligences, for instance.

The digital intelligences in Pantheon weren’t artificial neural networks, but fully scanned and emulated human brains. They call these “Uploaded Intelligence”, or UI (which is confusing in tech, where UI stands for User Interface, but we’ll roll with it).

In the show, a UI is created by laser-scanning a biological brain, layer by layer, down to the stem. The brain is destroyed in the process - vaporised - but stored, apparently in its entirety, in digital form. This is a human mind stripped of its biological substrate, which is replaced by silicon.

The entire connectome is then reconstructed digitally, presumably with simulated sensory receptors; the rest of the central nervous system is not included in the actual scan.

Somewhere in between all that, there is some magic fairy dust that allows the full emulation to happen, but since that is still one of the great millennium-type problems of our time, I don’t expect them to have that bit of detail.

At one point in the show, a UI produces an 80-page patent in seconds. Although GenAI today would likely screw that up with hallucinations, you can see the resemblance. It feels eerily prescient.

However, we’re not here to talk about GenAI, and neither does the show: the real focus is on the Uploaded Intelligence and the ability to fully simulate a human being computationally. This requires a few major assumptions:

* The brain - that is, the cortex and brain stem - constitutes the entirety of who we are as individuated conscious beings

* An individual’s behaviours, emotions, and cognition can be simulated in their entirety from a scan of the cortical network, using classical and quantum computing platforms

Before we can even begin to tackle these assumptions, we need to understand what it is to compute, to be intelligent, and to be conscious.

I Am, Therefore I Compute

When we perform a mental task, sometimes we are conscious of the effort expended to perform it. Other times, we are unconscious of that effort: an input gets processed and turned into an output, which is re-integrated with our conscious awareness some time later.

To us, these results arrive as flashes of inspiration or insight; that is the moment of re-integration. In truth, the brain was likely working on the problem for some time without our being consciously aware of it.

These unconscious computations are presently done in our biological circuitry. Technically, the physical medium of computation doesn’t matter, and could be electronic. However, there is an inherent problem in viewing the brain as equivalent to electronic circuits.

Classical computing and electronics are based on gates. Gates allow you to perform simple, but exact, operations on incoming electrical signals. For example, an OR gate takes two inputs, and so long as at least one of those inputs is receiving a signal, the OR gate outputs a signal. A simple way to model this:

0 OR 0 = 0

1 OR 0 = 1

0 OR 1 = 1

1 OR 1 = 1

The number 1 denotes an electrical signal, while 0 is no signal.

Meanwhile, an AND gate takes two inputs, and only outputs a signal if both inputs have a signal:

0 AND 0 = 0

1 AND 0 = 0

0 AND 1 = 0

1 AND 1 = 1

Gates like these feel intuitive to us. They follow a simple logic. They’re also exact and about as deterministic as it gets. They have no hidden influences outside the two expected inputs that could affect the result; 0 AND 1 should never result in 1, just as 2 + 2 should never equal 5.
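To make that concrete, here’s what those two gates look like as a minimal Python sketch (purely illustrative; real gates are transistor circuits, not function calls):

```python
def OR_gate(a: int, b: int) -> int:
    return a | b   # 1 if at least one input carries a signal

def AND_gate(a: int, b: int) -> int:
    return a & b   # 1 only if both inputs carry a signal

# Reproduce the truth tables above:
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} OR {b} = {OR_gate(a, b)}    {a} AND {b} = {AND_gate(a, b)}")
```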

Brains, and biology in general, are nothing like that.

The thing about biological computation is that it is fuzzy. For most of us, doing novel arithmetic in our head is a combination of heuristics learned from repetition on similar tasks: we use them to approximate a value, then further heuristics refine it until we have a value in mind that we feel confident enough about.

Let’s do a quick experiment. Solve the following two problems:

* What is half of 10,000,000?

* What is half of 8,626,400?

Which one required more time/mental energy, the bigger number or the smaller one?

If we followed a purely computational way of thinking about things, our expectation should be that the bigger number would be more computationally expensive to halve than the smaller one. For the human brain, however, the cost depends not on the size of the numbers we’re working with, but on their composition.

In the first problem, although the number was larger, its composition was vastly simpler, made almost entirely of zeroes. To solve it, we could reduce it by six decimal places, and then the problem becomes “What is half of 10?”

The second, smaller number had many more non-zero digits, meaning we could not reduce it in the same way. Instead, our natural inclination is to solve for each non-zero digit separately: “What is half of 8? What is half of 6? What is half of 2?” and so on.

One problem instantly turns into five.
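Just as a toy illustration - and I stress this is a made-up proxy, not a cognitive model - you could approximate that “mental cost” by stripping the trailing zeroes and counting the non-zero digits that each demand their own halving step:

```python
def mental_halving_steps(n: int) -> int:
    """Toy proxy for the 'mental cost' of halving n: strip the trailing
    zeroes (cheap), then count the non-zero digits that each demand
    their own halving step."""
    significant = str(n).rstrip("0") or "0"
    return sum(1 for digit in significant if digit != "0")

print(mental_halving_steps(10_000_000))  # 1 -> "what is half of 10?"
print(mental_halving_steps(8_626_400))   # 5 -> halve 8, 6, 2, 6 and 4
```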

Therefore, our brains are not computers; therefore, our brains cannot be simulated by computers.

Woah, hold up there, cowboy, not so fast. Plenty of things which are not computers are simulated on computers literally all the time, in every field; we just haven’t simulated literally everything that exists in the universe.

This argument - that the brain is not a computer and therefore could never be modelled by one - always drives me a little insane: just because the brain doesn’t follow gate-based logic does not mean it is not performing computation, nor that it cannot be simulated computationally. It is, and it can. The problem here is one of dimensionality.

To simulate the human brain, you need far more than merely its connectome. We know this from simulations of C. elegans, the nematode worm whose adult hermaphrodites all have exactly the same number of neurons: 302, no more, no less. We have been working for decades to computationally simulate its entire set of known behaviours, and we have made progress; but consider how incredibly simple C. elegans is, and yet we still haven’t figured it out. How complicated can a near-microscopic worm possibly be?
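To see why the wiring diagram alone underdetermines behaviour, consider a toy sketch of the approach; every neuron, weight, and threshold below is invented for illustration, and real synapses are chemical, neuromodulated, and plastic in ways a static weighted graph simply doesn’t capture:

```python
# A toy 'connectome' as a weighted digraph of threshold units. The real
# C. elegans wiring diagram has 302 neurons and thousands of synapses;
# the neurons and weights here are entirely made up.
connectome = {
    "sensor":    [("inter", 1.0)],
    "inter":     [("motor_fwd", 0.8), ("motor_rev", -0.6)],
    "motor_fwd": [],
    "motor_rev": [],
}

def step(activity):
    """One synchronous update: sum weighted inputs, then threshold."""
    summed = {n: 0.0 for n in connectome}
    for src, synapses in connectome.items():
        for dst, weight in synapses:
            summed[dst] += activity[src] * weight
    # a neuron 'fires' only if its summed input exceeds 0.5
    return {n: 1.0 if v > 0.5 else 0.0 for n, v in summed.items()}

state = {"sensor": 1.0, "inter": 0.0, "motor_fwd": 0.0, "motor_rev": 0.0}
for _ in range(3):
    state = step(state)
    print(state)
```

Signals propagate through a graph like this just fine; the hard part is that in the real worm, every one of those numbers is a living chemistry experiment.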

Some have reached the conclusion that the complexity is not in simulating the worm, but rather, simulating chemistry itself.

Chemistry is, in my opinion, the most sophisticated and most powerful phenomenon in the physical universe. Chemistry is like the operating system, the essential firmware, upon which the software of human minds can run. Even firmware requires something firmer: hardware. That’s quantum mechanics, effectively the architecture upon which the firmware must execute.

Indeed, if the answer is that we need to simulate chemistry itself, then think of our predicament this way:

Imagine you were trying to simulate a classical gate-based Turing machine on some highly exotic computer, and you only managed to implement AND and OR gates; then you proceeded to try running DOOM on it (as is tradition).

You wouldn’t get very far, would you?

Technically, we can build a complete computing machine by combining the “NOT” operator with either “AND” or “OR” gates; but in this analogy, we don’t even know the NOT operator exists, let alone what a universal Turing machine is.
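For the curious, here’s what that functional completeness looks like in practice: a half-adder, the first building block of binary arithmetic, derived from nothing but NOT, AND, and OR (a minimal sketch):

```python
def NOT(a: int) -> int: return 1 - a
def AND(a: int, b: int) -> int: return a & b
def OR(a: int, b: int) -> int: return a | b

def XOR(a: int, b: int) -> int:
    # a XOR b == (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a: int, b: int):
    """One-bit addition built purely from NOT/AND/OR: (sum, carry)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```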

That’s more or less where we are in the grand scheme of things when it comes to simulating all of chemistry.

I Am, Therefore I Intellect

Another very common assumption, made by just about everyone, is that the ability to think implies intelligence. It’s also extremely common to conflate intelligence with emotion, with motivation, with the survival instinct. This is part of the problem that has so often sent debates on simulated cognition around in circles: we assume the full suite of human cognitive abilities as the baseline for any model of cognition.

Although you likely cannot have intelligence without the ability to think in some sense, it is not a given that thought brings intelligence with it. It is also essential to be aware that emotion is not included by default; in fact, emotion is a wholly separate concept. A machine that can think is not, by default, emotional. To be emotional, it would need to be given emotions explicitly. Emotions evolved much earlier than intelligent cognition, as a useful tool for boosting the instinctual motivations that promoted survival.

We tend to assume that a super-intelligence will naturally develop super-ambition, or super-benevolence, or super-hatred. But this is a fallacy.

This is best explained by the Orthogonality Thesis, proposed by philosopher Nick Bostrom. It states, effectively, that intelligence and final goals are independent variables. You can have a clumsy intelligence that wants to conquer the world, and you can have a god-like super-intelligence whose only motivation is to count every paperclip in the universe.

In a “pure” AI, there is no inherent reason for it to care about its own survival unless we program it to. It has no fear of death, no pride, no resentment.

This is where Pantheon (and the concept of Uploaded Intelligence) gets interesting. The UIs in the show are not “Pure AI” sitting on an abstract graph. They are brute-force scans of biological architecture. They inherit the “spaghetti code” of human evolution. They maintain their human wants and human needs. They carry the baggage of the limbic system simulated alongside the cortex.

If we are to achieve full emulation of a human mind, we must be prepared to emulate biological drives, emotions, hormones, the lot.

A Means Without An End

Everyone always asks “What Is Consciousness?” But no one ever asks “How Is Consciousness?”

Finally, we arrive at the most challenging hurdle of all. Let’s assume we solve the chemistry problem. Let’s assume we achieve full emulation. We boot up the simulation. The digital eyes open.

The entity speaks.

It says, “I am.”

How do we know it is really aware?

This is part of the Hard Problem of Consciousness. If we want to simulate a human mind, we must assume consciousness is part of the success criteria: whatever else it might be, it must be conscious.

Yet here we are, at the end of 2025, and guess what? We still have no idea how to tell for sure. We can’t even be sure that a rock is not conscious. I’m serious.

Think of it like this: what, exactly, is conscious when it comes to a human being? Is it the brain alone? Or the body? Do the proteins that make up a neuron count as conscious? Is it the whole system?

In our daily lives, we assume other humans are conscious because they act like us and are made of the same wet biological stuff as us. We assume a rock is not conscious because… it just sits there.

This is an assumption based purely on external observation. We have no “consciousness detector”, yet. We cannot measure the internal subjective experience (qualia) of another being.

Philosophers talk about the P-Zombie (Philosophical Zombie); a being that is physically identical to a human, acts exactly like a human, screams when you pinch it, and laughs when you tell a joke, but has absolutely zero inner experience. The lights are on, but supposedly, nobody is home.

If we simulate a mind on a silicon chip (or a quantum substrate), and it screams in pain, is it actually feeling pain? Or is it just executing a code function, equivalent to:

```
if (pain_intensity > 3) play_audio("ouch.mp3")
```

I don’t personally believe P-Zombies can exist. If I simulate the universe, and within that simulated universe I also simulate a human being, and the simulation behaves exactly the way a human would in every case, to the degree that they are entirely indistinguishable from any “real” human being when observed from the outside, on what grounds can one deny its consciousness?

If we can’t even prove a rock isn’t silently judging us, we certainly won’t be able to prove a digital mind isn’t experiencing qualia.

The Integrated Future

Simulating the human mind in its entirety, chemistry included, is something we will not see happen for a very long time; some have estimated a timeline of 100 years from where we are now.

However, I believe there is a kind of part-way point which we are fast approaching, and which, when it hits, could herald another exponential growth curve in technology.

For at least the past 10 years, I have thought about a future where the integration between the biological and digital is real and deep.

It’s something I call “Neuro-Integration”.

It’s the idea that we can extend the human brain’s existing biological modules of cognition by adding digital ones. Not only could we add extra sensors - infra-red video, say - we could add vastly more powerful computational abilities, such as floating-point arithmetic, physics simulations, statistics, and more, all accessible directly to the conscious mind at a thought.

Simply being able to control your laptop’s cursor with your mind isn’t what I’m talking about; we can already do some of that, and it’s fairly pedestrian. I’m talking about an input-output loop: two-way communication between mind and digital computers, via Brain-Computer Interfaces (BCIs) and Transcranial Direct Current Stimulation (tDCS) devices, without invasive techniques such as surgical implants.

The technologies as they exist today are far from perfect; challenges remain around accurate targeting, focal area, inter-individual variability and sensitivity, information encoding, and much more. Regardless, I truly believe these are solvable challenges.

It could reduce to zero the friction and effort required to make use of terrifically accurate, high-speed computation. Similar to how LLMs can access tools, these tools could one day be accessible directly from the human mind, with the result arriving the same way a flash of insight or inspiration does now.

If we combine this with various multi-modal ML technologies, and solve the problem of encoding various information modes so the brain can understand them, we could expand upon our own ability to visualise in our mind’s eye. We would never have to search our memory for the meaning of an uncommon word again; it could invoke an instant dictionary look-up. Semantic memory could be extended in ways we can’t yet imagine, becoming a near-instant vector-graph query for extended contextual information.

Hell, just imagine if you could add to your mental workspace with custom scripts, like you can with a shell environment.
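Purely as a thought experiment, here’s the shape such a loop might take; every name below is hypothetical, and the genuinely hard parts - decoding intent from neural activity and encoding results back into experience - are hand-waved entirely:

```python
from typing import Callable

# Entirely hypothetical sketch, in the spirit of LLM tool-calling: a
# registry of digital cognitive modules keyed by decoded 'intent'.
# No such BCI API exists today; TOOLS and handle_thought are invented.
TOOLS: dict[str, Callable[[str], str]] = {
    # stand-in for an instant dictionary look-up
    "define": lambda word: f"(dictionary entry for {word!r})",
    # toy arithmetic module; a real one would be a proper evaluator
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def handle_thought(intent: str, payload: str) -> str:
    """Route a decoded 'mental query' to a digital module; the result
    would re-enter awareness like a flash of insight."""
    tool = TOOLS.get(intent)
    return tool(payload) if tool else "(no module for that intent)"

print(handle_thought("calculate", "8626400 / 2"))   # -> 4313200.0
print(handle_thought("define", "sesquipedalian"))
```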

What’s more, there are still questions of basic user experience, privacy and security to be answered before we can turn it into a platform for extending cognition: how does the human mind send input to the device, and in what manner does it perceive output? How do we minimise the risk of malicious passive scanning of brain activity? How do we address the risks of misuse and data security? How do we avoid amplifying inequality?

Perhaps the biggest challenge to the general availability of this technology - for very good reason - is the slow pace of regulatory approval.

Regardless, there is no doubt in my mind that this technology is coming, and probably faster than we think. The real question is: will we allow a small gaggle of tech bros to dominate the space like they have in Generative AI and Cloud Computing? Or will we take our destiny into our own hands for once?

I am, as you would know, not a fan of “big tech”; we can already see the insanity going on with the narcissists at the top of most of these companies, whose greed is now amplified to 11, desperate to replace human workforces with anything they can.

I am also much more cynical about the recent GenAI boom than most. However, I also know that some of that cynicism is a pure emotional response to the fact that this amazing technology humanity has developed has made that small gaggle of greedy narcissists at the top of Big Tech so much richer and more powerful than anyone ever thought possible, and it disgusts me.

The technology itself, however, when stripped of all the amphetamine-like hype and euphoria - though knowing CEOs as I do, it’s not amphetamine, it’s cocaine - is really quite remarkable.

The track we are currently on is one where corporations are desperately trying to scrape away the human “chattel” whom they have long relied upon for their intellectual capabilities, in a race to replace almost all of their operations with machines, keeping on a skeleton crew of human meat sacks to monitor them, upgrade their hardware, and keep the AC running, until they too are eventually replaced by robotics. Meanwhile, the CEOs, board members, and shareholders continue to rake in money until the mathematically inevitable collapse of aggregate demand.

I believe there is a future where human intelligence and machine intelligence are not in conflict. That may not be the future we are currently hurtling towards at trillion-dollar speeds; but it is reachable from where we are.

All that is required is for people who believe in the future of humankind and our continued relevance, who believe in fundamental ethics and open technology, to come together and start a conversation about how we can really create a future for all humanity, together.

Not for profit, not for domination, not for any one nation; for everyone.

Where we go from here should be up to us all.

If you enjoyed this post, please consider a paid subscription. I’m trying to keep all my posts free, but it’s a challenge. Please help out if you can.

Thank you for tuning in to today’s episode of A Chemical Mind; what do you think of the possibilities of neuro-integration technology? What did you think of Pantheon? Would you ever upload yourself to the cloud? Please let me know in the comments on the Substack article!

If you’re interested in further discussion on neuro-integration from all aspects and angles, I’ve set up a discord server here. Come join the fun!

Thanks for listening, and I’ll see you next time.


