Why 'Complex Behavior' Doesn't Get Us AI Consciousness
I get a lot of questions about AI consciousness and read wherever I can about the arguments for the existence of AI consciousness. In this post, I address one of the most common approaches and justifications I see for claiming AI consciousness, or for claiming that some AI is likely on the verge of sentience.
The 'Complex Behavior' Argument
Perhaps the most common response to the question of whether AI is conscious goes something like this. AI is increasingly showing complex behavior. The way we attribute minds to humans and other animals is based on complex behavior. So it is irrational or inconsistent not to extend consciousness to AI.
Moreover, even if you think this isn't the most common argument for AI consciousness, when you investigate many other arguments, they often end up being this argument in disguise (see, for example, here, where I argue that a more complex argument for AI sentience collapses into the complex behavior view).
Here’s the argument form:
Premise 1: We attribute consciousness to humans based on complex behavior.
Premise 2: AIs have complex behavior, or are showing signs of it.
Premise 3: If I attribute consciousness to humans based on complex behavior, I should attribute consciousness to AIs for the same reason.
Conclusion: AIs have consciousness because they exhibit complex behavior.
If premises 1, 2, and 3 are true, then we should attribute consciousness to AIs with complex behavior. In philosophy, this is what we call a valid argument, i.e., if the premises are true then the conclusion is also true. But is it a sound argument, namely, one that is valid and has true premises? I doubt it.
Premise 1: We Attribute Consciousness Based on Complex Behavior
We love to accept premise 1 as uncontroversially true. But is it? Sure, we look around us and see human beings everywhere producing highly sophisticated behaviors. Then we might observe the behavior of a mouse and think that its behaviors are far less complex than ours, and so maybe the mouse has some degree of consciousness, but likely not the degree that humans have. And then we see something like a bacterium, and many are tempted to say that it has no consciousness at all. You might think the reason for this is that it exhibits no complex behavior. It's 'just' a bacterium.
This is a familiar kind of cultural idea we are brought up on. The idea is to think of consciousness as existing in ‘degrees’, and to then believe that degrees of consciousness are somehow correlated with the phylogenetic tree.
As far as I know, the idea that consciousness exists in degrees and that it correlates with the phylogenetic tree is not the most reliable scientific claim. There is some scientific work that gestures in this direction, e.g., integrated information theory proposes that consciousness correlates with measurable properties of neural systems, but IIT is one very new scientific theory among many.
There are also evolutionary biologists and neurobiologists that argue for this view1 and chances are, there is something to these theories. It also seems to have intuitive pull, which may work for or against the theory depending on how you view the relationship between intuition and empirical claims about the world.
But here's the thing. Suppose this view is true. It cannot be straightforwardly applied to the AI case, because AI doesn't have a phylogenetic tree.
This really muddles our understanding of the term 'complex behavior', because it loses whatever clarity it had in the phylogenetic case. So then we have to ask: what do we mean when we say AI has 'complex' behavior?
Whenever this argument is made, there has to be some tacit idea of ‘complexity’ being presupposed to assess the various behaviors of biological organisms, say, but even in the animal case it is far from clear what exactly this idea is.
Ducks can do all sorts of crazy things I can't do. They can tuck one leg into their body while standing, bending the leg at a wildly acute angle. Other birds can do this too. Humans cannot. Is this behavior therefore 'complex', insofar as it is a behavior that humans cannot perform? Ants can also carry out amazingly sophisticated computations to navigate the world; they can do what's called dead reckoning. Humans cannot do this. Conversely, humans can do things that other animals cannot do, like use language. But how exactly are we drawing the line between what counts as 'complex' behavior and what doesn't? As I mentioned, one possibility is to base it on the phylogenetic tree, but that argument completely breaks down in the AI case.
I myself am skeptical that 'complexity' is a neutral definitional starting point for understanding behavior. Generally speaking, the behavior of different organisms is the expression or result of the different capacities those organisms have. But different animals having different capacities does not express 'degrees' of complexity. It expresses difference, that is, a difference in capacity.
Premise 2: AIs Have Complex Behavior
Until we sort out what we mean by complexity, and can agree on a neutral starting point for this definition, premise 2 is on shaky ground. Claiming that AIs have complex behavior requires making at least some assumptions about what counts as 'complexity'.
It's true that AIs can do many amazing things. But I'll be honest: when I say AIs can do amazing things, I mean something like, 'they can do things I cannot do, or at least cannot do anywhere near as fast, or without a lot more time and effort'. Claude, ChatGPT, etc., can crunch numbers in a jiffy, write a whole essay in seconds, and so on. Great. Those are facts about what AI can do, and we can work cleanly with facts. But the argument for AI consciousness takes these facts and turns them into something else: an argument for AI consciousness, 'because AI can do all this, it is likely conscious'.
But what about these things it can do is complex? Really, spell it out for me. Being impressed with the speed at which AI can do things has nothing to do with complex behavior, and everything to do with the design and speed of AI architecture. The fact that we humans are often amazed at the speed at which AI can accomplish tasks that would take us hours, days, or weeks is not a reflection of anything real. We need to think through what exactly is being claimed here. One needs to spell out the nature of the 'complexity'.
Revisiting Premise 1
I myself am skeptical that premise 1 is true. Do we really ascribe consciousness based on behavior? It is not as obvious as people claim. We have a comparatively poor understanding of what is going on in the human brain. Behavior is one of the more tangible things that the sciences study: it is observable and measurable. This might lead us to overestimate how much of our thinking relies on it.
The idea that we ascribe consciousness to others based on behavior has a history in the philosophy of mind. There is a problem, commonly called 'the problem of other minds', which says the following: if my consciousness is an inner, private, mental thing that I know exists from introspecting on my own inner thoughts, how do I know that other people have minds like I do? The common response to this question is that we do so based on other people's behavior.
The problem of other minds, as philosophers have traditionally posed it, is an epistemological problem in a specific sense. It asks whether we can justify our belief that others have minds. But this is a different question from whether we actually know that others have minds, and a different question again from how that knowledge is actually acquired. Many AI consciousness debates simply collapse these different questions. But we shouldn't. The questions come apart, and how they come apart is foundational to understanding what's wrong with the complex behavior argument. There are at least three different questions to be asked:
(1) Can we justify the belief that others have minds? This is the classic philosophical issue.
(2) Do we in fact know that others have minds? We simply assume we do.
(3) How is that knowledge actually acquired? This gets addressed in psychology, for example, in theory of mind research.
I have more writing coming on these distinctions, but for now I'll say that the philosophical problem - the problem of other minds - focuses entirely on the first question, while assuming it is answering at least the second question, and possibly the third. But these are striking equivocations to make. The 'AI has complex behavior' argument inherits this unclarity.
Premise 3: We Must Treat AI the Same Way We Treat Humans
This is a loaded premise. The full claim, to remind you, is this: If I attribute consciousness to humans based on complex behavior I should attribute consciousness to AI for the same reason.
If we don't, we are irrational, inconsistent, or worse, intellectually irresponsible or dishonest. Well, let's see. This premise demands that a very specific kind of rationality be applied. It says that rational thinking requires consistency across all claims. So, if I attribute consciousness in one case, I must attribute it in other relevantly similar cases.
I'm not convinced that we must always do this. Consistency is not the be-all and end-all of human reasoning. If one has justification not to apply the same principle in certain scenarios, that is sometimes sufficient not to (and of course, oftentimes it isn't sufficient and there is just a bias getting in the way).
Well, in this case, humans are not AIs. And that might be one reason not to blindly demand consistency as the highest form of 'good' reasoning. First, there is the problem that AI has no phylogenetic tree, so our understanding of what a complex behavior is has very little grounding. Second, there is the problem that nobody has spelled out what they mean by complex behavior. One could reject the demanded equivalence on these grounds alone.
Alternatively, we could think of ‘rationality’ in a different way. What if a ‘rational’ argument is one that offers independently good reasons for believing in a certain claim, taking into account context, scientific knowledge or lack thereof, and other such factors when trying to figure out what is true and false?
I don't say this to offer a knock-down argument against the 'be consistent' view. But it should be made clear that there are other ways one can be rational and honest in one's thinking. Consistency only gets us so far. Rationality is not a well-defined term, and can be leveraged in different ways for different ends.
Thanks for reading! If you found this useful, please consider subscribing. I post every Wednesday on AI consciousness and philosophy of mind.
Want to support this work? Consider a paid subscription. Currently, all my posts are free because accessibility matters to me, but paid subscriptions allow me to keep writing rigorous, informed philosophy that cuts through AI hype. Every paid sub helps.
See, e.g., Frans de Waal, Are We Smart Enough to Know How Smart Animals Are? (2016); Peter Godfrey-Smith, Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness (2016).


Good piece, Ellen. You're right about the conclusion, and I like the premise breakdown. A few thoughts on where it could go further.
The strongest version of your argument is one you almost made. There are tons of complex systems nobody attributes consciousness to. Weather systems. The global economy. The internet itself. Emergent behavior, adaptation, problem-solving. Nobody's losing sleep over whether a hurricane has inner experience.
And you don't even have to leave AI. I think I've mentioned this to you before… image generators produce outputs of extraordinary complexity. Millions of coordinated decisions about color, light, composition. Nobody thinks Stable Diffusion is sentient. The only difference is it doesn't talk back to you in the first person. That tells you everything. It's conversational mimicry, full stop.
Also, the duck leg example undersold it. Duck reproductive anatomy is one of the craziest things in nature. An adaptive arms race running over millions of years. Complex. Sophisticated. Problem-solving. All the words people use about chatbots. And it's a corkscrew penis on a duck.
On degrees, I'd go further. I don't think consciousness comes in degrees. In my view it's binary. A bee is conscious. Dogs are conscious, people are conscious. The character of that consciousness varies wildly depending on the system. Bees experience reality through a wholly different architecture. That's a difference in kind, though. It's not a ranking. The degrees framing is dangerous because once you accept it, the game is already over. AI just needs to climb the ladder.
There's a historical angle to Premise 3 worth pulling on. The "be consistent" demand, followed to its logical conclusion, is animism. For thousands of years humans attributed minds to complex systems they couldn't explain. The river is angry. The volcano is hungry. The entire arc of human intellectual development has been learning to stop doing that. The consistency crowd is asking us to regress. They're dressing it up as sophistication.
Last thing. Memory is a prerequisite for consciousness that gets overlooked. Memory gives you continuity. A before and after. Without it there's no experience in any meaningful sense. A bee remembers where the flowers are. A dog remembers you. AI has nothing like this. Context windows are notes in a file, not lived experience that shaped the system and carried forward.
Thank you for writing this; I find this area of work fascinating. I'd like to add that human complex behaviors, or consciousness, can be attributed to contradictory behaviors. The epitome of the human condition is contradiction, shaped by our very personal and unique experiences. AI can probably never replicate that level of contradiction, because it's based on probability and predictions, zeros and ones. Oftentimes our actions do not align with what we think is right or wrong. To add fire to that, we are constantly creating meaning from the information we receive from the world, even when it seems ambiguous. Even the most self-aware and critical thinkers are walking contradictions, and these contradictions are important for intellectual creativity; this is exactly what makes us human. I believe AI cannot have consciousness, because AI cannot replicate that even if it exhibits 'complex behaviors', simply because it lacks bias and contradictions. It can probably recognize when it's giving out contradictory outputs, and it can mirror biases taken from the training data, but to exist within that contradiction is where I think consciousness lies.
We can ascribe consciousness to behaviors, but the context of those behaviors should be defined, which it can be when dealing with humans. It's interesting to think that what makes us human is the most imperfect part about us.