The Mythical Origins of Artificial General Intelligence
Whatever AGI is, Turing didn't invent it
What is Artificial General Intelligence?
This post is a mini deep dive I did into the origins of the term AGI or ‘artificial general intelligence’. Spoiler: I was not impressed with what I found.
I was sitting reading a 2024 article from the Economist entitled ‘How to define artificial general intelligence’. The article mentions that on March 19th, 2024, Jensen Huang, chief executive of the company Nvidia, stated that he believed AI could advance to AGI within five years. The article then raises the question of how to define AGI and how to know when it arrives. Good questions to be asking.
The definition of AGI that Huang offers is interesting. He claims, according to this Economist article, that we will have reached AGI when an AI program can do 8% better than most people at tests like a logic test or the bar exam. I have many questions about this definition of AGI, but it was the next sentence of the article that really got me.
It states that Huang’s proposal was
‘‘the latest in a long line of definitions. In the 1950s Alan Turing, a British mathematician, said that talking to a model that had achieved agi would be indistinguishable from talking to a human’’ (beginning of the third paragraph; please verify the quote for yourself)1
I was honestly taken aback by the casualness and falsity of this statement. In what universe was Turing the first to give us an operative definition of AGI? In what universe did Turing even talk about AGI? Certainly not in this one.
I’ve written on Turing and his famous ‘Computing Machinery and Intelligence’ paper before. The paper, it seems to me, is one where Turing is interested in discussing the social consequences of the advent of AI. He first asks the question ‘can machines think?’ before dismissing the question as ‘‘too meaningless to deserve discussion’’ (p 10)2
He then goes on to say that instead of answering this question he will describe a game called ‘the imitation game’, which we now call the ‘Turing Test’. The game consisted, as the article I quoted makes reference to, of trying to see if a computer could fool a person into thinking it was a person. Turing predicted that within fifty years an average interrogator would have no more than a 70% chance of correctly identifying the machine after five minutes of questioning, and he suggested that by then we would be able to talk of machines ‘thinking’ without expecting to be contradicted.
Of course, you might not agree with how I interpret Turing’s work; that’s fine. But what I am most interested in is how Turing is simply inserted into the conversation on AGI, so anachronistically, without justification. What’s going on?
Turing, Searle, and Artificial General Intelligence
I could not make sense of this claim. So I decided to do some digging. I wanted to know when the term ‘AGI’ first emerged, and how the notion that Turing spearheaded it could have seeped into this article.
I found a Forbes article that said the term AGI was coined in a collection of essays published as a book titled Artificial General Intelligence, co-edited by Ben Goertzel and Cassio Pennachin.3
Reading the book, I found that it does state, in reference to the works it discusses, that ‘‘this body of work has not been given a name before; in this book we christen it “Artificial General Intelligence” (AGI)’’. So let’s assume that the term AGI emerged in 2007, or at least, that this book was one of the first to discuss AGI.
Turing published the imitation game in 1950. We still seem to have some missing puzzle pieces. Given a gap of more than half a century between Turing’s work and the coining of the term, we cannot so easily assume that he was concerned with discussing AGI.
I decided to keep reading. Maybe this book would offer more answers. The book describes its aim as ‘‘to present what we believe to be some of the most important ideas and themes in the AGI field overall’’4
It also briefly offers an answer to the question: what is ‘general intelligence’? It gives two dictionary definitions of intelligence before claiming that ‘‘general intelligence implies an ability to acquire and apply knowledge, and to reason and think, in a variety of domains, not just in a single area like, say, chess or game-playing or languages or mathematics or rugby’’5
I have thoughts on this definition. The first thing that comes to mind is that we are back to the idea that ‘intelligence’ somehow means AI being able to ‘do things’, an idea I’ve critiqued in a previous article, but let’s press on.
Our first encounter with Turing comes on the next page, which states that no discussion of the definition of intelligence is complete without talking about Turing. The book shortly thereafter asserts that
‘‘The most important point about the Turing test, we believe, is that it is a sufficient but not necessary criterion for artificial general intelligence’’6
This quote is interesting and full of ambiguity. It inserts Turing into the conversation on AGI without making clear whether Turing himself would have situated his work in this context, or would have viewed his work through this lens. The authors write that they believe the test is a sufficient criterion, but one might be tempted to ask why. Generally speaking, anachronistic readings require justification.
The book then goes on to do the exact same thing with John Searle’s work. The very next sentence reads
‘‘Some AI theorists don’t even consider the Turing test as a sufficient test for general intelligence – a famous example is the Chinese Room argument’’7
The Chinese Room Argument, as it is called, is one of Searle’s most famous arguments, but it is surprising to me to see his work pop up in discussions of AGI. Searle is a foundational thinker in the philosophy of mind, and he too was unconnected to AGI debates.
Searle’s Chinese Room thought experiment (published in 1980) argued against what he called the Strong AI thesis, the view that an appropriately programmed computer has a mind, and that these cognitive states explain human minds.8 A core assumption of the strong AI thesis is what we might call ‘the mind-computer program analogy’, the view that the mind is fundamentally a computer program, and that the brain is its hardware. Searle rejects this view using the following argument, referred to in the literature as the ‘Chinese Room Argument’.
Suppose you put me in a room and gave me various batches of symbols that I cannot understand or interpret (it could be any language that I don’t understand, be it a formal language or a natural one). Suppose also that these batches of symbols represent, respectively, a story, a set of questions about the story, and a set of answers to those questions. I also get a set of instructions in English for how to correlate the batches of symbols. E.g., ‘if you see this symbol as input, then you should map it to this symbol as output’, etc.
Under such circumstances, Searle argues, I could become extremely good at manipulating these symbols. I could easily practice enough to produce the right answers to the questions being asked, respond promptly to each of them, and so on. And still, I would have no real understanding of the questions being asked or the answers being given. Searle’s point is that manipulating symbols doesn’t equate to understanding.
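The rule-following Searle describes can be pictured as nothing more than a lookup table. Here is a toy sketch of that idea; the symbol pairs are invented for the example, and the point is that the program only matches input shapes to output shapes:

```python
# A toy "Chinese Room": answers are produced by pure symbol lookup.
# The rule book below is invented for illustration; nothing in the program
# understands any entry, it only pairs input symbols with output symbols.
RULE_BOOK = {
    "这个故事是关于谁的？": "一个农夫。",  # "Who is the story about?" -> "A farmer."
    "农夫快乐吗？": "是的。",              # "Is the farmer happy?" -> "Yes."
}

def chinese_room(input_symbols: str) -> str:
    """Follow the instructions: map the input batch to its output batch."""
    # The fallback reply is just another uninterpreted string of symbols.
    return RULE_BOOK.get(input_symbols, "我不知道。")

print(chinese_room("农夫快乐吗？"))  # a correct-looking answer, zero understanding
```

To an outside questioner the room’s answers look competent, which is exactly why Searle thinks behavioral output cannot settle the question of understanding.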
He therefore concludes that while we might be tempted to claim that a computer is really understanding things, it isn’t, and that we should reject the strong AI thesis (i.e., that computer programs have minds) on these grounds.
Again, whatever one thinks about this argument, a valid question is what any of it has to do with AGI. As I read it, the answer is nothing. Searle was writing in a philosophy of mind context and responding to a philosophy of mind (and perhaps computer science) thesis, and there is no mention anywhere in his work of AGI.
I have not read a ton of the literature on AGI, but from what I have read so far, I am somewhat skeptical that the field is as rigorous in its citation of ideas as it should be. This, of course, is not something I am actively claiming; only that what I have read worries me. The casual ambiguity around the original intentions of Turing’s and Searle’s work strikes me as problematic.
We cannot simply read AGI back into Turing’s, Searle’s, or anyone else’s work from pre-AGI times and claim that they were engaged in AGI debates. At the very least, if we are going to claim them as part of the lineage of conversations on AGI, there should be an honest acknowledgement of who is claiming that these people were engaged in these debates. Because Turing never claimed it, and neither did Searle. Not making these facts clear is misleading.
Maybe that is why we get articles from outlets like the Economist offering simplistic claims that are in fact false, such as the claim that Turing was the first to provide a definition of AGI.
The Turing Test and AGI
Let’s go back to the claim that the Turing test is a sufficient but not necessary criterion for artificial general intelligence.9 The claim, in other words, is that if a machine passes the Turing test, the machine has artificial general intelligence, where artificial general intelligence is, according to these authors, something like the ability to learn and apply knowledge across a wide variety of domains.
I frankly have no idea why we should think that an AI passing the Turing Test indicates the presence of AGI. Yet this is what the authors of this book suggest: that passing the test is a sufficient condition of general intelligence.
Depending on how we interpret ‘passing the Turing test’ this would qualify a significant number of chat bots as ‘generally intelligent’, meaning that these chat bots can learn and reason across a wide variety of domains. But what reasons do we have to believe these conclusions?
I can’t see any. And interestingly, no actual arguments even seem to be given for these claims, aside from the (non-)argument I wrote about in an earlier post: that because AI can ‘do things’, we should think that it thinks, is intelligent, etc.
Passing the Turing test shows absolutely nothing about whether an AI is generally intelligent. It merely shows that an AI can produce sophisticated enough output to convince us of certain arbitrary conclusions, where the term ‘sophisticated’ means something like ‘sounds human enough to us’. None of this allows us to draw conclusions about AI minds.
If the work I read to write this post is representative of the ‘origins’ of AGI, I am starting to better understand the casualness with which we get articles talking about this topic in terms that spout what I would consider falsities.
Thank you for reading AI Without Minds! If this essay was useful or interesting, you’re very welcome to subscribe for future posts. You can also support my work by buying a paid subscription. Currently, I have no added benefits for a paid subscription and will keep my weekly posts free, as I value the accessibility of my work. Buying a paid subscription allows me to keep producing informed, rigorous, and accessible writing for all. I post weekly on Wednesdays on topics in AI, the philosophy of mind, and their intersection. See this post for a starter guide to my work, if you are new here, or subscriber-curious.
Photo Credit
Photo by Marek Pavlík on Unsplash
The Economist. (2024, March 28). How to define artificial general intelligence. The Economist.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Press, G. (2024, March 29). Artificial general intelligence or AGI: A very short history. Forbes.
Goertzel, B., & Pennachin, C. (Eds.). (2007). Artificial general intelligence (Cognitive Technologies). Springer, p. 2.
Goertzel, B., & Pennachin, C. (Eds.). (2007). Artificial general intelligence (Cognitive Technologies). Springer, p. 7.
Goertzel, B., & Pennachin, C. (Eds.). (2007). Artificial general intelligence (Cognitive Technologies). Springer, p. 8.
Goertzel, B., & Pennachin, C. (Eds.). (2007). Artificial general intelligence (Cognitive Technologies). Springer, p. 8.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457.
Goertzel, B., & Pennachin, C. (Eds.). (2007). Artificial general intelligence (Cognitive Technologies). Springer, p. 8.

