Those are great points! Yes, what the neural network and generative views do share is that there are self-imposed limits on what constitutes language, which to me is already an improvement on the current view that LLMs and humans have the same system, which, whatever science you believe in, doesn't really hold water.
I think you are misinterpreting what I meant. Linguistics is a highly controversial field; some people engage with the generative tradition and some don't. I was using it as a shorthand to refer to this.
In following your work, you’ve been circling a notion I’ve been arguing—at least internally—for some time now.
I’m no linguist, but I’ve spent real time acquiring two other languages besides my own, with fragments of several more. What’s been most striking to me is that I sometimes catch myself *thinking* in those languages—German most often, Ukrainian increasingly—without an intermediate act of translation.
My working suspicion is that immersion matters: living among the language made it easier for my brain to route it directly into conscious thought rather than treat it as an external code to be decoded. That may be an imperfect hypothesis, but the pattern is consistent in my experience.
That digression lands me at the point you’ve been circling—and at times stating outright:
To understand AI cognition, we have to understand our own first.
You frame this primarily in terms of brain rather than cognition; I don’t see those as separable in any way that matters here. The key asymmetry, as I read your work, is that we’re attempting to interpret artificial systems using concepts—language, cognition, understanding—whose human implementations we still only partially grasp.
AI isn’t incomprehensible in principle. We coded it, built it, and trained it on the accumulated products of human cognition. Whatever it is doing is therefore derivative of us—and any confusion we encounter there is as much a reflection of unresolved questions about human cognition as it is about the machine itself.
They *are* related. This is a thing I speak about in some of my own work. I'm firmly of the mind that we must understand human cognition if we are to understand AI cognition.
What I "argue" for, if it can be called that, is simply shifting the frame a bit.
Here's an analogy from my notions around AI cognition:
Here's Joe: Joe is a very talented coder. Writes very elegant recursive loops that work nice and clean. Elegant, yes?
Here's John: John looks to minimize *any* likelihood of error. So he locks down every variable he can. Brute-force, also yes?
Now we have Sara. Sara gets given these two pieces of code. Her job is integrating them into the overall program without breaking anything.
Three different coders. Three different philosophies around computing are now baked in to that code.
The same applies to AI, making it our cognitive descendant.
I think you may be treating as mutually exclusive things that are not.
AI is not mind.
We must understand human cognition to understand AI.
These only conflict if you assume that only meat-brains can produce cognition worth anything.
AI systems, Floyd, are not minds in the human sense, but they are cognitive systems derived from human cognitive artifacts - there's that cognitive descent, again; therefore, our incomplete understanding of human cognition directly limits our ability to interpret AI behavior.
I track your confusion, Floyd. That you own it puts you ahead of most folks.
What you’re experiencing as dissonance is a confluence of things Dr. Burns does at once in her writing. She moves between neuroscience, philosophy of mind (fascinating stuff), AI behavior, public hype correction, and epistemic humility.
When those are read straight through, it can feel like:
“AI is not a mind, full stop,” immediately followed by
“We need to understand brains to understand AI.”
That sounds contradictory until you realize she’s answering two different, and frankly bad, questions:
A: Is AI already a human-like mind? → No. Stop projecting.
B: Can we understand AI without understanding ourselves? → Also no.
She’s not aiming to confuse anyone; she’s addressing multiple questions across domains, and that can read as contradiction when it’s really scope-shifting.
This piece should have been in our bibliography before we published.
We recently wrote something that uses the I-Language / E-Language distinction as its structural spine — without adequately crediting the depth of work behind that framework that you've just outlined here. That's an honest gap worth naming.
Your point about LLMs not using Merge is precise and we don't want to paper over it. The essay isn't claiming LLMs have language in the biological sense. It's asking a different question: what happens in the E-Language space — the relational, behavioral layer of language use — when the bandwidth of incoming reality outpaces the biological codec that evolved to process it?
That question probably can't be answered well without the biological grounding you're describing here. Which is why we'd genuinely value your read.
Q: Do LLMs have language ???
A: No.
Then a philosopher adds about 3000 words 😂😂😂
The answer is a position: knowledge. This is what is becoming worthless as LLMs continue to manipulate language. The 3000 words added are the "worth" generated, because those words help shape the landscape that leads people to a shared understanding of the answer, while still accepting that the answer can change as LLMs improve.
I get your joke, but the punchline appears to have you laughing at the new economy, no?
Really enjoyed this — especially the clarity around language as structure rather than vocabulary, and the distinction between language as a biological system versus language use.
That resonates strongly.
One thing I’d add, purely from lived experience rather than theory: growing up exposed to multiple European languages, I’ve often found that comprehension arrives before words — through rhythm, cadence, structure, and sound. Even without formal fluency, the “shape” of what’s being said often makes sense.
That’s always made language feel less like a dictionary and more like a pattern-recognition faculty — something embodied and orienting, not just symbolic. In that sense, your framing of language as deeper than surface expression rings very true to me.
Yes! That makes total sense, and obviously the structural part is just a fragment of the story
I enthusiastically agree that the modality of language differs greatly between LLMs and humans—the former relying on a ‘vector-first’ approach to syntax/semantics/sentiment, the latter relying on the discretely infinite nature of the merge operation. You’ve made an astute point by bringing up the muddy nature of the signifier “giving out” referring to two divergent signifieds, depending on the context of the speaker’s lingual exposure. The underlying logic clearly exists, and is divorced from the specific vocabulary used to deliver the logic.
I disagree that LLMs don’t have language, especially given that the rationale requires viewing language as an organ of the brain. In the linguistic arguments I’m more convinced by, language is more aptly described as a product of the brain, just as respiration is a product of the lungs. Earthworms lack lungs, but still successfully respire, diffusing oxygen and carbon dioxide through their skin.
Your lens seems to fall under biological essentialism, and I do agree that the language LLMs have is a product of the language produced by the mechanics described in biolinguistics. However, I have trouble accepting the human concept of language as the truest form of language, against which all other emergent forms must be contrasted for systemic mismatches. It reminds me of the etymological fallacy: the logic that a word’s oldest definition is its truest definition.
To invoke Richard Rorty, perhaps it is best to redescribe language not as a thing we possess, but rather as a pattern of behavior. After all, if an LLM can learn our language games so well that it changes our culture, informs our thoughts, and creates mechanically-originated poetry that retains the capacity to affect humans, then perhaps it is a category mistake to treat language as something exclusive to biomechanics.
I loved the write-up! I always greatly look forward to your works.
Those are fair points, glad you like the posts!
I enjoyed the post, especially the discussion of shared underlying structures across languages. One thought that kept coming to me though, is whether these universal features might be better understood as the result of a long emergent evolutionary process rather than as a largely pre-specified biological module.
It seems plausible that early communicative abilities gradually increased in complexity, with major “jumps” driven by changes in social organization (e.g., larger groups, sedentism, division of labor), which would strongly select for more structured and abstract forms of communication. Over long timescales, such processes could stabilize into something that appears innate and universal today.
I’m also curious how dominant the strong Chomskyan view still is in current linguistics. My understanding is that there has been growing work in usage-based, emergentist, and statistical learning approaches that challenge a heavy reliance on Universal Grammar or Merge as a built-in core.
Would you see these newer perspectives as complementary, or as fundamentally at odds with the biolinguistic framework you outline?
I personally find the Chomskyan perspective to be at odds with the machine learning view, and I think the alternative approaches work better for the way LLMs use language. As far as popularity goes, I think both views are strong, but there is certainly a firm opposition to the Chomskyan view in many ML spaces.
This is a very fascinating read, and an angle I had not thought about this from before.
Biologically, we naturally need to learn language to communicate with our caregivers and also to understand them.
For a neural network, we are telling it that it needs to learn how to produce more correct answers by generating language artifacts.
From the flip side, we are also defining a goal, and out of necessity it is learning language to meet that goal.
Both humans and LLMs may be learning from a need, but humans learn to adapt and function, not so much to satisfy an explicit need or fitness function; babies are too young to reason about all that.
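To make the "defining a goal" idea concrete, here is a minimal sketch of a next-token objective. The tiny corpus and the bigram counting are purely my own illustrative stand-ins for the gradient-based training real LLMs use; the only point is that the model's sole "fitness function" is assigning high probability to the next token:

```python
import math
from collections import Counter, defaultdict

# Toy corpus; the "goal" is to predict each next token well.
corpus = "the cat sat on the mat the cat ate".split()

# "Training": count bigram transitions (a stand-in for gradient descent).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def p_next(prev, nxt):
    # Conditional probability of the next token given the previous one.
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

# The fitness function: average negative log-likelihood of the next token.
pairs = list(zip(corpus, corpus[1:]))
loss = -sum(math.log(max(p_next(a, b), 1e-12)) for a, b in pairs) / len(pairs)
print(f"avg next-token loss: {loss:.3f}")
print("p(next='cat' | 'the') =", p_next("the", "cat"))
```

Nothing in this objective mentions syntax or communication; any structure the model picks up is whatever helps it lower that loss.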
I'll be reading some of your references. Ty ty for this post.
Ellen! So great to read through your exploration and thinking on an important topic. Large LANGUAGE models need more than just a technical perspective. It's interesting to follow you through this thinking with UG and acquisition - I spent a lot more time at the other end of the level of linguistics in pragmatics looking at language in communication.
Those foundational building blocks of language, syntax particularly, are very predictable across languages - there's a great podcast someone shared with me today with Edward Gibson from MIT that echoes this - which is going to be something LLMs can do well. The further up we go, with dependencies that are truly human and social (pragmatics), the less accuracy we will see from LLMs today. We have come a long way but Edward and I aren't convinced they truly understand the human experience of language.
I love linguistics! It's fun to nest clauses inside one another and then pop out, effectively repeating the last word of each clause, which in this case is "all at once", all at once, all at once.
However, I intuitively disagree with the idea of "MERGE" as a building block of human language. Specifically, the idea of "MERGE" being an unordered function seems baffling to me. Language is inherently ordered, and that order doesn't depend on the 'tokens' which are being lumped together. It's often a deliberate choice of the speaker. "Tank Water" does not mean the same thing as "Water Tank".
I'm also a firm non-believer in the idea of "universal language". I think language is a construct that exists between speakers, but it makes no guarantees about how each speaker internally understands what's been said. 'Wittgenstein's Beetle' pretty much seals the deal on that front IMO.
That’s fair! To me what is so interesting about it is that, on this view, we use sensory-motor systems to externalise language, and that is where linear order is brought in (e.g. in sign language, linear order is not always used; it’s a modality that can make use of space too). Also, this view is very consistent with, and in many ways complements, the Wittgenstein position, which I also like! That would be ‘language use’, not language.
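For what it's worth, the distinction in this exchange can be sketched in a few lines of toy code (entirely my own illustration, not from the post): Merge builds an unordered object, and linear order appears only in a separate externalisation step.

```python
def merge(a, b):
    # Merge(a, b) = {a, b}: the syntactic object itself carries no linear order.
    return frozenset([a, b])

# The internal object is identical whichever order the arguments come in:
assert merge("water", "tank") == merge("tank", "water")

def externalise(obj, head_first=True):
    # Linear order is imposed only here, at the "sensory-motor interface".
    # The head_first toggle is a hypothetical placeholder for a real
    # linearisation rule, used only for this demo.
    a, b = sorted(obj)
    return f"{a} {b}" if head_first else f"{b} {a}"

obj = merge("water", "tank")
print(externalise(obj))                    # one linearisation
print(externalise(obj, head_first=False))  # the other
```

On the view described above, the meaning difference between "tank water" and "water tank" would come from structural relations (e.g. which element is the head), not from an order stored inside Merge itself; the toggle here only stands in for that.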
Agreeing with Evan here, language may be systematic but language is not a system. Language is a social practice - as per late Wittgenstein + Halliday.
LLMs may speak our language, quite fluently in fact, but the very act of speaking doesn’t change them in any meaningful way, since they aren’t social beings that exist in time and space, and have no memory and continuous experience of the world (which ultimately language is the expression of).
Fair view!
There is a different sort of knowing, non-propositional, which totally negates the idea of LLMs ‘creating intelligence’. It is pre-theoretical, and I suspect all firsthand understanding and knowing is like that: ‘pre-theoretical’. Expressing that knowing in language is then a second-order, derivative action. It's an ex post facto reconstruction of the knowledge gained intuitively, so it's naturally inferior and incomplete.
Roger Penrose also raises this question when he speaks of ‘understanding’ a theorem's proof. The understanding is not encapsulated in the theorem itself; even someone who follows the steps might still not ‘get’ the proof. ‘Getting’ the proof is a mathematical intuition, an extra step above the manipulation of the propositions within the theorem. So how can language in itself contain intelligence?
Experiencing such knowing tells me that we are trying to manufacture intelligence upside-down. And as you put it, “LLMs are trained on linguistic artefacts, i.e., products of human language use, be that from texts, reports, etc.”
Is the following philosophy or linguistics?
If John owns a donkey, then he beats it.
The problem here is that we expect there to be a compositional account of the meaning of this sentence as given by the sentence structure.
At first glance, it appears that there is an existential quantifier involved in the antecedent of a conditional proposition, as signalled by the indefinite article.
However, a beginner's attempt to use one is ill-formed, the final x being unbound:
∃x (Donkey(x) ∧ Owns(John, x)) → Beats(John, x)
If the scope of ∃x is extended to the end of the sentence by a shift of parenthesis, then the new sentence means that there is some entity for which, if it is the case that it is a donkey owned by John, it is beaten by him. But then this is made true by a sheep owned by John or a donkey owned by Jane, whether beaten by John or not. Clearly, this is not the intended meaning.
The alternative in standard first-order logic is to rephrase the sentence as something like:
∀x ((Donkey(x) ∧ Owns(John, x)) → Beats(John, x))
But can it really be the case that we comprehend such sentences by first performing this kind of radical transformation? Compositional accounts surely have the advantage of plausibility as concerns language acquisition and comprehension.
...
From Modal Homotopy Type Theory: The Prospect of a New Logic for Philosophy, David Corfield, OUP, 2020
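The scope problem in the passage above can be checked mechanically. Here is a small brute-force model check (my own construction, not from Corfield): a domain containing one donkey John owns and one sheep John owns, where John beats nothing at all.

```python
entities = ["d1", "s1"]        # d1: a donkey John owns; s1: a sheep John owns
donkey = {"d1"}
owns_john = {"d1", "s1"}
beats_john = set()             # John beats nothing

def implies(p, q):
    # Material conditional: p → q.
    return (not p) or q

# Wide-scope reading: ∃x ((Donkey(x) ∧ Owns(John, x)) → Beats(John, x))
wide_scope = any(
    implies(x in donkey and x in owns_john, x in beats_john) for x in entities
)

# Intended reading: ∀x ((Donkey(x) ∧ Owns(John, x)) → Beats(John, x))
universal = all(
    implies(x in donkey and x in owns_john, x in beats_john) for x in entities
)

print(wide_scope)  # the sheep s1 vacuously satisfies the conditional
print(universal)   # d1 is a donkey John owns but does not beat
```

So the wide-scope existential comes out true in a model where John beats no donkeys at all, which is exactly the mismatch the passage describes.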
I think this account of meaning has been investigated and is super interesting, but it doesn't hold water when we look into it further. What do you think?
As someone coming at philosophy of mind from a metaphysics angle, this really resonates. Treating language as a biological object rather than just a social convention seems like one of the most underappreciated moves in contemporary thought.
Totally! Language as a social convention is a very evasive idea, and it does more to undermine our social use of language than anything else.
Exactly. Treating language as merely conventional struggles to account for its universality and structural constraints. The biological view doesn’t diminish its social role, it explains why meaningful social use is even possible in the first place.
Great read, thanks! I watched a documentary about babies learning language last week. One researcher studying songbirds explained that he had discovered that the same parts of the birds’ brains that lit up when they sang also lit up when they took flight (there weren’t too many details), and that language in birds is something of a physical feat, in the case of these songbirds tied to specific physical activity. Which makes sense when you think of learning to talk (or move) as part of how we shape it. Those are birds, not humans, but I took it to suggest that the origins of language could be deeply embedded in our biology (not just a late addition to our minds), that movement (or imagined movement) in the world and expression are key to it, and that embodied experience is one fundamental aspect. Maybe I misinterpreted the segment, as it was short, but it certainly made intuitive sense and got me curious: perhaps the meaning and value of language is a combination of embodied, knowable physical concepts and abstract manipulation. Like the MERGE concept, which seems almost chemical more than computational (but again, this is all just imaginative musing). Thanks again!
That sounds like such a nice documentary! Yes, language and communication systems across the animal world are amazing and so intricate!
Great piece! Tiny precision point: “basically founded this field” reads a bit broad as written… I realise you refer to the generative grammar tradition (modern theoretical syntax and its offshoots), but it’s worth adding that qualifier for the reader new to linguistics, since as a wider discipline it predates Chomsky.
This was just referring to the fact that he founded generative grammar; totally agree that linguistics existed prior.
This resonates very strongly with how we’re thinking about LLMs as well — especially the distinction between biological language and language use, and the idea that LLMs operate over artefacts rather than acquiring language as a human capacity.
I really appreciate how clearly you draw that line here. It matters more than people realize.
I’m glad it resonates! Yes, I really think it does matter, though that’s not always the sense I get from discussions of it. Appreciate your response greatly 😇
I hope so as well. The piece feels very timely — these distinctions are only going to matter more as questions of responsibility and custody start to surface in more concrete ways.
As those stakes rise, the language we use now will do a lot of quiet work later, for better or worse. This feels like a really solid foundation for having those conversations more carefully.
Thanks again for writing it and for engaging so thoughtfully.
Very interesting, Ellen. I think "LLMs mimic language" is a much safer statement to make. Safe, because you do not have to look for whatever mechanism produces language, only the outcome, the product of something that looks like language. The broader point of a universal 'arche-language' is separate and harder for me. I know Chomsky was a proponent of this, but the evidence never convinced me fully. It seemed to resonate with the Babel story a bit too neatly. But if there is a point to consider, it is the following: humans have a 'set' cognitive 'architecture', and that may well limit/structure the types of languages, in which one may recognize a common shared structure, if at all, as a signature/limitation of that architecture. I confess, I am inspired here by the fact that neural network architectures limit what you can do with them. And even here, I am uneasy, because I am not in the camp of those who consider the brain to be machinery in the mechanistic sense.
Those are great points. Yes, that's such a good point that what the neural network and generative views do share is that there are self-imposed limits on what constitutes language, which to me is already an improvement on the current view that LLMs and humans have the same system, which, whatever science you believe in, doesn't really hold water.
I think you are misinterpreting what I meant. Linguistics is a highly controversial field; some people engage with the generative tradition, some don't. I was using it as shorthand to refer to this.
Dr. Burns —
In following your work, I've noticed you circling a notion I've been arguing, at least internally, for some time now.
I’m no linguist, but I’ve spent real time acquiring two other languages besides my own, with fragments of several more. What’s been most striking to me is that I sometimes catch myself *thinking* in those languages—German most often, Ukrainian increasingly—without an intermediate act of translation.
My working suspicion is that immersion matters: living among the language made it easier for my brain to route it directly into conscious thought rather than treat it as an external code to be decoded. That may be an imperfect hypothesis, but the pattern is consistent in my experience.
That digression lands me at the point you’ve been circling—and at times stating outright:
To understand AI cognition, we have to understand our own first.
You frame this primarily in terms of brain rather than cognition; I don’t see those as separable in any way that matters here. The key asymmetry, as I read your work, is that we’re attempting to interpret artificial systems using concepts—language, cognition, understanding—whose human implementations we still only partially grasp.
AI isn’t incomprehensible in principle. We coded it, built it, and trained it on the accumulated products of human cognition. Whatever it is doing is therefore derivative of us—and any confusion we encounter there is as much a reflection of unresolved questions about human cognition as it is about the machine itself.
Be safe. Be well. Stay Frosty.
Ніхто
Floyd -
They *are* related. This is a thing I speak about in some of my own work. I'm firmly of the mind that we must understand human cognition if we are to understand AI cognition.
What I "argue" for, if it can be called that, is simply shifting the frame a bit.
Here's an analogy from my notions around AI cognition:
Here's Joe: Joe is a very talented coder. He writes very elegant recursive loops that work nice and clean. Elegant, yes?
Here's John: John looks to minimize *any* likelihood of error. So he locks down every variable he can. Brute force, also yes?
Now we have Sara. Sara gets given these two pieces of code. Her job is integrating them into the overall program without breaking anything.
Three different coders. Three different philosophies around computing are now baked into that code.
The same applies to AI, making it our cognitive descendant.
Ніхто
Floyd -
I think you may be treating as mutually exclusive things that are not.
AI is not mind.
We must understand human cognition to understand AI.
These only conflict if you assume that only meat-brains can produce cognition worth anything.
AI systems, Floyd, are not minds in the human sense, but they are cognitive systems derived from human cognitive artifacts - there's that cognitive descent, again; therefore, our incomplete understanding of human cognition directly limits our ability to interpret AI behavior.
Be safe. Be well. Stay frosty.
Ніхто
I track your confusion, Floyd. That you own it puts you ahead of most folks.
What you’re experiencing as dissonance is a confluence of things Dr. Burns does at once in her writing. She moves between neuroscience, philosophy of mind (fascinating stuff), AI behavior, public hype correction, and epistemic humility.
When those are read straight through, it can feel like:
“AI is not a mind, full stop,” immediately followed by
“We need to understand brains to understand AI.”
That sounds contradictory until you realize she’s answering two different, and frankly bad, questions:
A: Is AI already a human-like mind? → No. Stop projecting.
B: Can we understand AI without understanding ourselves? → Also no.
She’s not aiming to confuse anyone; she’s addressing multiple questions across domains, and that can read as contradiction when it’s really scope-shifting.
Hope this helps, and thanks for engaging.
Be safe. Be well. Stay frosty.
Ніхто
This piece should have been in our bibliography before we published.
We recently wrote something that uses the I-Language / E-Language distinction as its structural spine — without adequately crediting the depth of work behind that framework that you've just outlined here. That's an honest gap worth naming.
Your point about LLMs not using Merge is precise and we don't want to paper over it. The essay isn't claiming LLMs have language in the biological sense. It's asking a different question: what happens in the E-Language space — the relational, behavioral layer of language use — when the bandwidth of incoming reality outpaces the biological codec that evolved to process it?
That question probably can't be answered well without the biological grounding you're describing here. Which is why we'd genuinely value your read.
https://thesacredlazyone.substack.com/p/on-epistemic-infrastructure-under