It’s been nine months since GPT-4 was released. I’m still trying to make sense of things. There’s a dearth of level-headed analysis out there. Most people’s analysis seems to be framed by science fiction novels, or it relies on frameworks inherited from the pre-GPT world, frameworks that did not anticipate the success of LLMs. Even the engineers involved didn’t expect LLMs to be as powerful as they are. That’s a sign we need a fresh perspective.
I categorically reject the hysterical arguments coming from sci-fi metaphysicians. I suspect even the concepts of AGI vs ASI are too grandiose and sloppy. But here’s what I can say with confidence:
- We are heading into a world where you’ll be able to interact with machines using natural language. In most cases, getting the machine to do your bidding will not require computer programming skills.
- These machines will be able to do things previously thought impossible. They will be able to effectively reason about concepts. This reasoning will be imperfect, but for many complex tasks, intellectual and otherwise, they will outperform humans.
- These machines will eventually be embodied and able to navigate the physical world. This navigation will also be imperfect; in some environments, they will be incompetent, and in others, they will outperform humans.
Language, not Souls
There’s no shortage of people ascribing souls to these machines. I think I’m starting to understand why.
I believe LLMs are a genuine technological breakthrough that will result in numerous other breakthroughs. As usually happens, the engineers have created something that nobody understands yet, and it will take time to figure things out. Empirical breakthroughs come first; the theoreticians come afterwards.
In my mind, the breakthrough comes from a revelation about the philosophy of language. This revelation is so incredible and counter-intuitive that people find it easier to claim the machine has a soul than to update their philosophy of language. In fact, I expect the only people unsurprised by the power of LLMs are those with a soft spot for occultism of some sort: those who think words are magical. Let me explain.
Fill in the _____
The core of the LLM is pattern-finding and pattern-filling, and the most basic versions of these abilities already exist in word processors. For example, a word processor can easily tell us how many times a specific word appears in a text. A computer can count the number of times the word “machine” comes up in this article. It can also tell us what words come before and after “machine.”
We can imagine coming up with all kinds of complicated relationships for the computer to track. For example, we could say, “Find all the sentences which contain the word ‘machine,’ then record the first and last words of those sentences and dump them into an Excel spreadsheet.”
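To make that concrete, here is a minimal Python sketch of such a task. The sample text, the word being searched for, and the output file name (machine_sentences.csv) are all illustrative stand-ins I’ve chosen; a CSV file opens fine in Excel. Nothing here is special to LLMs, it is just ordinary text processing.

```python
import csv
import re

# Toy text standing in for this article (illustrative only).
text = ("The machine can count words. Poetry is one thing. "
        "A mindless machine can detect patterns.")

# Split into sentences, keep those containing the word "machine",
# and record each matching sentence's first and last word.
rows = []
for sentence in re.split(r"(?<=[.!?])\s+", text):
    words = re.findall(r"[A-Za-z]+", sentence)
    if any(w.lower() == "machine" for w in words):
        rows.append([words[0], words[-1]])

# Dump the results into a spreadsheet-friendly CSV file.
with open("machine_sentences.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["first_word", "last_word"])
    writer.writerows(rows)  # e.g. ["The", "words"], ["A", "patterns"]
```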
We could also tell the computer to find probabilities for us. For example, in this article, the word “will” comes up several times and is followed by different words: “will be”, “will not”, etc. We could easily ask the computer to generate a list of words that follow “will” and show their respective probabilities.
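Here is a rough sketch of the counting involved, assuming a toy snippet of text that stands in for this article. The point is only that word counts and next-word probabilities fall out of simple bookkeeping.

```python
import re
from collections import Counter

# Toy snippet standing in for this article's text (illustrative only).
text = """These machines will be able to reason. The machine will not be perfect.
A machine will be embodied. These machines will outperform humans at some tasks."""

words = re.findall(r"[a-z]+", text.lower())

# How many times does the word "machine" appear?
machine_count = sum(1 for w in words if w == "machine")

# Which words follow "will", and with what probability?
followers = Counter(after for before, after in zip(words, words[1:]) if before == "will")
total = sum(followers.values())
probabilities = {word: count / total for word, count in followers.items()}

print(machine_count)  # 2 in this toy snippet
print(probabilities)  # {'be': 0.5, 'not': 0.25, 'outperform': 0.25}
```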
This simple ability allows the computer to complete sentences. Imagine we load a massive document into the word processor—say, every forum thread and social media post on the entire internet that discusses the Bible. We might tell the computer to find a bunch of patterns within this massive pile of words and ask it to provide the most likely word that finishes the following sentence:
“In the beginning, God created the heavens and the _____.”
If the computer spits out the correct answer “earth,” nobody would be surprised. It’s not magic.
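Here is a minimal sketch of that completion trick, assuming a tiny stand-in corpus instead of every Bible discussion on the internet. A table of next-word counts, keyed on the previous couple of words, is enough to fill in the blank; a real LLM conditions on far longer contexts and uses learned weights rather than raw counts, but the flavor is the same.

```python
import re
from collections import Counter, defaultdict

# Tiny stand-in for "every post discussing the Bible" (illustrative only).
corpus = """In the beginning God created the heavens and the earth.
God created the heavens and the earth in six days.
The heavens and the earth declare his glory."""

words = re.findall(r"[a-z]+", corpus.lower())

# Next-word counts, keyed on the previous two words.
next_counts = defaultdict(Counter)
for a, b, c in zip(words, words[1:], words[2:]):
    next_counts[(a, b)][c] += 1

def complete(prompt: str) -> str:
    """Fill in the blank with the most frequent next word, given the last two words."""
    last_two = tuple(re.findall(r"[a-z]+", prompt.lower())[-2:])
    return next_counts[last_two].most_common(1)[0][0]

print(complete("In the beginning, God created the heavens and the _____"))  # -> "earth"
```

Swap in a larger corpus and a longer context, and the completions get richer; the mechanism is still just tallying.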
Generalized Patterns
LLMs are not like Microsoft Word, however. They are “trained” on unbelievable amounts of information, large fractions of the entire internet, a process that costs millions of dollars. With this much text as input, LLMs start to look magical when we ask them to produce novel sentences for us. Instead of asking for the completion of a well-known sentence, we ask for something new. Poetry is one popular example. I know it’s overused, but it really does demonstrate something incredible about LLMs. So, for illustration, I asked GPT-4 to create a ten-line limerick that summarizes the entire Biblical story. It responded:
- In Eden’s garden, all was fair and bright,
- Till man chose wrong, and lost the path of right.
- Prophets foretold a Savior pure and true,
- To guide the lost and make all things anew.
- Born in a manger, under star’s soft light,
- Jesus taught love, to conquer darkest night.
- On the cross, His life He did resign,
- To save all souls, a sacrifice divine.
- The tomb found empty, He rose to the sky,
- A promise of hope, for you and for I.
How can the LLM do this when it’s never been trained on “Biblical limericks” before? How does it understand what “rhyming” is? How does it know the general rhythmic style of a limerick? How did it know which core concepts of the Biblical story to choose? None of these things were directly programmed into the machine.
If we put metaphysics aside for a moment, we can say this is a demonstration of some form of high-level understanding of concepts. Call it a practical understanding, or an effective understanding. The machine understood my request and gave me a highly creative and correct output.
Poetry is one thing; scientific discussion is another. I’ve had lots of high-level philosophical and scientific discussions with GPT-4, and it does a great job. It can follow along with better-than-average conceptual nuance. It can effectively reason and even recognize logical fallacies. It’s not perfect, but it’s already shockingly good.
This shouldn’t work. I have oversimplified the way these machines are trained, but the essential mechanism is correct. They really are just probability machines trying to create strings of symbols that you’ll find acceptable.
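To make “probability machine” concrete, here is a toy sketch of the generation loop: repeatedly sample the next word in proportion to how often it followed the previous one. A real LLM predicts over subword tokens with a neural network trained on a large slice of the internet, not a counting table, but the loop that produces the string of symbols has this same shape.

```python
import random
from collections import Counter, defaultdict

# Toy corpus (illustrative); a real LLM is trained on vastly more text.
corpus = "the machine will reason . the machine will speak . the human will reason ."
words = corpus.split()

# Next-word counts, conditioned on the previous word.
table = defaultdict(Counter)
for before, after in zip(words, words[1:]):
    table[before][after] += 1

def generate(start: str, length: int = 10) -> str:
    """Build a string by repeatedly sampling the next word in proportion to its count."""
    out = [start]
    for _ in range(length):
        counts = table[out[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the machine will reason . the human will speak . the"
```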
Understanding and Ensoulment
It is natural to think that a mere word-machine cannot truly understand the meaning of our concepts—at least, not in the way humans understand them. Understanding is very abstract, very personal. Grasping and comprehending abstractions feels like something we do within our souls. I think this is the reason people are attributing souls to LLMs—the computer seems to understand what we are saying, in a way we have only encountered with other humans before.
Humans, of course, are people, with goals, plans, ambitions, personalities, and consciousness. So LLMs are instinctively categorized as the same kind of entity; we’ve never encountered this ability separate from a mind before.
But alas, LLMs are not people. We have not created Frankenstein’s monster. We don’t have any reason to believe the machine is conscious or in possession of a mind similar to a human’s. It appears to be exactly what it was designed to be: a word machine and nothing more.
Instead of speculating about ensoulment, I think we need to update our philosophy of language.
Words and Logical Structures
It turns out, a whole bunch of stuff is encoded in our language. So much, in fact, that by tracking patterns in our language, a mindless machine can effectively reason through concepts. That is, by mimicking our language, the machine can mimic our reason.
This ability did not come from understanding the meaning of individual words in isolation. It didn’t come from—as with a human—receiving extra-linguistic training from a parent who can point at objects to give them ostensive definitions.
No, it came from analyzing a massive quantity of words from the internet. The patterns within our language, across our sentences (the mathematical patterns of the meaningless symbols themselves), are so strong and definite that a machine is able to imbibe them and gain the ability to use natural language.
Let me repeat: there is so much abstract structure in our language—the patterns are so overwhelmingly clear, consistent, and objective—that by mindlessly figuring out the probability of one symbol following another, a machine can effectively reason better than the average person for a large number of cases. Extraordinary.
There’s an analogy to atoms and molecules here. Imagine each word is an atom, and each phrase a molecule. Those molecules can combine with others to form sentences, paragraphs, and other word-forms. Now imagine the computer has the ability to store, say, a trillion such word-forms for reference—a trillion identifiable and repeatable patterns and connections between words.
That turns out to be roughly analogous to what GPT-4 does, and a trillion-plus patterns is apparently enough to find high-level embedded patterns of reasoning among the words.
I can hardly imagine a more mind-blowing idea in the philosophy of language, which is why I claim only the occultists might be unsurprised. Language seems to be a sort of intermediary between the abstract world and the physical world.
There are the underlying physical words: the ink on paper, or the bits on a hard drive. Then there are sentences, which are higher-level patterns of words. These sentences encode still higher-level patterns, concepts, forms, abstractions, and logical structures, all of which are equally objective and real. They are so real that a mindless computer can detect them and even use them for “understanding” the world.
It’s perhaps the clearest demonstration of Platonism ever.
Before LLMs, our language could be understood only by other sentient beings. We need to recognize that this was not because understanding language requires sentience in principle; we now have an empirical demonstration that a sufficiently advanced calculator can do it, too.
There’s a lot more to say about the subject, but this article is long enough. I hope it contributes to more sane discussion about AI.