Author: Steve Patterson

  • Interview w/Bob Murphy on the Mind-body Problem

    I recently did an interview with Bob Murphy on our attempted resolutions of the mind-body problem. You might enjoy it.

    Years ago, I wrote an article explaining my own theory of indirect interaction. For some silly reason, I never created any visuals to go along with the article, though they would have helped immensely. So that was corrected in this interview, around the 43:30 mark.

    This theory is extremely flexible. It allows for interaction between multiple ontological categories—not just mind and body. If your ontology includes mind, body, spirits, numbers, abstractions, or more categories, it provides a plausible mechanism for interaction across them all.

    If you want two-way interaction, it allows for that. If you only want one-way, that’s fine too. If you want free will, there’s a spot for it. If you want an intuitive causal mechanism for spiritual phenomena like prayer, here’s how it could work. Or, you can be a hardcore determinist epiphenomenalist. It’s a powerful model indeed.

  • Interview on “The Digger” with Phil Harper

    Very enjoyable conversation with Phil Harper on the Dark Age Hypothesis. We get into some deep concepts.


    Link here:

  • New Book about Bitcoin!

    As many of you know, I’ve been working on a new book about Bitcoin for the last few years, and it’s finally ready! It is a collaboration with Roger Ver (aka “Bitcoin Jesus”), and we tell the story of how BTC was slowly hijacked over a period of several years.

    “This is the book that had to be written. It is a story of a missed opportunity to change the world, a tragic tale of subversion and betrayal.”
    -Jeffrey Tucker, Brownstone Institute

    Have you noticed that the financial establishment is now favorable towards Bitcoin? There’s a reason: it was successfully sabotaged and no longer threatens their power.

    https://amzn.to/4coudlP
    Hijacking Bitcoin: The Hidden History of BTC

    This story has been heavily censored for years, so newcomers will find the history shocking. Information manipulators and social media engineers have successfully silenced dissent about Bitcoin; they character assassinate anybody who shares this perspective. But much to their dismay, this book is finally here, and it’s going to blow open a story they would like to keep hidden.

    For idealistic Bitcoiners like myself and Roger, it’s a heartbreaking story, but one that had to be told.

    It’s not some kooky conspiracy theory, either. There are over 280 references in the book. Prepare to have your faith in Bitcoin shaken.

    The book is now available for pre-order on Amazon. Release date is April 5th. Go get your copy!

  • Apriorist Geometry and Curved Space

    Countless thinkers for the past two thousand years have appealed to Euclidean geometry as an example of rock-solid reasoning. The proofs in Euclid’s Elements are beautiful deductive structures. One proof builds on the next, and by accepting the starting axioms, you are compelled to agree with the final conclusions.

    The geometric objects within Euclid have properties which can be grasped by logical reasoning alone—e.g. the reason we don’t believe that parallel lines touch is that we understand the concept of parallel lines, not that we have observed and measured them in the world. Kant famously considered this an example of synthetic a priori reasoning and wondered how it could be possible.

    Moderns have since rejected the idea of the geometric a priori, thanks to the discovery of non-Euclidean geometries in the 19th century. And thanks to Einstein in the 20th, physicists claim that our own universe is non-Euclidean—space is curved by mass, they say. That’s what gravity is all about.

    Geometry, Meet Economics

    When I was researching the fundamentals of Austrian Economics and their distinctive methodology, there were often analogies drawn to Euclidean geometry—i.e. the unshakeable axioms of human action in economics are akin to the unshakeable axioms of Euclid in geometry. You don’t go out and measure whether the interior angles of a triangle add up to 180 degrees! You just know it by understanding what ‘triangle’ means!

    When I first heard these analogies, I rather liked them. All of my geometric intuitions were Euclidean, and I hadn’t heard about non-Euclidean geometries before.

    As I learned more, I discovered the history of non-Euclidean geometries. I’d speak with Austrian-types about it, but they tended to be skeptical and would ask questions like: What does it really mean for space to be curved?

    Clever mathematicians and physicists say odd things like, “You see, straight lines can sometimes be circular! It just requires space to be curved!” And my skeptical Austrian friends would roll their eyes. Physicists are just playing word games; straight lines cannot be curved by definition!

    I like this criticism. Since that time, I’ve spent a lot more time thinking about these topics and have become critical of the extreme apriorists in economics. And I don’t believe the axiomatic appeal to Euclid works. But not for the reason you might think.

    No Curved Space → No Curves At All

    The problem is not with the notion of a priori geometry. From what I can tell, mathematicians and physicists are playing word games when talking about “curved space.” I don’t think that concept makes sense. It’s useful for building practical models, but that doesn’t make it true or even coherent.

    The problem isn’t just with the traditional non-Euclidean notion of “curved space.”

    The problem is with the Euclidean conception of curves.

    Ultimately, curved space doesn’t make sense for the same reason that curved lines don’t make sense: curves don’t make sense at all!

    The reason is actually quite simple: the traditional understanding of curves presupposes the infinite divisibility of space. Euclid, along with 99.9% of everyone else, presupposes that space is a continuum—that between any two points of space, there are an infinite number of additional points. Infinities within infinities.

    If you reject the notion of infinite totalities and do not believe space is a continuum, then that means curves don’t exist. Or at least, it means that the curves that obviously do exist work differently than we’ve been told.

    I would like to hereby resurrect the notion of the geometric a priori and claim, by appealing to pure conceptual analysis, that geometry does have an underlying logical framework that we all presuppose in order to make sense of things. That framework is non-Euclidean, not because space is curved, but because space is actually discrete.

    Therefore, the traditional conceptions of “points” and “lines” and “spheres” will have to be reformulated accordingly. If you don’t yet have the finitist intuition, examine the following image of “curved space” closely enough until you see its underlying discreteness and total lack of smooth curves.

    (If necessary, move your physical eye closer to the screen until you see the pixels.)
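
    To make the pixel point concrete, here is a toy Python sketch (purely illustrative, not something from the original article): the midpoint circle algorithm, which is roughly how a screen decides which pixels to light up when asked to draw a “circle.” The output is a finite set of integer grid points; no smooth curve appears anywhere in the computation.

        # Illustrative sketch: rasterizing a "circle" of a given radius onto a pixel grid.
        # The result is a finite set of integer coordinates -- discrete points, not a smooth curve.

        def pixel_circle(radius):
            """Midpoint circle algorithm: the grid points a screen lights up for a 'circle'."""
            x, y = radius, 0
            err = 1 - radius
            pixels = set()
            while y <= x:
                # Each computed point has eight symmetric reflections about the center.
                for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                               (x, -y), (y, -x), (-x, -y), (-y, -x)]:
                    pixels.add((px, py))
                y += 1
                if err <= 0:
                    err += 2 * y + 1
                else:
                    x -= 1
                    err += 2 * (y - x) + 1
            return pixels

        print(sorted(pixel_circle(3)))  # a short, finite list of grid points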

  • Why Language Machines do not have Souls

    It’s been nine months since GPT4 was released. I’m still trying to make sense of things. There’s a dearth of level-headed analysis out there. Most people’s analysis seems to be framed by science fiction novels, or they are still using frameworks inherited from the pre-GPT world, which did not anticipate the success of LLMs. Even the engineers involved didn’t expect LLMs to be as powerful as they are. That’s a sign we need a fresh perspective.

    I categorically reject the hysterical arguments coming from sci-fi metaphysicians. I suspect even the concepts of AGI vs ASI are too grandiose and sloppy. But here’s what I can say with confidence:

    1. We are going into a world where you’ll be able to interact with machines using natural language. In most cases, it will not require computer programming skills to get the machine to do your bidding.
    2. These machines will be able to do things previously thought impossible. They will be able to effectively reason about concepts. This reasoning will be imperfect, but for many complex tasks, intellectual and otherwise, they will outperform humans.
    3. These machines will eventually be embodied and able to navigate the physical world. This navigation will also be imperfect; in some environments, they will be incompetent, and in others, they will outperform humans.

    Language, not Souls

    There’s no shortage of people ascribing souls to these machines. I think I’m starting to understand why.

    I believe LLMs are a genuine technological breakthrough that will result in numerous other breakthroughs. As usually happens, the engineers have created something that nobody understands yet, and it will take time to figure things out. Empirical breakthroughs come first, theoreticians come afterwards.

    In my mind, the breakthrough comes from a revelation about the philosophy of language. This revelation is so incredible and counter-intuitive that people find it easier to claim the machine has a soul than to update their philosophy of language. In fact, I expect the only people unsurprised by the power of LLMs are those with a soft spot for occultism of some sort—those who think words are magical. Let me explain.

    Fill in the _____

    The core of the LLM is pattern-finding and filling, the basics of which already exist in word processors. For example, a word processor can easily tell us how many times a specific word appears in a text. A computer can count the number of times the word “machine” comes up in this article. It can also tell us what words come before and after “machine.”

    We can imagine coming up with all kinds of complicated relationships for the computer to track. For example, we could say, “Find all the sentences which contain the word ‘machine,’ then record the first and last words of those sentences and dump them into an Excel spreadsheet.”

    We could also tell the computer to find probabilities for us. For example, in this article, the word “will” comes up several times and is followed by different words—”will be”, “will not”, etc. We could easily ask the computer to generate a list of words that follow “will” and show their respective probabilities.
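
    To make this concrete, here is a toy Python sketch (purely illustrative; no real word processor or LLM works in exactly this way): it counts how often a word appears, then tallies the relative frequency of each word that follows “will.”

        from collections import Counter

        text = "the machine will be useful but the machine will not be conscious"
        words = text.split()

        # How many times does "machine" appear?
        print(words.count("machine"))          # 2

        # Which words follow "will", and with what relative frequency?
        followers = Counter(b for a, b in zip(words, words[1:]) if a == "will")
        total = sum(followers.values())
        for word, count in followers.items():
            print(word, count / total)         # be 0.5, not 0.5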

    This simple ability allows the computer to complete sentences. Imagine we load a massive document into the word processor—say, every forum thread and social media post on the entire internet that discusses the Bible. We might tell the computer to find a bunch of patterns within this massive pile of words and ask it to provide the most likely word that finishes the following sentence:

    “In the beginning, God created the heavens and the _____.”

    If the computer spits out the correct answer “earth,” nobody would be surprised. It’s not magic.
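
    Here is that fill-in-the-blank step as a toy sketch (again, just an illustration; a real LLM conditions on far more than the final word): build next-word counts from a small corpus, then return the most common continuation.

        from collections import Counter, defaultdict

        corpus = ("in the beginning god created the heavens and the earth "
                  "and the earth was without form and void")
        words = corpus.split()

        # For each word, count which words follow it and how often.
        next_word_counts = defaultdict(Counter)
        for a, b in zip(words, words[1:]):
            next_word_counts[a][b] += 1

        def complete(prompt):
            """Fill in the blank with the most likely word after the prompt's last word."""
            last = prompt.split()[-1]
            candidates = next_word_counts[last]
            return candidates.most_common(1)[0][0] if candidates else "?"

        print(complete("in the beginning god created the heavens and the"))  # earth

    A real model tracks far richer patterns than single-word pairs, of course, but the basic move is the same: predict the most probable continuation.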

    Generalized Patterns

    LLMs are not like Microsoft Word, however. They are “trained” on unbelievable amounts of information—large percentages of the entire internet, which costs millions of dollars. With this much text input, LLMs become more magical when we ask them to produce novel sentences for us. Instead of asking it to complete a well-known sentence, we ask it to create something new. Poetry is one popular example. I know it’s overused, but it really does demonstrate something incredible about LLMs. So for illustration, I’ve asked GPT4 to create a ten-sentence limerick that summarizes the entire Biblical story. It responds:

    1. In Eden’s garden, all was fair and bright,
    2. Till man chose wrong, and lost the path of right.
    3. Prophets foretold a Savior pure and true,
    4. To guide the lost and make all things anew.
    5. Born in a manger, under star’s soft light,
    6. Jesus taught love, to conquer darkest night.
    7. On the cross, His life He did resign,
    8. To save all souls, a sacrifice divine.
    9. The tomb found empty, He rose to the sky,
    10. A promise of hope, for you and for I.

    How can the LLM do this, when it’s never been trained on “Biblical limericks” before? How does it understand what “rhyming” is? How does it know the general rhythmic style of a limerick? How did it know the core concepts of the Biblical story to choose? These things were not directly programmed into the machine.

    If we put metaphysics aside for a moment, we can say this is a demonstration of some form of a high-level understanding of concepts. Call it a practical understanding, or an effective understanding. The machine understood my request and gave me a highly creative and correct output.

    Poetry is one thing; scientific discussion is another. I’ve had lots of high-level philosophical and scientific discussion with GPT4, and it does a great job. It can effectively follow along with better-than-average conceptual nuance. It can effectively reason and even recognize logical fallacies. It’s not perfect, but it’s already shockingly good.

    This shouldn’t work. I have oversimplified the way these machines are trained, but the essential mechanism is correct. They really are just probability machines trying to create strings of symbols that you’ll find acceptable.

    Understanding and Ensoulment

    It is natural to think that a mere word-machine cannot truly understand the meaning of our concepts—at least, not in the way humans understand them. Understanding is very abstract, very personal. Grasping and comprehending abstractions feels like something we do within our souls. I think this is the reason people are attributing souls to LLMs—the computer seems to understand what we are saying, in a way we have only encountered with other humans before.

    Humans are, of course, people who have goals, plans, ambitions, personalities, and consciousness. So, LLMs are instinctively categorized as the same kind of entity—we’ve never encountered this ability separate from a mind before.

    But alas, LLMs are not people. We have not created Frankenstein. We don’t have any reason to believe the machine is conscious or in possession of a mind that is similar to a human’s. It appears to be exactly as designed—a word machine and nothing more.

    Instead of speculating about ensoulment, I think we need to update our philosophy of language.

    Words and Logical Structures

    It turns out, a whole bunch of stuff is encoded in our language. So much, in fact, that by tracking patterns in our language, a mindless machine can effectively reason through concepts. That is, by mimicking our language, the machine can mimic our reason.

    This ability did not come from understanding the meaning of individual words in isolation. It didn’t come from—as with a human—receiving extra-linguistic training from a parent who can point at objects to give them ostensive definitions.

    No, it came from analyzing a massive amount of words on the internet. The patterns within our language, across our sentences—the mathematical patterns of the meaningless symbols themselves—are so strong and definite, a machine is able to imbibe them and gain the ability to use natural language.

    Let me repeat: there is so much abstract structure in our language—the patterns are so overwhelmingly clear, consistent, and objective—that by mindlessly figuring out the probability of one symbol following another, a machine can effectively reason better than the average person for a large number of cases. Extraordinary.

    There’s an analogy to atoms and molecules here. Imagine each word is an atom, and each phrase a molecule. Those molecules can combine with others to form sentences, paragraphs, and other word-forms. Now imagine the computer has the ability to store, say, a trillion such word-forms for reference—a trillion identifiable and repeatable patterns and connections between words.

    That turns out to be roughly analogous to what GPT4 does, and a trillion+ patterns is apparently enough to find high-level embedded patterns of reasoning among the words.

    I can hardly imagine a more mind-blowing idea in the philosophy of language, which is why I claim only the occultists might be unsurprised. Language seems to be a sort of intermediary between the abstract world and the physical world.

    There are the underlying physical words—the ink on paper, or bits on a hard drive. Then, there are sentences—higher level patterns of words. These sentences encode higher-level patterns, concepts, forms, abstractions, and logical structures, all of which are equally objective and real—they are so real, a mindless computer can detect them and even use them for “understanding” the world.

    It’s perhaps the clearest demonstration of Platonism ever.

    There’s a lot more to say about the subject, but this article is long enough. I hope this contributes to more sane discussion about AI.

    Before LLMs, our language could only be understood by other sentient beings. We need to recognize that this is not because understanding language requires sentience in principle; we now have an empirical demonstration that a sufficiently advanced calculator can do it, too.

  • Self-Reference without Paradox

    Self-reference is the foundation of a new mystery religion. Adherents see paradoxes everywhere, even at the foundation of critical thinking—logic itself. “The Liar’s Paradox”, they say, “demonstrates that the law of non-contradiction isn’t absolute.”

    “Logic can’t really give us the truth, because something something Gödel’s incompleteness theorems.”

    Nearly all the mystical paradoxes people bring up today invoke either 1) self-reference, 2) infinity, or 3) quantum mechanics. I’ve dealt with all three before, but I want to revisit the topic of self-reference.

    Here’s a general technique for clearing up the paradoxes that are generated by self-referential sentences:

    Pay attention.

    Pay attention to your own mind and how it processes language. The magic and mystery of self-reference disappears when you take the time to observe your own mental processes. Take two examples:

    Statement (1):    This sentence is false.

    Statement (2):    This sentence has five words.

    (1) is the famous liar’s paradox, which superficially appears to be a contradiction—if it’s true, then it’s false, but if it’s false, then it’s true.

    Contradictions cannot be made sense of, and yet, the liar’s paradox seems like it should make sense. Hence, this is the most famous paradox and has been around forever.

    (2) makes straightforward sense and is “true.”

    So what’s going on here? How can one example of self-reference result in logical annihilation, while the other is trivially evaluated as true?

    Pay attention.

    Observe what your mind is doing when encountering the words.

    Resolving the Liar

    I have written about the resolution of the Liar’s paradox elsewhere, but let me summarize the argument here.

    “This sentence is false” either 1) explodes into an infinity, or 2) collapses to zero.

    It either generates an infinite regress, or it’s simply nonsense wearing a fancy suit.

    To see why, ask the question, “What sentence exactly is false?” What do the words “this sentence” refer to?

    In other words, is the claim:

    Option A: “This sentence is false” is false.

    Or simply,

    Option B: This sentence is false.

    If Option A, then it’s easy to see why it generates an infinite regress. The use of parentheses helps. The claim is:

    (This sentence is false) is false.

    How does our mind try to make sense of these words?

    Well, outside the parentheses, we are told that something is false, which means that inside the parentheses, there must be some valid truth claim to evaluate. So we look inside the parentheses and see the words “this sentence is false.”

    How do we evaluate such a claim? We again have to figure out what “this sentence” refers to. If it refers to the entire sentence—“this sentence is false”—then we are stuck generating the infinite regress:

    “((This sentence is false) is false) is false…”, and so on. It’s like trying to walk to the end of a road that keeps elongating with every step you take. It won’t work.

    The only other option is to evaluate “this sentence is false” by itself. We should first break it into two parts: (This sentence) + (is false).

    The words (is false) tell us that we’re supposed to evaluate the truth value of a preceding proposition. But (this sentence) is not a proposition. It’s merely two words: “this” and “sentence.”

    “This sentence” is not a valid truth claim. It’s essentially an undefined function; we cannot evaluate the words “this sentence” as true or false. That’s why I like to say the liar’s paradox either explodes to infinity or collapses to zero.
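
    The regress and the collapse can both be made literal in code. Here is a toy Python sketch of my own (purely illustrative): Option A becomes a function whose evaluation never bottoms out, and Option B becomes an attempt to look up a truth value that was never defined.

        # Option A: "this sentence" refers to the whole claim, so evaluating the claim
        # means evaluating the claim again -- the infinite regress, made literal.
        def liar():
            return not liar()

        try:
            liar()
        except RecursionError:
            print("Option A: evaluation never terminates (explodes to infinity)")

        # Option B: "this sentence" names no proposition, so asking whether it is false
        # is like reading a variable that was never assigned.
        propositions = {}   # no truth claim was ever bound to "this sentence"
        try:
            propositions["this sentence"]
        except KeyError:
            print("Option B: nothing to evaluate (collapses to zero)")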

    Just Make Sense

    It sounds like a good resolution, but perhaps it proves too much? Are all examples of self-reference therefore invalid?

    Of course not. We can clearly make sense of the following:

    Statement (P): This sentence has five words.

    Statement (Q): This sentence is in English.

    Statement (R): This sentence does not contain the word “paradox.”

    All three of these we can evaluate as true or false. The first two are true. The last is false. So what’s going on? Why do these not explode to infinity or collapse to zero? I propose the same answer:

    Pay attention.

    Observe what your mind is doing when encountering the words.

    When you read (P), what does your mind do? It says, “Hey, check out this set of words. It’s supposed to contain five elements. The elements are: “This” “sentence” “has” “five” “words”, which totals five, and therefore (P) is true.”

    No magic, no mystery, no explosion to infinity—the words “this sentence” refer to something definite. A proper use of self-reference.

    Now consider (Q). How does your mind process that sentence?

    It says, “Hey, check out these words. They are all supposed to be English. The words are “This” “sentence” “is” “in” “English”, which are all English words, therefore (Q) is true.”

    Clear and simple. Now let’s do (R):

    “Hey, check out this set of words. It’s not supposed to contain the word “paradox.” The words are “This” “sentence”… “paradox”. Since “paradox” is part of the set, (R) is false.”

    Clear and simple. No magic, no infinity. Just good ol’ fashioned self-reference that does not destroy the fabric of reality.
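
    These well-behaved cases can even be checked mechanically. Here is a small illustrative Python sketch (my own toy checks, nothing canonical): each statement refers to a definite, finished string of words, so the evaluation terminates.

        def has_five_words(sentence):
            return len(sentence.split()) == 5

        def is_in_english(sentence):
            # Crude stand-in for "is English": every word is in a small known vocabulary.
            english = {"this", "sentence", "is", "in", "english", "does", "not",
                       "contain", "the", "word", "paradox", "has", "five", "words"}
            return all(w.strip('."').lower() in english for w in sentence.split())

        def lacks_word(sentence, word):
            return word not in sentence.lower()

        P = "This sentence has five words."
        Q = "This sentence is in English."
        R = 'This sentence does not contain the word "paradox."'

        print(has_five_words(P))         # True  -- exactly five words
        print(is_in_english(Q))          # True  -- every word is English
        print(lacks_word(R, "paradox"))  # False -- "paradox" is among the words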

    Good Self, Bad Self

    Every case I’ve encountered of self-reference works this way. By looking at the mind, all of the paradoxical examples are resolved, and all of the sensible examples are explained. It’s similar to computer programming. Most of the time, self-referential code works fine, but sometimes, it hangs the computer in a never-ending loop. When the latter happens, we conclude, “Huh, I guess the code/coder is bad” and never “Huh, I guess that means logic is broken.”

    I suggest we approach all examples of self-reference with this common sense heuristic:

    If you can’t make sense of it, or it’s impossible for a computer to execute, or it results in a contradiction, it’s bad. There’s a bug lurking somewhere.

    If it’s possible to make sense of without contradiction, it’s good. That simple.

    Paradoxes do not reveal anything fundamental about reality; they reveal our own confusion about things. When you take the time to carefully look at the processes in your own mind—or the processes in a computer—everything can be understood, the bugs can be discovered, and apparent contradictions can be resolved.