Category: Language and Mathematics

  • Geometry as Logic

    Mathematics is an extension of logic. Every domain in mathematics can be reduced to logic. Geometry is the logic of space.

The most foundational, granular unit of any quantitative system is one bit. This is true by definition: whatever the fundamentum is, it's the bit of that system, the granular, base unit.

In geometry, the foundational unit is therefore one bit of space—the geometric atom. We will call these atoms points.

    All geometric structure can be reduced to sets of points. “Shapes,” therefore, refer to specific sets of points.

    Position and State

    We can build geometries out of two fundamental concepts: position and state.

    Position: where a point is located relative to other points.

    State: the state the point is in.

    These two concepts give us the most granular way of talking about any geometric structure. In other words, we can describe all the information by referring to “That point in that state.”

    It’s a Matrix

I don’t claim to be a mathematician, but from what I can tell, this concept is best captured by matrix theory. In the simplest model (allowing only two possible states for each point), space is essentially an array of bits.

    In more complex models, the points can be in a range of possible states, not just 0 or 1.
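The two-state model can be sketched directly. This is a minimal illustration, assuming an arbitrary 4x4 grid: every point is identified by its position, carries a state bit, and a "shape" is nothing but a specific set of points.

```python
# A toy "space" in the simplest (two-state) model: each point has a
# position and a binary state. The 4x4 grid size is an arbitrary choice.
space = [[0 for _ in range(4)] for _ in range(4)]

# A "shape" is just a specific set of points placed in a given state:
for row, col in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    space[row][col] = 1

# All the information reduces to "that point in that state":
shape = {(r, c) for r in range(4) for c in range(4) if space[r][c] == 1}
print(sorted(shape))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```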

    I know some physicists and computationally-biased mathematicians will appreciate this perspective. So let me make it more controversial. 

    My claim is this is the a priori geometry. I believe this is the way minds actually think about space, the way minds must think about space, and the only way for space to be. Since logic and existence are inseparable, in order for space to exist, it must be logical, and this is the logic of space: position and state.

    Every geometric model can be put into this framework. In fact, it’s the litmus test for geometry–if it cannot be put into this framework in principle, it’s logically incoherent.

    Computers all over the world, rejoice!

    Physics is Geometry + Time

Physics is simply keeping track of geometry. It’s observing how geometric states change over time—and therefore figuring out how this part of the matrix connects to that part of the matrix. We discover patterns of state changes over time and infer the laws of physics from them.
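The idea of "geometry plus time" can be sketched with an elementary cellular automaton: the geometry is a row of bits, and a "law of physics" is whatever rule maps the state at time t to the state at t+1. The choice of Rule 110 here is purely illustrative.

```python
# A 1-D "space" of bits evolving in time. The states are geometric
# (positions with 0/1 states); the law is the update rule, which we
# would infer by watching how states change over time.
RULE = 110  # an arbitrary illustrative choice of "law of physics"

def step(cells):
    """Apply the rule to every point, based on its local neighborhood."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right
        out.append((RULE >> pattern) & 1)
    return out

cells = [0, 0, 0, 1, 0, 0, 0]
for t in range(3):
    print(t, cells)
    cells = step(cells)
```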

    Not a Reduction of Everything

    The physical world can be reduced down to quantitative, logical analysis. That does not imply the entirety of existence can be reduced in such a way. Metaphysical pluralism allows for things like consciousness, information, and abstract stuff to exist outside of the matrix. They are in separate but related domains.

    Give it a few decades, and I expect this framework will conquer everything.

  • Why We Need to Replace Mathematics

    I was recently impressed by a series of tweets from Joscha Bach. I’ve never heard somebody use this language before.

    https://x.com/Plinz/status/1817463746031464860

    That’s a wonderful and provocative notion that accords with my own quixotic ideas. Perhaps we don’t need to reform mathematics, but replace it altogether. That sounds like fun.

    What’s wrong with math?

    If you ask a computer scientist, they might explain how there’s a large gap between theoretical math and applied math—mathematicians claim they can do things which computers cannot.

    If you ask a physicist, they might tell you about the large gap between math and physics—many objects and processes in mathematics cannot exist in the real world, even in principle.

    But if you ask me, I see this as an outcome of a deeper problem: the philosophy of mathematics is a total mess and has been for more than a century.

    I recently came across a document I had written many years ago about this subject for a friend. Instead of an article, it’s a bunch of bullet points. I thought it did a rather nice job of summarization, so I’ve shared some of the points here.


    The Situation with the Philosophy of Mathematics in the 20th Century

    • The foundational crises that happened around the turn of the 20th century have not been resolved correctly.
      • Some mixture of the three dominant schools—logicism, intuitionism, and formalism—is likely correct.
        • From logicism: mathematical truth is grounded in logic.
        • From intuitionism: mathematics is a human language which is designed to capture our own concepts.
        • From formalism: we must allow the utility of mathematics to stand apart from its truthfulness; bad math (even conceptually incoherent math) is often useful.
    • The metaphysics of mathematics has not been sorted out properly.
    • Mathematics does not speak for itself, and it often does lie.
    • The meaning and purpose of mathematical axioms has to be clarified.
    • Godel’s incompleteness theorems are overrated in their significance, and the proof might be entirely sidestepped with alternative mathematical philosophies.
      • His axiomatic framework was explicitly formalist.
      • The proof is baroque and hard to follow.
      • Who said Godel numbering is a legitimate mathematical process?
    • Georg Cantor’s idea of the transfinite was logically incoherent.
      • He ultimately justified his ideas with appeals to God and Divine Revelation
        • “One proof is based on the notion of God. First, from the highest perfection of God, we infer the possibility of the creation of the transfinite, then, from his all-grace and splendor, we infer the necessity that the creation of the transfinite in fact has happened.”
    • The concepts of infinite totality and infinite sets are logically flawed and must be extricated from the foundations of mathematics.
      • Historically speaking, these concepts are new and few mathematicians from previous centuries would have accepted them.
    • The concept of continuity is therefore logically flawed and needs to be replaced with the concept of absolute discreteness—that all appearances of continuity are actually underpinned by discrete processes.
      • Computers have finally proven that we don’t need the concept of mathematical continuity anymore. All continuity is discrete continuity—in other words, the appearance of smoothness is generated by underlying discrete processes.
    • We need discrete, logical replacements for so-called irrational, real, and transcendental numbers, and for the concepts of convergence and limits.
      • Therefore, the formal theory of calculus will need to be re-worked.
      • There will likely be a discrete translation key to rescue these concepts—e.g. computers are already able to do mathematics without utilizing any of the aforementioned concepts.
        • I expect we’ll find specific examples in computer science, though I don’t know whether we’ll find a general theory or general language yet, which is the ultimate goal.
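The bullet about computers doing mathematics without irrational numbers can be illustrated. In this sketch (the tolerance is an arbitrary choice), a machine approximates √2 to any requested precision while only ever manipulating finite, discrete objects—pairs of integers:

```python
from fractions import Fraction

def sqrt2_approx(tolerance):
    """Approximate the square root of 2 by Newton's iteration using only
    rational arithmetic. No 'actual' irrational number is ever touched:
    every intermediate value is a finite pair of integers."""
    x = Fraction(3, 2)  # initial guess
    while abs(x * x - 2) > tolerance:
        x = (x + 2 / x) / 2
    return x

approx = sqrt2_approx(Fraction(1, 10**12))
print(approx, float(approx))
```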

    I know there are others with similar intuitions. I’ve spoken with some of them on my podcast. I expect 20th century orthodoxies to be replaced within the next few decades. The momentum is too great on our side; computers have been too successful. The gap between “theoretical math” and “applied math” is too large to ignore.

    If we can’t reform, replace!

  • 4 Steps to the Immaterial

    Immaterial things are hard to understand, and taking their existence seriously has been unfashionable for centuries. Modern materialists have gotten comfortable simply defining them away or laughing them out of existence.

    But since my philosophical conversion to Platonism, I now think that immaterial stuff is way more important than material stuff—and there’s even a meaningful sense in which an immaterial world is above the physical world—but it’s not an easy argument to make.

    Rhetorically, it’s hard for a Platonist to get his foot in the door. We can’t point to immaterial things, and we can’t Science them, so… what exactly are we talking about?

    Here are four steps to intellectually grasp the immaterial:

    Step 1. Reduce the physical world down to geometry.

    Here’s what I mean: what are the essential components of the physical world? Atoms? Energy? Space?

    For our purposes, let’s say that the essential property of physical stuff is geometric—that is, spatial. Everything in the physical world happens within space or is a state of space.

    Step 2. Note that entities are related.

    The objects in space behave in particular ways. Their behavior is relational—the behavior of the electron is related to the nucleus. The behavior of one atom depends on neighboring atoms.

    Gravity is a thing.

    Matter-over-here affects matter-over-there.

    Step 3. Note that these relationships are not themselves geometric.

    What are—and where are—these “relationships”?

    Atoms are related to each other. The atoms are geometric entities, but the relationships between them are not. These relationships are most easily discovered by observing how the world changes over time.

    If we try to explain the universe as a purely geometric structure—maybe a big ol’ cube—we are left without explanation for why atoms behave the way they do. Pure geometry only gives us a static description.

    We can imagine a world which contains exactly the same number of atoms, standing in exactly the same positions, without the universe progressing the way it does. Why do atoms interact with each other at all?

    Or to put it another way: why should the mere fact that entities stand in a particular geometric relationship to one another affect how they behave?

    The natural answer is to say, “Oh, well there are laws of physics which determine how atoms behave in relation to each other.”

    This is correct, but it’s an admission of the immaterial—the laws of physics are themselves not physical. They are not composed of anything geometric. If you’re familiar with cellular automata, the rules that govern those systems are not found within the cells.

    Both the relationships among the atoms and the rules which govern their behavior are abstract, immaterial things.
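The cellular-automata point can be made concrete with Conway's Game of Life. In this sketch (the 5x5 grid and the "blinker" pattern are illustrative choices), the grid stores nothing but cell states; the rule relating neighboring cells lives outside the grid entirely:

```python
# Conway's Game of Life: the grid stores only cell states (0 or 1).
# The rule that relates neighboring cells is not stored in any cell --
# you cannot find it by inspecting the grid itself.
def life_step(grid):
    rows, cols = len(grid), len(grid[0])
    def neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and neighbors(r, c) in (2, 3))
             or (not grid[r][c] and neighbors(r, c) == 3) else 0
             for c in range(cols)] for r in range(rows)]

# A "blinker": three live cells in a row oscillate between two shapes.
blinker = [[0] * 5 for _ in range(5)]
for c in (1, 2, 3):
    blinker[2][c] = 1           # horizontal row of three live cells

after = life_step(blinker)       # the row becomes a vertical column
```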

    Step 4. Note these relationships would continue to exist without our minds.

    The final piece of the Platonic puzzle comes when accepting that these relationships would continue to exist even without our minds—that is to say, the universe would continue operating as a relational system. Physical objects wouldn’t suddenly blow apart and become ontologically-independent things, disconnected from each other.

    For many years, I thought all abstract stuff was mental. Relationships, I thought, were indeed non-physical but only exist within our minds.

    The problem with that position is that relationships exist in the world. Atoms don’t stop interacting with each other because we stop thinking about them. Even if Earth was destroyed by an asteroid, the laws of physics would still continue to bind the universe together into a connected system.

    There you have it. We’ve kicked down the materialist door and are now staring at the immaterial. The implications are vast, and there’s much more to be said.


    To reiterate in four easy steps:

    Step 1. Reduce the physical world down to geometry.*
    Step 2. Note that entities are related.
    Step 3. Note that these relationships are not themselves geometric.
    Step 4. Note these relationships would continue to exist without our minds.


    *Perhaps you object to Step 1. Fine, but please do not end up making my point by claiming, “But the physical world is more than just geometry! There’s all kind of non-geometric, non-spatial stuff happening too!” That’s another way of acknowledging immaterial aspects of the world.

  • Apriorist Geometry and Curved Space

    Countless thinkers for the past two thousand years have appealed to Euclidean geometry as an example of rock-solid reasoning. The proofs in Euclid’s Elements are beautiful deductive structures. One proof builds on the next, and by accepting the starting axioms, you are compelled to agree with the final conclusions.

    The geometric objects within Euclid have properties which can be grasped by logical reasoning alone—e.g. the reason we don’t believe that parallel lines touch is because of understanding the concept of parallel lines, not by observing and measuring them in the world. Kant famously considered this an example of synthetic a priori reasoning and wondered how it could be possible.

    Moderns have since rejected the idea of the geometric a priori, thanks to the discovery of non-Euclidean geometries in the 19th century. And thanks to Einstein in the 20th, physicists claim that our own universe is non-Euclidean—space is curved by mass, they say. That’s what gravity is all about.

    Geometry Meet Economics

    When I was researching the fundamentals of Austrian Economics and their distinctive methodology, there were often analogies drawn to Euclidean geometry—i.e. the unshakeable axioms of human action in economics are akin to the unshakeable axioms of Euclid in geometry. You don’t go out and measure whether the interior angles of a triangle add up to 180 degrees! You just know it by understanding what ‘triangle’ means!

    When I first heard these analogies, I rather liked them. All of my geometric intuitions were Euclidean, and I hadn’t heard about non-Euclidean geometries before.

    As I learned more, I discovered the history of non-Euclidean geometries. I’d speak with Austrian-types about it, but they tended to be skeptical and would ask questions like: What does it really mean for space to be curved?

    Clever mathematicians and physicists say odd things like, “You see, straight lines can sometimes be circular! It just requires space to be curved!” And my skeptical Austrian friends would roll their eyes. Physicists are just playing word games; straight lines cannot be curved by definition!

    I like this criticism. Since that time, I’ve spent a lot more time thinking about these topics and have become critical of the extreme apriorists in economics. And I don’t believe the axiomatic appeal to Euclid works. But not for the reason you might think.

    No Curved Space → No Curves At All

    The problem is not with the notion of a priori geometry. From what I can tell, mathematicians and physicists are playing word games when talking about “curved space.” I don’t think that concept makes sense. It’s useful for building practical models, but that doesn’t make it true or even coherent.

    The problem isn’t just with the traditional non-Euclidean notion of “curved space.”

    The problem is with the Euclidean conception of curves.

    Ultimately, curved space doesn’t make sense for the same reason that curved lines don’t make sense: curves don’t make sense at all!

    The reason is actually quite simple: the traditional understanding of curves presupposes the infinite divisibility of space. Euclid, along with 99.9% of everyone else, presupposes that space is a continuum—that between any two points of space, there are an infinite number of additional points. Infinities within infinities.

    If you reject the notion of infinite totalities and do not believe space is a continuum, then that means curves don’t exist. Or at least, it means that the curves that obviously do exist work differently than we’ve been told.

    I would like to hereby resurrect the notion of the geometric a priori and claim, by appealing to pure conceptual analysis, that geometry does have an underlying logical framework that we all presuppose in order to make sense of things. That framework is non-Euclidean, not because space is curved, but because space is actually discrete.

    Therefore, the traditional conceptions of “points” and “lines” and “spheres” will have to be reformulated accordingly. If you don’t yet have the finitist intuition, examine the following image of “curved space” closely enough until you see its underlying discreteness and total lack of smooth curves.

    (If necessary, move your physical eye closer to the screen until you see the pixels.)
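The pixel point can be made concrete in a few lines. In this sketch (the radius and tolerance band are illustrative choices), a "circle" is literally a finite set of integer points, with no smooth curve anywhere:

```python
# A "circle" as a finite set of discrete points: all integer pairs (x, y)
# whose squared distance from the origin lies close to r^2. The apparent
# curve is nothing but a set of positioned states.
r = 8
circle = {(x, y)
          for x in range(-r, r + 1)
          for y in range(-r, r + 1)
          if abs(x * x + y * y - r * r) <= r}  # tolerance band, width r

# Render it: stand back and it looks curved; look closely and it's points.
for y in range(r, -r - 1, -1):
    print(''.join('#' if (x, y) in circle else ' ' for x in range(-r, r + 1)))
```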

  • Why Language Machines do not have Souls

    It’s been nine months since GPT4 was released. I’m still trying to make sense of things. There’s a dearth of level-headed analysis out there. Most people’s analysis seems to be framed by science fiction novels, or they are still using frameworks inherited from the pre-GPT world, which did not anticipate the success of LLMs. Even the engineers involved didn’t expect LLMs to be as powerful as they are. That’s a sign we need a fresh perspective.

    I categorically reject the hysterical arguments coming from sci-fi metaphysicians. I suspect even the concepts of AGI vs ASI are too grandiose and sloppy. But here’s what I can say with confidence:

    1. We are going into a world where you’ll be able to interact with machines using natural language. In most cases, it will not require computer programming skills to get the machine to do your bidding.
    2. These machines will be able to do things previously thought impossible. They will be able to effectively reason about concepts. This reasoning will be imperfect, but for many complex tasks, intellectual and otherwise, they will outperform humans.
    3. These machines will eventually be embodied and able to navigate the physical world. This navigation will also be imperfect; in some environments, they will be incompetent, and in others, they will outperform humans.

    Language, not Souls

    There’s no shortage of people ascribing souls to these machines. I think I’m starting to understand why.

    I believe LLMs are a genuine technological breakthrough that will result in numerous other breakthroughs. As usually happens, the engineers have created something that nobody understands yet, and it will take time to figure things out. Empirical breakthroughs come first, theoreticians come afterwards.

    In my mind, the breakthrough comes from a revelation about the philosophy of language. This revelation is so incredible and counter-intuitive that people find it easier to claim the machine has a soul than to update their philosophy of language. In fact, I expect the only people unfazed by the power of LLMs are those with a soft spot for occultism of some sort—those who think words are magical. Let me explain.

    Fill in the _____

    The core of the LLM is pattern-finding and filling, the basics of which already exist in word processors. For example, a word processor can easily tell us how many specific words are in a text. A computer can count the number of times the word “machine” comes up in this article. It can also tell us what words come before and after “machine.”

    We can imagine coming up with all kinds of complicated relationships for the computer to track. For example, we could say, “Find all the sentences which contain the word ‘machine,’ then record the first and last words of those sentences and dump them into an Excel spreadsheet.”

    We could also tell the computer to find probabilities for us. For example, in this article, the word “will” comes up several times and is followed by different words—”will be”, “will not”, etc. We could easily ask the computer to generate a list of words that follow “will” and show their respective probabilities.

    This simple ability allows the computer to complete sentences. Imagine we load a massive document into the word processor—say, every forum thread and social media post on the entire internet that discusses the Bible. We might tell the computer to find a bunch of patterns within this massive pile of words and ask it to provide the most likely word that finishes the following sentence:

    “In the beginning, God created the heavens and the _____.”

    If the computer spits out the correct answer “earth,” nobody would be surprised. It’s not magic.
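The counting-and-completion mechanism described above can be sketched in a few lines. This is a toy next-word predictor; the tiny corpus is an illustrative stand-in for internet-scale text:

```python
from collections import Counter, defaultdict

# Count which words follow which, then complete a sentence with the
# most probable continuation -- the core mechanism, radically simplified.
corpus = (
    "in the beginning god created the heavens and the earth "
    "and the earth was without form and void"
).split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict(word):
    """Most probable word to follow `word`, with its probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict("the"))  # ('earth', 0.5) given this toy corpus
```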

    Generalized Patterns

    LLMs are not like Microsoft Word, however. They are “trained” on unbelievable amounts of information—large percentages of the entire internet, which costs millions of dollars. With this much text input, LLMs become more magical when we ask them to produce novel sentences for us. Instead of asking it to complete a well-known sentence, we ask it to create something new. Poetry is one popular example. I know it’s overused, but it really does demonstrate something incredible about LLMs. So for illustration, I’ve asked GPT4 to create a ten-sentence limerick that summarizes the entire Biblical story. It responds:

    1. In Eden’s garden, all was fair and bright,
    2. Till man chose wrong, and lost the path of right.
    3. Prophets foretold a Savior pure and true,
    4. To guide the lost and make all things anew.
    5. Born in a manger, under star’s soft light,
    6. Jesus taught love, to conquer darkest night.
    7. On the cross, His life He did resign,
    8. To save all souls, a sacrifice divine.
    9. The tomb found empty, He rose to the sky,
    10. A promise of hope, for you and for I.

    How can the LLM do this, when it’s never been trained on “Biblical limericks” before? How does it understand what “rhyming” is? How does it know the general rhythmic style of a limerick? How did it know the core concepts of the Biblical story to choose? These things were not directly programmed into the machine.

    If we put metaphysics aside for a moment, we can say this is a demonstration of some form of a high-level understanding of concepts. Call it a practical understanding, or an effective understanding. The machine understood my request and gave me a highly creative and correct output.

    Poetry is one thing; scientific discussion is another. I’ve had lots of high-level philosophical and scientific discussion with GPT4, and it does a great job. It can effectively follow along with better-than-average conceptual nuance. It can effectively reason and even recognize logical fallacies. It’s not perfect, but it’s already shockingly good.

    This shouldn’t work. I have oversimplified the way these machines are trained, but the essential mechanism is correct. They really are just probability machines trying to create strings of symbols that you’ll find acceptable.

    Understanding and Ensoulment

    It is natural to think that a mere word-machine cannot truly understand the meaning of our concepts—at least, not in the way humans understand them. Understanding is very abstract, very personal. Grasping and comprehending abstractions feels like something we do within our souls. I think this is the reason people are attributing souls to LLMs—the computer seems to understand what we are saying, in a way we have only encountered with other humans before.

    Humans are, of course, people who have goals, plans, ambitions, personalities, and consciousness. So, LLMs are instinctively categorized as the same kind of entity—we’ve never encountered this ability separate from a mind before.

    But alas, LLMs are not people. We have not created Frankenstein. We don’t have any reason to believe the machine is conscious or in possession of a mind that is similar to a human’s. It appears to be exactly as designed—a word machine and nothing more.

    Instead of speculating about ensoulment, I think we need to update our philosophy of language.

    Words and Logical Structures

    It turns out, a whole bunch of stuff is encoded in our language. So much, in fact, that by tracking patterns in our language, a mindless machine can effectively reason through concepts. That is, by mimicking our language, the machine can mimic our reason.

    This ability did not come from understanding the meaning of individual words in isolation. It didn’t come from—as with a human—receiving extra-linguistic training from a parent who can point at objects to give them ostensive definitions.

    No, it came from analyzing a massive amount of words on the internet. The patterns within our language, across our sentences—the mathematical patterns of the meaningless symbols themselves—are so strong and definite, a machine is able to imbibe them and gain the ability to use natural language.

    Let me repeat: there is so much abstract structure in our language—the patterns are so overwhelmingly clear, consistent, and objective—that by mindlessly figuring out the probability of one symbol following another, a machine can effectively reason better than the average person for a large number of cases. Extraordinary.

    There’s an analogy to atoms and molecules here. Imagine each word is an atom, and each phrase a molecule. Those molecules can combine with others to form sentences, paragraphs, and other word-forms. Now imagine the computer has the ability to store, say, a trillion such word-forms for reference—a trillion identifiable and repeatable patterns and connections between words.

    That turns out to be roughly analogous to what GPT4 does, and a trillion+ patterns is apparently enough to find high-level embedded patterns of reasoning among the words.

    I can hardly imagine a more mind-blowing idea in the philosophy of language, which is why I claim only the occultists might be unsurprised. Language seems to be a sort of intermediary between the abstract world and the physical world.

    There are the underlying physical words—the ink on paper, or bits on a hard drive. Then, there are sentences—higher level patterns of words. These sentences encode higher-level patterns, concepts, forms, abstractions, and logical structures, all of which are equally objective and real—they are so real, a mindless computer can detect them and even use them for “understanding” the world.

    It’s perhaps the clearest demonstration of Platonism ever.

    There’s a lot more to say about the subject, but this article is long enough. I hope this contributes to more sane discussion about AI.

    Before LLMs, our language could only be understood by other sentient beings. We need to recognize that this was not because understanding language requires sentience in principle; we now have an empirical demonstration that a sufficiently advanced calculator can do it, too.

  • Self-Reference without Paradox

    Self-reference is the foundation of a new mystery religion. Adherents see paradoxes everywhere, even at the foundation of critical thinking—logic itself. “The Liar’s Paradox”, they say, “demonstrates that the law of non-contradiction isn’t absolute.”

    “Logic can’t really give us the truth, because something something Godel’s incompleteness theorems.”

    Nearly all the mystical paradoxes people bring up today invoke either 1) self-reference, 2) infinity, or 3) quantum mechanics. I’ve dealt with all three before, but I want to revisit the topic of self-reference.

    Here’s a general technique for clearing up the paradoxes that are generated by self-referential sentences:

    Pay attention.

    Pay attention to your own mind and how it processes language. The magic and mystery of self-reference disappears when you take the time to observe your own mental processes. Take two examples:

    Statement (1):    This sentence is false.

    Statement (2):    This sentence has five words.

    (1) is the famous liar’s paradox, which superficially appears to be a contradiction—if it’s true, then it’s false, but if it’s false, then it’s true.

    Contradictions cannot be made sense of, and yet, the liar’s paradox seems like it should make sense. Hence, this is the most famous paradox and has been around forever.

    (2) makes straightforward sense and is “true.”

    So what’s going on here? How can one example of self-reference result in logical annihilation, while the other is trivially evaluated as true?

    Pay attention.

    Observe what your mind is doing when encountering the words.

    Resolving the Liar

    I have written about the resolution of the Liar’s paradox elsewhere, but let me summarize the argument here.

    “This sentence is false” either 1) explodes into an infinity, or 2) collapses to zero.

    It either generates an infinite regress, or it’s simply nonsense wearing a fancy suit.

    To see why, ask the question, “What sentence exactly is false?” What do the words “this sentence” refer to?

    In other words, is the claim:

    Option A: “This sentence is false” is false.

    Or simply,

    Option B: This sentence is false.

    If Option A, then it’s easy to see why it generates an infinite regress. The use of parentheses helps. The claim is:

    (This sentence is false) is false.

    How does our mind try to make sense of these words?

    Well, outside the parentheses, we are told that something is false, which means that inside the parentheses, there must be some valid truth claim to evaluate. So we look inside the parentheses and see the words “this sentence is false.”

    How do we evaluate such a claim? We again have to figure out what “this sentence” refers to. If it refers to the entire sentence—“this sentence is false”—then we are stuck generating the infinite regress:

    “((This sentence is false) is false) is false…”, and so on. It’s like trying to walk to the end of a road that keeps elongating with every step you take. It won’t work.

    The only other option is to evaluate “this sentence is false” by itself. We should first break it into two parts: (This sentence) + (is false).

    The words (is false) tell us that we’re supposed to evaluate the truth value of a preceding proposition. But (this sentence) is not a proposition. It’s merely two words: “this” and “sentence.”

    “This sentence” is not a valid truth claim. It’s essentially an undefined function; we cannot evaluate the words “this sentence” as true or false. That’s why I like to say the liar’s paradox either explodes to infinity or collapses to zero.
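The "explodes to infinity" option can be demonstrated mechanically. In this sketch, a naive evaluator that substitutes the whole sentence for "this sentence" never terminates; it hits the recursion limit, the computational analogue of the infinite regress:

```python
def evaluate_liar():
    """Naive evaluation of 'This sentence is false': to judge the claim,
    we must first evaluate what 'this sentence' refers to -- which is
    the whole claim over again (Option A from the text, mechanized)."""
    inner = evaluate_liar()  # ((this sentence is false) is false) is false...
    return not inner

try:
    evaluate_liar()
except RecursionError:
    print("infinite regress: the evaluation never bottoms out")
```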

    Just Make Sense

    It sounds like a good resolution, but perhaps it proves too much? Are all examples of self-reference therefore invalid?

    Of course not. We can clearly make sense of the following:

    Statement (P): This sentence has five words.

    Statement (Q): This sentence is in English.

    Statement (R): This sentence does not contain the word “paradox.”

    All three of these we can evaluate as true or false. The first two are true. The last is false. So what’s going on? Why do these not explode to infinity or collapse to zero? I propose the same answer:

    Pay attention.

    Observe what your mind is doing when encountering the words.

    When you read (P), what does your mind do? It says, “Hey, check out this set of words. It’s supposed to contain five elements. The elements are: “This” “sentence” “has” “five” “words”, which totals five, and therefore (P) is true.”

    No magic, no mystery, no explosion to infinity—the words “this sentence” refer to something definite. A proper use of self-reference.

    Now consider (Q). How does your mind process that sentence?

    It says, “Hey, check out these words. They are all supposed to be English. The words are “This” “sentence” “is” “in” “English”, which are all English words, therefore (Q) is true.”

    Clear and simple. Now let’s do (R):

    “Hey, check out this set of words. It’s not supposed to contain the word “paradox.” The words are “This” “sentence”… “paradox”. Since “paradox” is part of the set, (R) is false.”

    Clear and simple. No magic, no infinity. Just good ol’ fashioned self-reference that does not destroy the fabric of reality.
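The three sensible statements can be checked mechanically, just as the text describes the mind doing. In this sketch, each check terminates because "this sentence" refers to a definite, inspectable string (the "is in English" test is a crude stand-in, checking only that every word is ASCII):

```python
# Mechanical versions of the mind's checks for (P), (Q), (R): in each
# case "this sentence" denotes a concrete string we can inspect, so the
# evaluation terminates with a definite truth value.
P = "This sentence has five words."
Q = "This sentence is in English."
R = 'This sentence does not contain the word "paradox."'

p_true = len(P.split()) == 5                              # count the words
q_true = all(w.strip('."').isascii() for w in Q.split())  # crude stand-in
r_true = "paradox" not in R                               # look for the word

print(p_true, q_true, r_true)  # True True False
```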

    Good Self, Bad Self

    Every case I’ve encountered of self-reference works this way. By looking at the mind, all of the paradoxical examples are resolved, and all of the sensible examples are explained. It’s similar to computer programming. Most of the time, self-referential code works fine, but sometimes, it hangs the computer in a never-ending loop. When the latter happens, we conclude, “Huh, I guess the code/coder is bad” and never “Huh, I guess that means logic is broken.”

    I suggest we approach all examples of self-reference with this common sense heuristic:

    If you can’t make sense of it, or it’s impossible for a computer to execute, or it results in a contradiction, it’s bad. There’s a bug lurking somewhere.

    If it’s possible to make sense of without contradiction, it’s good. That simple.

    Paradoxes do not reveal anything fundamental about reality; they reveal our own confusion about things. When you take the time to carefully look at the processes in your own mind—or the processes in a computer—everything can be understood, the bugs can be discovered, and apparent contradictions can be resolved.