Category: Logic and Epistemology

  • Geometry as Logic

    Mathematics is an extension of logic. Every domain in mathematics can be reduced to logic. Geometry is the logic of space.

    The most foundational, granular unit of any quantitative system is one bit. This is true by definition–whatever the fundamentum is, it’s the bit of that system. The granular, base unit.

    In geometry, the foundational unit is therefore one bit of space–the geometric atom. We will call these atoms points.

    All geometric structure can be reduced to sets of points. “Shapes,” therefore, refer to specific sets of points.

    Position and State

    We can build geometries out of two fundamental concepts: position and state.

    Position: where a point is located relative to other points.

    State: the state the point is in.

    These two concepts give us the most granular way of talking about any geometric structure. In other words, we can describe all the information by referring to “That point in that state.”

    It’s a Matrix

    I don’t claim to be a mathematician, but from what I can tell, this concept is best captured by matrix theory. In the simplest model (allowing only two possible states for each point), space is essentially an array of bits.

    In more complex models, the points can be in a range of possible states, not just 0 or 1.
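    To make the picture concrete, here’s a minimal sketch in Python – my own illustration, not a formal mathematical model – of space as an array of points, each identified by its position and carrying a state:

    ```python
    # A toy "space": each point is identified by its position (row, col)
    # and carries a state. In the simplest model, the state is one bit.
    space = [
        [0, 1, 0],
        [1, 1, 0],
        [0, 0, 1],
    ]

    # "That point in that state": the point at position (1, 2) is in state 0.
    row, col = 1, 2
    print(f"Point {(row, col)} has state {space[row][col]}")

    # A richer model: each point's state is drawn from a larger range
    # of possible states, not just 0 or 1.
    NUM_STATES = 256
    richer_space = [[17, 204], [3, 90]]
    assert all(0 <= s < NUM_STATES for r in richer_space for s in r)
    ```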

    I know some physicists and computationally-biased mathematicians will appreciate this perspective. So let me make it more controversial. 

    My claim is this is the a priori geometry. I believe this is the way minds actually think about space, the way minds must think about space, and the only way for space to be. Since logic and existence are inseparable, in order for space to exist, it must be logical, and this is the logic of space: position and state.

    Every geometric model can be put into this framework. In fact, it’s the litmus test for geometry–if it cannot be put into this framework in principle, it’s logically incoherent.

    Computers all over the world, rejoice!

    Physics is Geometry + Time

    Physics is simply keeping track of geometry. It’s observing how geometric states change over time–and therefore figuring out how this part of the matrix connects to that part of the matrix. We discover patterns of state changes and infer the laws of physics from them.
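    As a toy illustration of “geometry + time” – again just a sketch, with an update rule invented purely for demonstration – we can apply a local rule to a grid of states at each time step and watch the patterns of change:

    ```python
    # One "tick" of toy physics: each point's next state depends on the
    # current states of its neighbors. The rule is made up for illustration:
    # a point turns on iff exactly one of its two neighbors is on.
    def step(grid):
        n = len(grid)
        return [1 if grid[i - 1] + grid[(i + 1) % n] == 1 else 0
                for i in range(n)]

    grid = [0, 1, 0, 0, 1, 1, 0, 0]  # a 1-D "space" of bits
    for t in range(4):
        print(f"t={t}: {grid}")
        grid = step(grid)  # observe how states change over time
    ```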

    Not a Reduction of Everything

    The physical world can be reduced down to quantitative, logical analysis. That does not imply the entirety of existence can be reduced in such a way. Metaphysical pluralism allows for things like consciousness, information, and abstract stuff to exist outside of the matrix. They are in separate but related domains.

    Give it a few decades, and I expect this framework will conquer everything.

  • Self-Reference without Paradox

    Self-reference is the foundation of a new mystery religion. Adherents see paradoxes everywhere, even at the foundation of critical thinking—logic itself. “The Liar’s Paradox”, they say, “demonstrates that the law of non-contradiction isn’t absolute.”

    “Logic can’t really give us the truth, because something something Godel’s incompleteness theorems.”

    Nearly all the mystical paradoxes people bring up today invoke either 1) self-reference, 2) infinity, or 3) quantum mechanics. I’ve dealt with all three before, but I want to revisit the topic of self-reference.

    Here’s a general technique for clearing up the paradoxes that are generated by self-referential sentences:

    Pay attention.

    Pay attention to your own mind and how it processes language. The magic and mystery of self-reference disappears when you take the time to observe your own mental processes. Take two examples:

    Statement (1):    This sentence is false.

    Statement (2):    This sentence has five words.

    (1) is the famous liar’s paradox, which superficially appears to be a contradiction—if it’s true, then it’s false, but if it’s false, then it’s true.

    Contradictions cannot be made sense of, and yet, the liar’s paradox seems like it should make sense. Hence, this is the most famous paradox, and it has been around since antiquity.

    (2) makes straightforward sense and is “true.”

    So what’s going on here? How can one example of self-reference result in logical annihilation, while the other is trivially evaluated as true?

    Pay attention.

    Observe what your mind is doing when encountering the words.

    Resolving the Liar

    I have written about the resolution of the Liar’s paradox elsewhere, but let me summarize the argument here.

    “This sentence is false” either 1) explodes into an infinity, or 2) collapses to zero.

    It either generates an infinite regress, or it’s simply nonsense wearing a fancy suit.

    To see why, ask the question, “What sentence exactly is false?” What do the words “this sentence” refer to?

    In other words, is the claim:

    Option A: “This sentence is false” is false.

    Or simply,

    Option B: This sentence is false.

    If Option A, then it’s easy to see why it generates an infinite regress. The use of parentheses helps. The claim is:

    (This sentence is false) is false.

    How does our mind try to make sense of these words?

    Well, outside the parentheses, we are told that something is false, which means that inside the parentheses, there must be some valid truth claim to evaluate. So we look inside the parentheses and see the words “this sentence is false.”

    How do we evaluate such a claim? We again have to figure out what “this sentence” refers to. If it refers to the entire sentence—“this sentence is false”—then we are stuck generating the infinite regress:

    “((This sentence is false) is false) is false…”, and so on. It’s like trying to walk to the end of a road that keeps elongating with every step you take. It won’t work.

    The only other option is to evaluate “this sentence is false” by itself. We should first break it into two parts: (This sentence) + (is false).

    The words (is false) tell us that we’re supposed to evaluate the truth value of a preceding proposition. But (this sentence) is not a proposition. It’s merely two words: “this” and “sentence.”

    “This sentence” is not a valid truth claim. It’s essentially an undefined function; we cannot evaluate the words “this sentence” as true or false. That’s why I like to say the liar’s paradox either explodes to infinity or collapses to zero.
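    For the computationally inclined, the two failure modes can be sketched in Python – the analogy is mine, not a formal proof:

    ```python
    # Reading "this sentence" as the whole claim produces an evaluator
    # that must evaluate itself before returning -- the infinite regress.
    def liar():
        return not liar()  # ((this is false) is false) is false ...

    # liar()  # uncommenting raises RecursionError: the regress never ends

    # The other reading: "is false" demands a proposition to evaluate,
    # but "this sentence" alone carries no truth claim -- like asking
    # for the truth value of a name that was never defined.
    # print(not this_sentence)  # NameError: nothing to evaluate
    ```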

    Just Make Sense

    It sounds like a good resolution, but perhaps it proves too much? Are all examples of self-reference therefore invalid?

    Of course not. We can clearly make sense of the following:

    Statement (P): This sentence has five words.

    Statement (Q): This sentence is in English.

    Statement (R): This sentence does not contain the word “paradox.”

    All three of these we can evaluate as true or false. The first two are true. The last is false. So what’s going on? Why do these not explode to infinity or collapse to zero? I propose the same answer:

    Pay attention.

    Observe what your mind is doing when encountering the words.

    When you read (P), what does your mind do? It says, “Hey, check out this set of words. It’s supposed to contain five elements. The elements are: “This” “sentence” “has” “five” “words”, which totals five, and therefore (P) is true.”

    No magic, no mystery, no explosion to infinity—the words “this sentence” refer to something definite. A proper use of self-reference.

    Now consider (Q). How does your mind process that sentence?

    It says, “Hey, check out these words. They are all supposed to be English. The words are “This” “sentence” “is” “in” “English”, which are all English words, therefore (Q) is true.”

    Clear and simple. Now let’s do (R):

    “Hey, check out this set of words. It’s not supposed to contain the word “paradox.” The words are “This” “sentence”… “paradox”. Since “paradox” is part of the set, (R) is false.”

    Clear and simple. No magic, no infinity. Just good ol’ fashioned self-reference that does not destroy the fabric of reality.
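    The well-behaved cases can even be checked mechanically. Here’s a sketch in Python of the same evaluations the mind performs, treating “this sentence” as the definite string of words it refers to:

    ```python
    P = "This sentence has five words"
    Q = "This sentence is in English"
    R = 'This sentence does not contain the word "paradox"'

    # (P): break the sentence into its words and count them.
    print(len(P.split()) == 5)         # True

    # (R): check whether the word "paradox" appears among the words.
    print("paradox" not in R.lower())  # False -- the word is right there

    # (Q) follows the same pattern, though checking "is English" would
    # require a dictionary lookup rather than a one-line test.
    ```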

    Good Self, Bad Self

    Every case I’ve encountered of self-reference works this way. By looking at the mind, all of the paradoxical examples are resolved, and all of the sensible examples are explained. It’s similar to computer programming. Most of the time, self-referential code works fine, but sometimes, it hangs the computer in a never-ending loop. When the latter happens, we conclude, “Huh, I guess the code/coder is bad” and never “Huh, I guess that means logic is broken.”

    I suggest we approach all examples of self-reference with this common sense heuristic:

    If you can’t make sense of it, or it’s impossible for a computer to execute, or it results in a contradiction, it’s bad. There’s a bug lurking somewhere.

    If it’s possible to make sense of without contradiction, it’s good. That simple.

    Paradoxes do not reveal anything fundamental about reality; they reveal our own confusion about things. When you take the time to carefully look at the processes in your own mind—or the processes in a computer—everything can be understood, the bugs can be discovered, and apparent contradictions can be resolved.

  • Theoretical Pluralism and Chess

    I am drawn to a weak form of religious pluralism. 

    The strong form says, “All paths lead to the same God. All approaches are equally valid. There is no right or wrong way.”

    The opposite extreme says, “There is exactly one denomination of one religion which is the correct path to God. All other paths are heretical and wrong.”

    I think the truth is somewhere in the middle. Many paths do lead to the same God; many approaches work, but the different paths are not equally true or sophisticated. Some paths you can sprint down; other paths are dangerous and will take you in the opposite direction of truth.

    In other words, there are indeed many ways to skin a cat. But there are many more ways to do it wrong. Chess provides us with a great analogy.


    There are different schools of thought in chess, different theoretical perspectives. Around a century ago, there was a notable tension between the classical (or “modern”) school and the hypermodern school. On a few critical points, the moderns and the hypermoderns had radically different perspectives.

    Perhaps the key disagreement was this: how should you exert influence over the center of the board in the opening? The classical approach says that you should occupy the center with your pawns. Here’s an example from the Queen’s Gambit Declined (1. d4 d5 2. c4 e6), where Black has responded to 1. d4 with 1… d5, putting his own pawn right in the center.

    The hypermoderns took a different approach. They said the center can be influenced indirectly, from the flanks and from distant pieces. They even invited their opponent to occupy the center, so they could undermine his pawn structure later. Here’s an example of the same first move from White, but a very different approach from Black in the Queen’s Indian Defense (1. d4 Nf6 2. c4 e6 3. Nf3 b6):

    Notice that Black has no pawns in the center and is instead exerting influence with his knight and fianchettoed bishop. These are two very different approaches. So who is right?

    The answer is: it depends. There is truth in both perspectives. Even though the principles are opposite of one another, they both work when skillfully executed. Right now, it looks like the most advanced chess theory is some hybrid of modern and hypermodern (perhaps the “hyper-hyper-modern”?). Both modern and hypermodern openings are still used today at the highest levels.


    Returning to the question of pluralism. It would be an obvious mistake to conclude, “Since the modern and hypermodern openings work, there is no truth in chess! All openings are equally good and valid!”

    Just like it would be a mistake to conclude, “There is only one opening that works. All other openings are wrong and lead their users to hell.”

    The truth is somewhere in the middle. Many different openings work. In fact, with the usage of AI in chess, openings that until recently were considered unsound are being resurrected.

    Different chess principles and theories can also work, even if they are directly opposed to each other. And yet, despite this degree of pluralism, there are still obviously superior and inferior chess moves. There are openings with objectively higher levels of success than others.

    There really is truth to be discovered in chess. These truths might eventually be synthesized into the One True Theory, which explains the nuances of why the classical and hypermodern openings work and when they don’t. Some day, we might even have powerful enough technology to solve chess and tell us, once and for all, whether White can force mate from the opening, or whether perfect play ends in a draw.

    But chances are, for the foreseeable future, we are not going to have the One True Chess Theory. We’re instead stuck with lesser theories which vary in their level of sophistication.

    So it is, I claim, with philosophical, religious, and scientific theories. A weak form of pluralism is the right approach to capture the most truths.

  • Our Present Dark Age, Part 1

    For the last fifteen years, I’ve been researching a wide range of subjects. Full-time for the last seven years. I’ve traveled the world to interview intellectuals for my podcast, but most of my research has been in private. After careful examination, I have come to the conclusion that we’ve been living in a dark age since at least the early 20th century. 

    Our present dark age encompasses all domains, from philosophy to political theory, to biology, statistics, psychology, medicine, physics, and even the sacred domain of mathematics. Low-quality ideas have become common knowledge, situated within fuzzy paradigms. Innumerable ideas which are assumed to be rigorous are often embarrassingly wrong and utilize concepts that an intelligent teenager could recognize as dubious. For example, the Copenhagen interpretation in physics is not only wrong, it’s aggressively irrational—enough to damn its supporters throughout the 20th century.

    Whether it’s the Copenhagen interpretation, Cantor’s diagonal argument, or modern medical practices, the story looks the same: shockingly bad ideas become orthodoxy, and once established, the social and psychological costs of questioning the orthodoxy are sufficiently high to dissuade most people from re-examination.

    This article is the first of an indefinite series that will examine the breadth and depth of our present dark age.  For years, I have been planning on writing a book on this topic, but the more I study, the more examples I find. The scandals have become a never-ending list. So, rather than indefinitely accumulate more information, I’ve decided to start writing now.

    Darkness Everywhere

    By a “dark age”, I do not mean that all modern beliefs are false. The earth is indeed round. Instead, I mean that all of our structures of knowledge are plagued by errors, at all levels, from the trivial to the profound, from the peripheral to the fundamental. Nothing that you’ve been taught can be believed because you were taught it. Nothing can be believed because others believe it. No idea is trustworthy because it’s written in a textbook.

    The process that results in the production of knowledge in textbooks is flawed, because the methodology employed by intellectuals is not sufficiently rigorous to generate high-quality ideas. The epistemic standards of the 20th century were not high enough to overcome social, psychological, and political entropy. Our academy has failed. 

    At present, I have more than sixty-five specific examples that vary in complexity. Some ideas, like the Copenhagen interpretation, have entire books written about them, and researchers could spend decades understanding their full history and significance. The global reaction to COVID-19 is another example that will be written about for centuries. Other ideas, like specific medical practices, are less complex, though the level of error still suggests a dark age. 

    Of course, I cannot claim this is true in literally every domain, since I have not researched every domain. However, my studies have been quite broad, and the patterns are undeniable. Now when I research a new field, I am able to accurately predict where the scandalous assumptions lie within a short period of time, due to recognizable patterns of argument and predictable social dynamics. 

    Occasionally, I will find a scholar that has done enough critical thinking and historical research to discover that the ideas he was taught in school are wrong. Usually, these people end up thinking they have discovered uniquely scandalous errors in the history of science. The rogue medical researcher that examines the origins of the lipid hypothesis, or the mathematician that wonders about set theory, or the biologist that investigates fundamental problems with lab rats—they’ll discover critical errors in their discipline but think they are isolated events. I’m sorry to say, they are not isolated events. They are the norm, no matter how basic the conceptual error.

    Despite the ubiquity of our dark age, there have been bright spots. The progress of engineers cannot be denied, though it’s a mistake to conflate the progress of scientists with the progress of engineers. There have been high-quality dissenters. Despite being dismissed as crackpots and crazies by their contemporaries, their arguments are often superior to the orthodoxies they criticize, and I suspect history will be kind to these skeptics. 

    Due to recent events and the proliferation of alternative information channels, I believe we are exiting the dark age into a new Renaissance. Eventually, enough individuals will realize the severity of the problems with existing orthodoxies and the systemic problems with the academy, and they will embark on their own intellectual adventures. The internet has made possible a new life of the mind, and it’s unleashing pent-up intellectual energies around the world that will bring illumination to our present situation, in addition to creating the new paradigms that we desperately need.

    Why Did This Happen?

    It will take years to go through all of the examples, but before examining the specifics, it’s helpful to see the big picture. Here’s my best explanation for why we ended up in a dark age, summarized into six points:

    1. Intellectuals have greatly underestimated the complexity of the world.

    The success of early science gave us false hope that the world is simple. Laboratory experiments are great for identifying simple structures and relationships, but they aren’t great for describing the world outside of the laboratory. Modern intellectuals are too zoomed-in in their analyses and theories. They do not see how interconnected the world is nor how many domains one has to research in order to gain competence. For example, you simply cannot have a rigorous understanding of political theory without studying economics. Nor can you understand physics without thinking about philosophy. Yet, almost nobody has interdisciplinary knowledge or skill.  

    Even within a single domain like medicine, competence requires a broad exposure to concepts. Being too-zoomed-in has resulted in a bunch of medical professionals that don’t understand basic nutrition, immunologists that know nothing of virology, surgeons that unnecessarily remove organs, dentists that poison their patients, and doctors that prolong injury by prescribing anti-inflammatory drugs and harm their patients through frivolous antibiotic usage. The medical establishment has greatly underestimated the complexity of biological systems, and due to this oversimplification, they yank levers that end up causing more harm than good. The same is true for the economists and politicians who believe they can centrally plan economies. They greatly underestimate the complexity of economic systems and end up causing more harm than good. That’s the standard pattern across all disciplines.

    2. Specialization has made people stupid.

    Modern specialization has become so extreme that it’s akin to a mental handicap. Contemporary minds are only able to think about a couple of variables at the same time and do not entertain variables outside of their domain of training. While this myopia works, and is even encouraged, within the academy, it doesn’t work for understanding the real world. The world does not respect our intellectual divisions of labor, and ideas do not stay confined to their taxonomies. 

    A competent political theorist must have a good model of human psychology. A competent psychologist must be comfortable with philosophy. Philosophers, if they want to understand the broader world, must grasp economic principles. And so on. The complexity of the world makes it impossible for specialized knowledge to be sufficient to build accurate models of reality. We need both special and general knowledge across a multitude of domains.

    When encountering fundamental concepts and assumptions within their own discipline, specialists will often outsource their thinking altogether and say things like “Those kinds of questions are for the philosophers.” They are content leaving the most important concepts to be handled by other people. Unfortunately, since competent philosophers are almost nowhere to be found, the most essential concepts are rarely examined with scrutiny. So, the specialist ends up with ideas that are often inferior to those of the uneducated, since uneducated folks tend to have more generalist models of the world.

    Specialization fractures knowledge into many different pieces, and in our present dark age, almost nobody has tried to put the pieces back together. Contrary to popular opinion, it does not take specialized knowledge or training to comment on the big picture or see conceptual errors within a discipline. In fact, a lack of training can be an advantage for seeing things from a fresh perspective. The greatest blindspots of specialists are caused by the uniformity of their formal education.

    The balance between generalists and specialists is mirrored by the balance between experimenters and theorists. The 20th century had an enormous lack of competent theorists, who are often considered unnecessary or “too philosophical.” Theorists, like generalists, are able to synthesize knowledge into a coherent picture and are absolutely essential for putting fractured pieces of knowledge back together.

    3. The lack of conceptual clarity in mathematics and physics has caused a lack of conceptual clarity everywhere else. These disciplines underwent foundational crises in the early 20th century that were not resolved correctly.

    The world of ideas is hierarchical; some ideas are categorically more important than others. The industry of ideas is also hierarchical; some intellectuals are categorically more important than others. In our contemporary paradigm, mathematics and physics are considered the most important domains, and mathematicians and physicists are considered the most intelligent thinkers. Therefore, when these disciplines underwent foundational crises, it had a devastating effect upon the entire world of ideas. The foundational notion of a knowable reality came into serious doubt.

    In physics, the Copenhagen interpretation claimed that there is no world outside of observation—that it doesn’t even make sense to talk about reality-in-some-state separate from our observations. When the philosophers disagreed, their word was pitted against the word of physicists. In the academic hierarchy, physicists occupy a higher spot than philosophers, so it became fashionable to deny the existence of independent reality. More importantly, within the minds of intellectuals, even if they naively believe in the existence of a measurement-independent world, upon hearing that prestigious physicists disagree, most people end up conforming to the ideas of physicists who they believe are more intelligent than themselves. 

    In mathematics, the discovery of non-Euclidean geometries undermined a foundation that was built upon for two thousand years. Euclid was often assumed to be a priori true, despite the high-quality criticisms leveled at Euclid for thousands of years. If Euclid is not the rock-solid foundation of mathematics, what is? In the early 1900s, some people claimed the foundation was logic (and they were correct). Others claimed there is no foundation at all or that mathematics is meaningless because it’s merely the manipulation of symbols according to arbitrary rules.

    David Hilbert was a German mathematician that tried to unify all of mathematics under a finite set of axioms. According to the orthodox story, Kurt Godel showed in his famous incompleteness theorems that such a project was impossible. Worse than impossible, actually. He supposedly showed that any attempt to formalize mathematics within an axiomatic system would either be incomplete (meaning some mathematical truths cannot be proven), or if complete, the system becomes inconsistent (meaning it contains a logical contradiction). The impact of these theorems cannot be overstated, both within mathematics and outside of it. Intellectuals have been abusing Godel’s theorems for a century, invoking them to make all kinds of anti-rational arguments. Inescapable contradictions in mathematics would indeed be devastating, because after all, if you cannot have conceptual clarity and certainty in mathematics, what hope is there for other disciplines?

    Due to the importance of physics and mathematics, and the influence of physicists and mathematicians, the epistemic standards of the 20th century were severely damaged by these foundational crises. The rise of logical positivism, relativism, and even scientism can be connected to these irrationalist paradigms, which often serve as justification for abandoning the notion of truth altogether. 

    4. The methods of scientific inquiry have been conflated with the processes of academia.

    What is science? In our current paradigm, science is what scientists do. Science is what trained people in lab coats do at universities according to established practices. Science is what’s published in scientific journals after going through the formal peer review process. Good science is whatever wins the awards that the scientific establishment gives out. In other words, science is now equivalent to the rituals of academia.

    Real empirical inquiry has been replaced by conformity to bureaucratic procedures. If a scientific paper has checked off all the boxes of academic formalism, it is considered true science, regardless of the intellectual quality of the paper. Real peer review has been replaced by formal peer review—a religious ritual that is supposed to improve the quality of academic literature, despite all evidence to the contrary. The academic publishing system has obviously become dominated by petty and capricious gatekeepers. With the invention of the internet, it’s probably unnecessary altogether.

    “Following standard scientific procedure” sounds great unless it’s revealed that the procedures are mistaken. “Peer review” sounds great, unless your peers are incompetent. Upon careful review of many different disciplines, the scientific record demonstrates that “standard practice” is indeed insufficient to yield reliable knowledge, and chances are, your scientific peers are actually incompetent.

    5. Academia has been corrupted by government and corporate funding.

    Over the 20th century, the amount of money flowing into academia has exploded and degraded the quality of the institution. Academics are incentivized to spend their time chasing government grants rather than researching. The institutional hierarchy has been skewed to favor the best grant-winners rather than the best thinkers. Universities enjoy bloated budgets, both from direct state funding and from government-subsidized student loans. As with any other government intervention, subsidies cause huge distortions to incentive structures and always increase corruption.  Public money has sufficiently politicized the academy to fully eliminate the separation of Science and state.

    Corporate-sponsored research is also corrupt. Companies pay researchers to find whatever conclusion benefits the company. The worst combination happens when the government works with the academy and corporations on projects, like the COVID-19 vaccine rollout. The amount of incompetence and corruption is staggering and will be written about for centuries or more.

    In the past ten years, the politicization of academia has become apparent, but it has been building since the end of WWII. We are currently seeing the result of far-left political organizing within the academy that has affected even the natural sciences. Despite being openly hostile to critical thinking, these organizers have successfully suppressed discussion within the very institution that’s supposed to exist to pursue truth—a clear and inexcusable structural failure.

    6. Human biology, psychology, and social dynamics make critical thinking difficult.

    Nature does not endow us with great critical thinking skills from birth. From what I can tell, most people are stuck in a developmental stage prior to critical thinking, where social and psychological factors are the ultimate reason for their ideas. Gaining popularity and social acceptance are usually higher goals than figuring out the truth, especially if the truth is unpopular. Therefore, the real causes for error are often socio-psychological, not intellectual—an absence of reasoning rather than a mistake of reasoning. Before reaching the stage of true critical thinking, most people’s thought processes are stunted by issues like insecurity, jealousy, fear, arrogance, groupthink, and cowardice. It takes a large, never-ending commitment to self-development to combat these flaws.

    Rather than grapple with difficult concepts, nearly every modern intellectual is trying to avoid embarrassment for themselves and for their social class. They are trying to maintain their relative position in a social hierarchy that is constructed around orthodoxies. They adhere to these orthodoxies, not because they thought the ideas through, but because they cannot bear the social cost of disagreement. 

    The greater the conceptual blunder within an orthodoxy, the greater the embarrassment to the intellectual class that supported it; hence, few people will stick their necks out to correct serious errors. Of course, few people even entertain the idea that great minds make elementary blunders in the first place, so there’s a low chance most intellectuals even realize the assumptions of their discipline or practice are wrong.

    Not even supposed mathematical “proofs” are immune from social and psychological pressures. For example, Godel’s incompleteness theorems are not even considered a legitimate target of skepticism; mathematicians treat them as a priori truths (which looks absurd to anybody who has actually examined the philosophical assumptions underpinning modern mathematics).

    Individuals who consider themselves part of the “smart person club”—that is, those that self-describe as intellectuals and are often part of the academy—have a difficult time admitting errors in their own ideology. But they have an exceptionally difficult time admitting error by “great minds” of the past, due to group dynamics. It’s one thing to admit that you don’t understand quantum mechanics; it’s an entirely different thing to claim Niels Bohr did not understand quantum mechanics. The former admission can actually gain you prestige within the physics club; the latter will get you ostracized.

    All fields of thought are under constant threat of being captured by superficial “consensus” by those who are seeking to be part of an authoritative group. These people tend to have superior social/manipulative skills, are better at communicating with the general public, and are willing to attack any critics as if their lives depended on it—for understandable reasons, since the benefits of social prestige are indeed on the line when sacred assumptions are being challenged.

    If this analysis is correct, then the least examined ideas are likely to be the most fundamental, have the greatest conceptual errors, and have been established the longest. The longer the orthodoxy exists, the higher the cost of revision, potentially costing an entire class their relative social position. If, for example, the notion of the “completed infinity” in mathematics turns out to be bunk, or the cons of vaccination outweigh the benefits, or the science of global warming is revealed to be corrupt, the social hierarchy will be upended, and the status of many intellectuals will be permanently damaged. Some might end up tarred and feathered. With this perspective, it’s not surprising that ridiculous dogmas can often take centuries or even millennia to correct.

    Speculation and Conclusion

    In addition to the previous six points, I have a few other suspicions that I’m less confident of, but am currently researching:

    1. Physical health might have declined over the 20th century due to reduced food quality, forgotten nutritional knowledge, and increased pesticides and pollutants in the environment. Industrialization created huge quantities of food at the expense of quality. Perhaps our dark age is partially caused by an overall reduction in brain function.

    2. New communications technology, starting with the radio, might have helped proliferate bad ideas, amplified their negative impact, and increased the social cost of disagreement with the orthodoxy. If true, this would be another unintended consequence of modernization.

    3.  Conspiracy/geopolitics might be a significant factor. Occasionally, malice does look like a better explanation than stupidity.

    In conclusion, the legacy of the 20th century is not an impressive one, and I do not currently have evidence that it was an era of great minds or even good ideas. But don’t take my word for it; the evidence will be supplied here over the coming years. If we are indeed in a dark age, then the first step towards leaving it is recognizing that we’ve been in one.

  • Responding to Jason Brennan’s Review of Square One

    Last year, I put out a challenge to some of my academic friends.

  • The Abuse of Apriorism in Economics

    I come in peace, fellow rationalists.

    I know that some truths can be discovered through the application of pure reason, without appealing to empirical data. These truths are limited in number and tend to be very abstract, but they still exist and are foundational to our other beliefs. I give an argument for them in my book Square One: The Foundations of Knowledge.  

    However, rationalism is easily abused and often turns dogmatic. Our perennial critics – empiricists – correctly point out the many dogmas common to rationalism. In turn, we rationalists correctly point out the many unspoken assumptions of empiricism. The purpose of this article is to point out where my fellow rationalists are indeed being dogmatic, in particular, with regard to Austrian Economics.

    I am an apriorist when it comes to economics. There is indeed a non-empirical, purely conceptual framework that we all bring to our analyses of human action. However, apriorist arguments are frequently over-extended by extreme apriorists, whose ideas are best represented by the philosopher Hans-Hermann Hoppe.

    So for the sake of apriorism, we need to tighten up our arguments and specify exactly what can and cannot be claimed by appealing to pure logical analysis. We can make axiomatic-deductive claims in economics, but they tell you almost nothing about the world. They are important claims – even fundamental – but they are so abstract that most people won’t find them relevant.

    Take the common question:

    “Does increasing the minimum wage cause a disemployment effect?”

    I’ll be analyzing two different answers: “Yes, it certainly does,” and “Yes, it probably does, given reasonable assumptions.” The former is an apriorist claim: on purely logical grounds, we can know that increasing the minimum wage causes disemployment. The latter is an empirical claim: given what we know about the world, it’s most likely true that increasing the minimum wage causes disemployment, though it’s not logically necessary.

    For all practical purposes, the latter position is correct, and the former position is dogmatic.

    Other Things Equal…

    The careful reader might have thought, “Ah, we have to clarify the proposition further. It’s not simply that an increase in the minimum wage causes disemployment. It’s that an increase in the minimum wage causes disemployment, ceteris paribus. You have to hold every other variable constant.”

    I concede this point, and this new, more precise proposition is indeed true.

    It’s true and neutered.

    There is a world’s worth of difference between claiming,

    “X causes Y,” and

    “X causes Y, everything else constant.”

    The claim that “X causes Y” is a regular claim about the world. The claim that “X causes Y, everything else constant” is not a regular claim about the world. In fact, it’s not really saying something about the world; it’s talking about a hypothetical world where only one variable changes at a time, which is not the world we inhabit. I fully recognize that the ceteris paribus condition can be helpful as a thought experiment to clarify our concepts, but it often gets abused by the dogmatic rationalist who ends up claiming:

    “X causes Y, ceteris paribus.”

    “Therefore, X causes Y.”

    To state the error more concretely, the dogmatic rationalist says:

    1) An increase in the minimum wage causes disemployment effects, ceteris paribus.

    2) Therefore, an increase in the minimum wage causes disemployment effects.

    This might seem like a subtle error, but in fact, it’s a catastrophic one. It’s partly the reason why aprioristic reasoning is seen as being dogmatic. Extreme apriorists try to claim that “X is a matter of logical necessity,” when in fact, it’s actually an empirical matter, and because of the axiomatic-deductive nature of their argument, they are not open to being convinced otherwise. I’ve had to change my own beliefs on this subject, having been more on the dogmatic side before realizing my ideas were true yet neutered.

    Let’s take a simple example. Say I ask, “Will increasing the minimum wage in Seattle cause disemployment effects?” An extreme apriorist answers, “Yes, certainly, because of these particular causal connections…”

    Now imagine I follow up with the question, “But what if nobody follows the minimum wage law? If nobody follows the law, then surely it won’t cause disemployment.”

    I’ve actually asked this question several times to economists in person, and I’ve heard some interesting answers. One prominent apriorist told me, “Well, they’ve got to follow the law!”

    But surely, in the real world, people don’t have to follow the law. And if they don’t follow the law, then the standard economic story simply doesn’t apply. I recognize that in the thought experiment they must follow the law – otherwise the analysis won’t work – but that’s not the world we live in. Nobody is asking economic questions about your thought experiment. They want to know what will happen in the real world if Seattle actually raises the minimum wage. Whether or not people follow the law is an empirical question; you cannot logically deduce the answer. Therefore, questions about the minimum wage in the real world require empirical assumptions in order to answer. Whether or not people follow the law is only one example; there are innumerable other empirical assumptions that get packed into economic claims. These assumptions might be perfectly reasonable, but they’re still empirical in nature.

    As I like to say in metaphysics, it might be the case that the minimum wage in your head is not the same as the minimum wage in the world.

    Changing Ideas

    Imagine that the following were true:

    “When the minimum wage increases, it changes the self-image of employees. They view themselves as being higher-quality workers and raise their productivity levels accordingly.”

    If that were true, then an increase in the minimum wage could, in fact, increase employment. The increased productivity of workers could make their employers more money, which means the employers could afford to hire more people.

    Notice that this is not a ceteris paribus scenario. The minimum wage would change, causing another variable to change: the ideas of employees.

    So the question is this: do we live in a world where increasing the minimum wage changes the ideas of workers so that they are more productive?

    It’s an empirical question.

    Now, I personally don’t think we live in such a world. (Or if we do, the gains in productivity are not sufficient to offset the additional costs of employment.) However, I didn’t arrive at those conclusions through a series of logical deductions. I’ve observed the world, and I don’t think that’s the one we live in.

    Imagine that somebody were making an explicitly psychological case for raising the minimum wage. They wouldn’t claim, “We can increase employment by changing one-and-only-one variable: the minimum wage.” They’d claim, “We should increase the minimum wage because it changes other variables in the real world that increase worker productivity.” This is a coherent, empirical claim that you cannot refute by responding, “But ceteris paribus an increase in the minimum wage causes disemployment!”
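    To make the contrast vivid, here’s a toy model in Python with invented numbers – a sketch of the two claims, not a piece of real economics. An employer hires every candidate whose value-product at least covers the wage:

    ```python
    def employment(wage, productivities):
        """Hire every candidate whose hourly value-product covers the wage."""
        return sum(1 for p in productivities if p >= wage)

    workers = [8, 9, 10, 11, 12, 13, 14, 15]  # value-product per hour

    # Ceteris paribus: raise the wage, hold everything else constant.
    print(employment(10, workers))  # 6 hired
    print(employment(12, workers))  # 4 hired -- the disemployment effect

    # The empirical possibility: the raise also changes workers' self-image,
    # boosting everyone's productivity by 50%.
    boosted = [p * 1.5 for p in workers]
    print(employment(12, boosted))  # 8 hired -- employment rose
    ```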

    Ceteris Dubious

    If you think about it, the ceteris paribus parameter is odd in the first place. On the one hand, it’s important to help us understand cause and effect relationships in economics. On the other hand, it can lead to extreme myopia. Take an example outside of economics to see just how odd it is.

    Imagine I’m a martial arts instructor, and I tell you, “When kids get promoted to a new colored belt, they perform at slightly higher levels because they think of themselves as being higher ranked.” This is something akin to the “winner effect” – the idea that winners gain confidence because they’re winners, which makes them more likely to win in the future.

    Now imagine an apriorist walks in and says, “Bah! That’s logically impossible! Simply awarding a belt to a kid will not improve his martial arts skills because, ceteris paribus, there is no causal connection between wearing a different colored belt and gaining greater skill!”

    It would be a bizarre, myopic argument indeed. Technically, the apriorist would be right. It’s not actually the different colored belt which makes kids better; it’s because their self-image changed. But this entirely misses the relevant phenomena we observe in the world.

    (Personal anecdote: I actually experienced this in reverse when receiving my first black belt in karate. I was so focused on getting it that once I finally did, I found my skill level dropped. My body and mind got lazy because I wasn’t as focused on improvement anymore. The belt actually made me worse!)

    It’s not hard to imagine a child improving his skill level because of a new belt. So why would it be hard to imagine an employee improving their skill level because of a raise? Yes, technically it’s not directly because of the belt or raise, but that’s a claim nobody is making. It’s a clear empirical possibility.

    In What World?

    Let’s take a step back to see the limitations of ceteris paribus reasoning.

    Say you make a claim that “X is true.” Your claim should not change if you add the phrase, “in the real world.” So for example, if I say, “X is true,” I should be able to say that “X is true in the real world.”

    What good is a proposition that claims “X is true, but not in the real world”?

    For example, the claim “The minimum wage causes a disemployment effect” should be the same as “The minimum wage causes a disemployment effect in the real world.” Yet, the extreme apriorist’s claim is actually, “The minimum wage causes a disemployment effect ceteris paribus, but I can’t tell you what happens in the real world.”

    Again, I think the claim is true, but it’s also neutered. If adding “in the real world” changes the validity of your claims, it should be a red flag. This is especially true for the many abuses of mathematics throughout various disciplines, but that’s an article for another time.

    One more example of the abuse of apriorism comes from a lecture on praxeology by Hans-Hermann Hoppe, which I attended at Mises University in 2011. He gives many examples of what he considers to be certainly-true apriorist claims about how the world works.

    Take this one, for example:

    “If we increase the amount of money without increasing the quantity of non-money goods, social wealth will not be higher, but only prices will rise.”

    This is a great example of dropping the ceteris paribus condition. His claim is true, but only if we’re holding every other variable constant. In the real world, where multiple variables change – including variables that we don’t know are causally connected – it might be the case that an increase in the amount of money could increase the amount of social wealth. It just takes a bit of imagination.

    Imagine that we live in a world where the most incompetent people are the wealthiest, and the most capable entrepreneurs are all stuck in poverty. Now imagine there’s a large monetary inflation – a bunch of new money is printed and given to the poor, competent entrepreneurs. Suddenly, they have new means available to them. They start undertaking projects, employing people, and end up creating wealth for society. Without this inflation, they wouldn’t have had enough capital to undertake the new projects. This scenario is essentially a redistribution of wealth from the incompetent to the competent.

    So, by increasing the amount of money, social wealth could indeed increase. This is possible because the increase of money isn’t ceteris paribus. When the entrepreneurs received the new money, their behavior also changed, which caused an increase in total social wealth.

    I’m not advocating for such an inflation. I’m simply giving an example of where dropping the ceteris paribus condition turns a true-but-neutered claim into a false-and-dogmatic one.
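    Here’s the same point as a toy calculation – invented numbers, purely illustrative. If total output depends on who commands the purchasing power, then shifting that command toward skilled entrepreneurs can raise real wealth, even though printing money created no new goods by itself:

    ```python
    def total_output(entrepreneurs):
        """Each entrepreneur converts purchasing power into output
        according to their own skill multiplier."""
        return sum(skill * capital for skill, capital in entrepreneurs)

    # Before inflation: the incompetent command most of the resources.
    before = [(0.5, 90), (2.0, 10)]  # (skill, share of purchasing power)
    print(total_output(before))      # 65.0

    # New money goes to the competent entrepreneur; in real terms this
    # dilutes everyone else, shifting command over scarce resources.
    after = [(0.5, 60), (2.0, 40)]
    print(total_output(after))       # 110.0 -- real wealth rose in the toy
    ```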

    In Defense of Apriorism

    Alright, after criticizing the abuse of apriorism, I want to defend the methodology, because I do think it plays a fundamental role in economic reasoning.

    From my perspective, sound apriorist claims are rarely about states of the world. They are about our concepts. A careful rationalist will correctly point out that everybody brings pre-empirical concepts to the table before analyzing any data. These concepts are themselves not the subject of empirical inquiry; they are the lens through which we make sense of empirical data.

    Every discipline has these presuppositions – including physics, mathematics, biology, etc. – though most people simply aren’t aware of them because they tend to be very abstract and philosophic rather than concrete and scientific.

    For example, take the claim that “Humans act purposefully.” It might sound like an empirical claim – that we could go out and test whether or not humans are acting purposefully. But that’s not really correct. What careful thinkers like Ludwig von Mises point out is that we interpret data about humans through the lens of purposeful action. When we observe humans, we presuppose a fundamentally different lens than when we observe billiard balls – that of purposeful action. It doesn’t make sense to say, “The billiard balls intended to go into their pockets after getting struck.” It does make sense to say, “Johnny intended to go to work at 3pm,” or “Johnny chose to take the train instead of the bus,” or “Johnny values classical music higher than electronica.”

    The apriorist is examining the concepts within our own mental framework. When Johnny chooses Beethoven, we can also meaningfully say, “And he could have chosen otherwise, which means Johnny has a preference scale for music.” This preference scale isn’t measured or observed – it’s not really a thing-in-the-world. It’s an analytical construct that’s implied by our concepts about human action.

    Take a fundamental economic law: the law of diminishing marginal utility. The first unit of a good gets employed to satisfy the most highly valued end; each additional unit will satisfy a lower-valued end. Is this an empirical claim? Not really. What we mean by “highest valued end” is precisely that it gets satisfied before other ends. It’s part of the definition – the pre-empirical conceptual lens. To say, “Johnny satisfied the lower-valued X prior to the higher-valued Y” is really to say that “Johnny actually valued X higher than Y.”
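    The conceptual character of the law is easy to see in a sketch – the allocation below is true by construction, not by measurement:

    ```python
    # Ends, sorted from most to least valued. By definition, each successive
    # unit of a good goes to the highest-valued end still unsatisfied.
    ends = ["drinking water", "cooking", "washing", "watering plants"]

    def allocate(units, ends_by_value):
        return ends_by_value[:units]  # unit 1 -> top end, unit 2 -> next ...

    for n, end in enumerate(allocate(3, ends), start=1):
        print(f"Unit {n} of water satisfies: {end}")

    # If Johnny "satisfied washing before cooking," we'd simply conclude
    # that washing was his higher-valued end after all -- the ordering is
    # part of the conceptual lens, not an empirical finding.
    ```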

    When I worked for FEE, I remember listening to a lecture by Israel Kirzner, who was a student of Mises. He and some fellow students apparently asked Mises, “But how do we know that humans act?” To which Mises replied, “We observe it.”

    I think this is key. To say, “We observe it,” is to say, “Well, we’re guessing, based on the behavior of the objects we observe and based on our own internal introspection, since we have special access to knowledge about what humans are. We’re interpreting human phenomena through the lens of purposeful action. It might very well be the case that humans do not act, but then you’ve got a great deal of explaining to do.”

    In other words, the concept of “purposeful human action” is extraordinarily powerful – so powerful that it’s fused to the lens of virtually anybody analyzing human behavior. If the concept correlates to the world, we can use pure logical deduction to come up with a kind of aprioristic framework for analyzing human action. If the concept is false, then… perhaps everything is a great hallucination or something else wild and bizarre.

    If you accept that humans act purposefully, then you are bound by a particular logical framework that rationalists have discovered. The framework is abstract and limited in scope, but it still exists and is fundamental.

    There are other places where apriorism is fundamental to economic reasoning, though I will not spend much time covering them. For example, take the proposition that “Scarcity exists” – i.e. there aren’t enough goods for everybody to have all their ends satisfied. If this is true, it implies other aprioristic truths that we don’t have to observe in the world. If scarcity exists, then humans must choose which ends to satisfy with their scarce means. And if humans choose “this” over “that,” we can meaningfully talk about preference scales, the law of diminishing marginal utility, and if we’re careful, we can even deduce the general framework of the laws of supply and demand. Also note, the particular claim that “scarcity exists” is actually a claim about the world. It’s not just purely about our concepts.

    Even in the earlier example about the minimum wage, apriorism and ceteris paribus reasoning can serve a valuable purpose. You could say, “To the extent that employment increases after a minimum wage hike, it is certainly not the case that the additional employment was solely caused by the cost of employment increasing.”

    Again, it’s not a particularly relevant claim, since innumerable variables are always changing, but it is still true. Careful ceteris paribus reasoning allows us to hyper-focus on cause and effect relationships. What exactly causes what, and for what reasons? If my martial arts skill improved when I got a new belt, it must be the case that some other variable changed. Holding “everything else constant” in our minds greatly improves our ability to identify cause and effect in a complex world.

    So, I think the most accurate approach to economic reasoning is a mixture of rationalism and empiricism. We all bring non-empirical conceptual frameworks to the table whenever we analyze a particular phenomenon. It’s valuable – even essential – to examine, explain, and flesh out the implications of these pure concepts. However, conceptual frameworks and extremely distant abstract truths tell you almost nothing about what happens in the world. Economics is supposed to be about the world, not about our own minds, and the world is extremely complex. Multiple variables are changing every instant. The ceteris paribus parameter tells you essentially nothing about a world where more than one variable changes at the same time. Since that’s the world we live in, I suggest we rationalists stop abusing apriorism in economics.

    If you enjoyed this article, please consider supporting my work for $1 on Patreon.