The spiritual domain heavily overlaps the informational domain—the world of patterns. I don’t think there’s a complete reduction of one domain to the other, but there is considerable overlap. Consider a few spiritual ideas.
“Telling the truth is of spiritual importance. Lies destroy, while the truth heals.”
This is not a claim about atoms or specific biological entities. This is a claim about general patterns. Humans act based on their ideas about the world. Lies create intrinsically disharmonious patterns, while the truth creates harmonious patterns—at least in the long run. Relationships based on deception are intrinsically unstable and dangerous to humans, while relationships based on the truth are solid and stable.
This principle is both abstract and true—the truth is a pattern in the abstract domain which lies above the material.
“You cannot overcome evil with evil, but you overcome evil with good.”
This is also a claim about patterns—how some patterns relate to others. Revenge begets revenge; this is true both empirically and theoretically. I often think back to a conversation I had with Dr Hirini Kaa from New Zealand who told me about the never-ending cycles of revenge within indigenous Maori groups. These cycles were only stopped once Christian settlers introduced the concept of forgiveness to them, which was revolutionary. The pattern of forgiveness is logically incompatible with the pattern of revenge; the two cannot exist together. Therefore forgiveness, when manifested in the world, is a kind of destruction of revenge.
“Spiritual warfare is real.”
How humans relate to other humans is an objective pattern in the world. Some relationships are objectively harmonious (a loving marriage; a safe and healthy community) and other relationships are discordant (a spiteful marriage; a toxic and dangerous community). The reality of social harmony is dependent on an astronomical number of variables, including physical, economic, political, cultural, and psychological factors. All of these variables intersect with the spiritual.
The health of your body is connected to your nutrition; your nutrition is connected to your ideas; your ideas are connected to a million other information structures. Controversial cultural subjects like pornography and drug use are fundamentally spiritual subjects—their effects are objective and real in the domain of patterns. Spiritual degradation will eventually manifest in the world as physical degradation.
“The spiritual world is invisible.”
We “see” patterns differently than we see material objects. A toxic marriage doesn’t have color, a smell, or a surface area. It’s a pattern which is grasped intellectually and felt intuitively—we “see” it in terms of understanding abstract relations and we can “feel” it in our guts.
For example, the danger and chaos of physical warfare is easy to observe with your senses. The danger and chaos of spiritual warfare has to be grasped with the intellect or felt by the intuition.
“The spiritual world is timeless and eternal.”
Political leaders can be assassinated. Assassination itself, as a pattern, cannot be destroyed. Individuals can stop lying. Lying itself is not going anywhere. You can individually run away from love, but you can’t kill the pattern of love.
Humans are creatures whose heads and hearts seem to intersect the spiritual world. We have the ability to instantiate or not instantiate patterns. There is no murder in my neighborhood, but at any point, that pattern can be instantiated. The spirit of hatred is always there; the question is whether that spirit will find a host.
There’s a never-ending battle going on—whether or not specific patterns will be instantiated. It takes spiritual strength and discipline to fight it.
It is important to reiterate that the spiritual world does not entirely reduce to the informational world. The metaphysics are complex and confusing. The next article will be on Matter and Spirit.
I’ve been enjoying some theological research lately, and I came across the work of Jordan Cooper, a Lutheran theologian. He’s got some great video series on different thinkers. I watched his Marx video last night, and it sparked a bunch of thoughts.
Cooper makes the claim that Marx can best be understood through the lens of a hardcore materialism—that reality is only composed of atoms and their movement. Everybody knows Marx is a materialist, but I’ve never considered his metaphysics to be fundamental to his thought.
The more I think about it, the more powerful this analysis becomes.
Economics without Abstractions
If Cooper is right, it would explain why Marxists are fundamentally confused about economics.
For example, think about the factory owner employing a worker. To the materialist, the only person doing real work is the physical laborer—the man moving atoms around. The capitalist is not moving atoms around and is therefore literally doing nothing productive. In that perspective, capitalists are indeed parasites, mooching off workers. Risk, capital investment, the coordination of labor—these are all abstractions, not fundamentally real; they are word-games to keep the capitalist in power.
The labor theory of value also flows from a hardcore materialism. Physical labor actually moves atoms around; this labor is an objective phenomenon in the domain of physics. By contrast, subjectivist theories of value are all abstract (and even metaphysically dualistic, to say that value is “in the mind”).
Think about private property. As Marx says, the essence of communism is the abolition of private property. That also makes sense from a materialist perspective. Private property is an abstraction placed on top of atoms; it’s not real; it’s an arbitrary carving up of the physical world. In the strongest metaphysical sense, private property does not exist. So it’s natural to think the social orders built on top of private property are fundamentally flawed.
Why are communists against the existence of the family? Well, if the family does not actually exist, that’s a pretty good reason.
The inevitability of communism also makes sense to me. Workers are the ones with real power, so what’s stopping them from throwing off the yoke of their capitalist oppressors? Simple: class consciousness. They are not yet aware of their situation, and once they gain that awareness, nobody can stop them.
Later Marxist Thinkers
The development of Marxism into its modern form of being a nihilistic, anti-truth, power-obsessed worldview also makes sense. If abstractions are not real—if they do not correspond to anything essential—then I can suddenly understand why these people think “Everything is a social construction.” They don’t believe the world can be carved up in an objectively meaningful way.
In the most extreme version of materialism, even the relations between atoms are not real, which would mean at the most fundamental level, there is no such thing as a composite object. Basic distinctions between “men” and “women” are arbitrary. Everything is individual atoms without relation to one another. Therefore, there’s no abstraction that could possibly be “true.”
Understood through this lens, language really does look like it’s just about power. What is language fundamentally? Well, if our metaphysics only allows us to track how atoms move, then the only thing real about language is how it pushes atoms around. That’s all it can do. It’s all just power—physical power, ultimately.
I’ve never really thought about Marxism through this lens, but it sure explains a lot.
Markets are extraordinary. Nearly all the textbook criticisms of them are of poor quality and can be refuted without much effort. Laissez-faire is such a theoretical triumph that it’s mesmerizing—it can be difficult to see the bad amongst all the good.
Clever libertarians will quickly tell you that “markets are not perfect!”, but they rarely examine just how imperfect. They might focus on markets’ intrinsic power of self-correction, while overlooking the painful reality of what self-correction means. I speak from experience here as a market absolutist. Everything is better within markets, yet there remain real, deep, structural dangers that are risky to overlook.
Laissez-faire is a theoretical triumph and a painful reality.
Trust and Incentives
A rough libertarian model says that markets are trustworthy because of the powerful incentive structure within them that rewards good behavior and punishes the bad. Goods can generally be trusted to be safe and satisfactory by virtue of being produced in markets.
This, I want to argue, is false. The incentive structure within markets is not strong enough to prevent harmful goods and services from being produced, even for products that are extremely popular and successful.
The libertarian wants to walk into a store, pull an item off the shelf, and confidently proclaim that it’s safe because of market forces—producers don’t want to harm their customers, after all. I wish this were the reality, but I think it’s utopian. Such a system will never exist. Blind faith in markets is not justified.
Markets do indeed fail… sort of. Depending on what you mean by “failure.” At the very least, markets produce goods and services that are harmful to people—and if consumers had higher-quality information, they would choose to not purchase them. These goods and services can cause widespread harm, even to individuals not directly involved in the transaction. By this metric, markets do fail, especially on short timescales.
Ham Sandwich Theory
It’s worth telling a short story about the wondrous incentives within markets. It goes like this:
We are so advanced that we take for granted the many miracles that happen throughout the day. We live in a dangerous world filled with unstable, bitter, incompetent people. Yet, we trust absolute strangers to prepare our meals for us and don’t think twice about it.
Say you stop by a food truck to purchase a ham sandwich. You don’t know anything about the owner; you don’t know where he got his food, his sanitary standards, or his opinion towards [your group identity]. And yet, you trust the stranger with your life. He could poison you, after all.
Why do we trust complete strangers in this way?
It sounds like an ethical question, but economics has a better explanation. Market incentives are sufficiently strong to punish people who end up poisoning their customers. A dangerous sandwich seller, whether malicious or incompetent, will quickly go out of business. That’s the gist of it.
To the average market enthusiast, this principle extends to all products—the same incentives explain why you trust the manufacturers of shampoos, cars, watches, and literally everything else you purchase—and the conclusion is that the output of markets can be generally trusted.
The point of this article is not to explain those incentives, but they are worthy of deep examination. They are powerful enough to bring humanity out of poverty.
The Complexity Problem Strikes Again
The problem is that not every good is like a ham sandwich. The more complex the good, the less reliable markets become.
Food poisoning is obvious and visible. Sellers get almost immediate feedback. Angry, poisoned customers can tell others to stay away. Profits are immediately affected.
The real trouble happens when the situation is more complex. Instead of food poisoning within 24 hours, imagine a food additive raises your risk of cancer after 5, 10, or 20 years or disrupts your gut microbiome—an extraordinarily complex system/ecosystem that we’re only now discovering is connected to a wide range of diseases. This might already be happening with ubiquitous additives like carrageenan and cellulose gum. Or, consider the situation where no product in particular is implicated, but rather a whole industrial process. Microplastics in our water supply are a real problem; there is no singular “microplastic producer” to blame or sue for compensation.
Greed is not the primary problem here. Complexity is.
First of all, it’s extremely difficult to establish causality. How do we figure out whether [preservative X] actually increases the risk of colon cancer? How do we figure out whether EMF exposure is a real problem? As a pure intellectual matter, it’s extremely difficult, often expensive, and time-consuming—and with the current state of academic research, literally impossible.
Entrepreneurs are not going to figure this information out beforehand. They cannot, because the systems are so complex that we don’t even know what to look for. We are relegated to finding empirical causal associations after the fact—after some researcher finds out years later that [product Y] damages your reproductive health.
Despite these risks, the overwhelming incentive is to push products to market anyway. There are hundreds of billions of dollars at stake, for example, with the rollout of 5G technology worldwide. If you think we understand the real-world health impacts of 5G technology, you have been the subject of propaganda. We have no fucking idea what the effects are, especially over years and decades of exposure.
Real science has no chance here, not in the short term. Billions of dollars can be made, and it only takes millions of dollars to buy scientists. It doesn’t matter whether the scientists are corrupt or incompetent; figuring out causality beforehand is simply too difficult.
In the Long Run, We’re All Dead
Libertarians will want to counter, “But harmful companies would be subject to lawsuits! They’d be sued into oblivion!”
This is naive. First of all, there is a straightforward, short-to-mid-term profit incentive. If the company can generate billions of dollars in the next few decades, what does it matter if they get sued into oblivion in 40 years? The executives and employees involved will be gone. Shareholders will be happy for a generation or two. Business decisions don’t usually get made for 40 years in the future.
With sufficient short-term gains, long-term pain becomes irrelevant. Lies are sometimes more profitable than the truth.
Even the idea of there being long-term pain assumes that scientific truth actually wins. It might take 40 years of intellectual and scientific battles to establish causality. Who is funding that research, and for what reason? Imagine the Herculean effort it takes to stand up to multi-billion dollar companies whose entire business model depends on the safety of the product in question. Real research is an existential threat, so the companies have every incentive to attack, discredit, corrupt, or destroy researchers.
There is simply too much money at stake for people to care about the truth. This battle cannot be fought only once—it will remain an ongoing battle between innovators and researchers, indefinitely.
Failure Cycles
Society is in a state of constant flow. Individuals are always learning, failing, succeeding, discovering, and forgetting. People get hired and fired. They advance and regress. I like to think of this flow as a progression through “failure cycles.”
In economics, profits and losses are both integral to a healthy economy. Profits are the reward for creating value; losses are the penalty for destroying value. Since resources are scarce, it’s critically important that unprofitable entities go bankrupt—otherwise, they would turn into zombies, forever draining resources into a black hole.
The longer a failure cycle takes to complete, the more resources get wasted in the process. Imagine the sclerotic company, losing money because they refused to change their ways. Banks might lend them money to continue operations. If they fail to adjust and are forced into bankruptcy, then the money lent to them is wasted—it could have gone elsewhere, to more productive entrepreneurs.
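To make the arithmetic concrete, here is a minimal sketch in Python. The burn rate and time horizons are hypothetical numbers, not data; the only point is that waste scales directly with how long the cycle takes to complete.

```python
# Minimal sketch: total waste scales with how long a failure cycle takes.
# The annual loss figure and the horizons below are hypothetical.

def wasted_resources(annual_loss: float, years_until_bankruptcy: int) -> float:
    """Value destroyed before an unprofitable entity is finally shut down."""
    return annual_loss * years_until_bankruptcy

ANNUAL_LOSS = 10_000_000  # hypothetical: the firm destroys $10M of value per year

for years in (1, 5, 20):
    print(f"cycle completes in {years:>2} years -> "
          f"${wasted_resources(ANNUAL_LOSS, years):,.0f} wasted")

# cycle completes in  1 years -> $10,000,000 wasted
# cycle completes in  5 years -> $50,000,000 wasted
# cycle completes in 20 years -> $200,000,000 wasted
```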
Consider incompetent management of a sports team. Say the owner of a franchise hires the wrong coaching staff. Season after season, they end up with losing records, and their fan base stops attending games. The sooner the incompetent people get fired, or ownership changes, the sooner the franchise can be turned around—even if firing people causes a lot of short-term pain.
Perhaps the best example of a dysfunctional failure cycle is in public health. Anthony Fauci has enjoyed a long career in government; meanwhile, he actually appears incompetent (or malicious) and should probably be in jail. His failure cycle has taken decades too long to complete, and countless people have suffered because of it. If he had been fired in the ’80s, we’d all be better off.
Failure cycles cause terrible pain. Bankruptcy sucks for the individuals affected, but it can also generate ripple effects for others. Employees, clients, and customers can all have their lives disrupted. Dangerous products damage real people and their families. When banks fail, it can cause a cascade of other failures, amplifying the total damage done to the economy. Yet, despite the pain, these cycles are necessary, because the alternative is worse.
Banks sometimes need to fail. Bad decisions have to get punished; moral hazard has to be avoided. Otherwise, corrections will never happen—the most corrupt and incompetent people will continue to make the world a worse place and drain our resources.
There will always be dangerous products on the market. Therefore, we must have failure cycles, and the quicker they operate, the better.
Unfortunately, the more complex the product, the longer the failure cycle takes.
5G might honestly be terrible for your health, and if so, the public is not going to know about it for years—perhaps not until it completes its entire technological life-cycle. The same can be said for a ton of different products, from pesticides to preservatives to pills.
Market incentives are not strong enough to eliminate these failure cycles—and sometimes, the failures are spectacular. If you walk into any supermarket in the United States, I bet you will find a huge percentage of products that will be recognized as dangerous in a century. Everything from food and drink to health supplements, household cleaners, and gadgets. These items will be seen the way lead pipes and asbestos are seen today.
“Solving” Unsolvable Problems…
Critics of markets generally make bad arguments. But they make extraordinarily bad arguments when it comes to solutions for the problems I’ve highlighted above. They correctly see that the market generates dangerous products. But their solutions tend to be ridiculous—have the government regulate the market! Have a panel of experts figure out the truth, then force everybody to comply! Politics will surely solve these problems!
If complexity is the problem, then government is not the solution.
Government is the opposite of the market, for a host of reasons. For one, its resources come from taxation, not profits and losses. Even if the government is phenomenally incompetent, it does not go bankrupt. The failure cycle is extremely slow—Fauci has been in government for more than half a century.
The only thing worse than a dangerous medical product is a dangerous medical product that has been approved by the regulatory apparatus and forced onto millions of people.
This inefficiency and incompetence is built into the structure of government. Even if we grant the possibility that a competent individual might find themselves in a position of political power, and even if they do a good job, the structure of the government will eventually ensure that this competence is strangled into oblivion.
Trying to solve the problem of market failure with government is making the cure worse than the disease. Official Regulatory Boards are centralized points of failure; they are honeypots for companies to corrupt. They turn painful failure cycles into catastrophic ones.
You cannot have centralized power without centralized, systemic risk.
… When the Problem is Reality
The complexity problem, seen from a different angle, is simply the problem of life. Life is complex, and there’s nothing we can do about it. Life is so complex that failure is inevitable. Failure cycles are also inevitable. People are always going to create dangerous products; researchers are always going to fail; corruption is never going away; businessmen are never going to stop selling bullshit; and reality will continue to remain maximally complex into the future. We are not moving into a world where blind faith in humans is justified.
Despite my pessimistic conclusions about markets, I still think they are, by far, the best possible solution to the complexity problem. Decentralized, natural, emergent systems are able to handle more complexity than centralized, top-down, authoritarian systems. There’s no way around it; it’s so true, it’s almost a tautology.
The best solution to the complexity problem is fast failure cycles. Failure as a feature, not a bug. The sooner we learn how stupid we are, the sooner we can correct our mistakes. This requires freedom and involves pain—just less pain than the alternative.
Libertarians are right that freedom is the answer. Markets do self-correct, but they might take multiple decades and damage millions of people in the process. It’s a bitter pill to swallow.
In the spirit of free markets, there’s a practical, entrepreneurial question here. Governmental regulatory bodies hold little credibility—so, in the future, who will do the real safety research?
How will dissident researchers get paid?
These are critically important, exciting questions, and lots of money will be made by the next generation of competent entrepreneurs.
For the last fifteen years, I’ve been researching a wide range of subjects. Full-time for the last seven years. I’ve traveled the world to interview intellectuals for my podcast, but most of my research has been in private. After careful examination, I have come to the conclusion that we’ve been living in a dark age since at least the early 20th century.
Our present dark age encompasses all domains, from philosophy to political theory, to biology, statistics, psychology, medicine, physics, and even the sacred domain of mathematics. Low-quality ideas have become common knowledge, situated within fuzzy paradigms. Innumerable ideas which are assumed to be rigorous are often embarrassingly wrong and utilize concepts that an intelligent teenager could recognize as dubious. For example, the Copenhagen interpretation in physics is not only wrong, it’s aggressively irrational—enough to damn its supporters throughout the 20th century.
Whether it’s the Copenhagen interpretation, Cantor’s diagonal argument, or modern medical practices, the story looks the same: shockingly bad ideas become orthodoxy, and once established, the social and psychological costs of questioning the orthodoxy are sufficiently high to dissuade most people from re-examination.
This article is the first of an indefinite series that will examine the breadth and depth of our present dark age. For years, I have been planning on writing a book on this topic, but the more I study, the more examples I find. The scandals have become a never-ending list. So, rather than indefinitely accumulate more information, I’ve decided to start writing now.
Darkness Everywhere
By a “dark age”, I do not mean that all modern beliefs are false. The earth is indeed round. Instead, I mean that all of our structures of knowledge are plagued by errors, at all levels, from the trivial to the profound, from the peripheral to the fundamental. Nothing that you’ve been taught can be believed because you were taught it. Nothing can be believed because others believe it. No idea is trustworthy because it’s written in a textbook.
The process that results in the production of knowledge in textbooks is flawed, because the methodology employed by intellectuals is not sufficiently rigorous to generate high-quality ideas. The epistemic standards of the 20th century were not high enough to overcome social, psychological, and political entropy. Our academy has failed.
At present, I have more than sixty-five specific examples that vary in complexity. Some ideas, like the Copenhagen interpretation, have entire books written about them, and researchers could spend decades understanding their full history and significance. The global reaction to COVID-19 is another example that will be written about for centuries. Other ideas, like specific medical practices, are less complex, though the level of error still suggests a dark age.
Of course, I cannot claim this is true in literally every domain, since I have not researched every domain. However, my studies have been quite broad, and the patterns are undeniable. Now when I research a new field, I am able to accurately predict where the scandalous assumptions lie within a short period of time, due to recognizable patterns of argument and predictable social dynamics.
Occasionally, I will find a scholar that has done enough critical thinking and historical research to discover that the ideas he was taught in school are wrong. Usually, these people end up thinking they have discovered uniquely scandalous errors in the history of science. The rogue medical researcher that examines the origins of the lipid hypothesis, or the mathematician that wonders about set theory, or the biologist that investigates fundamental problems with lab rats—they’ll discover critical errors in their discipline but think they are isolated events. I’m sorry to say, they are not isolated events. They are the norm, no matter how basic the conceptual error.
Despite the ubiquity of our dark age, there have been bright spots. The progress of engineers cannot be denied, though it’s a mistake to conflate the progress of scientists with the progress of engineers. There have been high-quality dissenters. Despite being dismissed as crackpots and crazies by their contemporaries, their arguments are often superior to the orthodoxies they criticize, and I suspect history will be kind to these skeptics.
Due to recent events and the proliferation of alternative information channels, I believe we are exiting the dark age into a new Renaissance. Eventually, enough individuals will realize the severity of the problems with existing orthodoxies and the systemic problems with the academy, and they will embark on their own intellectual adventures. The internet has made possible a new life of the mind, and it’s unleashing pent-up intellectual energies around the world that will bring illumination to our present situation, in addition to creating the new paradigms that we desperately need.
Why Did This Happen?
It will take years to go through all of the examples, but before examining the specifics, it’s helpful to see the big picture. Here’s my best explanation for why we ended up in a dark age, summarized into six points:
1. Intellectuals have greatly underestimated the complexity of the world.
The success of early science gave us false hope that the world is simple. Laboratory experiments are great for identifying simple structures and relationships, but they aren’t great for describing the world outside of the laboratory. Modern intellectuals are too zoomed-in in their analyses and theories. They do not see how interconnected the world is nor how many domains one has to research in order to gain competence. For example, you simply cannot have a rigorous understanding of political theory without studying economics. Nor can you understand physics without thinking about philosophy. Yet, almost nobody has interdisciplinary knowledge or skill.
Even within a single domain like medicine, competence requires a broad exposure to concepts. Being too-zoomed-in has resulted in a bunch of medical professionals that don’t understand basic nutrition, immunologists that know nothing of virology, surgeons that unnecessarily remove organs, dentists that poison their patients, and doctors that prolong injury by prescribing anti-inflammatory drugs and harm their patients through frivolous antibiotic usage. The medical establishment has greatly underestimated the complexity of biological systems, and due to this oversimplification, they yank levers that end up causing more harm than good. The same is true for the economists and politicians who believe they can centrally plan economies. They greatly underestimate the complexity of economic systems and end up causing more harm than good. That’s the standard pattern across all disciplines.
2. Specialization has made people stupid.
Modern specialization has become so extreme that it’s akin to a mental handicap. Contemporary minds are only able to think about a couple of variables at the same time and do not entertain variables outside of their domain of training. While this myopia works, and is even encouraged, within the academy, it doesn’t work for understanding the real world. The world does not respect our intellectual divisions of labor, and ideas do not stay confined to their taxonomies.
A competent political theorist must have a good model of human psychology. A competent psychologist must be comfortable with philosophy. Philosophers, if they want to understand the broader world, must grasp economic principles. And so on. The complexity of the world makes it impossible for specialized knowledge to be sufficient to build accurate models of reality. We need both special and general knowledge across a multitude of domains.
When encountering fundamental concepts and assumptions within their own discipline, specialists will often outsource their thinking altogether and say things like “Those kinds of questions are for the philosophers.” They are content leaving the most important concepts to be handled by other people. Unfortunately, since competent philosophers are almost nowhere to be found, the most essential concepts are rarely examined with scrutiny. So the specialist ends up with ideas that are often inferior to those of the uneducated, since uneducated folks tend to have more generalist models of the world.
Specialization fractures knowledge into many different pieces, and in our present dark age, almost nobody has tried to put the pieces back together. Contrary to popular opinion, it does not take specialized knowledge or training to comment on the big-picture or see conceptual errors within a discipline. In fact, a lack of training can be an advantage for seeing things from a fresh perspective. The greatest blindspots of specialists are caused by the uniformity of their formal education.
The balance between generalists and specialists is mirrored by the balance between experimenters and theorists. The 20th century had an enormous lack of competent theorists, who are often considered unnecessary or “too philosophical.” Theorists, like generalists, are able to synthesize knowledge into a coherent picture and are absolutely essential for putting fractured pieces of knowledge back together.
3. The lack of conceptual clarity in mathematics and physics has caused a lack of conceptual clarity everywhere else. These disciplines underwent foundational crises in the early 20th century that were not resolved correctly.
The world of ideas is hierarchical; some ideas are categorically more important than others. The industry of ideas is also hierarchical; some intellectuals are categorically more important than others. In our contemporary paradigm, mathematics and physics are considered the most important domains, and mathematicians and physicists are considered the most intelligent thinkers. Therefore, when these disciplines underwent foundational crises, it had a devastating effect upon the entire world of ideas. The foundational notion of a knowable reality came into serious doubt.
In physics, the Copenhagen interpretation claimed that there is no world outside of observation—that it doesn’t even make sense to talk about reality-in-some-state separate from our observations. When the philosophers disagreed, their word was pitted against the word of physicists. In the academic hierarchy, physicists occupy a higher spot than philosophers, so it became fashionable to deny the existence of independent reality. More importantly, within the minds of intellectuals, even if they naively believe in the existence of a measurement-independent world, upon hearing that prestigious physicists disagree, most people end up conforming to the ideas of physicists who they believe are more intelligent than themselves.
In mathematics, the discovery of non-Euclidean geometries undermined a foundation that was built upon for two thousand years. Euclid was often assumed to be a priori true, despite the high-quality criticisms leveled at Euclid for thousands of years. If Euclid is not the rock-solid foundation of mathematics, what is? In the early 1900’s, some people claimed the foundation was logic (and they were correct). Others claimed there is no foundation at all or that mathematics is meaningless because it’s merely the manipulation of symbols according to arbitrary rules.
David Hilbert was a German mathematician that tried to unify all of mathematics under a finite set of axioms. According to the orthodox story, Kurt Gödel showed in his famous incompleteness theorems that such a project was impossible. Worse than impossible, actually. He supposedly showed that any attempt to formalize mathematics within an axiomatic system would either be incomplete (meaning some mathematical truths cannot be proven), or, if complete, inconsistent (meaning it contains a logical contradiction). The impact of these theorems cannot be overstated, both within mathematics and outside of it. Intellectuals have been abusing Gödel’s theorems for a century, invoking them to make all kinds of anti-rational arguments. Inescapable contradictions in mathematics would indeed be devastating, because after all, if you cannot have conceptual clarity and certainty in mathematics, what hope is there for other disciplines?
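For reference, here is the orthodox statement being invoked, in the strengthened form due to Rosser. This is the textbook formulation under discussion, not an endorsement of it:

```latex
% Textbook statements of the incompleteness theorems (Gödel–Rosser form).
\begin{theorem}[First Incompleteness Theorem]
Let $F$ be a consistent, recursively axiomatizable formal system that
interprets elementary arithmetic. Then there is a sentence $G_F$ in the
language of $F$ such that neither $G_F$ nor $\neg G_F$ is provable in $F$.
\end{theorem}

\begin{theorem}[Second Incompleteness Theorem]
Under the same hypotheses, $F$ does not prove $\mathrm{Con}(F)$, the
arithmetical sentence expressing the consistency of $F$.
\end{theorem}
```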
Due to the importance of physics and mathematics, and the influence of physicists and mathematicians, the epistemic standards of the 20th century were severely damaged by these foundational crises. The rise of logical positivism, relativism, and even scientism can be connected to these irrationalist paradigms, which often serve as justification for abandoning the notion of truth altogether.
4. The methods of scientific inquiry have been conflated with the processes of academia.
What is science? In our current paradigm, science is what scientists do. Science is what trained people in lab coats do at universities according to established practices. Science is what’s published in scientific journals after going through the formal peer review process. Good science is what wins the awards that scientific institutions give out. In other words, science is now equivalent to the rituals of academia.
Real empirical inquiry has been replaced by conformity to bureaucratic procedures. If a scientific paper has checked off all the boxes of academic formalism, it is considered true science, regardless of the intellectual quality of the paper. Real peer review has been replaced by formal peer review—a religious ritual that is supposed to improve the quality of academic literature, despite all evidence to the contrary. The academic publishing system has obviously become dominated by petty and capricious gatekeepers. With the invention of the internet, it’s probably unnecessary altogether.
“Following standard scientific procedure” sounds great unless it’s revealed that the procedures are mistaken. “Peer review” sounds great, unless your peers are incompetent. Upon careful review of many different disciplines, the scientific record demonstrates that “standard practice” is indeed insufficient to yield reliable knowledge, and chances are, your scientific peers are actually incompetent.
5. Academia has been corrupted by government and corporate funding.
Over the 20th century, the amount of money flowing into academia has exploded and degraded the quality of the institution. Academics are incentivized to spend their time chasing government grants rather than researching. The institutional hierarchy has been skewed to favor the best grant-winners rather than the best thinkers. Universities enjoy bloated budgets, both from direct state funding and from government-subsidized student loans. As with any other government intervention, subsidies cause huge distortions to incentive structures and always increase corruption. Public money has sufficiently politicized the academy to fully eliminate the separation of Science and state.
Corporate-sponsored research is also corrupt. Companies pay researchers to find whatever conclusion benefits the company. The worst combination happens when the government works with the academy and corporations on projects, like the COVID-19 vaccine rollout. The amount of incompetence and corruption is staggering and will be written about for centuries or more.
In the past ten years, the politicization of academia has become apparent, but it has been building since the end of WWII. We are currently seeing the result of far-left political organizing within the academy that has affected even the natural sciences. Despite being openly hostile to critical thinking, these organizers have successfully suppressed discussion within the very institution that’s supposed to exist to pursue truth—a clear and inexcusable structural failure.
6. Human biology, psychology, and social dynamics make critical thinking difficult.
Nature does not endow us with great critical thinking skills from birth. From what I can tell, most people are stuck in a developmental stage prior to critical thinking, where social and psychological factors are the ultimate reason for their ideas. Gaining popularity and social acceptance are usually higher goals than figuring out the truth, especially if the truth is unpopular. Therefore, the real causes for error are often socio-psychological, not intellectual—an absence of reasoning rather than a mistake of reasoning. Before reaching the stage of true critical thinking, most people’s thought processes are stunted by issues like insecurity, jealousy, fear, arrogance, groupthink, and cowardice. It takes a large, never-ending commitment to self-development to combat these flaws.
Rather than grapple with difficult concepts, nearly every modern intellectual is trying to avoid embarrassment for themselves and for their social class. They are trying to maintain their relative position in a social hierarchy that is constructed around orthodoxies. They adhere to these orthodoxies, not because they thought the ideas through, but because they cannot bear the social cost of disagreement.
The greater the conceptual blunder within an orthodoxy, the greater the embarrassment to the intellectual class that supported it; hence, few people will stick their necks out to correct serious errors. Of course, few people even entertain the idea that great minds make elementary blunders in the first place, so there’s a low chance most intellectuals even realize the assumptions of their discipline or practice are wrong.
Not even supposed mathematical “proofs” are immune from social and psychological pressures. For example, Gödel’s incompleteness theorems are not even considered legitimate targets of skepticism; mathematicians treat them as a priori truths (which looks absurd to anybody who has actually examined the philosophical assumptions underpinning modern mathematics).
Individuals who consider themselves part of the “smart person club”—that is, those that self-describe as intellectuals and are often part of the academy—have a difficult time admitting errors in their own ideology. But they have an exceptionally difficult time admitting error by “great minds” of the past, due to group dynamics. It’s one thing to admit that you don’t understand quantum mechanics; it’s an entirely different thing to claim Niels Bohr did not understand quantum mechanics. The former admission can actually gain you prestige within the physics club; the latter will get you ostracized.
All fields of thought are under constant threat of being captured by superficial “consensus” by those who are seeking to be part of an authoritative group. These people tend to have superior social/manipulative skills, are better at communicating with the general public, and are willing to attack any critics as if their lives depended on it—for understandable reasons, since the benefits of social prestige are indeed on the line when sacred assumptions are being challenged.
If this analysis is correct, then the least examined ideas are likely to be the most fundamental, have the greatest conceptual errors, and have been established the longest. The longer the orthodoxy exists, the higher the cost of revision, potentially costing an entire class their relative social position. If, for example, the notion of the “completed infinity” in mathematics turns out to be bunk, or the cons of vaccination outweigh the benefits, or the science of global warming is revealed to be corrupt, the social hierarchy will be upended, and the status of many intellectuals will be permanently damaged. Some might end up tarred and feathered. With this perspective, it’s not surprising that ridiculous dogmas can often take centuries or even millennia to correct.
Speculation and Conclusion
In addition to the previous six points, I have a few other suspicions that I’m less confident of, but am currently researching:
1. Physical health might have declined over the 20th century due to reduced food quality, forgotten nutritional knowledge, and increased pesticides and pollutants in the environment. Industrialization created huge quantities of food at the expense of quality. Perhaps our dark age is partially caused by an overall reduction in brain function.
2. New communications technology, starting with the radio, might have helped proliferate bad ideas, amplified their negative impact, and increased the social cost of disagreement with the orthodoxy. If true, this would be another unintended consequence of modernization.
3. Conspiracy/geopolitics might be a significant factor. Occasionally, malice does look like a better explanation than stupidity.
In conclusion, the legacy of the 20th century is not an impressive one, and I do not currently have evidence that it was an era of great minds or even good ideas. But don’t take my word for it; the evidence will be supplied here over the coming years. If we are indeed in a dark age, then the first step towards leaving it is recognizing that we’ve been in one.
I know that some truths can be discovered through the application of pure reason, without appealing to empirical data. These truths are limited in number and tend to be very abstract, but they still exist and are foundational to our other beliefs. I give an argument for them in my book Square One: The Foundations of Knowledge.
However, rationalism is easily abused and often turns dogmatic. Our perennial critics – empiricists – correctly point out the many dogmas common to rationalism. In turn, we rationalists correctly point out the many unspoken assumptions of empiricism. The purpose of this article is to point out where my fellow rationalists are indeed being dogmatic, in particular, with regard to Austrian Economics.
I am an apriorist when it comes to economics. There is indeed a non-empirical, purely conceptual framework that we all bring to our analyses of human action. However, apriorist arguments are frequently over-extended by extreme apriorists, whose ideas are best represented by the philosopher Hans-Hermann Hoppe.
So for the sake of apriorism, we need to tighten up our arguments and specify exactly what can and cannot be claimed by appealing to pure logical analysis. We can make axiomatic deductive claims in economics, but they tell you almost nothing about the world. They are important claims – even fundamental – but they are so abstract that most people won’t find them relevant.
Take the common question:
“Does increasing the minimum wage cause a disemployment effect?”
I’ll be analyzing two different answers: “Yes, it certainly does,” and “Yes, it probably does, given reasonable assumptions.” The former is an apriorist claim: on purely logical grounds, we can know that increasing the minimum wage causes disemployment. The latter is an empirical claim: given what we know about the world, it’s most likely true that increasing the minimum wage causes disemployment, though it’s not logically necessary.
For all practical purposes, the latter position is correct, and the former position is dogmatic.
Other Things Equal…
The careful reader might have thought, “Ah, we have to clarify the proposition further. It’s not simply that an increase in the minimum wage causes disemployment. It’s that an increase in the minimum wage causes disemployment, ceteris paribus. You have to hold every other variable constant.”
I concede this point, and this new, more precise proposition is indeed true.
It’s true and neutered.
There is a world’s worth of difference between claiming,
“X causes Y,” and
“X causes Y, everything else constant.”
The claim that “X causes Y” is a regular claim about the world. The claim that “X causes Y, everything else constant” is not a regular claim about the world. In fact, it’s not really saying something about the world; it’s talking about a hypothetical world where only one variable changes at a time, which is not the world we inhabit. I fully recognize that the ceteris paribus condition can be helpful as a thought experiment to clarify our concepts, but it often gets abused by the dogmatic rationalist who ends up claiming:
“X causes Y, ceteris paribus.”
“Therefore, X causes Y.”
To state the error more concretely, the dogmatic rationalist says:
1) An increase in the minimum wage causes disemployment effects, ceteris paribus.
2) Therefore, an increase in the minimum wage causes disemployment effects.
This might seem like a subtle error, but in fact, it’s a catastrophic one. It’s partly the reason why aprioristic reasoning is seen as being dogmatic. Extreme apriorists try to claim that “X is a matter of logical necessity,” when in fact, it’s actually an empirical matter, and because of the axiomatic-deductive nature of their argument, they are not open to being convinced otherwise. I’ve had to change my own beliefs on this subject, having been more on the dogmatic side before realizing my ideas were true yet neutered.
Let’s take a simple example. Say I ask, “Will increasing the minimum wage in Seattle cause disemployment effects?” An extreme apriorist answers, “Yes, certainly, because of these particular causal connections…”
Now imagine I follow up with the question, “But what if nobody follows the minimum wage law? If nobody follows the law, then surely it won’t cause disemployment.”
I’ve actually asked this question several times of economists in person, and I’ve heard some interesting answers. One prominent apriorist told me, “Well, they’ve got to follow the law!”
But surely, in the real world, people don’t have to follow the law. And if they don’t follow the law, then the standard economic story simply doesn’t apply. I recognize that in the thought experiment they must follow the law – otherwise the analysis won’t work – but that’s not the world we live in. Nobody is asking economic questions about your thought experiment. They want to know what will happen in the real world if Seattle actually raises the minimum wage. Whether or not people follow the law is an empirical question; you cannot logically deduce the answer. Therefore, questions about the minimum wage in the real world require empirical assumptions in order to answer. Whether or not people follow the law is only one example; there are innumerable other empirical assumptions that get packed into economic claims. These assumptions might be perfectly reasonable, but they’re still empirical in nature.
As I like to say in metaphysics, it might be the case that the minimum wage in your head is not the same as the minimum wage in the world.
Changing Ideas
Imagine that the following were true:
“When the minimum wage increases, it changes the self-image of employees. They view themselves as being higher-quality workers and raise their productivity levels accordingly.”
If that were true, then an increase in the minimum wage could, in fact, increase employment. The increased productivity of workers could make their employers more money, which means the employers could afford to hire more people.
Notice that this is not a ceteris paribus scenario. The minimum wage would change, causing another variable to change: the ideas of employees.
So the question is this: do we live in a world where increasing the minimum wage changes the ideas of workers so that they are more productive?
It’s an empirical question.
Now, I personally don’t think we live in such a world. (Or if we do, the gains in productivity are not sufficient to offset the additional costs of employment.) However, I didn’t arrive at those conclusions through a series of logical deductions. I’ve observed the world, and I don’t think that’s the one we live in.
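To see the two claims side by side, here is a toy sketch in Python. The labor-demand function and every number in it are made up for illustration; it is not an economic model, just the logic of the argument.

```python
# Toy sketch of the ceteris paribus distinction. The demand function and
# all numbers are hypothetical, chosen only to illustrate the logic.

def employment(wage: float, productivity: float) -> float:
    """Hypothetical labor demand: hiring rises with productivity, falls with wage."""
    return max(0.0, 1000 * (productivity / wage - 1.0))

baseline = employment(wage=10.0, productivity=15.0)                # 500.0 workers

# Ceteris paribus: only the wage changes -> employment falls.
hike_only = employment(wage=12.0, productivity=15.0)               # 250.0 workers

# Hypothetical real world: the hike also changes workers' self-image and
# raises productivity -> employment can rise despite the higher wage.
hike_plus_productivity = employment(wage=12.0, productivity=19.0)  # ~583.3 workers

print(baseline, hike_only, hike_plus_productivity)
```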
Imagine that somebody were making an explicitly psychological case for raising the minimum wage. They wouldn’t claim, “We can increase employment by changing one-and-only-one variable: the minimum wage.” They’d claim, “We should increase the minimum wage because it changes other variables in the real world that increase worker productivity.” This is a coherent, empirical claim that you cannot refute by responding, “But ceteris paribus an increase in the minimum wage causes disemployment!”
Ceteris Dubious
If you think about it, the ceteris paribus parameter is odd in the first place. On the one hand, it’s important to help us understand cause and effect relationships in economics. On the other hand, it can lead to extreme myopia. Take an example outside of economics to see just how odd it is.
Imagine I’m a martial arts instructor, and I tell you, “When kids get promoted to a new colored belt, they perform at slightly higher levels because they think of themselves as being higher ranked.” This is something akin to the “winner effect” – the idea that winners gain confidence because they’re winners, which makes them more likely to win in the future.
Now imagine an apriorist walks in and says, “Bah! That’s logically impossible! Simply awarding a belt to a kid will not improve his martial arts skills because, ceteris paribus, there is no causal connection between wearing a different colored belt and gaining greater skill!”
It would be a bizarre, myopic argument indeed. Technically, the apriorist would be right. It’s not actually the different colored belt which makes kids better; it’s because their self-image changed. But this entirely misses the relevant phenomena we observe in the world.
(Personal anecdote: I actually experienced this in reverse when receiving my first black belt in karate. I was so focused on getting it that once I finally did, I found my skill level dropped. My body and mind got lazy because I wasn’t as focused on improvement anymore. The belt actually made me worse!)
It’s not hard to imagine a child improving his skill level because of a new belt. So why would it be hard to imagine an employee improving their skill level because of a raise? Yes, technically it’s not directly because of the belt or raise, but that’s a claim nobody is making. It’s a clear empirical possibility.
In What World?
Let’s take a step back to see the limitations of ceteris paribus reasoning.
Say you make a claim that “X is true.” Your claim should not change if you add the phrase, “in the real world.” So for example, if I say, “X is true,” I should be able to say that “X is true in the real world.”
What good is a proposition that claims “X is true, but not in the real world”?
For example, the claim “The minimum wage causes a disemployment effect” should be the same as “The minimum wage causes a disemployment effect in the real world.” Yet, the extreme apriorist’s claim is actually, “The minimum wage causes a disemployment effect ceteris paribus, but I can’t tell you what happens in the real world.”
Again, I think the claim is true, but it’s also neutered. If adding “in the real world” changes the validity of your claims, it should be a red flag. This is especially true for the many abuses of mathematics throughout various disciplines, but that’s an article for another time.
Here is one more example of the abuse of apriorism, taken from a lecture on praxeology that I attended at Mises University in 2011, delivered by Hans-Hermann Hoppe. He gives many examples of what he considers to be certainly true apriorist claims about how the world works.
Take this one, for example:
“If we increase the amount of money without increasing the quantity of non-money goods, social wealth will not be higher, but only prices will rise.”
This is a great example of dropping the ceteris paribus condition. His claim is true, but only if we’re holding every other variable constant. In the real world, where multiple variables change – including variables that we don’t know are causally connected – it might be the case that an increase in the amount of money could increase the amount of social wealth. It just takes a bit of imagination.
Imagine that we live in a world where the most incompetent people are the wealthiest, and the most capable entrepreneurs are all stuck in poverty. Now imagine there’s a large monetary inflation – a bunch of new money is printed and given to the poor, competent entrepreneurs. Suddenly, they have new means available to them. They start undertaking projects, employing people, and end up creating wealth for society. Without this inflation, they wouldn’t have had enough capital to undertake the new projects. This scenario is essentially a redistribution of wealth from the incompetent to the competent.
So, by increasing the amount of money, social wealth could indeed increase. This is possible because the increase of money isn’t ceteris paribus. When the entrepreneurs received the new money, their behavior also changed, which caused an increase in total social wealth.
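Here is that scenario as a back-of-the-envelope sketch. All figures are hypothetical, and it abstracts away the monetary mechanism entirely (new money is treated simply as command over real resources); the only point is that when another variable changes alongside the money supply, the ceteris paribus conclusion need not hold.

```python
# Back-of-the-envelope sketch of the redistribution scenario above.
# All figures are hypothetical; new money is treated as command over
# real resources handed to the competent-but-poor entrepreneurs.

RICH_OUTPUT_PER_UNIT = 0.01  # incompetent rich: wealth produced per unit of capital
POOR_OUTPUT_PER_UNIT = 0.20  # competent poor: wealth produced per unit of capital

def total_output(rich_capital: float, poor_capital: float) -> float:
    return rich_capital * RICH_OUTPUT_PER_UNIT + poor_capital * POOR_OUTPUT_PER_UNIT

before = total_output(1_000_000, 10_000)             # 10,000 + 2,000   = 12,000
new_money = 500_000                                  # printed, given to the poor entrepreneurs
after = total_output(1_000_000, 10_000 + new_money)  # 10,000 + 102,000 = 112,000

print(before, after)  # real output rises because behavior changed, not just prices
```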
I’m not advocating for such an inflation. I’m simply giving an example of where dropping the ceteris paribus condition turns a true-but-neutered claim into a false-and-dogmatic one.
In Defense of Apriorism
Alright, after criticizing the abuse of apriorism, I want to defend the methodology, because I do think it plays a fundamental role in economic reasoning.
From my perspective, sound apriorist claims are rarely about states of the world. They are about our concepts. A careful rationalist will correctly point out that everybody brings pre-empirical concepts to the table before analyzing any data. These concepts are themselves not the subject of empirical inquiry; they are the lens through which we make sense of empirical data.
Every discipline has these presuppositions – including physics, mathematics, biology, etc. – though most people simply aren’t aware of them because they tend to be very abstract and philosophic rather than concrete and scientific.
For example, take the claim that “Humans act purposefully.” It might sound like an empirical claim – that we could go out and test whether or not humans are acting purposefully. But that’s not really correct. What careful thinkers like Ludwig von Mises point out is that we interpret data about humans through the lens of purposeful action. When we observe humans, we presuppose a fundamentally different lens than when we observe billiard balls – that of purposeful action. It doesn’t make sense to say, “The billiard balls intended to go into their pockets after getting struck.” It does make sense to say, “Johnny intended to go to work at 3pm,” or “Johnny chose to take the train instead of the bus,” or “Johnny values classical music higher than electronica.”
The apriorist is examining the concepts within our own mental framework. When Johnny chooses Beethoven, we can also meaningfully say, “And he could have chosen otherwise, which means Johnny has a preference scale for music.” This preference scale isn’t measured or observed – it’s not really a thing-in-the-world. It’s an analytical construct that’s implied by our concepts about human action.
Take a fundamental economic law: the law of diminishing marginal utility. The first unit of a good gets employed to satisfy the most highly valued end; each additional unit will satisfy a lower-valued end. Is this an empirical claim? Not really. What we mean by “highest valued end” is precisely that it gets satisfied before other ends. It’s part of the definition – the pre-empirical conceptual lens. To say, “Johnny satisfied the lower-valued X prior to the higher-valued Y” is really to say that “Johnny actually valued X higher than Y.”
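A minimal sketch of that conceptual point, with hypothetical ends: because “highest-valued end” just means “satisfied first,” each additional unit goes to a lower-ranked end by definition.

```python
# Minimal sketch of diminishing marginal utility as a conceptual truth.
# Johnny's ends are hypothetical, already ranked from most to least valued.

RANKED_ENDS = ["drinking", "cooking", "washing", "watering the lawn"]

def marginal_end(units_of_water: int) -> str:
    """The end satisfied by the last unit: each successive unit goes to a lower-ranked end."""
    return RANKED_ENDS[units_of_water - 1]

for n in range(1, len(RANKED_ENDS) + 1):
    print(f"unit {n} satisfies: {marginal_end(n)}")

# unit 1 satisfies: drinking
# unit 2 satisfies: cooking
# unit 3 satisfies: washing
# unit 4 satisfies: watering the lawn
```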
When I worked for FEE, I remember listening to a lecture by Israel Kirzner, who was a student of Mises. He and some fellow students apparently asked Mises, “But how do we know that humans act?” To which Mises replied, “We observe it.”
I think this is key. To say, “We observe it,” is to say, “Well, we’re guessing, based on the behavior of the objects we observe and based on our own internal introspection, since we have special access to knowledge about what humans are. We’re interpreting human phenomena through the lens of purposeful action. It might very well be the case that humans do not act, but then you’ve got a great deal of explaining to do.”
In other words, the concept of “purposeful human action” is extraordinarily powerful – so powerful that it’s fused to the lens of virtually anybody analyzing human behavior. If the concept correlates to the world, we can use pure logical deduction to come up with a kind of aprioristic framework for analyzing human action. If the concept is false, then… perhaps everything is a great hallucination or something else wild and bizarre.
If you accept that humans act purposefully, then you are bound by a particular logical framework that rationalists have discovered. The framework is abstract and limited in scope, but it still exists and is fundamental.
There are other places where apriorism is fundamental to economic reasoning, though I will not spend much time covering them. For example, take the proposition that “Scarcity exists” – i.e. there aren’t enough goods for everybody to have all their ends satisfied. If this is true, it implies other aprioristic truths that we don’t have to observe in the world. If scarcity exists, then humans must choose which ends to satisfy with their scarce means. And if humans choose “this” over “that,” we can meaningfully talk about preference scales, the law of diminishing marginal utility, and if we’re careful, we can even deduce the general framework of the laws of supply and demand. Also note, the particular claim that “scarcity exists” is actually a claim about the world. It’s not just purely about our concepts.
Even in the earlier example about the minimum wage, apriorism and ceteris paribus reasoning can serve a valuable purpose. You could say, “To the extent that employment increases after a minimum wage hike, it is certainly not the case that the additional employment was solely caused by the cost of employment increasing.”
Again, it’s not a particularly relevant claim, since innumerable variables are always changing, but it is still true. Careful ceteris paribus reasoning allows us to hyper-focus on cause and effect relationships. What exactly causes what, and for what reasons? If my martial arts skill improves when I get a new belt, it must be the case that some other variable changed. Holding “everything else constant” in our minds greatly improves our ability to identify cause and effect in a complex world.
So, I think the most accurate approach to economic reasoning is a mixture of rationalism and empiricism. We all bring non-empirical conceptual frameworks to the table whenever we analyze a particular phenomenon. It’s valuable – even essential – to examine, explain, and flesh out the implications of these pure concepts. However, conceptual frameworks and extremely distant abstract truths tell you almost nothing about what happens in the world. Economics is supposed to be about the world, not about our own minds, and the world is extremely complex. Multiple variables are changing every instant. The ceteris paribus parameter tells you essentially nothing about a world where more than one variable changes at the same time. Since that’s the world we live in, I suggest we rationalists stop abusing apriorism in economics.
“Finland and Sweden have national health care, free college, affordable housing and a higher standard of living… Why shouldn’t that appeal to our disappearing middle class?”
Like millions of Americans, I am inspired by Bernie Sanders*. He’s a true progressive – compassionate, intelligent, and full of solutions. He embodies a core principle of progressive thought: the willingness to identify problems and solve them with government. Unlike so many free-market advocates, he isn’t dogmatically attached to old economic doctrines.