Markets Fail to Solve the Complexity Problem

Markets are extraordinary. Nearly all the textbook criticisms of them are of poor quality and can be refuted without much effort. Laissez-faire is such a theoretical triumph that it’s mesmerizing—it can be difficult to see the bad amongst all the good.

Clever libertarians will quickly tell you that “markets are not perfect!”, but they rarely examine just how imperfect. They might focus on markets’ intrinsic power of self-correction, while overlooking the painful reality of what self-correction means. I speak from experience here as a market absolutist. Everything is better within markets, yet real, deep, structural dangers remain, and it is risky to overlook them.

Laissez-faire is a theoretical triumph and a painful reality.

Trust and Incentives

A rough libertarian model says that markets are trustworthy because of the powerful incentive structure within them, which rewards good behavior and punishes the bad. Goods can generally be trusted to be safe and satisfactory by virtue of being produced in markets.

This, I want to argue, is false. The incentive structure within markets is not strong enough to prevent harmful goods and services from being produced, even for products that are extremely popular and successful.

The libertarian wants to walk into a store, pull an item off the shelf, and confidently proclaim that it’s safe because of market forces—producers don’t want to harm their customers, after all. I wish this were the reality, but I think it’s utopian. Such a system will never exist. Blind faith in markets is not justified.

Markets do indeed fail… sort of. It depends on what you mean by “failure.” At the very least, markets produce goods and services that are harmful to people—goods that consumers, if they had higher-quality information, would choose not to purchase. These goods and services can cause widespread harm, even to individuals not directly involved in the transaction. By this metric, markets do fail, especially on short timescales.

Ham Sandwich Theory

It’s worth telling a short story about the wondrous incentives within markets. It goes like this:

We are so advanced that we take for granted the many miracles that happen throughout the day. We live in a dangerous world filled with unstable, bitter, incompetent people. Yet, we trust absolute strangers to prepare our meals for us and don’t think twice about it.

Say you stop by a food truck to purchase a ham sandwich. You don’t know anything about the owner; you don’t know where he got his food, his sanitary standards, or his opinion towards [your group identity]. And yet, you trust the stranger with your life. He could poison you, after all.

Why do we trust complete strangers in this way?

It sounds like an ethical question, but economics has a better explanation. Market incentives are sufficiently strong to punish people who end up poisoning their customers. A dangerous sandwich seller, whether malicious or incompetent, will quickly go out of business. That’s the gist of it.

To the average market enthusiast, this principle extends to all products—the same incentives explain why you trust the manufacturers of shampoos, cars, watches, and literally everything else you purchase—and the conclusion is that the output of markets can generally be trusted.

The point of this article is not to explain those incentives, but they are worthy of deep examination. They are powerful enough to bring humanity out of poverty.

The Complexity Problem Strikes Again

The problem is that not every good is like a ham sandwich. The more complex the good, the less reliable markets become.

Food poisoning is obvious and visible. Sellers get almost immediate feedback. Angry, poisoned customers can tell others to stay away. Profits are immediately affected.

The real trouble happens when the situation is more complex. Instead of food poisoning within 24 hours, imagine a food additive that raises your risk of cancer after 5, 10, or 20 years, or that disrupts your gut microbiome—an extraordinarily complex ecosystem that we’re only now discovering is connected to a wide range of diseases. This might already be happening with ubiquitous additives like carrageenan and cellulose gum. Or, consider the situation where no product in particular is implicated, but rather a whole industrial process. Microplastics in our water supply are a real problem; there is no singular “microplastic producer” to blame or sue for compensation.

Greed is not the primary problem here. Complexity is.

First of all, it’s extremely difficult to establish causality. How do we figure out whether [preservative X] actually increases the risk of colon cancer? How do we figure out whether EMF exposure is a real problem? As a pure intellectual matter, it’s extremely difficult, often expensive, and time-consuming—and with the current state of academic research, literally impossible.

Entrepreneurs are not going to figure this information out beforehand. They cannot: the systems are so complex that we don’t even know what to look for. We are relegated to finding empirical causal associations after the fact—after some researcher discovers, years later, that [product Y] damages your reproductive health.

Despite these risks, the overwhelming incentive is to push products to market anyway. There are hundreds of billions of dollars at stake, for example, in the worldwide rollout of 5G technology. If you think we understand the real-world health impacts of 5G, you have been the subject of propaganda. We have no fucking idea what the effects are, especially over years and decades of exposure.

Real science has no chance here, not in the short term. Billions of dollars can be made, and it only takes millions of dollars to buy scientists. It doesn’t matter whether the scientists are corrupt or incompetent; figuring out causality beforehand is simply too difficult.

In the Long Run, We’re All Dead

Libertarians will want to counter, “But harmful companies would be subject to lawsuits! They’d be sued into oblivion!”

This is naive. First of all, there is a straightforward, short-to-mid-term profit incentive. If the company can generate billions of dollars over the next few decades, what does it matter if it gets sued into oblivion in 40 years? The executives and employees involved will be gone. Shareholders will be happy for a generation or two. Business decisions rarely account for consequences 40 years in the future.

With sufficient short-term gains, long-term pain becomes irrelevant. Lies are sometimes more profitable than the truth.
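The arithmetic behind this incentive is easy to sketch with a discounted cash-flow calculation. All of the figures below—the $2B annual profit, the $50B lawsuit in year 40, the 8% discount rate—are invented for illustration; the point is only that distant liabilities shrink dramatically when discounted to the present:

```python
# Hypothetical scenario: a product earns $2B/year for 15 years,
# then a $50B lawsuit destroys the company in year 40.
# Discounting at an assumed 8% cost of capital shows why the
# distant lawsuit barely registers in today's decision.

DISCOUNT_RATE = 0.08

def present_value(amount, years_from_now, rate=DISCOUNT_RATE):
    """Value today of a cash flow received `years_from_now` years out."""
    return amount / (1 + rate) ** years_from_now

# Profits in years 1 through 15, each discounted back to today.
profits = sum(present_value(2e9, t) for t in range(1, 16))

# The lawsuit is a single negative cash flow in year 40.
lawsuit = present_value(-50e9, 40)

print(f"PV of profits: ${profits / 1e9:.1f}B")
print(f"PV of lawsuit: ${lawsuit / 1e9:.1f}B")
print(f"Net today:     ${(profits + lawsuit) / 1e9:.1f}B")
```

Under these made-up numbers, roughly $17B of discounted profit faces only about $2.3B of discounted liability: the venture is overwhelmingly attractive today, even with certain ruin in year 40.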

Even the idea of there being long-term pain assumes that scientific truth actually wins. It might take 40 years of intellectual and scientific battles to establish causality. Who is funding that research, and for what reason? Imagine the Herculean effort it takes to stand up to multi-billion dollar companies whose entire business model depends on the safety of the product in question. Real research is an existential threat, so the companies have every incentive to attack, discredit, corrupt, or destroy researchers.

There is simply too much money at stake for people to care about the truth. This battle cannot be fought only once—it will remain an ongoing battle between innovators and researchers, indefinitely.

Failure Cycles

Society is in a state of constant flow. Individuals are always learning, failing, succeeding, discovering, and forgetting. People get hired and fired. They advance and regress. I like to think of this flow as a progression through “failure cycles.”

In economics, profits and losses are both integral to a healthy economy. Profits are the reward for creating value; losses are the penalty for destroying value. Since resources are scarce, it’s critically important that unprofitable entities go bankrupt—otherwise, they would turn into zombies, forever draining resources into a black hole.

The longer a failure cycle takes to complete, the more resources get wasted in the process. Imagine a sclerotic company losing money because it refuses to change its ways. Banks might lend it money to continue operations. If it fails to adjust and is forced into bankruptcy, then the money lent to it is wasted—it could have gone elsewhere, to more productive entrepreneurs.

Consider incompetent management of a sports team. Say the owner of a franchise hires the wrong coaching staff. Season after season, they end up with losing records, and their fan base stops attending games. The sooner the incompetent people get fired, or ownership changes, the sooner the franchise can be turned around—even if firing people causes a lot of short-term pain.

Perhaps the best example of a dysfunctional failure cycle is in public health. Anthony Fauci has enjoyed a long career in government; meanwhile, he appears to be incompetent (or malicious) and should probably be in jail. His failure cycle has taken decades too long to complete, and countless people have suffered because of it. If he had been fired in the ’80s, we’d all be better off.

Failure cycles cause terrible pain. Bankruptcy sucks for the individuals affected, but it can also generate ripple effects for others. Employees, clients, and customers can all have their lives disrupted. Dangerous products damage real people and their families. When banks fail, it can cause a cascade of other failures, amplifying the total damage done to the economy. Yet, despite the pain, these cycles are necessary, because the alternative is worse.

Banks sometimes need to fail. Bad decisions have to get punished; moral hazard has to be avoided. Otherwise, corrections will never happen—the most corrupt and incompetent people will continue to make the world a worse place and drain our resources.

There will always be dangerous products on the market. Therefore, we must have failure cycles, and the quicker they operate, the better.

Unfortunately, the more complex the product, the longer the failure cycle takes.

5G might honestly be terrible for your health, and if so, the public is not going to know about it for years—perhaps not until it completes its entire technological life-cycle. The same can be said for a ton of different products, from pesticides to preservatives to pills.

Market incentives are not strong enough to eliminate these failure cycles—and sometimes, the failures are spectacular. Walk into any supermarket in the United States, and I bet you will find a huge percentage of products that will be recognized as dangerous within a century: everything from food and drink to health supplements, household cleaners, and gadgets. These items will be seen as just as dangerous as lead pipes and asbestos are today.

“Solving” Unsolvable Problems…

Critics of markets generally make bad arguments. But they make extraordinarily bad arguments when it comes to solutions for the problems I’ve highlighted above. They correctly see that the market generates dangerous products. But their solutions tend to be ridiculous—have the government regulate the market! Have a panel of experts figure out the truth, then force everybody to comply! Politics will surely solve these problems!

If complexity is the problem, then government is not the solution.

Government is the opposite of the market, for a host of reasons. For one, its resources come from taxation, not profits and losses. Even if the government is phenomenally incompetent, it does not go bankrupt. The failure cycle is extremely slow—Fauci has been in government for more than half a century.

The only thing worse than a dangerous medical product is a dangerous medical product that has been approved by the regulatory apparatus and forced onto millions of people.

This inefficiency and incompetence are built into the structure of government. Even if we grant that a competent individual might find themselves in a position of political power, and even if they do a good job, the structure of government will eventually ensure that this competence is strangled into oblivion.

Trying to solve the problem of market failure with government makes the cure worse than the disease. Official regulatory boards are centralized points of failure; they are honeypots for companies to corrupt. They turn painful failure cycles into catastrophic ones.

You cannot have centralized power without centralized, systemic risk.

… When the Problem is Reality

The complexity problem, seen from a different angle, is simply the problem of life. Life is complex, and there’s nothing we can do about it. Life is so complex that failure is inevitable. Failure cycles are also inevitable. People are always going to create dangerous products; researchers are always going to fail; corruption is never going away; businessmen are never going to stop selling bullshit; and reality will continue to remain maximally complex into the future. We are not moving into a world where blind faith in humans is justified.

Despite my pessimistic conclusions about markets, I still think they are, by far, the best possible solution to the complexity problem. Decentralized, natural, emergent systems can handle more complexity than centralized, top-down, authoritarian systems. There’s no way around it; it’s so true, it’s almost a tautology.

The best solution to the complexity problem is fast failure cycles. Failure as a feature, not a bug. The sooner we learn how stupid we are, the sooner we can correct our mistakes. This requires freedom and involves pain—just less pain than the alternative.

Libertarians are right that freedom is the answer. Markets do self-correct, but they might take multiple decades and damage millions of people in the process. It’s a bitter pill to swallow.


In the spirit of free markets, there’s a practical, entrepreneurial question here. Governmental regulatory bodies hold little credibility—so, in the future, who will do the real safety research?

How will dissident researchers get paid?

These are critically important, exciting questions, and lots of money will be made by the next generation of competent entrepreneurs.