Category: Academia and Self-Study

  • Theoretical Pluralism and Chess

    I am drawn to a weak form of religious pluralism. 

    The strong form says, “All paths lead to the same God. All approaches are equally valid. There is no right or wrong way.”

    The opposite extreme says, “There is exactly one denomination of one religion which is the correct path to God. All other paths are heretical and wrong.”

    I think the truth is somewhere in the middle. Many paths do lead to the same God; many approaches work, but the different paths are not equally true or sophisticated. Some paths you can sprint down; other paths are dangerous and will take you in the opposite direction of truth.

    In other words, there are indeed many ways to skin a cat. But there are many more ways to do it wrong. Chess provides us with a great analogy.


    There are different schools of thought in chess, different theoretical perspectives. Around a century ago, there was a notable tension between the classical (or “modern”) school and the hypermodern school. On a few critical points, the moderns and the hypermoderns had radically different perspectives.

    Perhaps the key disagreement was this: how should you exert influence over the center of the board in the opening? The classical approach says that you should occupy the center with your pawns. Here’s an example of the Queen’s Gambit Declined, where black has responded to 1. d4 with 1… d5, putting his own pawn right in the center.

    The hypermoderns took a different approach. They said the center can be influenced indirectly, from the flanks and from distant pieces. They even invited their opponent to occupy the center, so they could undermine their pawn structure later. Here’s an example of the same opening from white, but a very different approach from black in the Queen’s Indian Defense:

    Notice that black has no pawns in the center and is instead exerting influence with his knight and bishop. These are two very different approaches. So who is right?

    The answer is: it depends. There is truth in both perspectives. Even though the principles are opposites of one another, they both work when skillfully executed. Right now, it looks like the most advanced chess theory is some hybrid of modern and hypermodern (perhaps the “hyper-hyper-modern”?). Both modern and hypermodern openings are still used today at the highest levels.
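    The contrast between the two schools can be made concrete with a small sketch. This is illustrative only, not a chess engine: the pawn placements below are hand-transcribed from standard move orders for each opening, and the variable names are mine, not from any chess library.

```python
# Compare how a classical and a hypermodern opening treat the four
# central squares (d4, d5, e4, e5) after the first few moves.
CENTER = {"d4", "d5", "e4", "e5"}

# Queen's Gambit Declined (classical): 1. d4 d5 2. c4 e6
# Both sides plant a pawn directly on a central square.
qgd_pawn_squares = {"white": {"d4", "c4"}, "black": {"d5", "e6"}}

# Queen's Indian Defense (hypermodern): 1. d4 Nf6 2. c4 e6 3. Nf3 b6
# Black keeps his pawns off the central squares and prepares ...Bb7,
# pressuring e4 and d5 from a distance with knight and bishop instead.
qid_pawn_squares = {"white": {"d4", "c4"}, "black": {"e6", "b6"}}

def central_pawns(pawns_by_side):
    """Which central squares does each side occupy with a pawn?"""
    return {side: squares & CENTER for side, squares in pawns_by_side.items()}

print(central_pawns(qgd_pawn_squares))  # black holds d5 with a pawn
print(central_pawns(qid_pawn_squares))  # black holds no central square with a pawn
```

    Both structures are perfectly playable; the difference is only in whether central influence is exerted by occupation or from the flanks.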


    Returning to the question of pluralism. It would be an obvious mistake to conclude, “Since the modern and hypermodern openings work, there is no truth in chess! All openings are equally good and valid!”

    Just like it would be a mistake to conclude, “There is only one opening that works. All other openings are wrong and lead their users to hell.”

    The truth is somewhere in the middle. Many different openings work. In fact, with the use of AI in chess, openings that until recently were considered broken are being resurrected.

    Different chess principles and theories can also work, even if they are directly opposed to each other. And yet, despite this degree of pluralism, there are still obviously superior and inferior chess moves. There are openings with objectively higher levels of success than others.

    There really is truth to be discovered in chess. These truths might eventually be synthesized into the One True Theory, which explains the nuances of why the classical and hypermodern openings work and when they don’t. Some day, we might even have powerful enough technology to solve chess and tell us, once and for all, whether White can force mate from the opening, or whether perfect play ends in a draw.

    But chances are, for the foreseeable future, we are not going to have the One True Chess Theory. We’re instead stuck with lesser theories which vary in their level of sophistication.

    So it is, I claim, with philosophical, religious, and scientific theories. A weak form of pluralism is the right approach to capture the most truths.

  • Our Present Dark Age, Part 1

    For the last fifteen years, I’ve been researching a wide range of subjects. Full-time for the last seven years. I’ve traveled the world to interview intellectuals for my podcast, but most of my research has been in private. After careful examination, I have come to the conclusion that we’ve been living in a dark age since at least the early 20th century. 

    Our present dark age encompasses all domains, from philosophy to political theory, to biology, statistics, psychology, medicine, physics, and even the sacred domain of mathematics. Low-quality ideas have become common knowledge, situated within fuzzy paradigms. Innumerable ideas which are assumed to be rigorous are often embarrassingly wrong and utilize concepts that an intelligent teenager could recognize as dubious. For example, the Copenhagen interpretation in physics is not only wrong, it’s aggressively irrational—enough to damn its supporters throughout the 20th century.

    Whether it’s the Copenhagen interpretation, Cantor’s diagonal argument, or modern medical practices, the story looks the same: shockingly bad ideas become orthodoxy, and once established, the social and psychological costs of questioning the orthodoxy are sufficiently high to dissuade most people from re-examination.

    This article is the first of an indefinite series that will examine the breadth and depth of our present dark age.  For years, I have been planning on writing a book on this topic, but the more I study, the more examples I find. The scandals have become a never-ending list. So, rather than indefinitely accumulate more information, I’ve decided to start writing now.

    Darkness Everywhere

    By a “dark age”, I do not mean that all modern beliefs are false. The earth is indeed round. Instead, I mean that all of our structures of knowledge are plagued by errors at all levels, from the trivial to the profound, from the peripheral to the fundamental. Nothing that you’ve been taught can be believed because you were taught it. Nothing can be believed because others believe it. No idea is trustworthy because it’s written in a textbook.

    The process that results in the production of knowledge in textbooks is flawed, because the methodology employed by intellectuals is not sufficiently rigorous to generate high-quality ideas. The epistemic standards of the 20th century were not high enough to overcome social, psychological, and political entropy. Our academy has failed. 

    At present, I have more than sixty-five specific examples that vary in complexity. Some ideas, like the Copenhagen interpretation, have entire books written about them, and researchers could spend decades understanding their full history and significance. The global reaction to COVID-19 is another example that will be written about for centuries. Other ideas, like specific medical practices, are less complex, though the level of error still suggests a dark age. 

    Of course, I cannot claim this is true in literally every domain, since I have not researched every domain. However, my studies have been quite broad, and the patterns are undeniable. Now when I research a new field, I can quickly and accurately predict where the scandalous assumptions lie, thanks to recognizable patterns of argument and predictable social dynamics.

    Occasionally, I will find a scholar who has done enough critical thinking and historical research to discover that the ideas he was taught in school are wrong. Usually, these people end up thinking they have discovered uniquely scandalous errors in the history of science. The rogue medical researcher who examines the origins of the lipid hypothesis, the mathematician who wonders about set theory, or the biologist who investigates fundamental problems with lab rats—they’ll discover critical errors in their discipline but think they are isolated events. I’m sorry to say, they are not isolated events. They are the norm, no matter how basic the conceptual error.

    Despite the ubiquity of our dark age, there have been bright spots. The progress of engineers cannot be denied, though it’s a mistake to conflate the progress of scientists with the progress of engineers. There have been high-quality dissenters. Despite being dismissed as crackpots and crazies by their contemporaries, their arguments are often superior to the orthodoxies they criticize, and I suspect history will be kind to these skeptics. 

    Due to recent events and the proliferation of alternative information channels, I believe we are exiting the dark age into a new Renaissance. Eventually, enough individuals will realize the severity of the problems with existing orthodoxies and the systemic problems with the academy, and they will embark on their own intellectual adventures. The internet has made possible a new life of the mind, and it’s unleashing pent-up intellectual energies around the world that will bring illumination to our present situation, in addition to creating the new paradigms that we desperately need.

    Why Did This Happen?

    It will take years to go through all of the examples, but before examining the specifics, it’s helpful to see the big picture. Here’s my best explanation for why we ended up in a dark age, summarized into six points:

    1. Intellectuals have greatly underestimated the complexity of the world.

    The success of early science gave us false hope that the world is simple. Laboratory experiments are great for identifying simple structures and relationships, but they aren’t great for describing the world outside of the laboratory. Modern intellectuals are too zoomed-in in their analyses and theories. They do not see how interconnected the world is nor how many domains one has to research in order to gain competence. For example, you simply cannot have a rigorous understanding of political theory without studying economics. Nor can you understand physics without thinking about philosophy. Yet, almost nobody has interdisciplinary knowledge or skill.  

    Even within a single domain like medicine, competence requires a broad exposure to concepts. Being too zoomed-in has resulted in medical professionals who don’t understand basic nutrition, immunologists who know nothing of virology, surgeons who unnecessarily remove organs, dentists who poison their patients, and doctors who prolong injury by prescribing anti-inflammatory drugs and harm their patients through frivolous antibiotic usage. The medical establishment has greatly underestimated the complexity of biological systems, and due to this oversimplification, they yank levers that end up causing more harm than good. The same is true for the economists and politicians who believe they can centrally plan economies. They greatly underestimate the complexity of economic systems and end up causing more harm than good. That’s the standard pattern across all disciplines.

    2. Specialization has made people stupid.

    Modern specialization has become so extreme that it’s akin to a mental handicap. Contemporary minds are only able to think about a couple of variables at the same time and do not entertain variables outside of their domain of training. While this myopia works, and is even encouraged, within the academy, it doesn’t work for understanding the real world. The world does not respect our intellectual divisions of labor, and ideas do not stay confined to their taxonomies. 

    A competent political theorist must have a good model of human psychology. A competent psychologist must be comfortable with philosophy. Philosophers, if they want to understand the broader world, must grasp economic principles. And so on. The complexity of the world makes it impossible for specialized knowledge to be sufficient to build accurate models of reality. We need both special and general knowledge across a multitude of domains.

    When encountering fundamental concepts and assumptions within their own discipline, specialists will often outsource their thinking altogether and say things like “Those kinds of questions are for the philosophers.” They are content leaving the most important concepts to be handled by other people. Unfortunately, since competent philosophers are almost nowhere to be found, the most essential concepts are rarely examined with scrutiny. So, the specialist ends up with ideas that are often inferior to those of the uneducated, since uneducated folks tend to have more generalist models of the world.

    Specialization fractures knowledge into many different pieces, and in our present dark age, almost nobody has tried to put the pieces back together. Contrary to popular opinion, it does not take specialized knowledge or training to comment on the big picture or see conceptual errors within a discipline. In fact, a lack of training can be an advantage for seeing things from a fresh perspective. The greatest blind spots of specialists are caused by the uniformity of their formal education.

    The balance between generalists and specialists is mirrored by the balance between experimenters and theorists. The 20th century had an enormous lack of competent theorists, who are often considered unnecessary or “too philosophical.” Theorists, like generalists, are able to synthesize knowledge into a coherent picture and are absolutely essential for putting fractured pieces of knowledge back together.

    3. The lack of conceptual clarity in mathematics and physics has caused a lack of conceptual clarity everywhere else. These disciplines underwent foundational crises in the early 20th century that were not resolved correctly.

    The world of ideas is hierarchical; some ideas are categorically more important than others. The industry of ideas is also hierarchical; some intellectuals are categorically more important than others. In our contemporary paradigm, mathematics and physics are considered the most important domains, and mathematicians and physicists are considered the most intelligent thinkers. Therefore, when these disciplines underwent foundational crises, it had a devastating effect upon the entire world of ideas. The foundational notion of a knowable reality came into serious doubt.

    In physics, the Copenhagen interpretation claimed that there is no world outside of observation—that it doesn’t even make sense to talk about reality-in-some-state separate from our observations. When the philosophers disagreed, their word was pitted against the word of physicists. In the academic hierarchy, physicists occupy a higher spot than philosophers, so it became fashionable to deny the existence of independent reality. More importantly, within the minds of intellectuals, even if they naively believe in the existence of a measurement-independent world, upon hearing that prestigious physicists disagree, most people end up conforming to the ideas of physicists who they believe are more intelligent than themselves. 

    In mathematics, the discovery of non-Euclidean geometries undermined a foundation that had been built upon for two thousand years. Euclid was often assumed to be a priori true, despite high-quality criticisms leveled at his system for thousands of years. If Euclid is not the rock-solid foundation of mathematics, what is? In the early 1900s, some people claimed the foundation was logic (and they were correct). Others claimed there is no foundation at all, or that mathematics is meaningless because it’s merely the manipulation of symbols according to arbitrary rules.

    David Hilbert was a German mathematician who tried to unify all of mathematics under a finite set of axioms. According to the orthodox story, Kurt Gödel showed in his famous incompleteness theorems that such a project was impossible. Worse than impossible, actually. He supposedly showed that any attempt to formalize mathematics within an axiomatic system would either be incomplete (meaning some mathematical truths cannot be proven), or if complete, the system becomes inconsistent (meaning it contains a logical contradiction). The impact of these theorems cannot be overstated, both within mathematics and outside of it. Intellectuals have been abusing Gödel’s theorems for a century, invoking them to make all kinds of anti-rational arguments. Inescapable contradictions in mathematics would indeed be devastating, because after all, if you cannot have conceptual clarity and certainty in mathematics, what hope is there for other disciplines?
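    For reference, the orthodox statement being invoked here, in its modern (Rosser-strengthened) form, runs roughly as follows:

```latex
\textbf{First incompleteness theorem (orthodox statement).}
For any consistent, effectively axiomatized formal theory $T$ that
interprets elementary arithmetic, there exists a sentence $G_T$ in the
language of $T$ such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \neg G_T ,
\]
i.e., $T$ can prove neither $G_T$ nor its negation, so $T$ is incomplete.
```

    Whether this formal result licenses the sweeping philosophical conclusions that have been drawn from it is, of course, exactly what is at issue.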

    Due to the importance of physics and mathematics, and the influence of physicists and mathematicians, the epistemic standards of the 20th century were severely damaged by these foundational crises. The rise of logical positivism, relativism, and even scientism can be connected to these irrationalist paradigms, which often serve as justification for abandoning the notion of truth altogether. 

    4. The methods of scientific inquiry have been conflated with the processes of academia.

    What is science? In our current paradigm, science is what scientists do. Science is what trained people in lab coats do at universities according to established practices. Science is what’s published in scientific journals after going through the formal peer review process. Good science is what wins the awards that scientific institutions give out. In other words, science is now equivalent to the rituals of academia.

    Real empirical inquiry has been replaced by conformity to bureaucratic procedures. If a scientific paper has checked off all the boxes of academic formalism, it is considered true science, regardless of the intellectual quality of the paper. Real peer review has been replaced by formal peer review—a religious ritual that is supposed to improve the quality of academic literature, despite all evidence to the contrary. The academic publishing system has obviously become dominated by petty and capricious gatekeepers. With the invention of the internet, it’s probably unnecessary altogether.

    “Following standard scientific procedure” sounds great unless it’s revealed that the procedures are mistaken. “Peer review” sounds great, unless your peers are incompetent. Upon careful review of many different disciplines, the scientific record demonstrates that “standard practice” is indeed insufficient to yield reliable knowledge, and chances are, your scientific peers are actually incompetent.

    5. Academia has been corrupted by government and corporate funding.

    Over the 20th century, the amount of money flowing into academia has exploded and degraded the quality of the institution. Academics are incentivized to spend their time chasing government grants rather than researching. The institutional hierarchy has been skewed to favor the best grant-winners rather than the best thinkers. Universities enjoy bloated budgets, both from direct state funding and from government-subsidized student loans. As with any other government intervention, subsidies cause huge distortions to incentive structures and always increase corruption.  Public money has sufficiently politicized the academy to fully eliminate the separation of Science and state.

    Corporate-sponsored research is also corrupt. Companies pay researchers to find whatever conclusion benefits the company. The worst combination happens when the government works with the academy and corporations on projects, like the COVID-19 vaccine rollout. The amount of incompetence and corruption is staggering and will be written about for centuries or more.

    In the past ten years, the politicization of academia has become apparent, but it has been building since the end of WWII. We are currently seeing the result of far-left political organizing within the academy that has affected even the natural sciences. Despite being openly hostile to critical thinking, they have successfully suppressed discussion within the institution that’s supposed to exist to pursue truth—a clear and inexcusable structural failure.

    6. Human biology, psychology, and social dynamics make critical thinking difficult.

    Nature does not endow us with great critical thinking skills from birth. From what I can tell, most people are stuck in a developmental stage prior to critical thinking, where social and psychological factors are the ultimate reason for their ideas. Gaining popularity and social acceptance are usually higher goals than figuring out the truth, especially if the truth is unpopular. Therefore, the real causes for error are often socio-psychological, not intellectual—an absence of reasoning rather than a mistake of reasoning. Before reaching the stage of true critical thinking, most people’s thought processes are stunted by issues like insecurity, jealousy, fear, arrogance, groupthink, and cowardice. It takes a large, never-ending commitment to self-development to combat these flaws.

    Rather than grapple with difficult concepts, nearly every modern intellectual is trying to avoid embarrassment for themselves and for their social class. They are trying to maintain their relative position in a social hierarchy that is constructed around orthodoxies. They adhere to these orthodoxies, not because they thought the ideas through, but because they cannot bear the social cost of disagreement. 

    The greater the conceptual blunder within an orthodoxy, the greater the embarrassment to the intellectual class that supported it; hence, few people will stick their necks out to correct serious errors. Of course, few people even entertain the idea that great minds make elementary blunders in the first place, so there’s a low chance most intellectuals even realize the assumptions of their discipline or practice are wrong.

    Not even supposed mathematical “proofs” are immune from social and psychological pressures. For example, Gödel’s incompleteness theorems are not even considered candidates for skepticism; mathematicians treat them as a priori truths (which looks absurd to anybody who has actually examined the philosophical assumptions underpinning modern mathematics).

    Individuals who consider themselves part of the “smart person club”—that is, those who self-describe as intellectuals and are often part of the academy—have a difficult time admitting errors in their own ideology. But they have an exceptionally difficult time admitting error by the “great minds” of the past, due to group dynamics. It’s one thing to admit that you don’t understand quantum mechanics; it’s an entirely different thing to claim Niels Bohr did not understand quantum mechanics. The former admission can actually gain you prestige within the physics club; the latter will get you ostracized.

    All fields of thought are under constant threat of being captured by superficial “consensus” by those who are seeking to be part of an authoritative group. These people tend to have superior social/manipulative skills, are better at communicating with the general public, and are willing to attack any critics as if their lives depended on it—for understandable reasons, since the benefits of social prestige are indeed on the line when sacred assumptions are being challenged.

    If this analysis is correct, then the least examined ideas are likely to be the most fundamental, have the greatest conceptual errors, and have been established the longest. The longer the orthodoxy exists, the higher the cost of revision, potentially costing an entire class their relative social position. If, for example, the notion of the “completed infinity” in mathematics turns out to be bunk, or the cons of vaccination outweigh the benefits, or the science of global warming is revealed to be corrupt, the social hierarchy will be upended, and the status of many intellectuals will be permanently damaged. Some might end up tarred and feathered. With this perspective, it’s not surprising that ridiculous dogmas can often take centuries or even millennia to correct.

    Speculation and Conclusion

    In addition to the previous six points, I have a few other suspicions that I’m less confident of, but am currently researching:

    1. Physical health might have declined over the 20th century due to reduced food quality, forgotten nutritional knowledge, and increased pesticides and pollutants in the environment. Industrialization created huge quantities of food at the expense of quality. Perhaps our dark age is partially caused by an overall reduction in brain function.

    2. New communications technology, starting with the radio, might have helped proliferate bad ideas, amplified their negative impact, and increased the social cost of disagreement with the orthodoxy. If true, this would be another unintended consequence of modernization.

    3.  Conspiracy/geopolitics might be a significant factor. Occasionally, malice does look like a better explanation than stupidity.

    In conclusion, the legacy of the 20th century is not an impressive one, and I do not currently have evidence that it was an era of great minds or even good ideas. But don’t take my word for it; the evidence will be supplied here over the coming years. If we are indeed in a dark age, then the first step towards leaving it is recognizing that we’ve been in one.

  • Responding to Jason Brennan’s Review of Square One

    Last year, I put out a challenge to some of my academic friends.

  • Learning = Discovering the Basics | I Learn How to Draw

    The basics will take you far. It doesn’t matter what you’re trying to learn. Philosophy, economics, tennis, knitting, brain surgery – it doesn’t matter. Study the basics, the fundamentals, and you’ll become advanced in no time.

    The difficult part is discovering the basics and applying them to whatever you’re doing. If you can discover and consistently apply basic principles, you’ll be more advanced than 90% of people trying to learn that subject or skill.

    Over the past twenty years, I’ve learned many things. I’ve become advanced in several academic fields and in several non-academic ones: I’ve composed works for the piano, received black belts in two styles of martial arts, and become an advanced chess player. This isn’t because I’m smart or magical. It’s because I’ve discovered a superior technique for learning.

    If you know how to learn, you can learn anything in a short period of time. Let me give you a recent example from my own life.

    I have always been awful at drawing. I love charcoal and pencil drawings, but I’ve been completely incompetent at making them my whole life. If I thought about learning and knowledge like most people, I would have concluded years ago that “I just don’t have the gift! I am simply not a visual artist,” and given up any hope of learning.

    This is an incorrect way of thinking. My incompetence with a pencil isn’t because of who I am or some lack of inherent talent. It’s simply because I’d never discovered the basics.

    Good drawing = good drawing technique. I never learned the technique, so it’s not surprising that I didn’t do it well.

    Fortunately for me, I have a sister-in-law who is an art teacher.  Last November, as we were celebrating Thanksgiving together, I asked her if she’d teach me a few basic things about drawing. She agreed.

    If my theory about learning is correct, my incompetence should be correctable – as long as I can accurately sniff out the basics. These are my results.

    Baseline

    Alright, so just how sad was my drawing skill before? Well, here’s my sincerest attempt to draw my wife and me.


    This qualified as “good” for me. Not exactly a Rembrandt. As I’ll mention later, self-honesty is absolutely crucial to learning. This drawing sucks. It’s true. It’s because I didn’t know a damn thing about drawing, and I’m not going to pretend otherwise.

    If you pretend to know what you’re doing when you don’t actually know – which is what most people do all the time – it prevents you from ever discovering the basics. You’re forced to act like you already know what the fundamentals are. Nobody wants to get caught pretending to understand something while not even grasping the fundamentals.

    So, here’s the first question I asked my sister-in-law at Thanksgiving:

    “How do you hold a pencil?”

    Serious question. We’re starting at the ground floor. If I’m not even holding the pencil the right way, I’m doomed from the start. And go figure – I was holding it the wrong way.

    Most people don’t even bother to ask these kinds of elementary questions. They feel embarrassed – like they should already know the answer. Because of this, they most likely get the fundamentals wrong, which foils their attempts at gaining more knowledge or seriously improving their skills. There are innumerable parallels in the world of ideas.

    You want to know about economics? Tell me: what is money? How does exchange work? What do banks do?

    You want to learn about mathematics? Tell me: what is a number? What is a circle? Why does 2 + 2 = 4? How do you know?

    If you don’t sort these things out, you’re most likely going to be holding the pencil wrong.

    Baby Steps

    After the most basic questions about pencils, we moved on to some exercises. The first was trying to draw a gradient shadow on a simple object like a t-shirt. I wish I’d taken pictures of my work because it was even more awful than my self-portrait. I literally couldn’t draw within the lines, and I even struggled to make a symmetrical outline of a t-shirt.

    In fact, we all got a good laugh at my incompetent failures. I was holding the pencil wrong and couldn’t even draw within the lines – no better than a child! If I were embarrassed about being bad, I would have immediately shut down and stopped. But because I’ve been here before, and because I understand the process of learning, my laughable failures lasted less than an hour.

    I practiced the basic shading techniques until I could make a reasonable gradient of dark-to-light. Then I practiced on a sphere. Bad at first. Better after a little practice.

    After the exercises, we tried drawing a face from a picture. Instead of haphazardly trying to draw one, as I had done before, she explained the proper method. It’s essentially two steps:

    1) Trace the outline of the face onto a piece of paper.

    2) Start with the darker areas and shade/fade into the lighter ones.

    Very simple and straightforward. Once you’ve got the outlines, it’s just about making gradient shades on the paper – an application of the basic technique I’d learned a few minutes earlier. This time, I was supposed to notice where the darker areas were on the picture and where they blended into lighter areas. If I could draw that, the rest of the picture would take care of itself. This was my first attempt:

    Not incredible, but I was already pleased. It’s not fine art, but it’s a million times better than my previous attempt. I assumed it would take many more hours of practice just to reach this level. Instead, it took me less than two hours from beginning to end. Plus, I’d become aware of many errors I’d made when drawing this face, which meant my next attempt would look even better.

    So I refined my technique on portrait #2:


    Now we’re getting somewhere. From my perspective, I was pretty damn impressed. Only a few more hours of drawing, and I’d reached a level that exceeds the majority of people who ever try.

    Again, this isn’t because I’m smart, nor because I’m an artist. It’s because I know how to learn and have embraced my own radical incompetence. I know where to look for basic concepts. I know how to use the beginner’s mind. I hadn’t developed any bad habits, so it was rather easy to build up my skill from scratch.

    Here is my third, most recent attempt. Again, I learned from the mistakes of the previous drawing. This is a self-portrait (also the picture on the About Me section of this website):

    If you showed me this drawing before Thanksgiving, I would have guessed it would take somebody years of practice to reach this level. I would have been wrong. With about ten hours of drawing experience in total, I’ve gone from being unable to stay within the lines, to being able to do this portrait without difficulty. That’s the power of the basics.

    Were I deliberate in practicing the basics, I have no doubt that I could reach a professional level within a year. It’s a long way from where I am now, but given the rapid progression that comes from knowing how to learn, it’s completely feasible. I’m learning other things right now, but perhaps someday I’ll spend enough time to really master the art.

    Here’s my progression in four tries:

    Pencil Portraits

    — Update —

    I decided to try drawing with colored pencils. I didn’t know how colors would change things, but I just applied the same principles from before. I’ve done two colored pencil drawings and am pleased with the results.

    The first is of my dog Goose:

    The discreteness of the colored pencils threw me for a loop. I don’t have a good technique yet for blending colors into each other, but I also don’t mind how chunky it looks. It’s kind of artistic!

    While drawing the ears, I was convinced they would look ridiculous. I was adding so many colors that didn’t seem to make sense. However, after finishing, all those colors worked well together, and the ears are my favorite part.

    For my next drawing, I decided to put more effort into accurate tracing. This is a portrait of Thomas Sowell:

    Now we’re talking. The coloring isn’t realistic, but I don’t mind since it looks good.

    To be honest, I’m completely surprised by how quickly the drawings are improving. I don’t have much knowledge about the art industry, but I would suspect somebody would pay for a portrait of this quality – which is absurd to think about, given my complete lack of experience.

    Abstract, not Concrete

    This post isn’t about drawing. It’s about learning, and it’s especially about the world of ideas. I claim that most professional intellectuals do not understand the basics of their subject matter, and it’s because they lack a clear feedback mechanism. Unlike the artist, who can simply look at his drawing to see if it’s good – or the engineer, who can observe to see if his structures fall down – the professional intellectual merely produces a bunch of words on paper. It’s difficult for him to see if he grasps the fundamentals.

    I’ve written about this phenomenon in detail in “How Brazilian Jujitsu Explains the Popularity of Bad Ideas in Academia”.

    I claim that if you learn the basics of economics, you can understand more than 90% of professional economists and talking heads in media. The same is true in philosophy. If you understand the basics – the relationship between the mind and the world – you’ll be farther along than most professional philosophers. If you sort out the basics of critical thinking – and understand why logical contradictions don’t exist – you’ll have a clearer worldview than 99% of people on the planet.

    Once you’ve mastered the basics, you’ll find that advanced thoughts and techniques don’t look incomprehensible anymore. You can comprehend and execute them with enough practice.

    Plus, if you seek out fundamental principles, you will quickly discover how much bullshit and nonsense is spewed by people who don’t understand the basics – those who think the most important stuff is the advanced stuff, all the while failing to grasp the fundamental concepts in their own field. They try to impress, not to understand. Most academics fit into this category.

    Finally, if you’re so inclined, I encourage you to do what I’ve done. Learn the basic principles about learning. Discover the basic principles about discovering basic principles. You can develop an ability to find the fundamentals, cut through superfluous fluff, avoid bad habits, maintain the beginner’s mind, embrace failure, ask the right questions, and keep a radically open mind. If you get those basics right, you can teach yourself anything at a remarkable pace.

    For self-learners, there has never been a better time to exist. Learn how to learn, connect to the internet, and you can gain more knowledge than virtually anybody else in history.

  • Jason Brennan’s Original Review of Square One

    Update: read my full response to Brennan’s review here. The ideas speak for themselves.

    Here is the review Jason sent me and submitted to NDPR. He apparently updated it afterwards. Below you’ll find the original version, which is what my video responds to. Not too much gets changed – except now he tries to give even less credit than before, by changing lines like:

    Square One refutes what I call super-duper radical skepticism, but not radical skepticism. So, Patterson is right that truth is discoverable, but he doesn’t show us how to discover most of the interesting truths.

    And turning it into:

    Square One repeats the arguments which refute super-duper radical skepticism, but it does not respond to radical skepticism. So, Patterson is right that truth is discoverable, but he doesn’t show us how to discover most of the interesting truths.

    So now, it’s no refutation – it’s repeating somebody else’s argument! You can’t make this stuff up. The original review is below:

    Steve Patterson, Square One: The Foundations of Knowledge. CreateSpace Independent Publishing, 2016. 125 pages (ppk), $9.99. ISBN: 978-1540402783

    Review by Jason Brennan, Georgetown University

    Philosophy could use a Tim Harford (The Undercover Economist) or a Steven Landsburg (The Armchair Economist and More Sex is Safer Sex). Both write lucid, engaging books which teach a popular audience the central insights of economics, even if these books do not produce new knowledge.

    Steve Patterson wants to do even more. He wants not only to spread philosophical wisdom to the masses, but also to shake philosophy from its dogmatic slumbers. He’s got an audience. As I write this, Square One: The Foundations of Knowledge is among the top 200 epistemology books on Amazon, outselling (older) introductory books by world-class epistemologists like Keith Lehrer, Alvin Goldman, Ernest Sosa, or John Pollock.

    In chapter two (and to some degree in later chapters), Patterson presents a range of arguments for radical skepticism. For instance, some skeptics say that claiming to possess knowledge is arrogant. Other skeptics say that to have any knowledge would require us to have far more information than we can possibly acquire. Others claim that knowledge presupposes faith in God. Others claim that natural selection would not evolve truth-tracking brains. Others say we’re stuck inside our subjective perspectives and cannot access objective facts. Still others say that the imprecision or vagueness of language means we lack knowledge, because our language doesn’t carve nature at its joints. Still others claim that logic and mathematics are Western inventions and cultural artifacts, or that logic and mathematical truths are simply empty tautologies derived from arbitrary definitions and axioms.

    Patterson intends to debunk these skeptical arguments. The back cover of Square One declares, “Truth is discoverable. It’s not popular to say. It’s not popular to think. But you can be certain of it.”

    Chapter three—the best chapter in the book—responds to radical skepticism about logic. But his arguments are nothing new. Patterson uses the same argument students learn in week one of PHIL 101: Criticisms of the basic rules of logic are self-refuting. Any argument purporting to invalidate logic presupposes the truth of the rules of logic. Poststructuralist or postmodernist complaints about logic are internally incoherent. Fair enough.

    Throughout Square One, Patterson promises to help the reader discover “certain truth” (e.g., pp. 9, 13, 14, 15, passim). It is unclear to me whether Patterson understands the difference between A) the certainty of a proposition itself versus B) the certainty of one’s belief in that proposition. He often seems to conflate logical necessity with epistemic certainty (e.g., p. 54). But these are of course distinct.

    To illustrate, consider true statement R, a properly constructed formula in sentential logic, which I’ve written below in an abbreviated form:

    Formula1

    In unabbreviated form, R is very long. R has 14,000 particles on the left side of the biconditional and 15,000 particles on the right.

    Now, R is not only true, but true in all possible worlds. But since R is so darn long, even the world’s best logician would have less than perfect epistemic certainty about the truth of R. She would reasonably worry, even if she uses a computer, that she made a mistake when she tried to calculate the truth value for R. This isn’t because she doubts the validity of logic; rather, she doubts herself.

    Similar remarks apply to moderately difficult math problems. When I took my last math test in college, I knew my answers were either necessarily true or false. But I also know that I am fallible, so my degree of credence in my answers was less than 100%. I doubt myself, not the universal validity of mathematics.

    Patterson glosses over or ignores this problem. Even if we grant, as we should, that logical truths are necessary truths, that doesn’t mean we have epistemic certainty about all or even most of them. There are an infinite number of necessary truths in logic and math. But some of these are hard to figure out, so we cannot be certain we got them right, though we know all the true statements are necessarily true and the false statements are necessarily false.

    This problem aside, Patterson tries and I think succeeds in refuting skepticism about some of the basic axioms of logic. But what about skepticism about other beliefs? For instance, how do I know I’m not a brain in a vat? May I trust my senses? Is it possible that I am being radically deceived by a demon? If so, how can I be justified in thinking I really do have two kids or that I really am 37 years old? The axioms of logic do little to answer these questions, and Patterson does even less.

    This means Patterson’s critique of skepticism is rather narrow. To illustrate, consider these two forms of skepticism:

    Super-duper Radical Skepticism: We can know nothing, not even the rules of logic or mathematical truths. We don’t even know whether super-duper skepticism is justified!

    Radical Skepticism: We have knowledge of some mathematical and logical truths, some analytically true statements, and a small number of metaphysical claims. But we are not justified in most of our beliefs about the outside world, such as the belief that the universe is more than two seconds old, or that we have hands, or that we are not in the Matrix, or that Steve Patterson really did write Square One, etc.

    Square One refutes what I call super-duper radical skepticism, but not radical skepticism. So, Patterson is right that truth is discoverable, but he doesn’t show us how to discover most of the interesting truths.

    Square One is meant to present a theory of epistemology, but it is unclear whether Patterson knows what epistemology is. He does not even attempt to answer the basic questions of the field.

    Let’s briefly review. The subfield of epistemology studies the nature of knowledge. Its central questions include A, B, and C:

    A) What is knowledge?

    For instance, epistemologists generally agree that three necessary (but not sufficient) conditions for a person to know that P are 1) the person must believe P, 2) P must be true, and 3) the person must be justified in believing P.

    This brings us to the most important question in epistemology.

    B) What distinguishes justified from unjustified belief?

    For instance, if you believe that penicillin kills bacteria because the evidence overwhelmingly shows that, then you are justified; if you believe that Santa is real on the basis of wishful thinking, you are not justified. But there are plenty of interesting questions about what it takes to be justified. To be justified in believing P, must it be impossible for you to be wrong? How does evidence in scientific reasoning work? When does the testimony of others confer justification, and when doesn’t it? Am I justified prima facie in trusting my senses?

    Patterson makes almost no attempt to answer either question A or B. He discusses some examples of justified or unjustified belief, though, again, it’s unclear whether Patterson understands the difference between 1) the truth of a proposition and 2) the epistemic justification an individual person has in believing that proposition.

    Epistemology also asks a third, closely related question:

    C) Does knowledge have a structure? How do justified beliefs relate to one another?

    There are many competing theories trying to answer this question. All such theories are either internalist or externalist. Internalist theories hold that justification is entirely a function of an agent’s mental states, while externalist theories hold that justification also depends on conditions outside the agent’s mind. (For example, process reliabilism, an externalist theory, holds that a belief is justified if and only if it comes about through a truth-tracking belief-formation process.)

    Internalist theories are either doxastic or non-doxastic. Doxastic theories hold that whether a believing agent is justified is entirely a function of her beliefs. In contrast, non-doxastic internalist theories hold that justification depends not merely on the agent’s beliefs, but also on her other mental states (such as her sensory perceptions rather than her beliefs about her sensory perceptions).

    Foundationalist doxastic theories say that all beliefs are justified by being grounded in certain basic beliefs. Coherentist doxastic theories claim that there are no basic beliefs, but instead that the structure of belief is more like a web. All beliefs are justified by reference to other beliefs nearby in the “web”. For foundationalists, justification usually moves in one direction, from basic to non-basic beliefs. For coherentists, justification moves in multiple directions at the same time; beliefs can mutually support each other.

    Patterson seems to want to defend a doxastic foundationalist theory of knowledge in Square One. He laments:

    Modern philosophy is dominated by schools of thought that deny the existence of foundations. They argue that worldviews aren’t like trees; they are more like spider webs. Each part is connected together with no clear hierarchy of importance. Each thread is fallible and can be removed without destroying the whole structure. (2)

    But there are two big problems with this claim.

    First, that’s not a fair description of what coherentists actually think. In fact, most coherentists agree that the web of beliefs is structured. Some beliefs carry more weight than others. Removing some beliefs (e.g., “There is an external world”) would severely damage the web; removing others (e.g., “There is hot sauce in the fridge”) would not. Further, coherentists can agree that some beliefs are certain or express logically necessary claims, though they deny this makes such beliefs foundational.

    Second, Patterson is right that foundationalism is now unpopular, but, pace Patterson, so is coherentism. The PhilPapers survey finds that only 26.2% of philosophy faculty accept any form of internalism, while 43.7% accept some form of externalism. The numbers are roughly the same for specialists in epistemology.[1]

    Patterson advises his readers to doubt themselves and vigorously check their premises (5). He should take his own advice: If he bothered to research what philosophers think and why, he wouldn’t strawman the field.

    All this aside, Square One never actually gets around to defending foundationalism. He gives us lots of unoriginal metaphors about trees and roots, houses and foundations, and the like. He declares in chapter three that logical axioms are among the foundations (32). But he never tries to show us 1) which beliefs (aside from logical axioms) are basic, 2) how these basic beliefs justify our non-basic beliefs, or 3) which mental states justify the majority of our non-mathematical beliefs. He therefore provides no evidence that our beliefs form a foundationalist structure. He responds to none of the common objections to foundationalism; he may be unaware of them.[2]

    Patterson asserts that the axioms of logic are the foundations of our beliefs. He’s right, of course, that our beliefs should be compatible with logic. But it doesn’t follow that these axioms somehow justify most of my beliefs or that my beliefs are “grounded” in logic in any interesting way. I believe I own more than two guitars, that I have brown hair, that I have thirty-two teeth, that Australia is bigger than Rhode Island, and so on. If any of these beliefs violated the axioms of logic, they would necessarily be false. But other than that, there’s no obvious way in which logic provides the roots from which these beliefs grow.

    Instead, the justification of these beliefs depends on various perceptual states I’ve had, on the reliability of the testimony I’ve received from others, and a whole host of other interesting issues which epistemologists routinely discuss and which Patterson mostly ignores. If Patterson intends to defend foundationalism, his project is radically incomplete. He’s not even 1% of the way there.

    Patterson does not attempt to refute rival epistemological theories. That’s not some minor oversight. To defend foundationalism, he needs to show the theory does a better job explaining the phenomena than the rival theories. Defending an epistemological theory is like selling a car; if you want us to buy the 3 Series, you need to show us it’s better than the C-Class.

    To summarize: Patterson does a decent job defeating what I call “super-duper radical skepticism.” He does almost nothing to defeat what I call “radical skepticism”. He does not actually bother to defend a foundationalist theory of knowledge. Still, his book is beautifully written, takes only an hour to read, and at least defeats super-duper radical skepticism. We might ask: Is this at least a good book for a lay audience?

    Unfortunately, the answer is no, for two big reasons. First, there are far better books, such as Thomas Nagel’s The View from Nowhere or Michael Huemer’s Skepticism and the Veil of Perception. Second, Patterson’s book is chock full of elementary errors. Nearly every page contains some major mistake or conflates two or more distinct ideas together. Laypeople would be better off having no exposure to philosophy at all.

    For instance, on p. 61, he says that “The study was unbiased” and “The study was conducted properly” are “concepts”. But these are propositions, not concepts. On p. 83, he says, “Mathematical truths, if carefully constructed, can also be immune from the possibility of error.” But this once again conflates metaphysical/logical necessity with epistemic certainty/justification or with a person’s math skills. Mathematical truths cannot be in error, but I could be in error when I try to do a math problem or when I form beliefs about mathematics. On p. 36, he discusses the phrase “The elephant outside my window”. He says that since there isn’t actually an elephant outside his window, then the referent of “elephant” is an idea or a concept in someone’s head. But that’s not right. To see why, let’s use Russell’s example. Suppose we say, “The present king of France is bald.” If, as Patterson claims, the definite description “the present king of France” refers to an idea, then the sentence “The present king of France is bald” is true; after all, ideas are hairless and therefore bald. But that’s absurd. His treatments of the theory-ladenness of observation (pp. 63-66) and the problem of vagueness (pp. 100-104) are superficial, though a lay audience won’t know better.

    In the end, Square One is a beautifully written text, with lucid prose and delightful metaphors. But nearly every page contains a major mistake. What’s new isn’t good and what’s good isn’t new. Five stars for style; one star for substance.

    [1] https://philpapers.org/surveys/results.pl?affil=Philosophy+faculty+or+PhD&areas0=11&areas_max=1&grain=coarse

    [2] See, e.g., John Pollock and Joseph Cruz, Contemporary Theories of Knowledge (Boulder: Rowman and Littlefield, 1999), pp. 60-65; Keith Lehrer, Theory of Knowledge (Boulder: Westview Press, 2000), pp. 50-95; Ali Hasan and Richard Fumerton, “Foundationalist Theories of Epistemic Justification,” The Stanford Encyclopedia of Philosophy (Winter 2016 edition), ed. Edward N. Zalta, URL = <https://plato.stanford.edu/archives/win2016/entries/justep-foundational/>