Living in an Inadequate World

By EliezerYudkowsky @ 2017-11-09T21:47 (+13)

Previous: Moloch's Toolbox (pt. 1, pt. 2)


 

Be warned: Trying to put together a background model like the one I sketched in the previous chapter is a pretty perilous undertaking, especially if you don’t have a professional economist checking your work at every stage.

Suppose I offered the following much simpler explanation of how babies are dying inside the US healthcare system:

What if parents don’t really care about their babies?

Maybe parents don’t bond to their babies so swiftly? Maybe they don’t really care that much about those voiceless pink blobs in the early days? Maybe this is one of those things that people think they’re supposed to feel very strongly, and yet the emotion isn’t actually there. Maybe parents just sort of inwardly shrug when their infants die, and only pretend to be sad about it. If they really cared, wouldn’t they demand a system that didn’t kill babies?

In our taxonomy, this would be a “decisionmaker is not beneficiary” explanation, with the parents and doctors being the decisionmakers, and the babies being the beneficiaries.

A much simpler hypothesis, isn’t it?

When we try to do inadequacy analysis, there are such things as wrong guesses and false cynicism.

I’m sure there are some parents who don’t bond to their babies all that intensely. I’m sure some of them lie to themselves about that. But in the early days when Omegaven was just plain illegal to sell across state lines, some parents would drive for hours, every month, to buy Omegaven from the Boston Children’s Hospital to take back to their home state. I, for one, would call that an extraordinary effort. Those parents went far outside their routine, beyond what the System would demand of them, beyond what the world was set up to support them doing by default. Most people won’t make an effort that far outside their usual habits even if their own personal lives are at stake.

If parents are letting their babies die of liver damage because the parents don’t care, we should find few extraordinary efforts in these and other cases of baby-saving. This is an observational consequence we can check, and the observational check fails to support the theory.

For a fixed amount of inadequacy, there is only so much dysfunction that needs to be invoked to explain it. By the nature of inadequacy there will usually be more than one thing going wrong at a time… but even so, there’s only a bounded amount of failure to be explained. Every possible dysfunction is competing against every other possible dysfunction to explain the observed data. Sloppy cynicism will usually be wrong, just like your Facebook acquaintances who attribute civilizational dysfunctions to giant malevolent conspiracies.

If you’re sloppy, then you’re almost always going to find some way to conclude, “Oh, those physicists are just part of the broken academic system, what would they really know about the Higgs boson?” You will detect inadequacy every time you go looking for it, whether or not it’s there. If you see the same vision wherever you look, that’s the same as being blind.

 

i.

In most cases, you won’t need to resort to complicated background analyses to figure out whether something is broken.

I mean, it’s not like the only possible way one might notice that the US health care system is a vast, ill-conceived machine that is broken and also on fire is to understand microeconomics and predict a priori that aspects of this system design might promote inadequate equilibria. In real life, one notices the brokenness by reading economists who blog about the grinding gears and seas of flame, and listening to your friends sob about the screams coming from the ruins.

Then what good does it do to understand Moloch’s toolbox? What’s the point of the skill?

I suspect that for many people, the primary benefit of inadequacy analysis will be in undoing a mistake already made, where they disbelieve in inadequacy even when they’re looking straight at it.

There are people who would simply never try to put up 130 light bulbs in their house—because if that worked, surely some good and diligent professional researcher would have already tried it. The medical system would have made it a standard treatment, right? The doctor would already know about it, right? And sure, sometimes people are stupid, but we’re also people and we’re also stupid, so how could we amateurs possibly do better than current researchers on SAD, et cetera.

Often, the most widely applicable benefit of a fancy rational technique will be to cancel out fancy irrationality.1 I expect that the most common benefit of inadequacy analysis will be to break a certain kind of blind trust—that is, trust arrived at by mental reasoning processes that are insensitive to whether you actually inhabit a universe that’s worthy of that trust—and open people’s eyes to the blatant brokenness of things that are easily observed to be broken. Understanding the background theory helps cancel out the elaborate arguments saying that you can’t second-guess the European Central Bank even when it’s straightforward to show how and why they’re making a mistake.

Conversely, I’ve also watched some people plunge straight into problems that I’d guess were inexploitable, without doing the check, and then fail—usually falling prey to the Free Energy Fallacy, supposing that they can win just by doing better on the axis they care about. That subgroup might benefit, not from being told, “Shut up, you’ll always fail, the answer is always no,” but just from a reminder to check for signs of inexploitability.

It may be that some of those people will end up always saying, “I can think of at least one Moloch’s toolbox element in play, therefore this problem will be exploitable!” No humanly possible strictures of rationality can be strict enough to prevent a really determined person from shooting themselves in the foot. But it does help to be aware that the skill exists, before you start refining the skill.

Whether you’re trying to move past modesty or overcome the Free Energy Fallacy:

And then you can move on to step three: the fine-tuning against reality.

 

ii.

In my past experience, I’ve both undershot and overshot the relative competence of doctors in the US medical system:

Anecdote 1: I once became very worried when my then-girlfriend got a headache and started seeing blobs of color, and when she drew the blobs they were left-right asymmetrical. I immediately started worrying about the asymmetry, thinking, “This is the kind of symptom I’d expect if someone had suffered damage to just one side of the brain.” Nobody at the emergency room seemed very concerned, and she waited a couple of hours to be seen, even though I remembered reading that strokes had to be treated within the first few hours (better yet, minutes) to save as much brain tissue as possible.

What she was really experiencing, of course, was her first migraine. And I expect that every nurse we talked to knew that, but only a doctor is allowed to make diagnoses, so they couldn’t legally tell us. I’d read all sorts of wonderful papers about exotic and illuminating forms of brain damage, but no papers about the much more common ailments that people in emergency rooms actually have. “Think horses, not zebras,” as the doctors say.

Anecdote 2: I once saw a dermatologist for a dandruff problem. He diagnosed me with eczema, and gave me some steroid cream to put on my head for when the eczema became especially severe. It didn’t cure the dandruff—but I’d seen a doctor, so I shrugged and concluded that there probably wasn’t much to be done, since I’d already tried and failed using the big guns of the Medical System.

Eight years later, when I was trying to compound a ketogenic meal replacement fluid I’d formulated in an attempt to lose weight, my dandruff seemed to get worse. So I checked whether online paleo blogs had anything to say about treating dandruff via diet. I learned that a lot of dandruff is caused by the Candida fungus (which I’d never heard of), and that the fungus eats ketones. So if switching to a ketogenic diet (or drinking MCT oil, which gets turned into ketones) makes your dandruff worse, why, your dandruff is probably the Candida fungus. I looked up what kills Candida, found that I should use a shampoo containing ketoconazole, kept Googling, found a paper stating that 2% ketoconazole shampoo is an order of magnitude more effective than 1%, learned that only 1% ketoconazole shampoo was sold in the US, and ordered imported 2% Nizoral from Thailand via Amazon. Shortly thereafter, dandruff was no longer a significant issue for me and I could wear dark shirts without constantly checking my right shoulder for white specks. If my dermatologist knew anything about dandruff commonly being caused by a fungus, he never said a word.

From those two data points and others like them, I infer that medical competence—not medical absolute performance, but medical competence relative to what I can figure out by Googling—is high-variance. I shouldn’t trust my doctor on significant questions without checking her diagnosis and treatment plan on the Internet, and I also shouldn’t trust myself.

Much of the time, when we put on our inadequacy-detecting goggles, we’re deciding whether to trust some aspect of society to be more competent than ourselves. Part of the point of learning to think in economic terms about this question is to make it more natural to treat it as a technical question where specific lines of evidence can shift specific conclusions to varying degrees.

In particular, you don’t need to be strictly better or worse than some part of society. The question isn’t about ranking people, so you can be smarter in some ways and dumber in others. It can vary from minute to minute as the gods roll their dice.

By contrast, the modest viewpoint seems to me to have a very social-status-colored perspective on such things.

In the modest world, either you think you’re better than doctors and all the civilization backing them, or you admit you’re not as good and that you ought to defer to them.

If you don’t defer to doctors, then you’ll end up as one of those people who try feeding their children organic herbs to combat cancer; the outside view says that that’s what happens to most non-doctors who dare to think they’re better than doctors.

On the modest view, it’s not that we hold up a thumb and eyeball the local competence level, based mostly on observation and a little on economic thinking; and then update on our observed relative performance; and sometimes say, “This varies a lot. I’ll have to check each time.”

Instead, every time you decide whether you think you can do better, you are declaring what sort of person you are.

For an example of what I mean here, consider writer Ozy Brennan’s taxonomy:

I think a formative moment for any rationalist—our “Uncle Ben shot by the mugger” moment, if you will—is the moment you go “holy shit, everyone in the world is fucking insane.” […]

Now, there are basically two ways you can respond to this.

First, you can say “holy shit, everyone in the world is fucking insane. Therefore, if I adopt the radical new policy of not being fucking insane, I can pick up these giant piles of utility everyone is leaving on the ground, and then I win.” […]

This is the strategy of discovering a hot new stock tip, investing all your money, winning big, and retiring to Maui.

Second, you can say “holy shit, everyone in the world is fucking insane. However, none of them seem to realize that they’re insane. By extension, I am probably insane. I should take careful steps to minimize the damage I do.” […]

This is the strategy of discovering a hot new stock tip, realizing that most stock tips are bogus, and not going bankrupt.2

According to this sociological hypothesis, people can react to the discovery that “everyone in the world is insane” by adopting the Maui strategy, or they can react by adopting the not-going-bankrupt strategy.

(Note the inevitable comparison to financial markets—the one part of civilization that worked well enough to prompt an economist, Eugene Fama, to come up with the modern notion of efficiency.)

Brennan goes on to say that these two positions form a “dialectic,” but that nonetheless, some kinds of people are clearly on the “becoming-sane side of things” while others are more on the “insanity-harm-reduction side of things.”

But, speaking first to the basic dichotomy that’s being proposed, the whole point of becoming sane is that your beliefs shouldn’t reflect what sort of person you are. To the extent you’re succeeding, at least, your beliefs should just reflect how the world is.

Good reasoners don’t believe that there are goblins in their closets. The ultimate reason for this isn’t that goblin-belief is archaic, outmoded, associated with people lost in fantasy worlds, too much like wishful thinking, et cetera. It’s just that we opened up our closets and looked and we didn’t see any goblins.

The goal is simply to be the sort of person who, in worlds with closet goblins, ends up believing in closet goblins, and in worlds without closet goblins, ends up disbelieving in closet goblins. Avoiding beliefs that sound archaic does relatively little to help you learn that there are goblins in a world where goblins exist, so it does relatively little to establish that there aren’t goblins in a world where they don’t exist. Examining particular empirical predictions of the goblin hypothesis, on the other hand, does provide strong evidence about what world you’re in.

To reckon with the discovery that the world is mad, Brennan suggests that we consider the mix of humble and audacious “impulses in our soul” and try to strike the right balance. Perhaps we have some personality traits or biases that dispose us toward believing in goblins, and others that dispose us toward doubting them. On this framing, the heart of the issue is how we can resolve this inner conflict; the heart isn’t any question about the behavioral tendencies or physiology of goblins.

This is a central disagreement I have with modest epistemology: modest people end up believing that they live in an inexploitable world because they’re trying to avoid acting like an arrogant kind of person. Under modest epistemology, you’re not supposed to adapt rapidly and without hesitation to the realities of the situation as you observe them, because that would mean trusting yourself to assess adequacy levels; but you can’t trust yourself, because Dunning-Kruger, et cetera.

The alternative to modest epistemology isn’t an immodest epistemology where you decide that you’re higher status than doctors after all and conclude that you can now invent your own de novo medical treatments as a matter of course. The alternative is deciding for yourself whether to trust yourself more than a particular facet of your civilization at this particular time and place, checking the results whenever you can, and building up skill.

When it comes to medicine, I try to keep in mind that anyone whatsoever with more real-world medical experience may have me beat cold solid when it comes to any real-world problem. And then I go right on double-checking online to see if I believe what the doctor tells me about whether consuming too much medium-chain triglyceride oil could stress my liver.3

In my experience, people who don’t viscerally understand Moloch’s toolbox and the ubiquitously broken Nash equilibria of real life and how group insanity can arise from intelligent individuals responding to their own incentives tend to unconsciously translate all assertions about relative system competence into assertions about relative status. If you don’t see systemic competence as rare, or don’t see real-world systemic competence as driven by rare instances of correctly aligned incentives, all that’s left is status. All good and bad output is just driven by good and bad individual people, and to suggest that you’ll have better output is to assert that you’re individually smarter than everyone else. (This is what status hierarchy feels like from the inside: to perform better is to be better.)

On a trip a couple of years ago to talk with the European existential risk community, which has internalized norms from modest epistemology to an even greater extent than the Bay Area community has, I ran into various people who asked questions like, “Why do you and your co-workers at MIRI think you can do better than academia?” (MIRI is the Machine Intelligence Research Institute, the organization I work at.)

I responded that we were a small research institute that sustains itself on individual donors, thereby sidestepping a set of standard organizational demands that collectively create bad incentives for the kind of research we’re working on. I described how we had deliberately organized ourselves to steer clear of incentives that discourage long-term substantive research projects, to avoid academia’s “publish or perish” dynamic, and more generally to navigate around the multiple frontiers of competitiveness where researchers have to spend all their energy competing along those dimensions to get into the best journals.

These are known failure modes that academics routinely complain about, so I wasn’t saying anything novel or clever. The point I wanted to emphasize was that it’s not enough to say that you want risky long-term research in the abstract; you have to accept that your people won’t be at the competitive frontier for journal publications anymore.

The response I got back was something like a divide-by-zero error. Whenever I said “the nonprofit I work at has different incentives that look prima facie helpful for solving this set of technical problems,” my claim appeared to get parsed as “the nonprofit I work at is better (higher status, more authoritative, etc.) than academia.”

I think that the people I was talking with had already internalized the mathematical concept of Nash equilibria, but I don’t think they were steeped in a no-free-energy microeconomic equilibrium view of all of society, in which systems usually end up dumber than the people in them due to multiple layers of terrible incentives, and in which this is normal rather than a surprising state of affairs to suggest. And if you haven’t practiced thinking about organizations’ comparative advantages from that perspective long enough to make that lens more cognitively available than the status-comparison lens, then it makes sense that all talk of relative performance levels between you and doctors, or you and academia, or whatever, will be autoparsed by the easier, more native, more automatic status lens.

Because, come on, do you really think you’re more authoritative/respectable/qualified/reputable/adept than your doctor about medicine? If you think that, won’t you start consuming Vitamin C megadoses to treat cancer? And if you’re not more authoritative/respectable/qualified/reputable/adept than your doctor, then how could you possibly do better by doing Internet research?

(Among most people I know, the relative status feeling frequently gets verbalized in English as “smarter,” so if the above paragraph didn’t make sense, try replacing the social-status placeholder “authoritative/respectable/etc.” with “smarter.”)

Again, a lot of the benefit of becoming fluent with this viewpoint is just in having a way of seeing “systems with not-all-that-great outputs,” often observed extensively and directly, that can parse into something that isn’t “Am I higher-status (‘smarter,’ ‘better,’ etc.) than the people in the system?”

 

iii.

I once encountered a case of (honest) misunderstanding from someone who thought that when I cited something as an example of civilizational inadequacy (or as I put it at the time, “People are crazy and the world is mad”), the thing I was trying to argue was that the Great Stagnation was just due to unimpressive/unqualified/low-status (“stupid”) scientists.4 He thought I thought that all we needed to do was take people in our social circle and have them go into biotech, or put scientists through a CFAR unit, and we’d see huge breakthroughs.5

“What?” I said.

(I was quite surprised.)

“I never said anything like that,” I said, after recovering from the shock. “You can’t lift a ten-pound weight with one pound of force!”

I went on to say that it’s conceivable you could get faster-than-current results if CFAR’s annual budget grew 20x, and then they spent four years iterating experimentally on techniques, and then a group of promising biotechnology grad students went through a year of CFAR training…6

So another way of thinking about the central question of civilizational inadequacy is that we’re trying to assess the quantity of effort required to achieve a given level of outperformance. Not “Can it be done?” but “How much work?”

This brings me to the single most obvious notion that correct contrarians grasp, and that people who have vastly overestimated their own competence don’t realize: It takes far less work to identify the correct expert in a pre-existing dispute between experts, than to make an original contribution to any field that is remotely healthy.

I did not work out myself what would be a better policy for the Bank of Japan. I believed the arguments of Scott Sumner, who is not literally mainstream (yet), but whose position is shared by many other economists. I sided with a particular band of contrarian expert economists, based on my attempt to parse the object-level arguments, observing from the sidelines for a while to see who was right about near-term predictions and picking up on what previous experience suggested were strong cues of correct contrarianism.7

And so I ended up thinking that I knew better than the Bank of Japan. On the modest view, that’s just about as immodest as thinking you can personally advance the state of the art, since who says I ought to be smarter than the Bank of Japan at picking good experts to trust, et cetera?

But in real life, inside a civilization that is often tremendously broken on a systemic level, finding a contrarian expert seeming to shine against an untrustworthy background is nowhere remotely near as difficult as becoming that expert yourself. It’s the difference between picking which of four runners is most likely to win a fifty-kilometer race, and winning a fifty-kilometer race yourself.

Distinguishing a correct contrarian isn’t easy in absolute terms. You are still trying to be better than the mainstream in deciding who to trust.8 For many people, yes, an attempt to identify contrarian experts ends with them trusting faith healers over traditional medicine. But it’s still in the range of things that amateurs can do with a reasonable effort, if they’ve picked up on unusually good epistemology from one source or another.

We live in a sufficiently poorly-functioning world that there are many visibly correct contrarians whose ideas are not yet being implemented in the mainstream, where the authorities who allegedly judge between experts are making errors that appear to me trivial. (And again, by “errors,” I mean that these authorities are endorsing factually wrong answers or dominated policies—not that they’re passing up easy rewards given their incentives.)

In a world like that, you can often know things that the average authority doesn’t know… but not because you figured it out yourself, in almost every case.

 

iv.

Going beyond picking the right horse in the race and becoming a horse yourself, inventing your own new personal solution to a civilizational problem, requires a much greater investment of effort.

I did make up my own decision theory—not from a tabula rasa, but still to my own recipe. But events like that should be rare in a given person’s life. Logical counterfactuals in decision theory are one of my few major contributions to an existing academic field, and my early thoughts on this topic were quickly improved on by others.9 And that was a significant life event, not the sort of thing I expect to do every month.

Above all, reaching the true frontier requires picking your battles.

Computer security professionals don’t attack systems by picking one particular function and saying, “Now I shall find a way to exploit these exact 20 lines of code!” Most lines of code in a system don’t provide exploits no matter how hard you look at them. In a large enough system, there are rare lines of code that are exceptions to this general rule, and sometimes you can be the first to find them. But if we think about a random section of code, the base rate of exploitability is extremely low—except in really, really bad code that nobody looked at from a security standpoint in the first place.

Thinking that you’ve searched a large system and found one new exploit is one thing. Thinking that you can exploit arbitrary lines of code is quite another.

No matter how broken academia is, no one can improve on arbitrary parts of the modern academic edifice. My own base frequency for seeing scholarship that I think I can improve upon is “almost never,” outside of some academic subfields dealing with the equivalent of “unusually bad code.” But don’t expect bad code to be guarding vaults of gleaming gold in a form that other people value, except with a very low base rate. There do tend to be real locks on the energy-containing vaults not already emptied… almost (but not quite) all of the time.

Similarly, you do not generate a good startup idea by taking some random activity, and then talking yourself into believing you can do it better than existing companies. Even where the current way of doing things seems bad, and even when you really do know a better way, 99 times out of 100 you will not be able to make money by knowing better. If somebody else makes money on a solution to that particular problem, they’ll do it using rare resources or skills that you don’t have—including the skill of being super-charismatic and getting tons of venture capital to do it.

To believe you have a good startup idea is to say, “Unlike the typical 99 cases, in this particular anomalous and unusual case, I think I can make a profit by knowing a better way.”

The anomaly doesn’t have to be some super-unusual skill possessed by you alone in all the world. That would be a question that always returned “No,” a blind set of goggles. Having an unusually good idea might work well enough to be worth trying, if you think you can standardly solve the other standard startup problems. I’m merely emphasizing that to find a rare startup idea that is exploitable in dollars, you will have to scan and keep scanning, not pursue the first “X is broken and maybe I can fix it!” thought that pops into your head.

To win, choose winnable battles; await the rare anomalous case of, “Oh wait, that could work.”

 

v.

In 2014, I experimentally put together my own ketogenic meal replacement drink via several weeks of research, plus months of empirical tweaking, to see if it could help me with long-term weight normalization.

In that case, I did not get to pick my battleground.

And yet even so, I still tried to design my own recipe. Why? It seems I must have thought I could do better than the best ketogenic liquid-food recipes that had ever before been tried, as of 2014. Why would I believe I could do better than anyone who had yet tried, when I couldn’t pick my battle?

Well, because I looked up previous ketogenic Soylent recipes, and they used standard multivitamin powders containing, e.g., way too much manganese and the wrong form of selenium. (You get all the manganese you need from ordinary drinking water, if it hasn’t been distilled or bottled. Excess amounts may be neurotoxic. One of the leading hypotheses for why multivitamins aren’t found to produce net health improvement, despite having many individual components found to be helpful, is that multivitamins contain 100% of the US RDA of manganese. Similarly, if a multivitamin includes sodium selenite instead of, e.g., se-methyl-selenocysteine, it’s the equivalent of handing you a lump of charcoal and saying, “You’re a carbon-based lifeform; this has carbon in it, right?”)

Just for the sake of grim amusement, I also looked up my civilization’s medically standard ketogenic dietary options—e.g., for epileptic children. As expected, they were far worse than the amateur Soylent-inspired recipes. They didn’t even contain medium-chain triglycerides, which your liver turns directly into ketones. (MCT is academically recommended, though not commercially standard, as the basis for maintaining ketosis in epileptic children.) Instead the retail dietary options for epileptic children involved mostly soybean oil, of which it has been said, “Why not just shoot them?”

Even when we can’t pick our battleground, sometimes the most advanced weapon on offer turns out to be a broken stick and it’s worth the time to carve a handaxe.

… But even then, I didn’t try to synthesize my own dietary theory from scratch. There is nothing I believe about how human metabolism works that’s unique or original to me. Not a single element of my homemade Ketosoylent was based on my personal, private theory of how any of the micronutrients worked. Who am I to think I understand Vitamin D3 better than everyone else in the world?

The Ketosoylent didn’t work for long-term weight normalization, alas—the same result as all other replicated experiments on trying to long-term-normalize weight via putting different things inside your mouth. (The Shangri-La Diet I mentioned at the start of this book didn’t work for me either.)

So it goes. I mention the Ketosoylent because it’s the most complicated thing I’ve tried to do without tons of experience in a domain and without being able to pick my battles.

In the simpler and happier case of treating Brienne’s Seasonal Affective Disorder, I again didn’t get to pick the battleground; but SAD has received far less scientific attention to date than obesity. And success there again didn’t involve coming up with an amazing new model of SAD. It’s not weird and private knowledge that sufficiently bright light might cure SAD. The Sun is known to work almost all the time.

So a realistic lifetime of trying to adapt yourself to a broken civilization looks like:

- A handful of lifetime occasions, at most, of answering “Yes” to “Can I substantially improve on my civilization’s current knowledge if I put years into the attempt?”

- Perhaps once a year, an answer of “Yes” to “Can I synthesize the existing correct contrarianism into something that beats my civilization’s next-best alternative, for myself alone, after a few weeks of research and a bunch of testing?”

- Many cases of picking a previously existing side in a running dispute between experts, when you can follow the object-level arguments reasonably well and there are strong meta-level cues that you can identify.

The accumulation of many judgments of the latter kind is where you get the fuel for many small day-to-day decisions (e.g., about what to eat), and much of your ability to do larger things (like solving a medical problem after going through the medical system has proved fruitless, or executing well on a startup).

 

vi.

A few final pieces of advice on everyday thinking about inadequacy:

When it comes to estimating the competence of some aspect of civilization, especially relative to your own competence, try to update hard on your experiences of failure and success. One data point is a hell of a lot better than zero data points.

Worrying about how one data point is “just an anecdote” can make sense if you’ve already collected thirty data points. On the other hand, when you previously just had a lot of prior reasoning, or you were previously trying to generalize from other people’s not-quite-similar experiences, and then you collide directly with reality for the first time, one data point is huge.

If you do accidentally update too far, you can always re-update later when you have more data points. So update hard on each occasion, and take care not to flush any new observation down the toilet.

Oh, and bet. Bet on everything. Bet real money. It helps a lot with learning.

I once bet $25 at even odds against the eventual discovery of the Higgs boson—after 90% of the possible mass range had been experimentally eliminated, because I had the impression from reading diatribes against string theory that modern theoretical physics might not be solid enough to predict a qualitatively new kind of particle with prior odds greater than 9:1.

When the Higgs boson was discovered inside the remaining 10% interval of possible energies, I said, “Gosh, I guess they can predict that sort of thing with prior probability greater than 90%,” updated strongly in favor of the credibility of things like dark matter and dark energy, and then didn’t make any more bets like that.
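As a rough illustration of the arithmetic implied here, the following is a minimal sketch in Python. It assumes a uniform prior over the candidate mass range and assumes that a Higgs lying in the already-searched region would have been found; neither assumption is spelled out above.

```python
def p_discovery(prior_odds, fraction_eliminated):
    """Probability the particle is eventually discovered, given a null
    search over part of its candidate mass range.

    Illustrative only: assumes the mass is uniformly distributed over the
    candidate range if the particle exists, and that a particle lying in
    the already-searched region would have been found.
    """
    # Likelihood of the null result observed so far:
    p_null_if_exists = 1.0 - fraction_eliminated  # it must be hiding in the rest
    p_null_if_not = 1.0                           # nothing to find anyway
    posterior_odds = prior_odds * (p_null_if_exists / p_null_if_not)
    return posterior_odds / (1.0 + posterior_odds)

# Prior odds of 9:1 on the theorists being right, with 90% of the mass
# range already searched and empty, gives about a 0.47 chance of eventual
# discovery, so an even-odds bet against discovery has slightly positive
# expected value. Prior odds above 10:1 push the chance back over 0.5.
print(p_discovery(prior_odds=9.0, fraction_eliminated=0.9))  # ~0.4737
```

On these assumptions, the breakeven prior odds are about 10:1, which is roughly the “9:1” threshold mentioned above.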

I made a mistake; and I bet on it. This let me experience the mistake in a way that helped me better learn from it. When you’re thinking about large, messy phenomena like “the adequacy of human civilization at understanding nutrition,” it’s easy to get caught up in plausible-sounding stories and never quite get around to running the experiment. Run experiments; place bets; say oops. Anything less is an act of self-sabotage.

 


 

Cross-posted to Less Wrong and equilibriabook.com. Next: Blind Empiricism.

 


 

  1. As an example, relatively few people in the world need well-developed skills at cognitive reductionism for the purpose of disassembling aspects of nature. The reason why anyone else needs to learn cognitive reductionism—the reason it’s this big public epistemic hygiene issue—is that there are a lot of damaging supernatural beliefs that cognitive reductionism helps counter. 

  2. Brennan, “The World Is Mad.”

    When I ran a draft of this chapter by Brennan, they said that they basically agree with what I’m saying here, but are thinking about these issues using a different conceptual framework. 

  3. Answer: this is the opposite of standard theory; she was probably confusing MCT with other forms of saturated fat. 

  4. The Great Stagnation is economist Tyler Cowen’s hypothesis that declining rates of innovation since the 1970s (excluding information technology, for the most part) have resulted in relative economic stagnation in the developed world. 

  5. CFAR, the Center for Applied Rationality, is a nonprofit that applies ideas from cognitive science to everyday problem-solving and decision-making, running workshops for people who want to get better at solving big global problems. MIRI and CFAR are frequent collaborators and share office space; CFAR’s original concept came from MIRI’s work on rationality. 

  6. See also Weinersmith’s Law: “No problem is too hard. Many problems are too fast.” 

  7. E.g., the cry of “Stop ignoring your own carefully gathered experimental evidence, damn it!” 

  8. Though, to be clear, the mainstream isn’t actually deciding who to trust. It’s picking winners by some other criterion that on a good day is not totally uncorrelated with trustworthiness. 

  9. In particular, Wei Dai came up with updatelessness, yielding the earliest version of what's now called functional decision theory. See Soares and Levinstein's “Cheating Death in Damascus” for a description. 


null @ 2017-11-11T10:55 (+1)

One class of relevant cases is contemporary evaluation of historical decisions, because there the same issue is being decided twice, by two sets of actors with different incentives, often without much difference in available information. If the two systems come to substantially different conclusions, that suggests that at least one system is inadequate. The frequency of differences in opinion can give us a sense of how often it is that systems are inadequate.

My impression of military history, where I know the subject best, is that the immediate, naive "they made the wrong decision" sorts of claims frequently made or implied in early or popular historical literature are often incorrect or flawed, and get revised by more comprehensive scholarship. E.g., the French realistically couldn't have countered German movement into the Rhineland in 1936 because of a combination of deep political factors and misinformation. The Germans were forced to switch to night bombing in 1940 because of their aircraft losses, not as a single fatal mistake that cost them the Battle of Britain. As to why wrong claims like this are made in the first place, there may be an incentive in historical media to prize sensationalism, blame, and other things that activate emotions, rather than simply saying that everyone involved did the best they could and here's how the cards fell.

However, they're not always incorrect. There are, on occasion, well-identified mistakes in historical decisions. You might think that no one in history had a better reason to identify superior night fighter designs than the Germans in World War II. But they still failed to build the He 219 in large numbers, due to the inadequate political structure of the time.

Meta note: I really like how the book website links to the messageboards where each chapter is discussed.