What are examples of technologies which would be a big deal if they scaled but never ended up scaling?

By Linch @ 2021-08-27T08:47 (+36)

Specifically, I'm interested in examples of technologies where a) people who actively worked on it genuinely believed that if they could do it once, it'd (eventually) be very cheap and scalable, b) we eventually did develop the technology, c) it ended up not being a big deal anyway because it wasn't very scalable, and d) in hindsight, we still believe that had the technology been cheap and scalable, it would have been a pretty big deal.

Also would love to see statistics on how frequently this has happened in history, though it might be pretty hard to draw the reference class well.

I think a lot of EAs have the implicit assumption that if something can be done once, you can probably (eventually) do it cheaply and at scale. For very EA relevant technologies, I think many people think this applies to human-level AI, anti-aging treatments, and cultured meat.

I'm curious how frequently counterexamples happen. The only counterexample I'm aware of is alchemy. Transmuting base metals into gold was something multiple civilizations literally wanted to do for millennia, we eventually figured out how to do it last century, but the discovery turned out to be at best an academic curiosity since the energy requirements were too massive.

I'm interested in this question because having base rates on the scalability question would be really useful for forming some very weak priors on P(technology radically changes civilization | radical-seeming technology was invented). For example, if, after an extensive search, alchemy turned out to be the only interesting example, we could conclude that our initial implicit assumption that "once you can do something, it can be scaled (EDIT: though political/cultural resistance can still mean it won't be scaled)" is a very safe one.
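
(As a toy illustration of what forming such a weak prior could look like, one crude option is Laplace's rule of succession over the reference class. The sketch below is just that illustration; the counts in it are purely hypothetical placeholders, not the result of any actual survey.)

```python
# A rough sketch of turning a historical tally into a very weak prior,
# using Laplace's rule of succession.
# The counts used here are hypothetical placeholders, not real data.

def weak_prior(num_scaled: int, num_never_scaled: int) -> float:
    """Estimate P(eventually scales | demonstrated once) from a small reference class."""
    total = num_scaled + num_never_scaled
    return (num_scaled + 1) / (total + 2)  # rule of succession

# E.g. if an extensive search turned up 30 radical-seeming technologies that
# eventually scaled and only 1 (alchemy) that never did:
print(weak_prior(num_scaled=30, num_never_scaled=1))  # ~0.94
```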


Madhav Malhotra @ 2021-08-28T00:21 (+16)

For historical examples, consider:

One general theme = for any standard X that we use, there was probably a better standard Y that never became widely used enough. 

Another general theme = when it comes to failed stuff, government archives are a great resource :D 

Charlie Steiner @ 2021-08-27T12:47 (+14)

Scalability, or cost?

When I think of failure to scale, I don't just think of something with high cost (e.g. transmutation of lead to gold), but something that resists economies of scale.

Level 1 resistance is cost-disease-prone activities that haven't increased in efficiency in step with the rest of the economy; education is a great example. Individual tutors would greatly improve results for students, but we can't do it because it's too expensive, and it's too expensive because there's no economy of scale for tutors - they're not like solar panels, where increasing production volume lets you make them more cheaply.

Level 2 resistance is adverse network effects - the thing actually becomes harder as you try to add more people. Direct democracy, perhaps? Or maintaining a large computer program? It's not totally clear what the world would have to be like for these things to be solvable, but it would be pretty wild; imagine if the difficulty of maintaining code scaled sublinearly with size!

Level 3 resistance is when something depends on a limited resource, and if you haven't got it, you're out of luck. Stradivarius violins, perhaps. Or the element europium, used in the red-emitting phosphor of CRTs. Solutions to these, when possible, probably just look like better technology allowing a workaround.

Peterslattery @ 2021-09-06T23:27 (+1)

Thanks, this was particularly useful for me!

Harrison D @ 2021-08-29T02:59 (+1)

(+2, I really like the breakdown of different effects. I haven't really tried critically analyzing it for issues, but I definitely feel like it helped carve out/prop up some initial ideas)

kokotajlod @ 2021-08-27T09:20 (+11)

Going to the moon.

Fusion power?

Nuclear power more generally?

...I guess the problem with these examples is that they totally are scalable, they just didn't scale for political/cultural reasons.

JP Addison @ 2021-08-27T09:34 (+4)

I feel like your qualifying statement is only true of the last one?

kokotajlod @ 2021-08-27T12:47 (+4)

I'm pretty confident that if loads more money and talent had been thrown at space exploration, going to the moon would be substantially cheaper and more common today. SpaceX is good evidence of this, for example.

As for fusion power, I guess I've got a lot less evidence for that. Perhaps I am wrong. But it seems similar to me. We could also talk about fusion power on the metric of "actually producing more energy than it takes in, sustainably," in which case my understanding is that we haven't got there at all yet.

Harrison D @ 2021-08-29T03:07 (+5)

I've been trying to think of good examples in military technology, but haven't thought of any great ones yet. However, one thing I thought about was the supposed "rods from god" idea of using what are (basically) oversized, high-density (tungsten) lawn darts dropped from space. These weapons could potentially have tactical-nuclear-level kinetic energy without any of the nuclear fallout or stigma (albeit an entirely different set of stigma/international condemnation). But IIRC it's not being scaled for a variety of reasons including "it's really dang expensive to put a lot of large tungsten rods into space."

However, that doesn't necessarily mean it couldn't eventually be scaled up if we, e.g., developed an effective space elevator. And that leads to a follow-up question: how do we distinguish between "can't scale up (period)" and "can't scale up (yet)"? I definitely think there are some instances where the difference would be clear, but I would similarly be interested to see cases where we thought "X technology doesn't have a future (due to competitor technology Y and/or physical limitation Z)" only to later discover/invent something that makes an altered form of X viable.

Linch @ 2021-08-29T05:31 (+3)

I would similarly be interested to see cases where we thought "X technology doesn't have a future (due to competitor technology Y and/or physical limitation Z)" only to later discover/invent something that makes an altered form of X viable.

I too would be interested in this, as a reference class. I think it would be a strategically important update for us if we were to conclude that there's a decent chance human-level AI or anti-aging treatments or cultured meat (or, for that matter, transmutation) is scientifically but not economically viable in its current form, but an entirely different route to getting there eventually becomes economically viable decades later.

Nathan Young @ 2021-08-28T21:34 (+4)

Really interesting question.

Josh Jacobson @ 2021-08-27T16:41 (+3)

From my perspective, technically, Google Wave qualifies with the words you’ve written, but I don’t think it’s in the spirit of what you’ve written. (“Cheap” makes me think you’re looking for physical-world inventions, which is probably worth being more explicit about.)

If I’m wrong and it does qualify, there are a number of web app examples.

Denkenberger @ 2021-08-29T18:39 (+2)

I think most technologies don't end up scaling. This says 2 to 10% of patents make enough money to maintain protection. A prototype is not required for a patent, but there would also be lots of ideas demonstrated in the lab that are never patented. There is also the concept of the Valley of Death in commercialization, where most technologies die. This is not necessarily the same as technologies that would be a "big deal," but I think it is a useful reference class.

Ramiro @ 2021-08-27T18:14 (+2)

This question is surprisingly hard... I can barely start thinking about very ordinary stuff like "automated mailbox management..." Your "gold example" made me think about artificial diamonds, which are still regarded as less valuable than natural ones in jewelry - but that's because jewelry is a luxury / status good. It helps a bit to think about tech that sort of existed for a very long time and was only widely deployed in the last hundred years, like bicycles. I mean, we could have had them since at least the 18th century, but they only appeared around the 1840s, and somehow they only became a real option after the 1890s - when we already had trains and cars.

kokotajlod @ 2021-08-27T12:50 (+2)

I think there's an ambiguity in "it'd eventually be very cheap and scalable."

Consider alchemy. It's cheaper to do now than it was when we first did it, in part because the price of energy has dropped. It's also possible to do it on much bigger scales. However, nobody bothers because people have better things to do. So for something to count as cheaper and scalable, does it need to actually be scaled up, or is it enough that we could do it if we wanted to? If the latter, then alchemy isn't even an example of the sort of thing you want. If the former, then there are tons of examples, examples all over the place!

kokotajlod @ 2021-08-27T12:53 (+2)

Also technically Alchemy will in fact be cheaply scaled in the future, probably. When we are disassembling entire stars to fund galaxy-wide megaprojects, presumably some amount of alchemy will be done as well, and that amount will be many orders of magnitude bigger than the original alchemists imagined, and it will be done many orders of magnitude more cheaply (in 2021 dollars, after adjusting for inflation) as well. EDIT: Nevermind I no longer endorse this comment, I think I was assuming alignment success for some reason.

Misha_Yagudin @ 2021-08-27T13:08 (+1)

For social technology, I think we have been consistently disappointed by various attempts to reform education. Specifically, think about interventions like Direct Instruction, investigated under the Follow Through project, and maybe the interventions tested by the Gates Foundation.

Pivocajs @ 2021-09-02T09:01 (+1)

In a somewhat similar vein, it would be great to have a centralized database for medical records, at least within each country. And we know how to do this technically. But it "somehow doesn't happen" (at least anywhere I know of).

A general pattern would be "things where somebody believes a problem is of a technical nature, works hard at it, and solves it, only to realize that the problem was of a social/political nature". (Relatedly, the solution might not catch on because the institution you are trying to improve serves a somewhat different purpose from what you believed, Elephant in the Brain style. E.g., education being not just for improving thinking and knowledge but also for domestication and signalling.)