If tech progress might be bad, what should we tell people about it?
By Robert_Wiblin @ 2016-02-16T10:26 (+21)
I want to draw attention to a tension effective altruists have not dealt with:
- Almost all of our written output takes as a strong assumption that economic growth and technological advancement are good things.
- Many intellectuals think this is actually unclear.
- We invent dangerous new technologies sooner, while society remains unwise, immature and unable to use them safely. Or new dangerous technologies advance from possibilities to realities more quickly, giving us less time to evaluate and limit their risks. For more on this see section 9.4; 2, 3.
- We become richer, which enables more destructive conflicts (poorer countries field weaker and less destructive armies).
- Producing more wealth is currently doing more harm than good (e.g. via climate change, other environmental destruction, spread of factory farming or selfish materialism, etc).
Why are we so cautious about raising these issues?
- They violate common sense for most people.
- The arguments in their favour are hard to explain quickly.
- Over the last 200 years growth seems to have been a force for good; you look ignorant or deluded to suggest that something that was good in the past will not continue to be good in the future.
- They involve speculation about the direction of future technologies that most people find unpersuasive and unrigorous.
- They can have offensive implications, such as the idea that it would be better for people in poverty today to remain poor, and that the things most people do to improve the world aren't working or are even making things worse.
- Our ability to further raise economic growth or technological advancement is small anyway, because billions of people are already pursuing those goals so we are a tiny fraction of the total.
- Projects focussed on reducing poverty also: raise average global intelligence, education, income, governance, patience, and so on. These 'quality' effects may well dominate.
- Other modelling suggests the overall effect is very unclear (e.g. wars seem to occur less frequently when economic growth is strong; faster growth lowers the number of years spent in any particular state of development, lowering so-called 'state risk'; some technologies clearly lower existing risks, e.g. we could now divert an asteroid away from Earth).
- Imagine that you somehow knew economic growth, or technological advancement, was merely neutral on average. While controversial, some smart people believe this to be true. Would your project nonetheless be one of those that is 'better than average' and therefore a force for good?
- Some things that have been suggested to look good on the 'differential technological development' test include:
- making people more cosmopolitan, kind and cautious;
- improving the ability to coordinate countries and avoid e.g. prisoner's dilemmas;
- increasing wisdom (especially the ability to foresee and solve future problems and conflicts);
- predominantly reducing pressing existing risks, such as climate change;
- predominantly empowering the people with the best values.
undefined @ 2016-02-16T17:29 (+4)
Related: http://lesswrong.com/lw/hoz/do_earths_with_slower_economic_growth_have_a/
undefined @ 2016-03-04T10:19 (+1)
And http://www.overcomingbias.com/2009/12/tiptoe-or-dash-to-future.html
undefined @ 2016-03-02T16:35 (+2)
A link to Paul Christiano's excellent 'On Progress and Prosperity' shouldn't be left out of this discussion:
http://effective-altruism.com/ea/9f/on_progress_and_prosperity/
undefined @ 2016-02-16T19:19 (+2)
Even if growth were bad or neutral, there would have to be specific activities that were bad, and other activities that remained good. So how does this differ from just telling folks to look for ways that their society might hurt itself, or ways that they might be contributing to this antisocial behavior? There is a lot of disagreement about which behaviors, exactly, are antisocial.
I do worry that given enough time, industrialized countries will, um, self-destruct by using nuclear weapons. But in that case the remedy would probably not be giving up industrialization. That seems like too high a cost.
It's also possible that growth may not be that important because growth is becoming much harder or impossible. But is it?
One point you make is that during the last 200 years growth has helped. Without strong evidence against it, it seems hard to make any assumption but that trends continue. So I think growth is good; growing societies will either be looked to and emulated by other groups that want the same rewards, or else powerful growing societies will just conquer other weaker ones. Either way, growth seems like the winning strategy.
undefined @ 2016-02-16T17:06 (+2)
Almost all of our written output takes as a strong assumption that economic growth and technological advancement are good things.
For what it's worth, I think this conclusion is extremely non-obvious and I'm somewhat disheartened when I see people taking it for granted. Most people are prone to optimism bias.
Why are we so cautious about raising these issues?
There may be a sampling bias here. People at Stanford EA talk about these issues, and I read about them online all the time. I haven't interacted much with CEA/Oxford people but my impression is you guys are a lot less willing to acknowledge that anything might be harmful, and less willing to discuss weird ideas.
undefined @ 2016-02-16T18:44 (+3)
"People at Stanford EA talk about these issues, and I read about them online all the time."
I've visited virtually every EA chapter and I think Stanford is the single most extreme one in this regard.
undefined @ 2016-02-17T14:55 (+1)
And GiveWell - their published statements on this matter basically just say they assume it's good: http://blog.givewell.org/2013/04/04/deep-value-judgments-and-worldview-characteristics/
With a little more detail: http://blog.givewell.org/2013/05/15/flow-through-effects/
But recently there was this cool post: http://blog.givewell.org/2015/09/30/differential-technological-development-some-early-thinking/
undefined @ 2016-02-17T15:33 (+4)
I don't want to interpret that post on flow-through effects as representing anything other than Holden's personal opinion, but it does strike me as pretty naive (in the mathematical sense of "you only thought of the most obvious conclusion and didn't go into any depth on this"). GiveWell's lack of (public) reasoning on flow-through effects is a large part of why I don't follow its charity recommendations.
The post on differential progress is a step in the right direction, and I'm generally more confident that Nick Beckstead is thinking correctly about flow-through effects than I am about anyone else at GiveWell.
EDIT: To Holden's credit, he does discuss how global catastrophic risks could make technological/economic progress harmful, so it's not like he hasn't thought about this at all.
undefined @ 2016-02-17T18:21 (+2)
The level of confidence in 'broad empowerment' as a force for good has always been my biggest disagreement with GiveWell.
undefined @ 2016-02-16T13:18 (+2)
Nice post. Can I suggest you're missing the most obvious one from your test?
How about "making people happier"?
which you could rephrase as
"reducing suffering/connecting people/empowering people to life the lives they want."
I'm one of those (controversial?) people who thinks most economic and technological development is morally neutral and does surprisingly little to make people's lives better, largely because people adapt to it, so it doesn't make a difference over the long run. I'm actually planning to make this argument in a longer post soon, as I also think it's something of a neglected issue.
undefined @ 2016-02-16T13:28 (+2)
"reducing suffering/connecting people/empowering people to life the lives they want."
Are you saying that's probably an example of positive differential progress, or that because it's good in the immediate term, it should be good overall?
If the former could you flesh out the reason?