Brian Tomasik on cooperation and peace

By Vasco Grilo🔸 @ 2024-05-20T17:01 (+27)

This is a linkpost for Brian Tomasik's posts on cooperation and peace: https://reducing-suffering.org/#cooperation_and_peace

Gains from Trade through Compromise

First written: 26 July 2013; last update: 18 Feb. 2018

When agents of differing values compete for power, they may find it mutually advantageous in expectation to arrive at a compromise solution rather than continuing a winner-takes-all fight. I suggest a few toy examples of future scenarios in which suffering reducers could benefit from trade. I propose ideas for how to encourage compromise among nations, ideologies, and individuals in the future, including moral tolerance, democracy, trade, social stability, and global governance. We should develop stronger institutions and mechanisms that allow for greater levels of compromise.
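The core claim here — that compromise can beat a winner-takes-all fight in expectation — can be made concrete with a minimal arithmetic sketch. All numbers below are hypothetical illustrations, not figures from the essay:

```python
# Toy model: two factions contest control of a future worth PRIZE units.
# Fighting winner-takes-all destroys CONFLICT_COST units of value;
# compromising splits the undiminished prize by bargaining power.

def ev_of_fighting(win_prob: float, prize: float, conflict_cost: float) -> float:
    """Expected payoff of a winner-takes-all fight for one faction."""
    return win_prob * (prize - conflict_cost)

PRIZE = 100.0          # total value at stake (arbitrary units, assumed)
CONFLICT_COST = 30.0   # value destroyed by conflict (assumed)
P_WIN = 0.5            # each side's chance of winning (assumed symmetric)

ev_fight = ev_of_fighting(P_WIN, PRIZE, CONFLICT_COST)  # 0.5 * 70 = 35
ev_compromise = P_WIN * PRIZE                           # 0.5 * 100 = 50

print(f"EV of fighting:   {ev_fight}")
print(f"EV of compromise: {ev_compromise}")
assert ev_compromise > ev_fight  # both sides prefer compromise ex ante
```

As long as conflict destroys any value, both sides gain in expectation from a split proportional to their win probabilities, which is the intuition the essay builds on.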

Possible Ways to Promote Compromise

First written: fall 2013; last update: 5 Feb. 2016

Compromise has the potential to jointly benefit many different individuals, countries, and value systems. This piece enumerates ideas for how to encourage compromise, drawn from political science, international relations, sociology, and ethics.

Differential Intellectual Progress as a Positive-Sum Project

First written: 23 Oct. 2013; last update: 21 Dec. 2015

Fast technological development carries a risk of creating extremely powerful tools, especially AI, before society has a chance to figure out how best to use those tools in positive ways for many value systems. Suffering reducers may want to help mitigate the arms race for AI so that AI developers take fewer risks and have more time to plan for how to avert suffering that may result from the AI's computations. The AI-focused work of the Machine Intelligence Research Institute (MIRI) seems to be one important way to tackle this issue. I suggest some other, broader approaches, like advancing philosophical sophistication, cosmopolitan perspective, and social institutions for cooperation.

As a general heuristic, advancing technology may be net negative, though there are plenty of exceptions depending on the specific technology in question. Advancing social science is probably net positive in general. The humanities and pure natural sciences can also be positive, but probably less so per unit of effort than the social sciences, which are logically prior to everything else. We need a more peaceful, democratic, and enlightened world before we play with fire that could do permanent harm to humanity's future.

International Cooperation vs. AI Arms Race

First written: 5 Dec. 2013; last update: 29 Feb. 2016

There's a decent chance that governments will be the first to build artificial general intelligence (AGI). International hostility, especially an AI arms race, could exacerbate risk-taking, hostile motivations, and errors of judgment when creating AGI. If so, then international cooperation could be an important factor to consider when evaluating the flow-through effects of charities. That said, we may not want to publicize the arms-race consideration too widely lest we accelerate the race.

How Would Catastrophic Risks Affect Prospects for Compromise?

First written: 24 Feb. 2013; major updates: 13 Nov. 2013; last update: 4 Dec. 2017

Catastrophic risks -- such as engineered pathogens, nanotech weapons, nuclear war, or financial collapse -- would cause major damage in the short run, but their effects on the long-run direction that humanity takes are also significant. In particular, to the extent these disasters increase risks of war, they may contribute to faster races between nations to build artificial general intelligence (AGI), less opportunity for compromise, and hence less of what everyone wants in expectation, including less suffering reduction. In this way, even pure negative utilitarians may oppose catastrophic risks, though this question is quite unsettled. While far from ideal, today's political environment is more democratic and peaceful than what we've seen historically and what could have been the case, and disrupting this trajectory might have more downside than upside. I discuss further considerations about how catastrophes could have negative and positive consequences. Even if averting catastrophic risks is net positive, I see it as less useful than directly promoting compromise scenarios for AGI and setting the stage for such compromise via cooperative political, social, and cultural institutions.

Note, 20 Jul. 2015: Relative to when I first wrote this piece, I'm now less hopeful that catastrophic-risk reduction is plausibly good for pure negative utilitarians. The main reason is that some catastrophic risks, such as malicious biotech, do seem to pose a nontrivial risk of causing complete extinction relative to their probability of merely causing mayhem and conflict. So I no longer support efforts to reduce non-AGI "existential risks". (Reducing AGI extinction risks is a very different matter, since most AGIs would colonize space and spread suffering into the galaxy, just as most human-controlled future civilizations would.) Regardless, negative utilitarians should simply focus on more clearly beneficial suffering-reduction projects, like promoting suffering-focused ethical viewpoints and researching how best to reduce wild-animal and far-future suffering.

A Lower Bound on the Importance of Promoting Cooperation

First written: 3 Jan. 2014; last update: 7 Jun. 2016

This piece suggests a lower-bound Fermi calculation for the cost-effectiveness of working to promote international cooperation based on one specific branch of possible future scenarios. The purpose of this exercise is to make our thinking more concrete about how cooperation might exert a positive influence for suffering reduction and to make its potential more tangible. I do not intend for this estimate to be quoted in comparison with standard DALYs-per-dollar kinds of figures because my parameter settings are so noisy and arbitrary, and more importantly because these types of calculations are not the best ways to compare projects for shaping the far future when many complex possibilities and flow-through effects are at play. I enumerate other reasons why advancing cooperation seems robustly positive, although I don't claim that cooperation is obviously better than alternate approaches.
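The structure of such a lower-bound Fermi estimate — multiplying a scenario probability, a marginal-influence probability, and the stakes along one branch of future scenarios — can be sketched as follows. Every parameter below is entirely made up for illustration; the essay's own (explicitly noisy and arbitrary) numbers are not reproduced here:

```python
# Illustrative skeleton of a lower-bound Fermi estimate for promoting
# international cooperation. All parameters are hypothetical placeholders.
p_scenario = 1e-3          # probability the specific future branch obtains (assumed)
p_influence = 1e-6         # chance a marginal dollar tips toward cooperation (assumed)
suffering_at_stake = 1e15  # suffering-reduction units at stake in that branch (assumed)

# A lower bound, because it counts only one branch of possible scenarios.
value_per_dollar = p_scenario * p_influence * suffering_at_stake
print(f"Lower-bound value per dollar: {value_per_dollar:.0e} units")
```

The point of such a skeleton is to make the causal chain explicit, not to produce a number comparable to DALYs-per-dollar figures — exactly the caveat the essay itself makes.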

Reasons to Be Nice to Other Value Systems

First written: 16 Jan. 2014; last update: 17 Oct. 2017

I suggest several arguments in support of the heuristic that we should help groups holding different value systems from our own when doing so is cheap, unless those groups prove uncooperative to our values. This is true even if we don't directly care at all about other groups' value systems. Exactly how nice to be depends on the particulars of the situation, but there are some cases where helping others' moral views is clearly beneficial for us.

Expected Value of Shared Information for Competing Agents

First written: 4 Jun. 2013; last nontrivial update: 17 Oct. 2013

Most of the time, learning more allows you to more effectively accomplish your goals. The expected value of information for you is proportional to the probability you'll find a better strategy times the expected amount by which it's a better strategy than your old one conditional on you switching to it. However, what happens when you conduct research that is shared publicly for use by other agents that may have different values? Under which conditions is this helpful versus harmful to your goals? A few generalizations we can make are that acquiring information that will be shared with you and other agents is beneficial by your values when (1) you have more resources to act than the other agents, (2) the other agents currently disagree with you a lot on their policy stances, and (3) you expect yourself and other agents to adopt closer policy stances to each other upon learning more, perhaps because your values are similar.
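The proportionality claim above — expected value of information as the probability of finding a better strategy times the expected improvement conditional on switching — can be written out as a one-line calculation. The numbers are hypothetical:

```python
# Expected value of information, per the decomposition in the text:
# P(research reveals a better strategy) times the expected improvement
# over the current strategy, conditional on actually switching.

def expected_value_of_information(p_better: float, mean_improvement: float) -> float:
    return p_better * mean_improvement

# E.g., a 25% chance that research reveals a strategy that is on average
# 40 utility units better than the current one (illustrative numbers):
evi = expected_value_of_information(0.25, 40.0)
print(evi)  # 10.0
```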

It's not obvious whether negative-leaning utilitarians should welcome general research that will be useful to both themselves and to positive-leaning utilitarians, though the publicity, credibility, and feedback benefits of sharing your research are substantial and suggest that you should share by default unless you have other strong reasons not to.

In addition, as a matter of theory, by the Coase theorem it should usually be possible to make arrangements such that the sharing of information is net positive, provided appropriate compensation is given for losses to the values of the originator. It would be worth exploring how mechanisms to compensate for such information externalities could work in practice.
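The Coase-style argument is just arithmetic on the surplus: if sharing creates more total value than it destroys for the originator, some transfer leaves everyone better off. A minimal sketch with hypothetical payoffs:

```python
# Hypothetical payoffs from publicly sharing a piece of research.
gain_to_originator = -4.0  # originator's values lose from sharing (assumed)
gain_to_others = 10.0      # other agents' values gain from sharing (assumed)

total_surplus = gain_to_originator + gain_to_others
assert total_surplus > 0  # sharing is net positive in aggregate

# Coase-style conclusion: any transfer t with 4 < t < 10 from the
# beneficiaries to the originator makes both parties strictly better off.
t = 6.0
assert gain_to_originator + t > 0
assert gain_to_others - t > 0
print(f"Total surplus: {total_surplus}; transfer {t} makes sharing win-win")
```

The practical difficulty the essay points at is not this arithmetic but building mechanisms that actually implement such transfers.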


SummaryBot @ 2024-05-20T20:22 (+3)

Executive summary: Compromise and cooperation between agents with differing values can be mutually beneficial, and we should develop institutions and mechanisms to encourage compromise to reduce risks from powerful future technologies like AI.

Key points:

  1. When agents with differing values compete for power, compromise solutions can be mutually advantageous compared to winner-takes-all conflict.
  2. Possible ways to promote compromise include advancing moral tolerance, democracy, trade, social stability, global governance, and philosophical sophistication.
  3. International cooperation, especially avoiding an AI arms race between nations, is important for ensuring AI is developed with less risk-taking and more planning to avert potential harms.
  4. Catastrophic risks could negatively impact prospects for compromise by increasing international hostility and accelerating AI races with less concern for safety.
  5. Even from a pure negative utilitarian perspective, reducing non-extinction risks may be net positive by maintaining a relatively peaceful trajectory, though this is uncertain.
  6. Sharing information between agents with differing values can be mutually beneficial under certain conditions, and mechanisms to compensate for information externalities are worth exploring.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.