What are words, phrases, or topics that you think most EAs don't know about but should?

By Ozzie Gooen @ 2020-01-21T20:15 (+30)

I think there's a lot of great literature that's relevant for EA purposes. Sometimes specific phrases can act as useful keywords.

If we use language similar to that of other academic fields, then:

  1. Other groups can understand Effective Altruist writing more easily.
  2. Effective Altruists can more easily search for existing literature and discussion.

I've recently been surveying different fields and finding a lot of terminology that I think is both (1) not currently used by many people here, and (2) likely to be interesting to them.

This can be as simple as an interesting Wikipedia page. I think there are tons of interesting Wikipedia pages I don't yet know to search for, but would get a lot of value out of if I did.

When submitting, if it's not obvious, I suggest adding information about why this could be interesting to other EAs.


MaxRa @ 2020-01-23T21:53 (+24)

Some while ago, Peter McIntyre and Jesse Avshalomov compiled a list of concepts they deemed worth knowing. I can imagine that many are pretty well known within EA, but I'll go out on a limb and say I wouldn't be surprised if most EAs will find more than one useful new concept. https://conceptually.org/concepts

Ozzie Gooen @ 2020-01-21T20:30 (+23)

Consilience

The principle that evidence from independent, unrelated sources can "converge" on strong conclusions

This word can arguably be used to describe the "Many Weak Arguments" aspect of the "Many Weak Arguments vs. One Relatively Strong Argument" post. JonahSinick pointed that out in that post.

Why this is interesting
Consilience is important for evaluating claims. There's a fair bit of historical discussion and evidence now showing how useful it can be to get a variety of evidence from many different sources.

Linch @ 2020-01-24T02:14 (+21)

Credible Interval:

In Bayesian statistics, a credible interval is an interval within which an unobserved parameter value falls with a particular probability. It is an interval in the domain of a posterior probability distribution or a predictive distribution.[1] The generalisation to multivariate problems is the credible region. Credible intervals are analogous to confidence intervals in frequentist statistics,[2] although they differ on a philosophical basis:[3] Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. Also, Bayesian credible intervals use (and indeed, require) knowledge of the situation-specific prior distribution, while the frequentist confidence intervals do not.

Credence:

Credence is a statistical term that expresses how much a person believes that a proposition is true

Why this matters:

It seems like a lot of questions EAs are interested in involve subjective Bayesian probabilities. A lot of people misuse the frequentist term "confidence interval" for these purposes (to be fair, this isn't just a problem with EAs/rationalists; I've seen scientists make this mistake too, akin to how the p-value is commonly misunderstood). I think it's helpful to use the right statistical jargon so we can more easily engage with the statistical literature, and with statisticians.
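To make the distinction concrete, here is a minimal sketch comparing the two intervals for a hypothetical proportion estimate; the data, the uniform Beta(1, 1) prior, and the normal-approximation confidence interval are all illustrative assumptions.

```python
# Sketch: Bayesian credible interval vs. frequentist confidence interval
# for an unknown proportion. Data and prior are made up for illustration.
import math
from scipy import stats

successes, failures = 7, 3
n = successes + failures

# Bayesian 95% credible interval: with a Beta(1, 1) prior, the posterior is
# Beta(1 + successes, 1 + failures); read off its 2.5% and 97.5% quantiles.
posterior = stats.beta(1 + successes, 1 + failures)
credible = posterior.ppf([0.025, 0.975])

# Frequentist 95% confidence interval (normal approximation): the bounds are
# the random quantities; the true proportion is treated as fixed.
p_hat = successes / n
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
confidence = (p_hat - half_width, p_hat + half_width)

print("95% credible interval:  ", credible)
print("95% confidence interval:", confidence)
```

The numbers matter less than the interpretation: the credible interval is a probability statement about the parameter given the data and prior, while the confidence interval is a statement about the procedure that generated the bounds.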

Denise_Melchin @ 2020-09-24T11:20 (+7)

Thank you for writing this! I once failed a job interview because what I learned from the EA community as a 'confidence interval' was actually a credible interval. Pretty embarrassing.

Linch @ 2020-09-24T11:51 (+4)

Wow that's an awfully specific way to fail a job interview! But I'm glad you've learned something from it, at least?

Ozzie Gooen @ 2020-01-21T20:18 (+16)

Noble Cause Corruption

From Wikipedia:

Noble cause corruption is corruption caused by the adherence to a teleological ethical system, suggesting that people will use unethical or illegal means to attain desirable goals, a result which appears to benefit the greater good. Where traditional corruption is defined by personal gain, noble cause corruption forms when someone is convinced of their righteousness, and will do anything within their powers to achieve the desired result. An example of noble cause corruption is police misconduct "committed in the name of good ends" or neglect of due process through "a moral commitment to make the world a safer place to live."

Why this is interesting
I think one serious concern around consequentialist thought is that it can be used in dangerous ways. This term describes some of that, and the corresponding literature provides examples that seem similar to what I'd expect from future people who misuse EA content.

Larks @ 2020-09-24T14:14 (+2)
Nobel Cause Corruption

Is this about how the Peace Prize is given out to either warmongers or ineffective activists rather than professional diplomats and international supply chain managers?

Khorton @ 2020-01-22T08:39 (+11)

Governance

In political science literature, "governance" refers to how something is overseen and managed, whether or not that's done by Government. For example, if your AI system has to comply with a few regulations, but you're also responsible to your company's ethics board and shareholders, that's all governance.

Relevant for

EAs in politics, policy or institutional change. Particularly useful for EAs interested in AI policy where a wider conception of governance is arguably much more desirable than direct government regulation.

Khorton @ 2020-01-22T08:28 (+11)

Endogenous institutions

In political and economic literature, institutions include formal groups (eg the Civil Service, the Church of England, the monarchy) but also the overall "rules of the game" (eg to what extent politicians are comfortable accepting bribes/gifts/political donations in exchange for political influence). These rules affect the people "playing the game" eg lobbyists and politicians, but they're also created by them.

Relevant for

EAs working on politics, lobbying or institutional change.

weeatquince @ 2020-01-22T18:34 (+9)

Optimiser's curse / Regression to the mean

On how trying to optimise can lead you to make mistakes
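As a minimal simulation sketch of the optimiser's curse (illustrative numbers only): even when every option is equally good, the option with the best noisy estimate tends to look better than it really is.

```python
# Sketch of the optimiser's curse: pick the option with the best *estimated*
# value and compare that estimate with the option's *true* value. The numbers
# (10 options, equal true values, unit noise) are illustrative assumptions.
import random

random.seed(0)
n_options, n_trials = 10, 10_000
true_value = 1.0          # every option is genuinely equally good
overestimate_total = 0.0

for _ in range(n_trials):
    estimates = [true_value + random.gauss(0, 1) for _ in range(n_options)]
    best_estimate = max(estimates)          # value we *think* the winner has
    overestimate_total += best_estimate - true_value

print("Average overestimate of the chosen option:",
      overestimate_total / n_trials)        # noticeably greater than 0
```

Regression to the mean is the flip side: if the chosen option were measured again, its new estimate would, on average, fall back towards the true value.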

Khorton @ 2020-01-22T18:43 (+8)

Related: Goodhart's Law

"When a measure becomes a target, it ceases to be a good measure"

https://en.m.wikipedia.org/wiki/Goodhart's_law

MichaelA @ 2020-08-23T18:40 (+2)

I second the importance of these three terms/concepts.

In case anyone stumbles across this post in future, here are two sources on the optimizer's curse. (I thought the first was great. I personally disagree with the second on various points, but mention it as other people seemed to find it good and there's good discussion in the comments.)

And here are some sources on Goodhart's law.

Ozzie Gooen @ 2020-01-21T20:37 (+9)

The Cooperative Principle

The cooperative principle describes how people achieve effective conversational communication in common social situations—that is, how listeners and speakers act cooperatively and mutually accept one another to be understood in a particular way.

There are 4 corresponding maxims. I think the main non-obvious ones are:

Maxim of quantity:

  1. Make your contribution as informative as is required (for the current purposes of the exchange).
  2. Do not make your contribution more informative than is required.

Maxim of relevance

  1. Be relevant to the discussion. (For instance, when asked "What would you like for lunch?" and you respond "I would like a sandwich", you are expected to be answering that very question, not making an unrelated statement.)

I think this video explains this well.

Why this is interesting
I've definitely been in conversations where bringing up the maxims of quantity and relevance would have been useful. Conversation and discussion can be quite difficult. We do a lot of that.

Stefan_Schubert @ 2020-01-21T20:48 (+15)

Sometimes the term "the Gricean maxims" (or "Grice's maxims") is used instead of "the Cooperative Principle" as the principal term. I personally find it more memorable, since "the Cooperative Principle" could mean so many things.

Thomas Kwa @ 2020-09-25T00:27 (+1)

Can you give an example of such a conversation, as well as the thought process towards bringing them up? I hear about conversational principles like these, but I don't know how to get from "vague feeling that something is wrong with the conversation" to "I think you're confusing me with excess information".

Ozzie Gooen @ 2020-09-25T09:12 (+2)

A very simple example might be someone saying, "What's up?" and the other person saying "The sky." "What's up?" assumes a shared amount of context. To be relevant, the response should treat it as asking how the other person is doing.

There are a bunch of YouTube videos on the topic; I recall some go into examples.

G Gordon Worley III @ 2020-01-22T20:13 (+7)

Normalization of deviance

"Social normalization of deviance means that people within the organization become so much accustomed to a deviant behavior that they don't consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety" [5]. People grow more accustomed to the deviant behavior the more it occurs [6] . To people outside of the organization, the activities seem deviant; however, people within the organization do not recognize the deviance because it is seen as a normal occurrence. In hindsight, people within the organization realize that their seemingly normal behavior was deviant.

(from Wikibooks)

I think this generalizes to cases where there is a stated norm, that norm is regularly violated, and the violation of the norm becomes the new norm.

Relevance

Scrupulous people, or people otherwise committed to particular stances, may be concerned about ways in which norms are not upheld around, for example, truth-telling, donating, or veganism.

Ozzie Gooen @ 2020-01-21T20:22 (+6)

Deepity

The term refers to a statement that is apparently profound but actually asserts a triviality on one level and something meaningless on another. Generally, a deepity has (at least) two meanings: one that is true but trivial, and another that sounds profound, but is essentially false or meaningless and would be "earth-shattering" if true.

Why this is interesting
I mostly think this is just a great phrase for describing a lot of the difficult language I occasionally see used in moral discussions.

weeatquince @ 2020-01-22T18:30 (+5)

Knightian uncertainty / deep uncertainty

a lack of any quantifiable knowledge about some possible occurrence

This means any situation where uncertainty is so high that it is very hard / impossible / foolish to quantify the outcomes.

To understand this it is useful to note the difference between uncertainty (e.g. the chance of a nuclear war this century) and risk (e.g. the chance of a coin coming up heads).

The process for making decisions that involve uncertainty may be very different from the process for making decisions that involve risk. The optimal tactic for making good decisions in situations of deep uncertainty may not be to just quantify the situation.


Why this matters

This could drastically change the causes EAs care about and the approaches they take.

This could alter how we judge the value of taking action that affects the future.

This could mean that the "rationalist"/LessWrong approach of "shut up and multiply" for making decisions might not be correct.

For example, this could shift decisions away from a naive expected value based on outcomes and probabilities, and towards favoring courses of action that are robust to failure modes, have good feedback loops, have short chains of effects, etc.

(Or maybe not, I don’t know. I don’t know enough about how to make optimal decisions under deep uncertainty but I think it is a thing I would like to understand better.)
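As a rough, purely illustrative sketch of what that shift could look like: compare ranking two hypothetical actions by expected value under assumed scenario probabilities with ranking them by worst-case outcome (maximin). The scenarios, probabilities, and payoffs below are made up.

```python
# Sketch (illustrative numbers only): under deep uncertainty we may not trust
# the scenario probabilities, so compare a plain expected-value ranking with a
# worst-case ("maximin") ranking of the same two hypothetical actions.
scenarios = ["scenario_a", "scenario_b", "scenario_c"]
assumed_probs = [0.5, 0.3, 0.2]            # a guess we may not actually trust

payoffs = {
    "high_upside_action": [100, 10, -50],  # great in scenario_a, bad in c
    "robust_action":      [30, 25, 20],    # decent everywhere
}

for action, outcomes in payoffs.items():
    expected = sum(p * x for p, x in zip(assumed_probs, outcomes))
    worst_case = min(outcomes)
    print(f"{action}: expected value = {expected:.1f}, worst case = {worst_case}")
# Expected value favours the high-upside action; maximin favours the robust one.
```

Which decision rule is appropriate, and when, is exactly the kind of question the deep-uncertainty literature tries to address.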


See also

The difference between "risk" and "uncertainty". "Black swan events". Etc