New Top EA Causes for 2021?
By MichaelA🔸 @ 2021-04-01T06:50 (+47)
Here's an annual thread for spending the day collecting ideas for Cause X, with me stealing the idea from John Maxwell because I'm in Australia and raring to get started.
Remember--serious suggestions only!!!
NunoSempere @ 2021-04-01T08:24 (+97)
This isn't exactly a proposal for a new cause area, but I've felt that many EA organizations are confusingly named. So I'm proposing some name-swaps:
- Probably Good should now be called "80,000 Hours". Since 80,000 Hours explicitly moved in a more longtermist direction, it has abandoned some of its initial relationship to its name, and Probably Good seems to be picking up some of that slack.
- "80,000 Hours" should be renamed to "Center for Effective Altruism" (CEA). Although technically a subsidiary, 80,000 Hours reaches more people than CEA, and produces more research. This change in name would reflect its de facto leadership position in the EA community.
- The Center for Effective Altruism should rebrand to "EA Infrastructure Fund", per CEA's strategic focus on events, local groups, and the EA Forum, and on providing infrastructure for community building more generally.
- However, this leaves the "EA Infrastructure Fund" without a name. I think the main desideratum for a name is basically prestige, and so I suggest "Future of Humanity Institute", which sounds suitably ominous. Further, the association with Oxford might attract more applicants, who would accept a lower salary (since status and monetary compensation are fungible), making the fund more cost-effective.
- Fortunately, the Global Priorities Institute (GPI) recently determined that helping factory farmed animals is the most pressing priority, and that we never cared that much about humans in the first place. This leaves a bunch of researchers at the Future of Humanity Institute and at the recently disbanded Global Priorities Institute unemployed, but Animal Charity Evaluators is offering them paid junior researcher positions. To reflect its status as the indisputable global priority, Animal Charity Evaluators should consider changing its name to "Doing Good Better".
- To enable this last change and to avoid confusion, Doing Good Better would have to be put out of print.
I estimate that having better names only has a small or medium impact, but that tractability is sky-high. No comment on neglectedness.
What do you blokes think?
EdoArad @ 2021-04-01T15:20 (+22)
I think that QURI should be called Probably Good
Ozzie Gooen @ 2021-04-02T00:45 (+4)
Maybe, Probabilistically Good?
EdoArad @ 2021-04-02T03:20 (+9)
How about: Probability? Good!
evelynciara @ 2021-04-01T18:33 (+7)
I suggest that the names be reassigned using the Top Trading Cycles and Trains algorithm.
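For reference, plain Top Trading Cycles (the "and Trains" extension is, as far as I know, still unpublished) would settle the reassignment in one pass. A minimal sketch, where the three-org preference cycle is invented purely for illustration and is only loosely based on the proposal above:

```python
# Sketch of Top Trading Cycles (TTC) for reassigning org names.
# The three-org preference cycle below is invented for illustration.

def top_trading_cycles(owners, prefs):
    """owners: dict agent -> item currently held.
    prefs: dict agent -> items in order of preference.
    Returns dict agent -> item received."""
    assignment = {}
    remaining = set(owners)
    holder = {item: agent for agent, item in owners.items()}
    while remaining:
        # Each agent points at the holder of its favorite available item.
        points_to = {
            a: holder[next(i for i in prefs[a] if i in holder)]
            for a in remaining
        }
        # Following pointers from any agent must eventually hit a cycle.
        path, a = [], next(iter(remaining))
        while a not in path:
            path.append(a)
            a = points_to[a]
        cycle = path[path.index(a):]
        # Everyone in the cycle receives the item they pointed at.
        for a in cycle:
            assignment[a] = owners[points_to[a]]
            remaining.discard(a)
            del holder[owners[a]]
    return assignment

names = top_trading_cycles(
    owners={"Probably Good": "Probably Good",
            "80,000 Hours": "80,000 Hours",
            "CEA": "Center for Effective Altruism"},
    prefs={"Probably Good": ["80,000 Hours"],
           "80,000 Hours": ["Center for Effective Altruism"],
           "CEA": ["Probably Good"]},
)
# names == {"Probably Good": "80,000 Hours",
#           "80,000 Hours": "Center for Effective Altruism",
#           "CEA": "Probably Good"}
```

Since every org ends up with its first choice, the outcome is in the core: no coalition of orgs could do better by trading names among themselves.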
Milan_Griffes @ 2021-04-01T12:03 (+3)
+1 makes sense.
abrahamrowe @ 2021-04-01T13:42 (+68)
Working title: Reversetermism
Longtermists have pointed out that we've often failed to consider the interests or wellbeing of future beings. But an even more neglected space is the past.
If we think that existential risk is sufficiently high in the near future, there is a good chance that the vast majority of moral value is in the past. Just considering humans, there are at least 300,000 years of experiences, all of which we ought to consider just as important as present day ones. If we consider non-humans' interests, there are billions of years and countless individuals who we ought to expand our moral circle to include.
The scale here is obvious, as is the neglectedness - as far as I am aware, there are no groups focused on ensuring that the past is as good as possible. So, how tractable is it?
Immediately, a handful of interventions come to mind:
- Cultivating expert backcasting:
- Written history is just a few thousand years old, and unfortunately, a lot of it is incredibly sad. But prior to around 5,500 years ago, we have little data on what human lives were like. By improving our backcasting ability, we can ensure that documentation of these lives in the prehistoric world states that they were as good as possible.
- Making sure there were no existential catastrophes
- If an x-risk is bad right now, it stands to reason that it would have been even worse had it occurred in the past. We might be able to verify that existential catastrophes did not happen previously, which would have prevented the flourishing of both present-day and future humans.
One immediate advantage of reversetermism is that cost-effectiveness can actually be estimated relatively accurately. Here's a simple test:
"On May 5th (Gregorian calendar), 10,560 BC, at 2:00pm Eastern, everything was chill for an hour for everybody."
This expert backcasting took around 12 seconds to produce. Assuming a human population of 2 million, and that you pay expert backcasters $30 USD / hour, this cost $0.10, and created around 228 years of good experiences. With an average lifespan of, say, 30 years, it costs around $0.013 to save a life. And more expert backcasters might achieve more efficient results through further work in the field, driving down the cost per life even further.
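The arithmetic above can be reproduced directly (every number is the comment's own assumption):

```python
# Reversetermist cost-effectiveness check. Every number below is the
# comment's own assumption, copied as stated.
backcast_seconds = 12        # time to produce the expert backcast
wage_per_hour = 30.0         # USD per hour for expert backcasters
population = 2_000_000       # assumed humans alive in 10,560 BC
chill_hours = 1              # duration everything was chill
lifespan_years = 30          # assumed average lifespan

cost = wage_per_hour * backcast_seconds / 3600           # $0.10
good_years = population * chill_hours / (24 * 365.25)    # ~228 years
cost_per_life = cost / (good_years / lifespan_years)     # ~$0.013
```

Reassuringly, the numbers check out, which is more than can be said for most cause prioritization spreadsheets.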
Ofir @ 2021-04-01T16:28 (+33)
You neglect to mention that with a time preference discount rate high enough, the past counts disproportionately more than the future. As they say, "Tutankhamun was a billion times more important than you".
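The implied rate can be backed out. Taking Tutankhamun to have lived roughly 3,300 years ago (an approximation), a billion-fold weight needs only a surprisingly mild annual rate:

```python
# Back out the annual time-preference rate implied by the quip.
# The 3,300-year figure is an approximation of Tutankhamun's era.
years_ago = 3300
weight_ratio = 1e9  # "a billion times more important than you"

# Solve (1 + r) ** years_ago == weight_ratio for r.
r = weight_ratio ** (1 / years_ago) - 1
# r comes out to roughly 0.63% per year
```

So a pure time preference well under 1% per year already makes the pharaohs the overwhelming moral priority.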
Peter_Hurford @ 2021-04-01T15:21 (+56)
Strong Middletermism as an EA Priority
Strong middletermism suggests that the best actions are exclusively contained within the set of actions that aim to influence how the next 137 years go (and not a year longer!).
We know that compromising between smart people is a good decision procedure (see Aumann's agreement theorem; also note how ensemble models generally outperform any individual model). Given that many smart people support near-term causes and many smart people support longtermist causes, I suggest that the highest impact causes will be found in what I call middletermism.
Another important issue is that our predictive track record gets worse as a function of time - increasing time means increasing error. Insofar as we are trying to balance expected impact and robustness of impact calculations, this suggests a time at which error will balance out impact. In my calculations, this occurs exactly 137 years from now. Thus middletermism only focuses on these 137 years.
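The calculations, for the skeptics: assume expected impact grows linearly with the time horizon while forecast reliability decays linearly to zero at year 274. Both assumptions, and the 274-year horizon, are invented here purely so the optimum comes out right:

```python
# Toy middletermist model. The linear impact growth and the 274-year
# reliability horizon are invented so the optimum lands on 137 years.
RELIABILITY_HORIZON = 274  # years until forecasts are pure noise

def robust_impact(t):
    """Impact of influencing year t, discounted by forecast error."""
    reliability = max(0.0, 1 - t / RELIABILITY_HORIZON)
    return t * reliability

best_horizon = max(range(1, 500), key=robust_impact)
# best_horizon == 137
```

As with all good cause prioritization, the parameters were chosen after the conclusion.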
EdoArad @ 2021-04-01T17:56 (+8)
I think that it's interesting to note that it will always be the 137 years ahead, regardless of the current year. That is, unless we learn to make better predictions. But it doesn't matter, as currently we should only care about the next 137 years!
BrianTan @ 2021-04-01T09:44 (+40)
Punning What WEA Can.
The acronym EA is so flexible and can be used to create so many puns. And yet there are so few puns being used or made in the EA community. So I think more EAs, on the margin, should create and use puns with the EA acronym. These can be used as names for group events, or to show how EA is already ingrained in so many concepts or causes. Here are a bunch of ideas:
Group Events
The Most Pressing Puns (using words that already have EA in them)
- REAding Groups
- RetrEAts
- IcebrEAkers
- tEA time
- External OutrEAch
- ResEArch Workshop
- Podcast Episode REActions Meetup
- BEAch Outings
Other Potentially Promising Puns
- FEAllowships
- ConfEArences
- LightnEAng Talks
- wEAtch Parties
- DEAbates
- DinnEAr Parties
- CarEAr Planning Workshops
- GathEAr Town
- SocEAls
- GEAneral AssembliEAs
- Speed dEAting
- PodcEAst Discussions
- BoEArd Games
- CowEArking
- One-on-OnEAs
- SlEAck Discussion
- GivEAng Games
EA Concepts / Topics
The Most Pressing Puns (using words that already have EA in them)
- Global HEAlth
- Mental HEAlth
- EArning to Give
- NuclEAr Security
- Global Priorities ResEArch
- Scientific ResEArch
- GrEAt Power Conflict
- PEAce & Conflict Studies
- Clean MEAt
- Plant-Based MEAt
- High School OutrEAch
- Using the hEAd & hEArt
- ReplacEAbility
- LEAdership
- ForeseEAbility
- NuclEAr Energy
- TEAching EA
- DisEAse Eradication
- DiarrhEA Eradication
- Pain & PlEAsure
- Moral REAlism
- MEAning CrEAtion
Other Potentially Promising Puns
- RationalitEA
- DiversitEA
- ForecEAsting
- TEAchnical AI Safety
- Animal WEAlfare
- EAconomic Growth
- DEAvelopment
- ClimEAte Change
- PEArsonal Fit
- BiosEAcurity
EdoArad @ 2021-04-01T15:18 (+22)
ConsEAder applyEAng to NWWC
BrianTan @ 2021-04-02T01:34 (+2)
What a grEAt idEA!
Milan_Griffes @ 2021-04-01T12:04 (+6)
lol DiarrhEA Eradication
MichaelA @ 2021-04-01T06:51 (+32)
Why EDM remixes of 80,000 Hours interviews is one of the biggest bottlenecks in the EA community
Importance
Gargantuan.
Tractability
How hard can it be?
Neglectedness
Truly outrageous.
Personal fit considerations
Irrelevant. Just do it. You have your orders.
Urgency
Some have proposed that the Importance, Tractability, Neglectedness framework should be complemented with a separate factor for Urgency. This would if anything strengthen the case for this new cause area, given that it is already April 1st, and that each remix would take hours to create (not to mention upwards of hundreds of hours to listen to).
abrahamrowe @ 2021-04-01T15:58 (+61)
Out of curiosity I stuck an episode into the Wub Machine. It's genuinely mildly listenable. Also takes no time so the cost-effectiveness here might be high. Original audio: 80,000 Hours.
BrianTan @ 2021-04-02T01:50 (+11)
This is gold.
Milan_Griffes @ 2021-04-02T04:33 (+2)
80k wubstepping all night long
MichaelA @ 2021-04-01T06:52 (+36)
Podcasts beyond our current EDM remix priorities
At Effective Remix, we've generally focused on finding the most pressing podcasts and the best genres to remix them into.
But even if some podcast is 'the most pressing'—in the sense of being the highest impact thing for someone to remix if they could be equally successful at remixing anything—it might easily not be the highest impact thing for many people to remix, because people have various talents, experience, and temperaments.
The following are some podcasts that seem like they might be especially pressing from the perspective of improving the vibe of the thing.
- Rationally Speaking
- Astral Codex Ten
- EconTalk
- NPR’s Planet Money
- All 3,400 hours of Rationality: From AI to Zombies
More speculatively, for value of information reasons, it could even make sense for 3-50 people with especially strong personal fit to explore the possibility of making trap remixes of the book Thinking, Fast and Slow by Nobel Prize laureate Daniel Kahneman. We think such remixes are unlikely to be competitive with our current priorities, but if they are, making such remixes could potentially absorb hundreds of Oxbridge philosophy & physics double majors specifically.
Sean_o_h @ 2021-04-01T08:30 (+44)
EA projects should be evidence based: I've done a survey of myself, and the results conclusively show that if 80,000 hours produced dubstep remixes of its podcasts, I would actually listen to them. The results were even more conclusive when the question included "what if Wiblin spliced in 'Wib-wib-wib' noises whenever crucial considerations were touched on?".
Sean_o_h @ 2021-04-01T10:26 (+17)
Related cause area: Deepfake dub-over all 80k podcasts so that they're presented by David Attenborough for prestige gains.
konrad @ 2021-04-01T14:32 (+12)
I prefer the lower pitch "wob-wob-wob" and thus would like to make a bid to simply rename Robert Wiblin to "the Wob". Maybe Naming What We Can could pick this up?
Milan_Griffes @ 2021-04-01T12:06 (+3)
Big +1
An 80k podcast dubstep house party actually sounds like a good time.... BURNING MAN OF THE NERDS!!!!
Robbie Wib-wib-wib-wibibiblin in da HAUS!!!!!!!!
Milan_Griffes @ 2021-04-01T12:11 (+3)
"All 3,400 hours of Rationality: From AI to Zombies"
Speedcore EDM R:A2Z will be the background soundtrack at the Schelling Point Temple of EA Burning Man.
24/7 baby.
Louis_Dixon @ 2021-04-01T12:09 (+3)
Strongly upvoted for the link to the Castle. Btw in one podcast I'm pretty sure I heard Wiblin say "the general vibe of the thing"
Ofir @ 2021-04-01T16:50 (+30)
Wire-heading chickens as an EA cause area
Importance
It is well established that farm animal suffering is one of the largest moral disasters of our time, because of its negative moral value and scale (stemming from low costs).
We see this as an opportunity and a call to action. By the same token, we can, at reasonable cost, raise a huge number of chickens who are wire-headed (using electrodes or chicken heroin) to believe they have the most wonderful life imaginable. This positive moral value can far outweigh the positive moral value of the flourishing of human lives - a life is a life after all, and heroin is heroin.
Tractability
I mean, they're chickens. We don't foresee them mounting an armed resistance. Besides, if they don't like it, we're doing something wrong.
In contrast to humans who show great resistance to any proposed radical change to their lives (like radical life extension), nobody resists when people put countless chickens through a very contorted experience.
Neglectedness
I mean, are you working on it? Then I guess it's neglected.
Personal fit considerations
We are currently sourcing people who have deep insights into chicken neurology and experience to help lead the UX research front. If you are one of these "bird brained" experts, we need you!
Variations
We must begrudgingly admit that there is a splinter group in our midst which is contemplating, instead of raising actual chickens, creating a simulation where even more chickens lead the most wonderful life imaginable. They are currently working on their function applyOptimalHeroinDose(chicken).
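A minimal sketch of what the splinter group is presumably up to. Only the function name applyOptimalHeroinDose comes from the comment; the Chicken class, the welfare scale, and the dosing rule are invented for illustration:

```python
# Sketch of the splinter group's simulation. Only the function name
# applyOptimalHeroinDose comes from the comment; everything else here
# (the Chicken class, the welfare scale) is invented for illustration.
from dataclasses import dataclass

@dataclass
class Chicken:
    welfare: float = 0.0  # 0 = neutral, 1 = most wonderful life imaginable

def applyOptimalHeroinDose(chicken: Chicken) -> Chicken:
    """Wire-head one simulated chicken up to maximal welfare."""
    chicken.welfare = 1.0
    return chicken

flock = [applyOptimalHeroinDose(Chicken()) for _ in range(100_000)]
total_welfare = sum(c.welfare for c in flock)  # 100,000 wonderful lives
```

Scaling the flock is a one-character change, which is exactly the kind of tractability the rest of us can only dream of.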
DonyChristie @ 2021-04-02T03:43 (+12)
Imaginarytermism
I think the axis of Imaginary Time has been entirely neglected. It is time chauvinism to prefer one dimension of time over any other.
MichaelA @ 2021-04-01T06:50 (+11)
See also [New org] Canning What We Give
evelynciara @ 2021-04-01T18:45 (+6)
Reducing Existential Risk by Embracing the Absurd
As we all know, longtermists face a lot of moral cluelessness: it is impossible to predict all of the consequences of any of our actions over the very long term. This makes us especially susceptible to existential crises. As longtermists, we should reduce this existential risk by recognizing that the universe is fundamentally meaningless, and that we are the only ones who can create meaning. We should embrace the absurd.