Wild animal welfare? Stable totalitarianism? Predict which new EA cause area will go mainstream!

By Jackson Wagner @ 2024-03-11T14:27 (+48)

Long have I idly whiled away the hours browsing Manifold Markets, trading on trivialities like videogame review scores or NASA mission launch dates. It's fun, sure -- but I am a prediction-market advocate who believes that prediction markets have great potential to aggregate societally useful information and improve decision-making! I should stop fooling around, and instead put my Manifold mana to some socially productive use!!

So, I've decided to create twenty subsidized markets about new EA cause areas.  Each one asks if the nascent cause area (like promoting climate geoengineering, or researching space governance) will receive $10,000,000+ from EA funders before the year 2030.
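(If you'd rather script this sort of market creation than click through Manifold's UI, here's a rough sketch against Manifold's public API -- the POST /v0/market endpoint, per https://docs.manifold.markets/api. The field names below match the docs as I remember them, and the API key and starting probability are placeholders, so double-check the docs before running anything.)

```python
import requests

API_BASE = "https://api.manifold.markets/v0"
API_KEY = "your-manifold-api-key"  # placeholder -- use the key from your own account settings

# Jan 1, 2030, 00:00 UTC, in milliseconds -- when the markets should close
CLOSE_TIME_MS = 1_893_456_000_000

def create_cause_market(cause: str) -> dict:
    """Create one binary 'will this cause area get $10M+ by 2030?' market."""
    resp = requests.post(
        f"{API_BASE}/market",
        headers={"Authorization": f"Key {API_KEY}"},
        json={
            "outcomeType": "BINARY",
            "question": f"Will {cause} receive $10,000,000+ from EA funders before 2030?",
            "closeTime": CLOSE_TIME_MS,
            "initialProb": 30,  # arbitrary starting probability; adjust per cause
        },
    )
    resp.raise_for_status()
    return resp.json()

# e.g. create_cause_market("climate geoengineering")
```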

My hope is that these markets can help create common knowledge around the most promising up-and-coming "cause area candidates", and help spark conversations about the relative merits of each cause.  If some causes are deemed likely-to-be-funded-by-2030, but little work is being done today, that could even be a good signal for you to start your own new project in the space!

"Nuka Zaria?"

Without further ado, here are the markets:

Animal Welfare


Global Health & Development

Institutional improvements

Investing

(Note that the resolution criteria for these markets differ from those of the other questions, since investments are different from grants.)

X-Risk

Artificial Intelligence

Transhumanism

Moral philosophy

I encourage you to trade on these markets, comment on them, and boost/share them -- put your Manifold mana to good use by trying to predict the future trajectory of the EA movement!  Here is one final market I created, asking which three of the cause areas above will receive the most support between now and 2030.

Resolution details & other thoughts

The resolution criteria for most of these questions involve looking at publicly available grantmaking documentation (like this OpenPhil website, for example), adding up all the grants that I believe qualify as going towards the stated cause area, and seeing if the grand total exceeds ten million dollars.  Since I'm specifically interested in how the EA movement will grow and change over time, I will only be counting money from "EA funders" -- organizations like OpenPhil, LTFF, SFF, Longview Philanthropy, Founders Pledge, GiveWell, etc. will count, while money from "EA-adjacent" sources (like, say, Patrick Collison, Yuri Milner, the Bill & Melinda Gates Foundation, Elon Musk, Vitalik Buterin, Peter Thiel, etc.) won't count even if it goes directly to one of the mentioned cause areas.  This is obviously a fuzzy distinction, but I'll try my best.  (ACX Grants is surely EA, right?  How about grants made through some future EA-inspired impact market?  Or what if some of the aforementioned billionaires move even closer to the EA worldview over the next few years?)
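To make the bookkeeping concrete, here's roughly the tally I have in mind, sketched in Python. The CSV columns ("Grant", "Focus Area", "Amount", "Date") and the ISO date format are just my guesses at what a funder's grants-database export might look like -- real exports will differ, the resolution window is explained in the next paragraph, and the judgment calls about who counts as an "EA funder" stay manual:

```python
import csv
from datetime import date

THRESHOLD = 10_000_000
# Forward-looking resolution window (see the threshold discussion below)
WINDOW_START, WINDOW_END = date(2023, 11, 1), date(2029, 12, 31)

def cause_total(csv_path: str, keywords: list[str]) -> int:
    """Sum qualifying grants toward one cause area within the resolution window."""
    total = 0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            granted = date.fromisoformat(row["Date"])  # assumes YYYY-MM-DD dates
            if not WINDOW_START <= granted <= WINDOW_END:
                continue
            # Crude keyword match against the grant's title and focus area
            text = f"{row['Grant']} {row['Focus Area']}".lower()
            if any(kw in text for kw in keywords):
                total += int(float(row["Amount"].strip("$").replace(",", "")))
    return total

# e.g., does the georgism / land-reform market resolve YES?
print(cause_total("openphil_grants.csv", ["georgism", "land value tax"]) >= THRESHOLD)
```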

I chose $10m because that threshold hopefully indicates an idea which has solidly moved beyond the "exploratory research" phase, into a phase where we're trying to build organizations and perform serious real-world interventions.  The $10m threshold is forward-looking (grants given out before the markets' creation don't count; only grants made between Nov 2023 and Dec 2029 do), but for comparison's sake, here is a sampling of areas that have received >$10m over the past decade from OpenPhil alone:


Animal Welfare

Global health & development

Institutional improvements

Transhumanism & moral philosophy

If you have suggestions for more areas, let me know!  Or just make your own markets with the same format, and tag them as "New EA Cause Area?".  Finally, thanks to Nuno's big list of cause candidates for inspiration & comprehensiveness!


abrahamrowe @ 2024-03-11T17:32 (+17)

Removed

Jackson Wagner @ 2024-03-12T08:25 (+4)

Yeah, I wondered what threshold to set things at -- $10m is a pretty easy bar for some of these areas, since of course some of my listed cause areas are more niche / fringe than others. I figure that for the highest-probability markets, where $10m is considered all but certain, maybe I can follow up with a market asking about a $50m or $100m threshold.

I agree that $10m isn't "mainstream" in the sense of joining the pantheon alongside biosecurity, AI safety, farmed animal welfare, etc. But it would still be a big deal to me if, say, OpenPhil doubled their grantmaking to "land use" and split the money equally between YIMBYism and Georgism. Or if mitigating stable-totalitarianism risk got as much support as "progress studies"-type stuff. And $10m of grants towards studying grabby aliens or the simulation hypothesis would definitely be surprising!

SummaryBot @ 2024-03-11T15:44 (+1)

Executive summary: The author creates 20 prediction markets on Manifold Markets to forecast which new EA cause areas will receive significant funding ($10M+) by 2030, in order to spark conversations and identify promising areas for future work.

Key points:

  1. The markets cover potential new EA cause areas across animal welfare, global health, institutional improvements, investing, existential risk, AI, transhumanism, and moral philosophy.
  2. Funding of $10M+ from EA sources between Nov 2023 and Dec 2029 is the criterion for a cause area being considered "mainstream".
  3. The author hopes these markets will help create common knowledge around promising cause areas and inspire people to start new projects in likely-to-be-funded areas.
  4. The $10M threshold is chosen to indicate a cause area has moved beyond exploratory research to serious real-world interventions.
  5. The author provides examples of cause areas that have already received >$10M from OpenPhil alone in the past decade for comparison.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.