What important questions are missing from Metaculus?
By Charles Dillon 🔸 @ 2021-05-26T14:03 (+38)
Metaculus currently has over 1,000 open forecasting questions, many of which are longtermist or EA-focused.
These include several EA-focused categories, e.g. the EA Survey 2025 series, an Alt-Protein Tournament, Animal Welfare, the "Ragnarok" global catastrophic risks series, and other questions about the distant future.
I am volunteering at Rethink Priorities doing forecasting research, and I am looking to see whether there are EA-related questions with long time horizons (>5 years) that people are interested in seeing predictions on. If there are, I am willing to put some time into operationalising them and submitting them to Metaculus.
I think this would be directly useful both for those who have these questions and for others who find them interesting, and it would also help expand the database of such questions for the purpose of improving long-term forecasting.
This question is part of a project of Rethink Priorities.
It was written by Charles Dillon, a volunteer for Rethink Priorities. Thanks to Linch Zhang for advising on the question. If you like our work, please consider subscribing to our newsletter. You can see all our work to date here.
MichaelDickens @ 2021-05-26T14:57 (+21)
I have a doc on my computer with some notes on Metaculus questions that I want to see, but either haven't gotten around to writing up yet, or am not sure how to operationalize. Feel free to take any of them.
Giving now vs. later parameter values
- "In 2030, I personally will either donate at least 10% of my income to an EA cause or will work directly on an EA cause full time"
- attempting to measure value drift
- or maybe ask about Jeff Kaufman or somebody like that because he's public about his donations
- or make a list of people, and ask how many of them will fulfill the above criteria
- "According to the EA Survey, what percent of people who donated at least 10% in 2018 will donate at least 10% in 2023?"
- Not sure if it's possible to derive this info
- According to David Moss in Rethink Priorities Slack, it's probably not feasible to get data on this
- "When will the Founders Pledge's long-term investment fund make its last grant?"
- https://forum.effectivealtruism.org/posts/8vfadjWWMDaZsqghq/long-term-investment-fund-at-founders-pledge
- because its investments run out, value drift, or expropriation
- Have they actually established this fund yet?
- "When the long-term investment fund run by Founders Pledge ceases to make grants, will it happen because the fund is seized by an outside actor?"
- by a government, etc.
- "When will the longest-lived foundation or DAF owned by an EA make its last grant?"
- EA defined as someone who identifies as an EA as of this prediction
- the DAF must already exist and contain nonzero dollars
- question about Rockefeller/Ford/Gates foundation longevity
- best achievable QALYs per dollar in 2030 according to ACE, etc.
- "Will the US stock market close by 2120?"
- A stock market is considered to have closed if all public exchanges cease trading for at least one year
- Could also ask about any developed market, but I think it makes most sense to ask about a single country
Open research questions
- "By 2040, there will be a broadly accepted answer on how to construct a rank ordering of possible worlds where some of the worlds have a nonzero probability of containing infinite utility."
- "broadly accepted" doesn't mean everyone agrees with its prescriptions, but at least people agree that it's internally consistent and largely aligns with intuitions on finite-utility cases
- "In 2121, it will be broadly agreed that, all things considered, donations to GiveDirectly were net positive."
- attempt at addressing cluelessness
- "broadly agreed" is hard to define in a useful way. it's already broadly agreed right now, in spite of cluelessness
- maybe "broadly agreed among philosophers who have written about cluelessness" but this might limit your sample to like 4 people
- "By 2040, there will be a broadly accepted answer on what prior to use for the lifespan of humanity." See https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1
- alternate formulation: Toby Ord and Will MacAskill both agree (to some level of confidence) on the correct prior
- "By 3020, a macroscopic object will be observed traveling faster than the speed of light."
- relevant to Beyond Astronomical Waste
Finance
- "What annual real return will be realized by the Good Ventures investment portfolio 2022-2031?"
- Can be calculated from Form 990-PF, Schedule B, Part II, which gives the gain on any assets held
- Might make more sense to look at Dustin Moskovitz's net worth
- But that doesn't account for spending
- "Will the momentum factor have a positive return in the United States 2022-2031?"
- Fama/French 12-2 momentum over a total market index
- As measured by "Momentum Factor (Mom)" on Ken French Data Library
- Gross of costs (a rough scoring sketch follows this list)
- "Will the Fama-French value factor (using E/P) be positive in the United States 2022-2031?"
- Fama-French value over a total market index (not S&P 500), measured with E/P, not B/P
- French "Portfolios Formed on Earnings/Price"
- Factor is considered positive if the high-E/P 30% portfolio (equal-weighted) outperforms the low-E/P 30% portfolio, i.e. value beats growth.
- E/P chosen due to being less subject to company structure than B/P
- "What annualized real return will be obtained by the top decile of momentum stocks in the United States 2022-2031?"
- same definitions as previous question
- "What will be the magnitude of the S&P 500's largest drawdown 2022-2031?"
- magnitude = percent decline from peak to trough (also sketched below)
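A minimal sketch of how the momentum and drawdown questions above might be scored, assuming monthly factor returns from the Ken French Data Library fetched via pandas_datareader's famafrench reader; the dataset name, column label, and the example price series are illustrative assumptions rather than part of the question definitions:

```python
import pandas as pd
import pandas_datareader.data as web

# Momentum factor ("Mom"): monthly returns in percent, 2022-01 through 2031-12.
# Dataset and column names are assumptions; check them against the resolution criteria.
mom = web.DataReader("F-F_Momentum_Factor", "famafrench",
                     start="2022-01", end="2031-12")[0]
mom.columns = mom.columns.str.strip()
cumulative = (1 + mom["Mom"] / 100).prod() - 1  # compound the monthly returns
print(f"Momentum factor, cumulative 2022-2031: {cumulative:.1%}",
      "-> resolves YES" if cumulative > 0 else "-> resolves NO")

def max_drawdown(prices: pd.Series) -> float:
    """Largest peak-to-trough decline, as a fraction (0.35 = 35%)."""
    return float((1 - prices / prices.cummax()).max())

# Hypothetical S&P 500 index levels, for illustration only.
sp500 = pd.Series([4800, 4500, 4700, 3900, 4200, 5100])
print(f"Largest drawdown: {max_drawdown(sp500):.1%}")  # (4800 - 3900) / 4800 ≈ 18.8%
```

The value-factor question could be checked the same way, by comparing the compounded returns of the high-E/P and low-E/P 30% portfolios in the "Portfolios Formed on Earnings/Price" file.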
Charles_Dillon @ 2021-06-08T13:36 (+1)
"Will the US stock market close by 2120?"
For this, would you prefer to condition on something like there being no transformative AI, or not? Sometimes these questions end up dominated by considerations like that, and it is plausible that you only care about the answer conditional on such a scenario not happening.
MichaelDickens @ 2021-06-09T21:52 (+3)
The question is intended to look at tail risk associated with stock markets shutting down. Transformative AI may or may not constitute such a risk; for example, the AI might shut down the stock market because it's going to do something far better with people's money, or it might shut down the market because everyone is turned into paperclips. So I think it should be unconditional.
Charles_Dillon @ 2021-06-10T17:22 (+1)
That's now pending review, as are a few other questions you may be interested in, though they are not identical to the ones you list.
I'll post a summary response here in a few weeks, once most of the questions I intend to write are actually live.
Charles_Dillon @ 2021-05-27T09:10 (+1)
Thanks for these!
"When will the longest-lived foundation or DAF owned by an EA make its last grant?"
- EA defined as someone who identifies as an EA as of this prediction
Just to be clear, you specifically mean to exclude not-yet-EAs who set up DAFs in, say, 2025?
"What annual real return will be realized by the Good Ventures investment portfolio 2022-2031?"
- Can be calculated by Form 990-PF, Schedule B, Part II, which gives the gain of any assets held
- Might make more sense to look at Dustin Moskovitz's net worth
- But that doesn't account for spending
It might be interesting to have forecasts on the amount of resources expected to be devoted to EA causes in the future, e.g. by more billionaires getting involved. This could be useful for questions like "how fast should Good Ventures be spending their money?": if we expect to have five more equally big donors in 2030, that might suggest they should be spending down faster than if they are still expected to be the biggest donor by a wide margin.
MichaelDickens @ 2021-05-27T16:07 (+3)
Just to be clear, you specifically mean to exclude not-yet-EAs who set up DAFs in, say, 2025?
Yes, the intention is to predict the maximum length of time that foundations and DAFs created now (or before now) can continue to exist.
It might be interesting to have forecasts on the amount of resources expected to be devoted to EA causes in the future [...]
Agreed.
MichaelA @ 2021-05-26T14:27 (+7)
This sounds like a cool idea, thanks for doing it!
One place where you could find a bunch of ideas is my Database of existential risk estimates (context here). It could be interesting to put very similar questions/statements on Metaculus and see how their forecasts differ from the estimates given by these individuals/papers (most of whom don't have any known forecasting track record). It could also be interesting to put on Metaculus:
- questions inspired by (but different from) the statements in that database
- questions inspired by what you notice there aren't any statements on
- e.g., neglected categories of risks, or risks where there are very long time-scale estimates but nothing for the next few decades
- I think authoritarianism and dystopias are examples of that
- questions that could serve as somewhat nearer-term, less extreme proxies of later catastrophes
On the other hand, forecasting existential risks (or similar things) introduces other challenges aside from being (usually) long-range. So this might not be the ideal approach for your specific goals - not sure.
(This is a somewhat lazy response, since I'm just pointing in a direction rather than giving specifics, but maybe it could still be helpful.)
MaxRa @ 2021-06-22T15:53 (+3)
Great initiative! Are you still taking questions?
Charles_Dillon @ 2021-06-30T11:51 (+3)
Yes, absolutely. To be clear, I'm not committing to writing up every suggestion, but I have already written up some questions inspired by these and by suggestions sent to me privately, and I will probably write more.
You can see the questions I've written so far here (note that not all are EA-related): https://www.metaculus.com/questions/?order_by=-activity&search=author:Charles&categories=
MichaelA @ 2021-05-27T09:51 (+3)
I'd also love for someone to turn a bunch of questions from my draft Politics, Policy, and Security from a Broad Longtermist Perspective: A Preliminary Research Agenda into forecasting questions, and many would most naturally have horizons of >5 years.
This comment is again asking you to do most of the work, in the form of picking out which questions in that agenda are about the future and then operationalising them into crisp forecasting questions. But I'll add as replies a sample of some questions from the agenda that I think it'd be cool to operationalise and put on Metaculus.
MichaelA @ 2021-05-27T09:52 (+3)
On authoritarianism and/or dystopias
- What are the main pathways by which each type of authoritarian political system could reduce (or increase) the expected value of the long-term future?
- E.g., increasing the rate or severity of armed conflict; reducing the chance that humanity has (something approximating) a successful long reflection; increasing the chances of an unrecoverable dystopia.
- Risk and security factors for (global, stable) authoritarianism
- How much would each of the “risk factors for stable totalitarianism” reviewed by Caplan (2008) increase the risk of (global, stable) authoritarianism (if at all)?
- How likely is the occurrence of each factor?
- What other risk or security factors should we focus on?
- What effects would those factors have on important outcomes other than authoritarianism? All things considered, is each factor good or bad for the long-term future?
- E.g., mass surveillance, preventive policing, enhanced global governance, and/or world government might be risk factors from the perspective of authoritarianism but security factors from the perspective of extinction or collapse risks (see also Bostrom, 2019).
- What are the best actions for influencing these factors?
- How likely is it that relevant kinds of authoritarian regimes will emerge, spread (especially to become global), and/or persist (especially indefinitely)?
- How politically and technologically feasible would this be?
- Under what conditions would societies trend towards and/or maintain authoritarianism or a lack thereof?
- What strategic, military, economic, and political advantages and disadvantages do more authoritarian regimes tend to have? How does this differ based on factors like the nature of the authoritarian regime, the size of the state/polity it governs, and the nature and size of its adversaries?
- How likely is it that relevant actors will have the right motivations to bring this about?
- How many current political systems seem to be trending towards authoritarianism?
- How much (if at all) are existing authoritarian regimes likely to spread? How long are they likely to persist? Why?
- How likely is it that any existing authoritarian regimes would spread globally and/or persist indefinitely? Why?
- How politically and technologically feasible would this be?
- Typology of, likelihoods of, and interventions for dystopias
- How likely is each type of dystopia to arise initially and then to persist indefinitely?
- How bad would each type of unrecoverable dystopia be, relative to each other, to other existential catastrophes, and to other possible futures?
- How much should we worry about recoverable or temporary equivalents of each type of unrecoverable dystopia?
- E.g., how much would each increase (or decrease) the risk of later extinction, unrecoverable collapse, or unrecoverable dystopia?
- What are the main factors affecting the likelihood, severity, and persistence of each type of dystopia?
- What would be the best actions for reducing the likelihood, severity, or persistence of each type of dystopia?
MichaelA @ 2021-05-27T09:51 (+3)
On armed conflict and military technology
- How likely are international tensions, armed conflicts of various levels/types, and great power war specifically at various future times? What are the causes of these things?
- How might shifts in technology, climate, power, resource scarcity, migration, and economic growth affect the likelihood of war?
- Are Pinker’s claims in The Better Angels of Our Nature essentially correct?
- Are the current trends likely to hold in future? What might affect them?
- How do international tensions, strategic competition, and risks of armed conflict affect the expected value of the long-term future? By what pathways?
- What are the plausible ways a great power war could play out?
- E.g., what countries would become involved? How much would it escalate? How long would it last? What types of technologies might be developed and/or used during it?
- What are the main pathways by which international tensions, armed conflicts of various levels/types, or great power war specifically could increase (or decrease) existential risks? Possible examples include:
- Spurring dangerous development and/or deployment of new technologies
- Spurring dangerous deployment of existing technologies
- Impeding existential risk reduction efforts (since those often require coordination and are global public goods)
- Sweeping aside or ushering in global governance arrangements
- Weakening (or strengthening) democracies
- Worsening (or improving) the values of various actors (e.g., reducing or increasing impartiality or inclinations towards multilateralism among the public or among political leaders)
- Changing the international system’s global governance arrangements and/or polarity (which could then make coordination easier or harder, make stable authoritarianism more or less likely, etc.)
- Serving as a “warning shot” that improves values, facilitates coordination, motivates risk reduction efforts, etc.
- How might plausible changes in variables such as climate, resource scarcity, migration, urbanisation, population size, and economic growth affect answers to the above questions?
- To what extent does this push in favour of or against work to affect those variables (e.g., climate change mitigation, open borders advocacy, improving macroeconomic policy)?
- What are the best actions for intervening on international tensions, strategic competition, risks of armed conflict, or specifically the ways that these things might harm the long-term future?
- What are the most cost-effective actions for achieving these goals?
- In relation to international tensions, strategic competition, and risks of armed conflict in particular, we can also ask the following specific sub-questions:
- How useful are things like diplomacy, treaties, arms control agreements, international organisations, and international norms? What actions are best in relation to those things?