Long list of AI questions

By NunoSempere, David Mathers, Misha_Yagudin, Gavin @ 2023-12-06T11:12 (+124)

tl;dr: This document contains a list of forecasting questions, commissioned by Open Philanthropy as part of its aim to have more accurate models of future AI progress. Many of these are classic forecasting questions; others have the same shape but are unresolvable; still others look more like research projects or suggestions for data-gathering efforts. Below we give some recommendations for what to do with this list, mainly to feed the questions into forecasting and research pipelines. In a separate document, we outline reasons why using forecasting to discern the future of AI may prove particularly difficult.

Recommendations

We recommend that Open Philanthropy feed these questions into various forecasting and research pipelines, with the aim of incentivizing the research needed to build good models of the world around AI developments.

We have categorized the questions rated three stars into various buckets, each of which has its own recommendations:

Note that the boundary between questions which could go in a forecasting tournament (FT) and questions which we deem unresolvable with a reasonable amount of effort (UF) is fairly arbitrary. Fewer questions would be suitable for a forecasting tournament on a platform like Metaculus, which seeks explicit and rigorous question specifications. More would be suitable for a tournament or list of questions on Manifold Markets, which has more of an "anything goes" attitude.

We have also worded many questions in terms of a "resolution council", which would make them more resolvable, provided one had a resolution council willing to go through the effort of coming up with a subjective judgment on the question topic. For an explanation of what a resolution council could be, see here.

Questions

Recurring terms

A specification for a [resolution council] is discussed in a separate document, here.

"Leading lab" is defined as a lab that has performed a training run within 2 orders of magnitude of the largest ever at the time of the training run, within the last 2 years.

A floating point operation (FLOP) is here defined as one addition, subtraction, multiplication, or division of two floating-point numbers, whatever their size. So subtracting two 64-bit floats would correspond to one FLOP, as would subtracting two 8-bit "mini-floats". See this document for a short discussion of this point.
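To illustrate the convention (a sketch of ours, assuming a simple operation-counting model rather than any hardware-level accounting):

```python
def flop_count(n_adds=0, n_subs=0, n_muls=0, n_divs=0):
    """Total FLOP under the definition above: each basic arithmetic
    operation counts as exactly one, whether the operands are 8-bit
    'mini-floats' or 64-bit doubles."""
    return n_adds + n_subs + n_muls + n_divs

# Example: a dot product of two length-1000 vectors performs 1000
# multiplications and 999 additions, i.e. 1999 FLOP, at any precision.
print(flop_count(n_adds=999, n_muls=1000))  # 1999
```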

"Automating some fraction of labour" is operationalized as follows:

Key

Questions relevant to speed of capabilities progress

Questions relevant to safety and alignment

Note: these questions make extensive use of this alignment overview.

Interpretability

Eliciting Latent Knowledge

Iterated Distillation and Amplification

Debate (see section 2 here)

General safety

General safety agenda templates

Regulation and Corporate Governance[16]

Who will be at the forefront of AI research?

Governments (if so, which ones)? Small companies or large companies? US or Chinese companies? Etc.

Questions about militarization

Questions about how agent-y and general future AIs will be, and how that affects X-risk from AI

Based on comments in the Slack at Trajan:

Risks of various kinds from EAs and other people concerned about AI X-risk getting things wrong

General Warning Signs

Chance and Effects of Deliberately Slowing AI Progress

Questions about public and researcher opinion

Security Questions

EA opinion on relevant issues

AI effects on (non-AI takeover) catastrophic and X-risks in international relations

Miscellaneous

Acknowledgments

This list of forecasting questions was originally developed by David Mathers, Gavin Leech, and Misha Yagudin of Arb Research, and then completed by Nuño Sempere of Shapley Maximizers. Open Philanthropy provided funding.


  1. At least absent a very, very large amount of algorithmic progress. ↩︎

  2. https://en.wikipedia.org/wiki/Self-driving_car#Classifications ↩︎

  3. Admittedly, I'm basing this off of raw intuition, not any particular argument. ↩︎

  4. Perhaps using this leaderboard: https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu ↩︎

  5. https://paperswithcode.com/sota/code-generation-on-apps#:~:text=The%20APPS%20benchmark%20attempts%20to,as%20well%20as%20problem-solving ↩︎

  6. https://paperswithcode.com/sota/code-generation-on-apps#:~:text=The%20APPS%20benchmark%20attempts%20to,as%20well%20as%20problem-solving ↩︎

  7. Across the datasets mentioned in Table 2, p. 7 of this: https://cdn.openai.com/papers/gpt-4.pdf ↩︎

  8. https://en.wikipedia.org/wiki/Semiconductor_fabrication_plant ↩︎

  9. https://paperswithcode.com/dataset/arcade-learning-environment#:~:text=The%20Arcade%20Learning%20Environment%20(ALE,of%20emulation%20from%20agent%20design ↩︎

  10. https://www.openphilanthropy.org/research/new-web-app-for-calibration-training/ ↩︎

  11. [I have pasted in this and the following Cotra questions from Gavin's Airtable: personally, I can't figure out how to easily find out what the parameters actually are or where they are explained in the report, and I doubt that forecasters would be able to either without a lot of work]. ↩︎

  12. https://paperswithcode.com/sota/image-classification-on-imagenet ↩︎

  13. https://en.wikipedia.org/wiki/Koomey's_law ↩︎

  14. https://paperswithcode.com/dataset/big-bench ↩︎

  15. See this for "deceptive alignment": https://www.lesswrong.com/posts/CsjLDAhQat4PY6dsc/order-matters-for-deceptive-alignment-1 ↩︎

  16. Used for inspiration: https://forum.effectivealtruism.org/posts/iqDt8YFLjvtjBPyv6/some-things-i-heard-about-ai-governance-at-eag#Crunch_Time_Friends ↩︎

  17. https://www.slowboring.com/p/at-last-an-ai-existential-risk-policy ↩︎

  18. https://www.slowboring.com/p/at-last-an-ai-existential-risk-policy ↩︎

  19. [not necessarily a stupid thing to do depending on circumstance, but this still seemed like the most natural section for this question.] ↩︎


titotal @ 2023-12-07T20:36 (+11)

Good job on putting this together

If I could make one suggestion: I think the questions about how a catastrophe would occur (i.e. nanotech, viruses, etc.) deserve their own section, rather than being lumped in under "miscellaneous". This is a key part of the argument for AI being an x-risk, and imo one of the most underdeveloped parts.

Nick K. @ 2023-12-10T10:12 (+5)

I agree that this would be interesting to explore, but heavily disagree that having a detailed answer would substantially influence predictions of x-risk.

Dr. David Mathers @ 2023-12-11T11:17 (+2)

Why do you disagree? 

Dr. David Mathers @ 2023-12-09T10:33 (+2)

Fair point. I personally agree that this has tended to be underdeveloped.

PeterSlattery @ 2023-12-13T21:49 (+10)

Thanks for this. If it's easy, can you please curate your suggested questions in a spreadsheet so that I can filter them by priority and type? If you do this, I will share it with at least two academics and labs who might do some of the research desired. I may do so anyway, but at the moment it probably won't be something that they will find time to read unless I can refer them to the parts that are most immediately relevant.

PeterSlattery @ 2023-12-18T22:20 (+6)

Here is what I eventually extracted and will share, just in case it's useful. 

★★★ (RP DG) By what year will at least 15% of patents granted in the US be for designs generated primarily via AI? Reasons for inclusion: this is both an early sign that AI might be able to design dangerous technology and an indicator that AIs will be economically useful to deploy across diverse industries. Question resolves according to the best estimate by the [resolution council].

★★★ (UF RP) How long will the gap be between the first creation of an AI which could automate 65% of current labour and the availability of an equivalently capable model as a free open-source program?

★★★ (RP) Meta-capabilities question: by 2029, will there be a better way to assess the capabilities of models than testing their performance on question-and-answer benchmarks?

★★★ (RP UF) How much money will the Chinese government cumulatively spend on training AI models between 2024 and 2040, as estimated by the [resolution council]?

★★★ (UF, FE, RP) Consider the first AI model able to individually perform any cognitive labour that a human can. How likely is a deliberately engineered pandemic which kills >20% of the world's population in the 50 years after the first such model is built?

★★★ (UF, FE, RP) How does the probability in the previous question change if models are widely available to citizens and private businesses, compared to if only governments and specified trusted private organizations are allowed to use them?

★★★ (FE, RP) What is the total number of EAs in technical AI alignment? Across academia, industry, independent research organizations, ¿government?, etc. See "The academic contribution to AI safety seems large" for an estimate from 2020.

★★★ (FE, RP) What is the total number of non-EAs in technical AI alignment? Across academia, industry, independent research organizations, ¿government?, etc.

★★★ (RP) How likely is it that an AI could get nanomachines built just by making ordinary commercial purchases online and obtaining the cooperation of <30 human beings without scientific skills above master's level in relevant subjects?

★★★ (UF, RP) Take-off speed: after automating 15% of labour, how long will it take until 60% of labour is automated? Question note: 99%+ of labour has already been automated, since most humans no longer work in agriculture. This question asks about automating 15% and 60% of labour of the type done in 2023; see "recurring terms".

★★★ (FE, RP) How long does it take TSMC to manufacture 100k GPUs? Relevance: not that high, but a neat Fermi-estimate warm-up (see the sketch after this list). Might just generally be good for having good models of the world, though.

★★★ (UF, RP) What is the % chance that by 2025/2030/2035/2040 an AI will persuade a human to commit a crime in order to further the AI's purposes? If one wanted to make this question resolvable: question resolves according to the [resolution council]'s probability that this has happened. This would require a platform that accepts probabilistic resolutions. See also below: "When will the US SEC accuse someone of committing securities fraud substantially aided by AI systems?"

★★★ (RP, FE) What fraction of labour will be automated between 2023 and 2028/2035/2040/2050/2100? Question operationalization: see the "recurring terms" section. For a reference on an adjacent question, see Phil Trammell's Economic growth under transformative AI.
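Purely as an illustration of the kind of Fermi estimate the TSMC question above invites, here is a sketch in Python. Every input is an assumed placeholder rather than a sourced figure; the point is the shape of the calculation, not the answer:

```python
# All numbers below are assumptions for illustration, not sourced figures.
dies_per_wafer = 60              # assumed: large GPU dies per 300 mm wafer
yield_rate = 0.6                 # assumed: fraction of dies that work
wafer_starts_per_month = 10_000  # assumed: wafer starts allocated to this GPU
gpus_needed = 100_000

gpus_per_month = dies_per_wafer * yield_rate * wafer_starts_per_month
months_of_wafer_starts = gpus_needed / gpus_per_month
print(f"{months_of_wafer_starts:.2f} months of wafer starts")  # ~0.28
# On top of that, fab cycle time from wafer start to finished chip is on
# the order of a few months, which would dominate the total wall-clock time.
```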

NunoSempere @ 2023-12-14T00:14 (+4)

I have extracted the top questions to https://github.com/NunoSempere/clarivoyance/blob/master/list/top-questions.md using the Linux command shown at the top of that page. Hope this is helpful enough.

PeterSlattery @ 2023-12-14T23:00 (+2)

Thank you.

Vasco Grilo @ 2023-12-08T19:56 (+4)

Nice work!

> In this adjacent document, we also outline a "resolution council"

The link points to this post.

NunoSempere @ 2023-12-09T00:20 (+2)

Thanks, fixed

tobytrem @ 2023-12-12T00:05 (+1)

I spotted three instances of "this document" not being linked to the relevant document. Let me know if this was a bug :)