aogara's Quick takes

By aogara @ 2021-01-19T01:20 (+5)

aogara @ 2022-03-31T20:59 (+15)

Some thoughts on FTX copied from this thread:

One way to approach this would simply be to make a hypothesis (i.e. the bar for grants is being lowered, we're throwing money at nonsense grants), and then see what evidence you can gather for and against it.

Thinking about FTX and their bar for funding seems very important. I'm thrilled that so much money is being put towards EA causes, but a few early signs have been concerning. Here are two considerations on the hypothesis that FTX has a lower funding bar than previous EA funders.

First, it seems that FTX would like to spend a lot more, a lot faster, than has been the EA consensus for a long time. Perhaps that's a rational response to having more available funds, but the funds are nowhere near unlimited. If the entire value of FTX were liquidated and distributed to the 650 million people in extreme poverty, each impoverished person would receive only about $49, leaving global poverty as pressing a problem as ever. It also strikes against recent work on patient philanthropy, which is supported by Will MacAskill's argument that we are not living in the most influential time in human history.  (EDIT: See additional section below.) It seems that this money is intended to be disbursed with much less research and deliberation than is used by GiveWell, Open Philanthropy, and other grantmakers, with no visible plans to build a sizeable research organization before giving out the money. Using the rowing vs. steering analogy for EA, FTX has strapped a motor to the EA boat without engaging in any substantial steering of the boat.
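(As a rough sanity check on that per-person figure, here is a minimal sketch; the ~$32B valuation is my assumption, roughly FTX's reported early-2022 valuation, and is meant only to show the order of magnitude.)

```python
# Rough back-of-envelope for the "$49 per person" claim above.
# The ~$32B FTX valuation is an assumption (roughly the valuation reported in
# early 2022); the point is the order of magnitude, not exact accounting.
ftx_valuation_usd = 32e9            # assumed total value of FTX
people_in_extreme_poverty = 650e6   # figure cited in the post

per_person = ftx_valuation_usd / people_in_extreme_poverty
print(f"${per_person:.0f} per person")  # -> $49
```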

On the object level, the Atlas Fellowship for high schoolers looks concerning on a few levels. The program intends to award $50K to 100 fellows by April 10th of this year. The fellowship has received little major press and was announced on Tyler Cowen's blog only 2 days ago. This does not seem like enough time or publicity to generate the strongest possible pool of applicants. The selection process itself is unusual in a number of ways, requiring standardized tests but not grades or letters of recommendation, and explicitly not pursuing goals of diversity in race, gender, or socioeconomic status. The program is working with a number of young EAs with good reputations for direct work in their respective cause areas, but little to no experience in running admissions processes or academic fellowships (bottom of this page). Will this $5.5M given to high schoolers for professional development really do more good than spending it on lifesaving medication or other provably beneficial interventions?

There's a lot to cover here and I've raised more questions or concerns than I've offered answers. But FTX is a massively influential development within EA, and should receive a lot of time and attention to make sure it achieves its full potential for positive impact. 

More Thoughts on FTX

I'm confused by FTX's astronomical spend on PR and brand awareness. Wikipedia gives a good breakdown of the spend; highlights include renaming an NBA arena, a college football stadium, an esports organization, and the Mercedes-Benz Formula One team; sponsoring athletes such as Tom Brady, Steph Curry, and Shohei Ohtani; and making donations to the personal charitable foundations of celebrities like Phil Mickelson, Alex Honnold, and Bryson DeChambeau. The marketing spend has been on the order of hundreds of millions if not billions of dollars. This would all be wonderful if it's a profit-making strategy that creates more funding for good causes on net. But I would ask both how this will be perceived publicly and the object-level question of whether this spending is, say, 8x better than donating directly to people in poverty through GiveDirectly.

Separately, I think there's a real tension between the fact that FTX is headquartered in the Bahamas to avoid paying taxes and the fact that Sam Bankman-Fried was the second-largest donor to the Joe Biden campaign. They care enough about American politics to spend millions trying to influence the outcome of our elections, but don't feel any responsibility to pay taxes? You can make the pure utilitarian argument, but I think most people would object to it.

EDIT: Spending Now as Patient Philanthropy

Thank you to several people who pointed out that spending now might be the best means of patient philanthropy, particularly for longtermists. Here is Owen Cotton-Barratt's explanation of why "patient vs urgent longtermism" has little direct bearing on giving now vs. later; it conceptualizes some forms of current grantmaking as investments that open up greater opportunities for giving at a more impactful time in the future. FTX is specifically interested in these kinds of investments in future opportunities, with five or more of their focus areas potentially leading to greater opportunities down the line. Lukas Gloor also points out that there is significantly more disagreement about the Hinge of History hypothesis than I realized, much of it about priors and anthropic reasoning arguments that I don't quite understand. This all seems reasonable, particularly for an organization that is trying to find giving opportunities to fulfill its mission of longtermist grantmaking.

Stefan_Schubert @ 2022-04-02T11:07 (+15)

First, it seems that FTX would like to spend a lot more a lot faster than has been the EA consensus for a long time. ... It also strikes against recent work on patient philanthropy, which is supported by Will MacAskill's argument that we are not living in the most influential time in human history. 

I don't think fast spending in and of itself strikes against patient longtermism: see Owen Cotton-Barratt's post "Patient vs urgent longtermism" has little direct bearing on giving now vs later.

Lukas_Gloor @ 2022-04-02T12:15 (+6)

In addition, the arguments for not living in the most influential time in human history are rejected by many EAs, as you can see in the discussion section of MacAskill's original article and here.

(In general, I think it's legitimate even for very large organizations to bet on a particular worldview, especially if they're being transparent to donors and supporters.)

(That said, I want to note that "spend money now" is very different from "have a low bar." I haven't looked into FTX grants yet, but I want to flag that while I'm in favor of deploying capital now, I wouldn't necessarily lower the bar. Instead, I'd aggressively fund active grantmaking and investigations into large grants in areas where EAs haven't been active yet.)

aogara @ 2022-04-05T00:11 (+2)

Appreciate and agree with both of these comments. I’ve made a brief update to the original post to reflect it, and hope to respond in more detail soon.

Linch @ 2022-03-31T23:04 (+8)

This would all be wonderful if it's a profit-making strategy that creates more funding for good causes on net. But I would ask both how this will be perceived publicly, and the object-level question of whether this spending is, say, 8x better than donating directly to people in poverty through GiveDirectly. 

The second sentence seems to be confusing investment with consumption. 
 

aogara @ 2022-04-01T00:51 (+6)

The investment in advertising, versus the consumption-style spending on GiveDirectly? I just meant to compare the impact of the two. The first's impact would come from raising more money to eventually be donated, while the second is directly impactful, so I'd like to think about which is the better use of the funds.

Chris Leong @ 2022-04-02T09:40 (+5)

I suggest caution with trying to compare the Future Fund's investments against donating to global poverty without engaging with the longtermist worldview. This worldview could be right or wrong, but it is important to engage with it to understand why FTX might consider these investments worthwhile.

Another part of the argument is that there is currently an absurd amount of money per effective altruist. This might not matter for global poverty, where much of the work can be outsourced, but it is a much bigger problem for many projects in other areas. In this scenario, it might make sense to spend absurd-seeming amounts of money to grow the pool of committed members, at least if this really is the bottleneck, particularly if you believe that certain projects need to be completed on short timelines.

I agree that being situated in the Bahamas is less than deontologically spotless, but I don't believe that avoiding the negative PR is worth billions of dollars, I don't see it as a particularly egregious moral violation, and I don't see this as significantly reducing trust in EA or FTX.

aogara @ 2022-04-14T00:07 (+2)

Update on Atlas Fellowship: They've extended their application period by one week! Good decision for getting more qualified applications into the pipeline. I wonder how many applications they've received overall. 

aogara @ 2022-04-08T18:54 (+13)

Concerns with Bio Anchors Timelines

A few points on the Bio Anchors framework, and why I expect TAI to require much more compute than used by the human brain:

1. Today we routinely use computers with as much compute as the human brain. Joe Carlsmith’s OpenPhil report finds the brain uses between 10^13 and 10^17 FLOP/s. He points out that Nvidia's V100 GPU retailing for $10,000 currently performs 10^14 FLOP/s. 

2. Ajeya Cotra’s Bio Anchors report shows that AlphaStar's training run used 10^23 FLOP, the equivalent of running a human brain-sized computer performing 10^15 FLOP/s for four years (a rough check of this arithmetic is sketched below). The Human Lifetime anchor therefore estimates a 22% probability that a transformative model could already be trained with today's levels of compute, yet we have not seen such a model so far.
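To make that comparison concrete, here is a minimal sketch of the arithmetic; the 10^15 FLOP/s midpoint is my choice within Carlsmith's range, not a figure the report insists on:

```python
# Minimal check of the compute comparison in point 2.
# The 1e15 FLOP/s figure is a midpoint assumption within Carlsmith's
# 1e13-1e17 FLOP/s range for the brain; AlphaStar's ~1e23 FLOP is as cited above.
seconds_per_year = 365.25 * 24 * 3600   # ~3.16e7 seconds
brain_flops_per_s = 1e15                # assumed brain-scale compute
years = 4

lifetime_compute = brain_flops_per_s * years * seconds_per_year
print(f"{lifetime_compute:.2e} FLOP")   # ~1.26e23 FLOP, comparable to AlphaStar's ~1e23
```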

Why, then, do we not have transformative AI? Maybe it's right around the corner, with the Human Lifetime anchor estimating a 50% chance of transformative AI by 2032. I'm more inclined to say that this reduces my credence in the parts of the report that derive short timelines from the compute of the human brain. The Evolution anchor seems to me like a more realistic prediction, with 50% probability of TAI by 2090.

I'd also like to see more research on the Evolution anchor. It is the part of the report that Ajeya says she "spent the least amount of time thinking about." Its estimates of the size of evolutionary history come primarily from this 2009 blog post, and its final calculation assumes that all of our ancestors had brains the size of a nematode's and that the Earth's population of organisms has been constant for 1 billion years. These are extremely rough assumptions, and Ajeya also says that "there are plausible arguments that I have underestimated true evolutionary computation here in ways that would be somewhat time-consuming to correct." On the other hand, it seems reasonable to me that our scientists could generate algorithmic improvements much faster than evolution did, though Ajeya notes that "some ML researchers would want to argue that we would need substantially more computation than was performed in the brains of all animals over evolutionary history; while I disagree with this, it seems that the Evolution Anchor hypothesis should place substantial weight on this possibility."
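For intuition about how that back-of-envelope works, here is a hedged sketch in the style of the calculation described above; every input below is an illustrative assumption of mine, not the report's exact number:

```python
# Illustrative sketch of an Evolution-anchor-style calculation, as described above.
# All inputs are rough assumptions for intuition only, not the report's exact figures.
seconds_per_year = 3.15e7
years_of_evolution = 1e9            # ~1 billion years of nervous-system evolution
organisms_alive = 1e21              # assumed constant population with nervous systems
nematode_brain_flops = 1e4          # assumed FLOP/s for a nematode-scale nervous system

total_flop = (seconds_per_year * years_of_evolution
              * organisms_alive * nematode_brain_flops)
print(f"{total_flop:.1e} FLOP")     # ~3.2e41 FLOP under these assumptions
```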

aogara @ 2022-06-12T17:36 (+2)

This (pop science) article provides two interesting critiques of the analogy between the human brain and neural nets. 

  1. "Neural nets are typically trained by “supervised learning”. This is very different from how humans typically learn. Most human learning is “unsupervised”, which means we’re not explicitly told what the “right” response is for a given stimulus. We have to work this out ourselves."
  2. "Another difference is the sheer scale of data used to train AI. The GPT-3 model was trained on 400 billion words, mostly taken from the internet. At a rate of 150 words per minute, it would take a human nearly 4,000 years to read this much text."

I'm not sure of the direct implications for timelines here. You might be able to argue that these disanalogies mean that neural nets will require less compute than the brain. But it's an interesting point of disanalogy, and a useful corrective to any misconception that neural networks are "just like the brain".

aogara @ 2022-04-08T18:15 (+10)

I strongly disagree with the claim that there is a >10% chance of TAI in the next 10 years. Here are two small but meaningful parts of why I have much longer AI timelines.

Note that TAI is here defined as one or both of: (a) any 5-year doubling of real global GDP, or (b) any catastrophic or existential AI failure.
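For reference, definition (a) implies an annualized growth rate of roughly 15%, several times the typical ~3% global rate of recent decades; a minimal check:

```python
# Annualized growth rate implied by a 5-year doubling of real global GDP.
doubling_time_years = 5
annual_growth = 2 ** (1 / doubling_time_years) - 1
print(f"{annual_growth:.1%} per year")  # ~14.9%, versus ~3% typical global growth
```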

Market Activity

Top tech companies do not believe that AI takeoff is around the corner. Mark Zuckerberg recently saw many of his top AI research scientists leave the company, as Facebook has chosen to acquire Oculus and bet on the metaverse rather than AI as the next big thing. This 2019 interview with Facebook's VP of AI might shed some light on why. 

Microsoft has similarly bet heavily on entertainment over AI with its acquisition of Activision Blizzard. Microsoft purchased Activision for $68B, 68 times more than it invested in OpenAI three years ago, and it has not followed up with further public investments since.

IBM Watson just sold off its entire healthcare business. This is a strong sign of AI's failure to meet the tremendous expectations of revolutionizing the healthcare industry. Meanwhile, on LessWrong, somebody is getting lots of upvotes for predicting (admittedly in a fun, off-the-cuff manner) that "Chatbots [will be] able to provide better medical diagnoses than nearly all doctors" in 2024.

Data Constraints

Progress has been swift in areas where it is easy to generate lots of training data. ML systems are lauded for achieving human-level performance on academic competitions like ImageNet, but those performances are only possible because of the millions of labeled data points provided. NLP systems trained on self-supervised objectives leverage massive datasets, but regurgitate hate speech, fake news, and private information memorized from the internet. Reinforcement learning (RL) systems play games like chess and Atari for thousands of years of virtual time using the popular method of self-play.

Many real world goals have much longer time horizons than those where AI succeeds today, and cannot be readily decomposed into smaller goals. We cannot simulate the experience of founding a startup, running an experiment, or building a relationship in the same way we can do with writing a paper or playing a game. Machines will need to learn in open-ended play with the world, where today they mostly learn from labeled examples. 

See Andrew Ng on the incredible challenge of data sparse domains. Perhaps this is why radiologists have not been replaced by machines, as Geoffrey Hinton so confidently predicted back in 2016. 

Evan R. Murphy @ 2022-04-08T20:57 (+11)

These are thoughtful data points, but consider that they may just be good evidence for hard takeoff rather than soft takeoff.

What I mean is that most of these examples show a failure of narrow AIs to deliver on some economic goals. In soft takeoff, we expect to see things like broad deployment of AIs contributing to massive economic gains and GDP doublings in short periods of time well before we get to anything like AGI.

But in hard takeoff, failure to see massive success from narrow AIs could happen due to regulations and other barriers (or it could just be limitations of the narrow AI). In fact, these limitations could even point more forcefully to the massive benefits of an AI that can generalize. And having the recipe for that AGI discovered and deployed in a lab doesn't depend on the success of prior narrow AIs in the regulated marketplace. AGI is a different breed and may also become powerful enough that it doesn't have to play by the rules of the regulated marketplace and national legal systems.

Machines will need to learn in open-ended play with the world, where today they mostly learn from labeled examples. 

Have you seen DeepMind's Generally capable agents emerge from open-ended play? I think it is a powerful demonstration of learning from open-ended play actually working in a lab (not just a possible future approach). Though it is still in a virtual environment rather than the real physical world.

aogara @ 2022-04-09T00:48 (+10)

Hey Evan, these are definitely stronger points against short timelines if you believe in slow takeoff, rather than points against short timelines in a hard-takeoff world. It might come as no surprise that I think slow takeoff is much more likely than hard takeoff, with the Comprehensive AI Systems model best representing what I would expect. A short list of the key arguments there:

  • Discontinuities on important metrics are rare; see the AI Impacts writeup. EDIT: Dan Hendrycks and Thomas Woodside provide a great empirical survey of AI progress across several domains. It largely shows continuous progress on individual metrics, but also highlights the possibilities of emergent capabilities and discontinuity.
  • Much of the case for fast takeoff relies heavily on the concept of "general intelligence". I think the history of AI progress shows that narrow progress is much more common, and I don't expect advances in e.g. language and vision models to generalize to success in the many low-data domains required to achieve transformative AI.
  • Recursive self-improvement is entirely possible in theory, but far from current capabilities. AI is not currently being used to write research papers or build new models, nor is it significantly contributing to the acceleration of hardware progress. (The two most important counterexamples are OpenAI's Codex and Google's DL for chip placement. If these were shown to be significantly pushing the cutting edge of AI progress, I would change my views on the likelihood of recursive self-improvement in a short-timelines scenario.) 
    • EDIT 07/2022: Here is Thomas Woodside's list of examples of AI increasing AI progress. While it's debatable how much of an impact these are having on the pace of progress, it is undeniably happening to some degree, and efforts are ongoing to increase capacity for recursive self-improvement. My summary above was an overstatement.
  • I don't think there's any meaningful "regulatory overhang". I haven't seen any good examples of industries where powerful AI systems are achieved in academic settings, but not deployed for legal reasons. Self-driving cars, maybe? But those seem like more of a regulatory success story than a failure, with most caution self-imposed by companies.

The short timelines scenarios I find most plausible are akin to those outlined by Gwern and Daniel Kokotajlo (also here), where a pretrained language model is given an RL objective function and the capacity to operate a computer, and it turns out that one smart person behind a computer can do a lot more damage than we realized. More generally, short timelines and hard takeoff can happen when continuous scaling up of inputs results in discontinuous performance on important real world objectives. But I don't see the argument for where that discontinuity will arise -- there are too many domains where a language model trained with no real world goal will be helpless. 

And yeah, that paper is really cool, but it's really only a proof of concept of what would have to become a superhuman science in order for our "Clippy" to take over the world. You're pointing towards the future, but how long until it arrives?

Charles He @ 2022-04-08T21:34 (+2)

But in hard takeoff, failure to see massive success from narrow AIs could happen due to regulations and other barriers. It could just be limitations of the narrow AIs. In fact, these limitations could even point more forcefully to the massive benefits of an AI that can generalize. 

I think you're saying that regulations/norms could mask dangerous capabilities and development, having the effect of eroding credibility/resources in safety. Yet, unhindered by enforcement, bad actors continue to progress toward worse states, even using the regulations as signposts.

I'm not fully sure I understand all of the sentences in the rest of your paragraph. There are several jumps in there?

Gwern's story "Clippy" lays out some potential ways that safety mechanisms could be dislocated. If there is additional content you think is convincing (of mechanisms and enforcement), that would be good to share.

Evan R. Murphy @ 2022-04-08T21:50 (+5)

You're right, that paragraph was confusing. I just edited it to try and make it more clear.

aogara @ 2022-03-14T17:12 (+9)

Career Path: Nuclear Weapons Security Engineering

Nuclear weapons are one of the only direct means to an existential catastrophe for humanity. Other existential risk factors such as global warming, great power war, and misaligned AI could not alone pose a specific credible threat to Earth's population of seven billion. Instead, these stories only reach human extinction through bioweapons, asteroids, or something closer to the conclusion of Gwern's recent story about AI catastrophe: 

All over Earth, the remaining ICBMs launch.

How can we engineer a safer nuclear weapons system? A few ideas:

In the 2016 report where 80,000 Hours declared nuclear security a "sometimes recommended" path for improving the world, they note a key cause for concern: "This issue is not as neglected as most other issues we prioritize. Current spending is between $1 billion and $10 billion per year." In 2022, with longtermist philanthropy looking to deploy billions of dollars over the next decade, do we still believe nuclear security engineering is too crowded to work on?

aogara @ 2022-04-17T16:50 (+2)

Fun fact: For 20 years at the peak of the Cold War, the US nuclear launch code was “00000000”

https://gizmodo.com/for-20-years-the-nuclear-launch-code-at-us-minuteman-si-1473483587

H/t: Gavin Leech

aogara @ 2022-04-14T23:43 (+8)

Collected Thoughts on AI Safety

Here are some of my thoughts on AI timelines:

And here are some thoughts on other AI Safety topics:

Generally speaking, I believe in longer timelines and slower takeoff speeds. But short timelines seem more dangerous, so I'm open to alignment work tailored to short timelines scenarios. Right now, I'm looking for research opportunities on risks from large language models. 

Aidan O'Gara @ 2021-01-19T01:20 (+7)

Three Scenarios for AI Progress

How will AI develop over the next few centuries? Three scenarios seem particularly likely to me: 

To clarify my beliefs about AI timelines, I found it helpful to flesh out these concrete "scenarios" by answering a set of closely related questions about how transformative AI might develop:

The potentially useful insight here is that answering one of these questions helps you answer the others. If massive compute is necessary, then TAI will be built by a few powerful governments or corporations, not by a diverse ecosystem of small startups. If TAI isn't achieved for another century, that affects which research agendas are most important today. Follow this exercise for a while, and you might end up with a handful of distinct scenarios, and then you can judge the relative likelihood and timelines of each.

Here's my rough sketch of what each of these mean. [Dumping a lot of rough notes here, which is why I'm posting as a shortform.]

This is pretty rough around the edges, but these three scenarios seem like the key possibilities for the next few centuries that I can see at this point. For the hell of it, I'll give some very weak credences: 10% that we solve superintelligence within decades, 25% that CAIS brings double-digit growth within a century or so, maybe 50% that human progress continues as usual for at least a few centuries, and (at least) 15% that what ends up happening looks nothing like any of these scenarios. 

Very interested in hearing any critiques or reactions to these scenarios or the specific arguments within.

EdoArad @ 2021-01-19T04:23 (+7)

I like the intuitive analysis of the no-takeoff scenario, and I find that I also hadn't really imagined it as a concrete possibility. Generally, I like that you have presented clearly distinct scenarios and that the logic is explicit and coherent. Two thoughts that came to mind:

Somehow in the CAIS scenario, I also expect the rapid growth and the delegation of some economic and organizational work to AI to have some weird risks that involve something like humanity getting pushed away from the economic ecosystem while many autonomous systems are self-sustaining and stuck in a stupid and lifeless revenue-maximizing loop. I couldn't really pinpoint an x-risk scenario here. 

 Recursive self-improvement can also happen within long periods of time, not necessarily leading to a fast takeoff, especially if the early gains are much easier than later gains (which might make more sense if we think of AI capability development as resulting mostly from computational improvements rather than algorithmic). 

EdoArad @ 2021-01-19T04:30 (+3)

Ah! Richard Ngo had just written something related to the CAIS scenario :)