Washington Post article about EA university groups

By Lizka @ 2023-07-05T12:58 (+35)

The article is here (note that the Washington Post is paywalled[1]). The headline[2] is "How elite schools like Stanford became fixated on the AI apocalypse," subtitled "A billionaire-backed movement is recruiting college students to fight killer AI, which some see as the next Manhattan Project." It's by Nitasha Tiku. 

Notes on the article: 

I expect that some folks on the Forum might have reactions to the article — I might share some in the comments later, but I just want to remind people about the Forum norms of civility.

  1. ^

    Up to some number of free articles per month

  2. ^

    My understanding is that journalists don't generally choose their headlines. Someone should correct me in the comments if this is wrong! 


Aaron Gertler @ 2023-07-06T08:28 (+66)

I work for Open Phil, which is discussed in the article. We spoke with Nitasha for this story, and we appreciate that she gave us the chance to engage on a number of points before it was published.


A few related thoughts we wanted to share:


We also want to express that we are very excited by the work of groups and organizers we’ve funded. We think that AI and other emerging technologies could threaten the lives of billions of people, and it’s encouraging to see students at universities around the world seriously engaging with ideas about AI safety (as well as other global catastrophic risks, such as a future pandemic). These are sorely neglected areas, and we hope that today’s undergraduates and graduate students will become tomorrow’s researchers, governance experts, and advocates for safer systems.

For a few examples of what students and academics in the article are working on, we recommend:

James Herbert @ 2023-07-07T10:28 (+16)

I asked ChatGPT to analyse the text's sentiment.

What is the text's sentiment re effective altruism?

The sentiment towards effective altruism (EA) in the provided text is primarily skeptical and critical. The text associates effective altruism with the AI safety movement, describing them as intertwined. It suggests that EA has been recruiting ambitious young people on elite college campuses, but questions the movement's focus, methods, and associations.

Here are a few points from the text that highlight the skeptical sentiment toward effective altruism:

  1. The text mentions that EA initially focused on global poverty but then shifted its focus to rogue AI, with the implication that this shift may not be entirely grounded in current, practical concerns. This suggests a lack of consistency in the movement's goals.
  2. The text notes that extreme practitioners within EA began promoting an idea called “longtermism,” which is portrayed as prioritizing the lives of hypothetical future beings over present-day suffering. This is presented as a rather abstract and potentially disconnected concern.
  3. The text mentions that EA is associated with controversial figures and ideas, including a scandal involving one of its largest donors, Bankman-Fried, and a backlash against Nick Bostrom, an influential figure in EA, for controversial past writings. This suggests that the movement is connected to individuals and ideas that have faced criticism.
  4. The phrase “EA Kool-Aid” is used in the text, a colloquial term that implies that EA might be likened to a cult where followers are expected to accept beliefs without critical examination.

Overall, the sentiment toward effective altruism in the text is that it is a movement with questionable focus, controversial associations, and methods that may be seen as cult-like or ungrounded.

What is the text's sentiment re the field of AI safety? 

The sentiment towards the field of AI safety in the provided text is largely mixed with a tinge of skepticism. While the text acknowledges the importance and legitimacy of AI safety as a field, it appears to criticize its association with effective altruism and the extreme scenarios some proponents focus on.

Here are some points from the text that illustrate the sentiment towards AI safety:

  1. The text acknowledges that AI safety is a “legitimate field”, indicating that there is recognition of its relevance and importance.
  2. The description of the “superintelligent AI uprising” being an “extremely improbable scenario” reflects skepticism towards certain extreme concerns within the field. It suggests that there may be an exaggerated focus on unlikely outcomes.
  3. The association of AI safety with effective altruism in the text might imply that some aspects of AI safety are criticized for being too abstract or speculative, as effective altruism is portrayed with skepticism, particularly when it shifted focus to long-term, more abstract concerns like rogue AI.
  4. The mentioning of AI safety researchers not taking themselves too seriously in the text suggests that there may be an underlying criticism or questioning of the magnitude of certain claims or concerns within the field.

In summary, the text portrays AI safety as an important and legitimate field, but questions the focus on highly speculative scenarios and criticizes its close association with the effective altruism movement, which is portrayed skeptically. The text also seems to suggest a need for a more grounded and realistic approach within the AI safety community.

Do a sentiment analysis of the text

The text provided can be broken down into various elements in terms of sentiment:

Neutral-Positive sentiment towards Paul Edwards and Steve Luby: The text introduces Paul Edwards and Steve Luby in a neutral light and showcases their dedication and positive intentions in educating young minds about potential threats to humanity.

Neutral sentiment regarding Artificial Intelligence (AI) risks: The text discusses AI risks and how the concept has gained traction over time. It doesn't take a side, rather presents the evolving perception of AI.

Positive sentiment towards efforts to counter rogue AI: The text illustrates a positive sentiment towards various organizations, philanthropists, and labs (such as DeepMind, OpenAI, and Anthropic) that are taking steps to prevent AI-related catastrophes by promoting AI safety and developing benign AI systems.

Negative sentiment towards the AI safety movement by critics: The text highlights criticism of the AI safety movement as unscientific and potentially distracting from other important issues. The movement is presented as potentially exaggerative and is likened to a religion by critics.

Positive sentiment towards student involvement in AI safety: The article shows positive sentiment towards the involvement of students, especially Gabriel Mukobi, in AI safety, and how they are becoming increasingly aware and motivated to act against the potential existential risks associated with AI.

Negative sentiment regarding controversial figures and ideas associated with the AI safety movement: The text shows a negative sentiment towards certain figures and ideas associated with the AI safety movement, including Effective Altruism (EA) and past controversial statements by Nick Bostrom.

Neutral-Positive sentiment towards the increasing acknowledgment of AI safety: Towards the end, the article discusses how AI safety is becoming a topic of growing interest and concern among academia and industry. This is presented in a neutral to positive light, indicating the recognition of AI safety as an area that requires attention.

Mixed sentiment among educators: The article ends with mixed sentiments among educators like Luby, who seems to be in agreement with the concerns, and Edwards, who acknowledges AI safety but still sees other issues such as climate change as more immediate threats.

In summary, the text provides a balanced view, containing neutral, positive, and negative sentiments regarding the topic of AI safety and its growing prominence in academia and the tech industry. There is a general positive sentiment towards proactive measures and student involvement, but also a significant negative sentiment related to the criticisms and controversial aspects of the AI safety movement.

Linch @ 2023-07-06T02:02 (+11)

Minor note, but

My understanding is that journalists don't generally choose their headlines. Someone should correct me in the comments if this is wrong! 

I think this is true, but also a distraction. When somebody critiques inaccurate/clickbaity/bombastic/generally bad headlines, what they usually care about is the low quality of the headline qua headline, not the internals of how some news corporation allocates its job duties.

Nathan Young @ 2023-07-05T16:16 (+7)

Seems "within tolerance". Like I guess I would nitpick some stuff, but does it seem egregiously unfair? No. 

And in terms of tone, it's pretty supportive.

Jeff Kaufman @ 2023-07-06T00:53 (+18)

in terms of tone, it's pretty supportive

That's not my read? It starts by establishing Edwards as a trusted expert who pays attention to serious risks to humanity, and then contrasts this with students who are "focused on a purely hypothetical risk". Except the areas Edwards is concerned about ("autonomous weapons that target and kill without human intervention") are also "purely hypothetical", as is anything else wiping out humanity.

I read it as an attempt to present the facts accurately but with a tone that is maybe 40% along the continuum from "unsupportive" to "supportive"? Example word choices and phrasings that read as unsupportive to me: "enthralled", emphasizing that the outcome is "theoretical", the fixed-pie framing of "prioritize the fight against rogue AI over other threats", emphasizing Karnofsky's conflicts of interest in response to a blog post that pre-dates those conflicts, bringing up the Bostrom controversy that isn't really relevant to the article, and "dorm-room musings accepted at face value in the forums". But it does end on a positive note, with Luby (the alternative expert) coming around, Edwards in between, and an official class on it at Stanford.

Overall, instead of thinking of the article as trying to be supportive or not, I think it's mostly trying to promote controversy?