What Do AI Safety Pitches Not Get About Your Field?

By a_e_r @ 2022-09-20T18:13 (+70)

When I was first introduced to AI Safety, coming from a background studying psychology, I kept getting frustrated about the way people defined and used the word "intelligence". They weren't able to address my questions about cultural intelligence, social evolution, and general intelligence in a way I found rigorous enough to be convincing. I felt like professionals couldn't answer what I considered to be basic and relevant questions about general intelligence, which meant that I took a lot longer to take AI Safety seriously than I otherwise would have. It feels possible to me that other people have run into AI Safety pitches and been turned off by something similar -- a communication issue that arises because both parties approach the conversation with very different background information. I'd love to try to minimize these occurrences, so if you've had anything similar happen, could you please share: 

What is something that you feel AI Safety pitches usually don't seem to understand about your field/background? What's a common place where you feel you've gotten stuck in a conversation with someone pitching AI Safety? What question or piece of information makes/made the conversation stop progressing and start circling? 


aogara @ 2022-09-20T18:28 (+27)

From an economics perspective, I think claims of double-digit GDP growth are dubious and undermine the credibility of the short AI timelines crowd. Here is a good summary of why it seems so implausible to me. To be clear, I think AI risk is a serious problem and I'm open to short timelines. But we shouldn't be forecasting GDP growth, we should be forecasting the thing we actually care about: the possibility of catastrophic risk. 

(This is a point of active disagreement where I'd expect e.g. some people at OpenPhil to believe double-digit GDP growth is plausible. So it's more of a disagreement than a communication problem, but one that I think will particularly push away people with backgrounds in economics.)

Steven Byrnes @ 2022-09-24T13:51 (+2)

I join you in strongly disagreeing with people who say that we should expect unprecedented GDP growth from AI which is very much like AI today but better. OTOH, at some point we'll have AI that is like a new intelligent species arriving on our planet, and then I think all bets are off.

seanrson @ 2022-09-20T21:35 (+25)

Psychology/anthropology:

The misleading human-chimp analogy: AI will stand in relation to us the same way we stand in relation to chimps. I think this analogy basically ignores how humans have actually developed knowledge and power--not by rapid individual brain changes, but by slow, cumulative cultural changes. In turn, the analogy may lead us to make incorrect predictions about AI scenarios.

Geoffrey Miller @ 2022-09-21T21:36 (+6)

Well, human brains are about three times the mass of chimp brains, diverged from our most recent common ancestor with chimps about 6 million years ago, and have evolved a lot of distinctive new adaptations such as language, pedagogy, virtue signaling, art, music, humor, etc. So we might not want to put too much emphasis on cumulative cultural change as the key explanation for human/chimp differences.

seanrson @ 2022-09-21T23:29 (+11)

Oh totally (and you probably know much more about this than me). I guess the key thing I'm challenging is the idea that there was something like a very fast transfer of power resulting just from upgraded computing power moving from chimp-ancestor brain -> human brain (a natural FOOM), which the discussion sometimes suggests. My understanding is that it's more like the new adaptations allowed for cumulative cultural change, which allowed for more power.

Geoffrey Miller @ 2022-09-21T21:32 (+7)

Aris -- great question. 

I'm also in psychology research, and I echo your frustrations about a lot of AI research having a very vague, misguided, and outdated notion of what human intelligence is. 

Specifically, psychologists use 'intelligence' in at least two ways: (1) it can refer (e.g. in cognitive psychology or evolutionary psychology) to universal cognitive abilities shared across humans, but (2) it can also refer (in IQ research and psychometrics) to individual differences in cognitive abilities. Notably 'general intelligence' (aka the g factor, as indexed by IQ scores) is a psychometric concept, not a description of a cognitive ability. 

The idea that humans have a 'general intelligence' as a distinctive mental faculty is a serious misunderstanding of the last 120 years of intelligence research, and it makes things pretty confusing when AI researchers talk about 'Artificial General Intelligence'. 
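To make meaning (2) concrete, here's a minimal sketch (my own toy illustration, not drawn from the psychometrics literature; the simulated loadings and noise level are arbitrary assumptions) of how 'g' falls out of the correlations among test scores rather than being a mechanism in anyone's head:

```python
# Toy illustration: the "g factor" as a statistical summary of correlated
# test scores, not a cognitive mechanism. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

# Simulate scores where each test loads partly on one shared latent factor.
latent = rng.normal(size=(n_people, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, n_tests))
scores = latent @ loadings + rng.normal(scale=0.6, size=(n_people, n_tests))

# "g" here is just the dominant eigenvector of the test-score correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
g_loadings = eigvecs[:, -1]
variance_explained = eigvals[-1] / eigvals.sum()

print(np.round(g_loadings, 2), round(float(variance_explained), 2))
```

The point of the sketch is that 'g' summarizes individual differences across tests; nothing in it corresponds to a single domain-general faculty inside a person.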

(I've written about these issues in my books 'The Mating Mind' and 'Mating Intelligence', and in lots of papers available here, under the headings 'Cognitive evolution' and 'Intelligence'.)

jskatt @ 2022-09-25T21:31 (+3)

It seems like the problem is that the field of AI uses a different definition of intelligence? From Chapter 4 of Human Compatible:

Before we can understand how to create intelligence, it helps to understand what it is. The answer is not to be found in IQ tests or even in Turing tests, but in a simple relationship between what we perceive, what we want, and what we do. Roughly speaking, an entity is intelligent to the extent that what it does is likely to achieve what it wants, given what it has perceived.

To me, this definition seems much broader than g factor. As an illustrative example, Russell discusses how E. coli exhibits intelligent behavior.

As E. coli floats about in its liquid home (your lower intestine), it alternates between rotating its flagella clockwise, causing it to tumble in place, and counterclockwise, causing the flagella to twine together into a kind of propeller, so the bacterium swims in a straight line. Thus, E. coli does a sort of random walk -- swim, tumble, swim, tumble -- that allows it to find and consume glucose rather than staying put and dying of starvation. If this were the whole story, we wouldn't say that E. coli is particularly intelligent, because its actions would not depend in any way on its environment. It wouldn't be making any decisions, just executing a fixed behavior that evolution has built into its genes. But this isn't the whole story. When E. coli senses an increasing concentration of glucose, it swims longer and tumbles less, and it does the opposite when it senses a decreasing concentration of glucose. So what it does (swim toward glucose) is likely to achieve what it wants (more glucose, let's assume) given what it has perceived (an increasing glucose concentration).

Perhaps you were thinking "But evolution built this into its genes too, how does that make it intelligent?" This is a dangerous line of reasoning, because evolution built the basic design of your brain into your genes too, and presumably you wouldn't wish to deny your own intelligence on that basis. The point is that what evolution has built into E. coli's genes, as it has into yours, is a mechanism whereby the bacterium's behavior varies according to what it perceives in its environment. Evolution doesn't know in advance where the glucose is going to be or where your keys are, so putting the capability to find them into the organism is the next best thing.
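To make that mechanism concrete, here's a minimal toy sketch (my own illustration, not code from the book; the concentration function and tumble probabilities are arbitrary assumptions) of the run-and-tumble behavior Russell describes:

```python
# Toy run-and-tumble in one dimension: swim longer when the sensed glucose
# concentration is improving, tumble (pick a new random direction) more often
# when it is not. All parameters are made up for illustration.
import random

def glucose(x):
    """Assumed concentration field: glucose peaks at x = 10."""
    return -abs(x - 10.0)

def run_and_tumble(steps=2000, seed=0):
    rng = random.Random(seed)
    x, direction = 0.0, 1.0
    last_sensed = glucose(x)
    for _ in range(steps):
        x += 0.1 * direction              # swim a short distance
        sensed = glucose(x)
        improving = sensed > last_sensed
        last_sensed = sensed
        # Tumble rarely while things improve, often when they get worse.
        tumble_prob = 0.05 if improving else 0.5
        if rng.random() < tumble_prob:
            direction = rng.choice([-1.0, 1.0])
    return x

print(run_and_tumble())  # typically ends up near x = 10, where the glucose is
```

Even this crude loop meets Russell's definition: what it does (drift toward glucose) is likely to achieve what it "wants" given what it has perceived, with no IQ-style construct anywhere in sight.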

Geoffrey Miller @ 2022-10-04T16:43 (+3)

Yes, I think we're in agreement -- the Stuart Russell definition is much closer to my meaning (1) for 'intelligence' (ie a universal cognitive ability shared across individuals) than to my meaning (2) for 'intelligence' (i.e. the psychometric g factor).

The trouble comes mostly when the two are conflated, e.g. when we imagine that 'superintelligence' will basically be like an IQ 900 person (whatever that would mean), or when we confuse 'general intelligence' as indexed by the g factor with truly 'domain-general intelligence' that could help an agent do whatever it wants to achieve, in any domain, given any possible perceptual input.

There's a lot more to say about this issue; I should write a longer form post about it soon.

mhendric @ 2022-09-21T19:14 (+4)

Philosophy: Agency

While agency is often invoked as a crucial step in an AI or AGI becoming dangerous, I often find pitches for AI safety oscillate between a very deflationary sense of agency that does not ground worries well (e.g. "Able to represent some model of the world, plan and execute plans") and more substantive accounts of agency (e.g. "Able to act upon a wide variety of objects, including other agents, in a way that can be flexibly adjusted as it unfolds based on goal-representations").

I'm generally unsure whether agency is a useful term for the debate, at least when engaging with philosophers, as it comes with a lot of baggage that is not relevant to AI safety.

TeddyW @ 2022-11-22T15:29 (+3)

Exponential growth does not come easily, and real-life exponentials crap out. You cannot extrapolate growth carelessly.

People's time and money are required to deliver each 1.5x improvement in hardware, yet the improvement is treated like it comes from some law of nature. In 40 years, I have seen transistor line widths go from 1 micron to 5 nanometers, a factor of 200. Each time transistor line widths shrank by a factor of 1.4, it took a great deal of money and effort to make it happen. Over those 40 years, armies of talented engineers have shrunk the line width by 2.3 orders of magnitude.
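For reference, a quick back-of-the-envelope check of those numbers (my arithmetic from the figures above, nothing more):

```python
# Sanity-check the scaling figures: 1 micron down to 5 nm over ~40 years.
import math

start_nm, end_nm = 1000.0, 5.0
shrink_factor = start_nm / end_nm                 # 200x overall
orders_of_magnitude = math.log10(shrink_factor)   # ~2.3
nodes = math.log(shrink_factor) / math.log(1.4)   # ~16 separate 1.4x shrinks

print(shrink_factor, round(orders_of_magnitude, 1), round(nodes, 1))
```

Roughly sixteen separate 1.4x shrinks, each one paid for in engineering effort.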

Exponential growth occurs when a process has feedback and fuel to spare.  It always stops.   The fuel required to feed exponential growth also grows exponentially.  Either the feedback mechanism is interrupted, or the fuel source is overwhelmed.  

Dennard scaling quit 16 years ago. It was the engine of Moore's law. We now design a patchwork of workarounds to keep increasing transistor counts and preserve the trend observed by Gordon Moore.

People point to exponential growth and extrapolate it as if it were a law of nature. Exponential growth cannot be projected into the future without serious consideration of both the mechanism driving the growth and the impediments that will curb it. Graphs that rely on extrapolating an exponential trend by orders of magnitude are optimistic at best.
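A minimal sketch of that point, with arbitrary made-up parameters: a process that grows exponentially while fuel is plentiful but is capped by a finite capacity tracks a pure exponential early on and then flattens out.

```python
# Compare unbounded exponential growth with logistic growth, which has the
# same early growth rate but a finite carrying capacity. Parameters are
# arbitrary and purely illustrative.
import math

def exponential(t, rate=0.5):
    return math.exp(rate * t)

def logistic(t, rate=0.5, capacity=1000.0):
    return capacity / (1.0 + (capacity - 1.0) * math.exp(-rate * t))

for t in range(0, 31, 5):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# The two curves are nearly identical at first; the logistic one saturates
# near the capacity while the pure exponential keeps exploding.
```

Extrapolating from the early, overlapping portion of the two curves tells you nothing about where the saturation kicks in.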

ekka @ 2022-10-04T16:24 (+3)

From an engineering perspective: the way AIS folk talk about AI is based on philosophical argument and armchair reasoning. This is not how engineers think. Physical things in the world are not built based on who has the best argument and can write the best blog post, but by making lots of tradeoffs between different constraints. I think this has two major effects. The first is that people with lived experience of building things in the physical world, especially at scale, will just not engage with a lot of the material produced by AIS folk. The second is that AIS folk hand-wave away a lot of details that are actually very important from an engineering perspective and only engage with the most exciting high-level abstract ideas. Usually it is the small, boring details that are not very exciting to think about that determine how well things work in the physical world.

Sam Elder @ 2022-09-29T04:53 (+3)

My work (for a startup called Kebotix) aims to use and refine existing ML methods to accelerate scientific and technological progress, focused specifically on discovery of new chemicals and materials.

Most descriptions of TAI in AIS pitches route through essentially this same approach, claiming that smarter AI will be dramatically more successful at it than our current efforts, bringing about rapid economic growth and societal transformation, usually en route to claiming that the incentives to deploy quickly and unsafely will be astronomical.

However, this step often gets very little detailed attention in that story. Little thought is given to explicating how that would actually work in practice, and, crucially, whether intelligence is even the limiting factor in scientific and technological progress. My personal, limited experience is that better algorithms are rarely the bottleneck.

Charles He @ 2022-09-29T05:51 (+2)

whether intelligence is even the limiting factor in scientific and technological progress. 

My personal, limited experience is that better algorithms are rarely the bottleneck.

 

Yeah, in some sense everything else you said might be true or correct.

But I suspect that by "better algorithms" you're thinking along the lines of "What's going to work as a classifier? Is this gradient booster with these parameters going to work robustly for this dataset?", "More layers to reduce false negatives has huge diminishing returns; we need better coverage and identification in the data," or "Yeah, this clustering algorithm sucks for parsing out material with this quality."

Is the above right?

The above isn't what the AI safety worldview sees as "intelligence". In that worldview, the "AI" competency would basically start working its way up the org chart, taking over roles progressively: starting with the model-selection decisions in the paragraph above, then doing data cleaning and data selection over accessible datasets, calling and interfacing with external data providers, and then understanding the relevant material science and how it relates to the relevant "spaces" of the business model. 

So this is the would-be "intelligence". In theory, solving all those problems above seems like a formidable "algorithm". 

Sam Elder @ 2022-11-12T17:11 (+1)

What I mean by "better algorithms" is indeed the narrow sense of better processes for taking an existing data set and generating predictions. You could also define "better algorithms" much more broadly to encompass everything that everyone in a company does, from the laboratory chemist tweaking a faulty instrument to the business development team pondering an acquisition to the C-suite deciding how to navigate the macroeconomic environment. In that sense, yes, better algorithms would always be the bottleneck, but that would also be a meaningless statement.

Arthur Conmy @ 2022-09-21T20:01 (+2)

What were/are your basic and relevant questions? What were AIS folks missing?

Aris Richardson @ 2022-09-21T20:21 (+2)

It's been a while since then, but from what I remember, my questions were generally in the same range as the framing highlighted by seanrson above! 
I've also heard objections from people who've felt that predictions about AGI from biological anchors don't understand the biology of a brain well enough to be making calculations. Ajeya herself even caveats "Technical advisor Paul Christiano originally proposed this way of thinking about brain computation; neither he nor I have a background in neuroscience and I have not attempted to talk to neuroscientists about this. To the extent that neuroscientists who talk about “brain computation” have a specific alternative definition of this in mind, this proposal may not line up well with their way of thinking about it; this might make it more hazardous to rely as much as I do on evidence Joe gathered from discussions with neuroscientists."

Catalin M @ 2022-10-04T08:02 (+1)

Thanks for writing this post; it's a very succinct way to put it (I've struggled to formulate and raise this question with the AI community). 

My personal opinion is that AI research, like many other fields, relies on "the simplest definition" of concepts that it can get away with for notions that lie outside the field. This is not a problem in itself, as we can't all be PhDs in every field (not that this would solve the problem). However, my view is that there are many instances where AI research and findings rely on axioms, or yield results, that require specific interpretations of concepts (re: intelligence, agency, human psychology, neuroscience, etc.) that are speculative or at least "far from the average" interpretation. This is not helped by the fact that many of these terms do not have consensus in their own respective fields. I think that when presenting AI ideas and pitches, many people overlook the nuance required to formulate and explain AI research given such assumptions. This is especially important for AI work that is no longer theoretical or purely about solving algorithmic or learning problems, but extends to other scientific fields and broader society (e.g. AI safety).