Searle vs Bostrom: crucial considerations for EA AI work?
By Forumite @ 2022-07-13T10:18 (+11)
In his review of Nick Bostrom's Superintelligence, philosopher John Searle (creator of the 'Chinese Room' thought experiment) seems to attack many of the fundamental assumptions and conclusions of Bostrom's (and, I think, most EAs') approach to thinking about AI.
If Searle is right, it would perhaps imply that a great many EAs are wasting a lot of time and energy at the moment.
- Does anyone know if Nick Bostrom has replied to Searle's arguments?
- What do EA Forum readers think about Searle's arguments?
Searle's review is paywalled, but it's super easy to register for the site and view it for free.
(Meta-point: I'm just jumping into my reading on this topic. If this is well-trodden ground, apologies - and I would appreciate any links to canonical reading on these debates - thank you!)
rgb @ 2022-07-13T11:08 (+18)
Since the article is paywalled, it may be helpful to excerpt the key parts or say what you think Searle's argument is. I imagine the trivial inconvenience of having to register will prevent a lot of people from checking it out.
I read that article a while ago, but can't remember exactly what it says. To the extent that it rehashes Searle's familiar arguments that AIs, no matter how sophisticated their behavior, necessarily lack understanding / intentionality / something like that, I think those arguments are just not that relevant to work on AI alignment.
Basically, I think what Chalmers says in his paper The Singularity: A Philosophical Analysis:
As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behavior, it might be missing important “internal” aspects of mentality: consciousness, understanding, intentionality, and so on. Later in the paper, I will advocate the view that if a system in our world duplicates not only our outputs but our internal computational structure, then it will duplicate the important internal aspects of mentality too. For present purposes, though, we can set aside these objections by stipulating that for the purposes of the argument, intelligence is to be measured wholly in terms of behavior and behavioral dispositions, where behavior is construed operationally in terms of the physical outputs that a system produces. The conclusion that there will be AI++ in this sense is still strong enough to be interesting. If there are systems that produce apparently superintelligent outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact on the rest of the world. (emph mine)
rgb @ 2022-07-13T14:47 (+11)
Well, I looked it up and found a free PDF, and it turns out that Searle does consider this counterargument:
Why is it so important that the system be capable of consciousness? Why isn’t appropriate behavior enough? Of course for many purposes it is enough. If the computer can fly airplanes, drive cars, and win at chess, who cares if it is totally nonconscious? But if we are worried about a maliciously motivated superintelligence destroying us, then it is important that the malicious motivation should be real. Without consciousness, there is no possibility of its being real.
But I find the arguments he then gives in support of this claim quite unconvincing; I don't understand exactly what the argument is supposed to be. Notice that Searle's argument rests on comparing a spell-checking program on a laptop with human cognition. He claims that reflecting on the difference between the human and the program establishes that it would never make sense to attribute psychological states to any computational system at all. But that comparison doesn't seem to show anything of the sort.
And it certainly doesn't show, as Searle thinks it does, that computers could never have the "motivation" to pursue misaligned goals, in the sense Bostrom needs in order to establish that powerful AGI could be dangerous.
I should say: while Searle is not my favorite writer on these topics, I think these sorts of questions at the intersection of philosophy of mind and AI are quite important and interesting, and it's cool that you are thinking about them. (Then again, I *would* think that, given my background.) And it's important to scrutinize the philosophical assumptions (if any) behind AI risk arguments.