Jaan Tallinn: Fireside chat (2018)
By EA Global @ 2018-06-08T07:15
This is a linkpost to https://www.youtube.com/watch?v=xwsfGYHwpQc&list=PLwp9xeoX5p8P3cDQwlyN7qsFhC9Ms4L5W&index=22
Jaan Tallinn has been an effective altruist for many years, and has used his reputation and personal funds to support the study of existential risk. In this fireside chat from Effective Altruism Global 2018: San Francisco, moderated by Nathan Labenz, he discusses how his views on AI have become more complex, which sorts of organizations he prefers to support, and how being a programmer helped him develop epistemic humility.
A transcript of the chat is below, which we have lightly edited for readability. You can also watch it on YouTube and read it on effectivealtruism.org.
The Talk
Nathan: I first met you 8 years ago. You had recently wrapped up your involvement with Skype, and you gave a talk at an old Singularity Summit event and you said, "I find myself in the unusual position of trying to figure out what to do, knowing that I've just wrapped up what is probably going to be the most successful project that I'll ever do." That is a unique challenge, but you've really made the most of that opportunity, and have engaged in a lot of really interesting things over the last 8 years that have spanned AI and other existential risks, some work on global coordination, and some interest in a whole bunch of different companies. So, I want to give you the chance to talk about all of that sort of stuff.
But let's start off with a little bit of a retrospective on the last 8 years since you first got involved. How have things gone from your perspective? How have they deviated from what you thought might happen?
Jaan: I'm not sure I remember how much I thought about what would happen, but yeah, things have gone really well, with one exception. One thing that has gone well is that the EA movement has scaled. Probably the most important thing is that the so-called Overton Window - the range of things that are acceptable to talk about - now firmly includes at least some version of AI safety. You can actually talk about it publicly without being ostracized.
There are many more people, much more talent, many more resources, and much more money available in the existential risk ecosystem in general. Things are growing really, really nicely. I think the main exception is that we still have a lot of uncertainty about how much time we have. We're making great progress, but it's against an unknown deadline.
Nathan: From the very beginning, you were really worried about the Yudkowsky-style scenario of fast takeoff, or things getting out of control quickly. Have your views on that changed over the last eight years?
Jaan: I mean, a little bit. I guess they have diversified, or gotten more uncertain, over time as more people have entered the AI safety space. There are basically more hypotheses, more views about what might happen and what might be important. My general approach is to look at the arguments people are making and see whether I can find a bug in them. If I can't find a bug, I just say, "Okay, here's some probability mass for you." People have made more plausible arguments, so I have increased my uncertainty.
I miss the days when it was just one fairly crisp idea of what might go wrong. Recently, I funded a project in Berkeley that is trying to reconcile different ideas, different hypotheses of how things might develop, so we can have a better, hopefully more compact idea of what's going to happen.
Nathan: I think one way you have always stood out is in your epistemic modesty. Here, going back to 2010, you're this very successful person; you've had this very visible company that's made a huge impact on the world. But when I first encountered you, you were willing to spend time with anyone who had an interesting idea about AI, and it did not seem to matter to you what credentials they had or didn't have. And you've also used your own social capital to take the best AI ideas and bring them to places where those ideas' authors might not otherwise have been welcome. So tell us a little bit about your strategy for leveraging your own social capital while being so humble at the same time.
Jaan: One thing that I sometimes say is that there are really nice benefits you get from doing programming full-time. One of them is that you develop an intuitive sense of what it means to have your thoughts fully specified, and you can apply that ability to other people's arguments: are they just using metaphors to conflate things, or are they actually saying something that a machine could potentially understand?
This has been useful when I want to use it for epistemics. When I wanted to evaluate arguments by a bunch of weird people at the Singularity Institute, that really helped. Another thing that programming gives you is, indeed, epistemic humility. Because with programs, you're so confident that you know what's going to happen, and then you press F5, and nope. Then you look at it and think, "Yeah, okay. Clearly, I made this mistake." You fix it. Nope, that still wasn't it.
So, yeah, it creates epistemic humility. And when I enter a young ecosystem, I can immediately ask, "Okay, what are the things I could contribute?" after trying to debug the arguments and finding that they seem to be solid. And indeed, I found that, as a side effect of my programming career, I picked up this brand, which I can then use by basically attaching the brand and my reputation to arguments that I believe are true, but for which people needed some kind of brand to be attached before they would take them seriously. That has seemed to work pretty well.
For example, Huw Price, my co-founder at the Centre for the Study of Existential Risk in Cambridge, wrote this New York Times piece. In it, he explicitly said that the reason he started taking x-risk seriously, even though he had heard those arguments many times before, was that somebody with a reputation for being practical in software was actually taking those ideas seriously.
Nathan: Were there some very early fundamental arguments that persuaded you most, and do those still hold a lot of weight in your mind today?
Jaan: Oh, absolutely. The main realization is the idea of recursive self-improvement. The idea, I think, was initially formulated by I.J. Good in 1965, but I read it on Overcoming Bias when Eliezer Yudkowsky was writing there. He was making this argument in a very clear manner that I bought into, and I still think it's one possible way things could go massively wrong in the world. We could have an accident in an AI lab where we're pushing meta-learning or whatnot, and suddenly the meta-learning system starts to meta-learn, and you have a system that you can no longer turn off.
Nathan: So how has that seed view evolved? I mean, you've talked about more different ways that things can go wrong, so now, you have a richer view of that.
Jaan: Yeah.
Nathan: So tell us a little bit about the more nuanced view that you have today.
Jaan: One competing theory is that the current deep learning framework might actually take us to the point where we have systems that are roughly human-level or superhuman, but that still cannot understand themselves; they're still fairly opaque to themselves. In that case, there'd be no immediate danger of a fast takeoff in a lab, but you'd still have a situation of global instability, because suddenly humans are no longer the smartest planners on this planet, for example.
So yeah. I think it's a more complicated, messy, less clearly defined scenario, but I do think it's plausible, and so I'm assigning some probability mass to that.
Nathan: And you're involved now with a number of different organizations as an adviser, in some cases as a funder. Give us a little sense of your portfolio of activity across the AI space with the different organizations, and sort of strategies that you're involved with.
Jaan: Yeah, so my strategy over the last ten years or so has been to cultivate diversity in the ecosystem. If I see that there are some smart and, what's now called, "aligned" people who want to contribute to existential risk reduction, what I've done is say, "Okay, here's some seed money." And then, if they stick around and start growing and becoming productive, I increase the amount of funding that I support them with.
Over those 10 years or so, I've been contributing to somewhere between 10 and 20 different organizations and projects in this space. Recently, starting last year, I decided that I should really start scaling up. One half-joking way to say it is that I assign very significant probability to the human economy not surviving the introduction of human-level AI, meaning there is no point in maximizing the amount of money you have at the point when money becomes meaningless.
So I need to look at how much time I have to spend it. Obviously, it's still possible that the human economy, or something that resembles the human economy, will continue after that, but I don't see why it would. I might be wrong, but this means that I do indeed have to scale up my giving.
So I've been working with BERI - existence.org - and with Critch, who is sitting right there, to see how we can "institutionalize" me. Critch put it in a nicer way yesterday: BERI was originally conceived to help various x-risk institutions. Then I approached BERI with the thought, "Okay, perhaps you can help me." Yesterday, Critch was like, "Oh yeah, Jaan is like an institution, okay." I'm qualified.
Nathan: You've put a lot of energy, highly intentionally, into moving the Overton Window, specifically around what kind of thoughts or worries are okay to have. Going back eight to ten years, it was just too weird for anybody who was a respectable, tenured computer science professor, even in the AI space, to have worries about things going wrong. That seems to have shifted in a big way. Where do you see the Overton Window being today, and do you think that work is done? Has it shifted, or is there more to do?
Jaan: There is more to do. It seems to me that the Overton Window right now caps out at technological unemployment, so the weirdest thing that you can talk about when it comes to AI, in a "respectable" setting, is its societal effects.
Nathan: This is, of course, not a respectable setting.
Jaan: No, it's not. By "respectable setting," I mean one where people are mostly concerned about looking respectable and optimizing the opinions of others. I would really like to push the Overton Window higher by promoting the realization that AI might have effects beyond social issues. So I've been pushing it by saying that AI would be an environmental risk, because, first of all, I think it eventually is an environmental risk. The nice thing about environmental risks is that they unify humans. We are very picky about our environment. Change it by a hundred degrees, which is tiny on an astronomical scale, and we go extinct in a matter of minutes. An AI doesn't care about the environment. Why should it preserve those parameters?
The nice thing is that if there were a realization of "Oh, wait a minute, we need to control AI in order to not lose the environment," then that is a much more tangible thing to have a global discussion about, whereas if you just talk about social issues, it's harder. For example, in the Asilomar Principles that we did last year, there is a very contentious principle about human rights, which can automatically preclude the Chinese from signing on to the principles. However, if the concern were AI's long-term environmental effects, there would be no problem bringing everybody on board.
Nathan: That's fascinating. So you're going around giving a lot of talks on the subject, trying to find the right way to deliver the message so that it can be heard and so that you seem just the right level of weird. You've mentioned China, and you said you were recently in China and gave a series of talks there. So, what's the report from China? Obviously, AI is something that Chinese society is really working on.
Jaan: Yeah, I had a few updates. One update was that - I grew up in a totalitarian regime, so I basically knew what to expect, but I hadn't been to mainland China for a decade or two - it seems to be a much freer place than the place I remember behind the Iron Curtain. So that was nice, I guess. Another update was that the AI researchers, at least the ones that I talked to, were very heads-down: optimizing classifiers, doing robotics, things like that. They were proud to say that they were very practical.
But, on the other hand, there was a really interesting appetite for talking about the long-term, philosophical implications of AI, which was surprising - even at a level that I don't see much in the West, or that people in the West are more careful about voicing. So that was interesting. Half-jokingly - I don't know if this is a good idea or not - I think it might be an interesting idea to have China as the grown-up in the room, because they take real pride in saying that they're an old civilization that thinks long-term, as opposed to those silly democratic countries that just think four years at a time.
There are pros and cons, obviously, to that, but let's just exploit those pros and have them think five or six years into the future and perhaps be a couple of years ahead.
Nathan: I think, maybe, the best piece that I've seen in the American mainstream media about AI and the future of society was from a Chinese legal scholar who said, "With a powerful AI, you might actually be able to have a planned economy that is effective, and you don't necessarily have to rely on market mechanisms, and you don't necessarily have to fall into a scenario where a few people sort of own all the data in society, and we can actually collectivize that and all share the benefits of it." Were these things you heard there?
Jaan: I was aware of that. I did a lot of preparation work before I went to China, so I had heard those things, but I heard them during the preparation work. But it's funny, actually. I've seen communism fail, and the reason it failed was that people don't like to work. If you have AI doing all the work, you don't have that problem - for communism, at least. You might have other problems, but at least you eliminate this particular, very crucial one.
Nathan: In terms of your… we've covered a lot of your personal and reputational efforts to move the window and get people thinking in new ways, and you've also been supporting a lot of organizations. What kinds of organizations are you supporting? Do you think about investing in for-profits, nonprofits, a mix, or do you not care? What kind of organizations are you looking to support?
Jaan: Almost entirely nonprofits. A basic computer science realization is that when you have two optimization targets, you are not going to be great at either one. So if you want to optimize for the good of the world and for profit, then you have to trade off one against the other. I think the maximum effectiveness comes from effective altruist organizations or nonprofits that don't have the profit constraint.
Interestingly, the same applies elsewhere. Think about a startup whose business plan is to do a bunch of fundamental physics research and then use those fundamental results to gain a commercial advantage - it would just sound silly. However, that is a very typical pitch from AGI companies: they're going to do fundamental AI research and then use the results to get a commercial advantage, which almost never works. When you start doing that, you immediately get this tension, and so I think the most successful AI research efforts, capabilities-wise, have been either in academia or in nicely cordoned-off sections of big companies that have lots of cash. So, yeah.
Nathan: How about what's going on in your own home country of Estonia? I mean, I'm hearing more about great companies being built there. Obviously, the Estonian government is known for being a technology leader. Do you think that Estonia has a role to play in this kind of AI future, given its unique position as a digital society?
Jaan: Possibly. If so, then mostly on the near-term issues of how you integrate increasingly sophisticated technology. But I think there's a fairly stark phase change when you go from subhuman systems that are just really smart in certain domains to superhuman systems that might actually do their own technology development. It's not clear that the work that has been spent on the shorter-term problems immediately scales. Some of it might carry over to the longer term, but it's not obvious.
Nathan: Changing gears a little bit, you've also spoken quite a bit about global coordination as a challenge of interest to you. Just give us a little overview of your interest in that topic and how you think about-
Jaan: Well that will be like-
Nathan: In two minutes. Yeah.
Jaan: That'll be like two hours!
Nathan: Maybe even just pointers to other places where people can go find more would be good as well.
Jaan: Yeah, I mean, I've been thinking about it a little bit over many years now, so it's fairly diluted thinking about things like upcoming technologies that might make global coordination easier. There are some examples. We're going to get a lot of data about what's happening on the planet, and people are already using that to weed out or find bad players.
For example, with deforestation, it's apparent that it's harder to deforest the planet now than it was before ubiquitous satellites. And blockchain is a particular interest of mine. It turns out there are a few things you can do with it. One way we put it is that for the last seven or eight years, we have had a regime on the planet where it's possible to globally agree about a piece of data without trusting any central authority to maintain that data. Can we use this interesting property that the world now has to come up with coordination regimes that work in systems where the participants don't necessarily trust each other?
Nathan: Google it for more. There's a lot more out there. How much do leaders of huge tech companies actually talk to each other?
Jaan: I'm not actually sure.
Nathan: Fair.
Jaan: Yeah, I don't know.
Nathan: I always appreciate your willingness to say, "I don't know." What would you advise people to do who are interested in either earning to give to philanthropically support AI safety, or possibly trying to get more involved directly if they can? But let's just start with the money. Where is the best value per dollar right now in AI?
Jaan: I'm not sure if Critch appreciates me saying so, but the current situation with BERI - existence.org - is that I can't easily give them more, because the fraction of their funding coming from me would become too big a proportion. If other people joined me, that might actually be helpful. So that's one thing. But yeah, I think just going to 80,000 Hours and following their advice is good. They have done a lot of thinking about how people can be useful, so anything that I can say is going to be strictly inferior to what they have been saying.
Nathan: How afraid do you find famous people to be about talking about issues that might be perceived as weird, even if they are important?
Jaan: It really depends on what type of reputation they have to protect. If they're politicians, they are obviously really careful about what they say. My co-founder at the Centre for the Study of Existential Risk, Lord Martin Rees, is a politician, and it's fascinating to see how cleverly he balances the "craziness" of the message he's delivering against how it will be perceived. So that is interesting.
Nathan: Not sure if this one is right up your alley or not, but what do you think is the main bottleneck to increasing cryonics adoption?
Jaan: The main bottleneck? I might be wrong, but I think it's still that it's not certain that it works. I do have an investment in Nectome, which is competing with cryonics. They're doing destructive brain preservation, and they have their own bias, of course, but they say, "Look, if you just look at the scans of brains after freezing, it doesn't seem that there's much information left there." I don't have a lot of information myself, but if there were much higher confidence that this thing works, then hopefully it would become a more popular thing to do. Obviously, there are also many weird social blocks that need to be overcome.
Nathan: When you say destructive brain preservation is that like slicing and scanning?
Jaan: Yeah, that's basically what they do. First, they pump you full of cyanide and other chemicals, and then heavy metals to make the brain high-contrast. Then they freeze you, so there's no hope-
Nathan: You're not coming back.
Jaan: You're not coming back.
Nathan: Not in the same form.
Jaan: Yeah, because they preserve the information, not your brain.
Nathan: Beyond AI, which is obviously your number one focus, are there other x-risks that you see rising to near that AI level in terms of urgency?
Jaan: Synthetic bio has always been in a very close race with AI, because it seems much easier to use for destructive purposes. Much cheaper. On the other hand, it has this nice property that we can use intelligence to control it, whereas it's not clear that we can do that in the AI case.