Why were people skeptical about RAISE?

By Chris Leong @ 2019-09-04T08:26 (+14)

RAISE was a project that was aiming to build an online course for AI safety. It shut down because their attempt at a study didn't show any significant improvement, but I know that some people were skeptical of the project goal, not just of its failure to achieve that goal. What was the worry here? Was it related to excessively growing the size of the field, the idea that anyone capable of significantly contributing wouldn't need an on-ramp, the choice of topics, or something else?


Habryka @ 2019-09-04T17:06 (+24)

I was mostly skeptical because the people involved did not seem to have any experience doing any kind of AI Alignment research, nor did they themselves have the technical background they were trying to teach. I think this caused them to focus on the obvious things to teach, instead of the things that are actually useful.

To be clear, I have broadly positive impressions of Toon and think the project had promise; it's just that the team didn't have the skills to execute on it, which I think few people have.

PeterMcCluskey @ 2019-09-04T13:29 (+18)

>anyone capable of significantly contributing wouldn't need an on-ramp

That's approximately why I was skeptical, although I want to frame it a bit differently. I expect that the most valuable contributions to AI safety will involve generating new paradigms, asking questions that nobody has yet thought to ask, or something like that. It's hard to teach the skills that are valuable for that.

I got the impression that RAISE was mostly oriented toward producing people who become typical MIRI researchers. Even if MIRI's paradigm is the right one, I expect that MIRI needs atypically good researchers, and would only get minor benefits from someone who is struggling to become a typical MIRI researcher.


richard_ngo @ 2019-09-05T21:11 (+1)

> RAISE was oriented toward producing people who become typical MIRI researchers... I expect that MIRI needs atypically good researchers.

Slightly odd phrasing here which I don't really understand, since I think the typical MIRI researcher is very good at what they do, and most of them are atypically good researchers compared with the general population of researchers.

Do you mean instead "RAISE was oriented toward producing people who would be typical for an AI researcher in general"? Or do you mean that there are only minor benefits from additional researchers who are about as good as current MIRI researchers?

PeterMcCluskey @ 2019-09-06T12:12 (+3)

I meant something like "good enough to look like a MIRI researcher, but unlikely to turn out to be more productive than the average MIRI researcher". I guess when I wrote that I was feeling somewhat pessimistic about MIRI's hiring process. Given optimistic assumptions about how well MIRI distinguishes good from bad job applicants, I'd expect that MIRI wouldn't hire RAISE graduates.

rohinmshah @ 2019-09-04T16:22 (+15)

Depends what you call the "goal".

If you mean "make it easier for new people to get up to speed", I'm all for that goal. That goal encompasses a significant chunk of the value of the Alignment Newsletter.

If you mean "create courses that allow new people to get the required mathematical maturity", I'm less excited. Such courses already exist, and while mathematical thinking is extremely useful, mathematical knowledge mostly isn't. (Mathematical knowledge is more useful for MIRI-style work, but I'd guess it's still not that useful.)

riceissa @ 2019-09-04T20:45 (+1)

I'm not sure I understand the difference between mathematical thinking and mathematical knowledge. Could you briefly explain or give a reference? (E.g., I am wondering what it would look like if someone had a lot of one and very little of the other.)

rohinmshah @ 2019-09-04T22:03 (+3)

Mathematical knowledge would be knowing that the Pythagorean theorem states that a^2 + b^2 = c^2; mathematical thinking would be the ability to prove that theorem from first principles.

The way I use the phrase, mathematical thinking doesn't only encompass proofs. It would also count as "mathematical reasoning" if you figure out that means are affected by outliers more than medians are, even if you don't write down any formulas, equations, or proofs.
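A minimal Python sketch of that means-versus-medians point (the numbers below are made up for illustration, not taken from the discussion):

```python
import statistics

# A small dataset, then the same dataset with one extreme outlier added.
data = [1, 2, 3, 4, 5]
data_with_outlier = data + [1000]

# The mean shifts dramatically when the outlier is added...
print(statistics.mean(data))                 # 3
print(statistics.mean(data_with_outlier))    # ~169.17

# ...while the median barely moves.
print(statistics.median(data))               # 3
print(statistics.median(data_with_outlier))  # 3.5
```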

Larks @ 2019-09-05T00:06 (+8)

My notes from the time suggest I thought the team was inexperienced relative to the difficulty of the project, and that their roadmap was poorly calibrated.