Singapore’s Technical AI Alignment Research Career Guide

By Yi-Yang @ 2020-08-26T08:09 (+34)

The original version of this career guide is on EA Singapore's website. I've removed some significant chunks from the guide and made some changes so that it's better suited to the average EA Forum audience. If you're based in Singapore, you might want to read the original version instead.

Epistemic status: 70% confident that technical AI alignment research is among the top 5 highest-impact career pathways in Singapore. I put about 100 hours into this, roughly 40% of that on the problem profile (section 3) and the remaining 60% on the career guide (section 4). Besides some online research, I did two "formal" interviews and got "informal" feedback from local AI researchers by sending them the minimum viable product of this document. I don't have a good broad picture of the individual AI researchers and their work in Singapore, which means I can't really pinpoint the best AI researchers to work with here. Furthermore, since I'm not an AI researcher myself, I don't have a good "inside view" of the space either.

1. Introduction

AI risk is a pressing cause area, but not many people are working directly on it in Singapore. This career guide is meant to clear up some uncertainties about working on AI risk, as well as inspire more people to pursue a career pathway in technical AI alignment research.

In this career guide, I will (a) explain why pursuing a technical AI alignment research career in Singapore is potentially impactful, and (b) give my recommendations on how to plan for a career in this field.

2. Acknowledgements

Many of the original ideas that motivated me to write this career guide came from Loke Jia Yuan. I have also received many helpful insights and feedback from my discussions with: Tan Zhi-Xuan, Harold Soh, Nigel Choo, Jason Tamara Widjaja, Lin Xiaohao, Lauro Langosco, Aaron Pang, Wanyi Zeng, Simon Flint, and Pooja Chandwani. Their kind help does not imply they agree with everything I’ve written. All mistakes and opinions in this document remain my own.

3. Is Singapore a good country to pursue a career in technical AI alignment research?

3.1. Reasons in favour

3.2. Reasons against

3.3. Conclusion

Singapore is not able to compete directly with other countries on the quality and quantity of AI research, and its research is focused more on short-term AI capabilities. Yet I think there are pockets of opportunity that we can leverage to contribute towards safe and beneficial AI research. There are existing organisations working on tangentially related short-term AI alignment issues, and the Singapore government is already taking charge of developing an AI ecosystem in the country in terms of funding, talent, and regulation. Furthermore, the government's core competencies in long-term foresight and in developing economic influence in other countries can further amplify AI researchers' impact.

What does this mean for you as a potential job seeker wanting to make an impact in safe and beneficial AI research? If you're not able to move out of Singapore to pursue such a career, then your next-best option is to work in existing organisations in Singapore that are working on AI alignment issues (even if they are focused on short-term AI capabilities). It might also be good to work at the intersection of AI alignment research and an industry endorsed by the National AI Strategy. For example, NUS Ubicomp Lab, which works on AI explainability within public health, sits at that intersection. Expertise developed here may be exported to other countries in the future, increasing the impact of your research. Furthermore, while building your career capital in AI research, it is also likely to be impactful to build a community of safety-aligned researchers in Singapore.

4. Key career recommendations

4.1. How to make an impact in technical AI alignment research within Singapore

In general, there are two broad approaches to making an impact in technical AI alignment research in Singapore. First, you can enter the for-profit sector, as a researcher, engineer, or product manager, then move up the ladder while shaping priorities towards safe and beneficial AI R&D. However, it's important that you stay up to date with the latest developments in the AI alignment space; this helps ensure that when you communicate AI risks and ethics to those less familiar with them, you do so in a way that is clear, science-based, and realistic (neither scaremongering nor naively optimistic).

The second approach is to enter academia. Here, your focus should be on conducting AI alignment research. A secondary goal is to cultivate a community of safety-aligned researchers, with the aim of collaborating on research or even forming an academic research group. You could also aim to move into the intersection of AI alignment research and an industry endorsed by the National AI Strategy (freight management, municipal services, education, public health, and border control). Given the government's history of developing economic interdependencies and exporting expertise, you might be able to extend your impact beyond Singapore.

For a high-level view of these career paths, you can take a look at this flowchart.

4.4. Recommended local organisations

4.4.1. Academic Institutions

There are three potentially promising academic centres that work on tangentially related safe and beneficial AI research:

If you’re not able to work in these recommended local academic institutions, it’s probably still worth building career capital in AI research now and then shifting towards AI alignment research later. There are many other opportunities for AI research at institutions such as NUS, NTU, or A*STAR.

And if you’re a Singapore citizen, some corporate-funded scholarships (such as SenseTime-NTU, Alibaba-NTU, Salesforce-NTU/NUS, or A*STAR’s scholarships) offer a considerable monthly allowance (SG$5,000), which is slightly above the median salary of a fresh graduate with a BSc in CS. This is a great entry point into becoming a researcher in industry, if you are quite certain you are not planning to go into academia in the long term.

4.4.2. Big tech companies

Besides university AI labs or PhD programmes that are partnered with tech companies, you can also apply directly to work at a company. In general, I think you should aim for a big, prestigious tech company that has an AI lab in Singapore, as such companies have more resources and influence over the tech landscape. However, that also means the roles are likely to be more competitive.

Furthermore, if your interests lie heavily in computer vision, it's better to find a job in private-sector R&D than in academia. According to one PhD student, tech companies generally have more resources in this area, since deep learning at a high level can become a competition of resources.

Here are some recommended companies to apply to:

Besides the ones I’ve listed here, you can potentially find more tech companies on the Singapore High Impact Job Board.

4.5.4. Considerations for migration

4.5.4.1. Non-Singapore citizens

Graduate programmes, research jobs, and faculty positions are very welcoming to global talent. However, for non-research government organisations, I expect this would require a case-by-case inquiry. Organisations or sub-organisations in defence, strategy, or policy are likely to be restricted to Singapore citizens. For example, only Singapore citizens are allowed to apply for jobs in the National AI Office.

In the private sector, this is probably easier if you have very prestigious credentials and exceptional experience. It’s also easier if you are already at a Singapore university, since universities here have career services for international students.

4.5.4.2. Singapore citizens

If you’re a Singapore citizen and you’re able to get a job in the US, you can take advantage of the H-1B1 visa, which has a dedicated annual quota for Singaporeans that, unlike the regular H-1B cap, is rarely oversubscribed.


rohinmshah @ 2020-08-26T18:29 (+7)
However, such research on short term AI capabilities is potentially impactful in the long term too, according to some AI researchers like Paul Christiano, Ian Goodfellow, and Rohin Shah.

Huh, I don't see where I said anything that implied that? (I just reread the summary that you linked.)

I'm not entirely sure what you mean by "short term AI capabilities". The context suggests you mean "AI-related problems that will arise soon that aren't about x-risk". If so, under a longtermist perspective, I think that work addressing such problems is better than nothing, but I expect that focusing on x-risk in particular will lead to orders of magnitude more (expected) impact.

(I also don't think the post you linked for Paul implies the statement you made either, unless I'm misunderstanding something.)

yiyang @ 2020-08-27T08:23 (+5)

Regarding what I meant by "short term AI capabilities", I was referring to prosaic AGI: potentially powerful AI systems that use current techniques instead of hypothetical new ideas about how intelligence works. When you mentioned "I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using 'essentially current techniques'", I took it as prosaic AGI too, but you might mean something else.

I've reread all the write-ups, and you're right that they don't imply that "research on short term AI capabilities is potentially impactful in the long term". I really jumped the gun there. Thanks for letting me know!

I've rephrased the problematic part to the following:

"Singapore’s AI research is focused more on current techniques. If you think we need to have new ideas on how intelligence works to tackle AI alignment issues, than Singapore is not a good country for that. However, if you think prosaic AGI [link to Paul's Medium article] is a strong possibility, then working on AI alignment research in Singapore might be good."

If you feel like this rephrasing is still problematic, please do let me know. I don't have a strong background in AI alignment research, so I might have misunderstood some parts of it.

rohinmshah @ 2020-08-28T02:08 (+5)
When you mentioned "I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using 'essentially current techniques'", I took it as prosaic AGI too, but you might mean something else.

Oh yeah, that sounds correct to me. I think the issue was that I thought you meant something different from "prosaic AGI" when you were talking about "short term AI capabilities". I do think it is very impactful to work on prosaic AGI alignment; that's what I work on.

Your rephrasing sounds good to me -- I think you can make it stronger; it is true that many researchers including me endorse working on prosaic AI alignment.

yiyang @ 2020-09-01T03:41 (+3)

That's great! Thanks again for the feedback.

Misha_Yagudin @ 2020-10-03T12:45 (+6)

Working for SenseTime might be associated with reputational risks, according to FT:

The US blacklisted Megvii and SenseTime in October, along with voice recognition company iFlytek and AI unicorn Yitu, accusing the companies of aiding the “repression, mass arbitrary detention and high-technology surveillance” in the western Chinese region of Xinjiang.

At the same time, someone working for them might provide our community with cultural knowledge relevant to surveillance and robust totalitarianism.

Misha_Yagudin @ 2020-10-03T12:47 (+4)

This forecast suggests that extreme reputational risks are non-negligible.

yiyang @ 2020-10-16T13:04 (+1)

Hi Misha, sorry for the late reply. Thanks for the heads up! I've added this feedback for a future draft.