Reflections on EA Global London 2019 (Mrinank Sharma)
By Aaron Gertler 🔸 @ 2019-10-29T23:00 (+26)
This is a linkpost to https://mrinanksharma.github.io/post/eag_reflect/
Aaron's note: I'm posting this because I really enjoyed reading a participant's reflections on EA Global. Since every experience is different, I'd love to see more posts like this — especially from other first-timers!
I attended EA Global for the first time in October, and I absolutely loved the experience. I thought it would be useful to go over all the notes that I made, mostly for myself, but also on the off-chance that it would be helpful for other people.
I’ve summarised some of the notes that I made on general topics below. Please note that there may be errors, and that I may have misrepresented people’s views, though this is certainly not my intention!
Making the Most of EA Global
The primary advice that I read before going was to maximise the time spent in one-on-one meetings and workshops as opposed to talks, most of which are later uploaded online. I only ended up filling in my Whova (conference application) profile fairly late, but would strongly recommend doing this (early!), as well as reaching out to people who share similar interests to you. I think the advice that I received was mostly spot on, and the most useful experiences that I had were certainly these one-on-one meetings.
I’d also like to echo the advice to write down, or at least consider, your goals for the conference.
Additionally, bring a notepad and make notes! Otherwise, you’ll inevitably forget something really useful.
This EA forum post by Risto Uuk is great.
AI Policy & Governance
Broadly, we can divide the roles in this field into researchers and those implementing policies (either within industry or within government). If you are interested in AI policy, it is best to focus primarily on your fit for the role rather than shoe-horning yourself into a role which you may think is “more important”.
You are a good fit for a researcher if:
- You are happy and excited to apply concentrated effort to one particular idea.
- You have strong internal motivation and are self-driven.
- You are comfortable working without much supervision; it seems that there is a dearth of supervision at the moment.
- You are happy being far removed in time from the decision-making process.
You are a good fit for a role in government or industry if:
- You are a natural generalist, and enjoy working on many problems at once.
- You are extroverted, and have strong social skills.
- You are likeable, trustworthy and good at small talk.
- You are patient. It can be frustrating when people do not implement precisely what you are suggesting.
This is not to say that it isn’t useful to be a researcher with excellent social skills!
Considering long-termism, you ought to try and figure out what needs to happen for a positive outcome post-AGI. The decisions made today will affect the future landscape, but when attempting to convince other people, be aware that a long-termist standpoint may alienate them.
Do not underestimate the importance of institutional work; trying to improve institutional capacity and establishing norms can be useful.
Do you need technical expertise in AI?
The advice that I heard was that it is mostly not necessary; unless you are already doing a PhD in ML/AI, it is probably not worth pursuing one. However, whilst most questions that you will be trying to answer will not benefit from this knowledge, there is a unique set of questions which you will be better equipped to answer, such as understanding the strategic importance of new developments.
A good target for technical expertise would be to be able to make sense of the Import AI Newsletter.
So if you shouldn’t do an ML PhD, what should you do? The advice seems to be that a degree in International Relations would be very useful.
Industry vs Government
It seems that government work is more neglected compared to working in an industry lab. Industry experience is still useful, but perhaps more as a middle step: it is easier to learn skills and advance your reputation in industry, and people in such labs end up advising government anyway. It tends to be slower to build up credibility in government positions.
AI Research
Applying for Internships at OpenAI
Many internships at OpenAI are organised on an ad hoc basis; if there is somebody you specifically want to work with, it’s best to send them an email with a few ideas, suggesting a collaboration.
Labs vs Academia
There is less pressure to publish at OpenAI compared to more traditional academia, and there is a significantly higher focus on impact, i.e., the choice of project depends on the OpenAI mission. This is typically not the case in academia.
At OpenAI, the research leads suggest project proposals and roadmaps, which are then iterated upon.
OpenAI seems to focus more on current techniques and less on neuroscience than DeepMind (I’m not entirely sure how accurate this is).
Choosing a PhD Topic
If you are unable to work in safety directly, bear in mind that the transition from normal ML research to safety research seems to be doable and commonly done. You could then choose your topic by considering the following factors:
- Prestige; working on a topic which is more likely to be read by others may give you a higher potential for impact on the future.
- Personal interest.
- Commercial incentives; consider what problems are likely to be neglected by industry (i.e., those problems for which there is no commercial incentive to do the research) but are still important.
- Immediate impact; working on a topic which has direct applications, for example in healthcare scenarios, could be beneficial.
- Skill development; consider which topics give you the opportunity to develop your skills in a number of different, useful areas.
- Neglectedness; consider what the marginal impact is of one additional researcher in XYZ.
Keeping Up with Papers
There are vast numbers of papers to read, especially in AI! Keeping up with research is hard, but a simple way of prioritising is to ask senior people which papers to read. Slowly, you will develop intuition about which papers to prioritise.
Also follow the Import AI Newsletter, as well as the AI Alignment newsletter.
What’s Stopping Advanced Applications of AI?
In many cases, there are cultural issues (within an industry) with applying algorithms to make crucial decisions. Whilst interpretability of systems would increase buy-in, there are also key issues with the quality of data, and with the infrastructure to collect high-quality data.
It is worth noting that the barriers here seem not to be technical, so it is unclear how much of an impact technical research would have here.
Closing Comments
I absolutely loved attending EA Global 2019, and one of the most beneficial aspects of going was starting to build up a network of people who share similar interests. I learnt a great deal from other people, and strongly recommend going if you are on the fence!
If you’ve spotted any errors in this post, please do contact me and I’ll do my best to respond and to fix them.
ofer @ 2019-10-30T20:36 (+1)
What’s Stopping Advanced Applications of AI?
In many cases, there are cultural issues (within an industry) with applying algorithms to make crucial decisions. Whilst interpretability of systems would increase buy-in, there are also key issues with the quality of data, and with the infrastructure to collect high-quality data.
It is worth noting that the barriers here seem not to be technical, so it is unclear how much of an impact technical research would have here.
Perhaps this model was proposed for certain domains? Maybe ones in which laws restrict applications, like driverless cars?
It doesn't seem plausible to me for all domains (for example, it doesn't seem plausible for language models or quantitative trading).