CLR Summer Research Fellowship 2024
By Center on Long-Term Risk @ 2024-02-15T18:26 (+89)
We, the Center on Long-Term Risk, are looking for Summer Research Fellows to explore strategies for reducing suffering in the long-term future (s-risks). For eight weeks, you will join our team at our office while working on your own research project. During this time, you will be in regular contact with our researchers and other fellows, and receive guidance from an experienced mentor.
You will work autonomously on challenging research questions relevant to reducing suffering. You will be integrated into and collaborate with our team of intellectually curious, hard-working, and caring people, all of whom share a profound drive to make the biggest difference they can.
We worry that some people won’t apply because they wrongly believe they are not a good fit for the program. While such a belief is sometimes true, it is often the result of underconfidence rather than an accurate assessment. We would therefore love to see your application even if you are not sure if you are qualified or otherwise competent enough for the positions listed. We explicitly have no minimum requirements in terms of formal qualifications and many of the past summer research fellows have had no or little prior research experience. Being rejected this year will not reduce your chances of being accepted in future hiring rounds. If you have any doubts, please don’t hesitate to reach out (see “Application process” > “Inquiries” below).
Purpose of the fellowship
The purpose of the fellowship varies from fellow to fellow. In the past, we have often had the following types of people take part in the fellowship:
- People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong interest in s-risk and would like to learn more about research and test their fit.
- People seriously considering changing their career to s-risk research who want to test their fit for such work.
- People interested in s-risk who plan to pursue a research or research-adjacent career and who would like to gain a strong understanding of s-risk macrostrategy beforehand.
- People with a fair amount of research experience, e.g. from a (partly or fully completed) PhD, whose research interests significantly overlap with CLR’s and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves.
There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above. In all cases, we will work with you to make the fellowship as valuable as possible given your strengths and needs. In many cases, this will mean focusing on learning and testing your fit for s-risk research, more than seeking to produce immediately valuable research output.
Activities
- Carrying out a research project related to one of our priority areas below, or otherwise targeted at reducing s-risks. You will determine this project in collaboration with your mentor, who will meet with you every week and provide feedback on your work.
- Attending team and Fellowship meetings, including giving occasional presentations on the state of your research.
What we look for in candidates
We don’t require specific qualifications or experience for this program, but the following abilities and qualities are what we’re looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria.
- Curiosity and a drive to work on challenging and important problems;
- Ability to answer complex research questions related to the long-term future;
- Willingness to work in scarcely-explored areas and to learn about new domains as needed;
- Independent thinking;
- A cautious approach to potential information hazards and other sensitive topics;
- Alignment with our mission or strong interest in one of our priority areas.
Further details
We encourage you to apply even if any of the below does not work for you. We are happy to be flexible for exceptional candidates, including when it comes to program length and compensation.
- Location: As mentioned in our 2023 review post, we are currently considering relocating a substantial part of our operations from London to Berkeley, California in 2024. We are currently uncertain whether we will choose to run the fellowship in London or Berkeley, so our application form asks which location(s) you’re willing to work from. In either case, we would prefer participants to work from the primary program location, but will also consider applications from people who are unable to relocate.
- International applicants: We expect to be able to facilitate in-person participation in Berkeley or London in the great majority of cases, including support with any immigration permissions or visas that are required.
- Compensation: If the program is based in London, fellows will receive a stipend of 4,000 GBP per month.
- If the program is based in Berkeley, we will consider offering a higher stipend.
- In addition to the base stipend, we will provide funding for travel or immigration costs for fellows who relocate to London or Berkeley for the program.
- Funding will also be available for expenses to facilitate your productivity during the program.
- Number of available positions: We expect to accept six to twelve fellows.
- Program length & work quota: The program is intended to last for eight weeks in a full-time capacity. Exceptions, including part-time participation, may be possible.
- We’re also very happy for participants to take reasonable time out for other commitments such as holidays.
- Program dates: The default start date is June 17, 2024. Exceptions may be possible.
- Office space: Participants will have access to office space in London or Berkeley (see above), working alongside CLR staff and mentors.
- Catered plant-based lunch will be provided at the office space daily.
Priority areas
You can find an overview of our current priority areas here. However, if we believe you can advance high-quality research relevant to s-risks in another way, we are interested in creating a position for you. If you see a way to contribute to our research agenda or have other ideas for reducing s-risks, please apply. We commonly tailor our positions to the strengths and interests of the applicants.
Mentors
All fellows will work with a mentor to guide their project. Below, each of our mentors has written about the topics they are most interested in supervising research on.
At stage 2 of our application process, applicants are asked to submit a research proposal and a list of research proposal ideas. A significant part of our selection process involves our mentors considering whether they would be interested in supervising you, based on the overlap between your research interests and theirs.
Anthony DiGiovanni
- Using frameworks from open-source game theory to model potential cooperation failures between AIs (especially due to the commitment races problem), and ways to mitigate those failures. (Examples: Safe Pareto Improvements; Commitment games with conditional information revelation)
- Improving our understanding of how to implement cooperative technologies like safe Pareto improvements in prosaic AI systems.
Nicolas Macé
- I’m focusing on the same themes as Anthony, and will co-mentor with him.
Mia Taylor
- Developing scalable evaluations for large language models to check for the presence of beliefs, patterns of behavior, preferences, and capabilities that make the model more likely to engage in conflict
- Assessing the validity of existing evaluation methods
- Experimenting with scaffolding and training methods to determine which methods are most likely to suppress or promote conflict-conducive beliefs, behaviors, or preferences
Jesse Clifton
- I’m focusing on the same topics as those Mia listed above.
- However, I'm also interested in considering strong proposals outside these areas.
Julian Stastny
- Designing evals for s-risk-relevant properties.
- Developing model organisms to test those evals.
- Investigating a new empirical research direction.
Tristan Cook
- Quantitative modelling of macrostrategic considerations relevant to s-risks. For example, modelling how different pause AI proposals affect the probability of multipolar takeoff.
- Research on the design of scaffolded LLMs ('bureaucracies') to reduce the risks posed by the commitment races problem
- Prioritisation research related to Evidential Cooperation in Large Worlds (ECL).
Caspar Oesterheld
- I’m interested in supervising fellows working in any of my academic interest areas, as seen on my website and blog.
David Althaus
- Risks from malevolent actors.
- Risks from ideological fanaticism / extremism.
Application process
We value your time and are aware that applications can be demanding, so we have thought carefully about making the application process time-efficient and transparent. We plan to decide on the program location (Berkeley or London) by early to mid-April, and to make final decisions by April 15.
Stage 1: To start your application for any role, please complete our application form. As part of this form, we also ask you to submit your CV/resume and give you the opportunity to upload an optional research sample. The deadline is midnight Pacific Time on Thursday, March 7, 2024. We expect this to take around 2 to 3 hours if you are already familiar with our work. In the interest of your time, you do not need to polish the language of your answers in the application form.
Stage 2: By Tuesday, March 12, we will decide whether to invite you to the second stage. We will ask you to write a research proposal (up to two pages excluding references) and a list of research proposal ideas, to be submitted by Thursday, March 28 at midnight Pacific Time. This means applicants will have 16 days to complete this stage, which we expect will take up to 12 hours of work. Applicants will be compensated with £350 for their work at this stage.
- You can see some example research proposals submitted by previous successful candidates here. Note that we will alter the instructions for the research proposals this year. We plan to make examples for the list of research proposal ideas available before stage 2.
Stage 3: By Thursday, April 4, we will decide whether to invite you to an interview via video call during the week of April 8. By April 15, we will send out final decisions to applicants.
Further details
- Application base rates: Last year, we received 174 applications for the summer research fellowship. We made 13 offers.
- Diversity and equal opportunity: CLR is an equal-opportunity employer, and we value diversity in our programs. We welcome applications from all sections of society and don’t want to discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, marital status, veteran status, social background/class, mental or physical health or disability, or any other basis for unreasonable discrimination, whether legally protected or not. If you would like to discuss any personal needs that may require adjustments to our application process, please feel very free to contact us.
Inquiries
If you have any questions about the process, please contact us at hiring@longtermrisk.org. If you want to send an email not accessible to the hiring committee, please contact Harriet Patterson at harriet.patterson@longtermrisk.org.
Why work with CLR
We aim to combine the best aspects of academic research (depth, scholarship, mentorship) with an altruistic mission to prevent negative future scenarios. So we leave out the less productive features of academia, such as administrative burden and publish-or-perish incentives, while adding a focus on impact and application.
As part of our fellowship, you will enjoy:
- a program tailored to your qualifications and strengths with ample intellectual freedom;
- working to facilitate a shared mission with dedicated and caring people;
- an interdisciplinary research environment, surrounded by friendly and intellectually curious people who will hold you to high standards and support you in your intellectual development;
- mentorship in longtermist macrostrategy, especially from the perspective of preventing s-risks;
- the support of a well-networked longtermist EA organization with substantial operational assistance instead of administrative burdens.
You will advance neglected research to reduce the most severe risks to our civilization in the long-term future. Depending on your specific project, your work may help inform impactful work across the s-risk and AI safety ecosystem, or any of CLR’s activities, including:
- Technical interventions: We aim to develop and communicate insights about the safe development of artificial intelligence to the relevant stakeholders (e.g. AI developers, key organizations in the longtermist effective altruism community). We are in regular contact with leading AI labs and AI safety research nonprofits.
- Research collaborations: CLR researchers have recently been involved in collaborations with researchers from CMU, Oxford, Stanford, Berkeley, MIT, and Google DeepMind.
- Research community: Alongside the Summer Research Fellowship, CLR runs research retreats, bringing together members of the research community to co-ordinate and make progress on problems.
- Grantmaking: In addition to the CLR Fund, some of our staff advise Polaris Ventures, a foundation committed to using all of its funds to improve the quality of life of future generations.
- New projects: In collaboration with people in our network, we are always looking for novel impactful organizations to set up. For instance, we have been involved in the founding of the Cooperative AI Foundation.
Other opportunities at CLR
We’ll soon be hiring for researchers focused on model evaluations. As an empirical researcher at CLR, you will primarily help us build evaluations that improve our understanding of s-risk-relevant properties of AI systems, developing prerequisites to intervening on advanced AI systems. To receive updates about this role and other opportunities at CLR, you can subscribe to our mailing list by submitting your email at the bottom of our website.
PipFoweraker @ 2024-02-15T22:18 (+2)
The default start date is in the past :-)
Center on Long-Term Risk @ 2024-02-16T04:02 (+3)
Thanks for catching this. Fixed now :)