Retrospective: PIBBSS Fellowship 2023

By Dušan D. Nešić (Dushan) @ 2024-02-16T17:48 (+17)

Between June and September 2023, we (Nora and Dusan) ran the second iteration of the PIBBSS Summer Fellowship. In this post, we share some of our main reflections about how the program went, and what we learnt about running it. 

We first provide some background information about (1) the theory of change behind the fellowship, and (2) a summary of key program design features. In the second part, we share our reflections on (3) how the 2023 program went, and (4) what we learned from running it.

This post builds on an extensive internal report we produced back in September. We focus on information we think is most likely to be relevant to third parties, in particular:

Also see our reflections on the 2022 fellowship program. If you have thoughts on how we can improve, you can use this name-optional feedback form.

Background

Fellowship Theory of Change

Before focusing on the fellowship specifically, we will give some context on PIBBSS as an organization. 

PIBBSS overall

PIBBSS is a research initiative focused on leveraging insights and talent from fields that study intelligent behavior in natural systems to help make progress on questions in AI risk and safety. To this aim, we run several programs focusing on research, talent and field-building. 

The focus of this post is our fellowship program - centrally a talent intervention. We ran the second iteration of the fellowship program in summer 2023, and are currently in the process of selecting fellows for the 2024 edition. 

Since PIBBSS' inception, our guesses for what is most valuable to do have evolved. Since the latter half of 2023, we have started taking steps towards focusing on more concrete and more inside-view driven research directions. To this end, we started hosting several full-time research affiliates in January 2024. We are currently working on a more comprehensive update to our vision, strategy and plans, and will be sharing these developments in an upcoming post.

PIBBSS also pursues a range of other efforts aimed more broadly at field-building, including (co-)organizing a range of topic-specific AI safety workshops and hosting semi-regular speaker events which feature research from a range of fields studying intelligent behavior and exploring their connections to the problem of AI Risk and Safety.

Zooming in on the fellowship

The Summer Research Fellowship pairs fellows (typically PhDs or postdocs) from disciplines studying complex and intelligent behavior in natural and social systems with mentors from AI alignment. Over the course of the 3-month program, fellows and mentors work on a collaborative research project, and fellows are supported in developing skills relevant to AI safety research.

One of the driving rationales behind our decision to run the program is that a) we believe there are many areas of expertise (beyond computer science and machine learning) with useful (if not critical) insights, perspectives and methods to contribute to mitigating AI risk, and b) to the best of our knowledge, no other program specifically aims to provide an entry point into technical AI safety research for people from such fields.

What we think the program can offer: 

In terms of secondary effects, the fellowship has significantly helped us cultivate a thriving and growing research network that cuts across typical disciplinary boundaries and combines more theoretical and more empirically driven approaches to AI safety research. This has synergized well with other endeavors already present in the AI risk space, and continuously provides us with surface area for new ideas and opportunities.

Brief overview of the program

The fellowship started in mid-June with an opening retreat, and ended in mid-September with a final retreat and the delivery of Symposium presentations. Leading up to that, fellows participated in reading groups (developed by TJ) aimed at bringing them up to speed on key issues in AI risk. For the first half of the fellowship, fellows worked remotely; during the second half, we all worked from a shared office space in Prague (FixedPoint).

Visual representation of the PIBBSS program in 2023

 

We accepted 18 fellows in total, paired with 11 mentors. (You can find the full list of fellows and mentors on our website.) Most mentors were paired up with a single fellow, some mentors worked with two fellows, and a handful of fellows pursued their own research without a mentor. We have a fairly high bar for fellows working on their own project without mentorship: these were cases where we were both sufficiently excited about the suggested research direction and had enough evidence about the fellows' ability to work independently. Ex-post, we think this worked well, and that it is a useful format for partially alleviating the mentorship bottleneck experienced by the field.

Beyond mentorship, fellows are supported in various ways: 

We made some changes to the program structure compared to 2022: 

Organizing the fellowship took ~1.5 FTE split between two people, as well as various smaller bits of work provided by external collaborators, e.g. help with evaluating applications, facilitating reading groups, and developing a software solution for managing applications.

Reflections

How did it go according to fellows?

Overall, fellows reported being satisfied with being part of the program, and having made useful connections.

Some (anonymized) testimonials from fellows (taken from our final survey): 

How did it go, according to mentors? 

Mentors overall found the fellowship a good use of their time, and think strongly that it should happen again.

Some (anonymized) testimonials from mentors (taken from our final survey): 

How did it go according to organizers? 

Appendix

A complete list of research projects


CAISID @ 2024-02-16T18:39 (+5)

This looks like it produced a lot of really beneficial research and made a professional difference for people too. I also really like how this post is laid out. It's a good example for similar reports. I've signed up for updates from PIBBSS - looking forward to seeing what is next!

SummaryBot @ 2024-02-19T18:54 (+1)

Executive summary: This post reflects on the 2023 iteration of the PIBBSS Summer Fellowship, a 3-month program pairing PhD/postdoc fellows with mentors to collaborate on AI safety research, summarizing key aspects of the program and sharing learnings.

Key points:

  1. The fellowship aims to bring in external expertise to diversify perspectives and methods in AI safety research.
  2. In 2023, there were 18 fellows paired with 11 mentors, plus a 6-week in-person residency and new project report structure.
  3. Both fellows and mentors found value in the collaborations and connections formed through the program.
  4. Research output increased compared to 2022 across a range of AI safety topics according to organizer assessment.
  5. Key successes were transitioning academics to AI risk, high praise for some fellows' potential impact, and research contributions in interpretability, updateless decision theory, and other areas.
  6. Challenges remain around mentor bandwidth and further improving research output.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.