Future Academy - Successes, Challenges, and Recommendations to the Community

By SebastianSchmidt, Vilhelm Skoglund, Lowe Lundin @ 2023-07-08T20:15 (+61)

Introduction

Impact Academy is a new field-building and educational institution seeking to enable people to become world-class leaders, thinkers, and doers, using their careers and character to solve the world’s most pressing problems and create the best possible future. Impact Academy was founded by Vilhelm Skoglund, Sebastian Schmidt, and Lowe Lundin. We have already secured significant funding to set up the organization and carry out ambitious projects in 2023 and beyond. Please read this document for more about Impact Academy, our Theory of Change, and our two upcoming projects. 

The purpose of this document is to provide an extensive evaluation of and reflection on Future Academy - our first program (and experiment). Future Academy aimed to equip university students and early-career professionals worldwide with the thinking, skills, and resources they need to pursue ambitious and impactful careers. It was a free six-month program consisting of four in-person weekends (with workshops, presentations, and socials) and monthly digital events. Furthermore, the 21 fellows worked on an impact project with an experienced mentor and received professional coaching to empower them to increase their impact and become their best selves. Upon completing the program, all participants attended a global impact conference (EAGx Nordics), where four fellows presented their projects. We awarded stipends totaling $20,000 to the best projects, which included a sentiment analysis of public perception of AI risk, a philosophy paper on AI alignment, and a proposal for an organization to improve research talent in Tanzania. Our faculty included entrepreneurs and professors from Oxford University and UC Berkeley.

Note that this document attempts to assess to what extent we’ve served the world. This involves an assessment of the wonderful fellows who participated in Future Academy and of our ability to help them. It is not meant as an evaluation of people’s worth, nor a definitive score of their general abilities, but an evaluation of our ability to help. We hope we do not offend anyone and have tried our best not to do so, but if you think anything we have written is inappropriate, please let us know in the comments or by reaching out to sebastian [at] impactacademy.org.

Main results and successes

 

Main challenges and mistakes

 

Conclusions for Impact Academy

Overall, we think Future Academy was a successful experiment: we were satisfied with how we ran the program and with its outcomes. However, there was significant room for improvement, and we don’t want to run Future Academy in its exact form again. We’ve decided to run another version of Future Academy in which we will continue to primarily target university students and early-career professionals.

We’ll also update the program to reflect best practices from education, the science of learning, and other programs we think highly of. Finally, we’ll explore the feasibility of targeting early- to mid-career professionals, as the wider community seems to be especially interested in individuals with 3+ years of experience.

You can learn more here about the other project we will be running (an AI governance fellowship).

Recommendations for the EA community

Based on our experience with Future Academy, we think these recommendations might provide value to the EA community as a whole:

Acknowledgments

We are incredibly grateful to everyone who supported us in delivering Future Academy. Our funders, who believed in the idea. Our speakers, who were eager to give their time and travel to join us. Our fellows, who were open-minded (and crazy) enough to take a chance on a completely new program. Our mentors, who guided the fellows while they finished their projects. Our own mentors (very much including Michael Noetel), who provided ongoing feedback and guidance and helped us set a high bar. The people who provided feedback on our evaluation (Anine Andresen, Henri Thunberg, Eirik Mofoss, Emil Wasteson, Vaidehi Agarwalla, Cian Mullarkey, Jamie Harris, Varun Agrawal, Cillian Crosson, Raphaëlle Cohen, and Toby Tremlett). Finally, the wider community of do-gooders who collaboratively provided input on everything from the program design to the impact evaluation. Thank you!


 


rileyharris @ 2023-07-09T23:22 (+7)

Great to see attempts to measure impact in such difficult areas. I'm wondering if there's an attribution problem that looks something like this (I'm not up to date on this discussion):

  1. An organisation like Future Academy or 80,000 Hours says "look, we probably got this person into a career in AI safety, which has a higher impact, and it cost us $x, so our cost-effectiveness is $x per probable career change into AI safety".
  2. The person then does a training program, which says "we trained this person to do good work in AI safety, which allows them to have an impact, and it only cost us $y to run the program, so our cost-effectiveness is $y per impactful career in AI safety".
  3. The person then goes on to work at a research organisation, which says "we spent $z, including salary and overheads, on this researcher, and they produced a crucial-seeming alignment paper, so our cost-effectiveness is $z per crucial-seeming alignment paper".

When you account for this properly, it's clear that each of these estimates overstates the organisation's impact, because part of the impact (and part of the cost) has to be attributed to the other organisations.
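To make the double-counting concrete, here is a toy calculation (all numbers, organisation labels, and attribution shares are invented purely for illustration):

```python
# Toy example: one AI-safety career worth 100 "impact units",
# supported by three organisations in sequence (as in steps 1-3 above).
TRUE_IMPACT = 100  # hypothetical total impact of the career

costs = {
    "careers org": 5_000,        # step 1
    "training program": 20_000,  # step 2
    "research org": 150_000,     # step 3
}

# Naive accounting: every organisation claims the full career.
naive_total = sum(TRUE_IMPACT for _ in costs)
print(naive_total)  # 300 -- one career counted three times

# Discounted accounting: attribution shares must sum to 1, so the
# career is only counted once across all three organisations.
shares = {"careers org": 0.2, "training program": 0.3, "research org": 0.5}
assert abs(sum(shares.values()) - 1.0) < 1e-9

for org, cost in costs.items():
    attributed = TRUE_IMPACT * shares[org]
    print(f"{org}: ${cost / attributed:,.0f} per attributed impact unit")
```

Under a scheme like this, each organisation's cost-effectiveness looks worse than its naive estimate, but the totals no longer count the same career three times.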

A few off the cuff thoughts:

It seems there should be a discounted measure of impact for each organisation that takes into account the costs incurred (and credit deserved) by the other organisations involved.

It certainly could be the case that at each stage the impact is high enough to justify the program at the discounted rate.

This might be a misunderstanding of what you're actually doing, in which case I would be excited to learn that you (and similar organisations) already accounted for this!

I don't mean to pick on any organisation in particular if no one is accounting for this; it's just a thought about how these measures could be improved in general.

SebastianSchmidt @ 2023-07-12T09:27 (+3)

Hi Riley,
Thanks a lot for your comment. I'll mainly speak to our (Impact Academy's) approach to impact evaluation, but I'll also share my impressions of the general landscape.

Our primary metric (*counterfactual* expected career contributions) explicitly attempts to take this into account. To give an example of how we roughly evaluate impact:

Take an imaginary fellow, Alice. Before the intervention, based on our surveys and initial interactions, we expected that she might have an impactful career but was unlikely to pursue a priority path guided by impartial altruism, so we rated her Expected Career Contribution (ECC) at 2. After the program, based on surveys and interactions, we rate her ECC at 10 because we have seen that she is now applying for a full-time junior role in a priority path guided by impartial altruism. We also asked her (and ourselves) to what extent that change was due to Impact Academy and estimated that to be 10%. To get our final Counterfactual Expected Career Contribution (CECC) for Alice, we subtract her initial ECC of 2 from her final ECC of 10 to get 8, then multiply by 0.1 to get the portion of the expected career contribution we believe we are responsible for. The final score is 0.8 CECC. As a formula: (10 (ECC after the program) - 2 (ECC before the program)) × 0.1 (our counterfactual influence) = 0.8 CECC.
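A minimal sketch of this calculation in Python (the function name and structure are ours, purely for illustration):

```python
def cecc(ecc_before: float, ecc_after: float, counterfactual_share: float) -> float:
    """Counterfactual Expected Career Contribution: the change in a fellow's
    expected career contribution, multiplied by the share of that change
    attributed to the program."""
    return (ecc_after - ecc_before) * counterfactual_share

# Alice: ECC 2 before, 10 after, 10% of the change attributed to the program.
print(cecc(ecc_before=2, ecc_after=10, counterfactual_share=0.10))  # 0.8
```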

You can read more here: https://docs.google.com/document/d/1Pb1HeD362xX8UtInJtl7gaKNKYCDsfCybcoAdrWijWM/edit#heading=h.vqlyvfwc0v22

I have the sense that other orgs are quite careful about this too. E.g., 80,000 Hours seems to think that they only caused a relatively modest number of significant career changes, because they discovered that people had often updated significantly for reasons unrelated to 80,000 Hours.