Joe Hardie on Arcadia Impact's projects (FBB #7)
By gergo @ 2025-07-08T13:22 (+17)
Crossposted on The Field Building Blog.
This is a new series within the blog where I interview leaders in the fieldbuilding space. The transcripts are lightly edited to improve readability. Expect more of these to come!
Gergő Gaspar: All right, I’d like to introduce Joe Hardie, the co-director of Arcadia Impact. Thanks so much for taking the time to chat! Arcadia runs a variety of projects, so I thought it would be helpful to start by giving an overview of what those look like and sharing a few words about each. Afterwards, we can dive deeper into some of them.
Overview and Origin Story
Joe Hardie: Yeah, sounds good. Initially, a lot of our work was focused on universities and students in London, which was our starting point when we founded Arcadia. Over time, we’ve broadened our focus to include professionals and people with more experience, as well as putting more emphasis on AI safety, while still supporting other cause areas, like global health and animal welfare. Some of the main programs we’re scaling up and have been focusing on recently are: LASR Labs, which is a technical research program for people interested in switching into AI safety; ASET, the AI Safety Engineering Taskforce, which is a remote-first program where we’ve recruited engineers globally to work on AI safety projects; and the Orion Initiative, which focuses on AI governance talent development. So, there are a few different projects and programs running under that umbrella.
Gergő: I’d love to dig deeper into all of those. First, though, could you tell us a bit about the history of Arcadia Impact and how you got started? Was it originally a volunteer effort that became more professionalised, or something else?
Joe: I was studying at Sussex University in Brighton, UK, and I started an Effective Altruism (EA) group there. I really didn’t know much about EA at the time - I was quite new to it, but I was motivated by the idea of doing good with my career. Through the EA community, I met a few people who were interested in working in community building full-time after graduating. It was also around the time when Open Philanthropy was offering funding for full-time community-building projects. I hadn’t really considered this as a career before, but through my experience at university, I realised it might be a good fit for me. At the beginning, our work was quite informal, but we started as a team and were able to secure initial funding from Open Phil. After a few months, we had formalised and created a real organisation. We came together to focus on community and fieldbuilding at London universities, since there was a big gap, given the number of talented students and the quality of universities in the city. So, we teamed up and started Arcadia, working closely with university EA groups in London to help them run good events and programs. We also helped set up an office space, which was originally started by UCL’s EA group and has now become the LEAH coworking space.
LASR Labs
Gergő: Amazing, thanks for sharing that background! Let’s now dive into your programs. Starting with LASR Labs, which is one of your newer projects. As I understand, it’s a research program focused on reducing the risk of loss of control from advanced AI, where participants work in teams supervised by an experienced researcher to write an academic-style paper. It’s a full-time, in-person program at your office, where participants work alongside other AI safety researchers. Does that sound accurate? Can you also say a bit more about the types of talent profiles you’re looking for?
Joe: Yeah, that’s a pretty good summary. In previous rounds, we’ve had participants with a wide range of experience - some have been software engineers for several years, some have finished PhDs, and some have just finished their undergrad or are still undergrads. So there’s quite a range, but most participants tend to be somewhat more experienced. We’re aiming to get very talented people who can contribute to the field quickly, generally those with a background in AI research.
Gergő: Got it. So, you mentioned that many participants are more experienced and some are recent graduates. One of the themes in fieldbuilding is that we don’t have enough experienced professionals contributing to AI safety, which creates a management bottleneck. For those applying to LASR, how much previous experience or engagement do they have in AI safety? Or is the program designed to help people get up to speed and gain context?
Joe: Typically, people have a pretty good understanding of AI safety and are already fairly motivated by it. That might be because of where we advertise—the circles we reach tend to already be thinking about these issues. Many have done things like the AGI Safety Fundamentals reading group, and some have been engaged in the area for a while. But we are open to people from relevant backgrounds who haven’t thought about AI safety as much; it can be a good program for talented people who could contribute but are relatively new to the field.
Gergő: Cool, and can you say a bit about the application and selection process? For example, how many applications do you get, and what are the steps from applying to joining the program full-time?
Joe: In the most recent round, which we just closed applications for, we got about 500 applications. It’s quite competitive, but I still encourage people to apply! The selection process starts with an initial application where we look at your CV and a few written answers, then there’s a coding test, and finally an interview before we make offers.
Gergő: Does everyone need some coding background or technical skills for LASR Labs? AI safety work can sometimes be more theoretical, but I assume coding is pretty central for this particular program.
Joe: Yeah, pretty much. For LASR, the coding test is a required part.
Gergő: Got it. And what sorts of outputs come from the program in the end?
Joe: In the most recent round, I think only one blog post has been published so far, but we’re still waiting on most of the papers. From the previous summer’s round, several projects got positive feedback—their work was accepted to workshops and conferences, and some have been cited in other safety papers, which is a good sign that they’re genuinely useful.
Impact Research Groups
Gergő: Great, let’s move on to the Impact Research Groups. Here’s a quick summary: it’s a program to support talented and ambitious students in London who want to pursue high-impact research careers. Over eight weeks, participants work in small groups with experienced mentors to explore research questions in one of several streams, and then present their projects to a panel of judges. Does that sound right? What streams do you currently run?
Joe: Yeah, that’s a good summary. It’s a more junior research program - most participants are undergraduates, though we also accept master’s students. It’s generally people who haven’t engaged much with these ideas yet but want to learn more, so we place them into small groups to work on research projects in one of our streams. We currently run streams in animal welfare, global health, biosecurity, AI governance, and technical AI safety. Students work with a mentor, who helps design the project and guides the research. At the start, mentors pitch project ideas, and participants express their preferences for which ones they’d like to join.
Gergő: Is this program part-time or full-time over the summer?
Joe: It’s part-time - typically a few hours per week during term time. We are running a summer cohort too, but it will also be part-time.
Gergő: That’s great that it’s open to students, especially those new to these ideas. This seems like a bigger investment compared to something like an EA fellowship or reading group, since you secure mentors and run research projects. How do you think about investing in people who are new, given there’s a higher risk they don’t engage deeply, but also a higher counterfactual impact if they do?
Joe: Yeah, this is definitely a topic that other community builders have been thinking about - how research programs like this compare to more traditional reading groups. I think we’re still figuring out what works best. London universities also run the Arete [EA Intro] Fellowship, which we help with - that's more of a standard introductory program. One thing we’ve noticed is that students are really excited about actually doing research, and the research groups have been really popular. So it does seem like this is an effective way of reaching people who are interested in having an impact through research.
Gergő: Yeah, that makes sense. I think having a tangible project and clear output helps a lot—it’s easier to pitch than something like, "Read a bunch of philosophy, and hopefully something clicks." This way, students really know what they’ll be working on.
In terms of the selection process, is it difficult to select who gets mentors and spots, given the large interest?
Joe: Yeah, it does end up being fairly selective. But I’d say if you put a good amount of effort into your application, it isn’t that hard to get on. We are mainly looking for people who have thought about these ideas and have a basic understanding of the relevant streams. We do get applications from people who don’t seem to understand the program or focus areas well, so if you show genuine interest and effort, your chances are good.
Gergő: How large are the cohorts?
Joe: Last time we had around 40 people.
Gergő: Great. I was also curious about the written application questions. How do you go about evaluating those, especially since answers can be very lengthy? Do you use any large language models (LLMs) to help, or is it just manual review?
Joe: Right now, it’s still pretty much all manual. With LASR, we brought on some contractors to help go through first-stage applications, which made it more manageable, but overall, it’s still quite time-consuming.
Gergő: That makes sense. By the way, I’ve seen discussions recently in the AI safety community about building an LLM that’s specifically fine-tuned for evaluating applications—something that could help community builders. It seems like a lot of people are struggling with the same challenge.
Could you share a bit about what happens to students after they finish the research group or fellowship program?
Joe: Sure. Most students are still studying, so they usually don’t go straight into jobs afterwards. But we do have some graduates who got jobs working directly in the cause area they focused on, which is great to see. Others have gotten involved in group organising at their universities. And since we have a variety of other programs, some IRG participants have gone on to do other things with us, or we’ve connected them to opportunities we know about. Over time, we're hoping to build more of a pipeline—IRG is more on the introductory side, but we have more advanced programs that people can move into.
Gergő: Could you give one or two examples of projects students have worked on in these streams? What kind of research questions did they tackle?
Joe: Yeah, sure! Some of our winning projects are posted on our website, but for example, in one of our previous programs, a project in the AI governance stream looked at data use in AI and user data sharing. Another one focused on the governance of frontier AGI labs. There was a project on alternative protein products in China and Southeast Asia. More recently, we had a biosecurity project that explored far-UVC. So there's quite a range.
AI Safety Engineering Taskforce
Gergő: Thanks so much for sharing. Let’s move on to the AI Safety Engineering Taskforce, or ASET. This program connects experienced tech professionals with important AI safety projects, manages teams of skilled engineers and scientists, and helps them transition into full-time AI safety careers by providing entry-level opportunities. To start, how does ASET differ from LASR Labs?
Joe: Good question. ASET specifically focuses on evals and engineering talent, whereas LASR Labs is more research-oriented and geared toward producing academic papers. ASET is about working on practical projects in AI safety, and the outputs are less academic and more engineering-based. There’s definitely some overlap, and we have had people who have participated in both programs.
Gergő: Just to clarify, ASET is meant to be an entry-level opportunity but targets people with significant technical experience. So does that mean someone without much AI safety background, but with strong engineering skills, can still join?
Joe: Yes, that’s right. We’re still figuring out our long-term strategy, but so far, we’ve had quite a few experienced software engineers who haven’t actually worked in AI safety or evals before. So, it’s kind of “entry-level” in the sense of entering AI safety, even if they’re quite senior technically.
Gergő: So you probably advertise this more broadly than your other programs?
Joe: To be honest, we haven’t done a ton of advertising so far—a lot of the recruitment has been through our existing networks.
Gergő: Got it. So, in terms of applications and selection, how many people tend to apply, and how do you run that process?
Joe: For our first run, the application process was pretty closed—we didn’t get a huge number of applicants. It was mostly people we already knew and invited to apply, relying a lot on our network of engineers. We’re looking to open it up more and share more about the program in the coming months.
Gergő: I heard you have a project collaborating with the UK government. Is that through this program?
Joe: Yes, we collaborate with the AI Security Institute in the UK through ASET. Our first project was contributing to Inspect, which is their open-source evals framework, and we’ve continued collaborating with them on that.
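For readers who haven’t come across Inspect: it’s AISI’s open-source Python framework (the inspect_ai package) for building and running LLM evaluations. As a rough, illustrative sketch only - not part of ASET’s actual work, and based on the publicly documented API, whose argument names may differ between package versions - a minimal evaluation looks something like this:

```python
# Illustrative only: a minimal Inspect evaluation based on the public
# inspect_ai documentation. Exact argument names (e.g. solver=) and
# scorer helpers may differ between package versions.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate


@task
def arithmetic_check():
    # Each Sample pairs a prompt with the expected answer.
    return Task(
        dataset=[
            Sample(input="What is 2 + 2? Answer with just the number.", target="4"),
            Sample(input="What is 7 * 6? Answer with just the number.", target="42"),
        ],
        solver=generate(),  # query the model with no extra scaffolding
        scorer=match(),     # mark correct if the target string matches the output
    )
```

This can then be run with the Inspect CLI (something like `inspect eval arithmetic_check.py --model openai/gpt-4o`), which produces per-sample transcripts and scores.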
Orion AI Governance Initiative
Gergő: That’s awesome. And then, about the Orion AI Governance Initiative—it’s a talent development program with components like the AI Policy Accelerator, a mentorship program, and internships. Are these separate streams, or do people do all of them? Could you say a bit more about each?
Joe: We originally had one application that let people access all three components, though not everyone did all of them. The AI Policy Accelerator includes a one-week “AI for Policymakers” course we developed this year, followed by a two-week project phase. Most participants did both, but some just did the course or just the project phase. The mentorship program paired students with AI governance professionals; most who did the policy accelerator also did mentorship, but not all. The mentorship program is still new, so we don’t have a ton of output yet, but it’s intended to give career advice and guidance in governance. For the internship, we’re running a pilot this summer—placing two interns with think tanks to work on AI governance research.
Gergő: That sounds amazing! On the policy accelerator, can you say more about what the course covers? It’s one week—is it pretty intensive?
Joe: Yeah, it’s an intensive, in-person, full-time week. It’s aimed at students with a policy background, but the course is quite technical—covering how AI works (including some of the math), which we think is a gap in policy training. It also talks about supply chains, AI hardware, and different governance approaches.
Gergő: So it’s designed to give students with policy backgrounds a good technical understanding of AI for future roles.
Joe: Exactly. There are definitely improvements we want to make, but that’s the idea.
Gergő: And are the other research projects also full-time?
Joe: Yes, everything is in-person and full-time for that week.
Safe AI London
Gergő: And then, moving on—SAIL supports people in London interested in reducing advanced AI risks. It aims to raise awareness of these risks, especially at universities, and provides resources to help students and professionals contribute. Does this program support university EA groups, AI safety groups, or both? Just a quick overview would be great.
Joe: Yeah. So, we originally started SAIL a few years ago and were running in-person reading groups, including some technical and governance versions of the BlueDot course. Over the last year or so, we haven’t done as much with SAIL as we would have liked, mostly because we’ve been focusing on the other projects we talked about earlier. Right now, SAIL mainly exists as a website and newsletter, but we’re thinking about what to do with it going forward. We’re keen to support students at London universities who are interested in AI safety - whether that’s AI safety societies or just smaller events students want to run. In general, there’s quite a lot of interest from people who discover the SAIL website, so we’re actively thinking about how we can coordinate, support, and point people in the right direction if they want to learn more about AI safety in London.
Gergő: Nice. On the university side, is there much collaboration or overlap between your work and things like FSP or OSP, those mentorship programs that support emerging university group organisers?
Joe: Yeah, there probably is some overlap with what OSP and FSP are doing. I think our approach, working with EA groups and other student groups in London, is generally a bit more hands-on than OSP, which is more focused on mentorship. We work quite closely with the universities and often meet a lot of the students directly. A lot of our programs are actually designed for those students, so we advertise our programs and work with students directly to support their applications and involvement.
LEAH Office
Gergő: That’s really cool—it’s good to have an umbrella organisation providing hands-on support and opportunities for students getting involved through new groups. Last but not least, I wanted to ask about the LEAH coworking office in central London, which supports people working on projects with positive impact. From what I gather, it aims to build community by providing a productive workspace and supporting collaborations, and it also helps host other events you organise.
Joe: Yeah, that’s a good summary. We use LEAH as the main office for our team, which has grown a lot over the last few years. We also often use it to run events or base programs, so it’s crucial to have a dedicated space for those activities. Day-to-day, it serves as a coworking space for people working on impactful projects. There are a lot of people in London who work remotely or independently, sometimes as a small team or even alone, so it doesn’t make sense for them to rent a whole office. We try to provide a space for people like that. We have a range of users, and actually more than half of them are the only person from their organisation here. Our impact surveys show people report significant productivity improvements, and having everyone in one place really helps foster interactions and collaborations.
Gergő: Yeah, we’re recording this interview here, and as a user, I can confirm it’s a great space! What’s the distribution of people working here in terms of cause areas?
Joe: It’s quite a mix. There are quite a few people doing meta EA or field-building work, and some of our users work for Arcadia, so there’s that focus too. Otherwise, we have people working on biosecurity, AI safety, animal welfare, and a few in global health. Overall, it’s quite diverse.
Gergő: I love that it’s kind of a meta office where everyone doing EA-aligned work can come and work. I wanted to ask about your thoughts on the LEAH office, compared to LISA. How do you think about the overlap and difference there?
Joe: There is definitely an overlap. The main difference is that LISA hosts a lot of programs—like ARENA, LASR, and Pivotal—so many of the desks are taken by people participating in or running those programs. We usually don’t have as much capacity for large programs like that. LEAH is smaller, and we try to keep it lower-cost than LISA. But there’s a lot of overlap in terms of user base and focus.
Gergő: Yeah, I think if I had to summarize: LISA is larger and runs more programs, while LEAH is smaller and has a warmer, more personal vibe. Personally, I feel LEAH is a bit more EA, more like the classic hummus-bread energy we love! Also, I’m amazed at how you manage to run all these projects and still have time to put away the dishes in the evenings—it’s super humble and impressive.
Arcadia's target audiences
Gergő: One other thing I wanted to ask about: your focus on students versus more experienced professionals. It sounds like maybe you’ve shifted a bit from students toward more professionals. How does that balance look now?
Joe: Yeah, we’re definitely more focused on experienced professionals than before. A few years ago, AI safety was a much smaller and somewhat niche area—not much media attention, not that many people thinking about it. But with things like ChatGPT, AI has become much more visible and obviously capable, so there’s more interest from experienced professionals wanting to contribute. That shift has been reflected in our work: we still focus a lot on students, but we’ve expanded to include professionals rather than moving away from students entirely. There are still a lot of interested and talented students, and we definitely want to keep supporting them. We’re always thinking about how best to balance our work for different experience levels, and we’re looking at which talent bottlenecks we should focus on next.
Fundraising
Gergő: Great. I would also love to talk about fundraising, if we can. People are often curious about this, but it can be a bit of an opaque area for outsiders. You mentioned you’re mainly funded by Open Philanthropy. Could you share more about what that process is like, some wins and challenges, and any lessons you’ve learned?
Joe: Yeah, so Open Philanthropy is definitely the main funder in this space, especially for organizations trying to scale fieldbuilding projects. I’d recommend looking at their Catastrophic Risk Capacity Building team if you’re in this area. Our experience has been very positive—they’re a solid grantmaker, and we’ve built a good relationship with them over the years, which definitely helps create trust. The main downside is the risk of relying too much on one funder: if their priorities change, or for some reason they pull out, that’s a big risk. Hopefully, that won’t happen soon, but we are interested in diversifying our funding, though it can be tough, especially in the current landscape.
Gergő: Yeah, there is a lot of discussion on this these days. Most orgs would love to diversify, but until there’s, say, a new major philanthropist in the space, it’s going to be really hard. Is there anything else to share on fundraising? What have you learned that makes for an effective pitch?
Joe: Yeah, a few things stand out. When you’re doing lots of work, there’s a temptation to write a really detailed report explaining absolutely everything, especially since you want to account for how all the funding was used. Some detail is good, but it’s often better to focus on the big wins—the most impressive, most impactful things you’ve accomplished—and really highlight the outcomes you’re proudest of. It’s also best to make reports easy to navigate—grantmakers are short on time. Use clear summaries and direct them to more detail if they’re interested: "If you want to read more, go to this section." Transparency also helps. Open Phil has a helpful post called “Reasoning Transparency,” which covers how to approach reporting clearly and openly. That’s been quite useful, especially for writing applications.
Gergő: Yeah, definitely. One thing I’d suggest to people is to have the main wins on the front page of your report—just the key things you’ve accomplished. Then you can include expandable sections or links if people want to read more details. But that front-page summary seems really important.
On the application side, you mentioned the importance of brevity and respecting grantmakers’ time. How do you structure your applications? Is there a general sense of length or structure that works particularly well?
Joe: It’s pretty similar: start with a summary, then break it down by our different projects. Usually, we’d start with some operations or team updates—so a bit about core organizational matters—and then give quick summaries of what we want to do in each program. After that, we go into the strategy: why we think our work will be impactful. Typically, we’d have a main Arcadia report linking to more detailed reports for the individual projects, so readers can go deeper if they want.
Gergő: When you say “report,” do you mean future plans, or reporting on what you’ve done in the past?
Joe: Both—we usually submit a funding application outlining our plans and then a report on what we did over the last year or so.
Gergő: Are you able to share if you’ve had any proposals rejected, or things that didn’t go through?
Joe: We’ve definitely had some specific things funded for less than we asked for, but we haven’t had anything outright rejected. There are usually some areas they’re less keen on than others, which could be if they aren’t sure about the value or how it fits into their priorities.
Gergő: If you’re able to share, I’d be curious about what Arcadia’s yearly budget looks like. Do you fundraise as one big organisation, or do you apply separately for different projects? And could you also give a quick summary of your team—number of FTEs and what everyone does?
Joe: Sure. Our core budget this year was about $1.6 million. We have a few other projects we worked on beyond that, but that figure covers most of our core funding and support. Typically, we fundraise for all projects together, though we’re also planning to apply for additional grants to scale specific things.
As for the team, we currently have seven full-time staff. There’s Erin and I, who are co-directors: Erin is mostly working on LASR Labs, and I’m focused on operations. Then most of our staff are leading work on one of our programs: Justin is mostly working on ASET, Alicia works on student group support and runs IRG, Belle is working on Orion, Brandon focuses on LASR Labs, and Ben is on the AI Governance Taskforce.
Gergő: Thanks for sharing. And, thanks so much for your time! Please keep up the good work!
tzukitchan @ 2025-07-08T13:43 (+3)
[meta] it felt fine listening to this post with the baked-in audio feature on eaforum, as it was intended? (question mark)
[praise] thank you for asking specific questions others would be curious to hear about as well. excited for this series Gergo!
gergo @ 2025-07-08T14:52 (+2)
Tbh I wasn't thinking too much about the built-in audio reader, but I'm glad the text worked well with it!!
SummaryBot @ 2025-07-08T21:51 (+1)
Executive summary: This interview with Joe Hardie offers an overview of Arcadia Impact’s evolution from a student-focused EA initiative into a multi-program organization supporting AI safety and related causes through research labs, talent development programs, and policy engagement, with a growing emphasis on experienced professionals alongside continued support for students.
Key points:
- Arcadia Impact began as a student-focused community building initiative and has since expanded to support professionals and operate multiple AI safety-focused programs, including LASR Labs (technical research), ASET (engineering-focused safety projects), and Orion (governance talent development).
- LASR Labs and ASET target technically skilled individuals at different stages, with LASR focused on producing academic research and ASET aimed at transitioning experienced engineers into AI safety work, particularly in evaluations and practical engineering.
- Impact Research Groups and Safe AI London (SAIL) serve as student-facing entry points, providing part-time research experience and community resources, often in collaboration with EA university groups in London.
- The LEAH coworking space supports cross-cause collaboration among EA-aligned professionals, serving as Arcadia’s operational base and fostering community among independent workers and small teams.
- Arcadia is primarily funded by Open Philanthropy, and while this support has enabled growth, the organization is exploring diversification due to the risks of reliance on a single funder, and emphasizes clarity and impact-focused storytelling in its reporting.
- There has been a strategic shift toward engaging experienced professionals in AI safety, prompted by increasing mainstream interest in AI risks, though Arcadia continues to invest in student programs and aims to build a pipeline from entry to advanced involvement.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.