The Hasty Start of Budapest AI Safety, 6-month update from a non-STEM founder
By gergo @ 2024-01-03T12:56 (+9)
How to read this post
- If you are interested in the specific things our group has done, just continue reading after this section.
- If you are interested in more abstract-level CB strategy, read my sequence called Experiments in Local Community Building.
- If you are interested in the general lessons after the first 6 months, read the General thoughts section as well as 20+ tips, tricks, lessons and thoughts on hosting hackathons.
Background and soft launch
During the Fall 2022 semester there was already enough interest in our local EA community to pilot an AGISF fellowship, so I ended up facilitating[1] the technical AGISF course for ~10 people in 2 groups.
At the end of January 2023, I also organized/hosted a hackathon with Alignment Jam for about 7-12[2] people. My decision to host was very last minute, so organizing it was a bit stressful. My reasoning was that, not having attended a hackathon before, I wanted to get some experience organizing such an event, as well as to reach some people who might later be interested in taking our AGISF course.
See the footnote for a more personal account of my cause prioritization.[3]
2023 Spring Semester
February to May 2023
Courses
We officially launched Budapest AI Safety in February 2023. Inspired by HAIST’s and MAIA’s approach, we used the in-session-style curriculum of BlueDot’s AGISF.
We ran an intensive, split fellowship the week before the Spring semester started. We had 22 applicants, split about evenly between the technical and policy course. The intensive course week was a success, although quite stressful: we had minimal prior experience, and HAIST’s facilitation questions weren’t yet available at the time (their guide is super helpful <3), so our mentors had to spend more time preparing, and the daily sessions certainly made things more rushed. We were also having some internal troubles at the time, so I decided to postpone the second half of the course. We were going to run the usual weekly course shortly after the semester started anyway, so I figured people could just join that once it reached week 5 (see some considerations on this here). This was a tricky decision, as we risked losing the momentum and excitement of the people who took part - but I think it turned out fine in the end. Despite overwhelmingly positive feedback, most people didn’t continue with week 5 of the weekly course once we got there. This seems discouraging; on the other hand, most of those who took the first half seriously are still in our network, i.e. they didn’t just “disappear”.
As hinted above, we ran the usual weekly version of AGISF once the semester started. Unfortunately, this round had only 13 applications, with 10 people starting the course and 7 finishing.
One interpretation of these results is that most people interested in the fellowship during this period did it with the pre-semester group (after all, our marketing efforts for the two periods targeted the same audience, i.e. university students). However, I’m not sure that’s the whole explanation. At the time we had very little capacity to market properly, and I remember that the paid social media ads we usually rely on didn’t “get off the ground” either (meaning that Meta didn’t spend the budget we allocated). I think there might be some diminishing returns on marketing if you run a fellowship twice per semester, but I still think it is likely worth it. Next time I want to try having two fellowships again, but with proper marketing for both rounds.
For those interested, I recommend reading “You can run more than one fellowship per semester”.
Retreat
We ran an AI Safety retreat at the end of February 2023 for ~20 people, and it was great!
Based on the feedback, people found it useful, even though on the organizing side it felt like a bit of a mess, as we didn’t have enough capacity at the time.
A mistake I made was not thinking carefully enough about which mentors/contributors I wanted to invite, and leaving it to the very last minute. In the end, we had three more experienced people join from our local community, who had been upskilling in AIS for 6-12 months but didn’t have direct research experience at the time. Given that most of our attendees were very new to AI Safety (e.g. they had just finished AGISF), these three were more than capable of answering people’s questions. I guess at the time I was operating under the assumption that “this is an AIS retreat, so we need at least one person who has experience in technical research” - which is a reasonable thing to think; however, having someone fly in from outside Hungary would probably not have been worth it, and I’m glad we didn’t have someone do it.[4] Of course, that’s not to say you shouldn’t have more senior people at your retreats (especially if they are available locally), as long as you are aware of the counterfactuals of their time.
Hackathon
We hosted an AI governance hackathon with Alignment Jam. We had something like 45 people sign up, but far fewer showed up. (I have written elsewhere that it is pretty hard to estimate how many people will show up, and what you can try to do about this.)
Overall, we had about 18 unique attendees plus the 5 organizers, with ~14 people committing to the hackathon to a significant extent. One great outcome was that someone who I eventually ended up hiring first learned about AI Safety through the hackathon. We also met someone who connected us with another youth org, which led to further opportunities, such as a small slot at a future-themed conference called Brain Bar.
Newsletter
We launched a monthly newsletter in October 2023. We try not to spend too much time on these; however, I think having at least an MVP is definitely worth it. If you are organizing a new AIS group and don’t have the capacity to write one yourself, feel free to shoot us an email at info@aishungary.com - we can share our newsletter drafts with you each month, and you should be able to adjust them for your group with little effort. See a past issue here to get a sense of the content.
2023 Summer
Courses
We ran a split AGISF again, which had 21 applications, with about 75% on the technical track. 19 people started and 12 finished part 1; of these, about 10 finished part 2.
Socials
During this period I ran quite a few socials (weekly or biweekly, depending on the month) with between 5 and 15 attendees. I also organized a few just for “experienced members”, i.e. people who already know they want to work on AI safety and are quite worried about it. I think there is value in having those events specifically, but overall I don’t think it’s worth putting a huge amount of organizing time into socials, as long as you have an MVP (the exception being intro socials for people who are about to start AGISF, as those are important and we want to make sure they go well!)
EAGxWarsaw
We went to EAGxWarsaw, which was great!
Our community from Hungary brought about 28 people overall, of whom I think ~24 wouldn’t have gone otherwise. Most of these people came through EA Hungary, but due to the overlap between the two communities it’s a bit hard to estimate the exact numbers. My best guess is that, of the ~24 people, ~8-9 came because of the AIS group, although some found us through EA but now prioritize AIS.
General thoughts
Below are some general thoughts, somewhere between strategic considerations and lessons drawn from the past months’ experience.
On hackathons
Hackathons are great! I have written about our experience in a separate post called Tips, tricks, lessons and thoughts on hosting hackathons; for more strategic considerations, read its section on the theory of change (ToC).
On being a non-STEM founder
I originally had a lot of impostor syndrome about not having a STEM background and still wanting to start an AIS group. I think this was mostly unjustified, and I’m glad it didn’t stop me from pivoting towards AIS field building. (That’s not to say that some things wouldn’t have been easier had I had a STEM background.) In the past year the governance side of AIS also became more prominent, and I was fortunate enough to be able to hire some people part-time to lead our courses on both the technical and policy side. Thanks to that, I didn’t have to facilitate AGISF for STEM-y people myself and could focus more on operations and strategy.
On hastiness
To use something of a cliché, my intention was certainly not to “move fast and break things”, yet due to a mixture of perfectionism, timelines, and my general worries about AI Safety, I really wanted to jumpstart and then grow our AIS group quickly. Because of that, I ended up with something like “move fast and have overly ambitious goals that will strain you and your team, likely reducing your group’s long-term output”. Lessons learned!
The rest of this section was mostly copied from the EA Hungary review and slightly adapted for the AIS group. If you have read that already, some parts will sound familiar.
On newsletters
Just to repeat what I wrote above: if you are organizing a new AIS group and don’t have the capacity to write one yourself, feel free to shoot us an email at info@aishungary.com - we can share our newsletter drafts with you each month, and you should be able to adjust them for your group with little effort. See a past issue here to get a sense of the content.
On marketing
We generally found that marketing the AIS course is somewhat easier than marketing a general intro to EA course. We also found that, on average, people who sign up for the AIS course are more academically inclined - similar to what I have read and heard from other organizers.
That said, marketing is still a bottleneck for us, and we want to improve our reach. We currently rely on social media ads, word of mouth, and sharing the program in Facebook groups for university students.
On measuring impact
This report is mainly focused on describing the type of programs we ran, how we ran them, and some strategic considerations behind them. These are all instrumental to what we ultimately want to achieve, which is more talented and thoughtful people contributing to AI Safety. I try to keep track of whether someone in our group is doing something awesome, e.g. we had one person get into SERI MATS and another into AI Safety Camp, as well as a couple of people significantly changing their career trajectory.[5]
On socials
Over the 6 months, we have run quite a few social events, with varying success. Overall, I have updated away from prioritizing socials. I think they are great and should happen, but I’m now hoping to have them run by volunteers or happen more organically, as opposed to our staff putting serious effort into organizing them. We will still organize at least one social per intro course, as well as at least one MVP-type meetup per month, so there is something to turn up to for people who are not currently enrolled in our courses.
Next steps
Our original strategy for the 2023 Fall Semester can be read here. Since then we have somewhat updated these plans; I will share more in our upcoming interim report.
Concluding thoughts
Overall, I’m pretty excited going forward, and I feel that doing CB for AIS allowed me to do a lot of “learning by doing”. I’m also pretty excited about cause-area-specific community building, as at least in our case the courses attracted a lot of cool people, many of whom likely wouldn’t have applied to our EA course. That’s not to say EA CB isn’t important - not to mention that the people who took our AIS course seriously also tend to be pretty sympathetic toward EA principles. Wish us luck going forward!
Thanks to Milán Alexy for reviewing this draft.
Thanks to Julia for giving us advice, as well as Dia, Athraa, Cristina, Eugen, Rain, Peti, and Melinda who have put invaluable work into creating and growing our group.
- ^
While I was the facilitator for these groups, I was also doing the course for the first time, and wouldn't have been comfortable or knowledgeable enough to facilitate for, e.g., university students hearing about AIS for the first time. For these groups, however, the attendees were already in my network, and some of them knew more about AIS than I did. My role was more about bringing people together and getting the ball rolling on AIS in the community.
- ^
The smaller number counts people who stayed for the whole duration; the higher number counts everyone who joined at some point.
- ^
I discovered AI safety through Effective Altruism. Having learnt about EA through Peter Singer’s work on poverty, AI Safety wasn’t at the forefront of my worries for a long time. The more I learned about the topic (as well as existential risk reduction in general), the more important I thought it was, but I still couldn’t see how I would contribute to it. Shifting my focus from EA to AI Safety was a gradual (and still ongoing) process, but a big update for me was attending a CBG retreat in the summer of 2022, after EAG SF, and discussing with other EA community builders how to contribute to AI Safety. This made me realize that AI Safety is something I could contribute to, despite not having a STEM background. I also read Superintelligence by Nick Bostrom sometime afterward, which gave me a big motivational push to learn more and contribute.
- ^
Even though someone from Apart Research was willing to hop on a plane last minute - those guys are awesome! <3
- ^
I realize it might come across as ironic that I don’t say much more about this here, but I feel a bit iffy about writing down something like “x got a job at y, and n people significantly changed their career trajectory”. I do try to keep track of who we support, to what extent, and my best guesses about the counterfactual impact we had on their careers - but I feel this is somewhat sensitive information that involves a lot of guesswork, so I would rather not share it publicly, even with everything anonymized.
SummaryBot @ 2024-01-03T14:46 (+1)
Executive summary: A report on the first 6 months of activities from Budapest AI Safety, a new group focused on building the AI safety community locally through courses, events, and outreach.
Key points:
- Ran intro fellowships, weekly courses, a retreat, and a hackathon to engage people new to AI safety.
- Attracted interested and thoughtful participants, leading some to change careers or contribute more.
- Faced challenges with inconsistent attendance across sequential courses and events.
- Learned lessons about pacing, the value of socials and newsletters, and optimizing roles.
- Planning to continue courses and events in the future while addressing past issues.
- Excited by progress so far and the group's potential for impact despite the founder's non-STEM background.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.