Here's how The Midas Project could use additional funding.
By Tyler Johnston @ 2024-11-17T22:15
Since we got rolling this summer with around $25,000 (and volunteer support!), we've:
- launched a campaign calling for an AI coding startup to conduct dangerous capability evaluations (I don't think this has yet been strong enough, although it did get some response from the company, namely the release of an acceptable use policy)
- cosigned a campaign calling for social media companies to limit the spread of political deepfakes
- co-led a petition against OpenAI concerning the slow abandonment of their original safety and nonprofit mission.
- created a platform for monitoring changes to AI companies' safety policies
- created a platform for digital activism concerning AI safety, with around 60 users.
... and as of this week, we've been approved as a 501(c)(3) nonprofit in the US. This will hopefully unlock a lot more scaling next year. Optimistically, we'd like to raise another $119,000 to get us through mid-2025. Right now, we only really have funding for the executive director's salary (my own) and minimal programs. Extra funding would go, in order of importance, toward:
- Hiring a co-founder/program director
- Hiring a campaigner
- Paid contracting for the website/digital platforms (so far it's all homemade, and I fear it shows a little)
With as little as an additional $30,000, we could hire a second full-time employee, which I think would be the biggest unlock for me (being a solo founder is a bit challenging, and I know CE/YC/etc. report that having a full-time cofounder is critical for success).
So far, our funding has mainly come from individual donors and (soon) SFF. We've received feedback that our work is challenging for some institutional funders to support because it's so adversarial (posing excess reputational and legal risk in particular). So small donations from individuals are particularly important for us.
I don't think we have a strong track record of success on our key goals (actually incentivizing important changes to self-governance and risk evaluation at leading AI companies). I think that's because our reputation and programs aren't yet strong enough to move these billion-dollar companies. But given how new and minimally resourced we are, I feel we've built a strong foundation. Most of the value in contributing is probably speculative, i.e. the impact we could have in 2025 and beyond as we continue to grow. But at a certain point, without signs of clear impact, we'd consider spinning down or pivoting to adjacent issues/strategies.
If you want to discuss anything about our plans (feedback, ideas, questions, whatever), you can send me an email or book a call with me directly. Or, better yet, do so as a comment on this post so everyone can see it. And, if you'd like to contribute, you can donate to us at our website or support us in the donation election.