Database of orgs relevant to longtermist/x-risk work
By MichaelA🔸 @ 2021-11-19T08:50 (+104)
Here’s a version of the database that you can filter and sort however you wish, and here’s a version you can add comments to.
Update: I've been slow to properly update the database, but am collecting additional orgs in this thread for now.
Key points
- I’m addicted to creating collections and have struck once more.
- The titular database includes >130 organizations that are relevant to people working on longtermism- or existential-risk-related issues, along with info on:
  - The extent to which they’re focused on longtermism/x-risks
  - How involved in the EA community they are
  - Whether they’re still active
  - Whether they aim to make/influence funding, policy, and/or career decisions
  - Whether they produce research
  - What causes/topics they focus on
  - What countries they’re based in
  - How much money they influence per year and how many employees they have[1]
- I aimed for (but likely missed) comprehensive coverage of orgs that are substantially focused on longtermist/x-risk-related issues and are part of the EA community.
- I also included various orgs that are relevant despite being less focused on longtermism/x-risks and/or not being part of the EA community. But one could in theory include at least hundreds of such orgs, whereas I just included a pretty arbitrary subset of the ones I happen to know of.
- I made this relatively quickly, based it partly on memory & guesswork, and see it as a minimum viable product that can be improved on over time. So please:
  - If you spot any errors, or if you know any relevant info I failed to mention about these orgs, let me know via an EA Forum message or by following this link and then commenting there
  - Fill in this quick form if you know of other orgs worth mentioning
  - Let me know if you have questions about how best to use the database or how to interpret parts of it. (I expect many things will turn out to be confusing/unclear, and I’m relying on people to ask questions.)
Here’s a snippet of what the database looks like (from the "view" focused on "Funders/funding-influencers"): [screenshot not reproduced here]
I made this database and wrote this post in a personal capacity, not as a representative of my employers.
How, why, and when to use the database
(This is all how I use the database myself.)
You can filter, sort, and search the database based on the causes/topics and types of work (e.g., grantmaking vs policy advising vs research) you’re interested in.
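(If you’d rather pull the data into your own tools, something like the following sketch should work against Airtable’s standard REST API. To be clear, this is a hypothetical illustration rather than anything official: the token, base ID, table name, field names, and filter formula are all placeholder guesses you’d need to replace with the real ones from the shared base.)

```python
# Minimal sketch of querying the database via Airtable's REST API.
# All identifiers below (token, base ID, table and field names) are placeholders.
import requests

API_TOKEN = "your-airtable-token"  # hypothetical personal access token
BASE_ID = "appXXXXXXXXXXXXXX"      # hypothetical base ID
TABLE = "Organizations"            # hypothetical table name

def fetch_orgs(formula: str) -> list[dict]:
    """Fetch all records matching an Airtable filterByFormula expression."""
    url = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}"
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    records, offset = [], None
    while True:
        params = {"filterByFormula": formula}
        if offset:
            params["offset"] = offset  # Airtable paginates results
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        data = resp.json()
        records.extend(data["records"])
        offset = data.get("offset")
        if offset is None:
            return records

# e.g., still-active orgs whose causes/topics mention nuclear risk
# (the field names and values here are guesses):
for record in fetch_orgs("AND({Still active?} = 'Yes', FIND('Nuclear', {Causes/topics}))"):
    print(record["fields"].get("Name"))
```

(For most people the pre-set views linked above will be easier; this is only for anyone who wants the data in their own scripts or spreadsheets.)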
You can use the database to:
- Generally learn about the landscape of actors in a given area
- Get ideas about what orgs could “provide inputs to you” (funding, advice, feedback, connections)
- Get ideas about what orgs could act as “nodes on your path to impact”, e.g., whose actions could be improved by a research project you’re considering doing, or who could translate and transmit your findings to key decision-makers
This could be useful in situations such as when you’re:
- Getting oriented to a new area
- Trying to build career capital in an area
- Generating project ideas, generating theories of change for those project ideas, and prioritising among them
- Conducting a project
- Helping someone else do any of the above things
(For elaboration on points 3 and 4 in the context of research projects specifically, see here, especially Slides 14-15. Those points are more relevant the more you aim to operate like a consultancy or think tank.)
These benefits could occur via:
- The database making you aware of orgs you didn’t know about
- The database making you aware of info you lacked on some orgs, or
- The database “jogging your memory”
  - I find it’s easier to notice that an org is worth mentioning to someone I’m advising, or worth considering when making a project plan, if I’m scanning a filtered list of maybe-relevant orgs than if I’m just doing free recall
Why I made this
Answer 1: As noted, I’m addicted to creating collections.
Answer 2: 18 months ago, I suggested that EAs should post more summaries and collections. I still think that, and people often seem to like it when I do.
Answer 3: 12 months ago, I made a smaller version of this database in hopes that it’d benefit the work of Rethink Priorities’ longtermism team (which I’m a part of) in the ways outlined in the previous section. I feel like it has indeed been useful (though mostly just through guiding my own work and my suggestions to other people; I think other people rarely use it directly). And I’ve also ended up fairly often using the database when giving people career or project advice (e.g., to remind myself what orgs I should suggest a person talk to or check out the work of if they’re interested in nuclear risk or forecasting), or sharing snippets of it with people. So I figured I should make a publicly accessible version.
Caveats
Mainly just what I said earlier, but I’ll say it again in bold for good measure:
- I aimed for (but likely missed) comprehensive coverage of orgs that are substantially focused on longtermist/x-risk-related issues and are part of the EA community
- I also included various orgs that are relevant despite being less focused on longtermism/x-risks and/or not being part of the EA community. But one could in theory include at least hundreds of such orgs, whereas I just included a pretty arbitrary subset of the ones I happen to know of.
- I created this fairly quickly and based it partly on memory & guesswork
Other caveats:
- A high level of focus on longtermism/x-risks and a high level of involvement in EA are of course neither necessary nor sufficient for an org to be impactful, “good”, wise, etc.
- Obviously I had to make many debatable judgement calls when filling the database in
- These orgs vary massively in their significance and in their relevance to longtermism/x-risks
Possible next steps
- More orgs could be added (using this form)
  - This could include drawing on GCRI's 2013 Organization Directory, organizations that have been included on 80k’s job board, and organizations featured in other relevant collections
- Info could be added and corrected (people can leave comments in the Airtable and then I’ll make the appropriate edits)
- Perhaps some other way to structure/display the info would be good?
- Perhaps this should be somehow integrated with other things, like 80k’s job board or my list of EA funding opportunities?
- People could duplicate and then adapt this database in order to make:
  - A version that’s relevant to all EA cause areas
  - A version that’s relevant to a particular other large EA cause area (e.g., animal welfare)
  - A version that “zooms in on” some specific longtermist/x-risk-related area, adding more orgs, individuals, and info relevant to that area and cutting out other things
See also
If this database seems useful to you, you may also be interested in one or more of the following:
- A Database of EA Organizations & Initiatives (contains things relevant to cause areas other than longtermism/x-risks, but otherwise is less comprehensive/detailed)
- List of EA-related organisations (probably superseded by the above, more recent database)
- Job board - 80,000 Hours
- List of EA funding opportunities
- The EA Forum Wiki
- A central directory for open research questions
- Suggestion: EAs should post more summaries and collections
Acknowledgements
I drew on Pablo Stafforini’s and Jamie Gittins’ lists of EA-related orgs. An earlier version of the database benefitted from comments by Janique Behman, David Rhys Bernard, Juan Gil, and perhaps other people who I’m forgetting. The current version of the database and/or this post benefitted from comments from Will Aldred, Aaron Gertler, Jaime Sevilla, Ben Snodin, Pablo Stafforini, and Max Stauffer.
[1] ...well, I haven’t actually entered that info, but I’ve made fields for it in hopes of crowdsourcing it from you.
Yonatan Cale @ 2021-11-19T13:04 (+10)
Two lists I'm considering making:
- Software developers who are interested in doing paid EA work (According to 80,000 Hours, it seems to be hard to hire software developers for EA orgs even though lots of software developers seem to exist in our community. Seems confusing. This would be a cheap first try at solving it.)
- Pain points that could potentially be solved by software - from EA orgs (see #6 here. The post is about looking for places to invest in software. I think the correct way to approach this would be to start from actual needs. But there's no place for orgs to surface such needs beyond posting a job.)
Any thoughts?
Ozzie Gooen @ 2021-11-21T16:21 (+5)
I'll note:
- When you say "paid", do you mean full-time? I've found that "part-time" people often drop off very quickly. Full-time people would be the domain of 80,000 Hours, so I'd suggest working with them on this.
- "no place for orgs to surface such needs beyond posting a job" -> This is complicated. I think that software consultancy models could be neat, and of course, full-time software engineering jobs do happen. Both are a lot of work. I'm much less excited about volunteer-type arrangements, outside of being used to effectively help filter candidates for later hiring.
I think that a lot of people just really can't understand or predict what would be useful without working in an EA org or in an EA group/hub. It took me a while! The obvious advice, for people who want to really kickstart things, is to first try to work in or right next to an EA org for a year or so; then you'll have a much better sense.
Yonatan Cale @ 2021-11-22T19:53 (+1)
- Developers who'd like to do EA work: Not only full time
- I'm talking about discovering needs here. I'm not talking at all about how the needs would be solved
Working at an EA org to discover needs: This seems much slower than asking people who work there, no? (I am not trying to guess the needs myself)
Ozzie Gooen @ 2021-11-23T09:41 (+2)
Working at an EA org to discover needs: This seems much slower than asking people who work there, no? (I am not trying to guess the needs myself)
It really depends on how sophisticated the work is and how tied it is to existing systems.
For example, if you wanted to build tooling that would be useful to Google, it would probably be easier just to start a job at Google, where you can see everything and get used to the codebases, than to try to become a consultant for Google, where you'd be limited to very narrow tasks that don't require you to be part of their confidential workflows and similar.
Yonatan Cale @ 2021-11-23T13:13 (+1)
I agree I won't get everything
Still, I don't think Google is a good example. It's full of developers, who have a culture of automating things and even get free time every week to do side projects. That's really extreme.
A better example would be some organization that has 0 developers. If you ask someone in such an organization if there's anything they want to automate, or some repetitive task they're doing a lot, or an idea for an app (which is probably terrible but will indicate an underlying need) - things come up
Yonatan Cale @ 2021-11-23T13:16 (+3)
But also, I tried, and I think 0 such needs surfaced
That's what the experimental method is for, so that we don't have to resolve things just by arguing
Guy Raveh @ 2021-11-22T16:29 (+1)
Just throwing a thought: if many EA orgs have software needs and are struggling to employ people who'll solve them; and on the other hand, part-time employees or volunteer directories don't help that much - would it make sense to start a SaaS org aimed at helping EA orgs?
Ozzie Gooen @ 2021-11-22T18:46 (+2)
I could see a space for software consultancies that work with EA orgs, that basically help build and maintain software for them.
I'm not sure what you mean by SaaS in this case. If you only have 2-10 clients, it's sort of weird to have a standard SaaS business model. I was imagining more of the regular consultancy payment structure.
Yonatan Cale @ 2021-11-22T19:59 (+1)
EA Software Consultancy: In case you don't know these posts:
In part 1, I argue that tech work at EA orgs has three predictable problems:
- It’s bad for developing technical skills
- It's inefficiently allocated
- It’s difficult to assess hires
In this part I argue that each problem could be mitigated or even fixed by consolidating the workers into a single agency. I focus here on the benefits common to any form of agency.
This post explicitly compares the low-bono option with various others on two axes: entity type (i.e., individual or agency) and funding model.
Ozzie Gooen @ 2021-11-23T09:44 (+2)
Yea, I was briefly familiar.
I think it's still tough, and agree with Ben's comment here.
https://forum.effectivealtruism.org/posts/kQ2kwpSkTwekyypKu/part-1-ea-tech-work-is-inefficiently-allocated-and-bad-for?commentId=ypo3SzDMPGkhF3GfP
But I think consultancy engineers could be a fit for maybe ~20-40% of EA software talent.
MichaelA @ 2021-11-19T13:08 (+2)
Both sound to me probably at least somewhat useful! I'm ~agnostic on how likely they are to be very useful, how they compare to other things you could spend your time on, or how best to do them, which is mostly because I haven't thought much about software development.
I expect some other people in the community (e.g., Ozzie Gooen, Nuno Sempere, JP Addison) would have more thoughts on that. But it might make sense to just spend like 0.5-4 hours on MVPs before asking anyone else, if you already have a clear enough idea in your head.
I can also imagine that a Slack workspace, or a channel in an existing workspace, for people in EA who are doing or interested in software development could perhaps be useful.
(Sidenote: You may also be interested in posts tagged software engineering and/or looking into their authors/commenters.)
Jan-WillemvanPutten @ 2021-11-19T09:25 (+6)
Great work, Michael! I've already included this Airtable in the curriculum of Training For Good's upcoming impactful policy careers workshop. Well done; this work is of high value!
MichaelA @ 2021-11-19T10:03 (+3)
Glad to hear that you think this'll be helpful!
(Btw, your comment also made me realise I should add Training For Good to the database, so I've now done so.)
MichaelA @ 2021-12-23T14:41 (+4)
Also note that there are EA Forum Wiki entries for many of the orgs in this database, which will in some cases be worth checking out either for the text of the entry itself, for the links in the Bibliography section, or for the tagged posts.
BrianTan @ 2021-11-20T11:19 (+4)
Cool that you made this, and that you even made a Softr page! Although I think the Softr page is worse than just sharing a public grid view of the Airtable.
I realize it would be cool to have a similar database for all EA-related organisations. Jamie Gittins made one on Notion and has a Forum post here listing EA orgs, but neither is easily filterable. Such a database could have similar attributes to your Airtable. I saw that Taymon also has a Google Sheet, but it would be nice to have this on an Airtable with more attributes, to make it more easily filterable and more colorful.
MichaelA @ 2021-11-20T11:48 (+4)
Can you share a public grid view of the Airtable in a way that allows people to filter and/or sort however they want but then doesn't make that the filtering/sorting that everyone else sees? I wasn't aware of how to do that, which is the sole reason I added the Softr option. I think the set of Airtable views I also link people to is probably indeed better if people are happy with the views (i.e., combos of filters and orders) that I've already set up.
Agreed that an all-of-EA version of this would also be useful, and that Airtable would be better for that than Notion, a Forum post, or a Google Sheet. I also expect it's something that literally anyone reading this could set up in less than a day, by:
- duplicating my database
- manually adding things from Gittins' and Taymon's databases (one way to spot what's missing is sketched after this list)
- maybe removing anything that was in mine that might be out of scope for them (e.g., if they want to limit the scope to just orgs that are in or "aware of & friendly to" EA, since a database of all orgs that are merely quite relevant to any EA cause area may be too large a scope)
- looking up how to do Airtable stuff whenever stuck (I found the basics fairly easy, more so than expected)
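(If someone wanted a programmatic head start on the merging step mentioned above, here's a minimal hypothetical sketch: it compares two CSV exports by normalized org name and prints the orgs missing from the first. All file and column names are made-up placeholders.)

```python
# Hypothetical sketch: list orgs that appear in another database's CSV export
# but not in this one's, matching on a normalized org name.
import csv

def load_names(path: str, name_column: str) -> set[str]:
    """Return the set of normalized org names in one CSV export."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[name_column].strip().lower()
                for row in csv.DictReader(f)
                if row[name_column].strip()}

existing = load_names("longtermist_orgs.csv", "Name")       # this database's export
candidates = load_names("all_ea_orgs.csv", "Organisation")  # e.g., Taymon's sheet

# Orgs worth manually reviewing and then adding via the form:
for name in sorted(candidates - existing):
    print(name)
```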
BrianTan @ 2021-11-20T13:13 (+12)
You can share this link instead, which is better than the Softr view, and this means people don't need to get comment access to be able to view the Airtable grid. It also prevents people from being able to see each other's emails if they check the base collaborators. To find that link, I just pressed "Share" at the top right of the base, and scrolled down to the bottom of that modal/pop-up to find the link.
MichaelA @ 2021-11-20T14:19 (+4)
Ah, nice, thanks for that! It seems that that indeed allows for changing both "Filtered by" and "Sorted by", including from each of my pre-set views, without that changing things for other people, so that's perfect!
I still want to provide the comment-access version as well, so people can more easily make suggestions on specific entries. But I'll edit my post to swap the Softr link for the link you suggested and to make the comment-access link less prominent.
BrianTan @ 2021-11-20T14:34 (+2)
No problem!
EricHerboso @ 2022-03-01T20:06 (+3)
I just wanted to leave a note saying that I found this database useful in my work.
MichaelA @ 2022-07-22T07:21 (+2)
I suggested as one possible next step "People could duplicate and then adapt this database in order to make [a] version that’s relevant to all EA cause areas"
I think such a database has now been made! (Though I'm not sure if that was done by duplicating & adapting my one.) Specifically, Michel Justen has made A Database of EA Organizations & Initiatives. I imagine this'd be useful to some people who find their way to this post.*
Here's the summary section of their post, for convenience:
"I’ve created a new database of EA organizations and initiatives that I host on the recently revamped EA Opportunities page. Here’s the raw Airtable.
- I think this is the most comprehensive collection of organizations in or closely involved with EA to date. It features orgs explicitly within or adjacent to EA, as well as a non-comprehensive list of other orgs working on global catastrophic risks, even if they have little involvement with EA. As of writing this, there are 276 organizations in this database. Of these, 130 are labeled as “Part of EA community” and the rest are labeled as either “aware of and friendly to EA” or uninvolved.
- I still recommend this database as the most valuable database of organizations doing longtermist/x-risk work given its more comprehensive indicators for how orgs are aiming to reduce x-risk.
- If you see any mistakes in this database, please let us know. You can also submit new organizations."
*I guess I should flag that I haven't looked closely at Michel's post or database, so can't personally vouch for its accuracy, comprehensiveness, etc.
MichaelA @ 2022-04-10T10:32 (+2)
Some orgs that should maybe be added (I'd be keen for someone to fill in the form to add them, including relevant info on them):
- Aligned AI
- Conjecture
- ML Progress research group
- Cohere?
  - See https://forum.effectivealtruism.org/posts/DDDyTvuZxoKStm92M/ai-safety-needs-great-engineers , but see also my comment there
- Czech Priorities
- Sage
  - (Not sure if there are public writings on them yet)
- Arb
- Samotsvety
MichaelA @ 2023-03-06T10:01 (+4)
We’re a team of researchers investigating and forecasting the development of advanced AI.
MichaelA @ 2023-02-12T10:39 (+4)
Is Britain prepared for the challenges ahead?
We face significant risks, from climate change to pandemics, to digital transformation and geopolitical tensions. We need social-democratic answers to create a fair and resilient future.
Our vision: a leading role for the UK
Many long-term issues have an important political dimension in which the UK can play a leading role. Building on the work of previous Labour governments, we see a future where the UK plays a larger role in areas such as reducing international tensions and becoming a world leader in green technology.
MichaelA @ 2023-05-06T09:38 (+3)
EffiSciences is a collective of students founded in the Écoles Normales Supérieures (ENS) acting for more involved research in the face of the problems of our world. [translated from French]
MichaelA @ 2023-06-08T14:36 (+2)
"At Palisade, our mission is to help humanity find the safest possible routes to powerful AI systems aligned with human values. Our current approach is to research offensive AI capabilities to better understand and communicate the threats posed by agentic AI systems."
Jeffrey Ladish is the Executive Director.
MichaelA @ 2023-06-08T14:32 (+2)
"Admond is an independent Danish think tank that works to promote the safe and beneficial development of artificial intelligence."
"Artificial intelligence is going to change Denmark. Our mission is to ensure that this change happens safely and for the benefit of our democracy."
MichaelA @ 2023-06-08T14:31 (+2)
Senter for Langsiktig Politikk
"A politically independent organisation aimed at creating a better and safer future"
A think tank based in Norway.
MichaelA @ 2023-05-28T15:07 (+2)
Hi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.
We design tools, workshops and materials to support this mission. This is the first in a series of multiple EA Forum posts. We will tell you more about our mission and our other projects in future articles.
In this post, we are pleased to announce that we have just released the Confido app, a web-based tool for tracking and sharing probabilistic predictions and estimates. You can use it in strategic decision making when you want a probabilistic estimate on a topic from different stakeholders, in meetings to avoid anchoring, to organize forecasting tournaments, or in calibration workshops and lectures. We offer very high data privacy, so it is also used in government settings. See our demo or request your Confido workspace for free.
The current version of Confido is already used by several organizations, including the Dutch government, several policy think tanks and EA organizations.
Confido is under active development and there is a lot more to come. We’d love to hear your feedback and feature requests. To see news, follow us on Twitter, Facebook or LinkedIn or collaborate with us on Discord. We are also looking for funding. [emphasis added]
MichaelA @ 2023-05-28T15:05 (+2)
We are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.
Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems), and HAAISS (Human Aligned AI Summer School).
MichaelA @ 2023-05-06T09:42 (+2)
SaferAI is developing the technology that will allow auditing and mitigating potential harms from general-purpose AI systems such as large language models.
MichaelA @ 2023-03-25T08:38 (+2)
A*PART is an independent ML safety research and research facilitation organization working for a future with a benevolent relationship to AI.
We run AISI, the Alignment Hackathons, and an AI safety research update series.
MichaelA @ 2023-03-25T08:37 (+2)
Also the European Network for AI Safety (ENAIS)
TL;DR: The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th from 19:00-20:00 CET!
MichaelA @ 2023-03-06T10:03 (+2)
Riesgos CatastrĂłficos Globales
Our mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world.
There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising. [Quote from Introducing the new Riesgos CatastrĂłficos Globales team]
MichaelA @ 2023-03-05T10:22 (+2)
International Center for Future Generations
The International Center for Future Generations is a European think-and-do-tank for improving societal resilience in relation to exponential technologies and existential risks.
As of today, their website lists their priorities as:
- Climate crisis
- Technology [including AI] and democracy
- Biosecurity
MichaelA @ 2023-02-14T12:13 (+2)
Harvard AI Safety Team (HAIST), MIT AI Alignment (MAIA), and Cambridge Boston Alignment Initiative (CBAI)
These are three distinct but somewhat overlapping field-building initiatives. More info at Update on Harvard AI Safety Team and MIT AI Alignment and at the things that post links to.
MichaelA @ 2023-02-12T10:38 (+2)
Policy Foundry
an Australia-based organisation dedicated to developing high-quality and detailed policy proposals for the greatest challenges of the 21st century. [source]
MichaelA @ 2023-02-12T10:36 (+2)
The Collective Intelligence Project
We are an incubator for new governance models for transformative technology.
Our goal: To overcome the transformative technology trilemma.
Existing tech governance approaches fall prey to the transformative technology trilemma. They assume significant trade-offs between progress, participation, and safety.
Market-forward builders tend to sacrifice safety for progress; risk-averse technocrats tend to sacrifice participation for safety; participation-centered democrats tend to sacrifice progress for participation.
Collective flourishing requires all three. We need CI R&D so we can simultaneously advance technological capabilities, prevent disproportionate risks, and enable individual and collective self-determination.
MichaelA @ 2023-01-22T02:56 (+2)
Also Cavendish Labs:
Cavendish Labs is a 501(c)(3) nonprofit research organization dedicated to solving the most important and neglected scientific problems of our age.
We're founding a research community in Cavendish, Vermont that's focused primarily on AI safety and pandemic prevention, although we’re interested in all avenues of effective research.
MichaelA @ 2023-01-03T09:38 (+2)
Also the Forecasting Research Institute
The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good.
[...] our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.
[...] Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forecasting space.
MichaelA @ 2023-01-03T09:36 (+2)
Also School of Thinking
School of Thinking (SoT) is a media startup.
Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT's website.
Our priority is to produce content in English and Italian, but we will cover more languages down the line. We have been funded by the Effective Altruism Infrastructure Fund (EAIF) and the FTX Future Fund.
MichaelA @ 2022-11-14T02:20 (+2)
Also Research in Effective Altruism and Political Science (REAPS)
MichaelA @ 2022-10-15T11:54 (+2)
Also AFTER (Action Fund for Technology and Emerging Risk)
MichaelA @ 2022-10-01T15:22 (+2)
Also Future Academy (but maybe that's not an org and instead a project of EA Sweden?).
MichaelA @ 2022-09-29T20:16 (+2)
Also anything in Alignment Org Cheat Sheet that's not in here. And maybe adding that post's 1-sentence descriptions to the info this database has on each org listed in that post.
MichaelA @ 2022-09-16T12:20 (+2)
Also fp21 and maybe Humanity Forward.
(Reminder: This is a database of orgs relevant to longtermist/x-risk work, and includes some orgs that are not part of the longtermist/x-risk-reduction community, don't associate with those labels, and/or don't focus specifically on those issues.)
MichaelA @ 2022-09-13T09:07 (+2)
Also Alvea and Nucleic Acid Observatory
MichaelA @ 2022-09-13T09:06 (+2)
Also Apollo Fellowship, Atlas Fellowship, Condor Camp, and Pathfinder Successif
MichaelA @ 2022-09-13T09:01 (+2)
Also Space Futures Initiative and Center for Space Governance
MichaelA @ 2022-07-08T11:54 (+2)
Also EA Engineers
MichaelA @ 2022-05-02T16:14 (+2)
Also Encultured AI
MichaelA @ 2022-04-26T10:28 (+2)
Also Pour Demain
Pablo @ 2022-04-10T21:07 (+2)
To the best of my knowledge, Samotsvety is a group of forecasters, not an organization (although some of its members have recently launched or will soon launch forecasting-related orgs).
NunoSempere @ 2021-12-20T14:24 (+2)
Times I have used this post in the course of my research: II.
MichaelA @ 2021-12-20T15:21 (+2)
Is that 11 or 2?
(Either way, thanks for letting me know :) )
NunoSempere @ 2021-12-22T11:40 (+2)
2. Cheers.
MichaelA @ 2021-12-05T14:37 (+2)
See also Description of some organizations in or adjacent to long-term AI governance (non-exhaustive) (2021) (linked to from https://forum.effectivealtruism.org/posts/68ANc8KhEn6sbQ3P9/ai-governance-fundamentals-curriculum-and-application ).
Davidmanheim @ 2021-11-21T09:08 (+2)
How do I submit notes / corrections on orgs in the table?
MichaelA @ 2021-11-21T09:56 (+2)
"If you spot any errors or if you know any relevant info I failed to mention about these orgs, let me know via an EA Forum message or via following this link and then commenting there."
(The very first link I provide in this post allows changing the filtering & sorting, but not commenting, so you have to instead either send a message or use that other link.)
Thanks for your interest in suggesting extra info / corrections :)