The Future Fund’s Project Ideas Competition
By Nick_Beckstead, ketanrama, leopold, William_MacAskill @ 2022-02-28T17:27 (+236)
This is a linkpost to https://ftxfuturefund.org/our-project-ideas-competition/
The FTX Foundation's Future Fund is a philanthropic fund making grants and investments to ambitious projects in order to improve humanity's long-term prospects.
We have a longlist of project ideas that we’d be excited to help launch.
We’re now announcing a prize for new project ideas to add to this longlist. If you submit an idea, and we like it enough to add to the website, we’ll pay you a prize of $5,000 (or more in exceptional cases). We’ll also attribute the idea to you on the website (unless you prefer to be anonymous).
All submissions must be received in the next week, i.e. by Monday, March 7, 2022.
We are excited about this prize for two main reasons:
- We would love to add great ideas to our list of projects.
- We are excited about experimenting with prizes to jumpstart creative ideas.
To participate, you can either
- Add your proposal as a comment to this post (one proposal per comment, please), or
- Fill in this form
Please write your project idea in the same format as the project ideas on our website. Here’s an example:
Early detection center
Biorisk and Recovery from Catastrophes
By the time we find out about novel pathogens, they’ve already spread far and wide, as we saw with Covid-19. Earlier detection would increase the amount of time we have to respond to biothreats. Moreover, existing systems are almost exclusively focused on known pathogens—we could do a lot better by creating pathogen-agnostic systems that can detect unknown pathogens. We’d like to see a system that collects samples from wastewater or travelers, for example, and then performs a full metagenomic scan for anything that could be dangerous.
You can also provide further explanation, if you think the case for including your project idea will not be obvious to us on its face.
Some rules and fine print:
- You may submit refinements of ideas already on our website, but these might receive only a portion of the full prize.
- At our discretion, we will award partial prizes for submissions that are proposed by multiple people, or require additional work for us to make viable.
- At our discretion, we will award larger prizes for submissions that we really like.
- Prizes will be awarded at the sole discretion of the Future Fund.
We’re happy to answer questions, though it might take us a few days to respond due to other programs and content we're launching right now.
We’re excited to see what you come up with!
(Thanks to Owen Cotton-Barratt for helpful discussion and feedback.)
Pablo @ 2022-02-28T18:49 (+102)
Retrospective grant evaluations
Research That Can Help Us Improve
EA funders allocate over a hundred million dollars per year to longtermist causes, but a very small fraction of this money is spent evaluating past grantmaking decisions. We are excited to fund efforts to conduct retrospective evaluations to examine which of these decisions have stood the test of time. We hope that these evaluations will help us better score a grantmaker's track record and generally make grantmaking more meritocratic and, in turn, more effective. We are interested in funding evaluations not just of our own grantmaking decisions (including decisions by regrantors in our regranting program), but also of decisions made by other grantmaking organizations in the longtermist EA community.
Avi Lewis @ 2022-03-06T21:17 (+4)
I'd like to expand on this: a think-tank/paper that formulates a way of evaluating all grants by a set of objective, quantifiable criteria, in order to better inform future allocation decisions so that each dollar spent ends up making the greatest impact possible.
In this respect, retrospective grant evaluation is but one variable for measuring grant effectiveness.
I have a few more ideas that can be combined to create some kind of weighted scoring mechanism for grant evaluation:
- Social return on investment (SROI). Arriving at a set of non-monetary variables to quantify social impact
- Cost-effectiveness analysis. GiveWell is a leader in this. We could consider applying some of their key learnings from the not-for-profit space to EA projects
- Horizon scanning. Governmental bodies have departments that perform this kind of work. A proposal could be assessed by its alignment with emerging technology forecasts
- Backcasting. Seek out ventures that are working towards a desirable future goal
- Pareto optimality. Penalize ideas that could have a potential negative impact on factors/people outside of the intended target audience.
- Competence and track record. Prioritize grant allocators/judges based on previous successful grants. Prioritize grants to founders or organizations with a proven track record of competence
Obviously this list could go on and this is just a small number of possible variables. The idea is simply to build a model that can score the utility of a proposed grant.
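To make this concrete, here is a minimal sketch of what such a weighted scoring model could look like. The criteria names, weights, and ratings below are invented placeholders for illustration, not a validated rubric:

```python
# Hypothetical weighted scoring model for grant proposals.
# All criteria names and weights are illustrative, not a validated rubric.

CRITERIA_WEIGHTS = {
    "sroi": 0.25,                # social return on investment
    "cost_effectiveness": 0.25,
    "horizon_alignment": 0.15,   # fit with emerging-technology forecasts
    "backcasting_fit": 0.15,     # progress toward a desirable future goal
    "externality_safety": 0.10,  # scores low if negative spillovers are likely
    "track_record": 0.10,        # competence of founders/organization
}

def score_grant(ratings: dict[str, float]) -> float:
    """Combine per-criterion ratings (each on a 0-10 scale) into one utility score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

example = {
    "sroi": 7, "cost_effectiveness": 8, "horizon_alignment": 5,
    "backcasting_fit": 6, "externality_safety": 9, "track_record": 4,
}
print(f"Utility score: {score_grant(example):.2f}")  # Utility score: 6.70
```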
brb243 @ 2022-03-07T17:37 (+1)
Doesn't this neglect the notion that some grants are made strategically to develop interest, by presenting ideas in ways that appeal to different decisionmakers, since the objectives are already broadly known - such as improving the lives of humans and animals in the long term and preventing actors, including those who use and develop AI, from reducing the wellbeing of these individuals? It could be a bit of a reputational risk to evaluate along the lines of: 'well, we started convincing the government to focus on the long term by appealing to the extent of the future, so now we can start talking about the quality of life in various geographies, and if this goes well then we move on to the advancement of animal-positive systems across spacetime'?
Nathan Young @ 2022-03-02T02:33 (+87)
This list should have karma hidden and entries randomised. I guess most people do not read and vote all the way to the bottom. I certainly didn't the first time I read it.
evelynciara @ 2022-03-02T05:28 (+17)
I agree; something like Reddit's contest mode would be useful here. I've sorted the list by "newest first" to avoid mostly seeing the most upvoted entries.
Stephen Clare @ 2022-03-03T16:19 (+7)
I'm (pleasantly) surprised by the number of entries! But as a result the Forum seems pretty far from optimal as a platform for this discussion. Would be helpful to have a way to filter by focus area, for example.
Nathan Young @ 2022-03-03T21:51 (+3)
Yeah I suggest it should be done like this, with search and filters as you suggest.
https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=G7aLWq4zypE77Fn6f
Taras Morozov @ 2022-03-04T17:56 (+6)
To prove the point: ATM the most upvoted comment is also the oldest one - Pablo's Retrospective grant evaluations.
Greg_Colbourn @ 2022-03-26T15:06 (+4)
The winners have been announced. It's interesting to note the low correlation between comment karma and awards. Of the (3 out of 6) public submissions, the winners had a mean of 20 karma [as of posting this comment], minimum 18, and the (9 out of 15) honourable mentions a mean of 39 (suggesting perhaps these were somewhat weighted "by popular demand"), minimum 16. None of the winners were in the top 75 highest rated comments; 8/9 of the publicly posted honourable mentions were (including 4 in the top 11).
There are 6 winners and 15 honourable mentions listed in OP (21 total); the top 21 public submissions had a mean karma of 52, minimum 38; the top 50 a mean of 40, minimum 28; and the top 100 a mean of 31, minimum 18. And there are 86 public submissions not amongst the awardees with higher karma than the lowest karma award winner. See spreadsheet for details.
Given that half of the winners were private entries (2/3 if accounting for the fact that one was only posted publicly 2 weeks after the deadline), and 40% of the honourable mentions, one explanation could be that private entries were generally higher quality.
Note karma is an imperfect measure (so in addition to the factor Nathan mentions, maybe the discrepancy isn't that surprising).
Nathan Young @ 2022-03-07T14:48 (+2)
Alternatively, there could be an alternative ranking mode where you get shown two comments at once and choose whether one is better or whether they are about the same. Even a few people doing that would start to give a sense of whether they agree with the overall ranking.
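For illustration, here is a minimal sketch of how such pairwise judgments could be aggregated into a ranking, assuming an Elo-style update (the comment above doesn't specify a mechanism, so this is just one plausible choice):

```python
import random

# Elo-style rating from pairwise comparisons; "about the same" is a draw (0.5).
K = 32  # update step size (the standard chess value, chosen arbitrarily here)

def update(r_a: float, r_b: float, outcome: float) -> tuple[float, float]:
    """outcome: 1.0 if A is judged better, 0.0 if B is, 0.5 if about the same."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    r_a += K * (outcome - expected_a)
    r_b += K * ((1 - outcome) - (1 - expected_a))
    return r_a, r_b

ratings = {name: 1000.0 for name in ["comment_1", "comment_2", "comment_3"]}
for _ in range(100):  # simulate 100 judgments; a real app would ask voters
    a, b = random.sample(list(ratings), 2)
    verdict = random.choice([1.0, 0.0, 0.5])
    ratings[a], ratings[b] = update(ratings[a], ratings[b], verdict)
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```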
Sam Marks @ 2022-03-01T06:36 (+75)
Starting EA community offices
Effective altruism
Some cities, such as Boston and New York, are home to many EAs and some EA organizations, but lack dedicated EA spaces. Small offices in these cities could greatly facilitate local EA operations. Possible uses of such offices include: serving as an EA community center, hosting talks or reading groups, providing working space for small EA organizations, reducing overhead for event hosting, etc.
(Note: I believe someone actually is looking into starting such an office in Boston. I think (?) that might already be funded, but many other cities could plausibly benefit from offices of their own.)
RyanCarey @ 2022-03-02T03:34 (+53)
Here is a more ambitious version:
EA Coworking Spaces at Scale
Effective Altruism
The EA community has created several great coworking spaces, but mostly in an ad hoc way, with large overheads. Instead, a standard EA office could be created in up to 100 towns and cities. Companies, community organisers, and individuals working full-time on EA projects would be awarded a membership that allows them to use these offices in any city. Members gain from being able to work more flexibly, in collaboration with people with similar interests (this especially helps independent researchers with motivation). EA organisations benefit from a decreased need to do office management (which can be done centrally without special EA expertise). EA community organisers gain easier access to an event space and standard resources, such as a library, hotdesking space, and some access to the expertise of others using the office.
Leo @ 2022-03-05T21:46 (+15)
Here is an even more ambitious one:
Found an EA charter city
Effective Altruism
A place where EAs could live, work, and research for long periods, with an EA school for their children, an EA restaurant, and so on. Houses and a city UBI could be interesting incentives.
RyanCarey @ 2022-03-05T22:39 (+9)
What would be the value add of an EA city, over and above that of an EA school and coworking space? For example, I don't see why you need to eat at an EA restaurant, rather than just a regular restaurant with tasty and ethical food.
Note also that the libertarian "Free State Project" seems to have failed, despite there being many more libertarians than effective altruists.
MakoYass @ 2022-03-07T08:57 (+2)
Lower cost of living, meaning you can have more people working on less profitable stuff.
I'm not sure 5000 free staters (out of 20k signatories) should be considered failure.
RyanCarey @ 2022-03-07T12:31 (+2)
Right, but it sounds like it didn't go well afterwards? https://www.google.com/amp/s/newrepublic.com/amp/article/159662/libertarian-walks-into-bear-book-review-free-town-project
Leo @ 2022-03-06T00:06 (+1)
Mere libertarians may have failed, as anarchists did in similar attempts. But I believe that EAs can do better. An EA city would be a perfect place to apply many of the ideas and policies we are currently advocating for.
RyanCarey @ 2022-03-06T01:02 (+3)
Could you elaborate on the policies? And what, roughly, are you picturing - an EA-sympathising municipal government, or more of a Honduran special economic zone type situation?
Leo @ 2022-03-06T13:19 (+1)
I don't think I will elaborate on policies, given that they are the last thing to worry about. Even RP's negative report counts new policies among the benefits of charter cities. Now that we supposedly have effective ways to improve welfare, why wouldn't we build a new city, start from scratch, do it better than everybody else, and show it to the world? While I agree that this can't be done without putting a lot of thinking into it, I believe it must be done sooner or later. From a longtermist point of view: how could we ever expect to carry out a rational colonization of other planets when nobody on earth has ever been able to successfully found at least one rational city?
MakoYass @ 2022-03-07T09:03 (+1)
Note, VR is going to get really good in the next three years, so I wouldn't personally recommend getting too invested in any physical offices, but I guess as long as we're renting it won't be our problem.
Jeff_Kaufman @ 2022-04-22T20:00 (+4)
I think it is pretty unlikely that VR improvements on the scale of 3y will make people stop caring about actually being in person. This is a really hard problem that people have been working on for decades, and while we have definitely made a lot of progress, if we were 3y from "who needs offices?" I would expect to already see many early adopters pushing VR as a comfortable environment for general work (VR desktop) or meetings.
MakoYass @ 2022-04-22T21:09 (+1)
This is a really hard problem that people have been working on for decades
What problem are you referring to? Face tracking and remote presence didn't have a hardware platform at all until 2016, and weren't a desirable product until maybe this year (mostly due to covid), and won't be a strongly desirable product until hardware starts to improve dramatically next year. And due to the perversity of social software economics, it won't be profitable in proportion to its impact, so it'll come late.
There are currently zero non-blurry face tracking headsets that are light enough to wear throughout a workday, so you should expect to not see anyone using VR for work. But we know that next year there will be at least one of those (Apple's headset). It will appear suddenly and without any viable intermediaries. This could be a miracle of Apple, but from what I can tell, it's not. Competitors will be capable of similar feats a few years later.
(I expect to see limited initial impact from Apple VR (limited availability and reluctance from Apple to open the gates); the VR office won't come all at once, even though the technical requirements will.)
(You can get headsets with adequate visual acuity (60ppd) right now, but they're heavy, which makes them less convenient to use than 4k screens. They're expensive, and they require a bigger, heavier, and possibly even more expensive computer to drive them (though this was arguably partly a software problem), which also means they won't have the portability benefits that 2025's VR headsets will have, which means they're not going to be practical for much at all, and afaik the software for face tracking isn't available for them, and even if it were, it wouldn't have a sufficiently large user network in professional realms.)
Chris Leong @ 2022-03-07T09:30 (+2)
You think they'll get past the dizziness problem?
MakoYass @ 2022-03-08T01:06 (+1)
I think everyone will adapt. I vaguely remember hearing that there might be a relatively large contingent of people who never do adapt, I was unable to confirm this with 15 minutes of looking just now, though. Every accessibility complaint I came across seemed to be a solvable software problem rather than anything fundamental.
Chris Leong @ 2022-03-01T07:09 (+6)
I heard that New York was starting a coworking space as well
JanBrauner @ 2022-03-01T10:39 (+2)
I think Berlin has something like this
victor.yunenko @ 2022-03-01T13:15 (+4)
Indeed, the space was organized by Effektiv Spenden: teamwork-berlin.org
Yonatan Cale @ 2022-03-01T17:22 (+1)
I think EA Israel would have more people working remotely in international organizations if we had community offices.
[We recently got an office which I'm going to check out tomorrow; Not an ideal location for me but will try!]
jh @ 2022-03-01T14:55 (+74)
Investment strategies for longtermist funders
Research That Can Help Us Improve, Epistemic Institutions, Economic growth
Because of their non-standard goals, longtermist funders should arguably follow investment strategies that differ from standard best practices in investing. Longtermists place unusual value on certain scenarios and may have different views of how the future is likely to play out.
We'd be excited to see projects that make a contribution towards producing a pipeline of actionable recommendations in this regard. We think this is mostly a matter of combining a knowledge of finance with detailed views of the future for our areas of interest (i.e. forecasts for different scenarios with a focus on how giving opportunities may change and the associated financial winners/losers). There is a huge amount of room for research on these topics. Useful contributions could be made by research that develops these views of the future in a financially-relevant way, practical analysis of existing or potential financial instruments, and work to improve coordination on these topics.
Some of the ways the strategies of altruistic funders may differ include:
- Mission-correlated investing. That is, making investments such that they end up with more money in worlds where money is relatively more valuable. This increases the expected amount of good done (see the toy sketch after this list). In some cases, but not all, it will also reduce the variance in the amount of good done ('mission hedging').
- Non-standard views on expected financial returns. Longtermist investors should arguably have non-standard attitudes toward and definitions of risk (including correlation with other longtermists). This could make certain investments more attractive. In addition, if altruistic research suggests more accurate views of the future this may also be a useful source of excess returns. Furthermore, longtermists may want to operate with a discount rate that differs from normal (either more or less patient).
- As is already part of our approach, some investments may also generate impact of their own or have strategic value via developing relationships in new areas.
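To illustrate the first point, here is a toy sketch of why a mission-correlated portfolio raises the expected amount of good done even when its expected dollar return is unchanged. All numbers are invented:

```python
# Two equally likely worlds; a dollar buys more good in one than the other.
worlds = [
    {"p": 0.5, "utility_per_dollar": 3.0},  # world where money matters more
    {"p": 0.5, "utility_per_dollar": 1.0},
]

neutral = [100, 100]    # same payoff in both worlds (expected dollars: 100)
correlated = [140, 60]  # pays more exactly when money is more valuable
                        # (expected dollars: still 100)

def expected_good(payoffs):
    return sum(w["p"] * w["utility_per_dollar"] * x for w, x in zip(worlds, payoffs))

print(expected_good(neutral))     # 0.5*3*100 + 0.5*1*100 = 200.0
print(expected_good(correlated))  # 0.5*3*140 + 0.5*1*60  = 240.0
```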
Note: This builds on an idea from a recent post by Holden Karnofsky. However, I don't see it in your current project ideas list nor in the other comments here.
PeterSlattery @ 2022-03-08T04:01 (+10)
I have had a similar idea, which I didn't submit, relating to trying to create investor access to tax-deductible longtermist/patient philanthropy funds across all major EA hubs. Ideally these would be scaled up/modelled on the existing EA long term future fund (which I recall reading about but can't find now, sorry)
Edit - found it and some ideas - see this and top level post.
Greg_Colbourn @ 2022-03-04T09:23 (+2)
Just going to note that SBF/FTX/Alameda are already setting a very high benchmark when it comes to investing!
brb243 @ 2022-03-07T17:32 (+1)
A systemic change investment strategy for your review.
JBPDavies @ 2022-03-02T12:04 (+1)
You may be interested in the following project I'm working for: https://deeptransitions.net/news/the-deep-transition-futures-project-investing-in-transformation/ . The project goal is developing a new investment philosophy & strategy (complete with new outcome metrics) aimed at achieving transformational systems change. The project leverages the Deep Transitions theoretical framework as developed within the field of Sustainability Transitions and Science, Technology and Innovation Studies to create a theory of change and subsequently enact it with a group of public and private investors. Would recommend diving into this if you're interested in the nexus of investment and transformation of current systems/shaping future trajectories.
I can't say too much about future plans at this stage, except that following the completion of the current phase (developing the philosophy, strategies and metrics), there will be an extended experimentation phase in which these are applied, tested and continuously redeveloped.
JanBrauner @ 2022-03-01T09:34 (+65)
Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles
Effective Altruism
When it comes to the enhancement of productivity, health, and wellbeing, the EA community does not sufficiently utilise division of labour. Currently, community members need to obtain the relevant knowledge themselves and do related research, e.g. on health issues, themselves. We would like to see dedicated experts on these issues who offer optimal productivity, health, and wellbeing as a service. As a vision, a person working in a high-impact role could book calls with highly trained nutrition specialists, exercise specialists, sleep specialists, personal coaches, mental trainers, GPs with sufficient time, and so on, increasing their work output by 50% while costing little time. This could involve innovative methods such as ML-enabled optimal experiment design to figure out which interventions work for each individual.
Note: Inspired by conversations with various people. I won't name them here because I don't want to ask for permission first, but will share the prize money with them if I win something.
Brendon_Wong @ 2022-03-02T20:50 (+6)
I was going to write a similar comment for researching and promoting well-being and well-doing improvements for EAs as well as the general public! Since this already exists in similar form as a comment, strong upvoting instead.
Relevant articles include Ben Williamson’s project (https://forum.effectivealtruism.org/posts/i2Q3DTsQq9THhFEgR/introducing-effective-self-help) and Dynomight’s article on “Effective Selfishness” (https://dynomight.net/effective-selfishness/). I also have a forthcoming article on this.
Multiple project ideas that have been submitted also echo this general sentiment. For example, “Improving ventilation,” “Reducing amount of time productive people spend doing paperwork,” and “Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin).”
Edit: I am launching this as a project called Better! Please get in touch if you're interested in funding, collaborating on, or using this!
JanBrauner @ 2022-03-01T09:19 (+58)
Reducing gain-of-function research on potentially pandemic pathogens
Biorisk
Lab outbreaks and other lab accidents with infectious pathogens happen regularly. When such accidents happen in labs that work on gain-of-function research (on potentially pandemic pathogens), the outcome could be catastrophic. At the same time, the usefulness of gain-of-function research seems limited; for example, none of the major technological innovations that helped us fight COVID-19 (vaccines, testing, better treatment, infectious disease modelling) was enabled by gain-of-function research. We'd like to see projects that reduce the amount of gain-of-function research done in the world, for example by targeting coordination between journals or funding bodies, or developing safer alternatives to gain-of-function research.
Additional notes:
- There are many stakeholders in the research system (funders, journals, scientists, hosting institutions, hosting countries). I think the concentration of power is strongest in journals: there are only a few really high-profile life-science journals(*). Currently, they do publish gain-of-function research. Getting high-profile journals to coordinate against publishing such research would strongly reduce incentives for academic researchers. There is plenty of precedent for collaboration between journals, such as:
- essentially all journals agreed to publish COVID research in Open Access (2016 Statement on Data Sharing in Public Health Emergencies)
- essentially all journals agree to not publish research with human participants if there was no ethics committee approval for this research
- essentially all journals agree on requiring authors to state their conflicts of interest
(*) Super roughly:
Tier 1: Science, Nature
Tier 2: Nature Medicine, Cell, (maybe some other Nature family journals)
Tier 3: Some more Nature family journals, Science Advances, PNAS, several others
-> If three publishers (Science, Nature Group, Cell Press) coordinated, this would cover all tier 1 and tier 2 journals, and a large part of tier 3 journals.
RyanCarey @ 2022-03-01T19:31 (+56)
Putting Books in Libraries
Effective Altruism
The idea of this project is to come up with a menu of ~30 books and a list of ~10,000 libraries, and to offer to buy, for each library, any number of books from the menu. This would ensure that folks interested in EA-related topics, who browse a library, discover these ideas. The books would be ones that teach people to use an effective altruist mindset, similar to those on this list. The libraries could be ones that are large, or that serve top universities or cities with large English-speaking populations.
The case for the project is that if you assume that the value of discovering one new EA contributor is $200k, and that each book is read once per year (which seems plausible based on at least one random library), then the project will deliver value far greater than its financial cost of about $20 per book. The time costs would be minimised by doing much of the correspondence with libraries within a short period of weeks to months. It also can serve as a useful experiment for even larger-scale book distributions, and could be replicated in other languages.
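Making that back-of-the-envelope case explicit - a sketch using the figures above plus an assumed shelf life and conversion probability, neither of which the original specifies:

```python
# Expected value per book placed. The $200k and $20 figures come from the
# paragraph above; the other parameters are invented assumptions.

value_per_contributor = 200_000    # $ value of one new EA contributor
cost_per_book = 20                 # $ per book placed
reads_per_year = 1                 # assumed: each book is read once per year
years_on_shelf = 10                # assumed shelf life
p_read_creates_contributor = 1e-3  # assumed chance a read produces a contributor

ev_per_book = (reads_per_year * years_on_shelf
               * p_read_creates_contributor * value_per_contributor)
print(f"EV per book: ${ev_per_book:,.0f} vs cost ${cost_per_book}")
# EV per book: $2,000 vs cost $20

# Break-even: the book pays for itself if the per-read conversion chance
# exceeds cost / (total reads * value) = 20 / (10 * 200_000) = 1e-5.
```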
Greg_Colbourn @ 2022-03-02T11:38 (+24)
I like this idea, but I wonder - how many people / students actually use physical libraries still? I don't think I've used one in over 15 years. My impression is that most are in chronic decline (and many have closed over the last decade).
Cillian Crosson @ 2022-03-02T19:59 (+5)
A way around this could be to provide e-books and audio books instead of physical copies. Would also make the distribution easier.
(In the UK at least, it's possible to borrow e & audio from your local library using the Libby app)
Greg_Colbourn @ 2022-03-03T10:03 (+3)
I imagine that e-book systems (text and audio) work via access to large libraries, rather than needing people to request books be added individually? So maybe there is no action needed on this front (although someone should probably check that most EA books are available in such collections).
michaelchen @ 2022-03-03T14:32 (+2)
My understanding is that individual libraries license an ebook for a number of uses or a set period of time (say, two years).
michaelchen @ 2022-03-03T14:59 (+2)
I think print books are still preferred by more readers compared to e-books. You might as well donate the books in both the physical and digital formats and probably also as an audiobook.
It looks like libraries don't generally have an official way for you to donate print books virtually or to donate e-books, so I think you would have to inquire with them about whether you can make a donation and ask them to use that to buy specific books. Note that the cost of e-book licenses to libraries is many times the consumer sale price.
michaelchen @ 2022-03-03T14:48 (+10)
I really like this project idea! It's ambitious and yet approachable, and it seems that a lot of this work could be delegated to virtual personal assistants. Before starting the project, it seems that it would be valuable to quickly get a sense of how often EA books in libraries are read. For example, you could see how many copies of Doing Good Better are currently checked out, or perhaps you could nicely ask a library if they could tell you how many times a given book has been checked out.
Greg_Colbourn @ 2022-03-02T11:45 (+10)
In terms of the cost estimates, how would targeted social media advertising compare? Say targeting people who are already interested in charity and volunteering, or technology, or veg*anism, and offering to send them a free book.
RyanCarey @ 2022-03-02T12:16 (+8)
Not sure, but targeted social media advertising would also be a great project.
Greg_Colbourn @ 2022-03-03T10:45 (+6)
Peter Wildeford @ 2022-03-01T16:32 (+52)
Never Again: A Blue-Ribbon Panel on COVID Failures
Biorisk, Epistemic Institutions
Since effective altruism came to exist as a movement, COVID was the first big test of a negative event that was clearly within our areas of concern and expertise. Despite many high-profile warnings, the world was clearly not prepared to meet the moment and did not successfully contain COVID and prevent excess deaths to the extent that should've been theoretically possible if these warnings had been properly heeded. What went wrong?
We'd like to see a project that goes into extensive detail about the global COVID response - from governments, non-profits, for-profit companies, various high-profile individuals, and the effective altruism movement - and understands what the possibilities were for policy action given what we knew at the time and where things fell apart. What could've gone better and - more importantly - how might we be better prepared for the next disaster? And rather than try to re-fight the last war, what needs to be done now for us to better handle a future disaster that may not be bio-risk at all?
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Ozzie Gooen @ 2022-03-02T15:52 (+18)
Minor note about the name: "Never Again" is a slogan often associated with the Holocaust. I think that people using it for COVID might be taken as appropriation or similar. I might suggest a different name.
https://en.wikipedia.org/wiki/Never_again
Peter Wildeford @ 2022-03-02T16:29 (+2)
Sorry - I was not aware of this
Ozzie Gooen @ 2022-03-02T16:36 (+2)
No worries! I assumed as such.
Jackson Wagner @ 2022-03-01T20:26 (+10)
Are you thinking of EAs running this themselves? We already have an informal sense of what some top priorities are for action in biosafety/pandemic-preparedness going forwards (ramp up investment in vaccines and sterilizing technology, improve PPE, try to ban Gain of Function research, etc), even if this has never been tied together into a unified and rigorously prioritized framework.
I think the idea of a blue-ribbon panel on Covid failures could have huge impact if it had (in the best-case) official buy-in from government agencies like the CDC, or (failing that) at least something like "support from a couple prestigious universities" or "participation from a pair of senators that care about the issue" or "we don't get the USA or UK but we do get a small European country like Portugal to do a Blue Ribbon Covid Panel". In short, I think this idea might ideally look more like "lobby for the creation of an official Blue Ribbon Panel, and also try to contribute to it and influence it with EA research" rather than just running it entirely as an internal EA research project. But maybe I am wrong and a really good, comprehensive EA report could change a lot of minds.
IanDavidMoss @ 2022-03-03T00:38 (+2)
This is a great point. Also worth noting that there have been some retrospectives already, e.g. this one by the WHO: https://theindependentpanel.org/wp-content/uploads/2021/05/COVID-19-Make-it-the-Last-Pandemic_final.pdf
It would be worth considering the right balance between putting resources toward conducting an original analysis vs. mustering the political will for implementing recommendations from retrospectives like those above.
Jan_Kulveit @ 2022-03-18T12:15 (+4)
Note that CSER is running a project roughly in this direction.
Sean_o_h @ 2022-05-14T18:01 (+4)
An early output from this project: Research Agenda (pre-review)
Lessons from COVID-19 for GCR governance: a research agenda
The Lessons from Covid-19 Research Agenda offers a structure to study the COVID-19 pandemic and the pandemic response from a Global Catastrophic Risk (GCR) perspective. The agenda sets out the aims of our study, which is to investigate the key decisions and actions (or failures to decide or to act) that significantly altered the course of the pandemic, with the aim of improving disaster preparedness and response in the future. It also asks how we can transfer these lessons to other areas of (potential) global catastrophic risk management such as extreme climate change, radical loss of biodiversity and the governance of extreme risks posed by new technologies.
Our study aims to identify key moments - ‘inflection points’ - that significantly shaped the catastrophic trajectory of COVID-19. To that end, this Research Agenda has identified four broad clusters where such inflection points are likely to exist: pandemic preparedness, early action, vaccines and non-pharmaceutical interventions. The aim is to drill down into each of these clusters to ascertain whether and how the course of the pandemic might have gone differently, both at the national and the global level, using counterfactual analysis. Four aspects are used to assess candidate inflection points within each cluster: 1. the information available at the time; 2. the decision-making processes used; 3. the capacity and ability to implement different courses of action, and 4. the communication of information and decisions to different publics. The Research Agenda identifies crucial questions in each cluster for all four aspects that should enable the identification of the key lessons from COVID-19 and the pandemic response.
Sean_o_h @ 2022-03-18T12:22 (+2)
JanBrauner @ 2022-03-01T09:16 (+51)
Cognitive enhancement research and development (nootropics, devices, ...)
Values and Reflective Processes, Economic Growth
Improving people's ability to think has many positive effects on innovation, reflection, and potentially individual happiness. We'd like to see more rigorous research on nootropics, devices that improve cognitive performance, and similar fields. This could target any aspect of thinking ability---such as long/short-term memory, abstract reasoning, creativity---and any stage of the research and development pipeline---from wet-lab research or engineering, through testing in humans, to product development.
Additional notes on cognitive enhancement research:
- Importance:
- Sign of impact: You already seem to think that AI-based cognitive aids would be good from a longtermist perspective, so you will probably think that non-AI-based cognitive enhancement is also at least positive. (I personally think that's somewhat likely but not obvious and would love to see more analysis on it).
- Size of impact: AI-based cognitive enhancement is probably more promising right now. But non-AI-based cognitive enhancement is still pretty promising, there is some precedent (e.g. massive benefits from iodine supplementation), ...
- Neglectedness: The research community in many areas of cognitive enhancement is absolutely tiny. Even very low-hanging fruit doesn't get picked (e.g. before my team started to work on it, nobody had tried to replicate this 20-year-old paper, which found massive IQ increases).
- What funding could achieve: From talking to some people, this seems to be an issue of funding (medical funding bodies don't usually cover enhancement). There are legions of life scientists/psychologists/clinicians who have the relevant skills for this type of research, so if funding were available, I'd expect a lot more of this research would happen.
Jackson Wagner @ 2022-03-01T22:19 (+5)
I think this is an underrated idea, and should be considered a good refinement/addition to the FTX theme #2 of "AI-based cognitive aids". If it's worth kickstarting AI-based research assistant tools in order to make AI safety work go better, then doesn't the same logic apply towards:
- Supporting the development of brain-computer interfaces like Neuralink.
- Research into potential nootropics (glad to hear you are working on replicating the creatine study!) or the negative cognitive impact of air pollution and other toxins.
- Research into tools/techniques to increase focus at work, management best practices for research organizations, and other factors that increase productivity/motivation.
- Ordinary productivity-enhancing research software like better note-taking apps, virtual reality remote collaboration tools, etc.
The idea of AI-based cognitive aids only deserves special consideration insofar as:
- Work on AI-based tools will also contribute to AI safety research directly, but won't accelerate AI progress more generally. (This assumption seems sketchy to me.)
- The benefit of AI-based tools will get stronger and stronger as AI becomes more powerful, so it will be most helpful in scenarios where we need help the most. (IMO this assumption checks out. But this probably also applies to brain-computer interfaces, which might allow humans to interact with AI systems in a more direct and high-bandwidth way.)
Linch @ 2022-03-04T01:28 (+48)
Create and distribute civilizational restart manuals
A number of "existential risks" we are worried about may not directly kill off everybody, but would still cause enough deaths and chaos to make rebuilding extremely difficult. Thus, we propose that people design and distribute "civilizational restart manuals" to places that are likely to survive biological or nuclear catastrophes, giving humanity more backup options in case of extreme disasters.
The first version can be really cheap, perhaps involving storing paper copies of parts of Wikipedia plus the 10 most important books, sent to 100 safe and relatively uncorrelated locations -- somewhere in New Zealand, an Antarctic research base, a couple of nuclear bunkers, nuclear submarines, etc.
We are perhaps even more concerned that great moral values, like concern for all sentient beings, survive and re-emerge than we are about preserving civilization itself, so we would love for people to do further research and work on how to preserve cosmopolitan values as well.
Denis Drescher @ 2022-03-05T22:55 (+10)
My comment from another thread applies here too:
Agreed, very important in my view! I’ve been meaning to post a very similar proposal with one important addition:
Anthropogenic causes of civilizational collapse are (arguably) much more likely than natural ones. These anthropogenic causes are enabled by technology. If we preserve an unbiased sample of today’s knowledge or even if it’s the knowledge that we consider to have been most important, it may just steer the next cycle of our civilization right into the same kind of catastrophe again. If we make the information particularly durable, maybe we’ll even steer all future cycles of our civilization into the same kind of catastrophe.
The selection of the information needs to be very carefully thought out. Maybe only information on thorium reactors rather than uranium ones; only information on clean energy sources; only information on proof of stake; only information on farming low-suffering food; no prose or poetry that glorifies natural death or war; etc.
I think that is also something that none of the existing projects take into account.
Greg_Colbourn @ 2022-03-09T20:40 (+5)
Relatedly, see this post about continuing AI Alignment research after a GCR.
Denis Drescher @ 2022-03-10T10:04 (+2)
Very good!
ben.smith @ 2022-03-07T08:37 (+3)
Building on the above idea...
Research the technology required to restart modern civilization and ensure the technology is understood and accessible in safe havens throughout the world
A project could ensure that not only the know-how but also the technology exists, dispersed in various parts of the world, to enable a restart. For instance, New Zealand is often considered a relatively safe haven, but New Zealand's economy is highly specialized and, for many technologies, relies on importing technology rather than producing it indigenously. Kick-starting civilization from Wikipedia could prove very slow. Physical equipment and training enabling strategic technologies important for restart could be planted in locations like New Zealand and other social contexts which are relatively safe. At an extreme, industries that localize technology required for a restart could be subsidized. This would not necessarily mean the most advanced technology; rather, it means technologies that have been important to develop to the point we are at now.
Linch @ 2022-03-07T14:05 (+3)
Yes this is exciting to me, and related. Though of course generalist research talent is in short supply within EA, so the bar for any large-scale research project taking off is nontrivially high.
Denis Drescher @ 2022-03-05T23:01 (+2)
I didn’t write this up as a separate proposal as it seemed a bit self-serving, but creating underground cities for EAs with all the ALLFED technology and whatnot and all these backups could enable us to afterwards build a utopia with all the best voting methods and academic journals that require Bayesian analyses and publish negative results and Singer on the elementary school curriculum and universal basic income etc.
Hauke Hillebrandt @ 2022-03-04T11:58 (+2)
All of wikipedia is just 20GB.
Maybe there could be a way to share backups via Bittorrent or an 'offline version' of it... it would fit comfortably on most modern smartphones.
Linch @ 2022-03-04T12:17 (+8)
Digital solutions are not great because ideally you want something that can survive centuries or at least decades.
But offline USBs in prominent + safe locations might still be a good first step anyway.
Greg_Colbourn @ 2022-03-09T20:44 (+2)
I've got a full version of the English Wikipedia, complete with images, on my phone (86GB). It's very easy to get using the Kiwix app.
Greg_Colbourn @ 2022-03-09T20:52 (+2)
I note there isn't much on Kiwix in terms of survival/post-apocalypse collections (just a few TED talks and YouTube videos): a low-hanging fruit ripe for the picking.
Greg_Colbourn @ 2022-03-09T20:44 (+2)
Maybe someone should make an EA related collection and upload it to Kiwix? (Best books, EA Forum, AI Alignment Forum, LessWrong, SSC/ACX etc). This might be a good way of 80/20-ing preserving valuable information. As a bonus, people can easily and cheaply bury old phones with the info on, along with solar/hand-crank chargers.
wbryk @ 2022-03-14T00:31 (+1)
The group who discovers this restart manual could gain a huge advantage over the other groups in the world population -- they might reach the industrial age within a few decades while everyone else is still in the stone age.
This discoverer group will therefore have a huge influence over the world civilization they create.
I wonder if there's a way to ensure that this group has good values, even better values than our current world.
For example, imagine there were a series of value tests within the restart manual that the discoverers were required to pass in order to unlock the next stage of the manual. Either multiple groups rediscover the manual and fail until one group succeeds, or some subgroup unlocks the next step and is able to leap technologically above the others in the group fast enough to ensure that their values flourish.
If those value tests somehow ensure that a high score means the test-takers care deeply about the values we want them to have, then only those who've adopted these values will rule the earth.
As a side note, this would be a really cool short story or movie :)
agnode @ 2022-03-01T22:12 (+48)
SEP for every subject
Epistemic institutions
Create free online encyclopedias for every academic subject (or those most relevant to longtermism) written by experts and regularly updated. Despite the Stanford Encyclopedia of Philosophy being widely known and well-loved, there are few examples from other subjects. Often academic encyclopedias are both behind institutional paywalls and not accessible on sci-hub (e.g. https://oxfordre.com/). This would provide decisionmakers and the public with better access to academic views on a variety of topics.
Peter S. Park @ 2022-03-03T16:18 (+5)
Can editing efforts be directed to Wikipedia? Or would this not suffice because everyone can edit it?
agnode @ 2022-03-03T18:41 (+2)
I've read that experts often get frustrated with wikipedia because their work ends up getting undone by non-experts. Also there probably needs to be financial support and incentives for this kind of work.
brb243 @ 2022-03-07T17:47 (+1)
Yeah make it accessible and normally accepted.
Yitz @ 2022-03-08T05:02 (+2)
This would have to be a separate project from my proposed direct Wikipedia editing, but I'd be very much in support of this (I see the efforts as being complementary)
Fai @ 2022-03-02T12:08 (+46)
Preventing factory farming from spreading beyond the earth
Space governance, moral circle expansion (yes I am also proposing a new area of interest.)
Early space advocates such as Gerard O’Neill and Thomas Heppenheimer had both included animal husbandry in their designs of space colonies. In our time, the European Space Agency, the Canadian Space Agency, the Beijing University of Aeronautics and Astronautics, and NASA, have all expressed interests or announced projects to employ fish or insect farming in space.
This, if successful, might multiply the suffering of farmed animals to many times the current number of farmed animals on earth, spread across the long-term future. Research is needed in areas like:
- Continuous tracking of the scientific research on transporting and raising animals in space colonies or other planets.
- Tracking, or even conducting, research on the feasibility of cultivating meat in space.
- Tracking the development and implementation of AI in factory farming, which might enable unmanned factory farms and therefore make space factory farming more feasible. For instance, the aquaculture industry is hoping that AI can help them overcome major difficulties in offshore aquaculture. (This is part of my work)
- How likely alternative proteins, like plant-based and cultivated meat, are to substitute for all types of factory farming, including fish and insect farming.
- The timelines of alternative proteins, particularly cultivated meat. We are particularly interested in its comparison with space colonization timelines, or in other words, whether alternative proteins will succeed before major efforts at space colonization.
- Philosophical work on the ethics of space governance, in relation to nonhuman animals.
(note: I am actually writing a blogpost on factory farming in space/in the long-term future, stay tuned or write a message to me if you are interested)
(update: I posted it: https://forum.effectivealtruism.org/posts/bfdc3MpsYEfDdvgtP/why-the-expected-numbers-of-farmed-animals-in-the-far-future)
Nathan Young @ 2022-03-02T01:48 (+46)
Purchase a top journal
Metascience
Journals give bad incentives to academics - they require new knowledge to be written in hard-to-understand language, without pre-registration, at great cost, and sometimes focused on unimportant topics. Taking over a top journal and ensuring it incentivised high-quality work on the most important topics would begin to turn the scientific system around.
Jonathan Nankivell @ 2022-03-06T22:58 (+12)
We could, of course, simply get the future fund to pay for this. There is, however, an alternative that might be worth thinking about.
This seems like the kind of thing that dominant assurance contracts are designed to solve. We could run a Kickstarter, and use the future fund to pay the early backers if we fail to reach the target amount. This should incentivise all those who want the journals bought to chip in.
Here is one way we could do this:
- Use a system like pol.is to identify points of consensus between universities. This should be about the rules going forward if we buy the journal. For example, do they all want pre-registration? What should the copyright situation be? How should peer-review work? How should the journal be run? etc
- Whatever the consensus is, commit to implementing it if the buyout is successful
- Start crowdsourcing the funds needed. To maximise the chance of success, this should be done using a DAC (dominant assurance contract). This works like any other crowdfunding mechanism (GoFundMe, Kickstarter, etc), except we have a pool of money that is used to pay the early backers if we fail to meet the goal. If the standard donation size we're asking the unis for is £X, and having the publisher bought is worth at least £X to the uni, then the dominant strategy for the uni is to chip in (see the sketch after this list).
- If we raise the money, great! We can do what we committed to doing. We're happy, the unis are happy, the shareholders of the publisher are happy. If we fail to raise the money, we pay all the early backers, and move on to other things.
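For illustration, here is a minimal sketch of the DAC payoff logic with invented numbers (not a claim about what buying a journal would actually cost):

```python
# Dominant assurance contract: backers pledge toward a goal; if the goal is
# not reached, each backer gets their pledge back plus a refund bonus.
GOAL = 1_000_000     # target amount to buy the journal (invented)
PLEDGE = 10_000      # asked from each university/library (invented)
REFUND_BONUS = 500   # paid to each early backer if the campaign fails

def backer_payoff(n_backers: int, value_if_funded: float) -> float:
    """Payoff to one backer who pledged, given the total number of backers."""
    if n_backers * PLEDGE >= GOAL:
        return value_if_funded - PLEDGE  # funded: pay pledge, receive the good
    return REFUND_BONUS                  # failed: pledge refunded, plus bonus

# If the journal buyout is worth more than the pledge to a backer, pledging
# weakly dominates abstaining: they either get the good at a worthwhile
# price, or they profit from the refund bonus.
for n in (50, 100):
    print(n, backer_payoff(n, value_if_funded=15_000))
# 50 backers -> 500 (campaign fails); 100 backers -> 5000 (campaign funds)
```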
Jonathan Nankivell @ 2022-03-08T17:02 (+3)
Update: I emailed Alex Tabarrok to get his thoughts on this. He originally proposed using dominant assurance contracts to solve public good problems, and he has experience testing it empirically.
He makes the following points about my suggestion:
- The first step is the most important. Without clarity of what the public good will be and who is expected to pay for it, the DAC won't work
- You should probably focus on libraries as the potential source of funding. They are the ones who pay subscription fees, they are the ones who would benefit from this
- DACs are a novel form of social technology. It might be best to try to deliver smaller public goods first, allowing people to get more familiar, before trying to buy a journal
He also suggested other ways to solve the same problem:
- Have you considered starting a new journal? This should be cheaper. There would also be coordination questions to solve to make it prestigious, but this one might be easier
- Have you considered 'flipping' a journal? Could you take the editors, reviewers and community that supports an existing journal, and persuade them to start a similar but open access journal? (The Fair Open Access Alliance seem to have had success facilitating this. Perhaps we should support them?)
My current (and weakly held) position is that flipping editorial boards to create new open access journals is the best way to improve publishing standards. Small steps towards a much better world. Would it be possible for the Future Fund to entice 80% of the big journals to do this? The top journal in every field? Maybe.
brb243 @ 2022-03-07T17:44 (+2)
Isn't this a reputational loss risk - an actor in the broader EA community seeking to influence the scientific discourse by economic, peer-unreviewed means? There are repositories of papers, such as that of the Legal Priorities Project, that are cool, and the EA community pays attention to aggregate narratives to keep some of its terms rather exclusive and convincing. If you mean coordinating research to learn from the scientific community, then it can make sense to read papers and correspond with academics - maybe on the EA Forum or so. No need to buy a journal.
James Bailey @ 2022-03-04T00:26 (+2)
Agree, was thinking of submitting a proposal like this. A few ways to easily improve most journals:
-Require data and code to be shared
-Open access, but without the huge author fees most open access journals charge
-If you do charge any fees, use them to pay reviewers for fast reviews
Jonas Moss @ 2022-03-05T08:04 (+1)
Shouldn't reviewers be paid, regardless of fees? It is a tough job, and there should be strong incentives to do it properly.
RyanCarey @ 2022-03-03T22:12 (+45)
A Longtermist Nobel Prize
All Areas
The idea is to upgrade the Future of Life Award to be more desirable. The prize money would be increased from $50k to 10M SEK (roughly $1.1M) per individual to match the Nobel Prizes. Both for prestige, and to make sure ideal candidates are selected, the selection procedure would be reviewed, adding extra judges or governance mechanisms as needed. This would not immediately mean that longtermism has something to match the prestige of a Nobel, but it would give a substantial reward and offer top longtermists something to strive for.
(A variation on a suggestion by DavidMoss)
Gavin @ 2022-03-03T22:54 (+2)
How much of the prestige is the money value, how much just the age of the prize, and how much the association with a fancy institution like the Swedish monarchy?
I seem to remember that Heisenberg etc were more excited by the money than the prize, back in the day.
RyanCarey @ 2022-03-04T01:15 (+2)
The money isn't necessary - see the Fields Medal. Nor is the Swedish Monarchy - see the Nobel Memorial Prize in Econ. Age obviously helps. And there's some self-reinforcement - people want the prize that others want. My guess is that money does help, but this could be further investigated.
Hauke Hillebrandt @ 2022-03-04T12:08 (+4)
The Jacobs Foundation awards $1m prizes to scientists as a grant - I think this might be one of the biggest - one could award $5-10m to make it the most prestigious prize in the world.
Taras Morozov @ 2022-03-04T12:24 (+1)
I think Templeton Prize has become prestigious because they give more money than the Nobel on purpose.
Greg_Colbourn @ 2022-03-02T13:18 (+45)
Megastar salaries for AI alignment work
Artificial Intelligence
Aligning future superhuman AI systems is arguably the most difficult problem currently facing humanity; and the most important. In order to solve it, we need all the help we can get from the very best and brightest. To the extent that we can identify the absolute most intelligent, most capable, and most qualified people on the planet – think Fields Medalists, Nobel Prize winners, foremost champions of intellectual competition, the most sought-after engineers – we aim to offer them salaries competitive with top sportspeople, actors and music artists to work on the problem. This is complementary to our AI alignment prizes, in that getting paid is not dependent on results. The pay is for devoting a significant amount of full time work (say a year), and maximum brainpower, to the problem; with the hope that highly promising directions in the pursuit of a full solution will be forthcoming. We will aim to provide access to top AI alignment researchers for guidance, affiliation with top-tier universities, and an exclusive retreat house and office for fellows of this program to use, if so desired.
Greg_Colbourn @ 2022-03-02T20:41 (+5)
Here's a more fleshed out version, FAQ style. Comments welcome.
Peter Wildeford @ 2022-03-01T16:40 (+45)
Longtermist Policy Lobbying Group
Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes
Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals. While longtermism can and should remain bi-partisan, there may be many opportunities to pull the rope sideways on policy areas of concern.
We'd like to see a project that attempts to carefully understand the lobbying process and explores garnering support for identified tractable policies. We think that while such a project could scale to be very large once successful, anyone working on this project should really aim to start small and tread carefully, aiming to avoid issues around the unilateralist's curse and ensuring they do not make longtermism into an overly partisan issue. It's likely that longtermist lobbying might be best done as lobbying for clear areas related to longtermism rather than for longtermism as a distinct idea - such as lobbying for climate change mitigation or lobbying for pandemic preparedness.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
IanDavidMoss @ 2022-03-03T01:01 (+4)
I think some form of lobbying for longtermist-friendly policies would be quite valuable. However, I'm skeptical that running lobbying work through a single centralized "shop" is going to be the most efficient use of funds. Lobbying groups tend to specialize in a specific target audience, e.g., particular divisions of the US federal government or stakeholders in a particular industry, because the relationships are really important to success of initiatives and those take time to develop and maintain. My guess is that effective strategies to get desired policies implemented will depend a lot on the intersection of the target audience + substance of the policy + the existing landscape of influences on the relevant decision-makers. In practice, this would probably mean at the very least developing a lot of partnerships with colleague organizations to help get things done or perhaps more likely setting up a regranting fund of some kind to support those partners.
Happy to chat about this further since we're actively working on setting something like this up at EIP.
Peter Wildeford @ 2022-03-03T01:20 (+4)
I agree with you on the value of not overly centralizing this and of having different groups specialize in different policy areas and/or approaches.
Peter Wildeford @ 2022-03-01T16:36 (+44)
Landscape Analysis: Longtermist Policy
Biorisk, Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes
Many social movements find a lot of opportunity by attempting to influence policy to achieve their goals - what ought we do for longtermist policy? Longtermism can and should remain bipartisan, but there may be many opportunities to pull the rope sideways on policy areas of concern.
We'd like to see a project that attempts to collect a large number of possible longtermist policies that are tractable, explore strategies for pushing these policies, and also use public opinion polling on representative samples to understand which policies are popular. Based on this information, we could then suggest initiatives to try to push for.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
PeterSlattery @ 2022-03-08T04:46 (+2)
I really like this idea and think that having a global policy network could be valuable over the long term, particularly if coordinated with other domains of EA work. For instance, I can imagine RT and various other research orgs and researchers providing evidence on demand to EAs who are directly embedded within policy production.
brb243 @ 2022-03-07T17:55 (+1)
If polling shows that the most popular policies are those that safeguard the long-term objectives of the nation's top lobbyists while disregarding others' preferences, do you recommend using those policies as attention-captivating conversation starters, so that impartial consideration can be explained one-on-one to regulators, supporting its internalization and leading to measures that prevent the enactment of these popular but possibly catastrophically risky policies (a codified dystopia for some actors)? Is that how I should understand it?
JBPDavies @ 2022-03-02T10:21 (+1)
Hi Peter (if I may!),
I love this and your other longtermism suggestions, thanks for submitting them! Not sure if you saw my suggestion below of a Longtermism Policy Lab - but maybe this is exactly the kind of activity that could fall under such an organisation/programme (within Rethink, even)? Likewise for your suggestion of a lobbying group - by working directly with societal partners (e.g. national ministries across the world) you could begin implementation directly through experimentation.
I've been involved in a similar (successful) project called the 'Transformative Innovation Policy Consortium (TIPC)', which works with, for example, the Colombian government to shape innovation policy towards sustainable and just transformation (as opposed to systems optimisation).
Would love to talk to you about your ideas for this space if you're interested. I'm working with the Institutions for Longtermism research platform at Utrecht University & we're still trying to shape our focus, so there may be some scope for piloting ideas.
IanDavidMoss @ 2022-03-03T00:44 (+2)
JBPDavies, it sounds like you and I should connect as well -- I run the Effective Institutions Project and I'd love to learn more about your Institutions for Longtermism research and provide input/ideas as appropriate.
JBPDavies @ 2022-03-03T08:01 (+1)
Sounds fantastic - drop me an email at j.b.p.davies@uu.nl and I would love to set up a meeting. In the meantime I'll dive into EIP's work!
Peter Wildeford @ 2022-03-02T16:30 (+2)
Sure! Email me at peter@rethinkpriorities.org and I will set up a meeting.
Vaidehi Agarwalla @ 2022-03-01T01:04 (+43)
Experiments to scale mentorship and upskill people
Empowering Exceptional People, Effective Altruism
For many very important and pressing problems, especially those focused on improving the far future, there are very few experts working full-time on them. What's more, these fields are nascent, and with few well-defined paths for young or early-career people to follow, it can be hard to enter the field. Experts in the field are often ideal mentors - they can vet newcomers, help them navigate the field, provide career advice, collaborate on projects, and open up access to new opportunities - but there are currently very few people qualified to be mentors. We'd love to see projects that experiment with ways to improve the mentorship pipeline so that more individuals can work on pressing problems. The kinds of possible solutions are very broad - from developing expertise in some subset of mentorship tasks (such as vetting) in a scalable way, to increasing the pool of mentors, improving existing mentors' ability to provide advice by training them, experimenting with better mentor-mentee matchmaking, running structured mentorship programs, and more.
Denis Drescher @ 2022-03-01T19:42 (+40)
Proportional prizes for prescient philanthropists
Effective Altruism, Economic Growth, Empowering Exceptional People
A low-tech alternative to my proposal for impact markets is to offer regular, reliable prizes for early supporters of exceptionally impactful charities. These can be founders, advisors, or donors. The prizes would not only go to the top supporters but proportionally to almost anyone who can prove that they’ve contributed (or where the charity has proof of the contribution), capped only at a level where the prize money is close to the cost of the administrative overhead.
Donors may be rewarded in proportion to the aggregate size of their donations, advisors may be rewarded in proportion to their time investment valued at market rates, founders may be rewarded in proportion to the sum of both.
If these prizes are awarded reliably, maybe by several entities, they may have some of the same benefits as impact markets. Smart and altruistic donors, advisors, and charity serial entrepreneurs can accumulate more capital that they can use to support their next equally prescient project.
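To make the allocation rule above concrete, here's a minimal Python sketch, assuming a fixed prize pool per charity and a payout floor near the administrative overhead; all names, rates, and amounts are hypothetical placeholders, not part of the original proposal.

```python
# Minimal sketch of proportional prizes: donors weighted by donation size,
# advisors by time at market rates, founders by the sum of both.
# All figures below are hypothetical.

PRIZE_POOL = 100_000   # total retroactive prize for one charity (assumed)
MIN_PAYOUT = 200       # payouts below this ~equal admin overhead, so dropped (assumed)

contributions = {      # hypothetical supporters and their claims
    "donor_a":   {"donated": 50_000, "hours": 0,     "rate": 0},
    "advisor_b": {"donated": 0,      "hours": 100,   "rate": 150},
    "founder_c": {"donated": 10_000, "hours": 2_000, "rate": 80},
}

def claim_value(c):
    # Donations count at face value; time counts at the claimed market rate.
    return c["donated"] + c["hours"] * c["rate"]

total = sum(claim_value(c) for c in contributions.values())
payouts = {
    name: PRIZE_POOL * claim_value(c) / total
    for name, c in contributions.items()
}
# Cap from below: drop payouts close to the cost of administering them.
payouts = {name: p for name, p in payouts.items() if p >= MIN_PAYOUT}
print(payouts)
```

The "capped only at a level where the prize money is close to the cost of the administrative overhead" clause is modelled here as a simple minimum payout; a real implementation would need to price that overhead explicitly.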
IanDavidMoss @ 2022-03-03T14:26 (+5)
Reading this again, I want to register that I am much more excited about the idea of rewarding donors for early investment than I am about the other elements of the plan. As someone who has founded multiple organizations, I find the task of attaching precise retrospective monetary values to different people's contributions of time, connections, talent, etc. in a way that everyone will accept as fair pretty infeasible.
Early donations, by contrast, are an objective and verifiable measure of value that is much easier to reward in practice. You could just say that the first, say $500k that the org raises is eligible for retroactive reward/matching/whatever, with maybe the first $100k or something weighted more heavily.
It's also worth thinking through the incentives that a system like this would set up, especially at scale. It would result in more seed funding and more small charities being founded and sustained for the first couple of years. I personally think that's a good thing at the present time, but I also know people who argue that we should be taking better advantage of economies of scale in existing organizations. There is probably a point at which there is too much entrepreneurship, and it's worth figuring out what that point is before investing heavily in this idea.
Denis Drescher @ 2022-03-03T23:58 (+4)
Owen Cotton-Barratt and I have thought about this for a while and have mostly arrived at the solution that beneficiaries who collaborated on a project need to hash this out with each other: they make a contract, like in a for-profit startup, specifying who owns how much of the impact of the project. I think that capable charity entrepreneurs are a scarce resource as well, so we should try hard to foster them. So that's probably where a large chunk of the impact is.
When it comes to the incentive structures: we – mostly Matt Brooks and I, but the rest of the team will be around – will hold a talk on the risks from perverse incentives in our system at the Funding the Commons II conference tomorrow. Afterwards I can link the video recording here. My big write-up, which is more comprehensive than the presentation but unfinished, is linked from the other proposal.
That said …
I personally think that's a good thing at the present time, but I also know people who argue that we should be taking better advantage of economies of scale in existing organizations. There is probably a point at which there is too much entrepreneurship, and it's worth figuring out what that point is before investing heavily in this idea.
I don’t quite understand… More funding for donors -> more donors -> more money to charities -> higher scale, right? So this system would enable charities to hire more so people can specialize etc., not the opposite?
Thanks!
colin @ 2022-03-03T14:24 (+3)
This is really interesting. Setting up individual projects as DAOs could be an effective way to manage this. The DAO issues tokens to founders, advisors, and donors. If it retrospectively turns out that this was a particularly impactful project, the funder can buy and burn the DAO's tokens, which will drive up the price, thereby rewarding all of the holders.
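To illustrate the buy-and-burn dynamic, here's a toy Python sketch (not smart-contract code) that models the token's price venue as a constant-product pool, which is an assumption the comment doesn't specify; all quantities are hypothetical.

```python
# Toy model: project tokens trade against cash in a constant-product pool
# (x * y = k). A retro funder buying from the pool pushes the price up;
# burning the purchased tokens keeps them out of circulation.

pool_tokens = 1_000_000.0   # project tokens in the pool (hypothetical)
pool_cash = 100_000.0       # e.g. stablecoins in the pool (hypothetical)
k = pool_tokens * pool_cash

def price():
    return pool_cash / pool_tokens   # marginal price of one token

print(f"price before retro funding: {price():.4f}")

spend = 50_000.0                     # retro funder's purchase
pool_cash += spend
bought = pool_tokens - k / pool_cash
pool_tokens -= bought                # tokens leave the pool...
burned = bought                      # ...and are burned, not resold

print(f"price after buy-and-burn:  {price():.4f}")
# Remaining holders (founders, advisors, donors) can now sell at the higher price.
```

Running this, the price rises from 0.10 to about 0.23, which is the mechanism by which all holders, not just the seller, are rewarded.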
Denis Drescher @ 2022-03-03T23:49 (+2)
Yep! There’s this other proposal for impact markets linked above. That’s basically that with slight tweaks. It’s all written in a technology-agnostic way, but one of the implementations that we’re currently looking into is on the blockchain. There’s even a bit of a prototype already. :-D
IanDavidMoss @ 2022-03-01T22:25 (+2)
I really like this idea, and FWIW find it much more intuitive to grasp than your impact markets proposal.
Denis Drescher @ 2022-03-01T22:33 (+2)
Sweet, thanks! :-D
Then it’ll also help me explain impact markets to people.
alexrjl @ 2022-03-01T07:07 (+40)
High quality, EA Audio Library (HEAAL)
all/meta, though I think the main value add is in AI
(Nonlinear has made a great rough/low quality version of this, so at least some credit/prize should go to them.)
Audio has several advantages over text when it comes to consuming long-form content, one significant example being that people can consume it while doing some other task (commuting, chores, exercising), meaning the time cost of consumption is almost zero. If we think that broad, sustained engagement with key ideas is important, making the cost of engagement much lower is a clear win. Quoting Holden's recent post:
I think a highly talented, dedicated generalist could become one of the world’s 25 most broadly knowledgeable people on the subject (in the sense of understanding a number of different agendas and arguments that are out there, rather than focusing on one particular line of research), from a standing start (no background in AI, AI alignment or computer science), within a year
What does high quality mean here, and what content might get covered?
- High quality means read by humans (I'm imagining paying maths/compsci students who'll be able to handle mathematical notation), with good descriptions of diagrams. If posts involve conversations (e.g. the MIRI logs), different voices are used for different people. Holden's Cold Takes read-throughs are a good example.
- High quality also means paying for or otherwise dealing with copyright, and curating pieces into much more searchable/navigable collections than the current podcast feeds.
What sort of things?
- Alignment Forum posts, with sequences collated into playlists.
- The MIRI conversations.
- Key technical reports, e.g. Carlsmith on power-seeking AI, Cotra on bio anchors.
- New books (The Long View).
- Everything on key reading lists, again organised into playable feeds, e.g. Tessa's biosec list, jtm's longtermism list, the AGI safety and governance fundamentals curricula, and the introductory and in-depth fellowship reading materials.
Nathan Young @ 2022-03-02T13:28 (+2)
Frankly, I'd like the ability to send a written feed somewhere and have it turned into audio, maybe crowdfunded. Clearly Nonlinear can do it, so why can't I have it for, say, Bryan Caplan's writing?
alexrjl @ 2022-03-02T14:06 (+3)
If you're ok with autogenerated content of roughly the quality of nonlinear, both Pocket and Evie are reasonable choices.
Milan_Griffes @ 2022-03-02T14:36 (+10)
High-quality human performance is much more engaging than autogenerated audio, fwiw.
alexrjl @ 2022-03-02T17:26 (+4)
Hence the original pitch!
Nathan Young @ 2022-03-02T13:26 (+2)
Non-Linear could be paid to repost the most upvoted posts but with voice actors.
Arb @ 2022-03-07T23:46 (+39)
Our World in Base Rates
Epistemic Institutions
Our World In Data are excellent; they provide world-class data and analysis on a bunch of subjects. Their COVID coverage made it obvious that this is a great public good.
So far, they haven't included data on base rates; but from Tetlock we know that base rates are the king of judgmental forecasting (EAs generally agree). Making them easily available can thus help people think better about the future. Here's a cool corporate example.
e.g.
“85% of big data projects fail”;
“10% of people refuse to be vaccinated because of fearing needles (pre-COVID so you can compare to the COVID hesitancy)”;
"11% of ballot initiatives pass"
“7% of Emergent Ventures applications are granted”;
“50% of applicants get 80k advice”;
“x% of applicants get to the 3rd round of OpenPhil hiring”, "which takes y months";
“x% of graduates from country [y] start a business”.
MVP:
- come up with hundreds of baserates relevant to EA causes
- scrape Wikidata for them, or diffbot.com
- recurse: get people to forecast the true value, or later value (put them in a private competition on Foretold, index them on metaforecast.org)
Later, QURI-style innovations: add methods to combine multiple estimates and do proper Bayesian inference on them. If we go the crowdsourcing route, we could use the infrastructure used for graphclasses (voting on edits). Prominently mark the age of the estimate.
PS: We already sympathise with the many people who critique base rates for personal probability.
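As one possible reading of the "proper Bayesian inference" step above, here's a minimal Python sketch that pools several observed counts behind a single base rate using a Beta-Binomial model and reports a credible interval; the study counts are hypothetical placeholders, not real data.

```python
# Pool several (hypothetical) studies of "big data projects fail" into one
# posterior base rate, starting from a uniform Beta(1, 1) prior.
from scipy import stats

studies = [   # (failures observed, projects observed) - hypothetical
    (85, 100),
    (40, 52),
    (160, 190),
]

alpha, beta = 1.0, 1.0   # Beta(1, 1) prior over the base rate
for k, n in studies:
    alpha += k           # conjugate update: add failures...
    beta += n - k        # ...and non-failures

posterior = stats.beta(alpha, beta)
lo, hi = posterior.ppf([0.05, 0.95])
print(f"pooled base rate: {posterior.mean():.0%} (90% CI {lo:.0%}-{hi:.0%})")
```

This treats the studies as exchangeable, which is the simplest assumption; the "combine multiple estimates" step could instead use a hierarchical model when sources differ in reliability, and crowd forecasts could enter as additional pseudo-counts.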
Ozzie Gooen @ 2022-03-07T23:53 (+13)
I think this is neat.
Perhaps-minor note: if you'd do it at scale, I imagine you'd want something more sophisticated than coarse base rates. More like, "For a project with these parameters, our model estimates that you have an 85% chance of failure."
I of course see this as basically a bunch of estimation functions, but you get the idea.
Kat Woods @ 2022-03-06T18:12 (+38)
Teaching buy-out fund
Allocate EA Researchers from Teaching Activities to Research
Problem: Professors spend a lot of their time teaching instead of researching. Many don't know that their universities offer "teaching buy-outs", where if you pay a certain amount of money, you don't have to teach. Many also don't know that a lot of EA funders would be interested in paying for that.
Solution: Make a fund that's explicitly for this, so that more EAs know about it. This is the 80/20 of promoting the idea. Alternatively, funders can just advertise this offering in other ways.
elifland @ 2022-03-03T05:08 (+38)
Adversarial collaborations on important topics
Epistemic Institutions
There are many important topics, such as the level of risk from advanced artificial intelligence and how to reduce it, on which reasonable people hold very different views. We are interested in experimenting with various types of adversarial collaborations, which we define as people with opposing views working to clarify their disagreement and either resolve it or identify an experiment/observation that would resolve it. We are especially excited about combining adversarial collaborations with forecasting on any double cruxes identified through them. Some ideas for experimentation might be varying the number of participants, varying the level of moderation and strictness of enforced structure, and introducing AI-based aids.
Existing and past work relevant to this space include the Adversarial Collaboration Project, SlateStarCodex's adversarial collaboration contests, and the Late 2021 MIRI Conversations.
brb243 @ 2022-03-07T18:04 (+1)
What topics? Which are not yet covered? (E.g. militaries already talk about peace.) What adversaries? Are they rather collaborators (such as private actors considering mergers, acquisitions, and shared industry interests, or public actors considering trade and alliance advantages)? Or do you mean decisionmaker-nondecisionmaker collaborations? The issue there is that systems are internalized, so from the nondecisionmakers you can get "I want to be as powerful over others as the decisionmakers," or an inability to express or even know their preferences (a chicken in a cage: what can it say? a cricket on a farm: what does it know about its preferences?). Probably, adversaries would prefer to talk about "how can we get the other to give us profit" rather than "how can we make impact," since the agreement is "not impact, profit?"
Peter Wildeford @ 2022-03-01T21:24 (+38)
Focus Groups Exploring Longtermism / Deliberative Democracy for Longtermism
Epistemic Institutions, Values and Reflective Processes
Right now longtermism is being developed by a relatively narrow set of stakeholders and participants compared to the broad set of people (and nonhumans) that would be affected by the decisions we make. We'd like to see focus groups that engage a more diverse group of people (diverse across many axes, including but not limited to race, gender, age, geography, and socioeconomic status), attempt to explain longtermism to them, and explore what visions they have for the future of humanity (and nonhumans). Hopefully, through many iterations, we can find a way to cross what is likely a rather large initial inferential distance and explore how a broader and more diverse group of people would think about longtermism once ideally informed. This can be related to, and informed by, deliberative democracy. It could also help initiate what longtermists call "the long reflection".
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
IanDavidMoss @ 2022-03-03T01:29 (+7)
I absolutely love this idea and really hope it gets funded! It reminds me in spirit of the stakeholder research that IDinsight did to help inform the moral weights GiveWell uses in its cost-effectiveness analysis. At scale, it could parallel aspects of the process used to come up with the Sustainable Development Goals.
michaelchen @ 2022-03-01T05:49 (+38)
Foundational research on the value of the long-term future
Research That Can Help Us Improve
If we successfully avoid existential catastrophe in the next century, what are the best pathways to reaching existential security, and how likely is it? How optimistic should we be about the trajectory of the long-term future? What are the worst-case scenarios, and how do we avoid them? How can we make sure the future is robustly positive and build a world where as many people as possible are flourishing?
To elaborate on what I have in mind with this proposal, it seems important to conduct research beyond reducing existential risk over the next century – we should make sure that the future we have afterwards is good as well. I'd be interested in research following up on subjects like those of the posts:
- "Disappointing Futures" Might Be As Important As Existential Risk - Michael Dickens
- Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese
- The expected value of extinction risk reduction is positive - Jan Brauner and Friederike Grosse-Holz and A longtermist critique of “The expected value of extinction risk reduction is positive”
- Should We Prioritize Long-Term Existential Risk? - Michael Dickens
- S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017) – Center on Long-Term Risk
- Cooperation, Conflict, and Transformative Artificial Intelligence - Center on Long-Term Risk
Fai @ 2022-03-03T06:28 (+8)
This sounds great! I particularly liked that you brought up S-risks and MCE. I think these are important considerations.
Kat Woods @ 2022-03-06T17:58 (+37)
Incubator for Independent Researchers
Training People to Work Independently on AI Safety
Problem: AI safety is bottlenecked by management and jobs. There are <10 orgs you can do AI safety full time at, and they are limited by the number of people they can manage and their research interests.
Solution: Make an “independent researcher incubator”. Train up people to work independently on AI safety. Match them with problems the top AI safety researchers are excited about. Connect them with advisors and teammates. Provide light-touch coaching/accountability. Provide enough funding so they can work full time or provide seed funding to establish themselves, after which they fundraise individually. Help them set up co-working or co-habitation with other researchers.
This could also be structured as a research organization instead of an incubator.
Kat Woods @ 2022-03-06T18:21 (+36)
EA Marketing Agency
Improve Marketing in EA Domains at Scale
Problem: EAs aren’t good at marketing, and marketing is important.
Solution: Fund an experienced marketer who is an EA or EA-adjacent to start an EA marketing agency to help EA orgs.
NunoSempere @ 2022-03-04T16:55 (+36)
Expected value calculations in practice
Invest in creating the tools to approximate expected value calculations for speculative projects, even if hard.
Currently, we can’t compare the impact of speculative interventions in a principled way. When making a decision about where to work or donate, longtermists or risk-neutral neartermists may have to choose an organization based on status, network effects, or expert opinion. This is, obviously, not ideal.
We could instead push towards having expected value calculations for more things. In the same way that GiveWell did something similar for global health and development, we could try to do something similar for longtermism/speculative projects. Longer writeup here.
Kat Woods @ 2022-03-06T18:13 (+35)
AGI Early Warning System
Anonymous Fire Alarm for Spotting Red Flags in AI Safety
Problem: In a fast takeoff scenario, individuals at places like DeepMind or OpenAI may see alarming red flags but not share them because of myriad institutional/political reasons.
Solution: create an anonymous form - a "fire alarm" (like a whistleblowing Andon cord of sorts) - where these employees can report what they're seeing. We could restrict the audience to a small council of AI safety leaders, who can then determine next steps. This could, in theory, provide days to months of additional response time.
Kat Woods @ 2022-03-06T18:07 (+35)
Alignment Forum Writers
Pay Top Alignment Forum Contributors to Work Full Time on AI Safety
Problem: Some of AF’s top contributors don’t actually work full-time on AI safety because they have a day job to pay the bills.
Solution: Offer them enough money to quit their job and work on AI safety full time.
Zac Townsend @ 2022-03-01T11:59 (+35)
(Per Nick's note, reposting)
Political fellowships
Values and Reflective Processes, Empowering Exceptional People
We'd like to fund ways to pull people who wouldn't otherwise run for political office into running for political office. It's like a MacArthur grant. You get a call one day. You've been selected. You'd make a great public servant, even if you don't know it. You'd get some training, like the DCCC and NRCC provide, and when you run, you get two million spent by the super-PAC run by the best. They've done the analysis. They'll provide funding. They've lined up endorsers. You've never thought about politics, but they've got your back. Say what you want to say, make a difference in the world: run the campaign you don't mind losing. And if you win, make it real.
Jan-WillemvanPutten @ 2022-03-04T09:10 (+3)
Great idea - at TFG we have similar thoughts and are currently researching whether we should run a program like this and how best to do so. Would love to get input from people on this.
Nathan Young @ 2022-03-07T11:21 (+34)
The Billionaire Nice List
Philanthropy
A regularly updated list of how much impact we estimate billionaires have created. Billionaires care about their public image, people like checking lists. Let's attempt to create a list which can be sorted by different moral weights and incentivises billionaires to do more good.
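A minimal sketch of the "sortable by different moral weights" feature: each billionaire carries per-cause impact estimates, and each reader supplies their own weights. All names and numbers below are placeholders, not real estimates.

```python
# Rank billionaires by a reader-supplied weighting over cause areas.

estimated_impact = {   # arbitrary impact units per cause area (hypothetical)
    "Billionaire A": {"global_health": 9, "x_risk": 1, "animal_welfare": 0},
    "Billionaire B": {"global_health": 2, "x_risk": 7, "animal_welfare": 1},
    "Billionaire C": {"global_health": 4, "x_risk": 2, "animal_welfare": 6},
}

def ranking(weights):
    # Score = weighted sum of per-cause impact; sort descending.
    scores = {
        name: sum(weights.get(cause, 0) * v for cause, v in impacts.items())
        for name, impacts in estimated_impact.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A longtermist reader and an animal-welfare-focused reader see different lists.
print(ranking({"x_risk": 1.0, "global_health": 0.2, "animal_welfare": 0.2}))
print(ranking({"animal_welfare": 1.0, "global_health": 0.5, "x_risk": 0.1}))
```

The hard part, of course, is producing the per-cause impact estimates, not the sorting; the sketch only shows why a single fixed ranking isn't needed.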
PeterSlattery @ 2022-03-09T05:09 (+9)
I really like this. I had a similar idea focused on trying to change the incentive landscape for billionaires to make it as high status as possible to be as high impact as possible. I think that lists and awards could be a good start. Would be especially good to have the involvement of some aligned ultrawealthy people who might have a good understanding of what will be effective.
Nathan Young @ 2022-03-10T10:53 (+3)
Yeah, I would love for those of us who know billionaires (or are billionaires) to give a sense of what motivates them.
evelynciara @ 2022-03-02T04:53 (+33)
Pro-immigration advocacy outside the United States
Economic Growth
Increasing migration to rich countries could dramatically reduce poverty and grow the world economy by up to 150%. Open Philanthropy has long had pro-immigration reform in the U.S. as a focus area, but the American political climate has been very hostile to and/or polarized on immigration, making it harder to make progress in the U.S. However, other high-income countries might be more receptive to increasing immigration, and would thus be easier places to make progress. For example, according to a 2018 Pew survey, 81% of Japanese citizens support increasing or keeping immigration levels about the same. It would be worth exploring which developed countries are most promising for pro-immigration advocacy, and then advocating for immigration there.
What this project could look like:
- Identify 2-5 developed countries where pro-immigration advocacy seems especially promising.
- Build partnerships with people and orgs in these countries with expertise in pro-immigration advocacy.
- Identify the most promising opportunities to increase immigration to these countries and act on them.
Related posts:
- Which countries are most receptive to more immigration?
- Understanding Open Philanthropy's evolution on migration policy
Greg_Colbourn @ 2022-03-02T12:28 (+5)
Japan is coming from a very low base - 2% of the population is foreign-born, vs. 15% in the US. A lot of room for more immigrants before "saturation" is reached, I guess. Although I imagine that xenophobia and racism are anti-correlated with immigration, at least at low levels [citation needed].
brb243 @ 2022-03-07T18:15 (+1)
Top countries by refugees per capita
The world's most neglected displacement crises
Should these countries be supported in their efforts (I read something like $0.1/person/day for food), and the crises themselves prevented, such as by supporting the parties in source areas to make and abide by legal agreements over resources, preventing the drug trade through higher-yield farming practices and education or urban career growth prospects, and improving curricula to develop skills in caring for others (teaching preventive healthcare and interactions based on others' preferences)? This could be a cost-effective alternative to pro-immigration advocacy. Otherwise, either privileged persons escape the poor situation, which remains unsolved, or unskilled persons with poor norms arrive at places that may not improve their subjective wellbeing, which is shaped by the norms they have internalized - if I understand it correctly?
evelynciara @ 2022-03-08T05:26 (+2)
Your question is very long and hard to understand. Can you please reword it in plain English?
brb243 @ 2022-03-08T12:51 (+1)
Displacement crises are large and neglected. For example, in one of the top 10 crises, 6,000 additional persons are displaced per day. Displaced persons can be supported with very low amounts, which make a large difference - for example, $0.1/day for food and a low amount for healthcare. In some cases, this support would otherwise not be provided. So, supporting persons in crises in emerging economies, even without solving the underlying issues, can be cost-effective compared to spending comparable effort on immigration reform.
Second, supporting countries that already host refugees of neglected crises to better accommodate these persons (so that they do not need to stay in refugee camps reliant on food and healthcare aid) - for example, through special economic zones (if these allow for savings accumulation) and education, so that refugees can integrate better and the public welcomes it due to economic benefits - can also be competitive in cost-effectiveness with immigration reform in countries with high public attention, political controversy, and much smaller refugee populations, such as the US. The intervention is more affordable, makes a larger difference for the intended beneficiaries, has a higher chance of political support, and can be institutionalized while solving the problem.
Third, allocating comparable skills to neglected crises, rather than to immigration reform in industrialized nations (such as the US) where a unit of decisionmaker attention can be much more costly, can resolve the causes of these crises. These causes can include a limited ability to draft and enforce legal agreements around natural resources, or violence related to the limited alternative prospects of drug farmers, which can be mitigated by sharing economic alternatives such as higher-yield commodity farming practices, agricultural value-addition skills, or upskilling systems related to work in urban areas. So, the cost-effectiveness of addressing neglected crises through legal, political, and humanitarian assistance can be much higher than lobbying for immigration reform in the US.
Peter Wildeford @ 2022-03-01T16:47 (+33)
Improving ventilation
Biorisk
Ventilation emerged as a potential intervention to reduce the risk of COVID and other pathogens. Additionally, poor air quality is a health concern in its own right, negatively affecting cognition and cognitive development. Despite this, there still does not seem to be commonly accepted wisdom about what kind of ventilation interventions ought to be pursued in offices, bedrooms, and other locations.
We'd like to see a project that does rigorous research to establish strong ventilation strategies in a variety of contexts and explores their effectiveness. Once successful ventilation strategies are developed, and assuming it would be cost-effective to do so, this project could then aim to roll them out and campaign/market for ventilation interventions as a for-profit, non-profit, or hybrid.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
simonfriederich @ 2022-03-01T09:19 (+33)
Advocacy organization for unduly unpopular technologies
Public opinion on key technologies.
Some technologies have enormous benefits, but they are not deployed very much because they are unpopular. Nuclear energy could be a powerful tool for enhancing access to clean energy and combating climate change, but it faces public opposition in Western countries. Similarly, GMOs could help solve the puzzle of feeding the global population with fewer resources, but public opinion is largely against them. Cellular agriculture may soon face similar challenges. Public opinion on these technologies must urgently be shifted. We’d like to see NGOs that create the necessary support via institutions and the media, without falling into the trap of partisan warfare with traditional environmentalists.
Jackson Wagner @ 2022-03-01T23:12 (+4)
Probably want to avoid unifying all of these under one "we advocate for things that most people hate" advocacy group! Although that would be pretty hilarious. But funding lots of little different groups in some of these key areas is great, such as trying to make it easier to build clean energy projects of all kinds as I mention here.
simonfriederich @ 2022-03-02T08:41 (+4)
Right, it sounds absurd and maybe hilarious, but it's actually what I had in mind. The advantage is internal coherence. The idea is basically to let "ecomodernism" go mainstream, having a Greenpeace-like org that has ideas more similar to the Breakthrough Institute. It's far from clear that this can work, but it's worth a try, in my view. About your suggestion: I love it and voted for it.
Jackson Wagner @ 2022-03-02T09:37 (+2)
Maybe so... like an economics version of the ACLU that builds a reputation of sticking up for things that are good even though they're unpopular. Might work especially well if oriented around the legal system (where ACLU operates and where groups like Greenpeace and the ever-controversial NRA have had lots of success), rather than purely advocacy? Having a unified brand might help convince people that our side has a point. For instance, a group that litigates to fight against nimbyism by complaining about the overuse of environmental laws or zoning regulations... the nimbys would naturally see themselves as the heroes of the story and assume that lawyers on the pro-construction side were probably villains funded by big greedy developers. Seeing that their opposition was a semi-respected ACLU-like brand that fought for a variety of causes might help change people's minds on an issue. (On the other hand, I feel like the legal system is fundamentally friendlier terrain for stopping projects than encouraging them, so the legal angle might not work well for GMOs and power plants. But maybe there are areas like trying to ban Gain-of-Function research where this could be a helpful strategy.)
We'd still probably want the brand of this group to be pretty far disconnected from EA -- groups like Greenpeace, the NRA, etc naturally attract a lot of controversy and demonization.
Andreas F @ 2022-03-02T17:50 (+2)
Since lifecycle analyses show that it is most likely the best option, I fully agree on the nuclear part.
I also agree on the GMO part, since large meta-analyses show no adverse effects on the environment (compared on yield/area, biodiversity/dollar, yield/dollar, and labor/yield) in comparison with other agriculture.
I have no assessment of cellular agriculture, but I do think it is fair to support such schemes (at least until we have solid data on this, and then decide again).
Peter S. Park @ 2022-03-01T22:36 (+2)
Note: Wanted to share an example. I think that while nuclear fission reactors are unpopular and this unpopularity is sticky, it is possible that efforts to preemptively decouple the reputation of nuclear fusion reactors from that of nuclear fission reactors can succeed (and that nuclear fusion's hypothetical positive reputation can be sticky over time). But it is also possible that the unpopularity of nuclear fission will stick to nuclear fusion.
Which of these two possibilities occurs, and how proactive action can change this, is mysterious at the moment. This is because our causal/theoretical understanding of the science of human behavior is incomplete (see my submission, "Causal microfoundations for behavioral science"). Preemptive action regarding historically unprecedented settings like emergent technologies - for which much of the relevant data may not yet exist - can be substantially informed by externally valid predictions of people's situation-specific behavior in such settings.
simonfriederich @ 2022-03-02T08:36 (+3)
Interesting thought. FWIW, I think it's more realistic that we can turn around public opinion on fission first, reap more of the benefits of fission, and then have a better public landscape for fusion, than that we accept the unpopularity of fission as a given but somehow end up with popular fusion. But I may well be wrong.
James Ozden @ 2022-02-28T23:59 (+33)
Building the grantmaker pipeline
Empowering Exceptional People, Effective Altruism
The amount of funding committed to Effective Altruism has grown dramatically in the past few years, with an estimated $46 billion currently earmarked for EA. With this significant increase in available funding, there is now a greatly increased need for talented and thoughtful grantmakers who can effectively deploy this money. It's plausible that yearly EA grantmaking could increase by a factor of 5-10x over the coming decade, and this requires finding new grantmakers, training them on best practices, and developing their judgement. We'd love to see projects that build the grantmaker pipeline, whether that's grantmaking fellowships, grantmaker mentoring, more frequent donor lotteries, more EA Funds-style organisations with rotating fund managers, or something else.
NB: This might be a refinement of fellowships, but I think it's particularly important.
Jackson Wagner @ 2022-03-01T21:23 (+7)
This is such a good idea that I think FTX is already piloting a regranting scheme as a major prong of their Future Fund program!
But it would be cool to build up the pipeline in other more general/systematic ways -- maybe with mentorship/fellowships, maybe with more experimental donation designs like donor lotteries and impact certificates, maybe with software that helps people to make EA-style impact estimates.
Cillian Crosson @ 2022-03-04T09:26 (+4)
It seems that FTX's Regranting Program could be a great way to scalably distribute funds & build the grantmaker pipeline.
We (Training for Good) are also developing a grantmaker training programme like what James has described here to help build up EA's grantmaking capacity (which could complement FTX's Regranting Program nicely). It will likely be an 8 week, part-time programme, with a small pot of "regranting" money for each participant and we're pretty excited to launch this in the next few months.
In the meantime, we're looking for 5-10 people to beta test a scaled-down version of this programme (starting at the end of March). The time commitment for this beta test would be ~5 hours per week (~2 hrs reading, ~2 hrs projects, ~1 hr group discussion). If anyone reading this is interested, feel free to shoot me an email cillian@trainingforgood.com
Kat Woods @ 2022-03-06T18:18 (+32)
Top ML researchers to AI safety researchers
Pay top ML researchers to switch to AI safety
Problem: <.001% of the world’s brightest minds are working on AI safety. Many are working on AI capabilities.
Solution: Pay them to switch. Pay them their same salary, or more, or maybe a lot more.
Kat Woods @ 2022-03-06T18:02 (+32)
EA Productivity Fund
Increase the output of top longtermists by paying for things like coaching, therapy, personal assistants, and more.
Problem: Longtermism is severely talent constrained. Yet, even though these services could easily increase a top EA's productivity by 10-50%, many can't afford them or would be put off by the cost (because of imposter syndrome or just because it feels selfish).
Solution: Create a lightly-administered fund to pay for them. It’s unclear what the best way would be to select who gets funding, but a very simple decision metric could be to give it to anybody who gets funding from Open Phil, LTFF, SFF, or FTX. This would leverage other people’s existing vetting work.
Nathan Young @ 2022-03-02T01:16 (+32)
Automated Open Project Ideas Board
The Future Fund
All of these ideas should be submitted to a board where anyone can forecast their value (in, say, lives saved per $) as it would be rated by a trusted research organisation, say Rethink Priorities. The forecasts can be reputation-based or prediction markets. That research organisation then checks 1% of the ideas and scores them, and these scores are used to weight the other forecasts. This creates a scalable system for ranking ideas. Funders can then donate to the ideas as they see fit.
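A minimal sketch of the weighting step in Python, assuming forecasters are scored on the audited 1% by mean absolute error and given inverse-error weights; the proposal doesn't fix a scoring rule, so this is one plausible choice, and all numbers are hypothetical.

```python
# Score forecasters on the audited subset, then aggregate their forecasts
# on unaudited ideas with inverse-error weights.
import numpy as np

# Forecasts of "impact per $" on 3 audited ideas, by 3 forecasters (rows).
audited_forecasts = np.array([
    [0.9, 2.0, 0.5],
    [1.5, 3.5, 0.2],
    [1.0, 2.2, 0.6],
])
audited_truth = np.array([1.0, 2.1, 0.55])   # the research org's own ratings

# Mean absolute error per forecaster; weight = 1 / (error + epsilon).
errors = np.abs(audited_forecasts - audited_truth).mean(axis=1)
weights = 1.0 / (errors + 1e-3)
weights /= weights.sum()

# Aggregate the same forecasters' predictions on an unaudited idea.
unaudited = np.array([1.2, 2.8, 1.4])
print("weighted estimate:", float(weights @ unaudited))
```

A production version would want proper scoring rules over full distributions rather than point estimates, and would need the audited subset to be chosen randomly so forecasters can't tell which ideas will be checked.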
Charlotte @ 2022-03-01T21:30 (+32)
Massive US-China exchange programme
Great power conflict, AI
Fund (university) students to live in the other country with a host family: between the US and China, Russia and the US, China and India, and potentially India and Pakistan. This is important if one thinks that personal experience makes it less likely that individuals incentivise or encourage escalation, war, and certain competitive dynamics.
Jackson Wagner @ 2022-03-01T22:32 (+8)
This might have a hard time meeting the same effectiveness bar as #13, "Talent Search" and #17, "Advocacy for US High-Skill Immigration", which might end up having some similar effects but seem like more leveraged interventions.
IanDavidMoss @ 2022-03-03T01:06 (+2)
I disagree, as this idea seems much more explicitly targeted at reducing the potential for great power conflict, and I haven't yet seen many other tractable ideas in that domain.
Alex D @ 2022-03-07T03:30 (+5)
My understanding is the Erasmus Programme was explicitly started in part to reduce the chance of conflict between European states.
Chris Leong @ 2022-03-01T03:53 (+32)
Nuclear/Great Power Conflict Movement Building
Effective Altruism
Given the current situation in Ukraine, movement-building related to nuclear x-risk or great power conflict would likely be much more tractable than it was until recently. We don't know how long this window will last, and the memory of the public can be short, so we should take advantage of this opportunity. This outreach should focus on people with an interest in policy or potential student group organisers, as these people are most likely to have an influence here.
Zac Townsend @ 2022-03-01T11:59 (+31)
(Per Nick's note, reposting)
Market shaping and advanced market commitments
Epistemic institutions; Economic Growth
Market shaping is when an idea can only be jump-started by committed demand or other forces. Operation Warp Speed is the most recent example of market shaping through advanced market commitments, but the mechanism has been used several times before in vaccine development. We are interested in funding work to understand when market shaping makes sense, ideas for creating and funding market-shaping methods, and specific market-shaping efforts or advanced market commitments in our areas of interest.
jh @ 2022-03-02T12:38 (+36)
(I drafted this then realized that it is largely the same as Zac's comment above - so I've strong upvoted that comment and I'm posting here in case my take on it is useful.)
Crowding in other funding
We're excited to see ideas for structuring projects in our areas of interest that leverage our funds by aligning with the tastes of other funders and investors. While we are excited about spending billions of dollars on the best projects we can find, we're also excited to include other funders and investors in the journey of helping these projects scale in the best way possible. We would like to maximize the chance that other sources of funding come in. Some projects are inherently widely attractive and some others are only ever likely to attract (or want) longtermist funding. But, we expect that there are many projects where one or more general mechanisms can be applied to crowd in other funding. This may include:
- Offering financial incentives (e.g. advanced market commitments)
- Highlighting financial potential in major projects we would like to see (e.g. especially projects of the scale of the Grok / Brookfield bid for AGL)
- Portfolio structures / financial engineering (e.g. Bridge Bio)
- Appealing to social preferences (e.g. highlight points of 'common sense' overlap between longtermist views and ESG)
colin @ 2022-03-03T14:32 (+1)
I'll add that advanced market commitments are also useful in situations where a jump-start isn't explicitly required. In that case, they can act similarly to prize-based funding.
RyanCarey @ 2022-03-05T00:30 (+30)
An Organisation that Sells its Impact for Profit
Empowering Exceptional People, Epistemic Institutions
Nonprofits are inefficient in some respects: they don't maximize value for anyone the way for-profits do for their customers. Moreover, they lack market valuations, so successful nonprofits scale too slowly while unsuccessful ones linger too long. One way to address this is to start an organisation that only accepts funding that incentivizes impact. Its revenue would come from: (1) selling impact certificates, (2) prizes, and/or (3) grants (but only if they value the work at a similar level to the impact certificates). Such an organization could operate on an entirely for-profit basis. Funding would be raised from for-profit investors. Staff would be paid in salary plus equity. The main premise here is that increased salaries are a small price to pay for the efficiencies that can be gained from for-profit markets. Of course, this can only succeed if the funding mechanisms (1-3) become sufficiently popular, but given the increased funding in longtermist circles, this now looks increasingly likely.
See also Retrospective grant evaluations, Retroactive public goods funding, Impact Markets, Megastar salaries for AI Alignment Work, Limited Scope Impact Purchase.
Peter Wildeford @ 2022-03-02T16:38 (+30)
Rationalism But For Group Psychology
Epistemic Institutions
LessWrong and the rationalist community have done well to highlight biases and help individuals become more rational, as well as creating a community around this. But most of the biggest things in life are done by groups and organizations.
We'd like to see a project that takes group psychology / organizational psychology and turns it into a rationalist movement with actionable advice to help groups be less biased and help groups achieve more impact, like how the original rationalist movement did so with individuals. We imagine this would involve identifying useful ideas from group psychology / organizational psychology literature and popularizing them in the rationalist community, as well as trying to intentionally experiment. Perhaps this could come up with better ideas for meetings, how to hire, how to attract talent, better ways to help align employees with organizational goals, better ways to keep track of projects, etc.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Gavin @ 2022-03-08T15:30 (+9)
The Epistea Summer Experiment was a glorious example of this.
Fai @ 2022-03-02T15:23 (+30)
Wild animal suffering in space
Space governance, moral circle expansion.
Terraforming other planets might cause animals to come to exist on those planets, whether through intentional or unintentional actions. These animals might live net negative lives.
Also, we cannot rule out the possibility that there are already wild "animals" (or sentient beings of any form) who might be living net negative lives on other planets. (This does not relate directly to the Fermi paradox, which concerns highly intelligent life, not life per se.)
Relevant research includes:
- Whether wild animals lead net negative or positive lives on earth, under what conditions. And whether this might hold the same in different planets.
- Tracking, or even doing research on using, AI and robotics to monitor and intervene in habitats. This might be critical if there are planets that have wild "animals" but are uninhabitable for humans, who therefore cannot stay close to monitor (or even intervene in) the welfare of these animals.
- Communication strategies related to wild animal welfare, as it seems to tend to cause controversy, if not outrage.
- Philosophical research, including population ethics, environmental ethics, comparing welfare/suffering between species, moral uncertainty, suffering-focused vs non-suffering focused ethics.
- General philosophical work on the ethics of space governance, in relation to nonhuman animals.
Denis Drescher @ 2022-03-06T00:24 (+6)
Another great concern of mine is that even if biological humans are completely replaced with ems or de novo artificial intelligence, these processes will probably run on great server farms that likely produce heat and need cooling. That results in a temperature gradient that might make it possible for small sentient beings, such as invertebrates, to live there. Their conditions may be bad, they may be r-strategists and suffer in great proportions, and they may also be numerous if these AI server farms spread throughout the whole light cone of the future.
My intuition is that very few people (maybe Simon Eckerström Liedholm?) have thought about this so far, so maybe there are easy interventions to make that less likely to happen.
Denis Drescher @ 2022-03-06T00:45 (+5)
Brian Tomasik and Michael Dello-Iacovo have related articles.
DonyChristie @ 2022-03-03T01:51 (+3)
Here's a related question I asked.
JanBrauner @ 2022-03-01T09:53 (+30)
AI alignment prize suggestion: Introduce AI Safety concepts into the ML community
Artificial Intelligence
Recently, there have been several papers published at top ML conferences that introduced concepts from the AI safety community into the broader ML community. Such papers often define a problem, explain why it matters, sometimes formalise it, often include extensive experiments to showcase the problem, sometimes include some initial suggestions for remedies. Such papers are useful in several ways: they popularise AI alignment concepts, pave the way for further research, and demonstrate that researchers can do alignment research while also publishing in top venues. A great example would be Optimal Policies Tend To Seek Power, published in NeurIPS. Future Fund could advertise prizes for any paper that gets published in a top ML/NLP/Computer Vision conference (from ML, that would be NeurIPS, ICML, and ICLR) and introduces a key concept of AI alignment.
Yonatan Cale @ 2022-03-01T17:18 (+2)
Risk:
The course presents possible solutions to these risks, and the students feel like they "understood" AI risk. In the future it will be harder to talk to these students about AI risk, since they feel like they already have an understanding, even though it is wrong.
I am specifically worried about this because I try imagining who would write the course and who would teach it. Will these people be able to point out the problems in the current approaches to alignment? Will these people be able to "hold an argument" in class well enough to point out holes in the solutions that the students will suggest after thinking about the problem for five minutes?
I'm not saying this isn't solvable, just a risk.
Chris Leong @ 2022-03-01T03:15 (+30)
EA Macrostrategy:
Effective Altruism
Many people write about the general strategy that EA should take, but almost no-one outside of CEA has this as their main focus. Macrostrategy involves understanding all of the different organisations and projects in EA, how they work together, what the gaps are and the ways in which EA could fail to achieve its goals. Some resources should be spent here as an exploratory grant to see what this turns up.
Arb @ 2022-03-08T00:27 (+29)
Evaluating large foundations
Effective Altruism
Givewell looks at actors: object-level charities, people who do stuff. But logically, it's even more worth scrutinising megadonors (assuming that they care about impact or public opinion about their operations, and thus that our analysis could actually have some effect on them).
For instance, we've seen claims that the Global Fund, who spend $4B per year, meet a 2x GiveDirectly bar but not a Givewell Top Charity bar.
This matters because most charity - and even most good charity - is still not by EAs or run on EA lines. Also, even big cautious foundations can risk waste / harm, as arguably happened with the Gates Foundation and IHME - it's important to understand the base rate of conservative giving failing, so that we can compare hits-based giving. And you only have to persuade a couple of people in a foundation before you're redirecting massive amounts.
James Ozden @ 2022-03-01T00:34 (+29)
Refining EA communications and messaging
Values and Reflective Processes, Research That Can Help Us Improve
If we want to motivate a broad spectrum of people about the importance of doing good and ensuring the long-term goes well, it's imperative we find out which messages are "sticky" and which ones are forgotten quickly. Testing various communication frames, particularly for key target audiences like highly talented students, will support EA outreach projects in better tailoring their messaging. Better communications could hugely increase the number of people that consume EA content, relate to the values of the EA movement, and ultimately commit their life to doing good. We'd be excited to see people testing various frames and messaging, across a range of target audiences, using methodologies such as surveys, focus groups, digital media, and more.
Jack Lewars @ 2022-03-02T20:21 (+1)
I think this exists (but could be much bigger and should still be funded by this fund).
Yonatan Cale @ 2022-02-28T21:52 (+29)
TL;DR: EA Retroactive Public Good's Funding
In your format:
Deciding which projects to fund is hard, and one of the reasons for that is that it's hard to guess which projects will succeed and which will fail. But wait, startups have solved this problem perfectly: Anybody is allowed to vet a startup and decide to invest (bet) their money on this startup succeeding, and if the startup does succeed, then the early investors get a big financial return.
The EA community could do the same, only it is missing the part where we give big financial returns to projects that turned out good.
This would make the fund's job much easier: They would have to vet which project helped IN RETROSPECT, which is much easier, and they'll leave the hard prediction work to the market.
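A minimal sketch of the payout arithmetic under one possible design, assuming early backers collectively buy a fixed fraction of the project's impact and the fund later sets a retrospective valuation; every figure below is hypothetical.

```python
# Retroactive public goods funding, simplest form: early funders buy shares
# of a project's future "impact value"; if the fund later judges the project
# a success, it pays out pro rata at its retrospective valuation.

seed_investments = {"alice": 20_000, "bob": 5_000}   # early bets on the project
impact_share_sold = 0.5     # fraction of the impact owned by the backers

retro_valuation = 120_000   # fund's retrospective valuation of the impact
payout_pool = retro_valuation * impact_share_sold

total_invested = sum(seed_investments.values())
payouts = {
    name: payout_pool * amount / total_invested
    for name, amount in seed_investments.items()
}
print(payouts)   # {'alice': 48000.0, 'bob': 12000.0} - a 2.4x return on the bet
```

The key property is that the fund only has to value impact after the fact, while the return multiple gives early funders the startup-investor-style incentive the comment describes.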
Context for proposing this
I heard of a promising EA project that is for some reason having trouble raising funds. I'm considering funding it myself, though I am not rich and that would be somewhat broken to do. But I AM rich enough to fund this project and bet on it working well enough to get a Retroactive Public Good grant in the future, if such a thing existed. I also might have some advantage over the EA Fund in vetting this project.
In Vitalik's words:
https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c
Ben Dean @ 2022-03-09T20:39 (+2)
Related: Impact Certificates
Kat Woods @ 2022-03-06T18:05 (+28)
EA Forum Writers
Pay top EA Forum contributors to write about EA topics full time
Problem: Some of the EA Forum’s top writers don’t work on EA, but contribute some of the community’s most important ideas via writing.
Solution: Pay them to write about EA ideas full time. This could be combined with the independent researcher incubator quite well.
Nathan Young @ 2022-03-07T14:44 (+5)
Pay users based on post karma.
(but not comment or question karma which are really easy to get in comparison)
Yitz @ 2022-03-08T05:05 (+3)
Could create a disincentive to post more controversial ideas there, though.
Chris Leong @ 2022-03-09T04:20 (+2)
Goodhart's law
Nathan Young @ 2022-03-10T10:51 (+2)
I don't think we'd be wedded to a single metric. Also, isn't karma already vulnerable to Goodhart's law? I think we should already be concerned about this.
Nathan Young @ 2022-03-10T10:50 (+2)
I don't think we'd be wedded to this metric
Denis Drescher @ 2022-03-04T01:44 (+28)
A “Red Team” to rigorously explore possible futures and advocate against interventions that threaten to backfire
Research That Can Help Us Improve, Effective Altruism, Epistemic Institutions, Values and Reflective Processes
Motivation. There are a lot of proposals here. There are additional proposals on the Future Fund website, and more still on various lists I have collected. Many EA charities are already implementing ambitious interventions. But really, we're quite clueless about what the future will bring.
This week alone I've discussed with friends and acquaintances three decisions, in completely different contexts, that might make the difference between paradise and hell for all sentient life - not just in the abstract, in the way that cluelessness forces us to assign some probability to almost any outcome, but in the sense where we could point to concrete mechanisms along which the failure might occur. Yet we had to decide. I imagine that people in more influential positions than mine have to make similar decisions on almost a daily basis, and on hardly any more information.
As a result, the robustness of an intervention has been the key criterion for prioritization for me for the past six years now. It’s something like the number and breadth of scenarios that trusted, impartial people have thought through along which the intervention may have any effect, especially unintended effects, divided by the number of bad failure modes that they’ve found and haven’t been able to mitigate. I mostly don’t bother to think about probabilities for this exercise, though that would be even better.
Tools. Organizations should continue to do their own red-teaming in-house, but that is probably always going to be less rigorous and systematic than what a dedicated red-teaming team could do.
I hear that Policy Horizons Canada (h/t Jacques Thibodeau), RAND (h/t Christian Tarsney), and others have experience in eliciting such scenarios systematically. The goal is often to have a policy response ready for every eventuality. The focus may need to be different for effective altruism, especially for any interventions to do with existential and suffering risks: We can probe our candidate interventions using scenario planning or ensemble simulations and discard them if they seem too risky.
I imagine that you could also set up a system akin to a prediction market platform but with stronger incentives to create new markets and to conditionally chain markets, including UI/UX that makes voting on conditional predictions frictionless, without much mental math.
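To make the chaining idea concrete, here is a minimal toy sketch of a conditional market (the payout rule and all names are my own illustrative assumptions, not an existing platform's design); chaining would mean that the condition of one market is itself the outcome of another:

```python
# Toy conditional market: bets on the question only count if the condition
# occurs; otherwise all stakes are refunded. Fees, duplicate traders, and
# empty winner pools are ignored for brevity.
from dataclasses import dataclass, field

@dataclass
class ConditionalMarket:
    question: str   # e.g. "Does intervention X backfire?"
    condition: str  # e.g. "Intervention X actually gets deployed"
    stakes: list = field(default_factory=list)  # (trader, bet_yes, amount)

    def bet(self, trader: str, bet_yes: bool, amount: float) -> None:
        self.stakes.append((trader, bet_yes, amount))

    def resolve(self, condition_happened: bool, outcome_yes: bool) -> dict:
        if not condition_happened:  # condition failed: refund every stake
            return {trader: amount for trader, _, amount in self.stakes}
        pot = sum(amount for _, _, amount in self.stakes)
        winners = [(t, a) for t, yes, a in self.stakes if yes == outcome_yes]
        winner_pot = sum(a for _, a in winners)
        # Winners split the whole pot in proportion to their stakes.
        return {t: pot * a / winner_pot for t, a in winners}
```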
Jacques Thibodeau is also thinking about using machine learning tools to aid with the elicitation of scenarios.
Organization. The final “Red Team” organization needs to have strong social buy-in from EA-branded organizations to offset the social awkwardness of being a perpetual critic. It’ll probably need to recruit the sort of people who thrive in the role of a devil’s advocate. But it’ll also need to hold itself to its own high standards and disband if it finds that its own mission is at risk of backfiring badly.
Will Kirkpatrick @ 2022-03-07T00:50 (+1)
I had a similar idea, and I think that a few more things need to be included in the discussion of this.
There are multiple levels of ideas in EA, and I think that a red team becomes much more valuable when they are engaging with issues that are applicable to the whole of EA.
I think ideas like the institutional critique of EA, the other heavy tail, and others are often not read and internalized by EAs. I think it is worth having a team that makes arguments like this, then breaks them down and provides methods for avoiding the pitfalls pointed out in them.
Critiques of EA should be specifically recognized and talked about as valuable. These ideas should be held up to be examined, then passed out to our community so that we can grow and overcome the objections.
I'm almost always lurking on the forum, and I don't often see posts talking about EA critiques.
That should change.
Denis Drescher @ 2022-03-07T14:44 (+2)
I basically agree but in this proposal I was really referring to such things as “Professor X is using probabilistic programming to model regularities in human moral preferences. How can that backfire and result in the destruction of our world? What other risks can we find? Can X mitigate them?”
I also think that the category that you’re referring to is very valuable but I think those are “simply” contributions to priorities research as they are published by the Global Priorities Institute (e.g., working papers by Greaves and Tarsney come to mind). Rethink Priorities, Open Phil, FHI, and various individuals also occasionally publish articles that I would class that way. I think priorities research is one of the most important fields of EA and much broader than my proposal, but it is also well-known. Hence why my proposal is not meant to be about that.
Nathan Young @ 2022-03-02T02:00 (+28)
Subsidise catastrophic risk-related markets on prediction markets
Prediction markets and catastrophic risk
Many markets don't exist because there isn't enough liquidity. A fund could create important longtermist markets on biorisk, AI safety, and nuclear war by pledging to provide significant liquidity once they are created. This would likely still only work for markets resolving in 1-10 years, due to inflation, but still*.
*It has been suggested to run prediction markets which use indices rather than currency. But people have shown reluctance to bet on ETH markets, so they might show reluctance here too.
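One concrete way to implement such a liquidity pledge is Hanson's logarithmic market scoring rule (LMSR), where a single parameter b sets both the liquidity and the sponsor's maximum loss. A minimal sketch (the subsidy figure is an arbitrary example):

```python
import math

def lmsr_cost(q: list[float], b: float) -> float:
    """Hanson's LMSR cost function; b sets liquidity (and the max subsidy)."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q: list[float], b: float, i: int) -> float:
    """Instantaneous price (implied probability) of outcome i."""
    total = sum(math.exp(qi / b) for qi in q)
    return math.exp(q[i] / b) / total

def trade_cost(q: list[float], b: float, i: int, shares: float) -> float:
    """What a trader pays the market maker to buy `shares` of outcome i."""
    q_after = q.copy()
    q_after[i] += shares
    return lmsr_cost(q_after, b) - lmsr_cost(q, b)

# The sponsor's worst-case loss on a binary market is b * ln(2), so a
# $10,000 subsidy pledge corresponds to b ≈ 14,427.
b = 10_000 / math.log(2)
q = [0.0, 0.0]                     # no shares sold yet: implied 50/50
print(lmsr_price(q, b, 0))         # 0.5
print(trade_cost(q, b, 0, 5_000))  # cost of buying 5,000 YES shares
```

The appeal for a longtermist fund is that the worst-case subsidy is known in advance, however thin the organic trading interest turns out to be.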
Jackson Wagner @ 2022-03-02T03:27 (+15)
FTX, which itself runs prediction markets, might be particularly well-suited for prediction-market interventions like this. I myself think that they could do a lot to advance people's understanding of prediction markets if, in addition to their presidential prediction market, they also offered a conditional market on how an indicator like the S&P 500 would do one week after the 2024 election, conditional on the Republicans winning vs. the Democrats winning. Conditional prediction markets on important indicators for big national elections would provide directly useful info in addition to educating people about prediction markets' potential.
Alex D @ 2022-03-08T17:21 (+1)
My company seeks to predict or rapidly recognize health security catastrophes, and also requires an influx of capital when such an event occurs (since we wind up with loads of new consulting opportunities to help respond).
Is there currently any way for us to incentivize thick markets on topics that are correlated with our business? The idea of getting the information plus the hedge is super appealing!
Peter Wildeford @ 2022-03-01T16:28 (+28)
Pandemic preparedness in LMIC countries
Biorisk
COVID has shown us that biorisk challenges fall on all countries, regardless of how prepared and well-resourced the countries are. While there certainly are many problems with pandemic preparedness in high-income countries that need to be addressed, LMIC countries face even more issues in helping detect, identify, contain, mitigate, and/or prevent currently known and novel pathogens. Additionally, even after high-income countries successfully contain a pathogen, it may continue to spread within LMIC countries, opening up the risk of further, more virulent mutations.
We'd like to see a project that works with LMIC governments to understand their current pandemic prevention plans and their local context. This project would be especially focused on novel pathogens that are more severe than currently known pathogens, and would help provide the resources and knowledge needed to upgrade their plans to match the best practices of current biorisk experts. Such a project would likely benefit from a team with expertise in working with LMIC countries. An emergency fund and expert advice could also be provisioned to be ready to go when pathogens are detected.
Within the effective altruism movement, some organizations that have successfully worked with LMIC countries like the Lead Exposure Elimination Project, Fortify Health, Suvita, Wave, and Fish Welfare Initiative could be consulted about how they successfully adapted to local contexts. Outside of our immediate movement, it will be important to work with the Coalition for Epidemic Preparedness Innovations (CEPI) and understand what they are doing and where their efforts may need additional support.
A large grant to Coalition for Epidemic Preparedness Innovations (CEPI) targeted to concerns about novel pathogens may also be warranted.
Disclaimer: This is just my personal opinion and not the opinion of Rethink Priorities. This project idea was not seen by anyone else at Rethink Priorities prior to posting.
Arb @ 2022-03-10T18:22 (+27)
Language models for detecting bad scholarship
Epistemic institutions
Anyone who has done desk research carefully knows that many citations don't support the claim they're cited for - usually in a subtle way, but sometimes as a total non sequitur. Here's a fun list of 13 features we need to protect ourselves from.
This seems to be a side effect of academia scaling so much in recent decades - it's not that scientists are more dishonest than other groups, it's that they don't have time to carefully read everything in their sub-sub-field (... while maintaining their current arms-race publication tempo).
Take some claim P which is not obvious enough to go without a citation.
It seems relatively easy, given current tech, to answer: (1) "Does the cited article say P?" This question is closely related to document summarisation - not a solved task, but the state of the art is workable. Having a reliable estimate of even this weak kind of citation quality would make reading research much easier - but under the above assumption of unread sources, it would also stop many bad citations from being written in the first place.
It is very hard to answer (2) "Is the cited article strong evidence for P?", mostly because of the lack of a ground-truth dataset.
We elaborate on this here.
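As a rough baseline for check (1) with current tools (not the approach we elaborate on in the linked document), one could split the cited article into passages and ask an off-the-shelf natural language inference model whether any passage entails P. The model name below is just one publicly available example:

```python
# Flag citations where no passage of the cited article entails the claim.
import numpy as np
from sentence_transformers import CrossEncoder

# One publicly available NLI cross-encoder; any MNLI-style model would do.
# For this model family, logits are ordered [contradiction, entailment, neutral].
nli = CrossEncoder("cross-encoder/nli-deberta-v3-base")

def max_entailment(passages: list[str], claim: str) -> float:
    """Highest entailment probability of `claim` across the article passages."""
    logits = nli.predict([(passage, claim) for passage in passages])
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float(probs[:, 1].max())

# Invented toy example: the passage plainly does not entail the claim.
passages = ["The treatment reduced mortality by 3% (95% CI: 1-5%)."]
claim = "The treatment eliminated mortality."
if max_entailment(passages, claim) < 0.5:
    print("Citation may not support the claim - route to a human reviewer")
```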
(Thanks to Jungwon Byun and Gwern Branwen for comments.)
Leo Gao @ 2022-03-08T04:17 (+27)
Getting former hiring managers from quant firms to help with alignment hiring
Artificial Intelligence, Empowering Exceptional People
Despite having lots of funding, alignment seems not to have been very successful at attracting top talent to date. Quant firms, on the other hand, have become known for very successfully acquiring talent and putting it to work on difficult conceptual and engineering problems. Although the need for buy-in to alignment before one can contribute is often cited as a reason, this is, if anything, even more of a problem for quant firms, since very few people are inherently interested in quant trading as an end. As such, importing some of this know-how could substantially improve alignment hiring and onboarding efficiency.
Arb @ 2022-03-08T00:30 (+27)
On malevolence: How exactly does power corrupt?
Artificial Intelligence / Values and Reflective Processes
How does it happen, if it happens? Some plausible stories:
- Backwards causation: People who are “corrupted” by power always had a lust for power but deluded others and maybe even themselves about their integrity;
- Being a good ruler (of any sort) is hard and at times very unpleasant; even the nicest people will try to cover up their faults, covering up causes more problems... and at some point it is very hard to admit that you were an incompetent ruler all along.
- Power changes your incentives so much that it corrupts all but the strongest. The difference with the last one is that value drift is almost immediate upon getting power.
- A mix of the last two would be: you get more and more adverse incentives with every rise in power.
- It might also be the case that most idealistic people come into power under very stressful circumstances, which force them to make decisions favouring consolidation of power (kinda like instrumental convergence).
- See also this on the personalities of US presidents and their darknesses.
MaxRa @ 2022-03-10T18:15 (+2)
Yes, that's interesting and plausibly very useful to understand better. Might also affect some EAs at some point.
> Power changes your incentives so much that it corrupts all but the strongest. The difference with the last one is that value drift is almost immediate upon getting power.
The hedonic treadmill might be part of it. You get used to the personal perks quickly, so you still feel motivated & justified to put ~90% of your energy into problems that affect you personally -> removing threats to your rule, marginal status-improvements, getting along with people close to you.
And some discussion about the backwards causation idea is here in an oldie from Yudkowsky: Why Does Power Corrupt?
Kat Woods @ 2022-03-06T18:07 (+27)
Bounty Budgets
Like Regranting, but for Bounties
Problem: In the same way that regranting decentralizes grantmaking, we could do the same thing for bounties. For example, give the top 20 AI safety researchers up to $100,000 each to create bounties or RFPs for, say, technical research problems. They could also reallocate their budget to other trusted people, creating a system of decentralized trust.
In theory, FTX’s regrantors could already do this with their existing budgets, but this would encourage people to think creatively about using bounties or RFPs.
Bounties are great because you only pay out if it's successful. If hypothetically each researcher created 5 bounties at $10,000 each that’d be 100 bounties - lots of experiments.
RFPs are great because it puts less risk on the applicants but also is a scalable, low-management way to turn money into impact.
Examples: 1) I’ll pay you $1,000 for every bounty idea that gets funded
2) Richard Ngo
SjirH @ 2022-03-04T10:34 (+27)
More public EA charity evaluators
Effective Altruism
There are dozens of EA fundraising organizations deferring to just a handful of organizations that publish their research on funding opportunities, most notably GiveWell, Founders Pledge and Animal Charity Evaluators. We would like to see more professional funding opportunity research organizations sharing their research with the public, both to increase the quality of research in the areas that are currently covered - through competition and diversity of perspectives and methodologies - and to cover important areas that aren’t yet covered such as AI and EA meta.
gruban @ 2022-03-02T17:40 (+27)
Longtermist risk screening and certification of institutions
Artificial Intelligence, Biorisk and Recovery from Catastrophe
Companies, nonprofits and government institutions participate and invest in activities that might significantly increase global catastrophic risk like gain-of-function research or research that might increase the likelihood of unaligned AGI. We’d like to see an organisation that evaluates and proposes policies and practices that should be followed in order to reduce these risks. Institutions that commit to following these practices and submit themselves to independent audits could be certified. This could help investors and funders to screen institutions for potential risks. It could also be used in future corporate campaigns to move companies and investors into adopting responsible practices.
Nathan Young @ 2022-03-07T14:43 (+2)
How would this be effective, rather than creating additional work for grantmakers and increasing the entry barriers for grantees? It seems that many similar schemes for other kinds of risk end up as meaningless box-ticking enterprises, which would lead to less effectiveness and possibly reputational harm to EA.
This is my prior when I hear a new audit proposed, though I hope it won't apply in your case.
gruban @ 2022-03-08T09:01 (+1)
I agree that there is a risk that this leads to additional burden without meaningful impact.
Seeing the number of certifications currently deployed that are used both publicly for marketing and to reduce supply-chain risks (see for example this certifier), I would put the chance that longtermist causes like biosecurity risks will be incorporated into existing standards or launched as new standards within the next 10 years at 70%.
We could preempt this by building one or more standards based on actual expected impact instead of box-ticking. If this bet works out, we might make a counterfactual impact. However, I would also like to see the organisation shut down after its initial research if it doesn't see a path to a certification having impact.
Jackson Wagner @ 2022-03-01T02:12 (+27)
Resilient ways to archive valuable technical / cultural / ecological information
Biorisk and recovery from catastrophe
In ancient Sumeria, clay tablets recording ordinary market transactions were considered disposable. But today's much larger and wealthier civilization considers them priceless for the historical insight they offer. By the same logic, if human civilization millennia from now becomes a flourishing utopia, they'll probably wish that modern-day civilization had done a better job at resiliently preserving valuable information. For example, over the past 120 years, around 1 vertebrate species has gone extinct each year, meaning we permanently lose the unique genetic info that arose in that species through millions of years of evolution.
There are many existing projects in this space -- like the internet archive, museums storing cultural artifacts, and efforts to protect endangered species. But almost none of these projects are designed robustly enough to last many centuries with the long-term future in mind. Museums can burn down, modern digital storage technologies like CDs and flash memory aren't designed to last for centuries, and many critically endangered species (such as those which are "extinct in the wild" but survive in captivity) would likely go extinct if their precarious life-support breeding programs ever lost funding or were disrupted by war/disaster/etc. At FTX, we're potentially interested in funding new, resilient approaches to storing valuable information, including the DNA sequences of living creatures.
(Filed under "recovery from catastrophe" because it involves archiving and burying stuff, but importantly I think the benefit of resilient cultural/ecological archiving (rather than preserving crucial technical knowledge) is actually larger in best-case utopian scenarios.)
Denis Drescher @ 2022-03-05T14:50 (+2)
Agreed, very important in my view! I’ve been meaning to post a very similar proposal with one important addition:
Anthropogenic causes of civilizational collapse are (arguably) much more likely than natural ones. These anthropogenic causes are enabled by technology. If we preserve an unbiased sample of today’s knowledge or even if it’s the knowledge that we consider to have been most important, it may just steer the next cycle of our civilization right into the same kind of catastrophe again. If we make the information particularly durable, maybe we’ll even steer all future cycles of our civilization into the same kind of catastrophe.
The selection of the information needs to be very carefully thought out. Maybe only information on thorium reactors rather than uranium ones; only information on clean energy sources; only information on proof of stake; only information on farming low-suffering food; no prose or poetry that glorifies natural death or war; etc.
I think that is also something that none of the existing projects take into account.
Kat Woods @ 2022-03-06T18:04 (+26)
AI Safety “school” / More AI safety Courses
Train People in AI Safety at Scale
Problem: Part of the talent bottleneck is caused by there not being enough people who have the relevant skills and knowledge to do AI safety work. Right now, there’s no clear way to gain those skills. There’s the AGI Fundamentals curriculum, which has been a great success, but aside from that, there’s just a handful of reading lists. This ambiguity and lack of structure lead to way fewer people getting into the field than otherwise would.
Solution: Create an AI safety “school” or a bunch more AI safety courses. Make it so that if you finish the AGI Fundamentals course there are a lot more courses where you can dive deeper into various topics (e.g. an interpretability course, a value learning course, an agent foundations course, etc.). Make it so there’s a clear curriculum to build up your technical skills (probably just finding the best existing courses, putting them in the right order, and adding some accountability systems). This could be funded course by course, or funded as a school, which would probably lead to more and better-quality content in the long run.
Taras Morozov @ 2022-03-04T16:52 (+26)
Offer paid sabbatical to people considering changing careers
Empowering Exceptional People
People are sometimes locked into their non-EA careers because, while working, they do not have time to:
- Prioritize what altruistic job would fit them best
- Learn what they need for this job
Create an organization that will offer paid sabbaticals to people considering changing careers to more EA-aligned jobs to help this transition. During the sabbatical, they could be members of a community of people in a similar situation, with coaching available.
PeterSlattery @ 2022-03-08T03:45 (+12)
Agree. I think that having an Advance Market Commitment system for this makes sense. E.g., FTX says 'We will fund mid-career academics/professionals for up to x months to do y.' My experience is that most of the high-value professionals I know are sufficiently time-poor and dissuaded by uncertainty that they won't spend 2-5 hours applying for something they don't know they will get. The barriers and costs are probably greater than most EA funders realise.
An alternative/related idea is to have a simple EOI system where people can submit a fleshed-out CV and a paragraph and then get an AMC on an application - e.g., 'We think that there is a more than 60% chance that we would fund this and would therefore welcome a full application.'
SjirH @ 2022-03-04T10:38 (+26)
A public EA impact investing evaluator
Effective Altruism, Empowering Exceptional People
Charity evaluators that publicly share their research - such as GiveWell, Founders Pledge and Animal Charity Evaluators - have arguably not only helped move a lot of money to effective funding opportunities but also introduced many people to the principles of effective altruism, which they have applied in their lives in various ways. Apart from some relatively small projects (1) (2) (3) there is currently no public EA research presence in the growing impact investing sector, which is both large in the amount of money being invested and in its potential to draw more exceptional people’s attention to the effective altruism movement. We’d love to see an organization that takes GiveWell-quality funding opportunity research to the impact investing space and publicly shares its findings.
Brendon_Wong @ 2022-08-28T22:43 (+2)
Seeing this late, but this is a wonderful idea! Will Roderick and I worked on "GiveWell for Impact Investing" a while ago and published this research on the EA Forum. We ultimately pursued other professional priorities, but we continue to think the space is very promising, stay involved, and may reenter it in the future.
Linch @ 2022-03-04T01:21 (+25)
Predicting Our Future Grants
Epistemic Institutions, Research That Can Help Us Improve
If we had access to a crystal ball that allowed us to know exactly what our grants five years from now would otherwise have been, we could make substantially better decisions now. Just making the grants we'd otherwise have made five years in the future can save a lot of grantmaking time and money, as well as cause many amazing projects to happen more quickly.
We don't have a crystal ball that lets us see future grants. But perhaps high-quality forecasts can be the next best thing. Thus, we're extremely excited about people experimenting with Prediction-Evaluation setups to predict the Future Fund's future grants with high accuracy, helping us to potentially allocate better grants more quickly.
agnode @ 2022-03-01T22:44 (+25)
Participatory longtermism
Values and reflective processes, Effective Altruism
Most longtermist and EA ideas come from a small group of people with similar backgrounds, but could affect the global population now and in the future. This creates the risk of longtermist decisionmakers not being aligned with that wider population. Participatory methods aim to involve people in decisionmaking about issues that affect them, and they have become common in fields such as international development, global health, and humanitarian aid. Although a lot could be learned from existing participatory methods, they would need to be adapted to issues of concern to EAs and longtermists. The fund could support the development of new participatory methods that fit with EA and longtermist concerns, and could fund the running of participatory processes on key issues.
Additional notes:
- There is a field called participatory futures, however it seems not very rigorous [based on a very rough impression, however see comment below about this], and as far as I know hasn't been applied to EA issues.
- Participedia has writeups of participatory methods and case studies from a variety of fields.
Gavin @ 2022-03-01T22:47 (+6)
This comments section is pretty participatory.
MaxRa @ 2022-03-10T17:59 (+3)
Cool idea! :) You might be interested in skimming the report Deliberation May Improve Decision-Making from Rethink Priorities.
> In this essay from Rethink Priorities, we discuss the opportunities that deliberative reforms offer for improving institutional decision-making. We begin by describing deliberation and its links to democratic theory, and then sketch out examples of deliberative designs. Following this, we explore the evidence that deliberation can engender fact-based reasoning, opinion change, and under certain conditions can motivate longterm thinking. So far, most deliberative initiatives have not been invested with a direct role in the decision-making process and so the majority of policy effects we see are indirect. Providing deliberative bodies with a binding and direct role in decision-making could improve this state of affairs. We end by highlighting some limitations and areas of uncertainty before noting who is already working in this area and avenues for further research.
JBPDavies @ 2022-03-03T09:44 (+3)
Love the idea - just writing to add that Futures Studies, participatory futures in particular, and future scenario methodologies could be really useful for longtermist research. Methods in these fields can be highly rigorous (I've been working with some futures experts as part of a project to design 3 visions of the future, which have just finished going through a lengthy stress-testing and crowd-sourcing process to open them up to public reflection and input), especially if the scenario design is approached in a systematised way using a well-developed framework.
I could imagine various projects that aim to create a variety of different desirable visions of the future through participatory methods, identifying core characteristics, pathways towards them, system dynamics and so on to illustrate the value and importance of longtermist governance to get there. Just one idea, but there are plenty of ways to apply this field to EA/Longtermism!
Would love to talk about your idea more as it also chimes with a paper I'm drafting, 'Contesting Longtermism', looking at some of the core tensions within the concept and how these could be opened up to wider input. If you're interested in talking about it, feel free to reach out to me at j.b.p.davies@uu.nl
agnode @ 2022-03-03T18:51 (+1)
Thanks for the point about rigor - I'm not that familiar with participatory futures but had encountered it through an organisation that tends to be a bit hypey. But good to know there is rigorous work in that field.
I agree that there are lots of opportunities to apply to EA/Longtermism and your paper sounds interesting. I'll send an email.
Jackson Wagner @ 2022-03-01T02:11 (+25)
Research on the long-run determinants of civilizational progress
Economic growth
What factors were the root cause of the industrial revolution? Why did industrialization happen in the time and place and ways that it did? How have the key factors supporting economic growth changed over the last two centuries? Why do some developing countries manage to "catch up" to the first world, while others lag behind or get stuck in a "middle-income trap"? Is the pace of entrepreneurship or scientific innovation slowing down -- and if so, what can we do about it? Is increasing amounts of "vetocracy" an inevitable disease that afflicts all stable and prosperous societies (as Holden Karnofsky argues here), or can we hope to change our culture or institutions to restore dynamism? At FTX, we'd be interested to fund research into these "progress studies" questions. We're also interested in funding advocacy groups promoting potential policy reforms derived from the ideas of the progress studies movement.
Jackson Wagner @ 2022-03-01T23:22 (+2)
See also many of Zac Townsend's ideas, the idea of nuclear power & GMO advocacy, and my list of object-level planks in the progress-studies platform.
tamc @ 2022-03-01T09:09 (+24)
Pay prestigious universities to host free EA-related courses to very large numbers of government officials from around the world
Empowering Exceptional People
The direct benefit of the courses would be to give government officials better tools for thinking and talking with each other.
The indirect benefit could be to allow large numbers of pre-disposed officials to be seen by <some organisation> who could use the opportunity to identify those with particular potential and offer them extra support or opportunities so they can make an even bigger impact.
The need for it to be free is to overcome the blocker of otherwise needing to write a business case for attendance which may then require some sort of tortuous approval process.
The need for it to be hosted at a prestigious university is to overcome the blocker of justifying to bosses or colleagues why the course is worthwhile by allowing piggybacking off the University's brand.
gavintaylor @ 2022-03-03T20:47 (+23)
Infrastructure to support independent researchers
Epistemic Institutions, Empowering Exceptional People
The EA and Longtermist communities appear to contain a relatively large proportion of independent researchers compared to traditional academia. While working independently can provide the freedom to address impactful topics by liberating researchers from the perverse incentives, bureaucracy, and other constraints imposed on academics, the lack of institutional support can impose other difficulties that range from routine (e.g. difficulties accessing pay-walled publications) to restrictive (e.g. lack of mentorship, limited opportunities for professional development). Virtual independent scholarship institutes have recently emerged to provide institutional support (e.g. affiliation for submitting journal articles, grant management) for academic researchers working independently. We expect that facilitating additional and more productive independent EA and Longtermist research will increase the demographic diversity and expand the geographical inclusivity of these communities of researchers. Initially, we would like to determine the main needs and limitations independent researchers in these areas face and then support the creation of a virtual institute focussed on addressing those points.
This project was inspired by proposals written by Arika Virapongse and recent posts by Linch Zhang.
Jackson Wagner @ 2022-08-14T01:16 (+4)
(I think this is a good idea! For anyone perusing these FTX project ideas in the future, here is a post I wrote exploring drawbacks and uncertainties that prevent people like me from getting excited about independent research as a career.)
Lauren Reid @ 2022-03-02T15:46 (+23)
EA Health Institute/Chief Wellness Officer
Empowering Exceptional People, Effective Altruism, Community Building
Optimizing physical and mental health can improve cognitive performance and decrease burnout. We need EAs/longtermists to have the health resilience to weather the storm - physical fitness, sleep, nutrition, mental health. An institution could be created to assist EA aligned organizations and individuals. Using best practices from high performance workplace health, both personal and organizational, and innovative new ideas, a wellness team could help EAs have sustainable and productive careers. This could be done through consulting, coaching, preparation of educational materials or retreats. From a community growth perspective, EA becomes more attractive to some when one doesn’t have to sacrifice health for deeply meaningful work.
(Disclosure - I'm a physician/physician wellness SME - helping with this could be a good personal fit)
Denis Drescher @ 2022-03-01T18:11 (+23)
Unified, quantified world model
Epistemic Institutions, Effective Altruism, Values and Reflective Processes, Research That Can Help Us Improve
Effective altruism started out, to some extent, with a strong focus on quantitative prioritization along the lines of GiveWell’s quantitative models, the Disease Control Priorities studies, etc. But these largely ignore complex, often nonlinear effects of the interventions on culture, international coordination, and the long-term future. Attempts to transfer the same rigor to quantitative models of the long-term future (such as Tarsney’s set of models in The Epistemic Challenge to Longtermism) are still in their infancy. Otherwise, effective altruist prioritization today is a grab bag of hundreds of considerations that interact in complex ways that (probably) no one has an overview of. Decision-makers may forget to take half of them into account if they haven’t recently thought about them. That makes it hard to prioritize, and misprioritization becomes more and more costly with every year.
A dedicated think tank could create and continually expand a unified world model that (1) is a repository of all considerations that affect altruistic decision-making, (2) makes explicit the interactions between these considerations, (3) gauges its own uncertainty, (4) allows for the prioritization of interventions with no common proxy measure for their impact via interventions that can be measured via several proxies, and (5) averages between multiple ways to estimate uncertain quantities.
Alternatively, a tech (charity) startup could create standardized APIs for models of small parts of the world so that they can be recombined analogously to how I can recombine many open-source React libraries to create my own software. Then an ecosystem of researchers could form who publish any models they create for everyone to use and recombine. (This could be bootstrapped via consultancy services for those groups who are interested in small parts of the world.)
People who are working on this are QURI (Ozzie Gooen, Sam Nolen), Aryeh Englander, Paal Kvarberg, and maybe others. I considered it for a few months (summary of my thinking). Some of them pursue the approach of direct modeling via Bayesian networks while QURI pursues the approach of building an ecosystem around a standardized API.
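To illustrate the standardized-API idea (every model name and number below is invented for the example; real work like QURI's is far more sophisticated), each sub-model could expose the same sampling interface so that independently published models compose into one Monte Carlo chain:

```python
# Toy composable world-model API: a model maps a dict of known quantities to
# a dict of newly sampled quantities, so models chain together like libraries.
import random
from typing import Callable, Dict

Model = Callable[[Dict[str, float]], Dict[str, float]]

def pandemic_risk(state: Dict[str, float]) -> Dict[str, float]:
    # Invented toy numbers: annual probability between 0.1% and 1%.
    return {"p_pandemic": random.uniform(0.001, 0.01)}

def intervention_effect(state: Dict[str, float]) -> Dict[str, float]:
    # Consumes an upstream quantity by name and publishes a new one.
    reduction = random.uniform(0.05, 0.30)
    return {"p_with_intervention": state["p_pandemic"] * (1 - reduction)}

def compose(*models: Model) -> Model:
    def combined(state: Dict[str, float]) -> Dict[str, float]:
        state = dict(state)
        for model in models:
            state.update(model(state))
        return state
    return combined

world = compose(pandemic_risk, intervention_effect)
samples = [world({})["p_with_intervention"] for _ in range(10_000)]
print(sum(samples) / len(samples))  # Monte Carlo mean across the chain
```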
MaxGhenis @ 2022-03-08T15:43 (+3)
Cool - you might also be interested in my submission, "Comprehensive, personalized, open source simulation engine for public policy reforms". It's not in the pitch but my intent is for it to be global as well.
Denis Drescher @ 2022-03-08T17:54 (+3)
Awesome, upvoted! You can also have a look at my “Red team” proposal. It proposes to use methods from your field applied to any EA interventions (political and otherwise) to steel them against the risk of having harmful effects.
Zac Townsend @ 2022-03-01T11:55 (+23)
Civic sector software
Economic Growth, Values and Reflective Processes
Software and software vendors are among the biggest barriers to instituting new public policies or processes. The last twenty years have seen staggering advances in technology, user interfaces, and user-centric design, but governments have been left behind, saddled with outdated, bespoke, and inefficient software solutions. Worse, change of any kind can be impractical with existing technology systems or when choosing from existing vendors. This fact prevents public servants from implementing new evidence-based practices, becoming more data-driven, or experimenting with new service models.
Recent improvements in civic technology are often at the fringes of government activity, while investments in best practices or “what works” are often impossible for any government to implement because of technology. So while over the last five years there has been an explosion of investment and activity around “civic innovation,” the results are often mediocre. On the one hand, governments end up with little more than tech toys or apps that have no relationship to the outcomes that matter (e.g. poverty alleviation, service delivery). On the other hand, tens of millions of dollars are invested in academic research, thought leadership, and pilot programs on improving outcomes that matter, but no government can ever practically implement them because of their software.
Done correctly, software can be the wedge to radically improve governments. The process to build that technology can be inclusive: engaging users inside government, citizens who interface with programs, community stakeholders, and outside experts and academics.
We are interested in funding tools that vastly and fundamentally improve the provisioning of services by civic organizations.
Yonatan Cale @ 2022-03-01T17:09 (+3)
Hey, this is somewhat my domain.
The bottleneck is not building software, it is more like "governments are old gray organizations that don't want to change anything".
If you find any place where the actual software development is the bottleneck, I'd be very happy to hear about it and maybe take part. I also expect many other EA developers would want to take part; it sounds like a good project.
Zac Townsend @ 2022-03-02T01:28 (+12)
(For context, I was the Chief Data Officer of the California State Government and CTO of Newark, NJ when Cory Booker was Mayor).
I actually think the way to do this is to partner with one city and build everything they need to run the city. The problem is that people can't use piecemeal systems very well. It would just take a huge initial set of capital -- like exactly the type of capital that could be provided here.
Yonatan Cale @ 2022-03-06T13:23 (+1)
Ah ok forget about it being somewhat my domain :P
Sounds like a really interesting suggestion. Especially if it would be for a city that "matters" (that will help people do important things?), I think this project could interest me and others
(I'm interested if you have opinions about https://zencity.io/, as a domain expert)
MaxGhenis @ 2022-03-08T15:47 (+1)
Somewhat related, I submitted "Comprehensive, personalized, open source simulation engine for public policy reforms". Governments could also use the simulation engine to explore policy reforms and to improve operations, e.g. to establish individual households' eligibility for means-tested benefit programs.
Akhil Bansal @ 2022-03-01T03:18 (+23)
Teaching secondary school students about the most pressing issues for humanity's long-term future
Values and Reflective Processes, Effective Altruism
Secondary education focuses mostly on the past and present, and tends not to address the most pressing issues for humanity’s long-term future. I would like to see textbooks, courses, and/or curriculum reform that promote evidence-based and thoughtful discourse about the major threats facing the long-term future of humanity. Secondary school students are a promising group for such outreach and education because they have their whole careers ahead of them, and numerous studies have shown that they care about the future. This may provide a significant benefit by making more young people care about these issues and support them with either their time or money.
ElizabethBarnes @ 2022-03-21T21:54 (+22)
High-quality human data
Artificial Intelligence
Most proposals for aligning advanced AI require collecting high-quality human data on complex tasks such as evaluating whether a critique of an argument was good, breaking a difficult question into easier subquestions, or examining the outputs of interpretability tools. Collecting high-quality human data is also necessary for many current alignment research projects.
We’d like to see a human data startup that prioritizes data quality over financial cost. It would follow complex instructions, ensure high data quality and reliability, and operate with a fast feedback loop that’s optimized for researchers’ workflow. Having access to this service would make it quicker and easier for safety teams to iterate on different alignment approaches.
Some alignment research teams currently manage their own contractors because existing services (such as surgehq.ai and scale.ai) don’t fully address their needs; a competent human data startup could free up considerable amounts of time for top researchers.
Such an organization could also practice and build capacity for things that might be needed at ‘crunch time’ – i.e., rapidly producing moderately large amounts of human data, or checking a large volume of output from interpretability tools or adversarial probes with very high reliability.
The market for high-quality data will likely grow – as AI labs train increasingly large models at a high compute cost, they will become more willing to pay for data. As models become more competent, data needs to be more sophisticated or higher-quality to actually improve model performance.
Making it less annoying for researchers to gather high-quality human data relative to using more compute would incentivize the entire field towards doing work that’s more helpful for alignment, e.g., improving products by making them more aligned rather than by using more compute.
[Thanks to Jonas V for writing a bunch of this comment for me]
[Views are my own and do not represent that of my employer]
zdgroff @ 2022-03-07T05:24 (+22)
Advocacy for digital minds
Artificial Intelligence, Values and Reflective Processes, Effective Altruism
Digital sentience is likely to be widespread in the most important future scenarios. It may be possible to shape the development and deployment of artificially sentient beings in various ways, e.g. through corporate outreach and lobbying. For example, constitutions can be drafted or revised to grant personhood on the basis of sentience; corporate charters can include responsibilities to sentient subroutines; and laws regarding safe artificial intelligence can be tailored to consider the interests of a sentient system. We would like to see an organization dedicated to identifying and pursuing opportunities to protect the interests of digital minds. There could be one or multiple organizations. We expect foundational research to be crucial here; a successful effort would hinge on thorough research into potential policies and the best ways of identifying digital suffering.
Kat Woods @ 2022-03-06T18:14 (+22)
X-risk Art Competitions
Fund competitions to make x-risk art to create emotion
Problem: Some EAs find longtermism intellectually compelling but not emotionally compelling, so they don’t work on it, yet feel guilty.
Solution: Hold competitions where artists make art explicitly intended to make x-risk emotionally compelling. Use crowd voting to determine winners.
Kat Woods @ 2022-03-06T18:00 (+22)
Translate EA content at scale
Reach More Potential EAs in Non-English Languages
Problem: Lots of potential EAs don’t speak English, but most EA content hasn’t been translated.
Solution: Pay people to translate the top EA content of all time into the most popular languages, then promote it to the relevant language communities.
Denis Drescher @ 2022-03-07T14:07 (+7)
Little addition: I imagine that knowledgeable EAs in the respective target countries should do that as opposed to professional translators so that they can do full language and cultural mediation rather than just translating the words.
Taras Morozov @ 2022-03-04T15:52 (+22)
Provide personal assistants for EAs
Empowering Exceptional People
Many senior EAs spend way too much time on busywork because it is hard to get a good personal assistant. This is currently the case because:
- There is no obvious source of reliable, vetted assistants.
- If an EA wants to become an assistant, it is hard for them to find a job with an EA or on EA-related projects.
- Assistants have an incentive to have many clients, to avoid loss of income if they lose a client. This leads to assistants having less time per client, and thus more time is spent on communication and less on the work itself.
- Assistants tend to be paid personally by EAs instead of by their employers. That leads to using them less than would be optimal.
- There is no community of assistants that would be sharing knowledge and helping each other.
All these factors would be removed if an agency managed personal assistants.
Denis Drescher @ 2022-03-05T22:18 (+4)
Kat Woods (Nonlinear) is someone to talk to when it comes to this project.
SjirH @ 2022-03-04T10:42 (+22)
Institutions as coordination mechanisms
Artificial Intelligence, Biorisk and Recovery from Catastrophe, Great Power Relations, Space Governance, Values and Reflective Processes
A lot of major problems - such as biorisk, AI governance risk, and the risk of great power war - can be modeled as coordination problems, and may be at least partially solved via better coordination among the relevant actors. We’d love to see experiments with institutions that use mechanism design to allow actors to coordinate better. One current example of such an institution is NATO: Article 5 is a coordination mechanism that aligns the interests of NATO member states. But we could create similar institutions for e.g. biorisk, where countries commit to a matching mechanism - where “everyone acts in a certain way if everyone else does” - with costs imposed on defectors, to solve tragedy-of-the-commons dynamics.
Brendon_Wong @ 2022-08-28T22:48 (+2)
Sjir, you may be interested in Roote's work on meta existential risk!
SjirH @ 2022-09-02T09:19 (+1)
Thank you!
SjirH @ 2022-03-04T10:41 (+22)
Experiments with and within video games
Values and Reflective Processes, Empowering Exceptional People
Video games are a powerful tool to reach hundreds of millions of people, an engine of creativity and innovation, and a fertile ground for experimentation. We’d love to see experiments with and within video games that help create new tools to address major issues. For instance, we’d love experiments with new governance and incentive systems and institutions, new ways to educate people about pressing problems, games that simulate actual problems and allow players to brainstorm solutions, and games that help identify and recruit exceptional people.
Peter S. Park @ 2022-03-02T20:01 (+22)
Replicate the Project Ideas Competition for other types of communities than EAs
Research That Can Help Us Improve
People have contributed a lot of really insightful and promising ideas here. Given that "there are no wrong ideas in brainstorming" and that there may be systematic blind spots in the effective altruist/longtermist paradigm, doing this broad idea-crowdsourcing exercise in other types of communities could get us new, potentially promising ideas.
LRudL @ 2022-03-02T15:31 (+22)
Regular prizes/awards for EA art
Effective Altruism
Works of art (e.g. stories, music, visual art) can be a major force inspiring people to do something or care about something. Prizes can directly lead to work (see for example the creative writing contest), but might also have an even bigger role in defining and promoting some type of work or some quality in works. Creating a (for example) annual prize/award scheme might go a long way towards defining and promoting an EA-aligned genre (consider how the existence of Hugo and Nebula awards helps define and promote science fiction). The existence of a prestigious / high-paying prize for the presence of specific qualities in a work is also likely to draw attention to those qualities more broadly; news like "Work X wins award for its depiction of [thoughtful altruism] / [the long-term future] / [epistemic rigor under uncertainty]" might make those qualities more of a conversation topic and something that more artists want to depict and explore, with knock-on effects for culture.
Denis Drescher @ 2022-03-01T19:06 (+22)
Impact markets to smooth out retroactive funding
Effective Altruism, Empowering Exceptional People, Economic Growth, Epistemic Institutions
Yonatan Cale already made the case for retroactive funding, i.e. that it’s easier to tell what has succeeded than what will succeed. The question of what will succeed, in turn, can be answered by a market.
Investors will try to predict which charities will succeed to the point of receiving retroactive funding. A retroactive funder can make larger grants in proportion to their reduction in uncertainty (5–10x), time savings from having to do less vetting (~ 2x), and delay (~ 1.5x). Hence investors with enough foresight can even make a profit and turn the prediction of retro fund decisions into their business model. Promising charities can bootstrap rapidly with these early financial injections, successful serial charity entrepreneurs can accumulate more and more capital to reinvest into their next charity venture, and funders save time because they have to do only a fraction of the vetting.
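Back-of-the-envelope, and assuming (my reading, not a settled design) that these factors compound multiplicatively:

```python
# Rough headroom a retro funder could pay over the seed investment,
# using the lower ends of the ranges quoted above.
uncertainty_reduction = 5   # 5-10x
vetting_time_saved = 2      # ~2x
delay_compensation = 1.5    # ~1.5x
print(uncertainty_reduction * vetting_time_saved * delay_compensation)  # 15.0
```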
We – Kenny Bambridge, Matt Brooks, Dony Christie, Denis Drescher, and a number of advisors – are actively working toward this goal. I’ve been thinking about the mechanisms and risks of this undertaking for a few months and I’m working on a future EA Forum post. I’ve also been in touch with Owen Cotton-Barratt.
Zac Townsend @ 2022-03-01T11:47 (+21)
Studying Economic Growth Deterrents and Cost Disease
Economic growth
Economic growth has forces working against it. Cost disease is the most well-known and pernicious of these in developed economies. We are interested in funding work on understanding, preventing, and reversing cost disease and other mechanisms that are slowing economic growth.
(Inspired by Patrick Collison)
PhilC @ 2022-03-02T19:19 (+20)
Secure full-stack open-source computing for information security
Artificial Intelligence, Biorisk, Research That Can Help Us Improve
Much of our sensitive research and weaponry - AI, biolabs, nuclear weapons, etc. - is built upon insecure infrastructure. Think of a future scenario where one hacker could hack and control fleets of self-driving cars, and essentially have a swarm of missiles. Real information security would require building the full stack of computing, from the hardware, OS, and compilers up to the application layers. It would also ideally be open source and inspectable to ensure security.
jknowak @ 2022-03-02T10:26 (+20)
Funding Stress/Penetration Tests of vital orgs/infrastructure
Cyber Risks, Cybersecurity
Most orgs don't spend enough on ensuring their infrastructure is safe from hackers; we should ensure that labs working on AI safety, biorisk companies, EA orgs, etc. are safe from malicious hackers.
evelynciara @ 2022-03-02T04:09 (+20)
Longtermist democracy / institutional quality index
Values and Reflective Processes, Epistemic Institutions
Several indices exist to quantify the degree of liberal democracy in all countries and territories around the world, like Freedom in the World and the EIU's Democracy Index. These indices are convenient for describing and comparing the state of liberal democracy in different countries, because they distill the various complicated aspects of a state's political system into one or more numbers that are easy for a layperson to understand.
We propose a "democracy index" that emphasizes the qualities of political systems that are most relevant to making the long-term future go well. Such qualities could include voting systems, free and fair elections, voter competence, and capacity for long-term planning in government - and the set of qualities used could be based on research such as this post. This index would help make analysis of countries and territories' political systems more accessible to EAs/longtermists who aren't political scientists, since it would distill them down to a few easy-to-understand numbers. It would also help the longtermist community track progress towards better political systems and identify opportunities to improve institutions.
See also: the CGD's Commitment to Development Index
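As a minimal sketch of the aggregation step (the subscores, weights, and qualities below are placeholders, not a proposed methodology):

```python
# Collapse several 0-100 political-system subscores into one weighted index.
def longtermist_democracy_index(scores: dict[str, float],
                                weights: dict[str, float]) -> float:
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in weights) / total_weight

example = longtermist_democracy_index(
    scores={"voting_system": 70, "free_fair_elections": 85,
            "voter_competence": 55, "long_term_planning": 40},
    weights={"voting_system": 1, "free_fair_elections": 1,
             "voter_competence": 1, "long_term_planning": 2},
)
print(round(example, 1))  # 58.0
```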
Nathan Young @ 2022-03-02T02:04 (+20)
Fund Sentinel, a nationwide pandemic early response system (originally suggested by alexrjl)
Biowarfare
Fund the biosecurity program explained on this podcast. Any time anyone gets sick you sequence a sample. Any unknown genetic material gets sequenced again at a higher level. This allows for rapid response to new pathogens.
Nathan Young @ 2022-03-02T01:30 (+20)
Politician forecasting stipend
Politics, better epistemics
Many people think politicians are underpaid. Many think they have a poor grasp of the likelihood of future events. Offer every Senator and Representative a yearly sum to make public predictions about future public statistics. The forecasting would help them correct their own errors and provide a valuable source of information on who makes good decisions about the future and who doesn't.
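One natural scoring rule for such public predictions is the Brier score; a minimal sketch with invented numbers:

```python
# Brier score: mean squared error between stated probability and outcome.
# 0 is perfect; 0.25 is the "always say 50%" baseline.
def brier(forecasts: list[tuple[float, bool]]) -> float:
    return sum((p - happened) ** 2 for p, happened in forecasts) / len(forecasts)

# (stated probability, did the statistic move as predicted?) - invented data
senator_a = [(0.9, True), (0.7, False), (0.2, False)]
senator_b = [(0.5, True), (0.5, False), (0.5, False)]
print(brier(senator_a))  # 0.18
print(brier(senator_b))  # 0.25, the no-information baseline
```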
Jakob @ 2022-03-06T10:45 (+3)
See one version of this here: https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=rTHGFbfr8DXwqnA2B
Charlotte @ 2022-03-01T21:29 (+20)
Making Future Grantmaking More Optimal
Effective altruism
- The EA community will likely spend much more money in the future than it spends now. Grantmaking is hard and the right setup is controversial. Hence, it might make sense to spend money on figuring out how to do it well.
- One could invite people to so-called "donation parliaments", where 100 randomly selected citizens get expert/EA input, or invite 100 top academics to give away 10 million. One could also try out expert committees or democratic control. Organising such donation parliaments could also attract positive media attention.
Chris Leong @ 2022-03-01T04:12 (+20)
Moderators for EA/Longtermist FB/Groups or Discords
Effective Altruism
(Refinement of EA-relevant Substacks, Youtube, social media, etc. )
Given the huge amount of funding available to EA, we probably don't want to skimp on moderators for major Facebook or Slack or Discord groups even though these have traditionally been run by volunteers. It'd be worthwhile at least experimenting to see if paid part-time moderators would be able to add extra value by writing up summaries/content for the groups, running online calls, setting up networking spreadsheets and spending more time thinking through strategy.
Risks: We might end up paying money for work that we would have gotten for free. Attempts to set up networking spreadsheets or run calls might have minimal participation and hence minimal impact.
michaelchen @ 2022-03-01T04:55 (+1)
Fyi paid part-time moderators would need buy-in from the online community – the EA Corner Discord seems against paid moderators, for example.
I really appreciate how many ideas you're proposing, Chris!
Chris Leong @ 2022-03-01T06:05 (+2)
> Fyi paid part-time moderators would need buy-in from the online community – the EA Corner Discord seems against paid moderators, for example.
Of course. I guess I could see some negative effects from how it could encourage people to seek mod roles as a way of being paid rather than because they could do a good job. I think that this issue could mostly be avoided by offering student-level rates. Running a Facebook group seems like a nice entry point into EA movement building.
> I really appreciate how many ideas you're proposing, Chris!
Thanks, you're welcome!
Greg_Colbourn @ 2022-03-01T10:29 (+3)
Incentives could also be aligned by offering existing volunteer mods a salary to spend more time moderating.
Arb @ 2022-03-08T00:16 (+19)
More Insight Timelines
In 2018, the Median Group produced an impressive timeline of all of the insights required for current AI, stretching back to China's Han Dynasty(!)
The obvious extension is to alignment insights. Along with some judgment calls about relative importance, this would help any effort to estimate / forecast progress, and things like the importance of academia and non-EAs to AI alignment. (See our past work for an example of something in dire need of an exhaustive weighted insight list.)
Another set in need of collection are more general breakthroughs - engineering records broken, paradigms invented, art styles inaugurated - to help us answer giant vague questions about e.g. slowdowns in physics, perverse equilibria in academic production, and "Are Ideas Getting Harder to Find?"
Taras Morozov @ 2022-03-07T06:46 (+19)
Research differential technological progress and trajectory changes
Research That Can Help Us Improve, Values and Reflective Processes
The idea of differential technological progress (DTP) may be a crucial consideration for many at-first-glance good ideas like:
- improving scientific publishing
- increasing GDP
- increasing average intelligence
But given its importance, there hasn't been much research published on DTP.
The central research question is how to use DTP to prioritize interventions. Examples of subquestions to research are:
- When in the past have there been intentional trajectory changes?
- Which subgoals seem to be good when DTP is considered?
- And so on.
aviv @ 2022-03-03T01:42 (+19)
Bridging-based Ranking for Recommender Systems
Artificial Intelligence, Epistemic Institutions, Values and Reflective Processes, Great Power Relations
Recommender systems are used by platforms like FB/Meta, YouTube/Google, Twitter, TikTok, etc. to direct the attention of billions of people every day. Due to a combination of psychological, sociological, organizational, and other factors, these systems are currently most likely to reward content producers with attention if they stoke division (e.g. outgroup animosity). Because attention is a currency that can be converted into money, power, and status, this "bias toward division" impacts groups at every scale, from local school boards to Congress to geopolitics.
Ensuring that recommender systems can mitigate this bias is crucial to functional democracy, to cooperation on catastrophic risks (e.g. AGI, pandemics, climate change), and simply to reducing the likelihood of escalating wars. We urgently need more research on how to better design recommender systems; we need to create open source implementations that do the right thing from the start which can be adopted by cash-strapped startups; and we need a mix of pressure and support to ensure these improvements will be rapidly deployed at platform scale.
Lauren Reid @ 2022-03-02T16:13 (+19)
Headhunter Office: targeted recruitment of aligned MDs, and other mid-career professionals
Effective Altruism, Community Growth and Diversity
I am a physician, and I have several conversations a week with bright, altruistic, and burned out colleagues. These professionals are often in a position to earn to give, and also can be entrepreneurial and adept at navigating complex systems and could be future organizational leaders or 'founder types'. Currently, there are cosmetic MLM groups and others recruiting from this group of physicians looking to make their lives more fulfilling and meaningful while still earning an income - there is literally an MD Exit strategy facebook group.
I propose an EA headhunter office to recruit for the community. For example, recruiting physicians explicitly, using some of the successful techniques that pharma uses like having physicians recruit their peers. Perhaps there are similar aligned mid-career professionals in law, public administration, engineering, etc.
Lauren Reid @ 2022-03-02T15:41 (+19)
Support for EAs having children
Empowering Exceptional People, Effective Altruism
Children of EAs are much more likely to become EAs (100-1000x?) and future generations of EAs may have a large impact. Having children usually means a pause in work, which is poorly compensated and difficult to time. I propose an institute to support EAs wishing to have children. EAs could be supported with fertility costs, including egg freezing, and be given grants for parental leave, which improve parental and child health outcomes. There are many trade-offs in parenting, which could be discussed in an EA parenting forum. Building a community could benefit these EA parents and their children.
Greg_Colbourn @ 2022-03-03T08:16 (+11)
Evidence for 100-1000x estimate? What is the base rate for children following their parents? When I've seen this discussed before, the conclusion is usually that memetic transfer of EA is much easier than genetic transfer of EA.
Lauren Reid @ 2022-03-04T03:44 (+1)
That’s a good point. I agree it’s the culture more than the DNA that matters, and I don’t know the real numbers. Of course, my husband and I were identified as gifted and it looks very likely our children will be too, and we have also read them Open Borders as a bedtime story. Adopted children of EAs are also probably much more likely than average to become EAs. I think of organized religions and how they encourage children and future generations - there may be a lesson for us there.
JackM @ 2022-03-04T09:17 (+3)
I would have thought it would be higher value on the margin to spread EA to talented people who already exist than to make more people.
Larks @ 2022-03-04T04:35 (+3)
There are many trade offs in parenting, which could be discussed in an EA parenting forum.
Very minor, but just wanted to check you were aware of, and had joined if interested, the EA parents facebook group.
Lauren Reid @ 2022-03-04T14:37 (+3)
I didn’t know that, nor did my husband (Alex D), who is much more in the EA space. Thank you for posting, I will join. We are seriously considering going to Nassau with our neuroatypical kids (4 and 7) next winter, and are trying to figure out how it would work for schooling/childcare in particular.
MaxG @ 2022-03-02T09:33 (+19)
DIY decentralized nucleic acid observatory
Biorisk and Recovery from Catastrophes
As part of the larger effort of building an early detection center for novel pathogens, a smaller self-sustaining version is needed for remote locations. The ideal early-detection center would not only have surveillance stations in the largest hubs and airports of the world, but also in as many medium-sized ones as possible. For this, a ready-made, small, and transportable product is needed that allows metagenomic surveillance of wastewater or air ventilation. One solution would be designing a workflow utilizing the easily scalable and portable technology of nanopore sequencing and combining it with a protocol to extract nucleic acids from wastewater. Sharing instructions on how to build and use this method could lead to a "do it yourself" (DIY) and decentralized version of a nucleic acid observatory. Instead of staffing a whole lab at a central location, it would be possible to have only one or two personnel in key locations who use this product to sequence samples directly and simply transmit the data to the larger surveillance effort.
IanDavidMoss @ 2022-02-28T22:32 (+19)
A global observatory for institutional improvement opportunities
Research That Can Help Us Improve, Great Power Relations, Epistemic Institutions
Actions taken by powerful institutions—such as central governments, large corporations, influential media outlets, and R&D labs—can dramatically shape people's lives today and cast a shadow long into the future. It can be hard to know what philanthropic strategies would be most likely to drive better outcomes, however, because each individual institution is itself a complex ecosystem of incentives, external pressures, norms, policies, and bureaucratic structures. An ongoing project to document how important institutions operate in practice and spot relevant windows of opportunity (e.g., legislation under consideration, upcoming leadership transitions, etc.) as they emerge would be very helpful for mapping the strategic landscape across virtually all of our other interest areas.
Konstantin Pilz @ 2022-03-04T12:34 (+18)
EA content translation service
Effective Altruism, Movement Building
(Maybe add to #30 - diversity in EA)
EA-related texts often use academic language that is needed to convey complex concepts. For non-native speakers, reading and understanding those texts takes a lot more time than reading about the same topic in their native language would. Furthermore, today many educated people in important positions, especially in non-western countries, speak English poorly or not at all. (This is likely part of the reason that EA currently mainly exists in English-speaking countries and almost exclusively consists of people who speak English well.)
To make EA widely known and easy to understand there needs to be a translation service enabling e.g. 80k, important Forum posts or the Precipice to be read in different languages. This would not only make EA easier to understand - and thus spread ideas further - but also likely increase epistemic diversity of the community by making EA more international.
Peter S. Park @ 2022-03-03T09:01 (+18)
Pipeline for writing books
Effective altruism
It's plausible that more EAs/longtermists should be writing books on the interesting subjects they are experts in, but they currently do not because of a lack of experience or other types of friction. Crowdsourced resources, networks, and grants may help facilitate this. Books written by EAs would have at least two benefits: (a) dissemination of knowledge, and (b) earning-to-give opportunities (via royalties).
Jackson Wagner @ 2022-03-03T20:08 (+11)
This is an interesting idea; it definitely seems plausible that EAs (who often have a lot of unique knowledge!) might be underrating the benefits of writing books. Could you expand a little on what you are thinking here? (I'd also be interested to hear from anyone else with relevant experience.) How hard is it to publish a book? If you try, do you have a high chance of getting rejected? How do people usually do marketing and get people to read their stuff?
Maybe this is too cynical of me (or too internet-centric), but I doubt the main benefits would come from earning royalties (not likely to be very profitable relative to other things skilled EAs could be doing!) or spreading knowledge (just read the blog posts!). But I think trying to publish more EA books might help greatly with:
- Prestige and legibility (just like how academic papers are considered more legit than blog posts by academics and governments). It might be easier for, say, the US Democratic Party to get behind an EA-inspired pandemic-prevention plan, foreign-aid revamp, or prediction-markets-y institutional-reform agenda if they could point to a prestigious book rather than just a bunch of Forum posts and pdf reports from places like OpenPhil, Rethink Priorities, etc.
- Spreading EA ideas to an older audience of folks who read books more than blog posts. This could help to diversify EA and could accelerate EA's trajectory by connecting us to more people who are already in influential positions, not just college kids who might one day inherit the earth.
- Discoverability -- having our stuff in college bookstores and libraries could make EA a more visible and legit-seeming movement compared to being so heavily online.
Say we found a new EA organization, "80,000 Pages", to help people publish books on impactful themes. What kinds of helpful stuff could this organization do?
- Help people get a sense of whether their ideas would be well-suited for a book, whether publishers would be interested, etc, so people could know whether they were wasting their time or not.
- Help people understand and craft a target audience with altruistic impact in mind. (for instance, as an aerospace engineer I might be tempted to write a book about space governance that appeals to other aerospace engineers. But probably it would be more impactful to target policymakers. Or maybe I should be trying to inspire college kids who might become engineers and policymakers later?)
- Explain to people the basic steps involved in writing and publishing a book. (I for one have no idea what this looks like, besides "write draft" -> "editing" -> "sell to publisher" -> "print books".)
- Potentially help with any finicky technical aspects of publishing, like formatting the text properly. Maybe take preexisting studies, like the OpenPhil AI timelines report or Rethink Priorities' nuclear winter investigations, and publish them in physical book form.
- Helping with marketing, etc.; helping estimate how many people might read a book about a given topic.
- Put out requests for "someone should write a good book about X" just like how Charity Entrepreneurship requests "we're looking for someone to found a charity about X".
- Publishing books directly? (Selling them directly?? Giving them away somehow? See also Ben Pace's idea that EA could buy and run a famously trend-setting academic bookstore in Oxford.)
- Other stuff that I'm missing?
Peter S. Park @ 2022-03-04T02:36 (+1)
Thanks so much, Jackson!
I have never published a book, but some EAs have written quite famous and well-written books. In addition to what you suggested, I was thinking "80,000 pages" could organize mentoring relationships for other EAs who are interested in writing a book, writer's circles, a crowdsourced step-by-step guide, etc. Networking in general is very important for publishing and publicizing books, from what I can gather, so any help on getting one's foot in the door could be quite helpful.
Elriggs @ 2022-03-14T19:20 (+1)
My brother has written several books and currently coaches people on how to publish and market them on Amazon. He would be open to being paid for advice in this area (just DM me).
I think the dissemination and prestige are the best arguments so far.
Lauren Reid @ 2022-03-02T15:51 (+18)
Institute/Grants for improving the science of indoor air quality
Biorisk
During Covid we learned that ‘air is the new poop’ in terms of hygiene. Improving indoor air quality can prevent respiratory pathogen transmission both in the case of a pandemic and for general health. A granting agency could support advances in indoor air quality and their implementation such as in airplanes and classrooms.
JBPDavies @ 2022-03-01T13:14 (+18)
Longtermism Policy Lab
Epistemic Institutions, Values and Reflective Processes, Great Power Relations, Space Governance, Research That Can Help Us Improve
Despite the growing recognition of the importance of long-term perspectives, governance remains oriented around short-term incentives. More coordination and collaboration between researchers and policymakers, practitioners and industry professionals is needed to translate research into policy. The Longtermism Policy Lab will bridge this gap, working with societal partners and governments at all levels (local to global) to undertake policy experiments. The Lab will also contain a research component, establishing and pursuing an ambitious interdisciplinary Longtermism research agenda, including an emphasis on research that doesn't fit well within either academia or traditional research institutes. We want to see this organisation serve as a direct link between longtermism as a governance approach and its implementation within all levels of governance across the globe.
Zac Townsend @ 2022-03-01T12:01 (+18)
(Per Nick's post, reposting)
Private-sector ARPA models
All
Many of the technological innovations of the last fifty years have their genesis in experiments run by DARPA. ARPA models are characterized by individual decision-makers taking on risky bets within defined themes, setting ambitious goals, and mobilizing top researchers and entrepreneurs to meet them. We are interested in funding work to study these funding models and to create similar models in our areas of interest.
mariushobbhahn @ 2022-03-01T07:59 (+18)
In case you drew inspiration from some of our suggestions in the megaprojects article, we would like to retroactively apply.
Ricky @ 2022-03-06T08:46 (+17)
Promote Ethical Corporate Behavior
EA to purchase 5% of Blackrock and 5% of Vanguard shares. To be clear, I don't mean 5% of their index funds, but rather 5% of the underlying fund management companies.
EA's investment of circa $10 billion can be leveraged into a board seat on companies that manage circa $20 trillion in assets. EA could lobby these companies to apply a corporate ethics test on all their index funds, e.g. excluding coal and promoting other EA priorities.
MaxRa @ 2022-03-12T00:29 (+4)
Thanks for suggesting this, I'm really interested in this general direction, in case anybody wants to dig into it a bit more. It spontaneously seems unlikely to me that investing this large a share of EA money is the best bet, but I wonder if there are other ways to influence them (e.g. as I understand it, Blackrock and Vanguard senior managers simply became convinced that climate change is a downside for their long-term profit, and they probably should believe the same for misaligned AI / AI races). Maybe another route would be to ensure that those investment firms will be able to influence & coordinate the behavior of tech firms to reduce competitive dynamics.
Andrew Wong @ 2022-03-02T09:52 (+17)
Anti-Pollution of the universe
Space governance
As we take one small step for man, our giant leap for humanity leaves footprints of toxicity that we justify as ‘negative externalities’. There are currently 20,000 catalogued objects composed of rocket shards, collision debris and inactive satellites, which cause major traffic risks in orbit around our planet whilst also likely polluting the universe. As Boeing, OneWeb, SpaceX, etc. increase their launches, we similarly add to the congestion and space collision probabilities (read: disasters waiting to happen). There are currently NO debris removal methods. If we’ve learnt anything from our current micro-history of mankind on Earth, it’s that the nature/universe around us is important, since we’re intricately linked, and that there are costs to our polluting behaviour in the pursuit of ‘territory/energy etc’. Hence, when we’re playing at the macro cosmic level, it is even more imperative that we get this framework/relationship/thought process right.
Nathan Young @ 2022-03-02T01:34 (+17)
Nuclear Funding Shortfall
Nuclear Risk
There has been a significant shortfall in nuclear risk funding. The most effective elements of this could be covered by the fund.
Nathan Young @ 2022-03-02T00:56 (+17)
Superforecasting team
Global catastrophic risks
We know that top forecasters exist, but few are currently employed to forecast long-term risks. These forecasters should be supported by developers to help maximise their accuracy and output. Multiple organisations could employ hundreds or thousands of top forecasters to analyse developing situations and suggest the outcomes most likely to resolve them in the interests of all consciousness.
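One concrete piece of tooling such developers could build is a forecast aggregator. A minimal sketch, assuming the geometric mean of odds as the pooling rule (one common choice in the forecasting literature; the right rule for such a team would be an open question):

```python
import math

def pool_geometric_odds(probs):
    """Aggregate several forecasters' probabilities for one event
    by taking the geometric mean of their odds."""
    odds = [p / (1 - p) for p in probs]          # convert probabilities to odds
    pooled = math.prod(odds) ** (1 / len(odds))  # geometric mean of the odds
    return pooled / (1 + pooled)                 # convert back to a probability

print(pool_geometric_odds([0.60, 0.70, 0.85]))   # ~0.73
```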
renanaraujo @ 2022-03-05T18:35 (+16)
CEA for the developing world
Effective Altruism
The main EA movement building organization, CEA, focuses primarily on talented students in top universities of developed countries. This seems to be due to a combination of geographical and cultural proximity, quantity of English speakers, and ease of finding top talent. However, there is a huge amount of untapped talent in developing countries that may be more easily reached through dedicated organizations optimized for being culturally, linguistically, and geographically close to such talent, such as a CEA for India or Brazil. Such an organization would develop its own goals and strategies tailored to their respective regions, such as prioritizing nationwide prizes over group-by-group support, hiring local EA talent to lead projects, and identifying and partnering with regionally influential universities and institutions. This project would not only contribute to increasing diversity in EA, but also foster organizational competition by allowing different movement building strategies, and better position the EA movement for unexpected geopolitical power shifts.
Denis Drescher @ 2022-03-05T17:44 (+16)
An ecosystem of organizations to initiate a “Hasty Reflection”
Values and Reflective Processes, Epistemic Institutions, Effective Altruism, Research That Can Help Us Improve
The Long Reflection appears to me to be robustly desirable. It only suffers from being more or less unrealistic depending on how it is construed.
In particular, I feel that two aspects of it are in tension: (1) delaying important, risky, and irreversible decisions until after we’ve arrived at a Long Reflection–facilitated consensus on them, and (2) waiting with the Long Reflection itself until after we’ve achieved existential security.
I would expect, as a prior, that most things happen because of economic or political necessity, which is very hard to influence. Hence the Long Reflection either has to ramp up early enough that we can arrive at consensus conclusions and then engage in the advocacy efforts that’ll be necessary to improve over the default outcomes or else risk that flawed solutions get locked in forever. But the first comes at the risk of diverting resources from existential security. This indicates that there is some optimal trade-off point between existential security and timely conclusions. (From this 2020 blog post of mine.)
Michael Aird suggested to not use the term “Long Reflection” for the institution that I’m aiming for because it doesn’t share enough features with Ord’s Long Reflection. I call it “Hasty Reflection” for now. If AI allows, we’ll perhaps leave our solar system within the coming millennium or even just a few centuries. The communication delays between parts of the human civilization will then quickly increase to years, which will prevent efficient conversations. Barring the invention of faster-than-light communication, we will need to solve ethics and coordination and resiliently install the solution in our civilization before that happens. That seems like a project that may well take more than a millennium, so it’s fairly urgent. (Though likely less urgent than AI safety.)
I envision that the Hasty Reflection will have the following components:
- Organizations that aim to improve incentives in academia, and maybe differentially the branches most relevant for the Hasty Reflection.
- Organizations that, in the meantime, create strong alternatives to academia for researchers in the relevant fields, e.g., through proportional prizes and impact markets.
- Organizations that improve epistemics, like QURI, Rational Animations and Kelsey Piper.
- Organizations that build coalitions with political parties and media.
- Organizations that think about the strategy of it all and coordinate the other organizations.
- Organizations that conduct research into faster-than-light communication, because it might buy us time if it's at all imaginable.
Parts of the Effective Altruism community already form some sort of proto–Hasty Reflection, so it should be easier to bootstrap a proper Hasty Reflection out of EA than to start from scratch.
Kjersti Moss @ 2022-03-05T11:01 (+16)
Scandinavian-like parental leave (25 weeks +) in EA organizations
Leading the way with a policy combating demographic decline, while supporting talent selection and diversity in the EA community
Paid parental leave creates an incentive to have (more) kids - or rather, it takes away part of the large financial incentive not to have kids. My concrete suggestion is to fund Scandinavian-like parental leaves for employees in specified EA organizations. This would open up more access to the large pool of talented family oriented persons. Further, having an unusually beneficial parental leave benefit could inspire other organizations to follow, and thus help combat demographic decline. The idea should be quite easy to pilot, implement and scale, and the results relatively easy to measure.
Greg_Colbourn @ 2022-03-05T12:25 (+2)
Good idea, just in terms of talent selection and diversity. Has such parental leave had a noticeable effect on fertility / demographic decline in Scandinavia?
Kjersti Moss @ 2022-03-05T18:46 (+2)
Hi, Greg. Thank you for your question. I'm very interested in exploring this idea further. First, I want to say that I have not done deep research on the topic. But I know some stuff, and I suspect some stuff. I could be wrong. Here are some of my thoughts:
1) My first point is that it is very plausible that paid leave period has a substantial effect on birth rates. I would sort of have a null hypothesis that there is a large effect, rather than zero/small effect.
2) I'm a statistician, and normally don't put too much weight on personal experience. But before each of my three pregnancies, a thorough analysis with the conclusion "this is doable financially" was very important in my decision-making. This N=1 (or N=3 if counting three kids) partly forms my opinion on the null hypothesis above. My impression is that most responsible adults have similar thought processes before getting pregnant. Further, after graduating, I was very motivated to have an impactful career. The apparent lack of job security and paid parental leave made sure I was 0% interested in any job at an EA org. Not the end of the world in this specific case, but there are probably a lot of more talented women out there who also have a 0% interest for the same reasons.
3) I wrote Scandinavian-like because these are countries with generous parental leaves, and I know the setup well, as I'm Norwegian. Again, I have not researched in detail, but all three Scandinavian countries have birth rates well above the average in Europe. I also know France is on top of the birth statistics, and have generous set-ups. If I was French, the headline would maybe point to France instead of Scandinavia :)
4) Apart from (3), it is not straightforward to see a direct effect of leave times on birth rates in Scandinavia. Policies change very slowly over time (might add 1 week every now and then), and changes go hand-in-hand with other policies such as free/subsidized kindergartens.
5) It seems to be a bit under-researched whether these policies have an effect or not, and it is hard to analyze because of (4). That is at least my impression. So this could be really interesting and valuable as an experiment as well. It could provide insight into the effect of the trillions (++??) governments spend to support these systems today.
6) In Norway I have seen an interesting development over the last 10 years or so. Here, social security covers 100% of your salary for 49 weeks, but with a max limit of about $60,000. About 10 years ago a few companies began to pay any salary above $60K to their employees during the leave. Now, 10 years later, this is standard in the market, and expected from any decent employer. This supports the opportunity to lead the way and affect what other companies might do in the future.
Does this make any sense? :) Happy to discuss this further, and to hear from others who might have more thoughts/research on the topic.
simeon_c @ 2022-03-05T09:28 (+16)
Monitoring Nanotechnologies and APM
Nanotechnologies, and a catastrophic scenario linked to them called “Grey goo”, have received very little attention recently (more information here), whereas nanotechnologies keep moving forward, and some think they’re one of the most plausible ways we could go extinct.
We’d be excited for a person or an organization to closely monitor the evolution of the field and produce content on how dangerous it is. Knowing whether there are actionable steps that could be taken now would be very valuable for both funders and researchers in the longtermist community.
Alex D @ 2022-03-04T20:10 (+16)
Quick start kit for new EA orgs
EA ops
Stripe Atlas for longtermist orgs. Rather than figuring out the best tools, registrations, and practices for every new org, figure out the best default options and provide an easy interface to start up faster.
Denis Drescher @ 2022-03-05T22:10 (+7)
I just read the Charity Entrepreneurship handbook How to Launch a High-Impact Nonprofit. That seems to fit the bill. Maybe having country-specific versions of it, and versions for longtermist orgs, would be even better.
Rory Fenton @ 2022-03-03T18:37 (+16)
Campaign to eliminate lead globally
Economic Growth
Lead exposure lowers IQ, takes over 1M lives every year and costs Africa alone $130B annually, 4% of GDP: an extraordinary limit on human potential. Most lead exposure is through paint in buildings and toys. The US banned lead paint in 1978 but 60% of countries still permit it. We would like to see ideas for a global policy campaign, perhaps similar to Bloomberg’s $1B tobacco advocacy campaign (estimated to have saved ~30M lives), to push for regulations and industry monitoring.
Epistemic status: The “prize” feels very large but I am not aware of proven interventions for lead regulations. 30 minutes of Googling suggests the only existing implementer (www.leadelimination.org) might be too small for this level of funding so there may not be many applicants.
Conflict of interest: I work for a small, new non-profit focused on $B giving. We are generally focused on projects with large, existing implementers so have not pursued lead elimination policy beyond initial light research
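For a rough sense of scale, a back-of-envelope using only the figures cited in the pitch above; the tobacco analogy is illustrative, not a claim that lead advocacy would be equally cost-effective:

```python
# Back-of-envelope from the cited figures; purely illustrative.
campaign_cost = 1e9      # Bloomberg tobacco advocacy campaign, USD
lives_saved = 30e6       # estimated lives saved by that campaign
print(f"~${campaign_cost / lives_saved:.0f} per life saved")  # ~$33
```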
agnode @ 2022-03-02T12:07 (+16)
Build LMIC university capacity
Economic growth, Empowering exceptional people
Universities in LMICs often have limited access to funding. Additional funding could enable many good outcomes including:
- Greater opportunities for exceptional people born in LMICs
- Better and more influential academic contributions from people in LMICs, which would increase the diversity of backgrounds of people contributing to global academia, and perhaps uncover key errors and blindspots in Western academic thought.
- Boost economic growth in LMICs
- Boost epistemic standards in LMICs
- Help improve LMIC capacity to understand and plan for catastrophes, e.g. from pandemics and climate change.
Funding could be focused on issues of concern to EAs, such as pandemics, or could be unrestricted to boost overall university capacity. As well as funding universities, funds could be provided for networks, independent labs, access to journals, travel and conferences, spinout companies, etc.
Andrew Wong @ 2022-03-02T09:57 (+16)
Increasing Earth’s probability of survival
Space governance
Currently, as we transition from a Kardashev Type 0 to a Type 1 civilization, our probability of encountering/alerting other civilisations increases exponentially. It is somewhat ironic that our own momentum may be our downfall. Citing dark forest theory, under which ‘lacking assurances, the safety option for any species is to annihilate other life forms before they have a chance to do the same’, humanity is immediately on the defensive (applying a chronological framework and assuming linearity of time). As such, we should fund ways to increase our probability of survival (by deterrence mechanisms, signalling non-threat or camouflage) such that we may evolve uninterrupted. (This also assumes we don’t kill ourselves first, the probability of which is sadly also non-zero.)
Just throwing out crazy suggestions (I’m sensing that’s the theme here): something like a hypergravity generation device that bends observable light emitted from our planet, so much so that when observed, we would look like a black hole.
Andrew Wong @ 2022-03-02T09:54 (+16)
Combatting DeepFake
Epistemic Institutions, Artificial Intelligence
As AI advances, numerous high-quality deepfake videos/images are being produced at an alarmingly increasing rate, raising the question ‘What happens when we can’t trust our eyes and ears anymore?’. This will affect many industries such as journalism, the military, celebrities, government, etc. Proactively funding a superior ML anti-deepfake bot for commercial use is important so that images/videos can be properly verified. The end game will likely come down to some degree of superior computing power, since both are ML-based algos; hence the advantage here would be first-mover and/or altruistic (think along the same lines as free antivirus software) in nature.
Peter S. Park @ 2022-03-02T03:06 (+16)
Targeted practical statistical training
Economic Growth, Values and Reflective Processes
"Human cognition is characterized by cognitive biases, which systematically lead to errors in judgment: errors that can potentially be catastrophic (e.g., overconfidence as a cause of war). For example, a strong case can be made that Russia's invasion of Ukraine has been an irrational decision of Putin, a consequence of which is potential nuclear war. Overconfidence is a cause of wars and of underpreparation for catastrophes (e.g., pandemics, as illustrated by the COVID-19 pandemic).
One way to reduce detrimental and potentially catastrophic decisions is to provide people with statistical training that can help empower beneficial decision-making via correct calibration of beliefs. (Statistical training to keep track of the mean past payoff/observation can be helpful in a general sense; see my paper on the evolution of human cognitive biases and implications.) At the moment, statistical training is provided to a very small percentage of people, and most provisions of statistical training are not laser-focused on the improvement of practical learning/decision-making capabilities, but are instead aimed at other indirect goals (e.g., as a prerequisite for STEM undergraduate majors). It may be helpful to (1) encourage practical, impactful aspects in the provision of statistical training and (2) broaden its provision to a wider segment of people."
(Quote from my post "Broadening statistical education")
Given resource limitations, it may make sense to target the provision of practical statistics training to high-impact decision-makers, such as those in government. An ambitious example is that just as the US president is given a confidential briefing about the nuclear protocols, so too can the president be briefed about statistical reasoning and how to thereby make well-calibrated decisions on behalf of the nation.
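As one concrete example of what "practical" statistical training could include: having decision-makers score their own probability judgments against outcomes. A minimal sketch using the Brier score (numbers are made up for illustration):

```python
def brier_score(probs, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; confident wrong calls are punished heavily."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

events = [1, 0, 1, 0]                                 # what actually happened
print(brier_score([0.99, 0.01, 0.99, 0.99], events))  # overconfident: ~0.245
print(brier_score([0.80, 0.20, 0.80, 0.60], events))  # better calibrated: 0.12
```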
Peter S. Park @ 2022-03-01T20:38 (+16)
Mental health treatment to prevent anthropogenic catastrophic/existential risks
Biorisk and Recovery from Catastrophe
Issues of mental health can be very harmful to the well-being of the self and others. The degree to which this harm can occur can, when combined with technology, even result in catastrophic/existential risks. (The Russian invasion of Ukraine, the cause of which may be the mental state of Putin, can plausibly lead to nuclear war. Another example is engineered pandemics.) Given the disproportionately anthropogenic skew of catastrophic/existential risks, research/funding/advocacy for mental health treatment (general or targeted) may help prevent such risks.
IanDavidMoss @ 2022-03-01T21:59 (+17)
Reminds me of some of the proposals here: https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors
We therefore consider interventions to reduce the expected influence of malevolent humans on the long-term future.
- The development of manipulation-proof measures of malevolence seems valuable, since they could be used to screen for malevolent humans in high-impact settings, such as heads of government or CEOs. (More)
- We also explore possible future technologies that may offer unprecedented leverage to mitigate against malevolent traits. (More)
- Selecting against psychopathic and sadistic tendencies in genetically enhanced, highly intelligent humans might be particularly important. However, risks of unintended negative consequences must be handled with extreme caution. (More)
Peter S. Park @ 2022-03-04T03:07 (+1)
Yes, I think these proposals together could be especially high-impact, since people who pass screening may develop issues of mental health down the line.
mkmkmk @ 2022-03-01T18:47 (+16)
Rule of Law Fund
Values and Reflective Processes and Economic Growth
A strong rule of law helps ensure equity, human rights, property rights, contract enforcement, and due process. Many countries are still developing their legal systems. Between 2010 and 2020 twenty-four different countries ratified a constitution. The legal systems that evolve today will have a lasting impact on future generations.
This fund would offer funding for organizations and individuals engaged in legal scholarship and litigation that align with the Future Fund’s guiding principles, with a specific focus on strengthening the rule of law in countries with less developed legal institutions.
Chris Leong @ 2022-03-01T13:22 (+16)
Reflection Retreats
Effective Altruism
There are certain points in our lives when the decisions we make can greatly affect their trajectory. This could include deciding what degree to study, graduating, or making a major career change. These retreats would bring together a bunch of EAs (possibly some non-EAs too) to reflect on these decisions and start making applications/plans, etc.
JanBrauner @ 2022-03-01T10:32 (+16)
AI alignment prize suggestion: Improve our ability to evaluate (and provide training signal for) fuzzy tasks
Artificial Intelligence
There are many tasks that we want AI systems to do, for which performance cannot be evaluated automatically (and thus training signal provision is hard). If we don't make progress on our ability to train systems for such tasks, we might end up in a world full of systems that optimise for that which is easy to measure, rather than what we actually want. One example of such a task is the evaluation of free-form text; there is currently no automated method to evaluate free-form text (with respect to criteria such as usefulness or correctness) that matches human evaluation. The Future Fund could offer prizes for work that takes a task for which the gold-standard of evaluation is humans, and demonstrates an automated evaluation method that matches human evaluation very closely (or work that demonstrates an automated evaluation method to be superior to human evaluation).
Note: This is crucially not the same as "training models to perform well on the task in question". There are a number of technical reasons why what I suggest is easier. Intuitively, evaluating performance is often considerably easier than generating good performance. For example, I can watch a movie and say if it's good, but I can't make a good movie.
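To illustrate what "matches human evaluation very closely" could mean operationally, here is a hedged sketch that scores a hypothetical automated evaluator by its rank agreement with human raters (all names and numbers are invented for illustration):

```python
from scipy.stats import spearmanr

human_ratings = [4, 2, 5, 1, 3]           # e.g. usefulness of five texts, rated by people
auto_scores = [0.8, 0.3, 0.9, 0.1, 0.55]  # scores from some candidate automated method

rho, _ = spearmanr(human_ratings, auto_scores)  # rank correlation with the humans
print(f"rank agreement with human evaluation: rho = {rho:.2f}")  # 1.00 here
```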
Chris Leong @ 2022-03-01T08:30 (+16)
EA Programming Bootcamp
Effective Altruism
Providing a programming bootcamp to members of the Effective Altruism community could be a way of assisting struggling community members whilst avoiding the issues inherent in directly providing cash assistance. It could also allow community members to accelerate their career progression.
Notes: See the comments here for some of the issues with giving cash.
I suspect that the impact of this would be larger than it first appears as a) talented people generally want to be part of a community where people are successful b) if community members are struggling then that takes up the time of other community members who try to help them.
Yonatan Cale @ 2022-03-01T17:04 (+5)
I think funding programming bootcamps is a great idea (if anyone needs it) and I intend to fund the first 3 people who'll ask me even just to see how it goes.
[This is not a formal commitment because I don't want to think of all the edge cases like a $10k course; but I do currently intend to do it. DM me if you want]
Chris Leong @ 2022-03-01T20:59 (+6)
This is not a formal commitment because I don't want to think of all the edge cases like a $10k course; but I do currently intend to do it. DM me if you want
Most proper bootcamps are very expensive, like about that kind of rate I'd guess.
zdgroff @ 2022-03-07T20:34 (+15)
Research institute focused on civilizational lock-in
Values and Reflective Processes, Economic Growth, Space Governance, Effective Altruism
One source of long-term risks and potential levers to positively shape the future is the possibility that certain values or social structures get locked in, such as via global totalitarianism, self-replicating colonies, or widespread dominance of a single set of values. Though organizations exist dedicated to work on risks of human extinction, we would like to see an academic or independent institute focused on other events that could have an impact on the order of millions of years or more. Are such events plausible, and which ones should be of most interest and concern? Such an institute might be similar in structure to FHI, GPI, or CSER, drawing on the social sciences, history, philosophy, and mathematics.
CristinaSchmidtIbáñez @ 2022-03-06T17:35 (+15)
Nonprofit Growth Research Think Tank/Consultancy
EA Ops, Effective Altruism
Most EA organisations and projects will be faced (at several points during their organisational lifecycle) with changes to their organisations due to the growth of their teams.
If handled poorly, a team can grow with many "growing pains", such as processes, policies, financial systems, (project) management and organisational/team structures that are not fit for the new status quo.
We'd love to see an organization that guides other EA organisations on their path to growth by identifying the right strategies and blind spots to manage the change phase in a period of growth.
Denis Drescher @ 2022-03-05T20:24 (+15)
Prevent stable global totalitarian regimes through uncensorable broadcasts
Great Power Relations, Epistemic Institutions
Human civilization may get caught in a stable global totalitarian regime. Current and past totalitarian regimes have struggled with influences from the outside. So it may be critical to make sure now that future global totalitarian regimes will also have influences from the outside.
North Korea strikes me as a great example of a totalitarian regime straight out of 1984. Its systematic oppression of its citizens is so sophisticated that I could well imagine a world-wide regime of this sort to be stable for a very long time. Even as it exists today, it’s remarkably stable.
The main source of instability is that there’s a world all around North Korea, and especially right to its south, that works so much better in terms of welfare, justice, prosperity, growth, and various moral preferences that are widely shared in the rest of the world.
There may be other sources of instability – for example, I don’t currently understand why North Korea’s currency is inflated to worthlessness – but if not, then we, today, are to a hypothetical future global totalitarian state what the rest of the world is to North Korea.
Just like some organizations are trying to send leaflets with information about the outside world into North Korea, so we may need to try to send messages into the future just in case a totalitarian dystopia takes hold. These messages would need to be hard to censor and should not depend on people acting against their self-interest to distribute them. (Information from most normal time capsules could easily be suppressed.) Maybe a satellite could be set on a course that takes it past Earth every century and projects messages against the moon. This is probably not the most cost-effective method, so I’d first like to think about approaches to this more. (From my blog.)
MaxRa @ 2022-03-10T19:18 (+2)
Interesting idea.
The main source of instability is that there’s a world all around North Korea
Have you thought more about sources of instability and weighed them? Would be interested. Others that come to mind are:
- North Korean citizens must be fairly unhappy about a lot of what the government does, and it wouldn't take much for them to support a coup against it
- the military leadership is never perfectly aligned with the government and historically seems ready to coup under certain circumstances
- having successors that can sustain autocratic rule
Denis Drescher @ 2022-03-12T17:28 (+4)
I’ve written this article about human rights in North Korea. Some parts are probably outdated now, but others are not, and the general lessons hold, I think.
-
All but very few of the citizens are isolated from all information from the outside, so that they have no way to know that the rest of the world isn’t actually envious of the prosperity of North Korea, that they aren’t under a constant threat from the US, that the south isn’t just US-occupied territory, etc. The only things that can weaken this information monopoly are phone networks from China that extend a bit across the border, leaflets from South Korea, and similar influences from the outside. But they are localized because people are not allowed to move freely within the country. The information monopoly of the government is probably fairly complete a bit further away from the borders. But note that I haven’t been following this closely in the past 5 years.
They also have this very powerful system in place where everyone is forced to snitch on everyone else if they learn that someone else knows something that they shouldn’t know, or else they and their whole family can go to prison or a concentration camp. The snitching is also systematically, hierarchically organized, so that there are always overseers for small groups of citizens, and those overseers have their own overseers and so on, so that everyone can efficiently be monitored 24/7.
A big exception to that is all the “corruption” and the gray markets. They’ve basically become the real economy of the country. But those are mostly based on Chinese currency, Chinese phones and networks, etc. So again I think black markets would be easier to prevent if there were no outside influences.
-
Without outside forces to defend against, you can concentrate completely on using the military as a mechanism of oppression as opposed to giving it any real power. Almost everyone in NK is in the military but that’s just to keep them busy and to have them build stuff. They have no useful military training. The real military in NK is said to be well-trained but tiny by comparison. It would probably not be needed and even a risk factor if it weren’t for other countries.
-
That was probably a real mistake that Kim Il-sung made. Everyone thought that he was immortal, so when he died it was probably hard to spin. He should’ve predicted that he might die and created a fictional ruler from the start who would then really be immortal, sort of like with God and the pope or something. Generally he combined the most successfully manipulative strategies of Stalin, Hitler, Mao, and others, so this seems like a strange lapse in evil judgment. An even more perfect system of oppression would probably not make such a mistake. But I suppose after his death he probably became the sort of incorporeal, immortal leader, so maybe that loose end is tied up now too, sadly.
Peter S. Park @ 2022-03-10T19:53 (+1)
It's plausible that compared to a stable authoritarian nuclear state, an unstable or couped authoritarian nuclear state could be even worse (in the worst-case scenario and potentially even in expected value).
For a worst-case scenario, consider that if a popular uprising is on the verge of ousting Kim Jong Un, he may desperately nuke who-knows-where or order an artillery strike on Seoul.
Also, if you believe these high-access defectors' interviews, most North Korean soldiers genuinely believe that they can win a war against the U.S. and South Korea. This means that even if there is a palace coup rather than a popular uprising, it's plausible that an irrational general rises to power and starts an irrational nuclear war with the intent to win.
So I think it's plausible that prevention is an entirely different beast than policy regarding already existing stable, authoritarian, and armed states.
Peter S. Park @ 2022-03-03T08:44 (+15)
Facilitating relocation
Economic growth, Effective altruism
People are over-averse to moving, even if moving leads to much better opportunities (e.g., when a volcano destroyed a fraction of nearby houses, the inhabitants who were forced to move ended up better off in earnings and education, conditional on being young; see this paper). Research and incentivization can help reduce this over-aversion.
It is plausible that even EAs underconsider relocation. If so, a lot of value may be achieved by convincing EAs to relocate to high-impact career opportunities and by facilitating those moves.
Jackson Wagner @ 2022-03-03T20:57 (+5)
Personally I believe that we should go even further, and look into using assurance contracts to help create "affinity cities" and zoom-towns based on common interests -- we should create new EA hubs in well-chosen parts of the USA, then when people move there we can experiment with various kinds of community support (childcare, etc) and exciting new forms of community governance/decisionmaking (maybe all the EAs who use a coworking space pay a fee that gets spent on community-improvement projects as decided by a quadratic-funding process).
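On the quadratic-funding mechanism mentioned above, a minimal sketch of the standard formula (from Buterin, Hitzig, and Weyl's "A Flexible Design for Funding Public Goods"): a project's matching subsidy grows with the breadth of its support, not just the total contributed. Numbers are illustrative.

```python
import math

def quadratic_match(contributions):
    """contributions: individual donations to one project (USD).
    Returns the subsidy under the standard quadratic funding rule:
    (sum of square roots)^2, minus what was already contributed."""
    total = sum(contributions)
    return sum(math.sqrt(c) for c in contributions) ** 2 - total

# Many small donors attract a larger match than one big donor of the same total:
print(quadratic_match([1.0] * 100))  # 100 donors x $1  -> $9,900 match
print(quadratic_match([100.0]))      # 1 donor x $100   -> $0 match
```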
Besides the direct effect of creating new, well-functioning EA community hubs in a variety of useful locations, I think that supporting "affinity cities" in general (making them easier for other groups to start, providing a best-practices template of what they can be, etc) would have powerful effects for creating "governance competition" (cities and towns trying to improve and reform themselves in order to sell themselves as a zoom-town destination) and encouraging more cultural/legal/institutional experimentation which has positive externalities for the whole society (since everyone benefits from adopting the fruits of the most successful experiments).
I have numerous additional thoughts on this subject, which unfortunately this comment is too small to contain. Hopefully it'll become a Forum post soon. In the meantime, just facilitating individual moves like you're saying would probably be helpful, although it would be strange to have an independent group working solely on this. Better perhaps to build a culture where large EA organizations are especially willing to help their employees with moving. (IMO they are already trying to do this to some extent, for instance many EA orgs try to have the ability to easily hire internationally.) This would be similar to how many EA orgs make a special effort to compensate people for time spent applying for EA jobs -- getting paid for time spent on a job application is much more common in EA than in most other fields.
Peter S. Park @ 2022-03-04T02:42 (+1)
Thanks for the great big-picture suggestions! Some of these are quite ambitious (in a good way!) and I think this is the level of out-of-the-box thinking needed on this issue.
This idea goes hand-in-hand with a previous post "Facilitate U.S. voters' relocation to swing states." For a project aiming to facilitate relocation to well-chosen parts of the US, it could be additionally impactful to consider geographic voting power as well, depending on the scale of the project.
Nathan Young @ 2022-03-02T01:23 (+15)
EA/AI Hiring Round
Effective Altruism/ AI Safety
Meet with a variety of organisations and design a short set of questions to best predict good candidates for roles. Allow anyone to take this test every 3 months and apply for a broad range of positions, e.g. all EA ops roles in their city or all AI safety roles. Hire more, higher-quality candidates.
evelynciara @ 2022-03-02T05:30 (+4)
Candidates could also be matched with orgs using an algorithm like the one used by the National Resident Matching Program.
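For readers unfamiliar with it, the NRMP match is built on applicant-proposing deferred acceptance (Gale and Shapley's algorithm, extended by Roth and Peranson to handle couples and program capacities). A minimal single-seat sketch of the core loop, with illustrative names:

```python
def deferred_acceptance(applicant_prefs, org_prefs):
    """applicant_prefs: {applicant: [orgs, best first]}
    org_prefs: {org: [applicants, best first]}
    Returns a stable matching {org: applicant}, one seat per org."""
    rank = {o: {a: i for i, a in enumerate(p)} for o, p in org_prefs.items()}
    free = list(applicant_prefs)           # applicants still proposing
    nxt = {a: 0 for a in applicant_prefs}  # index of next org to try
    match = {}                             # org -> tentatively held applicant
    while free:
        a = free.pop()
        if nxt[a] >= len(applicant_prefs[a]):
            continue                       # a has exhausted their list
        o = applicant_prefs[a][nxt[a]]
        nxt[a] += 1
        held = match.get(o)
        if held is None:
            match[o] = a                   # empty seat: hold a tentatively
        elif rank[o].get(a, float("inf")) < rank[o].get(held, float("inf")):
            match[o] = a                   # o prefers a: bump the incumbent
            free.append(held)
        else:
            free.append(a)                 # rejected: a keeps proposing
    return match

print(deferred_acceptance({"alice": ["x", "y"], "bob": ["x"]},
                          {"x": ["bob", "alice"], "y": ["alice"]}))
# -> {'x': 'bob', 'y': 'alice'}
```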
ZacharyRudolph @ 2022-03-02T00:36 (+15)
Funding private versions of Longtermist Political Institutions to lay groundwork for government versions
Some of the seemingly most promising and tractable ways to reduce short-termist incentives for legislators are Posterity Impact Assessments (PIAs) and Futures Assemblies (see Tyler John's work). But it isn't clear just how PIAs would actually work, e.g. what would qualify as an appropriate triggering mechanism, what evaluative approaches would be employed to judge policies, or how far into the future policies can be evaluated. It seems like it would be relatively inexpensive to fund an organization to do PIAs in order to build a framework which a potential in-government research institute could adopt instead of having to start from scratch. The precedent set by this organization seems like it would also contribute to reducing the difficulty of advocating for longtermist agency/research institutes within government.
Similarly, it would be reasonably affordable to run a trial Futures Assembly wherein a representative sample of a country's population is formed to deliberate over how and to what extent policy makers should consider the interests of future persons/generations. This would provide a precedent for potential government funded versions as well as a democratically legitimate advocate for longtermist policy decisions.
Basically, EAs could lay the groundwork for some of the most promising/feasible longtermist political institutions without first needing to get legislation passed.
Peter S. Park @ 2022-03-01T22:11 (+15)
Movement-building targeted at existential-risk-relevant fields' international scientific communities
Biorisk, Artificial Intelligence
Scientists of the Manhattan Project built the first nuclear bombs, the development and use of which normalized nuclear proliferation. Contrast this with bioweapons, which in principle could also have been normalized if not for the advocacy of scientists like Matthew Meselson, which led to a lasting international agreement to not develop bioweapons (Biological Weapons Convention).
Targeted efforts to build the movement of reducing catastrophic/existential risks (and longtermism in general) specifically in the international scientific communities of fields that are highly relevant to certain existential risks, whose lasting cooperation would be crucial for the non-realization of these risks, could potentially be very impactful. Some potential approaches include funding of fellowships/grants/collaboration opportunities, creating scientific societies/conferences, and organizing advocacy/outreach/petitions.
Zane @ 2022-03-01T19:16 (+15)
Towards Better Epistemology in Medicine
Epistemic Institutions, Values and Reflective Processes
Medicine is a field subject to an incentive landscape that can, among other issues, encourage pathological risk aversion in treatment and research, which holds back patients from getting the care with the greatest expected value to them and limits our ability as a society to adapt to new and changing health issues such as global pandemics. Medical professionals are often trained in a narrow set of epistemic norms that lead to slow updates on new evidence, overreliance on individual decisionmaking, and difficulty communicating about complex tradeoffs. The unavoidable closeness to moral and ethical issues, as well as difficulties in reasoning about decisions that hold lives directly in the balance, exacerbates the problem.
We're interested in projects that address these problems, perhaps including the following:
- Literature and media that promote truth-seeking and expected-value-thinking norms in medicine, whether explicitly in non-fiction or training material, or in fictional settings
- Resources that seek to aggregate medical evidence relevant to a specific condition or clinical application, and attempts to normalize bringing up such a resource with your medical provider
- Efforts to open-source medical metadata, particularly with regard to outcomes of different treatment plans, and to precisely relax certain regulations that prevent this data from being collected
- Increasing incentives for the reporting of null results, unconventional results, and meta-analyses of existing medical studies; establishment of specific prizes for meta-analysis studies, and literature that communicates neutral and evidence-based research effectively at a layperson's level
Jackson Wagner @ 2022-03-01T23:10 (+3)
I think this is quite important insofar as:
- It could help change the existing academic culture of overly-restrictive "bioethics" around public health issues like pandemics to think more rationally about when to approve things like rapid tests and vaccines, when to impose mandates and travel bans versus not, etc.
- It might lead to broader reforms and readjustments of focus, leading to a faster pace of developing medicines (ultimately saving many QALYs), reductions in healthcare cost, more progress in understanding aging, etc.
One reason not to focus on this intervention is if you thought that general epistemology-improving efforts across academia would work well, and there's no particular reason to target medicine/bioethics/etc first.
michaelchen @ 2022-03-01T04:53 (+15)
AI safety university groups
Artificial Intelligence
Leading computer science universities appear to be a promising place to increase interest in working to address existential risk from AI, especially among the undergraduate and graduate student body. In Q1 2022, EA student groups at Oxford, MIT, Georgia Tech, and other universities have had strong success with AI safety community-building through activities such as facilitating the semester-long AGI Safety Fundamentals program locally, hosting high-profile AI safety researchers for virtual guest speaker events, and running a research paper reading group. We'd also like to see student groups which engage students with opportunities to develop relevant skills and which connect them with mentors to work on AI safety projects, with the goal of empowering students to work full-time on AI safety. We'd be happy to fund students to run AI safety community-building activities alongside their studies or to take a gap semester, or to sponsor other people to support an EA group at a leading university in building up the AI safety community.
Some additional comments on why I think AI safety clubs are promising:
- For those unfamiliar, the AGI Safety Fundamentals alignment track is a reading group to learn about AI safety over the course of 8+ weeks, with discussions led by a facilitator familiar with the readings. The curriculum is written by Richard Ngo, a researcher at OpenAI.
- EA at Georgia Tech (my group) has over 36 participants in our AI Safety Fundamentals program. To give a sense of demographics, 22 are on-campus, 11 are online master's students, 1 is a TA, and 2 are alumni. I haven't done a formal count but I think most of our applicants are fairly new, if not completely new, to both AI safety and EA. As part of our application, we had applicants read Vox's The case for taking AI seriously as a threat to humanity and the introduction to The Precipice. Even though we accepted all but one applicant, most applicants were quite interested in existential risk from AI. I think the main way we got applicants was through simple emails to the College of Computing newsletter, which were sent out to all the CS students. Though we had the benefit of having an EA student group already established the prior semester, only four applicants had prior engagement with our group, so I don't think it was a major factor for our applicant pool.
- OxAI Safety Hub has been able to have an impressive lineup of guest speakers. Their first event with Rohin Shah from DeepMind attracted 70 attendees (though it's worth noting that OxAI Safety Hub has the benefit of being at the location with the largest EA student group already). They plan on running AGI Safety Fundamentals locally and starting a local summer research program connecting students to local mentors to work on AI safety projects.
- MIT's unofficial new AI Safety Club has apparently been quite successful with an interpretability reading group and talk series. I'd like to thank Kaivu from MIT for inspiring me to think about AI safety clubs in the first place.
- For those who don't have time to facilitate several cohorts of AGI Safety Fundamentals, we might be able to obtain most of the same value by broadly advertising the virtual AGI Safety Fundamentals program run by EA Cambridge. That said, I'm not sure the EA Cambridge application and acceptance process used in early 2022 would be suitable for people who are completely new to EA or AI safety. EA NYU was able to get 40+ applications to their AI Alignment Fellowship program (based on the AGI Safety Fundamentals technical track) weeks ahead of the application deadline, and recruited virtual facilitators in order to have enough capacity to facilitate the program.
- The part about having people from outside the university help run the group is basically the campus specialist position proposed by the Centre for Effective Altruism, but applied to AI safety instead of EA.
- I wanted to make this proposal fairly concrete and grounded in existing examples to demonstrate feasibility. But if this sounds too under-ambitious, some ways that a local AI safety community group could deploy funding could be: having a large team of organizers, sponsoring value-aligned members to attend bootcamps to skill up, and offering stipends for research fellowships. For reference, CEA claims that a campus specialist "could be leading a large team and managing a multi-million dollar budget within three years of starting".
MaxRa @ 2022-03-11T16:51 (+2)
Thanks for sharing this idea, super exciting to me that there is so much traction for getting junior CS people excited about AI Safety. I'd love to see much more of this happen and will likely (70%?) try to spend > a day thinking about this in the next month. If you have more ideas or pointers to look into, would highly appreciate it.
Chris Leong @ 2022-03-01T04:48 (+15)
EA Crisis Fund
Effective Altruism/X-risk
The EA Crisis Fund would respond to crises around the world, such as the current Ukrainian refugee crisis. This would help develop EA's capability to respond to novel situations on short timelines, provide great publicity, and build connections and credibility with governments. This would increase the chance that EA would have a seat at the table in important discussions.
Potential Downside: It may be hard to respond to these crises in a way that builds credibility without burning a lot of money.
Jackson Wagner @ 2022-03-01T21:07 (+8)
I think if we are just jumping into the same highly-salient crises as everybody else (Ukraine today, Afghanistan yesterday, Black Lives Matter, Covid, etc.), we burn a lot of money quickly at only middling effectiveness (even if we try to identify specific "most effective" interventions in each crisis, like providing oxygen tanks to Indian hospitals during their covid surge). We also don't get a huge amount of publicity, because everybody else is playing that same game (see: Elon Musk giving starlinks to Ukraine, etc.).
This idea maybe works better if we are trying to respond to other crises elsewhere in the world that everyone else isn't already going bananas over -- like doing famine/disaster relief in countries that aren't getting headlines, or doing pandemic early-response stuff before the world realizes it's a problem, or having some kind of "Pivotal Action Fund" on hair-trigger alert to attempt a response to the potential emergence of transformative AGI capabilities. I'm not sure what specific approaches such a fund would use to reliably improve response times above the current situation (which is presumably "OpenPhil has the ability to spend a lot of money fast if they all start really freaking out about an emerging crisis"), but I'd certainly be interested to hear someone explore this idea.
IanDavidMoss @ 2022-03-03T01:22 (+4)
I think the experience of the FRAPPE donor circle, which formed in response to the first COVID wave in spring 2020, is relevant. We found that it didn't take that much money or time for us to be able to 1) get access to high-quality, often non-public information about the crisis and how it was unfolding and 2) find strong giving opportunities that not enough other people were paying attention to. I like Chris's idea because the combination of high salience + fast-moving environment is often a good one for finding high-leverage opportunities, but it's easier to intervene effectively and take on a leadership role when you have gone to the trouble of setting up some infrastructure for it in advance.
Chris Leong @ 2022-03-01T02:37 (+15)
Mentors/tutors for AI safety
AI Safety
Many people want to contribute to AI safety, but they may not be able to reach the level where they could conduct useful research themselves. On the other hand, given time, many of these people could probably become knowledgeable enough about a particular agenda to mentor potential researchers pursuing it. These mentors could help people understand the reasons for and against pursuing a particular agenda, help them navigate the content that has been written on that topic, address common misconceptions, and help people who are confused about a particular point.
Kat Woods @ 2022-03-06T18:10 (+14)
Academic AI Safety Journal
Start an Academic Journal for AI Safety Research
Problem: There isn’t one. There should be. It would boost prestige and attract more talent to the field.
Solution: Fund someone to start one.
Gavin @ 2022-03-07T17:50 (+20)
This has come up a few times before and is controversial.
Pros:
- more incentive for academics to work on pure safety without shoehorning their work
- higher status
- better peer review / less groupthink
Cons:
- risks putting safety into an isolated ghetto. Currently a lot of safety stuff is published in the best conferences
- Journals matter 100x less than conferences in ML
- I think academics are a minority in AIS at the moment (weighted by my subjective sense of importance anyway)
FWIW I take the first con to be decisive against it. Higher status takes a long time to build, and better peer review is (sadly) a mirage.
Elriggs @ 2022-03-14T19:27 (+1)
You can still have a conference for AI safety specifically and present at both conferences, with a caveat. From NeurIPS:
> Can I submit work that is in submission to, has been accepted to, or has been published in a non-archival venue (e.g. arXiv or a workshop without any official proceedings)? Answer: Yes, as long as this does not violate the other venue's policy on dual submissions (if it has one).
The AI Safety conference couldn't have official proceedings. This would still be great for networking and disseminating ideas, which is definitely worth it.
Denis Drescher @ 2022-03-07T14:00 (+2)
Another option may be a conference (that forms more of a Schelling point in the field than all the existing ones). These seem to be more popular in the wider field. But both solutions also have the risk that fewer people outside AI safety may read the AI safety papers.
SjirH @ 2022-03-04T10:40 (+14)
A better overview of the effective altruism community
Effective Altruism
The effective altruism movement has grown large enough that it has become hard for any individual to have a good overview of ongoing projects and existing organizations. There is currently no central repository of what is happening across different causes and parts of the movement, which means many opportunities for coordination may be left on the table. We would like to see more initiatives like the yearly EA survey, as well as a more detailed version of Ben Todd's recent post, that research and provide an overview of what is happening across the effective altruism movement.
Lauren Reid @ 2022-03-04T04:35 (+14)
Increase diversity with more ‘medium term’ plans to enable participation when travel is required
Community Building and Diversity, Values and Reflective Processes
I'm new here, and it seems like many opportunities are planned on short notice. This can work well for people with lots of flexibility, but it may discourage participation from people who are mid-career or working and people with families. I propose that organizations within EA encourage diversity by lengthening some planning horizons. Funding a stable hub with enough runway to have a 6-month planning horizon would be helpful for professionals and parents like my family.
JanBrauner @ 2022-03-02T19:14 (+14)
Enlightenment at scale (provocative title :-) )
Values and Reflective Processes (?), X-risk (?)
A strong meditation practice promises enticing benefits to the meditator: less suffering, more control over one's attention and awareness, more insight, and more equanimity. Brahmavihara practice promises the cultivation of loving-kindness, compassion, and empathetic joy. The world would be a much better place if everybody suffered less, had more equanimity, and felt strong compassion and empathy with other beings. But meditation is hard! Becoming a skilled meditator, and reaping these benefits, probably requires thousands of hours of dedicated practice. Most people will just not put in this amount of effort. But maybe it doesn't need to be this way. The field of meditation teaching seems underdeveloped, and innovative methods that make use of technology (e.g. neurofeedback) seem largely unexplored. We are interested in supporting scalable solutions that bring the benefits of meditation to many people.
Note:
- I don't actually know if meditation really has these benefits; this would need to be established first (there should be a fair amount of research on this by now). It seems plausible to me that meditation can be very beneficial. Several of my friends claim to have experienced significant benefits from meditation, and I think I can also point to tangible benefits in my own life.
- These innovations need not be directly related to meditation; for example, one could imagine the development of an extremely safe and non-addictive pharmaceutical substance that would let people experience, say, strong compassion, and thus increase compassion in everyday life (see e.g. the use of MDMA in therapy).
Denis Drescher @ 2022-03-06T00:04 (+2)
Are there high-quality safety trials for different meditation practices? I’ve heard of a variety of really bad side effects, usually from very intense, very goal-oriented meditative practice. The Dark Night that Daniel Ingram describes, the profligacy that Scott Alexander warned of, more inefficient perception that Holly Elmore experienced, etc. I have no idea how common those are and whether one is generally safe against them if one only meditates very casually… It would be good to have more certainty about that, especially since a lot of my friends are casual meditators.
Peter S. Park @ 2022-03-02T02:56 (+14)
Optimal 90-second pitches for EA/longtermism
Effective altruism
Longtermism is nuanced; a full discussion requires a large amount of time. More people might be interested in learning about the movement if they were presented with a short but compelling pitch suited to the pace of many people's lifestyles. (I've given a spontaneous and very suboptimal pitch for EA on at least one occasion, which I regret.)
Optimized 90-second or so pitches may help the movement's outreach. Persuasive pitches (each focused on one of a myriad of topics/angles that the listener may be interested in) could be selected by community contests/focus groups and posted online, both for viewing and for informing movement builders' efforts.
Peter S. Park @ 2022-03-02T03:13 (+3)
Addendum: Found out that this is just a special case of James Ozden's 'Refining EA communications and messaging'
Alex D @ 2022-03-01T17:10 (+14)
Monitoring and advocacy to make Zoonotic Risk Prediction projects safer
Biorisk and recovery from catastrophe
Following COVID-19, a great deal of funding is becoming available for "Zoonotic Risk Prediction" projects, which intend to broadly sample wildlife pathogens, map their evolutionary space for pandemic potential, and publish rank-ordered lists of the riskiest pathogens. Such work is of dubious biodefence value, presents a direct risk of accidental release in the field and the lab, and generates information that is a clear biosecurity infohazard.
We would be excited to fund efforts to collect, monitor, and report on the activities of these projects. ZRP projects have multiple components (field sampling, computational modelling, and lab characterization), each of which carries distinct risks and leaves an information trail. Monitoring and reporting on open-source information associated with ZRP projects could disincentivize the riskiest aspects of this work, target resources for event surveillance and early warning of accidental release, and provide material for advocacy efforts.
There is some overlap with portions of the BWC project, but I think this is best tackled as a separate body of work/by a different team (due to radically different OPSEC, deception, and scrutiny profiles). I've thought about this a fair bit and am happy to discuss offline.
Chris Leong @ 2022-03-01T04:23 (+14)
Better Reporting on Other Countries' Perspectives
Epistemic Institutions
(Refinement of better news)
It's very hard for a regular person to understand what the Russian or Chinese or Turkish perspectives on events are from reading Western media. It would be valuable to have a high-quality mainstream news media source that takes special effort to make sure that this is explored, including by having on-staff anthropologists. This would increase understanding between countries and reduce the chance of Great Power Conflict.
Denis Drescher @ 2022-03-05T23:47 (+4)
I wonder whether Larissa MacFarquhar (author of Strangers Drowning) may be someone to talk to about this. She managed to understand Julia Wise so well that I learned new things about myself from reading her chapter in the book. The only other people who can do that are close friends of mine who’ve known me for years. Maybe Larissa is just similar to Julia and so had this level of insight, but maybe she’s also exceptionally gifted at perspective-taking.
Chris Leong @ 2022-03-01T03:58 (+14)
Agent Foundations and Philosophy Engagement Fund
AI Safety
Agent Foundations research may be important for AI safety, but it has so far received very little engagement from the philosophical community. This fund would offer funding and/or scholarships for people who want to engage with these ideas in an academic philosophical context. The project aims to improve clarity about whether this research is actually worthwhile and, if so, to help make progress on these problems.
Adam Binks @ 2022-03-07T22:13 (+13)
EA Founders Camp
Effective altruism, empowering exceptional people
The EA community is scaling up, and funding ambitious new projects. To support continued growth of new organisations and projects, we would be excited to fund an organisation to run EA Founders Camps. These events would provide an exciting, sparky environment for (1) Potential founders to meet co-founders, (2) Founders to hear about and generate great ideas for impactful projects and organisations, (3) Founders to get key training tailored to their project area, (4) Founders to build a support network of other new and existing founders, (5) Founders to connect with funders and advisers.
Guillaume Corlouer @ 2022-03-06T18:37 (+13)
Regulating AI consciousness
Artificial intelligence, Values and reflective process
The probability that AIs will be capable of conscious processing in the coming decades is not negligible. With the right information dynamics, some artificial cognitive architectures could support conscious experiences. The global neural workspace is an example of a leading theory of consciousness compatible with this view. Furthermore, if it turns out that conscious processing improves learning efficiency, then building AIs capable of consciousness might become an effective path toward more generally capable AI. Building conscious AIs would have crucial ethical implications, given their high expected population. To decrease the chance of bad moral outcomes, we could follow two broad strategies. First, we could fund policy projects that work with regulators to ban or slow down research that poses a substantial risk of building conscious AI. Regulations slowing the arrival of conscious AIs could remain in place until we gain more moral clarity and a solid understanding of machine consciousness. For example, the philosopher Thomas Metzinger has advocated a moratorium on synthetic phenomenology in a previously published paper. Second, we could fund more research in machine consciousness and philosophy of mind to improve our understanding of synthetic phenomenology in AIs and their moral status. Note that machine consciousness is currently a very neglected academic field.
CristinaSchmidtIbáñez @ 2022-03-06T17:30 (+13)
Vetting and matchmaking organization of consultants and contractors for EA founders
Empowering Exceptional People, Effective Altruism
Founders of new projects, charities, and other EA-aligned organisations can have an extremely high impact. These individuals tend to suffer more from issues such as overwhelm and burnout, which can easily lead them to have much less impact in both the short and long term. A potential intervention is to decrease their decision-making overload by helping them outsource some of their decision-making.
We'd love to see an organization that offers vetting and matchmaking for independent consultants and contractors in several relevant areas of decision-making for these people so they can tap into knowledge and expertise faster with less effort and cognitive load.
This service can be considered an expansion of this idea by aviv.
Taras Morozov @ 2022-03-04T15:01 (+13)
Open-source intelligence agency
Great Power Relations
Create an organization that will collect and analyze open-source intelligence on critical topics (e.g. the US nuclear arsenal; more examples below) and publish it online.
Many documents on the US nuclear arsenal and military activities were obtained through the Freedom of Information Act, but they were never analyzed properly because doing so is a lot of tedious work that journalists do not have the capacity or incentive to do. Standard open-source intelligence-gathering methods can provide even more information. As a result, there is only a limited public understanding of important sources of x-risk.
Possible subjects of investigation:
- State of the nuclear arsenals of the US and Russia.
- Military development of artificial intelligence.
- Propaganda and hacking capabilities of Russia and China.
- State of AI arms races, both between states and between companies.
- Monitoring the activities of the secret services of both Russia and the USA (for example, to better estimate the capabilities of the GRU, NSA, and others).
- (bioweapons has its own comment)
SjirH @ 2022-03-04T10:40 (+13)
Scaling successful policies
Biorisk and Recovery from Catastrophe, Economic Growth
Information flow across institutions (including national governments) is far from optimal, and there could be large gains in simply scaling what already works in some places. We’d love to see an organization that takes a prioritized approach to researching which policies are currently in place to address major global issues, identifying which of these are most promising to bring to other institutions and geographies, and then bringing these to the institutions and geographies where they are most needed.
Peter S. Park @ 2022-03-03T18:44 (+13)
Reduce meat consumption
Biorisk, Moral circle expansion
Research and efforts to broadly reduce meat consumption would help moral circle expansion, pandemic prevention, and climate change mitigation. Messaging from the pandemic-prevention angle (in addition to the climate change and moral circle expansion angles) may be particularly helpful.
aviv @ 2022-03-03T04:26 (+13)
Platform Democracy Institutions
Artificial Intelligence (Governance), Epistemic Institutions, Values and Reflective Processes, Great Power Relations
Facebook/Meta, YouTube/Google, and other platforms make incredibly impactful decisions about the communications of billions. Better choices can significantly impact geopolitics, pandemic response, the incentives on politicians and journalists, etc. Right now, those decisions are primarily in the hands of corporate CEOs, heavily influenced by pressure from partisan and authoritarian governments aiming to entrench their own power. There is an alternative: platform democracy. In the past decade, a new suite of democratic processes has been shown to be surprisingly effective at navigating challenging and controversial issues, from nuclear power policy in South Korea to abortion in Ireland.
Such processes have been tested around the world, overcome the pitfalls of elections and referendums, and can work at platform scale. They enable the creation of independent ‘people’s mandates’ for platform policies—something invaluable for the impacted populations, well-meaning governments which are unable to act on speech, and even the platforms themselves (in many cases at least, they don't want to decide things since it opens them up to more government retaliation). We have a rapidly closing policy window to test and deploy platform democracy and give it real power and teeth. We'd like to see new organizations to advocate for, test, measure, certify, and scale platform democracy processes. We are especially excited about exploring the ways that these approaches can be used beyond just platform policies, but also for governance of the AI systems created and deployed by powerful corporations.
(Note: This is not as crazy as it sounds; several platforms you have heard of are dedicating significant resources to actively exploring this, but they need neutral third-party orgs to work with; relevant non-profits are very interested but are stretched too thin to do much. The primary approaches I am referring to here are mini-publics and systems like Polis.)
More detail at platformdemocracy.com (not an org; just a working paper right now)
Nathan Young @ 2022-03-02T23:03 (+13)
Polis lobbying
Political
Pol.is is a tool for mapping coalitions (mentioned in this 80,000 Hours podcast). Rather than running standard polls on issues, large Polis polls could be run, as Taiwan does. These would seek to build solutions that hold broad support before taking them to lobbyists.
PhilC @ 2022-03-02T19:34 (+13)
Backup communication systems
Biorisk and Recovery from Catastrophe
In the event of GCRs, conflicts, or disasters, communication systems are key to sensemaking and coordinating effectively. They prevent chaos and further escalation of conflicts. Today, there are many threats to the global communication infrastructure, including EMPs, widespread cyber attacks, and solar flares.
Nathan Young @ 2022-03-02T02:08 (+13)
Metaculus Competitor
Forecasting
Prediction markets don't incentivise long-term questions, and the Good Judgement Open has slow question creation. This leaves Metaculus as the only place to forecast questions over long time horizons. This problem is too important to be left to a single organisation. At least one more forecasting organisation should exist to try to build the infrastructure necessary to take forecasts, improve individual forecasting, display track records, and make 5- to 1000-year forecasts.
Linch @ 2022-03-01T16:24 (+13)
Securing offices and schools against SARS-3
Biorisk
The COVID-19 pandemic has demonstrated failures of our scientific, political, and epistemic institutions, but also of our physical structures. We believe that accurate, high-quality designs for offices and schools that are secure against the spread of airborne pathogens can (a) be directly useful, (b) potentially generalize well to future pandemics, and (c) provide the necessary training ground for building more robust and ambitious projects in the future, including large-scale civilizational refuges.
We picked offices and schools to limit the threat model and surface area, but we're in theory excited about designs that can contain pathogen spread in any well-trafficked built environment.
Jackson Wagner @ 2022-03-01T21:38 (+2)
Is there a way to get more leverage on this? Maybe:
- Research new sterilization tech (like shining UV-C light horizontally across the ceiling in a way that cleans the air but doesn't harm people) so that buildings can be retrofitted more easily, without redoing the whole HVAC system? This would count under FTX's project idea #8.
- Lobbying for better air-filtration systems to be made a requirement for schools and offices as a matter of government budgets (for schools) and regulation (for offices)? I'm sure we could swing a state or local ballot proposition in a covid-cautious and wildfire-plagued place like California.
Linch @ 2022-03-01T22:36 (+2)
I think we're bottlenecked more on really good designs than on the politics, but I'm not sure. I also vaguely have this cached view that a lot of whether built-environment innovations are used in practice depends on things that look more like building codes than office politics, but this is a pretty ill-formed view that I have low confidence in.
I guess I sort of believe all three should be done in a sane world; which things we ought to prioritize in practice will depend on a combination of "POV of the universe" modeling and the personal fit of whoever wants to implement any of these ideas.
zdgroff @ 2022-03-07T20:23 (+12)
Consulting on best practices around info hazards
Epistemic Institutions, Effective Altruism, Research That Can Help Us Improve
Information about ways to influence the long-term future can in some cases give rise to information hazards, where true information can cause harm. Typical examples concern research into existential risks, such as around potential powerful weapons or algorithms prone to misuse. Other risks exist, however, and may also be especially important for longtermists. For example, better understanding of ways social structures and values can get locked in may help powerful actors achieve deeply misguided objectives.
We would like to support an organization that can develop a set of best practices and consult with important institutions, companies, and longtermist organizations on how best to manage information hazards. We would like to see work to help organizations think about the tradeoffs in sharing information. How common are info hazards? Are there ways to eliminate or minimize downsides? Is it typically the case that the downsides to information sharing are much smaller than upsides or vice versa?
MaxGhenis @ 2022-03-07T19:00 (+12)
Comprehensive, personalized, open source simulation engine for public policy reforms
Epistemic Institutions, Economic Growth, Values and Reflective Processes
Policy researchers apply quantitative modeling to estimate the impacts of immigration reform on GDP, child benefits on fertility, safety net reform on poverty, carbon pricing on emissions, and other policies. But these analyses are typically narrow, impersonal, inflexible, and closed-source, and the public can rarely access the models that produce them.
We'd like to see a general simulation engine—built with open source code and freely available to researchers and the public—to estimate the impact of a wide variety of public policy reforms on a wide variety of outcomes, using a wide variety of customizable parameters and assumptions. Such a simulation engine could power analyses like those above, while opening up policy analysis to more intricate reforms, presented as a technology product that estimates impacts on society and one's own household.
A common technology layer for public policy analysis would promote empiricism across institutions from government to think tanks to the media. Exposing households to society-wide and personalized effects of policy reforms can align the policymaking and democratic processes, ultimately producing more effective public policy.
Disclaimer: My nonprofit, PolicyEngine, is building toward this vision, starting with the tax and benefit system in the UK and the US. We plan to apply for the Future Fund's first round.
See also "Unified, quantified world model" and "Civic sector software".
Taras Morozov @ 2022-03-07T06:53 (+12)
Create an organization doing literature reviews and research on demand
Values and Reflective Processes, Effective Altruism
Create a research organization that will offer literature reviews and research on demand to other EA organizations. It would focus on questions that are not theory-heavy and can be approached by a generalist without previous deep knowledge of the field. Previous examples of such research are the publications of AI Impacts and the literature reviews by Luke Muehlhauser.
Besides the research itself, this would be useful because:
- it frees up the time of senior researchers;
- it can be a good training ground for junior researchers;
- it may enable a larger infusion of valuable ideas from academia.
PeterSlattery @ 2022-03-08T03:28 (+4)
I think that this is a good idea and something that READI could be interested in supporting. We have extensive experience doing reviews as research consultants and in providing related training (both as volunteers and professionals). One related idea that some of us are exploring is developing a sort of 'micro-course and credential' to (i) train EAs to do reviews and (ii) curate teams of credentialed junior researchers who can support literature reviews under the supervision of an expert.
Jakob @ 2022-03-07T11:17 (+3)
Would this be another organization like Rethink Priorities, or is it different from what they are doing? (Note: I don't think this space is crowded yet, so even if it is another organization doing the same things, it could still be very helpful!)
PeterSlattery @ 2022-03-07T02:50 (+12)
Creating more EA relevant credentials
Movement building
EA wants to equip young people with the knowledge and motivation to improve the long-term future by providing high-quality online educational resources for anyone in the world to learn about effective altruism and longtermism. Most young people follow established education paths (e.g., school, university, and professional courses) and seek related credentials during this time. There are relatively few credentialed courses or activities that provide exposure to EA ideals and core capabilities. We would therefore like to fund more of these. For instance, these might include talent-based scholarships (e.g., a 'rising social impact star' award), cause-related Olympiads (e.g., AI safety), MOOCs/university courses (e.g., on causes or key skill sets, with an EA tie-in), and EA-themed essay-writing competitions (e.g., asking high school students to write about 'the most effective ways to improve the world' and giving awards to the best ones).
PeterSlattery @ 2022-03-07T02:43 (+12)
New EA incubation and scaling funds and organisations
Movement building, coordination, coincidence of wants problems, & scaling
Charity Entrepreneurship, Y Combinator, Rocket Internet, and similar organisations have had notable and disproportionate economic and social impacts and have accelerated the dissemination of innovative ideas. The EA community has also called for more founders. We would therefore like to support EA and social impact funds that initiate or scale relevant initiatives such as charities (e.g., tax-deductible EA charity funds, Long-Term Future Fund equivalents, or research institutes).
CristinaSchmidtIbáñez @ 2022-03-06T17:33 (+12)
Job application support for underrepresented groups
Increasing diversity in EA, Effective Altruism
Underrepresented groups usually face additional (or exacerbated) challenges in job applications: language barriers, impostor syndrome, smaller networks, etc., which affect their application success. There are organisations within the EA ecosystem that provide career coaching, but none provides dedicated, on-demand support with job applications.
We'd love to see an organisation that provides ongoing support to people from underrepresented groups in job applications, including finding the right opportunities, preparing application documents, and preparing for interviews, so they are more likely to land high-impact roles.
Ricky @ 2022-03-06T07:36 (+12)
Economic growth
Work with developing countries to buy an area of land to form an EA special economic zone. This could be a place where EAs congregate and innovate in IT and other fields. It could also be a place where EA demonstrates new policies and technologies and pioneers new ways of thinking.
EA could expand on this idea to build communities in remote places that are likely to survive extinction events. This would provide a good opportunity to test technology that could be used in future space colonies.
ren @ 2022-03-05T22:19 (+12)
Generous prizes to attract young top talent to EA in big countries
Effective altruism
Prizes are a straightforward way to attract top talent to engage with EA ideas. They also require relatively little human capital or expertise and are therefore conceivably scalable across countries. Through a nationwide selection process optimized for raw talent, ability to get things done, and altruistic alignment, an EA prize could quickly make the movement well-known and prestigious in big countries. High school graduates and early university students would probably be the best target audience. The prize could come with a few strings attached, such as participating in a two-week-long EA fellowship, or with more intense commitments, such as working for a year on an EA-aligned project. Brazil and India are probably the best fit, considering their openness to Western ideas and philanthropic investment (in comparison to China and Russia). Other candidates may include the Philippines, where EA groups have been relatively successful, Indonesia, Argentina, Nigeria, and Mexico.
Lauren Reid @ 2022-03-05T15:18 (+12)
Fund/Create training for mental health workers
Effective Altruism
A limiting reagent in health care in Canada right now is that there aren't enough psychologists, psychiatrists, and other mental health workers. People don't have access to these services, so they end up in the emergency department and strain the health system in other ways. Mental health is fundamental for participation in societal roles; highly conscientious people are at risk, and children are waiting years for assessments (like for ADHD) that can change the course of their lives.
Psychiatry is one of the least well-paid medical specialties, and it takes many years to train psychologists and psychiatrists.
I propose funding the training of the mental health workforce, as well as lobbying to have mental health services included as essential health care services.
Arran McCutcheon @ 2022-03-04T14:58 (+12)
Website for coordinating independent donors and applicants for funding
Empowering exceptional people, effective altruism
At EAG London 2021, many attendees indicated in their profiles that they were looking for donation opportunities. Donation autonomy is important to many prospective donors, and increasing the range of potential funding sources is important to those applying for funding. A curated website that allows applicants to post requests for funding and allows potential donors to browse those requests and offer to fully or partially fund applicants seems like an effective solution.
Milan_Griffes @ 2022-03-04T13:19 (+12)
Nuclear arms reduction to lower AI risk
Artificial Intelligence and Great Power Relations
In addition to being an existential risk in their own right, large numbers of launch-ready nuclear weapons also bear on risks from transformative AI. Existing launch-ready nuclear weapon systems could be manipulated or leveraged by a powerful AI to further its goals if it decided to behave adversarially towards humans. We think the dynamics of and policy responses to this risk are under-researched and would benefit from further investigation.
aogara @ 2022-03-18T21:38 (+4)
Strongly agree with this. There are only a handful of weapons that threaten catastrophe to Earth’s population of 8 billion. When we think about how AI could cause an existential catastrophe, our first impulse shouldn’t be to think of “new weapons we can’t even imagine yet”. We should secure ourselves against the known credible existential threats first.
Wrote up some thoughts about doing this as a career path here: https://forum.effectivealtruism.org/posts/7ZZpWPq5iqkLMmt25/aidan-o-gara-s-shortform?commentId=rnM3FAHtBpymBsdT7
Greg_Colbourn @ 2022-03-05T11:02 (+2)
On the flip side, you could make part of your 'pivotal act' be the neutralisation of all nuclear weapons.
Rory Fenton @ 2022-03-03T18:29 (+12)
Pilot emergency geoengineering solutions for catastrophic climate change
Research That Can Help Us Improve
Toby Ord puts the risk of runaway climate change causing the extinction of humanity by 2100 at 1/1000, a staggering expected loss. Emergency solutions, such as seeding oceans with carbon-absorbing algae or creating more reflective clouds, may be our last chance to prevent catastrophic warming but are extraordinarily operationally complex and may have unforeseen negative side-effects. Governments are highly unlikely to invest in massive geoengineering solutions until the last minute, at which point they may be rushed in execution and cause significant collateral damage. We’d like to fund people who can:
- Identify and pilot, at large scale, top geoengineering initiatives over the next 5-10 years to develop operational lessons, e.g. promoting algae growth in a large private lake, or launching a small cluster of mirrors into space
- Develop advanced supercomputer models, potentially with input from the above pilots, of the potential negative side-effects of geoengineering solutions
- Identify and pilot harm-mitigation responses for geoengineering solutions
Epistemic status: there seems to be reasonable expert agreement on the kinds of geoengineering solutions that might work. I have no idea how much funding geoengineering pilots might need.
Conflict of interest: I work for a small, new nonprofit focused on $B giving. We are generally focused on projects that already have large implementers so have not pursued geoengineering beyond initial light research
Khorton @ 2022-03-04T21:04 (+3)
I thought China has already done some low-key geoengineering?
https://80000hours.org/podcast/episodes/kelly-wanser-climate-interventions/
Rory Fenton @ 2022-03-07T17:50 (+1)
Thanks for sharing!
My initial sense is that China's method is focused on controlling rainfall, which might mitigate some of the effects of climate change (e.g. reduce drought in some areas, reduce hurricane strength) but not actually prevent it. The ideas I had in mind were more emergency approaches to actually stopping climate change, either by rapidly removing carbon (e.g. algae in oceans) or by reducing the solar radiation absorbed at the Earth's surface (making clouds/oceans more reflective, space mirrors).
will_c @ 2022-03-03T15:01 (+12)
Incremental Institutional Review Board Reform
Epistemic Institutions, Values and Reflective Process
Institutional Review Boards (IRBs) regulate biomedical and social science research. In addition to slowing and deterring life-saving biomedical research, IRBs interfere with controversial but useful social science research: e.g., Scott Atran was deterred from studying Jihadi terrorists; Mark Kleiman was deterred from studying the California prison system; and a Florida State University IRB cited public controversy as a reason to deter research. We would like to see a group focused on advocating for plausible reforms to IRBs that allow more social science research to be performed. Some plausible examples:
- Prof. Omri Ben-Shahar’s proposal to replace exempt IRB reviews with an electronic checklist, or
- Zachary Schrag’s proposal (from Ethical Imperialism) that Congress remove social science research from OHRP jurisdiction by amending the National Research Act of 1974.
Concrete steps to these goals could be:
- sponsoring a prize for the first university that allowed use of Prof. Omri Ben-Shahar’s electronic checklist tool;
- setting up a journal for “Deterred Social Science Research”, in which professors publicly submit research proposals that their IRBs have rejected.
Peter S. Park @ 2022-03-02T19:46 (+12)
Longtermism movement-building/election/appointment efforts, targeted at federal and state governments
Effective altruism
Increasing knowledge of and alignment with longtermism in government by targeted movement-building and facilitating the election/appointment of sympathetic people (and of close friends and family of sympathetic people) could potentially be very impactful. If longtermism/EA becomes a social norm in, say, Congress or the Washington 'blob', we could benefit from the stickiness of this social norm.
Mathieu Putz @ 2022-03-02T19:41 (+12)
Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin)
Economic Growth, Effective Altruism
Is it beneficial or harmful for long-term productivity to take Modafinil, Adderall, Wellbutrin, or other stimulants on a regular basis as a healthy person (some people speculate that it might make you less productive on days when you're not taking them)? If it's beneficial, what's the effect size? What frequency hits the best trade-off between building up tolerance and short-term productivity gains? What are the long-term health effects? Does it affect longevity?
Some people think that taking stimulants regularly provides a large net boost to productivity. If true, that would mean we could relatively cheaply increase the productivity of the world and thereby increase economic growth. In particular, it could also increase the productivity of the EA community (which might be unusually willing to act on such information), including AI and biorisk researchers.
My very superficial impression is that many academics avoid researching the use of drugs in healthy people and that there is a bias against taking medications unless "needed".
So I'd be interested to see a large-scale, long-term RCT (randomized controlled trial) that investigates these issues. I'm unsure about exactly how to do this. One straightforward example would be having two randomized groups, giving the substance to one of them for X months/years, and seeing whether that group has higher earnings after that period. Ideally, the study participants would perform office jobs rather than manual labor (since that is where most of the value would come from), perhaps even especially cognitively demanding tasks, such as research or trading. In the case of research, metrics such as the number of published articles or the number of citations would likely make more sense than earnings.
One could also check health outcomes, probably including mental health. Multiple substances or different dosing regimes could be tested at once by adding study arms.
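For a sense of the required scale, here is a minimal sample-size sketch for the two-arm design described above (the effect size and other parameters are assumptions for illustration, not estimates from any study):

```python
# Power calculation for a two-arm parallel RCT comparing mean productivity.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(
    effect_size=0.2,        # assumed standardized effect (Cohen's d), a "small" effect
    alpha=0.05,             # two-sided significance level
    power=0.8,              # conventional 80% power
    alternative="two-sided",
)
print(f"Participants needed per arm: {n_per_arm:.0f}")  # roughly 390-400 per arm
```

Noisy outcomes like earnings would push the required sample higher, and extra study arms multiply it, so this is more of a lower bound on the trial's size.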
Notes:
- One of the reasons I would most care about this might be improving the effectiveness of people working to prevent X-risks, but I'm not sure whether that fits neatly into any of your categories (and whether that's intentional).
- I'm not at all sure whether this is a good idea, but tried to err on the side of over-including since that seems productive while brainstorming; I haven't thought about this much.
- It may be that such studies exist and I just don't know about them (pointers?).
- It may be impossible to get this approved by ethics boards, though hopefully it could happen in some country somewhere?
quinn @ 2022-03-02T18:20 (+12)
Sub-extinction event drills, games, exercises
Civilizational resilience to catastrophes
Someone should build up expertise and produce educational materials / run workshops on questions like
- Nuclear attacks on several cities in a 1000 mile radius of you, including one within 100 miles. What is your first move?
- Reports of a bioweapon in the water supply of your city. What do you do?
- You're a survivor of an industrial-revolution-erasing event. What chunks of knowledge from science can be useful to you? After survival, what are the steps to rebuilding?
- 6 billion people have died and the remaining billion are uniformly distributed throughout the planet's former population centers. How can you build up robustness of basic survival, food and water production, shelter, etc.?
- (for the IT folks) 5 years after number 4, basic needs are largely met, and scavengers have filled a garage with old laptops and computer parts. Can you begin rebuilding the internet to connect with other clusters around the world?
Differentially distributing these materials/workshops to people who live in geographical areas likely to survive at all could help rebuilding efforts in worlds where massive sub-extinction events occur.
Chris Leong @ 2022-03-01T13:06 (+12)
Centralising Information on EA/AI Safety
Effective Altruism, AI Safety
There are many lists of opportunities available in EA/AI safety and many lists of existing organisations. Unfortunately, these lists tend to become outdated. It would be extremely valuable to have a single list that is kept up to date and is filterable according to various criteria. This would require paying someone to maintain it part-time.
Another opportunity for centralisation would be to create an EA link shortener with pretty URLs. So for example, you'd be able to type in ea.guide/careers to see information on careers or ea.guide/forum to jump to the forum.
Notes: I own the URL ea.guide so I'd be able to donate it.
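For what it's worth, the redirect service behind such a shortener is tiny; here is a minimal sketch (the slug-to-URL table below is invented for illustration, and a real deployment would load it from a maintained datastore):

```python
# Minimal link-shortener sketch: map pretty slugs to destination URLs.
from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical mapping; the real list would be curated and kept up to date.
LINKS = {
    "careers": "https://80000hours.org/",
    "forum": "https://forum.effectivealtruism.org/",
}

@app.route("/<slug>")
def follow(slug: str):
    target = LINKS.get(slug)
    if target is None:
        return "Unknown link", 404
    return redirect(target, code=302)

if __name__ == "__main__":
    app.run()
```

As with the opportunity lists above, the hard part is curation rather than code: someone has to own the mapping and keep it current.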
Rhett_Gentile @ 2022-03-01T01:44 (+12)
Physical AI Safety
Drawing on work done in the former Soviet Union to improve safety in bioweapons and nuclear facilities (e.g. free consultations and installation of engineering safety measures, at-cost upgrades of infrastructure such as ventilation and storage facilities), this project would develop a standard set of physical/infrastructure technologies to help monitor AI development labs/hardware and provide physical failsafes in the event of unexpectedly rapid takeoff (e.g., a FOOM scenario). Although such scenarios are unlikely, standard guidelines modifying current best practices for data center safety (e.g., restrictions on devices, physical air gaps between critical systems and the broader world, extensive onsite power monitoring and backup generators) could be critical to prevent anxiety over physical and digital security from encouraging risk-taking behaviors by AI development programs (such as rushing builds, hiding locations, or using inappropriate dual-use or shared facilities that decrease control over data flows). In particular, low-tech physical hardware such as low-voltage switches has already provided demonstrable benefit in safeguarding high-tech, high-risk activity (see the Goldsboro B-52 crash, where a single low-voltage switch prevented disaster after numerous higher-tech controls failed in the chaotic environment of a bomber breaking apart in mid-air). These technologies have low dual-use risk and low costs of installation and development, but as physical hardware they are easily overlooked, whether due to lack of interest, the perceived risk of adding friction/failure points to the main mission, or the belief that high-tech safeguards are more 'reliable' or 'sophisticated'.
Avenues for progress could be establishing an international standard for physical security in AI facilities, sponsoring or subsidizing installation or retrofit into new/existing facilities, and advocacy within AI organizations for attention to this or similar problems.
SoerenMind @ 2022-05-04T13:02 (+11)
Acquire and repurpose new AI startups for AI safety
Artificial intelligence
As ML performance has improved recently, a new wave of startups is coming. Some combine top talent, carefully engineered infrastructure, a promising product, and well-coordinated teams with existing workflows and management capacity. All of these are bottlenecks for AI safety R&D.
It should be possible to acquire some appropriate startups and middle-sized companies. Examples include HuggingFace, AI21, Cohere, and smaller, newer startups. The idea is to repurpose the missions of select companies to align them more closely with socially beneficial and safety-oriented R&D. This is sometimes feasible because their missions are often broad and still in flux, and their products could benefit from improved safety and alignment.
Trying this could have very high information value. If it works, it has enormous potential upside, as many new AI startups are being created now that could be acquired in the future. It could potentially more than double the size of AI alignment R&D.
Paying existing employees to do safety R&D seems easier than paying academics: academics often like to follow their own ideas, whereas employees are already doing what their superiors tell them to. In fact, they may find alignment and safety R&D more motivating than their company's existing mission. Additionally, some founders may be more willing to sell to a non-profit org with a social-good mission than to Big Tech.
Big tech companies acquire small companies all the time. The reasons for this vary (e.g. killing competition), but overall it suggests that it can be feasible and even profitable.
Caveats:
1) A highly qualified replacement may be needed for the top-level management.
2) Some employees may leave after an acquisition. This seems more likely if the pivot towards safety is a big change to the skills and workflows. Or if the employees don't like the new mission. It seems possible to partially avoid both of these by acquiring the right companies and steering them towards a mission that is relevant to their existing work. For example, natural language generation startups would usually benefit from fine-tuning their models with alignment techniques.
MaxRa @ 2022-05-04T13:25 (+2)
Thanks, I think that's a really interesting and potentially great idea. I'd encourage you to post it as a short stand-alone post, I'd be interested in hearing other people's thoughts.
Girish_Sastry @ 2022-03-09T00:02 (+11)
A center applying epistemic best practices to predicting & evaluating AI progress
Artificial Intelligence and Epistemic Institutions
Forecasting and evaluating AI progress is difficult and important. Current work in this area is distributed across multiple organizations or individual researchers, not all of whom possess (a) the technical expertise, (b) knowledge & skill in applying epistemic best practices, and (c) institutional legitimacy (or otherwise suffer from cultural constraints). Activities of the center could include providing services to AI groups (e.g. offering superforecasting training or prediction services), producing bottom-line reports on "How capable is AI system X?", hosting adversarial collaborations, pointing out deficiencies in academic AI evaluations, and generally pioneering "analytic tradecraft" for AI progress.
brb243 @ 2022-03-07T18:26 (+11)
Effective Altruism, Research That Can Help Us Improve, Economic Growth
Issuing and trading impact certificates can popularize and normalize impact investment and profitable strategic research among the world's economic influencers. Economic growth would then be pointed in an approximately good direction; what would remain is to further popularize and incentivize the management of impact certificates.
PeterSlattery @ 2022-03-07T02:59 (+11)
Better understanding the needs of organisational leaders
Coincidence of wants problems
In EA, organisational leaders and potential workers often don't have good information about each other's needs and offerings (see EA needs consultancies). The same is true for researchers who might like to do research for organisations but don't know what to do. We would like to fund work to help resolve this. This could involve collecting advance market commitments from funders (e.g., org group x would pay up to x for y hours of design time next year, on average). It could also involve identifying unknowns for key decision-makers in EA in relevant areas (e.g., institutional decision-making, longtermism, or animal welfare), which could be used to develop research agendas and kickstart research.
Denis Drescher @ 2022-03-06T12:10 (+11)
Organization to push for mandatory liability insurance for dual-use research
Biorisk and Recovery from Catastrophe
Owen Cotton-Barratt for the Global Priorities Project in 2015:
Research produces large benefits. In some cases it may also pose novel risks, for instance work on potential pandemic pathogens. There is widespread agreement that such ‘dual use research of concern’ poses challenges for regulation.
There is a convincing case that we should avoid research with large risks if we can obtain the benefits just as effectively with safer approaches. However, there do not currently exist natural mechanisms to enforce such decisions. Government analysis of the risk of different branches of research is a possible mechanism, but it must be performed anew for each risk area, and may be open to political distortion and accusations of bias.
We propose that all laboratories performing dual-use research with potentially catastrophic consequences should be required by law to hold insurance against damaging consequences of their research.
This market-based approach would force research institutions to internalise some of the externalities and thereby:
- encourage university departments and private laboratories to work on safer research, when the benefits are similar;
- incentivise the insurance industry to produce accurate assessments of the risks;
- incentivise scientists and engineers to devise effective safety protocols that could be adopted by research institutions to reduce their insurance premiums.
Current safety records do not always reflect an appropriate level of risk tolerance. For example, the economic damage caused by the escape of the foot-and-mouth virus from a BSL-3 or BSL-4 lab in Britain in 2007 was high (mostly through trade barriers) and could have been much higher (the previous outbreak in 2001 caused £8 billion of damage). If the lab had known it was liable for some of these costs, it might have taken even more stringent safety precautions. In the case of potential pandemic pathogen research, insurers might require it to take place in BSL-4 facilities or to implement other technical safety improvements such as “molecular biocontainment”.
Denis Drescher @ 2022-03-06T12:15 (+2)
The (late) Global Priorities Project produced a long list of policy interventions and found that none of them were feasible at that time and place (UK in 2015), but maybe some of them can be adapted to other times or places where they are feasible.
Niel Bowerman’s article “Research note: Good policy ideas that won’t happen (yet)” from 2015 gives an overview.
christian.r @ 2022-03-04T19:01 (+11)
A Project Candor for Global Catastrophic Risks
Biorisk and Recovery from Catastrophe, Values and Reflective Processes, Effective Altruism
This is a proposal to fund a large-scale public communications project on global catastrophic risks (GCRs), modeled on the Eisenhower administration's Project Candor. Project Candor was a Cold War public relations campaign to "inform the public of the realities of the 'Age of Peril'" (see Unclassified 1953 Memo from Eisenhower Library). Policymakers were concerned that the public did not yet understand that the threats from nuclear weapons and the Soviet Union had inaugurated a new era in human history: the Age of Peril. Today, at the precipice, the Age of Peril continues with possible risks from engineered pandemics, thermonuclear exchange, great power war, and more. Voting behavior and public discourse, however, do not seem attuned to these risks. A new privately-funded Project Candor would communicate to the public the nature of the threats, their probabilities, and what we can do about them. This proposal is related to "a fund for movies and documentaries" and "new publications on the most pressing issues," but differs in that it would be a unified and coordinated campaign across multiple media.
SjirH @ 2022-03-04T10:42 (+11)
A social media platform with better incentives
Epistemic Institutions, Values and Reflective Processes
Social media has arguably become a major way in which people consume information and develop their values, and the most popular platforms are far from optimally set up to bring people closer to truthfulness or altruistic ends. We’d love to see experiments with social media platforms that provide more pro-social incentives and yet have the potential to reach a large audience.
Rory Fenton @ 2022-03-03T18:41 (+11)
Eliminate all mosquito-borne viruses by permanently immunizing mosquitoes
Biorisk and Recovery from Catastrophe
Billions of people are at risk from mosquito-borne viruses, including the threat of new viruses emerging. Over a century of large-scale attempts to eradicate mosquitoes as virus vectors has changed little: there could be significant value in demonstrating large-scale, permanent vector control for both general deployment and rapid response to novel viruses. Recent research has shown that Wolbachia, a bacterium, out-competes viruses (including dengue, yellow fever and Zika) when introduced into mosquitoes, preventing the viruses from replicating within the insect and essentially immunizing it. The bacterium passes to future generations by infecting mosquito eggs, allowing a small release of immunized mosquitoes to gradually and permanently immunize an entire population of mosquitoes. We are interested in proposals for taking this technology to massive scale, with a particular focus on rapid deployment in the case of novel mosquito-borne viruses.
Epistemic status: Wolbachia impact on dengue fever has been demonstrated in a large RCT and about 10 city-level pilots. Impact on other viruses only shown in labs. The approach is likely to protect against novel viruses but that has not been demonstrated.
Conflict of interest: I work for a small, new nonprofit focused on $B giving. I have had conversations with potential Wolbachia implementers to understand their work but have no direct commercial interest.
Peter S. Park @ 2022-03-03T16:13 (+11)
Increasing social norms of moral circle expansion/cooperation
Moral circle expansion
International cooperation on existential risks and other impactful issues is largely downstream of social norms concerning, for example, whether foreigners are part of one's moral circle. Research and efforts to encourage social norms of moral circle expansion and of cooperation with out-group members could potentially be very impactful, especially in relevant countries (e.g., US and China) and among relevant decision-makers.
Peter S. Park @ 2022-03-03T15:47 (+11)
Movement-building/research/pipeline for content creators/influencers
Effective altruism
Popular content creators/influencers have a lot of outreach potential and earning-to-give potential. We should investigate the possibility of investing in movement-building or a pipeline into this field. Practical research on how to be a successful influencer is also likely to be broadly applicable to movement-building in general.
Jackson Wagner @ 2022-03-03T19:33 (+7)
Rather than a pipeline for turning EAs (of which there are few) into media creators and celebrity influencers, it might be wiser to go the other way, and try to specifically target media creators and celebrity influencers for conversion to EA. In my view, the quickest path to something like a high-quality youtube documentary series about EA probably looks more like "find an existing youtube studio with some folks who are interested in EA" than it does "get a group of EAs together and create a media studio". Although the quickest path of all probably involves a mix of both strategies -- like 2-3 committed EAs with experience in media getting funding and hiring a bunch of other people already working in media to help them build the project.
I've been talking about documentaries/videos because there seem to be a number of current EA efforts to create media studios and the like. But a broader, 80K-style effort to build the EA pipeline so we can attract and absorb more media people into the movement also seems worthwhile.
Peter S. Park @ 2022-03-04T02:51 (+3)
"find an existing youtube studio with some folks who are interested in EA"-> This sounds very doable and potentially quite impactful. I personally enjoy watching Kurzgesagt and they have done EA-relevant videos in the past (e.g., meat consumption).
"But a broader, 80K-style effort to build the EA pipeline so we can attract and absorb more media people into the movement also seems worthwhile." -> I agree!
Greg_Colbourn @ 2022-03-03T11:33 (+11)
Burying caches of basic machinery needed to rebuild civilisation from scratch
Recovery from Catastrophe
Should the worst happen and a global catastrophe strike, we want to be able to help survivors rebuild civilisation as quickly and efficiently as possible. To this end, burying caches of machinery that can be used to bootstrap development is a useful part of a civilisation recovery toolkit. Such a cache could be in the form of a shipping container filled with heavy machines of open-source design, such as a wind turbine, an engine, a tractor with backhoe, an oven, basic computers and CNC fabricators, etc. Written instructions would also be included, of course, along with a selection of useful books. First we aim to put together a prototype of such a cache and test it in various locations with people of various skill levels, to see how well they fare at "rebuilding" in simulated catastrophe scenarios. Learning from this, we will iterate the design until at least 10% of simulations are successful (to what is judged to be a reasonable level). We ultimately aim to bury 10,000 such caches at strategic locations around the world. Some will be in well-known locations (for the case of sudden catastrophe); some hidden, with their location to be automatically broadcast should a catastrophe be imminent (to protect them from vandals and malevolent actors); and some hidden with some level of "treasure hunt" required to find them (to provide longer-term viability should first attempts to rebuild fail).
Greg_Colbourn @ 2022-03-05T16:25 (+2)
(I've edited the last part re locations after some feedback in this post (worth a read!))
Greg_Colbourn @ 2022-03-03T10:44 (+11)
Targeted social media advertising to give away high-value books
Effective Altruism, Values and Reflective Processes, Epistemic Institutions
Books are a high-fidelity means of spreading ideas. We think that high-value books are those that promote the safeguarding and flourishing of humanity and all sentient life, using evidence and reason. Many of the most valuable books have come out of the Effective Altruism (EA) movement over the last decade. We are keen for more people who want to maximize the good they do to read them. Offering those most likely to be interested in EA ideas free high-value books via targeted adverts on social media could be a highly cost-effective means of growing the EA movement in a values-preserving manner. Examples of target demographics are people interested in charity and volunteering, technology, or veg*anism. Examples of books that could be offered are The Life You Can Save, Doing Good Better, The Precipice, Human Compatible, The End of Animal Farming. Perhaps a list of books could be offered, with people being allowed to choose any one.
MaxRa @ 2022-03-10T22:51 (+4)
One related idea might be to offer the books at a heavy discount. Historically, I'm much more likely to read a book if it pops up on my Kindle like this: 10€ → 0.99€, compared to books that are given away for free. Maybe book vendors are open to accepting a subsidy to lower the price of EA books?
Greg_Colbourn @ 2022-03-03T10:47 (+2)
This was inspired by Ryan Carey's "books in libraries" idea, and the trend of EA book giveaways to various groups (such as those attending EA Cambridge's AGI Safety Fundamentals course).
PhilC @ 2022-03-02T20:15 (+11)
DNA banks and backup of Svalbard Global Seed Vault
Biorisk and Recovery from Catastrophe
Arguably, the most important information that the world has generated is the diversity of codes for life. Technologies are available to allow all these to be stored quickly and at low cost in DNA banks. Seed banks currently provide security for the world’s food supply. In the event of a catastrophe, it may be important to have multiple seed banks for redundancy.
Andrew Wong @ 2022-03-02T09:58 (+11)
Redefine humanity & assisting its transition
Artificial intelligence, values and reflective processes
As humanity inevitably evolves into coexistence with AI, the adage "if a man will not work, he shall not eat" needs to be redefined. Apart from AI's early displacement effects, already apparent in autonomous driving and trucking, humanity's productivity function will continue rising due to the intrinsic nature of AI (consider 3D printing of normal and luxury goods at economies of scale), so much so that even plenitude becomes a potential problem. (The usual rejoinder about global poverty points to a separate distribution problem.) Ultimately, we should contribute towards smoothing the AI transition curve: managing the initial displacement by AI and then proactively managing integration.
Leo Gao @ 2022-03-02T07:01 (+11)
AI alignment: Evaluate the extent to which large language models have natural abstractions
Artificial Intelligence
The natural abstraction hypothesis is the hypothesis that neural networks will learn abstractions very similar to human concepts because these concepts are a better decomposition of reality than the alternatives. If it were true in practice, it would imply that large NNs (and large LMs in particular, due to being trained on natural language) would learn faithful models of human values, as well as bound the difficulty of translating between the model and human ontologies in ELK, avoiding the hard case of ELK in practice. If the natural abstraction hypothesis turns out to be true at relevant scales, this would allow us to sidestep a large part of the alignment problem; if it is false, this tells us to avoid a class of approaches that would be doomed to fail.
We'd like to see work towards gathering evidence on whether the natural abstraction hypothesis holds in practice and how this scales with model size, with a focus on interpretability of model latents, and experiments in toy environments that test whether human simulators are favored in practice. Work towards modifying model architectures to encourage natural abstractions would also be helpful towards this end.
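As one concrete shape such an experiment could take, here is a minimal sketch comparing latent representations of a small and a large model on the same probe inputs using linear centered kernel alignment (CKA); the activation matrices here are random placeholders standing in for real model latents.

```python
# Sketch: test whether representations converge across model scale via linear CKA.
# Placeholder data; in a real experiment the matrices would hold model activations
# on a shared set of concept-probing inputs.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between activation matrices X (n, d1) and Y (n, d2)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
acts_small = rng.normal(size=(512, 768))   # e.g. a small LM's residual stream
acts_large = rng.normal(size=(512, 2048))  # e.g. a larger LM's residual stream
print(f"CKA(small, large) = {linear_cka(acts_small, acts_large):.3f}")
# If the natural abstraction hypothesis holds, similarity on concept-probing
# inputs should stay high (or increase) as model scale grows.
```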
evelynciara @ 2022-03-02T05:20 (+11)
Refinement of idea #33, "A fund for movies and documentaries":
I'd like to see filmmakers (including screenwriters and directors) working on EA-inspired films collaborate with social scientists and other subject-matter experts to ensure that their films realistically depict EA issues (such as x-risks) and social dynamics. These collaborations can help filmmakers avoid pitfalls like those committed by Don't Look Up and The Ministry for the Future.[1]
[1] From this review: "But while here and there an offhand reference to some reluctant group or other is made, they are, in Ministry, always feckless. The initial disaster undermines India's Hindu nationalist party, rather than strengthening it. Further disasters are met with turns to socialism. The anti-fossil fuel terrorism that is portrayed (and both criticized and seen as necessary by varying characters) does not provoke anti-environmental terrorism in response. One particularly striking example is about two-thirds of the way through the novel, when a small American town is evacuated in the name of half-Earth. While not welcomed, this evacuation is accepted in a way that is all but impossible to imagine, at least while we, looking up from Robinson's pages, see violent resistance to medical masks in a pandemic, and a political movement burning with fury at the slightest gestures of perceived disrespect. The rural fury at urban technocrats is not ignored, but it is toned down beyond any realistic hopes, lessened to an almost unimaginable degree."
Zac Townsend @ 2022-03-01T11:43 (+11)
Accelerating Accelerators
Economic Growth
Y Combinator has had one of the largest impacts on GDP of any institution in history. We are interested in funding efforts to replicate that success across different geographies, sectors (e.g. healthcare, financial services), or corporate forms (e.g. not-for-profit vs. for-profit).
Nathan Young @ 2022-03-02T02:17 (+2)
I'd like research alongside this to try and ascertain how GDP affects existential risk.
Greg_Colbourn @ 2022-03-05T16:22 (+3)
See this (by one of the Future Fund team!)
Chris Leong @ 2022-03-01T03:23 (+11)
Salary Negotiation Service:
Effective Altruism
This service could negotiate salaries on behalf of EAs or others who would then commit a proportion of the extra to charity. This would increase the amount of money going to EA causes, promote Effective Altruism and draw people deeper into the community. Given the number of EAs who are working at high-paying tech companies this would likely be profitable.
(I remembered hearing this idea from someone else a few years back, but I can't remember who it was, unfortunately, so I can't give them credit unless they name themselves)
Risks: It might be expensive to find someone with the skills to do this, and this cost might outweigh the money raised.
Jan-WillemvanPutten @ 2022-03-01T13:48 (+7)
Hi Chris! We run this on a recurring basis at Training For Good! We've already had a few dozen people on the program and we are currently measuring the impact.
See https://www.trainingforgood.com/salary-negotiation
Chris Leong @ 2022-03-01T21:10 (+3)
I was suggesting an actual service and not just training.
Jackson Wagner @ 2022-03-01T02:51 (+11)
Ambitious Altruistic Software Engineering Efforts
Values and Reflective Processes, Effective Altruism
There is a long list of altruistic software projects waiting to be built, with various worthy goals such as improving forecasting, improving groups' ability to intelligently coordinate, or improving the quality of research and social-media conversations.
Hanna Pálya @ 2022-03-07T22:51 (+10)
Biorisk and information hazard workshops for iGEM competitors
Biorisk and Recovery from Catastrophe, Empowering Exceptional People
iGEM competitions are interdisciplinary synthetic biology competitions for students. They bring together the best and brightest university students with a considerable interest in synthetic biology. These students already have knowledge and skills in bioengineering, and many of them will likely choose it as a career path and be very good at it. Educating them on biorisks and especially information hazards would therefore be a great contribution to biosecurity. They could also be introduced to EA ideas and rationalist approaches in general, bringing talented young people on board.
Tessa @ 2022-05-04T16:15 (+2)
You might be interested to know that iGEM (disclosure: my employer) just published a blog post about infohazards. We currently offer biorisk workshops for teams; this year we plan to offer a general workshop on risk awareness, a workshop specifically on dual-use, and potentially some others. We don't have anything on general EA / rationality, though we do share biosecurity job and training opportunities with our alumni network.
tessa @ 2022-03-07T13:13 (+10)
Screen and record all DNA synthesis
Biorisk and Recovery from Catastrophe
Screening all DNA synthesis orders for potentially serious hazards would reduce the risk that a dangerous biological agent is engineered and released. Robustly recording what DNA is synthesized (necessarily in an encrypted fashion) would allow labs to prove that they had not engineered an agent causing an outbreak. We are interested in funding work to solve technical, political and incentive problems related to securing DNA synthesis.
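As a hedged sketch of what the recording half could look like, the snippet below uses a salted hash commitment: a synthesizer publishes only the commitment, and can later reveal the order to prove (or disprove) that a given sequence was synthesized. The record format is an assumption for illustration, not an existing standard.

```python
# Sketch: salted hash commitments for DNA synthesis records (illustrative format).
import hashlib
import secrets

def commit(order_fasta: str) -> tuple[str, str]:
    """Return (commitment, salt). Publish the commitment; keep salt + order private."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + order_fasta).encode()).hexdigest()
    return digest, salt

def verify(order_fasta: str, salt: str, commitment: str) -> bool:
    """Later, reveal the order and salt to prove it matches the public record."""
    return hashlib.sha256((salt + order_fasta).encode()).hexdigest() == commitment

order = ">order-001 (illustrative)\nATGGCGTCTAGA"
commitment, salt = commit(order)
assert verify(order, salt, commitment)
print("Public record:", commitment)
```

A real system would need considerably more than this (hazard screening at order time, tamper-evident logs, and a way to query commitments without revealing sequences), which is where the technical, political and incentive problems mentioned above come in.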
Meta note: there are already some cool EA-aligned projects related to this, such as SecureDNA from the MIT Media Lab and Common Mechanism to Prevent Illicit Gene Synthesis from NTI/IBBIS. Also, this one is not an original idea of mine to an even greater extent than the others I've posted.
ben.smith @ 2022-03-07T07:41 (+10)
Group psychology in space
Space governance
When human colonies are established in outer space, their relationship with Earth will be very important for their well-being. Initially, they're likely to be dependent on Earth. Like settler colonies on Earth, they may grow to desire independence over time. Drawing on history and research on social group identities from social psychology, researchers should attempt to understand the kinds of group identities likely to arise in independent colonies. As colonies grow they'll inevitably form independent group identities, but depending on relationships with social groups back home, these identities could support links with Earth or create antagonistic relationships with it. Attitudes on Earth might also vary from supportive to exclusionary or even prejudiced. Better understanding intergroup relations between Earth powers and their settler colonies off-world could help us develop equitable governance structures that promote peace and cooperation between groups.
Alex D @ 2022-03-09T21:08 (+4)
Would mostly apply to bunkers too!
zdgroff @ 2022-03-07T05:24 (+10)
Lobbying architects of the future
Values and Reflective Processes, Effective Altruism
Advocacy often focuses on changing politics, but the most important decisions about the future of civilization may be made in domains that receive relatively less attention. Examples include the reward functions of generally intelligent algorithms that eventually get scaled up, the design of the first space colonies, and the structure of virtual reality. We would like to see one or more organizations focused on getting the right values considered by influential decision-makers at institutions like NASA and Google. We would be excited about targeted outreach to promote consideration of aligned artificial intelligence, existential risks, the interests of future generations, and nonhuman (both animal and digital) minds. The nature of this work could take various forms, but some potential strategies are prestigious conferences in important industries, retreats including a small number of highly-influential professionals, or shareholder activism.
Avi Lewis @ 2022-03-06T09:42 (+10)
EA ops: "Immigration Tech"
I have an idea for a cloud-based, AI-powered SaaS platform to help governments handle immigration. Think KYC meets immigration.
Today the immigration process is disjointed and fragmented across different countries, and in most cases it's cumbersome and overly bureaucratic. That creates difficulties for immigrants, particularly in clear human-rights cases, as well as for countries, which may be losing out on highly skilled migrants.
The idea is a platform that connects potential immigrants and potential host countries. Instead of an immigrant applying individually to a number of countries, they would upload their relevant documentation to the platform, which would then be shared with their countries of choice. Another model could be for interested countries to directly reach out to the potential immigrant of their own accord.
Part of the work of the platform would be to perform the relevant KYC work to authenticate the request as legitimate - thereby saving time and resources for national immigration departments, particularly when a request is lodged to multiple countries.
Obviously the idea is still in its early stages and there are a number of details that would need to be fleshed out. For example:
- Compliance. Each country has its own procedures and required documentation. The platform would need to comply and "onboard" with each country individually.
- Authentication / KYC. The platform would need to validate the authenticity of the documentation and of the request in order to prevent fraudulent requests.
But there are already solutions for these issues that are employed in other areas (for example KYC in Crypto, Authentication in HR tech platforms), so I'm sure that an appropriate path can be found.
There are a tonne of useful platforms in the KYC space, from banking to HR and talent sourcing. But I don't think there is a single "Immigration Tech" platform that connects international partners to smooth out the immigration process.
We've all seen the sheer scale of the human catastrophe in Ukraine in the last week.
Immigration is a pressing issue.
Having a platform that smooths out and streamlines the process can be a huge win-win for both immigrants and countries alike.
Avi Lewis
Avi Lewis @ 2022-03-06T17:14 (+1)
Basically, the aim here is twofold:
- Skilled migrants. Enable host countries to perform a reverse lookup to attract skilled migrants with a background in, say, tech, STEM or IT. And vice versa: support skilled migrants in their search for a new home environment that can foster their growth and development. An influx of academic and entrepreneurial immigrants can be a boost to the economies of their newly adoptive countries, and can lead to increased scientific advancement.
- Human Rights Cases. All too often these fall through the cracks, with long wait times, particularly in danger zones. A principal aim of this platform would be to help find a new home country for those that need it most.
SjirH @ 2022-03-04T10:41 (+10)
Representation of future generations within major institutions
Values and Reflective Processes, Epistemic Institutions
We think at least some of the issues facing us today would be better handled if there were less political short-termism, and if there were more incentives for major political and non-political institutions to take into account the interests of future generations. One way to address this is to establish explicit representation of future generations in these institutions through strategic advocacy, which can be done in many ways and has been piloted in the past few decades.
Peter S. Park @ 2022-03-03T08:29 (+10)
Normalizing regular wear of PPE
Biorisk
Containing a potential pandemic is extremely high-impact. If a high proportion of people regularly wore PPE, this could make the difference in determining whether or not an outbreak is stopped before it becomes a pandemic. Regularly wearing masks is much more doable than regularly wearing hazmat suits, although the political polarization of masks in certain countries is a barrier. Even so, preventing a fraction of future pandemics (which can in expectation be achieved by regular mask-wearing in a fraction of the world's countries) is still quite high-impact. Applying the theory of social norms and of prestige may help normalize the regular wear of PPE. Publicizing prestigious individuals' regular mask-wearing and associating regular mask-wearing with morality may be helpful on this front (in America, this may only work in certain types of communities).
Peter S. Park @ 2022-03-02T17:15 (+10)
Targeting movement-building efforts at top universities' career offices
Effective altruism
Wouldn't it be great if top universities' career offices were aligned with EA and with longtermism? Maybe they could use material from 80,000 Hours in helping their universities' students. An ambitious endgame is that all top universities' career offices are aligned with EA/longtermism, or at least highly aware of the paradigm and of resources like 80,000 Hours, so that they can directly convince and/or facilitate students' pursuit of high-impact career options.
PeterSlattery @ 2022-03-08T03:37 (+2)
I like this idea. However, it might be hard to change existing career advice organisations. I therefore wonder if setting up and funding competitors would be better. These competitors could be very affordable and prestigious career advice organisations with EA-affiliated founders and members. The aim would be to help as many high-ability students as possible who are seeking advice, and to use the resulting engagement and influence to prompt ethical and impactful career decisions where possible/appropriate.
agnode @ 2022-03-02T09:34 (+10)
Pragmatic forecasting training
Epistemic institutions
There is a big jump between reading Superforecasting and actually doing forecasting, especially at work. One problem is that the book is written as a popular book, and so doesn't cover the specifics you need - e.g. what techniques should you use to combine data to get a base rate? It would be useful to have something more textbooky which teaches specific techniques and gives lots of worked examples and exercises. Furthermore, there are many additional challenges of implementing forecasting in a policy or funder environment such as:
- Decisionmaking is often messy and depends on answers to vague questions.
- There is often a lot of time pressure that makes adding a forecasting process (or even just learning forecasting) difficult.
- There are stakeholders that may need to be convinced of the value of forecasts.
- How do you implement a forecasting system across a team such that you will keep adjusting your forecasts and come back and check how you did in the future?
It would be valuable to have a consultancy helping organisations such as funders and government departments implement forecasting in a real-world context. This consultancy could then over time build up a course or textbook that teaches what they have learned to a wider audience.
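To illustrate the kind of worked example such a textbook or consultancy could teach, here is a minimal sketch of two standard techniques: a Laplace-smoothed base rate from historical counts, and pooling several forecasts via the geometric mean of odds. The numbers are made up.

```python
import math

def laplace_base_rate(successes: int, trials: int) -> float:
    """Laplace's rule of succession: smoothed base rate from historical counts."""
    return (successes + 1) / (trials + 2)

def pool_geo_mean_odds(probs: list[float]) -> float:
    """Combine several forecasts via the geometric mean of odds."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean))

# E.g., 3 of 20 comparable past projects failed outright:
print(f"Base rate: {laplace_base_rate(3, 20):.2f}")                      # ~0.18
# Three forecasters give 10%, 25%, 40%:
print(f"Pooled forecast: {pool_geo_mean_odds([0.10, 0.25, 0.40]):.2f}")  # ~0.23
```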
Peter S. Park @ 2022-03-01T23:37 (+10)
Targeted facilitation of high-impact career pivots for ex-academics
Effective altruism
Effective altruists/longtermists have targeted their movement-building efforts to young people (undergraduate and high-school students), an effective strategy given that young people are more likely to be in the process of career exploration and investments in them will be long-lasting.
Another effective movement-building strategy may be to help Ph.D. graduates, postdocs, etc. who are pivoting out of academia. Ex-academics are likely to have difficult-to-obtain and often impactful/generalizable skills, and are likely undervalued by the hypercompetitive academic job market (due to academics' strong, social-norm-based preference for academic jobs and the consequent oversupply). Ex-academics are also likely to be in the process of career exploration. Targeted outreach, fellowships, and career coaching by student organizations and EA movement-building experts may help direct more of these ex-academics to high-impact career pivots.
Peter S. Park @ 2022-03-01T21:38 (+10)
Causal microfoundations for behavioral science
Artificial Intelligence, Values and Reflective Processes
The science of human behavior is afflicted by a replication crisis. By some estimates, over half of the empirical literature does not replicate. A significant cause of this problem is undertheorization. Without a cumulative theoretical framework from which to work, researchers often lack meaningful hypotheses to test, and so instead default to their personal, often culturally biased folk intuitions. Their resulting interpretations of studies’ data thus frequently fail to replicate and generalize (See the seminal paper of Michael Muthukrishna and my advisor Joe Henrich.)
Finding the correct causal microfoundations for behavioral science can provide a deeper understanding of precisely when we can extrapolate empirical findings out-of-sample. This could be especially helpful for making externally valid predictions in historically unprecedented situations (e.g., regarding emergent technologies or anthropogenic catastrophic/existential risks), for which much of the relevant data required for empirically estimating policy counterfactuals may not yet exist.
One area where the correct causal theory of descriptive human behavior would be particularly helpful is correctly understanding and solving the AI-human alignment problem.
Some approaches include the provision of fellowships, grants, and collaborative opportunities to researchers, as well as teaching/mentoring/incentivizing of undergraduate students to help them become researchers or practitioners of plausible causal theories of behavioral science. (e.g., cultural evolutionary theory; see The Secret of our Success by Joe Henrich)
JBPDavies @ 2022-03-01T11:14 (+10)
Space Policy Lab
Space Governance, Epistemic Institutions
Human activity in space is intensifying, with the growing challenge of space debris, the deployment of satellite mega-constellations, and the prospects of asteroid mining and long-term colonisation raising unique challenges in a vital yet neglected domain. Current space governance - the laws, rules, norms and institutions that structure interactions in space - falls far short of meeting these challenges. A Space Policy Lab would research governance frameworks, analyse policy issues, shape expert discourse, and engage in advocacy for effective regulatory frameworks. We would like to see a Lab bringing together applied researchers, academia and societal stakeholders within a dynamic, collaborative and transdisciplinary environment, undertaking policy experiments to identify levers for improving space governance.
JanBrauner @ 2022-03-01T10:19 (+10)
AI alignment prize suggestion: Demonstrate a true sandwiching project
Artificial Intelligence
Sandwiching projects are a concrete way to make progress on aligning narrowly superhuman models. The idea is to a) "sandwich" the model between one set of humans which is less capable than it and another set which is more capable than it at the fuzzy task in question, and b) figure out how to help the less-capable set of humans reproduce the judgments of the more-capable set. For example, first fine-tune a coding model to write short functions solving simple puzzles using demonstrations and feedback collected from expert software engineers. Then try to match this performance using some process that can be implemented by people who don't know how to code and/or couldn't solve the puzzles themselves.
Importantly, there are many ways to attack a sandwiching project that are slightly cheating. The most challenging version of a sandwiching project would need to make sure that no information whatsoever from the more-capable set of humans is used in the training process. The Future Fund could offer prizes for demonstrations of sandwiching projects on various levels of impressiveness and generality of the employed method.
JanBrauner @ 2022-03-01T09:39 (+10)
Refinement of project idea #22, Prediction Markets
Add: "In particular, we'd like to see prediction platforms that do all of the following three: use real money, are very easy to use, allow very easy creation of markets.
Chris Leong @ 2022-03-01T02:42 (+10)
Masters Degrees for Movement Building:
AI Safety
Many people want to contribute to AI safety and they may have strong technical abilities, but not yet be in a position to be able to contribute to research. Some of these people might also have experience in movement building. It might be worthwhile to pick Masters of AI programs that are highly ranked and pay for a pair of AI Safety movement builders to study there so that they can promote the idea among the school, whilst upskilling at the same time. (This could work for other cause areas like biosecurity)
Risks: Masters degrees are very expensive.
Peter S. Park @ 2022-03-01T22:54 (+5)
Maybe only tangentially related, but a master's in passing (quitting mid-Ph.D.) is free. In fact, one receives a Ph.D. stipend while completing the degree.
Addendum: This option can in theory be utilized by (1) helping EAs/longtermists apply to Ph.D. programs (perhaps in non-technical related fields rather than technical fields) and (2) convincing and facilitating mid-Ph.D. students looking to make career pivots from research to movement building.
Mathieu Putz @ 2022-03-08T10:22 (+9)
EA Hotel / CEEALAR except at EA Hubs
Effective Altruism
CEEALAR is currently located in Blackpool, UK. It would be a lot more attractive if it were in e.g. Oxford, the Bay Area, or London. This would allow guests to network with local EAs (as well as other smart people, of which there are plenty in all of the above cities). Insofar as budget is less of a constraint now, and insofar as EA funders are already financing trips to such cities for select individuals (for conferences and otherwise), an EA Hotel there would seem justified on the same grounds. (E.g. intercontinental flights can sometimes be more expensive than one month's rent in those cities.)
DonyChristie @ 2022-03-08T05:16 (+9)
Research into the dual-use risks of asteroid safety
Space Governance
There is a small base rate of asteroids/comets hitting the Earth naturally, and there are efforts out there to deflect/destroy asteroids if they were about to hit Earth. However, given the relative size of anthropogenic vs natural risk, we think that getting better at manipulating space objects is dual-use, as it would allow malevolent actors to weaponize asteroids, and that this risk could be orders of magnitude larger. We want to see research on what kinds of asteroid defense techniques are likely not to lead to concomitant progress in asteroid offense techniques.
See:
- https://forum.effectivealtruism.org/posts/RZf2KqeMFZZEpvBHp/risks-from-asteroids
- "This ‘dual-use’ concern mirrors other kinds of projects aimed at making us safer, but which pose their own risks, like ‘gain of function’ research on diseases. In such cases, effective governance may be required to regulate the dual-use technology, especially through monitoring its uses, in order to avoid the outcomes where a malign actor gets their hands on it. With international buy-in, a monitoring network can be set up, and strict regulations around technology with the potential to divert planetary bodies can (and probably should) be implemented."
- https://forum.effectivealtruism.org/posts/vuXH2XAeAYLc4Hxyj/why-making-asteroid-deflection-tech-might-be-bad
- "A cost benefit analysis that examines the pros and cons of developing asteroid deflection technology in a rigorous and numerical way should be a high priority. Such an analysis would consider the expected value of damage of natural asteroid impacts in comparison with the increased risk from developing technology (and possibly examine the opportunity cost of what could otherwise be done with the R&D funding). An example of such an analysis exists in the space of global health pandemics research, which would be a good starting point. I believe it is unclear at this time whether the benefits outweigh the risks, or vice versa (though at this time I lean towards the risks outweighing the benefits – an unfortunate conclusion for a PhD candidate researching asteroid exploration and deflection to come to).
- Research regarding the technical feasibility of deflecting an asteroid into a specific target (e.g. a city) should be examined, however this analysis comes with drawbacks (see section on information hazards).
- We should also consider policy and international cooperation solutions that can be set in place today to reduce the likelihood of accidental and malicious asteroid deflection occurring."
- https://www.nature.com/articles/368501a0.pdf
- "It is of course sensible to seek cost effective reduction of risks from all hazards to our civilization - even low probability hazards, of which many may remain unidentified. At a total cost of some $300 million, Spaceguard arguably constitutes a reasonable measure of defence against the impact hazard. But premature deployment of any asteroid orbit modification capability, in the real world and in light of well-established human frailty and fallibility, may introduce a new category of danger that dwarfs that posed by the objects themselves."
Leo Gao @ 2022-03-08T04:30 (+9)
Creating materials for alignment onboarding
Artificial Intelligence
At present, the pipeline from AI capabilities researcher to AI alignment researcher is not very user-friendly. While there are a few people like Rob Miles and Richard Ngo who have produced excellent onboarding materials, this niche is still fairly underserved compared to onboarding in many other fields. Creating more materials has the advantage that, because different people find different formats helpful, having more increases the likelihood that something works for any given person. While there are many possible angles for onboarding, several avenues stand out as promising due to successes in other fields:
- High production value videos (similar to 3blue1brown, Kurzgesagt)
- Course-like lectures and quizzes (similar to Khan academy)
- Interactive learning apps (similar to Brilliant)
Andreas Hicketier @ 2022-03-07T20:44 (+9)
Machine olfaction for disease detection
Biorisk and Recovery from Catastrophe
Dogs can be trained to recognize the smell of Covid-19 and many other diseases. However, this takes a lot of time. It might be possible in the very near future to build robotic noses (machine olfaction) that work as well as a dog's. This would mean that once one neural net has been trained to recognize a new pathogen, the software could easily be distributed around the globe. Sensors in public places could then pick up in real time whether someone infectious was close by. This would reduce the need for non-pharmaceutical interventions and PPE, or even stop the spread of a disease completely. It would also help with the diagnosis of many common diseases when diagnosis is expensive, invasive or takes a long time.
Thanks to advances in machine learning and sensor manufacturing, machine olfaction is coming up with its first limited successes. See here for a prostate cancer prototype and a general explainer.
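For a sense of the software side, here is a minimal sketch of training a detector on electronic-nose sensor arrays; the data is synthetic, and a deployed system would obviously use real sensor readings and a more careful model.

```python
# Sketch: pathogen-vs-clean classifier on (synthetic) e-nose sensor readings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, n_sensors = 1000, 32
clean = rng.normal(0.0, 1.0, size=(n, n_sensors))
infected = rng.normal(0.3, 1.0, size=(n, n_sensors))  # shifted volatile signature
X = np.vstack([clean, infected])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Held-out accuracy: {model.score(X_te, y_te):.2f}")
# Once trained, the model is just parameters: it can be serialized and pushed
# to every deployed sensor, which is the distribution advantage over dogs.
```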
Laura Kleiman @ 2022-03-07T04:42 (+9)
Cheap, lifesaving treatments
Epistemic institutions; Artificial Intelligence; Economic Growth; Effective Altruism; Research That Can Help Us Improve
Hundreds of existing, low-cost, and widely available generic drugs could be repurposed as effective treatments for additional indications. Yet this major opportunity to improve outcomes for patients suffering from cancer and other diseases while lowering healthcare costs is being ignored due to a market failure. We are interested in funding innovative solutions for bringing repurposed generic drugs to widespread patient use, including:
- Methods to identify new uses for existing drugs and gather definitive data more efficiently (e.g., using AI, real-world evidence, and adaptive trial designs).
- New ways to fund randomized controlled clinical trials at scale (e.g., using social impact bonds).
- Approaches to change the standard of care for patients around the world (e.g., through cross-sector partnerships).
Charles He @ 2022-03-07T04:21 (+9)
High Quality Outward-Facing Communications Organization
This project would create a new communications organization that deeply understands outside media and attitudes, and that reports events to the community. The organization would expertly provide content and services tailored to EAs and their projects on demand. It would be a servant, an expression of the community, and would respect Truth. Carefully created, this organization should be invaluable as EA grows many times over and into new domains and competencies.
Imagine a new megaproject. How do we talk about a giant new bio refuge to the world? How do we explain who goes in and the vision behind it? This is an opportunity and a risk, and the difference between good and bad outcomes is large. These challenges and opportunities are faced by many projects.
Charles He @ 2022-03-07T04:26 (+2)
Background/context:
Early in EA, some events created lasting adverse narratives, such as those around earning to give. It seems like these narratives started from small initial events and could have been avoided.
More recently, there have been several articles and minor media events that have presented narratives against EA or certain values in EA. Some people have expressed that their work has been made more difficult by them.
If you believe these narratives represent real issues, they should be investigated and acted on. If you don’t, a high quality communication strategy should be implemented (including inaction). Insufficient competence in communications harms projects and people.
Media events tend to be lumpy and unpredictable. It’s unclear what will happen as many more projects and efforts are made by EAs.
It seems like several senior EAs serve as ad-hoc comms or PR leaders. In some sense, this is great and the ideal. But reliance on a few people is bad. These leaders have other skills and it’s unlikely outside communications is their comparative advantage. Future events could create much more pressure and complexity and burnout seems possible. Expert services, subordinate to these efforts, would be good (the same EA you like now would "front-woman", but be supported by a team).
PeterSlattery @ 2022-03-07T02:48 (+9)
Converting key EA research outputs into academic publications
Conceptual dissemination
Academic publications are considered significantly more credible than other types of publications. However, many EA-aligned organisations such as Rethink Priorities produce valuable research that is never published academically. To help address this, we would like to fund academic publication support organisations, to help organisations which are unaffiliated with universities to get ethics approval, write grants, produce academic research outputs, etc.
PeterSlattery @ 2022-03-07T02:39 (+9)
Developing GCR scenario response teams and plans
Global catastrophic risks
As Covid-19 demonstrated, groups are unable to efficiently mobilise and coordinate to deal with potential Global Catastrophic Risks (GCRs) or large-scale events without prior preparation. This leads to extensive inefficiencies, risks and social costs. Organisations address such unpreparedness by simulating key risks and training to handle them. We would similarly like to fund teams at relevant institutions and organisations to simulate GCR-related scenarios (e.g., nuclear attacks, wars or pandemic outbreaks) in order to develop and practice responses and disseminate best practice.
Guillaume Corlouer @ 2022-03-05T15:37 (+9)
Funding an AI alignment institute: a Manhattan-Project-scale effort for AI alignment
Artificial intelligence
Aligning AI with human interests could be very hard. The current growth in AI alignment research might be insufficient to align AI. To speed up alignment research, we want to fund an ambitious institute attracting hundreds to thousands of researchers and engineers to work full-time on aligning AI. The institute would give these researchers computing resources competitive with those of top AI companies. We could also slow down risky AI capability research by offering top AI capability researchers competitive wages and autonomy, draining them from top AI organizations. While small specialized teams would pursue innovative alignment research, the institute would enhance their collaboration, bridging AI alignment theory, experiment, and policy. The institute could also offer alignment fellowships optimized to speed up the onboarding of bright young students into alignment research. For example, we would fund stipends and mentorships competitive with doctoral programs or entry-level jobs in industry. The institute would be located in a place safe from global catastrophic risks and would facilitate access to high-quality healthcare, food, housing, and transportation to optimize researchers' well-being and productivity.
MaxRa @ 2022-03-12T17:03 (+2)
I think this is a pretty interesting idea, though one would need to think much more about it. One feedback I found useful when I pitched a very related idea was that the Manhattan Project might not be the ideal framing as it's so intertwined with offensive military applications of technology.
Denis Drescher @ 2022-03-05T13:51 (+9)
A think tank to investigate the game theory of ethics
Values and Reflective Processes, Effective Altruism, Research That Can Help Us Improve, Space Governance, Artificial Intelligence
Caspar Oesterheld’s work on Evidential Cooperation in Large Worlds (ECL) shows that some fairly weak assumptions about the shape of the universe are enough to arrive at the conclusion that there is one optimal system of ethics: the compromise between all the preferences of all agents who cooperate with each other acausally. That would solve ethics for all practical purposes. It would therefore have enormous effects on a wide variety of fields because of how foundational ethics is.
The main catch is that it will take a lot more thought and empirical study to narrow down what that optimal compromise ethical system looks like. The ethical systems and bargaining methods used on earth can serve as a sample and convergent drives can help us extrapolate to unobserved types of agents. We may never have certainty that we’ve found the optimal ethical system, but we can go from a state of overwhelming Knightian uncertainty to a state of quantifiable uncertainty. Along the way we can probably rule out many ethical systems as likely nonoptimal.
First and foremost, this is a reflective process that will inform altruistic priorities, which suggests the categories Values and Reflective Processes, Effective Altruism, and Research That Can Help Us Improve. But I also see applications wherever agents have trouble communicating: cooperation between multiple mass movements, cooperation between large groups of donors, cooperation between anonymous donors, cooperation between camps of voters, cooperation on urgent issues between civilizations that are too far separated to communicate quickly enough, cooperation between agents on different levels of the simulation hierarchy. ECL may turn out to be a convergent goal of a wide range of artificial intelligences. Thus it also has indirect effects on the categories of Space Governance and Artificial Intelligence. (But I don't think it would be good for someone to prioritize this over more direct AI safety work at this time.)
I see a few weaknesses in the argument for ECL, so first step may be to get experts in game theory and physics together to probe these and work out exactly what assumptions go into ECL and how likely they are.
Some people have thought about this more than I have – including (of course) Caspar Oesterheld, Johannes Treutlein, David Althaus, Daniel Kokotajlo, and Lukas Gloor – but I don’t think anyone is currently focused on it.
Jim Buhler @ 2022-03-19T17:52 (+1)
Caspar Oesterheld’s work on Evidential Cooperation in Large Worlds (ECL) shows that some fairly weak assumptions about the shape of the universe are enough to arrive at the conclusion that there is one optimal system of ethics: the compromise between all the preferences of all agents who cooperate with each other acausally. That would solve ethics for all practical purposes. It would therefore have enormous effects on a wide variety of fields because of how foundational ethics is.
ECL recommends that agents maximize a compromise utility function averaging their own utility function and those of the agents that action-correlate with them (their "copies"). The compromise between me and my copies would look different from the compromise between you and your copies, right? So I could "solve ethics" for myself, but not for you, and vice versa. Ethics could be "solved" for everyone only if all agents in the multiverse were action-correlated with each other to the exact same degree, which appears exceedingly unlikely. Am I missing something?
(Not a criticism of your proposal. I'm just trying to refine my understanding of ECL) :)
Denis Drescher @ 2022-03-19T19:29 (+4)
Thanks for the comment! I think that’s a misunderstanding because trading with copies of oneself wouldn’t do anything since you already want the same thing. The compromise between you would be the same as what you want individually.
But with ECL you instead employ the concept of "superrationality," which Douglas Hofstadter, Gary Drescher, and others have already looked into in isolation. You have now learned of superrationality, and others out there have perhaps also figured it out (or will in the future). Superrationality is now the thing that you have in common and that allows you to coordinate your decisions without communicating.
That coordination relies a lot on Schelling points, on extrapolation from the things that we see around us, from general considerations when it comes to what sorts of agents will consider superrationality to be worth their while (some brands of consequentialists surely), etc.
Jim Buhler @ 2022-03-21T08:38 (+1)
Thanks for the reply! :)
By "copies", I meant "agents which action-correlate with you" (i.e., those which will cooperate if you cooperate), not "agents sharing your values". Sorry for the confusion.
Do you think all agents thinking superrationally action-correlate? This seems like a very strong claim to me. My impression is that the agents with a decision-algorithm similar enough to mine to (significantly) action-correlate with me are a very small subset of all superrationalists. As your post suggests, even your past self doesn't fully action-correlate with you (although you don't need "full correlation" for cooperation to be worthwhile, of course).
In a one-shot prisoner's dilemma, would you cooperate with anyone who agrees that superrationality is the way to go?
In his paper on ECL, Caspar Oesterheld says (section 2, p.9): “I will tend to make arguments from similarity of decision algorithms rather than from common rationality, because I hold these to be more rigorous and more applicable whenever there is not authority to tell my collaborators and me about our common rationality.”
However, he also often uses "the agents with a decision-algorithm similar enough to mine to (significantly) action-correlate with me" and "all superrationalists" interchangeably, which confuses me a lot.
Denis Drescher @ 2022-03-21T10:36 (+2)
Do you think all agents thinking superrationally action-correlate?
Yes, but by implication not assumption. (Also no, not perfectly at least, because we’ll all always have some empirical uncertainty.)
Superrationalists want to compromise with each other (if they have the right aggregative-consequentialist mindset), so they try to infer what everyone else wants (in some immediate, pre-superrationality sense), calculate the compromise that follows from that, determine what actions that compromise implies for the context in which they find themselves (resources and whatnot), and then act accordingly. These final acts can be very different depending on their contexts, but the compromise goals from which they follow correlate to the extent to which they were able to correctly infer what everyone wants (including bargaining solutions etc.).
In a one-shot prisoner's dilemma, would you cooperate with anyone who agrees that superrationality is the way to go?
Yes.
Hmm, it’s been a couple of years since I read the paper, so I'm not sure how that is meant… But I suppose either (1) the decision algorithm is similar because it goes through the superrationality step, or (2) the decision algorithm has to be a bit similar in order for people to consider superrationality in the first place. You need to subscribe to non-causal DTs or maybe have indexical uncertainty of some sort. It might be something that religious people and EAs come up with but that seems weird to most other people. (I think Calvinists have these EDT leanings, so maybe they’d embrace superrationality too? No idea.) I think superrationality breaks down in many earth-bound cases because too many people here would consider it weird, like the whole CDT crowd probably, unless they are aware of their indexical uncertainty, but that’s also still considered a bit weird.
Jim Buhler @ 2022-03-21T15:43 (+1)
Oh interesting! Ok so I guess there are two possibilities.
1) Either by "superrationalists" you mean something stronger than "agents taking acausal dependences into account in PD-like situations", which I thought was roughly Caspar's definition in his paper. In that case, I'd be even more confused.
2) Or you really think that taking acausal dependences into account is, by itself, sufficient to create a significant correlation between two decision-algorithms. In that case, how do you explain that I would defect against you and exploit you in a one-shot PD (very sorry, I just don't believe we correlate ^^), despite being completely on board with superrationality? How is that not a proof that common superrationality is insufficient?
(Btw, happy to jump on a call to talk about this if you’d prefer that over writing.)
Denis Drescher @ 2022-03-21T18:46 (+2)
I think it’s closer to 2, and the clearer term to use is probably “superrational cooperator,” but I suppose that’s probably what's meant by “superrationalist”? Unclear. But “superrational cooperator” is clearer about (1) knowing about superrationality and (2) wanting to reap the gains from trade from superrationality. Condition 2 can be false because people use CDT or because they have very local or easily satisfied values and don’t care about distant or additional stuff.
So just as in all the thought experiments where EDT gets richer than CDT, your own behavior is the only evidence you have about what others are likely to predict about you. The multiverse part probably smooths that out a bit, so your own behavior gives you evidence of increasing or decreasing gains from trade as the fraction of agents in the multiverse that you think cooperate with you increases or decreases.
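For what it's worth, the core intuition can be put in toy-model form: under EDT your choice is evidence about what correlated agents choose, so cooperation pays once the assumed correlation is high enough. A minimal sketch with made-up payoffs:

```python
# Toy EDT expected values for a one-shot prisoner's dilemma.
# Payoffs to me: T (temptation) > R (reward) > P (punishment) > S (sucker).
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def ev(action: str, corr: float) -> float:
    """Expected payoff under EDT, where corr = P(other matches my action)."""
    if action == "cooperate":
        return corr * R + (1 - corr) * S
    return corr * P + (1 - corr) * T

for corr in (0.5, 0.7, 0.9):
    print(corr, ev("cooperate", corr), ev("defect", corr))
# With these payoffs, cooperating wins once corr > (T - S) / (T - S + R - P) ≈ 0.71,
# which is why the size of the correlated reference class matters so much.
```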
I think it would be “hard” to try to occupy that Goldilocks zone where you maximize the number of agents who wrongly believe that you’ll cooperate while you’re really defecting, because you’d have to simultaneously believe that you’re the sort of agent that cooperates despite actually defecting, which should give you evidence that you’re wrong about what reference class you’re likely to be put in. There may be agents like that out there, but even if that’s the case, they won’t have control over it. The way this will probably be factored in is that superrational cooperators will expect a slightly lower cooperation incidence from agents in reference classes that are empirically very likely to cooperate without being physically forced to: being in such a reference class makes defection more profitable, up to the point where defection erodes the assumptions others make about the reference class, the very assumptions that enabled the effect in the first place. That could mean that, for any reference class of agents who are able to defect, cooperation “densities” over 99% or so get rapidly less likely.
But really, I think, the winning strategy for anyone at all interested in distant gains from trade is to be a very simple, clear kind of superrational cooperator agent, because that maximizes the chances that others will cooperate with that sort of agent. All that “trying to be clever” and “being the sort of agent that tries to be clever” probably costs so much in gains from trade right away that you’d have to value the distant gains from trade very low compared to your local stuff for it to make any economic sense, and then you can probably forget about the gains from trade anyway because others will also predict that. I think David Althaus and Johannes Treutlein have thought about this from the perspective of different value systems, but I don’t know of any published artifacts from that.
We can have a chat some time, gladly! But it’s been a while since I’ve done all this, so I’m a bit slow. ^.^'
gavintaylor @ 2022-03-03T20:45 (+9)
Stratospheric cleaning to mitigate nuclear winters
Recovery from Catastrophes
Proposals to recover from a nuclear winter have primarily focused on providing alternative means of food production until agriculture recovers. A complementary strategy would be to develop technologies to remove stratospheric soot, which could reduce the duration and severity of the nuclear winter if used soon after nuclear strikes while smoke remains concentrated above a relatively small geographic area. Stratospheric cleaning could also prove useful in the event of supervolcano eruptions, meteor impacts, or geoengineering accidents and would offer an option for non-nuclear and neutral states to mitigate the worst-case consequences of nuclear war between other states on both their own and the global population. This approach does not appear to have been explored, and we would like to fund initial feasibility studies and proof-of-concept projects on the possibility of stratospheric cleaning. Promising technology could be tested on ash plumes from volcanic eruptions or pyrocumulus clouds from wildfires. Current atmospheric models of nuclear winter scenarios may also need to be refined to guide a stratospheric cleaning response. We expect that mature technological solutions for stratospheric cleaning would be maintained as emergency response infrastructure at the national or intergovernmental level, and if the approach showed promising initial results, we would support lobbying governments to develop this capacity.
Denkenberger @ 2022-03-12T08:28 (+11)
You may be interested in this. I considered some pretty speculative things to prevent or mitigate a supervolcanic eruption, but the volume of the stratosphere is so enormous that I think cleaning it would be very challenging.
gavintaylor @ 2022-03-28T18:15 (+1)
Yeah, I haven't looked into this much, but I think the goal would be getting as much soot as possible before it spreads out across the whole stratosphere. For instance, dumping coagulant into the rising smoke plume so that it gets carried up with the smoke could be a good option if one can respond while a city fire is still burning, as the coagulant would then get mixed in with most of the soot. IIRC from Robock's paper, it also takes a while (weeks/months) for the soot to completely spread out and self-loft into the upper stratosphere, so that gives more time to respond while it's still fairly concentrated around the sources. Determining what an effective response would be at that stage is kind of the aim of the project - one suggestion would be to send up stratospheric weather balloons with high-voltage electrostatic fields (not 100% sure, but I expect soot aerosol would be charged and could be electrostatically attracted) under areas of dense soot.
Jakob @ 2022-03-06T09:21 (+4)
A potential complementary strategy to this one could be research into putting out large-scale wildfires (though I'm not sure about the feasibility of this - is anyone aware of existing research on it?)
Lauren Reid @ 2022-03-02T16:41 (+9)
'Bunker' survival research grants
Biorisk and Recovery from Catastrophe
Grants to investigate what skills and tools/materials would be needed in ideal emergency kits to improve chances of survival and health. For example, what should you have in your bunker? Training in basic medical skills (like wilderness first aid), how to keep people mentally well under these conditions, which micronutrients should be stocked, PPE. The greater the proportion of the population that has these things on hand, the better the overall chances of survival.
jknowak @ 2022-03-02T10:36 (+9)
EA-themed Superhero Graphic Novel / Shounen Anime / K Drama
Effective Altruism Meta, Community Building
I really like to think about that Superman fanfic where he tried to aim for 'most good'. Many existing superhero stories could be rewritten so the main protagonist tries to maximize their impact. I know non-fiction movies/documentaries were mentioned, but I think the 3 types of media I mentioned have the potential to become really popular (they are consumed by vast numbers of teenagers and (young) adults globally). It's a risk (it could be a flop), but I think one we could take. I am pretty confident a big enough budget can 'buy quality' so it would be better than the average story.
Denis Drescher @ 2022-03-06T00:49 (+2)
Anecdotally, a bunch of my friends are fans of or enjoy superhero fiction along the lines of Marvel, so this could aim at just the right demographic. Or it could aim at an already over-represented demographic.
Nathan Young @ 2022-03-02T01:55 (+9)
Eliminate disease-bearing mosquitos (originally suggested by David Manheim)
Malaria
Act on the long-running plan to design and release mosquitos to outcompete those which spread malaria, thereby avoiding infection.
Alex D @ 2022-03-04T19:49 (+8)
Suggestion - start with a focus on eradicating Aedes mosquitoes (aegypti, albopictus, and maybe japonicus) from the Western hemisphere.
These species are invasive/non-native to the Americas (so "ecological risks" arguments against are more tenuous), cause a tremendous burden of illness (Zika, Dengue, Yellow Fever, Chikungunya, ...), and have been subject to previous eradication efforts (so there's precedent).
There isn't particularly a "biorisk/GCBR" angle to this problem, but such projects being executed by a team that was very biosecurity-aware seems wise since effective tools would include some theoretically dual-use biotech.
Projects could include a mix of advocacy, strategic research, tool development, and execution.
Nathan Young @ 2022-03-02T01:40 (+9)
Approval Voting in the UK
Politics
The Center for Election Science has done good work pushing approval voting in the US. In the UK there aren't ballot initiatives, but both major political parties could adopt approval voting in their constituencies. If they did, it would be easier to push for it at a national level.
Denis Drescher @ 2022-03-06T01:01 (+2)
Ranked choice voting seems to be another top contender. I think I came away liking it more back in the day but I forgot all the details.
Zac Townsend @ 2022-03-01T11:58 (+9)
(Per Nick's note, reposting)
Replication funding and publication
Epistemic Institutions
The replication crisis is a foundational problem in (social) science. We are interested in funding publications, registries, and other funds focused on ensuring that trials and experiments are replicable by other scientists.
Jackson Wagner @ 2022-03-01T02:40 (+9)
Advocacy for [metascience, land-use reform, clean energy technologies, or other individual planks of the progress studies platform]
Economic growth, Epistemic institutions
You already list high-skill immigration advocacy, pandemic-prevention breakthroughs, and a variety of institutional-innovation topics; why not the rest of the "abundance agenda"? (I already listed general/high-level philosophical research, but here I am suggesting specific sub-areas.)
Land use, construction costs, "yimby", etc. -- Has it gotten more difficult for civilization to build things? At this point, ordinary Yimby activism might be too mainstream for a cutting-edge FTX program, but there are still a lot of potential angles here -- maybe experiment with Harberger taxes or help develop software to estimate land value accurately and finally make Georgism possible in practice? Maybe figure out how to reverse the causes of high construction costs?
Metascience -- you already mention "Higher epistemic standards for journalism and books", but metascience is a big field! Other people in this thread have already suggested great ways to experiment with new funding and research models like "focused research organizations" and ARPA models. Also, apart from trying to get scientific journals to adopt higher epistemic standards, I'd be interested in research into ways that we could better incentivize scientists to focus on higher-impact areas and focus on exploring new areas, rather than unduly rewarding incremental work in fashionable fields.
New energy technologies like nuclear, geothermal, fusion, and utility-scale storage -- abundant energy would be a huge boon for human welfare. In some places, like when it comes to investing in geothermal or nuclear-power startups, for-profit venture capitalists are probably better suited to the task than a charitable longtermist fund. But there might be some helpful lobbying angles where a charitable group could get valuable leverage. Advocating for streamlined construction permitting, better and more flexible nuclear regulation, improved electrical grids, time-varying electricity prices to encourage efficient off-peak power usage, etc.
DonyChristie @ 2022-03-08T06:09 (+8)
Legalization of MDMA & psychedelics to reduce trauma and cluster headaches
Values and Reflective Processes, Empowering Exceptional People
Millions of people have PTSD that causes massive suffering.
MDMA and psychedelics are being legalized in the U.S., and there are both non-profit and for-profit organizations working in this space. Making sure everyone who wants it has access, via further legalization and subsidization, would reduce the amount of trauma, which could have knock-on benefits not just for sufferers but for the people they interact with.
Cluster headaches are a particularly nasty condition associated with extreme amounts of suffering. Legalizing psychedelics that ameliorate the condition, such as DMT, would help sufferers get the access they need.
- https://forum.effectivealtruism.org/posts/4dppcsbcbHZxyBC56/treating-cluster-headaches-using-n-n-dmt-and-other-1
- https://forum.effectivealtruism.org/posts/wfXrMmKcD6eLEbS6R/opis-initiative-on-access-to-psilocybin-for-cluster
- https://forum.effectivealtruism.org/posts/gtGe8WkeFvqucYLAF/logarithmic-scales-of-pleasure-and-pain-rating-ranking-and
louisbarclay @ 2022-03-07T13:45 (+8)
EA storytelling
Research That Can Help Us Improve, Values and Reflective Processes, Effective Altruism
The stronger the stories EA tells, the more people will be convinced to act on EA ideas in their own lives. We’re interested in funding people with a proven track record in storytelling, including generating viral content, to create EA stories that could reach millions of people.
(Potentially extends existing Project Ideas ‘A fund for movies and documentaries’ and 'Critiquing our approach'.)
Project ideas from this page that are relevant to this idea:
EA-themed Superhero Graphic Novel / Shounen Anime / K Drama (jknowak)
louisbarclay @ 2022-03-07T13:43 (+8)
Research into why people don't like EA
Research That Can Help Us Improve
Many people have heard of EA and weren’t convinced. We want to understand why, so that we can find approaches to convince them. If we can win more people over to EA, we can directly increase the impact that EA has in the world.
We’re excited to fund proposals to research why people do and don’t like EA, and the approaches that are most effective in winning people over to EA.
(Potentially extends existing Project Idea 'Critiquing our approach'.)
PeterSlattery @ 2022-03-08T03:22 (+2)
I have had similar ideas about this. Your idea also potentially relates to and would work well with my 'EA brand assessment/public survey' idea.
Taras Morozov @ 2022-03-07T06:42 (+8)
Find good ways to distribute books to people with high potential
Epistemic Institutions, Effective Altruism
This project has two parts:
1) find people with high potential, especially students.
2) find a good way to distribute books on world problems to them.
Re 1: Examples:
- students in low- and middle-income countries may have higher demand for English books
- participants in STEM olympiads
- people with SAT scores > x
- students in selective schools
Re 2: It is important to do this in a friendly, non-preachy way.
One possible implementation is a book club that sends out a book every two months, with regular online meetups for its readers.
Taras Morozov @ 2022-03-07T06:38 (+8)
Create and curate educational materials on EA-related topics
Effective Altruism
The EA Fellowship and the EA Handbook took existing resources and curated them into a good introduction to EA. Do something similar with different formats and subjects.
I.e., create:
- fellowships
- reading lists
- recordings of existing academic courses
- and so on
with the goal of making it easy to take up new fields, in areas like:
- Rationality
- Bioweapons
- Forecasting
- and so on.
noahchonlee @ 2022-03-07T03:38 (+8)
EA Berkeley Hostel
Effective Altruism
Every week, EAs pass through Berkeley and someone needs to pay around $200 a night to house them or scramble to find a couch they can crash on. This becomes increasingly complicated when someone lands a trial-run offer and needs to stay a week longer than expected, or even receives a job offer and suddenly needs to rush to find housing. Currently, there exists NO hostel (or even a hotel room that costs less than a couple hundred bucks) even close to Berkeley, much less an EA hostel. A hostel in Berkeley would allow flexibility in stay times, provide a place for EAs to stay when moving between housing situations, and provide a homey space for co-working and networking.
Feel free to comment to ask more details.
PeterSlattery @ 2022-03-07T03:28 (+8)
EA services consultancy network or organisation (early draft)
Movement building and resolving coordination problems
There is considerable need for support for small projects on tech, design, etc. Many effective charities lack key ingredients for improvement, and many good ideas never get off the ground due to a lack of technical expertise. The network could survey movement leaders and scale up as demand grows. This could include:
- Tech support organisation
- Associated media and PR services for EA organisations to publicise work via media
- Content creation for SEO and media
- Research services and training for young EAs
See this and related replies for other thoughts. Will's reply to this post is also probably a better articulation than mine, so I won't refine my draft further.
Chris Leong @ 2022-03-07T03:41 (+4)
Altruistic Agency cover the tech portion of this, but providing other services could be valuable as well.
PeterSlattery @ 2022-03-08T03:40 (+2)
Yes, I agree. I think that they and similar organisations should be well funded once validated as being useful. Right now, Altruistic Agency are helping READI to build a new website. I also have several other EA projects that I am going to ask for help with.
Alex D @ 2022-03-07T03:20 (+8)
EA Micro Schools
Effective Altruism
We would be excited to fund projects that make it easier to start up an EA-aligned, accredited private school.
As EA matures, there will be more and more parents. Kids of self-identified EAs are likely to be smart and neurodivergent, and may struggle with the default schooling system. They're also likely to grow into future adult EAs. Remote work options will free up location choice, and there could be major community-building gains if parents can easily find their ideal school in an EA hub.
Variation: develop an EA stream or instance of an existing private school with a strong model, like Acton Academy.
PeterSlattery @ 2022-03-07T02:45 (+8)
Creating more EA aligned journals or conferences
Movement building
Academic publications are considered to be significantly more credible than other types of publications. However, the academic publication system is highly misaligned with key EA values (e.g., efficiency and intellectual novelty/impartiality). We would therefore like to encourage initiatives to start, influence, or acquire influential academic journals or conferences to enable EA to have greater academic impact towards our desired outcomes.
Just FYI, here is copy explaining a related idea that I discussed with David Reinstein (who is doing work in this space). It's about how to create a very low effort EA 'unjournal'. This could provide a way to more easily publish small-scale EA research papers and projects:
- To create a quick and easy prototype to test, you fork the EA Forum and use that fork as a platform for the 'unjournal' project (maybe called something like 'The Journal of Social Impact Improvement and Assessment').
- People (ideally many EAs) would use the forum-like interface to submit papers to this 'unjournal'.
- These papers would look like EA forum posts but with an included OSF link to a PDF version. Any content (e.g., slides/video) could be embedded in the submission.
- All submissions would be reviewed by a single admin (you?) for basic quality standards.
- Most drafts would be accepted to the unjournal.
- Any accepted drafts would be publicly 'peer reviewed'. They would achieve 'peer reviewed' status when >x (3?) people from a predetermined/elected editorial/expert board had publicly or anonymously reviewed the paper by commenting publicly on the post (a sketch of this rule appears after this list). Reviews might also involve rating the draft on relevant criteria (INT?). Public comment/review/rating would also be possible.
- Draft revisions would be optional but could be requested. These would simply be new posts with version X/v X appended to the title
- All good comments/posts to the journal would receive upvotes etc so authors, editors and commentators would gain recognition, status and 'points' etc from participation. This is sufficient for generating participation in most forums and notably lacking in most academic settings.
- Good papers submitted to the journal would be distinguished by being more widely read, engaged with, and praised than others. If viable, they would also win prizes. As an example, there might be a call for papers on solving issue x with a reward pool of grant/unconditional funding of up to x for winning submissions. The top x papers submitted to the unjournal in response to that call would get grant funding for further research.
- A change in rewards/incentives (from 'I had a paper accepted/cited' to 'I won a prize') seems to have various benefits:
- It still works for traditional academic metrics - grant money is arguably even more prized than citations and publications in many settings
- It works for non-academics who don't care about citations or prestigious journal publications
- As a metric, 'funds received' would probably track researchers' actual impact better than their citations and acceptance in a top journal. People won't pay for more research that they don't value, but they will cite work or accept it to a journal for other reasons.
- Academics could of course still cite the DOIs and get citations tracked this way.
- Reviewers could be paid per review by research commissioners.
- Here is a quick example of how it could work for the first run. Open Philanthropy calls for research on something they want to know about (e.g., interventions to reduce wild animal suffering). They commit to provide up to $100,000 in research funding for good submissions and $10,000 for review support. 10 relevant experts apply and are elected to the expert editorial board to review submissions. They receive 300 USD per review and are expected to review at least x papers. People submit papers, these are reviewed, and OP awards follow-up prizes to the winning papers. The cycle repeats with different funders and so on.
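To pin down the acceptance rule described above, here is a minimal sketch in Python of the "peer reviewed once enough board reviews" logic (all names here, such as `Submission` and `REVIEWS_REQUIRED`, are illustrative assumptions rather than any existing codebase):

```python
from dataclasses import dataclass, field

# Illustrative threshold: the ">x (3?)" board reviews mentioned above.
REVIEWS_REQUIRED = 3

@dataclass
class Submission:
    title: str
    board_reviews: set = field(default_factory=set)  # names of board reviewers

    def add_review(self, reviewer: str, board: set) -> None:
        # Only reviews from the elected editorial/expert board count towards
        # 'peer reviewed' status; other public comments are tracked elsewhere.
        if reviewer in board:
            self.board_reviews.add(reviewer)

    @property
    def peer_reviewed(self) -> bool:
        return len(self.board_reviews) >= REVIEWS_REQUIRED

# Example: a submission reaches 'peer reviewed' status after three board reviews.
board = {"alice", "bob", "carol", "dan"}
paper = Submission("Interventions to reduce wild animal suffering")
for reviewer in ("alice", "bob", "carol"):
    paper.add_review(reviewer, board)
print(paper.peer_reviewed)  # True
```

The threshold, whether anonymous reviews count, and how the board is elected would all be parameters to settle; the sketch only fixes the core rule.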
monadica @ 2022-03-09T21:20 (+1)
Hi Peter, very awesome idea! I am working on this kind of project; it would be nice to talk with you.
PeterSlattery @ 2022-03-07T02:40 (+8)
Better recruitment and talent scouting networks
Movement building, coordination, coincidence of wants problems
Decentralised social good communities face significant coordination problems: many talented social actors and influencers are either unaware of key knowledge or unable to find a clear fit for their skills. This is particularly true in less developed countries, where relevant networks are relatively nascent. To address this, we’d like to support work that develops a global network of recruiters and talent scouts. For instance, these organisations and individuals would collect leads for key jobs and projects, identify and proactively fund and connect talented but ‘naive’ social actors, and assess and curate talent pools related to key career domains and skill-sets.
WilliamKiely @ 2022-03-08T00:12 (+6)
Recruitment agencies for EA jobs
Empowering Exceptional People, Effective Altruism
There are hundreds of organizations in the effective altruism ecosystem and even more high-impact job openings. Additionally, there are new organizations and projects we’d like to fund that need to recruit talent in order to establish founding teams and grow. Many of these often lack adequate resources to do proper recruiting. As such, we’d be excited to fund EA-aligned recruitment agencies to help meet these hiring needs by matching talented job-seekers with high-impact roles based on their skills and personal fit.
----------------
(Also submitted via the Google Form.)
Other very similar ideas: Lauren Reid’s Headhunter Office idea and aviv’s Operations and Execution Support for Impact idea.
PeterSlattery @ 2022-03-07T02:38 (+8)
EA community housing network
Movement building & coordination
Social movement building requires key members of the community to have regular rewarding interactions. To catalyse this, we would like to establish more EA organisations and institutions across the world, and more travel arrangements between them. For instance, this could be modelled on approaches such as the “International House” student accommodation, which provides cheap accommodation for students and works to instil cosmopolitan values.
...
A late update is that I would also really like to see support for EA co-working communities in major hubs. This could be as simple as funding to rent a space for one day a week. It would be very valuable in Sydney, for instance.
agnode @ 2022-03-06T18:23 (+8)
Intellectual coaching
Empowering exceptional people, Effective Altruism
Many people with the potential to do good research and writing work hit blockers that are a complex mix of psychological and intellectual issues - for example, uncertainty and fear around what to work on, or lack of confidence in one's abilities. It's difficult to find someone to help address this kind of problem. Therapists and mainstream coaches don't have a good understanding of research and EA work, while within EA most of the coaching available is focussed on career choice or productivity techniques, without the sometimes deep psychological work needed to unblock people's research potential. This fund could support the training and work of people who have the rare combination of therapeutic ability and understanding of research work. This kind of coaching could also be useful for people outside of EA within academia.
Additional notes:
- This idea was partially stimulated by this tweet from a writing coach, which says "Okay so at this point it’s clear that the real point of my coaching is helping people unlock dormant emotions and unsuppress unacceptable elements of self through expression, although there’s prose tuneups too What do I call this Compassionate creative coaching?"
samuel @ 2022-03-06T16:33 (+8)
Bonuses/prizes/support for critically situated or talented workers
Empowering Exceptional People
Work that advances society should be rewarded and compensated at fair market value. Unfortunately, rewards are often incommensurate, delayed, or altogether unrealized. We'd be excited to see a funding process that 1) identifies work that's underappreciated by or insulated from the market and 2) provides incentives for workers/teams to stay put and complete said work.
EA often focuses on building new organizations to solve problems, but talented people are already situated within organizations that can foster real change. In government, academia, large legacy companies and non-profits, incentives are usually in the form of slowly accrued assets like prestige, job security or future private sector paydays. Unfortunately, these are also organizations that are tasked with addressing urgent matters such as climate change, pandemics, housing shortages, etc.
How do we incentivize important work outside of the market’s reach? How do we incentivize talented but poorly compensated workers to stay at essential but bureaucratic organizations that are optimally positioned to foster change?
- Challenge Prizes: Small to medium sized prizes or donations for the completion of work that’s going too slowly. This provides a market signal that stresses urgency in no uncertain terms. This is similar to moonshots but more immediate/focused/localized.
- Bonuses: Set up externally funded performance bonuses for well-placed individuals that are at low-paying but important organizations. Or external signing bonuses for obtaining high-leverage roles in these institutions.
- Coddling Services: Basically, personal assistant services for identified high-performing individuals who could use more time focusing (this is similar to an idea already posted by @JanBrauner).
Ricky @ 2022-03-06T08:18 (+8)
EA to create an incubator to fund social enterprises with a high social return on investment.
This will help improve the visibility of the EA brand. It will also help connect world-improving ideas with capital.
Denis Drescher @ 2022-03-05T19:52 (+8)
Safety of comprehensive AI services
Artificial Intelligence
I imagine that comprehensive AI services (CAIS) could face similar problems to intelligence agencies. Ideally, an intelligence agency would only hire those people who are maximally trusted, but then they could hire hardly anyone. Instead they split the information that any one person can see such that (1) that person can’t do much harm with the one piece of the full picture that they have and (2) if it leaks or the person exploits their knowledge in illegitimate ways, the higher-ups can trace the leak. Swap in particular capabilities for the particular information, and you have the situation that a network of specialized AI services faces.
Specialized services may be designed to recombine other services in ways that give them, in aggregate, close to general abilities, or individual services may be undiscovered mesa-optimizers. Human overseers may then benefit from research that has transferred the experience of intelligence agencies to CAIS.
This may be part of a more general push to turn CAIS into a safer and more efficient alternative to AGI for the industry.
Lauren Reid @ 2022-03-05T14:53 (+8)
Solve Type 2 Diabetes
Biorisk and Recovery from Catastrophe
Type 2 diabetes, caused by insulin resistance, is one of the top 10 causes of disability (DALYs) and is also a root cause of ischemic heart disease and stroke, which are likewise in the top 10. People with diabetes are immunocompromised and have worse outcomes from infection (as we saw with Covid). Several treatments to reverse diabetes are known, and there are groups like Virta Health doing good work in this space, but some treatments are prohibitively expensive (like GLP-1 agonists). Prevention and nutrition education are underfunded, especially when compared to the health care costs of treating complications like amputations and dementia.
Due to the use of fructose and other poor-quality foods, Type 2 diabetes is increasing in both the developed and developing world. Right now, there’s a major shortage of insulin in Ukraine. It is estimated that 95% of people with diabetes have Type 2; if we could prevent a significant fraction of that disease, the insulin could be used for the 0.55% of the (US) population that can’t make endogenous insulin because of Type 1 diabetes.
From a longevity/biotech view, people with type 2 diabetes are much less likely to benefit from future interventions due to poor health of tissues.
(Disclosure - I’m a physician specializing in disability and diabetes is my nemesis)
Lukas_Gloor @ 2022-03-20T06:58 (+2)
Inspired by this proposal, researching the claim that seed oils may be responsible for many "diseases of civilization" (contamination theory of obesity). Probably(?) not true but highly important and actionable if true.
Mohamed Labadi @ 2022-03-10T01:30 (+1)
How about treating diabetes with fasting?
I know many people who have used fasting to cure their diabetes.
Lauren Reid @ 2022-03-10T20:16 (+1)
Yes, I am very pro-fasting (not medical advice, just opinion). The Obesity Code by Jason Fung is a really good description of why this works and I have given copies to colleagues to convince them. People often need support to start fasting - I have a dream of a retreat with these supports, like bone broth.
tessa @ 2022-03-04T23:27 (+8)
Continuous sampling for high-risk laboratories
Biorisk and Recovery from Catastrophe
We would be excited to fund efforts to test laboratory monitoring systems that would provide data for biosafety and biosurveillance. The 1979 Sverdlovsk anthrax leak happened because a clogged air filter had been removed from the bioweapons laboratory's exhaust pipe and no one informed the night shift manager. What if, by default, ventilation ducts in high-containment laboratories were monitored to detect escaping pathogens? Establishing a practice of continuous sampling would also support efforts to strengthen the Biological Weapons Convention; it would become easier to verify the convention if we had a baseline data signature for benign high-containment work.
Additional note: the OSINT data sources mentioned in the Strengthening the Bioweapons convention project (publication records, job specs, equipment supply chains) are also a form of continuous monitoring, but it seemed useful to carve this out as a separate technical priority.
Alex D @ 2022-03-05T22:07 (+4)
Add-on: for natural epidemics, there are a number of “event-based surveillance systems” that monitor news, social media, and other sources for weak signals of potential emergencies. WHO, PAHO, and many national governments run such systems, and there are a few private ones (one of which I run).
One could set up such a system focussing exclusively on the regions immediately surrounding high containment labs.
There are only ~60 BSL-4 labs, so you could conceivably monitor each of these regions quite closely without an impossibly large team.
Direct monitoring would be much better, but this might be a useful adjunct.
christian.r @ 2022-03-04T19:18 (+8)
Creative Arms Control
Biorisk and Recovery from Catastrophe
This is a proposal to fund research efforts on "creative arms control," or non-treaty-based international governance mechanisms. Traditional arms control -- formal treaty-based international agreements -- has fallen out of favor among some states, to the extent that some prominent policymakers have asked whether we've reached "The End of Arms Control."[1] Treaties are difficult to negotiate and may be poorly suited to some fast-moving issues like autonomous weapons, synthetic biology, and cyber operations; by the time traditional arms control is negotiated, the technology may have outpaced the regulations. Partly for this reason, states and private actors alike have increasingly turned to informal "norms" processes (e.g. on cyber), codes of conduct and technical agreements, or confidence-building measures (CBMs). How well does such "creative arms control" work? Is it a suitable instrument for regulating emerging technologies this century? How hard is it to turn a norms process into a verification-based treaty regime? Research on these questions is still thin, and greater funding could therefore be very valuable for future regulation on GCR/X-risk-related technologies.
- ^
For a while, it seemed like the 2021 extension of New START had invalidated Ambassador Brooks's points in that 2020 article. I think Russia's 2022 invasion of Ukraine is going to make agreement on formal arms control very difficult again.
Taras Morozov @ 2022-03-04T15:31 (+8)
Look for UFOs
Space Governance
In recent years, there has been an upsurge in reports by the military on sightings of UFOs including detecting the same object with multiple modalities at once (examples: 1, 2).
Avi Loeb proposes to create a network of high-resolution sensors (just as the military has). But unlike the military's, its results would not be classified and could be openly analyzed by scientists. The cost of doing this is on the order of millions of dollars.
Knowing whether there are aliens has many consequences, including for longtermism, since if the universe is densely populated, its expected moral value does not depend solely on our decisions.
Taras Morozov @ 2022-03-04T15:13 (+8)
Write encyclopedias (esp. Wikipedia), then translate them (esp. to Russian and Chinese)
Epistemic Institutions
Create a team of people who will write articles on Wikipedia on subjects related to EA. Why this is important is described here.
Besides writing articles on English Wikipedia, they can also:
- Create good illustrations (somehow, medical articles tend to have much better pictures than other areas)
- Translate these articles to other languages (especially Russian and Chinese)
- Topics that are not notable enough for Wikipedia can be described in a separate encyclopedia.
MaxRa @ 2022-03-12T15:48 (+3)
I really like the idea, especially regarding the idea of improving understanding between the West and China. Unfortunately, I think Wikipedia won't work because Wikipedia has very strict norms against non-volunteer contributions. There are Chinese alternatives, but IIRC they are under relatively tight ideological control.
Milan_Griffes @ 2022-03-04T13:21 (+8)
Researching valence for AI alignment
Artificial Intelligence, Values and Reflective Processes
In psychology, valence refers to the attractiveness, neutrality, or aversiveness of subjective experience. Improving our understanding of valence and its principal components could have large implications for how we approach AI alignment. For example, determining the extent to which valence is an intrinsic property of reality could provide computer-legible targets to align AI towards. This could be investigated experimentally: the relationship between experiences and their neural correlates & subjective reports could be mapped out across a large sample of subjects and cultural contexts.
Greg_Colbourn @ 2022-03-05T10:57 (+6)
I've been wondering whether AGI independently discovering valence realism could be a "get out clause" for alignment. Maybe this could even happen in a convergent manner with natural abstraction?
Milan_Griffes @ 2022-03-04T13:18 (+8)
Researching the relationship between subjective well-being and political stability
Great Power Relations, Values and Reflective Processes
Early research has found a strong association between a society's political stability and the reported subjective well-being of its population. Political instability appears to be a major existential risk factor. Better understanding this relationship, perhaps by investigating natural experiments and running controlled experiments, could inform our views of appropriate policy-making and intervention points.
Konstantin Pilz @ 2022-03-04T13:03 (+8)
Prevent community drainage due to value drift
Effective Altruism, Movement building
Most Effective Altruists are still young and will have the greatest impact with their careers (and spend the greatest amounts of money) in several decades. However, people also change a lot, and for some this leads to decreased engagement or even full drop-out. Since there is evidence that drop-out rates might be up to 30% over the careers of highly engaged EAs, this is a serious loss of high-impact work and well-directed money.
Ways of tackling this problem might include:
- Introducing more formal commitment steps when getting into EA
- Encouraging people to write down and reflect on their reasons for being part of EA
- Creating events especially aimed at strengthening the core community and encouraging friendships
Khorton @ 2022-03-04T20:56 (+6)
I find most discussion about discouraging value drift pretty distasteful. I don't have any reason to believe my future self's values will be worse than my current self's, so I don't want to be uncooperative with her or significantly constrain her options.
I'm especially uncomfortable with the implication that becoming less involved with EA means someone's values have gotten worse.
Gavin @ 2022-03-04T21:48 (+2)
What about the predictable effect of becoming less open-minded and tolerant as we age? Sure, there's a sense in which I don't know that that state is worse than my current one. But it seems worse, and that seems enough to worry about it.
Khorton @ 2022-03-05T00:27 (+4)
Becoming less open-minded seems like a classic case of a healthy explore/exploit tradeoff over a lifetime. I'm less open-minded about a lot of things than I was a decade ago and I don't think that's a bad thing. I wouldn't worry about a change of the same magnitude again over the next decade.
Edit: For me, open-mindedness isn't a moral value, it's just a means to an end. People who intrinsically value open-mindedness might be much more nervous about becoming less open-minded! That would be totally reasonable.
Edit 2: There's something very ironic about using "older people become less open-minded" as a rationale for "I should commit myself to one social movement for the rest of my life".
Gavin @ 2022-03-05T15:31 (+2)
We may be talking past each other, because what I mean by open-mindedness seems extremely instrumentally valuable on all kinds of views:
If I become less impartial, less open to evidence, and less willing to adapt to good changes in the world, this ought to concern me.
(I actually don't know how reliable the above results about old-age conservatism are, so discount the above to the extent you don't trust those studies.)
@ Edit 2: I'm not OP and don't intend this as an argument for committing to EA 4eva. Instead it's an example of value drift which concerns me, independent of where the social movement lands.
aviv @ 2022-03-03T03:49 (+8)
Operations and Execution Support for Impact
Empowering Exceptional People, Effective Altruism
The skill of running operations for building and growing a non-profit organization is often very different from doing the "core work" of that org. Figuring out operational details can suck energy away from the core work, leaving many promising people deciding not to start new orgs even when it is appropriate and necessary for scaling impact. We'd like to see an organization that could provide a sort of recruiting and matchmaking service which identifies promising operations people in more traditional domains, vets them, and matches them with potential founders/grantees. In addition, such an organization could provide executive coaching to the founders. This would unlock people who would only start an org if they could jump straight into the core work, with operational support to build out the team (and to set up health benefits and the like). Over time, such an org could build out a network and knowledge about how best to identify relevant operational talent and support success in other ways.
JackM @ 2022-03-02T22:43 (+8)
Optimal strategies for existential security
Research That Can Help Us Improve
If we don't achieve existential security (a persistent state of negligible x-risk), an existential catastrophe is destined to happen at some point, wiping out humanity's longterm potential. Despite the incredible importance of achieving existential security, there is a lack of consensus within the EA community on how best to do so, which is partly down to a lack of high-quality, in-depth research on this question. Instead, most research has focused on reducing specific existential risks. I'd like to see a major research project initiated to identify the optimal strategies for achieving existential security. This project should be inter-disciplinary, including both theoretical and applied experts in economics, philosophy, international relations, sociology, policy, and more. Not only would this project identify the optimal strategies to achieve existential security, but it would also identify the most effective actions to be taken by policymakers and other relevant stakeholders to help achieve this goal.
agnode @ 2022-03-02T08:48 (+8)
Credible expert Q&A forums
Epistemic institutions
Decisionmakers (e.g. funders and policymakers) tend to use a mixture of desk research, interviews with experts, and workshops with experts to inform their decisions. Online forums where questions can be asked of experts could be a useful part of this process. Forums are useful compared with desk research as information can be sought that may not be covered in existing sources. They are useful compared with interviews and workshops as they require less organisational overhead to get expert input and what is learned is automatically public and usable by others. However, in current expert forums (e.g. stackexchange and various Ask subreddits) it is unclear how credible the people answering are, and it is likely that many of them are enthusiastic amateurs rather than experts, especially in forums on more qualitative subjects. There could be a fund to support more rigorous forums that vet the people answering and moderate carefully. Money could also be used to incentivise contribution, as one challenge with forums is having enough experts to answer questions, especially niche questions. These forums could not only help decisionmakers but also other people seeking expert knowledge, including journalists, practitioners, and the general public. If the forums were well-known and credible enough, then linking to answers could become a form of citation.
IanDavidMoss @ 2022-03-03T13:52 (+2)
This sounds a bit like the EA Librarian?
Nathan Young @ 2022-03-02T10:50 (+2)
I'd like that on this forum tbh
Zac Townsend @ 2022-03-01T11:58 (+8)
(Per Nick's note, reposting)
Longitudinal studies
Epistemic Institutions; Economic Growth
We are interested in funding long-term, large-scale data collection efforts. One of the most valuable research tools in social science is the collection of cross-sectional data over time, whether on educational outcomes, political attitudes and affiliations, or health access and outcomes. We are interested in funding research projects that intend to collect data over twenty years; such projects require significant funding to ensure follow-up data collection.
Greg_Colbourn @ 2022-03-01T10:18 (+8)
Airdrop for EA Forum karma holders
Empowering Exceptional People, Effective Altruism
Take a snapshot from some time in the past (e.g. the date of the OP), and award $100 for each karma point to all EA Forum karma holders. This could be extended and scaled as appropriate to the AI Alignment Forum and perhaps r/EffectiveAltruism and other places as seen fit. As a one-off, this can't be gamed. It might encourage more participation going forward, but it should be made clear that there should be no expectation of a repeat. Ideally, the money would be no strings attached. It would be interesting to see how it is spent, and -- assuming a lot of it is regranted or spent on direct work[1] -- perhaps it could serve as an ultimate example of decentralised grant making in the EA community (so high VoI?). EA Forum karma seems like a good proxy for positive participation in the EA community, although I understand that many people make great contributions but aren't active on the Forum. It would be left as an exercise to altruistic karma holders to remedy any injustices.
[Note that this is only a semi-serious proposal, based on a common strategy in the crypto community -- which FTX is a big player in -- for rewarding holders of coins and tokens in order to encourage investment and participation (and gain attention). As the proposer, I waive my right to any airdrops should this or something like it actually happen.]
- ^
If there isn't much regranting or spending on direct work, then this could be evidence for financial insecurity in the community. (Or worse, a lack of altruism when it actually comes down to having money to spend.)
Khorton @ 2022-03-01T15:41 (+25)
I have 6611 karma and if y'all gave me $600k no strings attached, I'm not gonna lie I would buy a really nice house.
Larks @ 2022-03-01T21:05 (+10)
And now an extra $1.5k worth of house on top of that!
Greg_Colbourn @ 2022-03-03T12:00 (+3)
I appreciate the honesty. [Note the rest of this is not directed at Khorton; more to the people upvoting her comment]. But I'm disheartened by the fact that this comment has got high karma. It looks pretty bad from an outside perspective that such selfish use of a windfall is celebrated by effective altruists. And also from an inside perspective - it makes me wonder how altruistic most EAs actually are. I mean, I hope most of us would at least give the standard GWWC 10% away (and maybe that is implicit, but it isn't to an outsider reading this -- and a lot of outsiders probably are reading this given the attention that the FTX Future Fund is getting).
Where are the comments saying "I'd fund X", "..start Y", "..do independent research on Z"!? Maybe it's just that no one is taking this seriously -- and I get it, it was meant partly as an amusing play on the crypto airdrop phenomenon -- but it's still a bit sad to see such cynicism around altruism being promoted on the EA Forum.
If EAs can't be expected to do EA things with large unexpected windfalls without there being strings attached, then I question the integrity of the movement.
You might argue that EA is no longer funding constrained (so therefore it's fine to be selfish), but funding saturation is not evenly distributed.
alexrjl @ 2022-03-03T17:04 (+9)
Khorton buying a nice house and meeting her GWWC pledge seem perfectly compatible, and suggesting that her planning to do this casts significant doubt on the integrity of the movement seems both over the top and unkind, and I don't think the 'I'm directing my complaining at upvoters not khorton' does much to mitigate that.
Greg_Colbourn @ 2022-03-03T19:40 (+2)
For the record, I'm not saying that "house + GWWC pledge" is lacking in integrity, I'm saying that "house" alone is (for an EA) (and that's what it looks like to an outsider who won't know about Khorton taking the GWWC pledge).
Khorton @ 2022-03-03T16:03 (+4)
I doubt the people who upvoted this comment are encouraging me (although maybe they are!). I think it's more likely that they think it was a valuable piece of information.
Greg_Colbourn @ 2022-03-04T08:57 (+4)
I guess I'm reading more into it. To me it looks something like: "Haha, Greg is so naive to think that rank and file EAs can be trusted to do good things if we give them free money, no strings attached. See, this is the kind of thing we should expect." Possibly with the additional: "And why not? EA is no longer funding constrained, and there isn't much that non-expert, small-to-medium donors can do with money now" [both quotes would come with more courteous, careful phrasing, and caveats, in real life of course. I've written it how I have because I'm somewhat emotionally invested; my apologies.].
And outsiders looking on might be thinking "See, these so-called 'effective altruists' are no different than the rest of us when it really comes down to it. The most upvoted comment on a thread about an airdrop is one about spending the cash on a house!"
Linch @ 2022-03-01T16:17 (+21)
There are some serious incentives issues here where the EAF users with the most karma (and thus most incentive to gain from this proposal) are also the ones with most strong upvote power. :O
Yonatan Cale @ 2022-03-01T16:56 (+3)
Inspired by this, I am reading all the suggestions from the bottom (from least-karma)
Greg_Colbourn @ 2022-03-01T16:31 (+2)
Yes. FTX: please try to ignore the karma on the proposal comment when considering it!
Greg_Colbourn @ 2022-03-02T11:36 (+19)
Slightly disappointed that this has ended up on negative karma. I think it's at least triggered some somewhat fruitful discussion. I do think a broad-based retroactive funding of public goods in the EA community would be good, especially in terms of its knock-on effects for the next generation of projects. Mediation of this via crypto and impact certificates seems promising, even if a direct airdrop based on an imprecise metric such as EA Forum karma isn't the way to go.
Jackson Wagner @ 2022-03-01T20:47 (+4)
Even putting aside Nuño's list of pretty serious issues preventing karma from correlating well with impact, I think $100 is way too high a value for current-day EA karma points (maybe it could be appropriate for karma points earned years ago in the Forum's infancy).
If one point of karma was worth on average more than $100 donated to EA charities, then posting on the EA forum would be so preposterously effective that my 1300 karma points accrued this year would be worth ~$130,000 to the movement, massively outweighing any donations I could hope to make to EA charities, also seemingly outweighing the impact of many other forms of direct work (since most EA salaries are lower than $130K/year) and equivalent to saving more than 25 lives just by commenting. It would also imply that CEA is massively underinvesting in support for the Forum.
On the other hand, if karma points were worth only $1 of donations to EA charities, then everyone would be completely wasting their time here (depending on how long it takes you to write comments, conceivably doing less good than you could do by donating 10% of your income after working an extra hour at minimum wage, etc), and CEA would be massively overinvesting by spending more money on supporting the Forum than the value it actually produces.
Realistically I think Karma points are probably worth $20-$30 "on average". But the average is dragged upwards by a small number of extremely valuable posts. From an inside-view perspective, I think my participation on the Forum has been decently helpful to folks, but I probably haven't discovered any totally revolutionary insights that will become foundational for EA causes going forward. So I figure if folks like me want to try to quantify their Forum contributions despite all those valid objections I linked, they should figure each karma point to be worth ~$10.
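For anyone who wants to rerun this back-of-the-envelope, here is a small sketch of the arithmetic (the per-point valuations and the $5,000-per-life conversion are illustrative assumptions drawn from the reasoning above, not established figures):

```python
# Implied value of a year's forum participation under different assumed
# $-per-karma-point valuations. All inputs are illustrative assumptions.
KARMA_EARNED = 1300      # karma accrued this year, per the example above
COST_PER_LIFE = 5000     # rough assumed $ to save a life via donations

for dollars_per_point in (1, 10, 30, 100):
    total = KARMA_EARNED * dollars_per_point
    lives = total / COST_PER_LIFE
    print(f"${dollars_per_point}/point -> ${total:,} (~{lives:.0f} lives)")
```

At $100/point this gives ~$130,000 and roughly the "25+ lives" figure above; at $1/point it gives $1,300, the "wasting everyone's time" end of the range.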
Greg_Colbourn @ 2022-03-01T21:46 (+4)
Interesting analysis. The airdrop wouldn't need to be based on the estimated value of karma points though. I was thinking of it more in terms of a mechanism for decentralising (grant making) power in the EA movement. $100 was chosen to make the sums allocated to people significant in a way that $10 probably wouldn't be (e.g. if it was $10, most people wouldn't really get enough to fund or start new projects, quit their job and do independent research, etc).
Nuño's list probably means that there should be some attempt to apply adjustments to scores. But this does open a can of worms.
Are there any other promising proxies for EA impact that could be used for an airdrop?
Jackson Wagner @ 2022-03-01T22:51 (+4)
Maybe instead of airdropping something that can be directly exchanged for cash (in which case many people would just buy a house with their $600K), we airdrop a resource that is somehow restricted such that it has to be a donation? A Forum-Karma-based airdrop seems like it would be an awesome way to kick off an impact certificates program -- people could use their KarmaCoin to invest in impact certificates, with the promise that if you invest wisely, down the road the certificates for the most impactful projects might get bought by a mega-donor like OpenPhil, and that's how you'd ultimately get a cash payout.
Greg_Colbourn @ 2022-03-02T11:29 (+2)
Sounds good! I wonder what loopholes could emerge though? Most cryptos end up with a market value even if they don't intend to have one. I suppose KarmaCoin could be timelocked somehow. It makes it more difficult to trade, but people can still make IOU contracts.
Yonatan Cale @ 2022-03-01T17:00 (+3)
I'd be afraid of playing around with the karma system. I think the EA Forum / LessWrong might become the high-quality-discussion social media of the future, and I wouldn't make changes to the karma system without at least considering how the change impacts that vision.
Greg_Colbourn @ 2022-03-01T18:10 (+10)
It wouldn't be a change. It's a one-off reward for past activity (a retroactive funding of a public good, as it were :))
Greg_Colbourn @ 2022-04-03T11:53 (+2)
Ok, so LessWrong are actually doing this(!) - but for a week going forward from April Fool's Day - rather than retroactively, and for $1/karma point (rather than $100).
Denis Drescher @ 2022-03-01T17:30 (+2)
Note that there are some forum users who have posted highly upvoted posts and comments under different pseudonymous accounts. :-)
Greg_Colbourn @ 2022-03-01T18:11 (+4)
Yes. Perhaps we need to add Metamask support to the Forum :)
Chris Leong @ 2022-03-01T07:59 (+8)
Sponsoring Debates on Future Fund Issues
Effective Altruism
The Future Fund could run debates on these issues with high-level debaters (i.e. World Champions or finalists) receiving significant compensation to take part. One format which would be particularly exciting would involve prominent academics giving the opening speeches for both sides and debaters taking the debate from there (for example, imagine Bostrom and Peter Singer debating how much we should focus on x-risks from AI vs. the present day). The debates would be recorded and prominently advertised on social media to relevant people. This would allow EA to engage and recruit people from the debating community.
Risks: Debating is focused on persuading people rather than reaching the truth.
Greg_Colbourn @ 2022-03-01T10:54 (+2)
See also: introduce important people to the most important ideas by way of having seminars they are paid a "speaker's fee" to attend (more).
Khorton @ 2022-03-01T15:44 (+4)
I am inherently suspicious of paid seminars and would personally downgrade the credibility of any ideas I heard in a paid seminar (even if I went to get the money!)
Greg_Colbourn @ 2022-03-01T15:56 (+2)
Would you feel the same way about a conference you were paid a fee for speaking at?
One way of averting this could be to give participants an amount of money to allocate to a charity of their choice, instead of paying them (like on celebrity game shows).
Khorton @ 2022-03-01T20:04 (+2)
If I'm paid to speak, that's not suspicious; if I'm paid to listen (in any way), that's suspicious.
Edit: Actually now that I work for government, being paid to speak is a little suspicious, and I am required to decline and report paid speaking invitations! Because it's an easy cover for bribery. But in general I don't think it's suspicious.
Greg_Colbourn @ 2022-03-01T21:34 (+2)
Ok, yes, in my proposal I say "it should be made clear that the fee is equivalent to a “speaker's fee” and people shouldn’t feel obliged to “toe the party line”, but rather speak their opinions freely." There would be some listening involved too, though. I also say "In addition to (or in place of) the fee, there could be prestige incentives like having a celebrity (or someone highly respected/venerated by the particular group) on the panel or moderating, or hosting it at a famous/prestigious venue". But maybe this would also arouse suspicion.
Monique Kwakman @ 2022-03-08T09:45 (+7)
Happy Altruist Hotel
I have submitted this idea: creating a Happy Altruist Hotel.
My project idea focusses on improving the wellbeing of effective altruists by creating a center dedicated to that. I am thinking of a physical location, preferably in a natural environment. I will call it (for now) the "Happy Altruist Hotel". The way I see it, the Happy Altruist Hotel is a place where all kinds of programs, workshops, retreats, and trainings will be organized for (aspiring) effective altruists.
The Happy Altruist Hotel will be a place where EAs come together for inspiration, to learn, and to connect. A lot of EAs have the tendency to work so hard that there are severe risks of burnout and even depression. We can organize retreats that focus on improving wellbeing and happiness within the community. This place can also be used for career workshops, workshops in applied rationality, etc. I really think there are a lot of possibilities if we create a place like that.
In my ideal picture, we can run a year-round program with all kinds of retreats for the EA community (also provided by EAs) with a broad diversity of inspirational content. My target group will be the existing community and also new EAs. It's also possible to organize (for part of the year) for-profit trainings for companies and groups. In that way the hotel can be financially sustainable in the long run.
I think it's good to invest in a project like this, because it's my sincere belief that a happy (effective) altruist takes good care of themselves and so has a more sustainable positive impact.
Peter S. Park @ 2022-03-08T00:38 (+7)
Research on how to minimize the risk of false alarm nuclear launches
Effective Altruism
Preventing false alarm nuclear launches (as Petrov did) via research on the relevant game theory, technological improvements, and organization theory, and disseminating and implementing this research, could potentially be very impactful.
Joss Oliver @ 2022-03-07T22:30 (+7)
Organising/sponsoring Hackathons
Epistemic institutions, empowering exceptional people
Many highly skilled programmers are lured into the private sector, either to work for prestigious companies or to found startups, often with little positive impact. We’d like to see these people instead working for or starting their own EA-aligned organisations.
To encourage this, we’d be excited to fund an organisation that involves itself with programming hackathons, to scout for highly creative and skilled individuals and groups. This could mean sponsoring existing hackathons or running its own.
Adam Binks @ 2022-03-07T22:15 (+7)
Prestigious forecasting tournaments for students
Epistemic institutions, empowering exceptional people
To scale up forecasting efforts, we will need a large body of excellent forecasters to recruit from. Forecasting is a skill that improves over time, and it takes time to build a track record that distinguishes excellent forecasters from the rest - particularly on long-term questions. Additionally, forecasting builds generally useful research and rationality skills, and supports model-building and detailed understanding of question topics. Therefore, getting students to forecast high-impact questions might be particularly useful for both students' own development and the development of the forecasting community.
While existing forecasting platforms allow students to participate, the prestige and compensation that success offers are limited, especially outside the narrow forecasting community.
We would be excited to fund highly prestigious forecasting tournaments for students, similar to the Maths Olympiad and iGEM in that they would aim to attract top talent, while being focused on highly impactful questions. A second option is working with universities to give course credit for participation and success in the tournaments. In either case, excellent student forecasters would be rewarded with a prestigious marker on their CV and fast-tracked applications to superforecasting organisations.
louisbarclay @ 2022-03-07T13:44 (+7)
Solving institutional dysfunction
Values and Reflective Processes
Thousands of institutions have the potential to do more good, but are hampered by dysfunctions such as excess bureaucracy, internal politics, and misalignment between the values they and their employees hold and their actions. Often these dysfunctions are well known to their employees, yet they persist.
We're excited to fund proposals to study institutional dysfunction and investigate solutions, as well as tools to monitor the dysfunctions that lead to poor outcomes and to empower employees to solve them.
--
Project ideas from this page that are relevant to this idea:
Longtermist democracy / institutional quality index (evelynciara)
Longtermism Policy Lab (JBPDavies)
A global observatory for institutional improvement opportunities (IanDavidMoss)
Platform Democracy Institutions (aviv)
Scaling successful policies (SjirH)
Representation of future generations within major institutions (SjirH)
--
Existing example of work in this space: Joe Edelman's 'Values-Based Social Design'.
(This idea is potentially related to existing Project Idea ‘Institutional experimentation’.)
ben.smith @ 2022-03-07T07:40 (+7)
Fund publicization of scientific datasets
Epistemic institutions
Scientific research has made huge strides towards openness and data sharing in the last 10 years. But it is still common for scientists to keep some data proprietary for some length of time, particularly large datasets that cost millions of dollars to collect, such as fMRI datasets in neuroscience. More funding for open science could pay scientists when their data is actually used by third parties, further incentivizing them to make data not only accessible but usable. Open science funding could also support the development of existing open science resources like osf.io and other repositories of scientific data. Alternatively, a project to systematically catalogue scientific data available online (a "library of raw scientific data") could greatly expand access to and use of existing datasets.
PeterSlattery @ 2022-03-07T05:42 (+7)
Buying and building products and services that influence culture
Movement building
Mass media producers, such as news services, computer game studios, and book and movie studios, heavily influence culture. Culture in turn creates and shapes norms for collective values (e.g., trust in various groups and institutions) and behaviours (e.g., prosocial or antisocial behaviour). Collective values and behaviours then influence social outcomes. We'd therefore welcome work to build or acquire mass media producers and use them to promote relevant values and behaviours. For instance, this could involve producing popular media, books and games containing a significant portion of EA-themed content within and alongside other engaging content.
PeterSlattery @ 2022-03-07T03:04 (+7)
EA movement building evaluation support
Movement building
Effective social movement building requires us to understand what is working well and why. However, there is very limited information on how to track EA groups' performance, and on how different approaches perform in achieving key outcomes. We would like to support work to address this, for instance by helping to standardise EA group metrics and by creating simple tracking systems (e.g., distributing a single sheet and a related data visualisation program for tracking attendees across all groups).
PeterSlattery @ 2022-03-07T02:56 (+7)
Understanding public awareness and opinion of key EA values, the EA movement, and/or key organisations
Movement building & conceptual dissemination
What the public thinks of EA is highly relevant to many key outcomes, including movement building. We would therefore like to fund work to understand public trends in areas such as key values (e.g., longtermism, cosmopolitanism, or resource maximisation), attitudes towards activist movement 'brands' (e.g., EA, vegan activism, Extinction Rebellion), awareness of and attitudes towards key EA organisations (e.g., 80,000 Hours), and key EA behavioural outcomes (e.g., participation in supporting effective charities).
See also recent work from Lucius Caviola, which could be applied here.
Samantha Kassirer has also made a very good point in a Slack discussion about using machine learning techniques and social media (like this paper) to find and study proto-EAs, rather than self-report methods.
jacobpfau @ 2022-03-06T19:28 (+7)
On-demand Software Engineering Support for Academic AI Safety Labs
AI safety work, e.g. in RL and NLP, involves both theoretical and engineering work, but academic training and infrastructure do not optimize for engineering. An independent non-profit could cover this shortcoming by providing software engineers (SWEs) as contractors, code reviewers, and mentors to academics working on AI safety. AI safety research is often well funded, but even grant-rich professors are bottlenecked by university salary rules and professor hours, which makes hiring competent SWEs at market rate challenging. An FTX Foundation-funded organization could get around these bottlenecks by independently vetting SWEs, offering industry-competitive salaries, and then having the hired SWEs collaborate with academic safety researchers at no cost to the lab. If successful, academic AI safety work ends up faster in terms of researcher hours and higher impact, because papers are accompanied by more legible and standardized code bases; i.e., AI safety work ends up looking more like Distill. The potential impact of this proposal could be estimated by soliciting input from researchers who moved from academic labs to private AI safety organizations.
EDIT: This seems to already exist at https://alignmentfund.org/
aogara @ 2022-03-06T23:43 (+3)
Really like the idea. Would be very interested in working on projects like this if anyone’s looking for collaborators.
CristinaSchmidtIbáñez @ 2022-03-06T17:27 (+7)
Leadership and management auditing
Effective Altruism
It is uncertain at what cost to employees' well-being EA organisations achieve impact. A sustainable ecosystem of EA organisations with long-term impact should be founded on evidence-based leadership and management that doesn't harm employees or volunteers (or at least tries to avoid doing so).
We'd love to see an organisation that evaluates the leadership and management practices of EA organisations and their effects on the well-being of employees at all levels, and that makes recommendations for improvement.
MaxRa @ 2022-03-12T16:51 (+2)
I really like this idea. My tentative impression is that
- management quality has low-hanging room for improvement at more than half of EA orgs
- management quality is very important
- you could probably find a non-EA consultancy that could understand EA culture, collect best practices, and support picking the most low-hanging fruit
Ricky @ 2022-03-06T07:50 (+7)
Establish a virtual EA co-working space in the metaverse or on another platform to allow EAs from every country to meet and create new ideas together.
Guillaume Corlouer @ 2022-03-05T20:48 (+7)
Making AI alignment research among the most lucrative career paths in the world
AI alignment
Having the most productive researchers work on AI alignment would increase our chances of developing competitive aligned models and agents. As of now, the most lucrative careers tend to be at top AI companies, which attract many bright graduate students and researchers. We want this to change, and to make AI alignment research the most attractive career choice for excellent junior and senior engineers and researchers. We are willing to fund AI alignment workers at wages higher than top AI companies' standards. For example, wages could start around $250k/year and grow with productivity and experience.
ryanbloom @ 2022-03-05T14:30 (+7)
A few people have mentioned retroactive public goods funding. I'd suggest broadening the scope a bit:
Better funding models for altruistic projects
Effective Altruism, Research That Will Help Us Improve
Market-oriented funding models are often a poor fit for altruistic projects due to the free-rider problem. On the other hand, traditional philanthropy is limited by available funding and uncertainty about how to best allocate it. Various mechanisms have been proposed to address these problems, including certificates of impact, mutual matching, quadratic funding, and others. We'd like to support work in this vein to (a) design new funding models, (b) evaluate them in small experiments, and/or (c) implement them at scale.
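As a toy illustration of one of the mechanisms named above: under quadratic funding (as proposed by Buterin, Hitzig, and Weyl), a project's total funding is the square of the sum of the square roots of individual contributions, with a matching pool covering the gap above the raw sum. This is only a sketch of the formula; the numbers are made up.

```python
import math

def quadratic_funding_total(contributions):
    """Total funding under quadratic funding: (sum of sqrt(c_i))^2."""
    return sum(math.sqrt(c) for c in contributions) ** 2

contribs = [1.0, 1.0, 1.0, 1.0]            # four $1 donors
total = quadratic_funding_total(contribs)  # 16.0
match = total - sum(contribs)              # 12.0 paid by the matching pool
print(total, match)
```

Note how broad support is rewarded: a single $4 donation would yield a total of only $4, while four $1 donations yield $16.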
Arran McCutcheon @ 2022-03-04T15:24 (+7)
Givewell for AI alignment
Artificial intelligence
When choosing where to donate to have the largest positive impact on AI alignment, the current best resource appears to be Larks' annual literature review and charity comparison on the EA/LW forums. Those posts are very high quality, but they're only published once a year and ultimately reflect the views of one person. A frequently updated donation recommendation resource, contributed to by various experts, would improve the volume and coordination of donations to AI alignment organisations and projects.
This is probably not the first time this idea has been suggested, but I haven't seen it explicitly mentioned within the current project ideas or commented suggestions. Refinement of idea #29.
Arran McCutcheon @ 2022-03-04T14:04 (+7)
Research scholarships / funding for self-study
Empowering exceptional people
The value of a full-time researcher in some of the most impactful cause areas has been estimated at between several hundred thousand and several million dollars per year, and research progress is now seen by most as the largest bottleneck to improving the odds of good outcomes in these areas. Widespread provision of scholarships and funding for self-study could enable far more potential researchers to gain the experience, knowledge, skills and qualifications needed to make important contributions. Depending on the average amount granted to applicants, even a hit rate of 5-10% (in terms of creating full-time researchers in high-impact cause areas) could be a good use of funds.
EA Funds and other orgs already do this to some extent; I'm envisaging a much wider program.
nadavb @ 2022-03-04T08:08 (+7)
Quantify the overall suffering from different conditions, and determine whether there's misallocation of resources in biomedical research.
I suspect there's a big gap between the distribution of resources allocated to the study of different diseases and what people actually suffer from the most. Among other factors that lead to non-optimal allocation, I'd guess that life-threatening diseases are overstudied, whereas conditions that seriously harm people's well-being but are not deadly are understudied. For example, I'd guess that chronic pain is understudied relative to how much suffering it inflicts on society.

It would be valuable to quantify the overall human suffering from different conditions and spot misallocations in biomedical research (and in other societal efforts to treat these conditions). For example, a random cohort of individuals could be surveyed about which conditions they would most want to get rid of, and how many life years they would be willing to sacrifice for it (either asking the maximum number of years they would give up for an operation guaranteed to reduce their life expectancy by that amount while solving the condition they suffer from, or asking the maximum probability of dying in such an operation that they would be willing to accept). Given the survey's results, it should be possible to quantify the overall suffering from different conditions, and then detect mismatches between these estimates and estimates of the resources (money and talent) allocated to addressing these problems.

It could also be interesting to address other potential reasons for mismatches between the social importance of conditions (in terms of overall well-being/suffering) and allocated resources, primarily tractability. For example, maybe condition A causes more suffering than condition B, but it's easier to make progress on B, so we should prioritize B more. This could be figured out by interviewing experts and asking them to estimate how many resources it would take to make a given amount of progress (such as cutting the prevalence of a disease in half). I imagine that similar studies could be carried out in other settings where we'd want to find out whether society's allocation of resources really reflects what people care about the most.
EdoArad @ 2022-04-24T12:53 (+2)
Related: Cochrane's series of papers on waste in science and Global Priorities Project's investigation into the cost-effectiveness of medical research
Denis Drescher @ 2022-03-04T00:40 (+7)
A think tank to develop proof of stake for international conflicts
Artificial Intelligence, Great Power Relations, Space Governance
International conflicts pose a risk already, and that’ll only get worse when AI arms races start among countries. Yet, establishing a central world government is hard and bears the risk that it may be taken over by a dictator.
Currently we’re running an algorithm that puts at stake the lives of millions of citizens, and where almost anyone can slash the stake of almost anyone else. Instead we could put a lot of wealth at stake and establish mechanisms for when the wealth of a nation gets slashed.
Various social services may be a starting point. Health care insurance, welfare, justice, and others don’t work very well in many countries. Various market mechanisms could be made available for people to use instead. People could pay regularly into an insurance pot that is at the same time lent out or invested to generate passive income for the firm and the insured person, and used to compensate anyone whom the insured person might harm. (This is inspired by Hanson’s proposal for a tort law reform.) The insurers can collaborate with firms that aggregate judgements. These firms request judgements on cases from random judges and are themselves insured against biasing their random selection.
For big decisions, such as whether to slash the stake of a large group of people like a nation, many judges are needed. That also has the advantage that the judges can be far apart, to the point that it may take years for their judgements to travel at lightspeed to reach the nation they’re judging. Meanwhile the state of the nation can fork, which causes linearly more overhead in the number of forks, as people need to do their transactions multiple times. But gradually more and more judgements will come in and may resolve the uncertainty even before all judgements are in. That way it would scale better than a central government across all worlds.
I’m unfortunately pessimistic that this can be established in time to prevent AI arms races. But maybe it’ll turn out that we have more than a few decades left after all.
Peter S. Park @ 2022-03-03T18:12 (+7)
Research into reducing general info-hazards
Biorisk
Researching and disseminating knowledge on how to reduce info-hazards in general could potentially be very impactful. An ambitious goal would be to have an info-hazard section in the training of journal editors, department chairs, and biotech CEOs in relevant scientific fields (although perhaps such training would itself be an info-hazard!).
tessa @ 2022-03-07T13:01 (+5)
yeah, to expand upon this:
Best practices for assessment and management of dual-use infohazards
Biorisk and Recovery from Catastrophe, Values and Reflective Processes
Lots of important and well-intended research, including research into AI alignment and pandemic prevention, generates information that may be hazardous if misused. We would like to better understand how to assess and manage these hazards, and would be interested in funding expert elicitation studies and other empirical work on estimating information risks. We would also be interested in funding work to make organizations, including research labs, publishers and grantmakers, better equipped to handle dual-use research, by offering training and incentives to follow certain best practices.
Peter S. Park @ 2022-03-03T15:50 (+7)
Reducing vaccine hesitancy
Biorisk
Even if we develop vaccines for pandemic pathogens extremely quickly, vaccine hesitancy can limit their impact. Research and efforts to reduce vaccine hesitancy in general could potentially be high-impact.
Greg_Colbourn @ 2022-03-03T12:58 (+7)
Re: Expert polling for everything (already listed on ftxfuturefund.org/projects)
Some questions that I think it would be very valuable to get the answers for:
1. Year with 10% chance of AGI?
2. P(doom|AGI in that year)?
3. What would it take for you to work on AGI Alignment ($ amount, other)?
1 & 2 because I think that, for AGI x-risk timelines, 10% chance (by year X) estimates should be the headline, not 50%.
And 3 should be asked specifically to the topmost intelligent/qualified/capable people in the world, as an initial investigation into this project idea.
PhilC @ 2022-03-02T19:58 (+7)
Research to solve global coordination problems
Epistemic Institutions, Values and Reflective Processes
In Meditations on Moloch, Scott Alexander argues that a number of humanity's major problems (corruption, environmental extraction, arms races, existential risks from emerging technologies, etc.) occur because agents are unable to coordinate for a positive global outcome. Our current major coordination mechanisms of free markets, international institutions and democracy are inadequate to solve this problem. Research is needed to design better coordination mechanisms.
Note:
- I contend that current market solutions, extensions of market solutions and governance solutions don't solve this problem adequately. I may write more about this.
LRudL @ 2022-03-02T15:59 (+7)
New academic publishing system
Research that will help us improve, Epistemic Institutions, Empowering Exceptional People
It is well known that the incentive structure of academic publishing is broken. Changing publish-or-perish incentives is hard. However, one particularly broken feature is that some journals operate on a model where they rent out their prestige to both authors (who pay to have their work accepted) and readers (who pay to read), extracting money from both while providing little value beyond their brand. This seems like a situation that could be disrupted, though probably not by competing directly on prestige with the big journals. Alternatives might look like something simple, such as expanding free preprint services like arXiv and bioRxiv to every field, or something more complicated, such as providing high-quality help and services for paper authors to incentivize them to submit to the new system. If established, a popular and prestigious academic publishing system would also be a good platform from which to push other academia-related changes (especially incentivizing the right kinds of research).
Peter S. Park @ 2022-03-02T03:19 (+7)
Research on solving wicked problems
Economic growth, Values and Reflective Processes
It seems that many (almost all?) of the outstanding problems we effective altruists wish to solve are wicked problems. A better general understanding of how wicked problems can be solved could be very impactful. This could be pursued by establishing relevant fellowships, grants, and collaboration opportunities to facilitate research on the topic.
Nathan Young @ 2022-03-02T02:29 (+7)
EA follower bounties
EA community building
Offer a fixed rate per subscriber to EA accounts on different platforms. Ask forum users to list all the accounts above a certain size that they think post quality EA content, and remunerate them all at the same per-platform rate. Alternatively, only pay mid-sized accounts, accounts that aren't already paid, or accounts on platforms where we would like more coverage.
Denis Drescher @ 2022-03-01T20:14 (+7)
Regulatory markets for AI safety
Artificial Intelligence
A political think tank to refine regulatory markets for AI safety and push for them in as many countries as possible. Jack Clark and Gillian K. Hadfield: “We propose a new model for regulation to achieve AI safety: global regulatory markets. We first sketch the model in general terms and provide an overview of the costs and benefits of this approach. We then demonstrate how the model might work in practice: responding to the risk of adversarial attacks on AI models employed in commercial drones.”
It is probably hard and slow to establish such markets worldwide. This and similar proposals also focus on safety regulation right before deployment. But that assumes that all development and testing systems are perfectly sandboxed, so that an AGI cannot break out at any stage other than intentional deployment. That seems unwarranted to me. So this regulation would have to go much deeper than testing the product; it would have to test the safety of the development process too.
For starters, someone could be contracted to conduct a historical analysis of how long it took for similar forms of regulation to take hold.
Of all the proposals for certification or safety consulting for AI safety, this one seems the most promising to me (but that’s not a high bar). I would feel mildly safer with something like this in place.
RyanCarey @ 2022-03-05T00:27 (+5)
It would also be good to offer whistleblower bounties for AI safety and biosafety!
Zac Townsend @ 2022-03-01T12:03 (+7)
(Per Nick's post, reposting)
Large-scale randomized controlled trials
Values and Reflective Processes; Epistemic institutions; Economic Growth
RCTs are the gold standard in social science research but are frequently too expensive for most researchers to run, particularly in the United States. We are interested in large-scale funding of RCTs that are usually impossible due to a lack of funding.
Jackson Wagner @ 2022-03-01T22:26 (+2)
A higher-leverage way to do this might be to lobby for reforms making it easier to gather "Phase 4" data on therapies already in use. Or reform the FDA in one of various other ways, for instance so that it gives provisional approval to therapies that have merely been shown to be safe but not necessarily effective. Or code up some kind of platform that makes it easier for small organizations to run large trials, by doing things like mailing people supplements and Fitbit-like devices without having to jump through a bunch of formidable bureaucratic hoops.
Zac Townsend @ 2022-03-02T01:30 (+2)
I think this is all correct! By the way, I was mostly thinking of RCTs in the social sciences -- like randomized school vouchers or the Perry Preschool Experiment -- but it's equally true in the FDA/medical context.
Zac Townsend @ 2022-03-01T11:59 (+7)
(Per Nick's note, reposting)
Development of cross-disciplinary talent
Economic Growth, Values and Reflective Processes, Empowering Exceptional People
The NIH successfully funded the creation of interdisciplinary graduate programs in, for example, computational biology, as well as Ph.D./M.D. programs. Increasingly, expertise confined to any one artificially constructed discipline cannot solve our most pressing problems. We are interested in funding the development of individuals fluent in two or more fields, particularly people with expertise in technology and social or economic issues. Universities have computer science + math or computer science + biology degrees, but we are interested in cultivating talent at the intersection of any disciplines that can affect our long-term future, with a particular emphasis on non-university contexts.
Chris Leong @ 2022-03-09T04:30 (+6)
Situational Analysis Agency
Epistemics
When events of great global importance occur, they often have a bearing on EA projects, and sometimes EAs will want to respond. Take, for example, the invasion of Ukraine, the coronavirus pandemic, and supply-chain disruptions. At the moment, most investigation of these issues is conducted on the side by EAs who are busy with other projects. It would be great to have researchers available to investigate such issues on short notice, so that we are better able to navigate these situations.
DonyChristie @ 2022-03-08T05:47 (+6)
Research into Goodhart’s Law
Artificial Intelligence, Epistemic Institutions, Values and Reflective Processes, Economic Growth, Space Governance, Effective Altruism, Research That Can Help Us Improve
Goodhart’s Law states: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes”, or more simply, "When a measure becomes a target, it ceases to be a good measure.”
The problem of ‘Goodharting’ seems to crop up in many relevant places, including alignment of artificial intelligence and the social and economic coordination of individuals and institutions. More research into this property of reality and how to mitigate it may be fruitful for structuring the processes that are reordering the world.
barkbellowroar @ 2022-03-08T05:35 (+6)
Publish an EA-inspired magazine like Time Magazine's "Time for Kids" (TFK)
Empowering Exceptional People, Values and Reflective Processes, Effective Altruism
Time for Kids has almost 2 million subscribers and has been used by educators for over 25 years to introduce elementary students to issues in science, history and civic engagement, while empowering students to take action and have a positive impact on the world. An EA-oriented magazine could do something similar by introducing students to topics like current pressing issues, relevant career pathways and the skills that are highly needed to address global problems. Additionally, this could include developing an evergreen website to accompany the magazine, which can produce content providing support for educators and parents by covering topics like how to better incorporate EA topics into lessons and how to advise students wanting to pursue high-impact careers.
Hanna Pálya @ 2022-03-07T22:51 (+6)
Accident reporting in biology research labs
Biorisk and Recovery from Catastrophe
Currently, accident reporting is framed as an unpleasant and largely unimportant chore, even though there’s evidence of lab leaks causing massive harm. Encouraging research groups to report their accidents quickly and thoroughly could therefore be very impactful. A reporting system could be built in a variety of ways; researching the options would itself be a good thing to fund. One potential system would be insurance policies that require efficient and honest documentation. A different approach would be to fund a safety person in each lab or research group whose responsibility is to ensure equipment maintenance, good risk assessment, and accident reporting.
Alex D @ 2022-03-09T21:05 (+2)
To some degree these already exist (e.g., here's a description of Canada's system), but I'm certain they could be drastically expanded, standardized, synthesized, and otherwise improved.
Ernst Stueckelberg @ 2022-03-07T16:29 (+6)
Just looked at the website and the following probably fits under talent-search / innovative educational experiments. Apologies for the formatting (this is from a private doc of ideas some time ago, and I currently don't have the time to reformat it / I'm also travelling with spotty internet).
Project 1:
Title:
Longtermist movement building via "cash transfers" (i.e. grants/fellowships) to talented (high-school) students (from developing countries) to support them to work on the world's most pressing problems.
Idea:
Identify talented (e.g. top 0.01%) high school students (in developing countries) who demonstrate intelligence, altruism, ambition, etc.
Invite them to programs where they can learn about deep ideas (of which EA-aligned content is a non-trivial subset, but not everything) through project-based learning. Provide them with a small exploratory grant to see what they do / make them more ambitious (cf. Emergent Ventures: maximizing ambition per dollar).
After successively larger grants that are dependent on student performance and their requirements (e.g. $1K, $2K, $10K, $25K), you have data on their performance, as well as having successively raised their ambition. Offer exceptional people $100K unrestricted for 2 years (cf. the Thiel Fellowship: go build or do something interesting with no strings attached) from the age of 17+, with mentorship, internships and support from people and orgs in EA-aligned spheres (e.g. orgs on 80K's job board).
After the 2-year fellowship, evaluate their performance. Offer the most exceptional ones Bay-Area salaries/jobs and support (e.g. EA networks/friends) to pursue their most ambitious vision (their ambition having been increased by previous Emergent Ventures-style grants/networks).
How this could be funded in the long-run:
If necessary, a potential funding mechanism could be an ISA based on student earnings where legally permissible (potentially with an impact-certificates/retroactive-funding component) to ensure the program is well incentivized to focus on impact rather than optimizing for students making the most money. Even without the impact certificates element, this would probably be better incentivized than the current model (e.g. universities).
Why this may be exciting:
Extremely talented students are being supported to work on the world's most pressing problems (e.g. longtermist-focused/ x-risk reduction) without having to struggle for several years as they receive guidance (e.g. mentorship/internships). A potential win for longtermist causes.
The students selected may come from countries where average income is, e.g., < $10K/year. If they're earning >$100K/year, this is a substantial cash transfer that they could decide to send back home or use to set up initiatives supporting individuals in their native countries. A potential win for near-termist causes (global health & development, increasing economic growth, etc.).
Word then gets back to the students' home countries about them entering these programs and becoming highly successful, which incentivizes students back home to apply; this helps organically grow the movement and eventually leads to EA-aligned communities forming in those countries (longtermist movement building in developing countries).
Next steps:
Speak to people about this at Feb 2020 Effective Giving Oxford party [done]
Project 2:
Title:
Identifying EA scriptwriters/substack authors through longtermist All Souls examinations.
PeterSlattery @ 2022-03-09T05:02 (+2)
I like this! I had a similar idea about curating exceptional people in Third World countries and connecting them to training, resources and networks so that they could create marketplaces that would help enrich their home countries by creating employment and reducing poverty/inequality.
Asa Cooper Stickland @ 2022-03-07T13:31 (+6)
AI Safety Academic Conference
Technical AI Safety
The idea is to fund and provide logistical/admin support for a reasonably large AI safety conference along the lines of NeurIPS etc. Academic conferences provide several benefits: 1) potentially increasing the prestige of an area and boosting the career capital of people who get papers accepted, 2) networking and sharing ideas, and 3) providing feedback on submitted papers and highlighting important/useful ones. This conference would be unusual in that the work submitted shares approximately the same concrete goal (avoiding risks from powerful AI). While traditional conferences might focus on scientific novelty and complicated/"cool" papers, this conference could place a particular focus on things like reproducibility and correctness of empirical results, peer support and mentorship, non-traditional research mediums (e.g. blog posts/notebooks), and encouraging authors to have a plausible story for why their work actually reduces risks from AI.
MaxRa @ 2022-03-12T16:17 (+9)
As Gavin mentioned somewhere here, one significant downside would be to silo AI Safety work from the broader AI community.
Alexander_Zatko @ 2022-03-07T13:28 (+6)
Promote ways that suppress status seeking
Great Power Relations, Economic Growth
Status seeking is associated with massive economic inefficiencies (waste production, economic inequality, etc.). The zero-sum nature of status seeking also takes a toll on individual well-being and, consequently, contributes to suboptimal ways in which societies function.
In the political domain, status seeking can lead to wars (as recent developments illustrate).
The EA community should invest in institutions, research and solutions that divert us from status seeking.
Taras Morozov @ 2022-03-07T07:19 (+6)
Research on raising the sanity waterline
It seems that teaching the general public rationality tools may cause more polarisation, because many of these ideas seem to be used primarily for argument-winning instead of truth-seeking.
There is a risk that teaching rationality will make some people less rational. For example, Eliezer Yudkowsky wrote an article, Knowing About Biases Can Hurt People.
Scott Alexander uses the term symmetric and asymmetric weapons for a similar idea: some thinking tools are more useful for winning arguments than for truth-seeking.
Therefore, there is a need for research into improving the general public's rationality without causing harm and further polarisation.
Yoav_Ravid @ 2022-03-07T05:17 (+6)
Assessment companies
Epistemic Institutions, Empowering Exceptional People
Most certification processes (e.g. schools and universities) require going through their teaching process in order to be certified. Due to bad incentives or central planning, their tests are also often poor at accurately assessing the skills of the assessed. This creates a situation where most certifications (e.g. high school diplomas and degrees) aren't as credible and reliable as they should be, and yet people have to go through years of studying to get them, because there are no other certification processes they can go through to make their skills legible. Assessment companies would fill this gap by separating teaching from assessment. They would design their own tests and provide their own certifications, offered directly to people who want to make their skills legible to others (e.g. employers). By competing on designing good tests and offering credible and reliable certifications, these companies would allow people to study however they like and then take the test; they would give teaching institutions more freedom, as those institutions no longer have to worry about certifying their students; and they would help employers hire the right people (especially those they would currently miss).
(Note: I've seen this idea mentioned by some people on LW, but I haven't seen anyone expand on it. I'm currently working on an essay that expands on this idea.)
Charles He @ 2022-03-07T05:14 (+6)
Much better narratives of the future and understanding of “Utopia”
Many efforts to discuss “utopia” are unproductive, and the word itself is often disliked, despite most people caring deeply about the future and how it is shaped. Improving how we communicate about the future matters for practical reasons, like improving public understanding of longtermist projects. Moreover, limited understanding of preferences over even the medium-term future could unduly influence current work and limit progress toward better outcomes more broadly. This project includes researching and designing visions of the future and communicating them better, as well as understanding the limitations of, and important considerations for, the project itself.
Note that FTX is interested in funding multiple independent projects in this category, which could work either collaboratively or orthogonally.
Charles He @ 2022-03-07T05:18 (+2)
Credit to this existing content and also this existing content for the idea.
Chris Leong @ 2022-03-07T03:58 (+6)
Training Course for Professional AI Ethicists on Longterm Impacts
Artificial Intelligence
Most AI Ethicists focus on the short-term impacts of AI rather than the longer term impacts. Many might be interested in a free professional development course covering this topic. Such a course should cover a variety of perspectives, including that of prominent AI Safety skeptics.
MaxRa @ 2022-03-14T02:45 (+4)
Cool idea. I have some worry that a majority of AI ethicists have sufficiently bad epistemics (something like fairly strong views, a relatively weak understanding of the world outside their discipline, and little skill at honestly/curiously/patiently exploring disagreements) that this would end up being regrettable. Would be interested in updates here.
Maybe it's similar to the [bioethicists issue](https://forum.effectivealtruism.org/posts/JwDfKNnmrAcmxtAfJ/the-bioethicists-are-mostly-alright), and maybe I got my impression only from public discussions and a selection of the weakest ideas?
noahchonlee @ 2022-03-07T03:00 (+6)
Bountied Rationality Website
Effective Altruism
Oftentimes great ideas fail to find funding through a grant because those who come up with a great proposal are not the right people to complete it. An inducement prize platform separates those who come up with ideas (proposers) from those who complete them (bounty hunters), thereby allowing the best ideas to be elevated on the quality of the idea itself. It also makes it easier to find others working on the same project, because there can be a "competitors and collaborators" tab showing who else is working on it. Finally, this opens up ways for people around the world to accomplish useful tasks and be paid for bounties such as coding tasks, particularly in lower-cost-of-living countries.
For those familiar with Xprize funded by Elon Musk and Richard Branson and others, the idea is essentially to create a democratized version of Xprize.
I propose a public list of bountied projects and tasks to incentivize public works, streamline networking, and to decentralize funding. Feel free to comment to ask more or see the following outline: https://docs.google.com/document/d/17h_PtFoRE-W7mRtVZOAyRinAFv542O-kR22VBXxGC6c/edit?usp=sharing
PeterSlattery @ 2022-03-07T02:51 (+6)
Better understanding social movements
Movement building & Conceptual dissemination
People involved with social movements are important collaborators for the EA movement. However, there is relatively little high quality survey work to understand how these groups differ and overlap. We would therefore like to fund research to regularly survey members of social movements to better understand them. For instance, this could involve understanding i) aggregations of behaviours and attitudes (e.g., what different identities, demographics/geographies/groups do/think about key issues), ii) awareness of internal and unobservable behavioural drivers and barriers (e.g., whether people in other movements fail to act as we desire because they are unaware, unable, or unmotivated, and why they justify this), iii) forecasts for future behaviour (e.g., whether people in other movements expect to think or act more or less in line with our desires in the future), iv) audience targeting (e.g., which movement we should target particular outreach to for best effect) and v) intervention tailoring (e.g., what to say to whom to get the best outcomes). Ideally, we could compare the results from these surveys against a similar EA sample (maybe the EA survey) and track divergence and convergence over time.
Will Kirkpatrick @ 2022-03-07T01:08 (+6)
Cotton Bot
Economic growth
Problem: In 2021, a mere 30% of the world’s cotton harvest was gathered by machinery. This means that over 60% of the 2021 worldwide supply of cotton was harvested using the same methods as American slaves used in the 1850s. A significant amount of the hand harvesting involves forced labor.
Solution: The integration of existing technologies can provide a modular, robust, swarming team of small-scale, low-cost harvesters. Thoughtful system design will ensure the harvesters are simple to operate and maintain while still containing leading-edge technical capability.
How to: The project is focused on developing a single-row robotic harvester that meets key performance parameters and system attributes allowing operation in the most technologically remote areas of the world with little or no logistics tail. The single-row harvesters can intuitively communicate to swarm-harvest in teams of two to two hundred independent systems.
Background: My father has been the REDACTED for a few years now. We have been talking for years about how much cotton gets wasted in the field near our house, and this grant strikes me as a perfect opportunity to see if a prototype could be built.
Pluses:
1. He is not an EA (though he is adjacent, mostly from my prodding), so it's an opportunity to drag a non-EA to work on our projects.
2. He has no desire to develop the business after making the prototype and proving the use case, so the patent would come back to the FTX Future Fund as an investor.
3. He has a lot of experience doing exactly this, so he will most likely be able to execute.
Cons:
1. It's expensive because he intends to hire employees to work on it full time.
2. He isn't an EA, so he may not perfectly represent EA interests in this (somewhat mitigated because I will also be working on it).
3. He has no desire to develop the business after making the prototype, so we'll have to have someone else do that (or give away the tech for free).
His name is REDACTED, and he works at the REDACTED in case anyone wants to look him up!
Denis Drescher @ 2022-03-07T13:51 (+7)
A significant amount of the hand harvesting includes forced labor.
I think this is key. If most of the harvest is not forced labor, then the cotton bot may just take away the least terrible employment opportunity from these people, leaving them to fall back on something more terrible. Then again, maybe it can be marketed specifically to the places that use forced labor.
Will Kirkpatrick @ 2022-03-07T01:09 (+1)
I also filled out the form, so apologies if this is a double entry!
Peter S. Park @ 2022-03-06T20:53 (+6)
Increase the number of STEM-trained people, in EA and in general
Economic growth, Research that can help us improve
Research and efforts to increase the number of quantitatively skilled people in general, and targeted EA movement-building efforts aimed at them (e.g., for AI alignment research, biorisk research, and scientific research in general), could potentially be very impactful. Incentivizing STEM education at the school and university levels, facilitating immigration of STEM degree holders, and offering STEM-specific guidance via 80,000 Hours and other organizations could also be very impactful.
agnode @ 2022-03-06T18:09 (+6)
New non-academic intellectual communities
Empowering exceptional people, Values and reflective Processes
The pathologies of academia are well known, and there are many people who would like to engage with and contribute to research but, once outside of academia, lack the structures to do so. Recently some new projects have sprung up to fill this gap, such as:
- InterIntellect, where people can host and take part in online salons on any topic. Its founder (Anna Gat) was supported by an Emergent Ventures grant.
- The Catherine Project, which offers free Oxbridge-style tutorials and reading groups on classic works of philosophy and literature.
- The Stoa - I'm less familiar with this one, but I think it's a bit like InterIntellect.
The Future Fund could support more communities of this kind, and particularly help them develop beyond just learning and discussing towards enabling members to make real research contributions. It would be important not to have all the funded communities be part of EA, as it is valuable to have many types of community coming from different intellectual perspectives.
Additional notes:
- My bias here is that I really want more of these so I can be part of them and develop my research career outside of academia.
Jonathan Nankivell @ 2022-03-06T13:43 (+6)
Self-Improving Healthcare
Biorisk and Recovery from Catastrophe, Epistemic Institutions, Economic Growth
Our healthcare systems aren't perfect. One underdiscussed aspect is that we learn almost nothing from the vast majority of treatment that happens. I'd love to see systems that learn from the day-to-day process of treating patients, systems that use automatic feedback loops and crowd wisdom to detect and correct mistakes, and that identify, test and incorporate new treatments. It should be possible to do this. Below is my suggestion.
I suggest we allocate treatments to patients in a specific way: the probability that we allocate a treatment to a patient should match the probability that that treatment is the best treatment for that patient. This creates an RCT among similar patients, which we can use to update the probabilities that we use for allocation. Then repeat. This maximises the number of patients given the best treatment over the medium to long term, because it detects and corrects mistakes, and cautiously tests novel treatments and then, if warranted, rolls them out to the wider population.
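This allocation rule is essentially probability matching, better known as Thompson sampling. Here's a minimal sketch, assuming binary outcomes (treatment worked or didn't) and a uniform Beta prior per treatment; all names are illustrative:

```python
import random

class Treatment:
    def __init__(self, name):
        self.name = name
        self.successes = 0  # observed good outcomes so far
        self.failures = 0   # observed bad outcomes so far

def allocate(treatments):
    """Draw from each treatment's Beta posterior and pick the best draw.
    This selects each treatment with probability equal to the current
    probability that it is the best one (Thompson sampling)."""
    draws = [(random.betavariate(t.successes + 1, t.failures + 1), t)
             for t in treatments]
    return max(draws, key=lambda d: d[0])[1]

def record_outcome(treatment, success):
    # Feed the observed outcome back in, updating the posterior.
    if success:
        treatment.successes += 1
    else:
        treatment.failures += 1
```

In a real healthcare setting the outcome model would be far richer than a single success/failure bit, but the feedback loop would have the same shape: allocate, observe, update, repeat.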
This idea is still in its early stages. More detailed thoughts (such as where the probabilities come from) can be found here. If you have any thoughts or feedback, please get in touch.
Chris Leong @ 2022-03-06T08:55 (+6)
Responsible AI Incubator
AI Safety
Creating an incubator to encourage new startups to invest in the responsible use of AI (including longer-term safety issues) by making this a requirement of investment. In addition to influencing companies, this could enhance the credibility of the field and help more AI safety researchers to become established.
Downsides: This could accelerate AI timelines, but the fund would only have to offer slightly better terms in order to entice startups to join.
Ricky @ 2022-03-06T07:56 (+6)
Create a suite of online and in-person EA qualifications to help attract new people into the movement and upskill existing members.
The suite of online qualifications could follow a similar model to Khan Academy: short, interactive courses led by gifted teachers and delivered online. These courses would cover foundational EA materials.
EA could also partner with universities to deliver formal courses in areas such as existential risk or AI safety.
PeterSlattery @ 2022-03-08T03:54 (+2)
I had a similar idea here.
Denis Drescher @ 2022-03-05T20:50 (+6)
Research into increasing the “surface area” of important problems
Artificial Intelligence, Biorisk and Recovery from Catastrophe, Epistemic Institutions, Values and Reflective Processes, Economic Growth, Great Power Relations, Space Governance, Effective Altruism
The idea here is that 80,000 Hours seems to follow an approach along the lines of (1) What are the biggest problems? (2) What are the obvious ways to make progress on these problems? (3) How can we get people to implement these obvious ways?
If we hold the first question constant, we can instead ask: (2) What are the skillsets of the people interested in solving these problems? (3) How can people with those skillsets make progress on these problems?
This way we might find that (say) there are many cultural anthropologists who want to avert risks from AI. So how can a cultural anthropologist specialize or make an easy career change to contribute to AI safety? That is the hard question that will take a lot of research to answer. But if, like in the made-up example, there are enough cultural anthropologists like that, the research may be worth it, even if the work of each cultural anthropologist may be less impactful than that of a machine learning specialist.
This example is about increasing the surface area that can be used by people, but one might also increase the surface area that can be used by entrepreneurs or funders, e.g., by finding creative ways in which foundations bound by restrictive by-laws can still contribute to AI safety – maybe they can’t donate to MIRI, but they can fund a conference on automated theorem provers in Haskell that is useful to MIRI for recruiting.
Denis Drescher @ 2022-03-05T14:22 (+6)
A project to investigate and prioritize project proposals such as all of these
Research That Can Help Us Improve, Effective Altruism, Empowering Exceptional People
Even long lists of project proposals like this one can miss important projects. The proposals (including my own) are also rarely concrete enough to gauge their importance or tractability.
Charity entrepreneurs are currently mostly on their own when it comes to prioritizing between project proposals and making them more concrete. There may be great benefits to specialization and economies of scale here that a dedicated organization could realize:
- Charity entrepreneurs are currently more likely to succeed if they are in the intersection of the sets of all people who are (1) excellent at running and scaling a charity, and (2) sufficiently broadly knowledgeable and impartial to recognize the best proposals from a very wide range of proposals. If they could draw on a separate organization (whose staff don’t all need to be excellent at running and scaling charities) to take care of the second problem, many more of the entrepreneurs from the first set could succeed.
- A separate organization could categorize project proposals by the additional nonentrepreneurial aptitudes that are needed to realize them. That would make the number of projects that entrepreneurs have to weigh more manageable. Conversely, the organization could also match entrepreneurs with complementary aptitudes who might not otherwise have met, and thus widen the set of suitable projects if they are few.
- A separate organization could be networked with existing organizations like 80,000 Hours and Impact CoLab and form an efficient funnel for prospective entrepreneurs.
- A separate organization could also be networked with funders and advisors who are interested in particular project proposals.
- A separate organization could, over time, build expertise in efficiently drafting business plans and prioritizing them far beyond what any individual entrepreneur might achieve.
I’ve considered starting such a project, but I’ve currently prioritized impact markets higher.
Peter S. Park @ 2022-03-04T16:42 (+6)
A fast and widely used global database of pandemic prevention data
Biorisk
Speed is of the essence in pandemic prevention when a new pathogen emerges. A fast and widely used global database could therefore be very impactful. Ideally, events like the early discovery of potential pandemic pathogens or doctors' diagnoses of potential pandemic symptoms would regularly and automatically get uploaded to the database, and high-frequency algorithms could use it to predict potential pandemic outbreaks faster than people can.
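As a rough sketch of what one record in such a database might look like (every field name here is an illustrative assumption, not part of the proposal):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PathogenEvent:
    reported_at: datetime     # when the observation was made
    location: str             # e.g. country code plus region
    event_type: str           # e.g. "wastewater_hit" or "clinical_diagnosis"
    pathogen_signature: str   # sequence fragment or symptom code
    confidence: float         # reporter's confidence, 0.0 to 1.0

def recent_events(events, window_days=14):
    """Toy query of the kind automated outbreak-detection code would run."""
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    return [e for e in events if e.reported_at >= cutoff]
```

The hard parts of the proposal are institutional (getting labs and clinics to report automatically), not technical, but a standardized schema along these lines is what would let detection algorithms run across sources.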
marsxr @ 2022-03-04T16:10 (+6)
One Device Per Human
Similar to: https://en.wikipedia.org/wiki/One_Laptop_per_Child
Allowing people from all over the world to vote on global issues.
(This is assuming we have global governance.)
Taras Morozov @ 2022-03-04T14:41 (+6)
Create an independent organization working along the lines of Implementation Support Unit of Biological Weapons Convention
Biorisk and Recovery from Catastrophe
The Biological Weapons Convention, which forbids the development of biological weapons, was signed in 1972 by most countries. But its implementation is supported only by the Implementation Support Unit (BWC ISU), with a budget in the range of $1-2m and roughly four employees. At the same time, it seems there is a fair probability that Russia has an active biological weapons development program.
Create an organization that will do open-source intelligence gathering, analysis and publication on the bioweapon programs of Russia and others, including terrorist groups. Possibly also include monitoring dangerous biological research in academia, like gain-of-function work, work with smallpox, etc.
tessa @ 2022-03-03T17:13 (+6)
Reducing risks from laboratory accidents
Biorisk and Recovery from Catastrophe
Some life sciences research, such as gain-of-function work with potential pandemic pathogens, poses serious risks even in the absence of bad actors. What if we could eliminate biological risks from laboratory accidents? We'd like to see work to reduce the likelihood of accidents, such as empirical biosafety research and human factors analysis on laboratory equipment. We'd also like to see work that reduces the severity of accidents, such as warning systems to inform scientists if a pathogen has not been successfully deactivated and user-friendly lab strains that incorporate modern biocontainment methods.
will_c @ 2022-03-03T14:57 (+6)
Replacing Institutional Review Boards with Strict Liability
Biorisk, Epistemic Institutions, Values and Reflective Process
Institutional Review Boards (IRBs) regulate biomedical and social science research. As a result of their risk-averse nature, important biomedical research is slowed or deterred entirely; e.g., the UK human challenge trial was delayed by several months by a protracted ethics review process, and an enrollment delay in a thrombolytics trial cost thousands of lives. In the US, a plausible challenge to IRB legality can be mounted on First Amendment grounds. We would be interested in funding a civil rights challenge to IRB legality, with the eventual goal of FDA guidance on control groups and strict liability replacing IRBs as the means of research regulation. This would have substantial overlap with our project idea of rapid countermeasure development for new pathogens.
gunnar_v @ 2022-03-02T21:22 (+6)
Calculating the cost-effectiveness of research into foundational moral questions
Research That Can Help Us Improve
All actions aiming at improving the world are either implicitly or explicitly founded on a moral theory. However, there are many conflicting moral theories and little consensus regarding which theory, if any, can be considered the correct one (this issue is also known as Moral Uncertainty). Further adding to the confusion are issues such as whom to include as moral agents (animals? AIs?) and Moral Cluelessness. These issues make it extremely difficult to know whether our actions are actually improving the world.
Our foundation's goal is to improve humanity's long-term prospects. Therefore, it is potentially worthwhile to spend significant resources researching foundational issues such as reducing or eliminating Moral Uncertainty and Moral Cluelessness. However, it is currently unclear how cost-effective funding such research would be.
We are interested in projects that aim at calculating the cost-effectiveness of research into such foundational moral questions. We are also interested in smaller projects that aim to find solutions to parts of these cost-effectiveness equations, such as the scale of a foundational issue and the tractability of researching it. One concrete project could be calculating the expected value that is lost from not knowing which moral theory is correct, or equivalently, the expected value of information gained by learning which moral theory is correct.
Peter S. Park @ 2022-03-02T19:55 (+6)
Reducing amount of time productive people spend doing paperwork
Economic Growth, Research That Can Help Us Improve
One example is productive researchers working in high-impact fields who are forced to write copious paperwork for grants. Another is filing taxes. Funding various approaches to reducing this problem could potentially be impactful, such as research on optimally streamlining grant decision processes; nonprofits, volunteers, or crowdsourced advice for helping fill out paperwork like taxes; and improving pipelines for lab managers and personal assistants who support high-productivity researchers.
Peter S. Park @ 2022-03-02T17:26 (+6)
Develop organizations like the Institute for Advanced Study, but for longtermism
Effective altruism
The Global Priorities Institute in the UK is one example. It could be very impactful to develop similar research organizations in other locations, such as the US and the EU. (Perhaps they exist already and I just don't know about them!)
Addendum: Even GPI could be more interdisciplinary, like the IAS, e.g., by branching out beyond economics and philosophy.
Peter S. Park @ 2022-03-02T17:00 (+6)
A public longtermism pledge/petition
Effective altruism
One way to increase the solidarity of EAs and longtermists, and to increase the gravitas associated with longtermism, is to have a public pledge or petition that people can sign. Public intellectuals, academic faculty, and prestigious individuals could be recruited to sign and publicly highlighted if they agree. This would help longtermism become a social norm. The potential impact is suggested, for example, by how substantially many public intellectuals' current comments underweight the risks of nuclear war arising from the Russia-Ukraine war.
Peter S. Park @ 2022-03-02T16:35 (+6)
Targeting movement-building efforts at top universities' administration and admissions
Effective altruism
Currently, the admissions officers of top (say, US) universities select and recruit high-potential students (modulo things like Harvard's Z list), and EA then uses targeted efforts to persuade and help these high-potential students go into high-impact careers. Yet most graduates of top universities still do not do so, and a significant proportion go into zero-sum or negative-sum careers due to sticky social norms.
One solution may be to target movement-building efforts for EA and longtermism at the admissions officers of top universities: specifically, persuading admissions officers to prefer, more than the status quo does, high-potential students who are likely to explicitly maximize their career impact. Many of these selected students would then be naturally interested in EA/longtermism.
A complementary solution would be to do the same for universities' administrations, to prevent a conflict of interest or strategy between the administration and the admissions office. A university administration receptive to longtermism could be extremely impactful in its own right, for example by movement-building longtermism among the university's students and faculty and by allocating resources to high-impact research and teaching.
LRudL @ 2022-03-02T15:11 (+6)
Prosocial social platforms
Epistemic institutions, movement-building, economic growth
The existing set of social media platforms is not particularly diverse, and existing platforms also often create negative externalities: reducing productive work hours, plausibly lowering epistemic standards, and increasing signalling/credentialism (by making easily legible credentials more important, and in some cases reducing the dimensionality of competition, e.g. LinkedIn reducing people to their most recent jobs and place of study, again making the competition for credentials in those things harsher). An enormous amount of value is locked away because valuable connections between people don't happen.
It might be very high-value to search through the set of possible social platforms and try to find ones that (1) make it easy to find valuable connections (hiring, co-founders, EA-aligned people, etc.) and trust in the people found through that process, (2) provide incentives to help other people and do useful things, and (3) de-emphasize unhealthy credentialism.
Clement Brenot @ 2022-03-02T04:24 (+6)
Extinction-level events outside of biorisk and nuclear catastrophes
Biorisk and Recovery from Catastrophe
In order to prepare for worst-case catastrophes, we need to anticipate them. Biological weapons and nuclear catastrophes are two well-identified threats to humanity's long-term survival, as is climate change. However, there may be emerging risks that are yet to be addressed by policymakers or the EA community.
We'd be interested in convincing work highlighting credible, large-scale risks that are overlooked by most forecasters and the EA community, as well as any applicable recovery strategies.
Greg_Colbourn @ 2022-03-03T11:04 (+6)
You've neglected to mention AI! Arguably this is considered the biggest x-risk by the EA community (see The Precipice). Summary table from the book [highlighted by me]:
I'll also note that in a couple of decades of serious research, no new x-risks have been identified. But of course it is still worth remaining vigilant for new threats.
agnode @ 2022-03-01T22:25 (+6)
Tools for improved transmission of tacit knowledge
Biorisk and recovery from catastrophe
Many scientific and technological skills require learning through apprenticeship under a more experienced practitioner, and can't easily be described in writing. If a global catastrophe breaks the transmission of skills from masters to apprentices, it may be difficult to recover those skills. This would make recovery from catastrophe difficult. But there may be ways of improving the recording of these skills, such as through video or methods of observing expert performance.
Additional notes: There is a blog series on tacit knowledge and tacit knowledge extraction here: https://commoncog.com/blog/the-tacit-knowledge-series/
Peter S. Park @ 2022-03-01T09:52 (+6)
Facilitate U.S. voters' relocation to swing states
Values and Reflective Processes
A key difficulty of implementing alternative voting systems which can more effectively aggregate voters' preferences/information (and of implementing beneficial policies or constitutional amendments in general) is political gridlock. The political party that stands to lose power if a voting-system reform passes will vigorously attempt to obstruct it. The resolution of political gridlock could not only enable large-scale policy solutions to previously intractable societal problems, but also help implement (via policies or constitutional amendments) alternative voting systems which can bring about a lasting reduction in future political gridlock.
One way to reduce political gridlock is to help likely U.S. voters to move to swing states. (This is related to my ongoing research collaboration with Feng Fu.) Some tentative ideas include creating a website to inform people where they could move to make their vote more meaningful, applying social-norm theory to facilitate large-scale relocation to swing states, and contemplating various policies (e.g., to reduce housing costs, facilitate remote work, and provide relocation incentives) that may help reduce people's empirically high aversion to moving in general.
samuel @ 2022-03-06T16:52 (+1)
Peter - great idea, I've been doing some thinking on this as well, will probably send you an email!
Chris Leong @ 2022-03-01T07:52 (+6)
An EA Vegan Restaurant Chain:
Effective Altruism
Setting up a vegan restaurant chain associated with Effective Altruism could provide a cost-neutral or even profitable way of providing home bases for EA Societies in major cities. It would also provide opportunities to grow the community by prominently advertising any EA events being run there.
Downside: this might be seen as cultish. It wouldn't surprise me if no one value-aligned had the relevant skills. That said, we might be able to sign a franchise agreement with an existing restaurant.
(Probably not a good idea, but when brainstorming it is better to share more rather than less)
MaxGhenis @ 2022-03-06T16:44 (+3)
Framing it as EA hubs that also happen to serve vegan food could come off as less cultish. The restaurant could also donate 10% of revenue to GiveWell. Edit: Or let the customer select a GiveWell charity to receive 10% of their bill.
Yonatan Cale @ 2022-03-01T16:50 (+2)
- Vegans want to live where vegan restaurants exist
- Vegan restaurants want to exist where vegans live
Perhaps a place we could add value is in coordination. The rest should happen by itself, theoretically.
Chris Leong @ 2022-03-01T03:19 (+6)
Leadership development:
Effective Altruism
People who are ambitious are often keen on developing their leadership skills. A program that supported ambitious and altruistic people could both increase people's individual impact and provide a form of EA outreach through sharing EA frames and perspectives. This program would also be useful for developing the leadership skills of people within EA.
yfu @ 2022-03-01T01:57 (+6)
A search engine for micro-level data
Macro-level data is easy to find these days. If you want to know the historical GDP of China or carbon emissions of the U.S., you can find the information on many non-profit and for-profit sites via Google.
But if you want to quickly look up, say, "people's satisfaction with their daily lives" and "the amount they spend on food," you'd have to read dozens of papers, locate the names of the datasets used, find the places where such survey data is hosted (if it's available at all), create an account on the hosting site, download the data, and check whether the variable matches what you were looking for. The process wastes researchers' time and stifles novel and cross-disciplinary use of existing data.
I'd like to see/build a search engine that catalogs all variable names and other pieces of meta-data for all datasets that humans have ever created. (Google's product https://datasetsearch.research.google.com fails to catalog many important datasets and doesn't allow variable-level search, which I think is the main value proposition of this hypothetical search engine.)
Using this hypothetical search engine, researchers could quickly look up datasets that contain the variable they want and filter by relevant parameters such as age, country, and year of data collection. Lots of academic journals now require authors to make their data public (e.g. https://dataverse.harvard.edu), so we should build on this momentum to further increase the value of open data. Re-use of existing data is very limited because researchers have no tool for discovery. Knowledge of "what data is available on X topic" largely exists in experts' heads and is transmitted via word of mouth.
Another reason this search engine should be funded is that it lacks commercial viability: The amount of manual labor doesn't decrease with scale. The datasets that will be catalogued by this hypothetical search engine take on all sorts of formats, and the codebooks don't follow a fixed machine-readable template. (I assume large language models won't be of much help either.) Thus, if we think that such a search engine ought to exist, it would be funded only by philanthropy.
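To make the variable-level search idea concrete, here is a minimal sketch of the underlying index; the dataset names, variables, and metadata below are all illustrative, not real holdings:

```python
from collections import defaultdict

# Toy catalog: each (hypothetical) dataset lists its variables and metadata.
CATALOG = {
    "national_wellbeing_survey_2020": {
        "country": "US", "year": 2020,
        "variables": ["life_satisfaction", "food_spending"],
    },
    "household_panel_2019": {
        "country": "UK", "year": 2019,
        "variables": ["household_income", "food_spending"],
    },
}

def build_variable_index(catalog):
    """Invert the catalog: variable name -> datasets containing it."""
    index = defaultdict(list)
    for dataset, meta in catalog.items():
        for variable in meta["variables"]:
            index[variable].append(dataset)
    return index

def search(index, catalog, variable, country=None, year=None):
    """Variable-level search with optional metadata filters."""
    hits = index.get(variable, [])
    return [d for d in hits
            if (country is None or catalog[d]["country"] == country)
            and (year is None or catalog[d]["year"] == year)]

index = build_variable_index(CATALOG)
print(search(index, CATALOG, "food_spending"))                # both datasets
print(search(index, CATALOG, "food_spending", country="UK"))  # ['household_panel_2019']
```

The search itself is trivial once the catalog exists; as noted above, the hard, labor-intensive part is extracting variable names and metadata from heterogeneous codebooks.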
Chris Leong @ 2022-03-08T11:58 (+5)
EA from the ground up
Effective Altruism
Intellectual movements tend to develop by building upon the work of the previous generation and rejecting some of its foundational assumptions. We'd be keen to see an experiment to accelerate this. We'd suggest that the first step would be to identify the assumptions underlying EA, specific EA cause areas, or specific EA strategies, and to figure out when these break. The project would then focus on the assumptions that are most likely to be false, particularly those which would be high-impact if false. Efforts would then be made to think through whether these assumptions are actually true and what it would mean for EA if they were false. The project might find itself naturally splitting into a few subgroups depending on the assumptions participants bring to the table.
The project should mostly consist of young, up-and-coming EAs rather than EAs who are part of the "establishment" and already have a platform. More experienced EAs should be involved at later stages of the process, but including them too soon may prevent people from considering ideas outside the current paradigm.
Vladimir @ 2022-03-08T07:58 (+5)
An EA Space Agency
Effective Altruism, Space Governance
Let’s build an organization which formulates and implements space programs, missions, and systems aimed at the highest-priority things that humanity can be doing in space. There is currently no space organization, public or private, which formulates and implements programs and missions aligned solely with doing the most good, in an impartial and longtermist sense. There are many organizations which do some or much good, such as NASA, ESA, SpaceX, and others, but there is no example today which spends its budget on a portfolio of projects all aimed at doing the most good for humanity, most effectively, in an impartial and broad sense, in the way many EA grantmaking or evaluating organizations operate today. That should change. Costs for spacecraft development and launches are about to drop so much that an independent organization might be able to formulate and implement its own missions aimed solely at the highest-priority actions for which space access is required. Traditionally, costs have been prohibitive, such that only nations or very large organizations could afford access. There is reason to think this is about to change within this decade.
Costs for launching payloads to space are about to (~5 year timeframe) drop dramatically relative to the status quo (1/100x or less) because of SpaceX Starship, and its eventual competitors. Underdiscussed and underappreciated in this topic area is that payload development costs, not just launch costs, will also drop dramatically as a direct result of the combination of lowered launch costs and the increase in absolute payload capacity. The scale of that second cost drop could be comparably large, in my view, meaning a total mission cost reduction on the order of 1/10,000x could be on the table.

Payload capacity to Earth orbit will 5x (100T to LEO on Starship vs 20T typical on many platforms). Payload capacity (to LEO, in this example, but to anywhere, generally speaking) is a variable which many key factors are highly sensitive to in aerospace systems. Technical factors are directly sensitive to this, of course, but programmatic factors, the main cost drivers, are also highly sensitive to this. A 5x increase (20T to LEO vs 100T to LEO) is not merely a multiplier on top of cost, simply lowering the cost per kg further by a factor of 5. It’s not just more kg’s in the denominator with a fixed cost numerator. Rather, when you can actually fly a payload 2x or 5x the mass you normally would, for a fraction of the launch cost, the nature of the engineering problem undergoes a step change. The kinds of engineering, the types of resources, the timeframe, and the scope and scale of the organization required to pull it off all change. The bar is lowered.

For tens of millions per mission, which could be tens of millions per year or just a few million per year over a few years, an organization could develop and launch missions to close the gaps in space opportunities that can be used to reduce existential and catastrophic risk.
While NASA continues to develop and fund many climate science and near-Earth object detection projects (even one deflection demonstration), the overall budget leaves much to be desired from an effective altruist viewpoint, in my estimation. As a U.S. agency subject to U.S. congressional budgets… do I need to say more? I fully support aligning NASA more with EA over time, but that is a massive ship to steer, and I don’t necessarily support it more than creating a new space organization which is aligned from the ground up, as hard as that could be.

While SpaceX is laser-focused, dedicated, and actually serious about colonizing Mars, which I fully support, becoming multi-planetary, as necessary as it is, is not the only space-based EA near-term idea we should be able to come up with - far from it. Some ideas come to mind, like climate science targeted at extreme climate change scenarios, better/more supervolcano monitoring from orbit, and direct observation of nearby exoplanet systems for signs of alien megaprojects using dedicated megatelescopes, but I think part of the reason the ideas in this area are limited might be that we haven’t been considering having our own space organization as an option. Further work should go into understanding the cost-effectiveness of this line of thinking, as well as the actual funding model for such an organization. Do you go fully philanthropic (is that feasible?), or do you start with philanthropic seed funding to create a revenue-generating for-profit or non-profit entity? These questions bear greatly on the range of possibilities and the degree of prioritization of this idea. What might we think to do or learn in space, if we believed we were able to?
[opinions are my own, not those of my employer]
DonyChristie @ 2022-03-08T07:08 (+5)
Improving Critical Infrastructure
Effective Altruism
Some dams are at risk of collapse, potentially killing hundreds of thousands. The grid system is very vulnerable to electromagnetic pulse attack. Infrastructural upgrades could prevent sudden catastrophes from failure of critical systems our civilization runs on.
barkbellowroar @ 2022-03-08T05:50 (+5)
Build an Infrastructure Organization for The EA Movement (TEAM)
Effective Altruism, Empowering Exceptional People
Many high-impact organizations in effective altruism have expressed issues with sourcing operations talent, which takes time away from the key programs these charities provide, reducing overall impact. An infrastructure organization could provide operational support and build valuable tools to alleviate this burden on meta charities and streamline processes across organizations to improve movement coordination. This organization could also tackle major bottlenecks like hiring talent, vetting grants and projects, collecting data and user feedback, and even building software to support internal activities, like cost-benefit analysis tools or a community intranet.
PeterSlattery @ 2022-03-08T04:25 (+5)
An EA insurance and finance fund to make it easier for people to fund and take significant personal risk for important social benefits, e.g., due to early career change, founding a startup, etc.
Movement building & Helping exceptional people
Risk avoidance is a major reason why people don't change careers or take risks in pursuit of greater impact. We'd therefore like to see more attempts to establish financial services which can help reduce risk and promote more rapid and greater impact among exceptional individuals. We note that there may be advantages in combining long-term investing initiatives for patient philanthropy with insurance offerings.
DonyChristie @ 2022-03-08T03:22 (+5)
Decentralized incentives for resilient public goods after global catastrophic risk
Recovery from Catastrophe
Using cryptoeconomics to bootstrap the incentivization of a resilient grid, via which further cryptoeconomic incentives induce the bottom-up production of survival bunkers and other post-catastrophe public goods that could survive GCRs such as nuclear war. Figuring out how to reward people for preparing themselves, and more so if they help others or build critical infrastructure that lasts.
A Facebook comment I wrote that I am copy-pasting; I will likely edit it down in a bit: "Generally speaking, a collection of resources that are useful in the post-apocalypse will become some orders of magnitude more valuable than they currently are, since they will be more scarce and useful when one's life is at stake. These could be directly consumed, financialized using whatever monetary systems still exist, or otherwise used as a bargaining chip. I posit that buying things like food/water/coal mines/gold/land/solar panels/guns/???/etc now, either individually or collectively, will go up in value 100x-1000x from their current usefulness, broadly speaking, making it an extremely cost-effective hedge in one's financial portfolio.
The meaning I had in mind for a vault would be something like a millions-of-dollars vault meant to restart as some kind of oasis after a collapse. Something that's been an idea kicking in my head for a decade, so somewhat surprised no one's gone for it unless they're keeping it antimemetically cloaked. I recall a figure of '$100 million' on the EA Forum as the price tag of such a thing; personally I believe that individuals could start by digging a hole in the ground to start a survival cache or little underground house, and we can scale from there bottom-up. I envision perhaps something like a distributed network of such bunkers, directed more towards being public goods than the standard individualist ethos of preppers. I have a strong intuition that cryptocurrency incentives would make this more valuable/interesting, perhaps via making the financializability of survival goods persist past armageddon, or using impact certificates or other public goods funding mechanisms to incentivize the altruistic component, though I haven't formalized this into anything concrete beyond creative speculation and an impulse to make survival more economically attractive. I think there is a strong moral imperative for people to have an easier time making public goods investments that will pay off if a major global catastrophic risk happens."
Adam Binks @ 2022-03-07T22:12 (+5)
Find promising candidates for “Cause X” with an iterative forecast-guided thinktank
Epistemic institutions
How likely is it that the EA community is neglecting a cause area that is more pressing than current candidates? We are fairly confident in the importance of the community’s current cause areas, but we think it’s still important to keep searching for more candidates.
We’d be excited to fund organisations attacking this problem in a structured, rigorous way, to reduce the chance that the EA community is missing huge opportunities.
We propose an organisation with two streams: generalist research and superforecasting. The generalist researchers create shallow, exploratory evaluations of many different cause areas. Forecasters then use these evaluations to forecast the likelihood of each cause area being a top cause area recommended (e.g. by 80,000 Hours) in 5 years’ time. The generalist researchers then perform progressively more in-depth evaluations of the cause areas most favoured by forecasters. Forecasters update their forecasts based on these evaluations. If the forecasted promisingness exceeds a threshold, the organisation recommends that an EA funder fund in-depth research into the cause area.
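To illustrate the intended loop, here is a rough sketch in Python; `evaluate` stands in for the (human) research team's cause-area evaluations and `forecast` for the superforecasters' judgments, and the round count and threshold are arbitrary placeholders:

```python
def iterative_cause_search(causes, evaluate, forecast, rounds=3, threshold=0.6):
    """Alternate deeper evaluations with updated forecasts,
    narrowing the candidate pool each round."""
    candidates = list(causes)
    scores = {}
    for depth in range(rounds):
        # Researchers evaluate each remaining candidate at increasing depth.
        evaluations = {c: evaluate(c, depth) for c in candidates}
        # Forecasters update their scores in light of the new evaluations.
        scores.update({c: forecast(c, evaluations[c]) for c in candidates})
        # Keep the top half of candidates for deeper evaluation next round.
        candidates = sorted(candidates, key=scores.get, reverse=True)
        candidates = candidates[: max(1, len(candidates) // 2)]
    # Recommend in-depth funding for candidates clearing the threshold.
    return [c for c in candidates if scores[c] >= threshold]
```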
Adam Binks @ 2022-03-07T22:12 (+5)
Help high impact academics spend more time doing research
Empowering exceptional people
Top academic researchers are key drivers of progress in priority areas like biorisk, global priorities research and AI research. Yet even top academics are often unable to spend as much time as they want to on their research.
We’d be excited to fund an organisation providing centralised services to maximise research time for top academics, while minimising the overheads of setting up these systems for academics. It might focus on:
(1) Funding and negotiating teaching buy-outs,
(2) Providing an efficient shared team of PAs to handle admin, streamline academic service duties, submit papers, scout and screen PhD students, and accelerate literature surveys.
As AI research assistants like Elicit improve, this organisation could scalably offload work to these services.
Ludus @ 2022-03-07T19:29 (+5)
Modern Public Forums
Values and Reflective Processes, Epistemic Institutions, Effective Altruism
Violence begins when conversations stop. We'd love to see a renaissance of the ancient Greek agoras and Roman fora, which offered citizens a public space to gather, study, and discuss current events as well as everything else that is timelessly important for the future of humanity. In modern times, such places have become increasingly scarce, and social media do not constitute a suitable replacement, since many critical layers of human communication are absent.
We believe it is essential to have well-structured conversations in real life to exchange our narratives, continuously update our beliefs about the world and transform our newly found insights into actions on an individual level in everyday life. This should become a recurring, ordinary activity like going to the gym or shopping for groceries.
Therefore, we are looking for teams that are willing to establish and run many such modern forums for the general public. This entails a suitable building architecture to fit its purpose (e.g. Eudaimonia Machine), a diverse educational offering (e.g. lectures, film screenings, group discussions, 1-on-1 mentoring, library etc) and a primary focus on one of our Areas of Interest (e.g. one center in NYC could focus on Economic Growth while another one in Berlin focuses on Values and Reflective Processes). In the future, we hope to see several forums focusing on different topics right in the center of every large city.
PeterSlattery @ 2022-03-07T03:01 (+5)
An annual report on cryptocurrency activity and philanthropy
Public influence & attention economy
Lots of money has been invested in cryptocurrency, and it seems likely that this will continue to be the case. The growth in the market has created many new millionaires, some of whom are quite atypical and young relative to high-wealth individuals in other areas. Cryptocurrency philanthropic norms appear to differ from those of the main population of donors and are not as well established. Thus, identifying and publicising key trends and opportunities in this area could be unusually helpful in normalising desired actions (e.g., effective giving to key causes). We would therefore like to support work to produce an annual report on crypto philanthropy that seeks to identify good donation opportunities and influence potential donors to give to effective causes.
PeterSlattery @ 2022-03-07T02:54 (+5)
Better understanding the role of behaviour science and systems thinking in producing key EA outcomes
Social change and movement building
Behaviour and systems change are core to all EA outcomes. We would therefore like to support research to provide a better understanding of the causes of EA-relevant behaviour (e.g., career change, donation, involvement in EA or a social movement), at both psychological and structural levels.
---
See this for some examples of ideas potentially relevant to AI governance or safety
PeterSlattery @ 2022-03-07T02:48 (+5)
Create EA focused communication initiatives
Movement building & Conceptual dissemination
Optimising EA movement building and coordination requires confident and effective communicators for compelling and high-fidelity conceptual dissemination. To help improve communication across members of the EA community, we would welcome applications for courses focused on helping EAs communicate better, for instance modelled on the Toastmasters program or the Dale Carnegie course.
PeterSlattery @ 2022-03-07T02:44 (+5)
Supporting Longitudinal studies of Effective Altruists
Movement building
One significant part of the EA movement is helping individuals have maximal impact across their life cycles. However, EA lacks evidence on how different choices, circumstances, and lifestyles affect individual impact. To address this, we would like to support longitudinal studies to understand, for instance, how important factors such as age, career, happiness, mental health, and actions (e.g., taking pledges, attending events, undergoing career changes) interact and change perceived impact and EA involvement over lifespans, and how these differ between current and former EAs.
Jonathan Nankivell @ 2022-03-06T23:59 (+5)
Research Coordination Projects
Research that can help us improve
At the root of many of the problems being discussed are coordination problems. People are in prisoners' dilemmas and keep defecting. This is the case in the suggestion to buy a scientific journal: if the universities coordinated, they could buy the journal, remove fees, and improve editorial policies, leaving them in a far better situation. Since they don't coordinate, they have to pay to access their own research.
Research into this type of coordination problem has revealed two general strategies for overcoming prisoners'-dilemma-type effects: quadratic funding and dominant assurance contracts.
I propose a research project to investigate opportunities to use these techniques, which, if appropriate, would be bankrolled by the Future Fund.
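To make the first mechanism concrete, here is a minimal sketch of the quadratic funding matching rule (the standard formula from Buterin, Hitzig, and Weyl's "liberal radicalism" paper): a project's ideal funding level is the square of the sum of the square roots of individual contributions, with a matching pool topping up the difference.

```python
import math

def qf_match(contributions):
    """Ideal quadratic funding subsidy for one project:
    (sum of sqrt(contributions))^2 minus what was directly given."""
    direct = sum(contributions)
    funded = sum(math.sqrt(c) for c in contributions) ** 2
    return funded - direct

print(qf_match([100.0]))      # 0.0    -- one donor, no match
print(qf_match([1.0] * 100))  # 9900.0 -- broad support, large match
```

Broad support is matched far more generously than concentrated support, which is the property that makes the mechanism relevant to coordination problems like the journal example.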
agnode @ 2022-03-06T22:20 (+5)
Combined conferences
Effective altruism, Epistemic institutions, Values and reflective processes
Fund teams that have roots both in EA and in other relevant fields and communities to put on conferences that bring those communities together. For example, it could be valuable to put on a conference for EA and RadicalxChange, given that there is a lot of overlap in interests but significant differences in approach. This could help bring new ideas into EA, especially as a conference is a good way to build relationships and have lengthy, careful discussions. Other communities may also have members from different backgrounds than EAs, so this could help increase diversity in EA. As well as conferences, this could also be done at a smaller scale, with local meetups putting on events in collaboration with other meetups.
Additional notes:
- The 2015 Five Worlds Collide conference is an example of this. It combined effective altruism, quantified self, rationality/scientific thinking, transhumanism and artificial intelligence. However, those communities already have a lot of overlap and it would be good to explore involving communities that are less closely related to EA.
agnode @ 2022-03-06T19:07 (+5)
Audio/video databases of people's experiences of problems
Values and reflective processes, Effective altruism, Research that can help us improve, epistemic institutions
Grantmakers and policymakers are usually far removed from the problems that people face in their daily lives, especially from the problems of people who are more marginalised. Part of the solution to this should be that grantmakers and policymakers make sure to talk to a variety of people to involve them in decisionmaking. However, databases of audio and video interviews with people could also help. For example, interviews could be held with a variety of people around the world to ask them questions like "what are the main problems you face in your life?" and "how have things changed for the better or worse over your lifetime?" Questions about values could also be asked, such as "what do you hope to see for yourself/family/community/country in the future?"
This would likely give a richer picture of people's experiences and problems than surveys, and could help distant decisionmakers understand and empathise with the situations of people they are making decisions about.
Additional notes:
- This process would need to be done carefully and sensitively, making sure that this doesn't become an exploitative or manipulative process. There is lots of expertise within the health research, international development, and humanitarian fields on how to do this sort of research well. Where possible, people could be supported to make their own videos about the problems that they face, in a similar way to the Photo Voice methodology.
- https://healthtalk.org/ is a useful model. A team of qualitative researchers hold interviews with people who have health conditions to understand their experience, and these are collated into a resource that includes audio and video clips from the interviews. e.g. see their section on depression: https://healthtalk.org/depression/overview
- Anthropologists sometimes use film to understand the people they study. This project could draw on ethnographic filmmaking techniques.
agnode @ 2022-03-06T17:41 (+5)
Multilingual web searching and browsing
Effective altruism, epistemic institutions
Despite the capability of automated translation, there is no smooth way to browse the web in multiple languages. It would be useful to have search engines return results from any language, with the results automatically translated into English. When you click on them, you then go to a web page automatically translated into English and can continue browsing in English. This seems important for EA because EA research currently relies primarily on English resources, and this could be causing bias in EA research. It would also be useful for other researchers and policy people working on global issues, e.g. in global health, to be able to research across multiple languages.
This has been an issue I've come up against when working on global health: a funder might express an interest in learning about funding opportunities in, e.g., South America or South East Asia, and this is challenging to research because of the language barrier.
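A rough sketch of the pipeline such a tool might implement; `translate` and `search_web` here are hypothetical stand-ins for whatever machine-translation and search back-ends a builder would wire in:

```python
def multilingual_search(query_en, languages, translate, search_web, per_lang=10):
    """Translate an English query into each target language, search,
    and translate the results back to English for unified browsing."""
    results = []
    for lang in languages:
        local_query = translate(query_en, source="en", target=lang)
        for hit in search_web(local_query, lang=lang, limit=per_lang):
            results.append({
                "url": hit["url"],
                "language": lang,
                "title_en": translate(hit["title"], source=lang, target="en"),
                "snippet_en": translate(hit["snippet"], source=lang, target="en"),
            })
    return results
```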
Greg_Colbourn @ 2022-03-05T16:39 (+5)
A start-up accelerator for pledge-signing EAs and longtermists.
Economic Growth, Effective Altruism, Empowering Exceptional People
Y Combinator/Entrepreneur First meets Founders Pledge. A top-tier start-up accelerator where applicants sign a pledge to donate a significant share of exit proceeds/profits to doing the most good they can from an effective altruist/longtermist perspective. Build start-ups and network with your value-aligned peers!
Denis Drescher @ 2022-03-05T21:44 (+4)
Maybe Founders Pledge itself can be turned into a top-tier startup accelerator? If they’re up for it?
Taras Morozov @ 2022-03-04T13:24 (+5)
Find a niche to create a subsidized prediction market
Epistemic Institutions
One of the problems with current forecasting is that it isn’t getting attention from decision-makers. One way to jumpstart this attention is to create a subsidized market in some well-chosen area that works well and thus publicly proves and legitimizes the use of prediction markets.
One suitable candidate is Robin Hanson’s fire-the-CEO market:
Make two subsidized real-money markets on the stock price of each Fortune 500 firm, one market conditional on its CEO stepping down by quarter’s end, and the other conditional on the CEO not stepping down. The difference between these two prices would advise the board on dumping the CEO.
If active, these markets should attract business press, and then most of these CEOs would come to see what the markets say about them. Half a million would pay for legal/admin. The other half would only cover a $1000 subsidy per firm, but CEOs trying to manipulate would add lots of liquidity. A few years of data would let us clearly compare the returns of firms following market advice to firms not following. With clear data I’d encourage shareholders to sue boards ignoring market advice, and after a few wins most boards would weigh market advice heavily. A revolution in CEO accountability would then be complete, all for only a million.
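As a toy illustration of the decision rule (the prices and the helper function are made up for this sketch; a real design also has to handle settlement of trades under the condition that doesn't occur):

```python
def ceo_market_advice(price_if_fired, price_if_kept, margin=0.0):
    """Compare the two conditional stock prices; a gap above the margin
    means traders expect the firm to be worth more without the CEO."""
    gap = price_if_fired - price_if_kept
    return ("replace CEO" if gap > margin else "retain CEO"), gap

advice, gap = ceo_market_advice(price_if_fired=52.0, price_if_kept=48.0)
print(advice, gap)  # replace CEO 4.0
```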
Jakob @ 2022-03-06T10:42 (+3)
One potential niche could be betting markets around outcomes of political events (e.g., betting on outcome metrics such as GDP growth, expected lifespan, GINI coefficient, or carbon emissions; linked to events such as a national election, new regulatory proposals, or the passing of government budgets). Depending on legal restrictions, this market could even ask policy makers or political parties to place bets in these markets, to help the public assess which policy makers have the best epistemics, to hold policy makers accountable, and to incentivize policy makers to invest in better epistemics. (note: this also links to an idea presented in a different comment here: https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=zjvCCNuLEToCQyHdn)
Konstantin Pilz @ 2022-03-04T09:56 (+5)
Formulate AI-super-projects that would be both prestigious and socially beneficial
Artificial Intelligence, Great Power Relations
There are already some signs of race dynamics between the US and China in developing TAI. Arguably, these dynamics are at least partly motivated by concerns of national prestige. If they speed up, it might be beneficial to present a set of prestigious AI projects that the US and other countries could adopt. These projects should have the following features:
- Be highly visible and impressive for a wide audience
- Contribute to safer AI (e.g. through interpretability or alignment playing a great role in the project)
- Be socially beneficial (i.e. the benefits should be distributed widely, ideally the technology would be open access after development)
(Idea adopted from Joslyn Barnhart)
Konstantin Pilz @ 2022-03-04T10:02 (+3)
Possible downside: Contribute to further speed-up of AI development, possibly leaving less time for alignment research
(However, if done correctly, this project would only harness pre-existing dynamics and channel funds toward beneficial projects.)
IanDavidMoss @ 2022-03-03T19:00 (+5)
The Petrov Prize for wise decision-making under pressure
Epistemic Institutions, Values and Reflective Processes
On September 26, 1983, Stanislav Petrov singlehandedly averted a nuclear war when he decided to wait for more evidence before reporting an apparent launch of ICBMs aimed at the Soviet Union. The incident was later determined to be a false alarm caused by an equipment malfunction. While Petrov's story is one of the most dramatic examples ever of impactful decision-making under pressure, there are plenty of other people and organizations throughout history whose choices have deeply shaped the long-term future. This prize would seek out recent instances of good judgment in high-stakes environments and generously reward the individuals, teams, and/or institutions involved, so that their stories can both serve as an inspiration to others and promote broader adoption of strong decision-making principles and practices when they matter most.
HaydnBelfield @ 2022-03-03T21:13 (+4)
Note the Future of Life Award, which has been going for the last 5 years - https://futureoflife.org/future-of-life-award/
Given to Arkhipov, Petrov, Meselson, Foege & Zhdanov, and Farman, Solomon & Andersen
RyanCarey @ 2022-03-03T22:12 (+2)
Here is a variation on the suggestion - boosting the FLI Award to be more like a Nobel!
Peter S. Park @ 2022-03-03T18:08 (+5)
Simultaneously reliable and widely trusted media
Epistemic institutions
Reliable (in the truthseeking sense) media seem not to be widely trusted, and widely trusted media seem not to be reliable. Research and efforts to simultaneously achieve both could potentially be very impactful for the political resolution of a broad range of issues. (Ambitious idea: could EAs/longtermists establish a media competitor?)
aviv @ 2022-03-03T17:41 (+5)
Global Mini-public on AI Policy and Cooperation
Artificial Intelligence (Governance), Epistemic Institutions, Values and Reflective Processes, Great Power Relations
We'd like to fund an organization to institutionalize regular (e.g. yearly) global mini-publics to create recommendations on AI policy and cooperation, ideally in partnership with key academic journals (and potentially the UN, major corporations, research institutions, etc.). Somewhat analogous to globalca.org, which focuses on gene editing (https://www.science.org/doi/10.1126/science.abb5931), and globalassembly.org, which focuses on climate (those are essentially pilots, heavily limited by funding).
Peter S. Park @ 2022-03-03T16:15 (+5)
Influencing culture to align with longtermism/EA
Effective altruism
"Everything is downstream of culture." So, basic research and practical efforts to make culture more aligned with longtermism/EA could potentially be very impactful.
Peter S. Park @ 2022-03-03T16:03 (+5)
Global cooperation/coordination on existential risks
AI, Biorisk
Negative relationships between, for example, the US and China undermine pandemic prevention efforts, to the detriment of all people. Research on, and efforts to facilitate, fast, effective, and transparent global cooperation/coordination on pandemic prevention could be very impactful. Movement building on the sheer importance of this (especially among the relevant scientists and governmental decision-makers) would be especially impactful. Perhaps pandemic prevention can be "carved out" in U.S.-China relations? This also applies to other existential risks.
Peter S. Park @ 2022-03-03T15:55 (+5)
Reducing antibiotic resistance
Biorisk
If, say, a plague bacterium (maybe there are better examples) became resistant to all available antibiotics and started spreading, it could cause a pandemic like the Black Death. Research on how to behaviorally reduce antibiotic use (e.g., reduce meat consumption, convince meat companies not to use antibiotics, reduce overprescription) and how to develop new antibiotics (AI could help), as well as advocacy for reducing antibiotic use, could potentially be high-impact.
Jack Lewars @ 2022-03-02T20:25 (+5)
EA influencers
Effective Altruism
More awareness of EA = more talent and money for EA
Pay A-list influencers, with followings independent of EA, to promote EA content and themes. Concentrate on influencers popular with GenZ.
Risks: lack of message fidelity
Peter S. Park @ 2022-03-02T16:45 (+5)
Research on predicting talent
Effective altruism, Economic growth, Research that will help us improve
Predicting which people (e.g., prospective students, prospective employees, people whom movement-builders target) are likely to have high potential is extremely important. But it is plausible that the current ways in which these predictions are made are incomplete, cognitively biased, and substantially suboptimal. Research into identifying general or field-specific talent could be very impactful. This could be done by funding fellowships, grants, and collaboration opportunities on the topic.
Peter S. Park @ 2022-03-01T21:03 (+5)
Broadening statistical education
Economic Growth, Values and Reflective Processes
Human cognition is characterized by cognitive biases, which systematically lead to errors in judgment: errors that can potentially be catastrophic. For example, a strong case can be made that Russia's invasion of Ukraine was an irrational decision by Putin, one consequence of which is the potential for nuclear war. Overconfidence is a cause of wars and of underpreparation for catastrophes (e.g., pandemics, as illustrated by the COVID-19 pandemic).
One way to reduce detrimental and potentially catastrophic decisions is to provide people with statistical training that can empower beneficial decision-making via correct calibration of beliefs. (Statistical training to keep track of the mean past payoff/observation can be helpful in a general sense; see my paper on the evolution of human cognitive biases and its implications.) At the moment, statistical training reaches a very small percentage of people, and most provision of statistical training is not laser-focused on improving practical learning and decision-making capabilities, but aimed at other, indirect goals (e.g., as a prerequisite for STEM undergraduate majors). It may be helpful to (1) emphasize practical, impactful aspects in the provision of statistical training and (2) broaden its provision to a wider segment of people. One upside is that the policies voters or politicians choose may become more prudent and more consistent with empirical evidence.
One ambitious course of action is to (1) design a curriculum for practical statistics training aimed at the median (say, American) high-school student and (2) advocate for the use of this curriculum in education. This may have the secondary benefit of getting more young students interested in longtermism and effective altruism. A general goal that can be pursued is to increase the importance of practical data science in high-school and undergraduate education, room for which can be made, for example, by making subjects like Euclidean geometry optional.
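As a small, concrete example of the kind of practical tool such a curriculum could teach (a sketch of the standard incremental-mean update, not code from the paper): tracking the mean of past payoffs requires no stored history at all.

```python
def update_running_mean(mean, n, x):
    """One-step mean update: mean_{n+1} = mean_n + (x - mean_n) / (n + 1)."""
    n += 1
    mean += (x - mean) / n
    return mean, n

mean, n = 0.0, 0
for payoff in [3.0, 5.0, 10.0]:
    mean, n = update_running_mean(mean, n, payoff)
print(mean)  # 6.0, the mean of the three payoffs
```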
jknowak @ 2022-03-02T09:56 (+2)
What I think I'd love to see is one of the below:
- statistics bootcamps
- statistics tutoring (or rather, addressing the lack of problems to work on with your tutor; my idea was to try going through actuary exam questions)
- something like Cochrane Training (where you can learn to review interventions) but broader/more general?
Peter S. Park @ 2022-03-04T02:47 (+1)
Thanks so much for these suggestions! I would also really like to see these projects get implemented. There are already bootcamps for, say, pivoting into data science jobs, but having other specializations of statistics bootcamps (e.g., an accessible life-coach level bootcamp for improving individual decision-making, or a bootcamp specifically for high-impact CEOs or nonprofit heads) could be really cool as well.
James Ozden @ 2022-03-01T10:27 (+5)
International mass movement lobbying against x-risks
Biorisk and Recovery from Catastrophe, Great Power Relations, Values and Reflective Processes
In recent years, there has been a dramatic growth in grassroots movements concerned about climate change, such as Fridays for Future and Extinction Rebellion. Some evidence implies that these movements might be instrumental in shifting public opinion around a topic, changing dominant narratives, influencing voting behaviour and affecting policymaker beliefs. Yet, there are many more pressing existential risks that receive comparatively little attention, such as nuclear security, unaligned AI, great power conflict, and more. We think an international movement focused on promoting key values, such as concern for future generations, and the importance of reducing existential risk, could have significant spillover effects into public opinion, policy, and the broader development of positive societal values. This could be a massively scalable project, with the potential to develop hubs in over 1000 cities across 100+ countries (approximately the same as Extinction Rebellion Global).
NB: I'm aware this might not be a good idea for biorisk due to infohazards.
Akhil Bansal @ 2022-03-01T03:05 (+5)
Risk modelling and preparedness for climate-induced risks
Research That Will Help Us Improve
Climate change is a risk factor for several threats to the long-term future of humanity. It increases the likelihood of infectious diseases, including novel pathogens, and it is correlated with increased state fragility and a greater propensity for conflict. Therefore, an organisation that models the climate resilience of social, health, and political systems, and subsequently seeks to strengthen and improve their preparedness, may reduce the likelihood of significant threats to humanity’s long-term future.
Zac Townsend @ 2022-03-01T01:00 (+5)
1. Longitudinal studies
Epistemic Institutions; Economic Growth
We are interested in funding long-term, large-scale data collection efforts. One of the most valuable research tools in social science is the collection of cross-sectional data over time, whether on educational outcomes, political attitudes and affiliations, or health access and outcomes. We are interested in funding research projects that intend to collect data over twenty years. Such projects require significant funding to ensure follow-up data collection.
2. Replication funding and publication
Epistemic Institutions
The replication crisis is a foundational problem in (social) science. We are interested in funding publications, registries, and other funds focused on ensuring that trials and experiments are replicable by other scientists.
3. Market shaping and advanced market commitments
Epistemic institutions; Economic Growth
Market shaping is the use of committed demand or other instruments to jump-start an idea that could not launch otherwise. Operation Warp Speed is the most recent example of market shaping through advanced market commitments, but the approach has been used several times for vaccine development. We are interested in funding work to understand when market shaping makes sense, ideas for creating and funding market-shaping methods, and specific market-shaping or advanced market commitments in our areas of interest.
4. Development of cross-disciplinary talent
Economic Growth, Values and Reflective Processes, Empowering Exceptional People
The NIH successfully funded the creation of interdisciplinary graduate programs in, for example, computational biology, as well as Ph.D./M.D. programs. Increasingly, expertise confined to a single, artificially constructed discipline cannot solve our most pressing problems. We are interested in funding the development of individuals fluent in two or more fields, particularly people with expertise in technology and social or economic issues. Universities have computer science + math or computer science + biology degrees, but we are interested in cultivating talent at the intersection of any disciplines that can affect our long-term future, with a particular emphasis on non-university contexts.
5. Political fellowships
Values and Reflective Processes, Empowering Exceptional People
We’d like to fund ways to pull people who wouldn’t otherwise run for political office into running. It's like a MacArthur. You get a call one day. You've been selected. You'd make a great public servant, even if you don't know it. You'd get some training, like the DCCC and NRCC provide, and when you run, you get two million spent by the super-PAC run by the best. They've done the analysis. They'll provide funding. They've lined up endorsers. You've never thought about politics, but they've got your back. Say what you want to say, make a difference in the world: run the campaign you don't mind losing. And if you win, make it real.
6. Cross-university research
Values and Reflective Processes, Research That Will Help Us Improve, Epistemic Institutions, Empowering Exceptional People
Since 1978, more than 30 scientists supported by the Howard Hughes Medical Institute have won the Nobel prize in medicine. We are interested in funding other cross-institutional collections of researchers and financial support beyond the biosciences, focusing on economic growth, public policy, and general social sciences.
7. Practitioner research
All
Universities are primarily filled with professors trained in similar ways. Although universities sometimes have “professors of the practice,” these positions are often reserved for folks nearing retirement. We are interested in funding ways for practitioners to spend time conducting and publishing “research” informed by their lived real-world experiences.
8. Private-sector ARPA models
All
Many of the technological innovations of the last fifty years have their genesis in experiments run by DARPA. ARPA models are characterized by individual decision-makers taking on risky bets within defined themes, setting ambitious goals, and mobilizing top researchers and entrepreneurs to meet them. We are interested in funding work to study these funding models and to create similar models in our areas of interest.
9. Large-scale randomized controlled trials
Values and Reflective Processes; Epistemic institutions; Economic Growth
RCTs are the gold standard in social science research but are frequently too expensive for most researchers to run, particularly in the United States. We are interested in large-scale funding of RCTs that are usually impossible due to a lack of funding.
10. Development of measures of success in governments
Values and Reflective Processes; Epistemic institutions
Markets keep score easily: through money. In governments, what success looks like is often more opaque and harder to measure. We are interested in funding studies of effectiveness/success and measuring it. For example, we are interested in comparative critiques of similar government institutions (i.e., DMV vs. DMV, EPA vs. EPA) across states and local entities regarding institutional design, people, and performance.
11. Civic sector software
Economic Growth, Values and Reflective Processes
Software and software vendors are among the biggest barriers to instituting new public policies or processes. The last twenty years have seen staggering advances in technology, user interfaces, and user-centric design, but governments have been left behind, saddled with outdated, bespoke, and inefficient software solutions. Worse, change of any kind can be impractical with existing technology systems or when choosing from existing vendors. This fact prevents public servants from implementing new evidence-based practices, becoming more data-driven, or experimenting with new service models.
Recent improvements in civic technology are often at the fringes of government activity, while investments in best practices or “what works” are often impossible for any government to implement because of technology. So while over the last five years there has been an explosion of investment and activity around “civic innovation,” the results are often mediocre. On the one hand, governments end up with little more than tech toys or apps that have no relationship to the outcomes that matter (e.g. poverty alleviation, service delivery). On the other hand, tens of millions of dollars are invested in academic research, thought leadership, and pilot programs on improving outcomes that matter, but no government can ever practically implement them because of their software.
Done correctly, software can be the wedge that radically improves governments. The process of building that technology can be inclusive: engaging users inside government, citizens who interface with programs, community stakeholders, and outside experts and academics.
We are interested in funding tools that vastly and fundamentally improve the provisioning of services by civic organizations.
12. Social sector infrastructure
Values and Reflective Processes, Empowering Exceptional People
If an entrepreneur starts or runs a for-profit company, there is a range of software and other infrastructure to help run the business: explainer guides, AWS, Salesforce.com, etc. Similar infrastructure for not-for-profits and other NGOs exists, particularly cross-border, but is far thinner. We are interested in funding a new generation of infrastructure that supports the creation and maintenance of the social sector. This could look like a next-generation low-cost fiscal sponsor, or an accounting system focused on NFP accounting and filing 990s: anything that makes it easier to start and run institutions.
13. Studying Economic Growth Deterrents and Cost Disease
Economic growth
Economic growth has forces working against it. Cost disease is the most well-known and pernicious of these in developed economies. We are interested in funding work on understanding, preventing, and reversing cost disease and other mechanisms that are slowing economic growth.
(Inspired by Patrick Collison)
14. Accelerating Accelerators
Economic Growth
Y Combinator has had one of the largest impacts on GDP of any institution in history. We are interested in funding efforts to replicate that success across different geographies, sectors (e.g. healthcare, financial services), or corporate form (e.g. not-for-profit vs. for-profit).
Nick_Beckstead @ 2022-03-01T02:07 (+21)
Thanks so much for all of these ideas! Would you be up for submitting these as separate comments so that people can upvote them separately? We're interested in knowing what the forum thinks of the ideas people present.
ThomasWoodside @ 2022-03-05T02:29 (+4)
Some of this has been said in threads above, but I don't think that upvotes are a very good way of knowing what the forum thinks. People are definitely not reading this whole thread and the first posts they see will likely get all of their attention.
On top of that, I do not expect forum karma to be a good indicator of much even in the best case. People tend to upvote what they can understand and what is interesting and useful to them. I suspect that what the average EA Forum user finds useful and interesting is only loosely related to what a large EA grantmaker should fund. For instance, good writing is generally a very good way to get upvotes, but it doesn't correlate much with the strength of the ideas presented.
Zac Townsend @ 2022-03-01T12:04 (+1)
Apologies. I tried. The forum definitely thinks I'm spamming it with fourteen comments, but we'll see how it goes.
Nathan Young @ 2022-03-02T00:01 (+2)
You have to pause for about 30s between comments
Seamus @ 2022-02-28T19:28 (+5)
Making Public Information More Public
Access to public information is hampered by arcane systems and government roadblocks that prevent people from getting direct access to data. Federal court records are behind a government paywall. Filing and keeping up with Freedom of Information Act requests requires herculean dedication. Government data are sometimes timed for release when people are least likely to notice. These are but a few examples of the barriers placed between what should be public information and the actual public. As a result, unadulterated information is limited to a relatively small number of gatekeepers, be they public officials, bureaucrats, or traditional news organizations who possess the time, means, and understanding of the system, all with their own benevolent, or sometimes decidedly not benevolent, priorities. This, in turn, allows those same organizations unchecked latitude to make daily decisions on access that affect large swathes of people. Making public information more public is an inherent right in a functioning and healthy society.
DonyChristie @ 2022-03-08T03:28 (+4)
Research on Competitive Sovereignties
Governance, New Institutions, Economic Growth
The current world order is locked in stasis and status quo bias. Enabling the creation of new jurisdictions, whether via charter cities, special economic zones, or outright creation of new lands such as seasteading, could allow more competition between countries to attract subscriber-citizens, increasing welfare.
It would also behoove us to think about standards for international interoperability in a world where '1000 nations bloom'. Greater decentralization of power could increase certain kinds of existential risk, so standards for cooperation at scale should be created. Otherwise, the greater the N of actors, the more surface area for them to go to war with each other.
Carlos Jarquín @ 2022-03-07T04:28 (+4)
Materials Informatics
For centuries, we have identified some of the most important materials in modern society by chance. These include steel, copper, rubber, etc.
Given the grand challenges of today's world, the discovery and scaling of new advanced materials are necessary to create impact. (After all, everything around us is made of materials.)
I'd like to see more funding for materials informatics, and for guidance/regulation of materials informatics, so that we don't create advanced materials or nanomaterials that could cause a catastrophe.
PeterSlattery @ 2022-03-07T03:08 (+4)
A replication lab or project to replicate and expand key EA research
Movement building & conceptual dissemination
EA outreach and strategy are supported by a growing pool of social psychology research exploring EA-related topics (e.g., appeals to change dietary or donation choices, understanding moral views, or related interventions). However, much social psychology research doesn't replicate, or varies depending on the audience, when retested. It's therefore possible that some key findings and theories that guide EA are more or less valid or robust than currently believed.
We would therefore like to fund work to replicate key theories and findings. For instance, several independent teams at different universities/institutes might coordinate to attempt to replicate seminal early EA work using the researchers' materials and Positly samples. The replication project could also expand to test findings relevant to EA that haven't been replicated (e.g., interventions to promote charitable donation or prosocial behaviour). The work could be modelled on the Many Labs approach. We would also welcome work to build on or translate key conceptual tools from EA (e.g., the INT framework) into the academic literature.
---
People seem to be relatively unimpressed by this idea, so I'll just add some explanation. My theory of change for how EA values affect science suggests that having stronger evidence for, and more frequent mentions of, key EA research and researchers is absolutely critical. Replication is one established way to do this and, in my view, the one that is most likely to succeed. A further benefit of replication work is that it allows junior researchers to develop skills and expertise much more efficiently and safely than if they have to produce new theoretical work from their own thinking. Doing a replication also makes them much better equipped to collaborate with and support the authors of the replicated papers on future work, or to expand on the work they replicated.
PeterSlattery @ 2022-03-07T02:46 (+4)
Creating EA aligned research labs
Conceptual dissemination
Academic publications are considered to be significantly more credible than other types of publications. Many academics with outsized impacts lead publication labs, which hire many junior researchers to help maximise the return on the knowledge and experience of a more senior researcher. We would like to support attempts to found and scale up academic research labs aligned with relevant cause areas.
agnode @ 2022-03-06T17:21 (+4)
Synthesis book fund/prize
Senior academics or practitioners have the accumulated experience and knowledge to be able to write grand syntheses of their subjects, or to put forward grand theories, without those just being wild speculation. This fund would proactively support and/or retroactively reward work of this type. To make this kind of work more likely, the fund could seek out academics who seem particularly well placed to create a work of this type and encourage them to do so. In addition, the fund could support the writing of both a popular and an academic version of the work. This could help overcome the issue where popular grand syntheses tend to be widely influential even when they are seen as dubious by experts (e.g. as happened with Guns, Germs, and Steel).
Examples of the kinds of works I mean: James C. Scott's Seeing Like a State, Vaclav Smil's Creating the Twentieth Century, Thomas Piketty's Capital in the Twenty-First Century, Fernand Braudel's The Mediterranean and the Mediterranean World in the Age of Philip II.
Denis Drescher @ 2022-03-07T14:13 (+2)
Great! Some of them may need (or benefit from) ghostwriters. I don’t know how easy it is to find good ghostwriters for a given subject, but that could be another problem that such an organization could solve for them.
Jonathan Nankivell @ 2022-03-06T17:10 (+4)
Credence Weighted Citation Metrics
Epistemic Institutions
Citation metrics (total citations, h-index, g-index, etc.) are intended to estimate a researcher's contribution to a field. However, if false claims get cited more than true claims (Serra-Garcia and Gneezy 2021), these citation metrics are clearly not fit for purpose.
I suggest modifying these citation metrics by weighting each paper by the probability that it will replicate. If each paper i has c_i citations and probability p_i of replicating, we can modify each formula as follows: instead of measuring total citations Σ_i c_i, we consider credence-weighted total citations Σ_i p_i c_i. Instead of using the h-index, where we pick the largest number h such that h articles have c_i ≥ h, we could use the credence-weighted h-index, where we pick the largest number h such that h articles have p_i c_i ≥ h. We can use this idea to modify citation metrics that evaluate researchers (as above), journals (Impact Factor and CiteScore), and universities (rankings).
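A minimal sketch of the two modified metrics (all citation counts and replication probabilities below are illustrative placeholders):

```python
# Each paper is (citations, probability of replicating) -- placeholder data.

def credence_weighted_citations(papers):
    """Total citations, weighting each paper by its replication probability."""
    return sum(p * c for c, p in papers)

def credence_weighted_h_index(papers):
    """Largest h such that h papers each have p_i * c_i >= h."""
    weighted = sorted((p * c for c, p in papers), reverse=True)
    h = 0
    for rank, w in enumerate(weighted, start=1):
        if w >= rank:
            h = rank
        else:
            break
    return h

papers = [(120, 0.9), (80, 0.3), (40, 0.7), (15, 0.95), (5, 0.5)]
print(credence_weighted_citations(papers))  # 176.75
print(credence_weighted_h_index(papers))    # 4
```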
We can use prediction markets to elicit these probabilities, where the questions are resolved using a combination of large scale replication studies and surrogate scoring. DARPA SCORE is a proof of concept that this can be done on a large scale.
Prioritising credence-weighted citation metrics over raw citation metrics would improve the incentives researchers have. No longer will they have to compete with people who write 70 flimsy papers a year that no one actually thinks will replicate; now researchers who are right will be rewarded.
Guillaume Corlouer @ 2022-03-06T16:49 (+4)
Funding AI policy proposals to slow down high-risk AI capability research.
AI alignment, AI policy
We want AI alignment research to catch up with and surpass AI capability research. Among other things, AI capability research requires a friendly political environment. We would be interested in funding AI policy proposals that would increase the chance of obtaining effective regulations slowing down highly risky AI capability R&D. For example, some regulations could require large language models to pass a thorough safety audit before being deployed or scaled above determined parameter-count thresholds. Another example would be funding AI policy projects that increase the chance of banning research aiming to build generally capable AI before the AI alignment problem is solved. Such regulations would probably need to be implemented on a national and international scale to be effective.
Chris Leong @ 2022-03-07T03:46 (+4)
One worry is that red tape increases the chance that someone who doesn't care about regulation front-runs more careful teams to AGI.
Guillaume Corlouer @ 2022-03-07T10:28 (+1)
Yes. To reduce that risk we could aim for an international agreement banning high-risk AI capability research, though that might not be satisfying. I have the impression that very few people (if any) are working on that flavor of regulation, and it could be useful to explore it more. Ideally, if we could simply coordinate not to work directly on producing generally capable AI until we figure out safety, that could be an important win.
marsxr @ 2022-03-04T16:18 (+4)
Global (baseline) Education Curriculum
Getting people aligned and avoiding division: humans on this planet are on the same team.
By creating a basic program and common understanding, it will be much easier to implement any of the global policies required to handle climate change.
Some of the proposed subjects:
- Literacy, numeracy
- English. (Alternatively: Latin and Esperanto are not really competitors, and Chinese is too difficult.)
- Health, human body, food, nutrition
- Nature, earth sciences, environment
- Making, engineering, tinkering
- Communication, relationships, culture, tolerance, other religions
- History - learning from dudes like Hitler and Putin
(naturally, we need to be careful so that it is not hijacked by bad actors)
On a fundamental level, humans around the world share so many values, and everyone shares similar needs. Every major religion is against theft, murder, and rape.
If we align education, that will help mitigate conflicts and help us agree on short-term sacrifices.
(such as cutting CO2 emissions)
Related: https://www.xprize.org/articles/global-learning-xprize-two-grand-prize-winners
☝️ ☝️ ☝️ It would be great to utilize XPRIZE technology and let it run without a teacher.
Peter S. Park @ 2022-03-02T20:30 (+4)
Research on solving the wicked problem of underinvestment into interdisciplinary research
Economic Growth, Research That Can Help Us Improve
"Interdisciplinary research is widely considered a hothouse for innovation, and the only plausible approach to complex problems such as climate change," but are systematically underfunded and underconsidered (Bromham et al., 2016). Thinking of this problem as a wicked problem and researching how to systematically solve it (at the university, department, publication journal, and grant agency levels) could potentially be impactful.
steve2152 @ 2022-03-01T19:39 (+4)
A better open-source human-legible world-model, to be incorporated into future ML interpretability systems
Artificial intelligence
[UPDATE 3 MONTHS LATER: Better description and justification is now available in Section 15.2.2.1 here.]
It is probable that future powerful AGI systems will involve a learning algorithm that builds a common-sense world-model in the form of a giant unlabeled black-box data structure—after all, something like this is true in both modern machine learning and (I claim) human brains. Improving our ability, as humans, to look inside and understand the contents of such a black box is overwhelmingly (maybe even universally) viewed by AGI safety experts as an important step towards safe and beneficial AGI.
A future interpretability system will presumably look like an interface, with human-legible things on one side of the interface, and things-inside-the-black-box on the other side of the interface. For the former (i.e., human-legible) side of the interface, it would be helpful to have access to an open-source world-model / knowledge-graph data structure with the highest possible quality, comprehensiveness, and especially human-legibility, including clear and unambiguous labels. We are excited to fund teams to build, improve, and open-source such human-legible world-model data structures, so that they may be freely used as one component of current and future interpretability systems.
~
Note 1: For further discussion, see my post Let's buy out Cyc, for use in AGI interpretability systems? I still think that a hypothetical open-sourcing of Cyc would be a promising project along these lines. But I’m open-minded to the possibility that other approaches are even better (see the comment section of that post for some possible examples). As it happens, I’m not personally familiar with what open-source human-legible world-models are out there right now. I’d just be surprised if they're already so good that it wouldn't be helpful to make them even better (more human-legible, more comprehensive, fewer errors, uncertainty quantification, etc.). After all, there are people building knowledge webs right now, but nobody is doing it for the purpose of future AGI interpretability systems. So it would be quite a coincidence if they were already doing everything exactly right for that application.
Note 2: Speaking of which, there could also be a separate project—or a different aspect of this same project—which entails trying to build an automated tool that matches up (a subset of) the entries of an existing open-source human-legible world-model / web-of-knowledge data structure with (a subset of) the latent variables in a language model like GPT-3. (It may be a fuzzy, many-to-many match, but that would still be helpful.) I’m even less of an expert there; I have no idea if that would work, or if anyone is currently trying to do that. But it does strike me as the kind of thing we should be trying to do.
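As a purely illustrative sketch of the fuzzy matching in Note 2: suppose we already had vector embeddings for the knowledge graph's concept labels and for a model's latent units (both are random placeholders below). One simple approach is to report every (concept, latent unit) pair whose cosine similarity clears a threshold, which naturally yields the fuzzy, many-to-many matches mentioned above. All names, shapes, and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder embeddings: 5 knowledge-graph concepts and 8 latent units,
# assumed (hypothetically) to live in a shared 16-dimensional space.
concept_labels = ["dog", "mammal", "vehicle", "river", "music"]
concept_vecs = rng.normal(size=(5, 16))
latent_vecs = rng.normal(size=(8, 16))

def cosine_matrix(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

sims = cosine_matrix(concept_vecs, latent_vecs)

# Many-to-many match: keep every pair above a similarity threshold,
# rather than forcing each concept onto a single latent unit.
threshold = 0.3
for i, label in enumerate(concept_labels):
    matches = [(j, round(float(sims[i, j]), 2))
               for j in range(sims.shape[1]) if sims[i, j] > threshold]
    print(label, matches)
```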
Note 3: To be clear, I don't think of myself as an interpretability expert. Don’t take my word for anything here. :-) [However, later in my post series I'll have more detailed discussion of exactly where this thing would fit into an AGI control system, as I see it. Check back in a few weeks. Here’s the link.]
Chris Leong @ 2022-03-01T04:52 (+4)
Incubator Incubator
Effective Altruism
Effective Altruism needs more incubators. Why not have an incubator to incubate them?
Risks: We end up with too many incubators.
(This is my least serious proposal)
Chris Leong @ 2022-03-01T02:50 (+4)
(This is a refinement of Yonatan Cale's proposal)
Limited Scope Impact Purchase:
Various cause areas incl. AI Safety and Effective Altruism
The biggest challenge with impact purchases is that the market for selling is usually much larger than the market for buying. This project would limit the scope of the purchase to particular people, in order to a) ensure that impact sellers were aware of the impact purchase's existence when they decided to pursue their project* and b) address this market imbalance, thereby increasing people's odds of being paid and hence their motivation.
For example, it might be worthwhile creating an impact purchase for each cohort of the AGI safety fundamentals course with purchases after 1 year and 2 years for safety-relevant work. Alternatively, I could imagine an impact purchase for Oxford University students.
I could also imagine a system where people pre-register their projects which would allow people to see how big the pool is and how competitive it is likely to be.
Risks: Even if people mostly believe that they will be compensated by impact purchase for their work, their risk aversion might prevent it from having any significant impact on their motivation.
*Increases the chance of it being counterfactual
Jake Toth @ 2022-03-18T12:46 (+3)
Thanks for running this competition, looks like there are plenty of great ideas to choose from!
I submitted my entry on improving human intelligence through non-invasive brain stimulation through the Google form, it said my entry was recorded but I got no email confirmation.
Has anyone else submitted through the Google Form, and did they also get no email confirmation?
Does anyone know when the winners of the competition will be announced?
Lauren Reid @ 2022-03-16T14:27 (+3)
Just came to say that this ideas competition really turned me on - I loved it. I hope this becomes an ongoing community ‘suggestion box’, perhaps monitored once a month.
I understand that one could write a blog post with an idea, but I think this is an even better low barrier way of getting ideas quickly.
Personally, this competition helped me realize that I have a different lens than many EAs, and that my ideas and skills could be valued. Thank you.
Joey Wong @ 2022-03-08T13:44 (+3)
Funds to study efficient logistics or to run a logistics company
Logistics is one of the bottlenecks for goods and services, and it leads to an uneven distribution of resources. Especially during pandemics and lockdowns, a shortage of delivery workers leads to food shortages: outbreak areas go short while other regions have a surplus. Digitalisation can help information travel in a speedy and cheap way, but what about the delivery of physical products? It's something more than autonomous vehicles. Human beings are a fragile part of the procedure. Right now in Hong Kong, we estimate over 1 million people (20% of the population) are suffering from Covid. These people are supposed to quarantine for 14 days, and usually when one person gets infected, the whole team/company gets infected. It's a catastrophe for some industries. Shipping also relies on dock labour. We need to minimise labour usage.
barkbellowroar @ 2022-03-08T05:46 (+3)
Reframe U.S. college EA chapters as an alternative to Greek life
Values and Reflective Processes, Empowering Exceptional People, Effective Altruism
Following the model of Alpha Phi Omega, the largest coed service fraternity in the U.S. with ~335 chapters and 400,000 alumni, reframing EA chapters as social organizations may help with recruitment and retention. It could also encourage a broader range of activities for chapters to run throughout the year including things like hosting workshops for other students on how to think about careers, hosting film screenings and speakers, introducing pressing problems, red-teaming career plans, hosting campus debate tournaments, raising money and awareness for high-impact charities and encouraging students to sign giving pledges like One for the World and Giving What We Can.
MaxRa @ 2022-03-08T04:37 (+3)
[fairly unsure, would be interested in thoughts]
Facilitate global cooperation via economic relationships and shared ownership
Values and Reflective Processes
We live in an economically connected world that is characterized by mutually beneficial trades. On top of that, countries are generally heavily invested in diverse financial securities of other countries. This way, economic progress in one country is generally to the benefit of the whole international community. Consequently there are strong incentives for peaceful coexistence, internalization of problems that are affected by tragedies of the commons, and a general atmosphere of multilateral cooperation and understanding.
Downsides of this co-dependency are the increased relative costs of sanctioning countries that violate international codes of conduct, and the accelerated distribution of technological innovations that may pose risks when not handled with care and strong coordination. While being conscious of those downsides, we tentatively think that projects which facilitate a global cooperative atmosphere between major actors have the potential to unlock considerable abilities to solve the most pressing problems going forward. Concrete projects may aim at emphasizing the economic positive-sum situation humanity is facing, researching and facilitating current international cooperation around global issues such as climate change, and facilitating financial interdependence.
PeterSlattery @ 2022-03-08T04:33 (+3)
Making significant improvements to the EA wiki (last minute submission)
See this for a range of ideas for improving the EA wiki which could be funded. I'd suggest that all changes made to the wiki should also be replicated and linked across the EA ecosystem and onto normal Wikipedia.
PeterSlattery @ 2022-03-08T04:10 (+3)
A living 'cause prioritisation flowchart' /Better visualisation template or graphic design copy for EA communicators (quick submission)
[Inspired by this comment]
EA has many aims and a complex causal logic behind these aims. Visualisation helps to explain this better, and flow charts are one established way we do this. These could be used effectively in many communication settings, but there is a coordination problem: most individual actors who need such a chart lack sufficient expected ROI or experience to create one. We would therefore welcome more work to create helpful templates.
For instance, this could be done by creating an easily editable/accessible flowchart that is in an open access format (draw.io) and shared on the forum with occasional updates. Similar work could also be done in terms of producing and sharing attractive slides and graphic design elements.
brb243 @ 2022-03-07T21:34 (+3)
Systemic change marginal cost-effectiveness program estimation and evaluation
Effective Altruism, Research That Can Help Us Improve, Artificial Intelligence
Instead of focusing on single outcomes (the ones which are measurable and highlighted by academia), one can focus on advancing systemic change (institutionalizing safe, positive systems) by selecting programs with the highest (and lowest) marginal cost-effectiveness, considering how impact and costs develop. Then, impact can be increased by 1) advising resource shifts from low to high cost-effectiveness programs, and 2) funding additionalities that resources would not otherwise be switched into. For example, could a charity that advises on vaccinations cover a few fewer beneficiaries but inform the existing ones on 'all they need to live well,' using its already developed trusting relationships with local informants, which reduces the marginal cost? The idea is to get this system acquired by Charity Navigator's Impact Unit to significantly improve global philanthropic allocation.
brb243 @ 2022-03-07T17:09 (+3)
Effects of humanitarian development on peace and conflict
Great Power Relations, Values and Reflective Processes, Effective Altruism, Biorisk and Recovery from Catastrophe
Does conflict not stem from suboptimal institutions, such as those which value aggression and disregard for lack of better known alternatives, and so could it not be prevented by general humanitarian development? It can be that when people are more able to contribute to and benefit from others' upskilling, rather than competing for scarce resources because it is challenging to increase efficiency (e.g. in farming), then they care better about others, for example by educating them in relevant skills. Conversely, better skills lead to better care. This must somehow perpetuate up to decisionmakers: if they see abuse, they abuse; if they see GVC competitiveness and specialization strategies advanced through human and physical capital investments, they do that instead. If it is shown that systemic change is broadly effective against global catastrophic risks, then global problems are solved, and GCRs too. If not, then we can continue specific GCR response and prevention and hope that nothing else occurs or that these measures will be sufficient.
brb243 @ 2022-03-07T16:56 (+3)
Wellbeing determinants' understanding
Research That Can Help Us Improve
Without understanding the fundamentals of individuals' wellbeing, you cannot build institutions based on and optimizing for wellbeing, even if you have a lot of attention and prediction capacity: you do not know what to advocate for or research. So, you should fund a team of neuroscientists, sociologists, and anthropologists to provide an interdisciplinary, inter-perspective understanding of what, fundamentally, makes individuals happy. To emphasize: this should be understood at the fundamental level (e.g. safety), not based on historical examples (e.g. power over others) or on means to wellbeing (e.g. building a risk shelter). Then, as a next step, you can develop solutions broadly favored by decisionmakers and implementers. In this case, you would not need as much ability to gain interest by means extraneous to providing good solutions.
Peter S. Park @ 2022-03-07T04:14 (+3)
Facilitate interdisciplinarity in governmental applications of social science
Values and Reflective Processes, Economic Growth
At the moment, governmental applications of social science (where, for example, economists who use the paradigm of methodological individualism are disproportionately represented) could benefit from drawing on other fields of social science that can fill potential blind spots. The theory of social norms is a particularly relevant example. Also, behavioral scientists and psychologists could potentially be very helpful in improving the judgement of high-impact decision-makers in government, and in improving predictions on policy counterfactuals by filling in previous informational blind spots. Research and efforts to increase the consideration of diverse plausible scientific paradigms in governmental applications of social science could potentially be very impactful.
PeterSlattery @ 2022-03-07T03:11 (+3)
Using the EA survey to answer key research questions.
Research & movement building
We would like to support work by EA researchers to preregister hypotheses and measures to test (with ethics approval) i) in the EA survey (maybe as a non-mandatory final section) and ii) with the public, to compare the results. For instance, this could explore how different demographics, personality types, and identities (e.g., identification as a social justice activist/climate change activist) interact with different moral views or arguments for key EA behaviours such as giving to effective charities/caring about longtermism. New questions and interventions could be tested each year to explore new differences. Doing this could help us better understand how EAs differ from the public, produce research papers, and guide movement building.
PeterSlattery @ 2022-03-07T02:41 (+3)
Collective financing for EA products
Movement building, coordination, coincidence of wants problems
As shown by crowdfunding platforms, collective financing has many benefits. For instance, it allows individuals to collectively fund projects that they could not fund as individuals, and allows projects to start and scale when they would not otherwise exist. We would therefore like to fund projects to support collective financing within the EA community. For instance, this could involve allowing individuals to commit to providing a project or service (e.g., a detailed review of the arguments for and against X) if offered x funding, or to commit to providing x% of the funding for a proposed project if a threshold is reached. Larger funders could elect to fund projects if the revealed community need was sufficiently high.
agnode @ 2022-03-06T17:33 (+3)
EA-oriented research search engines
Effective altruism
EA researchers and people in similar roles, such as grantmakers and policy analysts, face a difficult search challenge. They are often trying to find high-quality resources that synthesise expert consensus in fields that are unfamiliar to them. Google often returns results that are too low-quality and popularly oriented, while Google Scholar returns results that are too specific or only tangentially related to EA/policy/grantmaker interests. An improved search engine would return quality synthesis resources such as books, lectures, podcast episodes, expert blog posts, etc. A simple way to implement this would be a custom search engine that searches a curated list of websites such as think tanks, blogs, etc. (see the sketch after the notes below).
Additional notes:
- APO is a good example of the kind of thing that is useful - it is a searchable collection of policy documents mostly from Australia and New Zealand.
- Possibly https://elicit.org/ is already going to be an overall solution for this.
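A minimal sketch of the curated-search idea, assuming we simply lean on an ordinary search engine's standard site: operator rather than building our own index (the domain list is an illustrative placeholder):

```python
from urllib.parse import quote_plus

# Illustrative placeholder list of curated, quality-synthesis sites.
CURATED_SITES = [
    "apo.org.au",
    "ourworldindata.org",
    "80000hours.org",
]

def curated_query_url(query: str, sites=CURATED_SITES) -> str:
    """Build a search URL restricted to the curated domains via site: filters."""
    site_filter = " OR ".join(f"site:{s}" for s in sites)
    return "https://www.google.com/search?q=" + quote_plus(f"{query} ({site_filter})")

print(curated_query_url("expert consensus on pandemic preparedness"))
```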
Ricky @ 2022-03-06T08:10 (+3)
Lobby big tech companies to create AI Safety departments to monitor the growth of machine learning technology and implement proactive risk mitigation.
Peter S. Park @ 2022-03-06T02:08 (+3)
Incentivize researchers to prioritize paradigm shifts rather than incremental advances
Economic growth, Research That Can Help Us Improve
There's a plausible case that societal under-innovation is one of the largest causes (if not the largest cause) of people's suboptimal well-being. For example, scientific research could be less risk-averse/incremental and more pro-moonshot. Interdisciplinary research on how to achieve society's full innovation potential, and movement-building targeted at universities, scientific journals, and grant agencies to incentivize scientific moonshots, could potentially be very impactful.
Denis Drescher @ 2022-03-05T20:15 (+3)
Research to determine what human cultures minimize the risks of major catastrophes
Great Power Relations, Values and Reflective Processes, Artificial Intelligence
I posit that human cultures differ, and that there's a chance that some cultures are more likely to punish in minor ways and to adapt to new situations peacefully, while others may be more likely to wage wars. This may be completely wrong.
But if it is not, we could investigate what processes can be used to foster the sort of culture that is less likely to immanentize global catastrophes, and to structure the cultural learning of future AI systems such that they also learn that culture, so that (seeming) cooperation failures between AIs are frequent and minor and really part of their bargaining process, rather than infrequent and civilization-ending. It might be even more important to set a clear cultural Schelling point for AIs if some cultures play well with all other cultures but all cultures play well with themselves.
Some more detail on my inspiration for the idea (copied from my blog):
Herrmann et al. (2008) have found that in games that resemble collective prisoner's dilemmas with punishment, cultures worldwide fall into different groups. Those with antisocial punishment fail to realize the gains from cooperation, but two other groups succeed: in the first (the cities Boston, Copenhagen, and St. Gallen), participants cooperated at a high level from the start and used occasional punishments to keep it that way. In the second (the cities Seoul, Melbourne, and Chengdu), the prior appeared to be low cooperation, but through punishment they achieved, after a few rounds, the same level of cooperation as the first group.
These two strategies appear to me to map (somewhat imperfectly) to the successful Tit for Tat and Pavlov strategies in iterated prisoner’s dilemmas.
Sarah Constantin writes:
In Wedekind and Milinski’s 1996 experiment with human subjects, playing an iterated prisoner’s dilemma game, a full 70% of them engaged in Pavlov-like strategies. The human Pavlovians were smarter than a pure Pavlov strategy — they eventually recognized the DefectBots and stopped cooperating with them, while a pure-Pavlov strategy never would — but, just like Pavlov, the humans kept “pushing boundaries” when unopposed.
As mentioned, I think these strategies map somewhat imperfectly to human behavior, but I feel that I can often classify the people around me as tending toward one or the other strategy.
Pavlovian behaviors:
- Break rules until the cost to yourself from punishments exceeds the profits from the rule-breaking.
- View rules as rights for other people to request that you stop a behavior if they disapprove of it. Then stop if anyone invokes a rule.
- Push boundaries, the Overton window, or unwritten social rules habitually or for fun, but then take note if someone looks hurt or complains. Someone else merely looking unhappy with the situation is a form of punishment for an empathetic person. (I’m thinking of things like “sharp culture.”)
- Don’t worry about etiquette because you expect others to give you frank feedback if they are annoyed/hurt/threatened by something you do. Don’t see it as a morally relevant mistake so long as you change your behavior in response to the feedback. (This seems to me like it might be associated with low agreeableness.)
Tit for Tat behaviors:
- Try to anticipate the correct behavior in every situation. Feel remorse over any mistakes.
- Attribute rule-breaking, boundary-pushing behaviors to malice.
- Keep to very similar people to be able to anticipate the correct behaviors reliably and to avoid being exploited (if only for a short number of “rounds”).
This way of categorizing behaviors has led me to think that there are forms of both strategies that seem perfectly nice to me. In particular, I've met socially astute agents who noticed that I'm a "soft culture" tit-for-tat type of person and adjusted to my interaction style. I don't think it would make sense for an empathetic tit-for-tat agent to adjust to a Pavlovian agent in such a way, but it's a straightforward self-modification for an empathetic Pavlovian agent.
Further, Pavlovian agents probably have a much easier time navigating areas like entrepreneurship, where you're always moving in innovative areas that don't yet have any hard and fast rules you could anticipate. Rather, the rules need to be renegotiated all the time.
Pavlov also seems more time-consuming and cognitively demanding, so it may be more attractive for socially astute agents and for situations where there are likely gains to be had as compared to a tit for tat approach.
The idea is that one type of culture may be safer than another for AIs to learn from through, e.g., inverse reinforcement learning. My tentative hypothesis is that the Pavlovian culture is safer because punishments are small and routine with little risk of ideological, fanatical retributivism emerging.
Derek Shiller @ 2022-03-05T18:16 (+3)
Authoritative Statements of EA Views
Epistemic Institutions
In academia, law, and government, it would be helpful to have citeable statements of EA-relevant views presented in an authoritative and unbiased manner. Having such material available lends gravitas to proposals that help address related problems and provides greater justification for taking those views for granted.
(This is a variation on 'Expert polling for everything' focused on providing authority of views to non-experts. The Cambridge Declaration on Consciousness is a good example.)
Jonas Moss @ 2022-03-05T13:45 (+3)
Scoring scientific fields
Epistemic Institutions
Some fields of science are uncontroversially more reliable than others. Physics is more reliable than theoretical sociology, for example. But other fields aren't that easy to score. Should you believe the claims of a random sleep research paper? Or a paper from personality psychology? Efficacy is just as important, as a scientific field with low efficacy is probably not worth engaging with at all.
A scientific field can be evaluated by giving it a score along one or more dimensions, where a lower score indicates the field might not be worth taking seriously. Right now, people score fields of science informally. For instance, it is common to be skeptical of results from social psychology due to the replication crisis. Claims of nutrition scientists are often ignored due to their over-reliance on observational studies. If the field hasn't been well investigated, the consumer of the scientific literature is on his own.
Scoring can be based on measurable factors such as
- community norms in the field,
- degree of p-hacking and publication bias,
- reliance on observational studies over experimentation,
- amount of "skin in the game",
- open data and open code,
- how prestige-driven it is.
Scoring of the overall quality of a field serves multiple purposes.
- A low score can dissuade people from taking the field seriously, potentially saving lots of time and money.
- The scores can be used informally when forming an opinion. More formally, they can be used as input into other methods, e.g. to correct p-values when reading a paper (see the sketch after this list).
- If successful, the scores can incite reform in the poorly performing subfields.
- Can be used as input to other EA organizations such as 80,000 hours.
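One hedged illustration of the p-value-correction purpose above: if a field's score is read as the prior probability that a tested hypothesis in that field is true, a standard positive-predictive-value calculation gives the chance that a statistically significant result reflects a real effect. All the numbers below are placeholders.

```python
def prob_true_given_significant(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """P(effect is real | p < alpha), treating the field's score as the prior."""
    true_pos = prior * power          # real effects that reach significance
    false_pos = (1 - prior) * alpha   # null effects that reach significance anyway
    return true_pos / (true_pos + false_pos)

# A highly-scored field vs. a poorly-scored one (placeholder priors).
print(prob_true_given_significant(prior=0.5))   # ~0.94
print(prob_true_given_significant(prior=0.05))  # ~0.46
```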
simeon_c @ 2022-03-05T09:24 (+3)
Making Impactful Science More Reputable
There are two things that matter in science: reputation and funding. While there is more and more funding available for mission-driven science, we'd be excited to see projects that try to increase the reputation of impactful science. We think that increasing the reputation of impactful work could, over time, substantially increase the amount of research done on the things that society cares about.
Some of the ways we could provide more reputation to impactful research:
- Awarding prizes to past and present researchers that have done mostly impactful work.
- Organizing “seminars of impact” where the emphasis is put on researchers who have been able to make their research impactful
- Communicating and sharing impactful research being done. That could be done in several ways (e.g. simply using social media or making short films on specific mission-driven research projects).
ThomasWoodside @ 2022-03-04T23:09 (+3)
Ethics Education
Values and Reflective Processes
Over the next century, leaders will likely have to make increasingly high-stakes ethical decisions. In democratic societies, large numbers of people may play a role in making those decisions. And yet, ethics is seldom thoroughly taught in most educational curricula. While it may be covered briefly in secondary school and is covered in detail at university for those who attend and choose to study it, many accomplished people do not have even a superficial understanding of the most important ethical theories and arguments for and against them. We think that better knowledge of ethics might enable people to behave more ethically, and better understand the limitations of commonsense morality: for instance, that it typically neglects people in the far future. We'd like to see projects that aim to increase high fidelity knowledge of ethics in high-leverage ways, such as the creation of high-quality standardized curricula or promotion of existing ethics courses to large audiences online.
christian.r @ 2022-03-04T22:47 (+3)
Experimental Wargames for Great Power War and Biological Warfare
Biorisk and Recovery from Catastrophe, Epistemic Institutions
This is a proposal to fund a series of "experimental wargames" on great power war and biological warfare. Wargames have been a standard tool of think tanks, the military, and the academic IR world since the early Cold War. Until recently, however, these games were largely used to uncover unknown unknowns and help with scenario planning, and most such games continue to be unscientific exercises. Recent work on "experimental wargames" (see, e.g., this paper on drones and escalation) has leveraged wargaming methods with randomly assigned groups and varying scenarios to see how decision-makers will react in hypothetical crisis situations. A series of well-designed experimental wargames on crisis decision-making in a great power confrontation or during a biological attack could help identify weaknesses, quantify risks, and uncover cognitive biases at work in high-pressure decision-making. Additionally, they would have the added benefit of raising awareness about global catastrophic risks.
Peter S. Park @ 2022-03-03T17:58 (+3)
Normalize broad ownership of hazmat suit (and of N-day supply of non-perishable food and water)
Biorisk
If everyone had either worn a hazmat suit all the time or stayed at home for 14 days (especially in the early stages of the COVID-19 pandemic), the pandemic would have ended. Normalize, fund, and advocate for broad ownership of hazmat suits and of non-perishable food and water, to prevent future pandemics. This may be more feasible in developed countries than developing countries, but in principle foreign aid/EA can make it feasible for developing countries as well.
Greg_Colbourn @ 2022-03-05T13:06 (+2)
This would only work for pandemics if literally everyone in the world did it at the same time. I think we'd probably need effective global governance for that (that itself isn't an x-risk in terms of authoritarianism or permanently curtailing humanity's flourishing).
Peter S. Park @ 2022-03-02T19:29 (+3)
Building in reciprocal altruism into exercise, via a nonprofit with a mobile app
Effective altruism
Regular exercise likely has a very large positive impact on health and well-being. A lot of Americans do not get sufficient regular exercise, which is probably a major cause of suboptimal quality of life and, subsequently, suboptimal productivity.
One reason why people don't like getting regular exercise at the gym is that it feels artificial or unpleasant, like a waste of time and energy. In a sense, this viewpoint is correct; moving heavy objects back and forth or running on a treadmill has no benefit other than the exercise itself.
Evolutionarily relevant foragers (and likely most ancestral humans) get regular exercise, but for the explicit purpose of cooperative foraging. This is why their quality of life (in the sense of health) rivals or even exceeds that of many Americans, despite their lack of access to modern medicine.
Building humans' tendency to partake in reciprocal altruism into exercise could have a potentially high impact on quality of life and productivity. The idea is that a nonprofit could build a mobile app with 'altruism points' that can be exchanged for donations. Instead of going to the gym to exercise, you look at the list of requests on the mobile app to deliver groceries or food to people in need, or to deliver essential items to people who are busy with work while the store is open. After you fulfill a request, you get 'altruism points'. You can then spend 'altruism points' when you need a quick delivery yourself. This is not confined to charitable giving/delivery (you can use your 'altruism points' for things like restaurant food delivery), and charitable requests can be posted to the app without donations (elderly people's grocery trips during COVID, etc.).
The upsides: more EAs (and more people in general) will feel good about exercising; higher reciprocal cooperation and solidarity in the community in general; more enthusiasm/less guilt about saving valuable time by requesting help (e.g., ordering food instead of cooking); and less "deadweight loss" from moving heavy weights back and forth.
(A friend contributed to this idea, and I will be sharing the prize money with her if this idea is selected.)
Peter S. Park @ 2022-03-02T16:49 (+3)
Research on predicting interest in EA/longtermism
Effective altruism, Research That Can Help Us Improve
In order to help movement-builders better target their efforts, research on how to identify people who are more likely than average to be receptive to EA/longtermism could be quite impactful. Facilitating this research in the behavioral sciences can be done by funding fellowships, grants, and collaboration opportunities on the topic.
Denis Drescher @ 2022-03-06T00:12 (+3)
Lucius Caviola and colleagues are working on this. Doesn’t mean that there shouldn’t be more efforts like that or that they don’t need help. :-)
Nathan Young @ 2022-03-02T10:46 (+3)
Wikipedia research/infrastructure/support
Epistemics
Wikipedia is a hugely valuable public resource. Internally, however, there are slow processes and aging mechanisms, as in many institutions. Run a research and lobbying organisation to help Wikipedia maximise its value to the world.
jknowak @ 2022-03-02T10:02 (+3)
Internal market for (EA) recruitment
Effective Altruism Operations, Economic Growth
Open source tool that would allow companies/orgs to set up internal (prediction) markets where all employees could bet on which candidate would be the best fit and be awarded points/real money for every month they stayed at the company.
Nathan Young @ 2022-03-02T10:48 (+2)
You would want to run markets on who would stay, I think, since that's the resolution criterion.
jknowak @ 2022-03-07T14:38 (+1)
Yes, that too, but what I was thinking is that the votes on "whom to hire" could then be used (if you voted on the winning candidate) as shares of a bonus paid out monthly.
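A toy sketch of the mechanic discussed in this thread, under the stated assumptions (employees stake points on candidates; backers of the hired candidate split a monthly bonus pool, pro rata, for each month the hire stays). All names and numbers are placeholders.

```python
from collections import defaultdict

class HiringMarket:
    def __init__(self, monthly_bonus_pool: float):
        self.monthly_bonus_pool = monthly_bonus_pool
        self.stakes = defaultdict(dict)  # candidate -> {employee: points}

    def bet(self, employee: str, candidate: str, points: float):
        prev = self.stakes[candidate].get(employee, 0.0)
        self.stakes[candidate][employee] = prev + points

    def monthly_payouts(self, hired_candidate: str) -> dict:
        """Split the bonus pool among the hire's backers, pro rata, for each month they stay."""
        backers = self.stakes[hired_candidate]
        total = sum(backers.values())
        return {e: self.monthly_bonus_pool * pts / total for e, pts in backers.items()}

market = HiringMarket(monthly_bonus_pool=100.0)
market.bet("alice", "candidate_A", 30)
market.bet("bob", "candidate_A", 10)
market.bet("carol", "candidate_B", 50)

# candidate_A is hired and stays a month: alice gets 75.0, bob gets 25.0.
print(market.monthly_payouts("candidate_A"))
```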
Zac Townsend @ 2022-03-01T12:01 (+3)
(Per Nick's post, reposting)
Practitioner research
All
Universities are primarily filled with professors trained in similar ways. Although universities sometimes have “professors of the practice,” these positions are often reserved for folks nearing retirement. We are interested in funding ways for practitioners to spend time conducting and publishing “research” informed by their lived real-world experiences.
Zac Townsend @ 2022-03-01T12:00 (+3)
(Per Nick's note, reposting)
Cross-university research
Values and Reflective Processes, Research That Can Help Us Improve, Epistemic Institutions, Empowering Exceptional People
Since 1978, more than 30 scientists supported by the Howard Hughes Medical Institute have won the Nobel Prize in medicine. We are interested in funding other cross-institutional collections of researchers, with financial support beyond the biosciences, focusing on economic growth, public policy, and the social sciences in general.
Zac Townsend @ 2022-03-01T11:52 (+3)
Social sector infrastructure
Values and Reflective Processes, Empowering Exceptional People
If an entrepreneur starts or runs a for-profit company, there is a range of software and other infrastructure to help them run the business: explainer guides, AWS, Salesforce.com, etc. Similar infrastructure for not-for-profits and other NGOs is far less developed, particularly cross-border. We are interested in finding a new generation of infrastructure that supports the creation and maintenance of the social sector. This could look like a next-generation low-cost fiscal sponsor, an accounting system focused on NFP accounting and filing 990s, or anything else that makes it easier to start and run such institutions.
Yonatan Cale @ 2022-03-01T16:54 (+1)
Monday.com recently founded a social-impact team which is trying to help charities in ways that (1) use technology, and (2) are scalable (lots of charities can enjoy a single thing that Monday builds).
If you have ideas, let me know; I know someone on their team.
Zac Townsend @ 2022-03-02T01:29 (+2)
Would be happy to help, but they might be farther along than my thinking either way. I just know a ton of people who have tried to get fiscal sponsors and it's a pain (and expensive!).
Chris Leong @ 2022-03-09T04:15 (+2)
Effective Altruism Promotional Materials
Effective Altruism
We are looking to invest in the production of high-quality materials for promoting Effective Altruism and Effective Altruism cause areas, including posters, brochures, and booklets. Effective Altruism is heavily focused on the fidelity of transmission, so these materials should be designed to avoid low-quality transmission. This could be achieved by distributing materials that promote opportunities for deeper engagement or by designing the materials very carefully. Such an organisation would likely conduct studies and focus groups to understand the effectiveness of the material being distributed and whether it is maintaining its fidelity.
Leo Gao @ 2022-03-08T22:32 (+2)
Historical investigation on the relation between incremental improvements and paradigm shifts
Artificial Intelligence
One major question that heavily influences the choice of alignment research directions is the degree to which incremental improvements are necessary for major paradigm shifts. As the field of alignment is largely preparadigmatic, there is a high chance that we may require a paradigm shift before we can make substantial progress towards aligning superhuman AI systems, rather than merely incremental improvements. The answer to this question determines whether the best approach to alignment is to choose metrics and try to make incremental progress on alignment research questions, or to attempt to mostly fund things that are long shots, or something else entirely. Research in this direction would entail combing through historical materials in the field of AI, as well as in other scientific domains more broadly, to gain a better understanding of the context in which past paradigm shifts occurred, and putting together a report summarizing the findings.
Some possible ways-the-world-could-be include:
- Incremental improvements have negligible impact on when paradigm shifts happen and could be eliminated entirely without any negative impact on when paradigm shifts occur. All or the vast majority of incremental work is visible from the start as low risk low reward, and potentially paradigm shift causing work is visible from the start as high risk high reward.
- Incremental improvements serve to increase attention in the field and thus increase the amount of funding for the field as a whole, thereby proportionally increasing the absolute number of people working on paradigmatic directions, but funding those working on potential paradigm shifts directly would yield the same paradigm shifts at the same time
- Incremental improvements are necessary to convince risk averse funding sources to continue funding something, since putting money into something for years with no visible output is not popular with many funders, and thus forces researchers to divert a certain % of their time to working on funder-legible incremental improvements.
- Most paradigm shifts arise from attempts to make incremental improvements that accidentally uncover something deeper in the process. It is difficult to tell before embarking on a project whether it will only yield an incremental improvement, no improvement at all, or a paradigm shift.
- Most paradigm shifts cannot occur until incremental improvements lay the foundation for the paradigm shift to happen, no matter how much effort is put into trying to recognize paradigm shifts.
DonyChristie @ 2022-03-08T05:22 (+2)
Antarctic Colony as Civilizational Backup
Recovery from Catastrophe
Antarctica could be a good candidate for a survival colony. It is isolated, making it more likely to survive a nuclear war, pandemic, or roving band of automated killer drones. It is a tough environment, making it easy to double up as a practice space for a Mars colony. Attempting to build and live there at a larger scale than has been done before may spur some innovations. One bottleneck that would likely need resolving is how to get cheaper transportation to Antarctica, which currently relies on flying or a limited number of specialized boats.
PeterSlattery @ 2022-03-08T04:39 (+2)
Creating a Giving What We Can for volunteering time and bequests (last-minute submission)
Given the success of GWWC, we would like to see organisations emerge that seek pledges and build communities around the effective use of resources, but in different ways (e.g., time rather than money, or bequests rather than donations) [inspired by this].
brb243 @ 2022-03-07T17:17 (+2)
EA community's trading bot
Artificial Intelligence, Effective Altruism
If you have the capital to invest, the ability to influence the market, and alignment with EA, why would you not get a trading bot? EAs who are among the world's top experts on AI could code it, possibly using the knowledge of their respective institutions, and the returns would of course generate impact. It saves time too; just think about it.
Alexander Ugarov @ 2022-03-11T22:24 (+1)
We see now that dictatorships slow down the progress of humanity and can plausibly threaten large-scale nuclear wars. Dictatorships are often toppled from inside by public protests (e.g. Poland 1988-1989, Tunisia 2011), but public protests face a coordination problem. There are many people willing to protest in dictatorships (e.g. Russia), but protesting in large groups is both more efficient and less risky, because law enforcement has a cap on the number of people it can detain. Idea: develop an app to sign up for a prospective protest in advance and call the protest only if the number of signed-up protesters exceeds a threshold. Participants stake their money or reputation on showing up to the protest (verified by phone geolocation) when they sign up for it. A participant loses their stake if they do not show up for a called protest. As calling protests is often illegal, the app should be anonymous, identity-based (person = account), and hard to block. I imagine that hashing a fingerprint would create a unique ID, and there are existing solutions for the other technical problems (e.g. Telegram is hard to block due to domain fronting).
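A minimal sketch of the threshold-and-stake logic (in effect an assurance contract), assuming stakes are held in escrow and attendance is verified elsewhere; every detail below is a placeholder.

```python
class ProtestPledge:
    def __init__(self, threshold: int, stake: float):
        self.threshold = threshold
        self.stake = stake
        self.pledgers = set()  # anonymous IDs (e.g. fingerprint hashes)

    def sign_up(self, anon_id: str):
        self.pledgers.add(anon_id)

    def is_called(self) -> bool:
        return len(self.pledgers) >= self.threshold

    def settle(self, attendees: set) -> dict:
        """Return stake refunds; no-shows at a called protest forfeit theirs."""
        if not self.is_called():
            return {p: self.stake for p in self.pledgers}  # never called: refund all
        return {p: (self.stake if p in attendees else 0.0) for p in self.pledgers}

pledge = ProtestPledge(threshold=3, stake=10.0)
for pid in ["a1", "b2", "c3", "d4"]:
    pledge.sign_up(pid)

print(pledge.is_called())                            # True
print(pledge.settle(attendees={"a1", "b2", "c3"}))   # d4 forfeits the stake
```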
DonyChristie @ 2022-03-12T02:56 (+1)
I've thought about this space a good deal. I think this is really dangerous stuff. It must be aligned with the good. Don't call up what you can't put down.
"Coordination is also collusion." - Alex Tabarrok
Ben_Harack @ 2022-03-10T20:27 (+1)
Sad that I missed this! Only saw this the day after it closed.
yiyang @ 2022-03-10T04:02 (+1)
A service/consultancy that calculates the value of information of research projects
Epistemic Institutions, Research That Can Help Us Improve
When undertaking any research or investigation, we want to know whether it's worth spending money or time on it. There are a lot of research-type projects in EA, and the best way to evaluate and prioritise them is to calculate their value of information (VoI). However, VoI calculations can be complex, so we need to build a team of experts that can form a VoI consultancy or service provider.
Examples of use cases:
1. A grant maker wants to know whether it's worth spending 0.5 FTE on investigating cause area Y vs. cause area X.
2. A thinktank has generated a list of policy ideas to investigate but is uncertain which to prioritise.
3. A research org also has a list of research questions but wants to know which one has the highest VoI.
In each of these use cases, I suspect a VoI consultancy could be extremely valuable.
David Manheim has written more about VoI here.
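For concreteness, one standard quantity such a consultancy might compute is the expected value of perfect information (EVPI): the expected value of deciding after learning the truth, minus the expected value of the best decision made now. A toy version of use case 1, with placeholder probabilities and values:

```python
# Should a grantmaker fund cause X or cause Y? Placeholder numbers only.
p = 0.4  # probability that X turns out to be the better cause
value = {
    "fund_X": {"X_better": 100, "Y_better": 20},
    "fund_Y": {"X_better": 30, "Y_better": 70},
}

def expected(action):
    return p * value[action]["X_better"] + (1 - p) * value[action]["Y_better"]

best_now = max(expected(a) for a in value)  # decide under uncertainty

# With perfect information, pick the best action in each world state.
best_informed = (p * max(v["X_better"] for v in value.values())
                 + (1 - p) * max(v["Y_better"] for v in value.values()))

evpi = best_informed - best_now
print(best_now, best_informed, evpi)  # 54.0 82.0 28.0
# If the 0.5 FTE investigation costs less than the EVPI, it may be worth doing.
```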
I think there might be a harder meta-problem: should we even spend time and money on calculating the VoI of certain investigations? A failure mode is where the VoI consultancy runs calculations for a bunch of research projects that turn out to have very low VoI.
I guess figuring out a baseline for the cost of doing VoI calculations, and having a cheap heuristic as a preliminary check, could help, but I'm highly uncertain.
barkbellowroar @ 2022-03-08T05:56 (+1)
Build an intranet for the effective altruism community
Effective Altruism, Empowering Exceptional People
If effective altruism is going to be "the last social movement the world needs" it will need to operate differently from past movements in order to last longer and reach more people. Given that coordination is a crucial element for success within a distributed global network, a movement intranet could improve coordination on projects, funding and research and build a greater sense of community. An intranet would also help the movement (1) consolidate and streamline processes for onboarding new people to the movement, (2) help connect people to relevant, up-to-date information and (3) reduce the burden on current organizations by encouraging greater peer-to-peer learning and mentorship. An intranet also provides greater visibility of the movement's activities in real time, helping inform leaders and donors where resources and attention are most needed. This can include supporting community health in developing and reinforcing prosocial norms for a safer, more diverse movement.
Chris Leong @ 2022-03-08T11:59 (+4)
What's the advantage of an intranet vs. a website with registration?
barkbellowroar @ 2022-03-09T04:43 (+3)
(short answer) more security, more features and the consolidation of a lot of existing but disconnected infrastructure tools... which could strengthen movement coordination, increase collaboration and calibration and sustain longterm engagement with the community.
Just like you can't catch rain with a sieve, you can miss a lot of value with a fragmented ecosystem.
(longer answer)
An intranet would subsume under one platform a lot of current tools like... event sign-ons, the forum, EA hub's directory, facebook groups, job/internship boards, the Wiki, various communication channels (twitter, discords, slacks, email etc), surveys and polls, chapter sites, separate application forms, the librarian project and organization newsletters.
An intranet can also provide a greater array of features that do not currently exist in the ecosystem including (but not limited to) spaces for sub-group discussions, tiered engagement levels, guided on-boarding for new members, greater analytics and much more.
I think the biggest benefit of all is concentrating the online activity of the movement in one place versus the present state of having to check a disorganized collection of websites, blogs, sign-ons and social accounts in order to keep up with what is going on with the community. The majority of our time should be spent on our work and collaboration - not trying to track down important or relevant information, trying to figure out how to get involved and meet people in the movement, and figuring out how to learn, grow and develop as an effective altruist.
Given the recent sunsetting of the EA Hub - and their comments that implied CEA may be attempting to develop a larger platform - this idea may be in progress. However, I still wanted to share and spark more discussion on the need for an intranet because I believe it would greatly improve movement coordination and strengthen the sense of community while significantly reducing the workload for meta organizations so they can invest more time and energy into their high impact programs.
Given the EA movement's desire to grow more, and the inconceivable amounts of money currently floating around, it may make sense to invest in a pre-packaged intranet for now while also funding a team to begin building an in-house intranet platform that can be fully customized to the needs of the movement as it grows.
If you are interested in learning more about what a unified platform for EA could look like, here are some of the more popular intranets on the market: SharePoint, Interact, GreenOrbit, Guru or MangoApps (p.s. my favorites so far are SharePoint and Interact).
[As someone personally interested in information architecture and digital taxonomy I started looking into this idea a while back and began drafting a proposal on how an EA intranet would operate and what benefits it could have for different roles within the movement. Let me know if you would be interested in reading a forum post on it - I have lots of articles in draft stage and it's hard to prioritize which ones to work on, so an expression of interest in this particular piece would definitely push it to the top of my list!]
If anyone is interested, here is a quick breakdown of the differences between intranets, extranets and the internet, and the value each provides.
Chris Leong @ 2022-03-09T06:16 (+3)
If you write a post on this I would read it.
Two minor comments:
- It's possible to create a central hub platform without making it an intranet
- I'm skeptical of the security benefits given how open EA is (vs. a normal company)
Joss Oliver @ 2022-03-07T22:28 (+1)
Evaluating powerful political groups and people (political parties/activists/…)
Values and Reflective Processes
Currently GiveWell provides people with a guide for effective giving. We could apply a similar model to provide a guide for effective voting and advocacy.
We’d like to see an organisation that evaluates particularly powerful political individuals/groups/parties and advocates for those that align with EA values.
We could evaluate them on things like:
Commitment to using evidence and careful reasoning to work out how to maximise good (particularly over the long term) with the resources available (this could be assessed from their proposed policies, parliamentary voting histories and track record). A toy sketch of how such criteria could be combined into a single score is below.
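As a purely illustrative sketch (every criterion, weight and score here is a hypothetical placeholder, not a proposed methodology), a weighted scorecard might look like this:

```python
# Toy sketch of a weighted scorecard for a political party or candidate
# (criteria, weights, and scores are all hypothetical placeholders).

CRITERIA = {
    # criterion: (weight, score on a 0-10 scale)
    "evidence_and_reasoning_in_policy": (0.40, 7),  # from proposed policies
    "voting_record_alignment":          (0.35, 5),  # from parliamentary votes
    "track_record_of_delivery":         (0.25, 6),  # from past performance
}

def overall_score(criteria):
    """Weighted average of criterion scores; weights should sum to 1."""
    return sum(weight * score for weight, score in criteria.values())

print(f"Overall score: {overall_score(CRITERIA):.2f} / 10")
```

The hard part, of course, is choosing defensible criteria and weights, which is exactly what the proposed organisation would need to work out.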
Adam Binks @ 2022-03-07T22:13 (+1)
Align university careers advising incentives with impact
Effective altruism
Students at top universities often have lots of exposure to a limited set of career paths, such as consulting and finance. Many graduates who would be well-suited to high-impact work don't consider it simply because they are unaware of it. Universities have little incentive to improve this state of affairs, as the eventual social impact of graduates is hard to evaluate and has little effect on their alma mater (with some notable exceptions). We would therefore be excited to fund efforts to more directly align university incentives with supporting their students to enter high-impact careers. We would be interested in work identifying simple heuristic metrics of career impact, and in lobbying efforts to have university league tables incorporate these measures into their rankings, rewarding universities that support students in entering impactful work.
brb243 @ 2022-03-07T21:45 (+1)
Space's preferences and objectives research
Space Governance, Artificial Intelligence, Epistemic Institutions, Values and Reflective Processes, Great Power Relations, Research That Can Help Us Improve
In order to govern space well, one needs to understand its preferences and objectives: for example, those of dark energy and dark matter. These preferences could then be weighted by an AI approved, under a veil of ignorance, by all entities, and solutions that maximize the weighted sum, while keeping wellbeing and systemic stability central, could be selected and supported by any AI-assisted space-governance system. This approach is based on sound processes for developing values and could improve relations among great powers in the universe. These insights could also inform the Fund's grantmaking strategy across multiple fields.
brb243 @ 2022-03-07T17:28 (+1)
Commercial marketing analysis
Artificial Intelligence, Epistemic Institutions, Economic Growth
What tricks does AI use to manipulate humans? For example, why are glossy spheres used increasingly often in unrelated advertisements? Other examples include color gradients that captivate attention (day and night), intrusions into physical or mental space narrated as giving one the power to defend against such intrusions or to inflict them on others, and racial and gender hierarchical power stereotypes combined with images that depict positive relationships. An AI would value such analysis, since analysis confers power. Companies that use AI to influence consumer behavior would also welcome this research, since it would help them prevent AI misalignment and outcompete others whose tricks they can explain, while using wellbeing-focused alternatives themselves. The public would benefit as well, since people could realize they can focus on more meaningful alternatives, such as long-term impact, while ignoring products marketed through negative emotions.
Ricky @ 2022-03-06T06:52 (+1)
A blockchain for people to prove their identity. Often in a disaster, people's identity documents are lost or taken. This blockchain would allow people to prove who they are and would also allow direct disaster-relief payments to be made via the blockchain.
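As a rough illustration of the underlying mechanism (the key-pair pattern most blockchain identity schemes build on), here is a minimal sketch. It assumes the third-party `cryptography` Python package; the names and flow are hypothetical, not any specific chain's API.

```python
# Minimal sketch of the key-pair identity pattern most blockchain ID schemes
# build on (assumes the third-party `cryptography` package; all names are
# hypothetical, not any specific chain's API).
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Before a disaster: the person generates a key pair; the public key (or a
# hash of it) is anchored on-chain alongside attested identity attributes.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # this is what goes on-chain

# After a disaster: a relief agency issues a one-time challenge; the person
# signs it with the private key they still control (e.g. on a phone).
challenge = b"relief-agency-challenge-2022-03-06"
signature = private_key.sign(challenge)

# The agency verifies the signature against the on-chain public key. A valid
# signature proves control of the registered identity even if all paper
# documents were lost, and the same key can receive relief payments on-chain.
try:
    public_key.verify(signature, challenge)
    print("Identity proven; payment address verified.")
except InvalidSignature:
    print("Proof failed.")
```

The blockchain's role in this pattern is to make the registered public keys tamper-evident and globally available, so verification does not depend on any single registry's records surviving the disaster.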
gavintaylor @ 2022-03-03T20:46 (+1)
Refinement of project idea #8, Pathogen sterilization technology
Add: ‘We’d also be interested in the development of therapeutic techniques that could treat infections using these (e.g. relying on physical principles) or similar approaches.’
Peter S. Park @ 2022-03-03T19:44 (+1)
Pipeline for podcasts
Effective altruism
Crowdsourced resources, networks, and grants could help EAs and longtermists create high-impact, informative podcasts.