Possible gaps in the EA community
By Michelle_Hutchinson @ 2021-01-23T18:56 (+97)
I’m interested in having a better sense of what new kinds of projects should be set up within the EA community. I think I tend to be biased towards scepticism, and so find it easier to get a sense of what worries me about projects than which projects I’m excited about. I thought I’d have a go at writing out a few ideas which seem promising to me. I’d love to hear people’s views on them, and also to read other people’s lists. To provide a nudge towards others producing such lists, I’ve also shared some of the prompts I used to come up with the thoughts below.
I haven’t put a lot of time into this list, so I’m not suggesting any, let alone all, are great ideas - they’re just ones I’d be interested to hear more discussion around. I’m also biased by the corners of EA and the world I’ve spent most time in, for example academia.
Prompts for ideas
Aside from ‘what could we do with more of in EA?’, here are some of the specific questions I considered:
How do we win?
Along with: How are we currently falling short on that?
This is a different way of asking what our theory of change as a movement is, and what part of that theory of change currently seems weakest.
For example: I think one way we could make the world far better in decades’ time is by making it the case that all major decision makers (politicians, business leaders etc) use ‘will this most improve wellbeing over the long run?’ as their main decision criterion. Something which would make that most likely to happen is having EA ideas discussed in courses in all top universities. That led me to wonder whether we’re currently neglecting supporting and encouraging lecturers to do that.
What have I wanted from EA (but not gotten)?
For example: The UK government discussed the possibility of folding the Department for International Development into the Foreign and Commonwealth Office, and subsequently did so. DfID, in addition to having an extremely important mission, was achieving that mission pretty well: It had a reputation for being unusually evidence-based amongst development agencies. I had the general sense that the merger would be bad, and would redirect money from trying to help those in the poorest countries to pursuing British interests abroad. But I didn’t have good evidence about whether it would be good or bad overall, or an idea of what I should do about it if it was bad (write to my local MP? Sign a particular petition?). It’s possible I simply missed the work that was done on this (there certainly is some EA work adjacent to this).
What I’d have liked was:
- A succinct summary of what seemed good and bad about the change to give me an idea of whether I agreed with it.
- A really clear action plan if I wanted to help in some way. That might include, for example: sample letters to send to your MP, some considerations on what makes letters to your MP more/less likely to succeed (are emails better than physical letters, or vice versa?), a link to where you can find out who your local MP is and what the best way to contact them is.
What problems have others experienced in EA?
For example: People often appreciate being surrounded by like-minded people. That’s one benefit people often seek from working at an organisation which explicitly identifies as EA. Another possible benefit is a clearer sense that you’re probably heading in the right direction. That comes from others with the same goals as you being able to give you frequent feedback on your direction. But almost all of the impactful positions in the world are at organisations which don’t identify as EA. So it’s important for us to find ways to make sure that wherever they work, people can still have a sense of often being around people with similar values who can help them figure out their path.
Don’t make the perfect the enemy of the good
I find it hard to think about what projects EAs might work on, because of the pressure of needing to work on the thing that will help people most, rather than simply something which has a good shot at helping people some. That pressure becomes more pronounced when I think about how that time could be spent earning money to buy bed nets or deworming medicine. But I think that pressure is ultimately counterproductive, because I think we’ll only be able to do the best we can if we consider a broad array of options and think about them carefully.
Possible gaps
In academia
Supporting teaching of effective altruism at universities:
How much flexibility you have when teaching university courses varies widely, and sometimes there’s very little - for example where the person teaching doesn’t set the exam. Even when I’ve been teaching a class of that type, the resources used by different tutors varied, and I was grateful that people had made different reading lists, because those varied in difficulty and emphasis. But often there’s great latitude - either to teach a course entirely of your own design following your interests, or to teach a class requested by students, where the level of generality is something like ‘a course on bioethics’.
There are already a number of syllabi and reading lists on effective altruism out there, as well as this teaching resources database from St Andrews. But I wonder if it would be useful for there to be a point person who had experience lecturing on these topics and who was keen to field queries and generally help people find the best resources and ways of teaching them. That person might have a sense of which guest lecturers would be good fits for complementing the class. They might keep track of which existing syllabi / reading lists suit what types of classes, so that when someone is thinking of teaching on this it’s as easy as possible to find the materials that are suited to the situation.
Something I think would be particularly useful is helping people think through which topics in ethics and bioethics courses seem more and less important to cover, and where to put the emphasis. When teaching ethics, particularly applied ethics, I found it tempting to focus on issues that are known to be contentious and seem interesting to debate (such as abortion and euthanasia). Those aren’t actually the ones that seem most important to have gotten my (Oxford) students thinking about. More important was the question of what proportion of your income you should donate if you end up in the richest 10% of the UK. I would have appreciated seeing more examples of applied ethics courses with more focus on topics like the latter. (One caveat here is that I only taught as a grad student, so it’s very plausible others would have less use for support in this vein than I would have.)
Setting up academic institutes
The Global Priorities Institute is set up to do theoretic research into how to do the most good, particularly in philosophy and economics. So far, it seems to be the only centre aimed squarely at that. It seems to have done an excellent job attracting world-class philosophers, but has found it slower going hiring economists. That’s likely in large part due to founder effects of being set up by philosophers. But another problem is plausibly that Oxford is far better ranked globally for philosophy than for economics. Setting up a global priorities centre at a top economics university seems like a hard project, but one that would be great if it succeeded. I expect it would require someone with links to the institution and an economics background, as well as solid buy-in from at least one established economist there. These are pretty specific constraints. On the other hand, I felt wholly unqualified when I started working with Hilary to set up the Global Priorities Institute at Oxford. It ended up only taking about two years, requiring quite a bit of perseverance and a willingness to ask for a lot of guidance from a multitude of people.
I also wonder whether it would be useful to have academic institutes set up by EAs in disciplines such as psychology and history. It seems like an important advantage to be able to hire researchers into posts where they can spend all their effort on research rather than on administration or on teaching standard courses. It also seems useful to allow researchers to feel a bit less beholden to the standard moulds for their discipline, whether that be by method (for example writing and publishing papers individually rather than collaboratively) or by publication subject matter.
Policy
Advising on civic action
I’m not a big fan of following current affairs in detail because it takes so much time and attention. But I do feel fairly strongly about taking part in democratic decision making when there are effective ways of doing so, like voting in a general election. I rely on various EA friends to help me figure out when there are significant occasions for taking part, and in those cases what I should do (for example here’s a guide to EU referendum campaigning from years ago which made it far easier for me to take part). I’d love for there to be more EA guides out there on things like ‘how much you should care about DfID merging into the FCO, and what you might do about it’ - preferably in as concrete and user-friendly a format as possible.
Translating research into action
My impression is that EAs tend to be rather more inclined to research than action, and that one result of this is there being more theoretic arguments and academic papers than fleshed-out policy recommendations. That’s more the case in some places than others. For example in the UK there is an All Party Parliamentary Group for Future Generations which seems to be doing good work. Developing policy recommendations seems both hard to get right and sensitive. But it does seem very important.
Philanthropy
My perception is that when it comes to donations, the EA community has historically focused most on long-term pledges (most notably Giving What We Can, for which the main pledge is lifelong). That seems sensible for a number of reasons, including the amounts of money involved and creating a culture of seriousness about helping others.
But it might also be useful to do more experimentation with different ways of encouraging giving. That may be hard to do for an organisation which is all about a lifelong pledge, so it could be useful to have others doing it. For example, when you inherit money might be a good time to make a significant donation: if the money isn’t part of your usual revenue stream, you might not need all of it. And you might want to honour the person you inherited from by helping others in their name. Yet doing so might be kind of tricky: you might be making a more substantial donation than you have in the past, and so want more information about where to donate and how to do so effectively, without knowing where to go for that. Providing support for people in those situations could be pretty useful.
Relatedly, it might be useful to have some easy way for someone who’s about to make their yearly donation to chat to another person about it. I find it kind of hard to know where I should donate and useful to chat to others about it, particularly if they’re considering similar donation targets to me. On the other hand it might be challenging to set this up, because talking to a stranger seems pretty aversive. Or it could be that people already use things like the EA London community directory to find someone to talk to about their donation decisions if they want that.
Other EA endeavours in this area are Momentum (which aims to integrate giving seamlessly into life) and fundraising experiments by Charity Science.
Cause- or sector-specific community builders
Figuring out what’s most impactful is really hard. It seems especially difficult and lonely when you’re not surrounded by others aiming for the same thing. I liked Charity Entrepreneurship’s idea of starting a non-profit to support people doing what they called ‘earn to give plus’ - where the ‘plus’ was things like learning communication skills and then training other EAs in them. Alongside EtG+, I think it would be great if people could figure out how best to influence important companies towards doing good. That might mean encouraging large pharma companies to increase their in-kind donations of treatments for neglected tropical diseases, or working on improving recommender systems at top tech firms.
I could imagine the kind of support CE envisions being useful for people in a wide variety of areas. It could be cool to have a point person for an area who does things like: chats to people considering moving into that area (to help them decide), regularly checks in with people working in the area (to support them in their journey), and connects people who could productively collaborate. There seem to be people playing these types of roles in some parts of EA, but I expect we could do with more.
One problem with area-specific community building is that in order to be taken seriously and know enough to be helpful to people, you might yourself need to be doing object-level work in the area. In that case you might have rather little time for community building. Another challenge is that these kinds of activities might particularly benefit from someone doing them long-term (so that all the people in an area are aware that they’re the point person, and know them well enough to be in regular contact with them, for example). That takes time to build up, and is demanding for the person involved.
MichaelA @ 2021-01-24T09:30 (+17)
But almost all of the impactful positions in the world are at organisations which don’t identify as EA. So it’s important for us to find ways to make sure that wherever they work, people can still have a sense of often being around people with similar values who can help them figure out their path.
I share the view that this seems potentially really valuable. Anecdotally, I know an EA who seems like they could do well in roles at EA orgs, or could potentially rise to fairly high positions in government roles in a country that's not a major EA hub. There are of course many considerations influencing their thinking about which path to pursue, but one notable one is that the latter just understandably sounds less fun, less satisfying over the long term, and more prone to value drift.
I think efforts to address this issue might ideally also try to address the issue that status, validation, etc. within the EA movement are easier to access by working at EA orgs than at other orgs, and probably especially hard to access by working at orgs outside the major EA hubs (e.g., a key department of a government agency in an Asian country rather than in the UK or US).
We tried to brainstorm some ideas for how EA in general could support people like this EA I know to happily pursue roles where (by default) there'd be no EAs in their orgs and maybe only a few in their city/country as well. Some (not necessarily good) ideas, from memory:
- Have more EA conferences in these not-currently-EA-hubs, so that the people living there can sometimes get "booster shots" of EA interactions
- Provide funding for these people to occasionally travel to EA conferences / EA hubs
- Make the EA movement more geographically distributed, e.g. by some EA orgs moving to places that aren't currently hubs
Some (also not necessarily good) ideas that come to mind now:
- Support more EA community building in these areas
- Support the creation of organisations like HIPE in these areas
- This could be seen as supporting community building that's more targeted in terms of sector/career, yet not necessarily explicitly EA-branded. It could build a network of people with similar values and a desire to help each other, even if few/none explicitly identify as part of the EA movement.
- (I don't actually know much about HIPE)
- Some sort of virtual community building stuff?
- Things like the EA Anywhere group?
- Things like online coworking spaces?
- (There's obviously a lot that could be done in this broad bucket)
- Efforts to just make EAs less concerned about status, validation, etc. within the EA movement (or more concerned about those things from outside the EA movement)
- (No big ideas for this immediately come to my mind)
- Efforts to just make status, validation, etc. easier to access for people who work at non-EA orgs and outside of EA hubs
- This could include EAs sharing info about a broader range of organisations, geographical areas, career paths, etc., so that more EAs can easily see why a wider range of things are impactful
MaxRa @ 2021-01-31T14:36 (+17)
Re: Making status easier to access
One idea that just occurred to me was making it easier to reap status benefits from the GWWC giving pledge, e.g. I feel kind of proud of seeing my name on this huge numbered list and being among the first ten thousand people to sign. Relatedly, subreddits and Wikipedia projects seem to actively use badges of honor to acknowledge things like being a donor, having helped with some task, etc. Maybe we could have „Pledge“ badges.
Another idea: getting access to people one holds in high regard could also be something to think about. One could promote speakers coming to local groups, or generally promote networking within the community more.
Another thought that came up: Not being chosen for 80,000 Hours‘ career coaching felt like a symptom of my relatively low value to the community (not saying there's room for improvement in how they communicated that; it was years ago). I imagine it feels similar for some others. Maybe having motivated volunteers take on the rejected applicants would be a cheap way to signal „there are people in the community that value you being here and trying to work out an EA career path“?
MichaelA @ 2021-02-01T00:37 (+7)
I feel kind of proud of seeing my name on this huge numbered list and being among the first ten thousand people to sign.
That resonates with me.
And the mention of Wikipedia is interesting. When I was a pretty active Wikipedia editor, I indeed felt proud of and motivated by badge-type things (mainly "barnstars", if I recall correctly), as well as by random people thanking me for contributions (either by clicking a button or by posting on my talk page).
I'd guess a lot of EAs have similar mindsets, motivational patterns, etc. to a lot of Wikipedia editors, so it does seem like it could be interesting to try to learn from how Wikipedia "recruits", motivates, and retains editors.
Could you expand on what you mean by "Maybe we could have „Pledge“ badges"? E.g., where are you envisioning those badges being displayed? Are you envisioning them just being for taking the pledge, or also for other actions (e.g., recording donations, hitting some milestone in donations, being in the first 10,000 members, a badge another pledger can give you to say you helped them decide where to give...)?
(Your other ideas also seem potentially interesting, but I don't have anything in particular to say about them :) )
MaxRa @ 2021-02-01T16:02 (+9)
Could you expand on what you mean by "Maybe we could have „Pledge“ badges"? E.g., where are you envisioning those badges being displayed?
I thought about people's forum accounts. There are also the EA Hub accounts, but I basically never open the EA Hub - not sure about others. I'd probably do it similarly to Wikipedia (e.g. here), just having a small icon for the pledge that shows "GivingWhatWeCan member since April 2nd, 2020" when you hover over it. I hadn't thought about other ideas, e.g. a badge for being helpful to a person deciding on a donation! I like the idea. One worry that comes up is that it could get a bit cluttered. Also, something in me feels a bit awkward about proudly displaying something, like I could become the target of the bullies of my high school for feeling "too cool". The GWWC pledge is already so socially accepted as something cool that I don't feel this in that case.
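To make the hover mechanic concrete, here's a minimal sketch of how a profile page could attach such a badge. Everything here is hypothetical - the icon path, element ID, and function name are placeholders, and the Forum's actual codebase would do this differently:

```typescript
// Minimal sketch (not the actual Forum implementation): render a small pledge
// badge with hover text next to a username. Icon path and element ID are
// hypothetical placeholders.
function addPledgeBadge(usernameElement: HTMLElement, pledgeDate: string): void {
  const badge = document.createElement("img");
  badge.src = "/images/gwwc-pledge-icon.svg"; // hypothetical icon path
  badge.alt = "Giving What We Can pledge badge";
  // Browsers show the title attribute as a tooltip on hover.
  badge.title = `GivingWhatWeCan member since ${pledgeDate}`;
  badge.style.height = "1em";
  badge.style.marginLeft = "0.25em";
  usernameElement.appendChild(badge);
}

// Example usage, assuming the page has an element with id "username":
const usernameEl = document.getElementById("username");
if (usernameEl) {
  addPledgeBadge(usernameEl, "April 2nd, 2020");
}
```

A nice property of using the title attribute is that the tooltip comes for free from the browser, with no extra JavaScript or styling needed.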
MichaelA @ 2021-02-01T23:53 (+9)
Yeah, I think this idea - and other things in the same neighbourhood - is worth considering.
One thing worth mentioning is that GWWC already have badges you can display on websites, as well as Facebook photo frames. (This is where I found them.) So I think the intervention here wouldn't be creating them, but rather:
- getting the EA Forum - and maybe other sites - to have a clearly visible option for putting a badge there if one is a GWWC member
- normalising using them
- E.g., by directly talking to a few people about using them, and making a public statement to let people know about the idea
- maybe creating variants
I think it could be worth talking to people like Luke Freeman (who's head of GWWC) and/or Aaron Gertler (the lead Forum moderator) about this.
MichaelA @ 2021-04-03T07:07 (+2)
See also the post EA jobs provide scarce non-monetary goods, which probably influenced the views I expressed here but which I'd forgotten about till recently.
MichaelA @ 2021-03-27T05:22 (+2)
I was just re-reading the transcript of the 80k interview with Ben Todd from November 2020 and saw that it includes a section that's relevant to what I was saying here, which I'll quote below in case it's of interest to any future readers:
Benjamin Todd: And so now in this third stage [of the effective altruism movement], we’re a bit less constrained by kind of generally interested, talented people, and a bit more constrained by either people who have very particular skills that are needed, such as we used the AI technical safety example earlier, or grantmaker skill sets, the kinds of things we list on our priority problems. Or maybe we’re more constrained now by what you might want to call an organizational bottleneck, which is ability to figure out who’s interested. So there’s a kind of searching/vetting bottleneck, and figure out who would be able to contribute and then train them, manage them. And even just have things that lots of people could do.
[...]
Arden Koehler: So yeah, I guess maybe one complication here is that it feels most easy to imagine this organizational capacity bottleneck or something, in the case of like, “Well, organizations that have the effective altruism label, aren’t big enough and don’t have enough managers to basically be able to hire these people.” But then I guess since we think so many people can make such a big, positive difference working in areas besides effective altruism organizations, in government, in research, what’s the equivalent of this capacity bottleneck for those cases?
Benjamin Todd: Well, I was almost wondering if I should emphasize now that what I’ve been talking about is always just a matter of degree about which bottleneck seems like the very most pressing right now, but always additional organizational capacity, talented people, funding, they’re always useful and there’s always good things to do with those things. So I’m not saying that all those other things are just not useful at all. And you’re giving some really good examples of, “Well, if you are just a generally talented person, maybe it’s a bit harder to get some of these jobs, particularly at the nonprofits that are most central to the community right now than it was, say, in 2015.”
Benjamin Todd: But that doesn’t mean there’s nothing useful to do. You can go and train up in academia or start focusing on some kind of research there. There’s many, many hundreds or even thousands of people could go and work in government and policy positions. Yeah, you could go and work at some other nonprofits that are relevant to these issues, but not labeled as effective altruist organizations. And so yeah, having extra talented people is still really useful. It’s just, exactly what you might focus on, would be a bit different.
Arden Koehler: Yeah. I was thinking maybe one answer to the question of, what’s the analog of organizational capacity for those other areas, it might be guidance or something. Of course, this is something that 80,000 Hours is trying to provide, but figuring out what are the best roles in those other institutions and people having support or community when they’re in those other institutions so that they feel good about it and feel motivated for the long haul. Those could be sort of the equivalent. And if we got those, then it’d be easier for people to put their skills to work.
Benjamin Todd: Yes. And I think one thing that does make it hard to go and do those other things is it often requires more independence, because you might be going out alone or it might feel like that. And so yeah, in a sense that’s like another type of organizational bottleneck, is like, could someone form a really good community of people that are all trying to work in a certain area of policy together, and that would help them all do that more easily.
Arden Koehler: Yeah. I mean, I think working at an effective altruist organization, you and I are super lucky because we get to talk to people about the things that we care about all day, and talk to people who share our values. I think it’s really hard for people who are like, “I really care about these things, but I’m going to go out into the wild and work in a department of a government where nobody else will care about the same things I care about.” But I guess, if there was some way to make that less true and make those communities more supportive, then that would make it a bit more attractive and easier for people.
Benjamin Todd: Yes. And I mean, this is starting to change a bit. There are lots of other people interested in these ideas, doing those things, who will be up for chatting to you.
(I'd heard that episode back in November 2020, so it may have been one of many influences informing my comment.)
I also made a tag this morning for posts relevant to Working at EA vs Non-EA Orgs (and tagged this post), so readers interested in this topic may be interested in those posts as well.
remmelt @ 2021-02-02T15:38 (+9)
An impression after skimming this post (not well thought through; do point out what I missed):
Some of the tentative project ideas listed are oriented around extending EA's reach via new like-minded groups who will share our values and strategies.
Sentences that seemed to be supporting this line of thinking:
... making it the case that all major decision makers (politicians, business leaders etc) use ‘will this most improve wellbeing over the long run?’ as their main decision criterion.
...So it’s important for us to find ways to make sure that wherever they work, people can still have a sense of often being around people with similar values who can help them figure out their path.
...One problem with area-specific community building is that in order to be taken seriously and know enough to be helpful to people, you might yourself need to be doing object-level work in the area.
I'm unsure how much I misinterpreted specific project ideas listed in this post.
Leaving that aside, I generally worry about encouraging further outreach focused on creating like-minded groups of influential professionals (and even more about encouraging initiators to focus their efforts on making such groups look 'prestigious'). I expect that will discourage efforts in outreach to integrate importantly diverse backgrounds, approaches, and views. I would expect EA field builders to involve fewer of the specialists who developed their expertise inside a dissimilar context, take alternative approaches to understanding and navigating their field, or have insightful but different views that complement views held in EA.
A field builder who simply aims to increase EA's influence over decisions made by professionals will tend to select for and socially reward members who line up with their values/cause prio/strategy as a default tactic, I think. Inversely, taking the tactic of connecting EAs who like to talk with other EAs climbing similar career ladders leads those gathered to increasingly agree with and approve of each other for exerting influence in stereotypically EA ways. Such group dynamics can lead to a kind of impoverished homogenisation of common knowledge and values.
I imagine a corporate, academic, or bureaucratic decision maker getting involved in an EA-aligned group and consulting their collaborators on how to make an impact. Given that they're surrounded by like-minded EAs, they may not become aware of shared blindspots in EA. Conversely, they'd less often reach out and listen attentively to outside stakeholders who can illuminate them on those blindspots.
Decision makers who lose touch with other important perspectives will no longer spot certain mistakes they might make, and may therefore become (even more) overconfident about certain ways of making impact on the world. This could lead to more 'superficially EA-good' large-scale decisions that actually negatively impact persons far removed from us.
In my opinion, it would be awesome if we had
- (1) existing field-building initiatives focused on expanding the influence of EA thought, along with
- (2) corresponding efforts to really get in touch and build shared understandings with specialised stakeholders (particularly those with skin in the game) who have taken up complementary approaches and views to doing good in their field.
Some reasons:
- Dedicated EA field builders seem to naturally incline towards type 1 efforts. Therefore, it's extra important for strategic thinkers and leaders in the EA community to be deliberate and clear about encouraging type 2 efforts in the projects they advise.
- Type 1 is challenging to implement, but EA field builders have been making steady progress in scaling up initiatives there (e.g. staff at Founder's Pledge, Global Priorities Institute, Center for Human-Compatible AI).
- Type 2 seems much more challenging intellectually. It requires us to build bridges that allow EA and non-EA-identifying organisations to complement each other: complex, nuanced perspectives that let us move between general EA principles and arguments, and the contextual awareness and domain-specific know-how (amongst other things) of experienced specialists. I have difficulty recalling EA initiatives that were explicitly intended to coordinate type 2 efforts.
At this stage, I would honestly prefer it if field builders started paying much deeper attention to type 2 before they go out changing other people's minds and the world. I'm not sure how much credence to put in this being a better course of action, though. I have little experience reaching out to influential professionals myself. It also feels like I'm speculating here on big implications in a way that may be unnecessary or exaggerated. I'd be curious to hear more nuanced arguments from an experienced field builder.
casebash @ 2021-01-24T02:28 (+8)
Yeah, I agree that there would be significant benefits to trying to set up another academic research institute at a university more focused on economics.
MichaelA @ 2021-01-24T09:35 (+8)
Same here.
The idea of "academic institutes set up by EAs in disciplines such as psychology and history" also sounds potentially exciting to me. And I wrote some semi-relevant thoughts in the post Some history topics it might be very valuable to investigate (and other posts tagged History may be relevant too).
jared_m @ 2021-01-27T00:18 (+5)
Agreed. The University of Chicago — with its Becker Friedman Institute, Center for Decision Research, broad EA community, and generous economics funders — could be a promising option.
starmz12345@gmail.com @ 2021-01-28T02:50 (+3)
Definitely agree with this, as someone currently at UChicago! The Center for Radical Innovation for Social Change (RISC) recently put out a call for animal welfare proposals and Steve Levitt has connections to Schmidt Futures (an EA-adjacent philanthropic initiative), so that could be a promising place to start.
jared_m @ 2021-01-30T11:37 (+2)
Thank you for sharing! I hadn't looked deeply into RISC's work before — and very helpful to know about Levitt's ties to Schmidt Futures.
MichaelA @ 2021-01-24T09:13 (+4)
What I’d have liked was:
- A succinct summary of what seemed good and bad about the change to give me an idea of whether I agreed with it.
- A really clear action plan if I wanted to help in some way. That might include, for example: sample letters to send to your MP, some considerations on what makes letters to your MP more/less likely to succeed (are emails better than physical letters, or vice versa?), a link to where you can find out who your local MP is and what the best way to contact them is.
This seems like a good idea to me. And the second idea seems to me like a potential Task Y, meaning something which has some or all of the properties:
- "Task Y is something that can be performed usefully by people who are not currently able to choose their career path entirely based on EA concerns*.
- Task Y is clearly effective, and doesn't become much less effective the more people who are doing it.
- The positive effects of Task Y are obvious to the person doing the task."
Relatedly, that second idea also seems like something anyone could just start and provide value in right away - no need for permission, special resources, or unusual skills. (My local EA group actually discussed similar things previously in the context of climate change, and took some minor actions in this direction.)
MichaelA @ 2021-02-22T01:26 (+3)
I've just made a shortform post on Some ideas for projects to improve the long-term future. I brainstormed the ideas before seeing this post, but this post is part of what prompted me to share the ideas publicly. And the shortform is only moderately rather than massively long, so I'll copy the whole thing below rather than just linking to it. (Maybe that's a bit weird? If so, sorry!)
---
In January, I spent ~1 hour trying to brainstorm relatively concrete ideas for projects that might help improve the long-term future. I later spent another ~1 hour editing what I came up with for this shortform. This shortform includes basically everything I came up with, not just a top selection, so not all of these ideas will be great. I’m also sure that my commentary misses some important points. But I thought it was worth sharing this list anyway.
The ideas vary in the extent to which the bottleneck(s) to executing them are the right person/people, buy-in from the right existing organisation, or funding.
I’m not expecting to execute these ideas in the near-term future myself, so if you think one of these ideas sounds promising and relevant to your skills, interests, etc., please feel very free to explore the idea further, to comment here, and/or to reach out to me to discuss it! [If commenting, please comment on the shortform version of this, to centralise discussion there.]
- Something along the lines of compiling a large set of potentially promising cause areas and interventions; doing rough Fermi estimates, cost-effectiveness analyses, and/or forecasts; thereby narrowing the list down; and then maybe gradually doing more extensive Fermi estimates, cost-effectiveness analyses, and/or forecasts
- This is somewhat similar to things that Ozzie Gooen, Nuño Sempere, and Charity Entrepreneurship have done or are doing
- Ozzie also discusses some similar ideas here
- So it’d probably be worth talking to them about this
- Something like a team of part-time paid forecasters, both to forecast on various important questions and to be “on-call” when it looks like a catastrophe or window of opportunity might be looming
- I think I got this idea from Linch Zhang, and it might be worth talking to him about it
- 80,000 Hours-style career reviews on things like diplomacy, arms control, international organisations, becoming a Russia/India/etc specialist
- Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI
- This might allow them to complete additional valuable projects
- This also might help the research or writing assistants build career capital and test fit for valuable roles
- Maybe BERI can already provide this?
- It’s possible it’s not worth being proactive about this, and instead waiting for people to decide they want an assistant and create a job ad for one. But I’d guess that some proactiveness would be useful (i.e., that there are cases where someone would benefit from such an assistant but hasn’t thought of it, or doesn’t think the overhead of a long search for one is worthwhile)
- See also this comment from someone who did this sort of role for Toby Ord
- Research or writing assistance for certain independent researchers?
- Ops assistance for orgs like FHI?
- But I think orgs like BERI and the Future of Humanity Foundation are already in this space
- Additional “Research Training Programs” like summer research fellowships, “Early Career Conference Programmes”, internships, or similar
- Probably best if this is at existing orgs
- Could perhaps find an org that isn’t doing this yet but has researchers who would be capable of providing valuable mentorship, suggest the idea to them, and be or find someone who can handle the organisational aspects
- Something like the Open Phil AI fellowship, but for another topic
- In particular, something that captures the good effects a “fellowship” can have, beyond the provision of funding (since there are already some sources of funding alone, such as the Long-Term Future Fund)
- A hub for longtermism-relevant research (or a narrower area, e.g. AI) outside of the US and UK
- Found an organization/community similar to HIPE and/or APPGFG, but in countries other than the UK
- I’d guess it’d probably be easiest in countries where there is a substantial EA presence, and perhaps easier in smaller countries like Switzerland rather than in the US
- Why this might/might not be good:
- I don’t know a huge amount about HIPE or APPGFG, but from my limited info on those orgs they seem valuable
- I’d guess that there’s no major reason something similar to HIPE couldn’t be successfully replicated in other countries, if we could find the right person/people
- In contrast, I’d guess that there might be more barriers to successfully replicating something like APPGFG
- E.g., most countries probably don’t have an institution very similar to APPGs
- But I imagine something broadly similar could be replicated elsewhere
- Potential next steps:
- Talk to people involved in HIPE and APPGFG about whether they think these things could be replicated, how valuable they think that’d be, how they’d suggest it be done, what countries they’d suggest, and who they’d suggest talking to
- Talk to other EAs, especially outside of the UK, who are involved in politics, policy, and improving institutional decision-making
- Ask them for their thoughts, who they’d suggest reaching out to, and (in some cases) whether they might be interested in collaborating on this
- I also had some ideas for specific research or writing projects, but I’m not including them in this list
- That’s partly because I might publish something more polished on that later
- It’s mostly because people can check out A central directory for open research questions for a broader set of research project ideas
- See also Why you (yes, you) should post on the EA Forum
The views I expressed here are my own, and do not necessarily reflect the views of my employers.
MichaelA @ 2021-01-24T08:59 (+3)
Thanks for this post! I think I basically share the view that all of those prompts are useful and all of those "gaps" are worth seriously considering. I'll share some thoughts in separate comments.
(FWIW, I think maybe the idea I feel least confident is worth having an additional person focus ~full-time on - considering what other activities are already being done - is creating "some easy way for someone who’s about to make their yearly donation to chat to another person about it.")
Regarding influencing future decision-makers
Something which would make that most likely to happen is having EA ideas discussed in courses in all top universities. That led me to wonder whether we’re currently neglecting supporting and encouraging lecturers to do that.
Both of those claims match my independent impression.
On the first claim: This post using neoliberalism as a case study seems relevant (I highlight that mainly for readers, not as new evidence, as I imagine that article probably already influenced your thinking here).
On the second claim: When I was a high school teacher and first learned of EA, two of the main next career steps I initially considered were:
- Try to write a sort of EA textbook
- Try to become a university lecturer who doesn't do much research, and basically just takes on lots of teaching duties
- My thinking was that:
- I'd seen various people argue that it's a shame that so many world-class researchers have to spend much of their time teaching when that wasn't their comparative advantage (and in some cases they were outright bad at it)
- And I'd also heard various people argue that a major point of leverage over future leaders may be influencing what ideas students at top unis are exposed to
- So it seemed like it might be worth considering trying to find a way to specialise in taking teaching load off top researchers' plates while also influencing future generations of leaders
- I didn't actually look into whether jobs along those lines exist. I considered that maybe, even if they don't exist, one could be entrepreneurial and convince a uni to create one, or adapt another role into that.
- Though an obstacle would probably be the rigidity of many universities.
I ultimately decided on other paths, partly due to reading more of 80k's articles. And I do think the decisions I made make more sense for me. But reading this post has reminded me of those ideas and updated me towards thinking it could be worth some people considering the second one in particular.
Supporting teaching of effective altruism at universities
I feel quite good about the ideas in this section - I'd definitely be excited for one or more things along those lines to be done by one or more people who are good fits for that.
Some of those activities sound like they might be sort-of similar to some of the roles people involved in other EA education efforts (e.g., Students for High-Impact Charity, SPARC) and Effective Thesis have played. So maybe it'd be valuable to talk to such people, learn about their experiences and their perspectives on these ideas, etc.
MichaelA @ 2021-01-24T09:50 (+2)
Misc small comments
For example, when you inherit money might be a good time to make a significant donation: if the money isn’t part of your usual revenue stream, you might not need all of it.
This does seem like a good idea to me, but I think Generation Pledge might already be doing something like that? (That said, I don't know much about them, and I don't necessarily think that one org doing ~X means no other org should do ~X.)
Also, for people thinking about this broader idea of potentially setting up pledges (or whatever) that cover things GWWC isn't designed for, it may be useful to check out A List of EA Donation Pledges (GWWC, etc).
It could be cool to have a point person for an area who does things like: chats to people considering moving into that area (to help them decide), regularly checks in with people working in the area (to support them in their journey), and connects people who could productively collaborate.
I know very little about Animal Advocacy Careers, but this sounds like the sort of thing they might do? And if they don't do it, then maybe they could start doing so for the animal space (which could be useful directly and also could provide a model others could learn from)? And if they raise strong specific reasons to be inclined against doing that (rather than just reasons why it's not currently their top priority), that could be useful to learn from as well.
But I think that pressure is ultimately counterproductive, because I think we’ll only be able to do the best we can if we consider a broad array of options and think about them carefully.
Yeah, I think it'd be pretty terrible if people took EA's focus on prioritisation, critical thinking, etc. as a reason to not raise ideas that might turn out to be uninteresting, low-quality, low-priority, or whatever. It seems best to have a relatively low bar for raising an idea (along with appropriate caveats, expressions of uncertainty, etc.), even if we want to keep the bar for things we spend lots of resources on quite high. We'll find better priorities if we start with a broad pool of options.
(See also babble and prune [full disclosure: I don't know if I've actually read any of those posts].)
(Obviously some screening is needed before even raising an idea - we won't literally say any random sequence of syllables, and we should probably not bother writing about every idea that seemed potentially promising for a moment but not after a minute of thought. But it basically seems best to keep the bar for raising ideas quite low.)
casebash @ 2021-01-24T11:14 (+2)
I also think Charity Science might have tried getting people to pledge in their wills.
MichaelA @ 2021-01-24T09:00 (+2)
A long quibbly tangent
I think one way we could make the world far better in decades’ time is by making it the case that all major decision makers (politicians, business leaders etc) use ‘will this most improve wellbeing over the long run?’ as their main decision criterion.
I'd say there’s a >50% chance that this would indeed be good, and that it’s plausible it'd be very good. But it also seems to me plausible that this would be bad or very bad. This is for a few reasons:
- You didn't say what you meant by wellbeing. A decision maker might say "wellbeing" and mean only the wellbeing of humans, or of people in countries like theirs (e.g., predominantly English-speaking liberal democracies), or of people in their country, or of an in-group of theirs within their country (e.g., people with the same political leaning or race as them).
- This could be because they explicitly believe that only those people are moral patients, or just because that's who they implicitly focus on.
- If the decision makers do have a narrow subset of all moral patients in mind when they think about increasing wellbeing, that would probably at least reduce the benefits of decision makers having that as their main criterion. It might also lead to that criterion being net harmful, if it means people are consequentialist altruists for one group only, having stripped away the norms and deontological constraints that often help prevent certain bad behaviours.
- Maybe this is just a nitpick, as you could just edit your statement to incorporate some sort of impartiality. But then you'd have to grapple with exactly how to do that - do we want the criteria decision makers use to come pre-loaded with our current best guesses about moral patienthood and weights? Or with some particular way of handling moral uncertainty? Or with some general principles for thinking about how to handle moral uncertainty?
- I have an intuition that just making people more consequentialist and more altruistic-in-some-sense, without also making them more rational, reflective, cautious, etc., has a decent chance of being harmful. I think the (overlapping) drivers of this intuition are:
- The fact that doing that would move a seemingly important variable into somewhat uncharted territory, so we should start out pretty uncertain about what outcomes it would have, and thus predict a nontrivial chance of fairly bad outcomes
- The various potential ways people have suggested naive consequentialism could cause harms (even from a consequentialist perspective)
- There seeming to have been some historical cases where people have been mobilised to do bad things by consequentialist and altruistic-in-some-sense arguments ("for the greater good")
- A sort of Chesterton's fence / Secrets of Our Success-style argument for thinking very carefully before substantially changing anything that currently seems like a major part of how the world runs (even if it seems at first glance like the consequences of the change would be good)
[The above statements of mine are pretty vague, and I can try to elaborate if that’d be useful.]
So I'd favour thinking more about precisely what sort of changes we want to make to future decision-makers’ values, reasoning, and criteria for decision-making, and doing so before we make any major pushes on those fronts.
And beyond that generic "more research needed" statement, I'd favour trying to package increases in consequentialism and generic altruism with more reflection on moral circles, more reflectiveness in general, various rationality skills and ideas, and probably some other things like that.
The following posts and their comment sections contain some relevant prior discussion:
- Everyday Longtermism
- Especially the section Safeguarding against naive utilitarianism, which presents a model/graph that I think is very interesting and helpful
- Improving the future by influencing actors' benevolence, intelligence, and power
...but, I think all of this might be pretty much just a tangent. That’s because I think we could just change the sentence of yours that I quoted at the start of this comment to make it reflect a broader package of attributes we want to change in future leaders, and your other points would still stand. E.g., teaching at universities could try to inculcate not just consequentialism and generic altruism but also more reflection on moral circles, more reflectiveness in general, various rationality skills and ideas, etc.