Near-Term Effective Altruism Discord
By ozymandias @ 2018-09-09T21:14 (+4)
I have started a Discord server for near-term effective altruists. (If you haven’t used Discord before, it’s a pretty standard chat server. Most of its functions are fairly self-explanatory.)
Most of my effective altruist friends focus on the far future. While far-future effective altruists are great, being around them all the time can get pretty alienating. I don’t often argue the merits of bednets versus cash transfers, which means I get intellectually sloppy knowing I won’t be challenged. I’m slow to learn about new developments relevant to near-term effective altruism, such as discoveries in development economics. Many of the conversations I participate in work from assumptions I don’t share, such as the assumption that we have a double-digit chance of going extinct within the next twenty years.
I suspect that many other near-term effective altruists may be in the same boat, and if so I encourage them to come participate. Even if not, I hope this server can be a fun and interesting place to learn more about effective altruism and connect to other effective altruists.
“Near-term” is hard to define. I intend it to be inclusive of all effective altruists whose work and priority cause areas do not focus on the far future, whether they work on global poverty, animal welfare, mental health, politics, meta-charity, or another cause area. I ask that far-future effective altruists and people whose priority cause area is AI risk or s-risks do not participate. This runs on the honor system; I’m not going to be the Near Term EA police. There are lots of people who are edge cases and I ask them to use their best judgment.
The server is intended to be welcoming to new effective altruists, people who aren’t certain whether they want to be effective altruists or not, and people who are not currently in a place where it makes sense for them to donate, volunteer, or change careers. If you’re wondering whether you’re “not EA enough” to participate, you probably are welcome!
undefined @ 2018-09-10T10:02 (+25)
This is a nice idea, though I'd like to suggest some adjustments to the welcome message (also in view of kbog's worries, discussed below). Currently the message begins with:
"(...) we ask that EAs who currently focus on improving the far future not participate. In particular, if you currently prioritize AI risks or s-risks, we ask you not participate."
I don't think it's a good idea to select participants in a discussion according to what they think or do (it pretty much comes down to an argumentum ad hominem fallacy). It would be better to specify what the focus of the discussion is, and to welcome those interested in that topic. So I suggest replacing the above with:
"we ask that the discussion be focused on improving the near future, and that the far-future topics (such as AI risks or s-risks) be left for other venues, unless they are of direct relevance for an ongoing discussion on the topic of near future improvements." (or something along those lines).
undefined @ 2018-09-10T14:17 (+19)
I like this suggestion - personally I feel a lot of uncertainty about what to prioritize, and given that a portion of my donations goes to near-term work, I'd enjoy taking part in discussion about how best to do that, even if I'm also seriously considering whether to prioritize long-term work. But I'd be totally happy to have the topic of that space limited to near-term work.
undefined @ 2018-09-10T15:15 (+8)
+1. I'm in a very similar position - I make donations to near-term orgs, and am hungry for discussion of that kind. But because I sometimes do work for explicitly long-term and x-risk orgs, it's hard for me to be certain whether I qualify under the current wording.
kbog @ 2018-09-10T00:08 (+12)
Discord lets you separate servers into different channels where people can talk about different things. There is already an EA Discord, and of course new and near-term EAs are welcome there. I think it would be bad if we split things like this, because the more the near-term EAs isolate themselves, the more "alienated" people will feel elsewhere - a destructive feedback loop. You're creating the problem that you are trying to solve.
Also, it would reinforce the neglect of mid-term causes, which have always gotten too little attention in EA.
"I ask that far-future effective altruists and people whose priority cause area is AI risk or s-risks do not participate."
Yeah, this isn't good policy. It should be pretty clear that this is how groupthink happens, and you're establishing it as a principle. I get that you feel alienated because, what, 60% of people have a different point of view? (perish the thought!) And you want to help with the growth of the movement. But hopefully you can find a better way to do this than creating an actual echo chamber. It's clearly a poor choice as far as epistemology is concerned.
You're also creating the problem you're trying to solve in a different way. Whereas most "near-term EAs" enjoy the broad EA community perfectly well, you're reinforcing an assumption that they can't get along, that they should expect EA to "alienate" them, as they hear about your server. As soon as people are pointed towards a designated safe space, they're going to assume that everything on the outside is unfriendly to them, and that will bias their perceptions going forward.
You are likely to have a lighter version of the problem that Hatreon had with Patreon, Voat with Reddit, etc. - whenever a group of people has a problem with the "mainstream" option and someone tries to create an alternative space, the first people who jump ship to the alternative will be the highly motivated people on the extreme end of the spectrum, who are the most closed-minded and intolerant of the mainstream, and they will set the norms for the community henceforth. Don't get me wrong: it's good to expand EA with new community spaces and to be more appealing to new people, and it is always nice to see people put effort into new ideas for EA. But this is very flawed, and I strongly recommend that you revise your plans.
undefined @ 2018-09-10T02:18 (+18)
Moderator note: I found this harsher than necessary. I think a few tone changes would have made the whole message feel more constructive.
kbog @ 2018-09-10T02:25 (+2)
What statements were "harsher than necessary"?
undefined @ 2018-09-10T14:18 (+6)
I'll PM you.
Michael Plant @ 2018-09-12T08:33 (+5)
I don't find your objections here persuasive.
"Yeah, this isn't good policy. It should be pretty clear that this is how groupthink happens, and you're establishing it as a principle. I get that you feel alienated because, what, 60% of people have a different point of view?"
If you want to talk about how best to X, but you run into people who aren't interested in X, it seems fine to talk to other pro-Xers. It seems fine that FHI gathers people who are sincerely interested in the future of humanity. Is that a filter bubble that ought to be broken up? Do you see them hiring people who strongly disagree with the premise of their institution? Should CEA hire people who think effective altruism, broadly construed, is just a terrible idea?
"You're also creating the problem you're trying to solve in a different way. Whereas most 'near-term EAs' enjoy the broad EA community perfectly well, you're reinforcing an assumption that they can't get along, that they should expect EA to 'alienate' them, as they hear about your server."
To be frank, I think this problem already exists. I've literally had someone laugh in my face because they thought my person-affecting sympathies were just idiotic, and someone else say "oh, you're the Michael Plant with the weird views" which I thought was, well, myopic coming from an EA. Civil discourse, take a bow.
kbog @ 2018-09-12T10:38 (+3)
"It seems fine that FHI gathers people who are sincerely interested in the future of humanity. Is that a filter bubble that ought to be broken up?"
If so, then every academic center would be a filter bubble. But filter bubbles are about communities, not work departments. There are relevant differences between these two concepts that affect how they should work. Researchers have to have their own work departments to be productive. It's more like having different channels within an EA server. Just making enough space for people to do their thing together.
"Do you see them hiring people who strongly disagree with the premise of their institution? Should CEA hire people who think effective altruism, broadly construed, is just a terrible idea?"
These institutions don't have premises; they have teloses, and if someone will be the best contributor to the telos then sure, they should be hired, even though it's very unlikely that you will find a critic who will be willing and able to do that. But Near Term EA has a premise: that the best cause is something that helps in the near term.
"To be frank, I think this problem already exists. I've literally had someone laugh in my face because they thought my person-affecting sympathies were just idiotic, and someone else say 'oh, you're the Michael Plant with the weird views' which I thought was, well, myopic coming from an EA. Civil discourse, take a bow."
That sounds like stuff that wouldn't fly under the moderation here or in the Facebook group. The first comment, at least; the second one maybe gets a warning and downvotes.
undefined @ 2018-09-10T20:45 (+3)
"I don’t often argue the merits of bednets versus cash transfers, which means I get intellectually sloppy knowing I won’t be challenged."
"OK, but in that case wouldn't it be better to stick around people with opposing points of view?"
This seems like a pretty severe misreading to me. Ozy is saying that they want to hone their arguments against people with expertise in a particular field rather than a different field, which is perfectly reasonable.
kbog @ 2018-09-10T20:51 (+2)
You're right, I did misread it - I thought the comparison was with long-term causes.
In any case, you can always start a debate over how to reduce poverty on forums like this. Arguments like that have caught a lot of interest around here. And just because you put all the "near-term EAs" in the same place doesn't mean they'll argue with each other.
undefined @ 2018-09-11T02:47 (+1)
For what it's worth, I felt a bit alienated by the other Discord, not because I don't support far-future causes or because it was even discussing the far future, but because I didn't find the conversation interesting. I think this Discord might help me engage more with EAs, because I find the discourse more interesting, and I happen to like the way Thing of Things discusses things. I think it's good to have a variety of groups with different cultures and conversation styles, to appeal to a broader base of people. That said, I do have some reservations about fragmenting EA along ideological lines.
undefined @ 2018-09-10T02:57 (+1)
Is the other Discord not publicly viewable? I've never heard of it.
undefined @ 2018-09-11T07:32 (+3)
I am personally very interested in cause areas like global poverty, so it is great to see more people wanting to discuss the related issues in depth.
Nevertheless, I strongly support the definition of EA as a question (how can we use our resources to help others the most?) and that makes me not want to tag myself as a "[enter category here] EA" (e.g. "near-term EA", "far-future EA"...).
In practical terms, the above leads me to enjoy my views being challenged by people who have come to different conclusions and I tend to favour a "portfolio approach" to doing good, somewhat along the lines of Open Phil's "worldview diversification".
Regarding discussion, there should be great spaces for both the meta topics and the cause-specific ones. Wouldn't it be ideal if we could host all those discussions under the same roof? Maybe this thread can be used as input for the upcoming EA Forum 2.0. The feature request would be something like "make it easy to host and find worldview-specific discussions".