Possible changes to EA, a big upvoted list

By Nathan Young @ 2023-01-18T18:56 (+43)

We should put all possible changes/reforms in a big list, that everyone can upvote/downvote, agree disagree.

EA is governed by a set of core EAs, so if you want change, I suggest giving them less to read and a strong signal of community consensus.

Each top-level comment should be a short, clear explanation of a possible change. If you want to comment on a change, do it as a reply to the top-level comment.
 

This other post gives a set of reforms, but they are in a big long list at the bottom. Instead we can have a list ordered by our opinions! https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1 


Note that I do not agree with all comments I post here.


Muireall @ 2023-01-18T21:35 (+62)

Beyond transparently disclosing financial and personal relationships with (e.g.) podcast guests or grantees, EA institutions should avoid apparent conflicts of interest more strictly. For example, grant reviewers should recuse themselves from reviewing proposals by their housemates.

Nathan Young @ 2023-01-19T17:57 (+4)

I'd be curious to hear disagreements with this.

Nathan Young @ 2023-01-18T22:05 (+2)

I guess the latter half of this suggestion already happens.

Muireall @ 2023-01-18T22:11 (+4)

Does it? The Doing EA Better post made it sound like conflict-of-interest statements are standard (or were at one point), but recusal is not, at least for the Long-Term Future Fund. There's also this Open Philanthropy  OpenAI grant, which is infamous enough that even I know about it. That was in 2017, though, so maybe it doesn't happen anymore.

Nathan Young @ 2023-01-18T22:22 (+3)

Sorry what was the CoI with that OpenAI grant?

Muireall @ 2023-01-18T23:14 (+9)

I'm mainly referring to this, at the bottom:

OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.

Holden is Holden Karnofsky, at the time OP's Executive Director, who also joined OpenAI's board as part of the partnership initiated by the grant. Presumably he wasn't the grant investigator (not named), just the chief authority of their employer. OP's description of their process does not suggest that he or the OP technical advisors from OpenAI held themselves at any remove from the investigation or decision to recommend the grant:

OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI’s office, conversations with OpenAI’s leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors.

Nathan Young @ 2023-01-18T23:47 (+2)

Hm. I still don't really see the issue here. These people all work at OpenPhil right? 

I guess maybe it looks fishy, but in hindsight do we think it was?

Muireall @ 2023-01-19T00:08 (+24)

No, Dario Amodei and Paul Christiano were at the time employed by OpenAI, the recipient of the $30M grant. They were associated with Open Philanthropy in an advisory role.

I'm not trying to voice an opinion on whether this particular grant recommendation was unprincipled. I do think that things like this undermine trust in EA institutions, set a bad example, and make it hard to get serious concerns heard. Adopting a standard of avoiding appearance of impropriety can head off these concerns and relieve us of trying to determine on a case-by-case basis how fishy something is (without automatically accusing anyone of impropriety).

Jason @ 2023-01-19T01:51 (+50)

Give users the ability to choose among several karma-calculation formulas for how they experience the Forum. If they want a Forum experience where everyone's votes have equal weight, there could be a Use Democratic Karma setting. Or stick with Traditional Karma. Or Show Randomly / No Karma. There's no clear need for the Forum to impose the same sorting values on everyone.

Chris Leong @ 2023-01-19T02:16 (+8)

That’s actually a pretty creative idea.

titotal @ 2023-01-18T19:36 (+49)

EA should engage more with existing academic research in fields such as Disaster Risk Reduction, Futures Studies, and Science and Technology Studies

Ozzie Gooen @ 2023-01-18T20:10 (+17)

I'd recommend splitting these up into different answers, for scoring.  I imagine this community is much more interested in some of these groups than others.

Jaime Sevilla @ 2023-01-19T02:36 (+8)

Ways of engaging #3: inviting experts from fields to EAG(X)s

Jaime Sevilla @ 2023-01-19T02:35 (+8)

Ways of engaging #2: proactively offering funding to experts from respective fields to work on EA-relevant topics

Jaime Sevilla @ 2023-01-19T02:35 (+8)

Ways of engaging #1: literature reviews and introductions of each field for an EA audience.

Nathan Young @ 2023-01-19T17:57 (+2)

And put them on the forum wiki.

Jaime Sevilla @ 2023-01-19T02:38 (+6)

Ways of engaging #4: making a database of experts in fields who are happy to review papers and reports from EAs

Guy Raveh @ 2023-01-19T13:41 (+2)

Ways of engaging #5: prioritise expertise over value alignment during hiring (for a subset of jobs).

oivavoi @ 2023-01-19T21:17 (+1)

...and updated research on climate risk.

Nathan Young @ 2023-01-23T12:08 (+2)

80k's view is pretty recent right?

BrownHairedEevee @ 2023-01-18T20:29 (+43)

Set up at least one EA fund that uses the quadratic funding mechanism combined with a minimal vetting process to ensure that all donation recipients are aligned with EA.

How this could work:

This dovetails with increasing diversity of moral views in EA.
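
As a concrete illustration of the mechanism being proposed, here is a minimal sketch of the quadratic funding matching rule (project names and numbers are made up; this is not a description of any existing EA fund):

```python
import math

def quadratic_match(contributions_by_project, matching_pool):
    """Quadratic funding match (Buterin/Hitzig/Weyl), scaled to the pool.

    contributions_by_project: dict of project -> list of individual donation
    amounts. A project's ideal match is (sum of sqrt(donation))^2 minus the
    donations themselves; if the ideal matches exceed the pool, all matches
    are scaled down proportionally.
    """
    ideal = {}
    for project, donations in contributions_by_project.items():
        ideal[project] = sum(math.sqrt(d) for d in donations) ** 2 - sum(donations)
    total_ideal = sum(ideal.values())
    scale = min(1.0, matching_pool / total_ideal) if total_ideal > 0 else 0.0
    return {project: match * scale for project, match in ideal.items()}

# Hypothetical example: 100 donors giving $10 each attract far more matching
# than a single donor giving $1,000, even though both projects raised the
# same amount directly.
print(quadratic_match({"many_small_donors": [10] * 100, "one_large_donor": [1000]},
                      matching_pool=5000))
```

The point of the mechanism is that the match tracks how broadly a project is supported, not just how much money it raised.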

Nathan Young @ 2023-01-19T17:59 (+5)

Have you considered applying for funding for running one?

BrownHairedEevee @ 2023-01-19T19:06 (+4)

I have thought of it but it wasn't a priority for me at the time.

Gitcoin has retired their original grants platform, but they're replacing it with a new decentralized grants protocol that anyone can use, which will launch in early Q2, 2023. I would like to wait until then to use that.

Nathan Young @ 2023-01-18T19:07 (+38)

There should be 1 searchable database of all EA grants.

Nathan Young @ 2023-01-18T19:38 (+32)

EA orgs should experiment with hiring ads targeted specifically at experts in the field and consider how much those experts need knowledge of EA for the specific role.

Julia_Wise @ 2023-01-19T02:50 (+5)

I'm going to interpret this to include "hiring outreach beyond ads" for fields where hiring isn't done mostly through ads.

Minh Nguyen @ 2023-01-22T10:38 (+2)

Wait, is this not the case? 0.0

I worked in some startups and a business consultancy, and this is like, the first thing I learned in hiring/headhunting. While writing up Superlinear prize ideas, I made a few variations of SEO prizes targeting mid- to senior-level experts via keywords such as field-specific jargon, upcoming conferences, common workflow queries and new regulations.

JoshuaBlake @ 2023-01-20T15:20 (+2)

This seems like an inefficient way to approach experts initially

Nathan Young @ 2023-01-18T19:11 (+28)

Acknowledge that sometimes issues are fraught and that we should discuss them more slowly (while still having our normal honesty norms)

Linda Linsefors @ 2023-01-21T21:21 (+6)

I don't understand this suggestion. How is this not just applause lights? What would be a sensible opposing view?

Tsunayoshi @ 2023-01-22T02:06 (+5)

While Nathan's suggestion is certainly framed very positively, people might object that sometimes the only way to change a system where power is highly concentrated at the top is to use anger about current news as a coordination mechanism to demand immediate change. Once attention invariably fades away, it becomes more difficult to enact bottom up changes.

Or to put it differently: often slowing down discussions really is an attempt at shutting them down ("we will form a committee to look into your complaints"). That's why I think that even though I agreed with the decision to collect all Bostrom discussion in one post, it's important to honestly signal to people that their complaints are read and taken seriously.

Nathan Young @ 2023-01-23T12:10 (+4)

It certainly felt like the Bostrom stuff needed to be discussed immediately. I wish I'd felt comfortable saying "let's wait a couple of days". 

MichaelStJules @ 2023-01-19T03:54 (+2)

How would we ensure this happens? Censorship, e.g. keeping related posts in Personal category rather than Community? Heavier moderation?

MichaelStJules @ 2023-01-19T08:30 (+3)

Or should it just be the EA Forum mods' job to pin comments to such posts or make centralized threads with such reminders? Or is it everyone's job? Will responsibility become too diffuse, so that nothing changes?

Nathan Young @ 2023-01-19T17:58 (+2)

I think "ensure" is too strong. I think if several people say "let's take a day" then that would be effective.

titotal @ 2023-01-18T20:44 (+24)

Employees of EA organisations should not be pressured by their superiors against publishing work critical of core beliefs. 

Arepo @ 2023-01-19T00:17 (+27)

Is there evidence that they are?

dan.pandori @ 2023-01-19T00:53 (+17)

While I agree with this question in the particular, there's a real difficulty because absence of evidence is only weak evidence of absence with this kind of thing.

titotal @ 2023-01-19T08:58 (+1)

There are allegations of this occurring in the doing EA better post. Ironically, if this is occurring, then it easily explains why we don't have concrete evidence of it yet: people would be worried about their jobs/careers. 

Arepo @ 2023-01-19T12:04 (+1)

Can you point me to where? I don't have time to read the post in full, and searching 'pressure' didn't find anything that looked relevant (I saw something about funding being somewhat conditional on not criticising core beliefs, but didn't see anything about employees specifically feeling so constrained).

Tsunayoshi @ 2023-01-19T00:19 (+23)

A study should be conducted that records and analyses the reactions and impressions of people when first encountering EA. Special attention should be paid to reactions of underrepresented groups, such as groups based on demographics (age, race, gender, etc.), worldview (politics, religion, etc.) or background (socioeconomic status, major, etc.).

titotal @ 2023-01-18T19:38 (+23)

EA should recruit more from the humanities and social studies fields.

kbog @ 2023-01-19T04:51 (+16)

We should recruit more from every field.

Would a more precise idea be: "EA should spend less time trying to recruit from philosophy, economics and STEM, in order to spend more time trying to recruit from the humanities and social studies"?

Edit: although philosophy and economics are already humanities and social studies...

titotal @ 2023-01-19T09:18 (+3)

I think this reveals the shortcomings of making decisions using this kind of upvote/downvote poll: the results will be highly dependent on the "vibe" or exact wording of a proposal. 

I think your wording would end up with a negative score, but if instead I phrased it as "the split between STEM and humanities focus should be 80-20 instead of 90-10" (using made-up numbers), then it might swing the other way again. The wording is a way of arguing while pretending we're not arguing. 

kbog @ 2023-01-19T17:06 (+2)

I think the format is fine, you just have to write a clear and actionable proposal, with unambiguous meaning.

Nathan Young @ 2023-01-18T18:56 (+23)

Any answer below this shouldn't happen.

i.e. any answer with fewer upvotes on its top-level comment shouldn't happen. This is a way to broadly signal at what point you think the answers "become worth doing". Edited for clarity, thanks Guy

Felix Wolf @ 2023-01-19T08:19 (+3)

What is the line: Karma or Agreement?

Max Clarke @ 2023-01-20T00:11 (+1)

It's karma - which is kind of wrong here.

Nathan Young @ 2023-01-23T12:11 (+2)

Can an opinion be right but unimportant?

Max Clarke @ 2023-01-26T22:58 (+6)

Definitely, for example if people are bikeshedding (vigorously discussing something that doesn't matter very much)

Guy Raveh @ 2023-01-18T20:21 (+3)

I'm confused, what did you mean to happen with this comment?

dan.pandori @ 2023-01-19T00:11 (+2)

This post makes it harder than usual for me to tell if I'm supposed to upvote something because it is well-written, kind, and thoughtful vs whether I agree with it.

I'm going to continue to use up/downvote for good comment/bad comment and disagree/agree for my opinion on the goodness of the idea.

[EDIT: addressed in the comments. Nathan at least seems to endorse my interpretation]

Max Clarke @ 2023-01-20T00:14 (+1)

I think because the sorting is solely on karma, the line is "Everything above this is worth considering" / "Everything below this is not important" as opposed to "Everything above this is worth doing"

Coafos @ 2023-01-19T02:01 (+20)

Cap the number of strong votes per week.

Strong votes with large weights have their uses in uncommon situations. But these situations are uncommon, so instead of weakening strong votes, make them rarer.

The guideline says use them only in exceptional cases, but there is no mechanism enforcing it: socially, strong votes are anonymous and look like standard votes; and technically, any number of them could be used. They could make a comment section appear very one-sided, but with rarity, some ideas can be lifted/hidden, and the rest of the section can be more diverse.

I do not think this is a problem now, because current power users are responsible. But that is good fortune rather than a guarantee, and it could change in the future. Incidentally, this would also set a bar for what is considered exceptional, e.g. "this comment is in the top X this week".
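
A minimal sketch of how such a cap could work mechanically (the weekly limit of 5 is a made-up number, and this is not how the Forum currently works):

```python
from collections import defaultdict
from datetime import datetime, timedelta

STRONG_VOTES_PER_WEEK = 5  # made-up cap for illustration

class StrongVoteLimiter:
    """Track each user's recent strong votes and refuse ones over the weekly cap."""

    def __init__(self):
        self.history = defaultdict(list)  # user -> timestamps of strong votes

    def try_strong_vote(self, user, now=None):
        now = now or datetime.utcnow()
        week_ago = now - timedelta(days=7)
        # Keep only strong votes cast in the last seven days.
        recent = [t for t in self.history[user] if t > week_ago]
        self.history[user] = recent
        if len(recent) >= STRONG_VOTES_PER_WEEK:
            return False  # caller would fall back to a normal-strength vote
        recent.append(now)
        return True
```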

Michael_PJ @ 2023-01-19T18:58 (+5)

The guideline says use them only in exceptional cases

I've never noticed this guideline! If this is the case, I would prefer to make it technically harder to do. I've just been doing it if I feel somewhat strongly about the issue...

Jason @ 2023-01-19T19:48 (+5)

Do we know what number of votes Forumwide are strong vs standard? If it is fairly low, publishing that might help us all understand how to use them better.

(My take is that I should not be using a looser standard than the norm because that would make my voice count more than it should. So if I saw data suggesting my standard were looser than the norm, it would inform when I strongvote in the future.)

Max Clarke @ 2023-01-20T00:07 (+2)

I think some kind of "strong vote income", perhaps just a daily limit as you say, would work.

Nathan Young @ 2023-01-19T19:01 (+2)

I will sort of admit to not being that responsible. I probably use a couple of strong votes per post, usually when I think something is really underrated. I guess I might be more sparing now.

Max Clarke @ 2023-01-20T00:08 (+1)

One situation I use strong votes for is whenever I do "upvote/disagree" or "downvote/agree". I do this to offset others who tend not to split their votes.

Nick Whitaker @ 2023-01-19T00:47 (+18)

Central EA organizations should not make any major reforms for 6 months to allow for a period of reflection and avoid hasty decisions

Jaime Sevilla @ 2023-01-18T19:05 (+16)

Every EA-affiliated org should clearly state on their website the sources of funding that contributed over $100k.

Ivy_Mazzola @ 2023-01-18T23:56 (+12)

Why? I don't see the point except that then a reader can shame the org for taking money from someone the reader doesn't like. Let orgs be judged on their outputs per dollar spent please

Jaime Sevilla @ 2023-01-19T02:34 (+7)

More transparency about money flows seems important for preventing fraud, understanding centralization of funding (and so correlated risk) and allowing people to better understand the funding ecosystem!

Ivy_Mazzola @ 2023-01-19T09:02 (+15)

I have to be honest.. I think this is a horrible solution for all three of those problems. As in, if you enact this solution you can't say you've made meaningful progress on any of those.

Not only that but I don't think EA actually contains those 3 as "problems" to a degree that they would even warrant new watchdogging policies for orgs. Like, maybe those 3 aspects of EA aren't actually on fire or otherwise notably bad?

Example: People like to say that funding is not transparent in EA. But are they talking about the type of transparency which would be solved by this proposal? I think not. I think EA Funds and OPP are very transparent. You just have to go to their websites, which is a much better tactic than visiting dozens of EA org grantee websites. I think what people who are in the know mean when they say "EA needs funding transparency" is something like "people should be told why their grants were not approved" and "people ought to know how much money is in each fund so applicants know how likely it is to get a grant at what scale of project and so donors know which funds are neglected". Which is fair, but it has nothing to do with EA orgs listing their major donors on their websites.

In some sense "EA needs funding transparency" has become an information cascade. Many people say it not realizing why they and others say it and assume there is a problem where there isn't one.

And my concern with a poll like this (Edit: and all comments on the EA Forum actually) is that people will read those buzzwords from information cascades, then quickly conclude that the suggestion sounds important (because it hits buzzwords), assume it will solve something, and vote for it. The result, I think, is that the real systemic issues in EA remain unsolved or are being "solved" poorly, in a pasted-on manner that's satisfying to the critics but doesn't actually get at the important bits.

It almost feels like EA is full of motte-and-bailey fallacies. And I think something similar is going on with the other reasonings named above too, I just have already said enough with that one as an example 😐

oivavoi @ 2023-01-19T21:22 (+1)

The fact that a commonsensical proposal like this gets downvoted so much is actually fairly indicative of current problems with  tribalism and defensiveness in EA culture.

Nathan Young @ 2023-01-19T21:29 (+10)

I disagree, I think people just disagree with it. If it's tribalism because people downvote it, it would be tribalism if they upvoted it too. 

Michael_PJ @ 2023-01-20T09:22 (+4)

You really don't think there are any legitimate reasons to disagree with this? I can think of at least a few:

  • The cost in terms of time and maintenance is non-negligible.
  • The benefit is small, especially if you think that funding conflicts are not actually a big deal right now.
titotal @ 2023-01-19T09:29 (+13)

As of this writing, the suggestion "EA institutions should select for diversity with respect to hiring" has 17 karma and an agreement score of -21 (with 52 votes).

My suggestion "EA orgs should aim to be less politically and demographically homogenous" has 14 karma and an agreement score of +27 (with 21 votes). 

Why are these two statements so massively different in agreement score?

These suggestions, while not exactly equivalent, seem very similar. (How exactly will you become less demographically homogenous without aiming to be more diverse in hiring?) 

My hypothesis is that either EA likes vaguer statements but is allergic to more concrete proposals, or people are reflexively downvoting anything that comes off as culture-warrish or "woke". I'd be interested in hearing from anyone that downvoted statement 1 and upvoted statement 2. 

This also reveals the limitations of this method for actually making decisions: small changes in wording can have a huge effect on the result. 

Jason @ 2023-01-19T10:25 (+12)

Statement 2 can be furthered by a number of methods -- e.g., seeking new people and new hires in more/different places. It's easy to agree as long as you think there is at least one method of furthering the end goal you would support.

Statement 1 reads like a specific method with a specific tradeoff/cost. As I read it, it calls for sometimes hiring Person X for diversity reasons even though you think Person Y would have been a better choice otherwise (otherwise, "select for diversity" isn't actually doing any work).

I don't think this is just a small change in wording. It's unsurprising to me that more people would endorse a goal like Statement 2 than a specific tradeoff like Statement 1.

titotal @ 2023-01-19T15:06 (+1)

I think that makes sense as a reason, if that's how people interpreted the two statements. However, statement 1 was explicitly not referring to a narrow "hire a worse candidate" situation. Statement 1 came from the megapost, which was linked along with statement 1. Here's a relevant passage:

Worryingly, EA institutions seem to select against diversity. Hiring and funding practices often select for highly value-aligned yet inexperienced individuals over outgroup experts, university recruitment drives are deliberately targeted at the Sam Demographic (at least by proxy) and EA organisations are advised to maintain a high level of internal value-alignment to maximise operational efficiency. The 80,000 Hours website seems purpose-written for Sam, and is noticeably uninterested in people with humanities or social sciences backgrounds.

They are advocating for the exact same things you are, eg "seeking new people and new hires in more/different places", and that's what they meant by selecting for diversity in hiring. 

I think this makes it clearer what happened. Statement 1 resembles an existing culture war debate, so people assumed it was advocating for a side and position in said debate, and downvoted, whereas statement 2 appeared more neutral, so it was upvoted. I think this really just tells us to be careful with interpreting these upvote/downvote polls. 

Jason @ 2023-01-19T16:04 (+2)

People likely read it as a standalone statement without referring back to the megapost, and gave "select" its most common meaning in ordinary usage. I agree that the wording of these items is tricky and can skew outcomes; I just feel the summary here did not accurately capture what the broader statement said. So I am not convinced that voters were actually inconsistent or that this finding represents a deep problem with this kind of sorting exercise.

Denkenberger @ 2023-01-23T07:59 (+4)

See my comment above on the political version - usually when people call for more diversity, they are not referring to adding political diversity. So I think the addition of "political" makes it significantly different.

Tsunayoshi @ 2023-01-19T01:38 (+13)

We should encourage and possibly fund adversarial collaborations on controversial issues in EA.

Nathan Young @ 2023-01-19T17:59 (+5)

I thought the general sense was that adversarial collaboration was a bit overrated.

Tsunayoshi @ 2023-01-19T23:15 (+1)

[epistemic status: my imprecise summaries of previous attempts]

Well, I guess it depends on what you want to get out of them. I think they can be useful as epistemic tools in the right situation: they tend to work better if they are focused on empirical questions, and they can help by forcing the collaborators to narrow down broad statements like "democratic decision making is good/bad for organisations". It's probably unrealistic, however, to expect that the collaborators will change their minds completely and arrive at a shared conclusion.

They might also be good for building community trust. My instinct is that it would be really helpful in the current situation if the two sides saw that their arguments are being engaged with reasonably by the other side. (See this adversarial collaboration on transgender children transitioning; nobody in the comments expresses anger at the author holding opposite views.)

Tsunayoshi @ 2023-01-19T01:39 (+1)

We could pester Scott Alexander to do another, EA themed, adversarial collaboration contest.

pradyuprasad @ 2023-01-19T11:33 (+2)

this seems like a very good idea!

titotal @ 2023-01-18T20:36 (+13)

Peer reviewed academic research on a given subject should be given higher credence than blogposts by EA friendly sources. 

Max Görlitz @ 2023-01-18T20:39 (+25)

Seems highly dependent on the subject and how established the field is

kbog @ 2023-01-19T17:36 (+4)

Really depends on context and I don't recall a concrete example of the community going awry here. You're proposing this as a change to EA, but I'm not sure it isn't already true.

If you compare apples to apples, a paper and a blog answering the same question, and the blog does not cite the paper, then sure the paper is better. But usually there are good contextual reasons for referring to blogs.

Also, peer review is pretty crappy; the main thing is having an academic sit down and write very carefully.

Nathan Young @ 2023-01-18T19:25 (+12)

Karma should have equal weight between users.

edited to add "between users"

titotal @ 2023-01-18T19:53 (+14)

I feel like there is an inherent problem with trying to use the current upvote system to determine whether the current upvote system is good. 

Nathan Young @ 2023-01-18T19:56 (+4)

Ehhh only if you don't think you can convince people to change their minds. 

Max Clarke @ 2023-01-20T00:52 (+1)

Another proposal: Visibility karma remains 1 to 1, and agreement karma acts as a weak multiplier when either positive or negative.

So:

  • A comment with [ +100 | 0 ] would have a weight of 100
  • A comment with [ +100 | 0 ] but with 50✅ and 50❌ would have a weight of 100 * log10(50 + 50) = 200
  • A comment with [ +100 | 100✅ ] would have a weight of, say, 100 * log10(100) = 200
  • A comment with [+0 | 1000✅ ] would have a weight of 0.

Could also give karma on that basis.

However thinking about it, I think the result would be people would start using the visibility vote to express opinion even more...
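
A minimal sketch of this weighting, assuming (my reading of the examples above) that the multiplier is log10 of the total number of agreement votes, floored at 1:

```python
import math

def comment_weight(visibility_karma, agree_votes, disagree_votes):
    """Visibility karma times a weak multiplier from agreement activity.

    The multiplier is log10 of the total agree + disagree votes, never less
    than 1, so a comment with no agreement votes keeps its plain visibility
    karma, and a comment with zero visibility karma stays at zero.
    """
    total_opinion_votes = agree_votes + disagree_votes
    multiplier = max(1.0, math.log10(total_opinion_votes)) if total_opinion_votes else 1.0
    return visibility_karma * multiplier

print(comment_weight(100, 0, 0))     # 100
print(comment_weight(100, 50, 50))   # 200
print(comment_weight(100, 100, 0))   # 200
print(comment_weight(0, 1000, 0))    # 0
```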

Max Clarke @ 2023-01-20T00:20 (+1)

A little ambiguous between  "disagree karma & upvote karma should have equal weight" and "karma should have equal weight between people"

Lumpyproletariat @ 2023-01-18T19:29 (+1)

Noting that I strongly disagreed with this, rather than it being the case that someone with weighty karma did a normal disagree. 

Guy Raveh @ 2023-01-18T20:29 (+2)

Both weak and strong votes increase in power when you get more karma, although I think for every currently existing user the weak vote is at most 2 (and the strong vote up to 9).

BrownHairedEevee @ 2023-01-18T20:15 (+11)

Making EA appeal to a wider range of moral views

EA is theoretically compatible with a wide range of moral views, but our own rhetoric often conflates EA with utilitarianism. Right now, if you hold moral views other than utilitarianism (including variants of utilitarianism such as negative utilitarianism), you often have to do your own homework as to what those views imply you should do to achieve the greatest good. Therefore, we should spend more effort making EA appeal to a wider range of moral views besides utilitarianism.

What this could entail:

Ariel Simnegar @ 2023-01-18T20:43 (+12)

Would this include making EA appeal to and include practical advice for views like nativism and traditionalism?

kbog @ 2023-01-19T17:23 (+3)

Let's not forget retribution - ensuring that wrongdoers experience the suffering that they deserve. Or more modestly, disregarding their well-being.

EricHerboso @ 2023-01-19T08:57 (+2)

I incorrectly (at 4a.m.) first read this as saying "Would this include making EA apparel…for views like nativism and traditionalism?", and my mind immediately started imagining pithy slogans to put on t-shirts for EAs who believe saving a single soul has more expected value than any current EA longtermist view (because ∞>3^^^3).

BrownHairedEevee @ 2023-01-18T20:56 (+1)

What do you mean by nativism and traditionalism?

Ariel Simnegar @ 2023-01-18T22:10 (+6)

A nativist may believe that the inhabitants of one's own country or region should be prioritized over others when allocating altruistic resources.

A traditionalist may perceive value in maintaining traditional norms and institutions, and seek interventions to effectively strengthen norms which they perceive as being eroded.

BrownHairedEevee @ 2023-01-18T22:29 (+1)

Thanks for clarifying. Yes, I think EA should (and already does, to some extent) give practical advice to people who prioritize the interests of their own community. Since many normies do prioritize their own communities, doing this could help them get their feet in the door of the EA movement. But I would hope that they would eventually come to appreciate cosmopolitanism.

As for traditionalism, it depends on the traditional norm or institution. For example, I wouldn't be comfortable with someone claiming to represent the EA movement advising donors on how to "do homophobia better" or reinforce traditional sexual norms more effectively, as I think these norms are bad for freedom, equality, and well-being. At least the views we accommodate should perhaps not run counter to the core values that animate utilitarianism.

River @ 2023-01-21T19:52 (+7)

I actually think EA is inherently utilitarian, and a lot of the value it provides is allowing utilitarians to have a conversation among ourselves without having to argue the basic points of utilitarianism with every other moral view. For example, if a person is a nativist (prioritizing the well-being of their own country-people), then they definitionally aren't an EA. I don't want EA to appeal to them, because I don't want every conversation to be slowed down by having to argue with them, or at least find another way to filter them out. EA is supposed to be the mechanism to filter the nativists out of the conversation.

BrownHairedEevee @ 2023-01-18T22:11 (+3)

For those disagreeing with this idea, is it because you think EA should only appeal to utilitarians, should not try to appeal to other moral views more than it does, or should try to appeal to other moral views but not too much?

kbog @ 2023-01-19T17:17 (+1)

#2. From the absolute beginnings, EA has been vocal about being broader than utilitarianism. The proposal being voted on here looks instead like elevating progressivism to the same status as utilitarianism, which is a bad idea.

Nathan Young @ 2023-01-18T19:26 (+11)

"EA institutions should select for diversity with respect to hiring"

Paraphrased from https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

Tsunayoshi @ 2023-01-18T23:48 (+11)

I am hesitant to agree. Often proponents for this position emphasize the value of different outlooks in decision making as justification, but the actual implemented policies select based on diversity in a narrow subset of demographic characteristics, which is a different kind of diversity.

Arepo @ 2023-01-19T00:32 (+5)

I'm sceptical of this proposal, but to steelman it against your criticism, I think we would want to say that the focus should be diversity of a) non-malleable traits that b) correlate with different life experiences - a) because that ensures genuine diversity rather than (eg) quick opinion shifts to game the system, and b) because it gives you a better protection against unknown unknowns. There are experiences a cis white guy is just far more/less likely to have had than a gay black woman, and so when you hire the latter (into a group of otherwise cisish whiteish mannish people), you get a bunch of intangible benefits which, by their nature, the existing group are incapable of recognising.

 The traits typically highlighted by proponents of diversity tend to score pretty well on both counts - ethnicity, gender, and sexuality are very hard to change and (perhaps in decreasing order these days) tend to go hand in hand with different life experiences. By comparison, say, a political viewpoint is fairly easy to change, and a neurodivergent person probably doesn't have that different a life experience than a regular nerd (assuming they've dealt with their divergence well enough to be a remotely plausible candidate for the job).

kbog @ 2023-01-19T18:11 (+3)

If you want different life experiences, look first for people who had a different career path (or are parents), come from a foreign country with a completely different culture, or are 40+ years old (rare in EA).

I think these things cause much more relevant differences in life experience compared to things like getting genital surgery, experiencing microaggressions, getting called a racial slur, etc.

Tsunayoshi @ 2023-01-19T01:30 (+3)

Thanks for the reply! I had not considered how easily game-able some selection criteria based on worldviews would be. Given that the worldview at EA orgs is fairly uniform on some issues, and given the competition for those roles, it is very conceivable that some people would game the system!

I should however note that the correlation between opinions on different matters should a priori be stronger than the correlation between these opinions and e.g. gender. I.e. I would wager that the median religious EA differs more from the median EA in their worldview than the median woman differs from the median EA.

Your point about unknown unknowns is valid. However, it must be balanced against known unknowns, i.e. when an organization knows that its personnel is imbalanced in some characteristic that is known or likely to influence how people perform their job. It is e.g. fairly standard to hire a mix of mathematicians, physicists and computer scientists for data science roles, since these majors are known to emphasize slightly different skills.

I must say that my vague sense is that for most roles the backgrounds that influence how people perform in a role are fairly well known because the domain of the work is relatively fixed.
Exceptions are jobs where you really want decisions to be anticorrelated and where the domain is constantly changing, like maybe an analyst at a venture fund. I am not certain at all, however, and if people disagree I would very much like links to papers or blog posts detailing such examples.

Nathan Young @ 2023-01-18T19:30 (+6)

I sense that EA orgs should look at some appropriate baseline for different communities and then aim to be above that by blind hiring, advertising outside the community, etc. 

dan.pandori @ 2023-01-20T03:51 (+3)

It's hard to be above baseline for multiple dimensions, and eventually gets impossible.

dan.pandori @ 2023-01-20T03:52 (+1)

Agreed with the specific reforms. Blind hiring and advertising broadly seem wise.

Nathan Young @ 2023-01-18T19:24 (+11)

"EA should establish public conference(s) or assemblies for discussing reforms within 6 months, with open invitations for EAs to attend without a selection process. For example, an “online forum of concerns”:

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

Nathan Young @ 2023-01-18T18:59 (+11)

OpenPhil should found a counter foundation that has as its main goal critical reporting, investigative journalism and “counter research” about EA and other philanthropic institutions.

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

BrownHairedEevee @ 2023-01-18T22:09 (+12)

I think that paying people/orgs to produce critiques of EA ideas etc. for an EA audience could be very constructive, i.e. from the perspective of "we agree with the overall goal of EA, here's how we think you can do it better".

By contrast, paying an org to produce critiques of EA from the perspective of EA being inherently bad would be extremely counterproductive (and there's no shortage of people willing to do it without our help).

MichaelStJules @ 2023-01-19T03:46 (+4)

There could be a risk of fake scandals, misquoting and taking things out of context that will damage EA.

Nathan Young @ 2023-01-18T19:22 (+10)

The wiki should aim to contain distillations of useful knowledge in other fields in EA language - feminism, psychology etc.

Nathan Young @ 2023-01-18T19:53 (+2)

Curious to hear from people who disagree with this

Ariel Simnegar @ 2023-01-18T20:30 (+4)

Hi Nathan! If a field includes an EA-relevant concept which could benefit from an explanation in EA language, then I don’t see why we shouldn’t just include an entry for that particular concept.

For concepts which are less directly EA-relevant, the marginal value of including entries for them in the wiki (when they’re already searchable on Wikipedia) is less clear to me. On the contrary, it could plausibly promote the perception that there’s an “authoritative EA interpretation/opinion” of an unrelated field, which could cause needless controversy or division.

Chris Leong @ 2023-01-19T00:58 (+2)

I don’t think the wiki adequately covers EA topics yet, so I wouldn’t expand the scope until we've covered these topics well.

niplav @ 2023-01-19T17:36 (+1)

Writing good Wiki articles is hard, and translating between worldviews even harder. If someone wants to do it, that's cool and I would respect them, but funding people to do it seems odd—"explain X to the ~10k EAs in the world". Surely those fields have texts that can explain themselves?

Nathan Young @ 2023-01-18T19:29 (+9)

"When EA books or sections of books are co-written by several authors, co-authors should be given appropriate attribution"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

Nathan Young @ 2023-01-18T20:26 (+10)

Though EA books are still published by normal publishers and this may be a big ask. I asked someone about this in relation to WWOTF, and they said Will did a huge amount to acknowledge contributions, and that while it would be good to acknowledge everyone, that's just not how publishing works. 

I'd still like us to push for a film-like model ("directed by, produced by"), but it's not a high priority.

titotal @ 2023-01-18T20:41 (+6)

EA should periodically re-evaluate and re-examine core beliefs to see if they still hold up over time. 

Michael_PJ @ 2023-01-19T18:02 (+4)

Disagree voted for being too vague - what specifically would people do to implement this?

River @ 2023-01-21T19:59 (+3)

What is the EA that you think should do this re-examining? In what sense is something that has different beliefs still EA? If an individual re-evaluates their beliefs and changes their mind about core EA ideas, wouldn't they leave EA and go do something else, EA gets smaller, newer better philosophies get bigger, and resources therefore get allocated as they should?

Nathan Young @ 2023-01-18T19:03 (+6)

Feel less of a need to quantify everything

"I am 90% sure that" - you don't need to say it.

Note I am providing options for people to vote on. I disagree with this one.

Guy Raveh @ 2023-01-18T20:25 (+6)

Note that as someone who strongly agrees with this, saying you're 90% sure is still fine sometimes. More problematic are things like over-simplifying and flattening ideas by conflating them with a small set of numbers, or giving guesses and confidence intervals on things you basically have no idea about.

titotal @ 2023-01-18T21:01 (+5)

EA orgs should aim to be less politically and demographically homogenous.

Denkenberger @ 2023-01-23T07:23 (+3)

I'm curious how people are interpreting the "and" here. Because EA is only 3% right or center right politically, it seems that increasing demographic diversity along lines of race/gender/sexuality, at least in developed countries, would make EA more politically homogeneous. So is the suggestion that EA recruit more older people, people from rural areas, and potentially people from low and middle income countries?

BrownHairedEevee @ 2023-01-18T22:06 (+3)

How are we supposed to use agree/disagree votes? It looks to me like regular votes are to be used to move responses up and down the page.

Nathan Young @ 2023-01-18T22:15 (+4)

You can think something is important but wrong. I'm not allowed to agree or disagree with my own posts, but if I could I would upvote but disagree with this. 

It's a good discussion but the point is wrong.

Nathan Young @ 2023-01-18T19:17 (+3)

So I have time for Bob Jacobs' criticism that this is the same post as I posted last month. It looks similar, doesn't it? 

But it's gonna get highly upvoted, so I don't think people felt they had that discussion. Lizka could post this every 2 months if she wants, but I think the desire for this discussion is here and this is the best way to have it. If I get the karma, so be it. 

I used to allow ways to devote my karma, but it's just a huge hassle to try and create that and it confused everyone.

Bob Jacobs @ 2023-01-18T19:44 (+2)

You could've made a poll. That wouldn't have given you nearly as much karma/voting-power, and that wouldn't have given those who already have a lot of power the ability to influence the results. For the record I'm not angry at you, I'm angry at the karma system and the groupthink it generates. Given that I also have undemocratic power, I will stick to my own principles and not vote on these questions.

Nathan Young @ 2023-01-18T19:48 (+2)

I don't like how much karma I have. I agree that's a bit ridiculous at this stage, though some disagree. But I think that those who have spent a long time on the forum do tend to be better informed and I do want their votes to count for more.

Democracy is good at avoiding famine and war, but I am unconvinced it is best at making decisions. So a little upweighting of those who the community tends to agree with seems good. 

Honestly, I might suggest it more. 

Max Clarke @ 2023-01-20T00:34 (+1)

Would you gift your karma if that option was available?

Michael_PJ @ 2023-01-20T09:24 (+3)

Destroying it seems better. Gifting it requires identification of a worthy recipient and seems like it opens all kinds of additional problems.

Nathan Young @ 2023-01-20T16:15 (+2)

Yes. I think so, haven't looked at the utility curves but I imagine I can find people I think are underrated.

Nathan Young @ 2023-01-19T01:12 (+2)

You should be able to give away your forum karma.

Coafos @ 2023-01-19T01:24 (+3)

If this is without restraints, then note: it opens up an influence market, which could lead to plutocracy.

titotal @ 2023-01-18T20:33 (+2)

There should be some way of telling whether a karma score is caused by a number of small upvotes by several people or whether it is a result of a single strong upvote/downvote by one person. Edit: Turns out there's already a way to do this, see the comment below.

Sarah Cheng @ 2023-01-18T20:48 (+5)

Hovering over the karma score displays how many votes there are. Does that address your request, or is there something missing?

Lin BL @ 2023-01-19T00:30 (+3)

This does not give a complete picture though.

Say something has 5 karma and 5 votes. First obvious thought: 5 users upvoted the post, each with a karma of 1. But that's not the only option:

  • 1 user upvotes (value +9), 4 users downvote (each value -1)
  • 2 users upvote (values +4 and +6), 3 users downvote (values -1, -1 and -3)
  • 3 users upvote (values +1 and +2 and +10), 2 users downvote (values -1 and -7)

Or a whole range of other permutations one can think of that add up to 5, given that different users' votes have different values (and in some cases strong up/downvoting). Hovering just shows the overall karma and overall number of people who have voted, unless I am missing a feature that shows this in more detail?

Sarah Cheng @ 2023-01-19T19:17 (+1)

Yeah I was wondering if this was what the question asker was getting at. Thank you for clearly explaining it.

You're right that this doesn't exist. My instinct is that this doesn't provide enough value to be worth the cost of the extra UX complication and the slight deanonymizing effect on voting. I'd be curious to hear how this kind of feature would be helpful for you.

Lin BL @ 2023-01-19T22:55 (+1)

They'd have the information of upvotes and downvotes already (to calculate the overall karma). I don't know how the forum is coded, but I expect they could do this without too much difficulty if they wanted to. So if you hover, it would say something like: "This comment has x overall karma, (y upvotes and z downvotes)." So the user interface/experience would not change much (unless I have misinterpreted what you meant there).

It'll give extra information. Weighting some users higher due to contribution to the forum may make sense with the argument that these are the people who have contributed more, but even if this is the case it would be good to also see how many people overall think it is valuable or agree or disagree.

Current information:

  • How many votes
  • How valuable these voters found it adjusted by their karma/overall Forum contribution

New potential information:

  • How many votes
  • How valuable these voters found it adjusted by their karma/overall Forum contribution
  • How many overall voters found this valuable

e.g. 2 people strongly agreeing and 3 people weakly disagreeing may update me differently from 5 people weakly agreeing. One is unanimous, the other is more divided, and it would be good for me to know that, as it might be useful to ask why (when drawing conclusions based on what other people have written, or when getting feedback on my own writing).

I would like to see this implemented, as the cost seems small, but there is a fair bit of extra information value.

Coafos @ 2023-01-19T02:15 (+1)

Note: I tried to do it on mobile, and it's not working everywhere? I tried to tap on post karma or question answer karma but it did not show total vote count.

(On my laptop it works.)

Sarah Cheng @ 2023-01-19T19:03 (+3)

Yeah, the forum relies a lot on hover effects, which don't work very well on mobile. To avoid that in this case seems like it would overcomplicate the UI though, so I'm not sure what an improved UX would look like. I'll add this to our backlog for triage.

Nathan Young @ 2023-01-18T19:37 (+2)

"Funding bodies should not be able to hire researchers who have previously been recipients in the last e.g. 5 years, nor should funders be able to join recipient organisations within e.g. 5 years of leaving their post"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Employment

Nathan Young @ 2023-01-18T19:23 (+2)

"EA institutions should recruit known critics of EA and offer them e.g. a year of funding to write up long-form deep critiques"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique

Chris Leong @ 2023-01-19T00:53 (+10)

For me, this would depend heavily on how good these critics are; it's probably not sensible to pay people who are just going to use their time to write more attacks rather than constructive feedback.

Guy Raveh @ 2023-01-18T20:17 (+2)

Mostly seems to me like at least with EA as it currently is, they won't be interested.

JoshuaBlake @ 2023-01-20T15:24 (+1)

We should consider answers on this thread based on agreement karma not upvoting

Guy Raveh @ 2023-01-21T00:19 (+2)

I honestly don't know. I personally agreevoted but did not upvote suggestions that, for example, I thought would be good in theory but impossible to implement.

Nathan Young @ 2023-01-18T19:54 (+1)

There should be a way to repost something with 0 karma so that I don't have to keep writing this same post every few months. 

dan.pandori @ 2023-01-19T00:17 (+2)

Can you elaborate? I don't understand what problem this solves.

niplav @ 2023-01-19T17:38 (+1)

At least on LessWrong you can move something to drafts and then publish it again, IIRC. Given the underlying infrastructure is the same this should also work on the EA forum?

Jeroen Willems @ 2023-01-18T19:42 (+1)

New users' strong vote equals two votes, and the moment you get 100 karma it equals 5 votes. But after that it doesn't keep increasing.

(Agree vote this even if you don't agree with the specific numbers but just the general gist of it.)

Arepo @ 2023-01-19T00:39 (+9)

I would rather have no increases at all, or perhaps a nominal one (eg an unlock of a 2-karma strong upvote) after a relatively cursory amount of karma - just enough to prove that you're not a troll.

I do not think that my contributions to this forum merit me having ~3.5x as much weight as someone like Jobst Heitzig just because he's too busy with a successful academic career to build up a backlog on this forum. Weighted karma selects for people whose time has low market value in the same way that long job interviews do.

Karma weighting also encourages Goodharting and rewards the people best at it.

Nathan Young @ 2023-01-19T01:11 (+2)

I think Jobst is very unrepresentative. From the recommendations he's getting, I wish I could transfer some of my karma to him.

Arepo @ 2023-01-19T01:19 (+3)

I don't know about unrepresentative. New posters to this forum run a gamut from 'probably above averagely smart' to 'extremely intelligent and thoughtful'. Obviously we're going to have far more of the former, but we should also expect some number of the latter - and the karma system hides both. 

I think Scott's argument for openness to eccentrics, on the grounds that a couple of great ideas have far more positive value than a whole bunch of bad ones have negative value, generalises to an argument for being open to 'eccentrics' who comprise large numbers of new or intermittent posters.

Gordon Seidoh Worley @ 2023-01-19T01:32 (+2)

I think Scott's argument for openness to eccentrics, on the grounds that a couple of great ideas have far more positive value than a whole bunch of bad ones have negative value, generalises to an argument for being open to 'eccentrics' who comprise large numbers of new or intermittent posters.

You've got to consider the base rates. Most eccentrics are actually just people with ungrounded ideas that are wrong since it's easy to have wild ideas and hard to have correct ideas and thus even harder to have wild and correct ideas.

In the old days of Less Wrong excess criticism was actually a huge problem and did silence a bunch of folks incorrectly. EAF and Less Wrong (which has basically the same cultural norms) have this problem to a much lesser extent now due to a few structural changes:

  • new posters don't post directly to the front page and instead only can post there once they get enough karma or explicit approval by moderators
  • this lets new posters work out the site norms without being exposed to the full brunt of the community
  • weighted voting also allows respected users to correct errors on their own, so when they see something of value they can give it a strong upvote rather than it languishing due to five other new people voting it down

If your concern is that the site is not making it easy enough for eccentrics with good ideas to post here, I can say from the experience of the way Less Wrong used to run that it's likely they'd have an even worse time if it weren't for weighted voting.

Arepo @ 2023-01-19T02:43 (+7)

You've got to consider the base rates. Most eccentrics are actually just people with ungrounded ideas that are wrong since it's easy to have wild ideas and hard to have correct ideas and thus even harder to have wild and correct ideas.

It is tiresome to have conversations in which you assume I only started thinking about this yesterday and haven't considered basic epistemic concepts. 

a) I am not talking about actual eccentrics; I'm drawing the analogy of a gestalt entity mimicking (an intelligent) eccentric. You don't have to agree that the tradeoff is worthwhile, but please claim that about the tradeoff I'm proposing, not some bizarre one where we go recruiting anyone who has sufficiently heterodox ideas.

b) I am not necessarily suggesting removing the karma system. I'm suggesting toning it down, which could easily be accompanied by other measures to help users find the content they'd most like to see. There's plenty of room for experimentation - the forum seems to have been stuck in a local maximum (at best - perhaps not a maximum) for the last few years, and CEA should have the resources for some A/B testing of new ideas.

c) Plenty of pre-Reddit internet forums have been successful in pursuing their goal with no karma system at all, let alone a weighted one. Looking at the current posts on the front page of the EA Reddit, only one is critical of EA, and that's the same Bostrom discussion that's been going on here. So I don't see good empirical evidence that toning down the karma system would create the kind of wild west you fear.

Grayden @ 2023-02-13T07:25 (+5)

If only there were some kind of measure of an individual's contribution. Maybe we could call it something like PELTIV

Jeroen Willems @ 2023-01-18T20:58 (+4)

Why do people think vote weight should keep on increasing after a certain amount of karma? I'm curious!

Gordon Seidoh Worley @ 2023-01-19T00:13 (+15)

This is a mechanism for maintaining cultural continuity.

Karma represents how much the community trusts you, and in return, because you are trusted, you're granted greater ability to influence what others see because your judgement has been vetted over a long series of posts. The increase in voting power is roughly logarithmic with karma, so the increased influence in practice hits diminishing returns pretty quickly.

If we take this away it allows the culture of the site to drift more quickly, say because there's a large influx of new folks. Right now existing members can curate what happens on the Forum. If we take away the current voting structure, we're at greater risk of this site becoming less the site the existing user base wants.

I don't speak for the Forum by any means, but as I see it we're trying to create a space here to talk about certain things in a certain way, and that means we want new people to learn the norms and be part of what exists first before they try to change it, since outsiders often fail to understand why things work the way they do until they've gotten enough experience to see how the existing mechanisms make things work. Once you understand how things work, it becomes possible to try to change things in ways that keep what works and change what doesn't. The voting mechanism is downstream of this and is an important tool for the membership to curate the site.

That said, you can also just ignore the votes if you don't agree with them and read whatever you want.

Jeroen Willems @ 2023-01-19T10:01 (+7)

I really don't think the libertarian "if you don't like it, go somewhere else" works here as the EA forum is pretty much the place where EA discussions are held. Sure, they happen on twitter and reddit too but you have to admit it's not the same. Most discussions start here and are then picked up there.

I agree with your other arguments; I don't want the culture of the site to drift too quickly because of a large influx of new folks. But why wouldn't a cutoff be sufficient for that? I don't see why the power has to keep on increasing after, say, 200 karma, because at that point value lock-in might become an issue. Reminds me a bit of the average age of US senators being 64 years old. Not to dismiss the wisdom of experienced people, but insights from new folks are important too.

Arepo @ 2023-01-19T00:43 (+4)

If we take away the current voting structure, we're at greater risk of this site becoming less the site the existing user base wants.

This doesn't seem self-evidently bad or obviously likely.

Gordon Seidoh Worley @ 2023-01-19T01:25 (+9)

Sure, not everyone likes curated gardens. If that's not the kind of site you want, there's other places. Reddit, for example, has active communities that operate under different norms.

The folks who started the Forum prefer the sort of structure it has. If you want something else and you don't have an argument that convinces us, you're free to participate in discussions elsewhere.

As to deeper reasons why the Forum is the way it is, see, for example, https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism

Arepo @ 2023-01-19T01:36 (+7)

'There are other places' seems like a terrible benchmark to judge by. Reddit is basically the only other active forum on the internet for EA discussion and nowhere else has any chance of materially affecting EA culture. The existence of this place suppresses alternatives - I used to run a utilitarianism forum that basically folded into this because it didn't seem sensible at the time to compete with people we almost totally agreed with.

Posting a single unevidenced LW argument as though it were scripture, as an argument against being exposed to a wider range of opinions, seems like poor epistemic practice. In any case, that thread is about banning, which I've become more sympathetic to, and which is totally unrelated to the karma system.