Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things)

By Raemon @ 2017-01-11T17:45 (+31)

This is in response to Sarah Constantin's recent post about intellectual dishonesty within the EA community.

I roughly agree with Sarah's main object-level points, but I think this essay doesn't sufficiently embody the spirit of cooperative discourse it's trying to promote. I have a lot of thoughts here, but they are building off a few existing essays. (There's been a recent revival over on Less Wrong attempting to make it a better locus for high quality discussion. I don't know if it's especially succeeded, but I think the concepts behind that intended revival are very important.)

  1. Why Our Kind Can't Cooperate (Eliezer Yudkowsky)
  2. A Return to Discussion (Sarah Constantin)
  3. The Importance of [Less Wrong, OR another Single Conversational Locus] (Emphasis mine) (Anna Salamon)
  4. The Four Layers of Intellectual Conversation (Eliezer Yudkowsky)

    I think it's important to have all of these concepts in context before delving into:

  5. EA has a lying problem (Sarah Constantin)

I recommend reading all of those. But here's a rough summary of what I consider the important bits. (If you want to actually argue with these bits, please read the actual essays before doing so, so you're engaging with the full substance of the idea)

    1. FB discussion is fragmented - it's hard to find everything that's been said on a topic. (And tumblr is even worse)
    2. It's hard to know whether OTHER people have read a given thing on a topic.
    3. A related point (not necessarily in "A Return to Discussion") is that social media incentivizes some of the worst kinds of discussion. People share things quickly, without reflection. People read and respond to things in 5-10 minute bursts, without having time to fully digest them.

Cooperative Epistemology

So my biggest point here is that we need to be more proactive and mindful about how discussion and knowledge are built up within the EA community.

To succeed at our goals, we need to answer questions like:

    1. How do we evaluate messy studies?
    2. How do we discuss things online so that people actually put effort into reading and contributing to the discussion?
    3. What kinds of conversational/debate norms lead people to be more transparent?

I have specific concerns about Sarah's post, which I'll post in a comment when I have a bit more time.

 


null @ 2017-01-12T19:19 (+33)

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

While I'm sympathetic to the fact that there's also a lot of low-quality / lazy criticism of EA, I don't think responses that involve setting a high bar for high-quality criticism are the right way to go.

(Note that I don't think that EA is worse than is typical in terms of accepting criticism, though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better.)

null @ 2017-01-12T21:54 (+13)

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

This is completely true.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

There are at least a dozen people for whom this is true.

null @ 2017-01-13T13:47 (+3)

I feel like this is true for me too. I'd guess I've got more spare time on my hands than you guys, and I also don't currently work for any EA charities. It's really hard to make your beliefs pay rent when you're in near mode and constantly worried that if you screw up a criticism you'll lose connections and get ostracized, or that you'll hurt the trajectory of a cause or charity you like by association, because as much as we like to say we're debiased, a lot of the time affective rationalizations sneak into our motivations. Well, we all come from different walks of life, and a lot of us haven't been in communities trying to be as intellectually honest and epistemically virtuous as EA tries to be. It's hard to let that guard down, because everywhere else we go in life our new ideas are treated utterly uncharitably, worse than anything in EA on a regular day. It's hard to unlearn those patterns. We as a community need to find ways to trust each other more. But that takes a lot of work, and will take a while.

In the meantime, I don't have a lot to lose by criticizing EA, or at least I can take a hit pretty well. I mean, maybe there are social opportunity costs (things I won't be able to do in the future if I become low-status), but I'm confident I'm the sort of person who can create new opportunities for himself. So I'm not worried about me, and I don't think anyone else should be either. I've never had a settled cause selection. Honestly, it felt weird to talk about, but this whole model-uncertainty thing people are going for between causes now is something I've implicitly grasped the whole time. Like, I never understood why everyone was so confident in their views on causes when a bunch of this stuff requires figuring out things about consciousness, or the value of future lives, which seem like some philosophically and historically mind-boggling puzzles to me.

If you go to my EA Hub profile, you'll notice the biggest donation I made was in 2014, for $1000 to GiveWell for unrestricted funds. That was because I knew those funds would increase the pool of money for starting the Open Philanthropy Project. And it was matched. You'll also notice I select pretty much every cause as something to consider, as I'm paranoid about myself or EA in general missing out on important information. All I can say about my politics is that I'm a civil libertarian, and otherwise I don't get offended by reading things when they're written by people who want to improve EA in earnest. I hope you'll take my word that I didn't just edit my EA Hub profile now. That's the best badge I have to show I really try to stay neutral.

If anyone wants to privately and/or anonymously send me their thoughts on an EA organization and what they're doing wrong, no matter what it is, I'll give my honest feedback, and we can have a back and forth and hopefully hammer something out to be published. I also don't particularly favour any EA org right now, as I feel like a lot of these organizations are staffed by people who've only been in academia or the software industry, or who are starting non-profits right out of college, and who might just not have the type or diversity of experience to make good plans and models on their own, or the skills for dealing with different types of people and getting things done. I've thought for a while that all these organizations have at different points made little or big mistakes, which are really hard to talk about in public, and it feels a bit absurd to me that they're never talked about.

Feel free to send me stuff. Please don't send me stuff about interpersonal drama. Treat what you send me like filing a bug report.

null @ 2017-01-12T21:17 (+8)

though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better

Interesting. Which groups could we learn the most from?

null @ 2017-01-13T08:30 (+5)

I think parts of academia do this well (although other parts do it poorly, and I think it's been getting worse over time). In particular, if you present ideas at a seminar, essentially arbitrarily harsh criticism is fair game. Of course, this is different from the public internet, but it's still a group of people, many of whom do not know each other personally, where pretty strong criticism is the norm.

My impression is that criticism has traditionally been a strong part of Jewish culture, but I'm not culturally Jewish so can't speak directly.

I heard that Bridgewater did a bunch of stuff related to feedback/criticism but again don't know a ton about it.

Of course, none of these examples address the fact that much of the criticism of EA happens over the internet, but I do feel that some of the barriers to criticism online also carry over in person (though others don't).

null @ 2017-01-13T15:15 (+3)

Thanks!

I think parts of academia do this well (although other parts do it poorly, and I think it's been getting worse over time). In particular, if you present ideas at a seminar, essentially arbitrarily harsh criticism is fair game. Of course, this is different from the public internet, but it's still a group of people, many of whom do not know each other personally, where pretty strong criticism is the norm.

One guess is that ritualization in academia helps with this -- if you say something in a talk or paper, you ritually invite criticism, whereas I'd be surprised to see people apply the same norms to e.g. a prominent researcher posting on facebook. (Maybe they should apply those norms, but I'd guess they don't.)

Unfortunately, it's not obvious how to get the same benefits in EA.

null @ 2017-01-14T17:45 (+5)

I'm surprised to hear that people see criticizing EA as incurring social costs. My impression was that many past criticisms of EA have been met with significant praise (e.g., Ben Kuhn's). One approach for dealing with this could be to provide a forum for anonymous posts + comments.

null @ 2017-01-14T21:25 (+5)

I think it really depends on who you criticize. I perceive criticizing particular people or organizations as having significant social costs (though I'm not saying whether those costs are merited or not).

null @ 2017-01-14T19:24 (+4)

In my post, I said

anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

I would expect that conditioned on spending a large amount of time to write the criticism carefully, it would be met with significant praise. (This is backed up at least in upvotes by past examples of my own writing, e.g. Another Critique of Effective Altruism, The Power of Noise, and A Fervent Defense of Frequentist Statistics.)

null @ 2017-01-13T15:32 (+5)

This is a great point -- thanks, Jacob!

I think I tend to expect more from people when they are critical -- i.e. I'm fine with a compliment/agreement that someone spent 2 minutes on, but expect critics to "do their homework", and if a complimenter and a critic were equally underinformed/unthoughtful, I'd judge the critic more harshly. This seems bad!

One response is "poorly thought-through criticism can spread through networks; even if it's responded to in one place, people cache and repeat it other places where it's not responded to, and that's harmful." This applies equally well to poorly thought-through compliments; maybe the unchallenged-compliment problem is even worse, because I have warm feelings about this community and its people and orgs!

Proposed responses (for me, though others could adopt them if they thought they're good ideas):

  • For now, assume that all critics are in good faith. (If we have / end up with a bad-critic problem, these responses need to be revised; I'll assume for now that the asymmetry of critique is a bigger problem.)
  • When responding to critiques, thank the critic in a sincere, non-fake way, especially when I disagree with the critique (e.g. "Though I'm about to respond with how I disagree, I appreciate you taking the critic's risk to help the community. Thank you! [response to critique]")
  • Agree or disagree with critiques in a straightforward way, instead of saying e.g. "you should have thought about this harder".
  • Couch compliments the way I would couch critiques.
  • Try to notice my disagreements with compliments, and comment on them if I disagree.

Thoughts?

null @ 2017-01-12T16:34 (+23)

Issue 1:

The title and tone of this post are playing with fire, i.e. courting controversy, in a way that (I think, but am not sure) undermines its goals.

A: There's the fact that describing these things as "lying" seems approximately as true as the first two claims, which other people have mentioned. In a post about holding ourselves to high standards, this is kind of a big deal.

B: Personal integrity/honesty is only one element you need to have a good epistemic culture. Other elements you need include trust, and respect for people's time, attention, and emotions.

Just as every decision to bend the truth has consequences, every decision to inflame emotions has consequences, and these can be just as damaging.

I assume (hope) it was a deliberate choice to use a provocative title that'd grab attention. I think part of the goal was to punish the EA Establishment for not responding well to criticism and attempting to control said criticism.

That may not be a bad choice; maybe it's even necessary. But it's a questionable one.

The default world (see: modern politics, and news) is a race to the bottom of outrage and manufactured controversy. People love controversy. I love controversy. I felt an urge to share this article on facebook and say things off the cuff about it. I resisted, because I think it would be harmful to the epistemic integrity of EA.

Maybe it's necessary to write a provocative title with a hazy definition of "lying" in order to get everyone's attention and force a conversation. (In the same way it may be necessary to exaggerate global warming by 4x to get Jane Q Public to care). But it is certainly not the platonic ideal of the epistemic culture we need to build.

null @ 2017-01-13T00:38 (+21)

Hi everyone! I’m here to formally respond to Sarah’s article, on behalf of ACE. It’s difficult to determine where the response should go, as it seems there are many discussions, and reposting appears to be discouraged. I’ve decided to post here on the EA forum (as it tends to be the central meeting place for EAs), and will try to direct people from other places to this longer response.

Firstly, I’d like to clarify why we have not inserted ourselves into the discussion happening in multiple Facebook groups and fora. We have recently implemented a formal social media policy which encourages ACE staff to respond to comments about our work with great consideration, and in a way that accurately reflects our views (as opposed to those of one staff member). We are aware that this might come across as “radio silence” or lack of concern for the criticism at hand—but that is not the case. Whenever there are legitimate critiques about our work, we take it very seriously. When there are accusations of intent to deceive, we do not take them lightly. The last thing we want to do is respond in haste only to realize that we had not given the criticism enough consideration. We also want to allow the community to discuss amongst themselves prior to posting a response. This is not only to encourage discussion amongst individual members of the community, but also so that we can prioritize responding to the concerns shared by the greatest number of community members.

It is clear to us now that we have failed to adequately communicate the uncertainty surrounding the outcomes of our leafleting intervention report. We absolutely disagree with claims of intentional deception and the characterization of our staff as acting in bad-faith—we have never tried to hide our uncertainty about the existing leafleting research report, and as others have pointed out, it is clearly stated throughout the site where leafleting is mentioned. However, our reasoning that these disclaimers would be obvious was based on the assumption that those interested in the report would read it in its entirety. After reading the responses to this article, it’s obvious that we have not made these disclaimers as apparent as they should be. We have added a longer disclaimer to the top of our leafleting report page, expressing our current thoughts and noting that we will update the report sometime in 2017.

In addition, we have decided to remove the impact calculator (a tool which let users enter donations directed to leafleting and receive low and high bounds on the estimated number of animals spared) from our website entirely, until we feel more confident that it is not misleading to those unfamiliar with cost-effectiveness calculations and/or with how the low/best/high error bounds express the uncertainty in those numbers. It is not typical for us to remove content from the site, but we intend to operate with abundant caution. This change seems to be the best option, given that people believe we are being intentionally deceptive in keeping it online.

Finally, leadership at ACE all agree it has been too long since we have updated our Mistakes page, so we have added new entries concerning issues we have reflected upon as an organization.

We also notice that there is concern among the community that our recommendations are suspect due to the weak evidence supporting our cost-effectiveness estimates of leafleting. The focus on leafleting for this criticism is confusing to us, as our cost-effectiveness estimates address many interventions, not only leafleting, and the evidence for leafleting is not much weaker than other evidence available about animal advocacy interventions. On top of that, cost-effectiveness estimates are only a factor in one of the seven criteria used in our evaluation process. In most cases, we don’t think that they have changed the outcome of our evaluation decisions. While we haven’t come up with a solution for clarifying this point, we always welcome and are appreciative of constructive feedback.

We are committed to honesty, and are disappointed that the content we've published on the website concerning leafleting has caused so much confusion as to lead anyone to believe we are intentionally deceiving our supporters for profit. On a personal note, I’m devastated to hear that our error in communication has led to the character assassination not only of ACE, but of the people who comprise the organization—some of the hardest working, well-intentioned people I’ve ever worked with.

Finally, I would like everyone to know that we sincerely appreciate the constructive feedback we receive from people within and beyond the EA movement.

*Edited to add links

CarlShulman @ 2017-01-24T03:13 (+13)

After reading the responses to this article, it’s obvious that we have not made these disclaimers as apparent as they should be...until we feel more confident that it is not misleading to those unfamiliar with cost effectiveness calculations

When there are debates about how readers are interpreting text, or potentially being misled by it, empirical testing (e.g. having Mechanical Turk readers view a page and then answer questions about the topic where they might be misled) is a powerful tool (and also avoids reliance on staff intuitions that might be affected by a curse of knowledge). See here for a recent successful example.
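As a sketch of what such a test might look like (the response counts below are invented for illustration, not from any actual ACE or GiveWell experiment), one could show two variants of a page to separate reader groups and compare how often each group draws the misleading conclusion:

```python
import math

def compare_misled_rates(misled_a, n_a, misled_b, n_b):
    """Two-proportion z-test comparing how often readers of page variant A
    vs. variant B gave the misleading interpretation on a comprehension question."""
    p_a, p_b = misled_a / n_a, misled_b / n_b
    pooled = (misled_a + misled_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_a - p_b) / se

# Invented example: 120 of 200 readers misled by the current page,
# 40 of 200 misled by a version with a prominent disclaimer at the top.
rate_old, rate_new, z = compare_misled_rates(120, 200, 40, 200)
print(f"misled: {rate_old:.0%} vs. {rate_new:.0%} (z = {z:.1f})")
```

A gap that large and that statistically clear would settle the "are readers actually misled?" question far more decisively than staff intuitions can.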

null @ 2017-01-13T01:14 (+5)

Well said, Erika. I'm happy with most of these changes, though I'm sad that we have had to remove the impact calculator in order to ensure others don't get the wrong idea about how seriously such estimates should be taken. Thankfully, Allison plans on implementing a replacement for it at some point using the Guesstimate platform.

For those interested in seeing the exact changes ACE has made to the site, see the disclaimer at the top of the leafleting intervention page and the updates to our mistakes page.

null @ 2017-01-13T09:55 (+4)

Thank you for the response. I'm glad that it's being improved, and that there seems to be an honest interest in doing better.

I feel "ensure others don't get the wrong idea about how seriously such estimates should be taken" is understating things: it should be reasonable for people to ascribe some non-zero level of meaning to published estimates, and in particular, using them to compare between charities shouldn't lead you massively astray. If it's "the wrong idea" to look at an estimate at all, because it isn't the evaluator's true best-reasoned expectation of results, then I think the error was in the estimate rather than in expectation management, and I find the deflection of responsibility onto the people who took ACE at all seriously concerning.

The solution here shouldn't be for people to trust things others say less in general.

Compare, say, GiveWell's analysis of LLINs (http://www.givewell.org/international/technical/programs/insecticide-treated-nets#HowcosteffectiveisLLINdistribution); it's very rough and the numbers shouldn't be assumed to be close to right (and responsibly, they describe all this), but their methodology makes them viable for comparison purposes.

Cost-effectiveness is important: it is the measure of where putting your money does the most good and how much good you can expect to do, and a cost-effectiveness estimate that fully incorporates risks and data issues is basically what one arrives at when determining what is effective. Even if you use other selection strategies for top charities, incorrect cost-effectiveness estimates are not good.
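To make the point about comparisons concrete, here is a minimal sketch (all figures invented; these are not ACE's or GiveWell's numbers) of why estimates need honest, consistently-derived bounds before they can be used to rank charities:

```python
# Invented cost-effectiveness figures, in dollars per unit of good done:
# (low, best, high) bounds for two hypothetical charities.
estimates = {
    "Charity A": (0.5, 5.0, 60.0),
    "Charity B": (2.0, 8.0, 25.0),
}

for name, (low, best, high) in estimates.items():
    print(f"{name}: best ${best:.2f}, plausible range ${low:.2f} to ${high:.2f}")

# If the bounds are honest, overlapping ranges mean a best-estimate ranking
# alone is not decisive; quoting only the "best" numbers would hide that.
(a_low, _, a_high), (b_low, _, b_high) = estimates["Charity A"], estimates["Charity B"]
if a_low <= b_high and b_low <= a_high:
    print("Ranges overlap; the comparison is not robust to the stated uncertainty.")
else:
    print("Ranges are disjoint; the ranking survives the stated uncertainty.")
```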

null @ 2017-01-13T19:30 (+7)

I agree: it is indeed reasonable for people to have read our estimates the way they did. But when I said that we don't want others to "get the wrong idea", I'm not claiming that the readers were at fault. I'm claiming that the ACE communications staff was at fault.

Internally, the ACE research team was fairly clear about what we thought about leafleting in 2014. But the communications staff (and, in particular, I) failed to adequately get across these concerns at the time.

Later, in 2015 and 2016, I feel that whenever an issue like leafleting came up publicly, ACE was good about clearly expressing our reservations. But we neglected to update the older 2014 page with the same kind of language that we now use when talking about these things. We are now doing what we can to remedy this, first by including a disclaimer at the top of the older leafleting pages, and second by planning a full update of the leafleting intervention page in the near future.

Per your concern about cost-effectiveness estimates, I do want to say that our research team will be making such calculations public on our Guesstimate page as time permits. But for the time being, we had to take down our internal impact calculator because the way that we used it internally did not match the ways others (like Slate Star Codex) were using it. We were trying to err on the side of openness by keeping it public for as long as we did, but in retrospect there just wasn't a good way for others to use the tool in the way we used it internally. Thankfully, the Guesstimate platform includes upper and lower bounds directly in the presented data, so we feel it will be much more appropriate for us to share with the public.

You said "I think the error was in the estimate rather than in expectation management" because you felt the estimate itself wasn't good; but I hope this makes it more clear that we feel that the way we were internally using upper and lower bounds was good; it's just that the way we were talking about these calculations was not.

Internally, when we look at and compare animal charities, we continue to use cost effectiveness estimates as detailed on our evaluation criteria page. We intend to publicly display these kinds of calculations on Guesstimate in the future.

As you've said, the lesson should not be for people to trust things others say less in general. I completely agree with this sentiment. Instead, when it comes to us, the lessons we're taking are: (1) communications staff needs to better explain our current stance on existing pages, (2) comm staff should better understand that readers may draw conclusions solely from older pages, without reading our more current thinking on more recently published pages, and (3) research staff should be more discriminating on what types of internal tools are appropriate for public use. There may also be further lessons that can be learned from this as ACE staff continues to discuss these issues internally. But, for now, this is what we're currently thinking.

null @ 2017-01-14T11:09 (+5)

Fwiw, I’ve been following ACE closely the past years, and always felt like I was the one taking cost-effectiveness estimates too literally, and ACE was time after time continually and tirelessly imploring me not to.

null @ 2017-01-13T21:30 (+4)

This all makes sense, and I think it is a very reasonable perspective. I hope this ongoing process goes well.

null @ 2017-01-14T16:30 (+1)

We have recently implemented a formal social media policy which encourages ACE staff to respond to comments about our work with great consideration, and in a way that accurately reflects our views (as opposed to those of one staff member).

Is this policy available anywhere? Looking on your site I'm finding only a different Social Media Policy that looks like maybe it's intended for people outside ACE considering posting on ACE's fb wall?

null @ 2017-01-13T05:29 (+1)

Major props for the response. Your new social media policy sounds probably-wise. :)

null @ 2017-01-13T12:04 (+9)

I find such social-media policies quite unfortunate. :) I understand that they may be necessary in a world where political opponents can mine for the worst possible quotes, but such policies also reduce the speed and depth of engagement in discussions and reduce the human-ness of an organization. I don't blame ACE (or GiveWell, or others who have to face these issues). The problem seems more to come from (a) quoting out of context and (b) that even when things are quoted in context, one "off" statement from an individual can stick in people's minds more strongly than tons of non-bad statements do. There's not an easy answer, but it would be nice if we could cultivate an environment in which people aren't afraid to speak their minds. I would not want to work for an organization that restricted what I can say (ignoring stuff about proprietary company information, etc.).

null @ 2017-01-13T12:15 (+1)

I agree that these are tradeoffs and that that's very sad. I don't have a very strong opinion on the overall net-balance of the policy. But (it sounds like we both agree?) that they are probably a necessary evil for organizations like this.

null @ 2017-01-13T17:40 (+1)

I'm not sure what to do. :) I think different people/organizations do it differently based on what they're most comfortable with. There's a certain credibility that comes from not asking your employees to toe a party line. Such organizations are usually less mainstream but also have a more authentic feel to them. I discussed this a bit more here.

null @ 2017-01-13T19:53 (+5)

I share the same concerns about internal social media policies, especially when it comes to stifling discussion staff members would have otherwise engaged in. The main reason I rarely engage in EA discussions is that I'm afraid what I write will be mistaken as representative of my employer—not just in substance, but also tone/sophistication.

I think it's fairly standard now for organizations to request that employees include a disclaimer when engaging in work-related conversations—something like "these are my views and not necessarily those of my employer". That seems reasonable to include in the first comment, but becomes cumbersome in subsequent responses. And in instances where comments are curated without context, the disclaimer might not be included at all.

Also, I wonder how much the disclaimer helps someone distinguish the employee from the organization? For highly-visible people in leadership roles, I suspect their views are often conflated with the views of the organization.

null @ 2017-01-14T07:36 (+4)

I agree with these concerns. :) My own stance on this issue is driven more by my personality and "virtue ethics" kinds of impulses than by a thorough evaluation of the costs and benefits. Given that I, e.g., talk openly about (minuscule amounts of) suffering by video-game characters, it's clear that I'm on the "don't worry about the PR repercussions of sharing your views" side of the spectrum.

I've noticed the proliferation of disclaimers about not speaking for one's employer. I personally find them cumbersome (and don't usually use them) because it seems to me rare that anyone does actually speak for one's employer. (That usually only happens with big announcements like the one you posted above.) But presumably other people have been burned here in the past, which is why it's done.

null @ 2017-01-12T17:19 (+20)

Issue 2: Running critical pieces by the people you're criticizing is necessary, if you want a good epistemic culture. (That said, waiting indefinitely for them to respond is not required. I think "wait a week" is probably a reasonable norm)

Reasons and considerations:

a) they may have already seen and engaged with a similar form of criticism before. If that's the case, it should be the critic's responsibility to read up on it, and make sure their criticism is saying something new. Or, that it's addressing the latest, best thoughts on the part of the person-being-criticized. (See Eliezer's 4 layers of criticism)

b) you may not understand their reasons well. Especially with something off-the-cuff on facebook. The principle of charity is crucial because our natural tendency is to engage with weaker versions of ideas.

c) you may be wrong about things. Because our kind have trouble cooperating (we tend to criticize a lot), it's important for criticism of Things We Are Currently Trying to Coordinate On to be made as accurate as possible through private channels before unleashing the storm.

Controversial things are intrinsically "public facing" (see: Scott Alexander's post on Trump that he specifically asked people not to share and disabled comments on, but which Ann Coulter ended up retweeting). Because it is controversial it may end up being people's first exposure to Effective Altruism.

Similar to my issue 1, I think Sarah intended this post as tit-for-tat punishment for the EA Establishment not responding enough to criticism. Assuming I'm correct about that, I disagree with it on two grounds:

null @ 2017-01-14T16:18 (+14)

I note Constantin's post, first, was extraordinarily uncharitable and inflammatory (e.g. the title for the section discussing Wiblin's remark, "Keeping promises as a symptom of Autism", among many others); second, these errors were part of a deliberate strategy to 'inflame people against EA'; third, this strategy is hypocritical given the author's (professed) objections to any hint of 'exploitative communication'. Any of these in isolation is regrettable. In concert they are contemptible.

{ETA: In a followup post, Constantin states that her previous comments suggestive of bad faith were an "emotional outburst" which did not reflect her actual intentions either at the time of writing or subsequently.}

My view is that, akin to Hofstadter's law, virtues of integrity are undervalued even when people try to account for undervaluing them: for this reason I advocate all-but-lexical priority to candour, integrity, etc. over immediate benefits. The degree of priority these things should be accorded seems a topic on which reasonable people can disagree: I recommend Elmore's remarks as a persuasive defence of according these virtues a lower weight.

'Lower', however, still means 'quite a lot': if I read Elmore correctly, her view is not that one can sacrifice scrupulously honest communication for any non-trivial benefit, but that these norms should on occasion be relaxed if necessary to realise substantial gains. The great majority of EAs seem to view these things as extremely important, and the direction of travel appears to me to be that 'more respected' EAs tend to accord these even greater importance (see MacAskill; c.f. Tsipursky).

My impression is that EAs, both individually and corporately, do fairly well in practice as well as in principle. As Naik notes, many orgs engage in acts of honesty and accountability beyond what is required to secure funding. When they do err, they tend to be robustly challenged (often by other EAs), publicly admit their mistake, and change practice (all of the examples Constantin cites were challenged at the time; I also think of Harris's concerns with the promotion of EA Global, GPP's mistaken use of a Stern report statistic, and now ACE). Similar sentiments apply at an individual level: my 'anecdata' is almost the opposite of Fluttershy's extremely bad experience, and I sincerely believe (and even more sincerely hope) that mine is closer to the norm than theirs.

In absolute terms, I don't think EA in toto has a 'lying problem' (or a 'being misleading', 'not being scrupulously honest' problem). It seems to do quite well at this, and the rate and severity of the mistakes I see don't cause great alarm (although it can and should do better). Although relative terms are less relevant, I think it does better than virtually any other group I can think of.

I offer some further remarks on issues raised by some of the examples given which do not fit into the 'lying problem' theme:

1) It is ironic, perhaps, that the best evidence for Todd's remark on the 'costs of criticism' arises from the aftermath of a post which (in part) unjustly excoriates him for that particular remark. My impression is that bad criticism is on average much more costly than bad praise, and some asymmetry in how these are treated seems reasonable.

I do not know whether journalistic 'best practice' around 'right of reply' extends to providing the criticism in full to its subject; regardless, it seems good practice to adopt for the reasons Todd explains. I have done this with my co-contributors re. Intentional Insights, and I have run a (yet to be published) piece about MIRI by MIRI, as it had some critical elements to it. Naturally, if a critic does not do this for whatever reason, it does not mean their criticism should be ignored (I have yet to see a case of criticism 'shunned' for these reasons), but I think this is a norm worth encouraging.

2) Nonetheless, it may not have been advisable for the head of one 'part' of CEA to bring this up in the context of criticism addressed to another part of CEA. Issues around appropriate disclosure have been mentioned before. In addition, remarks by 'EA public figures' may be taken as indicative of the views of their organisations or of EA in toto even if explicitly disclaimed as 'personal opinion only'. A regrettable corollary (as Gordon-Brown notes) is a chilling effect in which 'EA public figures' refrain from making unguarded remarks publicly. The costs of not doing so may be worse: if EA grows further, we may collectively regret 'providing more ammunition' for external critics to misuse.

3) Given the social costs to an individual critic, there may be benefit in having organisations (or, better, an independent 'Grand Inquisition' collaboration) canvass these criticisms anonymously. The most commonly shared concerns could then be explored further: this would be valuable whether they point to a common misconception or a common fault. In the meanwhile, anyone is welcome to disclose criticisms or concerns to me in confidence.

4) Certain practices could be more widely adopted by EA orgs: beyond recording predictions, a prominent 'mistakes' page (per GiveWell) would be desirable, as would scrupulous declaration of relevant conflicts of interest.

5) (I owe this to Carl Shulman). Donors could also pitch in by carefully evaluating empirical or normative claims made by particular EA organisations: Plant, Dickens, and Hoffman would all be laudable examples, and I hope both to contribute some of my own work to this genre and to encourage others to do likewise.

null @ 2017-01-12T06:24 (+14)

The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

In brief:

The incentive structure of the majority of EA-affiliated orgs has centered around growth metrics related to number of people (new pledge signups, number of donors, number of members), and money moved (both for charity evaluators and for movement-building orgs). These are the headline numbers they highlight in their self-evaluations and reports, and these are the numbers that people giving elevator pitches about the orgs use ("GiveWell moved more than $100 million in 2015" or "GWWC has (some number of hundreds of millions) in pledged money"). Some orgs have slightly different metrics, but still essentially ones that rely on changing the minds of large numbers of people: 80,000 Hours counts Impact-Adjusted Significant Plan Changes, and many animal welfare orgs count numbers of converts to veganism (or recruits to animal rights activism) through leafleting.

These incentives don't directly align with improved epistemic virtue! In many cases, they are close to orthogonal. In some cases, they are correlated but not as much as you might think (or hope!).

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

With that said, the organizations I am aware of in the EA community hold themselves to much higher standards than (as far I can make out) their donor and supporter base seems to demand of them. My guess is that GiveWell could have been a LOT more sloppy with their reviews and still moved pretty similar amounts of money as long as they produced reviews that pattern-matched a well-researched review. (I've personally found their review quality improved very little from 2014 to 2015 and much more from 2015 to 2016; and yet I expect that the money moved jump from 2015 to 2016 will be less, or possibly even negative). I believe (with weaker confidence) that similar stuff is true for Animal Charity Evaluators in both directions (significantly increasing or decreasing review quality won't affect donations that much). And also for Giving What We Can: the amount of pledged money doesn't correlate that well with the quality or state of their in-house research.

The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like "Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you." I think there's some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

My best guess is that unless we can get a better handle on epistemic virtue and quantify quality in some meaningful way, the incentive structure problem will remain.

CarlShulman @ 2017-01-12T07:52 (+17)

One bit of progress on this front is Open Phil and GiveWell starting to make public and private predictions related to grants to improve their forecasting about outcomes, and create track records around that.

There is significant room for other EA organizations to adopt this practice in their own areas (and apply it more broadly, e.g. regarding future evaluations of their strategy, etc).
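As a rough sketch of what "creating track records" around such predictions could involve (the forecasts and outcomes below are hypothetical, not actual Open Phil or GiveWell predictions), one simple summary statistic is the Brier score over resolved forecasts:

```python
def brier_score(forecasts):
    """Mean Brier score over resolved forecasts. Each entry is
    (stated probability, outcome as 0 or 1); 0.0 is perfect,
    and always guessing 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical grant-related forecasts and how they resolved.
track_record = [
    (0.8, 1),  # "80% chance the grantee hires a second researcher" (it did)
    (0.6, 0),  # "60% chance the study is published within a year" (it wasn't)
    (0.9, 1),  # "90% chance the project stays within budget" (it did)
]
print(f"Brier score over {len(track_record)} forecasts: {brier_score(track_record):.3f}")
```

Publishing both the forecasts and a summary like this would let outsiders check calibration over time, which is what makes a track record informative rather than just decorative.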

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable number of donors (e.g., a few dozen donors giving hundreds of thousands of dollars). Donors who are donating those large sums of money are selected for being less naive (just by virtue of having made that much money) and the scale of donation makes it worth their while to demand high standards. I think this is related to GiveWell having relatively high epistemic standards (though causality is hard to judge).

This is part of my thinking behind promoting donor lotteries: by increasing the effective size of donors, it lets them evaluate organizations and opportunities more carefully, providing better incentives and resistance to exploitation by things that look good at first glance but don't hold up under close and extended inspection (they can also share their findings with the broader community).

The story I want to believe, and that I think others also want to believe, is some version of a just-world story: in the long run epistemic virtue ~ success. Something like "Sure, in the short run, taking epistemic shortcuts and bending the truth leads to more growth, but in the long run it comes back to bite you." I think there's some truth to this story: epistemic virtue and long-run growth metrics probably correlate better than epistemic virtue and short-run growth metrics. But the correlation is still far from perfect.

The correlation gets better when you consider total impact and not just growth.

null @ 2017-01-12T17:53 (+16)

Prediction-making in my Open Phil work does feel like progress to me, because I find making predictions and writing them down difficult and scary, indicating that I wasn't doing that mental work as seriously before :) I'm quite excited to see what comes of it.

null @ 2017-01-13T05:30 (+3)

Wanted to offer something stronger than an upvote for starting the prediction-making: that sounds like a great idea, and I want to see how it goes. :)

null @ 2017-01-12T06:49 (+9)

I like your thoughts and agree with reframing it as epistemic virtue generally instead of just lying. But I think EAs are always too quick to think about behavior in terms of incentives and rational action, especially when talking about each other. In reality, almost no one around here is rationally selfish, some people are rationally altruistic, and most people are probably some combination of altruism, selfishness, and irrationality. But here people are thinking that it's some really hard problem where rational people are likely to be dishonest, and so we need to make it rational for people to be honest, and so on.

We should remember all the ways that people can be primed or nudged to be honest or dishonest. This might be a hard aspect of an organization to evaluate from the outside but I would guess that it's at least as internally important as the desire to maximize growth metrics.

For one thing, culture is important. Who is leading? What is their leadership style? I'm not in the middle of all this meta stuff, but it's weird (coming from the Army) that I see so much talk about organizations but I don't think I've ever seen someone even mention the word "leadership."

Also, who is working at EA organizations? How many insiders and how many outsiders? I would suggest that ensuring a minority of an organization is composed of identifiable outsiders or skeptical people would compel people to be more transparent, just by making them feel like they are being watched. I know that some people have debated various reasons to have outsiders work for EA orgs; well, here's another thing to consider.

I don't have much else to contribute, but all you LessWrong people who have been reading behavioral econ literature since day one should be jumping all over this.

null @ 2017-01-12T14:37 (+5)

I suspect that a crux of the issue about the relative importance of growth vs. epistemic virtue is whether you expect most of the value of the EA community to come from the novel insights and research it produces, or from moving money to the things that are already known about.

In the early days of EA I think that GiveWell's quality was a major factor in getting people to donate, but I think that the EA movement is large enough now that growth isn't necessarily related to rigor -- the largest charities (like Salvation Army or YMCA) don't seem to be particularly epistemically rigorous at all. I'm not sure how closely the marginal EA is checking claims, and I think that EA is now mainstream enough that more people don't experience strong social pressure to justify it.

null @ 2017-01-12T11:43 (+2)

The idea that EA charities should somehow court epistemic virtue among their donors seems to me to be over-asking in a way that will drastically reduce their effectiveness.

No human behaves like some kind of Spock stereotype making all their decisions merely by weighing the evidence. We all respond to cheerleading and upbeat pronouncements and make spontaneous choices based on what we happen to see first. We are all more likely to give when asked in ways which make us feel bad/guilty for saying no or when we forget that we are even doing it (annual credit card billing).

If EA charities insist on cultivating donations only in circumstances where the donors are best equipped to make a careful judgement (e.g., eschewing 'Give Now' impulse donations and fundraising parties with liquor and peer pressure, and insisting on reminding us each time another donation is about to be deducted from our account), they will lose out on a huge amount of donations. Worse, because of the role of overhead in charity work, the lack of sufficient donations will actually make such charities bad choices.

Moreover, there is nothing morally wrong with putting your organization's best foot forward or using standard charity/advertising tactics. Despite the joke, it's not morally wrong to make a good first impression. If there is a trade-off between reducing suffering and improving epistemic virtue, there is no question which is more important, and if that requires implying they are highly effective, so be it.

I mean, it's important that charities are incentivized to be effective, but imagine if the law required every charitable solicitation to disclose the fraction of donations that went into fundraising and overhead. It's unlikely the increased effectiveness that resulted would make up for the huge losses caused by forcing people to face the unpleasant fact that even the best charities can only send a fraction of their donation to the intended beneficiaries.


What EA charities should do, however, is pursue a market segmentation strategy: avoid any falsehoods (as well as annoying behavior likely to result in substantial criticism) when putting a good face on their situation/effectiveness, and make sure detailed, truthful, and complete data and analysis are available for those who put in the work to look for them.

Everyone is better off this way. No one is lied to. The charities get more money and can do more with it. The people who decide to give for impulsive or other less-than-rational reasons can feel good about themselves rather than feeling guilty that they didn't put more time into their charitable decisions. The people who care about choosing the most effective, evidence-backed charitable efforts can access that data and feel good about themselves for looking past the surface. Finally, by having the same institution chase both the smart and the dumb money, the system works to funnel the dumb money toward smart outcomes (charities which lose all their smart money will tend to wither or at least change practices).

null @ 2017-01-12T04:15 (+12)

This issue is very important to me, and I stopped identifying as an EA after having too many interactions with dishonest and non-cooperative individuals who claimed to be EAs. I still act in a way that's indistinguishable from how a dedicated EA might act—but it's not a part of my identity anymore.

I've also met plenty of great EAs, and it's a shame that the poor interactions I've had overshadow the many good ones.

Part of what disturbs me about Sarah's post, though, is that I see this sort of (ostensibly but not actually utilitarian) willingness to compromise on honesty and act non-cooperatively more in person than online. I'm sure that others have had better experiences, so if this isn't as prevalent in your experience, I'm glad! It's just that I could have used stronger examples if I had written the post, instead of Sarah.

I'm not comfortable sharing examples that might make people identifiable. I'm too scared of social backlash to even think about whether outing specific people and organizations would even be a utilitarian thing for me to do right now. But being laughed at for being an "Effective Kantian" because you're the only one in your friend group who wasn't willing to do something illegal? That isn't fun. Listening to hardcore EAs approvingly talk about how other EAs have manipulated non-EAs for their own gain, because doing so might conceivably lead them to donate more if they had more resources at their disposal? That isn't inspiring.

null @ 2017-01-12T04:24 (+10)

I should add that I'm grateful for the many EAs who don't engage in dishonest behavior, and that I'm equally grateful for the EAs who used to be more dishonest, and later decided that honesty was more important (either instrumentally, or for its own sake) to their system of ethics than they'd previously thought. My insecurity seems to have sadly dulled my warmth in my above comment, and I want to be better than that.

null @ 2017-01-12T14:06 (+4)

I find it difficult to combine "I want to be nice and sympathetic and forgiving of people trying to be good people and assume everyone is" with "I think people are not taking this seriously enough and want to tell you how seriously it should be taken". It's easier to be forgiving when you can trust people to take it seriously.

I've kind of erred on the side of the latter today, because "no one criticises dishonesty or rationalisation because they want to be nice" seems like a concerning failure mode, but it'd be nice if I were better at combining both.

null @ 2017-01-13T09:23 (+1)

Thanks. May I ask what your geographic locus is? This is indeed something that I haven’t encountered here in Berlin or online. (The only more recent example that comes to mind was something like “I considered donating to Sci-Hub but then didn’t,” which seems quite innocent to me.) Back when I was young and naive, I asked about such (illegal or uncooperative) options and was promptly informed of their short-sightedness by other EAs. Endorsing Kantian considerations is also something I can do without incurring a social cost.

null @ 2017-01-13T17:43 (+3)

Thank you! I really admired how compassionate your tone was throughout all of your comments on Sarah's original post, even when I felt that you were under attack. That was really cool. <3

I'm from Berkeley, so the community here is big enough that different people have definitely had different experiences than me. :)

null @ 2017-01-13T19:25 (+1)

Oh, thank you! <3 I’m trying my best.

Oh yeah, the Berkeley community must be huge, I imagine. (Just judging by how often I hear about it and from DxE’s interest in the place.) I hope the mourning over Derek Parfit has also reminded people in your circles of the hitchhiker analogy and two-level utilitarianism. (Actually, I’m having a hard time finding out whether Parfit came up with it or whether Eliezer just named it for him on a whim. ^^)

null @ 2017-01-24T08:22 (+1)

The hitchhiker is mentioned in Chapter One of Reasons and Persons. Interestingly, Parfit was more interested in the moral implications than the decision-theory ones.

null @ 2017-02-05T09:54 (+1)

Thanks!

null @ 2017-01-12T14:01 (+10)

One very object-level thing which could be done to make longform, persistent, non-hit-and-run discussion in this particular venue easier: email notifications of comments on articles you've commented on.

There doesn't seem to be a preference setting for that, and it doesn't seem to be default, so it's only because I remember to come check here repeatedly that I can reply to things. Nothing is going to be as good at reaching me as Facebook/other app notifications on my phone, but email would do something.

null @ 2017-01-15T06:01 (+2)

https://github.com/tog22/eaforum/issues/65

null @ 2017-01-13T19:28 (+9)

My thoughts on this are too long for a comment, but I've written them up here - posting a link in the spirit of making this forum post a comprehensive roundup: http://benjaminrosshoffman.com/honesty-and-perjury/

null @ 2017-01-12T17:37 (+9)

I have very mixed feelings about Sarah's post; the title seems inaccurate to me, and I'm not sure about how the quotes were interpreted, but it's raised some interesting and useful-seeming discussion. Two brief points:

null @ 2017-01-11T19:47 (+8)

Copying my post from the Facebook thread:

Some of the stuff in the original post I disagree with, but the ACE stuff was pretty awful. Animal advocacy in general has had severe problems with falling prey to the temptation to exaggerate or outright lie for a quick win today, especially about health, and it's disturbing that apparently the main evaluator for the animal rights wing of the EA movement has already decided to join it and throw out actually having discourse on effectiveness in favour of plundering their reputation for more donations today. A mistake is a typo, or leaving something up accidentally, or publishing something early by accident, and it only counts as mitigation if corrective action was taken once detected. This was at minimum negligence, but given that it's been there for years without the trivial effort being made to fix it, it should probably be regarded as just a lie. ACE needs replacing with a better and actually honest evaluator.

One of the ways this negatively impacted the effectiveness discourse: During late 2015 there was an article written arguing for ethical offsetting of meat eating (http://slatestarcodex.com/.../vegetarianism-for-meat-eaters/), but it used ACE's figures, and so understated the amounts people needed to donate by possibly multiple orders of magnitude.

More concerning is the extent to which the (EDIT: Facebook) comments on this post and the previously cited ones go ahead and justify even deliberate lying ("Yes, but hypothetically lying might be okay under some circumstances, like to save the world, and I can't absolutely prove it's not justified here, so I'm not going to judge anyone badly for lying"), as with Bryd's original post as well. The article sets out a pretty weak case for "EA needs stronger norms against lying" aside from the animal rights wing, but the comments basically confirm it.

I know that answering "How can we build a movement that matches religious movements in output (http://lesswrong.com/.../can_humanism_match_religions.../), how can we grow and build effectiveness, how can we coordinate like the best, how can we overcome that people think that charity is a scam?" with "Have we considered /becoming pathological liars/? I've not proven it can't work, so let's assume it does and debate from there" is fun and edgy, but it's also terrible.

I can think of circumstances where I'd void my GWWC pledge; if they ever pulled any of this "lying to get more donations" stuff, I'd stick with TLYCS and a personal commitment but leave their website.

null @ 2017-01-11T20:03 (+9)

I'm involved with ACE as a board member and independent volunteer researcher, but I speak for myself. I agree with you that the leafleting complaints are legitimate -- I've been advocating more skepticism toward the leafleting numbers for years. But I feel like it's pretty harsh to think ACE needs to be entirely replaced.

I don't know if it's helpful, but I can promise you that there's no intentional PR campaign on behalf of ACE to exaggerate in order to grow the movement. All I see is an overworked org with insufficient resources to double-check all the content on their site.

Judging the character of the ACE staff through my interactions with them, I don't think there was any intent to mislead on leaflets. I'd put it more as negligence arising from over-excitement about the initial studies (despite their many methodological flaws), insufficient skepticism, and not fully thinking through how things would be interpreted (the claim that leafleting evidence is the strongest among AR is technically true). The one particular sentence, among the thousands on the site, went pretty much unnoticed until Harrison brought it up.

null @ 2017-01-11T22:21 (+9)

Thanks for the feedback, and I'm sorry that it's harsh. I'm willing to believe that it wasn't conscious intent at publication time at least.

But it seems quite likely to me from the outside that if they thought the numbers were underestimating, they'd have fixed them a lot faster, and if that's true it's a pretty severe ethics problem. I'm sure it was a matter of "it's an error that's not hurting anyone because charity is good, so it isn't very important", or even just a generic motivation problem in volunteering to fix it, some kind of rationalisation that felt good rather than "I'm going to lie for the greater good" (the only people advocating that outright seem to be other commenters), but it's still a pretty bad ethics issue for an evaluator to succumb to the temptation to defer an unfavourable update.

I think some of this might be that the EA community was overly aggressive in finding them and sort of treating them as the animal charity GiveWell, because EA wanted there to be one, when ACE weren't really aiming to be that robust. A good, robust evaluator's job is to screen out bad studies, to examine other people's enthusiasm and work out how grounded it is, to handle errors transparently (GiveWell does updates that discuss them and such), and to update in response to new information. From that perspective, taking a severely poor study at face value and not correcting it for years, resulting in a large number of people getting wrong valuations, was a pretty huge failing. Making "technically correct" but very misleading statements, which we'd view poorly if they came from a company advertising itself, is also very bad in an organisation whose job is basically to help you sort through everyone else's advertisements.

Maybe the sensible thing for now is to assume that there is no animal charity evaluator that's good enough to safely defer to, and all there are are people who may point you to papers which, caveat emptor, you have to check yourself for now.

null @ 2017-01-12T12:24 (+5)

Maybe I'm being simple about this, but I find it's helpful to point people towards ACE because there don't seem to be any other charity researchers for that cause.

Just by suggesting people donate to organisations that focus on animal farming, that seems like it can have a large impact even if it's hard to pick between the particular organisations.

null @ 2017-01-12T01:34 (+5)

apparently the main evaluator for the animal rights wing of the EA movement has already decided to join it and throw out actually having discourse on effectiveness in favour of plundering their reputation for more donations

This seems like an exaggerated and unhelpful thing to say.

null @ 2017-01-12T13:46 (+5)

Perhaps. It's certainly what the people arguing that deliberate dishonesty would be okay are suggesting, it is what a large amount of online advocacy does, and it is in effect what they did, but they probably didn't consciously decide to do it. I'm not sure how much credit not having consciously decided is worth, though, because that seems to just reward people for not thinking very hard about what they're doing, and they did it from a position of authority and (thus) responsibility.

I stand by the use of the word 'plundering': it's surprising how some people are willing to hum and haw about it maybe being worth it, when doing it deliberately would be a very short-sighted, destroy-the-future-for-money-now act. It calls for such a strong term. And I stand by the position that it would throw out actually having discourse on effectiveness if people played those sorts of games, withheld information that would be bad for causes they think are good, etc., rather than being scrupulously honest. But again, to say they 'decided' to do those things is perhaps not entirely right.

I think in an evaluator, which is in a sense a watchdog for other people's claims, these kinds of things really are pretty serious: it would be scandalous if e.g. GiveWell were found to have been overexcited about something and to have ignored issues with it on this level. Their job is to curb enthusiasm, not just be another advocate. So I think taking it seriously is pretty called for. As I mentioned in a comment below, though, maybe part of the problem is that EA people tried to take ACE as a more robust evaluator than it was actually intending to be, and the consequence should be that they shift to regarding it as a source for pointers whose own statements are to be taken with a large grain of salt, the way individual charities' statements are.

null @ 2017-01-13T00:38 (+11)

ACE's primary output is its charity recommendations, and I would guess that its "top charities" page is viewed ~100x more than the leafleting page Sarah links to.

ACE does not give the "top charity" designation to any organization which focuses primarily on leafleting, and e.g. the page for Vegan Outreach explicitly states that VO is not considered a top charity because of its focus on leafleting and the lack of robust research on that:

We have some concerns that Vegan Outreach has relied too heavily on poor sources of evidence to determine the effectiveness of leafleting as compared to other interventions... Why didn’t Vegan Outreach receive our top recommendation? Although we are impressed with Vegan Outreach’s recent openness to change and their attempts to measure their effectiveness, we still have reservations about their heavy focus on leafleting programs

You are proposing that ACE said negative things about leafleting on its most prominent pages, but left some text buried on a back page saying good things about leafleting, as part of a dastardly plot to increase donations to organizations it doesn't even recommend.

This seems unlikely to me, to put it mildly, but more importantly: it's incredibly important that we assume others are acting in good faith. I disagree with you about this, but I don't think that you are trying to "throw out actually having discourse on effectiveness". This, more than any empirical fact about the likelihood of your hypothesis, is why I think your comment is unhelpful.

null @ 2017-01-13T10:08 (+5)

This definitely isn't the kind of deliberate where there's an overarching plot, but it's not distinguishable from the kind of deliberate where a person sees a thing they should do, or a reason not to write what they're writing, and knowingly ignores it. That said, I'd agree that it's more likely they flinched away unconsciously.

It's worth noting that while Vegan Outreach is not listed as a top charity it is listed as a standout charity, with their page here: https://animalcharityevaluators.org/research/charity-review/vegan-outreach/

I don't think it is good to laud positive evidence but refer to negative evidence only by saying "there is a lack of evidence", which is what the disclaimers do; in particular, there's no mention of the evidence against there being any effect at all. Nor is it good to refer to studies which are clearly entirely invalid as merely "poor" while still relying on their data. It shouldn't be "there is good evidence" when there's evidence for, and "the evidence is still under debate" when there's evidence against, and there shouldn't be a "gushing praise upfront, provisos later" approach unless you feel the praise is still justified after the provisos. And "have reservations" is pretty weak. These are not good acts from a supposedly neutral evaluator.

Until the revision in November 2016, the VO page opened with: "Vegan Outreach (VO) engages almost exclusively in a single intervention, leafleting on behalf of farmed animals, which we consider to be among the most effective ways to help animals.", as an example of this. Even now I don't think it represents the state of affairs well.

If in trying to resolve the matter of whether it has high expected impact or not, you went to the main review on leafleting (https://animalcharityevaluators.org/research/interventions/leafleting/), you'd find it began with "The existing evidence on the impact of leafleting is among the strongest bodies of evidence bearing on animal advocacy methods.".

This is a very central Not Technically a Lie (http://lesswrong.com/lw/11y/not_technically_lying/); the example of a not-technically-a-lie in that post is using the phrase "The strongest painkiller I have." to refer to something with no painkilling properties when you have no painkillers. I feel this isn't something that should be taken lightly:

"NTL, by contrast, may be too cheap. If I lie about something, I realize that I'm lying and I feel bad that I have to. I may change my behaviour in the future to avoid that. I may realize that it reflects poorly on me as a person. But if I don't technically lie, well, hey! I'm still an honest, upright person and I can thus justify visciously misleading people because at least I'm not technically dishonest."

The disclaimer added now helps things, but good judgement should have resulted in an update and correction being transparently issued well before now.

The part which strikes me as most egregious was the deprioritising of updating a review of what was described in a bunch of places as the most cost-effective (and therefore most effective) intervention. I can't see any reason for that, other than that the update would have been negative.

There may not have been conscious intent behind this (I could assume it was a result of poor judgement rather than design), but it did mislead the discourse on effectiveness. That already happened, and not as a result of people doing the best thing given the information available to them, but as a result of poor decisions given that information. Whether it got more donations or not is unclear: it might have tempted more people into offsetting, but on the other hand each person who did offsetting would have paid less, because they wouldn't have actually offset themselves.

However something like this is handled is also how a bad actor would be handled, because a bad actor would be indistinguishable from this; if we let this by without criticism and reform, then bad actors would also be let by without criticism and reform.

I think when it comes to responding to some pretty severe stuff of this sort, even if you assume the people involved acted in good faith and just had some rationality failings, more needs to be said than "mistakes were made, we'll assume you're doing the best you can to not make them again". I don't have a grand theory of how people should react here, but it needs to be more than that.

My inclination is to at the least frankly express how severe I think it is, even if it's not the nicest thing I could say.

null @ 2017-01-13T23:48 (+2)

Thanks for the response, it helps me understand where you're coming from.

I agree that the sentence you cite could be better written (and in general ACE could improve, as could we all). I disagree with this though:

However something like this is handled is also how a bad actor would be handled, because a bad actor would be indistinguishable from this; if we let this by without criticism and reform, then bad actors would also be let by without criticism and reform.

At the object level: ACE is distinguishable from a bad actor, for example due to the fact that their most prominent pages do not recommend charities which focus on leafleting.

At the meta level: I don't think we should have a conversational norm of "everyone should be treated as a bad actor until they can prove otherwise". It would be really awful to be a member of a community with that norm.

All this being said, it seems that ACE is responding in this post now, and it may be better to let them address concerns since they are both more knowledgeable and more articulate than me.

null @ 2017-01-13T20:04 (+2)

in particular there's no mention of the evidence against there being any effect at all.

To be clear, it's inaccurate to describe the studies as showing evidence of no effect. All of the studies are consistent with a range of possible outcomes that includes no effect (and even a negative effect!), but they're also consistent with a positive effect.

That isn't to say that there is a positive effect.

But it isn't to say there's a negative effect either.

I think it is best to describe this as a "lack of evidence" one way or another.

-

I don't think it is good to laud positive evidence but refer to negative evidence only via saying "there is a lack of evidence", which is what the disclaimers do

I don't think there's good evidence that anything works in animal rights and if ACE suggests anything anywhere to the contrary I'd like to push against it.

null @ 2017-01-12T01:37 (+5)

Since there are so many separate discussions surrounding this blog post, I'll copy my response from the original discussion:

I’m grateful for this post. Honesty seems undervalued in EA.

An act-utilitarian justification for honesty in EA could run along the lines of most answers to the question, “how likely is it that strategic dishonesty by EAs would dissuade Good Ventures-sized individuals from becoming EAs in the future, and how much utility would strategic dishonesty generate directly, in comparison?” It’s easy to be biased towards dishonesty, since it’s easier to think about (and quantify!), say, the utility the movement might get from having more peripheral-to-EA donors, than it is to think about the utility the movement would get from not pushing away would-be EAs who care about honesty.

I’ve [rarely] been confident enough to publicly say anything when I’ve seen EAs and ostensibly-EA-related organizations acting in a way that I suspect is dishonest enough to cause significant net harm. I think that I’d be happy if you linked to this post from LW and the EA forum, since I’d like for it to be more socially acceptable to kindly nudge EAs to be more honest.

null @ 2017-01-18T14:37 (+4)

In the interest of completeness: Sarah posted a follow-up to her post, Reply to Criticism on my EA Post.

null @ 2017-01-13T23:21 (+3)

I was definitely disappointed to see that post by Sarah. It seemed to defect from good community norms, such as attempting to interpret people generously, in favour of quoting people out of context. She seems to be applying such rigorous standards to other people, yet applying rather loose standards to herself.

Nathan Young @ 2022-08-16T21:31 (+2)

Good piece

While I think it's good to expect people to have read the same central set of works, I do think we lose out by not being able to synthesise discussions. Why isn't there a single community post with the state of the art on this discussion and where the key disagreements are? It's understandable that I should have to find the various articles, but why not make it easier for me?

Raemon @ 2022-08-17T01:41 (+2)

I'm not sure what you're imagining, in terms of an overall infrastructural update here. But here's a post that is in some sense a follow-up to this:

https://www.lesswrong.com/posts/FT9Lkoyd5DcCoPMYQ/partial-summary-of-debate-with-benquo-and-jessicata-pt-1 

null @ 2017-01-17T05:38 (+2)

I was overall a bit negative on Sarah's post, because it demanded a bit too much attention (e.g. the title) and seemed somewhat polemic. It was definitely interesting, and I learned some things.

I find the most evocative bit to be the idea that EA treats outsiders as "marks".
This strikes me as somewhat true, and sadly short-sighted WRT movement building. I do believe in the ideas of EA, and I think they are compelling enough that they can become mainstream.

Overall, though, I think it's just plain wrong to argue for an unexamined idea of honesty as some unquestionable ideal. I think doing so as a consequentialist, without a very strong justification, itself smacks of disingenuousness and seems motivated by the same phony and manipulative attitude towards PR that Sarah's article attacks.

What would be more interesting to me would be a thoughtful survey of potential EA perspectives on honesty, but an honest treatment of the subject does seem to be risky from a PR standpoint. And it's not clear that it would bring enough benefit to justify the cost. We probably will all just end up agreeing with common moral intuitions.

null @ 2017-01-12T12:46 (+1)

As for the issue of acquiring power/money/influence and then using it to do good, it is important to be precise here and distinguish several questions:

1) Would it be a good thing to amass power/wealth/etc. (perhaps deceptively) and then use those to do good?

2) Is it a good thing to PLAN to amass power/wealth/etc. with the intention of "using it to do X", where X is a good thing?

2') Is it a good thing to PLAN to amass power/wealth/etc. with the intention of "using it to do good"?

3) Is it a good idea to support (or not object to) others who profess to be amassing wealth/power/etc. to do good?

Once broken down this way, it is clear that while 1 is obviously true, 2 and 3 aren't. Lacking the ability to perfectly bind one's future self means there is always the risk that you will instead use your influence/power for bad ends. 2' raises further concerns as to whether what you believe to be good ends really are good ends. This risk is compounded in 3 by the possibility that the people are simply lying about the good ends.

Once we are precise in this way, it is clear that it isn't the in-principle approval of amassing power to do good that is at fault, but rather the trustworthiness/accuracy of those who undertake such schemes.


Having said this, some degree of amassing power/influence as a precursor to doing good is probably required. The risks simply must be weighed against the benefits.

null @ 2017-01-11T21:48 (+1)

Why Our Kind Can't Cooperate (Eliezer Yudkowsky)

Note to casual viewers that the content of this is not what the title makes it sound like. He's not saying that rationalists are doomed to ultimately lie to and cheat each other, just that here are some reasons why cooperation has been hard.

From the recent Sarah Constantin post

Wouldn’t a pretty plausible course of action be “accumulate as much power and resources as possible, so you can do even more good”?

Taken to an extreme, this would look indistinguishable from the actions of someone who just wants to acquire as much power as possible for its own sake. Actually building Utopia is always something to get around to later; for now you have to build up your strength, so that the future utopia will be even better.

Lying and hurting people in order to gain power can never be bad, because you are always aiming at the greater good down the road, so anything that makes you more powerful should promote the Good, right?

Obviously, this is a terrible failure mode.

I don't buy this logic. Obviously there's a huge difference between taking power and then expending effort on positive activities, and taking power and never giving it up at all. Suppose that tomorrow we all found out that a major corporation was the front for a shady utilitarian network that had accumulated enough power and capital to fill all current EA funding gaps, or something like that. Since at some point you actually do accomplish good, it's clearly not indistinguishable.

I mean, you can keep kicking things back and say "why not secretly acquire MORE power today and wait till tomorrow, and then you'll never do any good?", but there are obvious empirical limitations to that, and besides, it's a problem of decision theory which is present across all kinds of things and doesn't have much to do with gaining power in particular.

In practical terms, people (not EAs) who try to gain power with future promises of making things nicer are often either corrupt or corruptible, so we have that to worry about. But it's not sufficient to show that the basic strategy doesn't work.

...

{epistemic status: extremely low confidence}

The way I see a lot of these organizational problems where they seem to have controversial standards and practices is that core people are getting just a little bit too hung up on EA This and EA That and Community This and Community That... in reality what you should do is take pride in your organization, those few people and resources you have in your control or to your left and right, and make it as strong as possible. Not by cheating to get money or anything, but by fundamentally adhering to good principles of leadership, and really taking pride in it (without thinking about overall consequences all the time).

If you do that, you probably won't have these kinds of problems, which seem to be kind of common whenever the organization itself is made subservient to some higher ideal (e.g. cryonics organizations, political activism, religions). I haven't been inside these EA organizations so I don't know how they work, but I know how good leadership works in other places and that's what seems to be different.

It probably sounds obvious that everyone in an EA organization should run it as well as they can, but after I hear about these occasional issues I get the sense that it's kind of important to just sit and meditate on that basic point instead of always talking about the big blurry community.

To succeed at our goals:

I'd agree with all that. It all seems pretty reasonable.

null @ 2017-01-11T22:10 (+2)

I think that the main point here isn't that the strategy of building power and then doing good never works, so much as that someone claiming that this is their plan isn't actually strong evidence that they're going to follow through, and that it encourages you to be slightly evil more than you have to be.

I've heard other people argue that that strategy literally doesn't work, making a claim roughly along the lines of "if you achieved power by maximizing influence in the conventional way, you wind up in an institutional context which makes pivoting to do good difficult". I'm not sure how broadly this applies, but it seems to me to be worth considering. For instance, if you become a congressperson by playing normal party politics, it seems to be genuinely difficult to implement reform and policy that is far outside of the political Overton window.

null @ 2017-01-11T22:25 (+1)

I think that the main point here isn't that the strategy of building power and then doing good never works, so much as that someone claiming that this is their plan isn't actually strong evidence that they're going to follow through,

True. But if we already know each other and trust each other's intentions, then it's different. Most of us have already done extremely costly activities without clear gain, as altruists.

and that it encourages you to be slightly evil more than you have to be.

Maybe, but this is common folk wisdom, and you should demand more applicable psychological evidence rather than assuming it's actually true to a significant degree, especially among the atypical subset of the population which is core to EA. Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people's thinking.

null @ 2017-01-12T03:12 (+3)

But if we already know each other and trust each other's intentions then it's different. Most of us have already done extremely costly activities without clear gain as altruists.

That signals altruism, not effectiveness. My main concern is that the EA movement will not be able to maintain the epistemic standards necessary to discover and execute on abnormally effective ways of doing good, not primarily that people won't donate at all. In this light, concerns about core metrics of the EA movement are very relevant. I think the main risk is compromising standards to grow faster rather than people turning out to have been "evil" all along, and I think that growth at the expense of rigor is mostly bad.

Being at all intellectually dishonest is much worse for an intellectual movement's prospects than it is for normal groups.

instead of assuming that it's actually true to a significant degree

The OP cites particular instances where she thinks this accusation is true; I'm not worried that this is merely likely in the future, I'm worried that it already happens.

Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people's thinking.

I agree, but I think more likely ways of dealing with the issues involve more credible signals of dealing with the issues than just saying that they should be solvable.

null @ 2017-01-12T03:45 (+1)

I think the main risk is compromising standards to grow faster rather than people turning out to have been "evil" all along, and I think that growth at the expense of rigor is mostly bad.

Okay, so there's some optimal balance to be had (there are always ways you can be more rigorous and less growth-oriented, towards a very unreasonable extreme), and we're trying to find the right point, so we can err on either side if we're not careful. I agree that dishonesty is very bad, but I'm just a bit worried that if we all start treating errors on one side as a large controversy, then we're going to miss the occasions where we err on the other side and go a little too far, because we get really strong and socially damning feedback on one side and nothing on the other.

The OP cites particular instances of cases where she thinks this accusation is true -- I'm not worried that this is likely in the future, I'm worried that this happens.

To be perfectly blunt and honest, it's a blog post with some anecdotes. That's fine for saying that there's a problem to be investigated, but not for drawing conclusions about particular causal mechanisms. We don't have an idea of how these people's motivations changed (maybe they had the exact same plans before coming into their positions, maybe they become more fair and careful the more experience and power they get).

Anyway, the reason I said that was just to defend the idea that obtaining power can be good overall, not that there are no such problems associated with it.