Four practices where EAs ought to course-correct

By kbog @ 2019-07-30T05:48 (+52)

Here are some areas where I've felt for a long time that fellow EA community members are making systematic mistakes.

Summary:
1. Don't worry much about diet change.
2. Be generally cynical or skeptical about AI ethics and safety initiatives that are not closely connected to the core long-run issue of AGI alignment and international cooperation.
3. Worry more about object-level cause prioritization and charity evaluation, and less about meta-level methodology.
4. Be more ruthless in promoting Effective Altruism.

Over-emphasis on diet change

EAs seem to place consistently high emphasis on adopting vegan, vegetarian, and reducetarian diets.

However, the benefits of going vegan are equivalent to less than a nickel per day donated to effective charities. Other EAs have raised this point before; the only decent response given at the time was that the estimates for the effectiveness of animal charities were likely over-optimistic. However, in the linked post I took the numbers displayed by ACE in 2019 and scaled them back a few times to be conservative, so it would be tough to argue that they are over-optimistic. I also used conservative estimates of climate change charities to offset the climate impacts, and toyed with using climate change charities to offset animal suffering via the fungible welfare estimates (I didn't post that part, but it's easy to replicate). In both cases, the vegan diet still comes out as only as good as donations of pennies per day, suggesting that there is nothing particularly optimistic about animal charity ratings; it's just the nature of individual consumption decisions to have a tiny impact. And then we have to contend with other effective charities, such as those in x-risk and global poverty alleviation, possibly being better than animal and climate change charities. Therefore, this response is now very difficult to substantiate.
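
To make the structure of that comparison concrete, here is a minimal sketch of the offset arithmetic, with purely illustrative placeholder numbers rather than the ACE-derived figures from the linked post:

```python
# Sketch of the diet-vs-donation comparison (placeholder numbers only,
# not the ACE-derived estimates used in the linked post).
animals_spared_per_year_by_diet = 100    # assumed direct effect of one vegan diet
dollars_per_animal_spared = 0.10         # assumed cost-effectiveness of an animal charity

equivalent_donation_per_day = (
    animals_spared_per_year_by_diet * dollars_per_animal_spared
) / 365

print(f"Going vegan ~= donating ${equivalent_donation_per_day:.3f}/day to this charity")
# With these placeholder inputs the diet is worth a few cents per day; the
# post's claim is that realistic inputs land in the same pennies-per-day range.
```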

The basic absolute merit of veganism is of course not being debated here - it saves a significant number of animals, which is sufficient to prefer that a generic member of society be vegan (given current farming practices at least).

However, the relative impact of other efforts seems to be much, much higher, so there are further implications. First, putting public emphasis on being vegan/vegetarian is a bad choice compared to placing that emphasis on donations (or career changes, etc.). This study suggests that nudges to "turn off the lights" and so on can reduce people's support for a carbon tax, as people feel there is an alternative, easier solution for the environment besides legislation. What if a similar effect applies to animal welfare legislation or donations? The effect goes away when people know just how little of an impact they are actually having, but such messages are rarely given when it comes to veg*n activism - even when EAs are doing it. In addition to a possibly detrimental impact on the political attitudes and donation habits of our audience (committed EAs themselves, almost certainly, are not so vulnerable to these nudges), there is a risk that it reduces the popular appeal of the EA movement. While veg*nism seems to be significantly more accepted in public discourse now than it was ~10 years ago, it's still quite controversial.

Second, actually being vegan/vegetarian may be a bad choice for someone who is doing productive things with their career and donations. If a veg*n diet is slightly more expensive, more time consuming, or less healthy, then adopting it is a poor choice. Of course, many people have adequately pointed out that veg*n diets need not be more expensive, time consuming, or unhealthy than omnivorous diets. However, it's substantially more difficult to make them satisfy all three criteria at the same time. As for expense and time consumption - that's really something for people to decide for themselves, based on their local food options and habits. As for health:

Small tangent on the healthiness of vegan/vegetarian diets

I am not a nutritionist but my very brief look at the opinions of expert and enthusiast nutritionists and the studies they cite has told me that the healthiest diet is probably not vegetarian.

First, not all animal products are equal, and the oft-touted pro-veg*n studies overlook these differences. Many of the supposed benefits of veg*n diets seem to come from the exclusion of processed meat, which is meat that has been treated with modern preservatives, flavorings, etc. This really is backed up by studies, not just anti-artificial sentiment. Good studies looking at the health impacts of unprocessed meat (which, I believe, generally includes ground beef) are rare. I've only found one, a cohort study, and it did find that unprocessed red meat increased mortality, but not as much as processed red meat. Whether unprocessed white meat and fish have detrimental impacts seems like a very open question. And even when it comes to red meat, nutritional findings backed by evidence as strong as this have, I believe, been overturned in the past. Then there are a select few types of meat which seem particularly healthy, like sardines, liver, and marrow, and there is still less reason to believe that they are harmful. Moving on to dairy products, it seems that fermented dairy products are significantly superior to nonfermented ones.

Second, vegan diets miss out on creatine, omega-3 fat in its proper EPA/DHA form, vitamin D, taurine, and carnosine. Dietary intake of these is not generally necessary for a basically decent life as far as I know, but being fully healthy (longest working life + highest chance of living to a longevity horizon + best cognitive function) is a different story, and these chemicals are variously known or hypothesized to be beneficial. You can of course supplement, but at the cost of extra time and money - and that's assuming you remember to supplement. For some people who are simply bad at keeping habits - me, at least - supplementing for an important nutrient just isn't a reliable option; I can set my mind to do it but predictably fail to keep up with it.

Third, vegan/vegetarian diets reduce your flexibility to make other healthy changes. As an omnivore, it's pretty easy for me to minimize or avoid unhealthy foods such as store-bought bread (with so many preservatives, flavorings, etc.) and fortified cereal. As a vegetarian or vegan, this would be significantly more difficult. When I was vegan and when I was vegetarian, I made it work both times by eating some less-than-healthy foods; otherwise I would have had to spend more time and/or money putting my diet together.

Finally, nutritional science is frankly a terrible mess, and not necessarily due to ill motives and practices on the part of researchers (though there is some of that) but also because of just how difficult it is to tease out correlation from causation in this business. There's a lot that we don't understand, including chemicals that may play a valuable health role but haven't been properly identified as such. Therefore, in the absence of clear guidance it's wise to defer to eating (a) a wide variety of foods, which is enhanced by including animal products, and (b) foods that we evolved to eat, which has usually included at least a small amount of meat.

For these reasons, I weakly feel that the healthiest diet will include some meat and/or fish, and feel it more strongly if we consider that someone is spending only a limited amount of time and money on their diet. Of course that doesn't mean that a typical Western omnivorous diet is superior to a typical Western veg*n diet (it probably isn't).

Too much enthusiasm for AI ethics

The thesis of misaligned AGI risk, developed by researchers like Yudkowsky and Bostrom, has motivated a rather wide range of efforts to establish near-term safety and ethics measures in AI. The idea is that by starting conversations and institutions and regulatory frameworks now, we're going to be in a better position to build safe AGI in the future.

There is some value in that idea, but people have taken it too far and willingly signed onto AI issues without a clear benefit for long-run AI safety or even for near-term AI use in its own right. (I've been guilty of this.) The problem is a lack of good reason to believe that better outcomes are achieved when people put a greater emphasis on AI ethics. Most people outside of EA do not engage in robust consequentialist analysis for ethics. One example would be the fact that Google's ethics board was dissolved because of outrage against the inclusion of the conservative Kay Coles James, largely on the basis of her views on gender politics; an EA writing for Vox, Kelsey Piper, mildly fanned the flames by describing (but, commendably, not endorsing) the regular outrage while simultaneously taking Google to task for not assigning substantial power to the ethics board. Yet it's not really clear whether a powerful ethics board - especially one composed only of people approved by Google's constituency - is desirable, as I shall argue. An example of AI ethics boards in action is the report produced by the ethics board at the policing technology company Axon, which recommended against using facial recognition technology on body cams. While it purports to perform a "cost-benefit analysis", and included the participation of Miles Brundage, who is affiliated with the EA community, the recommendation was developed on a wholly rhetorical and intuitive basis, without any quantification or explicit qualitative comparison of costs and benefits. It had a dubious and partisan emphasis on improving the relative power and social status of racial minorities, as opposed to a cleaner emphasis on improving aggregate welfare, and an utterly bizarre omission of the benefit that facial recognition tech could make it easier to identify suspects and combat crime. My attempts to question two of the authors about some of these problems led nowhere.

EAs have piled onto the worries over "killer robots" without adequate supporting argument. I have seen EAs circulate half-baked fears of suicide drones making it easy to murder people (just carry a tennis racket, or allow municipalities to track or ban drone flights if they so choose), or assassinate political leaders (they already speak behind bullet-resistant plexiglass sometimes; this is not a problem), or overwhelm defenses (just use turrets with lasers or guns; every measure has a countermeasure). As I argued here, introducing AI into international warfare does not seem bad overall. This point was generally accepted; the remaining quarrel was that AI could facilitate more totalitarian rule, as the government could take domestic actions without the consent of human police/militaries. I think this argument is potentially valid but unresolved; maybe stronger policing is better for countries - it needs more investigation. These robots will be subject to democratic oversight and approval, not totalitarian command. When unethical police behavior is restrained, it is almost always done by public outrage and oversight, not by freethinking police officers disobeying their orders.

For a more extreme hypothesis, Ariel Conn at FLI has voiced the omnipresent Western fear of resurgent ethnic cleansing, citing the ease of facial recognition of people's race - but has that ever been the main obstacle to genocide? Moreover, the idea of thoughtless machines dutifully carrying out a campaign of mass murder takes a rather lopsided view of the history of ethnic cleansing and genocide, where the real death and suffering is not mitigated by the presence of humans in the loop more often than it is caused or exacerbated by human passions, grievances, limitations, and incompetency. To be clear, I don't think weaponized AI would make the risks of genocide or ethnic cleansing smaller; there just seems to be no good reason to expect it to make the risks bigger.

On top of all this, few seem to have seriously grappled with the fact that we only have real influence in the West, and producing fewer AI weapons mainly just means fewer AI weapons in the West. You can wish for a potent international treaty, but even if that pans out (history suggests it probably won't), it doesn't change the fact that EAs and other activists are incorrectly calling to stop AI weapon development now. And better weapons for the West do mean better global outcomes - especially now that the primary question for Western strategic thinkers is probably not about expanding or even maintaining a semblance of Western global hegemony, but just determining how much Western regional security and influence can be saved from falling victim to rising Russian, Chinese, and other challenges. But even when the West was engaging in very dubious wars of global policing (Vietnam, Iraq), it still seems that winning a bad war would have been much better than losing a bad war. Even Trump's recently speculated military adventures in Venezuela and Iran, had they occurred, would have been less bad if they resulted in American victory rather than American defeat. True, there is moral hazard involved in giving politicians better tools to commit to bad policies, but on my intuition that seems unlikely to outright outweigh the benefits of success - it would just partially counterbalance them. (Piper, writing for Vox, did mention improved military capability as a benefit of AI weapons.)

So generally speaking, giving more power to philosophers and activists and regulators to restrict the development and applications of AI doesn't seem to lead anywhere good in the short or medium run. EA-dominated institutions would be mostly trustworthy to do it well (I hesitate slightly because of FLI's persistent campaigning against AI weaponry), but an outside institution/network with a small amount of EA participation (or even worse, no EA participation) is a different story.

The real argument for near-term AI oversight is that it will lead to better systems in the long run. But I am rather skeptical that, in the long run, we will suffer from a dearth of public scrutiny of AI ethics and safety. AI ethics and safety for current systems is not neglected; arguably it's over-emphasized at the expense of liberty and progress. Why think it will be neglected in the future? As AI advances and proliferates, it will likely gain more public attention, and by the time that AGI comes around, we may well find ourselves being restrained by too much caution and interference from activists and philosophers. Of course Bostrom and Yudkowsky's thesis on AGI misalignment will not be so neglected when people see AI on the verge of surpassing humans! Yes, AI progress can be unexpectedly rapid, so there may be some neglect, but there will still be less neglect than there is now. And faster AGI rollout could be preferable, because AI might reduce global risk, or because Bostrom's 'astronomical waste' argument for great caution at the expense of growth is flawed. I think it likely is, because it relies on the debatable assumptions of (a) existential risks being concentrated in the near/medium-term future and (b) a logistic (as opposed to exponential) growth in the value of humanity as time goes by. Tyler Cowen has argued that growth is comparably important to risk management. Nick Beckstead raises further doubts about the astronomical waste argument. Therefore, even AGI/ASI rollout should arguably follow the status quo or be accelerated, so more ethics/safety oversight and regulation on the margin will possibly be harmful.

To be sure, international institutions for cooperation on AI and actual alignment research, ahead of time, are both robustly good things where we can reliably expect society to err on the side of doing too little. But the other stuff has minimal or possibly negative value.

Top-heavy emphasis on methodology at the expense of object level progress (edit: OK, few people are actually inhibited by this, not a big deal)

It pains me to see so much effort going into writeups and arguments along the lines of EA needs more of [my favorite type of research] or EA needs to rely less on quantitative expected value estimates and so on. This is often cheap criticism which doesn't really lead anywhere but intractable arguments, and can weaken the reputation of the EA movement. This seems reminiscent of the perennial naive-science versus philosophy-of-science wars, but where most science fields seem to have fifty scientists for every philosopher of science, we seem to have two or three EA researchers for every methodology-of-EA philosopher. Probably an exaggeration but you get the point.

EA has made exactly one major methodological step forward since its beginnings, which was identifying the optimizer's curse about eight years ago, something which had the benefit of a mathematical proof. I can't think of any meta level argument that has substantially contributed to the EA cause prioritization and charity evaluation process since then. I at least have not benefited from other such arguments. To be clear, such inquiry is better than nothing. But what's much better is for people to engage in real, object level arguments about causes and charities. If you think that EA can benefit by paying more attention to, say, psychoanalytic theory, then great! Don't tell us or berate us about it; instead, lead by example, use psychoanalytic theory, and show us what it says about a charity or cause area. If you're right about the value of this research methodology, then this should be easy for you to do. And then we will see your point and we'll know how to look into this research for more ideas. This is very similar to Noah Smith's argument on the two-paper rule. It's a much more epistemically and socially healthy way of doing things. And along the way, we can get directly useful information about cause areas that we may be missing. Until then, don't write me off as an ideologue just because I'm not inclined to spend my limited free time struggling through Deleuze and Guattari.
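
For readers unfamiliar with it, the optimizer's curse can be shown with a minimal simulation (illustrative numbers only, not tied to any particular charity evaluation): when you pick the option with the highest noisy estimate, that estimate systematically overstates the option's true value.

```python
import numpy as np

# Minimal illustration of the optimizer's curse.
rng = np.random.default_rng(0)
n_trials, n_options, noise_sd = 10_000, 20, 2.0

gaps = []
for _ in range(n_trials):
    true_values = rng.normal(0.0, 1.0, n_options)                   # unknown true values
    estimates = true_values + rng.normal(0.0, noise_sd, n_options)  # noisy evaluations
    pick = np.argmax(estimates)                                     # choose the apparent best
    gaps.append(estimates[pick] - true_values[pick])

print(f"Average overestimate for the chosen option: {np.mean(gaps):.2f}")
# Positive on average, even though each individual estimate is unbiased.
```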

Not ruthless enough

This post suggested the rather alarming idea that EA's growth is petering out in a sort of logistic curve. It needs to be taken very seriously. In my biased opinion, this validates some of my longtime suspicions that EAs are not doing enough to actively promote EA as something to be allied with. We've been excessively nice and humble to criticisms, and allowed outsiders' ideas to dominate public conversations about EA. We've over-estimated the popular appeal that comes from being unusually nice and deferential, neglected the popular appeal that comes from strength and condemnation, imagined everything in terms of 'mistake theory' instead of developing a capacity to wield 'conflict theory', and assumed that the popular human conception of "ethics" and "niceness" was as neurotic, rigid and impartial as the upper class urban white Bay Area/Oxford academic conception of "ethics" and "niceness". In today's world, people don't care how "ethical" or "nice" you are if you are on the wrong team, and people who don't have a team won't be motivated to action unless you give them one.

I can't spell out more precisely what I think EAs should do differently, not because I'm trying to be coy about some unspeakable subversive plot, but because every person needs to look in their own life and environment to decide for themselves what they should do to develop a more powerful EA movement, and this is going to vary person to person. Generally speaking, I just think EAs should have a change in mindset and take leaves out of the books of more powerful social movements. We should absolutely be very nice and fair to each other, and avoid some of the excesses of hostility displayed by other social movements, but there's more to the issue than that.


John_Maxwell_IV @ 2019-08-03T05:48 (+45)

Given that ruthlessness has downside risks, maybe we should brainstorm a number of new ideas for movement growth (assuming movement growth is, in fact, valuable) instead of jumping straight to ruthlessness?

In today's world, people don't care how "ethical" or "nice" you are if you are on the wrong team, and people who don't have a team won't be motivated to action unless you give them one.

This is a terrible incentive gradient. I would much rather we make an EA project out of changing or mitigating this incentive gradient than give in to it.

Yes, we could have a large number of people who call themselves "EAs", and all they care about is whether you are on the right team... but would it be an EA movement worth the name?

Please read this post: https://www.effectivealtruism.org/articles/hard-to-reverse-decisions-destroy-option-value/

aarongertler @ 2019-07-31T22:19 (+30)

I work for CEA, but these views are my own.

Ruthlessness comment:

Short version of my long-winded response: I agree that promotion is great and that we should do more of it if we see growth slowing down, but I don't see an obvious reason why promotion requires "ruthlessness" or more engagement with criticism.

It is likely that popular appeal can help EA achieve some of its aims; it could grow our talent pool, increase available fundraising dollars, and maybe help us push through some of our policy projects.

On the other hand, much of the appeal that EA already has is tied to the way it differs from other social movements. Being "nice" in a Bay Area/Oxford sense has helped us attract hundreds of skilled people from around the world who share that particular taste (and often wound up moving to Oxford or the Bay Area). How many of these people would leave, or never be found at all, if EA shifted in the direction of "ruthlessness"?

----

But this all feels like I'm nitpicking at one half of your point. I'm on board with this:

Every person needs to look in their own life and environment to decide for themselves what they should do to develop a more powerful EA movement, and this is going to vary person to person.

Some people are really good at taking critics apart, and more power to them. Even more power to people who can produce wildly popular pro-EA content that brings in lots of new people; Peter Singer has been doing this for decades, and people like Julia Galef and Max Roser and Kelsey Piper are major assets.

But "being proud of EA and happy to promote it" doesn't have to mean "getting into fights". Total ignorance of EA is a much larger (smaller?) bottleneck to our growth than "misguided opposition that could be reversed with enough debate".

So far, the "official"/"formal" EA approach to criticism has been a mix of "polite acknowledgement as we stay the course", "crushing responses from good writers", and "ignoring it to focus on changing the world". This seems basically fine.

What leads you to believe that the problem of "growth tapering off" is linked to "insufficient ruthlessness" rather than "insufficient cheerful promotion without reference to critics"?

kbog @ 2019-08-01T09:18 (+8)
  • This need not be about ruthlessness directed right at your interlocutor, but rather towards a distant or ill-specified other.
  • I think it would be uncontroversial that a better approach is not to present yourself as authoritative, but instead present a conception of general authority in EA scholarship and consensus, and demand that it be recognized, engaged with, cited and so on.
  • Ruthless content drives higher exposure and awareness in the very first place.
  • There seems to be an inadequate sticking rate among people who are just exposed to EA; consider, for instance, the high school awareness project.
  • Also, there seems to be a shortage of new people who will gather other new people. When you just present the nice message, you get a wave of people who may follow EA in their own right but don't go out of their way to continue pushing it further, because it was presented to them merely as part of their worldview rather than as part of their identity. (Consider whether the occasionally popular phrase "aspiring Effective Altruist" obstructs one from having a real EA identity.) How much movement growth is being done by people who joined in the recent few years compared to the early core?
beth​ @ 2019-07-30T11:20 (+23)
For a more extreme hypothesis, Ariel Conn at FLI has voiced the omnipresent Western fear of resurgent ethnic cleansing, citing the ease of facial recognition of people's race - but has that ever been the main obstacle to genocide? Moreover, the idea of thoughtless machines dutifully carrying out a campaign of mass murder takes a rather lopsided view of the history of ethnic cleansing and genocide, where the real death and suffering is not mitigated by the presence of humans in the loop more often than it is caused or exacerbated by human passions, grievances, limitations, and incompetency.

I am not a historian, but during the Nazi regime, the Netherlands had among the highest percentages of Jews killed in all of Western Europe. I remember historians blaming this on the Dutch having thorough records of who the Jews were and where they lived. Access to information is definitely a big factor in how successful a genocidal regime can be.

The worry is not so much about killer robots enacting a mass murder campaign. The worry is that humans will use facial recognition algorithms to help state-sanctioned ethnic cleansing. This is not a speculative worry. There are a lot of papers on Uyghur facial recognition.

kbog @ 2019-07-30T17:59 (+4)

But who is talking about banning facial recognition itself? It is already too widespread and easy to replicate.

beth​ @ 2019-07-30T20:39 (+14)

Just in the past weeks, San Francisco, Oakland and Cambridge.

kbog @ 2019-07-30T23:21 (+5)

Okay, very well then. But if a polity wanted to do something really bad like ethnic cleansing, it would just allow facial recognition again and get it easily from elsewhere. If a polity is liberal and free enough to keep facial recognition banned, then it will not tolerate ethnic cleansing in the first place.

It's like the Weimar Republic passing a law forbidding the use of Jewish Star armbands. Could provide a bit of beneficial inertia and norms, but not much besides that.

beth​ @ 2019-07-31T09:51 (+10)

As per my initial comment, I'd compare it to pre-WWII Netherlands banning government registration of religion. It could have saved tens of thousands of people from deportation and murder.

kbog @ 2019-08-01T09:28 (+6)

OK, sounds like the biggest issue is not the recognition algorithm itself (can be replicated or bought quickly) but the acquisition of databases of people's identities (takes time and maybe consent earlier on). They can definitely come together, but otherwise, consider the possibilities (a) a city only uses face recognition for narrow cases like comparing video footage to a known suspect while not being able to do face-rec for the general population, and (b) a city has profiles and the ability to identify all its citizens for some other purpose but just doesn't have the recognition algorithms (yet).

Larks @ 2019-08-03T16:43 (+4)

It seems like a big distinction between the two lies in how quickly they could be rolled out. A pre-WWII database of religion would have taken a long time to create, so pre-emptively not creating one significantly inhibited the Germans, while the US already had the census data so could intern the Japanese. But it doesn't seem likely that not using facial recognition now would make it significantly harder to use later.

aarongertler @ 2019-07-31T21:27 (+21)

I found this to be thought-provoking and I'm glad you posted it. With that in mind, this list of points will skew a bit critical, as I'm more interested to see responses in cases where I disagree.

Diet change comment:

Larks @ 2019-07-30T10:56 (+17)

it's pretty easy for me to minimize or avoid unhealthy foods such as ... fortified cereal

Sorry for the tangent to the main point of the post, but is fortified cereal bad? I had assumed that public health authorities + food companies were adding useful nutrients that most people's diets lacked.

kbog @ 2019-07-30T17:56 (+16)

To be sure, it is better than unfortified cereal (ceteris paribus), but fortified cereals still usually have a lot of refined grains + added sugar.

aarongertler @ 2019-07-31T21:48 (+13)

Methodology comment:


Regarding this claim:

EA has made exactly one major methodological step forward since its beginnings, which was identifying the optimizer's curse about eight years ago, something which had the benefit of a mathematical proof.

I appreciate that you went on to qualify this statement, but I'd still have appreciated some more justification. Namely, what are some popular ideas that many people thought were a step forward, but that you believe were not?

If methodological ideas generally haven't been popular, EA wouldn't be emphasizing methodology; if they were popular, I'd be curious to see any other writing you've done on reasons you don't think they helped. (I realize that would be a lot of work, and it may not be a good use of your time to satisfy my curiosity.)

When I look at the top ~50 Forum posts of all time (sorted by karma), I only see one that is about methodology, and it's not as much prescriptive as it is descriptive ("EA is biased towards some methodologies, other methodologies exist, but I'm not actively recommending any particular alternatives"). Almost all the posts are about object-level research or community work, at least as far as I understand the term "object-level".

I can only think of a few cases when established EA orgs/researchers explicitly recommended semi-novel approaches to methodology, and I'm not sure whether my examples (cluster thinking, epistemic modesty) even count. People who recommend, say, using anthropological methods in EA generally haven't gotten much attention (as far as I can recall).

kbog @ 2019-08-01T08:53 (+4)

I am also thinking of how there has been more back-and-forth about the optimizer's curse, people saying it needs to be taken more seriously etc.

I don't think that the prescriptive vs. descriptive distinction really changes things; descriptive philosophizing about methodology is arguably not as good as just telling EAs what to do differently and why.

I grant that #3 on this list is the rarest out of the 4. The established EA groups are generally doing fine here AFAIK. There is a CSER writeup on methodology here, which is perfectly good: https://www.cser.ac.uk/resources/probabilities-methodologies-and-evidence-base-existential-risk-assessments-cccr2018/ It's about a specific domain that they know, rather than EA stuff in general.

HenryStanley @ 2019-07-30T15:25 (+9)

On your final point: I've often been torn on the question of "how big should EA get?" (cf. Buck Shlegeris' point about EA staying 'small and weird'). For what it's worth, I asked Peter Singer this and he emphatically said we should be trying to grow the movement as much as possible.

Relatedly, I often notice that most EAs are media-shy. I can recall a handful of occasions where an EA (individual or org) had the chance to speak with the press and declined for fear of a negative outcome. Maybe it's time to embrace the limelight?

G Gordon Worley III @ 2019-07-30T17:44 (+37)

I can't speak for any individual, but being careful in how one engages with the media is prudent. Journalists often have a larger story they are trying to tell over the course of multiple articles and they are actively cognitively biased towards figuring out how what you're saying confirms and fits in with that story (or goes against it such that you are now Bad because you're not with whatever force for Good is motivating their narrative). This isn't just an idle worry either: I've talked to multiple journalists and they've independently told me as much straight out, e.g. "I'm trying to tell a story, so I'm only interested if you can tell me something that is about that story".

Keeping quiet is probably a good idea unless you have media training so you know how to interact with journalists. Otherwise you function like a random noise generator that might accidentally generate noise that confirms what the journalist wanted to believe anyway and if you don't endorse whatever the journalist believes you've just done something that works against your own interests and you probably didn't even realize it!

sky @ 2019-07-31T21:07 (+28)

[Note: I’m a staff member at CEA]

I have been thinking a lot about this exact issue lately and agree. I think that as EA is becoming more well-known in some circles, it’s a good time to consider if — at a community level — EA might benefit from courting positive press coverage. I appreciate the concern about this. I also think that for those of us without media training (myself included), erring on the side of caution is wise, so being media-shy by default makes sense.

I think that whether or not the community as a whole or EA orgs should be more proactive about media coverage is a good question that we should spend time thinking about. The balance of risks and rewards there is an open question.

At an individual level though, I feel like I’ve gotten a lot of clarity recently on best practices and can give a solid recommendation that aligns with Gordon’s advice here.

For the past several months, I’ve sought to get a better handle on the media landscape, and I’ve been speaking with journalists, media advisors, and PR-type folks. Most experts I’ve spoken to (including journalists and former journalists) converge on this advice: For any individual community member or professional (in any movement, organization, etc), it is very unwise to accept media engagements unless you’ve had media training and practice.

I’m now of the mind that interview skills are skills like any other, which need to be learned and practiced. Some of us may find them easier to pick up or more enjoyable than others, but very few of us should expect to be good at interviews without preparation. Training, practice, and feedback can help someone figure out their skills and comfort level, and then make informed decisions if and when media inquiries come up.

To add on to Gordon’s good advice for those interested, here is a quick summary of what I’ve learned about the knowledge and skills required for media engagements:

  • General understanding of a journalist’s role, an interviewee’s role, and journalistic ethics (what they typically will and will not do; what you can and cannot ask or expect when participating in a story)
  • An understanding of the story’s particular angle and where you do or don’t fit
  • Researching the piece and the journalist’s credibility in advance, so that you can…
    • evaluate and choose opportunities where your ideas are more likely to be understood or represented accurately versus opportunities where you’re more likely to be misrepresented; and
    • predict the kinds of questions you’re likely to be asked so that you can practice meaningful responses. (Even simple questions like “what is EA?” can be surprisingly hard to answer briefly and well).
  • Conveying key ideas in a clear, succinct way so that the most important things you want to say are more likely to be what is reported
    • This includes the tricky business of predicting the ways in which certain ideas might be misunderstood by a variety of audiences and practicing how to convey points in a way that avoids such misunderstandings
  • Clearly understanding the scope of your own expertise and only speaking about related issues, while referring questions outside your expertise to others

I think having more community members with media training could be useful, but I also think only some people will find it worth their time to do the significant amount of preparation required.

This feels very timely, because several of us at CEA have recently been working on updating our resources for media engagement. In our Advice for talking with journalists guide, we go into more depth about some of the advice we've received. I’d be happy to have people’s feedback on this resource!

G Gordon Worley III @ 2019-08-01T17:41 (+7)
This feels very timely, because several of us at CEA have recently been working on updating our resources for media engagement. In our Advice for talking with journalists guide, we go into more depth about some of the advice we've received. I’d be happy to have people’s feedback on this resource!

This seems to be a private document. When I try to follow that link I get a page asking for me to log in to Google Drive with a @centreforeffectivealtruism.org Google account, which I don't have (I'm already logged into Google with two other Google accounts, so those don't seem to give me enough permission to access this document).

Maybe this document is intended to be private right now, but if it's meant to be accessible outside CEA, it doesn't seem that it currently can be.

sky @ 2019-08-05T16:59 (+2)

Thanks, Gordon; I've fixed the sharing permissions so that this document is public.

Milan_Griffes @ 2019-07-31T21:31 (+5)
In our Advice for talking with journalists guide, we go into more depth about some of the advice we've received.

The Media Training Bible is also good for this.

Milan_Griffes @ 2019-07-31T21:23 (+10)

See On the construction of beacons (a):


Finally, some advice for geeks, founders of subcultures, constructors of beacons. Make your beacon as dim as you can get away with while still transmitting the signal to those who need to see it. Attracting attention is a cost. It is not just a cost to others; it increases the overhead cost you pay, of defending this resource against predatory strategies. If you have more followers, attention, money, than you know how to use right now - then either your beacon budget is unnecessarily high, or you are already being eaten.

MichaelStJules @ 2019-08-02T13:37 (+8)
However, in the linked post I took the numbers displayed by ACE in 2019, and scaled them back a few times to be conservative, so it would be tough to argue that they are over-optimistic. I also used conservative estimates of climate change charities to offset the climate impacts, and also toyed with using climate change charities to offset animal suffering by using the fungible welfare estimates (I didn't post that part but it's easy to replicate).

With a skeptical prior, multiplying by factors like this might not be enough. A charity could be 100s of times (or literally any number of times) less cost-effective than the EV without such a prior if the evidence is weak, and if there are negative effects with more robust evidence than the positive ones, these might come to dominate and turn your positive EV negative. From "Why we can’t take expected value estimates literally (even when they’re unbiased)":

I have seen some using the EEV framework who can tell that their estimates seem too optimistic, so they make various “downward adjustments,” multiplying their EEV by apparently ad hoc figures (1%, 10%, 20%). What isn’t clear is whether the size of the adjustment they’re making has the correct relationship to (a) the weakness of the estimate itself (b) the strength of the prior (c) distance of the estimate from the prior. An example of how this approach can go astray can be seen in the “Pascal’s Mugging” analysis above: assigning one’s framework a 99.99% chance of being totally wrong may seem to be amply conservative, but in fact the proper Bayesian adjustment is much larger and leads to a completely different conclusion.

On the other hand, the more direct effects of abstaining from specific animal products rely largely on estimates of elasticities, which are much more robust.
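
A minimal sketch of the Bayesian adjustment being described (a normal-normal shrinkage model, with made-up numbers): the size of the adjustment falls out of the estimate's noise and the prior's strength, rather than being a fixed ad hoc multiplier.

```python
# Shrinking a noisy cost-effectiveness estimate toward a skeptical prior
# (normal-normal model; numbers are purely illustrative).
def posterior_mean(prior_mean, prior_sd, estimate, estimate_sd):
    w = prior_sd**2 / (prior_sd**2 + estimate_sd**2)  # weight placed on the estimate
    return w * estimate + (1 - w) * prior_mean

# A very noisy estimate claiming 100x the prior's cost-effectiveness gets
# shrunk almost all the way back toward the prior; a flat 10% or 1% "downward
# adjustment" could still be far too generous.
print(posterior_mean(prior_mean=1.0, prior_sd=1.0, estimate=100.0, estimate_sd=30.0))
```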

casebash @ 2019-07-31T03:28 (+8)

"I have seen EAs circulate half-baked fears of suicide drones making it easy to murder people (just carry a tennis racket, or allow municipalities to track or ban drone flights if they so choose) or assassinate political leaders (they already speak behind bullet-resistant plexiglass sometimes, this is not a problem)" - Really not that easy. A tennis racket? Not like banning drones stops someone flying a drone from somewhere else. And political leaders sure you can speak behind the glass, but are you going to spend your whole life behind a screen?

Maybe EA should grow more, but I don't think that the issue is that we are "not ruthless enough". Instead I'd argue that meta is currently undervalued, at least in terms of donations.

kbog @ 2019-07-31T04:23 (+5)

Yes, the "slaughterbots" video produced by Stuart Russell and FLI presented a dystopian scenario about drones that could be swatted down with tennis rackets. Because the idea is that they would plaster to your head with an explosive.

Not like banning drones stops someone flying a drone from somewhere else.

Yes, but it means that on the rare occasion that you see a drone, you know it's up to no good and then you will readily evade or shoot it down.

And political leaders sure you can speak behind the glass, but are you going to spend your whole life behind a screen?

No... but so what? I don't travel in an armored limousine either. If someone really wants to kill me, they can.

More donations for movement growth: I would tentatively agree.

Agrippa @ 2019-08-21T11:41 (+7)

Anecdote re: ruthlessness:

During my recent undergrad, I was often openly critical of the cost effectiveness of various initiatives being pushed in my community. I think anyone who has been similarly ruthless is probably familiar with the surprising amount of pushback and alienation that comes from doing this. I think I may have convinced some small portion of people. I ended up deciding that I should focus on circumventing defensiveness by proactively promoting what I thought were good ideas and not criticizing other people's stupid ideas, which essentially amounts to being very nice.

I wonder how well a good ruthlessness strategy in public contexts generalizes to private contexts, and vice versa.

MichaelStJules @ 2019-08-02T07:10 (+6)

Is veganism a foot in the door towards effective animal advocacy (EAA) and donation to EAA charities? Maybe it's an easier sell than getting people to donate while remaining omnivores, because it's easier to rationalize indifference to farmed animals if you're still eating them.

Maybe veganism is also closer to a small daily and often public protest than turning off the lights, and as such is more likely to lead to further action later than be used as an excuse to accomplish less overall.

Of course, this doesn't mean we should push for EAs to go vegan. However, if we want the support (e.g. donations) of the wider animal protection movement, it might be better to respect their norms and go veg, especially or only if you work at an EA or EAA org or are fairly prominent in the movement. (And, the norm itself against unnecessary harm is probably actually valuable to promote in the long-term.)

Finally, in trying to promote donating to animal charities face-to-face, will people take you more or less seriously if you aren't yourself vegan? I can see arguments each way. If you're not vegan, then this might reduce their fear of becoming or being perceived as a hypocrite if they donate to animal charities but aren't vegan, so they could be more likely to donate. On the other hand, they might see you as a hypocrite, and feel that if you don't take your views seriously enough to abstain from animal products, then they don't have to take your views seriously either.

reallyeli @ 2019-07-31T04:41 (+5)

Although I think this post says some important things, I downvoted because some conclusions appear to be reached very quickly, without what to my mind is the right level of consideration.

For example, "True, there is moral hazard involved in giving better tools for politicians to commit to bad policies, but on my intuition that seems unlikely to outright outweigh the benefits of success - it would just partially counterbalance them." My intuition says the opposite of this. I don't think it's at all clear (whether increasing the capability of the U.S. military is a good or bad thing).

I agree that object-level progress is to be preferred over meta-level progress on methodology.

kbog @ 2019-07-31T06:24 (+5)

Here's some support for that claim which I didn't write out.

There was a hypothesis called "risk homeostasis" where people always accept the same level of risk. E.g. it doesn't matter that you give people seatbelts, because they will drive faster and faster until the probability of an accident is the same. This turned out to be wrong; for instance people did drive faster, but not so much faster as to meet or exceed the safety benefits. The idea of moral hazard from victory leading to too many extra wars strikes me as very similar to this. It's a superficially attractive story that allows one to simplify the world and not have to think about complex tradeoffs as much. In both cases you are taking another agent and oversimplifying their motivations. The driver - just has a fixed risk constraint, and beyond that wants nothing but speed. The state - just wants to avoid bleeding too much, and beyond that threshold it wants nothing but foreign influence. But the driver has a complex utility function or maybe a more inconsistent set of goals about the relative value of more safety vs less safety, more speed vs less speed; therefore, when you give her some new capacities, she isn't going to spend all of it on going faster. She'll spend some on going faster, then some on being safer.

Likewise the state does not want to spend too much money, does not want to lose its allies and influence, does not want to face internal political turmoil, etc. When you give the state more capacities, it spends some of it on increasing bad conquests, but also spends some of it on winning good wars, on saving money, on stabilizing its domestic politics, and so on. The benefits of improved weaponry for the state are fungible, as it can e.g. spend less on the military while obtaining a comparable level of security.

Security dilemmas throw a wrench into this picture, because what improves security for one state harms the security of another. However in the ultimate theoretical case I feel that this just means that improvements in weaponry have neutral impact. Then in the real world, where some US goals are more positive sum in nature, the impacts of better weapons will be better than neutral.

zdgroff @ 2019-07-30T17:12 (+4)
This post suggested the rather alarming idea that EA's growth is petering out in a sort of logistic curve.

Is this the right link? I don't see that claim in the post, but maybe I'm missing it.

kbog @ 2019-07-30T17:55 (+4)

Sorry. This is it: https://forum.effectivealtruism.org/posts/MBJvDDw2sFGkFCA29/is-ea-growing-ea-growth-metrics-for-2018

zdgroff @ 2019-07-31T16:14 (+2)

Great, thanks!

anonymous_ea @ 2019-07-30T16:45 (+3)

There's an incorrect link in this sentence:

This post suggested the rather alarming idea that EA's growth is petering out in a sort of logistic curve.

The link goes to Noah Smith's blog post advocating the two paper rule.

MichaelStJules @ 2019-08-01T04:10 (+2)

(I'm not disagreeing with your overall point about the emphasis on the vegan diet)

You can of course supplement, but at the cost of extra time and money - and that's assuming that you remember to supplement. For some people who are simply bad at keeping habits - me, at least - supplementing for an important nutrient just isn't a reliable option; I can set my mind to do it but I predictably fail to keep up with it.

One way to make this easier could be to keep your supplements next to your toothbrush, and take them around the first time you brush your teeth in a day.

I actually have most of my supplements (capsules/pills) on my desk in front of or next to my laptop. I also keep my toothbrush and toothpaste next to my desk in my room.

I would usually put creatine powder in my breakfast, but I've been eating breakfast at work more often lately, so I haven't been consistent. Switching to capsules/pills would probably be a good idea.

I think you could keep your supplements under $2 a day. Some of these supplements you might want to take anyway, veg or not. So I don't think you'd necessarily be spending more on a vegan diet than an omnivorous one, if you're very concerned with cost, since plant proteins and fats are often cheaper than animal products. If you're not that concerned with cost in the first place, then you don't need to be that concerned with the cost of supplements.

There's a lot that we don't understand, including chemicals that may play a valuable health role but haven't been properly identified as such. Therefore, in the absence of clear guidance it's wise to defer to eating (a) a wide variety of foods, which is enhanced by including animal products, and (b) foods that we evolved to eat, which has usually included at least a small amount of meat.

You could also be bivalvegan/ostrovegan, and you don't need to eat bivalves every day; just use them to fill in any missing unknowns in your diet, so the daily cost can be reduced even if they aren't cheap near you. Bivalves also tend to have relatively low mercury concentrations among sea animals, and some are good sources of iron or omega-3.

Here's a potentially useful meta-analysis of studies on food groups and all-cause mortality, but the weaknesses you've already pointed out still apply, of course. See Table 1, especially, and, of course, the discussions of the limitations and strength of the evidence. They also looked at processed meats separately, but I don't think they looked at unprocessed meats separately.

Another issue with applying this meta-analysis to compare vegan and nonvegan diets, though, is that the average diet with 0 servings of beef probably has chicken in it, and possibly more than the average diet with some beef in it. Or maybe they adjusted for these kinds of effects; I haven't looked at the methodology that closely.

unhealthy foods such as store-bought bread (with so many preservatives, flavorings etc)

Do you think it's better to not eat any store-bought whole grain bread at all? I think there's a lot of research to support their benefits. See also the meta-analysis I already mentioned; even a few servings of refined grains per day were associated with reduced mortality. (Of course, you need to ask what people were eating less of when they ate more refined grains.)

How bad are preservatives and flavourings?

MichaelStJules @ 2019-08-01T03:00 (+1)

On being ruthless, do you think we should focus on framing EA as a moral obligation instead of a mere opportunity? What about using a little shaming, like this? I think the existence of the Giving Pledge with its prominent members, and the fact that most people aren't rich (although people in the developed world are in relative terms) could prevent this light shaming from backfiring too much.

kbog @ 2019-08-01T08:42 (+3)

I've long preferred expressing EA as a moral obligation and support the main idea of that article.

JoshYou @ 2019-07-30T22:50 (+1)

On point 4, I wonder if more EAs should use Twitter. There are certainly many options to do more "ruthless" communication there, and it might be a good way to spread and popularize ideas. In any case it's a pretty concrete example of where fidelity vs. popularity and niceness vs. aggressive promotion trade off.

John_Maxwell_IV @ 2019-08-03T05:53 (+18)

Keep in mind that Twitter users are a non-representative sample of the population... Please don't accept kbog's proposed deal with the devil in order to become popular in Twitter's malign memetic ecosystem.

JoshYou @ 2019-08-03T22:55 (+4)

Absolutely, EAs shouldn't be toxic, inaccurate, or uncharitable on Twitter or anywhere else. But I've seen a few examples of people effectively communicating about EA issues on Twitter, such as Julia Galef and Kelsey Piper, at a level of fidelity and niceness far above the average for that website. On the other hand they are briefer, more flippant, and spend more time responding to critics outside the community than they would on other platforms.

kbog @ 2019-07-30T23:10 (+3)

I've recently started experimenting with that, I think it's good. And Twitter really is not as bad a website as people often think.

JoshYou @ 2019-07-30T23:36 (+4)

Yep, though I think it takes a while to learn how to tweet, whom to follow, and whom to tweet at before you can get a consistently good experience on Twitter and avoid the nastiness and misunderstandings it's infamous for.

There's a bit of an extended universe of Vox writers, economists, and "neoliberals" that are interested in EA and sometimes tweet about it, and I think it would be potentially valuable to add some people who are more knowledgeable about EA into the mix.