Introducing “Better for Animals”: A New Public Resource for Evidence-Based Animal Advocacy

By Animal Charity Evaluators, Alina Salmen, Max Taylor @ 2025-09-24T16:47 (+48)

Resource: Better for Animals: Evidence-Based Insights for Effective Animal Advocacy

Animal advocates use a wide variety of approaches to help animals—from running corporate campaigns to get chickens out of cages, to researching wild animal welfare science, to influencing lawmakers to support plant-based policies. But which of these approaches are the most promising, and how can they be made more effective? Evaluating and comparing them is a monumental challenge—especially as our field has less empirical research available to guide decisions than other cause areas, such as global health and development.1

However, the animal advocacy evidence base is growing: On average, we add more than 100 articles to our Research Library each month. This is great news; however, it brings its own challenges. While we have always consulted existing research to inform our grantmaking and charity recommendation decisions, the increasing volume and complexity of research called for ACE to adopt a more systematic and dynamic approach to synthesizing results from empirical studies and updating our thinking about intervention effectiveness.

The challenge isn’t unique to us: Advocates, funders, and researchers navigating this expanding and often contradictory “evidence maze” can easily become overwhelmed. Research from Faunalytics has highlighted this very issue, finding that advocates often need more accessible syntheses to make informed decisions.2

In February 2024, ACE launched a project aiming to address this problem. We started out with the primary goal of sharpening our own grantmaking and charity recommendation decisions, while also addressing what we saw as a bottleneck for the wider movement. We wanted to create a thorough, dynamic overview of the evidence for the almost 30 intervention types in our Menu of Interventions—whether they have been shown to work, what their risks are, and under what conditions we expect them to be more or less effective.

We developed this resource internally and are now excited to share Better for Animals: Evidence-Based Insights for Effective Animal Advocacy. This resource is a living document. We will update it several times a year with new evidence, and we hope it will evolve with feedback from you, our community. At ACE, we now regularly consult these evidence reviews when evaluating charities or grant applications. Understanding the state of the evidence for the interventions a charity uses helps us assess the strength of their theory of change, gauge whether they follow best practice in how they implement the intervention, and ask them the most meaningful questions about their work.

To help make this detailed information more accessible to a wide range of audiences, starting later in September we will launch a series of social media and blog posts spotlighting one intervention each month.

We hope that readers will use our new resource in several ways:

This project was a huge effort and would not have been possible without the critical feedback and strategic input of countless volunteers, advocates, researchers, and funders. A huge thank you to everyone who contributed!

Below, we walk you through how this resource came to be, our research process, and the main limitations.

The Project

We knew we couldn’t develop this resource in a vacuum. We started by consulting other organizations doing similar work, including Mercy For Animals, Faunalytics, and Rethink Priorities, in order to collaborate and avoid duplication. These conversations confirmed the project would fill a unique and necessary gap and complement other efforts in the movement.

We developed a detailed research protocol, adapting one developed at Faunalytics for our purposes. The protocol detailed our search strategy, guidelines for evaluating and synthesizing evidence, and the key research questions we wanted to answer for each intervention. After trialing the protocol on an initial set of topics, we shared early drafts with a range of external reviewers—funders, advocates, and researchers—and used their feedback and our experience of trialing the protocol to refine our process.

Using the refined protocol, our researchers, research fellow, and a group of amazing volunteers wrote evidence reviews on the remaining topics. These were typically reviewed by ACE’s Programs team. We also submitted a subset for external peer review, selecting the interventions most commonly used by the charities we evaluate for recommendation or grants. These peer reviewers included researchers and advocates with specialist expertise on those topics.

The Research Process

For each topic, our researchers began by scouring key sources, from academic databases like Google Scholar to the Faunalytics Research Library and research reports from groups within the movement. This created a longlist of potential articles for inclusion.

We then shortlisted the most relevant and rigorous studies. While our initial plan was to cap this at around 10 articles per intervention due to team capacity, this ended up varying greatly by intervention type. For some interventions, we reviewed nearly 50 articles to build a coherent picture. For others, a lack of direct research meant we had to rely on very few articles, theoretical arguments, and/or evidence from adjacent fields.

From there, we synthesized the evidence by evaluating, comparing, and combining the findings from all shortlisted articles to form a coherent overall picture. We focused this analysis on a set of key questions, starting with “Is it effective?”, where we define effectiveness in terms of reduced or avoided animal suffering. Next, we dug deeper to understand relevant context and risks. We believe it’s unhelpful to label most approaches as simply “good” or “bad”; nuance is critical. An intervention’s success almost always depends on the context: where and how it is implemented, who the target audience is, and what the specific ask is. We explored the evidence for conditions that might make an intervention more or less likely to succeed, and how it could potentially backfire and inadvertently harm animals or the movement.

Finally, we brought everything together into an overall assessment of how promising we think the intervention is. We also determined our level of confidence based on the quality, quantity, and agreement of sources available, and identified the high-priority research questions that, if answered, could change our minds or increase confidence in our verdict.

We now update the evidence reviews every few months with new research, most of it identified through our monthly Research Digest, which collates new research relevant to farmed animal advocates.

Limitations

Our conclusions about interventions’ effectiveness are to be interpreted with caution for several reasons:

We’d love to continue receiving feedback. Because we don’t have capacity to moderate a high volume of comments, if you’d like to give feedback on the project as a whole, or on a particular intervention, please email alina.salmen@animalcharityevaluators.org or max.taylor@animalcharityevaluators.org with your feedback, or to request comment access to the document.

Acknowledgments

We would like to extend our gratitude to:

Our volunteers

Jackie Bialo, Elena Braeu, Jan Gaida, and Sada Rice.

Our research fellow

Sam Mazzarella

For their feedback and advice

Alene Anello, Christopher Berry, Aaron Boddy, George Bridgwater, Chris Bryant, Vicky Cox, Alice Di Concetto, Rune-Christoffer Dragsdahl, Neil Dullaghan, Sueda Evirgen, Carolina Galvani, Martin Gould, Vasco Grilo, Thomas Hecquet, Emre Kaplan, Cailen Labarge, Chrys Liptrot, Jesse Marks, William McAuliffe, Caroline Mills, Gülbike Mirzaoğlu, PJ Nyman, Björn Ólafsson, Pete Paxton, Jacob Peacock, Kathrin Plaschnick, Andrea Polanco, Sean Rice, Aditya SK, Zoë Sigle, Saulius Šimčikas, Michael St Jules, Ben Stevenson, Andie Thompkins, and Prashanth Vishwanath.

  1. E.g., Hilton & Bansal (2023)
  2. Jones & Anderson (2024) 

david_reinstein @ 2025-10-01T13:20 (+4)


Quick impression -- the Gdoc is a bit challenging to navigate. I'd love to have a menu bar letting me see each of the interventions covered and jump to them, as well as tables comparing them. Could this potentially be turned into a Notion or Coda.io page?

Also would be nice if there were comment access so people could note additional evidence, critiques of the evidence, etc.

Max Taylor @ 2025-10-02T12:33 (+1)

Thanks David! 

  • We trialled a few formats, including Notion, and Google Docs was overall the easiest for reading and updating, but agree the in-built menu bar is a bit unwieldy. We also considered adding a comparison table but decided not to, as distilling the content for each intervention to that extent ended up being unhelpfully reductive. We'll consider other ways to make this easier to navigate though, thanks for flagging!
  • Yeah, to keep the document tidy and help us moderate comments, we invite people to directly email me (max.taylor@animalcharityevaluators.org) or Alina Salmen (alina.salmen@animalcharityevaluators.org) with feedback or to request comment access, rather than us enabling comment access automatically.
  • Thanks for noting all the cross-over with The Unjournal, that's great! I've added those evaluations to our list of studies to incorporate when we next update this.
Tristan Katz @ 2025-10-02T12:52 (+2)

This is awesome. I really liked how you considered both short term and long term, clear and diffuse effects, and noted how they changed your confidence. 

It seems like this should be highly valuable for:

I agree with @david_reinstein that it would be nice to see this made into a more visually polished and navigable form, but in terms of the content itself I found it very easy to understand the reasoning and assessments. 

Max Taylor @ 2025-10-02T16:16 (+1)

Great, thanks Tristan! That's really good to hear, and noted re. the formatting. And yes, we definitely hope that other researchers will build on this and challenge us so that we can continually improve it.

david_reinstein @ 2025-10-01T16:08 (+2)

I asked GPT 5-pro about the links between these, and it shared this, which looks correct to me, at least for the first list. I've slightly paraphrased/formatted the output below.

[GPT] 

Evaluated by The Unjournal → matching sections in Better for Animals

Rethink Priorities (2022): “Forecasts estimate limited cultured meat production through 2050” — Unjournal evaluation package (2025).
Section match: Evidence Review: Alternative proteins 
Why it matches: Both assess the prospects and constraints for cultivated/alternative proteins (costs, adoption, near‑term pathways). Unjournal’s evaluators discuss TEAs, shifting costs, and framing choices; the PDF synthesizes impacts and price‑parity considerations. (forum.effectivealtruism.org)

Green, Smith & Mathur (2024): “Meaningfully reducing consumption of meat and animal products is an unsolved problem: A meta‑analysis” — Unjournal evals (2025).
Section match: Corporate & institutional vegn outreach* (pp. 32–47), Social media campaigns & online ads (pp. 173–181), Vegn pledges* (pp. 187–194), and Books, documentaries/films & podcasts (pp. 7–14).
Why it matches: The meta‑analysis synthesizes RCTs on meat‑reduction interventions; the PDF reviews the same intervention families, their effect sizes, and limitations. (The Unjournal)

Epperson & Gerster (2024): “Willful Ignorance and Moral Behavior” — Unjournal evaluation summary & two reviews (2024).
Section match: Social media campaigns & online ads ; Media outreach & journalism 
Why it matches: The paper studies information avoidance and the impact of an animal‑advocacy video on consumption; the media/online sections discuss message framing, short‑lived effects, and pathways to behavior change. (The Unjournal)

Bruers (2023): “The animal welfare cost of meat” — Unjournal evaluations (2025).
Section match: Research – Effective animal advocacy and Research – Farmed animal welfare.
Why it matches: Methodological work on valuing animal welfare (WTP/WTA, interspecies comparisons); the Research sections highlight the need for credible measurement and decision‑relevant welfare research. (The Unjournal)

 

[DR: I'm slightly less confident in this second list below] 

In the Unjournal Database/research we're considering evaluating

“Cultured meat: A comparison of techno‑economic analyses” 
Section: Alternative proteins 
Notes: TEA synthesis & forecasting align directly with the assessment of cultivated/plant‑based options and price‑parity dynamics (incl. survey price sensitivity on p. 203).

“A survey on inter‑animal welfare comparisons” (working paper)
Sections: Research – Farmed animal welfare (pp. 157–161); Research – Wild animal welfare (pp. 162–168).
Notes: Exactly the methodological gap the report flags (measurement/aggregation across species and contexts).

“Interventions that influence animal‑product consumption: A meta‑review”
Sections: Corporate & institutional vegn outreach* (pp. 32–47), Social media campaigns & online ads (pp. 173–181), Vegn pledges* (pp. 187–194).
Notes: Closely parallels the intervention taxonomy and effect‑size discussions in those sections.

“Giving farm animals a name and a face (identifiable victim effect)”
PDF sections: Media outreach & journalism (pp. 93–97); Social media campaigns & online ads (pp. 173–181); Celebrity/influencer outreach (p. 15).
Notes: Messaging psychology and emotional appeals are treated as potentially stronger levers within media/online tactics.

“Concentration and Resilience in the US Meat Supply Chains” (NBER w29103)
Sections (adjacent): Corporate outreach for welfare improvements (pp. 23–31) and Government outreach (pp. 64–74).
Notes: The PDF focuses on welfare‑commitment supply‑chain policies and public‑policy levers, not industrial concentration per se—so this is adjacent, not a direct treatment.

david_reinstein @ 2025-10-01T15:19 (+2)

Epperson and Gerster's "Willful Ignorance and Moral Behavior" seems relevant to your review of "Social media campaigns and online ads". See our evaluation package on this here.
  

david_reinstein @ 2025-10-01T14:55 (+2)

Our evaluations of "Meaningfully reducing consumption of meat and animal products is an unsolved problem: A meta-analysis" also seem relevant. Both the paper and the evaluations provide some caution on how meta-analyses should be used and some insights into the potential for these to be done more carefully. And I believe the meta-analysis itself covers several of the papers you cite.

david_reinstein @ 2025-10-01T12:20 (+2)

I just wanted to note quickly that the resource discusses at least one piece of research evaluated through "Pivotal questions: an Unjournal trial initiative" -- see https://forum.effectivealtruism.org/posts/yqy6d9sydTHujMw8B/rethinking-the-future-of-cultured-meat-an-unjournal

 

I'll try to follow up further on this project, and on how it might be informed by The Unjournal's evaluations and pivotal questions work.

SummaryBot @ 2025-09-25T21:53 (+2)

Executive summary: Animal Charity Evaluators has launched Better for Animals, a living resource synthesizing evidence on nearly 30 animal advocacy interventions, aiming to improve strategy and grantmaking while helping advocates, funders, and researchers navigate an expanding but fragmented evidence base; the resource is updated regularly, incorporates feedback, and highlights both strengths and limitations of current knowledge.

Key points:

  1. Better for Animals responds to the challenge of evaluating diverse advocacy strategies amid limited, fragmented, and sometimes contradictory evidence.
  2. ACE developed a structured research protocol, collaborated with peer organizations, and incorporated internal and external reviews to ensure rigor.
  3. The resource provides nuanced assessments of interventions, avoiding simple “good/bad” labels and emphasizing context, risks, and conditions for effectiveness.
  4. Reviews are updated several times per year to integrate new research, identified mainly through ACE’s monthly Research Digest.
  5. Major limitations include lack of full systematic reviews, publication bias, overrepresentation of short-term and Western studies, and reliance on lower-quality or adjacent evidence for some interventions.
  6. ACE invites feedback and hopes the resource will guide advocates’ strategies, inform funders’ priorities, and inspire researchers to fill critical evidence gaps.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.