Impact evaluation of Animal Ask: A retrospective after 5 years

By Animal Ask @ 2025-11-18T15:01 (+20)

 

Executive summary

Animal Ask is a research organisation that offers consultation and dedicated research to organisations in the animal advocacy movement, helping them make strategic decisions that bring about the highest possible impact for animals. Animal Ask also offers other custom research, such as written documents to help animal advocacy organisations communicate as effectively as possible with policymakers.

Since being launched in 2020, Animal Ask has completed 57 major research projects around the world, covering all major exploited animal groups (insects, shrimp, fish, chickens, pigs, and ruminants). Animal Ask has had numerous constructive collaborations with stakeholders and has usually received positive feedback (though some issues have been raised, which are also noted in this report). We have achieved all of this with a team that has ranged in size between 3 and 5 staff members.

In this report, we examine the impact of Animal Ask in terms of improving the lives of non-human animals.

To understand Animal Ask's impact, we consider six viewpoints:

Integrating the evidence from these six viewpoints, our overall assessment of Animal Ask's impact is as follows:

Introduction

The big picture

Since this evaluation requires us to be critical, it is helpful to begin with a high-level perspective of Animal Ask's accomplishments over the past five years. Since its launch in 2020, Animal Ask's team of 3–5 staff members has:

Animal Ask is committed to achieving a single objective: to deliver impact for animals. Specifically, Animal Ask aims to increase animal welfare and/or to decrease the extent of animal exploitation. The sole criterion against which the organization seeks to be evaluated is its impact on animals.

A map illustrating the countries where Animal Ask has worked

Animal Ask's theory of change

In broad strokes, Animal Ask attempts to deliver impact by following three theories of change. Our direct contribution is up to stage C of each of these theories of change:

  1. Strategic priorities and consultation.
    1. Animal Ask finds animal advocacy organisations at the early, decision-making stage of a campaign →
    2. Animal Ask performs research to help advise the organisation's decision-making →
    3. the organisation selects more impactful asks for their campaigns and/or develops a more evidence-based and nuanced strategy for achieving those asks than they otherwise would have done →
    4. some of those campaigns eventually succeed →
    5. the laws or corporate policies that are achieved have a higher impact than they otherwise would →
    6. there is a net, counterfactual reduction in animal suffering or the scale of animal exploitation, compared to if Animal Ask had not acted.
  2. Information lobbying.
    1. For an upcoming campaign with a given ask, Animal Ask prepares persuasive written documents supporting the value or the feasibility of that ask (e.g. white papers, economic reports, scientific publications) →
    2. the campaign has a higher probability of success →
    3. over time, more campaigns succeed than would otherwise be the case →
    4. more pro-animal laws or corporate policies are achieved →
    5. there is a net, counterfactual reduction in animal suffering or the scale of animal exploitation, compared to if Animal Ask had not acted.
  3. Foundational research.
    1. Animal Ask identifies key uncertainties facing the movement →
    2. Animal Ask performs dedicated research into those foundational questions and publishes recommendations for the movement as a whole →
    3. decision-makers inside the animal advocacy movement read that research and keep it in mind while making decisions that involve those key uncertainties →
    4. [the same latter steps as in "Strategic priorities and consultation"]

Over the past five years, our division of research effort has been approximately 50% on strategic priorities and consultation, 15% on information lobbying, and 35% on foundational research. However, foundational research has become less common during 2024 and 2025. The need for foundational research was greater in Animal Ask's early years, when we rapidly identified many key uncertainties that required dedicated investigation. We also spent most of 2023 on foundational research, as we had a specific grant to do so. Currently, our division of research is probably something like 75% on strategic priorities and consultation, 20% on information lobbying, and 5% on foundational research.

The landscape of animal advocacy research

Numerous research organizations operate within the animal advocacy movement, each employing distinct theories of change. While an exhaustive list is beyond the scope of this discussion, illustrative examples include:

This represents the landscape in which Animal Ask operates. As one of many research organizations in the animal advocacy movement, Animal Ask contributes to impact through specific research endeavors targeted at specific decisions.

For a deeper dive into animal advocacy research, see the blog article by Animal Charity Evaluators here.

 

Challenges in evaluating research impact

The theory of change employed by Animal Ask, and indeed by research organizations generally, operates at a "meta" level. This means that the organization does not directly engage in advocacy for specific animal policies. Instead, Animal Ask's function is to support individuals and organizations directly involved in such campaigns. This operational distance from direct action places Animal Ask's work at an abstract level above the campaigns themselves.

"Meta" work typically entails a longer and more complex theory of change. The aforementioned theories of change, for instance, often comprise half a dozen broad steps. Each sequential step introduces additional potential points of failure. Even if the initial stages of the theory of change are successfully executed, yielding high-quality and necessary outputs, subsequent links may not materialize as intended. For example, even if Animal Ask provides robust research and evidence-based strategic recommendations, the campaigns themselves may still fail. Furthermore, even when campaigns succeed and "meta" work demonstrably produces impact, the realization of this impact can often require several years.

In practical terms, it is exceedingly difficult to definitively ascertain the impact of most research on animal welfare outcomes. While there are valid reasons to believe that certain types of research, including that conducted by Animal Ask, do contribute to animal welfare, measuring this impact remains exceptionally challenging.

A crucial consideration is the protracted timeline associated with long theories of change, such as that of Animal Ask. Campaigns typically require years to achieve success, implying a commensurate delay in determining the impact of advisory services on those campaigns. Animal Ask has been in operation for five years, a relatively short period. Many campaigns in which the organization has been involved have simply not had sufficient time to manifest either success or failure. At the same time, it is necessary to establish a reasonable cutoff point for evaluation. An organization lacking genuine impact could potentially persist for decades without demonstrating effectiveness if every evaluation concludes with the rationale of insufficient time for the theory of change to operate and for impact to materialize. For Animal Ask's consultation model, an additional few years of evaluation would likely be warranted, but not indefinitely. A commitment to accountability is essential. A conservative estimate for this evaluative threshold might fall within the 8- to 10-year mark (e.g., 2028–2030), while a more optimistic assessment might extend it to the 15- or 20-year mark.

This discussion also raises an important, unresolved debate about strategy. Given a particular set of evidence about an organization's impact, what is the correct rate at which that organization should scale? If an organization scales too soon on the basis of insufficient evidence, it risks wasting effort and resources. If an organization scales too late and waits for perfect evidence, it risks wasting opportunities for impact. This question faces all impact-oriented organizations, and organisations in the same ecosystem will take different approaches to it.

The six viewpoints of this report

The remainder of this report will assess whether Animal Ask is achieving a satisfactory impact. This assessment will proceed from six distinct perspectives, each employing a different line of reasoning:

In the conclusion, the results from each viewpoint will be summarized, and methods for integrating these disparate perspectives will be explored.

Viewpoint 0: The outside view

Organizational failure is a common outcome, particularly within the non-profit sector. In this context, "failure" denotes the absence of measurable impact. It is important to distinguish this from an unsuccessful organizational launch; rather, it signifies that, despite the founders' best efforts, an intervention simply did not deliver the intended impact. Identifying this outcome allows for the better allocation of resources and effort to more impactful endeavors instead.

Animal Ask was established in 2020 by Amy Odene and George Bridgwater, emerging from the Ambitious Impact (AIM) Charity Entrepreneurship (CE) incubation program.

A 2023 AIM assessment of 50 launched charities categorized their progression into three states (link):

This suggests that between 20% and 60% of AIM-incubated charities experience failure, while 40% to 80% achieve success. Thus, a randomly selected AIM-incubated charity could be expected to have a probability of success within this range.

A 2019 AIM report (link) recommended the establishment of an animal advocacy-focused research organization, which subsequently became Animal Ask in 2020. This report explored various research models, with Animal Ask's current model—consulting with other animal advocacy organizations on strategic decisions—identified as one of the most promising strategies.

However, the authors of the 2019 report estimated the probability of achieving impact on any given research project to be approximately 5%. This implies a 95% project-level failure rate. This high probability of failure was deemed acceptable due to the substantial potential payoff.

The low probability of success was attributed to two critical links in the theory of change: a) "High-impact ask used instead of a less high-impact ask" (estimated probability: ~30%) and b) "Using a high-impact ask, leading to more implemented change for animals" (estimated probability: ~25%). The prescience of these predictions regarding key uncertainties has been borne out by subsequent observations, which will be discussed further in this report.

Viewpoint 1: Impact tracking

Summary of viewpoint: We have tracked the outcomes of 57 of Animal Ask's research projects over the past five years. The collected evidence offers tentative support for Animal Ask's impact but remains ultimately inconclusive. A definitive conclusion is anticipated within the coming years.

A common approach to monitoring and evaluation (M&E) for organizations such as Animal Ask involves cataloging all projects, specifying their intended outcomes, and subsequently assessing their actual outcomes.

This is the most important viewpoint in this report. We probably put about 75% of our credence into this viewpoint alone.

As an end-line metric, Animal Ask's projects can be classified into five categories:

Between 2021 and 2025, we have completed 57 major projects. The graph below shows the outcome by category.

 

A bar graph summarising campaigns by impact category

Positive impact: 2 campaigns
No impact: 17 campaigns
Campaign is still ongoing: 14 campaigns
Averted a suboptimal decision: 2 campaigns
Not measured and difficult to tell: 22 campaigns

Two of our projects have shown good results, which are discussed further below (see "Viewpoint 2: The skeptic's view"). An additional two projects helped the movement avoid wasting resources.

Seventeen of our projects haven't had a clear impact yet. This includes cases where (a) our advice wasn't used by partner organizations, and/or (b) the related campaign didn't achieve its goal. These problems were identified as early as 2019 as key uncertainties for Animal Ask (see above, "Viewpoint 0: The outside view"). We know that, in some cases, our advice might not have been used because it didn't fit the partner organization's situation or changing priorities, or because of outside factors we couldn't control (e.g. a changing political landscape, turnover among decision-making staff, funding constraints). We also acknowledge that organizations may already be making the best decisions possible given their situation; in these cases, additional research from Animal Ask does not add any value. We've put a lot of effort into understanding these uncertainties better and improving our methods, so as to increase the chance that our research adds value and that organisations are well positioned to take action on it. For example, we now conduct significantly more pre-vetting for projects, covering the broader set of stakeholders outside the relevant departments in an organization, or even outside the organization itself.

However, 14 projects are part of ongoing campaigns. These campaigns haven't reached a final decision about their goals yet. These projects will eventually be classified as either having a "Positive impact" or "No impact." Their final classification depends on what happens in the future.

Some example case studies are available for projects in each category other than "Positive impact", which is covered in more detail below.

Based on this data, Animal Ask's work appears somewhat promising, but the evidence is ultimately inconclusive. 

One category, "Not measured and difficult to tell," warrants further discussion.

Viewpoint 2: The skeptic's view

Summary of viewpoint: Animal Ask can identify only a limited number of instances where its work has demonstrably led to impact. These examples are, at best, ambiguous. However, this observation may be because insufficient time has passed for further examples to materialize.

A skeptical perspective suggests that while past projects can be cataloged and their outcomes reviewed, this approach risks overlooking the broader context. There is a potential for self-deception regarding actual impact. To substantiate a claim of significant impact, it is necessary to demonstrate tangible improvements in the lives of specific animals (e.g., fish, chickens, or pigs on a farm) or, alternatively, to point to specific legislative or corporate policy changes directly attributable to the organization's efforts.

An organization like Animal Ask should be able to provide numerous instances where its influence led to distinct changes in law, governmental decisions, corporate policies, or company practices. Such evidence represents the strongest form of validation and sets a high benchmark for accountability. That said, it is important to keep in mind that this viewpoint represents much stronger skepticism than that applied to most organizations like Animal Ask. If applied strictly, this viewpoint would make it extremely difficult to estimate the impact of most meta organizations, as meta organizations can usually only access data that represents a small proportion of their actual impact—as Kevin Xia writes: "I have found that meta orgs in [the effective altruism community] often only report the impact they know they have caused, and extrapolation is almost a bit frowned upon or at least difficult to get right, due to various biases at play."

Animal Ask's work has demonstrably influenced two specific laws or decisions, though both examples possess some ambiguity around the level of contribution and the counterfactuals.

  1. The cancellation of a particular factory farm.
  2. The inclusion of fish in a state's Animal Welfare Act in Australia.

This limited number of outcomes, when viewed in isolation after five years of operation, appears underwhelming. However, this assessment is incomplete, as numerous campaigns are ongoing. Reducing the entirety of the work to these two points, while other initiatives are still in progress, is overly simplistic.

Therefore, the true measure of Animal Ask's efficacy under a skeptical evaluation will depend on the emergence of additional examples on this list in the coming years.

Viewpoint 3: The "meta = leverage" view

Summary of viewpoint: There is a reasonable argument that since Animal Ask is a meta organisation, it only needs to have a modest probability of success to be a stronger bet than the next best alternatives.

This entire viewpoint is, at its core, expressing the idea that meta organisations can have higher leverage. As described by Peter Wildeford here, this argument roughly goes: "If you think a typical EA [effective altruism] cause has very high impact, it seems quite plausible that you can have even higher impact by working one level of 'meta' up -- working not on that cause directly, but instead working on getting more people to work on that cause. For example, while the impact of a donation to the Against Malaria Foundation seems quite large, it should be even more impactful to donate to Charity Science, Giving What We Can, The Life You Can Save, or Raising for Effective Giving, all of which claim to be able to move many dollars to AMF for every dollar donated to them. Likewise, if you think an individual EA is quite valuable because of the impact they’ll have, you may want to invest your time and money not in having a direct impact, but in producing more EAs!"

Under this view, Animal Ask is able to assist or influence the campaigns of approximately 10 organisations a year at a cost similar to running one additional campaign. If several organisations pivot or become more successful each year because of Animal Ask's research, then this meta work is likely to create more effective campaigns than spending the same resources directly on those campaigns.

Viewpoint 4: Money moved ratio

For a subset of our research projects in strategic priorities and consultation (those where we work with groups at the decision-making stage of a campaign), there are often a wide variety of options under consideration. One way to model the impact of Animal Ask is through a money moved ratio: the ratio of expenditure on these campaigns to expenditure on our research.

This ratio allows us to see the minimum amount by which our research needs to have increased the impact of these campaigns in order to have provided equal value to marginal spending on these organisations. For example, a ratio of 1 would imply campaigns need to be at least twice as effective, whereas a ratio of 4 would imply they need to be at least 25% more impactful. This also serves as a benchmark against other meta-organisations in the charity evaluation and effective giving space, which evaluate themselves on the ratio of money moved to effective causes compared to their operational costs.
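To make this relationship explicit, here is a minimal sketch in our own framing (not a formula taken from any partner report), where the money moved ratio is defined as campaign spending influenced divided by research spending:

```python
def min_required_uplift(money_moved_ratio: float) -> float:
    """Minimum proportional increase in campaign effectiveness needed for research
    spending to match donating the same amount directly to the campaigns, assuming
    funds to research and to campaigns are counterfactually equivalent."""
    return 1 / money_moved_ratio

print(min_required_uplift(1))  # 1.0  -> campaigns must become twice as effective
print(min_required_uplift(4))  # 0.25 -> campaigns must be at least 25% more impactful
```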

Of the 18 projects that resolved with positive impact, are part of ongoing campaigns, or averted a suboptimal decision, only 6 (all in the ongoing category) can be evaluated under this metric. Across these projects, our partner organisations have a combined yearly budget of approximately $2,870,000, of which around $940,000 is allocated to campaigns where we provided research to help inform decision-making (estimated as a fraction of the total budget across active campaigns). Given that approximately half of our historic spending has been on projects of this kind ($410,000 USD), the ongoing money moved ratio is 2.3 per year. If campaigns run for 3, 4, or 5 years on average, the money moved ratio is 6.9, 9.2, or 11.5 respectively. This would require that our research results in campaigns that are at least 9-14% more effective than they would have been otherwise, assuming a one-to-one counterfactual value of funds from both organisations' donors.
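As a rough check of these figures, the following short script uses the rounded numbers quoted above (a sketch only; the results differ slightly in the last digit from the underlying spreadsheet):

```python
# Reproduce the money moved ratios quoted above from the rounded inputs.
yearly_influenced_spend = 940_000  # campaign spending informed by our research, per year (approx.)
research_spend = 410_000           # historic Animal Ask spending on this kind of project (approx.)

yearly_ratio = yearly_influenced_spend / research_spend
print(f"yearly money moved ratio: {yearly_ratio:.1f}")  # ~2.3

for years in (3, 4, 5):
    ratio = yearly_ratio * years
    min_uplift = 1 / ratio  # minimum effectiveness gain needed to beat direct campaign spending
    print(f"{years} years: ratio {ratio:.1f}, required uplift {min_uplift:.1%}")
    # 3 years: ratio 6.9, required uplift ~14.5%
    # 4 years: ratio 9.2, required uplift ~10.9%
    # 5 years: ratio 11.5, required uplift ~8.7%
```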

| Organisation | Money Moved | Type | Budget | Ratio |
|---|---|---|---|---|
| Founders Pledge 2020 | | Raised/Influenced | | 2 |
| Giving What We Can | | Raised | | 6 |
| Animal Ask (3 Years) | $2,833,721 | Influenced | $410,000 | 6.9 |
| Animal Ask (4 Years) | $3,778,295 | Influenced | $410,000 | 9.2 |
| ACE | $11,000,000 | Influenced | $1,114,991 | 9.9 |
| Founders Pledge 2023 | | Raised/Influenced | | 11.0 |
| Animal Ask (5 Years) | $4,722,869 | Influenced | $410,000 | 11.5 |
| Founders Pledge 2024 | | Raised/Influenced | | 16.0 |
| GiveWell | $397,000,000 | Influenced | $21,361,760 | 18.6 |

Taken at face value, this would put our money moved ratio above Giving What We Can's and comparable to Animal Charity Evaluators', but this is not a one-to-one comparison. There is wide variation across these groups in the counterfactual use of funds and in the relative impact of their final destination.

Giving What We Can's multiplier, which discounts for potential counterfactual donations, should represent an approximation for fresh funds directed to top opportunities. Founders Pledge does not make such discounts and also directs funds to a wider variety of causes.

Groups like Animal Ask, ACE, and GiveWell, by contrast, are directing or influencing donations or organisational funds that would have been spent anyway towards more effective areas. In that case, the impact of the work needs to be discounted by the value of the counterfactual use of those funds. Animal Ask is also the only organisation on this list that works with organisations themselves rather than solely with donors.

Viewpoint 5: External evaluations of our work

Summary of viewpoint: There have been a couple of external evaluations of Animal Ask, but the conclusions of these evaluations have been ambiguous and uncertain.

There have been a couple of times when Animal Ask has been evaluated by people outside the organisation.

Aidan Whitfield and Bridget Loughhead, during the 2023 pilot of Ambitious Impact's Research Training Program, spent two weeks evaluating Animal Ask. Since the authors had to use our own modelling as part of their evaluation, their cost-effectiveness model cannot be considered independent of our own data. Key passages are as follows:

Since Animal Ask is funded by grants, our work has also been evaluated a number of times by grantmakers. While we don't have specific information, some general themes that have been raised by our funders are:

Issues raised about our work

Before we conclude, we would like to take this opportunity to be transparent about issues that have been raised with our work over time. Most of the feedback we have received has been positive, but we have also heard occasional criticisms of our work. When we notice issues with our work, or when issues are raised by others, we do our best to address those issues if necessary.

During a couple of projects, we did a poor job of communicating with partner organisations. This led to us misunderstanding the organisations' expectations and needs, causing inefficiency in our work and occasionally causing additional work for the partner organisations.

One partner organisation said that Animal Ask should utilise the expertise in the movement more proactively, especially on-the-ground expertise (e.g. the perspectives of the movement's lobbyists). A few people have pointed out that Animal Ask could improve the presentation of its reports, for example by correcting typos, linking internal sections, and so on. And Aidan Whitfield and Bridget Loughhead's 2023 report (described above) recommended that Animal Ask be more transparent about its impact, as this would make it easier for people to evaluate Animal Ask without engaging significantly with us.

In response to issues around communication and project scoping, we have developed a detailed method for communicating with our partner organisations. We now spend much more effort up front talking with potential partner organisations about whether we can offer any benefit, and we are more proactive about following up with organisations over time to see whether they need any more support.

Conclusion — How should we integrate this evidence?

This analysis considered six major viewpoints offering evidence regarding Animal Ask's organizational impact. The conclusions reached by each viewpoint are as follows:

Three of these viewpoints (1, 2, and 4) converge on similar conclusions: Animal Ask's work has shown promising signs, but the data is currently inconclusive. These findings are complemented by two viewpoints providing background context: Viewpoint 0 suggests that failure was the prior expectation, while Viewpoint 3 suggests strong prospects for success.

The general implications for Animal Ask are as follows:

Appendix: How does generative AI change Animal Ask's strategy?

The staff composition of Animal Ask may be suboptimal due to developments in AI that have occurred since key hires were made.

A brief timeline of Animal Ask:

In the three-year period from 2022 to 2025, the landscape of secondary research (and, to a lesser extent, primary research) has undergone significant transformation. It is now possible for an AI agent to generate a report that, at the beginning of the decade, would have required human authorship. While the quality of reports from ChatGPT Deep Research (and its equivalents) is not yet perfect, and the complete replacement of human researchers is not currently advocated, the quality of AI-generated research reports is remarkably high and continues to improve. Crucially, these agents can produce a report in approximately 15 minutes, compared to several weeks for a human.

A key question arises: Is the optimal staff composition of 2020-21 still optimal in 2025? In 2020-21, delegating most meaningful secondary research tasks to AI agents was prohibitively difficult, leading to the decision to hire human personnel. Currently, delegating numerous secondary research tasks to AI agents is straightforward and routine.

Despite this notable shift in global secondary research methodologies, Animal Ask has largely maintained a consistent structure and approach. The organisation has adopted occasional AI research tools for specific purposes (e.g., brainstorming, certain automated classification tasks, suggesting overlooked sources), while exercising caution and thoughtfulness in their integration. Animal Ask's reports from 2021 were human-authored, as are its reports from 2025.

Perhaps it would have been more strategic for Animal Ask to employ a single researcher (rather than two) and for that researcher to leverage AI tools significantly, thereby freeing up the rest of Animal Ask to concentrate on stakeholder engagement and communication.

It is possible that Animal Ask, despite its substantial investment in human research effort, continues to operate optimally. The purpose of this discussion is not to assert that Animal Ask is making incorrect decisions. Rather, it is to suggest that the optimal composition of Animal Ask may have changed between 2021 and 2025, given the major developments in AI during that period.