Impact of Charity Evaluations on Evaluated Charities' Effectiveness

By EdoArad @ 2021-01-25T13:24 (+26)

In connection with ongoing outreach work in EA Israel, an upcoming Charity Effectiveness Prize, and our local charity-evaluation work, I've become interested in how the evaluation process affects charities that are under evaluation.

I can (sort of) quantify and understand the direct impact of the recommendation on the top charities. I can also (sort of) imagine the kind of impact the recommendation process has on popularizing cost-effectiveness (although I'd love to read a detailed report on the topic).

What I'd like to understand better at the moment are the generally-framed questions around how more evidence leads to higher performance: 

  1. How do self-reflecting charities adapt to new evidence? GiveDirectly, for example, performs many RCTs on direct cash transfers. I'd be very interested in examples of GiveDirectly and other charities making strategic or practical changes due to new evidence, and examples of randomized trials or other analyses being performed to inform high-level decisions. Are there examples of nonprofits that completely started over when their interventions proved ineffective? I'd also be interested to learn more about what charities recommended by evaluation orgs learn from the evaluation process itself and whether this helps them improve (or causes harm).
  2. How are charities that aren't empirically grounded impacted by charity evaluation? Most charities, unfortunately, do not perform RCTs or otherwise invest in gathering evidence about the impact of their actions. Generally speaking, how do such charities respond to evidence or claims of (in)effectiveness? How reasonable is it to expect charities without a strong self-evaluation history to improve as a result of an analysis performed later on? Are there examples where organizations like GiveWell and ACE successfully helped improve the performance of the charities they investigated?

smaq @ 2021-01-27T15:04 (+9)

I can only address one of your points from question 1. Evidence Action abandoned and completely shut down at least one of their interventions after it proved to be ineffective in light of new evidence. If I remember correctly, it was in Bangladesh.

EdoArad @ 2021-01-27T17:51 (+5)

Thank you!

I've searched and found this post describing it. The summary:

Evidence Action is terminating the No Lean Season program, which was designed to increase household food consumption and income by providing travel subsidies for seasonal migration by poor rural laborers in Bangladesh, and was based on multiple rounds of rigorous research showing positive effects of the intervention. This is an important decision for Evidence Action, and we want to share the rationale behind it.  

Two factors led to this, including the disappointing 2017 evidence on program performance coupled with operational challenges given a recent termination of the relationship with our local partner due to allegations of financial improprieties. 

Ultimately, we determined that the opportunity cost for Evidence Action of rebuilding the program is too high relative to other opportunities we have to meet our vision of measurably improving the lives of hundreds of millions of people. Importantly, we are not saying that seasonal migration subsidies do not work or that they lack impact; rather, No Lean Season is unlikely to be among the best strategic opportunities for Evidence Action to achieve our vision.

smaq @ 2021-01-31T19:00 (+1)

Thank you. Yes, this is exactly what I was referring to.