SoGive Grants: a promising pilot. Our reflections and payout report.

By SoGive, Isobel P @ 2022-12-05T17:31 (+69)

Executive Summary

SoGive ran a pilot grants program and ended up granting a total of £223k to 6 projects (see the section “Our grantee payouts” for details). We were pleased to be able to give high-quality feedback to all rejected applicants (which appears to be a gap in the grants market), and we were also able to help some projects make tweaks such that we could make a positive decision to fund them. We also noted that, despite explicitly encouraging biosecurity projects, we received only one application in this field. We tracked our impressions of applications over time and found that the initial video call with applicants was highly discriminating and helped unearth lots of decision-relevant information, but that a second video call didn’t change our evaluations very much. Given that we added value to the grants market by providing feedback to all applicants, helped some candidates tweak their project proposals, and identified 6 promising applications to direct funding towards, we would run SoGive Grants again next year, taking a lighter-touch approach, assuming the donors we work with agree to this or new donors opt to contribute to the pool of funding. This report also includes thoughts on how we might improve the questions asked in the application form (which currently mirrors the application form used by EA Funds).

 

Introduction

Back in April, SoGive launched our first ever applied-for granting program. This program has now wrapped up, and this post sets out our experiences and lessons learned. For those of you not familiar with SoGive, we’re an EA-aligned research organisation and think tank.

This post will cover:

  1. Summary of the SoGive Grants program
  2. Advice to grant applicants
  3. Reflections on our evaluation process and criteria
  4. Advice for people considering running their own grants program 
  5. Our grantee payouts

We’d like to say a huge thank you to all of the SoGive team who helped with this project, and also to the external advisors who offered their time and expertise. As discussed in the report, we referred back to a lot of publicly posted EA material (typically from the EA Forum), so to those individuals and organisations who take the time to write up their views and considerations online: it is incredibly helpful and it affects real-world decisions - thank you.

If any potential donors reading this want their funding to contribute to the funding pool for the next round of SoGive grants, then please get in touch (isobel@sogive.org). 

 

1. Summary of the SoGive Grants program

Applications received, by cause area:

| Cause area | Applications |
|---|---|
| Meta/EA Community Building | 8 |
| Public policy/improving governance | 8 |
| Hard to categorise | 3 |
| Existential risk (multiple causes) | 3 |
| Climate change | 2 |
| Biosecurity | 1 |
| Nuclear weapons | 1 |
| Total | 26 |

  1. Application form: We started with a relatively light-touch grant application form (similar to the EAIF form, to reduce the burden on applicants). 
  2. Video call 1: After some initial filtering we then conducted video calls with the most promising applicants. 
    1. This involved asking questions about the history of the project and its current status, and then some more in-depth questioning on their theory of change, the status of the problem-area field and other efforts to tackle the same problem, their perceived likelihood of success, worst-case scenarios, counterfactuals (for both the project’s trajectory and the applicant’s time), and the amount of money asked for (and which parts of the project they would prioritise). If we ran grants again, we would also ask applicants to steelman the best-case scenario for their application in the video call.
  3. SoGive meeting 1: Then we had a SoGive team meeting to discuss the applicants and key cruxes etc. This allowed the wider team to share their intuitions and knowledge of the proposed interventions and dig into cruxes which would help determine whether projects were worth funding or not.
  4. Video call 2: From there we conducted further research before another round of video calls to give applicants the chance to address particular concerns and discuss more collaboratively how the projects could be tweaked to be more successful. 
  5. SoGive meeting 2: Then we had another SoGive wide team meeting to discuss applications again, and make final recommendations. 

 

2. Advice to grant applicants

3. Reflections on our evaluation process and criteria

Would we run SoGive Grants again?

 

4. Advice for people considering running their own grants program

 

5. Our grantee payouts

Below are the 6 grantees we recommended (listed with their permission, in alphabetical order).

Doebem

Effective Institutions Project

Founders Pledge

Jack Davies

Paul Ingram

Social Change Lab

 

Closing comments/suggestions for further work

We’d love to talk privately if you’d like to discuss the more logistical details of running your own granting program. Or if you’d like to contribute to the funding pool for the next round of SoGive grants, then please get in touch (isobel@sogive.org). 

 

Appendix A: Our rating system and a short evaluation

In this appendix we evaluate our own evaluation process. As stated previously, we went relatively heavy-touch when examining grant applications. This was because we thought there may be some cases where a highly impactful project might not be well communicated in its application, or where video calls might surface materially valuable extra information (e.g. about management quality); we weren’t sure at which point we would hit diminishing returns from investing time to investigate grants. We tracked our perceptions of grants over time (see below) to see how much our initial impressions changed upon further research and conversation with grant applicants, and in general found they didn’t shift much. This will also prove useful if we run grants again, as a baseline for how future rounds compare in terms of assessed potential/quality.

N.B. The sample size is very small (26 applicants), so one should be careful not to over-rely on the obtained results/insights.

Skim Rating

After we initially read the applications, everyone who was reviewing a specific grant was asked to rate the application from 0 to 3 (0 = don’t fund, not worth any further evaluation; 1 = unlikely to fund, but there could be promising aspects; 2 = possible we would fund with more information; 3 = extremely strong application).

Multi-variable rating (post call 1)

After we conducted our first video call with the applicants, everyone who was in the call was asked to rate the application on:

The graphs below show the sum of the above scores; the highest possible score is 58.

Overall rating (post call 1)

Mean ratings

| Metric | Progressed to call 2? Yes | Progressed to call 2? No | Received funding? Yes | Received funding? No |
|---|---|---|---|---|
| Skim rating | 1.86 | 1.48 | 1.83 | 1.52 |
| Multi-variable rating | 36.86 | 27.57 | 36.92 | 28.35 |
| Overall rating (post call 1) | 6.57 | 4.71 | 6.42 | 4.96 |
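For anyone curious how a comparison like the one above could be reproduced from per-applicant scores, here is a minimal pandas sketch. This is an illustration rather than our actual tooling; the column names and numbers in the data frame are invented, since our internal scoring sheet is not published.

```python
import pandas as pd

# Hypothetical per-applicant scores; SoGive's internal spreadsheet is not public.
ratings = pd.DataFrame({
    "skim":       [2, 1, 3, 0, 2, 1],
    "multi_var":  [40, 25, 52, 18, 33, 29],
    "overall_c1": [7, 4, 9, 2, 6, 5],
    "progressed_to_call_2": [True, False, True, False, True, False],
    "received_funding":     [True, False, True, False, False, False],
})

# Mean of each rating metric, split by whether the applicant progressed to the
# second call and by whether they were ultimately funded (mirrors the table above).
metrics = ["skim", "multi_var", "overall_c1"]
print(ratings.groupby("progressed_to_call_2")[metrics].mean())
print(ratings.groupby("received_funding")[metrics].mean())
```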

 

How useful was each stage of the process?

The chart above tracks applicants’ progress through our evaluation system. It suggests that both the initial skim and the first video call provided lots of discriminating information, whereas the second video call yielded diminishing returns in terms of decision-relevant information. As such, if we run SoGive Grants again, we might not run a second video call round.

Interpretation of results

 

  1. ^

    It’s explained in more detail here, but essentially the bar is £5000 or less to save a life. 

  2. ^

     Based on the guesses provided by Linchuan Zhang here, the marginal cost-effectiveness of the LTFF is a 0.01% to 0.1% reduction in existential risk per billion dollars. If this is true, a $100k project would meet the bar if it decreased existential risk by at least 10^-6% to 10^-5% (a rough arithmetic check is sketched after these footnotes).

  3. ^

    SoGive’s core hypothesis refers to a previous strategy of SoGive around selling the idea of effective giving to the general public and seeing how much interest we got. It entailed directing people to our website which has lots of UK charities reviewed on it and tracking whether or not this analysis changes their donation intentions and patterns, with the plan being to conduct more thorough analysis of which parts of the website and analysis foster the greatest change in behaviour. 
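As a rough check of the arithmetic in footnote 2, assuming the quoted LTFF figures (0.01%–0.1% of existential risk reduced per billion dollars) and a $100k grant, a minimal sketch:

```python
# Rough check of footnote 2's arithmetic; the per-billion figures are the quoted
# guesses about the LTFF, not SoGive estimates.
grant_usd = 100_000
for risk_reduction_per_billion_pct in (0.01, 0.1):   # % of x-risk per $1bn
    bar_pct = risk_reduction_per_billion_pct * grant_usd / 1_000_000_000
    print(f"${grant_usd:,} grant must reduce x-risk by at least {bar_pct:.0e} %")
# Prints 1e-06 % and 1e-05 %, matching the footnote.
```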


Kirsten @ 2022-12-06T01:10 (+10)

I really appreciate that you not only gave feedback to your applicants, but also included common pitfalls in this article!

NickLaing @ 2022-12-05T21:52 (+10)

Wow, thanks so much for this effort - as someone who runs a small charity, it's so encouraging to see smallish EA-aligned organisations getting a look-in for some funding and going through this great process. I have a couple of comments :).

1. As someone working in a global health charity, I often find it strange how little weighting delivery is given in Effective Altruism in general. There are a million good ideas that could have great impact; what matters more is whether the intervention will happen or not. It almost feels like delivery could be a multiplier for other scores rather than a smaller score on its own, or at least it could have a higher weighting, maybe? Does the fidelity of all the other scores not depend, in a sense, on the project actually playing out as planned?

2. I also have questions about how well importance, tractability and neglectedness translate as measures for rating an intervention, when I think they emerged in effective altruism for rating a problem. Were the judges using these criteria to rate the problem being addressed or the solution itself? For example, on neglectedness, some of the solutions (the nuclear winter one, the existential risk one) might be the only people doing that exact thing to contribute to the issue (say a score of 10/10), while the issues themselves might be neglected but less so (e.g. 7/10).

3. (Selfish question!) Do you know of other EA organisations or grantees doing anything vaguely similar - smaller grants to smaller organisations? Is there any online database or list on the forum of EA aligned donor orgs?

Thanks so, so much - I found your whole process and system very interesting and informative; this must be the most transparent grant-maker of all time ;). It was very encouraging.

Isobel P @ 2022-12-07T15:48 (+7)

1. This is a good point; I hope that we weighted delivery heavily enough, but it's not certain. I imagine that sometime next year, when we review the progress and impact of grantees, this will be something we consider more thoroughly, and we will adjust accordingly. 

2. Yep - I should have been more specific: the I and N were applied to the problem area as a whole, and the T was applied to the proposed intervention. In hindsight, maybe we could have weighted this more heavily in favour of the actual intervention being assessed. This was in part exacerbated by us taking a sort of worldview diversification approach and not having a specific cause-area focus. I imagine more tailored funders avoid this problem, as they pick a cause area they deem to be important ahead of time and then evaluate only on the merit of the intervention, whereas we had to incorporate assessments of both the problem area and the proposed project. 

3. Hmm - unfortunately not really in the global health space. The Effective Thesis database here has some sources of funds I hadn't heard of, and the funding opportunities tag might be useful, but they tend to be more longtermist focused. If you message me with details of your project then I'd be happy to think about people I could connect you with. 

Zoe Williams @ 2022-12-09T09:20 (+6)

Post summary (feel free to suggest edits!):
SoGive is an EA-aligned research organization and think tank. In 2022, they ran a pilot grants program, granting £223k to 6 projects (out of 26 initial applicants):

The funds were sourced from private donors, mainly people earning to give. If you’d like to donate, contact isobel@sogive.org.

They advise future grant applicants to lay out their theory of change (even if their project is one small part of it), reflect on how they came to their topic and whether they're the right fit, and consider downside risk.

They give a detailed review of their evaluation process, which was heavy-touch and included a standardized bar to meet, the ITN+ framework, delivery risks (e.g. is 80% there 80% of the good?), and the information value of the project. They tentatively plan to run it again in 2023, with a lighter-touch evaluation process (the extra time didn't add much value).

They also give reflections and advice for others starting grant programs, and are happy to discuss this with anyone.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)