[Impact Roadmap] Decision-Making Tools
By Aaron Boddy🔸 @ 2023-07-05T10:47 (+19)
This post is intended to outline useful Decision-Making Tools you'll likely refer to many times throughout the Roadmap.
There is a separate post on Creative-Thinking Tools.
Additionally, a lot of this is based on the work of Charity Entrepreneurship - an overview can be found at this EAG talk by Joey Savoie, or a deeper dive can be found in the Charity Entrepreneurship Handbook - How to Launch a High-Impact Nonprofit.
This also just seems like an opportune place to recommend applying to the Incubation Program if you're interested in this kind of stuff :)
Meta tools
- The Explore/Exploit tradeoff - Knowing (or committing to) how long you should spend “exploring” options before committing to “exploit” the knowledge you have gained. In Algorithms to Live By, the optimal split is put at spending the first 37% of your time exploring and the rest exploiting.
- Time-Capping - Fixing the number of hours for a certain task, research project or decision and keeping research within those bounds.
- Iterative Depth - A process of narrowing down many ideas to the most promising ones, without spending too much time on the least promising ones. This can be achieved by Time-Capping (above) each stage of the process, and iteratively researching ideas that make it to subsequent stages in increasing depth.
- Feedback - Giving and receiving feedback, as well as knowing which pieces of feedback to implement and which to discard.
- Narrow Focus - The ability to maintain focus on an idea without spreading yourself too thin. Ten half-finished projects are likely not as valuable as a single finished project.
- Reevaluation Points - Building in regular times that you’ll reevaluate your project with fresh eyes and perspective, allowing you to maintain Narrow Focus (above) at other times.
- Components of a Good Tool - Understanding where a tool you might use falls across the spectrum of:
- Speed - If two tools are otherwise equal but one is faster, using the faster one is better
- Cross-Applicability - A tool that can be efficiently used in more situations (especially when comparing interventions across causes)
- Accuracy - The more accurate a tool, the more weight you can give it in decision making
- Tool Trade-offs - Choosing a tool based on the decision, such as between a) a fast, but semi-accurate tool, or b) a slow but accurate one. Often depends on the importance of the answer, how much time you have, and the number of options being compared.
- Using Multiple Tools - Often multiple tools can be used to get closer to the truth, and if multiple tools are converging on an answer - you’re likely heading in the right direction!
- Try to use as many tools as you can, but keep proper utilisation in mind - most tools have a minimum useful time, so diversity is better, but keep in mind the “sweet spot” for each tool
- Use complementary tools - e.g. cost-effectiveness analyses and expert views provide different perspectives and are great to use together
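The 37% figure above comes from the optimal-stopping literature (the “secretary problem”). A quick simulation - with purely illustrative numbers - sketches why it works: explore the first 37% of options without committing, then pick the next option that beats everything seen so far.

```python
import random

def secretary_rule(candidates, explore_frac=0.37):
    """Explore the first explore_frac of candidates, then exploit: pick the
    first later candidate that beats everything seen (fall back to the last)."""
    cutoff = int(len(candidates) * explore_frac)
    best_seen = max(candidates[:cutoff]) if cutoff else float("-inf")
    for value in candidates[cutoff:]:
        if value > best_seen:
            return value
    return candidates[-1]

random.seed(0)
trials = 10_000
wins = 0
for _ in range(trials):
    candidates = random.sample(range(100), 100)  # options in random order
    if secretary_rule(candidates) == 99:         # did we land the best one?
        wins += 1
print(f"Picked the single best option in {wins / trials:.0%} of trials")
```

With 100 options, this strategy picks the single best option in roughly a third of trials - far better than the 1% a random pick would achieve.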
Multi-Factor Decision-Making
I want to highlight that this tool in particular will likely be used significantly throughout the Roadmap.
Creating a set of criteria, assigning weightings for each, and then going through multiple options to generate a score to help you assess the best options. This enables you to combine a large number of factors into a single score, and be transparent about your reasoning. Often created in a spreadsheet.
- The number of factors - Use more factors for complex problems | Try to limit categories | 1 factor can have multiple subcategories | Keep your comparison between 3 and 10 variables
- Colour coding - Allows you to see information much quicker | Easily compare data and spot weak areas | Valuable for broad-level comparison | Use intuitive colours (Green/Yellow/Red)
- Content Hierarchy - Put the most important criteria closer to the total score (left-hand side of your spreadsheet) | Arrange columns from most to least significant factors
- Feedback - Spreadsheets are transparent | Let you lay out your reasoning in a clear process | Which helps in communication with advisors and evaluators
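The weighted-scoring logic behind such a spreadsheet can be sketched in a few lines. The criteria, weights, and scores below are entirely hypothetical - they illustrate the mechanics, not which factors you should use.

```python
# Hypothetical criteria and weights (must sum to 1 for an interpretable scale).
weights = {"scale": 0.4, "tractability": 0.3, "neglectedness": 0.2, "cost": 0.1}

# Each option is scored 1-10 on every criterion.
options = {
    "Intervention A": {"scale": 8, "tractability": 5, "neglectedness": 7, "cost": 6},
    "Intervention B": {"scale": 6, "tractability": 8, "neglectedness": 4, "cost": 9},
}

def weighted_score(scores, weights):
    """Combine per-criterion scores into a single weighted total."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank options from best to worst by their combined score.
for name, scores in sorted(options.items(),
                           key=lambda kv: weighted_score(kv[1], weights),
                           reverse=True):
    print(f"{name}: {weighted_score(scores, weights):.2f}")
```

The single score makes the ranking transparent: anyone who disagrees with the result can point to the exact weight or score they would change.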
Rationality
Using the tools of rationality to make more effective decisions, such as understanding biases, Bayesian reasoning, and epistemic modesty.
- Instrumental vs Epistemic Rationality
- Epistemic rationality means arriving at true beliefs: your map matches the territory
- Instrumental rationality means achieving your goals effectively and successfully
- Expected Value Calculations - expected value is calculated by multiplying each of the possible outcomes by the likelihood each outcome will occur and then summing all of those values
- Occam’s Razor - all else being equal, the simplest explanation is probably right
- Bayesian Thinking - Having a prior belief, then updating that belief based on new evidence
- Skepticism and critical thinking - If something looks too good to be true, your model is probably flawed - adopt a skeptical prior
- Epistemic modesty - Become more comfortable with deferring to experts, and applying self-skepticism
- Crucial Considerations - The factors that influence the outcome the most
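Expected value and Bayesian updating are both one-line formulas; a minimal sketch with made-up numbers:

```python
def expected_value(outcomes):
    """Sum of value x probability across all possible outcomes."""
    return sum(value * prob for value, prob in outcomes)

# Hypothetical grant: 25% chance of a big win, 75% chance of nothing.
ev = expected_value([(1_000_000, 0.25), (0, 0.75)])
print(ev)  # 250000.0

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing the evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Prior of 20% that an intervention works; a positive pilot result is
# 3x more likely if it works (60%) than if it doesn't (20%).
posterior = bayes_update(0.20, 0.60, 0.20)
print(round(posterior, 3))  # 0.429
```

Note how a single positive pilot moves the belief from 20% to about 43% - a meaningful update, but far from certainty, which is exactly the point of keeping a skeptical prior.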
Scientific Method
Understanding the relative strengths of different types of evidence when reviewing the literature, as well as concepts such as randomisation and falsification if you need to undertake primary scientific research.
- Interpreting a P-Value - All else being equal, the lower the p-value, the stronger the evidence that the results reflect a true effect rather than chance
- P-Fishing - P-fishing (also known as p-hacking) is where you analyse the data in a large number of different ways. The more statistical tests you do, the more likely it is you will get a false positive
- Don’t just read the abstract - If the decision you’re making is very important to get right, you should not just read the abstract (if the decision is small, unimportant or very time-bound, abstracts are fine)
- Pay attention to the effect size - Statistical significance is not the same as real life significance
- Factors that affect how relevant an RCT is:
- Metrics - Is it measuring the metrics you care about, or an intermediate metric?
- Location - The closer the location the study was done to the location you will do the intervention in, the better
- Population - Population is similar to where you’ll be working
- Intervention - Intervention is similar to the one you’ll be working on
- Scale - An intervention done at a small scale often gets far more attention and highly talented people working on it than when it gets scaled up to millions of people
- Date - The world changes a lot over time. If a study was conducted in the 90s, that’s 25+ years ago. What worked then might not work now
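The p-fishing problem can be made concrete with a simulation. The sketch below (illustrative numbers, and a crude z-style test rather than a proper t-test) draws both “groups” from the same distribution, so every significant result is a false positive - yet running 20 tests per study finds one remarkably often.

```python
import random
import statistics

def fake_experiment(n=30):
    """Two groups drawn from the SAME distribution: any 'significant'
    difference is by construction a false positive."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Crude z-style test on the difference in means.
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # roughly p < 0.05

random.seed(1)
tests_per_study = 20
trials = 500
at_least_one_hit = sum(
    any(fake_experiment() for _ in range(tests_per_study)) for _ in range(trials)
)
print(f"{at_least_one_hit / trials:.0%} of null studies found a 'significant' result")
```

With 20 independent tests at p < 0.05, theory predicts a false positive in roughly 1 - 0.95^20 ≈ 64% of studies, and the simulation lands in that ballpark - which is why pre-registered analyses and corrections for multiple comparisons matter.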
Effective Altruism
Using the insights gleaned from the EA community (themselves often derived from Economics and Philosophy) to improve your decisions.
- Impartiality - Helping people/animals in different circumstances regardless of what we prefer
- Crucial Considerations - The most important factors that can change the outcome of your decision
- Counterfactuals - What would have happened if you hadn’t chosen X
- Strawmanning - Misrepresenting an argument in its weakest version
- Steelmanning - Making an argument in its strongest version
- Importance, Neglectedness, Tractability (INT) Framework - A conceptual framework that you can use as a shortcut to narrow down to good ideas quickly
- Limiting Factor - The factor(s) that is going to limit your intervention
- Might be more important than the scale of an issue
- Value Drift - Changing the hierarchy of your values in a significant way, e.g. becoming less altruism-oriented
- Transparency - Being transparent about failures that are potentially common (might harm your org but might help the world)
- Cause X - Undiscovered cause area that might be highly promising
- Longtermism - Focusing on interventions in the long-run future, e.g. AI, biorisk
- Neartermism - Focusing on interventions for people/animals that live now or in the next generation
- Excited Altruism - Placing less weight on putting altruism over personal happiness; pursuing the causes that excite you
- Selfless Altruism - Prioritise something that is good for the whole despite personal interest or passion
- Hits-Based Giving - High-risk, high-reward philanthropy
- Evidence-Based Giving - Focus on the historical evidence for high-impact
- High-Fidelity - More complex, deeper, more descriptive
- Low-Fidelity - Broad, easy, less nuanced
Independent Experts
Try to distinguish good advice from bad advice, then try to apply the good advice
- Come prepared - Don’t want to waste advice or advisor's time
- Have topics and questions prepared (don’t necessarily have to follow)
- Advice often points to the biggest weakness but not the path forward - Advisors will be drawn towards highlighting flaws, but unless they have deep understanding of your context, will be less likely to come up with solutions
- Types of Experts
- Specialists - Narrow knowledge about something very specific
- Domain Expert - broad knowledge about a narrow domain
- Broad expert - Broad, comparative knowledge, e.g. charity evaluators
- Mentors - Have a similar experience, can relate to your charity | Have a nontrivial relationship with you | Are more trusted advisors | You talk to them about cross-cutting issues
- How to connect - Start with building a network of advisors | Talk to a lot of people (conference, social skypes) | Don’t neglect your contact form | Check CE Mentorship profiles | Spot people who give good advice and turn them into mentors
- Broad vs. Narrow - Mentors usually provide broad advice but they can also give you good, specific advice because they know your organisation well
- Funders - Large funders will often want to be involved - don’t just want to provide money
- How to connect - Speaking to current donors (even small donors)
- Broad vs. Narrow - EA funders are broad advisors | Most funders will be narrow advisors that can give you advice on things that cross-apply from the for-profit world, e.g. hiring, management
- Academic Experts - Often their names will come up again and again while researching
- How to connect - Cold reach-outs often work pretty well (at least depending on popularity) - ~50% response rate (though this varies by field) | Email - Short, sweet and to the point | They’re often studying it for a reason and so can be excited to talk to you
- Broad vs. Narrow - Academic experts are often narrow advisors | Though some buck the trend and can have broader knowledge
- Policymakers - Broadly (key ministers or people connected to them such as staffers) | Lots of variation between countries, but most have national and local level (and sometimes state) | Need to identify who makes the decisions you care about
- How to connect - Ideally Warm introductions - Usually can be done through established non-profits | Cold introduction to a lower down official
- Broad vs. Narrow - Normally pretty broad | Will need a quick, simple, ideally intuitive explanation of the issue and intervention | They’ll know what can get traction
- Other (Charitable) Organisations - Other incubatees, other orgs in your space | Bear in mind you’re speaking to a person, not an org, but you’re speaking due to their affiliation
- How to connect - Ranges, but often (especially small orgs) are easy to get in touch with | Many will reach out to you | Cold calls, warm introductions and networking all work
- Broad vs. Narrow - Foundations are broad - Implementation will be specific
- Your Community - Broadly the EA community and the specific area you are within (i.e. if working in animals the animal community) | Advice is from individuals that don’t fall into categories above - will likely interact lots and get positive/negative feedback | Can give you a lot of knowledge, expertise, and connections to valuable people | Provides support
- How to connect - Most communities have low barriers to entry (i.e. EA Forum) | Generally excited to have new people join and intro events | Low bar to connect | Connecting frequently and respecting the norms is good (be a community member and not a spammer) | Engage thoughtfully (read content published by the community before you post)
- Broad vs. Narrow - Filled with individuals that have specific knowledge, but overall broad | Some people are community experts who can give a good overview
Task Planning
The ability to prioritise and focus on high-impact tasks, deep work, maintaining healthy habits etc.
- Focus on high-impact tasks - Is it Important? Is it Urgent? What’s the Effort required?
- Prioritise High-Importance, High-Urgency tasks
- Schedule High-Importance, Non-Urgent tasks
- Understand the effort required for each task, to inform Time-Capping and spot Quick Wins
- Implement 80/20 approach
- Don’t reinvent the wheel - Use standard templates when useful
- Can outsource to Fiverr or Upwork to free up your time
- Use a task-management tool - Avoid trap of thinking you can keep it all in your mind
- Avoid trap of over researching task management or reinventing task management
- Ensure consistent task management system (such as Asana, not email)
- Getting Things Done - Capture Everything, Clarify, Organise, Review, Engage
- Time Boxing - Look at high impact, high priority tasks and reserve a slot on your calendar
- Creates time for you to work on these tasks
- Forces you to apply 80/20 as you are time capped
- Without time boxing you can spend the day “firefighting” (working on urgent, non-important tasks - like emails etc, which can be batched)
- Deep Work - Uninterrupted work (turn off phone, slack messages etc)
- Allows you to focus and achieve output in a limited time (aligns well with Time Boxing)
- Don’t schedule meetings when you’re primed for deep work (and maybe have some days with no meetings at all)
- Review your activities - Does your time spent on activities align with your high-importance tasks?
- Can reconsider whether you are spending too much time on a task and can outsource/delegate (within your task management system)
- Goal is to have fewer tasks - what are we doing that we shouldn’t be doing? (beware of Sunk Cost Fallacy)
- Self Care - Eat well, Sleep Well, Exercise
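The importance/urgency triage above can be sketched as a tiny decision function. The task names and routing labels are hypothetical - the point is the four-way split, not the specific wording.

```python
# Hypothetical task list; importance/urgency are your own judgement calls.
tasks = [
    {"name": "Grant application",     "important": True,  "urgent": True},
    {"name": "Write annual strategy", "important": True,  "urgent": False},
    {"name": "Reply to newsletter",   "important": False, "urgent": True},
    {"name": "Reorganise old files",  "important": False, "urgent": False},
]

def triage(task):
    """Route a task based on the importance/urgency split described above."""
    if task["important"] and task["urgent"]:
        return "do now"
    if task["important"]:
        return "schedule a time box"
    if task["urgent"]:
        return "batch or delegate"
    return "drop"

for task in tasks:
    print(f"{task['name']}: {triage(task)}")
```

Encoding the rule this explicitly is mostly useful as a check on your intuitions: if a “drop” verdict feels wrong, that usually means the task is more important than you scored it.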
Problem-Solving
- One of the best ways to create a habit is an If-Then Intention - where you have a:
- Trigger - The trigger is something specific in the environment that prompts you to do a certain behaviour if it happens
- Behaviour - The specific behaviour you then carry out
- Escape, Alter, Reframe (EAR)
- Escape - Can you just leave the problem?
- Alter - Can you change the situation to make it better?
- Reframe - Can you look at the problem in a different way?
- Why is this important? - The first and most important question is to figure out whether you should even bother solving the problem
- What triggered this question? - This is a great question that will keep you grounded on something practical and help you figure out what is the actual problem
- Is there a better way to frame the question/problem? - The initial framing of the question will influence the answers you come to
- How to reframe a problem
- What is my underlying need that makes me want X?
- How do I get both A and B conflicting goals at the same time?
- How do I get X negative thing to stop happening?
- How do I get Y positive thing to happen instead of X?
- What’s causing X?
- What are some different ways I could prevent it?
- What decision is it influencing? - Make sure that you’re thinking about something that is useful
- Goals/Criteria - If you don’t know what you’re pursuing, any choice is equally good
- Start Broad - e.g. Preventing suffering | Making your loved ones happy
- Break it down to intermediate steps - e.g. Make fewer chickens live through factory farms | Have more enjoyable down time or reduce stressors
- Don’t re-invent the wheel - If you’re having a problem, odds are somebody else has already had it, solved it, and written about it
- Google it. See what other people have found
- Check your notes to see if you have already thought / written about it
- Methodology - How are you going to approach the problem? (or a combination)
- Spreadsheets
- Long period of collecting and analysing data
- More analysis and trying to figure out the problem via thinking and discussing with friends
- Data/Observations - Focus on observing the instances of the problem first with an open mind.
- As hypotheses occur to you, quickly jot them down in the hypotheses section, then return to listing observations
- Hypotheses - Having this step forces you to come up with multiple hypotheses as to what is causing the problem
- If you have the time, this is a good time to try out the 5 Whys
- Experiments / potential solutions - Generate a list of potential solutions and ways to experiment with them
- Make sure to spend some time coming up with alternatives
- There are always more potential solutions than you first think
Long-term Planning
Being able to generate an overall goal, while also knowing what level of detail you should plan ahead (i.e. a month-to-month plan should be more detailed than a one-year plan, which in turn should be more detailed than a five-year plan).
You will achieve less than you expect in 1 year and more than you expect in 5 years
- Front load tasks - Put the most important things earlier in the plan
- You can always cut the less important stuff at the end
- Focus - e.g. Top 3 org objectives for the year | Top 3 personal objectives for the month
- Work backwards - from aspirational goals, then can figure out steps
- Black Swans - Build in robustness - need to be able to deal with unknown unknowns
- Pre-Mortem - Imagine your project failed a year from now
- Why did it fail? What could you have done to prevent it?
- Plan for Pivots - Expect 5 small modifications in a year
- Making a Plan
- Before you Plan - Have a deep understanding of the area (Broad and Narrow reading, talking to experts) | Expect to get more information every year | Look into templates
- 5-Year Plan - Inspirational | Broad direction | Not very accurate
- Paragraph per year + conclusion/summary of what you hope to accomplish
- 1-Year Plan - Goal-setting | Key metrics, key milestones | Donors look at it more closely
- Month-to-Month Plan - Specific steps and timeline | Assigning responsibilities for staff
- Tracking your Plan
- Monthly Review - Go through your goals and check if they are: on track (green), not on track (amber), unlikely to happen (red)
- Helps you adjust your workload and re-focus | Can evaluate for more goals
- Yearly Review - Deeper | Based on lessons you learned (e.g. on hiring, too ambitious goals)
- Look back on specific SMART goals | Analyse your processes
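The green/amber/red classification can be made mechanical by comparing progress against how far through the period you are. The thresholds below are an assumption for illustration - pick cut-offs that match your own risk tolerance.

```python
def rag_status(progress, months_elapsed, months_total):
    """Classify a goal as green/amber/red by comparing the fraction of the
    goal achieved with the fraction of the period that has passed.
    Thresholds are illustrative assumptions, not a standard."""
    expected = months_elapsed / months_total
    if progress >= expected:
        return "green"   # on track
    if progress >= 0.5 * expected:
        return "amber"   # not on track, but recoverable
    return "red"         # unlikely to happen

# Hypothetical yearly goals, reviewed at the 6-month mark.
goals = {"Reach 1,000 participants": 0.6, "Publish 2 reports": 0.2}
for goal, progress in goals.items():
    print(goal, "->", rag_status(progress, months_elapsed=6, months_total=12))
```

A rule like this won't capture goals with naturally back-loaded progress, so treat it as a prompt for the monthly conversation rather than a verdict.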
Cost-Effectiveness Analysis (CEA)
A Cost-Effectiveness Analysis (CEA) is basically a calculation aimed at creating a single number representing a benefit (such as a health metric, like DALYs or Lives Saved) for a given cost (normally some currency). E.g. “the result of the CEA showed a life could be saved for $5,000”.
You’ll likely want to start with a spreadsheet tool like Google Sheets (a tool like Guesstimate can be used for complex CEAs you may develop in future, in particular for CEAs with high levels of uncertainty). The design of your CEA will likely split up each broad idea within your CEA into its own sheet:
- Each sheet will likely be read across a given row, with the first column containing your titles.
- You might put specific sections into their own boxes (for readability) and notes in the last columns.
- Once you’ve developed your sheet for each broad idea, the key number from each can be pulled into a Summary Sheet.
- The summary sheet will contain the results of the CEA, i.e. “Benefit X for Cost Y”, as well as the most important numbers that factor into that result and the factors which most affect the estimate (the Sensitivity Analysis).
- You’ll likely include a Benefits, Costs, and Counterfactuals section. Additionally, you might also want to include an Optimistic, Pessimistic, and Best-Guess for some numbers (for Monte Carlo simulations you can use Guesstimate).
- You might also discount numbers within your CEA, such as:
- Certainty - if you have a source for a number, but it’s uncertain, you might apply a certainty discount to it
- Generalisability - when generalising evidence from one context to another, you might apply a generalisability discount, to acknowledge that the situation you’re modelling is not identical to the source you’re using
- Bias - if you believe that bias may be a factor in a number from a source, you might discount it
- Finally, you’ll want to include a References tab at the end (along with links to the most important numbers actually within the CEA itself).
To keep your CEAs consistent with others (like GiveWell’s) you’ll likely want to use similar formatting, such as colour-coding cells based on data type:
- Yellow: Value and Ethical judgements - These are numbers that could change if the reader has different ethical judgements to you, and there is often no clear answer - e.g. “how many years of happiness is losing the life of one child under five worth?”
- Green: Citation-based numbers - Numbers based on a specific citation, often hyperlinked (or noted in the reference section)
- Blue: Calculated numbers - Numbers generated from other numbers within the sheet
- Orange: Estimated numbers - Numbers where a source cannot be found, and a number is required to be estimated by a CEA author (or expert)
Bear in mind that there is diversity among existing CEAs. Charity Entrepreneurship suggests that there are the following levels:
- Informative - A CEA where the endline isn’t particularly useful on its own, but the variables and citations that were used to develop it can be informative for your own.
- Suggestive - A good-quality CEA, but assessing a different metric/situation to you. Often it will help you to update your views, but you’ll still need to create your own CEA.
- Predictive - A high-quality CEA that is sufficiently close to your metrics/situation. You’d likely use many of the same inputs when creating your own CEA.
Finally, it’s worth remembering that the CEA is a map, not the territory. It’s likely to be imperfect and/or missing key data. Whilst it’s often a very useful tool, there are shortcomings, and it should be one tool among a few to help you make decisions.
[More details on developing a Cost-Effectiveness Analysis can be found in chapter 14 of the Charity Entrepreneurship Handbook - “How to Launch a High-Impact Nonprofit” (and likely elsewhere). Additionally, it’s worth referring to this list of ways in which cost-effectiveness estimates can be misleading - it’s a useful checklist to assess whether important considerations have been missed.]
Monitoring, Evaluation, and Learning (MEL)
Monitoring refers to the routine monitoring of project resources, activities, and results, and analysis of the information to guide project implementation.
Evaluation refers to the periodic assessment and analysis of ongoing or completed projects.
Learning is the process through which information generated from M&E is reflected upon and intentionally used to continuously improve a project’s ability to achieve results.
It’s worth noting here that MEL systems often don’t get it right the first time; rather, they evolve over time, based on internal learning (and external accountability).
Monitoring
A good place to start with Monitoring is a Logframe. A Logframe essentially takes your Theory of Change (or a Programme Theory) and transforms it into a table, allowing you to inspect each step, and pull out the data that can be used to verify that the change is taking place. You take each step (Objective) and then explicitly state what you’re going to track (Indicators/Metrics), how you’ll verify and store the data related to this tracking (Sources of Verification), and finally any risks, assumptions or externalities within this step, that may result in no impact, even if the Objective is achieved (Threats). While your final Monitoring system may end up outgrowing the Logframe, this is a great place to start.
[The CART Principles are a useful methodology for Monitoring - and are summarised in the Programme Development Methodologies Appendix.]
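The four Logframe columns described above map naturally onto a simple record type. This is a minimal sketch with a hypothetical example row, not a standard schema - real Logframes are usually built as tables in a spreadsheet.

```python
from dataclasses import dataclass, field

@dataclass
class LogframeRow:
    """One step of the Theory of Change, as a Logframe row."""
    objective: str                # the step being verified
    indicators: list = field(default_factory=list)  # metrics you will track
    verification: str = ""        # how/where the data is verified and stored
    threats: str = ""             # risks, assumptions, externalities

# Hypothetical example row.
row = LogframeRow(
    objective="Households adopt the new practice",
    indicators=["% of trained households still using it after 3 months"],
    verification="Follow-up phone survey, stored in the M&E database",
    threats="Adoption reported but not sustained; survey response bias",
)
print(row.objective, "-", row.indicators[0])
```

Writing each step out this explicitly makes the gaps obvious: a row with an objective but no indicator, or an indicator with no source of verification, is a step you cannot actually monitor.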
Evaluation
Evaluation is most useful when scheduled as a periodic appraisal. In particular, it’s useful to specify ahead of time what actions should be taken based on findings, with an organisational commitment (and resources) to do so. Such as:
- What circumstances warrant a scale-up of the intervention?
- What circumstances warrant a small (or significant) redesign of the program?
- What circumstances would warrant a program shutdown?
[The Network on Development Evaluation (EvalNet) criteria are a useful methodology for Evaluation - and are summarised in the Programme Development Methodologies Appendix.]
Learning
Your Monitoring systems and Evaluation appraisals provide an opportunity for reflection and continuous improvement.
For example at a Project level:
- What works well in a particular context or what does not work well?
- Which aspects of a project have more influence on the achievement of results?
- Which strategies can be replicated?
And at an Organisational level:
- Compare project results to determine which contribute to your organisation's mission.
- Aggregate results from similar projects or cross-cutting organisational indicators to understand your organisation's wider reach.
- Use the learnings from different projects to guide new project development and funding opportunities.
To facilitate this Learning, you might:
- Set up both formal and informal opportunities to reflect on lessons learned
- Share project outcomes and organisational insights (both positive and negative), with partners, communities, and funders
- Publish learnings (i.e. on your website) for accountability and transparency