AMA: Ian David Moss, strategy consultant to foundations and other institutions

By IanDavidMoss @ 2021-03-02T16:55 (+40)

This is my first-ever AMA and I'm excited about it -- thanks to Aaron for the push! I will be answering questions here on the afternoon of Monday, March 8, between 1 and 3pm East Coast time.

Here's some information about me and my work:

I am happy to answer questions about any of the above, or anything else that's on your mind! I may not get to everything, especially if there are a lot of questions, but I'll try my best.

(Update: I've now come to the end of the time I budgeted, but will continue monitoring this discussion and will try for one or two follow-ups this week if I can!)


tessa @ 2021-03-03T04:18 (+20)

Are there theory-of-change-level misconceptions that you commonly find yourself correcting for your clients? What are some of the strategic mistakes you frequently see made by institutions on the scale you advise?

IanDavidMoss @ 2021-03-08T18:20 (+12)

I love the way you phrased this question -- in fact, one of the reasons why I'm such a big believer in theories of change (so much so that I wrote an introductory explainer about them) is that they are excellent for revealing strategic mistakes in a client's thinking.

A frequent pitfall I come across is that the originators of an organization or program often fall in love with the solution rather than the problem. By that I mean they see a problem, think immediately of a very detailed solution for that problem -- whether it's a software platform, some other kind of technology or innovation, an adaptation of an existing idea to a new audience or environment, etc. -- and get so invested in executing on that solution that it doesn't even occur to them to think about modifications or alternatives that might have higher potential. Alternatively, the solution can become so embedded in the organization's identity that people who join or lead it later on see the specific manifestation of the solution as the organization's reason to exist rather than the problem it was trying to solve or opportunity it was trying to take advantage of.

This often shows up when doing a theory of change for a program or organization years down the line, after reality has caught up to the original vision -- day-to-day activities, carried out by employees or successors and shaped through repeated concessions to convenience or other stakeholders, often imply a very different set of goals than are stated in the mission or vision statement! For that reason, when doing a theory of change, I encourage clients to map backwards from their goals or the impact they want to create and forget for a moment about the programs that currently exist; this helps them see a whole universe of potential solutions and think critically about why they are anchored on one in particular.

MichaelPlant @ 2021-03-04T12:03 (+13)

Hello Ian. Could you say a bit about what providing strategy and research looks like? I don't have an intuitive grasp of what sort of things that involves, and I'd appreciate an example or two!

IanDavidMoss @ 2021-03-05T14:30 (+4)

Hi Michael, there are some sample project descriptions over at my website, but I'll paste a couple here for convenience: 

For more than 18 months, I worked with Democracy Fund’s Strategy, Impact and Learning team to bolster organizational capacity for strategic decision-making and develop a framework for risk assessment and mitigation across the organization. Deliverables included a training for 35+ senior and program staff covering forecasting skills and decision analysis, a concept paper, and recommendations to strengthen the approval process for nearly $40M in annual grantmaking. (2018-20)

I advised the Omidyar Network on the development of its Learning & Impact plan, with particular attention to designing team and organization-wide accountability systems that incentivize smart decision-making habits and practices as an alternative to traditional outcomes-based accountability. In addition, this engagement helped support the creation of a framework to help philanthropic institutions respond to the uncertainty created by the COVID crisis. (2020)

In partnership with BYP Group, I developed theories of change for two grant programs administered by Melbourne, Australia-based Creative Victoria (a state government agency). In addition, I worked directly with nine Creative Victoria grantees to create evaluation frameworks for their funded projects, which sought to use creative industry assets to accelerate progress on longstanding social issues such as mental health, social cohesion, and gender equality. (2018)

Those should give you a high-level sense of what I do, but I'm happy to answer more specific questions as bandwidth allows.

JamesOz @ 2021-03-03T09:50 (+10)

What would be your top 3-5 tips for making good decisions in an organisation?

IanDavidMoss @ 2021-03-08T20:09 (+9)
  1. Be aware of your decisions in the first place! It's really easy to get so caught up in our natural habits of decision-making that we forget that anything out of the ordinary is happening. Try to set up tripwires in your team meetings, Slack chats, and other everyday venues for communication to flag when an important fork in the road is before you and resist the natural pressure people will feel to get to resolution immediately. Then commit to a clear process of framing the choice, gathering information, considering alternatives, and choosing a path forward.
  2. Match the level of information-gathering and analysis you give a decision to its stakes. Often organizations have rote processes set up for analysis that aren't actually connected to anything worth worrying about, while much more consequential decisions are made in a single meeting or in a memo from the CEO. Try to establish a discipline of asking how much of your/your team's time it's worth spending on getting a decision right (see the toy sketch after this list). Try to ensure that every piece of knowledge your team collects has at least one clear, easily foreseen use case in a decision-making context, and dump any that are just taking up space.
  3. Try to structure decisions for flexibility and option value. Look for ways to run experiments and give yourself an out if they don't work, ways to condition a decision on some other event or decision so that you aren't backed into making a choice before you have to, and ways to hedge against multiple scenarios. Obviously, there will be situations when there is one correct choice and you need to go all-in on it. But in my experience those are pretty rare, and clients are more likely to make the opposite error of overcommitting to decision sequences that overly narrow the set of reasonable future options, causing problems down the line.
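
To make tip #2 concrete, here's a toy value-of-information bound -- a minimal sketch in which the formula and every number are purely illustrative, not a real client calculation:

```python
# Toy sketch: further analysis is worth at most the value at stake, times
# the chance the analysis changes your choice, times the improvement you
# capture if it does. All numbers below are made up for illustration.

def max_worthwhile_analysis_cost(stakes, p_flips_choice, avg_improvement):
    """Rough upper bound on what to spend getting a decision right."""
    return stakes * p_flips_choice * avg_improvement

# A $40M annual grantmaking approval vs. a $20k vendor choice:
print(max_worthwhile_analysis_cost(40_000_000, 0.10, 0.05))  # 200000.0
print(max_worthwhile_analysis_cost(20_000, 0.10, 0.05))      # 100.0
```
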
MichaelA @ 2021-03-05T06:33 (+9)
  1. What do you believe that seems important and that you think most EAs would disagree with you about?
  2. What do you believe that seems important and that you think most people working on improving institutional (or other) decision-making would disagree with you about?
  3. What do you think EAs are most often, most significantly, or most annoyingly wrong about?
    • I'm perhaps particularly interested in ways in which you think longtermists are often wrong about politics, policy, and/or institutional decision-making.
  4. What's an important way your own views/beliefs have changed recently?

(I'm perhaps most interested in your independent impression, before updating on others' views.)

IanDavidMoss @ 2021-03-08T18:53 (+5)

Great questions!

  1. I'm on record as believing that working on EA-style optimization within causes, even ones that don't rise to the top of the most important causes to work on, is EA work that should be recognized as such and welcomed into the community. I got a lot of pushback when I published that post over four years ago, although I've since seen a number of people make similar arguments. I think EA conventional wisdom sometimes sets up a rather unrealistic, black-and-white understanding of why other people engage in altruistic acts: it's either 100% altruistic, in which case it goes into the EA bucket and you should try to optimize it, or it's not altruistic at all, in which case it's out of scope for us and we don't need to talk about it. In reality, I think many people pursue both donations and careers out of a combination of altruistic and selfish factors, and finding ways to engage productively about increasing the impact of the altruism while respecting the boundaries put in place by self-interest is a relatively unexplored frontier for this community that has the potential to be very, very productive.
  2. This depends on whether you center your perspective on the EA community or not. There are lots of folks out there in the wider world trying to improve the functioning of institutions, but most of them aren't making any explicit attempt to prioritize among them beyond whether they are primarily mission- or profit-driven. In this respect, the EA community's drive to prioritize IIDM work based on opportunity to improve the world is quite novel and even a bit radical. On the EA side of things, however, I think there's not enough recognition of the value that comes from engaging with fellow travelers who have been doing this kind of work for a lot longer, just without the prioritization that EA brings to the table. IIDM is an incredibly interdisciplinary field, and one of the failure modes that I see a lot is that good ideas gain traction within a short period of time among some subset of the professional universe, and then get more or less confined to that subset over time. I think EA's version of IIDM is in danger of meeting the same fate if we don't very aggressively try to bridge across sectoral, country, and disciplinary boundaries where people are using different language to talk about/try to do the same kinds of things.
  3. My main discomfort with longtermism has long been that there's something that feels kind of imperialist, or at least foolish, about trying to determine outcomes for a far future that we know almost nothing about. Much of IIDM work involves trying to get explicit about one's uncertainty, but the forecasting literature suggests that we don't have a very good language or tools for precisely estimating very improbable events. To be clear, I have no issue with longtermist work that attacks "known unknowns" -- risks from AI, nuclear war, etc. are all pretty concrete even in the time horizon of our own lives. But if someone's case for the importance of something relies on imagining what life will be like more than a few generations from now, I'm generally going to be pretty skeptical that it's more valuable than bednets.
  4. My own career direction has shifted pretty radically over the past five years, and EA-style thinking has had a lot to do with that. Even though I stand by my position in point #1 that cause neutrality shouldn't be a prerequisite for engaging in EA, I have personally found that embracing cause neutrality was very empowering for me and I now wish I had done it sooner. It's something I hope to write more about in the future.
MichaelA @ 2021-03-09T01:49 (+2)

Thanks for these answers. I think I find your answer to Q2 particularly interesting. (FWIW, I also think I probably have a different perspective from yours re your answer to Q1, but I imagine any quick response from me would probably just rehash old debates.)

But if someone's case for the importance of something relies on imagining what life will be like more than a few generations from now, I'm generally going to be pretty skeptical that it's more valuable than bednets.

Would you include even cases that rely on things like believing there's a non-trivial chance of at least ~10 billion humans per generation for some specified number of generations, with a similar or greater average wellbeing than the current average wellbeing? Or cases that rely on a bunch of more specific features of the future, like what kind of political systems, technologies, and economic systems they'll have?

My main discomfort with longtermism has long been that there's something that feels kind of imperialist, or at least foolish, about trying to determine outcomes for a far future that we know almost nothing about. [...] To be clear, I have no issue with longtermist work that attacks "known unknowns" -- risks from AI, nuclear war, etc. are all pretty concrete even in the time horizon of our own lives.

How do you feel about longtermist work that specifically aims at one of the following?

  1. Identifying unknown unknowns - e.g. through horizon-scanning
  2. Setting ourselves up to be maximally robust to both known and unknown unknowns - e.g. through generically improving knowledge, improving decision-making, improving society's ability to adapt and coordinate (perhaps via things like improving global governance while preventing stable authoritarianism)
    1. I think efforts to ensure we can have a long reflection could be seen as part of this
  3. Improving our ability to do 1 and/or 2 - e.g., through improving our forecasting and scenario planning abilities
IanDavidMoss @ 2021-03-09T02:33 (+1)

Would you include even cases that rely on things like believing there's a non-trivial chance of at least ~10 billion humans per generation for some specified number of generations, with a similar or greater average wellbeing than the current average wellbeing? Or cases that rely on a bunch of more specific features of the future, like what kind of political systems, technologies, and economic systems they'll have?

My general intuition is that if there's a strong case that some action today is going to make a huge difference for humanity dozens or hundreds of generations into the future, that case is still going to be pretty strong if we limit our horizon to the next 100 years or so. Aside from technologies to prevent an asteroid from hitting the earth and similarly super-rare cataclysmic natural events, I'm hard pressed to think of examples of things that are obviously worth working on that don't meet that test. But I'm happy to be further educated on this subject.

How do you feel about longtermist work that specifically aims at one of the following?

Yeah, that sort of "anti-fragile" approach to longtermism strikes me as completely reasonable, and obviously it has clear connections to the IIDM cause area as well.

MichaelA @ 2021-03-09T03:47 (+4)

My general intuition is that if there's a strong case that some action today is going to make a huge difference for humanity dozens or hundreds of generations into the future, that case is still going to be pretty strong if we limit our horizon to the next 100 years or so.

I might be misunderstanding you here, so apologies if the rest of this comment is talking past you. But I think the really key point for me is simply that, the "larger" and "better" the future would be if we get things right,[1] the more important it is to get things right. (This also requires a few moral assumptions, e.g. that wellbeing matters equally whenever it happens.)

To take it to the extreme, if we knew with certainty that extinction was absolutely guaranteed in 100 years, then that massively reduces the value of reducing extinction risk before that time. On the other extreme, if we knew with certainty that if we reduce AI risk in the next 100 years, the future will last 1 trillion years, contain 1 trillion sentient creatures per year, and they will all be very happy, free, aesthetically stimulated, having interesting experiences, etc., then that makes reducing AI risk extremely important.

A similar point can also apply with negative futures. If there's a non-trivial chance that some risk would result in a net negative future, then knowing how long that will last, how many beings would be in it, and how negative it is for those beings is relevant to how bad that outcome would be.

Most of the benefits of avoiding extinction or other negative lock-ins accrue more than 100 years from now, whereas (I'd argue) most of the predictable benefits of things like bednet distribution accrue within the next 100 years. So the relative priority of the two broad intervention categories could depend on how "large" and "good" the future would be if we avoid negative lock-ins. And that depends on having at least some guesses about the world more than 100 years from now (though they could be low-confidence and big-picture, rather than anything very confident or precise).[2]
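
To make that concrete, a toy calculation (with entirely made-up inputs -- the argument doesn't depend on these particular values):

```python
# Back-of-the-envelope expected value of reducing extinction risk by
# delta_p, under assumptions about the future's size and quality.
# All inputs are illustrative placeholders, not estimates.

def value_of_risk_reduction(delta_p, years, beings_per_year, avg_wellbeing):
    """Expected welfare gained (arbitrary units) from a delta_p cut in
    extinction risk, if survival yields the given future."""
    return delta_p * years * beings_per_year * avg_wellbeing

# Horizon capped at 100 years vs. the long, large, happy future above:
capped = value_of_risk_reduction(0.001, 1e2, 1e10, 1.0)
long_run = value_of_risk_reduction(0.001, 1e12, 1e12, 1.0)
print(f"capped: {capped:.1e}, long-run: {long_run:.1e}")  # ~1e9 vs ~1e21
```

The gap between those two outputs is exactly the work the "larger and better future" premise is doing in the argument.
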

So I guess I'm wondering whether you're uncomfortable with, or inclined to dismiss, even those sorts of low-confidence, big-picture guesses, or just the more confident and precise guesses?

(Btw, I think the paper The Case for Strong Longtermism is very good, and it makes the sort of argument I'm making much more rigorously than I'm making it here, so that could be worth checking out.)

[1] If we're total utilitarians, we could perhaps interpret "larger" and "better" as a matter of how long civilization or whatever lasts, how many beings there are per unit of time during that period, and how high their average wellbeing is. But I think the same basic point stands given other precise views and operationalisations.

[2] Put another way, I think I do expect that most things that are top priorities for their impact >100 years from now will also be much better in terms of their impact in the next 100 years than random selfish uses of resources would be. (And this will tend to be because the risks might occur in the next 100 years, or because things that help us deal with the risks also help us deal with other things.) But I don't necessarily expect them to be better than things like bednet distribution, which have been selected specifically for their high near-term impact.

MichaelA @ 2021-03-05T06:27 (+8)

How do you decide what sorts of clients to seek out, agree to consult for, or position yourself to consult for in future?

E.g., would you ideally want to mostly work with clients who are fairly focused on typical EA cause areas and who seem fundamentally receptive towards prioritising well within those areas (even if they don't currently prioritise well)? Or would you aim to focus on a different type of client? Or do you not have strong preferences on that front?

IanDavidMoss @ 2021-03-08T19:12 (+3)

One of the realities of consulting is that, unless you get very lucky, you generally do have to be at least somewhat opportunistic in taking projects early on. I'm now in the fourth year of running my business and I'm able to be a lot pickier than I was when I first started, but if I limited my work only to clients focused on typical EA cause areas, I'd run out of clients pretty quickly. So I've cast my net quite a bit more broadly, which not only expands the opportunity set but also hedges against me getting typecast and positions me to be competitive/relevant in a wider range of professional networks, which I think is valuable for all sorts of reasons.

Another thing to keep in mind is that I've found that having clients that look great on paper doesn't always mean that you are able to achieve a lot of impact with them. Some of my most successful projects have been with clients that did smaller-scale work or were less sophisticated in their approach, because they knew they needed guidance from an outside expert and were willing to cede a lot of authority and creative input to me as part of the process. When you're really trying to innovate and move the field forward, it helps a lot to have clients like these because they aren't anchored on the usual ways of doing things, which makes them more open to trying out ideas. A lot of the sales process for consulting comes down to reassurance that someone else has done this thing and it worked out great for them, so getting those first few case studies locked down can be really important.

MichaelA @ 2021-03-09T01:35 (+2)

Thanks for the answer :)

So it sounds like part of your theory of change for your work is getting opportunities to test out innovations related to better decision-making practices and generating examples of these working, in order to inform other efforts to improve decision-making which are perhaps higher stakes in a direct sense?

IanDavidMoss @ 2021-03-09T02:07 (+3)

A part of it, definitely. At the same time, there are other projects that may not offer much opportunity for innovation but where I still feel I can make a difference because I happen to be good at the thing they want me to do. So a more complete answer to your original question is that I choose and seek out projects based on a matrix of factors including the scale/scope of impact, how likely I am to get the gig, how much of an advantage I think working with me would offer them over whatever the replacement or alternative would be, how much it would pay, the level of intrinsic interest I have in the work, how much I would learn from doing it, and how well it positions me for future opportunities I care about.

Vicky Clayton @ 2021-03-02T20:47 (+8)

What are the frameworks you find most helpful in your work supporting clients with their decision-making?

IanDavidMoss @ 2021-03-08T19:52 (+1)

Besides theory of change, which tessa mentioned, I've found myself increasingly focusing on the "front end" of decision-making rather than on very detailed tools for choosing among defined alternatives, because in my experience leaders and teams generally need help putting more structure around their decision-making process before they can engage productively with such methods.

One innovation I've been working on is a tool called the decision inventory, which is a way for clients to get a sense of the landscape of decisions facing them and prioritize among those decisions. It's a much more intuitive exercise and can be done much more quickly than a formal decision analysis or cost-benefit model, so it lends itself well to introducing the concepts and building buy-in among a team to do this kind of work. It can be especially helpful for teams because different team members have different views of the decision landscape and different ideas about which decisions are important and why, so activating that collective intelligence can be educational for leaders.
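
To give a flavor, here is a purely hypothetical sketch -- the fields, entries, and scoring below are illustrative, not the actual tool:

```python
# Hypothetical decision inventory: list upcoming decisions, rate each on a
# few intuitive 1-5 dimensions, and sort to see where structured decision
# support is worth the effort. Entries and the scoring rule are made up.

decisions = [
    {"name": "Renew flagship grant program", "stakes": 5, "urgency": 3, "reversibility": 2},
    {"name": "Choose CRM vendor",            "stakes": 2, "urgency": 4, "reversibility": 4},
    {"name": "Open a second office",         "stakes": 4, "urgency": 1, "reversibility": 1},
]

def priority(d):
    # Higher stakes, more urgency, and harder-to-reverse choices (low
    # reversibility) all argue for more deliberate treatment.
    return d["stakes"] + d["urgency"] + (6 - d["reversibility"])

for d in sorted(decisions, key=priority, reverse=True):
    print(f'{priority(d):2d}  {d["name"]}')
```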

MichaelA @ 2021-03-05T06:29 (+4)

I previously asked a batch of questions which I'd be interested to hear your take on:

  1. Do many research organisations (within or outside of EA) make theory of change (ToC) diagrams? If not, why not?
  2. Do many research orgs make ToC diagrams, but not make them publicly accessible? If so, why?
  3. Should more research orgs make ToC diagrams? Why or why not?
  4. Should more research orgs make their ToC diagrams publicly accessible? Why or why not?

(My own tentative answers, and those of a bunch of other people, can be seen at the linked post.)

MichaelA @ 2021-03-05T06:26 (+4)

What are the main careers paths/options you considered (or are considering) as alternatives to your current path? What led you to choose your current path rather than those alternatives?

jared_m @ 2021-03-04T01:42 (+4)

If you were advising someone five years behind you, but on a somewhat similar track (a MBA type leaving a  senior role at a mission-driven organization to become an independent consultant), what would your top pieces of advice be re: 

Thank you!

IanDavidMoss @ 2021-03-08T19:36 (+4)
  1. You should hope that the transition will be painless, but prepare for it to be really, really hard just in case. I definitely recommend starting out with at least 6-9 months of runway for basic living expenses so that you can manage stress about being able to support yourself. It also helps if you can have one or two client engagements lined up before you actually make the jump. In retrospect, I did this transition in Hard Mode by switching cause area focus at the same time as I went from being an employee to an entrepreneur, which necessitated essentially rebuilding my network from scratch. Don't do this. If you do want to make a career switch, you'll have a much easier time if you get another job in your preferred area first and then go independent after that.
  2. One thing I've learned since I started is that client work is itself the best business development. There's really no comparison between a pitch and a referral -- the latter is dramatically more effective in making the case. Another tip is that you can create a lot of opportunities by doing the legwork of chasing obscure RFPs for projects you want to do but are not really qualified for, and then approaching other consultants (who are qualified for them) to ask if they want to partner with you on a bid. That way you get to know that firm and you gain relevant experience if your team wins the contract.
  3. This really threw me for a loop my first few years. The money is one thing, but being under-utilized for a while can also be really bad for your sense of self-worth -- and scrambling to meet a million deadlines obviously has its downsides as well. I've generally found the valleys to be more challenging to manage than the peaks, as very few of my projects are so time-sensitive that pushing off a deadline here or there is going to cause a catastrophe. I've found it helpful to maintain an active learning and writing practice as part of my portfolio of activities that can expand or contract to meet the moment. These are things I want to do anyway, and so if I find I have extra time to do them it's almost a blessing rather than something to be bummed about.
tamgent @ 2021-03-05T14:35 (+3)

What are the major risks or downsides that may occur, accidentally or otherwise, from efforts to improve institutional decision-making? 

How concerned are you about these (how likely do you think they are, and how bad would they be if they happened)?

IanDavidMoss @ 2021-03-18T17:34 (+2)

As part of the working group's activities this year, we're currently in the process of developing a prioritization framework for selecting institutions to engage with. In the course of setting up that framework, we realized that the traditional Importance/Tractability/Neglectedness schematic doesn't really have an explicit consideration for downside risk. So we've added that in the context of what it would look like to engage with an institution. With the caveat that this is still in development, here are some mechanisms we've come up with by which an intervention to improve decision-making could cause more harm than good:

  • The involvement of people from our community in a strategy to improve an institution's decision-making reduces the chances of that strategy succeeding, or its positive impact if it does succeed
    • (This seems most likely to be a reputation/optics effect, e.g. for whatever reason we are not credible messengers for the strategy or bring controversy to the effort where it didn't exist before. It will be most relevant where there is already capacity among other stakeholders or players in the system to make a change, such that there is something to lose by our getting involved.)
  • The strategy selected leads to worse outcomes than the status quo due to poor implementation or an incomplete understanding of its full implications for the organization
    • (One way I've seen this go wrong is with reforms intended to increase the amount of information available to decision-makers at the expense of some ongoing investment of time. Often, there is insufficient attention put toward ensuring use of the additional information, with the result that the benefits of the reform aren't realized but the cost in time is still there.)
  • A failed attempt to execute on a particular strategy at the next available opportunity crowds out what would otherwise be a more successful strategy in the near future
    • (This one could go either way; sometimes it takes several attempts to get something done and previous pushes help to lay the groundwork for future efforts rather than crowding them out. However, there are definitely cases where a particularly bad execution of a strategy can poison critical relationships or feed into a damaging counter-narrative that then makes future efforts more difficult.)
  • The strategy succeeds in improving decision quality at that particular institution, but it doesn't actually improve world outcomes because of insufficient altruistic intent on the part of the institution
    • (We do define this sort of value alignment as a component of decision quality, but since it's only one element it would theoretically be possible to engage in a way that solely focuses on the technical aspects of decision-making, only to see the improved capability directed toward actions that cause global net harm even if they are good for some of the institution's stakeholders. I think that there's a lot our community can do in practice to mitigate this risk, but in some contexts it will loom large.)

I think all of these risks are very real but also ultimately manageable. The most important way to mitigate them is to approach engagement opportunities carefully and, where possible, in collaboration with people who have a strong understanding of the institutions and/or individual decision-makers within them.
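
With the same still-in-development caveat, here is a hypothetical sketch of how a downside-risk term might extend an ITN-style score; the scales and the functional form are illustrative assumptions, not the working group's actual framework:

```python
# Hypothetical ITN-plus-downside-risk score for selecting institutions to
# engage with. All scales (1-5) and the multiplicative form are assumed.

def engagement_score(importance, tractability, neglectedness, downside_risk):
    """downside_risk runs from 1 (benign) to 5 (severe risk of net harm),
    entered as a penalty so riskier engagements need more upside to rank."""
    return importance * tractability * neglectedness / downside_risk

print(engagement_score(5, 3, 4, 2))  # 30.0
print(engagement_score(5, 4, 4, 5))  # 16.0 -- more tractable, but risk dominates
```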

BrianTan @ 2021-03-05T02:11 (+3)
  1. What are the 4-10 books you've found most helpful to your career as a strategy and research consultant, or about your knowledge on improving institutional decision-making?
  2. What are other resources (aside from books) that you've also found most helpful to your career?
JamesOz @ 2021-03-03T10:00 (+2)
  1. What was your journey to becoming a strategy consultant from an arts administrator?
  2. What would you advise someone early in their career who wants to move into improving institutional decision-making and/or strategy consulting?
JamesOz @ 2021-03-03T09:52 (+2)

In your opinion, how much of a factor in making good decisions is the actual process vs a healthy team culture and psychological safety amongst team members to challenge others/take risks?

JamesOz @ 2021-03-03T08:58 (+2)
  1. What are your thoughts (or experience, if any) regarding self-managing organisational frameworks such as Holacracy, where the structure is less hierarchical and decisions tend to be made predominantly by elected individuals or teams using consent-based decision-making?
  2. In the same vein, have you noticed a difference in the quality/style of decisions between very hierarchical organisations, such as a government, and ones that are slightly less so, such as an NGO?

tamgent @ 2021-03-05T14:34 (+1)

On what timescales do you see most of the impact from improving institutional decision-making starting to kick in, and what does the growth function look like to you?