Building Blocks of Utility Maximization

By NunoSempere @ 2021-09-20T17:23 (+21)

Introduction

Suppose that at every point in time, we take the action $a^*$ given by:

$$a^* = \underset{a \in A}{\operatorname{argmax}}\ E\left[U(\text{World} \mid a,\ \text{Observations})\right]$$

That is, we want to choose the action ($a$) in the set of possible actions ($A$) which maximizes ($\operatorname{argmax}$) the expected ($E$) utility ($U$) in the world given that action ($\text{World} \mid a$) and given all our observations and models about the world ($\text{Observations}$).

In the next sections, I will give a brief example, analyze each of the parts in some detail as they relate to altruism, flesh them out, and then point out where I think some EA organizations and I fall according to this model.

I hesitated for a long while about posting this piece, because I thought that it might be perceived as too basic or unsophisticated, and because I'd been working on a related but much more complicated model. And indeed, the below model is basic. However, I've found that it does contribute to my clarity of thought, which I think is valuable.

A brief example

If your utility function $U$ is “eat as much ice cream as possible”, then at every point you’d want to choose the action $a$ among the set of possible actions $A$ available to you (buy ice cream, invest in the stock market, work to get more money, etc.) which leads to the most ice cream eaten by you, given all you know about the world.
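As a minimal sketch of this choice rule (the actions, probabilities, and ice-cream payoffs below are invented purely for illustration):

```python
# Toy illustration of argmax_{a in A} E[U(World | a)]:
# actions, outcomes, and probabilities are made up.

actions = {
    # action: list of (probability, scoops of ice cream eventually eaten)
    "buy ice cream now": [(1.0, 2)],
    "invest in the stock market": [(0.5, 6), (0.5, 1)],
    "work to get more money": [(0.9, 4), (0.1, 0)],
}

def expected_utility(outcomes):
    """Expected scoops of ice cream, given (probability, scoops) pairs."""
    return sum(p * scoops for p, scoops in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action, expected_utility(actions[best_action]))  # "work to get more money", 3.6
```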

The moving parts in our model, and what it means to optimize them, are:

Building Blocks

The choice function (originally $\operatorname{argmax}$)

So you have something like a landscape of the expected value of actions, and you want to find and choose the highest point. Some ways in which you can improve your ability to do this:

Consider an organization like GiveWell. GiveWell could estimate the value of any charity. But doing so is costly, so it can't just evaluate every charity and choose the best ones. This leads to interesting exploration-exploitation tradeoffs, even if its evaluations of the expected value of any particular charity were perfect (!).
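A minimal sketch of that tradeoff, under the assumption that each charity is a noisy "arm" whose value is only learned through costly evaluations; the charities, their hidden values, and the UCB1 rule are illustrative choices, not GiveWell's actual process:

```python
import math
import random

# Each "evaluation" of a charity returns a noisy estimate of its true (hidden)
# value, and evaluations are costly, so only a limited number can be afforded.
# UCB1 balances re-evaluating charities that look good against trying
# under-evaluated ones.

random.seed(0)
true_values = {"Charity A": 1.0, "Charity B": 2.5, "Charity C": 2.4}  # hidden
counts = {c: 0 for c in true_values}
means = {c: 0.0 for c in true_values}

def evaluate(charity):
    """One costly, noisy evaluation of a charity's value."""
    return true_values[charity] + random.gauss(0, 1)

budget = 60
for t in range(1, budget + 1):
    unseen = [c for c in counts if counts[c] == 0]
    if unseen:
        choice = unseen[0]  # evaluate each charity at least once
    else:
        choice = max(counts, key=lambda c: means[c] + math.sqrt(2 * math.log(t) / counts[c]))
    value = evaluate(choice)
    counts[choice] += 1
    means[choice] += (value - means[choice]) / counts[choice]

print(counts)  # most of the budget should go to the top one or two charities
print(means)
```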

Here, better parametrizations are particularly helpful. By parametrizations, I mean something like dividing the problem into parts which can be considered in isolation. For example, GiveWell could divide charities into various cause areas, and evaluate swathes of causes (e.g., rare diseases) all at once.

Good parametrizations could lead to efficiency gains, while worse parametrizations could lead to confused results. For example, one might feel aversion towards "politics" in general—thinking that it is generally toxic—and as a result discount "better voting mechanisms" as a cause. But perhaps a more fine-grained parametrization would have made a distinction between "ideological or party politics" and "all other politics", and realized that "better voting mechanisms" falls into the second bucket.

With regards to fundamentals, one would want to make sure that one is maximizing over the right thing. For example, one would want to make sure that one isn't triple-counting impact, and to avoid this one might want to maximize over Shapley values instead of over counterfactual values. Similarly, one might want to take into account that one is maximizing over an estimate, and adjust for the optimizer's curse.
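As a toy illustration of the Shapley-value point (the funders and coalition values below are invented): when two funders are both necessary for a project worth 10, counterfactual reasoning credits each with the full 10, so the credit sums to 20, while Shapley values split the credit so it sums to the total.

```python
from itertools import permutations

# Toy Shapley-value computation for two funders who are both necessary
# for a project worth 10 (the coalition values are made up).
players = ["funder_1", "funder_2"]
value = {frozenset(): 0, frozenset({"funder_1"}): 0,
         frozenset({"funder_2"}): 0, frozenset(players): 10}

def shapley(player):
    """Average marginal contribution of `player` over all join orders."""
    orders = list(permutations(players))
    total = 0.0
    for order in orders:
        before = frozenset(order[:order.index(player)])
        total += value[before | {player}] - value[before]
    return total / len(orders)

counterfactual = {p: value[frozenset(players)] - value[frozenset(players) - {p}] for p in players}
print(counterfactual)                    # {'funder_1': 10, 'funder_2': 10} -> sums to 20
print({p: shapley(p) for p in players})  # {'funder_1': 5.0, 'funder_2': 5.0} -> sums to 10
```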

Many of these points could also belong in the next section, estimating the utility of actions, or the consequences of actions in general.

the expected ($E$)

In general, to get better predictions (or more accurate expectations), one can either:

Various forecasting platforms (such as Metaculus, Hypermind, PredictIt, etc.) provide forecasting capabilities. Robust randomized trials can generate conclusions (and thus predictions) that span longer time periods, and scholarly works, such as the regressions from Acemoglu and Robinson, could provide conclusions that last many generations (though they are not immune to criticism [1]).

However, our current forecasting capabilities feel insufficient, particularly because they don't allow for cheap, reliable, longer-term predictions. Some open questions in the area are:

It also feels like there hasn't been much work in forecasting the value of individual actions, projects, or the promisingness of research directions, in such a way that forecasts could be action-guiding. 

Note also that forecasts normally require some sort of evaluation or resolution at the end in order for forecasters to be rewarded. This means that as evaluation capabilities increase, so do forecasting capabilities, because anything that can be evaluated could be forecasted in advance.
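One common mechanism for such rewards is a proper scoring rule; here is a minimal sketch using the Brier score (the forecasts below are made up):

```python
def brier_score(forecast_prob, outcome):
    """Brier score for a binary question: lower is better.
    forecast_prob is the stated probability of the event; outcome is 1 if it happened, else 0."""
    return (forecast_prob - outcome) ** 2

# Two hypothetical forecasters on the same question, which resolved "yes" (1).
print(brier_score(0.8, 1))  # 0.04 -> rewarded for a confident, correct forecast
print(brier_score(0.3, 1))  # 0.49 -> penalized
```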

utility ($U$)

Advances related to utility functions might be:

throughout time

Consider that the utility of an action $a$ can be expressed as

$$U(a) = \sum_{i=0}^{\infty} \delta^i \, U_i(a)$$

where $U_i(a)$ corresponds to the additional utility during year $i$, and $\delta$ is a discount factor, which could correspond to the probability of value drift, the probability of expropriation, the probability of existential risk, irrational bias, or intrinsically caring less about future people and events. Parts of that discount factor might be unavoidable (e.g., the unavoidable probability of a physically unlikely catastrophe, or the practically unavoidable risk of expropriation), but the rest could likely be reduced, which would increase the overall utility.

Once one considers a time dimension, coordination throughout time becomes an additional point of optimization. 

Incidentally, note that because the expected value is additive:

$$E[U(a)] = \sum_{i=0}^{\infty} \delta^i \, E[U_i(a)]$$

which could be a useful decomposition in terms of forecasting, because forecasting systems could forecast the additional expected value of an action for each year, and said predictions could be evaluated year by year.
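A minimal sketch of this decomposition, with invented per-year forecasts and an invented discount factor:

```python
# E[U(a)] = sum_i delta^i * E[U_i(a)], with made-up numbers.
yearly_expected_utility = [3.0, 2.5, 2.0, 1.5, 1.0]  # forecasts of E[U_i(a)] for years 0..4
delta = 0.95  # discount factor (value drift, expropriation, existential risk, ...)

expected_utility = sum(delta ** i * u_i for i, u_i in enumerate(yearly_expected_utility))
print(expected_utility)

# Each yearly forecast E[U_i(a)] can be resolved and scored at the end of year i,
# rather than waiting for the whole stream of utility to play out.
```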

of actions ($A$)

Various ways of improving the set of actions ($A$) available to oneself might be:

¿taken by agents?

In the previous section, I added people kind of as an after-thought. We could make our model more elaborate by having

$$\underset{\vec{a} \in \vec{A}}{\operatorname{argmax}}\ E\left[U(\text{World} \mid \vec{a},\ \text{Observations})\right]$$

where $\vec{a}$ is now a vector of actions, with one index for each person (i.e., $a_i$ denotes an action which could be taken by the $i$-th person, and $A_i$ denotes the set of actions which the $i$-th person could take). Writing $\vec{a} = (a_1, \ldots, a_n)$ and $\vec{A} = A_1 \times \ldots \times A_n$, we could have:

$$\underset{(a_1, \ldots, a_n) \in A_1 \times \ldots \times A_n}{\operatorname{argmax}}\ E\left[U(\text{World} \mid a_1, \ldots, a_n,\ \text{Observations})\right]$$
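A toy sketch of this vector version, with two hypothetical agents and made-up utilities, brute-forcing the joint action space:

```python
from itertools import product

# Joint optimization over a vector of actions, one per agent (utilities are invented).
actions_per_agent = {
    "agent_1": ["do research", "earn to give"],
    "agent_2": ["do research", "run operations"],
}

def expected_utility(action_vector):
    """Made-up joint utility: complementary roles are worth more than duplicated ones."""
    joint = dict(zip(actions_per_agent, action_vector))
    if joint["agent_1"] == "do research" and joint["agent_2"] == "run operations":
        return 10
    if joint["agent_1"] == "earn to give" and joint["agent_2"] == "do research":
        return 8
    return 5

joint_space = product(*actions_per_agent.values())  # A_1 x A_2
best = max(joint_space, key=expected_utility)
print(best, expected_utility(best))  # ('do research', 'run operations') 10
```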

This would open new avenues of optimization:

But perhaps not all actions are carried out by human agents. For example, large bureaucracies, ideologies or nations could be modeled as having their own sets of actions at their disposal. This could be elaborated further, and relates to the "improving institutional decision making" cause.

given your knowledge of the world ($\text{Observations}$)

Previously, I was considering forecasting as the art of maximizing accuracy holding information about the world constant. But one can also improve one's grasp of the state of the world, and have more information with which to make better forecasts.

One particularly useful type of knowledge about the world is a good categorization scheme or parametrization, which allows you to group different things together and evaluate their characteristics at the same time, and thus more easily optimize over a set of options.

Where EA organizations fall in this scheme

There isn't a clear mapping between EA organizations and the parts of this scheme, but overall:

  1. Taking object-level optimal actions: Individual EAs, Good Ventures, object-level EA organizations like the Against Malaria Foundation, Wave, etc.
  2. Estimating the expected value of actions: GiveWell, 80,000 hours, Animal Charity Evaluators, Open Philanthropy, SoGive, EA Funds, etc.
  3. Attaining clarity about one's values: Global Priorities Institute, Forethought Foundation, Rethink Priorities, Happier Lives Institute, etc.
  4. Fine-tuning agents:
    • More agents: EA local groups.
    • More coordinated agents: CEA (??)
    • More altruistic agents: Founders Pledge, Raising for Effective Giving, Giving What We Can.
    • More rational agents: CFAR, ClearerThinking.
  5. Improving models of the world: Our World in Data, Metaculus, Open Philanthropy, Rethink Priorities, J-PAL, IDInsight, etc.

Each of these points then has various meta-levels. Or, in other words, these can be stacked. For example, one can try to [estimate the expected value] of [more agents] (e.g., the expected value of an additional Giving What We Can pledge), or one can [recruit more agents] in order [to have better models] about [expected value estimates] about [object-level actions] (e.g., by running a forecasting tournament about Open Philanthropy grants).

I see QURI as mostly working on the meta-level of 2. and 5. And I see myself as working on 2., 3. and 5., and maximally away from 4.

Conclusion

Intuitively, the EA community would want to invest in all of these "building blocks", because each of them probably has diminishing returns. For instance, as one gains influence over more and more rational agents, clarity about one's utility function becomes more valuable in comparison. [2]


[1]:  Despite criticisms, I do think that there is some core to those studies. For instance, the results of The Persistent Effects of Peru's Mining "Mita" seem relatively robust: the paper looks at extractive institutions which for bureaucratic reasons changed discretely at a geographic boundary: "on one side, all communities sent the same percentage of their population, while on the other side, all communities were exempt."

[2]: It also seems to me that considering the optimal distribution of talent and resources among these building blocks is probably more important than considering which has the highest marginal value at any given moment. 

In theory, both approaches should be equivalent—always directing resources to the block with the highest marginal value should lead to the optimal allocation, in which all marginal values are equal. 
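As a toy illustration of that equivalence (the diminishing-returns curves below are invented): greedily giving each unit of resources to whichever block has the highest marginal value drives the marginal values towards equality.

```python
import math

# Toy allocation: each building block has diminishing returns (invented sqrt-shaped curves).
scale = {"object-level actions": 9.0, "EV estimation": 4.0, "better models": 6.0}
allocation = {b: 0 for b in scale}

def marginal_value(block):
    """Value of the next unit of resources, given the current allocation."""
    x = allocation[block]
    return scale[block] * (math.sqrt(x + 1) - math.sqrt(x))

for _ in range(100):  # allocate 100 units greedily
    best = max(scale, key=marginal_value)
    allocation[best] += 1

print(allocation)
print({b: round(marginal_value(b), 3) for b in scale})  # marginal values end up roughly equal
```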

But in practice, I imagine that coordination is difficult and includes some noise, and external shocks mean that knowing which block has the highest marginal value provides less information than one might think.


Misha_Yagudin @ 2021-09-21T08:35 (+5)

re: footnote 1

The paper The Standard Errors of Persistence, which you cite as a criticism, says the following about the robustness of the Peruvian study:

This study examines differences in household consumption and child stunting on either side of Peru’s Mita boundary. It finds that areas which traditionally had to provide conscripted mine labour have household consumption almost 30 per cent lower than on the other side of the boundary. We examine the regression in column 1 of Table 2, which compares equivalent household consumption in a hundred kilometre strip on either side of the boundary with controls for distance to the boundary, elevation, slope and household characteristics. The variable of interest is a dummy for being inside the boundary. We examine here how well the regression explains arbitrary patterns of consumption generated as spatial noise. To do this we take the locations where households live and simulate consumption levels based on median consumption at the points. The original study found a 28 per cent difference in consumption levels across the historic boundary. If we normalize the noise variables to have the same mean and standard deviation as the original consumption data, we get a difference of at least 28 per cent (positive or negative) in 70 per cent of cases.

What do you think of that? In general, it seems that your justification for relative robustness doesn't engage with the critiques at all. My understanding of their major point is that spatial autocorrelations of residuals are unaccounted for and might make noise look significant. The simpler example of a common spurious relationship was, AFAIK, first described in Spurious regressions in econometrics (see this decent-looking blogpost for relevant intuitions).
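A toy illustration of that spurious-regression point (not the paper's actual procedure): regressing strongly autocorrelated noise on a "boundary" dummy yields |t| > 2 far more often than the nominal ~5%.

```python
import numpy as np

# Regress noise on a "boundary" dummy over a line of locations. With independent
# noise, |t| > 2 should happen ~5% of the time; with a random walk along the
# locations (strong spatial autocorrelation), it happens far more often.
rng = np.random.default_rng(0)
n, n_sims = 200, 2000
boundary_dummy = (np.arange(n) >= n // 2).astype(float)  # "inside the boundary"

def t_stat(y, x):
    """OLS t-statistic on the slope of y ~ 1 + x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

iid_hits = sum(abs(t_stat(rng.normal(size=n), boundary_dummy)) > 2 for _ in range(n_sims))
walk_hits = sum(abs(t_stat(np.cumsum(rng.normal(size=n)), boundary_dummy)) > 2 for _ in range(n_sims))
print("iid noise:        ", iid_hits / n_sims)   # ~0.05
print("random-walk noise:", walk_hits / n_sims)  # much larger
```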

NunoSempere @ 2021-09-21T11:54 (+3)

Note that per Table A1...A3, the authors replace the explanatory variable with noise in every study except in the Mita study, for which they only make their point for the dependent variable. Also, the Mita study isn't present in Figure 8. Not sure why that is.

spatial autocorrelations of residuals are unaccounted for and might make noise look significant

So I sort of understand this point, but not enough to understand if the construction of the noise makes sense. 

In any case, yeah, it looks like it was less robust than I thought.

Emrik @ 2022-11-05T18:04 (+4)

This is good stuff!

  1. I really like your way of framing abstractions as "parametrizations" of the choice function. Another way to think of this is that you want your ontology of things in the world to consist of abstractions with loose coupling.
  2. For example:
    1. Let's say you're considering eating something, and you have both "eating an apple" and "eating a blueberry muffin" as options. 
    2. Also assume that you don't have a class for "food" that includes a reference to "satiation" such that "if satiated, then food is low expected utility". Instead, that rule is encoded into every class of food separately.
    3. Then you'd have to run both "eating an apple" and "eating a blueberry muffin" through the choice function separately in order to figure out that they are low EV. If instead you had a reasonable abstraction for "food", you could just run the choice function once and not have to bother evaluating subclasses (see the sketch after this list).
  3. Not only does loose coupling help with efficient computation, it also helps with increasing modularity and thereby reducing design debt.
    1. If base-level abstractions are loosely connected, then even if you build your model of the world on top of them, they still have a limited number of dependencies to other abstractions.
    2. Thus, if one of the base-level abstractions has a flaw, you can switch it out without having to refactor large parts of your entire model of the world. 
  4. A loosely coupled ontology also allows for further specialisation of each abstraction, without having to pay costs of compromise for when abstractions have to serve many different functions.
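A minimal sketch of the food example from point 2 (the class names and utilities are invented): the "if satiated, then food is low expected utility" rule lives once in the Food abstraction, so it can be changed in one place and applies to every kind of food at once.

```python
# The satiation rule is encoded once, in the Food abstraction,
# rather than separately in every kind of food.

class Option:
    def expected_utility(self, satiated: bool) -> float:
        raise NotImplementedError

class Food(Option):
    base_utility = 1.0
    def expected_utility(self, satiated: bool) -> float:
        # The single place where the "satiated -> low EV" rule lives.
        return 0.0 if satiated else self.base_utility

class Apple(Food):
    base_utility = 1.0

class BlueberryMuffin(Food):
    base_utility = 1.5

class ReadABook(Option):
    def expected_utility(self, satiated: bool) -> float:
        return 1.2

options = [Apple(), BlueberryMuffin(), ReadABook()]
satiated = True
best = max(options, key=lambda o: o.expected_utility(satiated))
print(type(best).__name__)  # ReadABook: one rule in Food handled both food subclasses
```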