Contribution-Adjusted Utility Maximization Funds: An Early Proposal

By Ozzie Gooen @ 2021-08-03T23:01 (+14)

Epistemic status: Early. I’m pretty unsure about this. However, I think the potential value is quite high, and there are likely to be other promising ideas in this broad area. I very much encourage further thought and new ideas in this area. 

Research status: Messy. Think of this a bit like a draft. I'd love to attempt a much more thorough take on possibilities in this area, but that would take much more time.

Scholarship status: I feel like there must be research into the main ideas of this somewhere in Economics, particularly around Mechanism Design, but I haven’t spent much time searching. Links and similar appreciated.

Thanks to Edo Arad for comments.


Overview

Right now in Effective Altruism, we have a few donor funds with particular focus areas. In this post I propose a new type of fund that’s instead focused on maximizing the combined utility functions of its particular donors. The fund goals would be something like, “Maximize the combined utility of our donors, adjusted for donation amount, in any way possible, abiding by legal and moral standards.” I think that this sort of fund structure is highly theoretical at this point, but it could meet some particular wants that aren’t currently being met.

For this document, I call these funds "Contribution-Adjusted Utility Maximization Funds", or CAUMFs. This name is intentionally long; this idea is early, and I don't want to pollute the collective namespace.

This fund type has two purposes.

  1. It’s often useful for individuals to coordinate on non-charitable activities. For example, funding research into the best COVID risk measures for a particular community to use.
  2. These funds should help make it very clear that donations will be marginally valuable for the preferences of the donor. Therefore, donating to these funds should be safe on the margin. Hopefully this would result in more total donations.

You can picture these funds as somewhere between bespoke nonprofit advising institutions, cooperatives, and small governments. If AI and decision automation could cut down on labor costs, related organizations might eventually be much more exciting.

I could see one golden rule being particularly valuable to satisfy:

On the margin, any money donated will either produce more value (in expectation) according to the donor’s preferences than the counterfactual, or eventually be returned to the donor, with interest similar to what the donor would have earned otherwise.
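
One rough way to formalize this rule, using my own notation rather than anything from the post itself: for each donor $i$, every marginal dollar the fund spends should satisfy

$$\mathbb{E}\big[\Delta U_i(\text{fund's use of the dollar})\big] \;\ge\; \mathbb{E}\big[\Delta U_i(\text{donor's counterfactual use of that dollar})\big],$$

and any balance for which this can’t be achieved is eventually returned as $(1 + r)$ times the unspent amount, where $r$ approximates the return the donor would have earned elsewhere.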

A simple version could look something like the following (a rough code sketch follows the list):

  1. A donor makes a donation. Say, $10k.
  2. The fund collects information from the donor. Maybe they take a survey, have an interview, set up a communication channel (Slack, email, etc), or something else.
  3. The fund then has one year to spend that $10k in ways that would be valuable for the donor. This could either mean paying for work that will only benefit that client’s preferences, or pooling that money with arrangements from other donors to fund things that better maximize their combined preferences. Some of the money will be used for the fund’s expenses, in rough proportion to the work spent on it. Donors will continually be pinged to learn about their preferences in areas the fund is considering investing in, but ultimately the fund decides when and where to spend money, for the sake of expediency.
  4. At the end of the year, any remaining money is either returned or re-invested, at the choice of the donor.
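
To make the cycle concrete, here is a minimal sketch in Python of what a single donor account might look like under these rules. Everything here (the class, the 2% interest assumption, the example numbers) is hypothetical and only illustrates the golden rule applied to one account; a real fund would need far more nuance around estimating value.

```python
from dataclasses import dataclass, field


@dataclass
class DonorAccount:
    """One donor's balance for a single one-year cycle (illustrative only)."""
    name: str
    balance: float               # remaining donation for this cycle
    interest_rate: float = 0.02  # assumed return the donor could earn elsewhere
    spent: list = field(default_factory=list)

    def propose_spend(self, description: str, cost: float,
                      expected_value_to_donor: float) -> bool:
        """Spend only if the outlay beats the donor simply keeping the cash
        (the 'golden rule' above)."""
        if cost > self.balance:
            return False
        if expected_value_to_donor <= cost * (1 + self.interest_rate):
            return False
        self.balance -= cost
        self.spent.append((description, cost))
        return True

    def close_year(self) -> float:
        """At year's end, return whatever is left, plus interest."""
        refund = self.balance * (1 + self.interest_rate)
        self.balance = 0.0
        return refund


account = DonorAccount("donor_1", balance=10_000)
account.propose_spend("Blogger grant", 5_000, expected_value_to_donor=8_000)
print(f"Refund at year end: ${account.close_year():,.2f}")  # $5,100.00
```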

These funds have the interesting property that they will allocate resources in fairly strict proportion to the allocation of funding between donors. Donors that give heavily will be correspondingly emphasized. Unlike community-specific funds, they will cater to whichever clusters wind up donating money.

A lot of charities do weigh the preferences of their larger donors more than those of the smaller donors, but often this difference is quite nebulous. Many charities try not to take donor preferences into account at all. (This is often seen as a morally gray area, normally with good reason.) My impression is that the main existing EA funds try to minimize the impact of the “idiosyncratic” preferences of the donors, in favor of maximizing only their fund-specific goals.

I would expect and hope that CAUMFs wouldn’t compete with charities, but instead work with them. They should coordinate with charities that their donors either value or are likely to value on reflection, then donate so as to best satisfy donor preferences. When distributing charitable funding, these funds would act as middlemen. It's probably generally better to think of CAUMFs as community funds than as donation targets.

Theoretical Example

To give an incredibly simple example, imagine there are two agents, A and B, and a list of possible interventions 1-4. The U(n) scores represent the utility that each agent would get under each condition.

If only agent A bought into the fund, with, say, $400, then the fund would purchase intervention (1). This would cost $300 but deliver $500 of value. If this were the only purchase over the year, the last $100 would be returned. So this transaction would net A $200 of value.

However, if agent B also joins the fund with $400, then their money would be pooled on (2), causing a surplus of $300 in value.

The rules mean that at any point in time, it’s in either party’s interest to invest in the pool, and also that they can reap the rewards of coordination. Without such a pool, it would have required effortful coordination to choose (2); this way, it happens more organically.

On the specific question of how the costs of intervention 2 should be split, a few options emerge. Some things to consider:

  1. A gets more utility for it than B, so perhaps they should pay more.
  2. There’s still $200 total left between the two of them. By purchasing (2) and (4), total utility would be maximized.
  3. It would be better for A to go with only intervention 1, while it would be better in total to go with 2 and 4.

If this were a regular negotiation, A might get B to pay $300 for intervention 2, meaning each would have $100 remaining. However, if they essentially precommit when donating to the fund, and the fund followed a total utility maximization procedure (after obeying the primary decision rule), then it might choose options 2 and 4.

Importantly, the expected costs of donating to these funds should be very low. If money is given back, and only spent when net-positive, then this should be a relatively low-risk option for these agents. (This assumes that they have the extra cash, and that the money would earn a similar amount of interest here to what it would earn otherwise.)
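
The selection step can also be sketched in code. The numbers below are made-up placeholders chosen to be roughly consistent with the prose above (the original utility table isn't shown here); the point is the selection logic, maximizing total utility subject to the pooled budget and the golden rule, not the specific figures.

```python
from itertools import combinations

# Hypothetical numbers, roughly consistent with the example above:
# each intervention has a cost and a utility for each donor.
INTERVENTIONS = {
    1: (300, {"A": 500, "B": 0}),
    2: (600, {"A": 500, "B": 400}),
    3: (200, {"A": 0,   "B": 150}),
    4: (200, {"A": 100, "B": 150}),
}
CONTRIBUTIONS = {"A": 400, "B": 400}


def best_bundle(interventions, contributions):
    """Pick the set of interventions maximizing total utility, subject to the
    pooled budget and to each donor doing at least as well as keeping their cash
    (one possible reading of the golden rule, with costs shared in proportion
    to contributions)."""
    budget = sum(contributions.values())
    best, best_total = set(), 0.0
    ids = list(interventions)
    for r in range(1, len(ids) + 1):
        for bundle in combinations(ids, r):
            cost = sum(interventions[i][0] for i in bundle)
            if cost > budget:
                continue
            utility = {d: sum(interventions[i][1].get(d, 0) for i in bundle)
                       for d in contributions}
            shares = {d: cost * contributions[d] / budget for d in contributions}
            if any(utility[d] < shares[d] for d in contributions):
                continue  # someone would be better off not pooling
            total = sum(utility.values())
            if total > best_total:
                best, best_total = set(bundle), total
    return best, best_total


bundle, total = best_bundle(INTERVENTIONS, CONTRIBUTIONS)
print(f"Chosen interventions: {sorted(bundle)}, total utility: {total}")
# With these placeholder numbers, it picks interventions 2 and 4.
```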

Questions

What specific kinds of things would be purchased in these funds?

I imagine that these funds would begin in areas that are highly specific to the initial donors, and then find more cooperative areas as clusters of similar donors join. Some examples of particular interventions could be:

  1. The fund finds out that a few donors are big fans of a few bloggers who are seeking funding. The fund messages the bloggers, and finds that most don’t need much money, but one in particular would be helped a lot by a $5,000 yearly donation. They set that up.
  2. A friend of a few donors is looking for a loan for a year, in order to learn coding and go job hunting. The fund does a background check, loans them $10,000, follows their progress, then takes care of the repayments.
  3. The fund funds Derisked to work on a project particularly relevant to 2 donors.
  4. Three donors care a lot about x-risks. The fund coordinates with the LTFF and Longview Philanthropy to wait for particularly valuable times to donate. The LTFF tells them in July that there are a few particularly promising interventions that couldn’t meet their official standards, so the fund jumps in with donations. Later the fund decides that Longview would be a good source for $40,000/year of continued funding. They start giving them that amount, but they keep an eye on the space to be on the lookout for even better options.
  5. Three donors are based in Vancouver and are really unsure about the COVID situation. The fund helps find an online investigator into the risks of COVID, and pays them to do some work on it.
  6. There are a few donors who really like it when they are simply bought useful things. For these donors, the fund occasionally just ships them items from Amazon that are probably good for them to have (great masks for COVID, for instance). They can always return these if they want.
  7. The fund subsidizes things that are good ideas to do, and extra subsidizes things that have positive externalities. In the former case, the money ultimately comes from the individual anyway, but is “locked up” for the year. For example, subsidies could exist for gym memberships and psychological help. These would essentially act as precommitments to spend money on long-term positive things. In the case of externalities, actions that would be beneficial for many fund donors would be funded in part by other fund donors.
  8. Two different donors are in a legal dispute with each other. The fund pays to investigate counseling and other methods to lessen the legal costs for both sides. It finds some alternative dispute mechanism that would be preferable to both.
  9. It’s calculated that if a particular cluster of people would become donors, that would be very beneficial for several existing donors. Therefore, the fund pays for advertising and gives that cluster starting discounts.
  10. Fund A determines that a subset of its donors would be better served by moving part of their funding to Fund B, so it helps enact this.

It could be valuable to use these funds as insurance vehicles and other similar things, but that would complicate the incentives.

What legal structure could such funds use?

If they could be set up as nonprofits (non charities), that might be ideal for tax purposes. 501(c)(4)s (Civic Leagues, Social Welfare Organizations, and Local Associations of Employees) or 501(c)(7)s (Social and Recreational Clubs) might be options.

A different option would work by having individuals create separate accounts that are controlled by the fund managers, rather than transferring money to centralized funds. This could work with separate bank accounts or legal vehicles like LLCs (in the United States) or trusts.

It might be easiest to just start these funds as normal businesses, but this would bring challenges around tax deductibility. If one wanted to get very advanced, there could be some complex setup with donor advised funds controlled by other sorts of business structures.

How much work should these organizations do themselves?

CAUMFs might act as a fairly small layer on top of other services. I would expect them to take on some initiatives that no other organization can competently do, but generally try to push these into separate organizations. One reason for this is that CAUMFs would probably not want to have large fixed costs or commitments, in order to keep the golden rule commitment above.

What would the expenses be?

Bespoke/boutique advising can be expensive, because quality human labor is expensive. I would expect advisor costs of 5% to 20% in the short term, and that those would decrease over time.

What’s the difference between CAUMFs and cooperatives?

Cooperatives typically exist to serve specific clusters of people. CAUMFs could support more diverse collectives; I would expect them to typically be made of multiple messy clusters. If some very clean clusters emerge, these might be spun off into separate funds or businesses.

Wouldn't this be a whole lot of administrative work to track?

Charities often allow for Restricted Funds. These can honestly be a big pain to track and account for. I would expect the agreements under CAUMFs would be similarly frustrating at first. However, with the right software, and as these groups gain practice, then it might become much easier. Keep in mind that these funds should be very minimal; for each donor, they would ideally be mostly writing checks, not managing many low-level tasks.

Estimating the utility that donors would get from different interactions could be a huge challenge. It’s very possible that this would make naive versions of these funds intractable.

Can the extra money be legally given back to donors?

I’m not sure. My guess is that it would be at least a bit messy. There’s a big space for various kinds of possible hacks here. Perhaps the donors could request that the funding go to a charity instead. Perhaps it’s enough that the money be re-invested for future years (for one, the fund would itself need several months’ runway). Lawyers should probably be brought in for this.

How important is it that this acts as a fund, as opposed to a team of advisors?

Theoretically, if there are very few payments per donor, this model could work by having advisors simply suggest things to donors, and having the donors pay for them at that point. I’d love there to be more research in this area in general, and I could see things like this having their place. Already, much of Longview Philanthropy works very similarly to this. However, I think in practice this structure is limiting. Donors are notoriously fickle, indecisive, and slow in situations like this. This would likely require a lot of back-and-forth and would leave the fund unable to make commitments. It’s particularly tricky in situations that require buy-in from multiple donors. This is very similar to the question of whether the public should vote on representatives to make decisions for them, or attempt to vote themselves on every piece of legislation.

How could we get a CAUMF to be created?

The most obvious requirement is to have someone who is capable, trusted (or could easily become trusted), interested, and available. This can be a high bar, but sometimes it happens. Perhaps such a structure could start as a side project.

The next challenge is to find some match between interested donors and potential services. I imagine much of the work here would be identifying pockets like, “These three people, representing $30,000 per year total, are all based in London and would benefit from having London-specific meetups.” Finding product-market fit can be rough, even in an exciting area. I could easily imagine that founders interested in this area could get general-purpose funding just to explore the space and subsidize early donors.

If those two challenges could be overcome, legal details would need to be figured out, and the first payments would have to be processed.

Would alternative rules be even more preferable?

Alternative rules to the ones mentioned above could result in higher total expected utility. Perhaps donors would all agree to situations where it would be possible (in theory) for one donor to occasionally get all of the benefit, in cases where that benefit would maximize total utility (weighted by donation contribution). Perhaps the fund could attempt to “simply” represent the proportional utility function of its donors. (Interestingly, this would probably work best over many iterations. If an agent could ever predict that their own interests would lose out, they could just drop out in the next round.)
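
Concretely, “representing the proportional utility function of its donors” could mean maximizing something like the following (my notation, not the post’s):

$$U_{\text{fund}}(x) \;=\; \sum_i \frac{d_i}{\sum_j d_j}\, U_i(x),$$

where $d_i$ is donor $i$’s contribution and $U_i$ is their (estimated) utility function over the fund’s possible actions $x$.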

Would alternative time allotments be more preferable?

I’m not sure what the best setup is for returning capital. A more elegant (but complicated) solution would be for donors to choose discount rates. Only when some amount of their allotment were at risk of underachieving this rate (for example, if the fund didn’t expect anything promising within a 3-year period) would money be transferred back. This would obviously require some level of trust, but trust was already needed to begin with.
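
As a rough sketch of how that could work (again, my own notation): if donor $i$ picks a discount rate $\rho_i$, the fund would return a chunk $b$ of their balance whenever it expects to underperform that rate over its planning horizon of $T$ years:

$$\mathbb{E}\big[U_i(\text{best use of } b \text{ within } T \text{ years})\big] \;<\; b\,(1 + \rho_i)^T.$$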

As a fund matures it might develop high certainty around how well it could spend the money of new donors. As this happens, the fund might be able to effectively guarantee that the money will be spent well within a certain timeframe. If this happens, the need for a returning policy could be eliminated.

If yearly targets were given, I would expect that these would begin short (~1 year) as the funds find their footing, and then get longer (~5 to 10 years, perhaps). Having reserves would be useful for longer-term funding ventures. Venture funds typically use limited partnerships with finite lifespans of 7-10 years. In situations where the time horizon was more than one year, it would be more important that the funding be held outside of regular bank accounts.

Would CAUMFs compete with donor advised funds?

As stated in the body, I think they represent a very different product. These funds are more for things that would provide personal benefits, or simply to help decide between donor advised funds and specific charities.

Shouldn’t the big Effective Altruist funders fund all of the community-specific projects?

I think no, for a few reasons.

I would expect that, if we could find enough effective ways to convert money into productivity that workers ran low on money (say, if it seemed beneficial for people to spend $30,000 per year on these funds just for their own productivity benefits), donors would be interested in raising salaries proportionally. The money would just transfer from funders to employees to funds to projects, rather than directly from funders to projects.

Isn’t it selfish to donate to local preferences?

The main advantage of these funds for non-charitable ventures would be for collective funding problems. Taxes are a good example of a different sort of collective funding effort that people pay for; you can think of these funds a bit like personal-utility-maximizing taxes. Even the most adamant altruists typically require (i.e. are very much helped by) basic infrastructure and personal safety. I expect that there are many EV-positive productivity enhancements and similar that we could find that would wind up being effective on the margin, but not a good fit for charitable funds.

Could we do this as a DAO (Decentralized Autonomous Organization)?

For some reason a few people I’ve talked to recently brought up DAOs. I’m not particularly excited about this now, in part because crypto brings a bunch of tax complications that at least I’m not excited to delve into. I could see it being a possibility much later on, if the costs can be brought down a whole lot.

How can we have trust in these funds?

Great question! You should very much be skeptical. I imagine that 5% to 20% of the fund’s overhead should go to evaluation and auditing of its performance. The fund would clearly be biased when selecting evaluators, so this choice should be unusually influenced by the donors. For example, perhaps 10% of the budget is automatically granted to a 3rd-party agency that will survey the donors to decide on an evaluation process each year. Alternatively, the donors could separately pay for evaluation.

Will these funds be confidential?

Sadly, the requirement of “understanding donors well enough to advance their preferences” makes it difficult to keep their identities totally private. In situations where there are a few large donors in a community, I imagine it might be difficult to even keep their involvement confidential from the broader community. Donors might need to know each other to discuss their mutual preferences. I think privacy will be a challenge for these funds, particularly if they have some very large and eccentric donors, but they can work over time to find a good balance.

Shouldn’t [longtermist|animal|global health] donors have the same preferences for charities?

After thinking about this for a while, I think clearly “not totally”. One big challenge is that there are still a lot of differences of opinion among people within each area. I think that within longtermism at least, the issue is less one of values, and more one of beliefs. It’s possible that if they reflected on and discussed these issues indefinitely, their opinions would converge. However, right now, different people seem quite strongly committed to diverse estimates, despite often-conflicting forecasts and blog posts. Some longtermist donors might expect political issues to be higher-EV and would like to spend money accordingly, while others might prefer FHI-style macrostrategy.


NunoSempere @ 2021-08-05T17:22 (+6)

CAUMFs might act as a fairly small layer on top of other services

If this were as easy as downloading a package over npm, it would seem like an obviously good idea. But overall my impression is that this would have way too much overhead: the legal headache and coordination required might be pretty great.

Could we do this as a DAO

The thing this might be pointing at is that writing smart contracts to manipulate money right now seems more convenient on a blockchain than ~anywhere else. Like, I'm sure that Stripe has some API, but implementing something like a dominant assurance contract with a normal money API would be arduous, whereas it's doable on some blockchain. It could even be the Binance blockchain (which is not decentralized), for all I care.

Overall my impression is that this might be an idea to keep in mind, but that the infrastructure is just not there yet.

Ozzie Gooen @ 2021-08-05T15:13 (+4)

One of the key questions here that's been brought up is how difficult it would be to actually express one's utility function. 

I think this is the main problem. There are several reasons why it's difficult:

I think in practice though: 

Davidmanheim @ 2021-08-04T14:55 (+4)

This seems like a good direction to think about, but I'm skeptical it's more useful to form organizations to do this, rather than just having EA people coordinate to hire people. For example, Eliezer just hired a community matchmaker for a short period. For this type of idea, if it's a useful enough service, I suspect there will be a relatively easy-to-sustain funding model from donors that care about and value the service. This model doesn't do the "brainstorming" phase, but I also think that's the part which is hardest not to do directly with the funders / people interested, and it makes almost as much sense to have them pay someone to come up with ideas as a separate phase; there is little reason to think the people who are good at figuring out what people want also have the skills to do that thing.

Ozzie Gooen @ 2021-08-04T17:05 (+9)

Thanks for the thoughts here!
 

I think that the hiring of the community matchmaker was a good idea. I expect that coordinating funding after this will be at least a bit of a pain though. In my experience, coordinating funding of group things is often fairly painful. 

The fact that Eliezer took it on himself to pay for the matchmaker, and I believe has also previously paid for community-wide things, might present some evidence that there's more opportunity here. One reason Eliezer has been able to do these things is that he probably has more capital than most members of the community (median, not mean), not just that he's so great at coming up with these ideas. I could imagine cases where other people, if they could pool their money better, could come up with similarly interesting innovations.


Around whether the people who come up with the ideas should be separate from the ones who do the thing:

I could imagine CAUMFs falling at various places in this area. Many companies have leaders who are more idea-people and pay a lot of attention to the clients, and then they separately have COOs or similar who are in charge of running things. I'd also say that these funds would ideally not do that much of the work themselves, but instead do things like funding.

Around Eliezer's projects, you could think of one implementation as the following:

  • Several people in the community contribute money to a fund
  • The fund has 1-2 part-time or full-time people who are pretty decent at making stuff happen
  • Every so often Eliezer or anyone else will recommend an idea. The fund will work with the donors to determine interest, and then help either subsidize it or carry it out.
  • The subsidies will be taken from donors in rough proportion to how valuable the work would be to them (or something like this)