Impact Prizes as an alternative to Certificates of Impact
By Ozzie Gooen @ 2019-02-20T21:25 (+39)
Epistemic state: quite uncertain
TLDR Example
An EA donor puts up a $50k prize for distribution in 2022. In 2022, several projects that have started since 2019 apply. Their net EA impacts are estimated, and each project eventually receives a share of the $50k in proportion to its estimated impact relative to the total estimated impact of all submissions.
Back in 2019, several projects sell “rights” to their prize, and these rights get traded. It’s expected that $1M in estimated total value will apply, so the market value of a claim on every $10 of estimated impact is $0.50. One project sets up an estimation service where they publicly estimate the eventual evaluation of every project, to help make the market more efficient, with the goal of themselves getting part of the prize.
Impact Prizes
I really like the goal of Certificates of Impact, but personally find them suboptimal in practice. I think Impact Prizes present an interesting alternative. It's also possible Certificates of Impact could be used with Impact Prizes to gain the advantages of both down the road.
The most basic definition of Impact Prizes is something like,
"Declarations and fulfillment of prizes aimed at public benefit."
Such a definition would apply to many existing charity prizes. They've recently been used with success on LessWrong in the iterations of the AI Alignment Prize.
I think these get more interesting with some extra less-explored features.
Possible Features
Tokenization
If one group has an expectation of making $2,000 of prize money from a future Impact Prize, they should be able to sell that claim to a 3rd party. This should be really simple, and that 3rd party should be able to easily resell it.
We can call "parts" of this claim "tokens."[1]
Suppose there's a single $10,000 prize, to be awarded in 2020, and a specific group has a 20% chance of winning that prize. Then that group has an expected value of $2,000 of prize money. That group creates 100 tokens representing 100% of the claim to that prize. They sell 50 tokens for $1,000.
Later, they do win the $10,000 prize. However, because they only hold 50% of the tokens, they only get $5,000. The other $5,000 goes to the token purchaser.
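To make the mechanics concrete, here's a minimal sketch in Python (my own illustration, not part of any existing system) of splitting a prize payout according to token holdings, using the numbers from the example above:

```python
# Illustrative sketch: split a prize payout among token holders in proportion
# to the tokens they hold. All names and numbers are made up for this example.

def split_prize(prize_amount, holdings, total_tokens):
    """Return each holder's share of prize_amount, proportional to tokens held."""
    return {holder: prize_amount * tokens / total_tokens
            for holder, tokens in holdings.items()}

# The project minted 100 tokens and sold 50 of them to a purchaser for $1,000.
holdings = {"project": 50, "purchaser": 50}

# The project later wins the full $10,000 prize.
print(split_prize(10_000, holdings, total_tokens=100))
# {'project': 5000.0, 'purchaser': 5000.0}
```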
Proportional Prizes
If only one prize were given, then token purchasers would only be interested in projects with a chance of being the top submission. This seems suboptimal.
Imagine instead that once the prize evaluation session begins, every single project is numerically evaluated for impact. Then each one gets a reward in proportion to the impact that the project was rated as having.
Example
Say the $10,000 prize attracted 20 project entries, each of which was evaluated to have saved 1 life (these were all efficient anti-malaria projects). Each individual project would be awarded $500 ($10,000 / 20).
This means that even small projects would receive rewards, and thus, they could effectively issue and sell tokens.
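As a rough sketch (my own; the function and numbers are purely illustrative), the proportional payout could be computed like this:

```python
# Illustrative sketch: allocate a prize pool in proportion to each project's
# estimated impact, mirroring the anti-malaria example above.

def proportional_rewards(prize_pool, impact_estimates):
    """Return each project's share of prize_pool, proportional to estimated impact."""
    total_impact = sum(impact_estimates.values())
    return {project: prize_pool * impact / total_impact
            for project, impact in impact_estimates.items()}

# 20 projects, each estimated to have saved 1 life.
estimates = {f"project_{i}": 1.0 for i in range(20)}
rewards = proportional_rewards(10_000, estimates)
print(rewards["project_0"])  # 500.0 -- every project receives $500
```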
Probabilistic Evaluations
If a random sample of projects were selected for evaluation, this may not change the expected value for token purchases that much. A smaller proportion of projects would get awards, but those that did would get proportionally more.
If the distribution of project impacts were considered very long-tailed and the sample very small, then this could disincentivize investments in better projects. Perhaps one partial solution would be to do a quick first round of review, ensure that the highest-potential projects make the cut to be in the prize round, and then randomly select lower-potential projects for the rest of the round.
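One way to see why expected values can be roughly preserved: if only a random fraction of projects is evaluated and the full pool is split proportionally among the sampled projects, each project's expected payout stays close to what it would get under full evaluation. A minimal sketch, with assumptions and numbers of my own:

```python
# Illustrative sketch: evaluate only a random sample of projects and split the
# full prize pool among them; unsampled projects get nothing, but each project's
# expected payout roughly matches what full evaluation would give it.
import random

def sampled_rewards(prize_pool, impact_estimates, sample_rate, seed=0):
    random.seed(seed)
    sampled = {p: v for p, v in impact_estimates.items()
               if random.random() < sample_rate}
    if not sampled:
        return {}
    total = sum(sampled.values())
    return {p: prize_pool * v / total for p, v in sampled.items()}

estimates = {f"project_{i}": 1.0 for i in range(20)}
# With sample_rate=0.25, roughly a quarter of the projects get paid,
# each receiving proportionally more than under full evaluation.
print(sampled_rewards(10_000, estimates, sample_rate=0.25))
```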
Pragmatic Priors
Instead of using probabilistic evaluations, it could make sense to use reasonable priors. Imagine that all projects start off with wide distributions based on empirical priors. Evaluators would then gradually narrow these down in multiple passes, spending evaluation time roughly in proportion to how much each evaluation affects the final result.
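A minimal sketch of what multi-pass narrowing could look like (entirely my own construction; here "spend time where it matters" is crudely approximated by always narrowing the widest remaining distribution):

```python
# Illustrative sketch: each project starts with a wide prior (mean, sd) over its
# impact; each evaluation pass narrows the widest remaining distribution, as a
# crude stand-in for spending time where it most affects the final allocation.

def narrow_priors(priors, passes, shrink=0.5):
    """priors: {project: (mean, sd)}. Each pass shrinks the sd of the widest prior."""
    priors = dict(priors)
    for _ in range(passes):
        widest = max(priors, key=lambda p: priors[p][1])
        mean, sd = priors[widest]
        priors[widest] = (mean, sd * shrink)  # an evaluation pass narrows it
    return priors

priors = {"A": (100.0, 80.0), "B": (20.0, 10.0), "C": (50.0, 40.0)}
print(narrow_priors(priors, passes=3))
```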
Counterfactual Prize Adjustment
I assume the main goal for many Impact Prizes would be to encourage valuable activity. This may not actually correlate that well with total project impact: there could be many submitted projects that would have been done equally well without the Impact Prizes.
If this were a concern, it may be reasonable to estimate counterfactual prize value on some scale along with project value. Projects that would have been helped more by marginal prize money could be granted proportionally larger prizes. Say that each project is rated on a linear scale of 0-10 in terms of "counterfactual effect of prize amount", and this rating is multiplied by its project impact estimate.
Example
Say project A is a large-scale United Nations effort that created $2 million of value, and project B is a smaller project by an independent organization. It turns out that the United Nations would have done the project without any expectation of a reward, while for the independent organization, the reward was a decisive factor. In this case, it seems useful to be able to favor the independent organization in the award outcome.
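A sketch of the adjustment described above, with invented numbers (including project B's impact, which the example leaves unspecified): the 0-10 rating is multiplied into each project's impact estimate before the proportional payout is computed.

```python
# Illustrative sketch: weight each project's impact estimate by its 0-10
# "counterfactual effect of prize amount" rating, then pay out proportionally.

def counterfactual_adjusted_rewards(prize_pool, projects):
    """projects: {name: (impact_estimate, counterfactual_rating_0_to_10)}."""
    scores = {name: impact * rating
              for name, (impact, rating) in projects.items()}
    total = sum(scores.values())
    return {name: prize_pool * score / total for name, score in scores.items()}

projects = {
    "UN_effort": (2_000_000, 1),      # large impact, prize made little difference
    "independent_org": (500_000, 9),  # smaller impact, prize was decisive
}
print(counterfactual_adjusted_rewards(50_000, projects))
# The independent organization ends up with the larger share of the $50k.
```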
Tooling and Earmarking
Instead of presenting $10,000 for "all projects", it may make more sense to divide this pool to encourage a few areas. For instance, it may be common practice to earmark 20% for support and evaluation. The idea would be to encourage some people to "do good" by doing work that helps the prize itself. Helpful activities could include setting up a Prediction Tournament to establish common knowledge of prize expectations, or building web tools to make purchasing and selling more accessible.
Because some prizes would go towards efforts to help the prize system, this could lead to a minor prize-value-promotion economy. As stated above, some people could set up prediction systems, and other people could make predictions of prize outcomes on them. Others may act as police, detecting and reporting on bad actors.
Users could investigate not only bad behavior around prizes but also good behavior. In many tournament systems, groups become quite competitive, and indirect services like education or collaboration can be undervalued. If there is a lot of value in some of these areas, then pointing that out should itself be recognizably valuable. The presence of some motivated actors actively investigating and promoting overlooked activity would hopefully lead to more of that activity.
Dealing With Multiple Prizes
One disadvantage of Impact Prizes, compared to Certificates of Impact, is that they could get complicated when there are several different prizes by different donors. A naive implementation of Impact Prizes could demand a unique token minting per project per prize, which would make things very messy. Any given project may have dozens of tokens to worry about and trade, and many exchanges may be between clusters of tokens at a time.
A simpler setup would look something more like Certificates of Impact. Only one token is made per project, but that token can be used for all Impact Prizes. Perhaps there would be a few common token standards for Impact Prizes with different parameters.
Example
Say 50% of the tokens of a project are sold for $2,000, and later that project wins $5,000 from an Impact Prize. With shared tokens, that token holder could expect to possibly win even more money later on from other Impact Prizes as well.
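A sketch of the shared-token setup (my own illustration; the names and amounts are made up): a single token issue per project, with each prize paying out to whoever holds the tokens at evaluation time.

```python
# Illustrative sketch: one token issue per project, reused across all Impact
# Prizes; each prize pays current token holders in proportion to their holdings.

class ProjectTokens:
    def __init__(self, total_tokens):
        self.total = total_tokens
        self.holdings = {"project": total_tokens}

    def transfer(self, seller, buyer, amount):
        self.holdings[seller] -= amount
        self.holdings[buyer] = self.holdings.get(buyer, 0) + amount

    def pay_prize(self, amount):
        return {holder: amount * n / self.total
                for holder, n in self.holdings.items()}

tokens = ProjectTokens(100)
tokens.transfer("project", "investor", 50)  # 50% of tokens sold (say, for $2,000)
print(tokens.pay_prize(5_000))   # first Impact Prize pays out
print(tokens.pay_prize(3_000))   # a later prize pays the same token holders again
```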
A related technique could just be that future donors often donate to existing Impact Prizes instead of creating new ones. This would mean that Impact Prizes would have a known lower bound (the existing cash pool) but no fixed upper bound (it's not clear how much more money would be added).
It's possible that Certificates of Impact could work effectively as one of these token standards.
Risks and Insurance
One bias that these systems may create is that actors may be motivated to maximize upside risk, but may not care about minimizing downside risks. As long as Impact Prizes can only give out money (rather than demand it), the worst one should expect from a highly risky project is zero.
One way to get around this would be with formal insurance systems. All projects that create tokens could be required to purchase insurance upon project formation. When it comes time for evaluation, the Impact Prize could request that the insurer pay out for any projects that are evaluated to be net-negative. It's not obvious how to strike a balance between charging for the entire cost and charging for a proportional cost.
In the case of multiple prizes, perhaps damages should be handled outside the prize system.
Challenges
Legal Implications
I think that tokenized Impact Prize systems in particular may be quite legally complicated. Corporate stock systems come with lots of rules, in part because there's been an established record of people manipulating them in shady ways for personal gain.
If sophisticated financial instruments like shorting became possible, challenges could arise that would normally be addressed by corporate law. For instance, insider trading is regulated, in part, to prevent corporate employees from taking relatively simple actions to short their own stocks and then purposely cause bad things to happen.
If an Impact Prize system was established, it would have to either work within the current legal infrastructure, like stock, or outside of it. Both come with disadvantages.
It's possible that only accredited investors would be able to purchase Impact Prize tokens; that said, this may be a fine first step.
These considerations would really need to be evaluated by an actual attorney. I suggest anyone considering doing this at scale hire an attorney first.
That said, similar problems would come up with Certificates of Impact if they were done at a similar scale. They may also be adequately addressed by existing cryptocurrency token projects.
Evaluation Costs
The final prize evaluations could be quite costly to produce. A few methods above could help, but significant costs would remain. I feel like there are probably clever ways of thinking about this to incentivize everyone to maximize total value. For example, perhaps the evaluation cost comes out of each project's value, incentivizing projects not to apply if that total would be below zero, and incentivizing them to make the evaluation easy.
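One possible version of that last idea, sketched under my own assumptions (the numbers, and the rule that net-negative projects are simply excluded, are invented for illustration):

```python
# Illustrative sketch: deduct each project's evaluation cost from its estimated
# value before the proportional payout; projects with non-positive net value get
# nothing, so they are incentivized not to apply.

def net_value_rewards(prize_pool, projects):
    """projects: {name: (estimated_value, evaluation_cost)}."""
    net = {name: value - cost for name, (value, cost) in projects.items()}
    eligible = {name: v for name, v in net.items() if v > 0}
    total = sum(eligible.values())
    return {name: prize_pool * v / total for name, v in eligible.items()}

projects = {
    "easy_to_evaluate": (10_000, 500),
    "hard_to_evaluate": (10_000, 6_000),
    "marginal": (1_000, 2_000),  # net negative: better off not applying
}
print(net_value_rewards(20_000, projects))
```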
Openness Costs
My current model is that there are a lot of incentives not to make most kinds of evaluations public. Perhaps the best comparison is prizes that are given out based on rubrics, though most of the results of those rubrics are not made public.
The Impact Prize evaluations may be controversial and are likely to be at least somewhat misunderstood. Public evaluations may really require a community that is quite epistemically mature.
Controversy could create liability. If a Twitter war or similar gets started, it's possible there could be enough anger for any prize to be canceled, or at least to stop future prizes.
Technology
For such a system to work well, a decent amount of work may be needed both on technical tooling and in implementation creativity.
Cultural Risks
If Impact Prizes took off, I could imagine some actors being drawn into the ecosystem who are motivated only by making profits. The token system may be looked at as a form of gambling (somewhat similar to the stock market) and may lead to some gambling tendencies. I think there may be some significant downsides here, but I estimate that the upsides would be higher (though this should be tested!). This could obviously be partly combatted using some of the techniques mentioned above.
Comparison to Certificates of Impact
A Philosophical Comparison
Perhaps the main philosophical difference between Impact Prize Tokens and Certificates of Impact is that Certificates of Impact, according to Paul Christiano, are supposed to represent causal responsibility. As he writes,
Allocating certificates requires explicit and transparent allocation of causal responsibility, both within teams and between teams and donors.
I personally find the causal responsibility bit unintuitive, and don't expect a much larger community (especially outside the EA sphere) to accept it.
Impact Prize tokens would be decoupled from this idea.
A Ratio Comparison
I believe Certificates of Impact are supposed to be priced at their expected rates of impact, so $1 worth of certificates means $1 of counterfactual impact.
I think this will prove somewhat inflexible, and I question the market viability of a $1-to-$1 peg. If demand is much less than what is necessary to sustain the peg, I would expect an illiquid market.
That said, a $1-to-$1 peg would of course make things very simple if it works. A variable ratio could be fairly confusing and could require sophisticated purchasers (well, ones that could do two multiplications).
Complexity
Perhaps the main challenge for Impact Prizes as I discuss them is their additional complexity compared to Certificates of Impact. They require setting up a prize in advance and then, when it pays out, either doing a large batch of evaluations or figuring out clever ways of decreasing that burden.
Further Work
The feature space is quite large. I'd like it to be larger. I'd be curious to hear other ideas for features, or modifications to the features above.
One area I'm particularly interested in is how best to structure the openness of evaluations. I think that the "Openness Cost" is very significant, and it would be nice to be able to reduce it while still maintaining most of the benefits of the evaluations.
[1] There's much about the Blockchain world I don't like, but they have used tokens extensively for this specific purpose. I don't want to use the word "shares" because these parts will not have any voting rights, and legally there are other important distinctions.
Special thanks to Ryan Carey for discussing the concept, providing writing feedback, and suggesting I use the name "Impact Prizes" instead of something more obtuse.
Denis Drescher @ 2022-04-05T23:28 (+6)
I read your post back in the day (according to my upvote), but then sadly forgot about it! You’ll probably have noticed that my conception of it follows your design. I never thought that the differences that you detected were particularly notable and concluded that a market for impact certificates is the same as a prize competition (or an iterated prize competition) with many proportional prizes, an opportunity for seed funders to make a profit by teaming up with prospective future winners, and any or no lower limit on when the outcomes have to have been achieved.
I’ll reference your post next time when I draw this analogy! :-D
Larks @ 2019-02-26T04:19 (+4)
Great post; I had been thinking about writing something very similar. In many ways I think you have actually understated the potential of the idea. Additionally I think it addresses some of the concerns Owen raised last time.
Evaluation Costs
The final prize evaluations could be quite costly to produce.
I actually think the final evaluations might be cheaper than the status quo. At the moment OpenPhil (or whoever) has to do two things:
1) Judge how good an outcome is.
2) Judge how likely different outcomes are.
With this plan, 2) has been (partially) outsourced to the market, leaving them with just 1).
Cultural Risks
If Impact Prizes took off, I could imagine some actors being drawn into the ecosystem who are motivated only by making profits.
This is not a bug, this is a feature! There is a very large pool of people willing to predict arbitrary outcomes in return for money, that we have thus far only very indirectly been tapping into. In general bringing in more traders improves the efficiency of a market. Even if you add noisy traders, their presence improves the incentives for 'smart money' to participate. I think it's unlikely we'd reach the scale required for actual hedge funds to get involved, but I do think it's plausible we could get a lot of hedge fund guys participating in their spare time.
Legal Implications
In terms of legal status, one option I've been thinking about would be copying PredictIt. If we have to pay taxes every time a certificate is transferred, the transaction costs will be prohibitive. I am quite worried it will be hard to make this work within US law unfortunately, which is not very friendly to this sort of experimentation. At the same time, given the SEC's attitude towards non-compliant security issuance, I would not want to operate outside it!
Quick other thoughts
One issue with the idea is it is hard for OpenPhil to add more promised funding later, because the initial investment will already have been committed at some fixed level. e.g. If OpenPhil initially promise $10m, and then later bump it to $20m, projects that have already sold their tokens cannot expand to take advantage of this increase, so it is effectively pure windfall with no incentive effect. A possible solution would be cohorts; we promise $10m in 2022 for projects started in 2019, and then later add another $12m, paid in 2023, for 2020 projects.
Ozzie Gooen @ 2019-02-26T11:13 (+2)
Thanks! Some quick responses to parts.
Legal Implications
I think copying PredictIt would be pretty messy. I'm curious about the feasibility of using crypto, similar to CryptoKitties. It seems like several crypto groups did essentially a similar thing, and perhaps in the medium-term, they will be recognized as being legal in the United States.
"If OpenPhil initially promise $10m, and then later bump it to $20m, projects that have already sold their tokens cannot expand to take advantage of this increase, so it is effectively pure windfall with no incentive effect. "
I agree cohorts are one solution. Another issue, though, is that if the audience thought there was a decent chance of a bump (say, a 40% chance of a $10M bump), then that would be factored into the price.
Milan_Griffes @ 2019-02-20T17:42 (+4)
Is there a postmortem somewhere on Certificates of Impact & challenges they faced when implementing?
Larks @ 2019-02-26T04:15 (+11)
I think I might have been the second largest purchaser of the certificates. My experience was that we didn't attract the really high quality projects I'd want, and those we did see had very high reservation prices from the sellers, perhaps due to the endowment effect. I suspect sellers might say that they didn't see enough buyers. Possibly we just had a chicken-and-egg problem, combined with everyone involved being kind of busy.
Ozzie Gooen @ 2019-02-20T20:35 (+4)
Not that I know of. Paul has a lot of stuff going on, for one thing. :)
I think some people are still excited about Certificates of Impact though.
Raemon @ 2019-02-20T20:49 (+10)
My impression is that nobody has made it their job (and spent at least a month and preferably a year or two) to make Certificates of Impact work. i.e. money is real because humans have agreed to believe it's real, and because there's a lot of good infrastructure that helps it work. If Certificates of Impact (or Prizes) are to be real someone needs to actually build a thing and hype it continuously. So far it doesn't feel like it's been tried.
Ozzie Gooen @ 2019-02-20T20:53 (+3)
I'd generally agree with that.
Honestly, the technical infrastructure for Certificates of Impact would be very similar to that for Impact Prizes as I discuss them above. I think both would be really interesting to test at larger scales.
Impact Prizes may need less hype though, but may be more difficult to scale.
Raemon @ 2019-02-20T21:29 (+1)
Hmm, I think they need about the same amount of hype. I do think Impact Prizes aren't any harder to scale – Certificates of Impact already depend on something like Impact Prizes eventually existing.
Actually, I think of Impact Prizes as "a precise formulation of how one might scale the hype and money necessary for Certificates to work."
Ozzie Gooen @ 2019-02-20T22:05 (+2)
That makes sense to me. When I said "harder to scale", I meant harder to "put a bunch on top of each other". In some ways it's not as elegant.
Agreed that Impact Prizes are one way that Certificates of Impact could work long-term. Like, one group places $100k of Impact Prizes for 2030, where it will only be used to purchase Certificates of Impact.
Sanjay @ 2019-02-25T00:04 (+3)
I think this idea is similar to alice.si (see https://alice.si/ or for more detail https://github.com/alice-si/whitepaper/blob/master/Alice%20white%20paper%20-%20FV%200.9.pdf)
I know the founder of alice.si (not very well, but we've met up a couple of times).
(Note that alice is on the blockchain and I'm not convinced there's much benefit apart from the fact that some people don't trust charities and the blockchain might help with that)
Also, I haven't read this very carefully, so apologies if the two ideas are not as similar as I think
cole_haus @ 2019-02-20T07:12 (+3)
This seems very related to social impact bonds: "Social Impact Bonds are a type of bond, but not the most common type. While they operate over a fixed period of time, they do not offer a fixed rate of return. Repayment to investors is contingent upon specified social outcomes being achieved."
Ozzie Gooen @ 2019-02-20T15:46 (+4)
Yep, it's related. I've looked a bit into social impact bonds; they actually are quite specific and precise though (they pay with a specific interest rate, for very specific outcomes).
There have been different thoughts on how to use markets for charitable work before. The Impact Certificate Post and its comments list some.
https://forum.effectivealtruism.org/posts/yNn2o3kEhixZHkRga/certificates-of-impact#f4CnqiLtCcg6zfAKF
Milan_Griffes @ 2019-02-20T05:43 (+2)
So the prize money gets paid out in 2022, in the tl;dr example? (I'm a little unclear about that from my quick read.)
This means that the Impact Prize wouldn't help teams fund their work during the 2019-22 period. Am I understanding that correctly?
Raemon @ 2019-02-20T06:27 (+2)
Part of the point is that, although the prize isn't awarded until 2022, you can still sell your rights to the prize in 2019, to someone who predicts that you will win the prize in 2022.
Milan_Griffes @ 2019-02-20T15:21 (+4)
Got it. So this would go something like:
- There's a prize!
- I'm going to do X, which I think will win the prize!
- Do you want to buy my rights to the prize, once I win it after doing X ?
Seems like this will select for sales & persuasion ability (which could be an important quality for successfully executing projects).
Ozzie Gooen @ 2019-02-20T15:44 (+5)
Yep.
I imagine it would select in part for sales & persuasion, but not more than for other prizes (where you need to do the same for the judges). The middlemen would focus on the financial motive, so I'd expect them to be relatively sane.
I would really want the evaluations and predictions/estimations to be very good, in order to make sure people focus on the right things.