How Can Donors Incentivize Good Predictions on Important but Unpopular Topics?
By MichaelDickens @ 2019-02-03T01:11 (+27)
Altruists often would like to get good predictions on questions that don't necessarily have great market significance. For example:
- Will a replication of a study of cash transfers show similar results?
- How much money will GiveWell move in the next five years?
- If cultured meat were price-competitive, what percent of consumers would prefer to buy it over conventional meat?
If a donor would like to give money to help make better predictions, how can they do that?
You can't just pay people to make predictions, because there's no incentive for their predictions to actually be accurate and well-calibrated. One step better would be to pay out only if their predictions are correct, but that still incentivizes people who may be uninformed to make predictions because there's no downside to being wrong.
Another idea is to offer to make large bets, so that your counterparty can make a lot of money for being right, but they also want to avoid being wrong. That would incentivize people to actually do research and figure out how to make money off of betting against you. This idea, however, doesn't necessarily give you great probability estimates because you still have to pick a probability at which to offer a bet. For example, if you offer to make a large bet at 50% odds and someone takes you up on it, then that could mean they believe the true probability is 60% or 99%, and you don't have any great way of knowing which.
You could get around this by offering lots of bets at varying odds on the same question. That would technically work, but it's probably a lot more expensive than necessary. A slightly cheaper method would be to determine the "true" probability estimate by binary search: offer to bet either side at 50%; if someone takes the "yes" side, offer again at 75%; if they then take the "no" side, offer at 62.5%; continue until you have reached satisfactory precision. This is still pretty expensive.
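The binary-search procedure above can be sketched in a few lines of Python. This is a toy model, not anything from the post: it assumes a single bettor who always takes whichever side of the offered bet they believe is profitable, and the `taker` callback and tolerance are my own illustrative choices.

```python
def binary_search_odds(taker, lo=0.0, hi=1.0, tol=0.01):
    """Locate a bettor's probability estimate by repeatedly offering
    bets at the midpoint of the current interval.

    `taker(p)` is assumed to answer "yes" if the bettor thinks the
    true probability exceeds the offered odds p, and "no" otherwise.
    """
    while hi - lo > tol:
        p = (lo + hi) / 2
        if taker(p) == "yes":   # bettor thinks the true probability is above p
            lo = p
        else:                   # bettor thinks the true probability is below p
            hi = p
    return (lo + hi) / 2

# A hypothetical bettor who believes the true probability is 62%:
bettor = lambda p: "yes" if p < 0.62 else "no"
estimate = binary_search_odds(bettor)  # converges near 0.62
```

Note that the first two offers reproduce the sequence in the text: 50%, then 75% after a "yes", then 62.5% after a "no". Each round narrows the interval by half, so reaching 1% precision takes about seven bets, each of which could cost you money if the bettor is well informed.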
In theory, if you create a prediction market, people will be willing to bet lots of money whenever they think they can outperform the market. You might be able to start up an accurate prediction market by seeding it with your own predictions; then savvy newcomers will come and bet with you; then even savvier investors will come and bet with them; and the predictions will get more and more accurate. I'm not sure that's how it would work out in practice. And anyway, the biggest problem with this approach is that (in the US and the UK) prediction markets are heavily restricted because they're considered similar to gambling. I'm not well-informed about the theory or practice of prediction markets, so there might be clever ways of incentivizing good predictions that I don't know about.
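One standard way to "seed" a market that the post doesn't name is an automated market maker such as Hanson's logarithmic market scoring rule (LMSR), where the sponsor's subsidy is the market maker's bounded worst-case loss. The sketch below is my own minimal illustration for a binary question, not something proposed in the post; the liquidity parameter `b` and all function names are assumptions.

```python
import math

def lmsr_cost(q_yes, q_no, b=100.0):
    """LMSR cost function: total paid into the market maker when
    q_yes YES shares and q_no NO shares are outstanding. The sponsor's
    worst-case loss (the subsidy) is bounded by b * ln(2)."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b=100.0):
    """Current market probability of YES implied by the share totals."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def buy_yes(q_yes, q_no, shares, b=100.0):
    """Cost to a trader of buying `shares` additional YES shares."""
    return lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)
```

The mechanism matches the dynamic described above: the market opens at 50%, anyone who disagrees can trade against the market maker at the current price, and each trade moves the price toward the trader's belief. The subsidy bound means the donor knows in advance the maximum they can lose to informed traders.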
Anthony Aguirre (co-founder of Metaculus, a website for making predictions) proposed paying people based on their track record: people with a history of making good predictions get paid to make more predictions. This incentivizes people to establish and maintain a track record of making good predictions, even though they don't get paid directly for accurate predictions per se.
Aguirre has said that Metaculus may implement this incentive structure at some point in the future. I would be interested to see how it plays out and whether it turns out to be a useful engine for generating good predictions.
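To make the track-record idea concrete, here is one hypothetical way a payment rule could work, using the Brier score to measure accuracy. Neither Aguirre nor Metaculus has specified this scheme; the baseline, the proportional split, and all names here are my own illustrative assumptions.

```python
def brier_score(predictions):
    """Mean squared error between forecast probabilities and binary
    outcomes. 0 is perfect; a constant 50% forecast scores 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

def allocate_budget(track_records, budget, baseline=0.25):
    """Split a prize budget among forecasters who beat the baseline
    (constant 50% forecasts), in proportion to how far they beat it.
    The proportional rule is an assumption, not Metaculus's design."""
    margins = {name: max(0.0, baseline - brier_score(preds))
               for name, preds in track_records.items()}
    total = sum(margins.values())
    if total == 0:
        return {name: 0.0 for name in track_records}
    return {name: budget * m / total for name, m in margins.items()}
```

Because the payout depends on a whole history of resolved questions rather than any single bet, a forecaster can't profit by spamming uninformed guesses: bad predictions drag down the track record that future payments are based on.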
One practical option, which goes back to the first idea I mentioned, is to pay a group of good forecasters like the Good Judgment Project (GJP). In theory, they don’t have a strong incentive to make good predictions, but they did win IARPA's 2013 forecasting contest, so in practice it seems to work. I haven't looked into how exactly to get predictions from GJP, but it might be a reasonable way of converting money into knowledge.
Based on my limited research, it looks like donors may be able to incentivize predictions reasonably effectively with a consulting service like GJP, or perhaps by doing something involving prediction markets, although I'm not sure what. I still have some big open questions:
- What is the best way to get good predictions?
- How much does a good prediction cost? How does the cost vary with the type of prediction? With the accuracy and precision?
- How accurate can predictions be? What about relatively long-term predictions?
- Assuming it's possible to get good predictions, what are the best types of questions to ask, given the tradeoff between importance and predictability?
- Is it possible to get good predictions from prediction markets, given the current state of regulations?
cole_haus @ 2019-02-05T06:51 (+5)
Subsidizing a prediction market seems like one of the more promising approaches to me. There's a write-up of what that would look like more concretely at: Subsidizing prediction markets. Unfortunately, a quick search also turns up a theoretical limitation of this approach: Subsidized Prediction Markets for Risk Averse Traders.
Ozzie Gooen @ 2019-02-08T01:34 (+4)
On your questions:
1. I've been doing a decent amount of thinking & experimentation in similar work recently. I'm personally optimistic about non-market applications like GJP and Metaculus. I think that the path for similar groups to pay forecasters is much more straightforward than for prediction markets. I think there could be a lot more good work in this area.
2. GJP charges several thousand dollars per question, but Metaculus is free, assuming they accept your questions. I think the answer to this is very complicated; there are many variables at play. That said, I think that with a powerful system, $50k-500k per year in predictions could get a pretty significant informational return.
3. This is also a very vague question; it's not obvious what metrics to use to best answer it. That said, if a good prediction system is made, it could help answer this question in specific quantitative ways. It seems to me that a robust prediction system should be roughly at least as accurate as a non-predictive system with the same people. Long-term predictions are tricky, but I think we could have some basic estimates of bias.
4. This is also a huge question. I think there's a lot of experimentation yet to be done here on many different kinds of questions. If we could have meta-predictions on things like, "How important will we have found it was to have this item in the system?", then we may be able to use the system to answer and optimize here.
5. I'm not very optimistic about prediction markets. This is of course something that would be nice to formally predict in the next 1-3 years.
Halffull @ 2019-02-13T18:53 (+1)
One option we were looking to use at Verity is the 'contest' model, in which an interested party can subsidize a particular question, and then split the pool between forecasters based on their reputation/score after the outcome has come to pass. This helps to subsidize specific predictions, rather than subsidizing more general predictions by paying people for their overall score. It has similarities to the subsidized prediction market model as well.
PeterMcCluskey @ 2019-02-06T17:57 (+1)
Regulations shouldn't be much of a problem for subsidized prediction markets. The regulations are designed to protect people from losing their investments. You can avoid that by not taking investments - i.e. give every trader a free account. Just make sure any one trader can't create many accounts.
Alas, it's quite hard to predict how much it will cost to generate good predictions, regardless of what approach you take.