Epistemic Trade: A quick proof sketch with one example
By Linch @ 2021-05-11T09:05 (+19)
Note on history of this post: I wrote a draft of this post last (2020) summer, when I was both more obsessed with COVID-19-related questions and more interested in exploring my fit for research into philosophy/macrostrategy, and this post has strong echoes of both. Since then I’ve decided that there’s greater comparative advantage and personal fit for me in empirical cause prioritization, at least in the short term. I ended up deciding I should publish it anyway, and have done <15 minutes of editing between August 2020 and today.
There are two important caveats:
- Timeliness: Because the covid-y example originated last (2020) summer, it likely already looks dated. In addition, I did not bother to look at the recent (post Aug 2020) literature on peer disagreement or related issues before publishing.
- Quality: The post is of lower quality and will look more rushed than I’d ideally like, however I’ve decided to publish this sooner rather than later since I haven’t worked on it for almost a year and realistically I’m unlikely to work on it again any time soon.
Introduction
In the middle of a conversation with a friend, I came up with a minor idea in the intersection of a number of philosophy questions that many EAs and rationalists are interested in, including epistemic deference, peer disagreement, moral trade, acausal trade and epistemic game theory.
I’m not claiming that this is particularly important or insightful, but I and a few people I talked to thought it was interesting. So I decided to write it up in case you also find puzzling over it and related issues interesting/fun!
The article is written as a proof sketch rather than a proof.
Claim
Even when aggregating beliefs is costly, it can be Pareto-efficient for two agents to simply act as if they had “swapped” beliefs.
A COVID-y Example
To clarify this, I’ll give a covid-y example. Alice and Bob are acquaintances who live far away from each other and are unlikely to infect each other. Assume that they are selfish (they only care about personal risk), have similar objectives around COVID-19 (they want to avoid the possibility of death or long-term disability), and have similar overall risk assessments (their overall beliefs about the risk of COVID-19 are fairly similar). However, they have very different internal risk models.
Alice thinks that aerosol transmission is the most dangerous/likely source of COVID-19 transmission, and that the best intervention to prevent this is wearing N95 masks. She’s very skeptical of surface transmission (and correspondingly, hand hygiene).
Bob thinks that surface transmission is the most dangerous/likely source of COVID-19 transmission, and that the best intervention to prevent this is proper adherence to hand hygiene. He’s very skeptical of aerosol transmission (and correspondingly, masks).
Alice and Bob are epistemic peers. They both respect each other a lot and don’t think one is necessarily more knowledgeable than the other. However, when they tried to respectfully discuss their disagreements, neither found the other’s arguments convincing.
However, there’s an additional twist: Alice finds wearing masks very costly. She can only wear a properly-fitted mask in ~70% of the situations where she considers not wearing a mask to be dangerous. In contrast, she considers handwashing very easy to practice (if, on her model, pointless).
Bob’s costs are exactly the opposite of Alice’s.
What should Alice and Bob do? In this case, I claim that even if belief aggregation is impossible, they will be better off swapping risk models and acting as if the other’s risk model were true. I call this swapping “epistemic trade.”[1]
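To make the numbers concrete, here’s a minimal toy sketch. All numbers are made up, and it scores each policy as if each model had a 50% chance of being right, which is one crude way of cashing out “Alice and Bob are epistemic peers”:

```python
# Toy numbers for the Alice/Bob example; every constant here is an assumption.
BASELINE_RISK = 0.10    # chance of a bad outcome with no effective intervention
RISK_REDUCTION = 0.9    # fraction of risk removed by full compliance with the
                        # intervention that the *true* model recommends
HARD_COMPLIANCE = 0.7   # compliance when the intervention feels costly (the ~70%)
EASY_COMPLIANCE = 1.0   # compliance when it feels cheap
HARD_COST = 0.01        # effort cost (same units as risk) of the costly intervention
EASY_COST = 0.001       # effort cost of the cheap one

def expected_risk(mask_compliance, handwash_compliance, p_aerosol=0.5):
    """Expected risk if the aerosol model is right with probability p_aerosol
    and the surface-transmission model with probability 1 - p_aerosol."""
    risk_if_aerosol = BASELINE_RISK * (1 - RISK_REDUCTION * mask_compliance)
    risk_if_surface = BASELINE_RISK * (1 - RISK_REDUCTION * handwash_compliance)
    return p_aerosol * risk_if_aerosol + (1 - p_aerosol) * risk_if_surface

# Masks are hard for Alice and easy for Bob; handwashing is the mirror image.
alice_own  = expected_risk(HARD_COMPLIANCE, 0.0) + HARD_COST  # Alice masks, effortfully
alice_swap = expected_risk(0.0, EASY_COMPLIANCE) + EASY_COST  # Alice washes hands, easily
bob_own    = expected_risk(0.0, HARD_COMPLIANCE) + HARD_COST  # Bob washes hands, effortfully
bob_swap   = expected_risk(EASY_COMPLIANCE, 0.0) + EASY_COST  # Bob masks, easily

print(f"Alice: own model {alice_own:.4f} vs swapped {alice_swap:.4f}")
print(f"Bob:   own model {bob_own:.4f} vs swapped {bob_swap:.4f}")
```

With these made-up numbers, both Alice’s and Bob’s risk-plus-effort totals fall after the swap (0.0785 → 0.0560 each), even though neither has changed their underlying beliefs.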
Expanded Claim
Assume two epistemic peers with a similar objective, very different beliefs/causal models for how to achieve said common objective, and different costs for various actions. I claim that in some situations when belief aggregation/updating is costly, it may in expectation be Pareto efficient to just “swap” their causal models.
Assumptions
Let’s explore each of the assumptions in detail.
- Two Epistemic Peers
- For the relevant domain, Alice and Bob must be epistemic peers.
- Intuitively, if Alice has her beliefs because she is a computational fluid dynamics expert who’s well-read on the COVID-19 literature and has run many of her own simulations, while Bob’s source was “has read the US CDC website once in March”, Alice would (justifiably) choose to not update much on Bob’s beliefs.
- It’s possible that not only must they believe each other to be epistemic peers, but they must have common knowledge of this. However, I did not explore this angle further.
- What defines an epistemic peer?
- I think this is the weakest part of the argument, since there may not be a rigorous formal definition. I tried skimming the peer disagreement literature, and got pretty confused.
- Similar/common objective
- In this situation, we’re imagining that with reference to model-relevant details, Alice and Bob have a similar (presumably selfish) objective, like not getting COVID-19.
- The picture is a lot less intuitive if Alice wants to avoid getting COVID-19 and Bob wants to maximize the number of paperclips in the universe.
- We’re also implicitly assuming a similar magnitude of concern
- In this case, a similar risk appetite. The model might break if Alice is 10000 times more worried about COVID-19 than Bob.
- Different causal models on how to achieve said objective
- In our case, different risk models for which things cause COVID-19 transmission and which interventions prevent it
- Different cost functions
- This is what makes trade possible. In our case, handwashing is less costly for Alice, and mask usage is less costly for Bob.
- Belief aggregation is difficult or impossible
- Model exchange is possible, and not too costly
- In essence, exchange has to a) be cheaper/more doable than belief aggregation, and b) not lose too much model fidelity.
Robustness
Are these assumptions potentially realistic in real-world situations?
- Two Epistemic Peers
- I don’t know if “true epistemic peer” is a well-defined term here, but intuitively, situations where two people are sufficiently close epistemic peers have to be fairly common. (For example, two epidemiologists with very different risk models but a similar impact factor, two EAs who respect each other a lot, or two forecasters with a similar Brier score on similar questions).
- Similar/common objective
- For the COVID-19 example, this seems like a fairly safe assumption. People may not have identical total risk tolerance, but the variance here is often smaller than the differences described in the next two points.
- Different causal models on how to achieve said objective
- I regularly encounter people in similar reference classes (different epidemiologists on Twitter, say) who have very different risk models for, e.g., whether SARS-CoV-2 spreads via large droplets vs. small airborne droplets vs. fomites.
- Different cost functions
- Intuitively, I regularly meet people who seem to have costs that are 10x greater or smaller than mine for the same intervention
- Taking preferences at face value, it’s hard to imagine that people would go to rallies to oppose mask usage unless it really matters to them.
- Belief aggregation is difficult or impossible
- I’m not sure how hard this is in practice. I do feel like there are many times where I talk to people who I mostly consider to be epistemic peers, and after long conversations, we cannot reach consensus (and indeed, if we ignore social politeness, I at least would not have updated towards their position at all).
- Exchange of action plans based on different models is possible, and not too costly
- Possible: I do think there are some situations where it’s hard to update your models, but you can act as if you believe the new model.
- I think in practice this is healthier than changing your beliefs based on outside-view reasons[2].
- Not too costly: One cost is the time cost of communicating your model, and/or what actions your model entails, during an exchange. Another cost is fidelity: you’re probably worse at operating under a model you don’t believe than one that you do.
- For example, if you haven’t thought a lot about the implications of airborne transmission, you may be worse at specifically identifying/remembering the most necessary situations for mask usage.
- I suspect this is not a big deal in practice relative to the differences in causal models and cost functions (points 3 and 4 above)
- Caleb: In practice, an additional cost is becoming the kind of person who does things that violate their beliefs. Sacrificing consistency for meta-consistency. Some people can do this, others can't.
Some side notes
Is aggregating beliefs/updating always better than trade?
No, not strictly speaking. Toy counterexamples will be left as an exercise to the reader.
(An example of such a situation is one where there is a discontinuity in risk at X% compliance with an intervention, such that a typical intermediate value between the two world models is not enough to reach X%.)
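One minimal sketch of that hinted situation, with made-up numbers, and with the extra assumption that acting on an aggregated belief means splitting your effort between the two interventions:

```python
# Toy illustration of the hinted counterexample; all numbers and the hard
# threshold are assumptions. Suppose an intervention does nothing unless
# compliance crosses a threshold, and acting on an aggregated 50/50 belief
# spreads effort so thinly that neither intervention crosses it.
THRESHOLD = 0.8
BASELINE_RISK = 0.10
RISK_REDUCTION = 0.9

def expected_risk(mask_compliance, handwash_compliance, p_aerosol=0.5):
    """Expected risk with a hard compliance threshold on each intervention."""
    mask_effect = RISK_REDUCTION if mask_compliance >= THRESHOLD else 0.0
    wash_effect = RISK_REDUCTION if handwash_compliance >= THRESHOLD else 0.0
    return (p_aerosol * BASELINE_RISK * (1 - mask_effect)
            + (1 - p_aerosol) * BASELINE_RISK * (1 - wash_effect))

aggregated = expected_risk(0.5, 0.5)  # split effort per the aggregated belief
traded     = expected_risk(0.0, 1.0)  # commit fully to one intervention (Alice's side)

print(f"acting on aggregated beliefs: {aggregated:.4f}")
print(f"acting on a traded model:     {traded:.4f}")
```

Here the split effort never clears the threshold, so acting on the aggregated belief gives 0.1000 expected risk vs. 0.0550 for the traded model.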
Is trade necessary?
One might ask: Is trade necessary? If an agent thinks that an epistemic peer’s risk model is less costly (and they place no terminal value on the well-being of the peer), can’t they just unilaterally update to the peer’s risk model?
My guess is that the answer is yes, you need to trade in at least some situations. My intuition goes something like this: consider a case with N risk models and N sets of costs. If everybody thinks it’s epistemically acceptable to unilaterally update, the “correct” thing to do would be to have a race to the bottom where they each adopt the “easiest” risk model to follow. Intuitively (I don’t have a proof), this will lead to greater overall risk in expectation. Thus, having a peer willing to trade serves as a credible signal that your update doesn’t increase overall expected risk.
How to formalize this is unclear to me.
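As a toy sketch of this intuition (not a formalization), suppose, as an extra assumption not made elsewhere in this post, that the model that is easiest for everyone to follow is also the one recommending the least precaution:

```python
# Toy sketch of the race-to-the-bottom intuition. It adds an assumption the
# post doesn't otherwise make: the model that is easiest for everyone to
# follow ("minimal_precautions") also demands the least precaution.
BASELINE_RISK = 0.10
RISK_REDUCTION = 0.9
MODELS = ["masks", "handwashing", "minimal_precautions"]
PRIOR = 1 / len(MODELS)  # evaluate as if each model were equally likely to be true

# Compliance each agent could manage with each model's recommended behaviour.
COMPLIANCE = {
    "Alice": {"masks": 0.7, "handwashing": 1.0, "minimal_precautions": 1.0},
    "Bob":   {"masks": 1.0, "handwashing": 0.7, "minimal_precautions": 1.0},
    "Carol": {"masks": 0.7, "handwashing": 0.7, "minimal_precautions": 1.0},
}

def expected_risk(adopted):
    """Average risk per agent, over which model turns out to be true."""
    total = 0.0
    for true_model in MODELS:
        for agent, model in adopted.items():
            if true_model == "minimal_precautions":
                risk = 0.0  # if the lax model is right, there is little to protect against
            elif model == true_model:
                risk = BASELINE_RISK * (1 - RISK_REDUCTION * COMPLIANCE[agent][model])
            else:
                risk = BASELINE_RISK  # acting on the wrong model gives no protection
            total += PRIOR * risk
    return total / len(adopted)

# Race to the bottom: everyone unilaterally adopts the easiest model.
race  = {agent: "minimal_precautions" for agent in COMPLIANCE}
# Trade: Alice and Bob swap (each gets the intervention that's easy for them);
# Carol, who already believes the lax model, has nothing to trade and stays put.
trade = {"Alice": "handwashing", "Bob": "masks", "Carol": "minimal_precautions"}

print(f"race to the bottom: {expected_risk(race):.4f}")
print(f"pairwise trade:     {expected_risk(trade):.4f}")
```

With these assumptions, the race leaves everyone exposed whenever a precaution-demanding model turns out to be true (~0.067 expected risk per person, vs. ~0.047 with the pairwise trade), which captures the flavour of the intuition above, though not the general case.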
Is this idea…
True?
Having not put too much thought into it, I place ~60% credence that the core idea, or something meaningfully like it, is true.
Novel?
After talking to several people who know more than me, some light scans of Google Scholar, and reading the Stanford Encyclopedia of Philosophy’s sections on peer disagreement and epistemic game theory, I place moderate (~60%?) credence in it being novel. It has startlingly similar characteristics to the epistemic prisoners’ dilemma, but I still think it’s meaningfully different.
This belief is not very resilient, and I’ll quickly update if somebody comments with a citation.
Useful?
I’m currently fairly optimistic (around 80%?) that this is a sufficiently interesting idea that it’s worth people’s time reading.
I’m much less optimistic (~17%?) that this has direct usefulness in advancing theoretical work, and even more pessimistic (~12%?) that this will have sufficiently interesting practical implications that it’d, e.g., end up as part of a solution in another paper.
Applicability to Effective Altruism
So far, I think this is a solution looking for a problem. The main point of interest, I think, is that it might generalize some results in moral trade to apply to situations where the fundamental disagreements are epistemic, rather than about different terminal values or moral systems.
A commentator also suggested that there’s applicability to Comparative advantage in EA careers, though here I am personally unsure about (and lean against) the practical utility of epistemic trade, vs. either a) actually updating or b) trading impact certificates.
Future Work (Possibilities)
Here are things I’d be excited to see future work on:
- Making subparts of the argument more rigorous
- Figuring out what “epistemic peer” really refers to
- Cleaning up the “is trade necessary” section
- Figuring out which assumptions have to be true for epistemic trade to be Pareto efficient:
- Do you need monotonicity of risk models/costs?
- What other structure of costs and utilities is/is not necessary?
- Coming up with non-COVID-19 examples
- Adding references and tying this work in with the existing academic literature
- Deeper dive to probe whether it’s true/novel
- Thinking harder about potential practical applications of epistemic trade?
- AI alignment?
- Game theory/collective decision-making?
- How EAs allocate resources?
- Coming up with a more precise name than epistemic trade[1]
Future Work (Realistic)
If this post gets a bunch of good and/or useful feedback without a devastating counterargument, I might (May 2021: ~25%? (Note: was ~55% in first draft)) expand it to a longer blog post.
I think it is unlikely (May 2021: ~6%, original: ~13%) that I’ll want to make it substantially more rigorous by myself, e.g. by trying to make the arguments rigorous enough for a paper or preprint. However, I will of course be very excited if someone with different incentives and interests from me (e.g., a PhD student in epistemology or game theory, or someone from a different domain who could think of practical applications for this idea) wants to collaborate.
Footnotes and Caveats
[1] I checked to make sure the phrase “epistemic trade” isn’t already taken. However, I think this isn’t a very important concept, and reserving the phrase “epistemic trade” seems a bit defect-y. (I also feel this way about the not-so-fundamental Fundamental Attribution Error, as well as most theories that begin with the word “modern”). If people have suggestions for a more precise/descriptive name with a lower probability of future naming collisions, let me know and I’d gladly rename this article.
Thanks go to my past housemates (especially Adam and Pedro) for indulging my COVID-19 obsession and the various impractical proposals/flights of fancy that come with thinking about it from all angles, Tushant Jha for being willing to listen to my initial rambly, ill-formed thoughts around the issue and providing a vocabulary and structure to make it more rigorous, Greg Lewis for giving it a fair shake and encouraging me to write up this argument, and Carl Shulman for pointing me to prior work on LessWrong for Epistemic Prisoner's Dilemmas. As usual, all errors are my own.
UnexpectedValues @ 2021-05-11T20:34 (+9)
Cool idea! Some thoughts I have:
- A different thing you could do, instead of trading models, is compromise by assuming that there's a 50% chance that your model is right and a 50% chance that your peer's model is right. Then you can do utility calculations under this uncertainty. Note that this would have the same effect as the one you desire in your motivating example: Alice would scrub surfaces and Bob would wear a mask.
- This would however make utility calculations twice as difficult compared to just using your own model, since you'd need to compute the expected utility under each model. But note that this amount of computational intensity is already assumed by the premise that it makes sense for Alice and Bob to trade models. In order for Alice and Bob to reach this conclusion, each needs to compute their utility under each action in each of their models.
- I would say that this is more epistemically sound than switching models with your peer, since it's reasonably well-motivated by the notion that you are epistemic peers and could have ended up in a world where you had had the information your peer has and vice versa.
- But the fundamental issue you're getting at here is that reaching an agreement can be hard, and we'd like to make good/informed decisions anyway. This motivates the question: how can you effectively improve your decision making without paying the cost required by trying to reach an agreement?
- One answer is that you can share partial information with your peer. For instance, maybe Alice and Bob decide that they will simply tell each other their best guess about the percentage of COVID transmission that is airborne and leave it at that (without trying to resolve subsequent disagreement). This is enough to, in most circumstances, cause each of them to update a lot (and thus be much better informed in expectation) without requiring a huge amount of communication.
- Which is better: acting as if each model is 50% to be correct, or sharing limited information and then updating? I think the answer depends on (1) how well you can conceptualize your peer's model, (2) how hard updating is, and (3) whether you'll want to make similar decisions in the future but without communicating. The sort of case when the first approach is better is when both Alice and Bob have simple-to-describe models and will want to make good COVID-related decisions in the future without consulting each other. The sort of case when the second approach is better is when Alice and Bob have difficult-to-describe models, but have pretty good heuristics about how to update their probabilities based on the other's probabilities.
I started making a formal model of the "sharing partial information" approach and came up with an example of where it makes sense for Alice and Bob to swap behaviors upon sharing partial information. But ultimately this wasn't super interesting because the underlying behavior was that they were updating on the partial information. So while there are some really interesting questions of the form "How can you improve your expected outcome the most while talking to the other person as little as possible", ultimately you're getting at something different (if I understand correctly) -- that adopting a different model might be easier than updating your own. I'd love to see a formal approach to this (and may think some more about it later!)
Ozzie Gooen @ 2024-06-04T20:18 (+4)
I have a somewhat different proposal for the name "Epistemic Trade". (I just came up with the name, then searched for it, then found this).
2 parties have different beliefs on X. They find some way to trade with each other, so that, based on each party's beliefs, they come out better.
Many bets would count as "Epistemic Trade", but so would more complex negotiations.
With AI risk, the EA community might want to make an "Epistemic Trade" with other lobbyists. We think that AI will happen much faster, and be much more dangerous than they think. So, we try to carve out these very specific situations for our lobbying work, and give up more of the normal worlds.
Also curious to get takes here. I feel like there should be existing terminology for this, but a quick search didn't bring up anything.
cole_haus @ 2021-05-11T23:52 (+2)
Nondogmatic Social Discounting seems very loosely related. Could be an entry point for further investigations, references, etc.
The long-run social discount rate has an enormous effect on the value of climate mitigation, infrastructure projects, and other long-term public policies. Its value is however highly contested, in part because of normative disagreements about social time preferences. I develop a theory of "nondogmatic" social planners, who are insecure in their current normative judgments and entertain the possibility that they may change. Although each nondogmatic planner advocates an idiosyncratic theory of intertemporal social welfare, all such planners agree on the long-run social discount rate. Nondogmatism thus goes some way toward resolving normative disagreements, especially for long-term public projects.