Do impact certificates help if you're not sure your work is effective?

By Eli Rose @ 2020-02-12T14:13 (+21)

I've heard a plan to use impact certificates (https://medium.com/@paulfchristiano/certificates-of-impact-34fa4621481e) in the following way.

Suppose I work at org A, but I actually value work at org B more. The plan is that I get impact certificates for my work at A, then find a buyer willing to give me B-certificates in exchange for my A-certificates. Now I hold some amount of B-certificates, which is like having done work for B in the first place.
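A minimal sketch of the mechanics, with quantities and an exchange rate made up purely for illustration:

```python
# Hypothetical illustration of the certificate swap described above.
# I work at org A and receive A-certificates for my work there.
my_holdings = {"A-certs": 10, "B-certs": 0}

# A buyer who values A's work holds B-certificates.
buyer_holdings = {"A-certs": 0, "B-certs": 10}

# We agree on an exchange rate, say 2 A-certs per B-cert (made up).
rate = 2
a_sold = 10
b_bought = a_sold // rate

my_holdings["A-certs"] -= a_sold
my_holdings["B-certs"] += b_bought
buyer_holdings["A-certs"] += a_sold
buyer_holdings["B-certs"] -= b_bought

# I end up holding B-certificates, as if I had worked at B directly.
print(my_holdings)     # {'A-certs': 0, 'B-certs': 5}
print(buyer_holdings)  # {'A-certs': 10, 'B-certs': 5}
```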

I'm not convinced by this. My question: if I don't think work at A is valuable, why should I trust the market to know better than me? I'm okay with the stock market determining good prices for shares of Microsoft, but that market is huge; the impact-certificate market is likely to be small for at least a while.

An analogy: when should I trust PredictIt's market (https://www.predictit.org/markets/detail/3633/Who-will-win-the-2020-Democratic-presidential-nomination) on who will win the Democratic nomination over Nate Silver's analysis (https://projects.fivethirtyeight.com/2020-primary-forecast)? Right now the two disagree significantly on Bloomberg's chances: PredictIt gives him 25%, while Nate Silver gives him 4%.

Another angle on the concern: ordinarily, when you believe B > A, you "vote with your feet" by doing B instead of A. In this situation, you are instead effectively "voting" to lower the price of A-certificates on the market. So you're trading one kind of vote off against the other. But it seems likely that many people will take your working at A as an endorsement of A. This convention is really strong; I certainly read things that way. Unless you put "I ACTUALLY TRADE ALL THESE IMPACT CERTIFICATES FOR B-CERTIFICATES" on your LinkedIn and mention it to people you meet at parties, I think people will continue to do so.

I'm concerned that splitting the "vote" between these two methods will do harm to the community's ability to decide what types of work are good.

What are people's thoughts on this? Any written resources? (I can't find much on impact certs beyond Paul's original post.)


Owen_Cotton-Barratt @ 2020-02-12T14:58 (+16)

This use-case for impact certificates isn't predicated on trusting the market more than yourself (although that might be a nice upside). It's more like a facilitated form of moral trade: people with different preferences about what altruistic work happens all end up happier, because each person works on the things they can make the most progress on rather than the things they personally want to bet on. (There are some reasons to be sceptical about how often this will actually be a good trade, because there can be significant comparative advantage in working on a project you believe in, from both motivation and having a clear sense of the goals; however, I expect there would be good trades at least some of the time.)

On your second concern, I think working in this way should basically be seen as a special case of earning to give. You're working for an employer whose goals you don't directly believe in because they pay you a lot (in this case, in impact certificates), which you can use to further things you do believe in. Sure, there's a small degree to which people might interpret your place of work as an endorsement, but I don't think this is one of the principal factors feeding into our collective epistemic processes (particularly since you can explicitly disavow it, and in a world where this happens often, others may be aware of the possibility even before disavowal), and I wouldn't give it too much weight in the decision.

reallyeli @ 2020-02-12T15:59 (+1)

Hmm, your first paragraph is indeed a different perspective than the one I had. Thanks! I remain unconvinced though.

Casting it as moral trade gives me the impression that impact certificates are for people who disagree about ends, not for people who agree about ends but disagree about means. In the case where my buyer and I have the same goals (e.g. chicken deaths prevented), why would I trust their assessment of chicken-welfare org A more than my own? (Especially since I presumably work there and have access to more information about it than they do.)

Some reasons I can imagine:

- I might think that the buyer is wiser than me and want to defer to them on this point. In this case I'd want to be clear that I'm deferring.

- I might think that no individual buyer is wiser than me, but the market aggregates information in a way that makes it wiser than me. In this case I'd want a robust market, probably better than PredictIt.

Owen_Cotton-Barratt @ 2020-02-12T21:14 (+1)

I'm not trying to take any view over whether there's moral disagreement (I think in practice moral and empirical disagreements are not always cleanly distinguishable, but that's a side point).

If you agree on goals, then maybe you will Aumann-update towards agreement on actions and no trade will be needed. If there's a persistent disagreement (even after you express that organisation A does not seem to you to be a good use of resources), then maybe it's not a trade between different ultimate moral perspectives but a trade between different empirical worldviews, such that both worldviews expect to be better off having made the trade than not. From your perspective as a certificate-seller, you don't need to know whether the buyer agrees with your moral views or not.
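To make the arithmetic concrete (the valuations below are made up purely for illustration):

```python
# Illustrative gains-from-trade under two different empirical worldviews.
# Each side assigns its own subjective value to the certificates.
seller_values = {"A-cert": 1.0, "B-cert": 3.0}  # seller's worldview
buyer_values  = {"A-cert": 3.0, "B-cert": 1.0}  # buyer's worldview

# Trade: the seller gives one A-cert and receives one B-cert.
seller_gain = seller_values["B-cert"] - seller_values["A-cert"]
buyer_gain  = buyer_values["A-cert"] - buyer_values["B-cert"]

# Both sides expect to come out ahead by their own lights, even though
# at most one of the underlying worldviews is right about the facts.
print(seller_gain, buyer_gain)  # 2.0 2.0
```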

reallyeli @ 2020-02-13T00:28 (+1)

I agree with this. I wasn't trying to make a hard distinction between empirical and moral worldviews. (Not sure if there are better words than 'means' and 'ends' here.)

I think you've clarified it for me. It seems to me that impact certificate trades have little downside when there is persistent, intractable disagreement. But in other cases, deciding to trade rather than attempting to update each other may leave updates on the table. That's the situation I'm concerned about.

For context, I was imagining a trade with an anonymous partner, in a situation where you have reason to believe you have more information about org A than they do (because you work there).

Owen_Cotton-Barratt @ 2020-02-13T10:52 (+1)

In the case where the other party is anonymous, how could you hope to update each other? (i.e. you seem to be arguing against anonymity, not against selling impact certificates)

reallyeli @ 2020-02-13T17:00 (+1)

Sure, I agree that if they're anonymous forever you can't do much. But that was just the generating context; I'm not arguing only against anonymity.

I'm arguing against impact certificate trading as a *wholesale replacement* for attempting to update each other. If you are trading certificates with someone, you are deferring to their views on what to do, which is fine, but it's important to know you're doing that and to have a decent understanding of why you differ.

Owen_Cotton-Barratt @ 2020-02-13T21:24 (+3)

> If you are trading certificates with someone, you are deferring to their views on what to do

I think this is meaningfully wrong; at least the sense in which you are deferring is not stronger than the sense in which employees are deferring to their employer's views on what to do (i.e. it's not an epistemic deferral but a deferral to authority).

reallyeli @ 2020-02-13T22:29 (+3)

"The sense in which employees are deferring to their employer's views on what to do" sounds fine to me, that's all I meant to say.

Buck @ 2020-02-14T06:47 (+6)

[for context, I've talked to Eli about this in person]

I'm interpreting you as having two concerns here.

Firstly, you're asking why this is different from simply deferring to people about the impact of the two orgs.

From my perspective, the nice thing about the impact certificate setup is that if you get paid in org B impact certificates, you're making the people at orgs A and B put their money where their mouth is. Analogously, suppose Google is trying to hire me, but I'm actually unsure about Google's long-term profitability and would rather be paid in Facebook stock than Google stock. If Google pays me in Facebook stock, I'm not deferring to them about the relative values of these stocks; I'm just getting paid in Facebook stock, such that if Google is overvalued, it's no longer my problem. It's the problem of whoever traded their Facebook stock for Google stock.

The reason I think the policy of maximizing impact certificates is better for the world in this case is that people are more likely to give careful answers to the question "how relatively valuable is the work orgs A and B are doing?" if they're thinking about it in terms of trying to make trades than if some random EA is asking for their quick advice.

---

Secondly, you're worried that people might end up seeming to endorse an org that they don't endorse, and that this might harm community epistemics. This is an interesting objection that I haven't thought much about. A few possible responses:

reallyeli @ 2020-02-21T16:29 (+2)

What is meant by "not my problem"? My understanding is that it means "what I care about is no better off if I worry about this thing than if I don't." Hence the analogy to salary: if all I care about is $$, then getting paid in Facebook stock means my utility is the same whether or not I worry about the value of Google stock.

It sounds like you're saying that, if I'm working at org A but getting paid in impact certificates from org B, the actual value of org A impact certificates is "not my problem" in this sense. Here obviously I care about things other than $$.

This doesn't seem right at all to me, given the current state of the world. Worrying about whether my org is impactful is my problem, in that it might affect things I care about; for example, it might lead me to go work somewhere else.

Thinking about this more, I recalled the strength of the assumption that, in this world, everyone agrees to maximize impact certificates *instead of* counterfactual impact. This seems to just obliterate all of my objections, which are arguments based on counterfactual impact; they become arguments at the wrong level. If the market is not robust, that means more certificates for me, *which is definitionally good*.

So this is an argument that if everyone collectively agrees to change their incentives, we'd get more counterfactual impact in the long run. I think my main objection is not about this as an end state (not that I'm sure I agree with it; I just haven't thought about it much in isolation) but about the feasibility of taking that kind of collective action, and about issues that may arise if some people do it unilaterally.

Larks @ 2020-02-13T18:07 (+2)

> I'm concerned that splitting the "vote" between these two methods will do harm to the community's ability to decide what types of work are good.

Could you go into detail about why you think this would be bad? Typically, when you are uncertain about something, it is good to have multiple (semi-)independent indicators, since you can get a more accurate overall impression by combining them.

reallyeli @ 2020-02-13T22:34 (+3)

I'm deciding whether organization A is effective. I see some respectable people working there, so I assume they must think work at A is effective, and I update in favor of A being effective. But unbeknownst to me, those people don't actually think work at A is effective; they trade their impact certificates to other folks who do. And I don't know those other folks.

Based on the theory that it's important to know who you're trusting, this is bad.