That UMD Extra Credit Question

By zdgroff @ 2015-07-13T23:23 (+1)

Crossposted on zachgroff.com

A discussion recently erupted among several friends over a tweet about an extra credit question posed by a professor at the University of Maryland:

WHAT KIND OF PROFESSOR DOES THIS pic.twitter.com/ACtQ0FCwRm
— name (@shaunhin) July 1, 2015

One of my friends commented that the rational thing to do is to select 6%: unless you happen to be the marginal student whose choice brings everyone down, you can only expect to gain by selecting 6%.

My immediate reaction was that, well, no, that's the rational thing to do provided you are egoistic and only care about your own exam score. If you're a rational altruist, though, the rational thing to do may be to select 2%, since in the unlikely event that you are the marginal student, you cost everyone their points. Depending on the size of the class and how you value each additional point on the exam, this could easily outweigh the slight chance of getting an extra 4% for yourself.
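The trade-off can be made concrete with a quick expected-value sketch. The post doesn't reproduce the tweet's exact wording, so this assumes the commonly reported version of the question (pick 2 or 6 bonus points; if more than 10% of the class picks 6, nobody gets anything), and the class size `n` and the probability `p` that each other student picks 6 are illustrative assumptions, not figures from the post:

```python
from math import comb

def expected_points(n, p, my_pick):
    """Return (my expected bonus, expected total class bonus) when I pick
    `my_pick` (2 or 6) and each of the other n - 1 students independently
    picks 6 with probability p. Assumed rule: the bonus is voided for
    everyone if more than 10% of the class picks 6."""
    m = n - 1
    threshold = 0.10 * n              # cap on how many 6-picks are allowed
    mine = total = 0.0
    for k in range(m + 1):            # k = number of *other* students picking 6
        prob = comb(m, k) * p**k * (1 - p)**(m - k)
        sixes = k + (1 if my_pick == 6 else 0)
        if sixes > threshold:
            continue                  # over the cap: everyone gets nothing
        mine += prob * my_pick
        total += prob * (my_pick + 6 * k + 2 * (m - k))
    return mine, total

# Egoist: compare my own expected bonus. Altruist: compare the class total.
n, p = 100, 0.05
me6, all6 = expected_points(n, p, 6)
me2, all2 = expected_points(n, p, 2)
```

Under these assumptions the egoist's comparison (`me6` vs. `me2`) clearly favors 6, since my pick rarely pushes the class over the cap. Whether 6 also maximizes the class total depends on how close the class already sits to the 10% threshold: near it, one extra 6-pick risks voiding everyone's bonus, which is the altruist's worry in the paragraph above.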

As is often the case, things are more complicated. The reason is this: what if there is a curve? If there's a curve, then additional points on the exam only serve to set you apart from everyone else, and if you cause everybody to lose their bonus points, you leave the relative distribution unchanged. From an altruistic perspective, if everybody's exam score is equally valuable, it's unclear which way to answer this question.

It's more likely that everybody's exam score is not equally valuable, though. If I'm a truly effective, altruistic person and (almost) everyone else in the class is not, then I should select 6%, since in the scheme of things, it's better for those who will do good with their credentials to outcompete those who won't.

The irony is that a radically altruistic position leads to the same choice as a radically egoistic one. It seems to me that this is likely the correct assessment. I could see worries that effective altruism could lead to a cutthroat world, but these are easily allayed by the fact that the calculation changes if I know that 10% of the class is likely to be effective altruists.

In fact, this could be somewhat comforting for those who worry about effective altruism requiring some holier-than-thou self-sacrifice. Certainly, a dose of self-sacrifice is called for. But if you want to be not simply altruistic but also effective, the best default way of behaving in many situations may be the rationally egoistic one.

 


undefined @ 2015-07-14T02:41 (+5)

I've sometimes felt tensions in collective action problems between "I should be a good altruist and go along with what benefits the group" and "I should take as much as I can because I'm the only EA here and I can put this to better use than anyone else." It's not always been easy, but I think generally I'd rather just go for the collective action problem solution.

Another anecdote: my workplace has a giving program where people can volunteer and donate to help underprivileged Americans. I feel weird turning these opportunities down, and I think it makes me look selfish, but little do they know that I'm actually having a tremendous impact (I hope).

undefined @ 2015-07-14T17:16 (+3)

I'd suggest being very cautious in thinking "I should take as much as I can because I'm the only EA here and I can put this to better use than anyone else" and analogous thoughts (though I know you don't personally fall into that trap Peter, being good-natured and gentle as a lamb :-). I discussed this in my post on effective altruism and consequentialism:

A third sort of non-consequentialist position is that we should not act wrongly in certain ways even if the results of doing so appear positive in a purely consequentialist calculus. On this position we should not treat our ends as justifying absolutely any means. Examples of prohibited means could be any of the adjectives or nouns commonly associated with wrongdoing: dishonesty, unfairness, cruelty, theft, et cetera. This view has strong intuitive force. And even if we don’t straightforwardly accept it, it’s hard not to think that a sensitivity to the badness of this sort of behaviour is a good thing, as is a rule of thumb prohibiting them - something that many consequentialists accept.

It would be naive to suppose that effective altruists are immune to acting in these wrong ways - after all, they’re not always motivated by being unusually nice or moral people. Indeed, effective altruism makes some people more likely to act like this by providing ready-made rationalisations which treat them as working towards overwhelming important ends, and indeed as vastly important figures whose productivity must be bolstered at all costs. I’ve seen prominent EAs use these justifications for actions that would shock people in more normal circles. I shouldn’t give specific examples that are not already in the public domain. But some of you will remember a Facebook controversy about something (allegedly and contestedly) said at the 2013 EA Summit, though I think it’d be fairest not to describe it in the comments. And there are also attitudes that are sufficiently common to not be personally identifiable, such as that one’s life as an important EA is worth that of at least 20 “normal” people.

Refraining from tipping might be an interesting and/or useful marginal case to consider. I'm not saying that EAs should never maximise their own gains in collective action problems, and tipping could be a case in which they should if they're sufficiently sure they'll donate the money they thereby save.