Perils of optimizing in social contexts

By Owen Cotton-Barratt @ 2022-06-16T17:23 (+124)

As a special case of not over-optimizing things, I want to point out that an optimizing approach can have a bunch of subtle negative effects in social contexts. The slogan is something like "optimizers make bad friends": people don't like being treated as a means rather than an end, and if they get the vibe from social interactions that you're trying to steer them into something, then they may react badly.

Here I'm referring to things like:

... where it's not clear that the other side are willing collaborators in your social objectives. 

So I don't mean to include optimizing for things like:

I think that people (especially smart people) are often pretty good at getting a vibe that someone is trying to steer them into something (even from relatively little data). When people do notice, it's often correct for them to treat you as adversarial and penalize you for this. This is for two reasons:

  1. A Bayesian/epistemic reason: people who are optimizing for particular outcomes selectively share information which pushes towards those outcomes. So if you think someone is doing [a lot of] this optimization, your best estimate of the true strength of the position they're pushing should be [much] lower than if you think they're optimizing less (given otherwise the same observations).
    • Toy example: if Alice and Bob are students each taking 12 subjects, and you randomly find out Alice's chemistry grade and it's a B+, and you hear Bob bragging that his history grade is an A-, you might guess that Alice is overall a stronger student, since it's decently likely that Bob chose his best grade to brag about. (A small simulation of this selection effect follows this list.)
  2. An incentives reason: we want to shape the social incentive landscape so that people aren't rewarded for trying to manipulate us. If we only make the Bayesian response, it will still often be correct for people to invest some effort in manipulation (in theory they know more than we do about how much information to reveal, so they can leave us with the best achievable impression even after our Bayesian updating).
    • I don't think the extra penalty for incentive shaping needs to be too big, but I do think it should be nonzero.
    • Actually the update should be larger still, to compensate for the times when we don't notice what was happening: if manipulation is only detected some fraction of the time, the penalty when it is detected needs to be scaled up for manipulation to be unprofitable in expectation. This is essentially the same argument as given in Sections III and IV of the (excellent) Integrity for Consequentialists.
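
To make the Bayesian point concrete, here is a minimal simulation sketch of the Alice/Bob toy example. The Gaussian model (ability and per-subject grade noise both standard normal), the grade band around 1.0, and all names in the code are illustrative assumptions, not anything from the post.

```python
import numpy as np

# Toy model (assumed for illustration): each student's true ability is N(0, 1),
# and each of their 12 subject grades is ability plus independent N(0, 1) noise.
rng = np.random.default_rng(0)
n_students, n_subjects = 200_000, 12

ability = rng.normal(0.0, 1.0, n_students)
grades = ability[:, None] + rng.normal(0.0, 1.0, (n_students, n_subjects))

random_grade = grades[:, 0]       # a grade you learned by chance (Alice)
best_grade = grades.max(axis=1)   # the grade chosen for bragging (Bob)

def mean_ability_given(observed_grade, lo=0.9, hi=1.1):
    """Empirical expected ability, conditional on seeing a grade near 1.0."""
    mask = (observed_grade > lo) & (observed_grade < hi)
    return ability[mask].mean()

print("E[ability | random grade ~ 1.0]:", round(mean_ability_given(random_grade), 2))
print("E[ability | best grade   ~ 1.0]:", round(mean_ability_given(best_grade), 2))
```

Under these toy assumptions the second number comes out well below the first: the same displayed grade is much weaker evidence of ability when it was selected as the best of twelve than when it was revealed at random.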

Correspondingly, I think that we should be especially cautious about optimizing in social contexts to try to get particular responses out of people who we hope will be our friends/allies. (This is importantly relevant for community-building work.)


brb243 @ 2022-06-18T16:44 (+4)

> I think that people (especially smart people) are often pretty good at getting a vibe that someone is trying to steer them into something (even from relatively little data). ... we want to shape the social incentive landscape so that people aren't rewarded for trying to manipulate us.

I studied lobbying in Washington, DC, under US trade diplomats, and we were taught that this $5 billion industry benefits decisionmakers by providing research that is biased in various ways, from which the decisionmakers can nonetheless make decisions unbiased with respect to their own values.[1] So 'smartness,' if it is interpreted as direct decisionmaking privilege, can be positively correlated with accepting what could be perceived as manipulation.

Also, people who are 'smart' in their ability to process, connect, or repeat a lot of information to give the 'right' answers,[2] but who do not think critically about the structures which they thus advance, may be relatively 'immune' to negative perceptions of manipulation, due to the norms of these structures. These people can be more comfortable if they perceive 'steering' or manipulation, because they could be averse to 'submitting' to a relatively unaggressive entity. So, in this case, manipulation[3] can be positively correlated with (community builders') individual consideration in a relationship.

Optimizing for a 'specific' objective should be refrained from only among people who are 'smart' emotionally and in their reasoning and who would not[4] engage in a dialogue.[5] These people would perceive manipulation negatively[6] and would not support community builders in developing (yet) better ways of engaging people with various viewpoints on doing good effectively.[7]

Still, many people in EA may not mind some manipulation,[8] because they are intrinsically motivated to do good effectively, and there are few alternatives for doing so. This is not to say that 'specific' optimization should not be avoided where possible, but that developing this skill can be deprioritized relative to advancing community-building projects that attract intrinsically motivated individuals or that make changes[9] where the changemakers perceive some 'unfriendliness.'

I would like to ask whether you think that EA materials which optimize for agreement with a specific thesis, and which community builders would use, should be edited, further explained, or discouraged.[10]

  1. ^

    See Allard (2008) for further discussion of the informational value of privately funded lobbying.

  2. ^

    Including factually right answers, or those which they assess as best for advancing their social or professional status.

  3. ^

    ideally while its use is acknowledged and possibly the discussant is implicitly included in its critique

  4. ^

    or the discussion would be set up in a way that prevents dialogue

  5. ^

    of course, regardless of their decisionmaking influence

  6. ^

    also due to their limited ability to contribute

  7. ^

    or anything else relevant to EA or the friendship

  8. ^

    For example, a fellow introductory EA fellowship participant pointed out that the comparison between the effectiveness of treating Kaposi sarcoma and of informing high-risk groups to prevent HIV/AIDS makes sense because a skin mark is much less serious than HIV/AIDS, but this did not discourage anyone from engagement.

  9. ^

    such as vegan lunches in a canteen, where community builders optimize for the canteen managers' agreement that this should be done

  10. ^

    for example, see my recent comment on the use of stylistic devices to attract attention and limit critical thinking