Capitalising on Trust—A Karmic Simulation

By Non-zero-sum James @ 2024-09-28T05:42 (+2)

This is a linkpost to https://nonzerosum.games/cooperationvsdefection.html

Recently we've been exploring moral philosophy through our series on Moral Licensing, through Andrew Tane Glen's Why Cooperate?, and in a workshop I ran with my daughter's class on the strategies of cooperation and defection. One phenomenon that has arisen through these explorations is that defectors gain a short-term, relative advantage, while cooperators benefit from a sustained, long-term absolute advantage, which got me thinking about a simulation.

This post revolves around a simulation that only runs on the site, so come on over and check it out :)
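
If you just want the shape of the dynamic in payoff terms before heading over, here's a minimal sketch. The payoff values below are illustrative placeholders I've chosen, not the matrix the simulation actually uses:

```python
# Illustrative prisoner's-dilemma-style payoffs: these numbers are my own
# placeholders, not the actual matrix the on-site simulation uses.
T, R, P, S = 5, 3, 1, 0  # temptation, mutual reward, mutual punishment, sucker's payoff

# Relative, short-term advantage: in a single mixed pairing the defector
# out-earns the cooperator they exploit.
print("single pairing:", {"defector": T, "cooperator": S})

# Absolute, long-term advantage: over many pairings, a pair of cooperators
# each accumulate far more than a pair of defectors punishing each other.
rounds = 100
print("after 100 pairings:", {"each cooperator": R * rounds, "each defector": P * rounds})
```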


nathan98000 @ 2024-09-29T02:50 (+3)

One phenomenon that has arisen through these explorations is that defectors gain a short-term, relative advantage, while cooperators benefit from a sustained, long-term absolute advantage

It seems like you're drawing a general conclusion about cooperation and defection. But your simulated game has very specific parameters: the payoff matrix, the stipulation that nobody dies, the stipulation that everyone who interacts with a defector recognizes this and remembers it, the stipulation that there are only two types of agents, etc. It doesn't seem like any general lessons about cooperation/defection are supported by a hyper-specific setup like this.

Non-zero-sum James @ 2024-10-08T09:38 (+1)

Hi Nathan,

Thanks for your response, and I see your point: the more specific the parameters get, the less general the conclusions can be.

To explain: my purpose in using a simulation is to illustrate a phenomenon that is perhaps too complex to reduce to a formula, because the simulation seeks to emulate aspects of society that are often not accounted for in game-theoretical models. Simulations allow complex parameters to provide a sort of empirical evidence for principles that might not be provable mathematically (by me, at least).

The reason I've chosen the parameters I have is not to create an inevitable outcome, but to reflect aspects of the real world that are not usually considered in game-theoretical models, such as the instinctive animal behaviour of avoiding agents with whom you've had previous negative experiences. This is difficult to model mathematically, but it is nevertheless a significant factor when creating a model that applies to the real world.
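
To make that concrete, here's a rough sketch of the kind of mechanic I'm describing (a toy model with assumed payoff values, agent counts and pairing rules of my own choosing, not the simulation that runs on the site):

```python
import random

# Toy agent-based sketch (illustrative values only, not the site's parameters).
# Two fixed agent types, nobody dies; agents remember and avoid known defectors.

PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

class Agent:
    def __init__(self, kind):
        self.kind = kind              # "C" (cooperator) or "D" (defector)
        self.score = 0
        self.known_defectors = set()  # ids of agents this agent will avoid

def simulate(n_cooperators=20, n_defectors=10, rounds=200, seed=0):
    rng = random.Random(seed)
    agents = [Agent("C") for _ in range(n_cooperators)] + \
             [Agent("D") for _ in range(n_defectors)]
    for _ in range(rounds):
        ids = list(range(len(agents)))
        rng.shuffle(ids)
        # pair agents at random each round
        for a_id, b_id in zip(ids[::2], ids[1::2]):
            a, b = agents[a_id], agents[b_id]
            # avoidance: refuse to interact with a remembered defector
            if b_id in a.known_defectors or a_id in b.known_defectors:
                continue
            pa, pb = PAYOFFS[(a.kind, b.kind)]
            a.score += pa
            b.score += pb
            # anyone who interacts with a defector recognises and remembers them
            if b.kind == "D":
                a.known_defectors.add(b_id)
            if a.kind == "D":
                b.known_defectors.add(a_id)
    coop_avg = sum(x.score for x in agents if x.kind == "C") / n_cooperators
    defect_avg = sum(x.score for x in agents if x.kind == "D") / n_defectors
    return coop_avg, defect_avg

if __name__ == "__main__":
    for rounds in (5, 50, 500):
        c, d = simulate(rounds=rounds)
        print(f"rounds={rounds}: cooperator avg={c:.1f}, defector avg={d:.1f}")
```

In this stripped-down version, defectors pull ahead early and then stall once everyone they've exploited refuses to deal with them, while cooperators keep compounding their gains.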

The stipulation that no one dies is a simplification that serves two purposes:

  1. It reflects the self-balancing nature of human systems in the simplest way possible (modelling death and reproduction, social welfare, health and other systems that maintain a society's population would be unwieldy—and even more specific).
  2. It gets the population past the inevitable point of total collapse that happens if the defectors are able to essentially kill other agents during their dominant phase.

So the specifics of the model are not meant to be arbitrary, but reflective of features of actual populations of people or other animals. The aim was to better approximate real-world dynamics, rather than the siloed game-environments that often produce conclusions that don't comport with common sense—not because common sense or the theory is wrong, but because the game-environment is too limited.

Your point is important though, and if I develop this further I would think about introducing controls for the initial ratio of agents (cooperators:defectors) and fewer stipulations preserving survival, so that collapse becomes a feature determined by the ratio. Other controllable parameters might also help give the user a more intuitive feel for the effects of various dynamics on the system.