Charities I Would Like to See

By MichaelDickens @ 2015-09-20T15:22 (+7)

Cross-posted to my blog.

There are a few cause areas that are plausibly highly effective, but as far as I know, no one is working on them. If there existed a charity working on one of these problems, I might consider donating to it.

Happy Animal Farm

The closest thing we can make to universal eudaimonia with current technology is a farm of many small animals that are made as happy as possible. Presumably the animals are cared for by people who know a lot about their psychology and welfare and can make sure they’re happy. One plausible species choice is rats, because rats are small (and therefore easy to take care of and don’t consume a lot of resources), definitively sentient, and we have a reasonable idea of how to make them happy.

I am not aware of any public discussion on this subject, so I will perform a quick ad-hoc effectiveness estimate.

(Most of the figures below come from a personal communication with Emily Cutts Worthington, who is more knowledgeable about taking care of rats than I am. These figures are not robust but are based on her best guesses.)

A rat curator working a few hours a week can probably support 100 happy rats. I have a lot of uncertainty about how brain size affects sentience, but say a happy rat is half as happy as a happy human. Suppose the rats are euthanized when their health starts to deteriorate, so they get close to 1 QALY per rat per year. This would cost about $5 per rat per month, plus an opportunity cost of maybe $500 per month for the time spent, which works out to another $5 per rat per month. Thus creating 1 rat QALY costs $120 ($10/month for 12 months), which works out to $240 per human-equivalent QALY.

Deworming treatments cost about $30 per DALY. Thus a rat farm looks like a fairly expensive way of producing utility. It may be possible to decrease costs by scaling up the rat farm operation, but it would have to be about an order of magnitude cheaper to rival deworming treatments.
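
For concreteness, here is a minimal sketch of the arithmetic above in Python. Every figure is one of the rough guesses from the post (or the $30/DALY deworming figure), not a measured value:

```python
# Back-of-the-envelope rat-farm cost-effectiveness, using the rough
# guesses from the post -- none of these figures are robust.
rats = 100                      # rats one part-time curator can support
care_cost = 5                   # $ per rat per month for food, bedding, etc.
curator_opportunity_cost = 500  # $ per month for the curator's time
rat_to_human_happiness = 0.5    # guess: a happy rat is half as happy as a happy human
rat_qalys_per_rat_year = 1.0    # euthanized when health starts to deteriorate

cost_per_rat_month = care_cost + curator_opportunity_cost / rats      # $10
cost_per_rat_qaly = cost_per_rat_month * 12 / rat_qalys_per_rat_year  # $120
cost_per_human_qaly = cost_per_rat_qaly / rat_to_human_happiness      # $240

deworming_cost_per_daly = 30  # rough deworming figure cited above
print(f"${cost_per_human_qaly:.0f} per human-equivalent QALY")
print(f"~{cost_per_human_qaly / deworming_cost_per_daly:.0f}x the cost of deworming per DALY")
```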

This is just a rough back-of-the-envelope calculation so it should not be taken literally, but I’m still surprised by how cost-inefficient this looks. I expected rat farms to be highly cost-effective based on the fact that most people don’t care about rats, and generally the less people care about some group, the easier it is to help that group. (It’s easier to help developing-world humans than developed-world humans, and easier still to help factory-farmed animals.) Again, I could be completely wrong about these calculations, but rat farms look less promising than I had expected.

Humane Insecticides

http://reducing-suffering.org/humane-insecticides/

I know very little about humane insecticides but it’s a cause that’s plausibly highly cost-effective and virtually no one is working on it. I’m inclined to want to focus more on high-learning-value or far-future interventions; supporting humane insecticides probably only has short-term effects (albeit extremely large ones). But the overwhelming importance of reducing insect suffering (if insects feel pain, which seems sufficiently likely to be a concern) and the extreme neglectedness of this cause could possibly make it the best thing to work on.

High-Leverage Values Spreading

In On Values Spreading, I discuss the possibility of focusing values-spreading efforts on high-leverage individuals:

Probably, some sorts of values spreading matter much, much more than others. Perhaps convincing AI researchers to care more about non-human animals could substantially increase the probability that a superintelligent AI will also care about animals. This could be highly impactful and may even be the most effective thing to do right now. (I am much more willing to consider donating to MIRI now that its executive director values non-human animals.) I don’t see anyone doing this, and if I did, I would want to see evidence that they were doing a good job; but this would plausibly be a highly effective intervention.

I’d like to see an organization that focuses specifically on seeking out and implementing extremely high-leverage values spreading interventions. Perhaps this could mean trying to persuade AI researchers or geoengineering researchers to care about non-human animals; the results of their research could have drastic effects on animals, and we want to make sure those effects are positive. I’m sure there are other high-leverage values spreading interventions that no one is currently doing (targeting high-impact researchers is just the first one I came up with off the top of my head); a dedicated organization could explore this space and try to find other highly effective strategies.

Promoting Universal Eudaimonia

Right now, shockingly few people are concerned with filling the universe with tiny beings whose minds are specifically geared toward feeling as much pure happiness as possible. I’d like to see more efforts to promote this outcome. Maybe the best way to do this is to start a Pay David Pearce to Do Whatever He Wants Fund, but I don’t know whether David Pearce is funding-constrained.


[anonymous] @ 2015-09-21T08:07 (+7)

Michael, I like your blog and enjoyed the post.

I agree there are no good charities for hedonistic utilitarians at the moment: existing charities are either not very aligned with hedonistic utilitarian goals, or their cost-effectiveness is not tractable. (You can still donate if you have so much money that your alternative spending would be "bigger car/yacht"; otherwise it doesn't make much sense.)

Your ideas are all interesting, but values spreading and promoting universal eudaimonia are non-starters: you get downvoted even on an EA forum, and you are not going to find a more open-minded, amicable target group than this.

Happy animals are problematic because their feedback is limited; you don't know when they are suffering unless you monitor them with unreasonable effort. Their minds are not optimized for high pleasure/low suffering. Perhaps with future technology this sort of thing will be trivial, but that is not certain, and investing in the necessary research would give too much harmful knowledge to non-value-aligned people. Even if it were net good to fund such research, it will probably be done for other reasons anyway (commercial applications, publicly funded neurology, etc.), so again it's something you should only fund if you have too much money.

I don't know enough about insect biology to judge humane insecticides; the idea is certainly not unrealistic. But remember that real people would have to use it preferentially, so even if such a charity existed, there's no guarantee anyone would use it instead of laughing you out of the room.

[anonymous] @ 2015-09-20T21:01 (+7)

Given the recent post on the marketability of EA (which went so far as to suggest excluding MIRI from the EA tent to make EA more marketable, or maybe that was a comment in response to the post; I don't remember), here is a brief reaction from someone who is excited about Effective Altruism but has various reservations. My main reservation, so you have a feel for where I'm coming from, is that my goal in life is not to maximize the world's utility but, roughly speaking, to maximize my own utility and end-of-life satisfaction. I therefore find it hard to get excited about theoretically utility-maximizing causes rather than donating to things which I viscerally care about. I know this will strike most people here as incredibly squishy, but I'd bet that much of the public outside the EA community has a similar reaction, though few would actually come out and say it.

Just my 2 cents.

[anonymous] @ 2015-09-20T22:23 (+2)

The ideas about Happy Animal Farm / Promoting Universal Eudaimonia seem nuts to me, so much so that I actually reread the post to see if this was a parody.

Yeah, I definitely understand that reaction, which is why I was not sure it was a good idea to post this. It looks like it probably wasn't. Thanks for the feedback.

[anonymous] @ 2015-09-21T19:38 (+2)

Upvoted, because although I disagree with much of this on the object level, I think the post is totally legit and I think we should encourage original thinking.

Perhaps we need to find a time and place to start a serious discussion of ethics. I think hedonistic utilitarianism is wrong already on the level of meta-ethics: it seems to assume the existence of universal morals, which from my point of view is a non-sequitur. Basically all discussions of universal morals are games with meaningless words, maps of non-existing territories.

The only sensible meta-ethics I know equates ethics with preferences. It seems that there is such a thing as an intelligent agent with preferences (although we have no satisfactory mathematical definition yet). Of course, each agent has its own preferences, and the space of possible preferences is quite big (orthogonality thesis); hence ethical subjectivism. Human preferences don't seem to differ much from human to human once you take into account that much of the difference in instrumental goals is explained by different beliefs rather than different terminal goals (= preferences). Therefore it makes sense in certain situations to use approximate models of ethics that don't explicitly mention the reference human, like utilitarianism.

On the other hand, there is no reason the precise ethics should have a simple description (complexity of value). It is a philosophical error to think ethics should be low-complexity like physical law, since ethics (= preferences) is a property of the agent and has quite a bit of complexity put in by evolution. In other words, ethics is in the same category as the shape of Africa rather than Einstein's equations. Taking simplified models that take only one value into account (e.g. pleasure) to the extreme is bound to lead to abhorrent conclusions, as all other values are sacrificed.

[anonymous] @ 2015-10-30T01:24 (+1)

Here is a paper that argues that the money saved by being vegan can be used to give lots of mice happy lives (overcoming the "logic of the larder" argument that eating meat creates lives worth living).

[anonymous] @ 2015-09-21T05:37 (+1)

I can see why it’s been downvoted, but marketing aside, I had some nerdy fun reading it. I think we need a forum (or does one exist already?) that clearly proclaims that everything posted there makes sense within the author’s personal morality, and that the authors wouldn’t post it there if they thought these conclusions were widely shared. That way people could discuss unusual ideas (and signal their readiness to consider them ^^) without the risk of others mistaking them for mainstream opinions.

My brand of utilitarianism is more focused on reducing extreme suffering or preference frustration, not to the extent of complete negative utilitarianism but somewhat. Hence the risk of something going wrong and some of the tiny beings suddenly experiencing significant pain, for me, quickly outweighs the expected benefits.

Values spreading and humane insecticides seem genuinely interesting to me, especially since the latter might be turned into a social enterprise through which I could earn to give effectively at the same time. The success of the enterprise would hinge on demand, though, which is probably where the plan breaks down.

[anonymous] @ 2015-09-20T17:38 (+1)

Rat happiness is HALF as good as human happiness? I'm not so sure about that.

I would be willing to support high-leverage values spreading, though. And I'd like to know what Pearce is up to these days. Many people are strangely skeptical or dismissive of universal eudaimonia scenarios, and it's an important idea to establish.

[anonymous] @ 2015-09-20T17:55 (+4)

Rat happiness is HALF as good as human happiness? I'm not so sure about that.

Reasons to believe rat happiness and human happiness are about comparable:

  • The brain structures that make humans happy look similar to the brain structures that make rats happy.
  • Rats behave in similar ways to humans in response to pleasurable or painful stimuli.
  • Most of the parts of the human brain that other animals don't possess have to do with high-level cognitive processing, which doesn't seem to have much to do with happiness or suffering.

Reasons to believe human happiness is substantially greater than rat happiness:

  • Sentience may increase rapidly as the number of neurons increases. (But I don't expect that most human neurons are involved in producing conscious experiences.)
  • High-level cognitive abilities may increase capacity for happiness or suffering. (I find this implausible because subjectively when I feel very unhappy it usually doesn't have much to do with my cognitive abilities.)

[anonymous] @ 2015-09-21T15:45 (+3)

High-level cognitive abilities may increase capacity for happiness or suffering. (I find this implausible because subjectively when I feel very unhappy it usually doesn't have much to do with my cognitive abilities.)

Because it can be useful to report disagreement on these things: I disagree. I find it very plausible that high-level cognitive abilities increase capacity for happiness or suffering, and my subjective experience is different from yours.

[anonymous] @ 2015-09-21T18:05 (+1)

Yeah, I had definitely considered that we might have different subjective experiences. I wonder how much of this comes from the fact that we introspect differently and how much is just that our brains work in different ways. Introspection is hard.

[anonymous] @ 2015-09-20T18:47 (+3)

The difference in size between a human brain and a rat brain is significant: an average adult human brain weighs 1300-1400 g, while the average rat brain weighs about 2 g, a ratio of nearly 700 to 1. There's no reason to peg the latter's capability to generate vivid mental states as within one order of magnitude of the former's, or in my opinion even two.

The brain structures that make humans happy look similar to the brain structures that make rats happy.

Yes, but one is much larger and more powerful than the other.

Rats behave in similar ways to humans in response to pleasurable or painful stimuli.

So do all sorts of non-conscious entities.

Most of the parts of the human brain that other animals don't possess have to do with high-level cognitive processing, which doesn't seem to have much to do with happiness or suffering.

But the difference in size and capacity is altogether too large to be handwaved in this way. Besides, many components of human happiness do depend on higher-level cognitive processing. What constitutes brute pain is simple, but what makes someone truly satisfied and grateful for their life is not.

[anonymous] @ 2015-09-23T11:11 (+1)

Yes, I'd treat the ratio of brain masses as a lower bound on the ratio of moral patient-ness.