Exercise for 'What could the future hold? And why care?'

By EA Handbook @ 2022-05-18T03:52 (+9)

Part 1 (15 mins.)

Helping in the present or in the future?

A commonly held view within the EA community is that it's incredibly important to start from thinking about what it really means to make a difference, before thinking about specific ways of doing so. It’s hard to do the most good if we haven’t tried to get a clearer picture of what doing good means, and as we saw in chapter 3, clarifying our views here can be quite a complex task.

One of the core commitments of effective altruism is to the ethical ideal of impartiality. Although in normal life we may reasonably have special obligations (e.g. to friends and family), in their altruistic efforts aspiring effective altruists strive to avoid privileging the interests of others based on arbitrary factors such as their appearance, race, gender, or nationality.

Longtermism posits that we should also avoid privileging the interests of individuals based on when they might live.

In this chapter's exercise we’ll be reflecting on some prompts to help you start considering what you think about this question, i.e. "Do the interests of people who are not alive yet matter as much as the interests of people living today?"

Please read this short description of temporal discounting, then spend a couple of minutes thinking through each prompt and note down your thoughts; feel free to jot down uncertainties or open questions that seem relevant. We encourage you to note down your thought process, but feel free to simply report your intuitions and gut feelings.
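As a rough, purely illustrative sketch (the discount rate and time horizons below are invented for this example, not taken from the reading), exponential temporal discounting weights a benefit t years away by a factor of 1/(1+r)^t, so even a modest annual rate shrinks far-future benefits dramatically:

```python
# Illustrative only: exponential temporal discounting with a hypothetical
# annual discount rate. Not part of the exercise or the linked reading.

def discounted_value(value: float, years: float, annual_rate: float) -> float:
    """Present-day weight of a benefit arriving `years` from now,
    discounted exponentially at `annual_rate` per year."""
    return value / (1 + annual_rate) ** years

# How much would 1,000 future lives "count" today at an assumed 2% annual rate?
for years in (0, 50, 200, 2000):
    print(f"{years:>5} years away: {discounted_value(1000, years, 0.02):.6g}")
```

Under this hypothetical 2% rate, thousands of deaths 200 years from now would be weighted at under 2% of their number today, which is exactly the kind of implication the prompts below ask you to examine.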

Of course, these thought experiments all assume an unrealistic level of certainty about your options and their outcomes. For the purpose of this exercise, however, we encourage you to accept the premise of the thought experiments instead of trying to find loopholes. The idea is to isolate one particular aspect of a situation (e.g., the timing of our impact) and try to get at our moral intuitions about just that aspect.

  1. Suppose that you could save 100 people today by burying toxic waste that will, in 200 years, leak out and kill thousands. Would you choose to save the 100 now and kill the thousands later? Does it make a difference whether the toxic waste leaks out 200 years from now or 2000?
  2. Imagine you donate enough money to the Against Malaria Foundation (AMF) to save a life. Unfortunately, there’s an administrative error with the currency transfer service you used, and AMF isn’t able to use your money until 5 years after you donated. Public health experts expect malaria rates to remain high over the next 5 years, so AMF expects your donation will be just as impactful in 5 years’ time. Many of the lives that AMF saves are of children under 5, and so the life your money saves is of someone who hadn’t been born yet when you donated.

    If you had known this at the time, would you have been less excited about the donation?

Part 2 (30 mins.)

One question (among many) that is relevant to this topic is “when will we develop human-level AI?”. 

It’s obviously not possible to just look this up, or to gather direct data on this question. So we need to gather what data and arguments we have, and make a judgment call. This applies to AI and other existential risks, but also to most questions that we’re interested in - “How many chickens will be moved to better conditions if we pursue this advocacy campaign?”, “How much do we need to spend on bednets to save a life?”.

These judgements are really important: they could make a big difference to the impact we have. 

Unfortunately, we don’t yet have definitive answers to these questions, but we can aim to become “well-calibrated.” This means that when you say you’re 50% confident, you’re right about 50% of the time, not more, not less; when you say you're 90% confident, you're right about 90% of the time; and so on. 
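As a minimal sketch of what being well-calibrated means in practice (the predictions below are invented, and this is not how the app itself works), you can group past predictions by the confidence you stated and compare each group's stated confidence with the fraction that actually came true:

```python
# Minimal calibration check on invented data: each entry is
# (stated confidence, whether the prediction turned out to be true).
from collections import defaultdict

predictions = [
    (0.5, True), (0.5, False), (0.6, True), (0.6, True),
    (0.9, True), (0.9, True), (0.9, False), (0.7, False),
]

buckets = defaultdict(list)
for confidence, correct in predictions:
    buckets[confidence].append(correct)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    accuracy = sum(outcomes) / len(outcomes)  # True counts as 1, False as 0
    print(f"stated {confidence:.0%} -> actually right {accuracy:.0%} "
          f"({len(outcomes)} predictions)")
```

A well-calibrated forecaster's stated confidence roughly matches their actual accuracy in every bucket; systematic gaps (e.g. being right only 70% of the time when you say 90%) are the kind of overconfidence this training is meant to correct.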

This exercise aims to help you become well calibrated. The app you’ll use contains thousands of questions - enough for many hours of calibration training - that will measure how accurate your predictions are and chart your improvement over time. Nobody is perfectly calibrated; in fact, most of us are overconfident. But various studies show that this kind of training can quickly improve the accuracy of your predictions. 

Of course, most of the time we can’t check the answers to the questions life presents us with, and the predictions we’re trying to make in real life are aimed at complex events. The Calibrate Your Judgment tool helps you practice on simpler situations where the answer is already known, providing you with immediate feedback to help you improve.

Have a go using the Calibrate Your Judgment app for around 30 minutes!

VictorW @ 2023-07-24T09:00 (+4)

I'm finding the app feedback misleading, and none of the explanations on the About/FAQ page are expanding for me in Chrome or Opera.

Lorenzo Buonanno @ 2023-07-25T21:55 (+2)

Thanks for flagging! I've sent a bug report to the developers of the app.

Edit: they fixed it.

Victoria Gaston @ 2023-10-10T14:29 (+2)

1. Toxic Waste Problem:

The 100 people living today, or whoever is responsible for this toxic waste, can't make thousands of people in 200 years pay for this mistake. It is wrong to bury the toxic waste and save people now if we are sure that this will cause even more deaths in 200 years, for two reasons:

a) the number of people affected.

b) the lack of decision power and choice that the affected people have.

Logically speaking, it makes no sense to think differently if the leak were to happen in 2000 years and kill thousands of people; however, here I wouldn't be so confident in my choice. To explain why I don't feel confident, I am forced to bend and question the premises of the experiment. I hope that in 2000 years people will be more advanced and have the means to avoid toxic waste poisoning, so admitting that in 2000 years people will die because of toxic waste buried now would mean to me that we aren't so bright and great, and we don't have much potential. This would radically change the way I think about so many other topics.

Saving 100 people now in the hope that, later on, humans will know what to do disregards the dilemma, because it implies that nobody dies (and that's not the case: someone will die, either a hundred or thousands). Saving 100 people now puts the weight of acting on future people's shoulders. If we didn't bury the waste, they wouldn't need to find a solution for it in the first place.

Let's imagine that we take option A and save 100 people today in the hope of finding a way to save thousands in 200 years. Let's imagine that this equals 6-7 generations of people (if a new generation is born every 30 years on average). This means that our grandchildren's grandchildren would be among the people who could be poisoned and killed. Let that sink in; now we should focus on whether future generations will be able to react fast enough.

When is it time to start coming up with ideas to avoid or survive the leak? Is 5 years before it happens enough? 2 months? How do they know exactly when it will happen? I wouldn't be very confident in their ability to react in time. The second generation will trust that the third generation will come up with a solution, and the third generation will hope the same about the fourth.

Besides, why would they care? The example of their ancestors will deter them from caring enough. Why should generations 2 to 5 pay for the research and the countermeasures for a problem that they didn't cause and won't suffer from? We can apply the same logic to 2000 years.

2. Donating to AMF Problem:

It would be fine by me. I would trust the experts and hope that inflation rates really don't have a negative effect on the donation's potential, and I would hope that the technology or means needed to fight malaria get cheaper, so that my donation can do even better in 5 years than today. I would only be worried if AMF closed down in the meantime!

Joanna Michalska @ 2024-12-12T13:24 (+1)

PART 1

  1. The lives of the 100 people living today aren't worth 10x more than the lives of the thousands living in the future, so I wouldn't bury the waste.

  2. I would have still donated; I don't see much of a difference, and the time when the beneficiaries are alive isn't a morally significant factor.

PART 2

My judgement is terrible, but my confidence is very low, so let's hope they cancel out.

Vegan banjo @ 2024-06-29T13:06 (+1)

Part I, Case 1: Saving or helping more people is always better than saving a few, so the decision is always in favour of the thousands of people who will exist 200 years in the future, as there is every possibility that the future of humanity is going to be better if and only if we don't deliberately or ignorantly make it worse. So, being a member of the EA community, I have a responsibility to think of those in the future, even though they are not in a position to influence decisions in their favour.

Case 2: If the malaria rate remains high, then there is good reason to believe that my donation, which cannot be used for 5 years, has at least the same value it would have had now. Moreover, the lost life or suffering of any child counts the same even if they don't exist now. The ultimate aim of my donation is to reduce suffering and death, irrespective of time or location.

iLooremeta @ 2023-08-16T19:00 (+1)

While I am not a longtermist, I would not choose an action that would directly put the lives of others at risk, even in 200 years. In the scenario, we are told that the toxic waste shall leak; therefore, it's definite that thousands of lives shall be lost. Compared to the 100 lives that would be lost now, I would not risk that many lives, even though they are far in the future. While we have talked about discount functions, it would be immoral to treat human lives in that way.

In the second scenario, where we are asked about 200 years versus 2000 years, temporal discounting comes in at a higher rate. Thinking that far into the future is hard because it would require me to think about other things that might have happened, such as existential catastrophes that might wipe out humanity before then. In that case, I would do more evaluation: if I were confident that humanity would be wiped out in that time, then I would save the 100 people in the present. However, this would only be in a case where I am very confident that humanity shall be lost by that time, meaning the toxic waste I bury would have no effect on people in that future.

Zahra Irfan @ 2023-06-30T13:37 (+1)

Week 5 exercise.

A. I would save 100 people now by burying the waste, as there is a high chance that technology will be more advanced after a decade and we might be able to save the thousands of people in the future too. I will work to save those thousands of people in the future by contributing to research.

B. I'd still be excited, as even if it's about someone who isn't born yet, I'd still be able to save them.

Victoria Gaston @ 2023-10-10T15:57 (+2)

The exercise purposefully asks us to ignore any "loopholes" and focus on the dilemma of either saving 100 people now or saving >1000 in the future. What would you choose if these were the only 2 choices? What you suggest opens the door to saving everyone; however, the exercise doesn't include this third option.

Zahra Irfan @ 2023-12-31T17:12 (+1)

Well then, it's truly hard to choose. Anyone who thinks rationally would go with the option which offers saving more lives, but I personally think that the choice of saving 100 people now is still better. We should be open to all possibilities. What I'm going to say now might sound foolish, but if we can't find any good solutions by that time we can always dig that waste out (which isn't possible, I know) 👀