Announcing the launch of the Happier Lives Institute
By MichaelPlant @ 2019-06-19T15:40 (+133)
Following months of work by a dedicated team of volunteers, I am pleased to announce the launch of the Happier Lives Institute, a new EA organisation which seeks to answer the question: ‘What are the most effective ways we can use our resources to make others happier?’
Summary
The Happier Lives Institute is pioneering a new way of thinking about the central question of effective altruism - how can we benefit others as much as possible? We are approaching this through a ‘happiness lens’, using individuals’ reports of their subjective well-being as the measure of benefit. Adopting this approach indicates potential new priorities, notably that mental health emerges as a large and neglected problem.
Our vision is a world where everyone lives their happiest life.
Our mission is to guide the decision-making of those who want to use their resources to most effectively make lives happier.
We aim to fulfill our mission by:
1. Searching for the most effective giving opportunities in the world for improving happiness. We are starting by investigating mental health interventions in low-income countries.
2. Assessing which careers allow individuals to have the greatest counterfactual impact in terms of promoting happier lives.
Our approach
Our work is driven by three beliefs.
1) We should do the most good we can
We should use evidence and reason to determine how we can use our resources to benefit others the most. We follow the guiding principles of effective altruism: commitment to others, scientific mindset, openness, integrity, and collaborative spirit.
2) Happiness is what ultimately matters
Philosophers use the word ‘well-being’ to refer to what is ultimately good for someone. We think well-being consists in happiness, defined as a positive balance of enjoyment over suffering. Understood this way, when we reduce misery, we thereby increase happiness. Further, we believe well-being is the only thing which is intrinsically good, that is, which matters in and of itself. Other goods, such as wealth, health, justice, and equality, are instrumentally valuable: they are not valuable in themselves, but because, and to the extent that, they increase happiness.
3) Happiness can be measured
The last few decades have seen an explosion of research into ‘subjective well-being’ (SWB), with about 170,000 books and articles published in the last 15 years. SWB is measured using self-reports of people’s emotional states and global evaluations of life satisfaction; these measures have been shown to be valid and reliable. We believe SWB scores are the best available measure of happiness; therefore, we should use these scores, rather than anything else (income, health, education, etc.) to determine what makes people happier.
Specifically, we expect to rely on life satisfaction as the primary metric. This is typically measured by asking “Overall, how satisfied are you with your life nowadays?” (0 - 10). While we think measures of emotional states are closer to an ideal measure of happiness, far less data of this type is available. A longer explanation of our approach to measuring happiness can be found here.
When we take these three beliefs together, the question: “How can we do the most good?” becomes, more specifically: “What are the most cost-effective ways to increase self-reported subjective well-being?”
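To make this concrete, here is a minimal sketch of how two hypothetical interventions might be compared under this framing. The unit (life-satisfaction point-years per dollar) and all numbers are illustrative assumptions for this post, not HLI figures:

```python
# Illustrative only: comparing hypothetical interventions by self-reported
# life satisfaction (LS, 0-10 scale) gained per dollar. Numbers are made up.

def ls_point_years_per_dollar(delta_ls: float, years: float, cost: float) -> float:
    """(Change in LS) x (years the change lasts) / (cost per person in $)."""
    return delta_ls * years / cost

# Hypothetical inputs, not real estimates.
psychotherapy = ls_point_years_per_dollar(delta_ls=0.6, years=2.0, cost=80.0)
cash_transfer = ls_point_years_per_dollar(delta_ls=0.3, years=1.5, cost=100.0)

print(f"psychotherapy: {psychotherapy:.4f} LS point-years per $")  # 0.0150
print(f"cash transfer: {cash_transfer:.4f} LS point-years per $")  # 0.0045
```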
Our strategy
Social scientists have collected a wealth of data on the causes and correlates of happiness. While there are now growing efforts to determine how best to increase happiness through public policy, no EA organisation has yet attempted to translate this information into recommendations about the most effective ways for private actors to make lives happier. The Happier Lives Institute intends to fill this gap.
In doing this, we hope to complement the rigorous and ground-breaking work undertaken by GiveWell and 80,000 Hours and to collaborate with them where feasible. To highlight the divergences: our ‘happiness lens’ is a different approach to assessing impact from the one GiveWell takes; GiveWell does not focus on mental health; and we aim to investigate more speculative giving opportunities, including those outside global health and development. 80,000 Hours primarily focuses on the long term; we intend to provide guidance to those whose careers will focus on (human) welfare-maximisation in the nearer term.
Current work
Our work is divided into two streams.
- A research group is investigating the most promising giving opportunities among mental health interventions in low- and middle-income countries. We’ve developed a screening tool to assess a list of nearly 200 interventions drawn from the Mental Health Innovation Network website. The eight members of our screening team rate each intervention individually; we then check these ratings for inter-rater reliability (a minimal sketch of one such check appears after this list). Once we’ve moved through the list, we will build cost-effectiveness models for the most promising interventions.
- Individuals are pursuing projects taken from our research agenda. Current projects are on positive education (Jide Alaga), careers (Teis Rasmussen), personal happiness interventions (Stephan Tegtmeier), and the nature and measurement of happiness (Michael Plant). Further information on individuals' projects can be found on our Team page.
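As flagged in the first stream above, ratings from multiple screeners are checked for inter-rater reliability. As a hedged illustration (HLI's actual screening tool may use a different statistic), one simple consistency check is the mean pairwise correlation between raters' scores:

```python
# A minimal sketch of an inter-rater consistency check: the mean pairwise
# Pearson correlation across raters. All scores below are hypothetical.
from itertools import combinations

import numpy as np

# ratings[i][j] = rater i's score (1-5) for intervention j.
ratings = np.array([
    [4, 2, 5, 3, 1],
    [5, 2, 4, 3, 2],
    [4, 1, 5, 2, 1],
])

pairwise = [np.corrcoef(a, b)[0, 1] for a, b in combinations(ratings, 2)]
print(f"mean pairwise correlation: {np.mean(pairwise):.2f}")
```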
Future plans
Our research agenda consists of three sections:
- Cause areas: explains how our six main cause areas (mental health, pain, positive education, societal change, drug policy reform, research) were identified and presents specific questions related to each.
- More general research questions: sets out further relevant research questions that are not specifically related to one of the six cause areas.
- Towards practical recommendations: identifies research questions that seem particularly relevant for determining what effective altruists should do right now. This is based on our current understanding and, naturally, is subject to change depending on the insights gained from answering the research questions stated in the preceding sections.
The research agenda is open and we welcome individuals to take topics and investigate them. If you would like to work on one of these please email hello@happierlivesinstitute.org so we can provide assistance and avoid unnecessary duplication of work.
Take action
What can you do if you want to contribute to our mission?
The books and articles on our reading list will help you to deepen your understanding of what happiness is, how to measure it, what affects it and what can be done to improve it.
We have not completed sufficient research to make confident recommendations about the most effective interventions for improving happiness. However, we have identified some promising organisations which we believe are doing valuable work. If you are looking for high-impact giving opportunities to increase world happiness then this is the best place to start.
As our research develops, we intend to publish detailed career profiles to guide people who want to dedicate their careers to maximising the happiness of others. In the meantime, we’ve listed some initial ideas we think are promising. If you would be interested in volunteering with us, you can find more information on that here.
Follow our work
If you would like to be kept updated about our work then please sign up to our monthly e-newsletter and follow us on Facebook, Twitter and LinkedIn.
We will also be contributing regularly to the Effective Altruism, Mental Health, and Happiness Facebook group which has over 1,000 members.
Feedback
We greatly value your feedback, particularly at this early stage of our organisational development. Please post your questions and comments below or email us directly at hello@happierlivesinstitute.org. We expect to publish a Frequently Asked Questions page on our website in the next few weeks to address any areas of confusion or objections to our work.
Max_Daniel @ 2019-06-25T12:53 (+24)
Congratulations on launching HLI. From my outside perspective, it looks like you have quite some momentum, and I'm glad to see more diverse approaches being pursued within EA. (Even though I don't anticipate supporting yours in particular.)
One thing I'm curious about is to what extent HLI's strategy or approach depends on views in population ethics (as opposed to other normative questions, including the theory of well-being), and to what extent you think the question of whether maximizing consequentialism would recommend supporting HLI hinges on population ethics.
I'm partly asking because I vaguely remember you having written elsewhere that regarding population ethics you think that (i) death is not bad in itself for any individual's well-being, (ii) creating additional people is never good for the world. My impression is that (i) and (ii) have major implications for how to do 'cause prioritization', and for how to approach the question of "how to do the most good we can" more broadly. It thus would make sense to me that someone endorsing (i) and (ii) thought that, say, they need to research and provide their own career advice as it would likely differ from the one provided by 80K and popular views in EA more generally. (Whereas, without such an explanation, I would be confused why someone would start their own organization "[a]ssessing which careers allow individuals to have the greatest counterfactual impact in terms of promoting happier lives.") More broadly, it would make sense to me that people endorsing (i) and (ii) embark on their own research programme and practical projects.
However, I'm struck by what seems to me a complete absence of such explicit population ethical reasoning in your launch post. It seems to me that everything you say is consistent with (i) and (ii), and that e.g. in your vision you almost suggest a view that is neutral about 'making happy people'. But on the face of it, 'increasing the expected number of [happy] individuals living in the future, for example by reducing the risk of human extinction' seems a reasonable candidate answer to your guiding question, i.e., “What are the most cost-effective ways to increase self-reported subjective well-being?”
Put differently, I'd expect that your post raises questions such as 'How is this different from what other EA orgs are doing?' or 'How will your career advice differ from 80K's?' for many people. I appreciate there are many other reasons why one might focus on, as you put it, "welfare-maximisation in the nearer term" - most notably empirical beliefs. For example, someone might think that the risk of human extinction this century was extremely small, or that reducing that risk was extremely intractable. And perhaps an organization such as HLI is more useful as a broad tent that unites 'near-term happiness maximizers' irrespective of their reasons for why they focus on the near term. You do mention some of the differences, but it doesn't seem to me that you provide sufficient reasons for why you're taking this different approach. Instead, you stress that you take value to exclusively consist of happiness (and suffering), how you operationalize happiness, etc. - but unless I'm mistaken, these points belonging to the theory of well-being don't actually provide an answer to the question that to me seems a bit like the unacknowledged elephant in the room: 'So why are you not trying to reduce existential risk?' Indeed, if you were to ask me why I'm not doing roughly the same things as you with my EA resources, I'd to a first approximation say 'because we disagree about population ethics' rather than 'because we disagree about the theory of well-being' or 'I don't care as much about happiness as you do', and my guess is this is similar for many EAs in the 'longtermist mainstream'.
To be clear, this is just something I was genuinely surprised by, and am curious to understand. The launch post currently does seem slightly misleading to me, but not more so than I'd expect posts in this reference class to generally be, and not so much that I clearly wish you'd change anything. I do think some people in your target audience will be similarly confused, and so perhaps it would make sense for you to at least mention this issue and possibly link to a page with a more in-depth explanation for readers who are interested in the details.
In any case, all the best for HLI!
MichaelPlant @ 2019-06-29T12:39 (+9)
Hello Max,
Thanks for this thoughtful and observant comment. Let me say a few things in reply. You raised quite a few points and my replies aren't in a particular order.
I'm sympathetic to person-affecting views (on which creating people has no value) but still a bit unsure about this (I'm also unsure what the correct response to moral uncertainty is and hence uncertain about how to respond to this uncertainty). However, this view isn't shared across all of HLI's supporters and contributors, hence it isn't true to say there is an 'HLI view'. I don't plan to insist on one either.
And perhaps an organization such as HLI is more useful as a broad tent that unites 'near-term happiness maximizers' irrespective of their reasons for why they focus on the near term.
I expect HLI's primary audience to be those who have decided that they want to focus on near-term human happiness maximization. However, we want to leave open the possibility of working on improving the quality of lives of humans in the longer term, as well as non-humans in the nearer and longer term. If you're wondering why this might be of interest, note that one might hold a wide person-affecting view on which it's good to increase the well-being of future lives that will exist, whichever those lives are (just as one might care about the well-being of one's future child, whichever child that turns out to be (i.e. de dicto rather than de re)). Or one could hold that creating lives can be good but still think it's worth working on the quality of future lives, rather than just the quantity (reducing extinction risks being a clear way to increase the quantity of lives). Some of these issues are discussed in section 6 of the mental health cause profile.
However, I'm struck by what seems to me a complete absence of such explicit population ethical reasoning in your launch post
Internally, we did discuss whether we should make this explicit or not. I was leaning towards doing so and saying that our fourth belief was something about prioritising making people happy rather than making happy people. In the end, we decided not to mention this. One reason is that, as noted above, it's not (yet) totally clear what HLI will focus on, hence we don't know what our colours are so as to be able to nail them to the mast, so to speak.
Another reason is that we assumed it would be confusing to many of our readers if we launched into an explanation of why we were making people happier as opposed to making happy people (or preventing the making of unhappy animals). We hope to attract the interest of non-EAs to our project; outside EA, we doubt many people will have these alternatives to making people happier in mind. Working on the principle that you shouldn't raise objections to your argument that your opponent wouldn't consider, it seemed questionably useful to bring up the topic. To illustrate: if I explained what HLI is working on to a stranger I met in the pub, I would say 'we're focused on finding the best ways to make people happier', as it will cause less confusion, rather than 'we're focused on near-term human happiness maximisation', even though the latter is more accurate.
More generally, it's unclear how much work HLI should put into defending a stance in population ethics vs assuming one and then seeing what follows if one applies new metrics for well-being. I lean towards the latter. Saliently, I don't recall GiveWell taking a stance on population ethics so much as assuming its donors already care about global health and development and want to give to the best things in that category.
Much of the above applies equally to discussing the value of saving lives. I'm sympathetic to (although, again, not certain about) Epicureanism, on which living longer has no value, but I'm not sure anyone else in HLI shares that view (I haven't asked around, actually). In section 5 of the mental health cause profile, I do a cost-effectiveness comparison of saving lives to improving lives using the 'standard' view of the badness of death, deprivationism (the badness of your death is the amount of well-being you would have had if you had lived; hence saving 2-year-olds is better than saving 20-year-olds, all other things equal). I imagine we'll set out how different views about the value of saving lives give you different priorities without committing, as an organisation, to a view, and leave readers to make up their own minds.
(Whereas, without such an explanation, I would be confused why someone would start their own organization "[a]ssessing which careers allow individuals to have the greatest counterfactual impact in terms of promoting happier lives.")
I don't see why this is confusing. Holding one's views on population ethics or the badness of death fixed, if one has a different view of what value is, or how it should be measured (or how it should be aggregated), that clearly opens up scope for a new approach to prioritisation. The motivation to set up HLI came from the fact that if we use self-reported subjective well-being scores as the measure of well-being, that does indicate potentially different priorities.
Thanks for your comments and engaging on this topic. If quite a few people flag similar concerns over time we may need to make a more explicit statement about such matters.
Max_Daniel @ 2019-06-29T13:08 (+1)
Hi Michael, thank you for your thoughtful reply. This all makes a lot of sense to me.
FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad - because of the downsides you mention - for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically. (Though I appreciate that maybe you didn't write that post specifically for this Forum, and that maybe it just isn't worth the effort to do so.) Waiting and checking if other people flag similar concerns seems like a very sensible response to me.
One quick reply:
Holding one's views on population ethics or the badness of death fixed, if one has a different view of what value is, or how it should be measured (or how it should be aggregated), that clearly opens up scope for a new approach to prioritisation. The motivation to set up HLI came from the fact that if we use self-reported subjective well-being scores as the measure of well-being, that does indicate potentially different priorities.
I agree I didn't make intelligible why this would be confusing to me. I think my thought was roughly:
(i) Contingently, we can have an outsized impact on the expected size of the total future population (e.g. by reducing specific extinction risks).
(ii) If you endorse totalism in population ethics (or a sufficiently similar aggregative and non-person-affecting view), then whatever your theory of well-being, because of (i) you should think that we can have an outsized impact on total future well-being by affecting the expected size of the total future population.
Here, I take "outsized" to mean something like "plausibly larger than through any other type of intervention, and in particular larger than through any intervention that optimized for any measure of near-term well-being". Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would "screen off" questions about the theory of well-being, or how to measure well-being - that is, my guess is that reducing existential risk would be (contingently!) a convergent priority (at least on the axiological, even though not necessarily normative level) of all bundles of ethical views that include totalism, in particular irrespective of their theory of well-being. [Of course, taken literally this claim would probably be falsified by some freak theory of well-being or other ethical view optimized for making it false, I'm just gesturing at a suitably qualified version I might actually be willing to defend.]
However, I agree that there is nothing conceptually confusing about the assumption that a different theory of well-being would imply different career priorities. I also concede that my case isn't decisive - for example, one might disagree with the empirical premise (i), and I can also think of other at least plausible defeaters such as claims that improving near-term happiness correlates with improving long-term happiness (in fact, some past GiveWell blog posts on flow-through effects seem to endorse such a view).
MichaelPlant @ 2019-06-29T14:42 (+3)
Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would "screen off" questions about the theory of well-being
Yes, this seems a sensible conclusion to me. I think we're basically in agreement: varying one's account of the good could lead to a new approach to prioritisation, but probably won't make a practical difference given totalism and some further plausible empirical assumptions.
That said, I suspect doing research into how to improve the quality of lives long-term would be valuable and is potentially worth funding (even from a totalist viewpoint, assuming you think we have or will hit diminishing returns to X-risk research eventually).
FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad - because of the downsides you mention - for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically.
Oh I'm glad you agree - I don't really want to tangle with all this on the HLI website. I thought about giving more details on the EA forum than were on the website itself, but that struck me as having the downside of looking sneaky and was a reason against doing so.
RomeoStevens @ 2019-06-19T21:34 (+18)
There seems to be strong status quo bias and typical mind fallacy with regard to hedonic set point. This would seem to be a basically rational response, since most people show little change over their lifetime in personality factors (emotional stability, or 1/neuroticism, being the Big Five factor most highly correlated with well-being reports, though I haven't investigated this as deeply as I would like for making any strong claims). In particular, environmental effects have very transient impact, colloquially referred to as the lottery effect, though this instantiation of the effect is likely false.
After doing personal research in this area for several years, one of the conclusions that helped me make sense of some of the seeming contradictions in the space was the realization that humans are more like speedrunners than maximizers of the video-game character's well-being. In particular, the proxy measure generally being maximized is the probability of successful grandchildren rather than anything like happiness. In the same way that a speedrunner trades health points for speed, and sees the health points less as an abstraction of how safe the protagonist is and more as just another resource to manage, humans treat their own well-being as just another resource to manage.
Concretely, the experience is that only people *currently* in the tails of happiness seem to be able to care about it. People in the left tail obviously want out, and people in the right tail seem able to hold onto an emotionally salient stance that *this might be important* (they are currently directly experiencing the fact that life can be much, much better than they normally suppose). In the same way, once people exit school, their motivation for school reform drops off a cliff. It has been noted that humans seem to have selective memory about past experiences of intense suffering or happiness, such as sickness or peak experiences, as some sort of adaptation, possibly to prevent overfitting errors.
More nearby, my guess is that caring about this will be anti-selected for in EA, since it currently selects for people with above average neuroticism who use the resultant motivation structure to work on future threats and try to convince others they should worry more about future threats. Positive motivational schemas are less common. Thus I predict lots of burnout in EA over time.
MichaelPlant @ 2019-06-22T00:45 (+16)
RomeoStevens, thanks for this comment. I think you're getting at something interesting, but I confess I found this quite hard to follow. Do you think you could possibly restate it, but do so more simply (i.e. with less jargon)? For instance, I don't know how to make sense of
There seems to be strong status quo bias and typical mind fallacy with regard to hedonic set point.
RomeoStevens @ 2019-06-23T01:32 (+7)
People observe that observed happiness doesn't seem to respond much to interventions, so they deprioritize such interventions. This is partially due to the illegibility of variance in happiness.
aarongertler @ 2019-06-21T00:10 (+13)
More nearby, my guess is that caring about this will be anti-selected for in EA, since it currently selects for people with above average neuroticism who use the resultant motivation structure to work on future threats and try to convince others they should worry more about future threats.
I have the opposite intuition. While EA demographics contain a lot of people with above-average neuroticism, the individuals I know in EA tend to be unusually appreciative of both how bad it is to suffer and how good it is to be happy:
- "Rational fiction" (quite popular in EA circles) often contains deep exploration of positive emotion.
- People encourage one another to try unusual experiences (chemical, romantic, narrative) for the sake of extra happiness.
- Fewer people than in the general population are satisfied with "default" options for happiness (e.g. watching TV) -- sometimes because they care less about personal happiness, but often because they really want to go beyond the default and have experiences that are on the extreme positive end of the scale, because they are very physiologically or intellectually satisfying.
Standard nonprofit messaging is often along the lines of "help people live ordinary happy lives". EA messaging has a lot of that, mixed with some "help people survive the worst possible outcome" but also some "help people transcend the burdens of ordinary life and move forward into a future that could be much, much better than today".
I don't see the future in the latter case as "everyone has a reasonably productive farm and an extra room in their small house", but as "everyone has access to all the wealth/knowledge they want, and all their preferences are satisfied unless they interfere with others' preferences or run afoul of Fun Theory".
----
To put it more succinctly: people in EA tend to be nerdy optimizers, and many of us want to optimize not just "avoiding bad experiences", but also "having good experiences".
RomeoStevens @ 2019-06-21T10:49 (+2)
Fair. I may be over-updating on the EAs I know who don't seem particularly concerned that they are stressed and unhappy by default. Also, I think people living in high-density cities underestimate how stressed and unhappy they actually are.
HaukeHillebrandt @ 2019-06-21T22:52 (+2)
people living in high-density cities underestimate how stressed and unhappy they actually are
Can you say more about this?
RomeoStevens @ 2019-06-21T23:38 (+11)
I think there were some previous links in a debate about this on FB that I'm not finding now.
https://www.sciencealert.com/where-you-live-has-a-drastic-effect-on-your-happiness-levels
It's a U-shaped curve, since rural folks are also unhappy. My own sense was that there was a phase shift somewhere between 100k and 250k population (the exact mapping to density I don't know) related to whether the Schelling points for social gathering condense or fracture. I'd recommend people find out for themselves by visiting smaller and happier places. People in SV, for instance, can spend time in Santa Cruz, which is #2 in happiness in the nation.
Habryka @ 2019-06-20T22:25 (+2)
[Made this into a top-level comment]
toonalfrink @ 2019-06-19T16:16 (+18)
[Our] research group is investigating the most promising giving opportunities among mental health interventions in lower and middle-income countries.
Any reason why you're focusing on interventions that target mental health directly and explicitly, instead of any intervention that might increase happiness indirectly (like bednets)?
MichaelPlant @ 2019-06-21T13:47 (+8)
Hello Toon. The reason we're starting with this is that it looks like it could be a more cost-effective way of increasing happiness than the current interventions effective altruists tend to have in mind (alleviating poverty, saving lives), and it hasn't been thoroughly investigated yet. Our plan is to find the most cost-effective mental health intervention and then see how that compares to the alternatives. I ran some initial numbers on this in a previous post on mental health.
I'm not sure if that answers your question. The fact that one intervention is more direct than another (i.e. there are fewer causal steps before the desired outcome occurs) doesn't necessarily imply anything about comparative cost-effectiveness.
JamesSnowden @ 2019-06-21T19:28 (+12)
Excited to see your work progressing Michael!
I thought it might be useful to highlight a couple of questions I personally find interesting and didn't see on your research agenda. I don't think these are the most important questions, but I haven't seen them discussed before and they seem relevant to your work.
Writing this quickly so sorry if any of it's unclear. Not necessarily expecting an answer in the short term; just wanted to flag the questions.
(1) How should self-reporting bias affect our best guess of the effect size of therapy-based interventions on life satisfaction (proxied through e.g. depression diagnostics)?
My understanding is that at least some of the effect size for antidepressants is due to placebo (although I understand there's a big debate over how much).
If we assume that (i) at least some of this placebo effect is due to self-reporting bias (rather than a "real" placebo effect that genuinely makes people happier), and (ii) it's impossible to properly blind therapeutic interventions, how should this affect our best guess of the effect size of therapy relative to what's reported in various meta-analyses? Are observer-rating scales a good way to overcome this problem?
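To illustrate the arithmetic of the worry (with entirely made-up numbers): if some assumed share of a measured, unblinded effect were self-report inflation, the corrected effect would shrink proportionally.

```python
# Toy back-of-envelope sketch of question (1); every number here is assumed.
raw_effect_size = 0.50         # standardized therapy effect from a meta-analysis (hypothetical)
self_report_bias_share = 0.25  # assumed fraction of the effect due to self-report inflation

adjusted_effect_size = raw_effect_size * (1 - self_report_bias_share)
print(adjusted_effect_size)  # 0.375 - how much smaller depends entirely on the assumed share
```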
(2) How much do external validity concerns matter for directly comparing interventions on the basis of effect on life satisfaction?
If my model is: [intervention] -> increased consumption -> increased life satisfaction.
And let's say I believe the first step has high external validity but the second step has very low external validity.
That would imply that directly measuring the effect of [intervention] on life satisfaction would have very low external validity.
It might also imply that a better heuristic for predicting the effect of future similar interventions on life satisfaction would be:
(i) Directly measure the effect of [intervention] on consumption
(ii) Use the average effect of increased consumption on life satisfaction from previous research to estimate the ultimate effect on life satisfaction.
In other words: when the link between certain outcomes and ultimate impact differs between settings in a way that's ex ante unpredictable, it may be better to proxy future impact of similar interventions through extrapolation of outcomes, rather than directly measuring impact.
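A toy numerical sketch of the two prediction strategies (all figures hypothetical):

```python
# Two ways to predict a future intervention's effect on life satisfaction (LS).
# All inputs are hypothetical illustrations.

effect_on_consumption = 0.40   # RCT: change in log-consumption (assumed high external validity)
direct_ls_effect = 0.05        # RCT: directly measured LS effect (assumed low external validity)
ls_per_log_consumption = 0.30  # wider literature: LS points per unit log-consumption (assumed)

prediction_direct = direct_ls_effect                                 # trust the direct measurement
prediction_chained = effect_on_consumption * ls_per_log_consumption  # extrapolate via consumption

print(prediction_direct, prediction_chained)  # 0.05 vs 0.12 - the strategies can disagree a lot
```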
What evidence currently exists around the external validity of the links between outcomes and ultimate impact (i.e. life satisfaction)?
MichaelPlant @ 2019-06-22T01:10 (+3)
Hello James,
Thanks for these.
I remember we discussed (1) a while back but I'm afraid I don't really remember the details anymore. To check, what exactly is the bias you have in mind - that people inflate their self-report scores generally when they are being given treatment? Are there one or more studies you can point me to so I can read up on this, or is this a hypothetical concern?
I don't think I understand what you're getting at with (2): are you asking what we should infer if some intervention increases consumption but doesn't increase self-reported life satisfaction in one scenario S but does in others? That sounds like a normal case where we get contradictory evidence. Let me know if I've missed something here.
What evidence currently exists around the external validity of the links between outcomes and ultimate impact (i.e. life satisfaction)?
I'm not sure what you mean by this. Are you asking what the evidence is on the causes and correlates of life satisfaction? Dolan et al. (2008) have a much-cited paper on this.
JamesSnowden @ 2019-06-22T19:03 (+11)
On (1)
>people inflate their self-report scores generally when they are being given treatment?
Yup, that's what I meant.
>Is there one or more studies you can point me to so I can read up on this, or is this a hypothetical concern?
I'm afraid I don't know this literature on blinding very well but a couple of pointers:
(i) StrongMinds notes "social desirability bias" as a major limitation of their Phase Two impact evaluation, and suggests collecting objective measures to supplement their analysis:
"Develop the means to negate this bias, either by determining a corrective percentage factor to apply or using some other innovative means, such as utilizing saliva cortisol stress testing. By testing the stress levels of depressed participants (proxy for depression), StrongMinds could theoretically determine whether they are being truthful when they indicate in their PHQ-9 responses that they are not depressed." https://strongminds.org/wp-content/uploads/2013/07/StrongMinds-Phase-Two-Impact-Evaluation-Report-July-2015-FINAL.pdf
(ii) GiveWell's discussion of the difference between blinded and non-blinded trials on water quality interventions when outcomes were self-reported [I work for GiveWell but didn't have any role in that work and everything I post on this forum is in a personal capacity unless otherwise noted]
https://blog.givewell.org/2016/05/03/reservations-water-quality-interventions/
On (2)
May be best to just chat about this in person but I'll try to put it another way.
Say a single RCT of a cash transfer program in a particular region of Kenya doubled people's consumption for a year, but had no apparent effect on life satisfaction. What should we believe about the likely effect of a future cash transfer program on life satisfaction? (taking it as an assumption for the moment that the wider evidence suggests that increases in consumption generally lead to increases in life satisfaction).
Possibility 1: there's something about cash transfer programs which mean they don't increase life satisfaction as much as other ways to increase consumption.
Possibility 2: this result was a fluke of context; there was something about that region at that time which meant increases in consumption didn't translate to increases in reported life satisfaction, and we wouldn't expect that to be true elsewhere (given the wider evidence base).
If Possibility 2 is true, then it would be more accurate to predict the effect of a future cash transfer program on life satisfaction by using the RCT effect of cash on consumption, and then extrapolating from the wider evidence base to the likely effect on life satisfaction. If Possibility 1 is true, then we should simply take the measured effect from the RCT on life satisfaction as our prediction.
One way of distinguishing between Possibility 1 and Possibility 2 would be to look at the inter-study variance in the effects of similar programs on life satisfaction. If there's high variance, that should update us toward Possibility 2. If there's low variance, that should update us toward Possibility 1.
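One standard way to quantify that inter-study variance is a heterogeneity estimate from meta-analysis. As a sketch with invented data, here is the DerSimonian-Laird estimator of the between-study variance tau-squared:

```python
# Sketch: DerSimonian-Laird estimate of between-study variance (tau^2).
# High tau^2 suggests context-driven variation (Possibility 2); low tau^2
# suggests a stable program-level effect (Possibility 1). Data are invented.
import numpy as np

effects = np.array([0.02, 0.15, -0.01, 0.20])       # per-study LS effects
variances = np.array([0.004, 0.006, 0.005, 0.008])  # their sampling variances

w = 1.0 / variances
pooled = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled) ** 2)  # Cochran's Q
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(effects) - 1)) / C)

print(f"Q = {Q:.2f}, tau^2 = {tau2:.4f}")
```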
I haven't seen this problem discussed before (although I haven't looked very hard). It seems interesting and important to me.
Habryka @ 2019-06-20T22:27 (+11)
For whatever it's worth, my ethical intuitions suggest that optimizing for happiness is not a particularly sensible goal. I personally care relatively little about my self-reported happiness levels, and wouldn't be very excited about someone optimizing for them.
Kahneman has done some research on this and, if I remember correctly, publicly changed his mind a few years ago from his previous position in Thinking, Fast and Slow to a position that values life satisfaction a lot more than happiness (and life satisfaction tends to trade off against happiness in many situations).
This was the random article I remember reading about this. Take it with all the grains of salt of normal popular-science reporting. Here are some quotes (note that I disagree with the "reducing suffering" part as an alternative focus):
At about the same time as these studies were being conducted, the Gallup polling company (which has a relationship with Princeton) began surveying various indicators among the global population. Kahneman was appointed as a consultant to the project.
“I suggested including measures of happiness, as I understand it – happiness in real time. To these were added data from Bhutan, a country that measures its citizens’ happiness as an indicator of the government’s success. And gradually, what we know today as Gallup’s World Happiness Report developed. It has also been adopted by the UN and OECD countries, and is published as an annual report on the state of global happiness.
“A third development, which is very important in my view, was a series of lectures I gave at the London School of Economics in which I presented my findings about happiness. The audience included Prof. Richard Layard – a teacher at the school, a British economist and a member of the House of Lords – who was interested in the subject. Eventually, he wrote a book about the factors that influence happiness, which became a hit in Britain,” Kahneman said, referring to “Happiness: Lessons from a New Science.”
“Layard did important work on community issues, on improving mental health services – and his driving motivation was promoting happiness. He instilled the idea of happiness as a factor in the British government’s economic considerations.
“The involvement of economists like Layard and Deaton made this issue more respectable,” Kahneman added with a smile. “Psychologists aren’t listened to so much. But when economists get involved, everything becomes more serious, and research on happiness gradually caught the attention of policy-making organizations.
“At the same time,” said Kahneman, “a movement has also developed in psychology – positive psychology – that focuses on happiness and attributes great importance to internal questions like meaning. I’m less certain of that.
[...]
Kahneman studied happiness for over two decades, gave rousing lectures and, thanks to his status, contributed to putting the issue on the agenda of both countries and organizations, principally the UN and the OECD. Five years ago, though, he abandoned this line of research.
“I gradually became convinced that people don’t want to be happy,” he explained. “They want to be satisfied with their life.”
A bit stunned, I asked him to repeat that statement. “People don’t want to be happy the way I’ve defined the term – what I experience here and now. In my view, it’s much more important for them to be satisfied, to experience life satisfaction, from the perspective of ‘What I remember,’ of the story they tell about their lives. I furthered the development of tools for understanding and advancing an asset that I think is important but most people aren’t interested in.
“Meanwhile, awareness of happiness has progressed in the world, including annual happiness indexes. It seems to me that on this basis, what can confidently be advanced is a reduction of suffering. The question of whether society should intervene so that people will be happier is very controversial, but whether society should strive for people to suffer less – that’s widely accepted.
I don't fully agree with all of the above, but a lot of the gist seems correct and important.
MichaelPlant @ 2019-06-21T14:26 (+14)
Thanks for this. Let me make three replies.
First, HLI will primarily use life satisfaction scores to determine our recommendations. Hence, if you think life satisfaction does a reasonable job of capturing well-being, I suppose you will still be interested in the outputs.
Second, it's not yet clear if there would be different priorities if life satisfaction rather than happiness were used as the measure of benefit. Hence, philosophical differences may not lead to different priorities in this case.
Third, I've been somewhat bemused by Kahneman's apparent recent conversion to thinking that life satisfaction, rather than happiness, is what matters for well-being. I don't see why the descriptive claim that people, in fact, try to maximise their life satisfaction rather than their happiness should have any bearing on the evaluative claim about what well-being consists in. To get such a claim off the ground, you'd need something like a 'subjectivist' view about well-being, on which well-being consists in whatever people choose their well-being to consist in. Hedonism (well-being consists in happiness) is an 'objectivist' view, because it holds that your happiness is good for you whether you think it is or not. See Haybron for a brief discussion of this.
I don't find subjectivism about well-being plausible. Consider John Rawls' grass-counter case: imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns. Suppose this person then does spend their time counting blades of grass and is miserable while doing so. On the subjectivist view, this person's life is going well for them. I think this person's life is going poorly for them because they are unhappy. I'm not sure there's much, if anything, more to say about this case: some will think the grass-counter's life is going well, some won't.
Lukas_Finnveden @ 2019-06-23T20:58 (+8)
Has Kahneman actually stated that he thinks life satisfaction is more important than happiness? In the article that Habryka quotes, all he says is that most people care more about their life satisfaction than their happiness. As you say, this doesn't necessarily imply that he agrees. In fact, he does state that he personally thinks happiness is important.
(I don't trust the article's preamble to accurately report his beliefs when the topic is as open to misunderstandings as this one is.)
MichaelPlant @ 2019-06-27T21:22 (+3)
I'm not sure what Kahneman believes. I don't think he's publicly stated well-being consists in life satisfaction rather than happiness (or anything else). I don't think his personal beliefs are significant for the (potential) view either way (unless one was making an appeal to authority).
MichaelStJules @ 2019-06-21T16:22 (+7)
Consider John Rawls' grass-counter case: imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns. Suppose this person then does spend their time counting blades of grass and is miserable while doing so. On the subjectivist view, this person's life is going well for them. I think this person's life is going poorly for them because they are unhappy.
I think the example might seem absurd because we can't imagine finding satisfaction in counting blades of grass; it seems like a meaningless pursuit. But is it any more meaningful in any objective sense than doing mathematics (in isolation, assuming no one else would ever benefit)? The objectivist might say that this is exactly the point, but the subjectivist could just respond that it doesn't matter as long as the individual is (more) satisfied.
Furthermore, I think life satisfaction and preference satisfaction are slightly different. If we're talking about life satisfaction rather than preference satisfaction, it's not an overriding desire (which sounds like addiction), but, upon reflection, (greater) satisfaction with the choices they make and their preferences for those choices. If we are talking about preference satisfaction, people can also have preferences over their preferences. A drug addict might be compelled to use drugs, but prefer not to be. In this case, does the mathematician prefer to have different preferences? If they don't, then the example might not be so counterintuitive after all. If they do, then the subjectivist can object in a way that's compatible with their subjectivist intuitions.
Also, a standard objection to hedonistic (or more broadly experiential) views is wireheading or the experience machine, of which I'm sure you're aware, but I'd like to point them out to everyone else here. People don't want to sacrifice the pursuits they find meaningful to be put into an artificial state of continuous pleasure, and they certainly don't want that choice to be made for them. Of course, you could wirehead people or put them in experience machines that make their preferences satisfied (by changing these preferences or simulating things that satisfy their preferences), and people will also object to that.
MichaelPlant @ 2019-06-27T21:36 (+3)
The objectivist might say that this is exactly the point, but the subjectivist could just respond that it doesn't matter as long as the individual is (more) satisfied.
Yes, the subjectivist could bite the bullet here. I doubt many(/any) subjectivists would deny this is a somewhat unpleasant bullet to bite.
Life satisfaction and preference satisfaction are different - the former refers to a judgement about one's life, the latter to one's preferences being satisfied in the sense that the world goes the way one wants it to. I think the example applies to both views. Suppose the grass-counter is satisfied with his life and things are going the way he wants them to go: it still doesn't seem that his life is going well. You're right that preference satisfactionists often appeal to 'laundered' preferences - they have to prefer what their rationally ideal self would prefer, or something - but it's hard and unsatisfying to spell out what this looks like. Further, it's unclear how that would help in this case: if anyone is a rational agent, presumably Harvard mathematicians like the grass-counter are. What's more, stipulating that preferences can/must be laundered is also borderline inconsistent with subjectivism: if you tell me that some of my preferences don't count towards my well-being because they are 'irrational', you don't seem to be respecting the view that my well-being consists in whatever I say it does.
On the experience machine, this only helps preference satisfactionists, not life satisfactionists: I could plug you into the experience machine such that you judged yourself to be maximally satisfied with your life. If well-being just consists in judging one's life to be going well, it doesn't matter how you come to that judgement.
MichaelStJules @ 2019-07-01T23:13 (+1)
What's more, stipulating preferences can/must be laundered is also borderline inconsistent with subjectivism: if you tell me that some of my preferences doesn't count towards my well-being because they 'irrational' you don't seem to be respecting the view that my well-being consists in whatever I say it does.
I don't think this need be the case, since we can have preferences that are mutually exclusive in their satisfaction, and having such preferences means we can't be maximally satisfied. So, if the mathematician's preference upon reflection is to not count blades of grass (and do something else) but they have the urge to do so, at least one of these two preferences will go unsatisfied, which detracts from their wellbeing.
However, this on its own wouldn't tell us the mathematician is better off not counting blades of grass, and if we did always prioritize rational preferences over irrational ones, or preferences about preferences over the preferences to which they refer, then it would be as if the irrational/lower preferences count for nothing, as you suggest.
On the experience machine, this only helps preference satisfactionists, not life satisfactionist: I could plug you into the experience machine such that you judged yourself to be maximally satisfied with your life. If well-being just consists in judging one's life is going well, it doesn't matter how you come to that judgement.
I agree, although it also doesn't help preference satisfactionists who only count preference satisfaction/frustration when it's experienced consciously, and it might also not help them if we're allowed to change your preferences, since having easier preferences to satisfy might outweigh the preference frustration that would result from having your old preferences replaced by and ignored for the new preferences.
I think the involuntary experience machine and wireheading are problems for all the consequentialist theories with which I'm familiar (at least under the assumption of something like closed individualism, which I actually find to be unlikely).
Lukas_Finnveden @ 2019-06-23T21:10 (+5)
For whatever it's worth, my metaethical intuitions suggest that optimizing for happiness is not a particularly sensible goal.
Might just be a nitpick, but isn't this an ethical intuition, rather than a metaethical one?
(I remember hearing other people use "metaethics" in cases where I thought they were talking about object level ethics, as well, so I'm trying to understand whether there's a reason behind this or not.)
Habryka @ 2019-06-24T19:42 (+1)
Hmm, I don't think so. Though I am not fully sure. Might depend on the precise definition.
It feels metaethical because I am responding to a perceived confusion of "what defines moral value?", and not "what things are moral?".
I think "adding up people's experience over the course of their life determines whether an act has good consequences or not" is a confused approach to ethics, which feels more like a metaethical instead of an ethical disagreement.
However, happy to use either term if anyone feels strongly, or happy to learn that this kind of disagreement falls clearly into either "ethics" or "metaethics".
Lukas_Finnveden @ 2019-06-25T12:37 (+5)
I'm by no means schooled in academic philosophy, so I could also be wrong about this.
I tend to think about e.g. consequentialism, hedonistic utilitarianism, preference utilitarianism, lesswrongian 'we should keep all the complexities of human value around'-ism, deontology, and virtue ethics as ethical theories. (This is backed up somewhat by the fact that these theories' wikipedia pages name them ethical theories.) When I think about meta-ethics, I mainly think about moral realism vs moral anti-realism and their varieties, though the field contains quite a few other things, like cole_haus mentions.
My impression is that HLI endorses (roughly) hedonistic utilitarianism, and you said that you don't, which would be an ethical disagreement. The borderlines aren't very sharp though. If HLI would have asserted that hedonistic utilitarianism was objectively correct, then you could certainly have made a metaethical argument that no ethical theory is objectively correct. Alternatively, you might be able to bring metaethics into it if you think that there is an ethical truth that isn't hedonistic utilitarianism.
(I saw you quoting Nate's post in another thread. I think you could say that it makes a meta-ethical argument that it's possible to care about things outside yourself, but that it doesn't make the ethical argument that you ought to do so. Of course, HLI does care about things outside themselves, since they care about other people's experiences.)
Habryka @ 2019-06-25T20:21 (+3)
This seems reasonable. I changed it to say "ethical".
cole_haus @ 2019-06-24T23:25 (+3)
Contemporary Metaethics delineates the field as being about:
(a) Meaning: what is the semantic function of moral discourse? Is the function of moral discourse to state facts, or does it have some other non-fact-stating role?
(b) Metaphysics: do moral facts (or properties) exist? If so, what are they like? Are they identical or reducible to natural facts (or properties) or are they irreducible and sui generis?
(c) Epistemology and justification: is there such a thing as moral knowledge? How can we know whether our moral judgements are true or false? How can we ever justify our claims to moral knowledge?
(d) Phenomenology: how are moral qualities represented in the experience of an agent making a moral judgement? Do they appear to be ‘out there’ in the world?
(e) Moral psychology: what can we say about the motivational state of someone making a moral judgement? What sort of connection is there between making a moral judgement and being motivated to act as that judgement prescribes?
(f) Objectivity: can moral judgements really be correct or incorrect? Can we work towards finding out the moral truth?
It doesn't quite seem to me like the original claim fits neatly into any of these categories.
aarongertler @ 2019-06-21T00:02 (+4)
Specifically, we expect to rely on life satisfaction as the primary metric. This is typically measured by asking “Overall, how satisfied are you with your life nowadays?” (0 - 10).
I'd be curious to hear examples of questions that HLI thinks would be better than the above for assessing the thing they want to optimize. My assumption was that their work would typically measure things that were close to life satisfaction, rather than transient feelings ("are you happy now?"), because the latter seems very subjective and timing-dependent.
I think of "life satisfaction" as a measure of something like "how happy/content you are with your past + how happy/content you expect to be in the future + your current emotional state coloring everything, as usual".
(Note that being happy with the past isn't the same as having been happy in the past -- but I don't think those trade off against each other all that often, especially in the developing world [where many of the classic "happiness traps", like drugs and video games, seem like they'd be less available].)
Michael: To what extent do you believe that the thing HLI wants to optimize for is the thing people "want" in Kahneman's view? If you think there are important differences between your definition of happiness and "life satisfaction", why pursue the former rather than the latter?
MichaelPlant @ 2019-06-21T15:07 (+9)
Hello Aaron,
In the 'measuring happiness' bit of HLI's website we say
The ‘gold standard’ for measuring happiness is the experience sampling method (ESM), where participants are prompted to record their feelings and possibly their activities one or more times a day.[1] While this is an accurate record of how people feel, it is expensive to implement and intrusive for respondents. A more viable approach is the day reconstruction method (DRM) where respondents use a time-diary to record and rate their previous day. DRM produces comparable results to ESM, but is less burdensome to use (Kahneman et al. 2004).
Further, I don't think the fact that happiness is subjective or timing-dependent is problematic: what I think matters is how pleasant/unpleasant people feel throughout the moments of their life. (In fact, this is the view Kahneman argued for in his 1999 paper 'Objective Happiness'.)
Habryka @ 2019-06-21T00:50 (+2)
I was responding to this section, which immediately follows your quote:
While we think measures of emotional states are closer to an ideal measure of happiness, far fewer data of this type is available.
I think emotional states are quite a bad metric to optimize for, and that life satisfaction is a much better measure because it actually measures something closer to people's values being fulfilled. Valuing emotional states feels like a map-territory confusion, in a way that Nate Soares tried to get at in his stamp collector post:
Ahh! No! Let's be very clear about this: the robot is predicting which outcomes would follow from which actions, and it's ranking them, and it's taking the actions that lead to the best outcomes. Actions are rated according to what they achieve. Actions do not themselves have intrinsic worth!
Do you see where these naïve philosophers went confused? They have postulated an agent which treats actions like ends, and tries to steer towards whatever action it most prefers — as if actions were ends unto themselves.
You can't explain why the agent takes an action by saying that it ranks actions according to whether or not taking them is good. That begs the question of which actions are good!
This agent rates actions as "good" if they lead to outcomes where the agent has lots of stamps in its inventory. Actions are rated according to what they achieve; they do not themselves have intrinsic worth.
The robot program doesn't contain reality, but it doesn't need to. It still gets to affect reality. If its model of the world is correlated with the world, and it takes actions that it predicts leads to more actual stamps, then it will tend to accumulate stamps.
It's not trying to steer the future towards places where it happens to have selected the most micro-stampy actions; it's just steering the future towards worlds where it predicts it will actually have more stamps.
Now, let me tell you my second story:
Once upon a time, a group of naïve philosophers encountered a group of human beings. The humans seemed to keep selecting the actions that gave them pleasure. Sometimes they ate good food, sometimes they had sex, sometimes they made money to spend on pleasurable things later, but always (for the first few weeks) they took actions that led to pleasure.
But then one day, one of the humans gave lots of money to a charity.
"How can this be?" the philosophers asked, "Humans are pleasure-maximizers!" They thought for a few minutes, and then said, "Ah, it must be that their pleasure from giving the money to charity outweighed the pleasure they would have gotten from spending the money."
Then a mother jumped in front of a car to save her child.
The naïve philosophers were stunned, until suddenly one of their number said "I get it! The immediate micro-pleasure of choosing that action must have outweighed —
People will tell you that humans always and only ever do what brings them pleasure. People will tell you that there is no such thing as altruism, that people only ever do what they want to.
People will tell you that, because we're trapped inside our heads, we only ever get to care about things inside our heads, such as our own wants and desires.
But I have a message for you: You can, in fact, care about the outer world.
And you can steer it, too. If you want to.
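To make the distinction concrete, here is a toy sketch, with purely illustrative action names and numbers, of an agent that ranks actions by the outcomes its world-model predicts they lead to, rather than by any intrinsic rating of the actions themselves:

```python
# Toy sketch (illustrative names only): the agent rates actions by the
# outcome its world-model predicts they lead to (stamps collected),
# not by any intrinsic worth of the actions themselves.

world_model = {"buy_stamps": 10, "trade_coins": 4, "do_nothing": 0}

def predicted_stamps(action):
    """Predict how many stamps the agent ends up with after `action`."""
    return world_model[action]

def choose_action(actions):
    # Rank actions purely by predicted outcome, then take the best one.
    return max(actions, key=predicted_stamps)

print(choose_action(world_model))  # 'buy_stamps'
```

Change the world-model's predictions and the chosen action changes with them; nothing about the actions carries value on its own.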
aarongertler @ 2019-06-21T06:06 (+4)
I think I mostly agree with you here, but I'm slightly confused by HLI's definition of "happiness" -- I meant my comment as a set of questions for Michael, inspired by the points you made.
Nathan Young @ 2019-06-21T13:53 (+9)
Thank you for your work. It seems like a really important thing to study. Thank you for taking the time to lay out your plans so clearly.
Do you think your work will at any point touch on how individuals could live in ways that would make them happier or give them greater well-being? I think there is room for publishing a kind of workflow/lifehacks guide to help people know how their lives could be better. I acknowledge that's not what you discuss here, but it seems adjacent. Perhaps another reader could point me in the direction of this.
We think well-being consists in happiness, defined as a positive balance of enjoyment over suffering. Understood this way, this means that when we reduce misery, we increase happiness.
Sure, though there are some kinds of misery you don't want to reduce. I could choose not to attend my father's funeral and that would reduce misery. Do you have any idea how you will account for "good sadness"? If you will avoid those kinds of interventions, how will you choose your interventions, and how will you avoid bias in doing so?
MichaelPlant @ 2019-06-27T21:56 (+5)
Hello Nathan. I think HLI will probably focus on what we can do for others. There is already quite a lot of work by psychologists on what individuals can do for themselves; see, e.g., The How of Happiness by Lyubomirsky and what is called 'positive psychology' more broadly. Hence, our comparative advantage and counterfactual impact will be on how best to altruistically promote happiness.
Sure, though there are some kinds of misery you don't want to reduce
I think we should be maximising happiness over any organism's whole lifespan; hence, some sadness now and then may be good for maximising happiness over the whole life. It's an empirical question how much sadness is optimal for maximum lifetime happiness.
On the funeral point, I think you're capturing an intuition about what we ought to do rather than what makes life go well for someone: you might think that not going to the funeral would make your life go better for you, but that you ought to go anyway. Hence, I don't think your point counts against happiness being what makes your life go well for you (leaving other considerations to the side).
Nathan Young @ 2019-06-28T19:20 (+3)
Yeah, fair points. :)
RyanCarey @ 2019-07-07T12:18 (+8)
In the UK, "Institute" is a protected term: you need approval from the Secretary of State to use it in a business name, per https://web.archive.org/web/20080913085135/http://www.companieshouse.gov.uk/about/gbhtml/gbf3.shtml. I'm not sure how this changes if you're part of the university, but otherwise this could present some problems.
Evan_Gaensbauer @ 2019-06-19T19:03 (+4)
While updates from individual EA-aligned organizations are typically relegated to the 'community' page on the EA Forum, I believe an exception should be made for the public announcement of the launch of a new EA-aligned organization, especially one focused on an area that doesn't already have major professional representation in EA. I believe such announcements are of interest to people who browse the EA Forum, including newcomers to the community, and are not just a 'niche' interest within EA. Also, specifically in the case of Michael D. Plant, I believe his reputation in EA precedes him, such that we should credit the announcement of this project launch as being of significant interest to the EA community, and as one of the things coming out of EA that interest the broader public.
aarongertler @ 2019-06-19T23:21 (+18)
I assigned Frontpage status to this article when it appeared, shortly before Evan's comment was posted. I agree with him that the launch of new organizations could potentially be of interest even to people who are relatively new to EA. However, a post's category isn't based on the author's reputation, but on the post's content.
I think detailed posts that explain a specific approach to doing the most good make sense for this category, and this post does that while also happening to be about a new organization. Some but not all posts about new organizations are likely to be assigned Frontpage status.
(I also don't like the word "relegated" in this context. The Community and Frontpage sections serve different purposes, but neither status is meant as a measurement of quality.)
Evan_Gaensbauer @ 2019-06-21T01:05 (+1)
Thanks for the response, Aaron. Had I been aware this post would receive Frontpage status, I would not have made my above comment. I notice my above comment has many votes but not a lot of karma, which means it was controversial. Presumably, at least several people disagree with me.
1. I believe the launch of new EA-aligned organizations should be considered of interest to people who browse the Frontpage.
2. It's not clear to me that it's only people who are relatively new to EA who primarily browse the Frontpage instead of the Community page. While I'm aware the Frontpage is intended primarily for newcomers, it seems quite possible there are a lot of committed EA community members who are not interested in each update from every one of dozens of EA-aligned organizations. They may skip the 'Community' page, even though there are major updates like this one that are more 'community-related' than 'general' EA content but nonetheless deserve a place on the Frontpage, where people who don't often browse the Community tab, and who are also not newcomers to EA, will see them.
3. I understand why there would be some hesitance to move posts announcing the launch of new EA-aligned projects/organizations to the Frontpage. The problem is that there aren't really hard barriers preventing anyone from declaring a new project/organization aimed at 'doing good' and gaming EA by paying lip service to EA principles and practices while, behind the scenes, not intending or trying to be as effective or altruistic as claimed. This problem intersects with Frontpage promotion because promoting just any self-declared EA-aligned project/organization to a place of prominence sends the signal, intentionally or not, that the project/org has received a kind of 'official EA stamp of approval'. I brought up Michael Plant's reputation not because I think anyone's reputation alone should dictate what assignment their posts receive on the EA Forum. Rather, on the chance that Aaron or the EA Forum's administration was on the fence about whether to promote this post to the Frontpage, I wanted to vouch for Michael Plant as an EA community member whose record of fidelity to EA principles and practices in the projects he is involved with is such that, on priors, I would expect the new project/org he is launching, and its announcement, to be something the EA Forum should be willing to put its confidence behind.
4. I agree that, ideally, the reputation of an individual EA community member should not affect what we think of the content of their EA Forum posts, and that we should aspire to live up to this principle as much as possible. But I also believe it's realistic to acknowledge that EA is a community of biased humans like any other, so forms of social influence like individual reputation still affect how we behave. For example, if William MacAskill or Peter Singer were to announce the launch of a new EA-aligned project/org, then, based largely (though not exclusively) on their prior reputation, and barring a post that read like patent nonsense, which is virtually guaranteed not to happen, I expect it would be promoted to the Frontpage. My goal in vouching for Michael Plant, while he isn't as well-known in EA as Profs. MacAskill or Singer, was to indicate that I believe he deserves a similar level of credit in the EA community as a philosopher who practices EA with impeccable fidelity.
5. I also made my above comment while perceiving the norms for assigning posts to the 'Community' or 'Frontpage' sections to be ambiguous. For the purposes of deciding which posts announcing the launch of a new EA-aligned project/org will be assigned to the Frontpage, I find the following from Aaron a sufficient and satisfactory clarification of my prior concerns:
I think detailed posts that explain a specific approach to doing the most good make sense for this category, and this post does that while also happening to be about a new organization. Some but not all posts about new organizations are likely to be assigned Frontpage status.
6. Aaron dislikes my use of the word 'relegate' to describe the assignment of posts on the EA Forum to the Frontpage or the Community page, respectively. I used the word 'relegate', because that appears to be how promotions to the Frontpage on LessWrong work, and because I was under the impression the EA Forum had similar administration norms to LessWrong. Since the EA Forum 2.0 is based on the same codebase as LW 2.0, and the same team that built LW 2.0 was also crucial in the development of the EA Forum 2.0, I assumed the EA Forum admin team significantly borrowed admin norms from the LW 2.0 team, from which it inherited administration of the EA Forum 2.0. In his above comment, Aaron has clarified that the distinction between the 'Frontpage' and other tabs on the EA Forum is not the same as the distinction between the 'Frontpage' and other tabs on LW.
7. While the Frontpage and Community sections are intended to serve different purposes, and not as a measure of quality, I worry, because of the availability heuristic, that one default outcome of 'Frontpage' posts, well, being on the front page of the EA Forum and receiving more attention is that they will be assumed to be of higher quality.
These are the reasons that motivated me to make my above comment. Some but not all of these concerns are entirely assuaged by Aaron's response. All my concerns specifically regarding EA Forum posts announcing new orgs/projects are assuaged. Some of my concerns about the ambiguity over which posts will be assigned to the Frontpage or Community tabs remain. However, they hinge on disputable facts of the matter that could be resolved by EA Forum usage statistics alone, specifically comparative usage stats between the Community and Frontpage tabs. I don't know whether the EA Forum moderation team has access to that kind of data, but I believe such usage stats could greatly help resolve my concerns about how much traffic each tab, and its respective posts, receives.
Habryka @ 2019-06-21T01:40 (+6)
I used the word 'relegate', because that appears to be how promotions to the Frontpage on LessWrong work, and because I was under the impression the EA Forum had similar administration norms to LessWrong.
That's also not how it is intended to work on LessWrong. There is some (around 30%) loss in average visibility, but there are many important posts that are personal blogposts on LessWrong. The distinction is more nuanced, and being left as a personal blogpost is definitely not primarily a signifier of quality.
Evan_Gaensbauer @ 2019-08-05T01:10 (+2)
Alright, thanks for letting me know. I'll remember that for the future.
aarongertler @ 2019-06-21T06:25 (+4)
You've written more here than I can easily respond to, especially the day before EA Global begins! ;-)
...but I'll focus on your last point:
I worry ... that one default outcome of 'Frontpage' posts, well, being on the front page of the EA Forum and receiving more attention is that they will be assumed to be of higher quality.
Some Forum posts seem like they will be more accessible than others to people who have little previous experience with the EA community. Because these posts have a larger potential audience (in theory), we currently expose them to a larger audience using the Frontpage category.
This doesn't mean that Frontpage posts are necessarily "better", or even more useful to the average Forum visitor. But they could theoretically appeal to new audiences who aren't as familiar with EA.
For example, while a lot more Forum users might be interested in a post on the historical growth of the movement than in a post about nuclear war (because most Forum users are experienced with/invested in the EA community), a post about nuclear war could be interesting to people from many communities totally separate from EA (think-tank researchers, scientists, journalists, etc.).
Historically, a lot more posts get the "Frontpage" category than the "Community" category. But as you can see by going to the "Community" page on the Forum, posts in that category often get a lot of votes and comments -- probably because they appeal broadly to the people who use the Forum most, whatever cause area they might care about.
I doubt that someone looking at posts in both categories would conclude that "Frontpage" posts were "better" or "more important", at least if they took the time to read a couple of posts in each category.
That said, we did inherit the "Frontpage" name from LessWrong, and we may consider changing it in the future. (I'd welcome any suggestions for new names -- "Research" doesn't quite fit, I think, but good names are probably something along those lines.)
----
Historically, the Forum's homepage gets roughly ten times as much traffic as the Community page. But of the dozen posts with the most views in June, seven are Frontpage and five are Community. This is partly because many visitors to the homepage don't read anything, or read one article and bounce off (as for basically any website) and partly because much of the Forum's traffic comes from link-sharing through social media, the EA Newsletter, etc. (places where categorization doesn't matter at all).
Do you have any further questions about this point?
Evan_Gaensbauer @ 2019-08-05T01:09 (+2)
Hi. I'm just revisiting this comment now. I don't have any more questions. Thanks for your detailed response.