Humanities Research Ideas for Longtermists

By Lizka @ 2021-06-09T04:39 (+151)

Summary

This post lists 10 longtermism-relevant project ideas for people with humanities interests or backgrounds. Most of these ideas are for research projects, but some are for summaries, new outreach content, etc. (See below for what I mean by “humanities.”)

The ideas, in brief:

  1. Study future-oriented beliefs in certain religions or groups
  2. Study the ways in which incidental qualities become essential to institutions
  3. Explore fiction as a tool for moral circle expansion
  4. Study how longtermists use different forms of media and how this might be improved
  5. Study how non-EAs actually view AI safety issues, and how we got here
  6. Produce anthropological/ethnographic studies of unusually relevant groups
  7. Apply insights from education, history, and development studies to creating a post-societal-collapse recovery plan
  8. Study notions of utopias
  9. Analyze social media (and online forums) in the context of longtermism
  10. Use tools from non-history humanities fields to aid history-oriented projects relevant for longtermism

Why it might be helpful to produce lists of projects for people with humanities backgrounds (or interests) to work on

  1. Deliberately looking for and studying topics that are humanities-oriented could be a way to discover longtermist interventions that are hard to notice or tackle from other angles (e.g., a STEM angle), improve our views on known causes and interventions, and find topics that are better fits for some people than existing (non-humanities) project ideas would be.

  2. If it is relatively easy to produce such lists, it suggests that we are systematically missing humanities ideas and tools from our reasoning, and that this gap is not explainable by a natural disconnect between longtermist values or concerns and non-STEM areas.[1] (If we had exhausted humanities approaches to longtermism, it would probably be hard to find previously unnoticed topics that seem reasonable.) It seems valuable to have diversity in backgrounds and perspectives, and the existence of this gap suggests that supporting humanities projects might be a way to improve on that front.

  3. Collections like this can consolidate existing ideas and resources in one place, making it easier to find projects and collaborate as a community.

  4. I am aware of talented people who have been put off EA (and longtermism) due to their general sense that the humanities are considered worthless. My sense is that EAs do see value in the humanities, and it might be worth making this clearer.

  5. (Personal note) this project was helpful for me as a way to explore longtermist research.

Scope and disclaimers

The focus of the post is on the humanities disciplines most neglected in EA and longtermism, so I didn't focus on history, philosophy, and psychology. (Those might also be neglected in the community, but there has been at least some mention of how they could be relevant for longtermism in places like the Forum.)[2] My use of the word “humanities” is loose—for this project, I accepted some fields that might be considered social sciences instead. In practice, I think the ideas listed here are most related to anthropology, archival studies, area studies, art history, (comparative) literature, (comparative) religion studies/theology, education, and media studies.

The list is not meant to be exhaustive by any means; in particular, the selection of topics here is heavily influenced by my own academic background (literature, sort-of-history, art, math). Some of the ideas are ideas for bringing existing research into EA rather than ideas for producing totally new research. It is also important to note that I have very little background in most of the areas involved in this list, and I wouldn't be surprised if deeper research discovered that some of these topics have been covered by EAs. Finally, my understanding of religion is biased towards Christianity/Islam/Judaism, and my model is not great for other religions.

Reviewers of a draft of this post suggested additional resources that might be relevant for different topics; I linked these resources but have not read them all myself.

I am interning at Rethink Priorities for the summer, and wrote this post as a starter project.

Further actions

I would be really excited to see other people make lists like this, or comment on this one with other ideas, links to work that might have already been done in the area, or notes on what you think could be most and least useful.

If someone actually takes up one of these projects, it would be good to note this in a comment on this post. (And maybe we’ll come up with a better system for coordinating soon.)

Related links

Some EA Forum Posts I'd like to write (generated by this project)

A central directory for open research questions (has many links to other research-topic lists)

Improving the EA-aligned research pipeline: Sequence introduction

I did not include all the topics I generated for this project in this post. You can see the full list of projects I brainstormed at this link. For the post itself, I took a sample that seemed relatively promising and spanned a broad array of disciplines and approaches, and wrote those ideas out more carefully.

The list

1. Study future-oriented beliefs in certain religions or groups

If we can identify such beliefs and practices, we can use them to inform outreach.

Some potential subtopics

  1. What ideologies or practices have made people seriously care about and try to protect the welfare of future generations? (E.g., bloodline-based beliefs: what influence do they have on people’s actions with respect to the future?)

  2. What ideologies or practices have made people care a lot about the preservation of humanity/society?

  3. What ideologies or practices have made people not care about the future? (Fatalism? Individualist mindsets?)

  4. Study institutions or practices that morally align with consequentialist ideas and consider how they affected or did not affect future-oriented action. (This could help to identify outreach or impact possibilities, give us a sense of the robustness of certain ideas, direct our work by borrowing from older traditions, etc.)[3]

  5. What are the implications of the above for the outreach and/or value-spreading efforts we should engage in or support?

Fields: Religion studies, anthropology, history

2. Study the ways in which incidental qualities become essential to institutions

What can this tell us about the stabilization of norms or values within cultures, movements, and institutions? This might be relevant for community-building and for discussions of patient philanthropy. It might also aid forecasting if we notice real patterns that would let us predict that a certain institution is in the process of adopting an incidental practice as part of its identity.

Some potential subtopics

  1. Case studies of long-running institutions, religions, or movements and their change over time; how they diverged from their foundations or original mission
  2. Case studies of institutions that survive dramatic historical or political moments mostly unchanged.
  3. How did core values of religions/movements/institutions change as they grew? (Is there an equivalent concern for a “philosophy” or a movement?) (Potential case study: the LDS Church, or Mormonism.)
  4. How would we notice this happening in a movement like EA or a philosophy like longtermism?
  5. Are there asymmetrical patterns in the value drift we can detect in certain movements and institutions? Can we expect our values to improve or deteriorate in certain ways? How would this inform our approach to patient philanthropy?

Fields: Theology, religion studies, anthropology, history

3. Explore fiction as a tool for moral circle expansion

A typical narrative (outside of EA) is that reading fiction helps develop empathy. If true, and if moral circle expansion is important, this suggests that we might want to dedicate more energy to fiction. There might also be lessons to learn from case studies of sympathetic portrayals of nonhuman beings (and the possible connections with anthropomorphization) in fiction.[4]

Some potential subtopics

  1. Investigate the legitimacy of the claim that fiction helps develop empathy (are there reasonable studies on this?). How does empathy development correspond to moral circle expansion?
  2. If fiction is reasonably a tool for general empathy/moral circle expansion, what sort of fiction is good at this? Are there particular case studies that stand out?
    1. Is typical young-adult (YA) or narrative fiction good for this? (The idea is that the reader must empathize with someone unlike themselves.)
    2. Are satires noticeably good at pointing out moral contradictions?
    3. Are things different when people read foreign works in translation?
  3. Collect and study media that might help people learn to empathize with future beings (both as an independent longtermist cause and as a way to study the mechanism of empathy development).
    1. List and analyze depictions of future people that succeed at evoking empathy and prompting action. (Discussion of one possible example.)
    2. Consider depictions of other beings that are outside the typical moral circle and see if some types of portrayals evoke actionable empathy
      1. e.g., do Pixar films that anthropomorphize fish or ants lead to better treatment of them? (Or at least lead to pushes for better treatment, even if the pushes themselves were not effective?)
      2. Alternatively, if we find that they do evoke empathy but it leads to ineffective changes, can we channel the empathy of such media productively?
    3. Does anthropomorphization in media create harmful prejudices or poorly calibrated expectations about the future and the potential sentience of various beings?
  4. Consider historical cases where creators of fiction or popular media have unexpectedly deep moral/progressive/political stances with respect to the moral status of beings. (Or, unexpectedly shallow stances?) Some possible examples[5]: Voltaire, Čapek, Tolstoy

Fields: (Comparative) literature, education, media studies, art history, psychology, anthropology, history

4. Study how longtermists use different forms of media and how this might be improved

Are we unhelpfully ignoring some forms of media? Or, using them in ways that are historical accidents but in practice less helpful? Should we encourage more creative media?

Some potential subtopics

  1. Analyze images used in EA or longtermist discussions and outreach.[6]

    1. Are there accidental patterns in the images EAs (or various EA organizations) use? It would be helpful to take any such patterns into account explicitly, in case they do not align with our interests. (My personal/anecdotal take is that they often fall into tropes and activate my knee-jerk reaction against socialist realism — if this is true, it may be bad for outreach. Alternatively, it might reinforce stereotypes of EA, longtermism, our causes, etc.)[7]

    2. Do the images we use meaningfully inform our thinking on some subjects? (If we use certain images of specific concepts, will they inappropriately inform our models of things we should keep open minds about? As a silly illustration: if our images of conflict always show tanks, maybe that unnecessarily focuses our thinking on land war.)

  2. Suggest or support ways to diversify media for EA outreach (moving beyond nerdy podcasts and academic or intellectual writing); consider the pros and cons of these forms of media for different functions. Different forms of media to consider:[8]

    1. Animations/Vox-style short videos about EA/longtermism[9]

    2. Comics about EA/longtermism

    3. Board/video games, like the paperclips clicker game, an AI policy game (that I haven’t tried), and this vegan game (which I also haven’t tried)

    4. Fiction, including fanfiction and interactive fiction

    5. Discussions and work on this topic: When can Writing Fiction Change the World?, Please use art to convey EA!, Ben West’s comment on an EA Forum post (outreach to high schoolers), Effective altruism art and fiction

  3. See if someone has set up a study of author/creator success and its predictability that avoids survivorship bias, and set up better systems for tracking the impact and quality of nonstandard media (whose reach and truthfulness are harder to evaluate), or try to do this yourself. (The same goes for influencers.)

  4. Consider downsides of using non-prose media more frequently (and specific downsides of certain media). Some examples:

    1. Non-prose media might be lower-fidelity, i.e., it might be harder to make sure that the message one wants to convey is actually the message that gets conveyed.
    2. Non-prose media are generally slower to produce and might take more resources.
    3. Non-prose media might be worse as asymmetric weapons (i.e., they may persuade regardless of whether the underlying message is true)

Fields: Media studies, visual arts, comparative literature, market research

5. Study how non-EAs actually view AI safety issues, and how we got here

My sense is that how most people feel about AI is probably shaped more by historical propaganda efforts and fiction (e.g. blockbuster sci-fi) than by current reality or by scenario analysis aimed at realism and usefulness rather than entertainment. This might suggest that we should study popular media on AI more carefully. We could also study perceptions of other technology-driven risks as proxies for the perception of AI. (However, I’m not sure whether the best approaches here are humanities-based or survey-based, or how much public opinion actually matters for e.g. AI governance questions.)

Some potential subtopics

  1. Study the psychological and anthropological phenomena of fear of GMOs, vaccination, “unnaturalness,” alternative proteins, etc., both to compare and find patterns relevant for AI, and for more independent goals like animal welfare and catastrophe recovery planning.
  2. Survey the history of the idea of AI in popular discourse. (How important has fiction been in shaping popular understanding of AI and its risks?)
  3. Study representations of AI in contemporary media.
  4. Potential resources:
    1. The American Public's Attitudes Concerning Artificial Intelligence (FHI)
    2. Baobao Zhang: How social science research can inform AI governance
    3. Irving & Askell, "AI Safety Needs Social Scientists" and Why AI really needs social scientists

Fields: Literature, history, anthropology, media studies, psychology

6. Produce anthropological/ethnographic studies of unusually relevant groups

We might want to study the EA community itself, as well as communities that are important for specific cause areas or that interact with known x-risks and potential mitigation pathways (e.g. ML labs).

Some potential subtopics

  1. Ethnographic studies of directly important communities, like ML labs. These can give us better models of risks and protections, suggest possible interventions (e.g. making sure that the people who work in these communities are aware of x-risks), and generally help us identify the levers we could pull.

    1. Possible resources: Reducing long-term risks from malevolent actors, and Safety culture (Wikipedia)
  2. Study our own community. (Relevant: I want an ethnography of EA[10])

    1. A careful analysis of the EA community could reveal some of our epistemic biases, potential pitfalls, unhelpful practices (like jargon), and possible low-hanging fruit in terms of improvement, expansion, etc. An ethnography might also help us assess more specific criticisms of EA.
    2. Compare the EA community to other groups and movements. (Or a smaller project: produce a list of EA-adjacent communities.)
  3. Study groups that have specific qualities we want to emulate.

    1. Some possible examples: open source info groups like Bellingcat (which tend to rely on volunteers and tools we might want to learn), or Teach for America’s (movement-)scaling.

Fields: Anthropology/ethnography, history

7. Apply insights from education, history, and development studies to creating a post-societal-collapse recovery plan

Help produce civilization recovery materials in case of a catastrophe that doesn’t quite destroy everyone, but which brings down most of the institutions and systems humanity has developed. (Studying and planning for this seems incredibly hard, but the payoff could be big enough that it might be worth looking into.)

Some potential subtopics

  1. Consider which important aspects of society are the least likely to re-emerge without specific planning. Why? What are the blockers/bottlenecks? How does that differ across collapse scenarios?
  2. How can we support possible efforts to rebuild? (Consider things like scientific/cultural/moral recovery.)
  3. Some relevant posts:
    1. Civilization Re-Emerging After a Catastrophic Collapse
    2. A (Very) Short History of the Collapse of Civilizations, and Why it Matters

Fields: Education, anthropology, archival studies, history

8. Study notions of utopias

If we have a better understanding of attitudes to utopias (and accounts of "flourishing futures"), we might be able to better direct our outreach and advocacy efforts.

Some potential subtopics

  1. Is it helpful to have a clearer picture of utopia (“flourishing futures”) to work towards? In particular, does it help people become more future-oriented? Does it help motivate EAs? Does it distract people from reasonable interventions and/or more urgent issues?

  2. Does creating and promoting pictures of flourishing futures harm (or help) outreach or our reputation in some ways (e.g. by appearing naive, outlandish, callous toward present suffering, or reminiscent of totalitarian ideologies)?

  3. What (if any) are helpful images/notions of utopia for any given concrete purpose?

  4. What do notions of utopia across religions and (sub)cultures look like?[11]

    1. If things are meaningfully different across cultures, we might want to shy away from concrete images of utopia.
    2. Should we try to find compromises between these? What are ways of creating widely appealing visions of utopia?

Fields: Comparative literature, art history, religion studies, history, psychology

9. Analyze social media (and online forums) in the context of longtermism

We can consider social media both as a factor that shapes our modes of internal communication (context for longtermist discussions) and as a tool for outreach and influencing decisions and opinions.

Some potential subtopics

  1. An analysis of our online forums’ architecture, with an eye to shaping culture and catching biases or gaps.
  2. An analysis of the language of EAs/longtermists (along these lines) with the goal of consciously shaping outreach and discussion on e.g. longtermism. What are the discourse norms and common rhetorical strategies that have developed? Which, if any, seem to be unproductive?
    1. Meme culture: is it helpful in EA? Is it a good outreach tool? How should we improve our modes of interacting with memes?[12]

    2. Similar questions about the rationalist community

  3. Study social media and advertising strategies to understand how susceptible people are to certain persuasion strategies at different points in their lives. Is this a risk factor for AI-enabled totalitarianism? Can we set up safeguards for pathways of persuasion (especially ones that target vulnerable people)?
  4. Should longtermists use social media more often as an advocacy or advising tool? How useful are Twitter and other online platforms as tools for influencing high-stakes decision-making?
    1. Did the actions of people with large Twitter followings who tweeted about pandemic interventions affect real (CDC, WHO, and US government) decisions in measurable ways? Some case studies here could be Nate Silver (e.g. vaccine side-effects), Matt Yglesias (mid-pandemic, vaccine prioritization), and Zeynep Tufekci (early on, masks); this study may also be relevant.
    2. To what extent should longtermists focus on producing and communicating research publicly instead of just circulating it internally or in a narrow academic sphere?
    3. Which platforms are more conducive to this sort of influence?
    4. How tractable is it to become influential (specifically for high-stakes decisions) on something like Twitter?
    5. Does the existence of this form of influence pose risks; does this increase the chances of reactive decision-making? I can imagine that it might give unqualified “influencers” or viral pieces of media authority they do not deserve. Are there ways to improve this?

Fields: Comparative literature? Media studies, psychology, market research

10. Use tools from non-history humanities fields to aid history-oriented projects relevant for longtermism

This likely entails using cultural artefacts as sources of data.

Some potential subtopics

  1. Study scientific or tech-oriented communities throughout history (to improve our understanding of the history of science and technology)

    1. Consider places of cultural or intellectual exchange beyond a narrow academic focus — studies of such places exist, but might not have bled into EA/longtermist communities. (E.g. study the ways early modern European ship-designers spread their developments and interacted with academic communities.[13])

    2. Produce a compilation of (good) analyses of knowledge diffusion systems (processes by which truths became “accepted”)

      1. Philosophical models of such pathways
      2. Specific historical moments of reflection on knowledge diffusion (e.g. Robert Boyle’s writing)
  2. Comb through past verifiable prediction sets to see when long-term predictions were reasonable and whether there are noticeable patterns. (Pull predictions from broad, high-quality document samples — these could include personal correspondence, fictional work, etc. Alternatively, try producing a complete analysis of the implicit forecasts in the writings of historical people who are relevant for EA.[14])

  3. Consider whether it is possible to use cultural artefacts as a source of data for forecasting.

    1. For example, if many literary works published in a time/place are suddenly more sympathetic to some kind of animal, does that correlate with or predict broad and measurable shifts in the social status of the animal?[15] Are there certain kinds of cultural artefacts that are more or less useful for this sort of thing? If there is some correlation, what are the causal directions?
  4. An analysis of the hinge-of-history idea:

    1. Formalize the thesis or question.
    2. Attempt to make it less individualistic (i.e. are there ways to avoid defining the question in terms of the ease-of-impact of an individual actor?).
    3. Produce a more careful outside-view analysis of the history of people and communities thinking that they were living at incredibly influential times. (E.g. The Great Horse Manure Crisis of 1894)

Fields: Archival studies, anthropology, literature or media studies, history, history of science

A few final notes or reminders on further actions

Credits

This essay is a project of Rethink Priorities.

It was written by Lizka, an intern at Rethink Priorities. Thanks to Janique Behman, Neil Dullaghan, David Mathers, Peter Wildeford, and especially Michael Aird and my supervisor Linch Zhang for their helpful feedback. Any mistakes are the fault of Linch Zhang. If you like our work, please consider subscribing to our newsletter. You can see all our public work to date here.

Notes


  1. For context, it took me around 5 hours to generate the full list of ideas, although it took significantly longer to organize them, select and elaborate on my favorite ones, and edit them for clarity. ↩︎

  2. As one example, the list of “Research questions organized by discipline” compiled by 80,000 Hours has history, philosophy, and psychology questions, but not really other humanities/social sciences questions. The 2019 EA Survey found that 14.2% of respondents had studied “Social Sciences”, 13.4% had studied Philosophy, 13.1% had studied Arts & Humanities, 7.5% had studied Psychology (and 24% had studied Computer Science). I’m not entirely sure how to interpret this, given that survey-takers could select multiple responses and some of the categories are often loosely understood, but this does seem to imply that philosophy and psychology are fairly well represented. [Note: addition from 2022: here's a great post on how to apply a background in psychology to AI safety.] ↩︎

  3. Some writing on topics around this already exists: Long-Term Influence and Movement Growth: Two Historical Case Studies (considers state consequentialism, a.k.a. “Mohist consequentialism”); What are some historical examples of people and organizations who've influenced people to do more good?; and, off the forum, Consequences of Compassion: An Interpretation and Defense of Buddhist Ethics (note that I have not read this book). Ideas from Buddhism (and presumably other religions) can inform philosophical frameworks of consequentialism (or theories of well-being), as in this paper. Also potentially relevant for this topic more broadly: Against moral advocacy. ↩︎

  4. The Sentience Institute’s work is likely very relevant. ↩︎

  5. It would probably be better to crowdsource the list of examples, though; the selection of examples I can come up with myself will be very biased. ↩︎

  6. This post, which presents and discusses two different infographics based on The Precipice, could be relevant. ↩︎

  7. A sketch of how this could be done: find a good sample of EA images; get someone with a background in visual arts, market research, art history, design, or some other relevant field or skill to produce a list of trends or possible concerns; then test these specific questions on audiences in a more quantitative way. It might also be easier to just talk to the people involved in putting images out there to figure out their main purposes (Facebook banners? etc.) and uses, and how the images get created or selected. ↩︎

  8. Individual EA examples of many of these media do exist, but it seems hard to find them, and my sense is that we could do more of this. I would appreciate any links or suggestions on how to improve on that! ↩︎

  9. This episode of the Clearer Thinking podcast might be relevant, although I have not listened to it yet. ↩︎

  10. It seems that there have been attempts to produce ethnographies, but it isn’t clear how successful they were. (Which might be an argument against thinking it will succeed in the future, but more details would be helpful.) ↩︎

  11. Possible methodology: use cultural artefacts (e.g. fiction) to generate ideas and hypotheses, and then test those hypotheses/construct validity using survey construction tools (borrowing preference-determination tools from psychology). Holden Karnofsky discusses something like this in an 80,000 Hours podcast episode. ↩︎

  12. Studying memes is also mentioned in this AI Governance research agenda ↩︎

  13. Relevant readings: Steven Shapin, “Pump and Circumstance: Robert Boyle’s Literary Technology,” Social Studies of Science 14, no. 4 (1984), Brian Ogilvie, The Science of Describing: Natural History in Renaissance Europe (University of Chicago Press, 2008), Pamela Long, Artisan/Practitioners and the Rise of the New Science, 1400-1600 (Oregon State University Press, 2011), Technology Trap (about the Industrial Revolution, haven’t read myself) ↩︎

  14. Two such people could be Benjamin Franklin and Jeremy Bentham, although depending on the methodology, selection bias could be a concern. ↩︎

  15. This would be susceptible to selection biases, so it would be necessary to make sure to include cases where this is not true. The first step might be to develop coherent inclusion criteria to produce a list of cases. The following links were suggested as sources of methodologies to emulate: link 1 and link 2. ↩︎


JP Addison @ 2021-06-10T12:32 (+31)

Any mistakes are the fault of Linch Zhang

:D   Good line. I hope you snuck this in and Linch didn’t notice.

Peter_Hurford @ 2021-06-12T21:33 (+6)

:D

MichaelA @ 2021-06-09T08:12 (+14)

Thanks for this post! 

I agree with your points in the "Why it might be helpful to produce lists of projects for people with humanities backgrounds (or interests) to work on" section, and I think each of those 10 ideas seems at least worth some people considering. (Obviously you and I discussed that earlier - I'm just saying it publicly too!)

I've now added this post to my central directory of open research questions, to hopefully increase the chance that people come across this collection later when looking for research ideas.

ggilgallon @ 2021-07-13T11:53 (+11)

Just flagging that I would be excited to connect with anyone who is working on / considering working on 4) how longtermists use different forms of media and how this might be improved, 5) how non-EAs view AI safety issues, 8) notions of utopias, 9) social media in the context of longtermism; all of which relate to new projects at the Future of Life Institute. Feel free to reach out! 

jasmine_wang @ 2022-01-28T18:12 (+1)

Hi Georgiana! Would love to chat (I think we overlapped digitally at SRF in FHI!). Proposed something similar here and delighted to see similar motivations / hopes, and would love to discuss support / co-creation / potential collaboration! https://mirror.xyz/qualiatinker.eth/6c4VLPaS3hqpuWT2iz4yEXRRMHFtMa4vilZvT5lKdmI

Let me know how best to reach out, or you can reach me at jasmine@verses.xyz!

Miranda_Zhang @ 2021-06-18T01:24 (+11)

Great list - even though the EA community certainly doesn't exclude or devalue the humanities, I think it can be perceived as such. As someone with deep pulls toward narrative + cultural change practitioners, I particularly like that you've included literature/media here - narrative change is a nascent field, but an oft-touted accomplishment is the legalization of gay marriage: Cultural change in acceptance of LGBT people: lessons from social marketing

If narrative can influence policy then this kind of work does seem important for building out institutions capable of governing for the long-term.

Miranda_Zhang @ 2021-07-10T17:55 (+8)

Quick note: I'm considering switching thesis topics to "Did the actions of people with large Twitter followings who tweeted about pandemic interventions affect real (CDC, WHO, and US gov) decisions in measurable ways? Some case studies here could be Nate Silver (e.g. vaccine side-effects), Matt Yglesias (mid-pandemic, vaccine prioritization), and Zeynep Tufekci (early on, masks)."

Not at all firm on this but just wanted to make a note here, as I would love to talk about how to make my Public Policy thesis EA-aligned! 

seriously, send help.

Miranda_Zhang @ 2021-07-21T18:32 (+2)

Update: This article seems to be pretty relevant to the above question.

Unfortunately, I'm starting to think my interest is even more qualitative than the above. So I'm not sure how much I'll be contributing to that research question.

Lizka @ 2021-06-19T23:45 (+8)

Hi folks! Thank you so much for the warm reception this post has received so far. I'm actively trying to improve my EA-aligned research and writing skills, so I would really appreciate any constructive feedback you might be willing to send as a comment or a private message. (Negative feedback is especially appreciated.) If you are worried about wording criticism in a diplomatic way, Linch (my supervisor) has also offered to perform the role of a middleman. 

Of course, we would also appreciate being informed if any of the proposed research ideas actually change your decisions (e.g. if you end up writing a paper or thesis based on an idea listed here). (And I would be really curious to see where that goes.)

On a different note, there are additional posts that I would have linked to this one if I had published later. In particular, the Vignettes Workshop (AI Impacts), Why EAs researching mainstream topics can be useful (note: Michael and I both work at Rethink Priorities), this post about a game on animal welfare that just came out (I haven’t tried the game), and this question about the language Matsés and signaling epistemic certainty.

Linch @ 2021-06-21T20:54 (+4)

Hi folks, I want to second Lizka in saying that if you have any feedback, feel free to do any of: comment here, PM me on this site or email me at linch@rethinkpriorities.org.

I'm especially excited for people to point out empirical or conceptual errors here, as the person at fault for all mistakes in this post. :) 

MichaelA @ 2021-06-20T15:31 (+4)

Suggest or support ways to diversify media for EA outreach (moving beyond nerdy podcasts and academic or intellectual writing); consider the pros and cons of these forms of media for different functions. Different forms of media to consider ...

Yeah, I think this is an interesting idea. This morning, I made a Slack workspace for "EA Creatives & Communicators", to provide a space for interactions between people in the EA community who aim to do good through various types of creative or communications activities - e.g., by covering EA-relevant topics or important messages via documentaries, other types of videos, short stories, maybe journalism. These interactions could involve things like asking for advice/feedback, sharing tips and resources, and finding collaborators. If anyone else is interested in joining that Slack, send me a message.

There was a direct catalyst for me making that Slack other than this post, but it's quite possible that this post primed me to respond to that catalyst by making that Slack, rather than just giving the one specific person I was talking to some suggested names and tips. (So in case this post did prime me for that, thanks again for your work on it!)

huffmancaleb12 @ 2021-06-09T17:43 (+4)

This is a great post. Further, existing basic research in the humanities/social sciences could provide useful insights for longtermism without the need for any original research. For example, reading through some historical case studies and synthesizing potential takeaways for longtermism.

Notably, research for longtermism can easily overlap with other cause areas, such as reducing existential risk or catastrophes. There’s low-hanging fruit here. 

I’m currently working (Summer 2021) with Effective Altruism for Christians on increasing research in theology/religion and EA, so I have a special interest in the first item on the list, “1. Study future-oriented beliefs in certain religions or groups”. Recommendations are welcome!