Ask Rethink Priorities Anything (AMA)
By Marcus_A_Davis @ 2020-12-13T17:23 (+101)
Hi, all.
We're the staff at Rethink Priorities and we would like you to Ask Us Anything! We'll be answering all questions starting Tuesday, 15 December.
About the Org
Rethink Priorities is an EA research organization focused on influencing funders and key decision-makers to improve decisions within EA and EA-aligned organizations. You might know of our work on quantifying the number of farmed vertebrates and invertebrates, interspecies comparisons of moral weight, ballot initiatives as a tool for EAs, the risk of nuclear winter, or running the EA Survey, among other projects. You can see all our work to date here and some of our ongoing projects here.
Over the next few years we plan to expand our work in animal welfare, relaunch our work in longtermism, continue our work in movement building, and much more.
About the Team
Leadership
Marcus A. Davis - Co-Executive Director
Marcus is a co-founder and co-Executive Director at Rethink Priorities, where he leads research and strategy. He's also a co-founder of Charity Entrepreneurship and Charity Science Health, where he previously systematically analyzed global poverty interventions, helped manage partnerships, and implemented the technical aspects of the project.
Peter Hurford - Co-Executive Director
Peter is the other co-founder and co-Executive Director of Rethink Priorities. Prior to running Rethink Priorities, he was a data scientist in industry for five years at DataRobot, Avant, Clearcover, and other companies. He also holds a Triple Master rank on Kaggle (an international data science competition platform) and has achieved top 1% performance in five different Kaggle competitions. He previously served as a long-time board member at Animal Charity Evaluators and he continues to serve on the board at Charity Science.
Research
David Moss - Principal Research Manager
David Moss is the Principal Research Manager at Rethink Priorities. He previously worked for Charity Science and has worked on the EA Survey for several years. David studied Philosophy at Cambridge and is an academic researcher of moral psychology.
Kim Cuddington - Distinguished Researcher
Kim Cuddington is a Distinguished Researcher at Rethink Priorities and is an Associate Professor at the University of Waterloo. She has a PhD in Zoology, a Master's in Biology, and a Master's in Philosophy. She also has a background in ecology and mathematical modeling.
David Reinstein - Distinguished Researcher
David is a Senior Lecturer in Economics at the University of Exeter. His research has covered a number of topics, including charitable giving and social influences on giving. He received his PhD from the University of California, Berkeley, under Emmanuel Saez.
Jason Schukraft - Senior Research Manager
Jason is a Senior Research Manager at Rethink Priorities. Before joining the RP team, Jason earned his doctorate in philosophy from the University of Texas at Austin. Jason specializes in questions at the intersection of epistemology and applied ethics.
David Rhys Bernard - Senior Staff Researcher
David is a PhD candidate at the Paris School of Economics and has a Master's in Public Policy and Development. He has a background in causal inference and econometrics and has previously worked at Giving What We Can and the United Nations Development Programme.
Saulius Šimčikas - Senior Staff Researcher
Saulius is a Senior Staff Researcher at Rethink Priorities. Previously, he was a research intern at Animal Charity Evaluators, organized Effective Altruism events in the UK and Lithuania, and worked as a programmer.
Neil Dullaghan - Staff Researcher
Neil is a Staff Researcher at Rethink Priorities. He also volunteers for Charity Entrepreneurship and Animal Charity Evaluators. Before joining RP, Neil worked as a data manager for an online voter platform and has an academic background in Political Science.
Holly Elmore - Staff Researcher
Holly Elmore is a Staff Researcher at Rethink Priorities and has a background in evolutionary biology and ecology. Before working at RP, she earned a PhD from Harvard University in the Department of Organismic and Evolutionary Biology. While at Harvard, she organized the Harvard University Effective Altruism student group, serving as president for two years.
Derek Foster - Staff Researcher
Derek is a Staff Researcher at Rethink Priorities. He studied philosophy and politics as an undergraduate, followed by public health and health economics at master’s level. Before joining RP, Derek worked on the Global Happiness Policy Report and various other projects related to global health, education, and subjective well-being.
Daniela R. Waldhorn - Staff Researcher
Daniela is a Staff Researcher at Rethink Priorities. She is a PhD candidate in Social Psychology, and has a background in management and operations.
Before joining RP, Daniela worked for Animal Ethics and for Animal Equality.
Linchuan Zhang - Staff Researcher
Linchuan (Linch) Zhang is a Staff Researcher at Rethink Priorities working on forecasting and longtermist research. Before joining RP, he did forecasting projects around Covid-19, including with superforecasters and University of Oxford researchers. Previously, he programmed for Impossible Foods and Google, and has led several EA local groups.
Michael Aird - Associate Researcher
Michael Aird is an Associate Researcher at Rethink Priorities. He has a background in political and cognitive psychology and in teaching. Before joining RP, he conducted longtermist macrostrategy research for Convergence Analysis and the Center on Long-Term Risk.
Administration
Abraham Rowe - Director of Operations
Abraham is the Director of Operations at Rethink Priorities. He previously co-founded and served as the Executive Director of Wild Animal Initiative, and served as the Corporate Campaigns Manager at Mercy For Animals.
Janique Behman - Director of Development
Janique is the Director of Development at Rethink Priorities. She cultivates relationships with major donors and institutional grantmakers and helps us find funders for our new research projects. She previously was in charge of strategy and community-building at Effective Altruism Zurich and interned at EA Geneva. She holds an MBA with a focus on philanthropy advisory services.
Ask Us Anything
Please ask us anything - about the org and how we operate, about the staff, about our research… anything!
You can read more about us in our 2020 Impact and 2021 Strategy EA Forum update or visit our website rethinkpriorities.org.
If you're interested in hearing more, please consider subscribing to our newsletter.
Also, we'd be remiss if we didn't mention that we're currently fundraising! We are funding constrained and have both the management capacity and the talent pool of potential hires to grow if given more money. We accept and track restricted funds by cause area if that is of interest.
If you'd like to support our work, you can find donation instructions at https://www.rethinkpriorities.org/donate or you can email Janique at janique@rethinkpriorities.org.
Jonas Vollmer @ 2020-12-14T11:01 (+30)
How funding-constrained is your longtermist work, i.e., how much funding have you raised for your 2021 longtermist budget so far, and how much do you expect to be able to deploy usefully, and how much are you short?
Peter_Hurford @ 2020-12-15T19:41 (+24)
Hi Jonas,
Since we last posted our longtermism budget, we've raised ~$89,500 restricted to longtermism for 2021 (with the largest being the grant recommendation from the Survival and Flourishing Fund). This means we will enter 2021 with ~$121K restricted to longtermism not yet spent. Overall, we'd like to raise an additional $403K-$414K for longtermist work by early 2021.
For full transparency - note that, if necessary, we may also choose to use unrestricted funds on longtermism and that this is not factored into these numbers. We currently have ~$273K in unrestricted funds, though we will likely have non-longtermism things we will need to spend this money on.
Given that we are currently just raising money to cover the salaries of our existing longtermist staff (including operations support) as well as start a longtermism intern program, we expect we will be able to deploy longtermist money quickly. We also have a large talent pool of longtermist researchers we likely could hire this year if we ended up with even more longtermism money.
Linch @ 2020-12-16T03:23 (+11)
I did internal modeling/forecasting for our fundraising figures, and on a first pass it looked like our longtermist work was more likely to be funding constrained than our other priority cause areas, at least if "funding constrained" is narrowly defined as "what's the probability that we do not raise all the money that we'd like for all planned operations to run smoothly."
My main reasoning was somewhat outside-viewy and focused on general uncertainty: our longtermist team is new and, relative to Rethink's other cause areas, less well-established, with less of a track record of either a) prior funding, b) public work other than Luisa's nuclear risk work, or c) a well-vetted research plan. So I'm just generally unsure of these things.
Three major caveats:
1. I did those forecasts in late October and now I think my original figures were too pessimistic.
2. Another caveat is that my predictions were more a reflection of my own uncertainty than a lack of inside-view confidence in the team. For context, my 5th-95th percentile credible interval spanned ~an order of magnitude across all cause areas (a rough sketch of what that kind of spread looks like follows this list).
3. When making the original numbers, I incorporated, but plausibly substantially underrated, the degree to which the forecasts themselves would change outcomes rather than just reflect reality. For example, Peter and Marcus may have made different prioritization decisions because of my numbers, or this comment may affect other people's decisions.
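For a rough sense of what an interval that wide looks like, here is a minimal sketch (not the actual forecasting model; the median figure below is a made-up placeholder) of a lognormal forecast whose 5th-95th percentile credible interval spans about an order of magnitude:

```python
# A minimal sketch, assuming a lognormal forecast whose 5th-95th percentile
# credible interval spans ~10x. The median is a hypothetical placeholder,
# not a real fundraising figure.
import math

median = 200_000          # hypothetical median forecast, in USD
z_90 = 1.645              # z-score for the 5th/95th percentiles
# Choose sigma so the 95th/5th percentile ratio is 10x:
# exp(2 * z_90 * sigma) = 10  =>  sigma = ln(10) / (2 * z_90)
sigma = math.log(10) / (2 * z_90)

p5 = median * math.exp(-z_90 * sigma)
p95 = median * math.exp(z_90 * sigma)
print(f"5th percentile:  ${p5:,.0f}")   # ~$63,000
print(f"95th percentile: ${p95:,.0f}")  # ~$632,000
print(f"ratio: {p95 / p5:.1f}x")        # 10.0x, i.e. ~an order of magnitude
```

With a 10x spread like this, the 5th and 95th percentiles land at roughly a third of the median and roughly three times the median, respectively.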
Dan Hageman @ 2020-12-13T23:29 (+27)
Huge fan of the work your team has done, so thank you all for everything! A couple questions :)
1. For potential donors who are particularly interested in wild animal welfare research, how would you describe any key differentiating factors between the approaches of Rethink Priorities and Wild Animal Initiative?
2. For donors who might want to earmark donations to go specifically towards wild animal welfare research within your organization, would this in turn affect the allocation of priority-agnostic donations otherwise made to Rethink? Or is there a way in which such earmarked donations indeed counterfactually support this specific area as opposed to the general areas you cover? (This question applies to most multi-focused orgs.)
3. With respect to invertebrate research, and specifically 'invertebrate sentience', it seems that the sheer number of invertebrates existing would be the driving factor in calculating any expected benefit of pursuing interventions. Are there 'sentience probabilities' low enough to put such an expected value of intervention in question? (I have not thoroughly looked through your publicly available work, so feel free to point to relevant resources if this question has been addressed!)
Thanks in advance for all your thoughts!
Jason Schukraft @ 2020-12-15T16:33 (+24)
Hi Dan,
Thanks for your questions. I'll let Marcus and Peter answer the first two, but I feel qualified to answer the third.
Certainly, the large number of invertebrate animals is an important factor in why we think invertebrate welfare is an area that deserves attention. But I would advise against relying too heavily on numbers alone when assessing the value of promoting invertebrate welfare. There are at least two important considerations worth bearing in mind:
(1) First, among sentient animals, there may be significant differences in capacity for welfare or moral status. If these differences are large enough, they might matter more than the differences in the numbers of different types of animals.
(2) Second, at some point, Pascal's Mugging will rear its ugly head. There may be some point below which we are rationally required to ignore probabilities. It's not clear to me where that point lies. (And it's also not clear that this is the best way to address Pascal's Mugging.) There are about 440 quintillion nematodes alive at any given time, which sounds like a pretty good reason to work on nematode welfare, even if one's credence in their sentience is really low. But nematodes are nothing compared to bacteria. There are something like 5 million trillion trillion bacteria alive at any given time. At some point, it seems as if expected value calculations cease to be appropriately action-guiding, but, again, it's very uncertain where to draw the line.
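To make the scale point concrete, here is a minimal back-of-the-envelope sketch of the expected-value arithmetic. The population figures are the ones cited above; the sentience probabilities and per-individual welfare capacities are hypothetical placeholders, not Rethink Priorities estimates, and are there only to show how the numbers interact:

```python
# A rough sketch of the expected-value arithmetic described above.
# Population figures are from the comment; the sentience probabilities and
# per-individual welfare capacities are hypothetical placeholders.
populations = {
    "nematodes": 4.4e20,  # ~440 quintillion alive at any given time
    "bacteria": 5e30,     # ~5 million trillion trillion
}
p_sentience = {"nematodes": 0.01, "bacteria": 1e-12}      # hypothetical
welfare_capacity = {"nematodes": 1e-3, "bacteria": 1e-6}  # hypothetical, relative units

for name, n in populations.items():
    expected_weight = n * p_sentience[name] * welfare_capacity[name]
    print(f"{name}: expected welfare at stake ~ {expected_weight:.1e} (relative units)")
```

Even with very small per-individual probabilities and welfare weights, the populations are so large that the expected totals stay enormous, which is part of why expected value calculations can stop being appropriately action-guiding at some point.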
Marcus_A_Davis @ 2020-12-15T18:40 (+16)
Thanks for the questions!
On (1), we see our work in WAW as currently doing three things: (1) foundational research (e.g., understanding moral value and sentience, understanding well-being at various stages of life), (2) investigating plausible tractable interventions (i.e., feasible interventions currently happening or doable within 5 years), and (3) field building and understanding (e.g., currently we are running polls to see how "weird" the public finds WAW interventions).
We generally defer to WAI on matters of direct outreach (both academic and general public) and do not prioritize that area as much as WAI and Animal Ethics do. It's hard to say more on how our vision differs from WAI without them commenting, but we collaborate with them a lot and we are next scheduled to sync on plans and vision in early January.
On (2), it's hard to predict exactly what additional restricted donations do, but in general we expect them, in the long run, to increase how much we spend in a cause area by an amount similar to the amount donated. Reasons for this include: we budget on a fairly long-term basis, so we generally try to predict what we will spend in a space, and then raise that much funding. If we don't raise as much as we'd like, we would likely consider allocating our expenses differently; and if we raise more than we expected, we'd scale up our work in a cause area. Because our ability to work in spaces is influenced by how much we raise, raising more restricted funding in a space generally ought to lead to us doing more work in that space.
Denis Drescher @ 2020-12-14T10:21 (+22)
I’ve been very impressed with your work, and I’m looking forward to you hopefully making similarly impressive contributions to probing longtermism!
But when it comes to questions: You did say “anything,” so may I ask some questions about productivity when it comes to research in particular? Please pick and choose from these to answer any that seem interesting to you.
- Thinking vs. reading. If you want to research a particular topic, how do you balance reading the relevant literature against thinking for yourself and recording your thoughts? I’ve heard second-hand that Hilary Greaves recommends thinking first so as to be unanchored by the existing literature and the existing approaches to the problem. Another benefit may be that you start out reading the literature with a clearer mental model of the problem, which might make it easier to stay motivated and to remain critical/vigilant while reading. Would you agree or do you have a different approach?
- Self-consciousness. I imagine that virtually any research project, successful and unsuccessful, starts with some inchoate thoughts and notes. These will usually seem hopelessly inadequate but they’ll sometimes mature into something amazingly insightful. Have you ever struggled with mental blocks when you felt self-conscious about these beginnings, and have you found ways to (reliably) overcome them?
- Is there something interesting here? I often have some (for me) novel ideas, but then it turns out that whether true or false, the idea doesn’t seem to have any important implications. Conversely, I’ve dismissed ideas as unimportant, and years later someone developed them – through a lot of work I didn’t do because I thought it wasn’t important – into something that did connect to important topics in unanticipated ways. Do you have rules of thumb that help you assess early on whether a particular idea is worth pursuing?
- Survival vs. exploratory mindset. I’ve heard of the distinction between survival mindset and exploratory mindset, which makes intuitive sense to me. (I don’t remember where I learned of these terms, but I tried to clarify how I use them in a comment below.) I imagine that for most novel research, exploratory mindset is the more useful one. (Or would you disagree?) If it doesn’t come naturally to you, how do you cultivate it?
- Optimal hours of work per day. Have you found that a particular number of hours of concentrated work per day works best for you? By this I mean time you spend focused on your research project, excluding time spent answering emails, AMAs, and such. (If hours per day doesn’t seem like an informative unit to you, imagine I asked “hours per week” or whatever seems best to you.)
- Learning a new field. I don’t know what I mean by “field,” but probably something smaller than “biology” and bigger than “how to use Pipedrive.” If you need to get up to speed on such a field for research that you’re doing, how do you approach it? Do you read textbooks (if so, linearly or more creatively?) or pay grad students to answer your questions? Does your approach vary depending on whether it’s a subfield of your field of expertise or something completely new?
- Hard problems. I imagine that you’ll sometimes have to grapple with problems that are sufficiently hard that it feels like you didn’t make any tangible progress on them (or on how to approach them) for a week or more. How do you stay optimistic and motivated? How and when do you “escalate” in some fashion – say, discuss hiring a freelance expert in some other field?
- Emotional motivators. It’s easy to be motivated on a System 2 basis by the importance of the work, but sometimes that fails to carry over to System 1 when dealing with some very removed or specific work – say, understanding some obscure proof that is relevant to AI safety along a long chain of tenuous probabilistic implications. Do you have tricks for how to stay System 1 motivated in such cases – or when do you decide that a lack of motivation may actually mean that something is wrong with the topic and you should question whether it is sufficiently important?
- Typing speed. I have this pet theory that a high typing speed is important for some forms of research that involve a lot of verbal thinking (e.g., maybe not maths). The idea is that our memory is limited, so we want to take notes of our thoughts. But handwriting is slow, and typing is only mildly faster, so unless one thinks slowly or types very fast, there is a disconnect that causes continual stalling, impatience, and forgotten ideas, and prevents the process from flowing. Does that make any intuitive sense to you? Do you have any tricks (e.g., dictation software)?
- Obvious questions. Nate Soares has an essay on “obvious advice.” Michael Aird mentioned that in many cases he just wanted to follow up on some obvious ideas. They were obvious in hindsight, but evidently they hadn’t been obvious to anyone else for years. Is there a distinct skill of “noticing the obvious ideas” or “noticing the obvious open questions”? And can it be trained or turned into a repeatable process?
- Tiredness, focus, etc. We sometimes get tired or have trouble focusing. Sometimes this happens even when we’ve had enough sleep (just to get an obvious solution out of the way: sleep/napping). What are your favorite things to do when focusing seems hard or you feel tired? Do you use any particular nootropics, supplements, air quality monitor, music, or exercise routine?
- Meta. Which of these questions would you like to see answered by more people because you are interested in the answers too?
Thank you kindly! And of course just pick out the questions you think are interesting for you or other readers to answer. :-)
Holly_Elmore @ 2020-12-15T18:56 (+23)
I can answer 6, as I’ve been doing it for Wild Animal Welfare since I was hired in September. WAW is a new and small field, so it is relatively easy to learn the field, but there’s still so much! I started by going backwards (into the Welfare Biology movement of the 80s and 90s) and forwards (into the WAW EA orgs we know today) from Brain Tomasik, consulting the primary literature over various specific matters of fact. A great thing about WAW being such a young field (and so concentrated in EA) is that I can reach out to basically anyone who’s published on it and have a real conversation. It’s a big shortcut!
I should note that my background is in Evolutionary Biology and Ecology, so someone else might need a lot more background in those basics if they were to learn WAW.
Jason Schukraft @ 2020-12-15T16:02 (+20)
Hi Denis,
Lots of really good questions here. I’ll do my best to answer.
- Thinking vs reading: I think it depends on the context. Sometimes it makes sense to lean toward thinking more and sometimes it makes sense to lean toward reading more. (I wouldn’t advise focusing exclusively on one or the other.) Unjustified anchoring is certainly a worry, but I think reinventing the wheel is also a worry. One could waste two weeks groping toward a solution to a problem that could have been solved in an afternoon just by reading the right review article.
- Self-consciousness: Yep, I am intimately familiar with hopelessly inchoate thoughts and notes. (I’m not sure I’ve ever completed a project without passing through that stage.) For me at least, the best way to overcome this state is to talk to lots of people. One piece of advice I have for young researchers is to come to terms with sharing your work with people you respect before it’s polished. I’m very grateful to have a large network of collaborators willing to listen to and read my confused ramblings. Feedback at an early stage of a project is often much more valuable than feedback at a later stage.
- Is there something interesting here?: Yep, this also happens to me. Unfortunately, I don’t have any particular insight. Oftentimes the only way to know whether an idea is interesting is to put in the hard exploratory work. Of course, one shouldn’t be afraid to abandon an idea if it looks increasingly unpromising.
- Survival vs. exploratory mindset: Insofar as I understand the terms, an exploratory mindset is an absolute must. Not sure how to cultivate it, though.
- Optimal hours of work per day: I work between 4 and 8 hours a day. I don’t find any difference in my productivity within that range, though I imagine if I pushed myself to work more than 8, I would pretty quickly hit diminishing returns.
- Learning a new field: I can’t emphasize enough the value of just talking to existing experts. For me at least, it’s by far the most efficient way to get up to speed quickly. For that reason, I really value having a large network of diverse people I can contact with questions. I put a fair amount of effort into cultivating such a network.
- Hard problems: I’m fortunate that my work is almost always intrinsically interesting. So even if I don’t make progress on a problem, I continue to be motivated to work on it because the work itself is so very pleasant. That said, as I’ve emphasized above, when I’m stuck, I find it most helpful to talk to lots of people about the problem.
- Emotional motivators: When I reflect on my life as a whole, I’m happy that I’m in a career that aims to improve the world. But in terms of what gets me out of bed in the morning and excited to work, it’s almost never the impact I might have. It’s the intrinsically interesting nature of my work. I almost certainly would not be successful if I did not find my research to be so fascinating.
- Typing speed: No idea what my typing speed is, but it doesn’t feel particularly fast, and that doesn’t seem to handicap me. I’ve always considered myself a slow thinker, though.
- Obvious questions: Yeah, I think there is a general skill of “noticing the obvious.” I don’t think I’m great at it, but one thing I do pretty often is reflect on the sorts of things that appear obvious now that weren’t obvious to smart people ~200 years ago.
- Tiredness, focus, etc.: Regular exercise certainly helps. Haven’t tried anything else. Mostly I’ve just acclimated to getting work done even though I’m tired. (Not sure I would recommend that “solution,” though!)
- Meta: I’d like to see others answer questions 1, 3, 6, 7, and 10.
Denis Drescher @ 2020-12-19T15:06 (+5)
Your advice to talk to people is probably most important to me! I haven’t tried that a lot, but when I did, it was very successful. One hurdle is not wanting to come off as too stupid to the other person (but there are also people who make me feel sufficiently at ease that I don’t mind coming off as stupid) and another is not wanting to waste people’s time. So I want to first be sure that I can’t just figure it out myself within ~10x the time. Maybe that’s a bad tradeoff. I also sometimes worry that people would actually like to chat more, but my reluctance to waste their time interferes with both our interests in chatting. (Maybe they have the same reluctance, and both of us would be happier if neither of us had it. Can we have a Reciprocity.io for talking about research, please? ^^)
Typing speed: Haha! You can test it here, for example: https://10fastfingers.com/typing-test/english. I’ve been stagnating at ~60 WPM for years now. Maybe there’s some sort of distinction whereby some brains are more optimized toward (e.g., worse memory) or incentivized to optimize toward (e.g., through positive feedback) fewer low-level concepts, and others more toward high-level concepts. So when it comes to measures of performance that have time in the denominator, the first group hits diminishing marginal returns early while the second keeps speeding up for a long time. Maybe the second group is, in turn, less interested in understanding from first principles, which might make them less innovative. Just random speculation.
Obvious questions: Yeah, I’ve been wondering how it can be that now a lot of people come up independently with cases for nonhuman rights and altruism regardless of distance, but a century ago seemingly almost no one did. Maybe it’s just that I don’t know because most of those are lost in history and those that are not, I just don’t know about (though I can think of some examples). Or maybe culture was so different that a lot of the frameworks weren’t there that these ideas attach to. So if moral genius is, say, normally distributed, then values-spreading could have the benefit that it increases the number of people that use relevant frameworks and thereby also increases the absolute number of moral geniuses who work within those frameworks. The values would have to be sufficiently cooperative not to risk zero-sum competition between values. I suppose that’s similar to Bostrom’s Megaearth scenario except with people who share certain frameworks in their thinking rather than the pure number of people.
Getting work done when tired: Well, to some degree I noticed that I over-update on tiredness, and then get into a negative feedback loop where I give up on things too quickly because I think I’m too tired to do them. At that point I’m usually not actually particularly tired.
MichaelA @ 2020-12-20T02:08 (+5)
(Sorry for barging in on this thread :D)
Regarding talking to people to get early feedback, get up to speed in a field, etc., you might find this post useful (if you haven't already seen it).
I also sometimes worry that people would actually like to chat more, but my reluctance to waste their time interferes with both our interest to chat.
I find this relatable. Relatedly, in the above-linked post, Michelle Hutchinson (the author) wrote:
Try to make the conversation concise, and to avoid going over the time allocated. I really appreciate when people do this when I’m talking to them, because it means I can focus on thinking through the ideas rather than also making sure that we’re sticking to the agenda and get to everything.
I commented that I'd slightly push back on that passage, saying:
I think it makes sense for this to be the default way one approaches conversations in which one is seeking advice. But I think a decent portion of advice-givers would either be ok with or actually prefer a more loose / lengthy / free-wheeling / non-regimented conversation.
There have been a few times when I've arranged to talk to someone I perceived as very busy and important, and so I've tried to be very conscious of their time and give them opportunities to wrap things up, but they repeatedly opted to keep talking for a surprisingly long time. And they seemed genuinely happy with this, and I ended up getting a lot of extra value out of that extra time.
So I think it's probably good to be open to signs that one's conversation partner is ok with or prefers a longer conversation, even if one shouldn't assume they are.
Denis Drescher @ 2020-12-20T11:40 (+3)
Thanks! Yeah, I sometimes wonder about that. I suppose in rationality-adjacent circles I can just ask what someone’s preference is (free-wheeling chat or no-nonsense and to the point). Maybe that’d be a faux pas or weird in general, but I think it should be fine among most EAs?
Holly_Elmore @ 2020-12-15T19:01 (+19)
- Personally, I’m very self-conscious about my work and tend to wait too long to share it. But the culture of RP seems to fight that tendency, which I think is very productive!
Denis Drescher @ 2020-12-19T12:49 (+7)
Thanks! This is something I sometimes struggle with I think. Is the culture just all about sharing early and often and helping each other, or are there also other aspects to the culture that I may not anticipate that help you overcome this self-consciousness? :-)
DavidBernard @ 2020-12-15T18:29 (+18)
1. Thinking vs. reading.
Another benefit of thinking before reading is that it can help you develop your research skills. Noticing some phenomenon and then developing a model to explain it is a super valuable exercise. If it turns out you reproduce something that someone else has already done and published, then great, you’ve gotten experience solving some problem and you’ve shown that you can think through it at least as well as some expert in the field. If it turns out that you have produced something novel, then it’s time to see how it compares to existing results in the literature and get feedback on how useful it is.
This said, I think this is more true for theoretical work than applied work, e.g. the value of doing this in philosophy > in theoretical economics > in applied economics. A fair amount of EA-relevant research is summarising and synthesising what the academic literature on some topic finds and it seems pretty difficult to do that by just thinking to yourself!
3. Is there something interesting here?
I mostly try to work out how excited I am by this idea and whether I could see myself still being excited in 6 months, since for me having internal motivation to work on a project is pretty important. I also try to chat about this idea with various other people and see how excited they are by it.
4. Survival vs. exploratory mindset.
I also haven’t heard these terms before, but from your description (which frames a survival mindset pretty negatively), an exploratory mindset comes fairly naturally to me and therefore I haven’t ever actively cultivated it. Lots of research projects fail so extreme risk aversion in particular seems like it would be bad for researchers.
5. Optimal hours of work per day.
I typically aim for 6-7 hours of deep work a day and a couple of dedicated hours for miscellaneous tasks and meetings. Since starting part-time at RP I’ve been doing 6 days a week (2 RP, 4 PhD), but before that I did 5. I find RP deep work less taxing than PhD work. 6 days a week is at the upper limit of manageable for me at the moment, so I plan to experiment with different schedules in the new year.
6. Learning a new field.
I’m a big fan of textbooks and schedule time to read a couple of textbook chapters each week. LessWrong’s “best textbooks on every subject” thread is pretty good for finding them. I usually make Anki flashcards to help me remember the key facts, but I’ve recently started experimenting with Roam Research to take notes, which I’m also enjoying, so my “learning flow” is in flux at the moment.
8. Emotional motivators.
My main trick for dealing with this is to always plan my day the night before. I let System 2 Dave work out what is important and needs to be done and put blocks in the calendar for these things. When System 1 Dave is working the next day, his motivation doesn’t end up mattering so much because he can easily defer to what System 2 Dave said he should do. I don’t read too much into lack of System 1 motivation, it happens and I haven’t noticed that it is particularly correlated with how important the work is, it’s more correlated with things like how scary it is to start some new task and irrelevant things like how much sunlight I’ve been getting.
9. Typing speed.
I struggle to imagine typing speed being a binding constraint on research productivity, since I’ve never found typing speed to be a problem for getting into flow, but when I just checked, my WPM was 85, so maybe I’d feel differently if it were slower. When I’m coding, the vast majority of my time is spent thinking about how to solve the problem I’m facing, not typing the code that solves the problem. When I’m writing first drafts, I think typing speed is a bit more helpful for the reasons you mention, but again more time goes into planning the structure of what I want to say and polishing than into the first pass at writing, where speed might help.
11. Tiredness, focus, etc.
My favourite thing to do is to stop working! Not all days can be good days and I became a lot happier and more productive when I stopped beating myself up for having bad days and allowed myself to take the rest of the afternoon off.
12. Meta.
The questions I didn’t answer were because I didn’t have much to say about them so I’d be happy to see answers to them!
Denis Drescher @ 2020-12-19T13:32 (+3)
Thank you! Using the thinking vs. reading balance as a feedback mechanism is an interesting take, and in my experience it’s also most fruitful in philosophy, though I can’t compare with those branches of economics.
Survival mindset: I suppose it serves its purpose when you’re in a very low-trust environment, but it’s probably not necessary most of the time for most aspiring EA researchers.
Thanks for linking that list of textbooks! It’s also been helpful for me in the past. :-D
Planning the next day the evening before also seems like a good thing to try for me. Thanks!
I wonder whether you all have such fairly high typing speeds simply because you all type a lot or whether 80+ WPM is a speed threshold that is necessary to achieve before one ceases to perceive typing speed as a limiting factor. (Mine is around 60 WPM.)
I hope you can get your work hours down to a manageable level!
EdoArad @ 2020-12-21T05:37 (+2)
It was interesting to read, thanks for the answers :)
A small remark, which may be of use as you said you used Anki and now using Roam - The Roam Toolkit add-on allows you to use spaced-repetition in Roam.
Linch @ 2020-12-16T06:40 (+10)
#9 Typing speed: My own belief is that typing speed is probably less important than you appear to believe, but I care enough about it that I logged 53 minutes of typing practice on keybr this year (usually during moments when I'm otherwise not productive and just want to get "in flow" doing something repetitive), and I suspect I could still productively use another 3-5 hours of typing practice next year, even if it trades off against deep work time (and presumably many more hours than that if it does not).
#10 Obvious questions. I suspect that while ignoring or not noticing "obvious questions/advice" is sometimes just a coincidental unforced error, more often than not there is some form of motivated reasoning going on behind the scenes (e.g., because the obvious answer would invalidate a hypothesis I'm wedded to, because it involves unpleasant tradeoffs, because some beliefs are lower prestige, because it makes the work I do seem less important, etc.). I think training myself to carefully notice these things has been helpful, though I suspect I still miss a lot of obvious stuff.
#11 Tiredness, focus, etc. I haven't figured this out yet and am keen to learn from my coworkers and others! Right now I take a lot of caffeine, and I suspect that if I were more careful about optimization I would cycle drugs on a weekly basis rather than taking the same one every day (especially a drug like caffeine that has tolerance and withdrawal symptoms).
Denis Drescher @ 2020-12-19T15:39 (+2)
Typing speed: Interesting! What is your typing speed?
Obvious questions: Thanks, I’ll keep that in mind. It seems unlikely to be the case for me, but I haven’t tried to observe such a connection either. I’ve observed the opposite tendency in myself, in the sense that I’m worried about being wrong and so spend a lot of time probing all the ways in which I may be wrong. This has had the unintended negative effect that I’m too likely to abandon old approaches in favor of ones I’ve only heard of very recently, because I haven’t yet come up with as many counterarguments to the latter. I also find rehearsing stuff that I already believe to be yucky and boring in ways that rehearsing counterarguments is not. But of course I might be falling for both traps in different contexts.
Linch @ 2020-12-19T23:24 (+2)
Typing speed: Interesting! What is your typing speed?
Only 57.9 according to keybr. I suspect a) typing practice will be less helpful for me if my typing speed is higher (like David's) and b) my current typing speed is below average for programmers (not sure about researchers).
(It's probably relevant/bad that my default typing system on those typing test layouts (26 characters + space) only uses about 5 fingers. I think I go up to 8 on a more normal paragraph like this one that also uses shift/return/slash/number pad. I think if I were focused on systematic rather than incremental changes to my typing speed, I'd try to figure out how to force myself to use all 10 fingers.)
Obvious questions
Hmm I think a lot of people have motivated reasoning of the form I describe, but I don't know you well enough and I definitely don't think all people are like this.
There is certainly a danger as well of being too contrarian or self-critical.
Have you tried calibration practice?
Maybe also make an explicit effort to write down key beliefs and numerical probabilities (or even just words for felt senses) to record and eventually correct for overupdating on new arguments/evidence (if this is indeed your issue).
Denis Drescher @ 2020-12-20T12:09 (+2)
Do you use the guided lessons of Keybr or a custom text? I think the guided lessons are geared toward your weaknesses, which probably leads to a lower speed than what you’d achieve with the average text.
my current typing speed is below average for programmers
That’s something where I’ve never felt bottlenecked by my typing speed. Learning to type blindly was very useful, though, because it gave me a lot more freedom with screen configurations. (And switching to a keyboard layout other than German, where most brackets are super hard to reach. I use a customized Colemak.)
Have you tried calibration practice?
Yeah, it’s on my list of things I want to practice more, but the few times I did some tests I was mostly well-calibrated already (with the exception of one probability level or what they’re called). There’s surely room for improvement, though. Maybe I’ll do worse if the questions are from an area that I think I know something about. ^^
Maybe I’m also too easily swayed by people who speak with an air of confidence. I might be falling for some sort of typical mind fallacy and assume that when someone doesn’t use a lot of hedges, they must be so sure that they’re almost certain to be right, and then update strongly on that. But I’m not quite convinced by that theory either. That probably happens sometimes, but at other times I also overupdate on my own new ideas. I’m pretty sure I overupdate whenever people use guilt-inducing language, though.
I filled in Brian Tomasik’s list of beliefs and values on big questions at one point. :-D
MichaelA @ 2020-12-17T12:52 (+9)
Hi Denis,
Thanks again for these questions. I'll share my answers in a few comments. This context and disclaimer - including that I only started with Rethink a month ago - should be borne in mind.
1. Thinking vs reading
I don't think I really have explicit policies regarding balancing reading against thinking myself and recording my thoughts. Maybe I should.
I'm somewhat inclined to think that, on the margin and on average (so not in every case), EA would benefit from a bit more reading of relevant literatures (or talking to more experienced people in an area, watching of relevant lectures, etc.), even at the expense of having a bit less time for coming up with novel ideas.
I feel like EA might have a bit too much a tendency towards "think really hard by oneself for a while, then kind-of reinvent the wheel but using new terms for it". It might be that, often, people could get to similar ideas faster and in a way that connects to existing work better (making it easier for others to find, build on, etc.) by doing some extra reading first.
Note that this is not me suggesting EAs should increase how much they defer to experts/others/existing work. Instead, I'm tentatively suggesting spending more time learning what experts/others/existing work has to say, which could be followed by agreeing, disagreeing, critiquing, building on, proposing alternatives, striking out in a totally different direction, etc.
(On this general topic, I liked the post The Neglected Virtue of Scholarship.)
MichaelA @ 2020-12-17T12:57 (+5)
Less important personal ramble:
I often feel like I might be spending more time reading up-front than is worthwhile, as a way of procrastinating, or maybe out of a sort-of perfectionism (the more I read, the lower the chance that, once I start writing, what I write is mistaken or redundant). And I sort-of scold myself for that.
But then I've repeatedly heard people remark that I have an unusually large amount of output. (I sort-of felt like the opposite was true, until people told me this, which is weird since it's such an easily checkable thing!) And I've also got some feedback that suggested I should move more in the direction of depth and expertise, even at the cost of breadth and quantity of output.
So maybe that feeling that I'm spending too much time reading up-front is just mistaken. And as mentioned, that feeling seems to conflict with what I'd (tentatively) tend to advise others, which should probably make me more suspicious of the feeling. (This reminds me of asking "Is this how I'd treat a friend?" in response to negative self-talk [source with related ideas].)
MichaelA @ 2020-12-18T12:24 (+8)
10. “Obvious questions”
(Just my personal, current, non-expert thoughts, as always. Also, I’m not sure I’m addressing precisely the question you had in mind.)
A summary of my recommendations in this vicinity:
- If people want to do research and want a menu of ideas/questions to work on, including ideas/questions that seem like they obviously should have a bunch of work on them but don’t yet, they could check out this central directory for open research questions, and/or an overlapping 80,000 Hours post.
- If people want to discover “new” instances of such ideas/questions, one option might be to just try to notice ideas/variables/assumptions that seem important to some people’s beliefs, but that seem debatable and vague, have been contested by others, and/or haven’t been stated explicitly and fleshed out.
- One way to do this might be to have a go at rigorously, precisely writing out the arguments that people seem to be acting as if they believe, in order to spot the assumptions that seem required but that those people haven't stated/emphasised.
- One could then try to explore those assumptions in detail, either just through more fleshed-out “armchair reasoning”, or through looking at relevant empirical evidence and academic work, or through some mixture of those things.
- I think this is a big part of what I’ve done this year.
- Here’s one example of a piece of my own work which came from roughly that sort of process.
I’ll add more detailed thoughts below.
---
I interpret this question as being focused on cases in which an idea/open question seems like it should’ve been obvious, or seems obvious in retrospect, yet it has been neglected so far. (Or the many cases we should assume still exist in which the idea/question is still neglected, but would - if and when finally tackled - seem obvious.)
It seems to me that there are two major types of such cases:
- Unnoticed: Cases in which the ideas/open questions haven’t even been noticed by almost anyone
- Or at least, almost anyone in the relevant community/field.
- So I'd still say an idea counts as "unnoticed" for these purposes even if, for example, a very similar idea has been explored thoroughly in sociology, but no one in longtermism has noticed that that idea is relevant to some longtermist issue, nor independently arrived at a similar idea.
- Noticed yet neglected: Cases in which the ideas/open questions have been noticed, but no one has really fleshed them out or tackled them much
- E.g., a fair number of longtermists have noticed the question of how likely various types of recovery are from various types of civilizational collapse. But as far as I’m aware, there was nothing even approaching a thorough analysis of the question until some recent still-in-progress work, and there's still room for much more work here.
- Another example is questions related to how likely global, stable totalitarianism is; what factors could increase or decrease the odds of that; and what to do about this. Some people have highlighted such questions (including but not only in the context of advanced AI), but I’m not aware of any detailed work on them.
This is really more a continuum than a binary distinction. In almost all cases, there’s probably been someone in a relevant community who’s at least briefly noticed something relevant. But sometimes it’ll just be that something kind-of relevant has been discussed verbally a few times and then forgotten, while other times it’ll be that people have prominently highlighted pretty precisely the relevant open question, yet no one has actually worked on it. (And of course there'll be many cases in between.)
---
For "noticed yet neglected" ideas/questions, recommendation 1 from above will be more relevant: people could find many ideas/questions of this type in this central directory for open research questions, and just get cracking on them.
That directory is like a map pointing the way to many trees that might be full of low-hanging fruit that would’ve been plucked by now in a better world. And I really would predict that a lot of EAs could do valuable work by just having a go at those questions. (I’m less confident that this is the most valuable thing lots of EAs could be doing, and each person would have to think that through for themselves, in light of their specific circumstances. See also.)
So we don’t necessarily need all EA-aligned researchers to try to cultivate a skill of “noticing the ideas that should’ve been tackled/fleshed out already” (though I’m sure some should). Some could just focus on actually exploring the ideas that have been noticed but still haven’t been tackled/fleshed out.
---
For "unnoticed" ideas/questions, recommendation 2 from above will be more relevant.
I think this dovetails somewhat with Ben Garfinkel calling for[1] more people to just try to rigorously write up more detailed versions of arguments about AI risk that often float around in sketchier or briefer form. (Obviously brevity is better than length, all else held equal, but often a few pages isn’t enough to give an idea proper treatment.)
---
There are at least two other approaches for finding "unnoticed" ideas/questions which seem to have sometimes worked for me, but which I’m less sure would often be useful for many people, and less sure I’ll describe clearly. These are:
- Trying to sketch out causal diagrams of the pathway to something (e.g., an existential catastrophe) happening
- I think that doing something like this has sometimes helped me notice that there are:
- assumptions or steps missing in the standard/fleshed-out stories of how something might happen,
- alternative pathways by which something could happen, and/or
- alternative/additional outcomes that may occur
- See also
- Trying to define things precisely, and/or to precisely distinguish concepts from each other, and seeing if anything interesting falls out
- Here’s an abstract example, but one which matches various real examples that have happened for me:
- I try to define X, but then notice that that definition would fail to cover some cases of what I’d usually think of as X, and/or that it would cover some cases of what I’d usually think of as Y (which is a distinct concept).
- This makes me realise that X and/or Y might be able to take somewhat different forms or occur via different pathways to what was typically considered, or that there’s actually an extra requirement for X or Y to happen that was typically ignored.
- I feel like it’d be easy to misinterpret my stance here.
- I actually think that definitions will never or almost never really be “perfect”, and I agree with the ideas in this post (see also family resemblance). And I think that many debates over definitions are largely nitpicking and wasting time.
- But I also think that, in many cases, being clearer about definitions can substantially benefit both thought and communication.
---
I should again mention that I’m only ~1.5 years into my research career, so maybe I’ll later change my mind about a bunch of those points, and there are probably a lot of useful things that could be said on this that I haven’t said.
[1] See the parts of the transcript after Howie asks "Do you know what it would mean for the arguments to be more sussed out?"
alexlintz @ 2020-12-24T11:51 (+6)
I don't work at Rethink Priorities, but I couldn't resist jumping in with some thoughts, as I've been doing a lot of thinking on some of these questions recently.
Thinking vs. reading. I’ve been playing around with spending 15-60 min sketching out a quick model of what I think of something before starting in on the literature (by no means a consistent thing I do though). I find it can be quite nice and help me ask the right questions early on.
Self-consciousness. Idk if this fits exactly but when I started my research position I tried to have the mindset of, ‘I’ll be pretty bad at this for quite a while’. Then when I made mistakes I could just think, ‘right, as expected. Now let’s figure out how to not do that again’. Not sure how sustainable this is but it felt good to start! In general it seems good to have a mindset of research being nearly impossibly hard. Humans are just barely able to do this thing in a useful way and even at the highest levels academics still make mistakes (most papers have at least some flaws).
Optimal hours of work per day. I tend to work about 4-7 hours per day, including meetings and everything. Counting only mentally intensive tasks, I probably get around 4-5 a day. Sometimes I’m able to get more if I fall into a good rhythm with something. Looking around at estimates (RescueTime says the average is just ~3 hours of productive work per day), it seems clear I’m hitting a pretty solid average. I still can’t shake the feeling that everyone else is doing more work. Part of this is because people claim they do much more work. I assume this is mostly exaggeration, though, because hours worked is used as a signal of status and of being a hard worker. But still, it's hard to shake the feeling.
Learning a new field. I just do a lot of literature review. I tend to search for the big papers and meta-analyses, skim lots of them, and try to make a map of what the key questions are and what answers different authors propose for each question (noting citations for each answer). This helps to distill the field, I think, and serves as something relatively easy to reference. Generally there’s a lot of restructuring that needs to happen as you learn more about a topic area and see that some questions you used were ill-posed or that some papers answer somewhat different questions. In short, this gets messy, but it seems like a good way to start, and sometimes it works quite well for me.
Hard problems. I have a maybe-controversial take that research (even in LT space) is motivated largely by signalling and status games. From this view the advice many gave about talking to people about it sounds good. Then you generate some excitement as you’re able to show someone else you’re smart enough to solve it, or they get excited to share what they know, etc. I think if you had a nice working group on any topic, no matter how boring, everyone would get super excited about it. In general, connecting the solution to a hard problem to social reward is probably going to work well as a motivator by this logic.
Emotional motivators. I’ve been thinking a lot recently about what I’m calling ‘incentive landscaping’. The basic idea is that your system 2 has a bunch of things it wants to do (e.g. have impact). Then you can shape your incentive landscape such that your system 1 is also motivated to do the highest impact things. Working for someone who shares your values is the easiest way to do this as then your employer and peers will reward you (either socially or with promotions) for doing things which are impact-oriented. This still won’t be perfectly optimized for impact but it gets you close. Then you can add in some extra motivators like a small group you meet with to talk about progress on some thing which seems badly motivated, or ask others to make your reward conditional on you completing something your system 2 thinks is important. Still early days for me on this though and I think it’s a really hard thing to get right.
Typing speed. At least when I'm doing reflections or broad thinking, I often circumvent this by recording a lot of voice notes with Dragon. That way I can "type" at the speed of thought. It’s never perfect, but ~97% of it is readable, so it’s good enough. Then, if you want to actually have good notes, you go through and summarize your long jumble of semi-coherent thoughts into something decent sounding. This has the side effect of some spaced repetition learning as well!
Tiredness, focus, etc. I’ve had lots of ongoing and serious problems with fatigue and have tried many interventions. Certainly caffeine (ideally with l-theanine) is a nice thing to have, but tolerance is an issue. Right now what seems to work for me (no idea why) is a greens powder called Athletic Greens. I’m also trying pro/prebiotics, which might be helping. Magnesium supplementation also might have helped. A medication I was taking was causing some problems as well and causing me to have some really intense fatigue on occasion (again, probably…). It’s super hard to isolate cause and effect in this area as there are so many potential causes. I’d say it’s worth dropping a lot of money on different supplements and interventions and seeing what helps. If you can consistently increase energy by 5-10% (something I think is definitely on the table for most people), that adds up really quickly in terms of the amount of work you can get done, happiness, etc. Ideally you’d do this by introducing one intervention at a time for 2-4 weeks each. I haven’t had the patience for that and am currently just trying a few things at once; then I figure I can cut out one at a time and see what helped. Things I would loosely recommend trying (aside from exercise, sleep, etc.): prebiotics, good multivitamins, checking for food intolerances, and checking if any pills you take are having adverse effects.
I do also work through tiredness sometimes and find it helpful to do some light exercise (for me, games in VR) to get back some energy. That also works as a decent gauge for whether I'll be able to push past the tiredness. If playing 10 min of Beatsaber feels like a chore, I probably won't be able to work.
How you rest might also be important. E.g., you might need time with little input so your default mode network can do its thing. No idea how big of a deal this is, but I’ve found going for more walks with just music (or silence) to maybe be helpful, especially in that I get more time for reflection.
I’ve also been experimenting with measuring heart rate variability using an app called Welltory. That’s been kind of interesting in terms of raising some new questions though I’m still not sure how I feel about it/how accurate it is for measuring energy levels.
Denis Drescher @ 2020-12-27T11:17 (+3)
Whee! Thank you too!
Yeah, I think that perspective on self-consciousness is helpful!
Work hours: I also wonder how much this varies between professions. Maybe that’s worth a quick search and writeup for me at some point. When you go from a field where it’s generally easy to concentrate for a long time every day to a field where it’s generally hard, that may seem disproportionately discouraging when you don’t know about that general difference.
“Try to make a map of what the key questions are and what the answers proposed by different authors are”: Yeah, combining that with Jason’s tips seems fruitful too: When talking to a lot of people, always also ask what those big questions and proposed answers are. More nonobvious obvious advice! :-D
I may try out social incentives and dictation software, but social things are usually draining and sometimes scary for me, so there’d be a tradeoff between the motivation and my energy. And I feel like I think in a particular and particularly useful way while writing but can often not think new thoughts while speaking, but that may be just a matter of practice. We’ll see! And even if it doesn’t work, these questions and answers are not (primarily) for me, and others probably find them brilliantly useful!
I’ve bought some Performance Lab products (following a recommendation from Alex in a private conversation). They have better reviews on Vaga and are a bit cheaper than the Athletic Greens.
“Default mode network”: Interesting! I didn’t know about that.
MichaelA @ 2020-12-14T13:43 (+6)
Hi Denis, thanks for these questions. I'll give my answers to a bunch of them tomorrow. Just jumping in early with a clarifying question: Could you explain what you mean by "Survival vs. exploratory mindset", and/or provide a link that explains that distinction? I haven't heard those terms before, and Google didn't immediately show me anything that looked relevant.
(Is it perhaps related to exploring vs exploiting?)
Denis Drescher @ 2020-12-14T14:16 (+5)
Hi Michael! Huh, true, those terms seem to be vastly less commonly used than I had thought.
By survival mindset I mean: extreme risk aversion, fear, distrust toward strangers, little collaboration, isolation, guarded interaction with others, hoarding of money and other things, seeking close bonds with family and partners, etc., but I suppose it also comes with modesty and contentment, equanimity in the face of external catastrophes, vigilance, preparedness, etc.
By exploratory mindset I mean: risk neutrality, curiosity, trust toward strangers, collaboration, outgoing social behavior, making oneself vulnerable, trusting partners and family without much need for ritual, quick reinvestment of profits, etc., but I suppose also a bit lower conscientiousness, lacking preparedness for catastrophes, gullibility, overestimating how much others trust you, etc.
Those categories have been very useful for me, but maybe they’re a lot less useful for most other people? You can just ignore that question if the distinction makes no intuitive sense this way or doesn’t quite fit your world models.
MichaelA @ 2020-12-17T12:31 (+4)
This distinction reminds me of the "survival values vs self-expression values" dimension of the World Values Survey. I'm a bit rusty on those terms, but from skimming a Wikipedia page, I think the "survival" part lines up decently with what you describe as "survival mindset", but the self-expression part might not line up well with "exploratory mindset":
Survival values place emphasis on economic and physical security. They are linked with a relatively ethnocentric outlook and low levels of trust and tolerance.
Self-expression values give high priority to subjective well-being, self-expression and quality of life.[1] Some values more common in societies that embrace these values include environmental protection, growing tolerance of foreigners, gays and lesbians and gender equality, rising demands for participation in decision-making in economic and political life (autonomy and freedom from central authority), interpersonal trust, political moderation, and a shift in child-rearing values from emphasis on hard work toward imagination and tolerance.[1]
The shift from survival to self-expression also represents the transition from industrial society to post-industrial society, as well as embracing democratic values.
As for your question: I haven't thought in terms of survival vs exploratory mindset before, so I don't think I have a strong view on which is more useful for research (or the situations in which this differs), how often I adopt each mindset, or how I cultivate them. I guess I'd probably guess exploratory mindset tends to be more useful and tends to be what I have, but I'm not sure.
I think parts of Rationality: From AI to Zombies (aka "the sequences") and Harry Potter and the Methods of Rationality have quite useful advice - and a way of making it stick psychologically - that feels somewhat relevant here. E.g., the repeated emphasis and elaboration on "that which can be destroyed by the truth should be". I have a sense that someone who's struggling to adopt useful facets of the exploratory mindset might benefit from reading (or re-skimming) one or both of those things.
Denis Drescher @ 2020-12-19T15:44 (+2)
Yeah, I agree about how well or not well those concepts line up. But I think insofar as I still struggle with probably disproportionate survival mindset, it’s about questions of being accepted socially and surviving financially rather than anything linked to beliefs (maybe indirectly in a few edge cases, but that feels almost irrelevant).
If this is not just my problem, it could mean that a universal basic income could unlock more genius researchers. :-)
MichaelA @ 2020-12-18T07:01 (+5)
11. Tiredness, focus, etc.
I find that being tired makes my mind wander a lot when reading longform things (e.g., papers, posts, not things like Slack messages or emails), so when I'm tired I usually try to do things other than reading.
If I'm just a bit or moderately tired, I usually find I'm still about as able to write as normal. If I'm very tired, I'll still often be able to write quickly, but then when I later read what I wrote I'll feel that it was unclear, poorly structured, and more typo-strewn than usual. So when very tired, I try to avoid writing longform things (e.g., actual research outputs).
Things I find I'm still pretty able to do when tired include commenting on documents people want input on (I think I'm more able to focus on this than on regular reading because it's more "interactive" or something), writing things like EA Forum comments, replying to emails and Slack messages and the like, doing miscellaneous admin-y tasks, and reflecting on the last week/month and planning the next. So I often do a disproportionate amount of such tasks during evenings or during days when I'm more tired than normal, and at other times do a disproportionate amount of reading and "substantive" writing.
Also, I'm fortunate enough to have flexible hours. So sometimes I just work less on days when I'm tired (perhaps spending more time with my wife), and then make up for it on other days.
MichaelA @ 2020-12-18T01:17 (+5)
2 and 3. Self-consciousness and Is there something interesting here?
These questions definitely resonate with me, and I imagine they’d resonate with most/all researchers.
I have a tendency to continually wonder if what I’m doing is what I should be doing, or if I should change my priorities. I think this is good in some ways. But sometimes I’d make better decisions faster if I just actually pursued an idea more “confidently” for a bit, to get more info on whether it’s worth pursuing, rather than just “wondering” about it repeatedly and going back and forth without much new info to work with. Basically, I might do too much self-doubt-style armchair reasoning, with too little actual empirical info.
Also, pursuing an idea more “confidently” for a bit will not only inform me about whether to continue pursuing it further, but also might result in outputs that are useful for others. So I try to sometimes switch into “just commit and focus mode” for a given time period, or until I hit a given milestone, and mostly minimise reflection on what I should prioritise during that time. But so far this has been like a grab bag of heuristics and habits I use, rather than a more precise guideline for myself.
See also When to focus and when to re-evaluate.
Things that help me with this, and/or some scattered related thoughts, include:
- Talking to others and getting feedback, including on early-stage ideas
- I liked David and Jason’s remarks on this in their comments
- A sort-of minimum viable product and quick feedback loop approach has often seemed useful for me - something like:
- First getting verbal feedback from a couple people on a messy, verbal description of an idea
- Then writing up a rough draft about the idea and circulating it to a couple more people for a bit more feedback
- Then polishing and fleshing out that draft and circulating it to a few more people for more feedback
- Then posting publicly
- (But only proceeding to the next step if evidence from the prior one - plus one’s own intuitions - suggested this would be worthwhile)
- Feedback has often helped me determine whether an idea is worth pursuing further, feel more comfortable/motivated with pursuing an idea further (rather than being mired in unproductive self-doubt), develop the idea, work out which angles of it are most worth pursuing, and work out how to express it more clearly
- Reminding myself that I haven’t really gathered any new info since the last time I thought “Should this really be what I spend my time on?”, so thinking about that again is unlikely to reveal new insights, and is probably just a stupid part of my psychology rather than something I’d endorse.
- I might think to myself something like “If a friend was doing this, you’d think it’s irrational, and gently advise them to just actually commit for a bit and get new info, right? So shouldn’t you do the same yourself?”
- Remembering that Algorithms to Live By draws an analogy to a failure mode in which a computer continually reprioritises its tasks, and the reprioritisation takes up just enough processing power that no actual progress on any of the tasks occurs, and this can cycle forever. The way to get out of this is to at some point just do tasks, even without having confidence that these “should” be top priority. (A rough toy version of this is sketched just after this list.)
- This is just my half-remembered version of that part of the book, and might be wrong somehow.
- Remembering that I’d be deeply uncertain about the “actual” value of any project I could pursue, because the world is very complicated and my ambitions (contribute to improving the long-term future) are pretty lofty. The best I can do is something that seems good in expected value but with large error bars. So the fact I feel some uncertainty and doubt provides basically no evidence that this project isn’t worth pursuing. (Though feeling an unusually large amount of uncertainty and doubt might.)
- Remembering that, if the idea ends up seeming to have not been important but there was a reasonable ex ante case that it might’ve been important, there’s a decent chance someone else would end up pursuing it if I don’t. So if I pursue it, then find out it seems to not be important, then write about what I found, that might still have the effect of causing an important project to get done, because it might cause someone else to do that important project rather than doing something similar to what I did.
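To make that reprioritisation failure mode a bit more concrete, here's a rough toy model (my own sketch, not something from Algorithms to Live By; values like `work_units` and `switch_cost` are made up purely for illustration). It just shows how re-planning after every tiny slice of work lets overhead swallow most of your effort, while committing to longer uninterrupted stretches keeps the overhead small:

```python
# Toy model of the "constant reprioritisation" failure mode (not from the
# book; all numbers are made up for illustration). N tasks each need
# `work_units` of effort, and every re-plan costs `switch_cost` units before
# any real work happens. Re-planning after every tiny slice of work makes
# overhead dominate; committing to longer uninterrupted slices keeps it small.

def total_effort(num_tasks=10, work_units=8.0, switch_cost=1.0, slice_size=0.5):
    """Total effort spent if you re-plan after every `slice_size` units of work."""
    slices_per_task = work_units / slice_size
    total_switches = num_tasks * slices_per_task  # one re-plan per slice of work
    return num_tasks * work_units + total_switches * switch_cost

useful_work = 10 * 8.0  # the work that actually needed doing

for slice_size in [0.1, 0.5, 2.0, 8.0]:
    effort = total_effort(slice_size=slice_size)
    overhead = 1 - useful_work / effort
    print(f"re-plan every {slice_size:>4} work units -> total effort {effort:6.1f}, "
          f"{overhead:.0%} of it pure reprioritisation overhead")
```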
MichaelA @ 2020-12-18T01:18 (+5)
Examples to somewhat illustrate the last two points:
This year, in some so-far-unpublished work, I wrote about some ideas that:
- I initially wasn’t confident about the importance of
- Seemed like they should’ve been obvious to relevant groups, but seemed not to have been discussed by them. And that generally seems like (at least) weak evidence that an idea either (a) actually isn't important or (b) has been in essence discussed in some other form or place that I just am not familiar with.
So when I had the initial forms of these ideas and wasn't sure how much time (if any) to spend on them, I took roughly the following approach:
I developed some thoughts on some of the ideas. Then I shared those thoughts verbally or as very rough drafts with a small set of people who seemed like they’d have decent intuitions on whether the ideas were important vs unimportant, somewhat novel vs already covered, etc.
In most cases, this early feedback indicated that it was at least plausible that the ideas were somewhat important and somewhat novel. This - combined with my independent impression that these ideas might be somewhat important and novel - seemed to provide sufficient reason to flesh those ideas out further, as well as to flesh out related ideas (which seemed like they’d probably also be important and novel if the other ideas were, and vice versa).
So I did so, then shared that slightly more widely. Then I got more positive feedback, so I bothered to invest the time to polish the writings up a bit more.
Meanwhile, when I fleshed one of the ideas out a little, it seemed like that one turned out to probably not be very important at all. So with that one, I just made sure that my write-up made it clear early on that my current view was that this idea probably didn’t matter, and I neatened up the write-up just a bit, because I still thought the write-up might be a bit useful either to:
- Explain to others why they shouldn’t bother exploring the same thing
- Make it easy for others to see if they disagreed with my reasoning for why this probably didn’t matter, because I might be wrong about that, and it might be good for others to quickly check that reasoning
Having spent time on that idea sort-of felt in hindsight silly or like a mistake. But I think I probably shouldn't see that as having been a bad decision ex ante, given that:
- It seems plausible that, if not for my write-up, someone else would've eventually "wasted" time on a similar idea
- This was just one out of a set of ideas that I tried to flesh out and write up, many/most of which still (in hindsight) seem like they were worth spending time on
- So maybe it's very roughly like I gave 60% predictions for each of 10 things, and decided that that'd mean the expected value of betting on those 10 things was good, and then 6 of those things happened, suggesting I was well-calibrated and was right to bet on those things
- (I didn't actually make quantitative predictions)
And some of the other ideas were in between - no strong reason to believe they were important or that they weren’t - so I just fleshed them out a bit and left it there, pending further feedback. (I also had other things to work on.)
Denis Drescher @ 2020-12-19T17:51 (+4)
Yeah, I even mentioned this idea (about preventing someone from “wasting” time on a dead end you already explored) in a blog post a while back. :-D
Much of the bulk of the iceberg is research, which has the interesting property that often negative results – if they are the result of a high-quality, sufficiently powered study – can be useful. If the 100 EAs from the introduction (under 1.a.) are researchers, they know that one of the plausible ideas got to be right, and 99 of them have already been shown not to be useful, then the final EA researcher can already eliminate 99% of the work with very little effort by relying on what the others have already done. The bulk of that impact iceberg was thanks to the other researchers. Insofar as research is a component of the iceberg, it’s a particularly strong investment.
It’s also important to be transparent about one’s rigor and to make the negative results findable for others. The second is obvious. The first is because the dead end may not actually be a dead end but only looked that way given the particular way in which you had resolved the optimal stopping problem of investigating it (even) further.
MichaelA @ 2020-12-20T01:58 (+4)
It’s also important to be transparent about one’s rigor and to make the negative results findable for others. The second is obvious. The first is because the dead end may not actually be a dead end but only looked that way given the particular way in which you had resolved the optimal stopping problem of investigating it (even) further.
I agree with these points, and think that they might sometimes be under-appreciated (both in and outside of EA).
To sort-of restate your points:
- I think it's common for people to not publish explorations that turned out to seem to "not reveal anything important" (except of course that this direction of exploration might be worth skipping).
- Much has been written about this sort of issue, and there can be valid reasons for that behaviour, but sometimes it seems unfortunate.
- I think another failure mode is to provide some sort of public info of your belief that this direction of exploration seems worth skipping, but without sufficient reasoning transparency, which could make people rule this out too much/too early.
- Again, there can be valid reasons for this (if you're sufficiently confident that it's worth ruling out this direction and you have sufficiently high-value other things to do, it might not be worth spending time on a write-up with high reasoning transparency), but sometimes it seems unfortunate.
Denis Drescher @ 2020-12-19T17:43 (+4)
Part of this reminds me a lot of CFAR’s approach here (I can’t quite tell whether Julia Galef is interviewer, interviewee, or both):
For example, when I've decided to take a calculated risk, knowing that I might well fail but that it's still worth it to try, I often find myself worrying about failure even after having made the decision to try. And I might be tempted to lie to myself and say, "Don't worry! This is going to work!" so that I can be relaxed and motivated enough to push forward.
But instead, in those situations I like to use a framework CFAR sometimes calls "Worker-me versus CEO-me." I remind myself that CEO-me has thought carefully about this decision, and for now I'm in worker mode, with the goal of executing CEO-me's decision. Now is not the time to second-guess the CEO or worry about failure.
Your approach to gathering feedback and iterating on the output, making it more and more refined with every iteration but also deciding whether it’s worth another iteration, that process sounds great!
I think a lot of people aim for such a process, or want to after reading your comment, but will be held back from showing their first draft to their first round of reviewers because they worry the reviewers will think badly of them for addressing a topic of this particular level of perceived difficulty or relevance (maybe it’s too difficult or too irrelevant in the reviewer’s opinion), or think badly of them for a particular wording, or think badly of them because they think the author should’ve anticipated a negative effect of writing about the topic and refrained from doing so (e.g., some complex acausal trade or social dynamics thing that didn’t occur to them), or just generally have diffuse fears holding them back. Such worries are probably disproportionate, but still, overcoming them will probably require particular tricks or training.
MichaelA @ 2020-12-20T02:05 (+2)
I like that "Worker-me versus CEO-me" framing, and hadn't heard of it or seen that page, so thanks for sharing that. It does seem related to what I said in the parent comment.
I share the view that it'll be decently common for a range of disproportionate worries to hold people back from striking out into areas that seem good in expected value but very uncertain and with real counterarguments, and from sharing early-stage results from such pursuits. I also think there can be a range of good reasons to hold back from those things, and that it can be hard to tell when the worries are disproportionate!
I imagine it'd be hard (though not impossible) to generate advice on this that's quite generally useful without being vague/littered with caveats. People will probably have to experiment to some extent, get advice from trusted people on their general approach, and continue reflecting, or something like that.
MichaelA @ 2021-03-21T03:29 (+4)
Regarding your Typing speed question, Tom Chivers (a journalist) was asked in a recent EA Forum AMA "How one should go about learning how to write high-quality material? And what is the way to get it published?"
His reply:
I wish I had a better answer to the first one than "become good at writing". My own pathway was reading loads and loads, and writing loads and loads, and then essentially mimicking the writing that I liked (mainly Pratchett tbh) until eventually I noticed that I'd stopped doing that and had a recognisable style of my own. I sometimes go through my old emails from before I was a journalist and see I've just written needlessly long show-offy emails to friends, which I cringe about a bit now, but they were clearly practice for when I had to do it for real.
Actually, also, I did philosophy at uni and MA, and I found that the way I learnt to structure an argument in those essays has been really helpful.
Oh and this might sound silly but become good at typing. If you can type as fast as you think then when the ideas are flowing quickly then they just sort of appear on the page. I used to work as a medical secretary for a long time and I swear that helped me an awful lot, not least in transcribing interviews but also just in being able to get ideas down quickly.
As for getting it published: pitch! Ideally start by developing a relationship with some editor somewhere. It might be a good idea to blog as well, so that you can point people to stuff you've written. [emphasis added]
Denis Drescher @ 2021-03-23T14:41 (+2)
Heh, great find! :-D
MichaelA @ 2020-12-19T04:16 (+4)
7. Hard problems
I’m not actually sure if the precise problem you’re describing resonates with me. I definitely often feel very uncertain about:
- whether the goal I’m striving towards really matters at all
- even if so, whether it’s a goal worth prioritising
- whether I should prioritise it (is it my comparative advantage?)
- whether anything I produce in pursuing this goal will be of any use to anyone
But I’m not sure there have been cases where, for a week or more, I didn’t feel like I was at least progressing towards:
- having the sort of output I had planned or now planned to produce (setting aside the question of whether that output will be useful to anyone), and/or
- deciding (for good reason) to not bother trying to create that sort of output
Note that I’d count as “progress” cases where I explored some solutions/options that I thought might work/be useful for X, and all turned out to be miserable wastes of time, so I can at least rule those out and try something else next week. I'd also count cases where I learned other potentially useful things in the process of pursuing dead ends, and that knowledge seems likely to somehow benefit this or other projects.
It is often the case that my estimate of how many remaining days something will take is longer at the end of the week than it was at the beginning of the week. But this is usually coupled with me thinking that I have made some sort of progress - I just also realised that some parts will be harder than I thought, or that I should do a more thorough job than I’d planned, or something like that.
(But I feel like maybe I'm just interpreting your question differently to what you intended.)
Denis Drescher @ 2020-12-19T18:00 (+4)
In a private conversation we figured out that I may tend too much toward setting specific goals and then only counting achievement of these goals as success, ignoring all the little things that I learn along the way. If the goal is hard to achieve, I have to learn a lot of little things on the way and that takes time, but if I don’t count these little things as little successes, my feedback gets too sparse, and I lose motivation. So noticing little successes seems valuable.
MichaelA @ 2020-12-19T04:11 (+4)
8. Emotional motivators
(Disclaimer: I'm just reporting on my own experience, and think people will vary a lot in this sort of area, so none of the following is even slightly a recommendation.)
In general:
- Personally, I seem to just find it pretty natural to spend a lot of hours per week doing work-ish things
- I tend to be naturally driven to “work hard” (without it necessarily feeling much like working) by intellectual curiosity, by a desire to produce things I’m proud of, and by a desire for positive attention (especially but not only from people whose judgement I particularly respect)
- That third desire in particular can definitely become a problem, but I try to keep a close eye on it and ensure that I’m channeling that desire towards actions I actually endorse on reflection
- I do get run down sometimes, and sometimes this has to do with too many hours per week for too many weeks in a row. But the things that seem more liable to run me down are feeling that I lack sufficient autonomy in what I do, how, and when; or feeling that what I’m doing isn’t valuable; or feeling that I’m not developing skills and knowledge I’ll use in future
- That last point means that one type of case in which I do struggle to be motivated is cases where I know I’m going to switch away from a broad area after finishing some project, and that I’m unlikely to use the skills involved in that project again.
- In these cases, even if I know that finishing that project to a high standard would still be valuable and is worth spending time on, it can be hard for me to be internally motivated to do so, because it no longer feels like doing so would “level me up” in ways I care about.
- I seem to often become intensely focused on a general area in an ongoing way (until something switches my focus to another area), and just continually think about it, in a way that feels positive or natural or flow-like or something
- This happened for stand-up comedy, then for psychology research, then for teaching, then for EA stuff (once I learned about EA)
- (The other points above likewise applied during each of those four “phases” of my adult life)
Luckily, the sort of work I do now:
- is very intellectually stimulating
- involves producing things I’m (at least often!) proud of
- can bring me positive attention
- allows me a sufficient degree of autonomy
- seems to me to be probably the most valuable thing I could realistically be doing at the moment (in expectation, and with vast uncertainty, of course)
- involves developing skills and knowledge I expect I might use in future
That means it’s typically been relatively easy for me to stay motivated. I feel very fortunate both to have the sort of job and “the sort of psychology” I’ve got. I think many people might, through no fault of their own, find it harder to be emotionally motivated to spend lots of hours doing valuable work, even when they know that that work would be valuable and they have the skills to do it. Unfortunately, we can’t entirely choose what drives us, when, and how.
(There’s also a scary possibility that my tendency so far to be easily motivated to work on things I think are valuable is just the product of me being relatively young and relatively new to EA and the areas I’m working in, and that that tendency will fade over time. I’d bet against that, but could be wrong.)
Denis Drescher @ 2020-12-19T21:24 (+2)
Awesome! For me, the size of an area plays a role in how long I have a high level of motivation for it. When you’re studying a board game, there are only a few activities, they are quite similar, and if you try out all of them it might be that you run out of motivation within a year. This happened to me with Othello. But computer science or EA are so wide that if you lose motivation for some subfield of decision theory, you move on to another subfield of decision theory, or to something else entirely, like history. And there are probably a lot of such subareas where there are potentially impactful investigations waiting to be done. So it makes sense to me to be optimistic about having long sustained motivation for such a big field.
My motivation did shift a few times, though. I think before 2012 it was more a “This is probably hopeless, but I have to at least try on the off-chance that I’m in a world where it’s not hopeless.” 2012–2014 it was more “Someone has to do it and no one else will.” After March 28, 2014, it was carried a lot by the sudden enormous amount of hope I got from EA. On October 28, 2015, I suddenly lost an overpowering feeling of urgency and became able to consider more long-term strategies than a decade or two. Even later, I became increasingly concerned with coordination and risk from regression to the (lower) mean.
MichaelA @ 2020-12-18T07:01 (+4)
9. Typing speed
I'd be surprised if typing speed was a big factor explaining differences in how much different researchers produce, or in their ability to produce certain types of output. (But of course, that claim is pretty vague - how surprised would I be? What do I mean by "big factor?")
But I just did a typing test, and got 92wpm (with "medium" words, and 1 typo), which is apparently high. So perhaps I'm just taking that for granted and not recognising how a slower typing speed could've limited me. Hard to say.
MichaelA @ 2020-12-18T04:28 (+4)
6. Learning a new field
I don’t know if I have a great, well-chosen, or transferable method here, so I think people should pay more attention to my colleagues’ answers than mine. But FWIW, I tend to do a mixture of:
- reading Wikipedia articles
- reading journal article abstracts
- reading a small set of journal articles more thoroughly
- listening to podcasts
- listening to audiobooks
- watching videos (e.g., a Yale lecture series on game theory)
- talking to people who are already at least sort-of in my network (usually more to get a sounding board or “generalist feedback”, rather than to leverage specific expertise of theirs)
I’ve also occasionally used free online courses, e.g. the Udacity Intro to AI course. (See also What are some good online courses relevant to EA?)
Whether I take many notes depends on whether I'm just learning about a field because I think it might be useful in some way in future for me to know about that field, or because I have at least a vague idea of a project I might work on within that field (e.g., "how bad would various possible types of nuclear wars be, from a longtermist perspective?"). In the latter case, I'll take a lot of notes as I go in Roam, beginning to structure things into relevant sub-questions, things to learn more about, etc.
Since leaving university, I haven’t really made much use of textbooks, flashcards, or reaching out to experts who aren’t already in my network. It's not really that I actively chose to not make much use of these things (it’s just that I never actively chose to make much use of them), and I think it’s plausible that I should make more use of them. I’ll very likely talk to a bunch of experts for my current or upcoming research projects.
Adam Binks @ 2020-12-14T12:32 (+4)
These are fascinating, I would love to see answers to all of these questions!
Denis Drescher @ 2020-12-19T12:40 (+2)
Wow! Thanks for all the insightful answers, everyone!
Would anyone mind if I transfer these into a post on my blog (or a separate post in the EA Forum) that is linear in the sense that there is one question and then all answers to it, then the next question and all answers to it, and so on? That may also generate more attention for these answers. :-)
alexlintz @ 2020-12-24T11:53 (+3)
Yeah, this would be nice to have! It's a lot of text to digest as it is now and I guess most people won't see it here going forward
Linch @ 2020-12-21T15:32 (+2)
Sure, in general feel free to assume that anything I write that's open to the public internet is fair game.
MichaelA @ 2020-12-22T00:24 (+2)
Yeah, same for me.
Jason Schukraft @ 2020-12-21T15:15 (+2)
That's fine by me!
EdoArad @ 2020-12-21T06:13 (+2)
I think it would be valuable to publish these as a sequence of questions on the forum and let others chime in and have a more thorough discussion. Perhaps even separated through time, say one or two per week.
nil @ 2020-12-14T15:50 (+20)
- If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
- What new charities do you want to be created by EAs?
- What are the biggest mistakes Rethink Priorities made?
Thank you!
saulius @ 2020-12-16T18:42 (+58)
What are the biggest mistakes Rethink Priorities made?
I can’t speak for the entire organization, but I can talk about what I see as my biggest mistakes since I started working at Rethink Priorities:
- Writing articles about interventions I think are promising and thinking that my work is done once the article is published. Examples are baitfish (see the comment above), fish stocking, rodents farmed for pet snake food. The way I see things now, if I think that something should be done, I should express that opinion very clearly and with fewer caveats, find funders who want to fund it, find activists that want to do it, and connect them. Or something like that. And that is the kind of work I am doing at the moment, even though I think I am much better at writing articles than at doing this.
- Avoiding expressing opinions too much. It’s related to the point above. I think that in the past I was too afraid of writing something that could later turn out to be wrong. Hence, I wrote articles in such a way that sometimes the reader could not even know what I think about a problem I am writing about, how important I think it is in the context of other things, etc. I wanted decision makers to read my articles and form their own opinions based on what I said. I now think that this is not ideal because decision makers may not have the time to form nuanced opinions based on subtle details in my long articles. But someone has to form actionable opinions, and it is me who has the context and the time for that. So I want to try to write more articles of the “This is what I think you should do and I’m going to explain why” type, rather than the “Here is a 40 page summary of everything I've ever read on this topic” type. I sometimes want to write articles of the latter type because then my managers, funders, and I can all clearly see what I’ve been working on for all this time. But my end goal is making an impact, so I try to not think about that too much. Note that if I pledged to only ever write articles that are purely of the former kind, I might end up not writing a single paragraph all year. I don’t think I should go that far.
- Spending too much time on finishing articles that I know won’t have that much impact. In some cases, it’s better to just drop them, admit to yourself that you wasted some time, and move on to the next project. That said, there were some articles that I had strongly considered abandoning, but in the end I was happy I finished them.
- Spending any time on details that I know right away won’t be that important. There are some examples of this in Estimates of global captive vertebrate numbers article. Did I really need to write about pets, civet farming, and other relatively minor problems that I know effective altruists won’t work on? I guess I wanted the list to be complete, but I don’t know why. It wasted not only my time, but also the time and the attention of the readers.
- Being too frugal. In the beginning of working at Rethink Priorities, I wanted to either take a low salary, or spend as little money as I can and donate the rest. But the problems that it caused made me less productive and possibly decreased my impact. Now I allow myself to spend more and I think I'm better off because of it.
- Not doing more to address some of my productivity problems, especially negative self-talk about myself and my work. Almost every day I hate myself for not doing enough work. It is exhausting, and it tires me out more quickly and hence I become even less productive. I still haven’t found a good way to deal with it. I tried therapy multiple times but I never emphasized this specific issue so that is on my to-do list. I also want to try more meditation, maybe that can help.
Gina_Stuessy @ 2020-12-21T02:53 (+17)
Saulius, just wanted to comment that while I haven't devoted the time to read in detail most of your research, I have noticed and greatly appreciated that you have contributed a LOT of useful knowledge to EAA over the past several years. Yours is a name I've recognized in EAA since its early days. I am glad that you're shifting to express your opinions more strongly so that more action can be taken on all of the wonderful research you've contributed. I've gotten the sense that you take these issues very seriously, are super motivated to address them, and don't get pulled into more trivial things, and I greatly admire and am inspired by you for that.
Re (6), I hope that you can be proud of what you've done and decrease your negative self-talk. Take care of yourself. I'd be curious to hear if meditation ends up helping out with this.
abergal @ 2020-12-16T18:57 (+11)
I found this response insightful and feel like it echoes mistakes I've made as well; really appreciate you writing it.
saulius @ 2020-12-15T18:43 (+41)
What new charities do you want to be created by EAs?
For me it's a lobbying organization against baitfish farming in the U.S. I wrote about the topic two years ago here. Many people complimented me on it, but no one did anything. I talked with some funders who said they would be interested in funding someone suitable to pursue this, but I haven’t found who that could be. The main argument against it used to be that the industry is declining, but the recently released aquaculture census suggests that it is no longer declining (see my more recent thoughts on the numbers here).
Using fish as live bait is already prohibited in some U.S. states (see the map in Kerr (2012)). Many other states have import and movement restrictions (see this table). It seems that all of this happened due to environmental concerns. And the practice is banned in multiple other countries. To me this shows that it is plausible to make progress on this.
Take a look at this graph I made of the number of animals farmed in the U.S. at any time.
I used yellow and black colours to represent ranges. So, for example, I think that there are between 1 billion and (5+1=)6 billion baitfish farmed in the U.S. at any time. It’s more likely to be closer to 1 billion than to 6 billion, though. Still, if we wanted to decrease the number of vertebrates farmed for U.S. consumption by, say, 500 million, it would seem very difficult to make Americans decrease their chicken and egg consumption by 25%, or decrease their farmed fish consumption by 13%-42%. Decreasing baitfish production by 500 million might also be difficult, but I think it is much more easily achievable.
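To make the arithmetic behind those percentages explicit, here is a minimal back-of-the-envelope sketch (my own illustration; the "implied totals" it prints are derived only from the figures above and are not official statistics):

```python
# Back-of-the-envelope arithmetic implied by the percentages above. The
# "implied totals" are derived purely from the numbers in this comment --
# they are NOT official statistics.

reduction = 500e6  # hypothetical goal: 500 million fewer animals farmed at any time

stated_shares = {
    "chicken and egg consumption": [0.25],
    "farmed fish consumption": [0.13, 0.42],
}

for category, shares in stated_shares.items():
    totals = sorted(reduction / s for s in shares)
    share_str = "-".join(f"{s:.0%}" for s in shares)
    total_str = " to ".join(f"{t / 1e9:.1f}B" for t in totals)
    print(f"If a 500M cut is {share_str} of {category}, "
          f"the implied total is roughly {total_str} animals at any time.")

# The comment's own range for baitfish is 1B-6B farmed at any time, so:
for total in (1e9, 6e9):
    print(f"500M out of {total / 1e9:.0f}B baitfish is {reduction / total:.0%}")
```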
I am doing a bit more research on this right now (in parallel with other projects), and I might make another EA forum post about it at some point but I don’t know if that is what is needed to make this happen. I think that at this point someone should just try to do it.
If anyone is interested, please schedule a meeting with me here or write to me at saulius at rethinkpriorities dot org .
MichaelStJules @ 2020-12-16T03:24 (+4)
Maybe Aquatic Life Institute or Fish Welfare Initiative would work on this. I'm not sure if they're already aware. I think it would be closer to ALI's work.
saulius @ 2020-12-16T19:39 (+6)
Thanks for the suggestions, Michael. Haven from FWI is actually helping me to do research on this in his free time. He said that FWI would be open to putting someone who would work on this under their organization if given funding, but not to redirecting the time of the current staff towards the project. This makes sense because they want to continue with the work that they have started doing, and they are not experts on lobbying and I think few if any of them are located in the U.S. I haven’t talked about this with ALI yet (you are right, I should), but from what I hear, I think that they also don’t have expertise in U.S. lobbying, are mostly not located in the U.S., and would probably not want to redirect current staff time to new projects. I don’t know how much previous lobbying experience is important here but my sense is that it is. I feel that what is needed is a person (or two) who would be suitable for leading this, and then we could figure out all the organizational and funding stuff.
william @ 2020-12-18T04:24 (+10)
Hi Saulius, thank you for your comment! To add some more context, ALI is based in New York, but we indeed have a global team. I'm very glad you're bringing up baitfish. Our focus for 2020 was the creation of the Aquatic Animal Alliance, the drafting of our coalition welfare standards, and the launch of our certifier campaign. We've made great progress on all of them, and actually already had our first victory with GlobalGAP (which certifies more than 1% of the global aquaculture market). For next year, we plan on continuing our certifier campaign but also want to pursue two additional campaigns through the Alliance: lobbying and a fish restocking campaign. On the lobbying front, we've already been active in France and plan to do more work there and at the EU level. Regarding fish restocking, we plan on starting to work with US state departments of Fish and Wildlife to get them to adopt some or all of our welfare standards. We have already contacted vets who work at these agencies; and through our producer sentiment roundtables we organized in the fall, we have already found fish restocking producers who also are open to working with us. I'm really glad you're bringing up baitfish, because we were not planning to focus on it, but you make a compelling case, and I would love to follow up on that. That being said, it's also true that we are running a tight ship, so it will also depend on funding, the interest of the other Alliance members, and the progress of our certifier campaign. If you have any questions about the Aquatic Life Institute, the Aquatic Animal Alliance, or any of the work we are currently doing, please reach out to us at kiara@ali.fish. We hope to share more of our research pieces and accomplishments very shortly!
saulius @ 2020-12-18T12:18 (+7)
Thank you very much, William, for your comment! I will follow up with you in private, but there are a few things that I thought would be suitable to say/ask here as well.
It was very recently brought to my attention that baitfish seems to also be farmed in France and that there is an animal advocacy organization that has a petition on it (see here and here). I don’t know the scale of baitfish farming in France or in any country other than the U.S., so I don’t yet know if it is an issue I would recommend tackling in France. I just thought I should mention that in case you or someone else could be interested in doing some lobbying on this issue there.
Also, at Rethink Priorities we try to track any possible impact we had on the projects of animal welfare organizations. So I wanted to ask, do you think you would have worked on fish restocking if this article was never written? And please don’t hesitate to say that you knew about the industry and its size independently of that article and it had nothing to do with it, if that is the case :)
william @ 2020-12-19T02:19 (+9)
Thanks Saulius, it actually so happens that the organization running the baitfish petition in France, Paris Animaux Zoopolis, was founded by Amandine Sanvisens... who is also the director of ALI in France! But, and that goes to your next point, we were not aware of the relative scale of baitfish farming; so if we do end up prioritizing it over another intervention, the credit for the additional impact of doing that campaign over the one we would have done otherwise would go to you and RP! Would love to chat more and we'll keep you updated.
saulius @ 2020-12-19T17:21 (+6)
Good to know. I've talked to Gautier who wrote the French article I linked to, and he said he had already tried to figure out the scale of the industry in France, but didn't manage to find stats on it. However, he said that there are indications that it is a small industry compared to the U.S. He said the work on it was motivated mostly by legal-precedent reasons rather than direct impact.
saulius @ 2020-12-16T18:52 (+15)
If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
Unlike organizations such as OPIS, the Center for Reducing Suffering, and the Center on Long-Term Risk, we don't have reducing extreme suffering as our only priority. We sometimes work on reducing suffering that may not be classified as extreme (arguably, our work on cage-free hen campaigns falls into this category). And perhaps some other work is not directly about reducing suffering at all. Since preventing extreme suffering is not our only priority, I think that we are unlikely to be the best donation opportunity for this specific goal. That said, when I look at the list of our publications, I think that almost all the articles we write contribute to the goal of preventing needless and extreme suffering in some way, although in many cases the contribution is quite indirect. In the end, we are not able to compare in an unbiased way whether or not Rethink Priorities is a better donation opportunity for this purpose than other organizations.
Marcus_A_Davis @ 2020-12-16T19:49 (+13)
Thanks for the questions!
If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
I think this depends on many factual beliefs you hold, including what groups of creatures count and what time period you are concerned about. Restricting ourselves to the present and assuming all plausibly sentient minds count (and ignoring extremes, say, less than 0.1% chance), I think farmed and wild animals are plausible candidates for enduring some of the worst suffering.
Specifically, I'd say some of the worst persistent current suffering is plausibly experienced by farmed chickens and fish, and thus work to reduce the worst aspects of those conditions is a decent bet for preventing extreme suffering. Similarly, wild animals likely experience the largest share of extreme suffering currently, because of their sheer numbers and the nature of life largely without interventions to prevent, say, the suffering of starvation or extreme physical pain. For these reasons, work to improve conditions for wild animals plausibly could be a good investment.
Still restricting ourselves to the present, and moving outside the typical EA space altogether, I think it's plausible that much of the worst suffering in the world is inflicted during war crimes or torture under various authoritarian states. I do not know if there's anything remotely tractable in this space or what good donation opportunities would be.
If you broaden consideration to include the future, a much wider set of creatures plausibly could experience extreme suffering including digital minds running at higher speeds, and/or with increased intensity of valenced experience beyond what's currently possible in biological creatures. Here, what you think is the best bet would depend on many empirical beliefs again. I would say, only, that I'm excited about our longtermism work and think we'll meaningfully contribute to creating the kind of future that decreases the risks of these types of outcomes.
Marcus_A_Davis @ 2020-12-16T19:52 (+9)
What new charities do you want to be created by EAs?
I don't have any strong opinions about this and it would likely take months of work to develop them. In general, I don't know enough to suggest that it is desirable for new charities to work in areas I think could use more work, rather than for existing organizations to expand their work in those domains.
What are the biggest mistakes Rethink Priorities made?
Not doing enough early enough to figure out how to achieve impact from our work and communicate with other organizations and funders about how we can work together.
MichaelA @ 2020-12-17T23:50 (+7)
If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
I like the answers Marcus and Saulius gave to this question. I'll just add two things those answers didn't explicitly mention.
EA movement-building
- Rethink has done and plans to do work aimed at improving efforts to build the EA movement and promote EA ideas
- E.g., Rethink's work on the EA Survey, or its plans related to:
- "Further refining messaging for the EA movement, exploring different ways of talking about EA to improve EA recruitment and increase diversity.
- Further work to explore better ways to talk about longtermism to the general public, to help EAs communicate longtermism more persuasively and to increase support for desired longtermist policies in the US and the UK."
- And building the EA movement and promoting EA ideas seems like plausibly one of the best interventions for reducing needless/extreme/all suffering
- E.g., building the EA movement could increase the flows of talent and funds to existing suffering-focused EA organisations (such as CLR), lead to the creation of new ones, or lead to talented people using their careers to effectively reduce suffering in other ways (e.g., through specific roles in government or AI labs)
- E.g., promoting EA ideas (even without "building the EA movement") could lead to a general shift in voting, policies, behaviours towards reducing suffering
Forecasting
- Rethink plans to "Use novel econometric methods to better understand our ability to reliably impact the long-term future", as well as to "Improve our ability to forecast the short-term and long-term future."
- Improving our ability to forecast events and impacts, and improving our understanding of when and how much to trust forecasts, would presumably be about as useful for reducing suffering as for all other efforts to improve the world. (And I think it'd plausibly be very useful for such efforts.)
- This seems especially true in relation to:
- efforts to reduce suffering in the long-term future, and
- decisions about how much to focus on reducing suffering in the long-term future vs reducing suffering in the nearer term.
Caveats
I'm not necessarily arguing that Rethink is where someone should donate if they wish to reduce suffering. That would depend on things like precisely how effective Rethink's movement-building work would be for EA movement-building, precisely how useful EA movement-building is for reducing suffering, etc.
I'm not in a good position to make those judgements, for reasons including that:
- I don't take a primarily suffering-focused perspective myself (so I haven't thought about it a great deal - though I did work at the Center on Long-Term Risk for 3 months)
- I now work for Rethink, so I might be biased
- I've only worked at Rethink for a month, and don't work on the EA movement-building or forecasting stuff myself
But hopefully this is useful food for thought anyway :)
Akash @ 2020-12-14T18:00 (+17)
What are the things you look for when hiring? What are some skills/experiences that you wish more EA applicants had? What separates the "top 5-10%" of EA applicants from the median applicant?
Marcus_A_Davis @ 2020-12-15T19:01 (+16)
Thanks for the question!
We hire for fairly specific roles, and the difference between those we do and don't hire isn't necessarily as simple as those brought on being better researchers overall (to say nothing of differences in fit or skill across causes).
That said, we generally prioritize ability in writing, general reasoning, and quantitative skills. That is, we value the ability to uncover and address considerations, counter-points, and meta-considerations on a topic, to produce quantitative models and do data analysis when appropriate (obviously this is more relevant in certain roles than others), and to compile this information into understandable writing that highlights the important features and addresses topics with clarity. However, which combination of these skills is most desired at a given time depends on current team fit and the role each hire would be stepping into.
For these reasons, it's difficult to say with precision which skills I'd hope for more of among EA researchers. With those caveats, I'd still say a demonstration of these skills through producing high quality work, be it academic or in blog posts, is in fact a useful proxy for the kinds of work we do at RP.
Neel Nanda @ 2020-12-14T16:27 (+16)
What would you do if Rethink Priorities had significantly more money? (Eg, 2x or 10x your current budget)
Peter_Hurford @ 2020-12-15T19:09 (+12)
Hi Neel,
We'd obviously be very excited to take 10x our budget if you're offering ;)
Right now, 10x our budget would be ~$14M, which would still be 8x smaller than large think tanks like the Brookings Institution. I think if we had 10x the budget, the main thing we would do is expand our research staff as rapidly as non-financial constraints (e.g., management, operations, and team culture) allow.
There are definitely many more areas of research we could be working in, both within our existing cause areas (currently farmed animal welfare, wild animal welfare, invertebrate welfare, longtermism, and EA movement building) and other cause areas we aren't working in yet. We'd also need more operations staff and management to facilitate this.
As for specific research questions, I think we have a much clearer vision of what we would do with 2x the money than 10x the money. I personally (speaking for myself not the rest of the org) would love to see us hire staff to work more directly on farmed animal welfare policy and to investigate meat alternatives, do much more to understand EA community health and movement building, do more fundamental research (e.g., like our work on moral weight and investigating well-being metrics), and potentially investigate new charities that could be launched (similar to CE's work). But that is just a wishlist and it would change as I talk to more people.
We're already working a lot to prioritize what questions we want to tackle - our longtermist and wild animal departments, for example, just recently expanded beyond one person and we're in the process of making new research agendas, so it is hard to recommend ideas in those areas right now.
One benefit of hiring more people, though, is we'd have more people to do the important work of figuring out what it is we should do!
arushigupta @ 2020-12-14T16:57 (+13)
You mentioned in your 2021 update that you're starting a research internship program next year (contingent on more funding) in order to identify and train talented researchers, and therefore contribute to EA-aligned research efforts (including your own).
Besides offering similar internships, what do you think other EA orgs could do to contribute to these goals? What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?
Peter_Hurford @ 2020-12-15T19:01 (+13)
Hi Arushi,
I am very hopeful the internship program will let us identify, take on, and train many more staff than we could otherwise and then either hire them directly or be able to recommend them to other organizations.
While I am wary of recommending unpaid labor (that's why our internship is paid), I otherwise think one of the best ways for a would-be researcher to distinguish themselves is writing a thoughtful and engaging EA Forum post. I've seen a lot of great hires distinguish themselves like this.
Other than opening more researcher jobs and internships, I think other EA orgs could perhaps contribute by writing advice and guides about research processes or by offering more "behind the scenes" content on how different research is done.
Lastly, in my personal opinion, I think we should also do more to create an EA culture where people don't feel like the only way they can contribute is as a researcher. I think the role gets a lot more glamor than it deserves and many people can contribute a lot from earning to give, working in academia, working in politics, working in a non-EA think tank, etc.
DavidBernard @ 2020-12-15T18:31 (+12)
I’m happy to see an increase in the number of temporary visiting researcher positions at various EA orgs. I found my time visiting GPI during their Early Career Conference Programme very valuable (hint: applications for 2021 are now open, apply!) and would encourage other orgs to run similar sorts of programmes to this and FHI’s (summer) research scholars programme. I'm very excited to see how our internship program develops as I really enjoy mentoring.
I think I was competitive for the RP job because of my T-shaped skills, broad knowledge in lots of EA-related things but also specialised knowledge in a specific useful area, economics in my case. Michael Aird probably has the most to say about developing broad knowledge given how much EA content he has consumed in the last couple of years, but in general reading things on the Forum and actively discussing them with other people (perhaps in a reading group) seems to be the way to develop in this area. Developing specialised skills obviously depends a lot on the skill, but graduate education and relevant internships are the most obvious routes here.
MichaelA @ 2020-12-16T04:51 (+5)
I already strongly agreed with your first paragraph in a separate answer, so I'll just jump in here to strongly agree with the second one too!
Michael Aird probably has the most to say about developing broad knowledge given how much EA content he has consumed in the last couple of years
I can confirm that I've been gobbling up EA content rather obsessively for the last 2 years. If anyone's interested in what this involved and how many hours I spent on it, I describe that here.
saulius @ 2020-12-15T18:53 (+10)
What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?
Linch @ 2020-12-16T05:02 (+7)
What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?
I think this is a relatively minor thing, but trying to become close to perfectly calibrated (aka being able to put precise numbers on uncertainty) in some domains seems like a moderate-sized win, at very low cost.
I mainly believe this because I think the costs are relatively low. My best guess is that the majority of EAs can become close to perfectly calibrated on numerical trivia questions in much less than 10 hours of deliberate practice, and my median guess for the amount of time needed is around 2 hours (e.g., practice here).
I want to be careful with my claims here. I think sometimes people have the impression that getting calibrated is synonymous with rationality, or intelligence, or judgement. I think this is wrong:
- Concretely, I just don't think being perfectly calibrated is that big a deal. My guess is that going from median-EA levels of general calibration to perfect calibration on trivia questions improves good research/thinking by 0.2%-1%. I will be surprised if somebody becomes a better researcher by 5% via these exercises, and very surprised if they improve by 30%.
- In forecasting/modeling, the main quantifiable metrics include both a) calibration (roughly speaking, being able to quantify your uncertainty) and b) discrimination (roughly speaking, how often you're right). In the vast majority of cases, calibration is just much less important than discrimination.
- There are issues with generalizing from good calibration on trivia questions to good calibration overall. The latter is likely to be much harder to train precisely, or even to quantify precisely (though I'm reasonably confident that going from poor calibration on trivia to perfect calibration should generalize somewhat; Dave Bernard might have clearer thoughts on this).
- I think calibration matters more for generalist/secondary research (much of what RP does) than for things that either a) require relatively narrow domain expertise, like ML-heavy AI Safety research or biology-heavy biosecurity work, or b) require unusually novel thinking/insight (like much of crucial considerations work).
Nonetheless, I'm a strong advocate for calibration practice because I think the first hour or two of practice will pay off by 1-2 orders of magnitude over your lifetime, and it's hard to identify easy wins like that (I suspect even exercise has a less favorable cost-benefit ratio, though of course it's much easier to scale).
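(For concreteness, here's a minimal, purely illustrative Python sketch of what checking calibration on trivia questions can look like - the confidences and answers below are made up, and real calibration tools do this bookkeeping for you.)

```python
# Minimal illustrative sketch (not any particular tool's implementation):
# bucket trivia answers by stated confidence and compare to the actual hit rate.
from collections import defaultdict

# Hypothetical practice data: (stated confidence, whether the answer was correct)
responses = [
    (0.6, True), (0.6, False), (0.7, True), (0.7, False), (0.7, True),
    (0.8, True), (0.8, True), (0.9, True), (0.9, True), (0.9, False),
]

buckets = defaultdict(list)
for confidence, correct in responses:
    buckets[confidence].append(correct)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    # Well-calibrated: hit_rate roughly equals the stated confidence in each bucket
    # (given enough questions). Discrimination is, roughly, how high the hit rates are.
    print(f"stated {confidence:.0%} -> correct {hit_rate:.0%} (n={len(outcomes)})")
```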
MichaelA @ 2020-12-16T04:45 (+5)
Misc thoughts on "What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?"
There was some relevant discussion here. Ideas mentioned there include:
- getting mentorship outside of EA orgs (either before switching into EA orgs after a few years, or as part of a career that remains outside of explicitly EA orgs longer-term)
- working as a research assistant for a senior researcher
I think the post SHOW: A framework for shaping your talent for direct work is also relevant.
MichaelA @ 2020-12-16T04:09 (+4)
Hi Arushi,
Good questions! I'll split some thoughts into a few separate comments for readability.
Writing on the Forum
I second Peter's statement that
one of the best ways for a would-be researcher to distinguish themselves is writing a thoughtful and engaging EA Forum post. I've seen a lot of great hires distinguish themselves like this.
(Though in some cases it might make sense to publish the post to LessWrong instead or in addition.)
This statement definitely seems true in my own case (though I imagine for some people other approaches would be more effective):
I got an offer for an EA research job before I began writing for the EA Forum. But I was very much lacking in the actual background/credentials the org said they were looking for, so I'm almost certain I wouldn't have gotten that offer if the application process hadn't included a work test that let me show I was a good fit despite that lack of relevant background/credentials. (I was also lucky that the org let me do the work test rather than screening me out before that.) And the work test was basically "Write an EA Forum post on [specific topic]", and what I wrote for it did indeed end up as one of my first EA Forum/LessWrong posts.
And then this year I've gotten offers from ~35% of what I've applied to, as compared to ~7% last year, and I'd guess that the biggest factors in the difference were:
1. I now had an EA research role on my CV, signalling I might be a fit for other such roles
2. Going from 1 FTE non-EA stuff (teaching) in 2019 to only ~0.3 FTE non-EA stuff (a grantwriting role I did for a climate change company on the side of my ~0.7 FTE EA work till around August) allowed me a lot of time to build relevant skills and knowledge
3. In 2020 I wrote a bunch of (mostly decently/well received) EA Forum or LessWrong posts, helping to signal my skills and knowledge, and also just "get my name out there"
   - "getting my name out there" was not part of my original goal, but did end up happening, and to quite a surprising degree.
4. Writing EA Forum and LessWrong posts helped force and motivate me to build relevant skills and knowledge
5. Comments and feedback from others on my EA Forum and LessWrong posts sometimes helped me build relevant skills and knowledge, or build my ideas of what was worth thinking and writing about
   - See also this other comment of mine from this AMA
Factors 1 and 2 didn't depend on me writing things on the EA Forum or LessWrong. But factors 3-5 did. So it seems that writing for the Forum and LessWrong really helped me out here. It also seems plausible that, if I'd started writing for the Forum/LW before I got my first EA job offer, that might've led to me getting an offer sooner than I in fact did.
(But I'm not sure how generalisable any of these takeaways are - maybe this approach suited me especially well for some reason.)
On this, I'd also recommend Aaron Gertler's talks Why you (yes, you) should post on the EA Forum and How you can make an impact on the EA Forum.
MichaelA @ 2020-12-16T04:10 (+3)
My own story & a disclaimer
(This is more of a tangent than an answer, but might help provide some context for my other responses here and elsewhere in this AMA. Feel free to ignore it, though!)
I learned about EA in late 2018, and didn't have much relevant expertise, experience, or credentials. I'd done a research-focused Honours year and published a paper, but that was in an area of psychology that's not especially relevant to the sort of work that, after learning about EA, I figured I should aim towards. (More on my psych background here.) I was also in the midst of the 2 year Teach For Australia program, which involves teaching at a high school, and also wasn't relevant to my new EA-aligned plans.
Starting then and continuing through to mid 2020 ish, I made an active effort to "get up to speed" on EA ideas, as described here.
In 2019, I applied for ~30 EA-aligned roles, mostly research-ish roles at EA orgs (though also some non-research roles or roles at non-EA orgs). I ultimately got two offers, one for an operations role at an EA org and one for a research role. I think I had relevant skills but didn't have clear signals of this (e.g., more relevant work experience or academic credentials), so I was often rejected at the CV screening stage but often did ok if I was allowed through to work tests and interviews. And both of the offers I got were preceded by work tests.
Then in 2020, I wrote a lot of posts on the EA Forum and a decent number on LessWrong, partly for my research job and partly "independently". I also applied for ~11 roles this year (mostly research roles, and I think all at EA orgs), and ultimately received 4 offers (all research roles at EA orgs). So that success rate was much higher, which seems to fit my theory that last year I had relevant skills but lacked clear signals of this.
So I've now got a total of ~1.5 years FTE of research experience, ~0.5 of which (in 2017) was academic psychology research and ~1 of which (this year) was split across 3 EA orgs. That's obviously not enough time to be an expert, and I still have a great deal to learn on a whole host of dimensions.
Also, I only started with Rethink roughly a month ago.
harriet @ 2020-12-16T13:08 (+5)
Hey Michael,
This is a tangent to your tangent, but are you still based in Australia? If so, how do you find Rethink's remote-by-default setup with the time difference?
For context, I considered applying for the same role, but ultimately didn't because at the time I was stuck working from Australia with all my colleagues in GMT+0 timezone (thanks covid), and the combination of daytime isolation/late night meetings were making me pretty miserable. Is Rethink better at managing these issues?
Cheers!
Peter_Hurford @ 2020-12-16T23:12 (+8)
Just want to say that Rethink Priorities is committed to being able to successfully integrate remote Australians and we'd be excited to have more APAC applicants in our future hiring rounds!
MichaelA @ 2020-12-16T13:44 (+5)
Hey Harriet,
Good question. And sorry to hear you had that miserable situation - hope things are better for you now!
First, I should note that I’m in Western Australia, so things would presumably be somewhat different for people in the Eastern states. Also, of course, different people’s needs, work styles, etc. differ.
I’ve been meeting with US people in my mornings, which is working well because I wake up around 7am and start working around 8, while the people I’m meeting with are more night-owl-ish. And I’ve been meeting with people in the UK/Europe in my evenings (around 5-9pm), which I’m also fine with.
Though it is tricky to get all 3 sets of time zones in the same meeting. Usually one of us has to be up early or late. But so far those sorts of group meetings have just been something like once a fortnight, so it's been tolerable.
And other than meetings, time zones aren’t seeming to really matter for my job; most of my work and most of my communication with colleagues (via slack, google doc comments, email, etc) doesn’t require being up at the same time as someone else. (I imagine that, in general, this is true for many research roles and less true for e.g. operations roles.)
Though again, I’ve only been at Rethink for a month so far. And I’m planning to move to Oxford in March. If I was in Australia permanently, perhaps time zone issues for team meetings would become more annoying.
Btw, I also worked for Convergence Analysis (based in UK/Europe) from March to ~August from Australia. That was even easier, because there were never three quite different time zones to deal with (no US employees).
harriet @ 2020-12-17T16:15 (+1)
Thanks for the detailed answer - this actually sounds pretty doable!
MichaelA @ 2020-12-16T04:36 (+2)
Research training programs, and similar things
(You said "Besides offering similar internships". But I'm pretty excited about other orgs running similar internships, and/or running programs that are vaguely similar and address basically the same issues but aren't "internships". So I'll say a bit about that cluster of stuff, with apologies for sort-of ignoring instructions!)
David wrote:
I’m happy to see an increase in the number of temporary visiting researcher positions at various EA orgs. I found my time visiting GPI during their Early Career Conference Programme very valuable (hint: applications for 2021 are now open, apply!) and would encourage other orgs to run similar sorts of programmes to this and FHI’s (summer) research scholars programme. I'm very excited to see how our internship program develops as I really enjoy mentoring.
I second all of that, except swapping GPI's Early Career Conference Programme (which I haven't taken part in) for the Center on Long-Term Risk's Summer Research Fellowship. I did that fellowship with CLR from mid-August to mid-November and found it very enjoyable and useful.
I recently made a tag for posts relevant to what I called "research training programs". By this I mean things like FHI and CLR's Summer Research Fellowships, Rethink Priorities' planned internship program, CEA's former Summer Research Fellowship, probably GPI's Early Career Conference Programme, probably FHI's Research Scholars Program, maybe the Open Phil AI Fellowship, and maybe ALLFED's volunteer program. Readers interested in such programs might want to have a look at the posts with that tag.
I think that these programs might be one of the best ways to address some of the main bottlenecks in EA, or at least in longtermism (I've thought less about areas of EA other than longtermism). What I mean is related to the claim that EA is vetting-constrained, and to Ben Todd's claim that some of EA's main bottlenecks at the moment are "organizational capacity, infrastructure, and management to help train people up". There was also some related discussion here (though it's harder to say whether that overall supported the claims I'm gesturing at).
So I'm really glad a few more such programs have recently popped up in longtermism. And I'm really excited about Rethink's internship program (which I wasn't involved in the planning of, and didn't know about when I accepted the role at Rethink). And I'd be keen to see more such programs emerge over time. I think they could take a wide variety of forms, including but not limited to internships.
And I'd strongly recommend aspiring or early-career researchers consider applying to such programs. See also Jsevillamol's post My experience on a summer research programme.
(As always, these are just my personal views, not necessarily the views of other people at Rethink.)
RogerAckroyd @ 2020-12-14T10:42 (+13)
Conditional on invertebrates being sentient, I would upgrade my probability of other things being sentient. So maybe bivalves are sentient, some existing robots, maybe even plants. I would take the case for hidden qualia in humans seriously as well. Do you agree, and if so, would this have any impact on good policies to pursue?
Jason Schukraft @ 2020-12-15T15:29 (+13)
Hi Roger,
There are different possible scenarios in which invertebrates turn out to be sentient. It might be the case, for instance, that panpsychism is true. So if one comes to believe that invertebrates are sentient because panpsychism is true, one should also come to believe that robots and plants are sentient. Or it could be that some form of information integration theory is true, and invertebrates instantiate enough integration for sentience. In that case, the probability that you assign to the sentience of plants and robots will depend on your assessment of their relevant level of integration.
For what it's worth, here's how I think about the issue: sentience, like other biological properties, has an evolutionary function. I take it as a datum that mammals are sentient. If we can discern the role that sentience is playing in mammals, and it appears there is analogous behavior in other taxa, then, in the absence of defeaters, we are licensed to infer that individuals of those taxa are sentient. In the past few years I've updated toward thinking that arthropods and (coleoid) cephalopods are sentient, but the majority of these updates have been based on learning new empirical information about these animals. (Basically, arthropods and cephalopods engage in way more complex behaviors than I realized.) When we constructed our invertebrate sentience table, we also looked at plants, prokaryotes, protists, and, in an early version of the table, robots and AIs of various sorts. The individuals in these categories did not engage in the sort of behaviors that I take to be evidence of sentience, so I don't feel licensed to infer that they are sentient.
RogerAckroyd @ 2020-12-15T17:00 (+1)
Thank you. That is rather different from my view of sentience in some ways, I appreciate the clarification.
vaidehi_agarwalla @ 2020-12-15T01:19 (+11)
Regarding the following research areas for 2021:
Further refining messaging for the EA movement, exploring different ways of talking about EA to improve EA recruitment and increase diversity.
Further work to explore better ways to talk about longtermism to the general public, to help EAs communicate longtermism more persuasively and to increase support for desired longtermist policies in the US and the UK.
- What kind of research do you plan on doing to answer these questions?
- Did you consider other areas of EA movement building apart from messaging before choosing this one, and if so how did you narrow down your options?
- Do you see general EA messaging as part of your longtermist focus, or is this a separate category? Either way, how do you figure out how to allocate resources to these movement building-related efforts?
Peter_Hurford @ 2020-12-15T19:30 (+12)
Hi Vaidehi,
What kind of research do you plan on doing to answer these questions?
I will be working on both of these projects with David Moss. Our plan is to run surveys of the general public that describe EA (or longtermism) and ask questions to gauge how people view the message. We'd then experimentally change the message to explore how different framings change support, with the idea that messages that engender more support on the survey are likely to be more successful overall. For EA messaging, we'd furthermore look at support broken down by different demographics to see if there are more inclusive messages out there. We did a similar project for animal welfare messaging on live-shackle slaughter, which you can look at to get a sense of what we do. We also have a lot of unpublished animal welfare messaging work we're eager to get out there as soon as we can.
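(To make this a bit more concrete, here's a minimal, purely hypothetical sketch of the kind of framing comparison involved - the framings, sample sizes, and support rates below are invented for illustration and aren't drawn from any actual RP survey.)

```python
# Hypothetical sketch of a message-framing experiment analysis (invented data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Simulated responses: 1 = respondent supports the message, 0 = does not.
framing_a = rng.binomial(1, 0.55, size=400)  # e.g. an "effectiveness"-led framing
framing_b = rng.binomial(1, 0.48, size=400)  # e.g. an "obligation"-led framing

# Compare overall support rates between the two randomly assigned framings.
t_stat, p_value = stats.ttest_ind(framing_a, framing_b)
print(f"Framing A support: {framing_a.mean():.1%}")
print(f"Framing B support: {framing_b.mean():.1%}")
print(f"Difference p-value: {p_value:.3f}")
# In a real survey you would also break support down by demographic group
# to look for framings that are more inclusive across audiences.
```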
~
Did you consider other areas of EA movement building apart from messaging before choosing this one, and if so how did you narrow down your options?
As you know, we do run the EA Survey and Local Groups Survey. Right now, our main goal is to stay within analysis of EA movement building rather than work to directly build the movement like other groups (e.g., CEA, 80K, GWWC, TLYCS) already do. We see these messaging studies as a good next step. However, we have not systematically compared opportunities yet as we don't have the staff or funding right now to do such a search.
~
Do you see general EA messaging as part of your longtermist focus, or is this a separate category?
We see these as separate projects in separate cause areas, though there will definitely be a lot of cross-cause learning. Note that we do this for farmed animal welfare as well, and may also do so in wild animal welfare in the near future. It is a very useful thing to do for all sorts of causes!
~
Either way, how do you figure out how to allocate resources to these movement building-related efforts?
Right now we have allocated funding to this from restricted funding and a portion of our unrestricted funding. We will also likely fundraise for this work specifically from interested donors.
bshumway @ 2020-12-14T20:42 (+11)
If you had to choose just three long-termist efforts as the highest expected value, which would you pick and why?
Linch @ 2020-12-16T02:51 (+10)
(Speaking for myself and not others on the team, etc)
At a very high level, I think I have mostly "mainstream longtermist EA" views here, and my current best guess would be that AI Safety, existential biosecurity, and cause prioritization (broadly construed) are the highest EV efforts to work on overall, object-level.
This does not necessarily mean that marginal progress on these things is the best use of additional resources, or that they are the most cost-effective efforts to work on, of course.
Peter_Hurford @ 2020-12-15T19:43 (+9)
This is not a satisfying answer but right now I think the longtermist effort with the highest expected value is spending time trying to figure out what longtermist efforts we should prioritize.
I also think we should spend a lot more resources on figuring out if and how much we can expect to reliably influence the long-term future, as this could have a lot of impact on our strategy (such as becoming less longtermist or more focused on broad longtermism or more focused on patient longtermism, etc.).
I don't have a third thing yet, but both of these projects we are aiming to do within Rethink Priorities.
MichaelA @ 2020-12-16T06:26 (+6)
(Just my personal views, as always)
Roughly in line with Peter's statement that "I think the longtermist effort with the highest expected value is spending time trying to figure out what longtermist efforts we should prioritize", I recently argued (with some caveats and uncertainties) that marginal longtermist donations will tend to be better used to support "fundamental" rather than "intervention" research. On what those terms mean, I wrote:
It’s most useful to distinguish intervention research from fundamental research based on whether the aim is to:
- better understand, design, and/or prioritise among a small set of specific, already-identified intervention options, or
- better understand aspects of the world that may be relevant to a large set of intervention options (more)
See that post's "Key takeaways" for the main arguments for and against that overall position of mine.
I think I'd also argue that marginal longtermist research hours (not just donations) will tend to be better used to support fundamental rather than intervention research. (But here personal fit becomes quite important.) And I think I'd also currently tend to prioritise "fundamental" research over non-research interventions, but I haven't thought about that as much and didn't discuss it in the post.
So the highest-EV-on-the-current-margin efforts I'd pick would probably be in the "fundamental research" category.
Of course, these are all just general rules, and the value of different fundamental research efforts, intervention research efforts, and non-research efforts will vary greatly.
In terms of specific fundamental research efforts I'm currently personally excited about, these include analyses, from a longtermist perspective, of:
- totalitarianism/dystopias,
- world government (see also),
- civilizational collapse and recovery,
- "the long reflection", and/or
- long-term risks from malevolent actors
Basically, those things seem like variables that might (or might not!) matter a great deal, and (as far as I'm aware) haven't yet been looked into from a longtermist perspective much. So I expect there could be some valuable low-hanging fruit there.
Maybe if I had to pick just three, I'd bundle the first two together, and then stamp my feet and say "But I want four!"
(I have more thoughts on this that I may write about later. See also this and this. And again, these are just my personal, current views.)
JoshYou @ 2020-12-14T17:16 (+11)
How do you decide how to allocate research time between cause areas (e.g. animals vs x-risk)?
Marcus_A_Davis @ 2020-12-15T19:32 (+8)
Hey Josh, thanks for the question!
From first principles, our allocation depends on talent fit, the counterfactual value of our work, fundraising, and, of course, some assessment of how important we think the work is, all things considered.
At the operational level, we set targets for the percentage of time we want to spend on each cause area based on these factors, and we re-evaluate those targets as our existing commitments, the data, and changes in our opinions about these matters warrant.
EdoArad @ 2020-12-15T08:22 (+9)
In this report on bottlenecks in the X-risk research community, the main suggestion was to improve the senior researcher pipeline. What do you think about the senior researcher pipeline in prioritization research?
Peter_Hurford @ 2020-12-15T19:52 (+11)
I think it would always be good to have more senior researchers, but they seem rather hard to find. Right now, my personal view is that the best way to build senior researchers is to hire and train mid-level or junior-level researchers. We hope to keep doing this with our past hires, existing hires, and our upcoming intern program.
If you're interested in funding researcher talent development, I think funding our intern program is a very competitive opportunity.
saulius @ 2020-12-15T20:07 (+6)
I haven’t read that report in full, but I imagine that it's such a big issue in X-risk research because the field grew very quickly from an obscure one to one with a lot of funding available and a lot of people wanting to work in it. I think it's a rare situation, and I don't feel that it's a significant problem in the kind of research that I do (farmed animal welfare). I remember hearing that it is a problem in cultured meat R&D though, and it makes sense; the situation is similar.
EdoArad @ 2020-12-17T04:55 (+2)
That makes sense, thanks. I've checked with people in cultured meat, and they seem to agree with you - e.g. startup companies are looking for hires with broadly relevant PhD experience (less than what I'd count as senior), and some major companies have a single scientific advisor who, while an accomplished academic, is not very familiar with the field.
EdoArad @ 2020-12-15T08:22 (+8)
Do you think that you have received valuable feedback on your work by posting it on the forum? If you did, did most of it come from people in your existing network?
Jason Schukraft @ 2020-12-15T14:43 (+21)
Hey Edo,
I definitely receive valuable feedback on my work by posting it on the Forum, and the feedback is often most valuable when it comes from people outside my current network. For me, the best example of this dynamic was when Gavin Taylor left extensive comments on our series of posts about features relevant to invertebrate sentience (here, here, and here) back in June 2019. I had never interacted with Gavin before, but because of his comments, we set up a meeting, and he has become an invaluable collaborator across many different projects. My work is much improved due to his insights. I'm not sure Gavin and I would ever have met (much less collaborated) if not for his comments on the Forum.
MichaelA @ 2020-12-17T00:39 (+2)
Hi Edo,
I definitely think I’ve received valuable feedback on my work on the EA Forum, as well as on LessWrong. This feedback came in the form of upvotes/downvotes, comments on my posts, and private messages/discussions that people had with me as a result of me having posted things.
It’s harder to say:
- In what ways was that feedback valuable?
- Precisely how valuable was it? How does that compare to other sources of feedback?
- How did the value vary by various factors (e.g., EA Forum vs LessWrong, posts that are more like summaries vs posts that are more like “original research”)?
- What proportion of that came from people in my existing network?
Some thoughts on those points follow. (But first I should flag that I think there are also good reasons to post to the EA Forum/LW other than to get feedback, including to share potentially useful ideas, to signal one’s skills/knowledge to aid in later job applications, and make connections with other EAs; more on this in this other comment of mine.)
Note that all of the following relates to posts I made before joining Rethink, as I haven't yet posted anything related to my work with Rethink.
Q1: Valuable in what ways?
- Maybe the main way the feedback was useful was in helping me get an overall sense of how I was doing as an EA researcher, how I was doing as a macrostrategy researcher, and how valuable the kinds of work I was doing were, to inform whether to carry on with those things.
- That said, I think votes and comments provided less useful feedback on these points than I’d have expected. That feedback basically just seemed to indicate “You’re probably neither a terrible fit for this nor an amazing wunderkind, but rather somewhere in the vast chasm in between.” Which I guess did narrow my uncertainty slightly, but not very much.
- But there was one case in which my posts led to a more experienced researcher learning that I existed, perceiving me as having strong potential, and reaching out to me to chat, and I think that that conversation substantially informed my career plans. And since then I’ve had further conversations with that person that have also informed my career plans.
- Another way the feedback was useful was via some comments on posts informing my specific choices about what to research or write about next, or what shape those next posts should take.
- If I recall correctly, this happened a few times with my first series of posts (on moral uncertainty).
- Some comments helped me determine what threads it’d be interesting/necessary to explore more (e.g., because people were still confused about those things).
- I think that’d happen less often now, because now I’m better at writing posts in general and I have more opportunities for feedback pre-posting (e.g., from my colleagues at Rethink).
- The best example was that this comment from MichaelStJules on a post of mine prompted me to make my database of existential risk estimates.
- Maybe I would’ve eventually ended up making such a database anyway, but I don’t think I’d explicitly thought of doing so before seeing that comment.
- I think that that database is probably in the top 5 most valuable things I’ve publicly posted this year (out of probably ~35 posts, if we exclude things like question posts and link posts). And I think it was more valuable than the post of mine which MichaelStJules commented on.
- I think this is an interesting case, because making that database required no special skills (it was just a weirdly overlooked low-hanging fruit that anyone could’ve plucked already), and the relevant part of MichaelStJules’ comment was just one sentence, and they just gestured in the general direction of what I ended up doing rather than clearly outlining it. So it feels sort-of like this was an “easy win” that just required a space for some accidental public brainstorming.
- The way the feedback was most often valuable, but which is less important than the above two things, was via helping me improve specific posts. I often edited posts in response to comments.
- Finally, I imagine feedback sometimes helped me improve my research or writing style.
- Off the top of my head, I can’t remember that happening. But maybe it happened early on and I’ve just forgotten.
Q2: How valuable (compared to other things)?
- I’d probably describe how useful the feedback was by saying “Maybe less valuable than I’d have idealistically expected, but valuable enough to be a noticeable extra perk of posting publicly.”
- I think the two most valuable sources of feedback for me in 2019 and 2020, which I think were much more valuable than feedback from the Forum, were (1) results from job applications, and (2) conversations with people who were further along in various career paths.
- This is partly because what I needed most was an overall sense of which pathway I should be heading in.
- But as noted above, my Forum posts did lead to one instance of (2) - i.e., conversations about my career plans with a more experienced person.
- Regular sources of feedback on things I wrote on the Forum also probably tended to be somewhat less useful than the results from a survey I ran about the quality and impact of my writing on the Forum and LessWrong.
Q3: How did the value vary?
- I think I probably got a similar amount of value per unit of feedback on the EA Forum and LessWrong
- But I think the case in which my posts prompted a useful conversation about my career plan was prompted by my Forum rather than LessWrong posts.
- And feedback on LessWrong was less pleasant, on average (more often needlessly blunt or snarky - but still better than most of the internet, and still often the substance of what people were saying was useful).
Q4: What proportion was from my existing network?
- I think almost all of the value came from people “outside of my existing network” (here meaning “people I hadn’t interacted with 1-1, though maybe I’d had public comment exchanges with them”).
- This is probably partly because:
- My network of EA researchers / Forum users / similar happened to be quite small at the start of this year
- I wrote across a wide range of topics this year, so the set of people who’d be able to give useful input on something I wrote is quite wide and diverse, making it harder to have them all in my network and individually solicit their feedback
- If people were already in my network (e.g. if they were coworkers), I’d be more likely to get feedback from them before/without posting to the Forum/LW
- The first two of those points have become less true over time, so I imagine from now on I might tend to get a higher proportion of my feedback in ways that don’t require posting to the Forum.
EdoArad @ 2020-12-17T04:13 (+2)
Thanks! I found it very interesting that one of the most important kinds of feedback was on how you were doing as a researcher, and that the most important feedback was from the survey. I think that this probably applies widely and is a good reminder to interact well, especially with posts and people I appreciate (I think that I'll try to send more PMs to people who I think are constantly writing well on the forum and may be under-appreciated).
Also, thinking on Q4, I think that I might be worried that as people's personal networks get larger and more skilled, they might post less publicly, or post only material that is heavily polished.
Generally, though, it seems like you didn't find engagement with the content itself very useful, which is about what I'd have guessed but unfortunate to hear.
(btw, reminding you to link to this comment from here)
MichaelA @ 2020-12-17T09:16 (+5)
Also, thinking on Q4, I think that I might be worried that as people's personal networks get larger and more skilled, they might post less publicly, or post only material that is heavily polished.
Yeah. I think it's great that people can build networks of people with relevant interests and expertise and get thoughtful feedback from those networks, but also a shame if that means that people don't take the little bit of extra time to post work that's already been done and written up.
I think that this sort of thing is why I wanted to say "But first I should flag that I think there are also good reasons to post to the EA Forum/LW other than to get feedback...".
I plan to indefinitely continue posting publicly except in cases (which do exist) where there are specific reasons not to do so,[1] such as:
- potential infohazards
- the piece of writing is likely to be more polished and useful in future, so I'm deferring posting it till then
- In cases where the work isn't fully polished but the writer has no plans to ever polish it, I'd say it's often worth posting anyway with some disclaimers, and letting others just decide for themselves whether to bother reading it
- there are reasons to believe the work will confuse or mislead people more than it informs them (see also)
(Tangentially, I also feel like it's a shame when people do post EA-relevant work publicly, but just post it on their personal blog or their organisation's website or something, without also crossposting it to the Forum. It seems to me that that unnecessarily increases how hard it can be for people to find relevant info.)
[1] This sentence used to say "I plan to indefinitely continue posting publicly unless there are specific reasons not to do so, such as:" (emphasis added). That was more ambiguous, so I edited it.
jsteinhardt @ 2020-12-17T12:53 (+2)
I think the reason people don't post stuff publicly isn't laziness, but that there's lots of downside risk, e.g. of someone misinterpreting you and getting upset, and not much upside relative to sharing in smaller circles.
MichaelA @ 2020-12-17T13:33 (+3)
(Just speaking for myself, as always)
I definitely agree that there are many cases where it does make sense not to post stuff publicly. I myself have a decent amount of work which I haven't posted publicly. (I also wrote a small series of posts earlier this year on handling downside risks and information hazards, which I mention as an indication of my stance on this sort of thing.)
I also agree that laziness will probably rarely be a major reason why people don't post things publicly (at least in cases where the thing is mostly written up already).
I definitely didn't mean to imply that I believe laziness is the main reason people don't post things publicly, or that there are no good reasons to not post things publicly. But I can see how parts of my comment were ambiguous and could've been interpreted that way. I've now made one edit to slightly reduce ambiguity.
So you and I might actually have pretty similar stances here.
But I also think that decent portions of cases in which a person doesn't post publicly may fit one of the following descriptions:
- The person sincerely believes there are good reasons to not post publicly, but they're mistaken.
- But I also think there are times when people sincerely believe they should post something publicly, and then do, even though really they shouldn't have (e.g., for reasons related to infohazards or the unilateralist's curse).
- I'm not sure if people err in one direction more often than the other, and it's probably more useful to think about things case by case.
- The person overestimates the risks that posting publicly poses to their own reputation, or (considered from a purely altruistic perspective) overweights risks to their own reputation relative to potential benefits to others/the world (basically because the benefits are mostly externalities while the risks aren't).
- That said, risks to individual EA-aligned researchers' reputations could be significant from an altruistic perspective, depending on the case
- Also, I don't want to be judgemental about this, or imply that people are obligated to be selfless in this arena. It's more like it'd be nice if they were more selfless (when this is the situation at hand), but understandable if they aren't, because we're only human.
- It's simply that the person's default is to not post this publicly, and the person doesn't actively think about whether to post, or doesn't have enough pushing them towards doing so.
- So it's more out of something like inertia than out of weighing perceived costs and benefits.
- Posting publicly would take up too much time (for further writing, editing, formatting, etc.) to be worthwhile, not because of laziness but because of other things worth prioritizing.
None of those cases primarily centre on laziness, and I wouldn't want to be judgemental towards any of those people. But in the first three cases, it might be better if the person was nudged towards posting publicly.
(And again, to be clear, I do also think there are cases in which one shouldn't post publicly.)
Does this roughly align with your views?
jsteinhardt @ 2020-12-17T18:59 (+2)
I didn't mean to imply that laziness was the main part of your reply, I was more pointing to "high personal costs of public posting" as an important dynamic that was left out of your list. I'd guess that we probably disagree about how high those are / how much effort it takes to mitigate them, and about how reasonable it is to expect people to be selfless in this regard, but I don't think we disagree on the overall list of considerations.
MichaelA @ 2020-12-17T09:14 (+4)
I think that this probably applies widely and is a good reminder to interact well, especially with posts and people I appreciate (I think that I'll try to send more PMs to people who I think are constantly writing well on the forum and may be under-appreciated).
Yeah, that sounds to me like it could be handy!
It also would've been useful (or at least comforting) if I'd known that, if I was doing badly and seemed to be a bad fit, I'd get a clear indication of that. (It'd obviously suck to hear it, but then I could move on to other pursuits.) Otherwise it felt hard to update in either direction. But I think it's much easier and less risky to just make it more likely that people would get clear indications when they are doing well than when they aren't, for a wide range of reasons (including that even people who are capable of being great at something might not clearly display that capability right away).
Generally, though, it seems like you didn't find engagement with the content itself very useful, which is about what I'd have guessed but unfortunate to hear.
I think I agree with what you mean, but that this phrasing might give someone the wrong impression. I definitely appreciated the engagement that did occur, and often found it useful. The problems were more that:
- Often there just wasn't much engagement. Maybe like some upvotes, 0-1 downvotes, 0-4 short comments.
- It's very hard to distinguish "These 3 positive comments are from the 3 out of (let's say) 25 readers who had an unusually positive opinion about this or want to be welcoming, and the others thought this sort-of sucked but couldn't be bothered saying so or didn't want to be mean" from "These 3 positive comments are totally sincere, and the other (let's say) 22 readers also thought this was great but didn't bother commenting or felt it'd be weird to just comment 'this is great!' without saying more"
- And that's not the fault of those 3 commenters. And it would feel harsh to say it's the fault of the (perhaps imagined) other 22 readers either.
(btw, reminding you to link to this comment from here)
Thanks! Done.
MichaelA @ 2020-12-17T08:51 (+4)
I found it very interesting that one of the most important kinds of feedback was on how you were doing as a researcher
It seems worth emphasising here that, before 2020:
- I'd only done ~0.5FTE years of research before 2020, and it was in an area and methodology that's not very relevant to what I'm doing now
- I hadn't started my "EA-aligned career"
- (More on this in this comment)
Therefore, for most of this year I've seen myself as more in "explore" than "exploit" mode.
As I gradually move more towards the "exploit" end of that continuum, I'd guess that:
- I'll have less need of feedback that just gives me an overall sense of whether I'm a good fit for X
- The value of feedback that improves a given piece of work (e.g., points out mistakes or angles that should be explored more or clarified) will rise, because the direct value of the individual pieces of work I'm doing is higher
This reminds me of some education researchers emphasising that the purpose of feedback in the context of high schools is to improve the student, not the piece of work. This makes sense, because the essay that a 15-year-old wrote isn't going to affect any important decisions, but the 15-year-old may later do useful things, and has a lot to learn in order to do so.
But in other contexts, a given piece of writing may be likely to influence important decisions, and the writer may already be more experienced. In those cases, it might make sense for feedback to focus on improving the piece of writing rather than the writer.
EdoArad @ 2020-12-15T08:22 (+8)
Have you had experience with using volunteers or outsourcing questions to the broad EA community? How was it?
saulius @ 2020-12-15T20:34 (+11)
I did try it on some occasions with people who wanted to do research similar to the kind that I do. I think it saved me less time than it took to think of good questions to outsource, explain everything, and so on. This might be partly because there is a skill to outsourcing that I haven't mastered yet. I don't know if it helped anyone decide whether they should pursue this type of career. If it did, then it was very much worth it.
One way I used volunteers (and friends whom I forced to volunteer) productively was having them read texts that I wrote and asking them to comment aloud (not in writing) on everything that was at least slightly unclear. Then I didn't explain; I rewrote that part, asked them to read it again, and asked whether they understood it now. I found that this is important for texts that contain some complicated ideas/reasoning. E.g., it was very useful for the explanation of the optimizer's curse and other things in this article. It's not important for simple texts.
saulius @ 2020-12-15T20:39 (+6)
I also tried organizing some brainstorming sessions with members of the EA community. It was a bit useful, though I'm not sure it was worth it (despite great participants), mostly because I get stressed about running events and then overprepare. And also because it would have taken too much time to explain all the relevant context in which I needed ideas. I think that in the right hands and the right situation, this is a tool that could be used productively though.
Marcus_A_Davis @ 2020-12-15T19:46 (+8)
Hey Edo, thanks for the question!
We've had some experience working with volunteers. In the past, when we had less operational support than we do now, we found it challenging to manage and monitor volunteers but we think it's something that we're better placed to handle now so may explore again in the coming years, though we are generally hesitant about depending on free labor.
We've not really had experience publicly outsourcing questions to the EA community, but we regularly consult wider EA communities for input on questions we are working on. Finally, and I'm not sure this is what you meant, but we've also partnered with Metaculus on some forecasting questions.
jayquigley @ 2020-12-20T03:15 (+7)
- What is your stance regarding aiming your output at an EA audience vs. a wider audience? (Academic & governmental audiences, etc.?)
- It seems that a large portion of your output begins on your blog and in EA Forum posts. What other venues do you aim at, if any?
- To what extent do you regard tailoring your work to academic journals with "peer-review" as counterfactually worthwhile?
EdoArad @ 2020-12-15T08:22 (+6)
How do you manage research questions? Do you have some sort of an internal list of relevant questions? I'd also love to hear about specific examples of decisions to pursue or discard a research question.
Marcus_A_Davis @ 2020-12-15T19:57 (+9)
Thanks for the question, Edo!
We keep a large list of project ideas, and regularly add to it by asking others for project ideas, including staff, funders, advisors, and organizations in the spaces we work in.
EdoArad @ 2020-12-17T04:45 (+4)
Thank you! I have some followup questions if that's ok :)
Is it reasonable to publicly publish the list or some of it?
How do you prioritize and select them?
Do the suggestions to pursue a project come from the managers or the researchers? If they sometimes come from the researchers, do you have any mechanisms in place to motivate the researchers to explore the list or does it happen naturally?
EdoArad @ 2020-12-15T08:22 (+5)
What are some possible efforts within prioritization research that are outside your scope and that you'd like to see more of?
Linch @ 2020-12-16T05:24 (+7)
I'm not confident that this is fully outside the scope of RP, but I think backchaining-in-practice is plausibly underrated by EA/longtermism, despite a lot of chatter about it in theory.
By backchaining in practice I mean tracing backwards fully from the world we want (eg a just, kind, safe world capable of long reflection), to specific efforts and actions individuals and small groups can do, in AI safety, biosecurity, animal welfare, movement building, etc.
Specific things that I think would be difficult to put under RP's purview include things that require specific AI Safety or biosecurity stories, though those things plausibly have information hazards, so I'd encourage people who are doing these extensive diagrams to a) be somewhat careful about information security and b) talk to the relevant people within EA (eg FHI) before creating and certainly before publishing them.
An obvious caveat here is that it's possible many such backchaining documents exist and I am unaware of them. Another caveat is that maybe backchaining is just dumb, for various epistemic reasons.
Peter_Hurford @ 2020-12-16T21:34 (+6)
I'm not really sure what is included in the scope of "prioritization research". One thing we definitely do not do and very likely will never do, and that I am glad others do, is technical AI safety research.
Other than that, I think pretty much anything in longtermism could be fair game for Rethink Priorities at some point.
EdoArad @ 2020-12-17T03:49 (+3)
I am surprised that you mention technical AI Safety as something you don't do under what I consider "prioritization research", which, I didn't realize before posting my question, was apparently a concept I used mostly internally 😊 Linch's mention of it below was in the context of understanding its importance rather than trying to solve it, which I guess is how I'd carve up "prioritization research".
I guess that for similar reasons I'd expect RP to focus less on solving (longtermist or other) problems. Just to make sure, could examples like the following be in RP's scope if you had the right people/situation?
- Suggesting safe ways to use certain geoengineering mechanisms.
- Developing methods for increased empathy toward future people.
- Proposing and defining a governmental institute for future generations.
- Developing economic models for incentives of great power war under futuristic scenarios like space expansion and proposing mechanisms to manage the risk of war.
Linch @ 2020-12-21T18:17 (+6)
Linch's mention of it below was in the context of understanding its importance rather than trying to solve it, which I guess is how I'd carve up "prioritization research".
I think what counts as prioritization vs object-level research of the form "trying to solve X" does not obviously have clean boundaries, for example a scoping paper like Concrete Problems in AI Safety is something that a) should arguably be considered prioritization research and b) is arguably better done by somebody who's familiar with (and connected in) AI.
Peter_Hurford @ 2020-12-17T07:06 (+4)
Yes, I think all the things you mentioned are projects that are "within the scope" of RP (not that we would necessarily do them). We see our scope as being very broad so that we can always do the highest impact projects.
EdoArad @ 2020-12-17T08:31 (+3)
Thanks, that's interesting to hear. I guess that the mission statement is broad enough to allow it :)
I have some concerns about this approach, mostly as it relates to developing research and organizational expertise, and possibly discouraging the creation of new research organizations. However, I'm sure that these kinds of considerations go into your case-by-case decision-making process, and I imagine that these problems would only become crucial when EA and RP scale up and mature more.
MichaelA @ 2020-12-16T05:56 (+3)
Hi Edo,
Could you expand a bit on what you mean by prioritization research? Do you mean something like "efforts to find the most important causes to work on and compare interventions across different areas, so that we can do as much good as possible with the resources available to us"?
If so, how narrowly do you intend "causes" to be interpreted? E.g., would you count research that informs how much to prioritise technical AI safety work vs AI governance work? Or only research that informs decisions like how much to prioritise AI risk vs biorisk? Or only research that informs decisions like how much to prioritise longtermism vs near-termist animal welfare?
(I think this is a good question, btw! I just feel like it could go in a few different directions depending on how it's intended/operationalised.)
EdoArad @ 2020-12-16T10:27 (+6)
Thanks for asking for clarification. I intended something wide that includes everything from, say, ranking interventions, through cause prioritization, to global priorities research and basic research that aims at improving practical prioritization decisions.
EdoArad @ 2020-12-15T08:22 (+5)
What does your cooperation with other prioritization research groups look like? What do you think are the biggest bottlenecks in prioritization research as a field?
Peter_Hurford @ 2020-12-16T21:35 (+6)
I'm not sure what other groups you have in mind, but I'll answer this with regard to longtermism-oriented EA-affiliated research groups.
We've collaborated a lot with the Future of Humanity Institute and the Forethought Foundation and have even shared staff and research projects with them on occasion. We have also talked some with people at Global Priorities Institute and other organizations.
I'd guess right now the biggest bottleneck is just finding ways to get more researchers working on these most important questions. There's a lot more talent out there than there are spots open. More funding would help, but we also need more management and mentorship capacity.
I'm optimistic that our internship program will be a help for this, but it is still funding constrained.
JoshYou @ 2020-12-14T17:18 (+5)
How's having two executive directors going?
Marcus_A_Davis @ 2020-12-15T19:15 (+9)
I think it's going great! I think our combined skillset is a big pro when reviewing work and considering project ideas. In general, I think bouncing ideas off each other improves and sharpens our ideas. We are definitely able to cover more depth and breadth with the two of us than if only one person were leading the organization.
Additionally, Peter and I get along great and I enjoy working alongside him every day (well, digitally anyway, given we are remote).
Peter_Hurford @ 2020-12-15T19:17 (+8)
I also think having a co-Executive Director is great. As Marcus said, we complement each other very well -- Marcus is more meticulous and detail-oriented than me, whereas I tend to be more "visionary". I definitely think we need both.
We also share responsibilities and handle disagreements very well, and we have a trusted tie-breaking system. We've thought a few times about whether this merits splitting into CEO / COO or something similar and it hasn't ever made as much sense as our current system.
EdoArad @ 2020-12-15T08:22 (+4)
In the following comment, Marcus wrote:
One very simplistic model you can use to think about possible research projects in this area is:
- Big considerations (classically "crucial considerations", e.g. moral weight, invertebrate sentience)
- New charities/interventions (presenting new ideas or possibilities that can be taken up)
- Immediate influence (analysis to shift ongoing or pending projects, donations, or interventions)
It's far easier to tie work in categories (2) or (3) to behavior change. By contrast, projects or possible research that fall into category (1) can be very difficult to map to specific plausible changes ahead of time and, sometimes, even after the completion of the work. These projects can also be more likely to be boom or bust, in that the results of investigating them could have huge effects if we or others shift our beliefs, but it can be fairly unlikely that beliefs change at all. That said, I think these types of projects can be very valuable and we try to dedicate some of our time to doing them.
I have some follow-up questions.
These categories seem to involve overlapping but distinct research methodologies and skillsets: say, estimation work based on gathering quantitative evidence, philosophical work that draws from academic moral philosophy, or building world-models from pieces of qualitative evidence. Do you have a model for categorizing different types of research?
How do you expect work on "Big considerations" to propagate? E.g., in the case of invertebrate sentience, did you have an explicit audience in mind and a resulting theory of change (ToC)?
Peter_Hurford @ 2020-12-16T21:36 (+4)
Hey EdoArad, it looks like you posted a lot of these questions twice, and those questions have been answered elsewhere. Here are some answers to the questions I don't think were posted twice:
~
These categories seem to involve overlapping but distinct research methodologies and skillsets: say, estimation work based on gathering quantitative evidence, philosophical work that draws from academic moral philosophy, or building world-models from pieces of qualitative evidence. Do you have a model for categorizing different types of research?
We do not currently have a model for that.
~
How do you expect work on "Big considerations" to propagate? E.g., in the case of invertebrate sentience, did you have an explicit audience in mind and a resulting theory of change (ToC)?
In the case of invertebrate sentience, our audience would be the existing EA-aligned animal welfare movement and big funders, such as Open Philanthropy and the EA Animal Welfare Fund. I hope that if we can demonstrate the cause area is viable and tractable, we might be able to find new funding opportunities and start moving money to them. The EA Animal Welfare Fund has already started giving money to some invertebrate welfare projects this year and I think our research was a part of those decisions.
EdoArad @ 2020-12-17T03:13 (+2)
Thanks for the answer! I find it interesting that the intended audience is internal to EA.
(And sorry about the duplicates - fixed now.)
Peter_Hurford @ 2020-12-17T03:27 (+2)
Yeah, our broader theory of change is mostly (but not entirely) based on improving the output of the EA movement, and having the EA movement push out from there.