Some reflections on testing fit for research

By rosehadshar @ 2022-02-21T09:53 (+64)

Thanks to Michael Aird for feedback on the ideas in the post, and to Nora Ammann and Damon Binder for feedback on the post itself.

At the end of last year, I decided that I wanted to pursue some research interests of mine, and test my fit for research in the EA space more generally. I ended up on a 3-month, part-time contract as a researcher, working on various aspects of the history of social movements with a mentor at a different organisation.

Looking back over these 3 months, I think I learned some pretty useful things from trying to test my fit for research. In this post, I try to share these learnings, primarily with an audience of people who are interested in doing research, but don’t have much experience yet.

Context

What follows is basically a series of short reflective essays. Rather than offering abstract models, I’m trying to share the texture of my experience, in the hope that this will make the things I’m writing about easier to really ‘get’. Unfortunately this writing style is also quite wordy. For a quicker read, skim the headings, and then pick the ones that sound relevant to you to read in full.

Forming opinions is really useful

It’s (relatively) easy to ask interesting questions and speculate on their answers, but harder to actually come out and say ‘currently x is my best guess’, because then you can be wrong.

Testing my fit for research forced me to form opinions in various ways:

I found practising forming opinions a bit scary, but also very good for my epistemics and my confidence:

There are many reasons why I didn’t previously feel empowered to form opinions on EA stuff, and I think it’s worth listing them as I expect other people share some of them:

To be clear, I’m not claiming that testing your fit for research is the only, or best, way of practising forming opinions. It’s just the way that I started doing so, and one of the possible benefits to be had from testing fit for research.

Thinking with numbers is really useful

This is a very common position in the EA community, so I expect many people don’t need me to tell them this. But even though I had heard it many times, I only properly understood how useful quantitative thinking is by actually doing it.

Before this research project, I didn’t feel excited about thinking with numbers. There were a few different things going on here:

Because of thoughts like these, I wouldn’t have proactively sought out opportunities to think with numbers. Fortunately for me, my mentor ended up giving me a project which required some numbers. To my surprise, I found that:

  • I was capable of usefully thinking with numbers, just using basic maths.[1]
  • It didn’t instantly destroy my identity as a person who likes poetry and language.
  • I could use numbers to think about history and culture (in some cases at least).
  • It was actually fun.
  • It changed the way I thought, for the better.

To expand upon the last point, here are the things I found most useful about thinking with numbers:

As with forming opinions, I’m not claiming that doing research is the only or best way to learn to think with numbers (and some kinds of research wouldn’t help at all). But it is one possible way.

You need surface area to have an impact

Some of my research interests are motivated by wanting to improve the EA community. Over the course of this project, I realised that:

I ended up working on a set of questions where my mentor did have good surface area, and could guide me towards the action-relevant bits. If that hadn’t been the case, I think I wouldn’t have ended up doing any useful research.

My main takeaway here is that you need to have good surface area to do impactful research. Here is a non-exhaustive list of kinds of surface area it might be useful to seek, depending on your project:

Basically, make sure you’re in close contact with the people and ideas that are relevant for your work. Put like that, it’s a kind of obvious point, but I think it’s easy to neglect the social aspect, and to think that if you just read the relevant peer-reviewed literature, you will have enough context. I don’t think this is true for research generally, and especially not for research that’s trying to have an impact. The cutting edge of useful questions will ~never appear in the peer-reviewed literature, because of how long that process takes. Besides, there’s often lots of nuance and tacit knowledge involved, which you can’t get at unless you actually spend time with the relevant people. (Probably there are other reasons too.)

I think there are two parts of the research process that surface area is particularly important for:

For early-stage researchers, I think this is especially worth bearing in mind when it comes to choosing mentors. Ideally, you want to find someone with more surface area than you on the thing you want to impact. Otherwise, there’s a high risk of working on stuff that is irrelevant, or that no one ever reads.[3]

Working on someone else’s question is easier than working on your own

Part of the reason why I wanted to do some research in the first place was that I felt that I had a bunch of interesting and potentially useful ideas. It seemed natural then to work directly on those ideas.

What happened next was that I spent a long time on background reading, trying to refine questions, realising they needed more refining, and eventually getting beached and feeling like all of my questions were useless. I spent a week feeling bad and being quite unproductive, and then suddenly things turned around and I started doing directly useful things.

This turnaround didn’t happen because I finally figured out my own ideas: my mentor Damon just said, ‘It’d be pretty useful for my research if I knew the answer to question x. How about you work on that for a bit?’

Things immediately got easier and more useful, and in retrospect I wish I’d tried harder at the beginning to get someone to mentor me on a question they cared about. I don’t have a coherent model here, but some things I’ll note:

It’s easy to get completely stuck

In the course of 3 months, I spent about a week genuinely stuck. I would start trying to do something, realise it was harder than I thought, and give up. Then I’d pick up something else, but while doing it I’d start to worry that it wasn’t actually worth doing at all. Sooner or later, I would just be staring at my screen, stuck. Occasionally I’d try to address the meta problem that I was stuck, but then I’d feel bad that I was spending so much time on meta and not making any object level progress, and go back to some object level thing, which I’d then get stuck on…

I had seen other people get stuck on their research, but deep down I sort of thought I was different. I didn’t seem to get stuck on my other work, and I thought of myself as a productive person who would be able to work through challenges, not get beached by them.

It turns out I am not different to those people, and I now finally get how easy it is to get stuck.

I think getting unstuck is very situation-specific: perhaps the question is actually too hard, perhaps you’re right that it’s not terribly useful, or maybe you just lack confidence and need someone to tell you you’re doing fine. The way I got unstuck was by working on a question my mentor gave me instead of the stuff I was worrying about.

My main piece of general advice is: ask for help. In an ideal world, you have a mentor or manager who you can talk to about this. If you don’t, ask other researchers, or friends who you’ve found it useful to talk to in the past. Don’t despair if the first person you ask says no, or if you have a conversation but it doesn’t help. Think about who else might have useful insights, and ask them.

Meanwhile, go easy on yourself. If there are any lower priority tasks that feel easier to do, or robustly but only mildly useful, do those. Get some easy wins, read a few of the books on your ‘I wish I had time’ list, give feedback on other people’s work, write up the blog post you’ve been meaning to - anything that reduces the amount of time you’re staring at your screen feeling bad. It will pass.

Answering a question is harder than reasoning about other people’s answers

When I started on this research project, I found it much harder to make progress than I had during my undergraduate and master’s degrees.

In previous research work I was responding to existing literature, so there was already a framework for thinking about the question. I was usually doing one or more of the following:

In some sense, my work was responding more to a paper world of existing literature and arguments than to the real, messy world.

For this research project, I was often trying to ask a question which started from the world, not existing literature. This meant that I needed to figure out a framework for my own thinking, which felt much harder to do.

(NB: I think you can often look at a question from either the real-world or the paper-world perspective, and the best approach often involves a bit of both.)

Appendix: miscellaneous learnings

I also learned a variety of more minor things. I’m not going to write these up in detail, as I think the key learnings are more important and the post is easier to read if it covers fewer things, but if anyone comments that they are particularly interested in a given point, I’ll try to expand.

Research process

Work hacks

Things I didn’t realise before

Notes

  1. ^

    Clearly there’s an important limit here. But I claim that you only need basic maths for sometimes thinking numerically to be an improvement on never thinking numerically.

  2. ^

    Let’s say I think that there’s a serious risk that a particular kind of duck will go extinct. I can leave it there, or I can read lots of stuff about the duck. Probably I will still hold my initial position after reading, as my initial position is pretty vague and compatible with lots of different states of the world. If instead I try to work out how likely it is with numbers, I’ll quickly have to learn lots of new things: how many of this species of duck are there right now? At what rate are the ducks dying? What is the minimum viable population of these ducks? How is the population rate changing over time?

    Let’s say that I decide that the ducks are dying for two reasons: disease, and hunting by humans. If I’m just reading about the duck, I might read a lot about each of these things. If I’m also thinking with numbers, I might realise that 90% of the death rate is explained by disease, and so while hunting is also a problem, it’s much more important to understand the disease part. (Later I might decide that the hunting 10% is more tractable than the disease 90% and so I still want to learn about the hunting. Later still, I might realise that even though it’s tractable to halve the hunting, that won’t make a big enough difference to save the duck.)

    I look for some more numbers, and find some data on pollution levels in the kind of wetland that this duck lives in. The good news is, this duck lives in places that are cleaning up their act fast. So at first, I try to convert these numbers into a decrease in the death rate. Then I realise that the insects that carry the duck disease also thrive in unpolluted wetlands. So what will the net effect be? Come to think of it, might hunting also get more popular if the wetlands are cleaner and better preserved? Or maybe concern for wetland pollution correlates with concern for ducks, and so hunting will reduce? I realise I’m confused and need to think more carefully about the relationship between habitat quality and the risk of duck extinction. If I hadn’t been trying to pull these different threads together into a model, I might not have noticed that I don’t really understand the connections between habitat and extinction.

    I end up with an estimate: 45% chance of extinction by 2050. I go to my friend the duck expert, and they say that that sounds way too high. Have I thought about the expected impact of all of us duck activists on duck populations? No, I haven’t. Later, I go back to my numbers and try to work out how many ducks I think various interventions can save. It changes my numbers quite a bit, and I’m down to more like 15% risk now. If I had gone to my duck expert friend with my initial opinion, ‘there’s a serious risk this duck will go extinct’, they would probably have agreed with me. Even if they had mentioned the interventions, it would have been easy for me to miss that I hadn’t been taking them into account in my previous thinking.
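
    To make this concrete, here’s a minimal sketch in Python of the kind of back-of-the-envelope model I have in mind. All of the numbers and the structure of the model are made up purely for illustration; the point is just to show how the 90/10 split and the intervention question above turn into something you can calculate with:

    ```python
    # A toy back-of-the-envelope model of the duck example above.
    # Every number here is hypothetical and only for illustration; the point is
    # that writing even a crude model down forces concrete questions (population
    # size, death rates, minimum viable population, intervention effects).

    POPULATION_2022 = 12_000       # hypothetical current population
    MIN_VIABLE_POPULATION = 2_000  # hypothetical threshold below which we assume extinction
    ANNUAL_BIRTHS = 900            # hypothetical
    ANNUAL_DEATHS = 1_500          # hypothetical
    DISEASE_SHARE = 0.9            # 90% of deaths from disease (as in the example)
    HUNTING_SHARE = 0.1            # 10% of deaths from hunting

    def population_in_2050(disease_reduction=0.0, hunting_reduction=0.0):
        """Project the population to 2050 under given reductions in each cause of death."""
        deaths_per_year = ANNUAL_DEATHS * (
            DISEASE_SHARE * (1 - disease_reduction)
            + HUNTING_SHARE * (1 - hunting_reduction)
        )
        population = POPULATION_2022
        for _ in range(2050 - 2022):
            population = max(population + ANNUAL_BIRTHS - deaths_per_year, 0)
        return population

    for label, kwargs in [
        ("No intervention", {}),
        ("Halve hunting deaths", {"hunting_reduction": 0.5}),
        ("Halve disease deaths", {"disease_reduction": 0.5}),
    ]:
        projected = population_in_2050(**kwargs)
        status = "below" if projected < MIN_VIABLE_POPULATION else "above"
        print(f"{label}: ~{projected:,.0f} ducks in 2050 ({status} minimum viable population)")
    ```

    Even a model this crude immediately shows that (with these made-up numbers) halving hunting isn’t enough on its own, and it flags the assumptions I’d need to check next, like constant birth rates and independent causes of death.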

  3. ^

    Sometimes it won’t be possible to find a mentor with more surface area than you (because no one is available, or because the project is so novel that no one has more surface area than you), and sometimes irrelevant or unread research is worth doing instrumentally, for your own learning or to put on your CV.


Gavin @ 2022-02-21T18:23 (+11)

You mention that there are lots of different kinds of research, but I think this is the key point about testing fit. I'm pretty shocked by how uncorrelated research competences are. 

So even if you fail at (say) solo academic technical research, you should definitely try team / assistant / desk / blog / strategy / research management before you write off research in general.

JanBrauner @ 2022-02-26T13:10 (+8)

I have a similar knee-jerk reaction whenever I read a post "on research", so I wrote up my experience with different types of research: https://forum.effectivealtruism.org/posts/pHnMXaKEstJGcKP2m/different-types-of-research-are-different

 (I'm not at all trying to imply that Rose should have caveated more in her post.)

rosehadshar @ 2022-02-22T09:00 (+3)

This seems like a useful point, thanks!

It makes me want to give a clarification: the reflections above are just the most important things I happened to learn - not a list of generally most important points to consider when testing fit for research. I think I'd need more research experience to write a good version of the latter thing (though I think my list probably overlaps with it somewhat).

I also want to respond to "you should definitely try [...] before you write off research in general". I think I agree with this, conditional on it being a sensible idea for you to be testing your fit for research in general in the first place. Some thoughts:

  • There are loads and loads of other important things to do that are not research. For lots of people I imagine there being more information in switching tack completely and trying a few new things, than in working their way through a long list of different kinds of research.
  • The space of research is too big for it to be sensible to test your fit for everything, so you need to narrow down to things that seem especially fun/especially likely to be a good fit for you.
  • I particularly care about this because I think research has inflated prestige in the EA community, and so there's a danger of people spending too much time testing fit for different kinds when really what they want is approval. I think the ideal solution here isn't 'keep testing your fit till you find some kind of research you're good at' - it's 'the norms of the community change such that there's more social reward for things other than research'.

Gavin @ 2022-02-22T15:34 (+2)

Agree with all of this

FJehn @ 2022-02-21T12:50 (+1)

Thank you for writing this. I think this contains lots of good information for the people you are aiming at.

An interesting read might be this paper here: https://journals.biologists.com/jcs/article/121/11/1771/30038/The-importance-of-stupidity-in-scientific-research I think some of the struggles you ran into are just part of doing research, and don't make you any less fit for it.

rosehadshar @ 2022-02-22T09:12 (+1)

Thanks, I enjoyed that paper (and it's quite short, for people considering whether to read it).