Frank Feedback Given To Very Junior Researchers
By NunoSempere @ 2021-09-01T10:55 (+157)
Over the last year, I have found myself giving feedback on various drafts, something that I'm generally quite happy to do. Recently, I got to give two variations of this same feedback in quick succession, so I noticed the commonalities, and then realized that these commonalities were also present in past pieces of feedback. I thought I'd write the general template up, in case others might find it valuable.
High level comments
- You are working at the wrong level of abstraction and depth / you are biting off more than you can chew / being too ambitious.
- In particular, the questions that you analyze are likely to have many cruxes, i.e., factors that might change the conclusion completely. But you only identify a few such cruxes, and thus your analysis doesn't seem likely to be that robust.
- I guess that the opposite error is possible: focusing too much on one specific scenario which isn't that likely to happen. I just haven't seen it as much, and it doesn't seem as crippling when it happens.
- Because you're being too ambitious, you don't have the tools necessary to analyze what you want to analyze, and to some extent those tools may not exist.
- Compare with: Forecasting transformative AI timelines using biological anchors, Report on Semi-informative Priors on Transformative AI, or Invertebrate Sentience: Summary of findings, which are much more constrained and have specific technical or semi-technical intellectual tools suited to the job (comparison with biological systems, variations on Laplace's law and other priors, markers of consciousness like reaction to harmful stimuli). You don't have an equivalent technical tool.
- There is a missing link between the individual facts you outline, and the conclusions you reach (e.g., about [redacted] and [redacted]). I think that the correct thing to do here is to sit with the uncertainty, or to consider a range of scenarios, rather than to reach one specific conclusion. Alternatively, you could highlight that different dynamics could still be possible, but that on the balance of probabilities, you personally think that your favored hypothesis is more likely.
- But in that case, it'd be great if you more clearly defined your concepts and then expressed your certainty in terms of probabilities, because those are easier to criticize or put to the test, and they make it easier to even notice that there is a disagreement to be had.
Judgment calls
- I get the impression that you rely too much on secondary sources, rather than on deeply understanding what you're talking about.
- You are making the wrong tradeoff between formality and ¿clarity of thought?
- Your report was difficult to read because of the trappings of scholarship (formal tone, long sentences and paragraphs, etc.). An index would have helped.
- Your classification scheme is not exhaustive, and thus less useful.
- This seems particularly important when considering intelligent adversaries.
- I get the impression that you are not deeply familiar with the topic you are talking about. For example, when giving your overview, you don't consider [redacted], which is really the company working in this space.
- In particular, I expect that the funders or decision-makers (for instance, Open Philanthropy) whom you might be attempting to influence or inform will be more familiar with the topic than you, and would thus not outsource their intellectual labor to your report.
- I don't really know whether you are characterizing the literature faithfully, whether you're just citing the top few most salient experts that you found, or whether there are other factors at play. For instance, maybe the people who [redacted] don't want to be talking about it. Even if you are representing the academic consensus fairly, I don't know how much to trust it. Like, I get that it's an academic field, but I don't particularly expect it to have good models of the world.
- It's unclear who is meant by the "we" who will implement the measures you propose in the text.
- This might be an acceptable simplification to make at the beginning, but in the end I don't think it's that useful to talk about what "humanity as a whole" should do without a specific plan of action.
- Because of the above, your conclusions seem fragile / untrustworthy.
Suggestions if you want to produce something which is directly useful
- Bite off a smaller chunk. E.g., what does the literature say on A? How could B look in 5 years? What are the current capabilities of C? What is the theoretical maximum of D? How does tool F shed light on E?
- Alternatively, do become deeply familiar with a topic you're interested in over a longer period of time, then write the more ambitious type of analysis.
- Try to get an exhaustive classification scheme, or clearly point out which assumptions you are making or not making.
- This point feels particularly important when considering adversarial agents, because vectors of attack are fungible/interchangeable.
- One method for finding exhaustive classifications is logical negation. E.g., state/non-state actors, human/non-human forecasting systems, transparent/opaque systems, systems which take/do not take decisions, etc.
- One can then consider different ends of the spectrum, e.g. more or less opaque systems, bigger or smaller non-state actors, etc.
- Outline what your method was for generating your classification scheme.
- If there is no method, point this out.
- If you are making an arbitrary decision (e.g., to only focus on United Nations' organizations rather than on all organizations), point this out. Constraining the scope of your research seems fine, but I find it very annoying when this isn't presented loud and clear.
- For example, "We are only looking at human forecasting systems because those are the ones we are most familiar with. However, note that machine learning systems or data analysis pipelines are usually more powerful methods if one has enough data."
- Go through past Open Philanthropy and Rethink Priorities reports to get a sense of what ambitious reports which are comprehensive enough to influence decisions look like.
- Go through the following examples to get a sense of projects which are constrained and done by relatively non-senior researchers, yet still seem useful: Parameter counts in Machine Learning, Database of existential risk estimates, Base Rates on United States Regime Collapse, Analgesics for farm animals.
- Signpost how much time you've spent, and how confident you are in your conclusion.
- Depending on the situation: hire an editor, or at least run your draft through hemingwayapp to make your writing clearer (h/t Marta Krzeminska).
But note that maybe producing something directly useful isn't what you want to be doing, and gaining expertise about some topic of interest might be a fine thing to do. In that case, maybe just gain expertise about X directly and then write some thoughts on it, rather than attempting to produce an exhaustive report on X from the get-go.
Object level comments
In this section, I usually go section by section pointing out impressions, things one could add, objections, etc.
Models about the value of your project
Ultimately, I expect that the chance that your report influences e.g., funding decisions this time is pretty low, and that most of its value would come from the things you've learnt allowing you to choose a more constrained project next time, or improving your models of the world more generally.
Edited to add: I think that this is essentially a common state of affairs, and that affecting the world through research requires hitting a fairly narrow target. Ideally you could rely on mentors and on feedback from the EA community to point you in the right direction. But in practice you do end up needing to figure out a lot of the specifics yourself. I hope that the above points were helpful for that, and good luck.
Notes on that feedback
So, I realize that the above feedback might come across as discouraging, but at the point where someone has, e.g., written quite a lengthy piece which probably won't affect any decisions, I do feel bound to give them an honest assessment of why I think that is when they ask me for feedback. That said, I am aware that I could probably word things more tactfully.
However, I'm not too worried, because feedback recipients generally signaled that they found the feedback valuable, or even "among the most useful [they] gathered." And in general, the EA community does go to great lengths to be welcoming to new members, so some contrast occasionally doesn't feel like a terrible idea.
I'd be curious to get push-back on any of the points, or to get other people's version of this post.
Isaac_Dunn @ 2021-09-01T14:05 (+46)
Thanks for sharing this! I enjoyed the comments about picking the right scope for a project. I also liked the general nudge towards being transparent about reasoning and uncertainty rather than overstating how much evidence supports particular conclusions.
I think that it probably is worth the trouble to be more encouraging. I'd consider being specific about some things that have been done well, beginning and ending the feedback with encouraging words, and taking a final pass to word things in a way that implies that you're glad they've done this work and you're rooting for them. That said, it definitely seems much better to give unpolished feedback rather than no feedback, so if it'd be too high a burden then I'd go ahead with potentially discouraging feedback.
I agree that the EA community does try to be welcoming to new members, but I suspect that doing so even more would probably be good, to counteract the shame and guilt that I think many people might have about not being good enough for a community that places a high value on success.
NunoSempere @ 2021-09-01T22:28 (+15)
beginning and ending the feedback with encouraging words,
So a version of this is also known as a "shit sandwich", and it's not clear to me that it is an effective pattern. In particular, it seems plausible that it only works a limited number of times before people start to notice and develop an aversion to it. I personally find it fairly irritating/annoying.
It's also not clear to me what flavor of encouragement is congruent with a situation in which e.g., getting EA jobs is particularly hard (though perhaps less so for research positions since Rethink Priorities is expanding!)
That said, you are most likely just correct. I'd still be interested in getting the impression of someone with more research management experience, though.
Ben_Kuhn @ 2021-09-02T02:56 (+70)
I don't have research management experience in particular, but I have a lot of knowledge work (in particular software engineering) management experience.
IMO, giving insufficient positive feedback is a common, and damaging, blind spot for managers, especially those (like you and me) who expect their reports to derive most of their motivation from being intrinsically excited about their end goal. If unaddressed, it can easily lead to your reports feeling demotivated and like their work is pointless/terrible even when it's mostly good.
People use feedback not just to determine what to improve at, but also as an overall assessment of whether they're doing a good job. If you only give negative feedback, you're effectively biasing this process towards people inferring that they're doing a bad job. You can try to fight it by explicitly saying "you're doing a good job" or something, but in my experience this doesn't really land on an emotional level.
Positive feedback in the form "you are good at X, do more of it" can also be an extremely useful type of feedback! Helping people lean into their strengths more often yields as much improvement as helping them shore up their weaknesses, or more.
I'm not particularly good at this myself, but every time I've improved at it I've had multiple reports say things to the effect of "hey, I noticed you improved at this and it's awesome and very helpful."
That said, I agree with you that shit sandwiches are silly and make it obvious that the positive feedback isn't organic, so they usually backfire. The correct way to give positive feedback is to resist your default tendency to be negatively biased by calling out specific things that are good when you see them.
NunoSempere @ 2021-09-02T15:31 (+6)
People use feedback not just to determine what to improve at, but also as an overall assessment of whether they're doing a good job
Good point, thanks.
Charles He @ 2021-09-01T23:11 (+17)
This might be a cultural thing, but in the UK/US/Canada, a purely negative note from a superior/mentor/advisor (or even a friendly peer) feels really, really bad.
I really strongly suggest that, if you are a leader or mentor, you always end a message on a sincerely positive note.
So a version of this is also known as a "shit sandwich", and it's not clear to me that it is an effective pattern. In particular, it seems plausible that it only works a limited number of times before people start to notice and develop an aversion to it. I personally find it fairly irritating/annoying.
I think there's a pattern where being pro forma or insincere is really bad.
But it seems low cost and valuable to add a sincere note saying:
"I really liked your motivation and effort and I think there's potential from you. I like [this thing about you]...I think you can really help in [this way]."
Which is what you want, right? And believe, right? Otherwise, why spend time writing feedback?
Mentees and junior people can be pretty fragile and it can really affect them.
Like, it's not a high probability but there are letters or even phrases that someone will remember for years.
NunoSempere @ 2021-09-02T21:43 (+3)
Thanks for the comment. Any thoughts on Linch's comment below?
Charles He @ 2021-09-03T00:53 (+5)
Thanks for the reply.
I think your main post and Linch's comment are both very valuable, thoughtful contributions.
I agree that such direct advice is undersupplied. Your experiences/suggestions should be taken seriously and are a big contribution.
I don't have anything substantive to add.
Owen_Cotton-Barratt @ 2021-09-01T23:47 (+16)
I think narrowly following the form can be kind of annoying, but the spirit of the idea is to do proof of work to show that you value their efforts, which can help to make it gut-level easier for the recipient to hear the criticism as constructive advice from an ally (that they want to take on board) rather than an attack from someone who doesn't like them (that they want to defend against).
Khorton @ 2021-09-02T07:40 (+28)
All my psych classes and management training have agreed so far that shit sandwich style feedback is ineffective because either people only absorb the negative or only absorb the positive. (This is more true if you have an ongoing relationship with someone - if you're giving one-off feedback I guess you have no choice!)
I recommend instead framing conversations around someone's goals. Framing feedback as advice to help someone meet their goals helps me to give more useful information and them to absorb it better, for example "Hiring managers will be looking for X, Y, and Z in your piece" or "Focusing on A, B, and C would significantly increase the expected impact of this piece of research". It's even more useful if you ask them what they think about their own work first, because sometimes they can already identify some of the problems and you can skip that stage and go straight to giving advice on how to fix them!
Then, if this is someone you manage and you're reviewing further drafts, give positive comments when they've updated it - and make a special effort to notice when they do well on those efforts in future papers.
I agree that if you're giving one-off advice, though, the person will be looking to see through your tone whether you think they have potential, so it is worth reflecting how well you think they're doing. (EDIT: I see that OP does tell people how well they're doing, but it's not very encouraging. I agree it's useful to explicitly say you're glad they're part of the EA community, etc.)
NunoSempere @ 2021-09-02T21:46 (+4)
Re: the Edit, I've added an additional paragraph to make that particular point slightly less biting.
Also, thanks for the point around framing in terms of people's goals.
NunoSempere @ 2021-09-02T15:34 (+6)
do proof of work to show that you value their efforts
Yes, this makes some sense, thanks.
Harrison D @ 2021-09-01T19:08 (+10)
I had similar thoughts at the end. I definitely think the optimal feedback depends on the audience/recipient and the quality of the project (including whether the person seems to have an accurate vs. overly-pessimistic vs. overly-optimistic view of the project quality), but I also think that in most cases it's probably better to add more positive notes, especially at the beginning and end.
NunoSempere @ 2021-09-02T15:30 (+2)
I added a paragraph to the "Models about the value of your project" section to make it less biting.
Linch @ 2021-09-02T21:33 (+19)
So I agree with many of the other comments overall about the need to give concrete positive feedback, especially if you're in a position of authority over others. And indeed I strive to be positive and encouraging whenever I'm in a position of management or mentorship. On the other hand, given the current state of the EA community, it is very much the case that many objectively talented people do not find good fits doing the type of work they want to do in our community. So I think frank high-level feedback is also undersupplied, including very broad messages like:
a) you are a valuable person who may contribute to EA in other ways, just this thing you suggest is not your comparative advantage and really is probably net negative
or
b) you are a valuable person intrinsically but I don't think you are a good fit for the EA community at the current state, so unfortunately you should probably just make a name for yourself elsewhere and come back to the community in 5 years
I think in my own case, I could've benefitted from more feedback like a) in the past (e.g., in a lot of my attempts at meta work). I think b) is harder feedback to give and also has much worse downside risks if you're wrong, but such feedback would also have saved some people substantial time, money, and emotional grief if delivered judiciously and carefully.
Denise_Melchin @ 2021-09-03T08:44 (+39)
I agree with the gist of this comment, but just a brief note that you do not need to do direct work to be "part of the EA community". Donating is good as well. :-)
Linch @ 2021-12-03T00:21 (+3)
I think a lot of this is an empirical question of what's needed. My own view is that some people in the position I described will grow stronger and contribute to the movement more if they are willing to try difficult, ambitious things outside of the movement and come back when they/EA have both matured somewhat in slightly uncorrelated ways, rather than thinking of their impact as coming primarily through donations (which, for most people, may not look like trying their best to do a really good job at either starting something new or climbing career ladders, but more like being relatively mediocre).
It's an empirical question, however, and I'm open to people thinking I'm wrong and that the long-term impact-maximizing thing for almost everybody who isn't doing direct EA jobs is usually donations or relatively untargeted external jobs.
Isaac_Dunn @ 2021-09-03T01:06 (+14)
I agree that it's valuable to give honest feedback if you think that someone should consider trying something else, rather than just giving blithely positive feedback that might cause them to continue pursuing something that's a bad fit.
It's probably worth being especially thoughtful about the way that such feedback is framed. For example, if feedback of type a) can be made constructive, it might make it seem more sincerely encouraging: rather than "it's probably bad for you to do this kind of work", saying "I actually think that you might not be as well suited to this kind of work as others in the EA community because others are better at [specific thing], but from [strength X] and [strength Y] that I've noticed, I wonder if you've considered [type of work T] or [type of work S]?" (I know that you were paraphrasing and wouldn't say those actual phrases to people)
For feedback of type b), my gut reaction is that basically no one should be given feedback of that type because of the risk if you're wrong as you say, but also because of the risk of exacerbating feelings that only sufficiently impressive people are welcome in EA. I guess it depends whether you mean "you're a valued member of this community, but not competitive for a job in the community" or "you're not good enough to be a member of this community". I agree that some people should be given the first type of feedback if you're sure enough, but I don't think anyone should be told they're not good enough to join the community.
Linch @ 2021-12-03T00:26 (+5)
I think I have a fairly different attitude towards feedback compared to you and some of the other commenters. My general view is that, subject to time constraints, giving and receiving lots of feedback is both individually and institutionally healthier, and also that we should be more willing to give low-quality and low-certainty feedback when we're not sure (and disclaim that we're not sure) rather than leave things unsaid.
In general I think people aren't correctly modeling that constructive feedback is costly in both time and emotion, and that 1) adding more roadblocks that make it harder to deliver such feedback makes our community worse, and 2) what happens when you don't give negative feedback isn't that people are slightly deluded but overall emotionally happier. People's emotions adjust, and a fair number of junior EAs basically act like they're walking on eggshells because they don't know if what they're doing is perceived as bad/dumb because nobody would tell them.
technicalities @ 2021-09-01T17:46 (+10)
I think this is your best post this year, because this is rarely said, despite these failure modes seeming omnipresent. (I fall into 'em all the time!)
Stefan_Schubert @ 2021-09-01T11:43 (+10)
Maybe you could change the framing a bit. E.g. you could call the first section "High-level comments" or "Comments on the high-level structure" rather than "high-level errors". I guess that in fact comments are rarely just about pointing out errors, which means that it may also be more descriptively accurate.
The second section could maybe be called "Judgement calls" or something like that.
NunoSempere @ 2021-09-01T12:05 (+4)
Thanks, changed.
MichaelA @ 2021-10-14T14:56 (+5)
Thanks, I thought this was helpful and can imagine sharing it with people or referring back to it in future. I also appreciated the discussion in the comments.
People who liked this post or the comment discussion might also like Giving and receiving feedback and the comments there (which I refer people to often), or other posts tagged Management & mentoring.