Long-Term Future Fund: August 2019 grant recommendations
By Habryka @ 2019-10-03T18:46 (+79)
Note: The Q4 deadline for applications to the Long-Term Future Fund is Friday 11th October. Apply here.
We opened up an application for grant requests earlier this year, and it was open for about one month. This post contains the list of grant recipients for Q3 2019, as well as some of the reasoning behind the grants. Most of the funding for these grants has already been distributed to the recipients.
In the writeups below, we explain the purpose of each grant and summarize our reasoning for recommending it. Each summary is written by the fund manager who was most excited about recommending the relevant grant (with a few exceptions that we've noted below). The summaries differ a lot in length, depending on how much time each fund manager had available to explain their reasoning.
When we’ve shared excerpts from an application, those excerpts may have been lightly edited for context or clarity.
Grant Recipients
Grants Made By the Long-Term Future Fund
Each grant recipient is followed by the size of the grant and their one-sentence description of their project. All of these grants have been made.
- Samuel Hilton, on behalf of the HIPE team ($60,000): Placing a staff member within the government, to support civil servants to do the most good they can.
- Stag Lynn ($23,000): To spend the next year leveling up various technical skills with the goal of becoming more impactful in AI safety.
- Roam Research ($10,000): Workflowy, but with much more power to organize your thoughts and collaborate with others.
- Alexander Gietelink Oldenziel ($30,000): Independent AI Safety thinking, doing research in aspects of self-reference in using techniques from type theory, topos theory and category theory more generally.
- Alexander Siegenfeld ($20,000): Characterizing the properties and constraints of complex systems and their external interactions.
- Sören M. ($36,982): Additional funding for an AI strategy PhD at Oxford / FHI to improve my research productivity
- AI Safety Camp ($41,000): A research experience program for prospective AI safety researchers.
- Miranda Dixon-Luinenburg ($13,500): Writing EA-themed fiction that addresses X-risk topics.
- David Manheim ($30,000): Multi-model approach to corporate and state actors relevant to existential risk mitigation.
- Joar Skalse ($10,000): Upskilling in ML in order to be able to do productive AI safety research sooner than otherwise.
- Chris Chambers ($36,635): Combat publication bias in science by promoting and supporting the Registered Reports journal format.
- Jess Whittlestone ($75,080): Research on the links between short- and long-term AI policy while skilling up in technical ML.
- Lynette Bye ($23,000): Productivity coaching for effective altruists to increase their impact.
Total distributed: $439,197
Other Recommendations
The following people and organizations were applicants who got alternative sources of funding, or decided to work on a different project. The Long-Term Future Fund recommended grants to them, but did not end up funding them.
The following recommendation still has a writeup below:
- Center for Applied Rationality ($150,000): Help promising people to reason more effectively and find high-impact work, such as reducing x-risk.
We did not write up the following recommendations:
- Jake Coble, who requested $10,000 to conduct research alongside Simon Beard of CSER. This grant request came with an early deadline, so we made the recommendation earlier in the grant cycle. However, after our recommendation went out, Jake found a different project he preferred, and no longer required funding.
- We recommended another individual for a grant, but they wound up accepting funding from another source. (They requested that we not share their name; we would have shared this information had they received funding from us.)
Writeups by Helen Toner
Samuel Hilton, on behalf of the HIPE team ($60,000)
Placing a staff member within the government, to support civil servants to do the most good they can.
This grant supports HIPE (https://hipe.org.uk), a UK-based organization that helps civil servants to have high-impact careers. HIPE’s primary activities are researching how to have a positive impact in the UK government; disseminating their findings via workshops, blog posts, etc.; and providing one-on-one support to interested individuals.
HIPE has so far been entirely volunteer-run. This grant funds part of the cost of a full-time staff member for two years, plus some office and travel costs.
Our reasoning for making this grant is based on our impression that HIPE has already been able to gain some traction as a volunteer organization, and on the fact that they now have the opportunity to place a full-time staff member within the Cabinet Office. We see this both as a promising opportunity in its own right, and also as a positive signal about the engagement HIPE has been able to create so far. The fact that the Cabinet Office is willing to provide desk space and cover part of the overhead cost for the staff member suggests that HIPE is engaging successfully with its core audiences.
HIPE does not yet have robust ways of tracking its impact, but they expressed strong interest in improving their impact tracking over time. We would hope to see a more fleshed-out impact evaluation if we were asked to renew this grant in the future.
I’ll add that I (Helen) personally see promise in the idea of services that offer career discussion, coaching, and mentoring in more specialized settings. (Other fund members may agree with this, but it was not part of our discussion when deciding whether to make this grant, so I’m not sure.)
Writeups by Alex Zhu
Stag Lynn ($23,000)
To spend the next year leveling up various technical skills with the goal of becoming more impactful in AI safety.
Stag’s current intention is to spend the next year improving his skills in a variety of areas (e.g. programming, theoretical neuroscience, and game theory) with the goal of contributing to AI safety research, meeting relevant people in the x-risk community, and helping out in EA/rationality-related contexts wherever he can (e.g., at rationality summer camps like SPARC and ESPR).
Two projects he may pursue during the year:
- Working to implement certificates of impact in the EA/X-risk community, in the hope of encouraging coordination between funders with different values and increasing transparency around the contributions of different people to impactful projects.
- Working as an unpaid personal assistant to someone in EA who is sufficiently busy for this form of assistance to be useful, and sufficiently productive for the assistance to be valuable.
I recommended funding Stag because I think he is smart, productive, and altruistic, has a track record of doing useful work, and will contribute more usefully to reducing existential risk by directly developing his capabilities and embedding himself in the EA community than he would by finishing his undergraduate degree or working a full-time job. While I’m not yet clear on what projects he will pursue, I think it’s likely that the end result will be very valuable — projects like impact certificates require substantial work from someone with technical and executional skills, and Stag seems to me to fit the bill.
More on Stag’s background: In high school, Stag had top finishes in various Latvian and European Olympiads, including a gold medal in the 2015 Latvian Olympiad in Mathematics. Stag has also previously taken the initiative to work on EA causes -- for example, he joined two other people in Latvia in attempting to create the Latvian chapter of Effective Altruism (which reached the point of creating a Latvian-language website), and he has volunteered to take on major responsibilities in future iterations of the European Summer Program in Rationality (which introduces promising high-school students to effective altruism).
Potential conflict of interest: at the time of making the grant, Stag was living with me and helping me with various odd jobs, as part of his plan to meet people in the EA community and help out where he could. This arrangement lasted for about 1.5 months. To compensate for this potential issue, I’ve included notes on Stag from Oliver Habryka, another fund manager.
Oliver Habryka’s comments on Stag Lynn
I’ve interacted with Stag in the past and have broadly positive impressions of him, in particular of his capacity for independent strategic thinking.
Stag has achieved a high level of success in Latvian and Galois Mathematical Olympiads. I generally think that success in these competitions is one of the best predictors we have of a person’s future performance on making intellectual progress on core issues in AI safety. See also my comments and discussion on the grant to Misha Yagudin last round.
Stag has also contributed significantly to improving ESPR and SPARC, both of which introduce talented pre-college students to core ideas in EA and AI safety. In particular, he’s helped the programs find and select strong participants, while suggesting curriculum changes that gave participants more opportunities to think independently about important issues. This gives me a positive impression of Stag’s ability to contribute to other projects in the space. (I also consider ESPR and SPARC to be among the most cost-effective ways to get more excellent people interested in working on topics of relevance to the long-term future, and take this as another signal of Stag’s talent at selecting and/or improving projects.)
Roam Research ($10,000)
Workflowy, but with much more power to organize your thoughts and collaborate with others.
Roam is a web application which automates the Zettelkasten method, a note-taking / document-drafting process based on physical index cards. While it is difficult to start using the system, those who do often find it extremely helpful, including a researcher at MIRI who claims that the method doubled his research productivity.
On my inside view, if Roam succeeds, an experienced user of the note-taking app Workflowy will get at least as much value switching to Roam as they got from using Workflowy in the first place. (Many EAs, myself included, see Workflowy as an integral part of our intellectual process, and I think Roam might become even more integral than Workflowy. See also Sarah Constantin’s review of Roam, which describes Roam as being potentially as “profound a mental prosthetic as hypertext”, and her more recent endorsement of Roam.)
Over the course of the last year, I’ve had intermittent conversations with Conor White-Sullivan, Roam’s CEO, about the app. I started out in a position of skepticism: I doubted that Roam would ever attract active users, let alone succeed at its stated mission. After a recent update call with Conor about his LTF Fund application, I was encouraged enough by Roam’s most recent progress, and sufficiently convinced of the upside if it does succeed, that I decided to recommend a grant to Roam.
Since then, Roam has developed enough as a product that I’ve personally switched from Workflowy to Roam and now recommend Roam to my friends. Roam’s progress on its product, combined with its growing base of active users, has led me to feel significantly more optimistic about Roam succeeding at its mission.
(This funding will support Roam’s general operating costs, including expenses for Conor, one employee, and several contractors.)
Potential conflict of interest: Conor is a friend of mine, and I was once his housemate for a few months.
Alexander Gietelink Oldenziel ($30,000)
Independent AI Safety thinking, doing research in aspects of self-reference in using techniques from type theory, topos theory and category theory more generally.
In our previous round of grants, we funded MIRI as an organization: see our April report for a detailed explanation of why we chose to support their work. I think Alexander’s research directions could lead to significant progress on MIRI’s research agenda — in fact, MIRI was sufficiently impressed by his work that they offered him an internship. I have also spoken to him in some depth, and was impressed both by his research taste and clarity of thought.
After the internship ends, I think it will be valuable for Alexander to have additional funding to dig deeper into these topics; I expect this grant to support roughly 1.5 years of research. During this time, he will have regular contact with researchers at MIRI, reporting on his research progress and receiving feedback.
Alexander Siegenfeld ($20,000)
Characterizing the properties and constraints of complex systems and their external interactions.
Alexander is a 5th-year graduate student in physics at MIT, and he wants to conduct independent deconfusion research for AI safety. His goal is to get a better conceptual understanding of multi-level world models by coming up with better formalisms for analyzing complex systems at differing levels of scale, building off of the work of Yaneer Bar-Yam. (Yaneer is Alexander’s advisor, and the president of the New England Complex Systems Institute.)
I decided to recommend funding to Alexander because I think his research directions are promising, and because I was personally impressed by his technical abilities and his clarity of thought. Tsvi Benson-Tilsen, a MIRI researcher, was also impressed enough by Alexander to recommend that the Fund support him. Alexander plans to publish a paper on his research; it will be evaluated by researchers at MIRI, helping him decide how best to pursue further work in this area.
Potential conflict of interest: Alexander and I have been friends since our undergraduate years at MIT.
Writeups by Oliver Habryka
I have a sense that funders in EA, usually due to time constraints, tend to give little feedback to organizations they fund (or decide not to fund). In my writeups below, I tried to be as transparent as possible in explaining why I came to believe that each grant was a good idea, my greatest uncertainties and/or concerns with each grant, and some of the background models I use to evaluate grants. (I hope this last item will help others better understand my future decisions in this space.)
I think that there exist more publicly defensible (or easier to understand) arguments for some of the grants that I recommended. However, I tried to explain the actual models that drove my decisions for these grants, which are often hard to summarize in a few paragraphs. I apologize in advance that some of the explanations below are probably difficult to understand.
Thoughts on grant selection and grant incentives
Some higher-level points on many of the grants below, as well as many grants from last round:
For almost every grant we make, I have a lot of opinions and thoughts about how the applicant(s) could achieve their aims better. I also have a lot of ideas for projects that I would prefer to fund over the grants we are actually making.
However, in the current structure of the LTFF, I primarily have the ability to select potential grantees from an established pool, rather than encouraging the creation of new projects. Alongside my time constraints, this means that I have a very limited ability to contribute to the projects with my own thoughts and models.
Additionally, I spend a lot of time thinking independently about these areas, and have a broad view of “ideal projects that could be made to exist.” This means that for many of the grants I am recommending, it is not usually the case that I think the projects are very good on all the relevant dimensions; I can see how they fall short of my “ideal” projects. More frequently, the projects I fund are among the only available projects in a reference class I believe to be important, and I recommend them because I want projects of that type to receive more resources (and because they pass a moderate bar for quality).
Some examples:
- Our grant to the Kocherga community space club last round. I see Kocherga as the only promising project trying to build infrastructure that helps people pursue projects related to x-risk and rationality in Russia.
- I recommended this round’s grant to Miranda partly because I think Miranda's plans are good and I think her past work in this domain and others is of high quality, but also because she is the only person who applied with a project in a domain that seems promising and neglected (using fiction to communicate otherwise hard-to-explain ideas relating to x-risk and how to work on difficult problems).
- In the November 2018 grant round, I recommended a grant to Orpheus Lummis to run an AI safety unconference in Montreal. This is because I think he had a great idea, and would create a lot of value even if he ran the events only moderately well. This isn’t the same as believing Orpheus has excellent skills in the relevant domain; I can imagine other applicants who I’d have been more excited to fund, had they applied.
I am, overall, still very excited about the grants below, and I think they are a much better use of resources than what I think of as the most common counterfactuals to donating to the LTF Fund (e.g. donating to the largest organizations in the space, or donating based on time-limited personal research).
However, related to the points I made above, I will have many criticisms of almost all the projects that receive funding from us. I think that my criticisms are valid, but readers shouldn't interpret them to mean that I have a negative impression of the grants we are making — which are strong despite their flaws. Aggregating my individual (and frequently critical) recommendations will not give readers an accurate impression of my overall (highly positive) view of the grant round.
(If I ever come to think that the pool of valuable grants has dried up, I will say so in a high-level note like this one.)
I can imagine that in the future I might want to invest more resources into writing up lists of potential projects that I would be excited about. That said, it is not clear to me that I want people to optimize too much for what I am excited about, and I think the current balance (funding things that I find exciting, and that people feel internally motivated to do and have generated their own plans for) seems pretty decent.
To follow up the above with a high-level assessment, I am slightly less excited about this round’s grants than I am about last round’s, and I’d estimate (very roughly) that this round is about 25% less cost-effective than the previous round.
Acknowledgements
For both this round and the last round, I wrote the writeups in collaboration with Ben Pace, who works with me on LessWrong and the Alignment Forum. After an extensive discussion about the grants and the Fund's reasoning for them, we split the grants between us and independently wrote initial drafts. We then iterated on those drafts until they accurately described my thinking about them and the relevant domains.
I am also grateful for Aaron Gertler’s help with editing and refining these writeups, which has substantially increased their clarity.
Sören M. ($36,982)
Additional funding to improve my research productivity during an AI strategy PhD program at Oxford / FHI.
I'm looking for additional funding to supplement my 15k pound/y PhD stipend for 3-4 years from September 2019. I am hoping to roughly double this. My PhD is at Oxford in machine learning, but co-supervised by Allan Dafoe from FHI so that I can focus on AI strategy. We will have multiple joint meetings each month, and I will have a desk at FHI.
The purpose is to increase my productivity and happiness. Given my expected financial situation, I currently have to make compromises on, e.g., Ubers, Soylent, eating out with colleagues, accommodation, quality and waiting times for health care, time spent comparing prices, travel durations and stress, and eating healthily.
I expect that more financial security would increase my own productivity and the effectiveness of the time invested by my supervisors.
I think that when FHI or other organizations in that reference class have trouble doing certain things due to logistical obstacles, we should usually step in and fill those gaps (e.g. see Jacob Lagerros’ grant from last round). My sense is that FHI has trouble with providing funding in situations like this (due to budgetary constraints imposed by Oxford University).
I’ve interacted with Sören in the past (during my work at CEA), and generally have positive impressions of him in a variety of domains, like his basic thinking about AI Alignment, and his general competence from running projects like the EA Newsletter.
I have a lot of trust in the judgment of Nick Bostrom and several other researchers at FHI. I am not currently very excited about the work at GovAI (the team that Allan Dafoe leads), but still have enough trust in many of the relevant decision makers to think that it is very likely that Sören should be supported in his work.
In general, I think many of the salaries for people working on existential risk are low enough that they have to make major tradeoffs in order to deal with the resulting financial constraints. I think that increasing salaries in situations like this is a good idea (though I am hesitant about increasing salaries for other types of jobs, for a variety of reasons I won’t go into here, but am happy to expand on).
This funding should last for about 2 years of Sören’s time at Oxford.
AI Safety Camp ($41,000)
A research experience program for prospective AI safety researchers.
We want to organize the 4th AI Safety Camp (AISC) - a research retreat and program for prospective AI safety researchers. Compared to past iterations, we plan to change the format to include a 3 to 4-day project generation period and team formation workshop, followed by a several-week period of online team collaboration on concrete research questions, a 6 to 7-day intensive research retreat, and ongoing mentoring after the camp. The target capacity is 25 - 30 participants, with projects that range from technical AI safety (majority) to policy and strategy research. More information about past camps is at https://aisafetycamp.com/
[...]
The early-career entry stage seems to be a less well-covered part of the talent pipeline, especially in Europe. Individual mentoring is costly from the standpoint of expert advisors (especially compared to guided teamwork), while internships and programs like MSFP have limited capacity and are US-centric. After the camp, we advise and encourage participants on future career steps and help connect them to other organizations, or direct them to further individual work and learning if they are pursuing an academic track.
Overviews of previous research projects from the first 2 camps can be found here:
1- http://bit.ly/2FFFcK1
2- http://bit.ly/2KKjPLB
Projects from AISC3 are still in progress and there is no public summary.
To evaluate the camp, we send out an evaluation form directly after the camp has concluded and then informally follow the career decisions, publications, and other AI safety/EA involvement of the participants. We plan to conduct a larger survey of past AISC participants later in 2019 to evaluate our mid-term impact. We expect this to give a more comprehensive picture of our impact, though it is difficult to evaluate counterfactuals and indirect effects (e.g. networking effects). The (anecdotal) positive examples we attribute to past camps include the accelerated entry of several people into the field, as well as research outputs that include 2 conference papers, several software projects, and about 10 blog posts.
The main direct costs of the camp are the opportunity costs of participants, organizers and advisors. There are also downside risks associated with personal conflicts at multi-day retreats and discouraging capable people from the field if the camp is run poorly. We actively work to prevent this by providing both on-site and external anonymous contact points, as well as actively attending to participant well-being, including during the online phases.
This grant is for the AI Safety Camp, to which we made a grant in the last round. Of the grants I recommended this round, I am most uncertain about this one. The primary reason is that I have not received much evidence about the performance of either of the last two camps [1], and I assign at least some probability that the camps are not facilitating very much good work. (This is mostly because I have low expectations for the quality of most work of this kind and haven’t looked closely enough at the camp to override these — not because I have positive evidence that they produce low-quality work.)
My biggest concern is that the camps do not provide a sufficient level of feedback and mentorship for the attendees. When I try to predict how well I’d expect a research retreat like the AI Safety Camp to go, much of the impact hinges on putting attendees into contact with more experienced researchers and having a good mentoring setup. Some of the problems I have with the output from the AI Safety Camp seem like they could be explained by a lack of mentorship.
From the evidence I observe on their website, I see that the attendees of the second camp all produced an artifact of their research (e.g. an academic writeup or code repository). I think this is a very positive sign. That said, it doesn’t look like any alignment researchers have commented on any of this work (perhaps in part because most of it was presented in formats that require a lot of time to engage with, such as GitHub repositories), so I’m not sure the output actually led to participants getting feedback on their research directions, which is one of the most important things for people new to the field.
After some followup discussion with the organizers, I heard about changes to the upcoming camp (the target of this grant) that address some of the above concerns (independent of my feedback). In particular, the camp is being renamed to “AI Safety Research Program”, and is now split into two parts — a topic selection workshop and a research retreat, with experienced AI Alignment researchers attending the workshop. The format change seems likely to be a good idea, and makes me more optimistic about this grant.
I generally think hackathons and retreats for researchers can be very valuable, allowing for focused thinking in a new environment. I think the AI Safety Camp is held at a relatively low cost, in a part of the world (Europe) where there exist few other opportunities for potential new researchers to spend time thinking about these topics, and some promising people have attended. I hope that the camps are going well, but I will not fund another one without spending significantly more time investigating the program.
Footnotes
[1] After signing off on this grant, I found out that, due to overlap between the organizers of the events, some feedback I got about this camp was actually feedback about the Human Aligned AI Summer School, which means that I had even less information than I thought. In April I said I wanted to talk with the organizers before renewing this grant, and I expected to have at least six months between applications from them, but we received another application this round and I ended up not having time for that conversation.
Miranda Dixon-Luinenburg ($13,500)
Writing EA-themed fiction that addresses X-risk topics.
I want to spend three months evaluating my ability to produce an original work that explores existential risk, rationality, EA, and related themes such as coordination between people with different beliefs and backgrounds, handling burnout, planning on long timescales, growth mindset, etc. I predict that completing a high-quality novel of this type would take ~12 months, so 3 months is just an initial test.
In 3 months, I would hope to produce a detailed outline of an original work plus several completed chapters. Simultaneously, I would be evaluating whether writing full-time is a good fit for me in terms of motivation and personal wellbeing.
[...]
I have spent the last 2 years writing an EA-themed fanfiction of The Last Herald-Mage trilogy by Mercedes Lackey (online at https://archiveofourown.org/series/936480). In this period I have completed 9 “books” of the series, totalling 1.2M words (average of 60K words/month), mostly while I was also working full-time. (I am currently writing the final arc, and when I finish, hope to create a shorter abridged/edited version with a more solid beginning and better pacing overall.)
In the writing process, I researched key background topics, in particular AI safety work (I read a number of Arbital articles and most of this MIRI paper on decision theory: https://arxiv.org/pdf/1710.05060v1.pdf), as well as ethics, mental health, organizational best practices, medieval history and economics, etc. I have accumulated a very dedicated group of around 10 beta readers, all EAs, who read early drafts of each section and give feedback on how well it addresses various topics, which gives me more confidence that I am portraying these concepts accurately.
One natural decomposition of whether this grant is a good idea is to first ask whether writing fiction of this type is valuable, then whether Miranda is capable of actually creating that type of fiction, and last whether funding Miranda will make a significant difference in the amount/quality of her fiction.
I think that many people reading this will be surprised or confused about this grant. I feel fairly confident that grants of this type are well worth considering, and I am interested in funding more projects like this in the future, so I’ve tried my best to summarize my reasoning. I do think there are some good arguments for why we should be hesitant to do so (partly summarized by the section below that lists things that I think fiction doesn’t do as well as non-fiction), so while I think that grants like this are quite important, and have the potential to do a significant amount of good, I can imagine changing my mind about this in the future.
The track record of fiction
In a general sense, I think that fiction has a pretty strong track record of both being successful at conveying important ideas, and being a good attractor of talent and other resources. I also think that good fiction is often necessary to establish shared norms and shared language.
Here are some examples of communities and institutions that I think used fiction very centrally in their function. Note that after the first example, I am making no claim that the effect was good; I’m just establishing the magnitude of the potential effect.
- Harry Potter and the Methods of Rationality (HPMOR) was instrumental in the growth and development of both the EA and Rationality communities. It is very likely the single most important recruitment mechanism for productive AI alignment researchers, and has also drawn many other people to work on the broader aims of the EA and Rationality communities.
- Fiction was a core part of the strategy of the neoliberal movement; fiction writers were among the groups referred to by Hayek as "secondhand dealers in ideas.” An example of someone whose fiction played both a large role in the rise of neoliberalism and in its eventual spread would be Ayn Rand.
- Almost every major religion, culture and nation-state is built on shared myths and stories, usually fictional (though the stories are often held to be true by the groups in question, making this data point a bit more confusing).
- Francis Bacon’s (unfinished) utopian novel “The New Atlantis” is often cited as the primary inspiration for the founding of the Royal Society, which may have been the single institution with the greatest influence on the progress of the scientific revolution.
On a more conceptual level, I think fiction tends to be particularly good at achieving the following aims (compared to non-fiction writing):
- Teaching low-level cognitive patterns by displaying characters that follow those patterns, allowing the reader to learn from very concrete examples set in a fictional world. (Compare Aesop’s Fables to some nonfiction book of moral precepts — it can be much easier to remember good habits when we attach them to characters.)
- Establishing norms, by having stories that display the consequences of not following certain norms, and the rewards of following them in the right way
- Establishing a common language, by not only explaining concepts, but also showing concepts as they are used, and how they are brought up in conversational context
- Establishing common goals, by creating concrete utopian visions of possible futures that motivate people to work towards them together
- Reaching a broader audience, since we naturally find stories more exciting than abstract descriptions of concepts
(I wrote in more detail about how this works for HPMOR in the last grant round.)
In contrast, here are some things that fiction is generally worse at (though a lot of these depend on context; since fiction often contains embedded non-fiction explanations, some of these can be overcome):
- Carefully evaluating ideas, in particular when evaluating them requires empirical data. There is a norm against showing graphs or tables in fiction books, making any explanation that rests on that kind of data difficult to access in fiction.
- Conveying precise technical definitions
- Engaging in dialogue with other writers and researchers
- Dealing with topics in which readers tend to come to better conclusions by mentally distancing themselves from the problem at hand, instead of engaging with concrete visceral examples (I think some ethical topics like the trolley problem qualify here, as well as problems that require mathematical concepts that don’t neatly correspond to easy real-world examples)
Overall, I think current writing about existential risk, rationality, and effective altruism skews too much towards non-fiction, so I’m excited about experimenting with funding fiction writing.
Miranda’s writing
The second question is whether I trust Miranda to actually be able to write fiction that leverages these opportunities and provides value. This is why I think Miranda can do a good job:
- Her current fiction project is read by a few people whose taste I trust, and many of them describe having developed valuable skills or insights as a result (for example, better skills for crisis management, a better conception of moral philosophy, an improved moral compass, and some insights about decision theory)
- She wrote frequently on LessWrong and her blog for a few years, producing content of consistently high quality that, while not fictional, often displayed some of the same useful properties as fiction writing.
- I’ve seen her execute a large variety of difficult projects outside of her writing, which makes me a lot more optimistic about things like her ability to motivate herself on this project and to excel in the non-writing aspects of the work (e.g. promoting her fiction to audiences beyond the EA and rationality communities)
- She worked in operations at CEA and received strong reviews from her coworkers
- She helped CFAR run the operations for SPARC in two consecutive years and performed well as a logistics volunteer for 11 of their other workshops
- I’ve seen her organize various events and provide useful help with logistics and general problem-solving on a large number of occasions
My two biggest concerns are:
- Miranda losing motivation to work on this project, because writing fiction with a specific goal requires a significantly different motivation than doing it for personal enjoyment
- The fiction being well-written and engaging, but failing to actually help people better understand the important issues it tries to cover.
I like the fact that this grant is for an exploratory 3 months rather than a longer period of time; this allows Miranda to pivot if it doesn’t work out, rather than being tied to a project that isn’t going well.
The counterfactual value of funding
It would be reasonable to ask whether a grant is really necessary, given that Miranda has produced a huge amount of fiction in the last two years without receiving funding explicitly dedicated to that. I have two thoughts here:
- I generally think that we should avoid declining to pay people just because they’d be willing to do valuable work for free. It seems good to reward people for work even if this doesn’t make much of a difference in the quality/consistency of the work, because I expect this promise of reward to help people build long-term motivation and encourage exploration.
- To explain this a bit more, I think this grant will help other people build motivation towards pursuing similar projects in the future, by setting a precedent for potential funding in this space. For example, I think the possibility of funding (and recognition) was also a motivator for Miranda in starting to work on this project.
- I expect this grant to have a significant effect on Miranda’s productivity, because I think that there is often a qualitative difference between work someone produces in their spare time and work that someone can focus full-time on. In particular, I expect this grant to cause Miranda’s work to improve in the dimensions that she doesn’t naturally find very stimulating, which I expect will include editing, restructuring, and other forms of “polish”.
David Manheim ($30,000)
Multi-model approach to corporate and state actors relevant to existential risk mitigation.
Work for 2-3 months on continuing to build out a multi-model approach to understanding international relations and multi-stakeholder dynamics as they relate to the risks of strong(er) AI systems being developed, based on and extending similar work on biological weapons risks done on behalf of FHI's biorisk group and supporting Open Philanthropy Project planning.
This work is likely to help policy and decision analysis for effective altruism related to the deeply uncertain and complex issues in international relations and long term planning that need to be considered for many existential risk mitigation activities. While the project is focused on understanding actors and motivations in the short term, the decisions being supported are exactly those that are critical for existential risk mitigation, with long term implications for the future.
I feel a lot of skepticism toward much of the work done in the academic study of international relations. Judging from my models of political influence and its effects on the quality of intellectual contributions, and my models of research fields with little ability to perform experiments, I have high priors that work in international relations is of significantly lower quality than in most scientific fields. However, I have engaged relatively little with actual research on the topic of international relations (outside of unusual scholars like Nick Bostrom) and so am hesitant in my judgement here.
I also have a fair bit of worry around biorisk. I haven’t really had the opportunity to engage with a good case for it, and neither have many of the people I would trust most in this space, in large part due to secrecy concerns from people who work on it (more on that below). Due to this, I am worried about information cascades. (An information cascade is a situation where people primarily share what they believe but not why, and because people update on each others' beliefs you end up with a lot of people all believing the same thing precisely because everyone else does.)
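As a toy illustration of this dynamic (with entirely made-up parameters: 30 people, private signals that are right 60% of the time, and simple vote-counting instead of careful Bayesian updating), here is a small simulation sketch of how a few early announcements can lock in a group-wide belief that later private evidence never overturns:

```python
import random

def run_cascade(n_agents=30, signal_accuracy=0.6, true_state=1, seed=0):
    """Each person gets a noisy private signal about a binary question, sees the
    announced beliefs of everyone before them, and announces the majority view of
    (own signal + earlier announcements). Announcements get treated as evidence
    even though they mostly just echo earlier announcements."""
    rng = random.Random(seed)
    announcements = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        votes_for_1 = signal + sum(announcements)
        votes_for_0 = (1 - signal) + (len(announcements) - sum(announcements))
        if votes_for_1 == votes_for_0:
            belief = signal  # tie-break on one's own signal
        else:
            belief = 1 if votes_for_1 > votes_for_0 else 0
        announcements.append(belief)
    return announcements

# How often does the whole group settle on the wrong answer?
wrong = sum(run_cascade(seed=s)[-1] != 1 for s in range(1000))
print(f"Group converged on the wrong belief in {wrong / 10:.1f}% of runs")
```

In runs where the first couple of announcements happen to be wrong, essentially everyone downstream ends up wrong as well, even though a majority of the private signals typically point the right way.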
I think it is valuable to work on biorisk, but this view is mostly based on individual conversations that are hard to summarize, and I feel uncomfortable with my level of understanding of possible interventions, or even just of the conceptual frameworks I could use to approach the problem. I don’t know how most people who work in this space came to decide it was important, and those I’ve spoken to have usually been reluctant to share details in conversation (e.g. about specific discoveries they think created risk, or the types of arguments that convinced them to focus on biorisk over other threats).
I’m broadly supportive of work done at places like FHI and by the people at OpenPhil who care about x-risks, so I am in favor of funding their work (e.g. Soren’s grant above). But I don’t feel as though I can defer to the people working in this domain on the object level when there is so much secrecy around their epistemic process, because I and others cannot evaluate their reasoning.
However, I am excited about this grant, because I have a good amount of trust in David’s judgment. To be more specific, he has a track record of identifying important ideas and institutions and then working on/with them. Some concrete examples include:
- Wrote up a paper on Goodhart’s Law with Scott Garrabrant (after seeing Scott’s very terse post on it)
- Works with the biorisk teams at FHI and OpenPhil
- Completed his PhD in public policy and decision theory at the RAND Corporation, which is an unusually innovative institution (e.g. this study);
- Writes interesting comments and blog posts on the internet (e.g. LessWrong)
- Has offered mentoring in his fields of expertise to other people working on, or preparing to work on, projects in the x-risk space; I’ve heard positive feedback from his mentees
Another major factor for me is the degree to which David shares his thinking openly and transparently on the internet, and participates in public discourse, so that other people interested in these topics can engage with his ideas. (He’s also a superforecaster, which I think is predictive of broadly good judgment.) If David didn’t have this track record of public discourse, I likely wouldn’t be recommending this grant, and if he suddenly stopped participating, I’d be fairly hesitant to recommend such a grant in the future.
As I said, I’m not excited about the specific project he is proposing, but have trust in his sense of which projects might be good to work on, and I have emphasized to him that I think he should feel comfortable working on the projects he thinks are best. I strongly prefer a world where David has the freedom to work on the projects he judges to be most valuable, compared to the world where he has to take unrelated jobs (e.g. teaching at university).
Joar Skalse ($10,000)
Upskilling in ML in order to be able to do productive AI safety research sooner than otherwise.
I am requesting grant money to upskill in machine learning (ML).
Background: I am an undergraduate student in Computer Science and Philosophy at Oxford University, about to start the 4th year of a 4-year degree. I plan to do research in AI safety after I graduate, as I deem this to be the most promising way of having a significant positive impact on the long-term future
[...]
What I’d like to do:
I would like to improve my skills in ML by reading literature and research, replicating research papers, building ML-based systems, and so on.
To do this effectively, I need access to the compute that is required to train large models and run lengthy reinforcement learning experiments and similar.
It would also likely be very beneficial if I could live in Oxford during the vacations, as I would then be in an environment in which it is easier to be productive. It would also make it easier for me to speak with the researchers there, and give me access to the facilities of the university (including libraries, etc.).
It would also be useful to be able to attend conferences and similar events.
Joar was one of the co-authors on the Mesa-Optimisers paper, which I found surprisingly useful and clearly written, especially considering that its authors had relatively little background in alignment research or research in general. I think it is probably the second most important piece of writing on AI alignment that came out in the last 12 months, after the Embedded Agency sequence. My current best guess is that this type of conceptual clarification / deconfusion is the most important type of research in AI alignment, and the type of work I’m most interested in funding. While I don’t know exactly how Joar contributed to the paper, my sense is that all the authors put in a significant effort (bar Scott Garrabrant, who played a supervising role).
This grant is for projects during and in between terms at Oxford. I want to support Joar producing more of this kind of research, which I expect this grant to help with. He’s also been writing further thoughts online (example), which I think has many positive effects (personally and as externalities).
My brief thoughts on the paper (nontechnical):
- The paper introduced me to a lot of terminology that I’ve continued to use over the past few months (which is not true for most terminology introduced in this space)
- It helped me deconfuse my thinking on a bunch of concrete problems (in particular on the question of whether things like Alpha Go can be dangerous when “scaled up”)
- I’ve seen multiple other researchers and thinkers I respect refer to it positively
- In addition to being published as a paper, it was written up as a series of blogposts in a way that made it a lot more accessible
More of my thoughts on the paper (technical):
Note: If you haven’t read the paper, or you don’t have other background in the subject, this section will likely be unclear. It’s not essential to the case for the grant, but I wanted to share it in case people with the requisite background are interested in more details about the research.
I was surprised by how helpful the conceptual work in the paper was - helping me think about where the optimization was happening in a system like AlphaGo Zero improved my understanding of that system and how to connect it to other systems that do optimization in the world. The primary formalism in the paper was clarifying rather than obscuring (and the ratio of insight to formalism was very high - see my addendum below for more thoughts on that).
Once the basic concepts were in place, clarifying different basic tools that would encourage optimization to happen in either the base optimizer or the mesa optimizer (e.g. constraining and expanding space/time offered to the base or mesa optimizers has interesting effects), plus clarifying the types of alignment / pseudo-alignment / internalizing of the base objective, all helped me think about this issue very clearly. It largely used basic technical language I already knew, and put it together in ways that would’ve taken me many months to achieve on my own - a very helpful conceptual piece of work.
Note on the writeups for Chris, Jess, and Lynette
The following three grants were more exciting to one or more other fund managers than they were to me (Oliver). For all three, if it had just been me on the grant committee, we might not have made the grant. However, I had more resources available to invest into these writeups, so I ended up summarizing my view on them instead of someone else on the fund doing so. As a result, these writeups are probably less representative of the reasons we made the grants than the writeups above.
In the course of thinking through these grants, I formed (and wrote out below) more detailed, explicit models of the topics. Although these models were not counterfactual in the Fund’s making the grants, I think they are fairly predictive of my future grant recommendations.
Chris Chambers ($36,635)
Note: Application sent in by Jacob Hilton.
Combat publication bias in science by promoting and supporting the Registered Reports journal format
I'm suggesting a grant to fund a teaching buyout for Professor Chris Chambers, an academic at the University of Cardiff working to promote and support Registered Reports. This funding opportunity was originally identified and researched by Hauke Hillebrandt, who published a full analysis here. In brief, a Registered Report is a format for journal articles where peer review and acceptance decisions happen before data is collected, so that the results are much less susceptible to publication bias. The grant would free Chris of teaching duties so that he can work full-time on trying to get Registered Reports to become part of mainstream science, which includes outreach to journal editors and supporting them through the process of adopting the format for their journal. More details of Chris's plans can be found here.
I think the main reason for funding this is from a worldview diversification perspective: I would expect it to broadly improve the efficiency of scientific research by improving the communication of negative results, and to enable people to make better-informed use of scientific research by reducing publication bias. I would expect these effects to be primarily within fields where empirical tests tend to be useful but not always definitive, such as clinical trials (one of Chris's focus areas), which would have knock-on effects on health.
From an X-risk perspective, the key question to answer seems to be which technologies differentially benefit from this grant. I do not have a strong opinion on this, but to quote Brian Wang from a Facebook thread:
In terms of [...] bio-risk, my initial thoughts are that reproducibility concerns in biology are strongest when it comes to biomedicine, a field that can be broadly viewed as defense-enabling. By contrast, I'm not sure that reproducibility concerns hinder the more fundamental, offense-enabling developments in biology all that much (e.g., the falling costs of gene synthesis, the discovery of CRISPR).
As for why this particular intervention strikes me as a cost-effective way to improve science: it is shovel-ready, it may be the sort of thing that traditional funding sources would miss, it has been carefully vetted by Hauke, and I thought that Chris seemed thoughtful and intelligent from his videoed talk.
The Let’s Fund report linked in the application played a major role in my assessment of the grant, and I probably would not have been comfortable recommending this grant without access to that report.
Thoughts on Registered Reports
The replication crisis in psychology, and the broad spread of “career science,” have made it quite clear (to me) that the methodological practices of at least psychology itself, and possibly also the broader life sciences, are producing a very large volume of false and likely unreproducible claims.
This is in large part caused by problematic incentives for individual scientists to engage in highly biased reporting and statistically dubious practices.
I think preregistration has the opportunity to fix a small but significant part of this problem, primarily by reducing file-drawer effects. To borrow an explanation from the Let’s Fund report (lightly edited for clarity):
[Pre-registration] was introduced to address two problems: publication bias and analytical flexibility (in particular outcome switching in the case of clinical medicine).
Publication bias, also known as the file drawer problem, refers to the fact that many more studies are conducted than published. Studies that obtain positive and novel results are more likely to be published than studies that obtain negative results or report replications of prior results. The consequence is that the published literature indicates stronger evidence for findings than exists in reality.
Outcome switching refers to the possibility of changing the outcomes of interest in the study depending on the observed results. A researcher may include ten variables that could be considered outcomes of the research, and — once the results are known — intentionally or unintentionally select the subset of outcomes that show statistically significant results as the outcomes of interest. The consequence is an increase in the likelihood that reported results are spurious by leveraging chance, while negative evidence gets ignored.
This is one of several related research practices that can inflate spurious findings when analysis decisions are made with knowledge of the observed data, such as selection of models, exclusion rules and covariates. Such data-contingent analysis decisions constitute what has become known as P-hacking, and pre-registration can protect against all of these.
[...]
It also effectively blinds the researcher to the outcome, because the data are not yet collected and the outcomes are not yet known. This way, the researcher’s unconscious biases cannot influence the analysis strategy.
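To make the outcome-switching point concrete, here is a minimal simulation sketch (with made-up numbers: ten candidate outcomes, thirty participants per group, and no true effect at all) showing how reporting only the most significant of several outcomes inflates the false-positive rate well beyond the nominal 5%:

```python
import random
import statistics

def one_study(rng, n_outcomes=10, n_per_group=30):
    """Simulate a study with NO true effect: measure several outcomes in a
    treatment group and a control group, then keep only the most extreme
    (most 'significant') outcome, as in outcome switching."""
    def z_for_one_outcome():
        treat = [rng.gauss(0, 1) for _ in range(n_per_group)]
        ctrl = [rng.gauss(0, 1) for _ in range(n_per_group)]
        diff = statistics.mean(treat) - statistics.mean(ctrl)
        se = (2 / n_per_group) ** 0.5  # standard error of the difference (unit variance)
        return abs(diff) / se
    return max(z_for_one_outcome() for _ in range(n_outcomes))

rng = random.Random(0)
# |z| > 1.96 corresponds to p < 0.05 for a single pre-specified outcome.
false_positives = sum(one_study(rng) > 1.96 for _ in range(2000))
print(f"'Significant' headline result in {false_positives / 20:.1f}% of pure-noise studies")
```

With ten independent outcomes, the chance that at least one crosses the 5% threshold by luck alone is roughly 1 - 0.95^10, i.e. about 40%; pre-registering a single primary outcome removes that extra flexibility.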
“Registered reports” refers to a specific protocol that journals are encouraged to adopt, which integrates preregistration into the journal acceptance process. (The two-stage process is illustrated by a figure in the Let’s Fund report.)
Of the many ways to implement preregistration practices, I don’t think the one that Chambers proposes seems ideal, and I can see some flaws with it, but I still think that the quality of clinical science (and potentially other fields) will significantly improve if more journals adopt the registered reports protocol. (Please keep this in mind as you read my concerns in the next section.)
The importance of bandwidth constraints for journals
Chambers has the explicit goal of making all clinical trials require the use of registered reports. That outcome seems potentially quite harmful, and possibly worse than the current state of clinical science. (However, since that current state is very far from “universal registered reports,” I am not very worried about this grant contributing to that scenario.)
The Let’s Fund report covers the benefits of preregistration pretty well, so I won’t go into much detail here. Instead, I will mention some of my specific concerns with the protocol that Chambers is trying to promote.
From the registered reports website:
Manuscripts that pass peer review will be issued an in principle acceptance (IPA), indicating that the article will be published pending successful completion of the study according to the exact methods and analytic procedures outlined, as well as a defensible and evidence-bound interpretation of the results.
This seems unlikely to be the best course of action: I don’t think that the most widely-read journals should publish only registered reports. The key reason is that many scientific journals are solving a bandwidth constraint - sharing papers that are worth reading, not merely papers that say true things, to help researchers keep up to date with new findings in their field. A math journal could publish papers for every true mathematical statement, including trivial ones, but it instead needs to focus on true statements that are useful to signal-boost to the mathematics community. (Related concepts are the tradeoff between bias and variance in machine learning, or accuracy and calibration in forecasting.)
Ultimately, from a value of information perspective, it is totally possible for a study to only be interesting if it finds a positive result, and to be uninteresting when analyzed pre-publication from the perspective of the editor. It seems better to encourage pre-publication, but still take into account the information value of a paper’s experimental results, even if this doesn’t fully prevent publication bias.
To give a concrete (and highly simplified) example, imagine a world where you are trying to find an effective treatment for a disease. You don’t have great theory in this space, so you basically have to test 100 plausible treatments. On their own, none of these have a high likelihood of being effective, but you expect that at least one of them will work reasonably well.
Currently, you would preregister those trials (as is required for clinical trials), and then start performing the studies one by one. Each failure provides relatively little information (since the prior probability was low anyways), so you are unlikely to be able to publish it in a prestigious journal, but you can probably still publish it somewhere. Not many people would hear about it, but it would be findable if someone is looking specifically for evidence about the specific disease you are trying to treat, or the treatment that you tried out. However, finding a successful treatment is highly valuable information which will likely get published in a journal with a lot of readers, causing lots of people to hear about the potential new treatment.
In a world with mandatory registered reports, none of these studies will be published in a high-readership journal, since journals will be forced to make a decision before they know the outcome of a treatment. Because all 100 studies are equally unpromising, none are likely to pass the high bar of such a journal, and they’ll wind up in obscure publications (if they are published at all) [1]. Thus, even if one of them finds a successful result, few people will hear about it. High-readership journals exist in large part to spread news about valuable results in a limited bandwidth environment; this no longer happens in scenarios of this kind.
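To put rough, illustrative numbers on this example (assuming 80% power, a 5% false-positive rate, and a 1-in-100 prior that any given treatment works), a single null result barely moves a reader's beliefs, while a positive result is a large update; that difference is exactly the information an editor deciding before results cannot condition on:

```python
def posterior_works(prior, power=0.8, alpha=0.05, positive=True):
    """Bayesian update on one trial of a single candidate treatment.
    prior: P(treatment works) before the trial
    power: P(positive result | treatment works)
    alpha: P(positive result | treatment doesn't work)"""
    p_positive = prior * power + (1 - prior) * alpha
    if positive:
        return prior * power / p_positive
    return prior * (1 - power) / (1 - p_positive)

prior = 1 / 100  # ~100 plausible treatments, roughly one expected to work
print(f"P(works) before any trial:        {prior:.3f}")
print(f"P(works) after a null result:     {posterior_works(prior, positive=False):.3f}")
print(f"P(works) after a positive result: {posterior_works(prior, positive=True):.3f}")
```

Under these assumptions, a positive result multiplies the probability that the treatment works by roughly fourteen, while a null result leaves it small (the treatment was already unlikely to work).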
Because of dynamics like this, I think it is very unlikely that any major journal will ever switch to publishing only registered report-based studies, even for clinical trials, since no journal would want to pass up the chance to publish a study that could revolutionize the field.
Importance of selecting for clarity
Here is the full set of criteria that papers are being evaluated by for stage 2 of the registered reports process:
1. Whether the data are able to test the authors’ proposed hypotheses by satisfying the approved outcome-neutral conditions (such as quality checks or positive controls)
2. Whether the Introduction, rationale and stated hypotheses are the same as the approved Stage 1 submission (required)
3. Whether the authors adhered precisely to the registered experimental procedures
4. Whether any unregistered post hoc analyses added by the authors are justified, methodologically sound, and informative
5. Whether the authors’ conclusions are justified given the data
The above list is exhaustive, and it does not mention the clarity of the authors’ writing, the quality and rigor of the paper’s methodological explanations, or the implications of the paper’s findings for underlying theory. (All of these are very important to how journals currently evaluate papers.) This means that journals can only filter for those characteristics in the first stage of the registered reports process, when large parts of the paper haven’t yet been written. As a result, large parts of the paper have essentially no selection applied to them for conceptual clarity or for thoughtful analysis of implications for future theory, which will likely make those qualities worse.
I think the goal of registered reports is to split research into two halves, published as two separate papers: one that is empirical, and another that is purely theoretical, taking the results of the first paper as given and exploring their consequences. We already see this split a good amount in physics, where there is a pretty significant divide between experimental and theoretical physics, the latter of which rarely performs experiments. I don’t know whether encouraging this split in a given field is a net improvement, since I generally think that a lot of good science comes from combining the gathering of good empirical data with careful analysis and explanation. I am particularly worried that the analysis of results in papers published via registered reports will be of especially low quality, encouraging the spread of bad explanations and misconceptions that can cause a lot of damage (though some of that is definitely offset by preregistration reducing the degree to which scientists can fit hypotheses post hoc). The costs here seem related to Chris Olah’s article on research debt.
Again, I think both of these problems are unlikely to become serious issues, because at most I can imagine getting to a world where something between 10% and 30% of top journal publications in a given field have gone through registered reports-based preregistration. I would be deeply surprised if there weren’t alternative outlets for papers that do try to combine the gathering of empirical data with high-quality explanations and analysis.
Failures due to bureaucracy
I should also note that, while clinical science is not something I have spent large amounts of time thinking about, I am quite concerned about adding more red tape and logistical hurdles to the process of registering clinical trials. I have high uncertainty about the effect of registered reports on the costs of running small-scale clinical experiments, but it seems more likely than not that they will lengthen the review process and add further methodological constraints.
(There is also a chance that it will reduce these burdens by giving scientists feedback earlier in the process and letting them be more certain of the value of running a particular study. However, this effect seems slightly weaker to me than the additional costs, though I am very uncertain about this.)
In the current scientific environment, running even a simple clinical study may require millions of dollars of overhead (a related example is detailed in Scott Alexander’s “My IRB nightmare”). I believe this barrier is a substantial drag on progress in medical science. In this context, requiring even more mandatory documentation, and adding even more upfront costs, seems quite harmful. (Though again, it seems highly unlikely that the registered reports format will ever become mandatory on a large scale, and giving more researchers the option to publish a study via the registered reports protocol, depending on their local tradeoffs, seems likely net-positive.)
To summarize these three points:
- If journals have to commit to publishing studies, it’s not obvious to me that this is good, given that they would have to do so without access to important information (e.g. whether a surprising result was found) and with only a limited number of slots for publishing papers.
- It seems quite important for journals to be able to select papers based on the clarity of their explanations, both for ease of communication and for conceptual refinement.
- Excessive red tape in clinical research seems like one of the main problems with medical science today, so adding more is worrying, though the sign of the registered reports protocol’s effect on this is somewhat ambiguous.
Differential technological progress
Let’s Fund covers differential technological progress concerns in their writeup. Key quote:
One might worry that funding meta-research indiscriminately speeds up all research, including research which carries a lot of risks. However, for the above reasons, we believe that meta-research improves predominantly social science and applied clinical science (“p-value science”) and so has a strong differential technological development element, that hopefully makes society wiser before more risks from technology emerge through innovation. However, there are some reproducibility concerns in harder sciences such as basic biological research and high energy physics that might be sped up by meta-research and thus carry risks from emerging technologies[110].
My sense is that further progress in sociology and psychology seems net positive from a global catastrophic risk reduction perspective. The case for clinical science seems a bit weaker, but still positive.
In general, I am more excited about this grant in worlds in which global catastrophes are less immediate and less likely than my usual models suggest, and I’m thinking of this grant in some sense as a hedging bet, in case we live in one of those worlds.
Overall, a reasonable summary of my position on this grant would be: "I think preregistration helps, but is probably not really attacking the core issues in science. I think this grant is good, because I think it actually makes preregistration a possibility in a large number of journals, though I disagree with Chris Chambers on whether it would be good for all clinical trials to require preregistration, which I think would be quite bad. On the margin, I support his efforts, but if I ever come to change my mind about this, it will likely be for one or more of the reasons above."
Footnotes
[1] The journal could also publish a random subset, though at scale that gives rise to the same dynamics, so I’ll ignore that case. It could also batch a large number of the experiments until the expected value of information is above the relevant threshold, though that significantly increases costs.
Jess Whittlestone ($75,080)
Note: Funding from this grant will go to the Leverhulme Centre for the Future of Intelligence, which will fund Jess in turn. The LTF Fund is not replacing funding that CFI would have supplied instead; without this grant, Jess would need to pursue grants from sources outside CFI.
Research on the links between short- and long-term AI policy while skilling up in technical ML.
I’m applying for funding to cover my salary for a year as a postdoc at the Leverhulme CFI, enabling me to do two things:
-- Research the links between short- and long-term AI policy. My plan is to start broad: thinking about how to approach, frame and prioritise work on ‘short-term’ issues from a long-term perspective, and then focusing in on a more specific issue. I envision two main outputs (papers/reports): (1) reframing various aspects of ‘short-term’ AI policy from a long-term perspective (e.g. highlighting ways that ‘short-term’ issues could have long-term consequences, and ways of working on AI policy today most likely to have a long-run impact); (2) tackling a specific issue in ‘short-term’ AI policy with possible long-term consequences (tbd, but an example might be the possible impact of microtargeting on democracy and epistemic security as AI capabilities advance).
-- Skill up in technical ML by taking courses from the Cambridge ML masters.
Most work on long-term impacts of AI focuses on issues arising in the future from AGI. But issues arising in the short term may have long-term consequences: either by directly leading to extreme scenarios (e.g. automated surveillance leading to authoritarianism), or by undermining our capability to deal with other threats (e.g. disinformation undermining collective decision-making). Policy work today will also shape how AI gets developed, deployed and governed, and what issues will arise in the future. We’re at a particularly good time to influence the focus of AI policy, with many countries developing AI strategies and new research centres emerging.
There’s very little rigorous thinking about the best way to do short-term AI policy from a long-term perspective. My aim is to change that, and in doing so improve the quality of discourse in current AI policy. I would start with a focus on influencing UK AI policy, as I have experience and a strong network here (e.g. the CDEI and Office for AI). Since DeepMind is in the UK, I think it is worth at least some people focusing on UK institutions. I would also ensure this research was broadly relevant, by collaborating with groups working on US AI policy (e.g. FHI, CSET and OpenAI).
I’m also asking for a time buyout to skill up in ML (~30%). This would improve my own ability to do high-quality research, by helping me to think clearly about how issues might evolve as capabilities advance, and how technical and policy approaches can best combine to influence the future impacts of AI.
The main work of Jess’s that I know is her early involvement in 80,000 Hours. In the first one or two years of the organization’s existence, she wrote dozens of articles for them and contributed to their culture and development. Since then, I’ve seen her make positive contributions to a number of projects over the years: she has helped in some form with every EA Global conference I’ve organized (two in 2015 and one in 2016), and she’s continued to write publicly in places like the EA Forum, the EA Handbook, and news sites like Quartz and Vox. This background means that members of the fund have had many opportunities to judge Jess’s output. My sense is that this is the main reason the other members of the fund were excited about this grant — they generally trust Jess’s judgment and value her experience (while being more hesitant about CFI’s work).
There are three things I looked into for this grant writeup: Jess’s policy research output, Jess’s blog, and the institutional quality of Leverhulme CFI. The section on Leverhulme CFI became longer than the section on Jess and was mostly unrelated to her work, so I’ve taken it out and included it as an addendum.
Impressions of Policy Papers
First is her policy research. The papers I read were from those linked on her blog. They were:
- The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions by Jess Whittlestone, Rune Nyrup, Anna Alexandrova and Stephen Cave
- Reducing Malicious Use of Synthetic Media Research: Considerations and Potential Release Practices for Machine Learning by Aviv Ovadya and Jess Whittlestone
On the first paper, about focusing on tensions: the paper argued that many “principles of AI ethics” that people publicly talk about in industry, non-profits, government and academia are substantively meaningless, because they don’t come with the sort of concrete advice that actually tells you how to apply them - and specifically, how to trade them off against each other. The part of the paper I found most interesting was a set of four paragraphs pointing to specific tensions between principles of AI ethics. They were:
- Using data to improve the quality and efficiency of services vs. respecting the privacy and autonomy of individuals
- Using algorithms to make decisions and predictions more accurate vs. ensuring fair and equal treatment
- Reaping the benefits of increased personalization in the digital sphere vs. enhancing solidarity and citizenship
- Using automation to make people’s lives more convenient and empowered vs. promoting self-actualization and dignity
My sense is that while there is some good public discussion about AI and policy (e.g. OpenAI’s work on release practices seems quite positive to me), much conversation that brands itself as ‘ethics’ is often not motivated by the desire to ensure this novel technology improves society in accordance with our deepest values, but instead by factors like reputation, PR and politics.
There are many notions, like Peter Thiel’s “At its core, artificial intelligence is a military technology” or the common question “Who should control the AI?” which don’t fully account for the details of how machine learning and artificial intelligence systems work, or the ways in which we need to think about them in very different ways from other technologies; in particular, that we will need to build new concepts and abstractions to talk about them. I think this is also true of most conversations around making AI fair, inclusive, democratic, safe, beneficial, respectful of privacy, etc.; they seldom consider how these values can be grounded in modern ML systems or future AGI systems. My sense is that much of the best conversation around AI is about how to correctly conceptualize it. This is something that (I was surprised to find) Henry Kissinger’s article on AI did well; he spends most of the essay trying to figure out which abstractions to use, as opposed to using already existing ones.
The reason I liked that bit of Jess’s paper is that I felt the paper used mainstream language around AI ethics (in a way that could appeal to a broad audience), but then:
- Correctly pointed out that AI is a sufficiently novel technology that we’re going to have to rethink what these values actually mean, because the technology causes a host of fundamentally novel ways for them to come into tension
- Provided concrete examples of these tensions
In the context of a public conversation that I feel is often substantially motivated by politics and PR rather than truth, seeing someone point clearly at important conceptual problems felt like a breath of fresh air.
That said, given all of the political incentives around public discussion of AI and ethics, I don’t know how papers like this can improve the conversation. For example, companies are worried about losing in the court of Twitter’s public opinion, and also about things like governmental regulation; these are strong forces pushing them to primarily take popular but ineffectual steps to be more "ethical". I’m not saying papers like this can’t improve the situation in principle, only that I don’t personally feel like I have much of a clue about how to do it, or how to evaluate whether someone else is doing it well, in advance of their having successfully done it.
Personally, I feel much more able to evaluate the conceptual work of figuring out how to think about AI and its strategic implications (two standout examples are this paper by Bostrom and this LessWrong post by Christiano), rather than work on revising popular views about AI. I’d be excited to see Jess continue with the conceptual side of her work, but if she instead primarily aims to influence public conversation (the other goal of that paper), I personally don’t think I’ll be able to evaluate and recommend grants on that basis.
From the second paper I read sections 3 and 4, which list many safety and security practices in the fields of biosafety, computer information security, and institutional review boards (IRBs), and then outline variables for analysing release practices in ML. I found them useful, even if shallow (i.e. they did not go into much depth in the fields they covered). Overall, the paper felt like a fine first step in thinking about this space.
In both papers, I was concerned with the level of inspiration drawn from bioethics, which seems to me to be a terribly broken field (cf. Scott Alexander talking about his IRB nightmare or medicine’s ‘culture of life’). My understanding is that bioethics coordinated a successful power grab (cf. OpenPhil’s writeup) from the field of medicine, creating hundreds of dysfunctional and impractical ethics boards that have formed a highly adversarial relationship with doctors (whose practical involvement with patients often makes them better than ethicists at making tradeoffs between treatment, pain/suffering, and dignity). The formation of an “AI ethics” community that has this sort of adversarial, unhealthy relationship with machine learning researchers would be an incredible catastrophe.
Overall, it seems like Jess is still at the beginning of her research career (she’s only been in this field for ~1.5 years). And while she’s spent a lot of effort on areas that don’t personally excite me, both of her papers include interesting ideas, and I’m curious to see her future work.
Impressions of Other Writing
Jess also writes a blog, and this is one of the main things that makes me excited about this grant. On the topic of AI, she wrote three posts (1, 2, 3), all of which made good points on at least one important issue. I also thought the post on confirmation bias and her PhD was quite thoughtful. It correctly identified a lot of problems with discussions of confirmation bias in psychology, and came to a much more nuanced view of the trade-off between being open-minded and committing to your plans and beliefs. Overall, the posts show independent thinking, are written with an intent to actually convey understanding to the reader, and do a good job of it. They share the vibe I associate with much of Julia Galef’s work: they notice true observations and conceptual clarifications, successfully move the conversation forward one or two steps, and avoid political conflict.
I do have some significant concerns with the work above, including the positive portrayal of bioethics and the absence of any criticism toward the AAAI safety conference talks, many of which seem to me to have major flaws.
While I’m not excited about Leverhulme CFI’s work (see the addendum for details), I think it will be good for Jess to have free rein to follow her own research initiatives within CFI. And while she might be able to obtain funding elsewhere, this alternative seems considerably worse, as I expect other funding options would substantially constrain the types of research she’d be able to conduct.
Lynette Bye ($23,000)
Productivity coaching for effective altruists to increase their impact.
I plan to continue coaching high-impact EAs on productivity. I expect to have 600+ sessions with about 100 clients over the next year, focusing on people working in AI safety and EA orgs. I’ve worked with people at FHI, Open Phil, CEA, MIRI, CHAI, DeepMind, the Forethought Foundation, and ACE, and will probably continue to do so. Half of my current clients (and a third of all clients I’ve worked with) are people at these orgs. I aim to increase my clients’ output by improving prioritization and increasing focused work time.
I would use the funding to: offer a subsidized rate to people at EA orgs (e.g. between $10 and $50 instead of $125 per call), offer free coaching for select coachees referred by 80,000 Hours, and hire contractors to help me create materials to scale coaching.
You can view my impact evaluation (linked below) for how I’m measuring my impact so far.
(Lynette’s public self-evaluation is here.)
I generally think it's pretty hard to do "productivity coaching" as your primary activity, especially when you are young, due to a lack of work experience. This means I have a high bar for it being a good idea that someone should go full-time into the "help other people be more productive” business.
My sense is that Lynette meets that bar, but only barely (to be clear, I consider it to be a high bar). The main thing she seems to be doing well is being very organized about all of her work, in a way that makes me confident that her work has had a real impact — if it hadn’t, I think she’d have noticed and moved on to something else.
However, as I say in the CFAR writeup, I have a lot of concerns about primarily optimising for legibility, and Lynette’s work shows some signs of this. She has shared around 60 testimonials on her website (linked here). Not one of them mentions anything negative, which clearly indicates that I can’t straightforwardly interpret them as positive evidence (since any unbiased sampling process would have produced at least some negative datapoints). I much prefer what another applicant did here: they asked people to send us information anonymously, which increased the chance of our hearing opinions that weren’t selected to create a positive impression. As is, I think I actually shouldn’t update much on the testimonials, particularly given that none of them go into much detail on how Lynette helped, and almost all of them share a similar structure.
Reflecting on the broader picture, I think that Lynette’s mindset reflects how I think many of the best operations staff I’ve seen operate: aim to be productive by using simple output metrics, and by doing things in a mindful, structured way (as opposed to, for example, trying to aim for deep transformative insights more traditionally associated with psychotherapy). There is a deep grounded-ness and practical nature to it. I have a lot of respect for that mindset, and I feel as though it's underrepresented in the current EA/rationality landscape. My inside-view models suggest that you can achieve a bunch of good things by helping people become more productive in this way.
I also think that this mindset comes with a type of pragmatism that I am more concerned about, one that often gives rise to what I consider unhealthy adversarial dynamics. As I discussed above, it’s difficult to get information from Lynette’s positive testimonials. My sense is that she might have produced them by directly optimising for “getting a grant” and trying to give me lots of positive information, leading to substantial bias in the selection process. The technique of ‘just optimize for the target’ is valuable in lots of domains, but in this case its effect was quite negative.
That said, framing her coaching as achieving a series of similar results generally moves me closer to thinking about this grant as "coaching as a commodity". Importantly, few people reported very large gains in their productivity; the testimonials instead show a solid stream of small improvements. I think that very few people have access to good coaching, and the high variance in coach quality means that experimenting is often quite expensive and time-consuming. Lynette seems to be able to consistently produce positive effects in the people she is working with, making her services a lot more valuable due to greater certainty around the outcome. (However, I also assign significant probability that the way the evaluation questions were asked reduced the rate at which clients reported either negative or highly positive experiences.)
I think that many productivity coaches fail to achieve Lynette’s level of reliability, which is one of the key things that makes me hopeful about her work here. My guess is that the value-add of coaching is often straightforwardly positive unless you impose significant costs on your clients, and Lynette seems quite good at avoiding that by primarily optimizing for professionalism and reliability.
Further Recommendations (not funded by the LTF Fund)
Center for Applied Rationality ($150,000)
This grant was recommended by the Fund, but ultimately was funded by a private donor, who (prior to CEA finalizing its standard due diligence checks) had personally offered to make this donation instead. As such, the grant recommendation was withdrawn.
Oliver Habryka had created a full writeup by that point, so it is included below.
Help promising people to reason more effectively and find high-impact work, such as reducing x-risk.
The Center for Applied Rationality runs workshops that promote particular epistemic norms—broadly, that beliefs should be true, bugs should be solved, and that intuitions/aversions often contain useful data. These workshops are designed to cause potentially impactful people to reason more effectively, and to find people who may be interested in pursuing high-impact careers (especially AI safety).
Many of the people currently working on AI safety have been through a CFAR workshop, such as 27% of the attendees at the 2019 FLI conference on Beneficial AI in Puerto Rico, and for some of those people it appears that CFAR played a causal role in their decision to switch careers. In the confidential section, we list some graduates from CFAR programs who subsequently decided to work on AI safety, along with our estimates of the counterfactual impact of CFAR on their decision [16 at MIRI, 3 on the OpenAI safety team, 2 at CHAI, and one each at Ought, Open Phil and the DeepMind safety team].
Recruitment is the most legible form of impact CFAR has, and is probably its most important—the top reported bottleneck in the last two years among EA leaders at Leaders Forum, for example, was finding talented employees.
[...]
In 2019, we expect to run or co-run over 100 days of workshops, including our mainline workshop (designed to grow/improve the rationality community), workshops designed specifically to recruit programmers (AIRCS) and mathematicians (MSFP) to AI safety orgs, a 4-weekend instructor training program (to increase our capacity to run workshops), and alumni reunions in both the United States and Europe (to grow the EA/rationality community and cause impactful people to meet/talk with one another). Broadly speaking, we intend to continue doing the sort of work we have been doing so far.
In our last grant round, I took an outside view on CFAR and said that, in terms of output, I felt satisfied with CFAR's achievements in recruitment, training and the establishment of communal epistemic norms. I still feel this way about those areas, and my writeup last round still seems like an accurate summary of my reasons for wanting to grant to CFAR. I also said that most of my uncertainty about CFAR lies in its long-term strategic plans, and I continue to feel relatively confused about my thoughts on that.
I find it difficult to explain my thoughts on CFAR, and I think a large fraction of this difficulty comes from CFAR being an organization that is intentionally not optimizing towards being easy to understand from the outside, having simple metrics, or, more broadly, being legible [1]. CFAR avoids being legible to the outside world in many ways. This decision is not obviously wrong, as I think it brings many benefits, but I think it is why I feel particularly confused about how to talk coherently about CFAR.
Considerations around legibility
Summary: CFAR’s work is varied and difficult to evaluate. This has some good features — it can avoid focusing too closely on metrics that don’t measure impact well — but also forces evaluators to rely on factors that aren’t easy to measure, like the quality of its internal culture. On the whole, while I wish CFAR were somewhat more legible, I appreciate the benefits to CFAR’s work of not maximizing “legibility” at the cost of impact or flexibility.
To help me explain my point, let's contrast CFAR with an organization like AMF, which I think of as exceptionally legible. AMF’s work, compared to many other organizations with tens of millions of dollars on hand, is easy to understand: they buy bednets and give them to poor people in developing countries. As long as AMF continues to carry out this plan, and provides basic data showing its success in bednet distribution, I feel like I can easily model what the organization will do. If I found out that AMF was spending 10% of its money funding religious leaders in developing countries to preach good ethical principles for society, or funding the campaigns of government officials favorable to their work, I would be very surprised and feel like some basic agreement or contract had been violated — regardless of whether I thought those decisions, in the abstract, were good or bad for their mission. AMF claims to distribute anti-malaria bednets, and it is on this basis that I would choose whether to support them.
AMF could have been a very different organization, and still could be if it wanted to. For example, it could conduct research on various ways to effect change, and give its core staff the freedom to do whatever they thought was best. This new AMF (“AMF 2.0”) might not be able to tell you exactly what they’ll do next year, because they haven’t figured it out yet, but they can tell you that they’ll do whatever their staff determine is best. This could be distributing deworming pills, pursuing speculative medical research, engaging in political activism, funding religious organizations, etc.
If GiveWell wanted to evaluate AMF 2.0, they would need to use a radically different style of reasoning. There wouldn’t be a straightforward intervention with RCTs to look into. There wouldn’t be a straightforward track record of impact from which to extrapolate. Judging AMF 2.0 would require GiveWell to form much more nuanced judgments about the quality of thinking and execution of AMF’s staff, to evaluate the quality of its internal culture, and to consider a host of other factors that weren’t previously relevant.
I think that evaluating CFAR requires a lot of that kind of analysis, which seems inherently harder to communicate to other people without summarizing one’s views as: "I trust the people in that organization to make good decisions."
The more general idea here is that organizations are subject to bandwidth constraints: they often want to do lots of different things, but their funders need to be able to understand and predict their behavior with limited resources for evaluation. As I've written about recently, a key variable for any organization is which people and organizations it is trying to be understood by and held accountable to. For charities that receive most of their funding in small donations from a large population of people who don’t know much about them, this is a very strong constraint; they must communicate their work so that people can understand it very quickly, with little background information. If a charity instead receives most of its funding in large donations from a small set of people who follow it closely, it can communicate much more freely, because the funders will be able to spend a lot of their time talking to the org, exchanging models, and generally coming to an understanding of why the org is doing what it’s doing.
This idea partly explains why most organizations tend to focus on legibility, in how they talk about their work and even in the work they choose to pursue. It can be difficult to attract resources and support from external parties if one’s work isn’t legible.
I think that CFAR is still likely optimizing too little towards legibility, compared to what I think would be ideal for it. Being legible allows an organization to be more confident that its work is having real effects, because it acquires evidence that holds up to a variety of different viewpoints. However, I think that far too many organizations (nonprofit and otherwise) are trying too hard to make their work legible, in a way that reduces innovation and also introduces a variety of adversarial dynamics. When you make systems that can be gamed, and which carry rewards for success (e.g. job stability, prestige, etc), people will reliably turn up to game them [2].
(As Jacob Lagerros has written in his post on Unconscious Economics, this doesn’t mean people are consciously gaming your system, merely that this behavior will eventually emerge. The many causes of this include selection effects, reinforcement learning, and memetic evolution.)
In my view, CFAR, by not trying to optimize for a single, easy-to-explain metric, avoids playing the “game” many nonprofits play of focusing on work that will look obviously good to donors, even if it isn’t what the nonprofit believes would be most impactful. It also avoids a variety of other games that come with legibility, such as job applicants getting very good at faking the signals that they are a good fit for an organization, which makes it harder for organizations to find genuinely good applicants.
Optimizing for communication with the goal of being given resources introduces adversarial dynamics; someone asking for money may provide limited/biased information that raises the chance they’ll be given a grant but reduces the accuracy of the grantmaker’s understanding. (See my comment in Lynette’s writeup below for an example of how this can arise.) This optimization can also tie down your resources, forcing you to carry out commitments you made for the sake of legibility, rather than doing what you think would be most impactful [3].
So I think that it's important that we don't force all organizations towards maximal legibility. (That said, we should ensure that organizations are encouraged to pursue at least some degree of legibility, since the lack of legibility also gives rise to various problems.)
Do I trust CFAR to make good decisions?
As I mentioned in my initial comments on CFAR, I generally think that the current projects CFAR is working on are quite valuable and worth the resources they are consuming. But I have a lot of trouble modeling CFAR’s long-term planning, and I feel like I have to rely instead on my models of how much I trust CFAR to make good decisions in general, instead of being able to evaluate the merits of their actual plans.
That said, I do generally trust CFAR's decision-making. It’s hard to explain the evidence that causes me to believe this, but I’ll give a brief overview anyway. (This evidence probably won’t be compelling to others, but I still want to give an accurate summary of where my beliefs come from):
- I expect that a large fraction of CFAR's future strategic plans will continue to be made by Anna Salamon, from whom I have learned a lot of valuable long-term thinking skills, and who seems to me to have made good decisions for CFAR in the past.
- I think CFAR's culture, while imperfect, is still based on strong foundations of good reasoning with deep roots in the philosophy of science and the writings of Eliezer Yudkowsky (which I think serve as a good basis for learning how to think clearly).
- I have made a lot of what I consider my best and most important strategic decisions in the context of, and aided by, events organized by CFAR. This suggests to me that at least some of that generalizes to CFAR's internal ability to think strategically.
- I am excited about a number of individuals who intend to complete CFAR's latest round of instructor training, which gives me some optimism about CFAR's future access to good talent and its ability to establish and sustain a good internal culture.
Footnotes
[1] The focus on ‘legibility’ in this context I take from James C. Scott’s book “Seeing Like a State.” It was introduced to me by Elizabeth Van Nostrand in this blogpost discussing it in the context of GiveWell and good giving; Scott Alexander also discussed it in his review of the book. Here’s an example from Scott regarding centralized planning and governance:
The centralized state wanted the world to be “legible”, i.e. arranged in a way that made it easy to monitor and control. An intact forest might be more productive than an evenly-spaced rectangular grid of Norway spruce, but it was harder to legislate rules for, or assess taxes on.
[2] The errors that follow are all forms of Goodhart’s Law, which states that “any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
[3] The benefits of (and forces that encourage) stability and reliability can maybe be most transparently understood in the context of menu costs and the prevalence of highly sticky wages.
Addenda
Addendum: Thoughts on a Strategy Article by the Leadership of Leverhulme CFI and CSER
I wrote the following in the course of thinking about the grant to Jess Whittlestone. While the grant is to support Jess’s work, the grant money will go to Leverhulme CFI, which will maintain discretion about whether to continue employing her, and will likely influence what type of work she will pursue.
As such, it seems important to not only look into Jess’s work, but also look into Leverhulme CFI and its sister organization, the Centre for the Study of Existential Risk (CSER). While my evaluation of the organization that will support Jess during her postdoc is relevant to my evaluation of the grant, it is quite long and does not directly discuss Jess or her work, so I’ve moved it into a separate section.
I’ve read a few papers from CFI and CSER over the years, and heard many impressions of their work from other people. For this writeup, I wanted to engage more concretely with their output. I reread and reviewed an article published in Nature earlier this year called Bridging near- and long-term concerns about AI, written by the Executive Directors at Leverhulme CFI and CSER respectively, Stephen Cave and Seán ÓhÉigeartaigh.
Summary and aims of the article
The article’s summary:
Debate about the impacts of AI is often split into two camps, one associated with the near term and the other with the long term. This divide is a mistake — the connections between the two perspectives deserve more attention, say Stephen Cave and Seán S. ÓhÉigeartaigh.
This is not a position I hold, and I’m going to engage with the content below in more detail.
Overall, I found the claims of the essay hard to parse and often ambiguous, but I’ve attempted to summarize what I view as its three main points:
- If ML is a primary technology used in AGI, then there are likely some design decisions today that will create lock-in in the long-term and have increasingly important implications for AGI safety.
- If we can predict changes in society from ML that matter in the long term (such as automation of jobs), then we can prepare policy for them in the short term (like preparing retraining programs for lorry drivers whose jobs will be automated).
- Norms and institutions built today will have long-term effects, and so people who care about the long term should especially care about near-term norms and institutions.
They say “These three points relate to ways in which addressing near-term issues could contribute to solving potential long-term problems.”
If I ask myself what Leverhulme/CSER’s goals are for this document, it feels to me like it is intended as a statement of diplomacy. It’s saying that near-term and long-term AI risk work are split into two camps, but that we should be looking for common ground (“the connections between the two perspectives deserve more attention”, “Learning from the long term”). It tries to emphasize shared values (“Connected research priorities”) and the importance of cooperation amongst many entities (“The challenges we will face are likely to require deep interdisciplinary and intersectoral collaboration between industries, academia and policymakers, alongside new international agreements”). The goal that I think it is trying to achieve is to negotiate trade and peace between the near-term and long-term camps by arguing that “This divide is a mistake”.
Drawing the definitions does a lot of work
The authors define “long-term concerns” with the following three examples:
wide-scale loss of jobs, risks of AI developing broad superhuman capabilities that could put it beyond our control, and fundamental questions about humanity’s place in a world with intelligent machines
Despite this broad definition, they only use concrete examples from the first category, which I would classify as something like “mid-term issues.” I think the possibility of even wide-scale loss of jobs, unless interpreted extremely broadly, is something that does not make sense to put into the same category as the other two, which are primarily concerned with stakes that are orders of magnitude higher (such as the future of the human species). I think this conflation of very different concerns causes the rest of the article to make an argument that is more likely to mislead than to inform.
After this definition, the article failed to mention any issue that I would classify as representative of the long-term concerns of Nick Bostrom or Max Tegmark, both of whom are cited by the article to define “long-term issues.” (In Tegmark’s book Life 3.0, he explicitly categorizes unemployment as a short-term concern, to be distinguished from long-term concerns.)
Conceptual confusions in short- and mid-term policy suggestions
The article has the following policy idea:
Take explainability (the extent to which the decisions of autonomous systems can be understood by relevant humans): if regulatory measures make this a requirement, more funding will go to developing transparent systems, while techniques that are powerful but opaque may be deprioritized.
(Let me be clear that this is not explicitly listed as a policy recommendation.)
My naive prior is that there is no good AI regulation a government could establish today. I continue to feel this way after looking into this case (and the next example below). Let me explain why, in this case, the idea that regulation requiring explainability would encourage transparent and explainable systems is false.
Modern ML systems are not doing a type of reasoning that is amenable to explanation in the way human decisions often are. There is no principled explanation of their reasoning when deciding whether to offer you a bank loan; there is merely a mass of correlations between spending history and later reliability, which may factorise into a small number of well-defined chunks like “how regularly someone pays their rent”, but may not. The main problem with the quoted paragraph is that it does not attempt to specify how to define explainability in an ML system to the point where it can be regulated, meaning that any regulation would either be meaningless and ignored, or, worse, highly damaging. Policies formed in this manner will either be of no consequence or deeply antagonise the ML community. We currently don’t know how to think about the explainability of ML systems, and ignoring that problem while regulating that they should be ‘explainable’ will not work.
The article also contains the following policy idea about autonomous weapons.
The decisions we make now, for example, on international regulation of autonomous weapons, could have an outsized impact on how this field develops. A firm precedent that only a human can make a ‘kill’ decision could significantly shape how AI is used — for example, putting the focus on enhancing instead of replacing human capacities.
Here and throughout the article, repeated uses of the conditional ‘could’ make it unclear to me whether this is being endorsed or merely suggested. I can’t quite tell whether they think drone swarms are a long-term issue: they contrast them with a short-term issue, but don’t explicitly say that they are long-term. Nonetheless, I think their suggestion here is also a bit misguided.
Let me contrast this with Nick Bostrom, on a recent episode of the Joe Rogan Experience, explaining that he thinks the specific rule has ambiguous value. Here’s a quote from a discussion of the campaign to ban lethal autonomous weapons:
Nick Bostrom: I’ve kind of stood a little bit on the sidelines on that particular campaign, being a little unsure exactly what it is that… certainly I think it’d be better if we refrained from having some arms race to develop these than not. But if you start to look in more detail: What precisely is the thing that you’re hoping to ban? So if the idea is the autonomous bit, that the robot should not be able to make its own firing decision, well, if the alternative to that is there is some 19-year old guy sitting in some office building and his job is whenever the screen flashes ‘fire now’ he has to press a red button. And exactly the same thing happens. I’m not sure how much is gained by having that extra step.
Interviewer: But it feels better for us for some reason. If someone is pushing the button.
Nick Bostrom: But what exactly does that mean. In every particular firing decision? Well, you gotta attack this group of surface ships here, and here are the general parameters, and you’re not allowed to fire outside these coordinates? I don’t know. Another is the question of: it would be better if we had no wars, but if there is gonna be a war, maybe it is better if it’s robots v robots. Or if there’s gonna be bombing, maybe you want the bombs to have high precision rather than low precision - get fewer civilian casualties.
[...]
On the other hand you could imagine it reduces the threshold for going to war, if you think that you wouldn’t fear any casualties you would be more eager to do it. Or if it proliferates and you have these mosquito-sized killer-bots that terrorists have. It doesn’t seem like a good thing to have a society where you have a facial-recognition thing, and then the bot flies out and you just have a kind of dystopia.
Overall, it seems that in both situations, the key open questions are in understanding the systems and how they’ll interface with areas of industry, government and personal life, and that regulation based on inaccurate conceptualizations of the technology would either be meaningless or harmful.
Polarizing approach to policy coordination
I have two main concerns with what I see as the intent of the paper.
The first one can be summarized by Robin Hanson’s article To Oppose Polarization, Tug Sideways:
The policy world can [be] thought of as consisting of a few Tug-O-War "ropes" set up in this high dimensional policy space. If you want to find a comfortable place in this world, where the people around you are reassured that you are "one of them," you need to continually and clearly telegraph your loyalty by treating each policy issue as another opportunity to find more supporting arguments for your side of the key dimensions. That is, pick a rope and pull on it.
If, however, you actually want to improve policy, if you have a secure enough position to say what you like, and if you can find a relevant audience, then [you should] prefer to pull policy ropes sideways. Few will bother to resist such pulls, and since few will have considered such moves, you have a much better chance of identifying a move that improves policy. On the few main dimensions, not only will you find it very hard to move the rope much, but you should have little confidence that you actually have superior information about which way the rope should be pulled.
I feel like the article above is not pulling policy ropes sideways, but is instead connecting long-term issues to specific sides of existing policy debates, around which there is already a lot of tension. The issue of technological unemployment seems to me to be a highly polarizing topic, where taking a position seems ill-advised, and I have very low confidence about the correct direction in which to pull policy. Entangling long-term issues with these highly tense short-term issues seems like it will likely reduce our future ability to broadly coordinate on these issues (by having them associated with highly polarized existing debates).
Distinction between long- and short-term thinking
My second concern is that, on a deeper level, I think that the type of thinking that generates a lot of the arguments around concerns for long-term technological risks is very different from the type that suggests policies around technological unemployment and racial bias. I think there is some value in having these separate ways of thinking engage in “conversation,” but I think the linked paper is confusing in that it seems to try to downplay the differences between them. An analogy might be the differences between physics and architecture; both fields nominally work with many similar objects, but the distinction between the two is very important, and the fields clearly require different types of thinking and problem-solving.
Some of my concerns are summarized by Eliezer in his writing on Pivotal Acts:
...compared to the much more difficult problems involved with making something actually smarter than you be safe, it may be tempting to try to write papers that you know you can finish, like a paper on robotic cars causing unemployment in the trucking industry, or a paper on who holds legal liability when a factory machine crushes a worker. But while it's true that crushed factory workers and unemployed truckers are both, ceteris paribus, bad, they are not astronomical catastrophes that transform all galaxies inside our future light cone into paperclips, and the latter category seems worth distinguishing...
...there will [...] be a temptation for the grantseeker to argue, "Well, if AI causes unemployment, that could slow world economic growth, which will make countries more hostile to each other, which would make it harder to prevent an AI arms race." But the possibility of something ending up having a non-zero impact on astronomical stakes is not the same concept as events that have a game-changing impact on astronomical stakes. The question is what are the largest lowest-hanging fruit in astronomical stakes, not whether something can be argued as defensible by pointing to a non-zero astronomical impact.
I currently don’t think that someone who is trying to understand how to deal with technological long-term risk should spend much time thinking about technological unemployment or related issues, but it feels like the paper is trying to advocate for the opposite position.
Concluding thoughts on the article
Many people in the AI policy space have to spend a lot of effort to gain respect and influence, and it’s genuinely hard to figure out a way to do this while acting with integrity. One common difficulty in this area is navigating the incentives to connect one’s arguments to issues that already get a lot of attention (e.g. ongoing political debates). My read is that this essay makes these connections even when they aren’t justified; it implies that many short- and medium-term concerns are a natural extension of current long-term thought, while failing to accurately portray what I consider to be the core arguments around long-term risks and benefits from AI. It seems like the effect of this essay will be to reduce perceived differences between long-term, mid-term and short-term work on risks from AI, to cause confusion about the actual concerns of Bostrom et al., and to make future communications work in this space harder and more polarized.
Broader thoughts on CSER and CFI
I only had the time and space to critique one specific article from CFI and CSER. However, from talking to others working in the global catastrophic risk space, and from engagement with significant fractions of the rest of CSER and CFI’s work, I've come to think that the problems I see in this article are mostly representative of the problems I see in CSER’s and CFI’s broader strategy and work. I don’t think what I’ve written sufficiently justifies that claim; however, it seems useful to share this broader assessment to allow others to make better predictions about my future grant recommendations, and maybe also to open a dialogue that might cause me to change my mind.
Overall, based on the concerns I’ve expressed in this essay, and that I’ve had with other parts of CFI and CSER’s work, I worry that their efforts to shape the conversation around AI policy, and to mend disputes between those focused on long-term and short-term problems, do not address important underlying issues and may have net-negative consequences.
That said, it’s good that these organizations give some researchers a way to get PhDs/postdocs at Cambridge with relatively little institutional oversight and an opportunity to explore a large variety of different topics (e.g. Jess, and Shahar Avin, a previous grantee whose work I’m excited about).
Addendum: Thoughts on incentives in technical fields in academia
I wrote the following in the course of writing about the AI Safety Camp. This is a model I use commonly when thinking about funding for AI alignment work, but it ended up not being very relevant to that writeup, so I’m leaving it here as a note of interest.
My understanding of many parts of technical academia is that there is a strong incentive to make your writing harder to understand, and to appear more impressive, by using a lot of math. Eliezer Yudkowsky describes his understanding of this as follows (and expands on it further in the rocket alignment problem):
The point of current AI safety work is to cross, e.g., the gap between [. . . ] saying “Ha ha, I want AIs to have an off switch, but it might be dangerous to be the one holding the off switch!” to, e.g., realizing that utility indifference is an open problem. After this, we cross the gap to solving utility indifference in unbounded form. Much later, we cross the gap to a form of utility indifference that actually works in practice with whatever machine learning techniques are used, come the day.
Progress in modern AI safety mainly looks like progress in conceptual clarity — getting past the stage of “Ha ha it might be dangerous to be holding the off switch.” Even though Stuart Armstrong’s original proposal for utility indifference completely failed to work (as observed at MIRI by myself and Benya), it was still a lot of conceptual progress compared to the “Ha ha that might be dangerous” stage of thinking.
Simple ideas like these would be where I expect the battle for the hearts of future grad students to take place; somebody with exposure to Armstrong’s first simple idea knows better than to walk directly into the whirling razor blades without having solved the corresponding problem of fixing Armstrong’s solution. A lot of the actual increment of benefit to the world comes from getting more minds past the “walk directly into the whirling razor blades” stage of thinking, which is not complex-math-dependent.
Later, there’s a need to have real deployable solutions, which may or may not look like impressive math per se. But actual increments of safety there may be a long time coming. [. . . ]
Any problem whose current MIRI-solution looks hard (the kind of proof produced by people competing in an inexploitable market to look impressive, who gravitate to problems where they can produce proofs that look like costly signals of intelligence) is a place where we’re flailing around and grasping at complicated results in order to marginally improve our understanding of a confusing subject matter. Techniques you can actually adapt in a safe AI, come the day, will probably have very simple cores — the sort of core concept that takes up three paragraphs, where any reviewer who didn’t spend five years struggling on the problem themselves will think, “Oh I could have thought of that.” Someday there may be a book full of clever and difficult things to say about the simple core — contrast the simplicity of the core concept of causal models, versus the complexity of proving all the clever things Judea Pearl had to say about causal models. But the planetary benefit is mainly from posing understandable problems crisply enough so that people can see they are open, and then from the simpler abstract properties of a found solution — complicated aspects will not carry over to real AIs later.
And he gives a concrete example here:
The journal paper that Stuart Armstrong coauthored on "interruptibility" is a far step down from Armstrong's other work on corrigibility. It had to be dumbed way down (I'm counting obscuration with fancy equations and math results as "dumbing down") to be published in a mainstream journal. It had to be stripped of all the caveats and any mention of explicit incompleteness, which is necessary meta-information for any ongoing incremental progress, not to mention important from a safety standpoint. The root cause can be debated but the observable seems plain. If you want to get real work done, the obvious strategy would be to not subject yourself to any academic incentives or bureaucratic processes. Particularly including peer review by non-"hobbyists" (peer commentary by fellow "hobbyists" still being potentially very valuable), or review by grant committees staffed by the sort of people who are still impressed by academic sage-costuming and will want you to compete against pointlessly obscured but terribly serious-looking equations.
(Here is a public example of Stuart’s work on utility indifference, though I had difficulty finding the most relevant examples of his work on this subject.)
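(For readers who haven't encountered the term: very roughly, and loosely paraphrasing Armstrong's proposal rather than any specific write-up of it, the utility indifference idea is to have the agent optimize a normal utility function $U_N$ unless a shutdown button is pressed, and after a press switch to a shutdown utility $U_S$ plus a compensation term chosen so that the agent's expected utility is the same whether or not the button gets pressed:

$$U(h) = \begin{cases} U_N(h) & \text{if the button is never pressed} \\ U_S(h) + \mathbb{E}[U_N \mid \text{no press}] - \mathbb{E}[U_S \mid \text{press}] & \text{if the button is pressed} \end{cases}$$

The intent is that the agent then has no incentive to cause or prevent the button press; as the quoted passage notes, getting a construction like this to actually work remains an open problem.)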
Some examples that seem to me to use an appropriate level of formalism include: the Embedded Agency sequence, the Mesa-Optimisation paper, some posts by DeepMind researchers (thoughts on human models, classifying specification problems as variants of Goodhart’s law), and many other blog posts by these authors and others on the AI Alignment Forum.
There’s a sense in which it’s fine to play around with the few formalisms you have a grasp of when you’re getting to grips with ideas in this field. For example, MIRI recently held a retreat for new researchers, which led to a number of blog posts that followed this pattern (1, 2, 3, 4). But aiming for lots of technical formalism is not helpful: any conception of useful work that focuses primarily on molding the idea to the format rather than molding the format to the idea, especially for (nominally) impressive technical formats, is likely optimizing for the wrong metric and falling prey to Goodhart’s law.
Ben_West @ 2019-10-04T16:26 (+39)
Here are some examples of communities and institutions that I think used fiction very centrally in their function
Ender's Game is often on military reading lists (e.g. here). A metric which strikes me as challenging but exciting would be to create a book which gets on one of these lists. (Or on the list of some influential person, e.g. Bill Gates' list.)
This would also help me understand the theory of change. I agree with your assessment that some fiction has had a significant impact on the world, but would also guess that most fiction has approximately zero impact on the world, so I would be curious to better understand the "success conditions" for this grant.
Habryka @ 2019-10-05T04:27 (+9)
I like this metric. I agree that it would be quite challenging to meet, but would definitely be a decent indicator of at least reach and readership. Obviously I wouldn't want it to be the only metric, but it seems like a good one to look into for any project like this.
Yeah, I do think I could do a bit better at defining what I would consider success for this grant, so I will try to write a comment with some more of my thoughts on that in the next few days.
Ben_West @ 2019-10-14T17:04 (+13)
Thanks! While I am making demands on your time, I would also be interested in understanding your opinion of Crystal Society (which seems like it might be similar to what Miranda is proposing?), if you think it was successful in accomplishing the goals you hope Miranda's work would accomplish, and why or why not.
As one example thing I am confused about: you list HP:MoR as "very likely the single most important recruitment mechanism for productive AI alignment researchers," and it is not clear to me why Crystal Society has been so much less successful, given that it seems better targeted for that purpose (e.g. it's pretty clearly about the alignment problem).
Habryka @ 2019-10-15T16:31 (+5)
I think this is a good question. I am currently on a team retreat, so likely won't get to this until next week (and maybe not then because I will likely be busy catching up with stuff). If I haven't responded in 10 days, please feel free to ping me.
Ben_West @ 2019-10-25T13:05 (+6)
Thanks! Ping on this.
I also realize that there are other fanfictions, e.g. Friendship is Optimal, that, in theory at least, seem well-placed to introduce concerns about AI alignment to the public. To the extent you can explain why these were less successful than HP:MoR (or any general theory of what success looks like here), I would be interested in hearing it!
Habryka @ 2019-10-25T21:52 (+4)
Thanks for pinging me!
I am still pretty swamped (being in the middle of both another LTFF grant round and the SFF grant round), and since I think a proper response to the above requires writing quite a bit of text, it will probably be another two weeks or so.
Liam_Donovan @ 2019-10-14T17:10 (+3)
Maybe the most successful recruitment books directly target people 1-2 stages away in the recruitment funnel? In the case of HPMOR/Crystal Society, that would be quantitatively minded people who enjoy LW-style rationality rather than those who are already interested in AI alignment specifically.
Cullen_OKeefe @ 2019-10-04T03:19 (+38)
Could the Fund managers explain how they manage conflicts of interest in the grant deliberation process?
Habryka @ 2019-10-04T05:25 (+34)
Happy to describe what we've historically done, though we are still figuring out some better policies here, so I expect this to have changed by next round.
As I mentioned in the description of our voting process last time, we have a spreadsheet that has all the applications and other organizations we are considering recommending grants to, in which each fund member can indicate a vote as well as a suggested funding amount (we are still iterating on whether to use votes or a more budget-based system). Whenever there is a potential cause for a conflict of interest, the relevant person (or a separate fund member who suspects a COI) leaves a comment in the relevant row in the spreadsheet with the details of the COI, then the other fund members look at the description of the COI and decide whether it makes sense for that person to withdraw from voting. (So far the fund has always agreed with the individual fund member's assessment of whether they should withdraw, but since any fund member has veto power over any of our grant recommendations, I am confident that we would not make grants where one fund member thinks that a different fund member has a strong COI and we don't have independent evidence supporting the grant.)
We don't (yet) have a super concrete policy for what counts as a conflict of interest, but I think we've historically been quite conservative and flagged all of the following things as potential conflicts of interest (this doesn't mean I am certain that all of the below have been flagged on every occasion, though I think that's quite likely, just that all of these have historically been flagged):
- Having worked with the relevant organization or people in the past (this usually just gets flagged internally and then added to the writeup, and doesn't cause someone to withdraw from voting)
- Living in the same house/apartment as the person we are granting to (usually makes us hesitant to make a grant just on the basis of the relevant person, so we usually seek additional external feedback to compensate for that)
- Being long-time friends with the potential grantee (I expect would just get flagged and added to the writeup)
- Being a past or current romantic partner of the potential grantee (I expect this would cause someone to exclude themselves from voting, though I don't think this ever became relevant. There is one case where a fund member first met and started dating a potential grantee after all the votes had been finalized, but I don't think there was any undue influence in that case.)
- Having some other interpersonal conflict with the relevant person (This usually doesn't make it into the writeup, but I flagged it on one occasion)
- Probably some others, since COIs can arise from all kinds of things
If the votes of the fund member with the potential COI mattered in our final grant decision, we've incorporated descriptions of those COIs into the final writeups, and sometimes added writeups by people who have less cause for a COI to provide a more neutral source of input (an example of this is the Stag grant, which has a writeup by Alex, who has some degree of COI in his relationship to Stag, so it seemed good to add a writeup by me as an additional assessment of the grant).
CEA is currently drafting a more formal policy which is stricter about fund members making recommendations to their own organizations, or organizations closely related to their own organization, but which doesn't cover most of the other things above. We are also currently discussing a more formalized COI policy internally, though I expect that for the vast majority of potential COI causes we will have to rely on a relatively fuzzy definition, because these can arise from all kinds of different things.
Cullen_OKeefe @ 2019-10-05T03:23 (+18)
Thank you for a very thorough and transparent reply!
Max_Daniel @ 2019-10-05T10:00 (+8)
Thank you, this speaks to some tentative concerns I had after reading the grant recommendations. FWIW, I feel that the following was particularly helpful for deciding how much I trust the decision-making behind grant decisions, how willing I feel to donate to the Fund, etc. I think I would have liked to see this information in the top-level post.
Whenever there is a potential cause for a conflict of interest, the relevant person (or a separate fund member who suspects a COI) leaves a comment in the relevant row in the spreadsheet with the details of the COI, then the other fund members look at the description of the COI and decide whether it makes sense for that person to withdraw from voting.
Habryka @ 2019-10-05T16:56 (+5)
That's good to hear! Because a lot of what I described above was a relatively informal procedure, I felt weird putting a lot of emphasis on it in the writeup, but I do agree that it seems like important information for others to have.
I think by next round we will probably have a more formal policy that I would feel more comfortable explicitly emphasizing in the writeup.
riceissa @ 2019-10-10T05:24 (+35)
A trend I've noticed in the AI safety independent research grants for the past two rounds (April and August) is that most of the grantees have little to no online presence as far as I know (they could be using pseudonyms I am unaware of); I believe Alex Turner and David Manheim are the only exceptions. However, when I think about "who am I most excited to give individual research grants to, if I had that kind of money?", the names I come up with are people who leave interesting comments and posts on LessWrong about AI safety. (This isn't surprising because I mostly interact with the AI safety community publicly online, so I don't have much access to private info.) To give an idea of the kind of people I am thinking of, I would name John Wentworth, Steve Byrnes, Ofer G., Morgan Sinclaire, and Evan Hubinger as examples.
This has me wondering what's going on. Some possibilities I can think of:
- the people who contribute on LW aren't applying for grants
- the private people are higher quality than the online people
- the private people have more credentials than the online people (e.g. Hertz Fellowship, math contests experience)
- fund managers are more receptive offline than online and it's easier to network offline
- fund managers don't follow online discussions closely
I would appreciate it if the fund managers could weigh in on this so I have a better sense of why my own thinking seems to diverge so much from the actual grant recommendations.
Grue_Slinky @ 2019-10-11T14:03 (+18)
As one of the people you mentioned (I'm flattered!), I've also been curious about this.
As for my own anecdata, I basically haven't applied yet. Technically I did apply and get declined last round, but a) it was a fairly low-effort application, since I didn't really need the money then, which b) I said on the application, and c) I didn't have any public posts until 2 months ago, so I wasn't in your demographic, and d) I didn't have any references, because I don't really know many people in the research community.
I'm about to submit a serious application for this round, where of those only (d) is still true. At least, I haven't interacted with any high-status researchers extensively enough for it to make sense to ask anyone for references. And I think maybe there's a correlation there that explains part of your question: I post/comment online when I'm up to it because it's one of the best ways for me to get good feedback (this being a great example), even though I'm a slow writer and it's a laborious process for me to get from "this seems like a coherent, nontrivial idea probably worth writing up" to feeling like I've covered all the inferential gaps, noted all the caveats, taken into account relevant prior writings, and thought of possible objections enough to feel ready to hit the submit button. But anyways, I would guess that maybe online people slightly skew towards being isolated (else they'd get feedback or spread their ideas by just talking to e.g. coworkers), hence not having references. But I don't think this is a large effect (and I defer to Habryka's comment). Of the people you mentioned, I believe Evan is currently working with Christiano at OpenAI and has been "clued-in" for a while, and I have no idea about the first 3.
Also, I often wonder how much Alignment research is going on that I'm just not clued into from "merely" reading the Alignment Forum, Alignment Newsletter, papers by OpenAI/DeepMind/CHAI etc. I know that MIRI is nondisclosed-by-default now, and I get that. But they laid out their reasons for that in detail, and that's on top of the trust they've earned from me as an institution for their past research. When I hear about people who are doing their own research but not posting anything, I get pretty skeptical unless they've produced good Alignment research in the past (producing other technical research counts for something, though my own intuition is that the pre-paradigmatic nature of Alignment research is different enough that the tails come apart), and my system 1 says (especially if they're getting funded):
Oh come on! I would love to sit around and do my own private "research" uninterrupted without the hard work of writing things up, but that's what you have to do if you want to be a part of the research community collectively working toward solving a problem. If everyone just lounged around in their own thoughts and notes without distilling that information for others to build on, there just wouldn't be any intellectual progress. That's the whole point of academic publication, and forum posting is actually a step down from that norm, and even that's only possible because the community of <100 people is small, young, and non-specialized enough that medium-effort ways of distilling ideas still work (fewer inferential gaps to cross, etc.).
(My system 2 would obviously use a different tone than that, but it largely agrees with the substance.)
Also, to echo points made by Jan, LW is not the best place for a broad impression of current research; the Alignment Forum is strictly better. But even the latter is somewhat skewed towards MIRI-esque things over CHAI, OpenAI, and DeepMind's stuff; here's another decent comment thread discussing that.
Habryka @ 2019-10-10T16:44 (+13)
My sense is mostly (1), with maybe some additional disagreement over what online contributions are actually a sign of competence. But usually I am quite excited when an active online contributor applies.
I share your perspective that I am most excited about people who participate in AI Alignment discussion online, but we’ve received relatively few applications from people in that reference class.
Some of the grants we've made were the result of Alex Zhu doing a lot of networking with people who are interested in AI alignment, which tends to select for slightly different things, but given the lack of applications from people with a history of contributing online, that still seems pretty good to me.
Ben Pace @ 2019-10-10T06:39 (+6)
This is an excellent question!
Jan_Kulveit @ 2019-10-10T19:54 (+4)
The reason may be somewhat simple: most AI alignment researchers do not participate (post or comment) on LW/AF or participate only a little. For more on why, check this post by Wei Dai and the discussion under it.
(Also: if you follow just LW, your understanding of the field of AI safety is likely somewhat distorted)
Regarding hypotheses 4 and 5: I'd expect at least Oli to have a strong bias toward being more enthusiastic about funding people who like to interact with LW (all other research qualities being equal), so I'm pretty sure that's not the case.
Hypotheses 2 and 3 are somewhat true, at least on average: if we operationalize "private people" as "people who you meet participating in private research retreats or visiting places like MIRI or FHI", and "online people" as "people posting and commenting on AI safety on LW", then the first group is on average better.
Hypothesis 1 is likely true in the sense that the best LW contributors are not applying for grants.
riceissa @ 2019-10-10T21:44 (+16)
The reason may be somewhat simple: most AI alignment researchers do not participate (post or comment) on LW/AF or participate only a little.
I'm wondering how many such people there are. Specifically, how many people (i) don't participate on LW/AF, (ii) don't already get paid for AI alignment work, and (iii) do seriously want to spend a significant amount of time working on AI alignment or already do so in their free time? (So I want to exclude researchers at organizations, random people who contact 80,000 Hours for advice on how to get involved, people who attend a MIRI workshop or AI safety camp but then happily go back to doing non-alignment work, etc.) My own feeling before reading your comment was that there are maybe 10-20 such people, but it sounds like there may be many more than that. Do you have a specific number in mind?
if you follow just LW, your understanding of the field of AI safety is likely somewhat distorted
I'm aware of this, and I've seen Wei Dai's post and the comments there. Personally I don't see an easy way to get access to more private discussions due to a variety of factors (not being invited to workshops, some workshops being too expensive for it to be worth traveling to, not being eligible to apply for certain programs, and so on).
Halstead @ 2019-10-06T14:43 (+32)
I'm interested in the recommendation of CFAR (though I appreciate it is not funded by the LTFF). What do you think are the top ideas regarding epistemics that CFAR has come up with that have helped EA/the world?
You mention double cruxing in the other post discussing CFAR. Rather than an innovation, isn't this merely agreeing on which premise you disagree on? Similarly, isn't murphyjitsu just the pre-mortem, which was defined by Kahneman more than a decade ago?
I also wonder why CFAR has to charge people for their advice. Why don't they write down all of their insights and put it online for free?
Habryka @ 2019-10-06T18:49 (+7)
Hmm, it seems to me like you are modeling the goals and purpose of CFAR quite differently than I do. I model CFAR primarily as an educational institution, with a bit of research, but mostly with the goal of adapting existing knowledge and ideas from cognitive science and other disciplines into more practical applications (hence the name "Center for Applied Rationality").
In my review of CFAR last round, I listed
- "Establishing Epistemic Norms"
- "Recruitment" and
- "Training"
as the three primary sources of CFAR's value add, which importantly does not include what you seem to be evaluating above, and which I would describe as "research" (the development of new core concepts in a given field).
I think that on the three axes I outlined, CFAR has been pretty successful, as I tried to explain during my last review, with the additional evidence that I have heard particularly good things about the Artificial Intelligence Risk for Computer Scientists workshops, in that they seem to facilitate a unique environment in which people with a strong technical background can start engaging with AI Alignment questions.
I don't think of CFAR's value as being primarily generated by producing specific insights of the type of Kahneman's work (though I do think there have been some generated by CFAR that I found useful), but as lying in the teaching and communication of ideas in this space, and the feedback loop that comes from seeing how trying to teach those techniques actually works (often uncovering many underspecified assumptions, or the complete lack of effectiveness of an existing technique).
Murphyjitsu is indeed just the pre-mortem, and I think it is cited as such in both the handbook and at the workshop. It's just that the name "pre-mortem" didn't stick with participants, and so people changed it to something that people seemed to actually be able to engage with (and there was also a bunch of other valuable iteration on how you actually teach the relevant skill).
I also wonder why CFAR has to charge people for their advice. Why don't they write down all of their insights and put it online for free?
This seems to again approach CFAR's value add from a different perspective. While I would be in favor of CFAR publishing their handbook, it's clear to me that this would not in any real way compete with the value of existing CFAR workshops. Universities have classes, and very few people are able to learn from textbooks alone; both their value and CFAR's comes from the facilitation of classes, active exercises, and the fast feedback loop that comes from having an instructor right in the room with you.
I am in favor of CFAR writing more things down, but similar to how very few people can learn calculus or linear algebra from nothing but a book, with no accountability structure or teacher to ask questions of, it is also unlikely that many people can learn the relevant subsets of cognitive science and decision-making from just a written description (I think some can, and our community has a much higher fraction of autodidacts than the general population, but even for those, learning without a teacher is usually still a lot slower).
While I do think there are other benefits to writing things down, like having more cross-examination of your ideas by others (giving you more information about them), "just writing down their ideas" would not effectively replace CFAR's value proposition.
-----
I am, however, just straightforwardly confused by what you mean by "isn't double crux merely agreeing on which premise you disagree on?", since that seems to have relatively little to do with basically any formulation of double crux I've seen. The goal of double crux is to figure out which observations would cause both you and the other person to change your minds. This has at most a tenuous connection to "which premise do we disagree on?", since not all premises are necessary premises for a conclusion, and observations only very rarely directly correspond to falsifying one specific premise. And human cognition usually isn't structured by making explicit premises and arguing from them, which makes whatever methodology you are comparing it to not really be something that I have any idea how to apply in conversation (if you ask me "what are my premises for the belief that Nature is the most prestigious science journal?" then I definitely won't have a nice list of premises I can respond with, but if you ask me "what would change my mind about Nature being the most prestigious science journal?" I might be able to give a reasonably good answer and start having a productive conversation).
Halstead @ 2019-10-07T02:03 (+30)
Thanks for this.
If the retreats are valuable, one would expect them to communicate genuinely useful concepts and ideas. Which ideas that CFAR teaches do you think are most useful?
On the payment model, imagine that instead of putting their material on choosing a high impact career online, 80k charged people £3000 to have 4 day coaching and networking retreats in a large mansion, afterwards giving them access to the relevant written material. I think this would shave off ~100% of the value of 80k. The differences between the two organisations don't seem to me to be large enough to make a relevant difference to this analysis when applied to CFAR. Do you think there is a case for 80k to move towards the CFAR £3k retreat model?
**
On double cruxing, here is how CFAR defines double cruxing:
"Let’s say you have a belief, which we can label A (for instance, “middle school students should wear uniforms”), and that you’re in disagreement with someone who believes some form of ¬A. Double cruxing with that person means that you’re both in search of a second statement B, with the following properties:
1. You and your partner both disagree about B as well (you think B, your partner thinks ¬B)
2. The belief B is crucial for your belief in A; it is one of the cruxes of the argument. If it turned out that B was not true, that would be sufficient to make you think A was false, too.
3. The belief ¬B is crucial for your partner’s belief in ¬A, in a similar fashion."
So, if I were to double crux with you, we would both establish which premises we disagree on that cause our disagreement. B is a premise in the argument for A. This is double cruxing, right?
You say:
"if you ask me "what are my premises for the belief that Nature is the most prestigious science journal?" then I definitely won't have a nice list of premises I can respond with, but if you ask me "what would change my mind about Nature being the most prestigious science journal?" I might be able to give a reasonably good answer and start having a productive conversation"
Your answer could be expressed in the form of premises, right? Premises are just propositions that bear on the likelihood of the conclusion.
Habryka @ 2019-10-07T04:27 (+21)
On the payment model, imagine that instead of putting their material on choosing a high impact career online, 80k charged people £3000 to have 4 day coaching and networking retreats in a large mansion, afterwards giving them access to the relevant written material.
CFAR's model is actually pretty similar to 80k's here. CFAR generally either heavily discounts or waives the cost of the workshop for people they think are likely to contribute to the long-term future, or are more broadly promising, and who don't have the money to pay for the workshop. As such, the relevant comparison is more "should 80k offer paid coaching (in addition to their free coaching) at relatively high rates to people who they think are less likely to contribute to improving the world, if the money they earn from that allows them to offer the other free coaching services (or scale them up by 30% or something like that)", to which my answer would be "yes".
My sense is that 80k is in a better-funded position, and so this tradeoff doesn't really come up, but I would be surprised if they never considered it in the past (though career coaching is probably somewhat harder to monetize than the kind of product CFAR is selling).
I also think you are underestimating the degree to which the paid workshops were a necessity for CFAR getting to exist at all. Since there is a lot of downtime cost in being able to run workshops (you need to have a critical mass of teaching staff, you need to do a lot of curriculum development, have reliable venues, etc.) and the EA community didn't really exist yet when CFAR got started, it was never really an option for CFAR to fully run off of donations, and CFAR additionally wanted to make sure it actually produced something that people would be willing to pay for, so offering paid workshops was one of the only ways to achieve those two goals. I also generally think it's a good idea for projects like CFAR to ensure that they are producing a product that people are willing to pay a significant amount of money for, which is at least a basic sanity check on whether you are doing anything real.
As an example, I encouraged Lynette to ask people whether they would be willing to pay for her coaching, and ideally ask them for at least some payment even if she can't break even, to make sure that the people she is offering services to are filtered for the people who get enough value out of it to spend $50 per session, or something in that space (she had also considered that already on her own, though I don't remember the current state of her asking her clients for payment).
I just remembered that 80k actually did consider monetizing part of its coaching in 2014, which would have probably resulted in a pretty similar model to CFAR's:
Is there a subsection of the audience who might be willing to pay for coaching?
We’re interested in the possibility of making part of the coaching self-funding. Our best guess was that the people who will be most willing to pay for coaching are people from tech and finance backgrounds aged 25-35. We found that about 20% of the requests fell in this category, which was higher than our expectations.
Re retreats:
I think it's quite plausible that 80k organizing retreats would be valuable, in particular in a world where CFAR isn't filling that niche. CEA also organized a large number of retreats of a similar type in the last year (I attended one on individual outreach, and I know that they organized multiple retreats for group organizers, and at least one operations retreat), presumably because they think that is indeed a good idea (the one that I attended did seem reasonably valuable, and a lot of its design was clearly influenced by CFAR workshops, though I can't speak to whether that overall initiative was worth it).
afterwards giving them access to the relevant written material
I agree that 80k also has a lot of impact via their written material, but I think that is because they have invested a very large fraction of their resources into producing those materials (80k would likely be unable to run as many workshops as CFAR and also produce the written material). I think if 80k were focusing primarily on coaching, it would be very unlikely to produce good written material that would stand well on its own, though I expect it would still produce a good amount of value (and it might still produce some writings, but likely not ones that make much sense without the context of the coaching, similar to CFAR). As such, I am skeptical of your claim that switching to that model would get rid of ~100% of 80k's value. I expect it would change their value proposition, but they would likely still have a good chance of being competitive in terms of impact (and fully switching towards a coaching model is something that I've heard 80k consider multiple times over the years).
Your answer could be expressed in the form of premises, right? Premises are just propositions that bear on the likelihood of the conclusion.
I think if you define "premise" more broadly to mean "propositions that bear on the likelihood of the conclusion" then you are closer, but still not fully there. A crux would then be defined as "a set of premises that, when falsified, would provide enough evidence that you would change your mind on the high-level claim", which is importantly still different from "identifying differences in our premises"; in particular, it emphasizes identifying specific premises that are particularly load-bearing for the argument at hand.
(This wouldn't be a very standard usage of "premise" and doesn't seem to align super well with any definitions I can find in dictionaries, which all tend to either be about logical inference or about subsets of a specific logical argument that is being outlined, but it doesn't seem like a horrible stretch from the available definitions. Though I wouldn't expect people to intuitively know what you mean by that definition of "premise".)
I do still expect people to give quite drastically different answers if you ask them "is 'not X' a premise of your belief?" vs. "would observing X change your mind about this belief?". So I wouldn't recommend using that definition if you were actually trying to do the thing that double crux is trying to do, even if you define it beforehand. I do think that the norms from (classical) rhetoric and philosophy of trying to identify differences in your premises are good norms and generally make conversations go better. I agree that Double Crux is trying to operationalize and build on that, and isn't doing some weird, completely novel thing, though I do think it extends it in a bunch of non-trivial ways.
Halstead @ 2019-10-07T15:20 (+3)
I disagree that 80k should transition towards a £3k retreat + no online content model, but it doesn't seem worth getting into why here.
On premises, here is the top definition I have found from googling... "a previous statement or proposition from which another is inferred or follows as a conclusion". This fits with my (and CFAR's) characterisation of double cruxing. I think we're agreed that the question is which premises you disagree on cause your disagreement. It is logically impossible that double cruxing extends this characterisation.
Habryka @ 2019-10-07T17:21 (+4)
I disagree that 80k should transition towards a £3k retreat + no online content model, but it doesn't seem worth getting into why here.
I never said 80k should transition towards a retreat + no online content model. What I said is that it seems plausible to me it would still produce a lot of value in that case, though I agree that their current model seems likely a better fit for them, and probably overall more valuable. Presumably you also disagree with that, but it seemed important to distinguish.
"a previous statement or proposition from which another is inferred or follows as a conclusion"
Given that in the scenario as outlined there was no "previous statement" or "previous proposition", I am still confused about how you think this definition fits. In the scenario at hand, nobody first outlined their complete argument for why they think the claim being discussed is true, and as such, there is no "previous statement or proposition" that can be referred back to. This definition seems to refer mostly to logical argument, which doesn't really apply to most human cognition.
I am not super excited about debating definitions, and we both agree that using the word premise is at least somewhat close to the right concept, so I am not very excited about continuing this thread further. If you really care about this, I would be glad to set up an experiment on Mechanical Turk in which we ask participants to list the necessary premises of a belief they hold, and see how much their responses differ from asking them what observations would change their mind about X. It seems clear to me that their responses would differ significantly.
which premises you disagree on cause your disagreement.
This is still only capturing half of it, even under the definition of premise that you've outlined here, which seems to be a reasonable definition of what a crux for a single participant in the conversation is. A double crux would be "a set of premises that, when viewed as a new conjunctive proposition, you both assign opposite truth values to, and that, when flipped, would cause both of you to change your minds". That alone obviously doesn't yet make a procedure, so there is still a bunch more structure, but I would think of the above as an accurate enough description to start working with it.
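To restate that a bit more formally (this is my own loose formalization, not anything official from CFAR): for parties $X$ and $Y$, where $X$ believes $A$ and $Y$ believes $\neg A$, a statement $B$ is a double crux roughly when

$$X \text{ believes } B, \quad Y \text{ believes } \neg B, \quad P_X(A \mid \neg B) \ll P_X(A), \quad P_Y(\neg A \mid B) \ll P_Y(\neg A),$$

i.e. each party's confidence in their top-level position would drop substantially if they changed their mind about $B$.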
It is logically impossible that double cruxing extends this characterisation.
I don't think I really know how to engage with this. Obviously it's possible for double-crux to extend this characterization. I even outlined a key piece that was missing from it in the above paragraph.
But it's also a procedure that is meant to be used with real people, where every bit of framing and instruction matters. If you really believe this, let us run a test and just give one group of people the instruction "find the premises you disagree on that cause your disagreement" and the other group the full double crux worksheet. Presumably you agree that the behavior of those groups will drastically differ.
You maybe have something more specific in mind when you say "logically impossible", but given that we are talking about a high-level procedure, proofs of logical impossibility seem highly unlikely to me.
Habryka @ 2019-10-07T04:26 (+16)
To answer the question of which material that CFAR teaches at their workshops I consider valuable, here is a list of classes that I've seen have a big impact on individuals (sometimes including myself), and for which I also have separate reasons to think they are valuable.
- Units of Exchange
- Basically an introduction into consequentialist reasoning, trying to get people to feel comfortable trading off different resources that they previously felt were incomparable. A lot of the core ideas in EA are based off of this, and I think it's generally a good introduction into that kind of thinking.
- Inner Simulator
- Basic practical introduction into System 1 and System 2 level processing, and going into detail on how to interface between S1 and S2 processing.
- Trigger-Action Planning
- Basic introduction into how associative processing works in the brain, where it tends to work, and where it tends to fail, and how to work around those failure modes. In the literature the specific technique is known as "Mental contrasting with implementation intentions" and is probably one of the most robust findings in terms of behavior change in behavioral psychology.
- This class is often particularly valuable because I've seen it provide people with their first real mechanistic model of the human mind, even if a simplified one. A lot of people don't really have any mechanistic baseline of how human cognition works, and so the simplified statement that "human cognition can be modeled as a large pile of programmed 'if-then statements'" can get people initial traction on figuring out how their own mind works.
- Goal Factoring
- For most attendees this has a lot of overlap with basic 80k coaching. It's practice in repeatedly asking yourself "why is this thing that I am doing important to me, and could I achieve it some better way?". This is probably the class that I've seen have the biggest effect in terms of causing career changes in participants, mostly by getting them to think about why they are pursuing the career they are pursuing, and how they might be able to achieve their goals better.
- Understanding Shoulds
- This covers a lot of the material in the Minding Our Way "Replacing Guilt" series, which many EAs and people that I trust have reported benefiting a lot from, and whose core conclusions are quite important for a lot of thinking about how to have a big impact in the world, how morality works, reminding people that they are allowed to care about things, etc.
- Focusing
- Based on Gendlin's "Focusing" book and audiobook, it teaches a technique that forms the basis of a significant fraction of modern therapeutic techniques and I consider a core skill for doing emotional processing. I've benefited a lot from this, and it also has a pretty significant amount of evidence behind it (both in that it's pretty widely practiced, and in terms of studies), though only for the standards of behavioral psychology, so I would still take that with a grain of salt.
- Systemization
- This is basically "Getting Things Done", the book, in a class. I, and a really large number of people I've worked with and who seem to be good at their job, consider this book core reading for basically anyone's personal productivity, and I think teaching this is pretty valuable. This class in particular tends to help people who bounced off of the book, which still recommends a lot of practices that I've seen young people in particular bounce off of, like putting everything into binders and getting lots of cabinets to put those binders in, instead of having good digital systems.
- Double Crux
- We've discussed this one above a good amount. In particular, I've seen this class cause a bunch of people who had previously had dozens of hours of unproductive or really conflict-heavy conversations to finally have productive ones; the most easily referenced and notable example is probably a conversation between Scott Garrabrant and Eric Drexler that I think significantly moved the conversation around AI Alignment forward.
All of the above strike me as pretty robustly good concepts to teach; they already make up more than 50% of the intro workshops, and they are pretty hard to get a good grasp on without reading ~6 books and having substantial scaffolding to actually put time into practicing the relevant ideas and techniques.
Pablo_Stafforini @ 2019-10-07T12:57 (+28)
I agree that these are pretty valuable concepts to learn. At the same time, I also believe that these concepts can be learned easily by studying the corresponding written materials. At least, that's how I learned them, and I don't think I'm different from the average EA in this respect.
But I also think we shouldn't be speculating about this issue, given its centrality to CFAR's approach. Why not give CFAR a few tens of thousands of dollars to (1) create engaging online content that explains the concepts taught at their workshops and (2) run a subsequent RCT to test whether people learn these concepts better by attending a workshop than by exposing themselves to that content?
Habryka @ 2019-10-07T17:32 (+23)
I would be open to helping run such an RCT, and by default would expect the written material without further assistance to have relatively little impact.
I also think that, for many people, asking them to read the related online material will have a much lower completion rate than going to a workshop, and figuring out how to deal with that would be a major uncertainty in the design of the RCT. I have many friends whom I desperately tried to get to read the material that explains the above core concepts, sadly without success, and who finally got interested in all of the above after attending a CFAR workshop.
In my last 5 years of working in EA and the rationality community, I have repeatedly been surprised by how often even very established EAs have read almost no introductions to the material I outlined above, with the CFAR workshop being their first introduction to it. This includes large parts of the staff at CEA, as well as many core group organizers I've met.
I don't expect CFAR putting out online material to help much with this, since roughly the same holds true for 80k material, and a lot of the concepts above actually already have good written explanations to them.
You seem to be very optimistic about getting people to read written content, whereas my experience has been that people are very reluctant to read content of any type that is not fiction or is of very high relevance to some particular niche interest of theirs. Inviting people to a workshop seems to work a lot more reliably to me, though obviously with written material you get a much broader reach, which can compensate for the lower conversion rate (and which medium makes sense to optimize I think hinges a lot on whether you care about getting a small specific set of people to learn something, vs. trying to get as many people as possible to learn something).
Pablo_Stafforini @ 2019-10-08T09:29 (+33)
Thank you. Your comment has caused me to change my mind somewhat. In particular, I am now inclined to believe that getting people to actually read the material is, for a significant fraction of these people, a more serious challenge than I previously assumed. And if CFAR's goal is to selectively target folks concerned with x-risk, the benefits of ensuring that this small, select group learns the material well may justify the workshop format, with its associated costs.
I would still like to see more empirical research conducted on this, so that decisions that involve the allocation of hundreds of thousands of EA dollars per year rest on firmer ground than speculative reasoning. At the current margin, I'd be surprised if a dollar given to CFAR to do object-level work achieves more than a dollar spent in uncovering "organizational crucial considerations"—that is, information with the potential to induce a major shift in the organization's direction or priority. (Note that I think this is true of some other EA orgs, too. For example, I believe that 80k should be using randomization to test the impact of their coaching sessions.)
vaidehi_agarwalla @ 2019-10-08T00:46 (+3)
Hi Oliver, is there a sequence out there explaining these terms? A quick Google/LW/CFAR search didn't throw anything up which covered all the concepts you mention above (there's a sequence called Hammertime, but it didn't cover all the concepts you mention). I think one of the benefits of a centralized source of information is that it's accessible and intuitive to find. In the current state, it seems that you would have to go out of your way to find these kinds of writeups, and possibly not even know they exist.
Habryka @ 2019-10-08T02:57 (+24)
I don't think there is a single link, though most of the concepts have a pretty good canonical resource. I do think it usually takes quite a bit of text to convey each of those concepts, so I don't think creating a single written reference is easily feasible, unless someone wants to produce multiple books worth of content (I've historically been impressed with how much content you can convey in a 1.5 hour long class, often 10 blog posts worth, or about half of a book).
I don't think I have the time to compile a full list of resources for each of these concepts, but I will share the top things that come to mind.
- Units of Exchange: I think microeconomics classes do a pretty good job of this, though they are usually a bit abstract. A lot of Scott Alexander's writing gets at this, with the best introduction probably being his "Efficient Charity: Do unto others..."
- Inner Simulator: Covered pretty well by Thinking, Fast and Slow by Daniel Kahneman
- Trigger-Action Planning: Also covered pretty well by Thinking, Fast and Slow, though with some Getting Things Done thrown into it
- Goal Factoring: I don't actually know a good introduction to this, alas.
- Understanding Shoulds: Mindingourway.com's "Replacing Guilt" series
- Focusing: The best introduction into this is Gendlin's audiobook, which I highly recommend and is relatively short
- Systemization: As mentioned, Getting Things Done is the best introduction into this topic
- Double Crux: I think Duncan Sabien's introduction for this is probably the best one
Max_Daniel @ 2019-10-08T23:24 (+9)
I don't think this is very important for my overall view on CFAR's curriculum, but FWIW I was quite surprised by you describing Focusing as
a technique that forms the basis of a significant fraction of modern therapeutic techniques and I consider a core skill for doing emotional processing. I've benefited a lot from this, and it also has a pretty significant amount of evidence behind it (both in that it's pretty widely practiced, and in terms of studies), though only for the standards of behavioral psychology, so I would still take that with a grain of salt.
Maybe we're just using "significant fraction" differently, but my 50% CI would have been that focusing is part of 1-3 of the 29 different "types of psychotherapy" I found on this website (namely "humanistic integrative psychotherapy", and maybe "existential psychotherapy" or "person-centred psychotherapy and counselling"). [Though to be fair on an NHS page I found, humanistic therapy was one of 6 mentioned paradigms.] Weighting by how common the different types of therapy are, I'd expect an even more skewed picture: my impression is that the most common types of therapy (at least in rich, English-speaking countries and Germany, which are the countries I'm most familiar with) are cognitive-behavioral therapy and various kinds of talking therapy (e.g. psychoanalytic, i.e. broadly Freudian), and I'd be surprised if any of those included focusing. My guess is that less than 10% of psychotherapy sessions happening in the above countries include focusing, potentially significantly less than that.
My understanding had been that focusing was developed by Eugene Gendlin, who after training in continental philosophy and publications on Heidegger became a major though not towering (unlike, say, Freud) figure in psychotherapy - maybe among the top decile but not the top percentile in terms of influence among the hundreds of people who founded their own "schools" of psychotherapy.
I've spent less than one hour looking into this, and so might well be wrong about any of this - I'd appreciate corrections.
Lastly, I'd appreciate some pointers to studies on focusing. I'm not doubting that they exist - I'm just curious because I'm interested in psychotherapy and mental health, but couldn't find them quickly (e.g. I searched for "focusing Gendlin" on Google Scholar).
Habryka @ 2019-10-09T00:35 (+6)
I haven't looked much into the literature on this, so I might be wrong, but my sense was that it was more a case of "lots of therapeutic techniques share a lot of structure, and Gendlin formalized it into a specific technique, so a lot of them share a lot of structure with what Gendlin is doing", which makes sense, because that's how focusing was developed. From the Wikipedia article:
Gendlin developed a way of measuring the extent to which an individual refers to a felt sense; and he found in a series of studies that therapy clients who have positive outcomes do much more of this. He then developed a way to teach people to refer to their felt sense, so clients could do better in therapy. This training is called 'Focusing'. Further research showed that Focusing can be used outside therapy to address a variety of issues.
The thing that made me more comfortable saying the above was that Gendlin's goal (judging from the focusing book I read and the audiobook I listened to) seems to have been in significant part a study of "what makes existing therapeutic techniques work", rather than "let's develop a new technique that will revolutionize therapy". So even if a school of therapy isn't downstream of Gendlin, you would expect a good fraction of them to still have focusing-like things in them, since Gendlin seemed to be more interested in refining techniques than in revolutionizing them.
I do agree that I should probably stop using words like "significant fraction". I intended to mean something like: 20%-30% of therapy sessions will likely include something that is pretty similar to focusing, even if it isn't exactly called that. That still seems roughly right to me, and matches my own experience of therapy with a practitioner who specialized in CBT and some trauma-specific therapies; our actual sessions weren't really utilizing either of those schools and were basically just focusing sessions, which to that therapist seemed like the natural thing to do in the absence of following a more specific procedure.
Some of my impression here also comes from two textbooks I read on therapy whose names I've currently forgotten, both of which were mostly school-independent and seemed to emphasize a lot of focusing-like techniques.
However, I don't have super strong models here, and a significant fraction of my models are downstream of Gendlin's own writing (who, as I said, seems to describe focusing more as "the thing that makes most types of therapy work"), so I am pretty open to being convinced I am wrong about this. I can particularly imagine that Freudian approaches could do less focusing, since I've basically not interacted with anything in that space and feel kinda averse to it, so I am kind of blind to a significant fraction of the therapy landscape.
Max_Daniel @ 2019-10-09T11:36 (+5)
Thanks, this is helpful!
I hadn't considered the possibility that techniques prior to Gendlin might have included focusing-like techniques, and especially that he's claiming to have synthesized what was already there. This makes me less confident in my impression. What you say about the textbooks you read definitely also moves my view somewhat.
(By contrast, what you wrote about studies on focusing probably makes me somewhat reduce my guess on the strength of the evidence of focusing, but obviously I'm highly uncertain here as I'm extrapolating from weak cues - studies by Gendlin himself, correlational claim of intuitively dubious causal validity - rather than having looked at the studies themselves.)
This all still doesn't square well with my own experiences with and models of therapy, but they may well be wrong or idiosyncratic, so I don't put much weight on them. In particular, 20-30% of sessions still seems higher than what I would guess, but overall this doesn't seem sufficiently important or action-relevant that I'd be interested to get at the bottom of this.
Max_Daniel @ 2019-10-07T12:18 (+16)
Just a brief reaction:
This makes sense to me as a response to Halstead's question. However, it actually makes me a bit less confident that (what you describe as) CFAR's reluctance to increase legibility is a good idea. An educational institution strikes me as something that can be made legible way more easily and with fewer downsides than an institution doing cutting-edge research in an area that is hard to communicate to non-specialists.
Jan_Kulveit @ 2019-10-08T14:33 (+11)
In my experience, teaching rationality is trickier than the reference class of "education", and it is an area that is kind of hard to communicate to non-specialists. One of the main reasons seems to be that many people have a somewhat illusory idea of how much they understand the problem.
Habryka @ 2019-10-07T16:50 (+10)
I don't think most of the costs that I described as coming from legibility differ that much between research and educational institutions? The American public education system, as well as many other public education systems, actually strikes me as a core example of a system that has suffered greatly due to very strong pressures toward legibility in all of its actions (like standardized curricula combined with standardized testing). I think standardized testing is pretty good in a lot of situations, but in this case it resulted in a massive reduction in variance in a system where most of the value comes from the right tail.
I agree that there are also other separate costs to legibility in cutting-edge domains, but the costs on educational institutions still seem quite significant to me. And most of the costs are relatively domain-general.
Max_Daniel @ 2019-10-07T17:37 (+14)
Thanks, that helps me understand where you're coming from, though it doesn't change my views on CFAR. My guess is we disagree about various more general claims around the costs and benefits of legibility, but unfortunately I don't have time right now to articulate my view on this.
Very roughly, I think I (i) agree with you that excessive optimization for easily measurable metrics has harmed the public education system, and in particular has reduced benefits from the right tail, (ii) disagree with your implied claim that something like "quality-weighted sum of generated research" is an appropriate main criterion for assessing the education system, and thus by extension disagree with the emphasis on right-tail outcomes when evaluating the public education system as a whole, (iii) don't think this tells us much about CFAR, as I both think that CFAR's environment makes increased legibility less risky (due to things like high goal-alignment with important stakeholders such as funders, a more narrow target audience, ...) and also that there are plenty of ways to become more legible that don't incur risks similar to standardized testing or narrow optimization for quantitative metrics (examples: qualitatively describe what you're trying to teach, and why you think this is a good idea; monitor and publish data such as number of workshops run, attendance etc., without narrowly optimizing for any of these; maintain a list of lessons learned).
(I upvoted your reply, not sure why it was downvoted by someone else.)
Habryka @ 2019-10-07T18:08 (+19)
(Reply written after the paragraph was added above)
Thanks for the elaboration! Some quick thoughts:
qualitatively describe what you're trying to teach, and why you think this is a good idea; monitor and publish data such as number of workshops run, attendance etc., without narrowly optimizing for any of these
I think CFAR has done at least everything on this list of examples. You might already be aware of this, but I wanted to make sure it's common knowledge. There are a significant number of posts trying to explain CFAR at a high level, and the example workshop schedule summarizes all the classes. CFAR has also published the number of workshops they've run and their total attendance in their impact reports and on their homepage (currently listing 1045 alumni). Obviously I don't think that alone is sufficient, but it seemed plausible that a reader might walk away thinking that CFAR hadn't done any of the things you list.
disagree with your implied claim that something like "quality-weighted sum of generated research" is an appropriate main criterion for assessing the education system, and thus by extension disagree with the emphasis on right-tail outcomes when evaluating the public education system as a whole
I think there is some truth to this interpretation, but I think it's overall still wrong enough that I would want to correct it. I think the education system has many goals, and I wouldn't summarize its primary output as a "quality-weighted sum of generated research". I don't think going into my models of the education system here is going to be super valuable, though I'm happy to do that at some other point if anyone is interested. My primary point was that optimizing for legibility has clearly had large effects on educational institutions, in ways that would at least be harmful to CFAR if it were affected in the same way (another good example here might be top universities and the competition to get into the top-10 rankings, though I am less confident about the dynamics of that effect).
Habryka @ 2019-10-07T17:49 (+13)
(Edit: the below was written before Max edited the second paragraph into his comment.)
Seems good! I actually think considerations around legibility are quite important, and an area where I expect a good amount of intellectual progress to be made by talking to each other, so I would like to see your perspective written up and to engage with it.
I also want to make sure that it's clear that I do think CFAR should be more legible and transparent (as I said in the writeup above). I have some concerns with organizations trying to be overly legible, but I think we both agree that at the current margin it would be better for CFAR to optimize more for legibility.
(I've sadly had every single comment of mine on this thread strong-downvoted by at least one person, and often multiple people. My sense is that CFAR is a pretty polarizing topic, which I think makes it particularly important to have this conversation, but seems to also cause some unfortunate voting patterns that feel somewhat stressful to deal with.)
anonymous_ea @ 2019-10-08T03:16 (+21)
I'm sorry to see the strong downvotes, especially when you've put in more effort on explaining your thinking and genuinely engaging with critiques than perhaps all other EA Fund granters put together. I want you to know that I found your explanations very helpful and thought-provoking, and I really like how you've engaged with criticisms both in this thread and the last one.
Max_Daniel @ 2019-10-08T10:23 (+7)
Seconded.
(I'm wondering whether this phenomenon could also be due to people using downvotes for different purposes. For example, I use votes roughly to convey my answer to the question "Would I want to see more posts like this on the Forum?", and so I frequently upvote comments I disagree with. By contrast, someone might use votes to convey "Do I think the claims made in this comment are true?".)
MichaelA @ 2019-10-11T05:11 (+12)
Data point: I often feel a pull towards up-voting comments that I feel have stimulated or advanced my thinking or exemplify a valuable norm of transparency and clarity, but then I hold back because I think I might disagree with the claims made or I think I simply don't know enough to judge those claims. This is based on a sense that I should avoid contributing to information cascade-type situations (even if, in these cases, any contribution would only be very slight).
This has happened multiple times in this particular thread; there've been comments of Oliver's that I've very much appreciated the transparency of, but with which I felt like I still might slightly disagree overall, so I avoided voting either way.
(I'm not saying this is the ideal policy, just that it's the one I've taken so far.)
Habryka @ 2019-10-08T03:21 (+6)
Thank you! :)
Halstead @ 2019-10-07T15:10 (+6)
Yes, I don't fully understand why they're not legible. A 4-day workshop seems pretty well-placed for a carefully done impact evaluation.
Habryka @ 2019-10-07T17:54 (+15)
For whatever it's worth, this seems right to me, and I do want to make sure people know that I do think CFAR should try to be more legible at the margin.
I mentioned this in my writeup above:
I think that CFAR is still likely optimizing too little towards legibility, compared to what I think would be ideal for it. Being legible allows an organization to be more confident that its work is having real effects, because it acquires evidence that holds up to a variety of different viewpoints.
I do think the question of what the correct outcome measurements for an impact evaluation would be is non-trivial, and I would be interested in whether people have ideas for good outcome measurements.
Khorton @ 2019-10-06T22:22 (+14)
An aside: I had never heard of 'Murphyjitsu' before, but use pre-mortems in my personal and professional life regularly. I'm surprised people found the name 'Murphyjitsu' easier to engage with!
Habryka @ 2019-10-06T22:29 (+8)
It's a bit of a more playful term, which I think makes sense in the context of a workshop, but I also use the two terms interchangeably and have seen CFAR staff do the same, and I usually use "pre-mortem" when I am not in a CFAR context.
I don't have strong opinions on which term is better.
Jess_Whittlestone @ 2019-10-04T16:38 (+32)
Firstly, I very much appreciate the grant made by the LTF Fund! On the discussion of the paper by Stephen Cave & Seán Ó hÉigeartaigh in the addenda, I just wanted to briefly say that I’d be happy to talk further about both: (a) the specific ideas/approaches in the paper mentioned, and also (b) broader questions about CFI and CSER’s work. While there are probably some fundamental differences in approach here, I also think a lot may come down to misunderstanding/lack of communication. I recognise that both CFI and CSER could probably do more to explain their goals and priorities to the EA community, and I think several others beyond myself would also be happy to engage in discussion.
I don’t think this is the right place to get into that discussion (since this is a writeup of many grants beyond my own), but I do think it could be productive to discuss elsewhere. I may well end up posting something separate on the question of how useful it is to try and “bridge” near-term and long-term AI policy issues, responding to some of Oli’s critique - I think engaging with more sceptical perspectives on this could help clarify my thinking. Anyone who would like to talk/ask questions about the goals and priorities of CFI/CSER more broadly is welcome to reach out to me about that. I think those conversations may be better had offline, but if there's enough interest maybe we could do an AMA or something.
Ben Pace @ 2019-10-04T22:59 (+6)
Neat! I’d be very interested in talking about/debating this, perhaps in the comments of another post. In particular, the sections above that feel most cruxy to me are the ones on the centrality of conceptual progress to AI strategy/policy work: what that looks like, how to figure out what new concepts are needed, or whether this is even an important part of AI policy, are all things I’d be interested to discuss.
Habryka @ 2019-10-05T04:49 (+4)
I would definitely also be interested in talking about this, either somewhere on the forum, or in private, maybe with a transcript or summarized takeaways from the conversation posted back to the forum.
Larks @ 2019-10-03T21:38 (+31)
Thanks for writing this up. Impressive and super-informative as ever. Especially with Oliver I feel like I get a lot of good insight into your thought process.
Ozzie Gooen @ 2019-10-04T11:07 (+28)
Seconded. I'm quite happy with the honesty. My impression is that lots of people in positions of power/authority can't really be open online about their criticisms of other prestigious projects (or at least don't feel like it's worth the cost). This means that a lot of the most important information is closely guarded within a few specific social circles, which makes it really difficult for those outside them to know what's going on.
I'm not sure what the best solution is, but having at least some people in-the-know revealing their thoughts about such things seems quite good.
Ideally I'd want honest & open discussions that go both ways (for instance, a back-and-forth between evaluators and organizations), but don't expect that any time soon.
I think my preference would be for the EA community to accept norms of honest criticism and communication, but would note that this may be very uncomfortable for some people. Bridgewater has the most similar culture to what I'm thinking of, and their culture is famously divisive.
Habryka @ 2019-10-05T04:35 (+16)
Thank you! I agree with this assessment. My current guess is that it doesn't necessarily make sense for everyone to run my strategy of openness and engaging in lots of discussion, but at the margin I would like to see a lot more of it.
I also have the same sense of feeling like Bridgewater culture is both a good example and something that illustrates the problems of doing this universally.
ofer @ 2019-10-09T13:45 (+25)
It might be useful to get some opinions/intuitions from fund managers on the following question:
How promising is the most promising application that you ended up not recommending a grant for? How would a counterfactually valid grant for that application compare to the $439,197 that was distributed in this round, in terms of EV per dollar?
Habryka @ 2019-10-10T05:09 (+21)
This is a bit hard for me to answer, because there are three grants that I was quite excited about that we didn't end up making, which I think were more valuable than many of the grants we did make, so maybe a different fund member should answer this question.
If I exclude those three grants, I think there were grants we didn't fund that are about as good as the ones we funded, at least from my personal perspective.
It's harder for me to give an answer "from the perspective of the whole fund", but I would still be surprised if the next grant had a marginal cost-effectiveness of less than 90% of the marginal grant this round, though I think these things tend to be pretty high-variance, so probably only 60% of the average grant this round.
ofer @ 2019-10-10T08:33 (+16)
Thank you!
This suggests that an additional counterfactually valid donation of $10,000 to the fund, donated prior to this grant round, would have had (if not saved for future rounds) about 60% of the cost-effectiveness of the $439,197 that was distributed.
It might be useful to understand how much more money the fund could have distributed before reaching a very low marginal cost-effectiveness. For example, if the fund had to distribute in this grant round a counterfactually valid donation of $5MM, how would the cost-effectiveness of that donation compare to that of the $439,197 that was distributed?
Khorton @ 2019-10-03T19:53 (+25)
As always, thank you to the committee for writing up their thoughts. The grants to HIPE, Jess Whittlestone, and Lynette Bye look really interesting - I'd be happy to see updates on their work in the future!
Habryka @ 2019-10-05T04:36 (+5)
Thanks!
I agree, and am also excited to see updates on their work. I've updated on the importance of follow-up investigations since last round, so I might invest fewer resources in the writeups next round, and invest some additional resources into following up with past grantees and getting a sense of how their projects played out.
aarongertler @ 2019-10-03T19:01 (+25)
Regarding Chris Chambers:
The Let’s Fund report linked in the application played a major role in my assessment of the grant, and I probably would not have been comfortable recommending this grant without access to that report.
While you discuss what you believe the positive effects of his work might be, you don't really get into why you think he is the right person to do this project, or why/whether you think this project is stronger than other meta-science initiatives (maybe it was the only such project that applied? Are there others you'd be interested to see apply?).
I assume that some of this is addressed in the lengthy Let's Fund report, but would you be open to summarizing which parts of the report you found most compelling in Chris's favor?
Habryka @ 2019-10-05T04:48 (+16)
Ok, let me go into more detail on that.
I think the biggest obstacle to funding initiatives like this is that it's very hard to even identify a single potentially promising project without looking into the space for quite some time. We don't really have resources available for extensive proactive investigation into grant areas, so someone I reasonably trust suggesting this as a potential meta-science initiative is definitely the biggest reason for us making this grant.
In general, as I mentioned in one of the sections in the writeup above, we are currently constrained to primarily do reactive grantmaking, and so are unlikely to fund projects that did not apply to the fund and were not already high on our list of obvious places to maybe give money to.
I have a strong interest in meta-science initiatives, and Chris Chambers was the only project this round that applied in that space, so that combination was definitely a major factor.
However, I do also think that Chambers has achieved some pretty impressive results with his work so far:
Chambers keeps an online spreadsheet of all the journals that have adopted the format [262].
To date, 140 journals have adopted it, and the fields covered are:
+ Life/medical sciences: neuroscience, nutrition, psychology, psychiatry, biology, cancer research, ecology, clinical & preclinical medicine, endocrinology, agricultural and soil sciences
+ Social sciences: political science, financial and accounting research
+ Physical sciences: chemistry, physics, computer science etc.
+ Generalist journals that cover multiple fields: Royal Society Open Science and Nature Human Behaviour
His success so far has made this one of the most successful preregistration projects I know of to date, and it seems likely that further funding will relatively straightforwardly translate into more journals offering Registered Reports as a potential way to publish.
HaukeHillebrandt @ 2019-10-07T14:28 (+30)
Thank you for the detailed write-ups.
I will focus on the points where I disagree with the write-up of the Chris Chambers / Registered Reports grant (note: this is the grantee of Let's Fund, the organization I co-founded).
1. What if all clinical trials became Registered Reports?
You write:
“Chambers has the explicit goal of making all clinical trials require the use of registered reports. That outcome seems potentially quite harmful, and possibly worse than the current state of clinical science.”
I think, if all clinical trials became Registered Reports, then there’d be net benefits.
In essence, if you agree that all clinical trials should be preregistered, then Registered Reports are merely preregistration taken to its logical conclusion, by being more stringent (i.e. peer-reviewed, less vague, etc.).
Relevant quote from the Let’s Fund report (Lets-Fund.org/Better-Science):
“The principal differences between pre-registration and Registered Reports are:
- In pre-registration, trial outcomes or dependent variables and the way of analyzing them are not described as precisely as could be done in a paper
- Pre-registration is not peer-reviewed
- Pre-registration also often does not describe the theory that is being tested.
For this reason, simple pre-registration might not be as good as Registered Reports. For instance, in cancer trials, the descriptions of what will be measured are often of low quality, i.e. vague, leading to 'outcome switching' (i.e. switching between planned and published outcomes) [180], [181]. Moreover, data processing can often involve very many seemingly reasonable options for excluding or transforming data [182], which can then be used for data dredging pre-registered trials ("With 20 binary choices, 2^20 = 1,048,576 different ways exist to analyze the same data." [183]). Theoretically, preregistration could be more exhaustive and precise, but in practice, it rarely is, because it is not peer-reviewed.”
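(A quick aside on the forking-paths arithmetic in the quoted passage, as my own illustration rather than anything from the report: each of 20 binary data-processing decisions doubles the number of possible analysis pipelines, giving 2^20 in total. A minimal Python sketch:)

```python
from itertools import product

# Each data-processing decision (e.g. exclude outliers or not, log-transform
# or not) is modelled as a binary choice. With 20 such decisions, every
# combination of choices is a distinct way to analyse the same data set.
n_decisions = 20
n_analysis_paths = sum(1 for _ in product([False, True], repeat=n_decisions))

print(n_analysis_paths)    # 1048576
print(2 ** n_decisions)    # same count, computed directly
```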
Also, note that exploratory analysis can still be used in Registered Reports, if it’s clearly labelled as exploratory.
----
2. Value of information and bandwidth constraints
You write:
“Ultimately, from a value of information perspective, it is totally possible for a study to only be interesting if it finds a positive result, and to be uninteresting when analyzed pre-publication from the perspective of the editor.”
Generally, a scientist's priors regarding the likelihood of a treatment being successful should be roughly proportional to the value of information. In other words, if the likelihood that a treatment is successful is trivially low, then the trial is likely too expensive to be worth running, or will increase the false positive rate.
On bandwidth constraints: this seems now largely a historical artifact from pre-internet days, when journals only had limited space and no good search functionality. Back then, it was good that you had a journal like Nature that was very selective and focused on positive results. These days, we can publish as many high-quality null-result papers online in Nature as we want to without sacrifice, because people don’t read a dead tree copy of Nature front to back. Scientists now solve the bandwidth constraint differently (e.g. internet keyword searches, how often a paper is cited, and whether their colleagues on social media share it).
In your example, you can combine all 100 potential treatments into one paper and then just report whether each worked or not. The costs of reporting that a study was carried out are trivial compared to other costs. If the scientist doesn't believe any results are worth reporting, they can just not report them, and we will still have a record of what was attempted (similar to how it is good that we can see unpublished preregistrations on trials.gov that never went anywhere, as data on the size of publication bias).
3. Implications of major journals implementing Registered reports
You write:
“Because of dynamics like this, I think it is very unlikely that any major journals will ever switch towards only publishing registered report-based studies, even within clinical trials, since no journal would want to pass up on the opportunity to publish a study that has the opportunity to revolutionize the field.”
This is traded off against top journals publishing biased results (which follows directly from auction theory, where the highest bidder is likely to pay more than the true price; similarly, people who publish in Nature will be more likely to overstate their results. This is borne out empirically; see https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0050201).
Registered Reports are simply more trustworthy, and this might change the dynamics so that there will be pressure for journals to adopt the Registered Reports format or fall behind in terms of impact factor.
--
3.1 On clarity
You write:
“As a result, large parts of the paper basically have no selection applied to them for conceptual clarity,”
On clarity: Registered Reports will have more clarity because they're more theoretically motivated (see https://lets-fund.org/better-science/#h.n85wl9bxcln4), and because the reviewers, instead of being impressed by results, judge papers more on how clearly and in how much detail the methodology is described. This might aid replication attempts and will likely also be a good proxy for the clarity of the conclusion. Scientists are still incentivized to write good conclusions, because they want their work to be cited. Also, the importance of the conclusion will be de-emphasized. In the optimal case of an RR, "a comprehensive and analytically sophisticated design, vetted down to each single line of code by the reviewers before data collection began" (https://www.nature.com/articles/s41562-019-0652-0) is what happens during the review.
What is missing from the results section is pretty much only the final numbers, which are plugged in after review and data collection, at which point the results section "writes itself". The conclusion section is perhaps almost unnecessary if the introduction already motivates the implications of the research results and, as in many papers, is already used as a more extensive speculative summary.
I think the conclusion section will be a quite short and not very important section in Registered Reports, as is increasingly the case anyway (in Nature, there's sometimes no "redundant" conclusion section).
---
4. Is reducing red tape more important?
You write:
“Excessive red tape in clinical research seems like one of the main problems with medical science today”
I don't think that excessive red tape is one of the main problems with medical science (say, on the same level as publication bias), that there are no benefits to IRBs, or that Registered Reports add red tape or have much to do with the issue you cite. I think a much bigger problem is research waste, as outlined in the Let's Fund report.
Most scientists who publish Registered Reports describe the publication experience as quite pleasant with a bit of front-loaded work (see e.g. https://twitter.com/Prolific/status/1153286158983581696). In my view, the benefits far outweigh the costs.
5. On the differential technological development aspect of Registered Reports
On differential tech development, and perhaps as an aside: note that more reliable science has wide-ranging consequences for many other EA cause areas. Not only has global development had problems with replicability (e.g. https://blogs.worldbank.org/impactevaluations/pre-results-review-journal-development-economics-lessons-learned-so-far and the "worm wars"), but so have areas related to GCBRs (e.g. there's a new Registered Reports initiative for research on influenza; see https://cos.io/our-services/research/flu-lab/).
Habryka @ 2019-10-10T23:53 (+16)
This is great, and I think these counterpoints are valuable to read for anyone interested in this topic. I disagree with sections of this (and sometimes agree but just think the balance of considerations plays out differently), and will try to find the time to respond to this in more detail in at least the coming weeks.
Ben Pace @ 2019-10-11T00:18 (+10)
Note: I think this comment would be considerably easier for me to engage with if it were split into three comments, at the points where you have a break using '--'.
Also, if the formatting of the quotes used the style native to the editor (a ">" followed by a space), it would be easier for me to read.
HaukeHillebrandt @ 2019-10-24T08:36 (+7)
Thanks for the heads up - I've cleaned up the formatting now to make it more readable.
anonymous_ea @ 2019-10-11T01:26 (+7)
Datapoint for Hauke: I am also very interested in this topic and in Hauke's thoughts on it, but I found that the formatting made it difficult for me to read it fully.
Milan_Griffes @ 2019-10-03T21:56 (+17)
Thanks for this substantive post!
Re: CFAR, from the April 2019 grant decisions thread:
I expect to communicate extensively with CFAR over the coming weeks, talk to most of its staff members, generally get a better sense of how CFAR operates and think about the big-picture effects that CFAR has on the long-term future and global catastrophic risk. I think I am likely to then either:
-make recommendations for a set of changes with conditional funding,
-decide that CFAR does not require further funding from the LTF,
-or be convinced that CFAR's current plans make sense and that they should have sufficient resources to execute those plans.
Sounds like the third option is what happened?
Habryka @ 2019-10-03T22:10 (+7)
Of the categories listed, that seems to most accurately summarize what happened (edit: though I don't think this should be seen as a concrete endorsement of CFAR's long-term plans; it has more to do with the considerations about decision-making ability I outlined above). I do think it's still quite possible that in future rounds I will take the first option and make recommendations conditional on some changes, though I feel comfortable with the amount of funding we recommended to CFAR this round.
CFAR also communicated to me that they plan to focus more on external transparency in the coming months, so I expect that I will continue building out my models in this space.
HowieL @ 2019-10-11T03:55 (+12)
For anybody who wants to look more into CSER, Sean provided me with his quick take on a few articles he thinks are representative and that he's proud of.
[Edited to more accurately describe the list as just Sean's quick take]
Sean_o_h @ 2019-10-11T07:36 (+11)
Thanks Howie, but a quick note that this was an individual take by me, rather than necessarily capturing the views of the whole group; different people within the group will have work they feel is more impactful and important.
Updates on a few of the more forward-looking items mentioned in that comment.
- A paper on AI and nukes is now out here: https://www.cser.ac.uk/resources/learning-climate-change-debate-avoid-polarisation-negative-emissions/
- A draft/working paper on methodologies and evidence base for xrisk/gcr is here, with a few more in peer review: http://eprints.lse.ac.uk/89506/1/Beard_Existential-Risk-Assessments_Accepted.pdf
- We're in the process of re-running the biological engineering horizon-scan, as well as finishing up an '80 questions for UK biosecurity' expert elicitation process.
- We successfully hired Natalie Jones, author of one of the pieces mentioned (https://www.sciencedirect.com/science/article/abs/pii/S0016328717301179), and she'll do international law/governance and GCR work with us
HowieL @ 2019-10-11T13:07 (+4)
Ah, sorry. Was writing quickly and that was kind of sloppy on my part. Thanks for the correction!
Edited to be clearer.