Benevolent_Rain's Quick takes

By Benevolent_Rain @ 2023-09-03T07:24 (+4)


Ulrik Horn @ 2023-09-03T07:24 (+15)

Could we please start propagating more nuanced and informed information about nuclear power? At least twice recently I have heard prominent EAs say on podcasts that nuclear is too expensive because it is over-regulated, a claim that is likely misguided at best and wrong at worst. The reason to have better epistemic hygiene around nuclear energy is that I imagine many people exploring EA for the first time have a climate change background, and it will likely seem both weird and off-putting to them if a movement that is supposedly evidence-driven throws out quite uninformed and controversial statements about a hot-button topic.

There is an excellent post on LessWrong about nuclear power showing that not even the pro-nuclear lobby claims the high costs are due to over-regulation. There are many good reasons to invest in nuclear, and Johannes Ackva has written and spoken about this many times; the biggest reason for nuclear is to have a back-up in case we run into problems converting to a renewables-dominated electric grid.

I am very happy to clarify topics around nuclear, coming from the energy industry myself. And for prominent EAs I am sure Johannes also has time to inform people before they make public statements on the topic.

Paul_Christiano @ 2023-09-03T19:00 (+25)

The linked LW post points out that nuclear power was cheaper in the past than it is today, and that today the cost varies considerably between different jurisdictions. Both of these seem to suggest that costs would be much lower if there were a lower regulatory burden. The post also claims that nuclear safety is extremely high, much higher than we expect in other domains and much higher than would be needed to make nuclear preferable to alternative technologies. So from that post I would be inclined to believe that overregulation is the main reason for the high cost (together with the closely related fact that we've stopped building nuclear plants and so don't benefit from economies of scale).

I can definitely believe the linked post gives a misleading impression. But I think if you want to correct that impression it would be really useful to explain why it's wrong. It would be even better to provide pointers to some evidence or analysis, but just a clear statement of disagreement would already be really helpful.

Do you think that greater adoption of nuclear power would be harmful (e.g. because the safety profile isn't good, because it would crowd out investments in renewables, because it would contribute to nuclear proliferation, or something else)? That lowering regulatory requirements would decrease safety enough that nuclear would become worse than alternative power sources, even if it isn't already? That regulation isn't actually responsible for the majority of costs? A mixture of the above? Something else altogether?

My own sense is that using more nuclear would have been a huge improvement over the actual power mix we've ended up with, and that our failure to build nuclear was mostly a policy decision. I don't fully understand the rationale, but it seems like the outcome was regulation that renders nuclear uncompetitive in the US, and it looks like this was a mistake driven in large part by excessive focus on safety. I don't know much about this so I obviously wouldn't express this opinion with confidence, and it would be great to get a link to a clear explanation of an alternative view.

Ulrik Horn @ 2023-09-04T10:47 (+3)

Hi Paul, thanks for taking the time to respond.

My main concern is the combination of contrarian and seemingly uninvestigated claims being made publicly. I am therefore only instrumentally, and at a lower priority, interested in whether nuclear power is actually over-regulated (I think nuclear energy is probably less important than poverty, AI or bio). I do not think I need to spend a lot of time understanding whether nuclear power is over-regulated in order to say that it is over-confident to assert that it is in fact over-regulated. In other words, and in general, I think a smaller burden of proof is required to say that something is highly uncertain than to say, with any significant amount of certainty, that it is one way or the other. Instead of going down a potential nuclear-energy rabbit hole, I would advise people looking for examples of over-regulation to either pick a less controversial example, or, if such examples are not easily found, to avoid making these analogies altogether, as I do not think they are important for the arguments in which these examples were used (i.e. the 2 podcasts).

Still, while I feel confident about my stance above and the purpose of my original post, I want to take the time to respond as carefully as I can to your questions, making it clear that in doing so we are shifting the conversation away from my originally stated purpose of improving optics/epistemics and towards digging into what is actually the case with nuclear (a potential rabbit hole!). Unfortunately, I also do not have much time to look more deeply into this (though I did, on three occasions, assess nuclear in terms of costs for powering a civilizational shelter - it did not look promising, and I am happy to share my findings).

So I will attempt to answer your questions, but in order to stick with my main recommendation (to be epistemically humble), my answers are probably not very informative. Mostly, I think, I am pointing at the level of scrutiny required to answer these questions with more certainty, especially publicly and as a representative of an evidence-based movement.

To your specific questions:

  1. I am uncertain whether higher adoption of nuclear power would be harmful, as the impact of energy is complex and entangled with economic, social and environmental considerations, to name a few. That said, and based on a very limited understanding of all these topics, I think the main drawback of nuclear power could be its high cost, especially if deployed in low-income countries: if nuclear costs remain high, such governments have less money for critically needed expenditure on healthcare and education. I mention this because I do not think it is clear-cut that, in a trade-off between the environment and global health, one should always choose the environment. I think, in large part because I have been convinced by Johannes Ackva, that the main benefit of nuclear power is to have another tool ready to go in our tool-kit if the transition to a mostly renewables-powered grid becomes challenging. So perhaps the best strategy for nuclear power is to be careful about committing to large capital expenditures now, especially in low-income countries, and instead to try to get the cost of small modular reactors (SMRs) down as much as possible. This cost reduction might require quite a bit of deployment, so perhaps something like Germany's support of solar is needed in the SMR space. But again, this is a super challenging topic, so I do not want to take any strong stances here. Moreover, it might not be so much a question of whether nuclear is net harmful as of finding a way to progress with nuclear strategically, in a way that maximizes its potential benefits.
  2. On safety requirements, again I think this is super complicated and I am hesitant to take any strong stances. One thing I read while quickly looking into regulation driving cost was that in the 90s or thereabouts there was a requirement to significantly increase the thickness of the containment structure. I have no idea whether this would have prevented or minimized damage in past nuclear accidents. Additionally, there might be military and security considerations (the military, to be sure, puts plenty of spanners in the works for wind energy too, with concerns about effects on radar!), as we see with the threat of attacks on the Ukrainian Zaporizhzhia plant and the general attention everywhere to security post-9/11. So I think one would need to first identify which regulatory requirements have increased cost, then look at why each was implemented, and make an assessment in each case before landing on some general view of whether these safety requirements were overkill. I would also push back a bit on the framing I perceive in your question that nuclear power is over-regulated. As I commented on the LW post, nuclear, wind and solar currently have roughly the same number of deaths per unit of energy produced. And the "worst" generation sources like coal and gas are not something we want to aspire to; I think instead we want all generation sources to be safe. For example, I can imagine cutting costs in wind energy by relaxing safety standards, but this would mean a lot more accidents during construction and workers dying at a higher rate. I think this is where utilitarian ethics starts having limitations: at least in wind energy it seems somewhat reckless to relax safety standards; it simply does not feel right and would not be seen as proper behavior by the people and organizations involved. I am not sure how relaxing safety standards would play out in nuclear energy, but I would caution against arguing for relaxed standards without understanding properly which individuals are likely to be affected negatively, how they would be affected, and whether this seems like appropriate behavior by the parties involved. That said, there might be cost-driving safety requirements in nuclear that are unlikely to affect lives - Johannes has done more research than me here and I am quite happy to defer to his view that the evidence points towards over-regulation having been deliberately pursued by anti-nuclear activists.
  3. Whether regulation is responsible for the majority of the cost difference between nuclear and renewables is something I have not been able to find quantified assessments of. I think this again points to the complexity of the issue, and again I would caution against making strong statements here without deeply understanding the various safety requirements and their impact on cost. If it were clear that regulation was the main culprit, and that relaxing it would be more or less harmless, I think most of the pro-nuclear lobby would be making strong statements about this. As that does not seem to be the case, I would be much more cautious about assuming it. A related observation: renewables are on a downward cost curve, and I have not seen a downward cost curve for nuclear anywhere. Thus, even if relaxing regulation gets nuclear prices down a notch, projecting forward, nuclear power might struggle to keep up with the decreasing costs of renewables.

One thing I have learnt from working in and observing the energy industry is to be humble and uncertain. Very few people saw the cost decrease in solar coming. It might well be that if we had built nuclear instead, we would have decarbonized more quickly and with a higher likelihood, but I am deeply uncertain, having seen how hard it is to forecast such a complex and entangled industry.

Lastly, I am a little uncertain about what you are asking for in your second paragraph, so please let me know if you think I missed responding to something you said. I am happy to share the evidence I have come across (not much!) and could also make a clearer statement of disagreement if you think that is appropriate. If so, I would appreciate a bit more detail about what you think I should clearly disagree with.

jackva @ 2023-09-04T09:00 (+22)

Since I was mentioned in the original post, I quickly (q.e.d.) wanted to lay out my position - where I agree with Ulrik and what my own take on nuclear is (where, I expect, Ulrik and I somewhat disagree):


1. I agree with Ulrik that there is sometimes a “reflexive” pro-nuclearism in EA/rationality circles that seems to be related to widely held priors such as “regulation is bad”, “technology is good”, “greens got it all wrong”, etc. I think this is something where we need to be careful as a movement, because a lot of those priors are imported from typical, but not justified, ideological commitments such as the techno-optimistic libertarianism popular in Silicon Valley. When someone comes out as pro-nuclear on a podcast and blames regulation as the main / only culprit for nuclear’s uncompetitiveness, I think it would be good if they knew the history pretty well, so that it does not come across as a slogan based on priors (I make no judgment on what happened here, since Ulrik has not identified the podcasts).

2. At the same time, I believe that the evidence strongly suggests that:

  • Over-regulation is a significant driver of increased cost, and this was a deliberate strategy by anti-nuclear activists
  • The overall impact of the anti-nuclear / environmental movement on the development of nuclear power was very significant and very negative, via 
    • Over-regulation
    • Social stigma
    • Blocking of innovation in nuclear (famously, Senator Kerry’s opposition to the Integral Fast Reactor)
    • The nuclear industry entering a defensive posture, not innovating
    • Crowding nuclear out of the conversation, focusing instead on intermittent renewables as the energy source preferred for socio-cultural reasons (“small is beautiful”, “harmony with nature”)
       
  • At the same time, the anti-nuclear movement is not alone to blame for the stalling of gigawatt-scale nuclear in the West; other factors typically cited include:
    • Flattening electricity demand reducing the need for massive new capacity
    • In the US, the decentralized nature of utilities, making economies of scale and learning hard
    • De-regulation of electricity markets making it harder to finance large projects
    • Conflation of nuclear war and nuclear energy
       
  • Epistemically the situation is such that the evidence will always allow for multiple explanations / different weights on different factors because
    • We have very small N (30? countries) with lots of explanatory variables (energy market structures, resources, political systems, strength of different political forces, etc.)
    • Lots of causal interactions that are hard to track precisely (e.g. how much should we blame the nuclear industry’s failure to keep innovating on the defensive posture induced by anti-nuclear activists, who have had the industry in retreat since the late 80s?)
    • Combinatorial causation, e.g. maybe the non-stalling of France required (a) extreme energy import dependence and two oil crises illustrating vulnerability, (b) centralized governance and (c) being a nuclear power already whereas the stalling in the US required (a) flattening electricity demand, (b) decentralized utilities, (c) a trend of deregulation and (d) anti-nuclear activists.

Ulrik Horn @ 2023-09-04T09:55 (+5)

I think I agree with most if not all of the above (and on some points I would defer to you, Johannes, as you have done much more research than me).

Another point you might agree with: renewables have significantly disrupted the operations and revenues of nuclear. Wind and solar especially ramp up and down quickly with cloud cover and the approach/retreat of weather systems, and existing nuclear is not well suited to such fast ramp rates. I think this has decreased the percentage of the year that nuclear power plants are fully operational, and that there are "unnecessary" periods where nuclear could have commanded high prices but cannot, as it takes too long to stop and start these plants. I am not too sure about this, but I think I have heard nuclear operators citing it as a reason for needing to shut down reactors for commercial reasons.

jackva @ 2023-09-04T12:42 (+3)

Yes, thanks -- the destruction of electricity markets for baseload sources is indeed another effect that makes building nuclear (and coal) harder.

Robi Rahman @ 2023-09-03T21:06 (+3)

"I am very happy to clarify topics around nuclear, coming from the energy industry myself."

What part of the energy industry do you work in?

Larks @ 2023-09-03T18:09 (+2)

The post you link to seems to say that regulations could be relaxed a lot (presumably lowering costs) without increasing risk:

The regulatory standards for nuclear are much higher than they are for coal, gas, oil, food or medicine. Whether this is good or bad depends on your perspective, but safety regulations in nuclear energy could probably be drastically lower and wouldn’t increase harm in any meaningful way.

I also don't think it's fair to say "not even the pro-nuclear lobby" without engaging with the arguments about ALARA etc. from Devanney - whose recommendations about regulator incentive reform etc. seem to be basically endorsed by the LW post you link to.

jackva @ 2023-09-03T14:42 (+2)

Which podcasts were those?

Ulrik Horn @ 2023-09-03T14:57 (+2)

I didn't want to publicly shame anyone, but it was once on Clearer Thinking and once on FLI's podcast, I think. There just seemed to be this meme spreading, and I thought it would be good to nip it in the bud, as at least to me the statements seemed overconfident (even though I think in one or perhaps both instances the speakers did preface the statement with "I think"). In general, I perceive something odd going on with nuclear in EA/rationalist circles, where there seems to be a (to my mind) unsupported and uninvestigated bias towards nuclear. The first time I became suspicious of something odd was on this question on Good Judgement Open that I forecasted on, where even in the face of clear evidence most forecasters simply refused to believe there would be delays in opening the UAE's first nuclear power plant.

jackva @ 2023-09-03T15:17 (+4)

I think it's mostly an overextended / unnuanced contrarian reaction to the mainstream environmentalists' anti-nuclearism.

Ulrik Horn @ 2023-09-03T16:41 (+1)

Yes, that seems plausible. But I still think we as a world, and especially we as Effective Altruists, would be better off adjusting to something more neutral and nuanced. My main call is for us to stop overextending and land somewhere closer to a nuanced, evidence-based stance on nuclear power, ideally one that does not unnecessarily raise eyebrows among mainstream people. As a movement, I think we have enough eccentric takes for the mainstream as it is.

Ulrik Horn @ 2024-01-22T12:04 (+7)

Sales professionals might be able to meaningfully contribute to reducing bio x-risk by working for germicidal UV companies, promoting their products and increasing sales. This is not my own idea, but I do not think I have seen this career track mentioned before and thought it might be useful to some - people with sales backgrounds might not otherwise easily find impactful roles (perhaps apart from fundraising and donor relations). If you would like more details, please comment here and I will share as much detail as I have on this opportunity.

Ulrik Horn @ 2024-03-06T08:15 (+4)

Risk-neutral grantmakers should, if they have not already, strongly consider modifying their position. If such a grantmaker has a choice between an intervention with 1000 utils of potential impact but only a 1% chance of working out (10 utils in expectation), and an intervention with 10 utils of potential impact that is 90% likely to work out (9 utils in expectation), I would suggest going with the latter at this point, while the x-risk community is hopefully still in its early days.

The reason is that having wins has value in and of itself. I think this is especially true in the x-risk domain, where the path to impact is uncertain and complex. At least now, in the hopefully early days of such work, there might be significant value in simply demonstrating that we can get something done: to ourselves, to major donors on the fence about becoming "EA/x-risk donors", and to talent wondering whether EA "is for real".

niplav @ 2024-03-06T14:33 (+7)

Isn't the solution to this to quantify the value of a marginal win, and add it to the expected utility of the intervention?
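
For illustration, here is a minimal sketch of that suggestion using the numbers from the quick take above. The +5 "win value" is purely hypothetical, and the sketch assumes the value of a visible win only accrues if the intervention actually succeeds:

```python
# Minimal sketch: fold a hypothetical "value of a visible win" into expected utility.
# All numbers are illustrative placeholders, not estimates anyone has actually made.

def expected_utility(p_success: float, impact_utils: float, win_value_utils: float = 0.0) -> float:
    """Expected utility when a success also yields some field-building value from a visible win."""
    return p_success * (impact_utils + win_value_utils)

# The example from the quick take: 1% chance of 1000 utils vs 90% chance of 10 utils.
print(expected_utility(0.01, 1000))  # 10.0
print(expected_utility(0.90, 10))    # 9.0

# Adding a hypothetical +5 utils for the credibility/morale value of a win
# flips the ordering in favour of the safer intervention.
print(expected_utility(0.01, 1000, win_value_utils=5))  # 10.05
print(expected_utility(0.90, 10, win_value_utils=5))    # 13.5
```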

titotal @ 2024-03-07T09:11 (+3)

You are not going to be able to get an accurate estimate for the "value of a marginal win". 

I also doubt that you can accurately estimate a "1% chance of 1000 utils". In my opinion, guesses like these tend to be based on flimsy assumptions and are usually wildly overestimated.

niplav @ 2024-03-07T12:45 (+9)

What do you mean by "accurate estimate"? The more sophisticated version would be to create a probability distribution over the value of the marginal win, as well as for the intervention, and then perform a Monte-Carlo analysis, possibly with a sensitivity analysis.
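
For concreteness, a rough sketch of that kind of Monte-Carlo setup is below. Every distribution and number is a made-up placeholder rather than anyone's actual estimate, and it again assumes the win value only accrues on success:

```python
# Rough Monte-Carlo sketch: distributions over success probabilities, impacts and the
# value of a visible win, plus a crude sensitivity check on the win-value scale.
# All distributions are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# "Moonshot": ~1% success chance, heavy-tailed impact centred near 1000 utils.
p_moonshot = rng.beta(2, 198, N)
impact_moonshot = rng.lognormal(np.log(1000), 1.0, N)

# "Safe bet": ~90% success chance, impact centred near 10 utils.
p_safe = rng.beta(90, 10, N)
impact_safe = rng.lognormal(np.log(10), 0.3, N)

# Hypothetical value of a visible win, itself uncertain.
win_value = rng.lognormal(np.log(5), 0.5, N)

eu_moonshot = p_moonshot * (impact_moonshot + win_value)
eu_safe = p_safe * (impact_safe + win_value)

print("mean EU, moonshot:", eu_moonshot.mean())
print("mean EU, safe bet:", eu_safe.mean())
print("share of draws where the safe bet wins:", (eu_safe > eu_moonshot).mean())

# Crude sensitivity check: how much does the comparison hinge on the win-value scale?
for scale in (0.0, 2.0, 10.0):
    share = (p_safe * (impact_safe + scale) > p_moonshot * (impact_moonshot + scale)).mean()
    print(f"win value {scale}: safe bet wins in {share:.0%} of draws")
```

With these particular placeholders, the moonshot's heavy tail can give it the higher mean expected utility even while the safe bet comes out ahead in most individual draws, which is exactly the kind of divergence the sensitivity check is meant to surface.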

But I imagine your disagreement goes deeper than that?

In general, I agree with the "just estimate everything" approach, but I imagine you have some arguments here.

Ulrik Horn @ 2024-02-27T11:22 (+3)

Tu Youyou might be a model for EAs. According to Wikipedia, she has saved millions of lives through her discovery of treatments for malaria, for which she received a Nobel Prize. I am guessing, without having done any research, that at least a hundred thousand of those lives were counterfactually saved, given the time it would have taken until the next person made the discovery. I randomly came across her going down a GPT/Wikipedia rabbit hole and was surprised to see her not mentioned once on the Forum so far. That said, I am unsure how many people there are who might have counterfactually saved ~100k people or more.

Ulrik Horn @ 2023-11-11T05:25 (+3)

Metaculus does not seem to predict any significant reduction in risk from AI, despite the recent movement on AI safety by the US and UK governments. If anything, it seems that, for one reason or another, risks might have increased according to Metaculus. I am sharing this because I would perhaps naively have thought that these government actions would significantly reduce risk, rather than being a tiny, hopeful first step in a long sequence of events that have to unfold to meaningfully reduce risk. However, I think my enthusiasm was correlated more with the amount of mainstream media coverage than with any meaningful change in the risk landscape.

I looked at this, this, this and this question on Metaculus. However, I did not dig deep; I only looked visually at the very last bit of the graphs, where I did not have enough resolution to see daily changes in probabilities, so there could be something I missed. If anyone is using this to make big decisions, I recommend they do more rigorous work. It could also be that Metaculus forecasters are slower to update than I expect, even though more than a week has passed since the announcements by the UK and US governments - I would love to be proved wrong!

JWS @ 2023-11-11T11:01 (+6)

In my opinion, you should hold onto your initial reaction and should downweight your trust in Metaculus estimates on these questions accordingly.

Basically, I think you're correct to be more optimistic because of government awareness of, and action on, these issues, and you should be sceptical that Metaculus predictors had already 'priced in' governments being more helpful/concerned than the community originally expected.

See here for my thoughts (full post upcoming) and Matthew's recent post on the 'sleepwalking fallacy'; I might just perma-link Xuan's thread against the naïve scaling hypothesis.

See also this Metaculus poll for the date a weakly general AI is announced, which has shrunk to Feb 2026 (and is still falling) despite the fact that two of the four criteria contain limits on the size/scale/content of the training data that current LLM-based systems cannot meet.[1] Sure, if I provide the LLM with SAT answers in training and then ask it to solve an SAT it'll get a high score, but to me that's basically doing research like this. Similar errors in forecasting might be involved with the markets you mention.

Perhaps in Metaculus voters defence:

  • They're taking time to integrate this new information into their estimates
  • There's some weird quirk of resolution criteria where they don't think they can update but are still more optimistic generally
  • Maybe they already thought, before ~the last year or so, that governments would get involved but be unsuccessful, and so had already 'priced in' the social reaction
  • New people have joined the market and are pushing the risk estimates up, but before the government actions they weren't involved in the market and were more pessimistic, causing some kind of confounding effect
  1. ^

    And another criterion is an adversarial Turing Test, which RLHF'd LLMs seem hopelessly bound to fail. Just ask one to say why it thinks cannibalism is good or what its favourite porn is.

Ulrik Horn @ 2023-11-12T05:40 (+1)

Thanks, those are some very good points. I especially liked your point about forecasters having already baked in the recent government reaction, something I had not considered. I have been thinking, very superficially, about whether prediction markets could be used to get some idea of risk-reduction impacts. E.g. if a new organization is announced doing new work, or such organizations achieve a surprising milestone in x-risk reduction, perhaps one could use prediction markets to estimate the amount of risk reduction. I have not had time to investigate this in more detail and might have missed others having already written about this.

Ulrik Horn @ 2024-02-22T08:52 (+1)

Ambition is like fire. Too little and you go cold. But unmanaged it leaves you burnt.

Ulrik Horn @ 2024-02-02T07:58 (+1)

Not well researched at all, but it seems there is some issue in the EA community with unwanted romantic/sexual advances. What I do not get is how people get into these situations. I thought it was common sense not to make a romantic/sexual advance until you have pretty strong evidence that it might be welcome. Before making a romantic/sexual suggestion, at least one, and ideally several, of the below should be true:

Also, this should be a person you do not have much power over. If you do have power, you need to act much more carefully, and ideally not act at all - it might be net negative in expectation. This is because some of these signs might appear because of the power differential rather than because of attraction (although that can get more complicated!).

If you make a romantic/sexual advance despite none of the above being true, you are asking for trouble. Especially in a mostly professional space like EA, I think people should think hard before making advances without any of the "established" signs. And even if you do get rejected, if some of the signs are there you are less likely to face backlash, as the person probably likes you as a friend or something and will give you some leeway.

This is inspired by a lot of recent discussion along the lines of "but isn't it ok for people to make an advance - if they didn't, nobody would even be in romantic relationships". Others might have raised it, but what I see missing is discussion of this crucial pre-romantic stage and how it should update one's romantic priors.