titotal

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Posts

Explaining the discrepancies in cost effectiveness ratings: A replication and...
by titotal @ 2024-10-14 | +158 | 0 comments
Is "superhuman" AI forecasting BS? Some experiments on the "539" bot from the...
by titotal @ 2024-09-18 | +68 | 0 comments
Most smart and skilled people are outside of the EA/rationalist community: an...
by titotal @ 2024-07-12 | +215 | 0 comments
In defense of standards: A fecal thought experiment
by titotal @ 2024-06-24 | +6 | 0 comments
Motivation gaps: Why so much EA criticism is hostile and lazy
by titotal @ 2024-04-22 | +212 | 0 comments
[Draft] The humble cosmologist's P(doom) paradox
by titotal @ 2024-03-16 | +38 | 0 comments
The Leeroy Jenkins principle: How faulty AI could guarantee "warning shots"
by titotal @ 2024-01-14 | +54 | 0 comments
titotal's Quick takes
by titotal @ 2023-12-09 | +8 | 0 comments
Why Yudkowsky is wrong about "covalently bonded equivalents of biology"
by titotal @ 2023-12-06 | +29 | 0 comments
"Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech...
by titotal @ 2023-09-29 | +102 | 0 comments
The bullseye framework: My case against AI doom
by titotal @ 2023-05-30 | +71 | 0 comments
Bandgaps, Brains, and Bioweapons: The limitations of computational science and...
by titotal @ 2023-05-26 | +59 | 0 comments
Why AGI systems will not be fanatical maximisers (unless trained by fanatical...
by titotal @ 2023-05-17 | +43 | 0 comments
How "AGI" could end up being many different specialized AI's stitched together
by titotal @ 2023-05-08 | +31 | 0 comments
Nuclear brinksmanship is not a good AI x-risk strategy
by titotal @ 2023-03-30 | +19 | 0 comments
How my community successfully reduced sexual misconduct
by titotal @ 2023-03-11 | +209 | 0 comments
Does EA understand how to apologize for things?
by titotal @ 2023-01-15 | +159 | 0 comments
Cryptocurrency is not all bad. We should stay away from it anyway.
by titotal @ 2022-12-11 | +96 | 0 comments
AGI Battle Royale: Why “slow takeover” scenarios devolve into a chaotic...
by titotal @ 2022-09-22 | +49 | 0 comments
Chaining the evil genie: why "outer" AI safety is probably easy
by titotal @ 2022-08-30 | +40 | 0 comments