elifland
You can give me anonymous feedback here. I often change my mind and don't necessarily endorse past writings.
Posts
Discussing how to align Transformative AI if it’s developed very soon
by elifland @ 2022-11-28 | +36 | 0 comments
Eli's review of "Is power-seeking AI an existential risk?"
by elifland @ 2022-09-30 | +58 | 0 comments
Forecasting thread: How does AI risk level vary based on timelines?
by elifland @ 2022-09-14 | +47 | 0 comments
Prioritizing x-risks may require caring about future people
by elifland @ 2022-08-14 | +183 | 0 comments
Reasons I’ve been hesitant about high levels of near-ish AI risk
by elifland @ 2022-07-22 | +216 | 0 comments
Impactful Forecasting Prize Results and Reflections
by elifland, Misha_Yagudin @ 2022-03-29 | +40 | 0 comments
Impactful Forecasting Prize meetup
by elifland, Sam Glover, Misha_Yagudin @ 2022-02-10 | +4 | 0 comments
Impactful Forecasting Prize for forecast writeups on curated Metaculus questions
by elifland, Sam Glover, Misha_Yagudin @ 2022-02-04 | +91 | 0 comments