Yanni Kyriacos's Quick takes

By yanni kyriacos @ 2023-06-13T00:23 (+2)


Yanni Kyriacos @ 2024-02-25T23:42 (+87)

I met Australia's Assistant Minister for Defence last Friday. I asked him to write an email to the Minister in charge of AI, asking him to establish an AI Safety Institute. He said he would. He also seemed on board with not having fully autonomous AI weaponry.

All because I sent one email asking for a meeting + had said meeting. 

Advocacy might be the lowest hanging fruit in AI Safety.

Mo Putera @ 2024-02-28T18:58 (+4)

Akash's Speaking to Congressional staffers about AI risk seems similar:

In May and June of 2023, I (Akash) had about 50-70 meetings about AI risks with congressional staffers. ...

In March of 2023, I started working on some AI governance projects at the Center for AI Safety. One of my projects involved helping CAIS respond to a Request for Comments about AI Accountability that was released by the NTIA.

As part of that work, I started thinking a lot about what a good regulatory framework for frontier AI would look like. For instance: if I could set up a licensing regime for frontier AI systems, what would it look like? Where in the US government would it be housed? What information would I want it to assess?

I began to wonder how actual policymakers would react to these ideas. I was also curious to know more about how policymakers were thinking about AI extinction risks and catastrophic risks.

I started asking other folks in AI Governance. The vast majority had not talked to congressional staffers (at all). A few had experience talking to staffers but had not talked to them about AI risk. A lot of people told me that they thought engagement with policymakers was really important but very neglected. And of course, there are downside risks, so you don't want someone doing it poorly. 

After consulting something like 10-20 AI governance folks, I asked CAIS if I could go to DC and start talking to congressional offices. The goals were to (a) raise awareness about AI risks, (b) get a better sense of how congressional offices were thinking about AI risks, (c) get a better sense of what kinds of AI-related priorities people at congressional offices had, and (d) get feedback on my NTIA request for comment ideas. 

CAIS approved, and I went to DC in May-June 2023. And just to be clear, this wasn't something CAIS told me to do– this was more of an "Akash thing" that CAIS was aware was happening.

Like you, Akash just cold-emailed people:

I sent a mass email to tech policy staffers, and I was pretty impressed by the number who responded. The email was fairly short, mentioned that I was at CAIS, had 1-2 bullets about what CAIS does, and had a bullet point about the fact that I was working on an NTIA request for comment.

I think it was/is genuinely the case that Congressional staffers are extremely interested in AI content right now. Like, I don't think I would've been able to have this many meetings if I was emailing people about other issues.

There are a lot of concrete learnings in that writeup; definitely worth reading, I think.

Yanni Kyriacos @ 2024-03-27T02:05 (+31)

A periodic reminder that you can just email politicians and then meet them (see screenshot below).

Neil Warren @ 2024-03-30T18:30 (+2)

Another thing you can do is send comments on proposed legislation via regulations.gov. I did so last week about a recent Californian bill on open-sourcing model weights (now closed). In the checklist (screenshot below) they say: "the comment process is not a vote – one well supported comment is often more influential than a thousand form letters". There are people here much more qualified on AI risk than I am, so in case you didn't know, you might want to keep an eye on new regulation coming up. It doesn't take much time and seems to have a fairly big impact.

Yanni Kyriacos @ 2024-02-27T22:15 (+27)

My previous quick take on writing to politicians got some traction, so I figured I'd post the email I send below.

I am going to make some updates, but this is the latest version:

---

Hi [Politician]

My name is Yanni Kyriacos, I live in Coogee, just down the road from your electorate.

If you're up for it, I'd like to meet to discuss the risks posed by AI. In addition to my day job building startups, I do community / movement building in the AI Safety / AI existential risk space. You can learn more about AI Safety ANZ by joining our Facebook group here or the PauseAI movement here. I am also a signatory of Australians for AI Safety - a group that has called for the Australian government to set up an AI Commission (or similar body).

Recently I worked with Australian AI experts (through Good Ancestors Policy) on a submission to the recent Safe and Responsible AI consultation process. In the letter, we called on the government to acknowledge the potential catastrophic and existential risks from artificial intelligence. More on that can be found here.

There are many immediate risks from already existing AI systems like ChatGPT or Midjourney, such as disinformation or improper implementation in various businesses. In the not-so-distant future, certain safety nets will need to be activated (such as a Universal Basic Income policy) in the event of mass unemployment due to displacement of jobs with robots and AI systems.

But of greatest concern is the speed at which we are marching towards AGI (artificial general intelligence) – systems that will have cognitive abilities at or above human level.

Half of AI researchers believe that there is a 10% or greater chance that the invention of artificial superintelligence will mean the end of humanity. Among AI safety scientists, this chance is estimated to be an average of 30%. And these levels of risk aren’t just a concern for people in the far-distant future, with forecasting platforms such as Metaculus showing these kinds of AI could be invented in the next term of government.

Notable examples of individuals sounding the alarm are Prof. Geoffrey Hinton and Prof. Yoshua Bengio, both Turing Award winners and pioneers of the deep learning methods that are currently achieving the most success. The existential risk of AI has been acknowledged by hundreds of scientists, the UN, the US and, recently, the EU.

To make a long story short: we don't know how to align AI with the complex goals and values that humans have. When a superintelligent system is realised, there is a significant risk it will pursue a misaligned goal without us being able to stop it. And even if such a superhuman AI remains under human control, the person (or government) wielding such a power could use this to drastically, irreversibly change the world. Such an AI could be used to develop new technologies and weapons, manipulate masses of people or topple governments.

The advancements in the AI landscape have progressed much faster than anticipated. In 2020, it was estimated that an AI would pass university entrance exams by 2050. This goal was achieved in March 2023 by the system GPT-4 from OpenAI. These massive, unexpected leaps have prompted many experts to request a pause in AI development through an open letter to major AI companies. The letter has been signed over 33,000 times so far, including by many AI researchers and tech figures.

Unfortunately, it seems that companies are not willing to jeopardise their competitive position by voluntarily halting development. A pause would need to be imposed by a government. Luckily, there seems to be broad support for slowing down AI development. A recent poll indicates that 63% of Americans support regulation to prevent AI companies from building superintelligent AI. At the national level, a pause is also challenging because countries have incentives not to fall behind in AI capabilities. That's why we need an international solution.

The UK organised an AI Safety Summit on November 1st and 2nd at Bletchley Park. We hoped that during this summit, leaders would work towards sensible solutions that prevent the very worst of the risks that AI poses. As such I was excited to see that Australia signed the Bletchley Declaration, agreeing that this risk is real and warrants coordinated international action. However, the recent policy statements by Minister Husic don't seem to align with the urgency that experts are seeing. The last safe moment to act could be very soon.

The Summit has not yet produced an international agreement or policy. We have seen proposals being written by the US Senate, and even AI company CEOs have said there is “overwhelming consensus” that regulation is needed. But no proposal so far has seriously considered ways to slow down or prevent a superintelligent AI from being created. I am afraid that lobbying efforts by AI companies to keep regulation at a minimum are turning out to be highly effective.

It's essential that the government follows through on its commitment at Bletchley Park to create a national or regional AI safety body. We have such bodies for everything from the risk of plane crashes to the risk of tsunamis. We urgently need one focused on the safety of AI systems.

Anyway, I'd love to discuss this more in person or via Zoom if you're in town soon.

Let me know what you think.
Cheers,

Yanni 

Vasco Grilo @ 2024-02-28T17:55 (+2)

Thanks for sharing, Yanni, and it is really cool that you managed to get Australia's Assistant Minister for Defence interested in creating an AI Safety Institute! 

More on that can be found here.

Did you mean to include a link?

In 2020, it was estimated that an AI would pass university entrance exams by 2050.

The Metaculus question you link to involves meeting many conditions besides passing university exams:

For these purposes we will thus define "AI system" as a single unified software system that can satisfy the following criteria, all easily completable by a typical college-educated human.

  • Able to reliably pass a Turing test of the type that would win the Loebner Silver Prize.
  • Able to score 90% or more on a robust version of the Winograd Schema Challenge, e.g. the "Winogrande" challenge or comparable data set for which human performance is at 90+%
  • Be able to score 75th percentile (as compared to the corresponding year's human students; this was a score of 600 in 2016) on all the full mathematics section of a circa-2015-2020 standard SAT exam, using just images of the exam pages and having less than ten SAT exams as part of the training data. (Training on other corpuses of math problems is fair game as long as they are arguably distinct from SAT exams.)
  • Be able to learn the classic Atari game "Montezuma's revenge" (based on just visual inputs and standard controls) and explore all 24 rooms based on the equivalent of less than 100 hours of real-time play (see closely-related question.)

Yanni Kyriacos @ 2024-04-01T23:49 (+19)

RIP to any posts on anything earnest over the last 48 hours. Maybe in future we don't tag anything April Fools and it is otherwise a complete blackout on serious posts 😅

BrownHairedEevee @ 2024-04-02T05:24 (+6)

How about making the April Fool's Day tag visible on the forum frontpage, like so?

Darren McKee @ 2024-04-02T15:47 (+12)

Something(!) needs to be done. Otherwise, it's just a mess for clarity and the communication of ideas. 

Yanni Kyriacos @ 2024-04-02T20:04 (+8)

I think the hilarity is in the confusion / click bait. Your idea would rob us of this! I think the best course of action is for anyone with a serious post to wait until April 3 :|

tobytrem @ 2024-04-03T08:47 (+2)

Not a solution to everything mentioned here, but a reminder that you can click "customize feed" at the top of the page and remove all posts tagged April Fools.

Yanni Kyriacos @ 2024-04-04T10:31 (+1)

nah let's lean all the way in, for one day a year, the wild west out here.

Yanni Kyriacos @ 2024-04-03T00:56 (+1)

Damn just had the idea of a "Who wants to be Fired?" post. 

Yanni Kyriacos @ 2024-03-05T22:55 (+18)

I think if you work in AI Safety (or want to) it is very important to be extremely skeptical of your motivations for working in the space. This applies to being skeptical of interventions within AI Safety as well.

For example, EAs (like most people!) are motivated to do things they're (1) good at and (2) see as high status (i.e. people very quietly ask themselves 'would someone who I perceive as high status approve of my belief or action?'). Based on this, I am worried that many EAs (1) find protesting AI labs (and advocating for a Pause in general) cringy and/or awkward, and (2) ignore the potential impact of organisations such as PauseAI.

We might all literally die soon because of misaligned AI, so what I'm recommending is that anyone seriously considering AI Safety as a career path spends a lot of time on the question of 'what is really motivating me here?' 

Yanni Kyriacos @ 2024-03-06T00:40 (+7)

fwiw i think this works in both directions - people who are "action" focussed probably have a bias towards advocacy / protesting and underweight the usefulness of research.

Yanni Kyriacos @ 2024-02-21T04:10 (+18)

I have written 7 emails to 7 Politicians aiming to meet them to discuss AI Safety, and already have 2 meetings.

Normally, I'd put this kind of post on twitter, but I'm not on twitter, so it is here instead.

I just want people to know that if they're worried about AI Safety, believe more government engagement is a good thing and can hold a decent conversation (i.e. you understand the issue and are a good verbal/written communicator), then this could be an underrated path to high impact.

Another thing that is great about it is you can choose how many emails to send and how many meetings to have. So it can be done on the side of a "day job".

Yanni Kyriacos @ 2024-03-27T07:33 (+16)

[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com and I'd love to see it here (great for people who have wandering minds like me).


 

JP Addison @ 2024-03-27T18:25 (+4)

I agree with you, and so does our issue tracker. Sadly, it does seem a bit hard. Tagging @peterhartree as a person who might be able to tell me that it's less hard than I think.

Yanni Kyriacos @ 2024-03-27T21:56 (+4)

As someone who works with software engineers, I have respect for how simple-appearing things can actually be technically challenging.

Lorenzo Buonanno @ 2024-03-28T10:09 (+2)

For what it's worth, I would find the first part of the issue (i.e. making the player "floating" or "sticky") already quite useful, and it seems much easier to implement.

Yanni Kyriacos @ 2024-03-13T23:56 (+16)

I think acting on the margins is still very underrated. For example, I think 5x the amount of advocacy for a Pause on capabilities development of frontier AI models would be great. I also think in 12 months' time it would be fine for me to reevaluate this take and say something like 'ok, that's enough Pause advocacy'.

Basically, you shouldn't feel 'locked in' to any view. And if you're starting to feel like you're part of a tribe, then that could be a bad sign you've been psychographically locked in.

Yanni Kyriacos @ 2024-03-01T23:25 (+10)

The general public wants frontier AI models regulated, and there don't seem to be any grassroots-focussed orgs attempting to capture and funnel this energy into influencing politicians, e.g. via this kind of activity. This seems like massively low-hanging fruit. An example of an organisation that does this (but for GH&W) is Results Australia. Someone should set up such an org.

Yanni Kyriacos @ 2024-03-02T00:13 (+1)

My impression is that PauseAI focusses more on media engagement + protests, which I consider a good but separate thing. Results Australia, as an example, focuses (almost exclusively) on having concerned citizens interacting directly with politicians. Maybe it would be a good thing for orgs to focus on separate things (e.g. for reasons of perception + specialisation). I lean in this direction but remain pretty unsure.

joepio @ 2024-03-02T09:47 (+5)

Founder of PauseAI here. I know our protests are the most visible, but they are actually a small portion of what we do. People always talk about the protests, but I think we've actually had the most results through invisible volunteer lobbying. Personally, I've spent way more time sending emails to politicians and journalists, meeting them and informing them of the issues. I wrote an Email Builder to get volunteers to write to their representatives, gave multiple workshops on doing so, and have seen many people (including you of course!) take action and ask for feedback in the Discord.

I think combining both protesting and volunteer lobbying in one org is very powerful. It's not a new idea of course - orgs like Greenpeace have been using this strategy for decades. The protests create visibility and attention, and the lobbying gives the important people the right background information. The protests encourage more volunteers to join and help out, so we get more volunteer lobbyists! In my experience the protests also help with getting an in with politicians - it creates a recognizable brand that shows you represent a larger group.

Yanni Kyriacos @ 2024-04-04T10:36 (+8)

So I did a quick check today - I've sent 19 emails to politicians about AI Safety / x-risk and received 4 meetings. They've all had a really good vibe, and I've managed to get each of them to commit to something small (e.g. email XYZ person about setting up an AI Safety Institute). I'm pretty happy with the hit rate (4/19). I might do another forum quick take once I've sent 50.

yanni kyriacos @ 2024-04-20T06:01 (+7)

Be the meme you want to see in the world (screenshot).


 

Yanni Kyriacos @ 2023-08-09T04:59 (+7)

What is your best guess of the overall impact of 80k interviewing people from AI labs on their podcast + listing vacant AI lab roles?

Poll: https://www.guidedtrack.com/programs/qj0ykwn/run

EdoArad @ 2023-08-09T07:15 (+2)

@Yonatan Cale 

Yonatan Cale @ 2023-08-09T09:12 (+6)

My long thoughts:

1. 80k don't claim to only advertise impactful jobs

They also advertise jobs that help build career impact, and they're not against posting jobs that cause harm (and it's often/always not clear which is which). See more in this post.

They sometimes add features like marking "recommended orgs" (which I endorse!), and sometimes remove those features ( 😿 ).

2. 80k's career guide about working at AI labs doesn't dive into "which lab"

See here. Relevant text:

Recommended organisations

We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.

I think [link to comment] the "which lab" question is really important, and I'd encourage 80k to either be opinionated about it, or at least help people make up their mind somehow, not just leave people hanging on "which lab" while also often recommending people go work at AI labs, and also mentioning that often that work is net-negative and recommending reducing the risk by not working "in certain positions unless you feel awesome about the lab".

[I have longer thoughts on how they could do this, but my main point is that it's (imo) an important hole in their recommendation that might be hidden from many readers]

3. Counterfactual / With great power comes great responsibility

If 80k didn't do all this, should we assume there would be no job board and no guides?

I claim that a job board like 80k's has critical mass: candidates know the best orgs are there, and orgs know the best candidates are there.

Once there's a job board with critical mass, it's not trivial to "compete" with it.

But EAs love opening job boards. A few new EA job boards pop up every year. I do think there would be an alternative. And so the question seems to me to be - how well are 80k using their critical mass?

4. What results did 80k's work actually cause?

First of all: I don't actually know, and if someone from 80k would respond, that would be way better than my guess.

Still, here's my guess, which I think would be better than just responding to the poll:

  • Lots of engineers who care about AI Safety but don't have a deep understanding of it (and not much interest in spending months to learn) - go work at AI labs.
    • Is this positive, because now people "care" in the labs, or negative because the labs have a much easier time hiring people who basically go and do their job? This is seriously a hard question, but I'd guess "negative" (and I think 80k would agree but I'm not sure)
  • I wouldn't be surprised if 80k are directly responsible for a few very important hires. 
    • For example, I think the CISO (head of security) of Anthropic used to run the security of Chrome. I'm VERY happy Anthropic hired such a person, I think infosec is really important for AI labs, and I wouldn't be surprised if 80k had something to do with this, and if not this - then maybe some other similar role. 
      • I think this is very positive, and maybe more important than all the rest.

 

[I need to go, my comment seems incomplete but I hope it's still somewhat helpful so posting. I'm still not sure how to vote!]

Guy Raveh @ 2023-08-09T17:03 (+3)

And there's always the other option that I (unpopularly) believe in - that better publicly available AI capabilities are necessary for meaningful safety research, thus AI labs have contributed positively to the field.

Yanni Kyriacos @ 2024-03-27T22:06 (+6)

It breaks my heart when I see eulogy posts on the forum. And while I greatly appreciate people going to the effort of writing them (while presumably experiencing grief), it still doesn't feel like enough. We're talking about people that dedicated their lives to doing good, and all they get is a post. I don't have a suggestion to address this 'problem', and some may even feel that a post is enough, but I don't. Maybe there is no good answer and death just sucks. I dunno.

Yanni Kyriacos @ 2023-07-27T00:44 (+6)

I think https://www.wakingup.com/ should be considered for effective organisation status. It donates 10% of revenue to the most effective causes and I think reaching nondual states of awakening could be one of the most effective ways for people in rich countries to improve their wellbeing. 

Misha_Yagudin @ 2023-07-27T01:31 (+5)

Related: https://www.clearerthinking.org/post/can-you-experience-enlightenment-through-sam-harris-waking-up-meditation-app

Will Aldred @ 2023-07-28T00:47 (+2)

Also related (though more tangentially): https://podcast.clearerthinking.org/episode/167/michael-taft-and-jeremy-stevenson-glimpses-of-enlightenment-through-nondual-meditation/

yanni kyriacos @ 2024-04-23T23:55 (+4)

This is an extremely "EA" request from me but I feel like we need a word for people (i.e. me) who are Vegans but will eat animal products if they're about to be thrown out. OpportuVegan? UtilaVegan?

Matt Goodman @ 2024-04-24T21:12 (+7)

Freegan

Bella @ 2024-04-24T10:18 (+5)

I think the term I've heard (from non-EAs) is 'freegan' (they'll eat it if it didn't cause more animal products to be purchased!)

yanni kyriacos @ 2024-04-24T20:28 (+1)

This seems close enough that I might co-opt it :)

https://en.wikipedia.org/wiki/Freeganism

tobytrem @ 2024-04-24T10:35 (+3)

If you predictably do this, you raise the odds that people around you will cook or buy some extra food so that it will be "thrown out", or offer you food they haven't quite finished (and that they'll replace with a snack later).
So I'd recommend going with "Vegan" as your label, for practical as well as signalling reasons.

yanni kyriacos @ 2024-04-24T20:26 (+3)

Yeah, this is a good point, and one I've considered, which is why I basically only do it at home.

Yanni Kyriacos @ 2024-03-06T23:03 (+3)

I think the average EA worries too much about negative PR related to EA. I think this is a shame, because EA didn't get to where it is now by concerning itself with PR. It got here through good-old-fashioned hard work (and good thinking, ofc).

Two examples:

1. FTX. 

2. OpenAI board drama.

On the whole, I think there was way too much time spent thinking and talking about these massive outliers, and it would have been better if 95%+ of EAs had put their heads down and done what they do best - get back to work.

I think it is good to discuss and take action on the issues at a concrete level, but not to talk about how this affects "what people think of EA". EA overindexing on conscientiousness can have negative consequences!

Yanni Kyriacos @ 2024-02-26T23:32 (+3)

What is your "Pens Down" moment? 

"Pens Down" to mean 'Artificial Super Intelligence in my opinion is close enough that it no longer makes sense to work on whatever else I'm currently working on, because we're about to undergo radical amounts of change very soon/quickly'.

For me, it is probably when we have something as powerful as GPT-4 except it is agentic and costs less than $100 / month. So, that looks like a digital personal assistant that can execute an instruction like "have a TV delivered for me by X date, under Y price and organise installation and wall mounting."

This is obviously a question mainly for people who don't work full time on AI Safety.

Chris Leong @ 2024-02-27T00:27 (+4)

I don't know if this can be answered in full-generality.

I suppose it comes down to things like:
• Financial runway/back-up plans in case your prediction is wrong
• Importance of what you're doing now
• Potential for impact in AI safety

Yanni Kyriacos @ 2024-02-27T03:11 (+1)

I agree. I think it could be a useful exercise though to make the whole thing (ASI) less abstract.

I find it hard to reconcile that (1) I think we're going to have AGI soon and (2) I haven't made more significant life changes.

I don't buy the argument that much shouldn't change (at least, in my life). 

Chris Leong @ 2024-02-27T12:24 (+2)

Happy to talk that through if you'd like, though I'm kind of biased, so probably better to speak to someone who doesn't have a horse in the race.

Yanni Kyriacos @ 2024-02-26T23:48 (+1)

I suppose it is plausible for a person to never have a "Pens Down" moment if:

  1. There is a FOOM
  2. They feel they won't be able to positively contribute to making ASI safe / slowing it down

Yanni Kyriacos @ 2024-02-26T23:34 (+1)

I'm somewhat worried we're only 2-3 years from this, FWIW. I'd give it a ~ 25% chance.

Yanni Kyriacos @ 2024-02-22T00:43 (+3)

A potential failure mode of 80k recommending EAs work at AI labs:

  1. 80k promotes a safety-related job within a leading AI lab.
  2. 80k's audience (purposefully) skews towards high-prospect candidates (HPCs) - smarter, richer, better connected than average.
  3. An HPC applies for and gets a safety role within the AI lab.
  4. The HPC stays at the lab but moves roles.
  5. Now we have a smart, rich, well-connected person no longer in safety but in capabilities.

I think this is sufficiently important / likely that 80k should consider tracking these people over time to see if this is a real issue.

NickLaing @ 2024-02-22T13:38 (+2)

Thanks Yanni, I think a lot of people have been concerned about this kind of thing.

I would be surprised if 80,000 Hours isn't already tracking this or something like it - perhaps try reaching out to them directly; you might get a better response that way.

yanni kyriacos @ 2024-04-22T09:39 (+2)

Has anyone seen an analysis that takes seriously the idea that people should eat some fruits, vegetables and legumes over others based on how much animal suffering they each cause?

I.e. don't eat X fruit, eat Y one instead, because X fruit is [e.g.] harvested in Z way, which kills more [insert plausibly sentient creature].

Yanni Kyriacos @ 2024-03-21T00:27 (+2)

Even though I've been in the AI Safety space for ~ 2 years, I can't shake the feeling that every living thing dying painlessly in its sleep overnight (due to AI killing us) isn't as bad as (i.e. is 'better' than) hundreds of millions of people living in poverty and/or hundreds of billions of animals being tortured. 

This makes me suspicious of my motivations. I think I do the work partly because I kinda feel the loss of future generations, but mainly because AI Safety still feels so neglected (and my counterfactual impact here is larger).

I don't think s-risks play much of a role in this, although they have in the past (here, I define disempowerment due to AGI, or authoritarian use of AGI, as s-risks).

Jason @ 2024-03-21T01:34 (+18)

Thanks for sharing. I suspect most of the hundreds of millions of people living in poverty would disagree with you, though, and would prefer not to painlessly die in their sleep tonight.

Yanni Kyriacos @ 2024-03-21T01:52 (+1)

I think it's possible we're talking past each other?

NickLaing @ 2024-03-21T06:12 (+5)

I don't think he's talking past you. His point seems to be that the vast majority of the hundreds of millions of people living in poverty both have net positive lives and don't want to die.

Even with a purely hedonistic outlook, it wouldn't be better for their lives to end.

Unless you are not talking about the present, but a future far worse than today's situation?

Yanni Kyriacos @ 2024-03-21T08:51 (+1)

I'm saying that on some level it feels worse to me that 700 million people suffer in poverty than every single person dying painlessly in their sleep. Or that billions of animals are in torture factories. It sounds like I'm misunderstanding Jason's point?

NickLaing @ 2024-03-21T09:02 (+2)

I would contend they are not "suffering" in poverty overall, because most of their lives are net positive. There may be many struggles and their lives are a lot harder than ours, but still better than not being alive at all.

I agree with you on the animals in torture factories, because their lives are probably net negative unlike the 700 million in poverty. 

titotal @ 2024-03-21T11:17 (+3)

If AI actually does manage to kill us (which I doubt), it will not involve everybody dying painlessly in their sleep. That is an assumption of the "FOOM to god AI with no warning" model, which bears no resemblance to reality.

The technology to kill everyone on earth in their sleep instantaneously does not exist now, and will not exist in the near-future, even if AGI is invented. Killing everyone in their sleep is orders of magnitude more difficult than killing everyone awake, so why on earth would that be the default scenario? 

Stephen Clare @ 2024-03-21T17:25 (+2)

I think you have a point with animals, but I don't think the balance of human experience means that non-existence would be better than the status quo.

Will talks about this quite a lot in ch. 9 of WWOTF ("Will the future be good or bad?"). He writes:

If we assume, following the small UK survey, that the neutral point on a life satisfaction scale is between 1 and 2, then 5 to 10 percent of the global population have lives of negative wellbeing. In the World Values Survey, 17 percent of respondents classed themselves as unhappy. In the smaller skipping study of people in rich countries, 12 percent of people had days where their bad experiences outweighed the good. And in the study that I commissioned, fewer than 10 percent of people in both the United States and India said they wished they had never been born, and a little over 10 percent said that their lives contained more suffering than happiness.

So, I would guess that on either preference-satisfactionism or hedonism, most people have lives with positive wellbeing. If I were given the option, on my deathbed, to be reincarnated as a randomly selected person alive today, I would choose to do so.

And, of course, for people at least, things are getting better over time. I think animal suffering complicates this a lot.

yanni kyriacos @ 2024-05-05T11:02 (+1)

Something I'm confused about: what threshold would need to be met for the majority of people in the EA community to say something like "it would be better if EAs didn't work at OpenAI"?

Imagining the following hypothetical scenarios over 2024/25, I can't confidently predict whether any of them individually would cause that response within EA:

  1. Ten to fifteen more OpenAI staff quit for varied and unclear reasons. No public info is gained outside of rumours
  2. There is another board shakeup because senior leaders seem worried about Altman. Altman stays on
  3. The Superalignment team is disbanded
  4. OpenAI doesn't let the UK or US AISIs safety test GPT-5/6 before release
  5. There are strong rumours they've achieved weakly general AGI internally by the end of 2025

yanni kyriacos @ 2024-05-05T11:11 (+1)

A concrete prediction - 60% chance 80k doesn't list any jobs at OpenAI in 2026.

yanni kyriacos @ 2024-04-23T02:58 (+1)

I think it would be good if lots of EAs answered this Twitter poll, so we could get a better sense of the community's views on the topic of Enlightenment / Awakening: https://twitter.com/SpencrGreenberg/status/1782525718586413085?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet

yanni kyriacos @ 2024-04-22T03:16 (+1)

The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: Creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (*we already have AGI).

I thought it might be useful to spell that out.

yanni kyriacos @ 2024-04-21T04:46 (+1)

I recently discovered the idea of driving all blames into oneself, which immediately resonated with me. It is relatively hardcore; the kind of thing that would turn David Goggins into a Buddhist.

Gemini did a good job of summarising it:

This quote by Pema Chödron, a renowned Buddhist teacher, represents a core principle in some Buddhist traditions, particularly within Tibetan Buddhism. It's called "taking full responsibility" or "taking self-blame" and can be a bit challenging to understand at first. Here's a breakdown:

What it Doesn't Mean:

What it Does Mean:

Analogy:

Imagine a pebble thrown into a still pond. The pebble represents the external situation, and the ripples represent your emotional response. While you can't control the pebble (the external situation), you can control the ripples (your reaction).

Benefits:

Here are some additional points to consider:

yanni kyriacos @ 2024-04-12T10:14 (+1)

What are some historical examples of a group (like AI Safety folk) getting something incredibly wrong about an incoming technology? Bonus question: what led to that group getting it so wrong? Maybe there is something to learn here.

titotal @ 2024-04-12T15:54 (+12)

In the '90s and 2000s, many people such as Eric Drexler were extremely worried about nanotechnology and viewed it as an existential threat through the "gray goo" scenario. Yudkowsky predicted Drexler-style nanotech would occur by 2010, using very similar language to what he is currently saying about AGI.

It turned out they were all being absurdly overoptimistic about how soon the technology would arrive, and the whole Drexlerite nanotech project flamed out by the end of the 2000s and has pretty much not progressed since. I think a similar dynamic playing out with AGI is less likely, but still very plausible.

Habryka @ 2024-04-13T17:10 (+3)

Do you have links to people being very worried about gray goo stuff?

(Also, the post you link to makes this clear, but this was a prediction from when Eliezer was a teenager, or just turned 20, which does not make for a particularly good comparison, IMO)

yanni kyriacos @ 2024-04-13T07:28 (+1)

I hope you're right. Thanks for the example, it seems like a good one.

Ives Parr @ 2024-04-13T20:19 (+9)

This is probably a good exercise. I do want to point out a common bias about getting existential risks wrong. If someone had been right about doomsday, we would not be here to discuss it. That is a huge survivorship bias. Even catastrophic events which lessen the number of people are going to be systematically underestimated. This phenomenon is the anthropic shadow, which is relevant to an analysis like this.

yanni kyriacos @ 2024-04-15T03:36 (+1)

Yeah, case studies as research need to be treated very carefully (i.e. they can still be valuable exercises, but the analyser needs to be aware of their weaknesses).

saulius @ 2024-04-12T12:01 (+5)

There were many predictions about AI and AGI in the past (maybe mostly last century) that were very wrong. I think I read about it in Superintelligence. A quick Google search shows this article, which probably talks about that.

saulius @ 2024-04-12T11:55 (+4)

Cultured meat predictions were overly optimistic, although many of those predictions might have been companies hyping up their products to attract investors. There's also probably a selection bias where the biggest cultured meat optimists are the ones who become cultured meat experts and make predictions.

John Salter @ 2024-04-12T11:19 (+2)

https://pessimistsarchive.org/

Yanni Kyriacos @ 2024-03-29T00:18 (+1)

I have heard rumours that an AI Safety documentary is being made. Separate to this, a good friend of mine is also seriously considering making one, but he isn't "in" AI Safety. If you know who the first group is and can put me in touch with them, it might be worth getting across each other's plans.

Yanni Kyriacos @ 2024-03-21T01:55 (+1)

One of the seminal texts in marketing science is The Long and the Short of It by Binet and Field. They argue that for maximum effectiveness, marketing should aim to create two distinct work streams with distinct results: immediate sales and longer-term sales.

They argue the tactics that go into either are distinct (e.g. short term = create a sense of urgency, long term = match brand associations with category associations). 

This feels like a good analogy for AI Safety Advocacy / Governance - keep talking about short-term things people can buy now (in an Australian context, that is the AI Safety Institute) while maintaining salience on the long term (x-risk, a global Pause, etc.).

The question shouldn't be either/or; rather, it's about how much (%) effort should go into each at any point in time.

The Greatest Hits of Binet and Field – Tom Roach

Yanni Kyriacos @ 2024-03-19T02:42 (+1)

What would be the pros and cons of adding a semi-hidden-but-permanent Hot Takes section to the Forum? All of my takes are Hot and due to time constraints I would otherwise not post at all. Some would argue that someone like me should not post Hot Takes at all. Anyway, in true lazy fashion here is ChatGPT on the pros and cons:

Pros:

Cons:

Rebecca @ 2024-03-21T16:16 (+2)

This feels like it could just be a genre of Quick Takes that people may choose to post?

NickLaing @ 2024-03-19T05:11 (+2)

That's an interesting one - I'm a fan of hot takes myself :D. I think "Quick takes" does the job on these though, even if the posts are a bit longer. I'm not sure we need another section. Maybe a "Hot takes" tab could be added to signify that the thought behind a take isn't so deep?

Yanni Kyriacos @ 2024-03-16T01:07 (+1)

Feels kinda obnoxious to write a quick take along the lines of "I'm thinking about writing a post on X, does anyone actually give a sh*t? Otherwise I won't write it."

I just wanted to check, since I can't put my finger on why it feels obnoxious, but it certainly does.

Yanni Kyriacos @ 2023-06-13T00:23 (+1)

How impactful could non-dual meditation be for improving wellbeing?

Are there any EAs out there who have practiced non-dual meditation? Specifically, anything from the Dzogchen or Mahamudra traditions of Tibetan Buddhism? 

More secular teachers would be Loch Kelly, Sam Harris, Michael Taft.

This has been a life changing experience for me and I'm wondering whether it could be a blind spot for EA.

I'd also just love to chat with someone else who has experienced non-duality / awakening through this form of meditation :)