ALTER Israel Mid-2025 Semiannual Update

By Davidmanheim @ 2025-07-15T07:47 (+12)

We are happy to once again share an update on our work over the past half year, along with our current public plans for new work.

Funding

First, ALTER is pleased to announce that we have received funding for our work on AI policy and related AI safety work for the coming year. We thank Open Philanthropy and two Survival and Flourishing Fund Speculation grantors for their grants, and for their implicit confidence in the value of our work. These grants are dedicated to supporting our work on AI policy, not work in adjacent or other areas. That said, we have some flexibility thanks to unrestricted funds received from contracts, overhead, and reimbursements, notably work for RAND and ARIA; these funds allow us to pursue valuable projects in other areas, as well as work like lobbying that cannot be funded via nonprofit grants.

Second, we have a variety of projects that are ongoing or nearing completion, and a few new projects to mention.

Projects and Initiatives

AI Policy - Standards

Our work with the NIST AI Security (née Safety) Institute Consortium has been slowed by the presidential transition and the uncertainty around the institute's status, but we are encouraged to see that the consortium is continuing work on the intersection of AI with both chemical weapons and biorisk, and we will continue participating. We view this as a useful forum for pushing in positive directions, even though the work is (unfortunately) mostly legally restricted to sharing among members only. At the same time, as a government-hosted multi-stakeholder organization, the work is frustratingly slowed by bureaucratic necessities and the limited responsiveness intrinsic to large coalitions.

Our work with the International Organization for Standardization (ISO) is even more bureaucratically constrained, but we are happy with the progress, albeit slow, on a few critical issues at the intersection of standards and law. Specifically, there are a number of places where legal standards for responsibility and required behavior implicitly or explicitly require organizations to follow “best practice,” and standards set the minimum bar for what that means. This is particularly critical for the EU AI Act, where (per a JRC technical report) terms like “Human Oversight” and “Accuracy and Robustness” are explicitly expected to be defined by ISO standards.

Given that, our recently released (July 8th) preprint, “Limits of Safe AI Deployment: Differentiating Oversight and Control,” co-authored with Aiden Homewood of GovAI, is intended to address some key issues in ISO discussions around human oversight. It argues that both control and oversight must be shown to be meaningfully present in a system, rather than simply putting a human “in charge” and calling it human-in-the-loop control, and that for certain types of systems it is not always possible to have meaningful control or meaningful oversight at all. (Feel free to read and promote the Twitter thread as well.)

Lastly, an article on the necessity of AI audit standards boards was accepted and published in AI & Society. In the paper we argued (tweet thread) that credible standards require both independence from, and buy-in from, various groups to be meaningful, and that this in turn requires a structure separate from government in order to remain flexible. We therefore outlined a new, independent AI Audit Standards Board (AIASB), which would need academic, industry, and other stakeholder buy-in. We believe such a board would not only be useful, but also in the interest of frontier AI firms, which would be able to point to a single international standard or set of standards in place of vague best practices and the inevitable imposition of different rules in different jurisdictions. We are hopeful that at least some AI firms will agree. (We welcome suggestions, tentative ideas for who could host this, and proposals for getting it started in practice, but caution that it requires buy-in from a large group of stakeholders first.)

AI Policy - Research

David recently led a session at the ASRA conference focused on the geopolitical and systemic impacts of near-term AI systems, and on managing the interim period before existentially dangerous systems emerge. We received very positive feedback from participants.

We are also now starting a project on understanding technical AI futures and uncertainties, and have recently begun sending out invitations to participate. The project is intended to lay out plausible futures and enable more robust policy and risk planning. We would be happy to receive nominations for participants, though the participant list will be very limited in size.

Additional work is being considered on evaluation standards, and the details of a project on this are still being discussed.

Lastly, David has submitted a more philosophical paper to a journal. The paper uses semiosis to argue that aligned AI needs some form of embodiment, interaction with reality, or similar “grounding” - but that such grounding is both dangerous and insufficient. It then explains that there is a strong philosophical basis for expecting that many claimed limits to AI are unlikely to hold, given current and near-future developments in AI. The paper is also an attempt to provide a stronger set of arguments for philosophers to take the questions around alignment seriously, as potentially fundamentally unsolvable ones. We hope this furthers the (currently unusual) view that the alignment and emergence of smarter-than-human AI is a reasonable and legitimate philosophical topic, and shows that many previous objections do not hold.

Mathematical AI Safety

Our work with Vanessa Kosoy and her research team is now a separate organization, and they have received new and additional support for that work, while we continue to provide some administrative support. Their website will presumably host future information and updates.

That said, Alex (Diffractor) presented a paper at COLT 2025 (Alignment Forum announcement). Vanessa has a paper coming out in JMLR, based on her master's thesis, and a new preprint online (Alignment Forum announcement). She is also mentoring Matthias Mayer at PIBBS and working on a collaboration with Jean-Marie Droz on ambidistributions. She and Alex are also working on new directions related to compositional learning theory. Lastly, Gergely Szuchs has decided to leave the organization to pursue interests unrelated to mathematics or AI alignment, and we wish him luck.

Biosecurity

We continue to engage with other organizations on both biosecurity policy and AIxBio issues. However, we are refocusing our biosecurity thinking on implementation and related topics, such as synthesis screening, metagenomic monitoring, and PPE. This shift reflects the fact that many of the past decade's conceptual projects on extreme biorisks are turning into concrete engineering and practical projects (IBBIS and SecureBio for gene synthesis screening, Blueprint Biosecurity's work on PPE, Far-UVC, and similar, and CEPI and other investments in metagenomic monitoring), a transition that we strongly support.

We are continuing some work on non-AI biosecurity, largely focused on biosurveillance. A paper led by Isabel Meusel on building a clinical metagenomic surveillance network in Israel was recently accepted to Health Security. We are also now members of the International Pathogen Surveillance Network, a network of organizations working “to accelerate progress in pathogen genomics,” organized by the WHO Hub for Pandemic and Epidemic Intelligence. We hope this enables us to continue pushing for broad metagenomic monitoring internationally.

Lastly on this topic, we have also begun to engage with the biosafety and biosecurity community here in Israel, including discussions about how to monitor and regulate synthetic biology and AI-enabled bioengineering. We are likely to present at a conference being held in December, and we hope to advise on how regulations could be adapted to monitor these new risks.

Other Risks / Policy Areas

As part of our ongoing push for salt iodization, David will be presenting at a Knesset meeting on July 16, and both the Ministry of Health and Industry have signaled clear support for universal iodization. While the outcome remains uncertain, there is significant momentum, and, politics permitting, this work could conclude successfully in the near future.

David spoke at the Unpolitics conference this week about how to push for substantive marginal change on global priorities, and is excited to be collaborating with and supporting people there on various projects, including in pandemic biosecurity.

Naham has a recent article in an Israeli journal arguing for the consideration of catastrophic risks in food security planning, in part to support work on alternative proteins, similar to arguments ALLFED makes.

Lastly, we are continuing to engage with ASRA and others on the interrelationship between risks and systemic fragility, with a focus on how AI will impact, accelerate, or have its risks compounded by other large-scale systemic issues. There is also a discussion about doing standards-related work on systemic risk assessment, which we have been supporting.


SummaryBot @ 2025-07-16T19:32 (+1)

Executive summary: ALTER's mid-2025 update outlines their ongoing work across AI policy, mathematical AI safety, biosecurity, and systemic risk, emphasizing slow but meaningful progress on standards and oversight initiatives, new research on AI futures and philosophical grounding, and continued engagement with international and local networks, supported by new grants for AI policy work.

Key points:

  1. Funding and Scope: ALTER received new grants specifically for AI policy work, while unrestricted funding from consulting work enables them to pursue adjacent initiatives such as lobbying and broader biosecurity efforts.
  2. AI Policy – Standards and Governance: ALTER is active in both U.S. (NIST AI Security Consortium) and international (ISO) standards forums, contributing research (e.g. the July 8 preprint on oversight vs. control) and advocating for independent structures like an AI Audit Standards Board to formalize meaningful standards with broad stakeholder support.
  3. AI Policy – Research: Ongoing projects include mapping technical AI futures and risks, promoting early-stage analysis of geopolitical and systemic impacts, and publishing philosophical arguments for treating alignment as a potentially unsolvable problem requiring embodied grounding.
  4. Mathematical AI Safety: The formerly integrated mathematical research group has become an independent organization, with continued research output in theoretical AI safety (e.g. COLT and JMLR publications), including work on ambidistributions and compositional learning.
  5. Biosecurity and AIxBio: ALTER is refocusing on implementation-level biosecurity (e.g. gene synthesis screening, metagenomic monitoring, PPE), while maintaining involvement in international pathogen surveillance networks and national-level biosecurity regulation efforts in Israel.
  6. Other Risk Areas and Policy Work: ALTER continues policy engagement on systemic risk, food security, and public health (e.g. salt iodization), participating in national and international discussions and collaborating with organizations like ASRA and Unpolitics.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.