AISN #30: Investments in Compute and Military AI. Plus, Japan and Singapore’s National AI Safety Institutes
By Center for AI Safety, aogara, Dan H, Corin Katzke @ 2024-01-24T19:38 (+7)
This is a linkpost to https://newsletter.safe.ai/p/aisn-30-investments-in-compute-and
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Subscribe here to receive future versions.
Listen to the AI Safety Newsletter for free on Spotify.
Compute Investments Continue To Grow
Pausing AI development has been proposed as a policy for ensuring safety. For example, an open letter last year from the Future of Life Institute called for a six-month pause on training AI systems more powerful than GPT-4.
But one interesting fact about frontier AI development is that it comes with natural pauses that can last many months or years. After releasing a frontier model, it takes time for AI developers to construct new compute clusters with larger numbers of more advanced computer chips. The supply of compute is currently unable to keep up with demand, meaning some AI developers cannot buy enough chips for their needs.
This explains why OpenAI was reportedly limited by GPUs last year. Relatedly, the chairman of TSMC, the world’s largest chip manufacturer, said in September, “Currently, we cannot fulfill 100% of our customers' needs, but we try to support about 80%.”
Despite its limited supply, investments in compute have continued to grow. Here, we report on recent news from Meta and OpenAI on their plans for investing in compute.
Meta is investing in compute to build open-source AGI. Mark Zuckerberg announced that Meta plans to have 350,000 of Nvidia’s most advanced GPUs, the H100, by the end of 2024. This is more than ten times the number of GPUs in Inflection AI’s cluster, announced last summer. The only company expected to purchase as many H100s as Meta is Microsoft.
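For a rough sense of scale, here is a hedged back-of-envelope sketch in Python comparing the reported cluster sizes and their aggregate peak throughput. The per-GPU throughput figure and Inflection AI's roughly 22,000-GPU cluster size are assumptions drawn from public reporting, not numbers from this newsletter, and should be treated as order-of-magnitude estimates.

```python
# Back-of-envelope comparison of reported GPU cluster sizes.
# Assumptions (not from the newsletter): Inflection AI's cluster was reported
# at roughly 22,000 H100s, and one H100 delivers on the order of 1e15 FLOP/s
# (~1 petaFLOP/s) of dense 16-bit compute. Treat both as rough estimates.

META_H100S = 350_000        # Meta's reported target by the end of 2024
INFLECTION_H100S = 22_000   # assumed figure from public reports
H100_PEAK_FLOPS = 1e15      # assumed ~1 petaFLOP/s per GPU (dense 16-bit)

ratio = META_H100S / INFLECTION_H100S
aggregate_flops = META_H100S * H100_PEAK_FLOPS

print(f"Meta vs. Inflection cluster size: ~{ratio:.0f}x")        # roughly 16x
print(f"Aggregate peak compute: ~{aggregate_flops:.1e} FLOP/s")  # roughly 3.5e20 FLOP/s
```

Under these assumptions, Meta's planned cluster would be more than an order of magnitude larger than Inflection's, consistent with the "more than ten times" comparison above.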
Zuckerberg announced that Meta plans to “build general intelligence” and “open source it responsibly.” This contrasts with the closed-source approach of OpenAI and DeepMind, which some argue reduces risks from malicious use. Advocates of open-source development counter that broad access to AI will counteract the centralization of power and spread AI’s benefits widely.
Sam Altman considers founding a new company for chip design and manufacturing. Currently, the global supply chain for advanced AI chips relies on a few companies. Nvidia is one of them, and it therefore has significant control over which companies thrive and which struggle in the AI race. But recent news indicates that a new competitor could disrupt Nvidia’s position.
Bloomberg reports that Sam Altman, CEO of OpenAI, is “working to raise billions of dollars” to “set up a network of factories to manufacture semiconductors.” Sources have said Altman is concerned that as AI capabilities improve, there won’t be enough chips for widespread deployment. Chip fabrication plants take years to build, and Altman is rumored to believe that advanced planning is necessary to meet demand at the end of this decade.
Rumors of compute clusters in the Middle East. Elon Musk relayed another rumor on Twitter, saying that he “heard today about a gigawatt-class AI compute cluster being built in Kuwait (or something), with 700,000 B100s.” This is twice as many chips as Meta and Microsoft are expected to purchase this year, and B100s are the next generation of chips that Nvidia plans to release in 2024. Musk says “there are many such things; that’s just the biggest one I’ve heard of so far.”
Compute clusters in the Middle East would have access to ample energy from sources including oil and solar power; energy supply is a significant constraint on compute cluster operation. Western nations might also struggle to enforce policies on AI systems trained in other countries.
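To illustrate why energy is such a binding constraint, here is a hedged sketch estimating the power a 700,000-accelerator cluster might draw. The per-accelerator power and datacenter overhead figures are assumptions (B100 specifications were not public at the time of writing), not numbers from this newsletter.

```python
# Rough estimate of the power a 700,000-accelerator cluster might draw.
# Assumptions (not from the newsletter): ~1 kW per accelerator, plus ~40%
# overhead for cooling, networking, and other datacenter load (PUE ~1.4).

NUM_GPUS = 700_000
WATTS_PER_GPU = 1_000   # assumed ~1 kW per accelerator
PUE = 1.4               # assumed datacenter power usage effectiveness

total_watts = NUM_GPUS * WATTS_PER_GPU * PUE
print(f"Estimated facility power: ~{total_watts / 1e9:.1f} GW")  # roughly 1 GW
```

Under these assumptions, the total comes out to roughly a gigawatt, consistent with the “gigawatt-class” description in the rumor above.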
Developments in Military AI
Private corporations have built most of today’s leading AI models. Corporations are therefore often the focus of AI policy efforts, such as evaluating the dangerous capabilities of AI systems and ensuring that companies do not accidentally release dangerous systems.
But unlike corporations, militaries are explicitly interested in building AIs with hazardous capabilities. Policies such as the EU AI Act and the White House’s recent executive order focus on corporate AI development and explicitly exempt militaries from their rules and regulations. Additional effort will therefore be necessary to mitigate risks from military AI.
OpenAI will allow military use of ChatGPT. In a recent update to its usage policies, OpenAI deleted a prohibition on “military and warfare” applications. They seem to have a good reason: an OpenAI spokesperson said the company will work with DARPA to create tools for improving cybersecurity. DARPA is currently running a $20M competition on AI for cybersecurity, which could strengthen societal resilience against the threat of AI-enabled cyberattacks.
But it’s possible that OpenAI’s new policy could open the door to other military partnerships. Just as Palantir, Anduril, and Scale AI have developed technology for the US military, OpenAI could follow suit.
The US military invests in AI capabilities. OpenAI’s update is part of a broader trend: the Pentagon is increasingly interested in integrating AI into the US military. Last year, the Pentagon tested several LLMs in military applications and established a generative AI task force.
Early reports described the Pentagon’s drone program as “disorganized and confusing.” Since then, former Google CEO Eric Schmidt has partnered with the Pentagon on its drone development program. Schmidt will speak today at 12pm EST with Ernest Moniz, former Secretary of Energy.
Multilateral efforts towards the responsible use of military AI. In November, US Vice President Kamala Harris announced that 31 countries had endorsed the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” The declaration makes few concrete commitments, but it is a sign that many countries recognize and hope to reduce the risks of military AI systems.
The document provides a hard-nosed perspective on the military use of AI. It does not ban the use of lethal autonomous weapons, as many countries have requested, nor does it require a “human in the loop” to make decisions about the use of lethal force. The first version of the document said that “states should maintain human control…concerning nuclear weapons employment,” but the most recent version has scrapped that guidance. Although the declaration allows for AI in military applications, it recommends careful management of the associated risks.
Even in the presence of military competition, it can be important to seek opportunities for cooperation. During the Cold War, the United States and the Soviet Union agreed on several arms control measures. For example, after several near collisions between US and Soviet ships and planes, the two countries agreed in 1972 on procedures for avoiding collisions and for minimizing the risk of escalation to war if a collision did occur. Similarly, if a buildup of AI weapons threatens global security, even geopolitical rivals might seek to coordinate on arms control policies to reduce collective risks.
Japan and Singapore Support AI Safety
Japan announces an AI safety institute. Japan’s Prime Minister, Fumio Kishida, announced last month that Japan intends to form an AI safety institute to “conduct research on safety evaluation methods, create standards and carry out other matters.” The institute has yet to launch, although more information is expected this month.
The announcement was accompanied by the release of a 192-page draft set of guidelines for AI companies. Companies developing advanced AI systems are advised to follow the Code of Conduct established at the G7 Hiroshima Summit. Among other provisions, the code directs AI companies to publicly report system capabilities, implement robust cybersecurity, and share best practices on risk mitigation.
Singapore proposes a governance framework for generative AI. Last week, Singapore released a draft Model AI Governance Framework for Generative AI. The new draft updates a previous document from 2020. The release included a request for feedback, which can be submitted by emailing info@aiverify.sg by March 15th.
The document was developed by the AI Verify Foundation — a subsidiary of Singapore’s Infocomm Media Development Authority (IMDA). Notably, the AI Verify Foundation has indicated concern about catastrophic AI risks, including by releasing a paper on evaluating LLMs for extreme risks.
Links
First, we have a few US legislative proposals:
- Democrats in Washington reportedly plan to introduce several new AI bills in early 2024.
- Bipartisan members of both the House and Senate have introduced bills to require federal agencies to adhere to the NIST AI RMF when using AI.
- The proposed AI Foundation Model Transparency Act of 2023 would require AI developers to disclose information about training data, safety evaluations, and risk management policies.
- The National AI Advisory Committee suggested that NIST could use $100M or more for its work on AI safety, which is far more than previous supporters have suggested.
Other policy discussions:
- How OpenAI is approaching the 2024 elections.
- The EU opens an antitrust investigation into the relationship between Microsoft and OpenAI.
- OpenAI responds to the New York Times lawsuit over training on copyrighted data.
- OpenAI has paid some media outlets up to $5M per year for access to their data.
- A book review discusses AI and the copyright system.
- AI scientists from American companies and Chinese universities met to discuss AI risks.
- To enforce AI policies, a new white paper advocates research on secure, governable chips.
Technical AI developments:
- DeepMind created an AI system that can prove geometric theorems at an expert human level.
- DeepSeek, a Chinese company, trained a model that it claims matches GPT-3.5’s performance.
- Here are the results of ten research projects on Democratic Inputs to AI sponsored by OpenAI.
Finally, the Center for AI Safety has a cluster of 256 A100 GPUs available for safety research. Apply for access here.
See also: CAIS website, CAIS twitter, A technical safety research newsletter, An Overview of Catastrophic AI Risks, our new textbook, and our feedback form
Listen to the AI Safety Newsletter for free on Spotify.
Subscribe here to receive future versions.
SummaryBot @ 2024-01-25T14:40 (+1)
Executive summary: Investments in compute for AI continue to grow, with Meta planning a large expansion and OpenAI’s CEO exploring chip fabrication. Militaries are also investing in AI, though 31 countries have signed a declaration on responsible use. Separately, Japan announced a national AI safety institute and Singapore released a draft governance framework for generative AI.
Key points:
- Meta plans to have 350,000 advanced GPUs by the end of 2024 to build open-source AGI.
- OpenAI CEO Sam Altman may start a chip fabrication company to increase supply.
- Militaries continue expanding AI capabilities, though 31 countries signed a declaration on responsible use.
- Japan announced an AI safety institute to research evaluation methods and standards.
- Singapore released a draft governance framework for generative AI, seeking feedback.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.