Anthropic teams up with Palantir and AWS to sell AI to defense customers
By Matrice Jacobine @ 2024-11-09T11:47 (+28)
This is a linkpost to https://techcrunch.com/2024/11/07/anthropic-teams-up-with-palantir-and-aws-to-sell-its-ai-to-defense-customers/
Anthropic on Thursday said it is teaming up with data analytics firm Palantir and Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to Anthropic's Claude family of AI models.
The news comes as a growing number of AI vendors look to ink deals with U.S. defense customers for strategic and fiscal reasons. Meta recently revealed that it is making its Llama models available to defense partners, while OpenAI is seeking to establish a closer relationship with the U.S. Defense Department.
Anthropic's head of sales, Kate Earle Jensen, said the company's collaboration with Palantir and AWS will "operationalize the use of Claude" within Palantir's platform by leveraging AWS hosting. Claude became available on Palantir's platform earlier this month and can now be used in Palantir's defense-accredited environment, Palantir Impact Level 6 (IL6).
The Defense Department's IL6 is reserved for systems containing data that's deemed critical to national security and requiring "maximum protection" against unauthorized access and tampering. Information in IL6 systems can be up to "secret" level, one step below top secret.
"We're proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations," Jensen said. "Access to Claude within Palantir on AWS will equip U.S. defense and intelligence organizations with powerful AI tools that can rapidly process and analyze vast amounts of complex data. This will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource intensive tasks and boost operational efficiency across departments."
This summer, Anthropic brought select Claude models to AWS' GovCloud, signaling its ambition to expand its public-sector client base. GovCloud is AWS' service designed for U.S. government cloud workloads.
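For readers unfamiliar with how Claude is consumed on AWS, the sketch below shows roughly what calling a Bedrock-hosted Claude model from Python looks like. It is illustrative only: the model ID, the GovCloud region name, and the assumption that the account has Bedrock Converse access enabled are not details from the article.

```python
# Illustrative sketch only: the region name and model ID below are assumptions,
# not facts from the article. Requires boto3 and AWS credentials with Bedrock
# access in the chosen region.
import boto3

# Hypothetical GovCloud region; commercial regions (e.g. us-east-1) use the same API.
client = boto3.client("bedrock-runtime", region_name="us-gov-west-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize this report in three bullet points."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The Converse API returns the assistant message as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```

The same pattern applies whether the model is reached directly through Bedrock or through a platform like Palantir's that brokers access on top of AWS hosting.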
Anthropic has positioned itself as a more safety-conscious vendor than OpenAI. But the company's terms of service allow its products to be used for tasks like "legally authorized foreign intelligence analysis," "identifying covert influence or sabotage campaigns," and "providing warning in advance of potential military activities."
"[We will] tailor use restrictions to the mission and legal authorities of a government entity" based on factors such as "the extent of the agency's willingness to engage in ongoing dialogue," Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to "substantially increase the risk of catastrophic misuse," show "low-level autonomous capabilities," or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.
Government agencies are certainly interested in AI. A March 2024 analysis by the Brookings Institution found a 1,200% jump in AI-related government contracts. Still, certain branches, like the U.S. military, have been slow to adopt the technology and remain skeptical of its ROI.
Anthropic, which recently expanded to Europe, is said to be in talks to raise a new round of funding at a valuation of up to $40 billion. The company has raised about $7.6 billion to date, including forward commitments. Amazon is by far its largest investor.
huw @ 2024-11-10T00:01 (+6)
Military applications of AI are not an idle concern. AI systems are already being used to increase military capacity by generating and analysing targets faster than humans can (and in this case, seemingly without much oversight). Palantir's own technology likely also allows police organisations to defer responsibility for racist policing to AI systems.
Sure, for the most part, Claude will probably just be used for common requests, but Anthropic have no way of guaranteeing this. You cannot do this by policy, especially if it's on Amazon hardware that you don't control and can't inspect. Ranking agencies by "cooperativeness" should also be taken as lip service until they have a proven mechanism for doing so.
So they are revealing that, to them, AI safety doesn't mean that they try to prevent AI from doing harm, just that they try to prevent it from doing unintended harm. This is a significant moment for them and I fear what it portends for the whole industry.