To combat the perception that its “open” AI is aiding foreign adversaries, Meta today said that it’s making its Llama series of AI models available to U.S. government agencies and to contractors working in national security.

“We are pleased to confirm that we’re making Llama available to U.S. government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work,” Meta wrote in a blog post. “We’re partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to government agencies.”

Oracle, for example, is using Llama to process aircraft maintenance documents, Meta says. Scale AI is fine-tuning Llama to support specific national security team missions. And Lockheed Martin is offering Llama to its defense customers for use cases like generating computer code.

Last week, Reuters reported that Chinese research scientists linked to the People’s Liberation Army (PLA), the military wing of China’s ruling party, used an older Llama model, Llama 2, to develop a tool for defense applications. The researchers, two of whom are affiliated with a PLA R&D group, built a military-focused chatbot designed to gather and process intelligence and to inform operational decision-making.

Meta told Reuters in a statement that the use of the “single, and outdated” Llama model was “unauthorized” and contrary to its acceptable use policy. But the report added fuel to the ongoing debate over the merits and risks of open AI.

The use of AI, open or closed, for defense is controversial to begin with.

According to a recent study from the nonprofit AI Now Institute, the AI deployed today for military intelligence, surveillance, and reconnaissance poses dangers because it relies on personal data that can be exfiltrated and weaponized by adversaries. It also has vulnerabilities, like biases and a tendency to hallucinate, that are currently without remedy, write the co-authors, who recommend creating AI that’s separate and isolated from “commercial” models.
