(Bloomberg) -- US intelligence agencies are grappling with a daunting new challenge: Making artificial intelligence safe for America’s spies.

An arm of the Office of the Director of National Intelligence is tapping companies and colleges to help harness rapidly developing AI technology that could provide an edge against global competitors like China. The challenge is ensuring it doesn’t open a backdoor into the nation’s top secrets or generate fake data.

“The intelligence community wants to avail itself of the large-language models out there, but there are a lot of unknowns,” said Tim McKinnon, a data scientist who manages one of the ODNI’s projects, known as Bengal. “The end goal is being able to work with a model with trust.” 

The focus on reliability and security is part of a broader US military and intelligence campaign to harness the power of AI and compete with China, which is seeking to become the global leader in the field by 2030. It’s also helping drive a surge in AI-related hiring in the Washington area as the government and its contractors embrace the emerging technology.

The most pressing concerns center on large-language models, which are trained on massive data sets and power tools such as OpenAI Inc.’s ChatGPT, generating detailed responses to user prompts and questions. 


“The intelligence community views AI with healthy skepticism and a range of enthusiasm,” said Emily Harding, director of the Intelligence, National Security, and Technology Program at the Center for Strategic and International Studies, pointing to analysts’ ability to process large amounts of information alongside doubts about the reliability of current models. 

“It’s a tool in its earliest stages of usefulness,” she said. 

The Central Intelligence Agency’s chief technology officer, Nand Mulchandani, sees AI potentially boosting productivity by digesting vast volumes of content and finding patterns that would be difficult or impossible for humans to discern. He also sees it as a way to counter China’s numerical advantage in intelligence staffing.

“Human beings are great but are hard to scale,” Mulchandani said in an interview. “Helping to scale human beings with tech is the smart business move.”

US spy agencies have already begun to experiment with AI programs of their own. The CIA was preparing to roll out a feature akin to ChatGPT that would give analysts better access to open-source intelligence, Bloomberg News reported in September.

The appeal of AI to US intelligence stems partly from its potential to distinguish signals from noise among the troves of data collected every day, as well as to develop creative ways of looking at problems, according to an intelligence analyst who spoke on the condition that they not be identified discussing internal matters. Still, there are ways to influence and interfere with AI models that introduce new risks, the analyst said.

Threats, Meddling

McKinnon said AI is vulnerable to both insider threats and outsider meddling. Those threats could be humans trying to trick the systems into divulging classified information — an attempt to “jailbreak” the model. Or the opposite: a compromised system “trying to elicit information from humans that it shouldn’t,” he said. 

McKinnon’s Bengal program — short for Bias Effects and Notable Generative AI Limitations — is run by the Intelligence Advanced Research Projects Activity, a little-known organization that’s under the ODNI. Bengal has gathered input from companies including a unit of Amazon.com Inc., but McKinnon declined to say if it was working with any specific firms. 

The program aims to alleviate some of AI’s risks by developing ways to handle potential biases and toxic outputs such as “hallucinations,” in which a program fabricates information or delivers an incorrect result, a failure that isn’t uncommon. 

“Right now there are only a few models that are publicly available, but they’ll become more prevalent and the ways they’ll be trained could be biased,” McKinnon said. “If there’s a poisoned model, we want to be able to mitigate against that.”

©2024 Bloomberg L.P.