AI admin tools pose a threat to national security

The writer is chief AI scientist at the AI Now Institute

Artificial intelligence is already being used on the battlefield. Accelerated adoption is in sight. This year, Meta, Anthropic and OpenAI all declared that their AI foundation models were available for use by US national security agencies. AI warfare is controversial and widely criticised. But a more insidious set of AI use cases has already been quietly integrated into the US military.

At first glance, the tasks that AI models are being used for may seem insignificant. They are helping with communications, coding, resolving IT tickets and data processing. The problem is that even banal applications can introduce risks. The ease with which they are being deployed could compromise the safety of civil and defence infrastructure.

Take US Africa Command, one of the US Department of Defense’s combatant commands, which has been explicit about its use of an OpenAI tool for “unified analytics for data processing”.

Such administrative tasks can feed into mission-critical decisions. Yet as repeated demonstrations have shown, AI tools consistently fabricate outputs (known as hallucinations) and introduce novel vulnerabilities. Their use might lead to an accumulation of errors. Because USAfricom is a war-fighting force, small errors could compound over time into decisions that cause civilian harm and tactical mistakes.

USAfricom is not alone. This year, the US Air Force and Space Force launched a generative AI chatbot called the Non-classified Internet Protocol Generative Pre-training Transformer, or NIPRGPT. It can “answer questions and assist with tasks such as correspondence, background papers and code”. Meanwhile, the navy has developed a conversational AI tech-support tool that it calls Amelia.

Military organisations justify their use of AI models by claiming they enhance efficiency, accuracy and scalability. In reality, their procurement and adoption reveal a concerning lack of awareness about the risks posed.

These risks include adversaries poisoning the data sets that models are trained on, allowing them to subvert outcomes when certain trigger keywords are used, even on purportedly “secure” systems. Adversaries could also weaponise hallucinations.

Despite this, US military organisations have not addressed these risks or provided assurances about how they intend to protect critical defence infrastructure.

Lack of fitness for purpose poses just as many safety risks as deliberate attacks, if not more. The nature of AI systems is to provide outcomes based on statistical and probabilistic correlations drawn from historical data, not on factual evidence, reasoning or “causation”. Take code generation and IT tasks: researchers at Cornell University last year found that OpenAI’s ChatGPT, GitHub Copilot and Amazon CodeWhisperer generated correct code only 65.2 per cent, 46.3 per cent and 31.1 per cent of the time, respectively.

Even as AI companies assure us they are working on improvements, what’s undeniable is that current error rates are too high for applications that require precision, accuracy and safety. Overreliance on and trust in these tools could also cause users to overlook their mistakes.

This raises the question: how have military organisations been permitted to procure and deploy AI models with such ease?

One answer is that these models appear to be regarded as an extension of IT infrastructure, when in fact they may be used as analytical tools that can alter the outcomes of crucial missions. The ability to classify AI as infrastructure, bypassing the procurement channels that would assess its suitability for mission-critical purposes, should give us pause.

In the pursuit of potential future efficiencies, the use of AI administrative tools by military agencies introduces real risks. This is a trade-off that their purported benefits cannot justify.
