UK’s ambitions to police AI face Trump’s ‘starkly’ different approach


The UK’s ambitions to take a central role in policing artificial intelligence globally have been hit by struggles to launch an outpost in the US and by an incoming Trump administration that is threatening to take a ‘starkly’ different approach to AI regulation.

The British government is looking to bolster its AI Safety Institute (AISI), established last year with a £50mn budget and 100 staff, as it seeks to solidify its position as the world’s best-resourced body investigating the risks around AI.

Leading tech companies including OpenAI and Google have allowed the AISI to test and review their latest AI models. But plans to expand further by opening a San Francisco office in May were delayed because of elections in both the US and UK and difficulty recruiting for the Silicon Valley outpost, according to people with knowledge of the matter.

In an effort to maintain its influence, people close to the UK government believe it will increasingly position the AISI as an organisation focused on national security, with direct links to intelligence agency GCHQ.

Amid a tense period of relations between the UK’s left-leaning Labour government and the incoming US administration, some believe the AISI’s security work could function as a powerful diplomatic tool.

“Clearly, the Trump administration will take quite starkly different approaches to certain areas, probably regulation,” said UK technology secretary Peter Kyle, who emphasised Britain’s “secure relationships” with the US, including in security and defence. The minister added he would “make a considered decision” over when the AISI would open a San Francisco office once it could be properly staffed.

The growing emphasis on national security reflects changing priorities in the US, home to the world’s leading AI companies. President-elect Donald Trump has vowed to cancel President Joe Biden’s executive order on artificial intelligence, which established a US AI Safety Institute. Trump is also appointing venture capitalist David Sacks as his AI and crypto tsar, amid concern among tech investors about the overregulation of AI start-ups.

Civil society groups and tech investors have questioned whether AI companies will continue to comply with the British AI safety body as the incoming US administration signals a more protectionist attitude over its tech sector.

Republican senator Ted Cruz, who is set to chair the Senate commerce committee, has warned of foreign actors, including European and UK governments, imposing heavy-handed regulations on American AI companies, or having too much influence over US policy on the technology.

Another complication is the role of Tesla chief and Trump adviser Elon Musk. The tech billionaire has raised concerns about the security risks of AI even as he develops his own advanced models at his start-up, xAI.

“There is an obvious pitch to Elon on the AISI, basically selling the work we’re doing on security much more than the work we’re doing on safety,” said a person close to the British government, adding AISI provided a “front door into the UK GCHQ”.

Tech companies have said AISI’s research is already helping improve the safety of AI models built by mainly US-based groups. In May, AISI identified the potential for leading models to facilitate cyber attacks and provide expert-level knowledge in chemistry and biology, which could be used to develop bioweapons.

The UK government also plans to put its AISI on a statutory footing. Leading companies, including OpenAI, Anthropic and Meta, have all volunteered to grant AISI access to new models for safety evaluations before they are released to businesses and consumers. Under the proposed UK legislation, those voluntary commitments would be made mandatory.

“[These] will be the codes that become enshrined in law, and that’s simply because I don’t think when you see the potency of the technology we’re talking about, the public would remain comfortable thinking that the capabilities of some of this technology should be harnessed based on voluntary codes,” said Kyle, part of the Labour government elected in July.

The UK safety institute has also hired from tech firms such as OpenAI and Google DeepMind, helping to maintain good relationships with leading AI companies and ensure they adapt to its recommendations.

“We basically will live or die based on how good our talent pool is,” said Jade Leung, chief technology officer at the UK’s AISI, who previously worked at OpenAI.

Despite these links, there have been points of conflict with AI companies.

The AISI has complained it was not given enough time to test models before they were released, as the tech companies raced each other to launch their latest offerings to the public.

“It is not perfect, but there’s a constant conversation on that front,” said Geoffrey Irving, chief scientist at AISI, who previously worked at OpenAI and DeepMind. “It is not always the case that we have a lot of notice [for testing], which can be a struggle at times [but we have had] enough access for most of the major releases to do good evaluations.”

The UK AISI has so far tested 16 models, identifying a lack of strong safeguards and robustness against misuse in most of them. It publishes its findings without specifying which models it has tested.

While people inside the companies acknowledge some issues working with the institute, Google, OpenAI and Anthropic were among those that welcomed its work. “We do not want to be grading our own homework,” said Lama Ahmad, a technical programme manager at OpenAI.

