Bosses struggle to police workers’ use of AI
Matt had a secret helping hand when he started his new job at a pharmaceutical company in September.
The 27-year-old researcher, who asked to be identified by a pseudonym, was able to keep up with his more experienced colleagues by turning to OpenAI’s ChatGPT to write the code they needed for their work.
“Part of it was sheer laziness. Part of it was genuinely believing it could make my work better and more accurate,” he says.
Matt still does not know for sure whether this was allowed. His boss had not explicitly prohibited him from accessing generative AI tools such as ChatGPT but neither had they encouraged him to do so — or laid down any specific guidelines on what uses of the technology might be appropriate.
“I couldn’t see a reason why it should be a problem but I still felt embarrassed,” he says. “I didn’t want to admit to using shortcuts.”
Employers have been scrambling to keep up as workers adopt generative AI at a much faster pace than corporate policies are written. An August survey by the Federal Reserve Bank of St Louis found nearly a quarter of the US workforce was already using the technology weekly, rising closer to 50 per cent in the software and financial industries. Most of these users were turning to tools such as ChatGPT to help with writing and research, often as an alternative to Google, as well as for translation and coding assistance.
But researchers warn that much of this early adoption has been happening in the shadows, as workers chart their own paths in the absence of clear corporate guidelines, comprehensive training or cyber security protection. By September, almost two years after the launch of ChatGPT, fewer than half of executives surveyed by US employment law firm Littler said their organisations had brought in rules on how employees should use generative AI.
Among the minority that have implemented a specific policy, many employers’ first impulse was to jump to a blanket ban. Companies including Apple, Samsung, Goldman Sachs, and Bank of America prohibited employees from using ChatGPT in 2023, according to Fortune, primarily due to data privacy concerns. But as AI models have become more popular and more powerful, and are increasingly seen as key to staying competitive in crowded industries, business leaders are becoming convinced that such prohibitive policies are not a sustainable solution.
“We started at ‘block’ but we didn’t want to maintain ‘block’,” says Jerry Geisler, chief information security officer at US retailer Walmart. “We just needed to give ourselves time to build . . . an internal environment to give people an alternative.”
Walmart prefers staff to use its in-house systems — including an AI-powered chatbot called "My Assistant" for secure internal use — but does not ban its workers from using external platforms, so long as they do not include any private or proprietary information in their prompts. It has, however, installed systems to monitor requests that workers submit to external chatbots on their corporate devices. Members of the security team will intercept unacceptable behaviour and "engage with that associate in real time", says Geisler.
He believes instituting a “non-punitive” policy is the best bet for keeping up with the ever-shifting landscape of AI. “We don’t want them to think they’re in trouble because security has made contact with them. We just want to say: ‘Hey, we observed this activity. Help us understand what you’re trying to do and we can likely get you to a better resource that will reduce the risk but still allow you to meet your objective.’
“I would say we see probably almost close to zero recidivism when we have those engagements,” he says.
Walmart is not alone in developing what Geisler calls an “internal gated playground” for employees to experiment with generative AI. Among other big companies, McKinsey has launched a chatbot called Lilli, Linklaters has started one called Laila, and JPMorgan Chase has rolled out the somewhat less creatively named “LLM Suite”.
Companies without the resources to develop their own tools face even more questions — from which services, if any, to procure for their staff, to the risk of growing dependent on external platforms.
Victoria Usher, founder and chief executive of communications agency GingerMay, says she has tried to maintain a “cautious approach” while also moving beyond the “initial knee-jerk panic” inspired by the arrival of ChatGPT in November 2022.
GingerMay started out with a blanket ban but has in the past year begun to loosen this policy. Staff are now permitted to use generative AI for internal purposes but only with the express permission of an executive. Workers must access generative AI only through the company's ChatGPT Pro subscription.
“The worst-case scenario is that people use their own ChatGPT account and you lose control of what’s being put into that,” says Usher.
She acknowledges that her current approach of asking employees to request approval for each individual use of generative AI may not be sustainable as the technology becomes a more established part of people’s working processes. “We’re really happy to keep changing our policies,” she says.
Even with more permissive strategies, workers who have been privately using AI to accelerate their work may not be willing to share what they have learnt.
“They look like geniuses. They don’t want to not look like geniuses,” says Ethan Mollick, a professor of management at the University of Pennsylvania’s Wharton School.
A report published last month by workplace messaging service Slack found that almost half of desk workers would be uncomfortable telling their managers they had used generative AI — largely because, like Matt, they did not want to be seen as incompetent or lazy, or risk being accused of cheating.
Workers polled by Slack also said they feared that, if their bosses knew about productivity gains made using AI, they would face lay-offs, and that those who survived future cuts would simply be handed a heavier workload.
Geisler expects he will have to constantly review Walmart’s approach to AI. “Some of our earlier policies already need updating to reflect how the technology is evolving,” he says.
He also points out that Walmart, as a large global organisation, faces the challenge of establishing policies applicable to many different types of workers. “We’re going to want to share with our executives, our legal teams, and our merchants much different messages around how we’re going to use this technology than we might [with] somebody that works in our distribution centres or our stores,” he says.
The shifting legal landscape can also make it tricky for companies to implement a long-term strategy for AI. Legislation is under development in regions including the US, EU, and UK but companies still have few answers about how the technology will affect intellectual property rights, or fit into existing data privacy and transparency regulations. “The uncertainty is just leading some firms to try to ban anything to do with AI,” says Michelle Roberts Gonzales, an employment lawyer at Hogan Lovells.
For those attempting to develop some kind of strategy, Rose Luckin, a professor at University College London’s Knowledge Lab, says the “first hurdle” is simply figuring out who within the organisation is best placed to investigate what kinds of AI will be useful for their work. Luckin says she has so far seen this task assigned to everyone from a chief executive to a trainee, as companies make vastly divergent assessments of just how crucial AI will be to their businesses.
Sarah, a paralegal at a boutique law firm in London, was surprised to be asked to research and design the rule book for how her more senior colleagues should be using AI. “It’s weird that it’s become my job,” she says. “I’m literally the most junior member of staff.”