AI is transforming the world of work. Are we ready for it?


Artificial intelligence is set to reshape our workplaces.

What does this mean for my job? Am I going to have a job? Will my children have jobs?

Companies are itching to incorporate AI into their systems, but are we really ready for it?

So two-thirds of our desk worker population are still not using this technology.

I’m Isabel Berwick. I host the FT’s Working It podcast and write a newsletter about the workplace. In this series I’ll explore some of the most pressing issues around the future of work and talk to senior leaders about how they’re making work better.

We have this extraordinary responsibility to shape the new world of work.

For everyone.

San Francisco’s one of the tech hubs of the world, and AI is definitely in the air.

Generative AI is kind of a subset of artificial intelligence more broadly. There’s a long history behind this technology. But really, when we talk about generative AI today, what we’re talking about is something that’s emerged over the last three years. It can translate between text, images, video, audio, even code. Now you’re seeing that technology applied to lots of other types of patterns, including things like DNA even.

And so the big turning point was the launch of ChatGPT at the end of 2022, where for the first time anybody, any user, could literally just communicate directly, interact with a generative AI system. And since then, there have been a lot of these.

Since AI’s explosion into public consciousness in 2022, we’ve also seen a huge push from businesses to harness the power of generative AI to streamline the workplace. The buzz of AI is being felt everywhere.

But in my conversations with business leaders, one wise person said to me that CEOs have bought Ferraris in the shape of state-of-the-art AI systems. They just haven’t given any driving lessons to their staff.

A survey of 10,000 desk workers found that the AI benefit that executives are most looking forward to is increased productivity among workers. But leaders’ biggest concerns about embracing AI are around data security and privacy, followed by distrust in AI’s accuracy and reliability.

What we see in the data is that the executive urgency to incorporate AI is at an all-time high, right? This has increased sevenfold over the last six months. So this is the most top-of-mind thing for executives worldwide.

But what’s really interesting is two-thirds of our desk worker population are still not using this technology. So there’s this really interesting disconnect.

Salesforce’s global HQ in San Francisco is home to Slack’s Workforce Lab, which studies how to make work better. The team there has been researching what motivates workers to use AI. I went to meet the head of the Workforce Lab and SVP of Research and Analytics, Christina Janzer.

So what are the conditions that might make workers more likely to trust AI or be interested to use it?

We’ve been really interested in understanding this gap of executive urgency and employee adoption. And so what we really wanted to do was, let’s better understand the humans, right? Why are the humans using it or not using it?

And so we did some research to really understand the emotions that people are feeling about AI. And we uncovered five different personas that help us understand the AI workplace.

The first one is called a maximalist. This is a person who’s very excited about the technology. They use it very actively. The second persona, also using AI very actively, is called the underground. And the really interesting thing about the underground is although they’re using it and getting a lot of value from it, they’re hiding their usage. They’re hiding their AI usage because they feel guilty, and they feel like people are going to think that they’re cheating.

And then the next three are the ones that aren’t really actively using AI. So the rebel is the person who feels like AI is a little bit of a threat. The superfan is very excited about AI, but they aren’t using it themselves. They don’t know how to start.

And the final one is the observer. The observer is simply someone who’s in a wait-and-see mentality. They show some interest. They show some caution. They’re just not actively engaged, and they’re kind of just waiting to see how the whole thing plays out.

Intrigued, I took Slack’s test to find out what AI persona I have.

What is your AI persona? Take the Slack AI persona quiz to find out who you are. How frequently do you use AI tools for work-related tasks? Mm, probably a couple of times a week.

How do you feel about the use of AI in the workplace? Excited, guilty, indifferent, concerned, relieved, reluctant? Excited, actually. I’m concerned about AI replacing my job. I’m quite old, so I’m just going to put 2 for that. I’m interested in learning or further developing AI skills. Yes, I’m a 5 on that. I’m a maximalist.

As a maximalist, I can see the benefits that AI can offer. But with companies full of staff with such diverging views on AI, is it a good idea to have everyone speeding ahead?

So do you think that organisations should have put guardrails in place first? I mean, AI’s moving so quickly. That’s hard. Is there a trust piece missing here, I guess is what I’m saying?

There is a trust piece missing. I mean, what we see in the data is that only 7 per cent of workers worldwide fully trust AI, right? And that’s to be expected with new technology. You have to use it. You have to get used to it in order to really understand whether you can trust it.

The other thing that’s really interesting about trust is there’s a big manager component here. People who feel trusted by their manager are twice as likely to actually try AI.

What I think you’re saying is that there’s a very human part to this. Do you get on with your colleagues? Do you trust your manager? Do you feel safe to communicate where you’ve got things wrong?

Oh my gosh, I’m so happy you said it like that, because I think so much of the conversation that we’re having is around the technology, all of the amazing advances that we’re making, all of the amazing things that this technology can do. But you can build the coolest technology in the world, and if people don’t use it, it doesn’t matter.

And so to your question about, should people have come up with guidelines earlier, maybe. But I also think we need to give leaders a break. This is new technology. It’s developing so quickly, and we’re just trying to catch up.

And so what we suggest is it’s not too late, right? Now’s the time to really sit down and figure out, what is your policy going to be? What are you going to allow your employees to do? And just be clear. The most important thing is transparency.

Slack has found that when businesses cater for all types of AI personas and have defined safe usage guidelines, employees are nearly six times more likely to use AI tools in the workplace. But in a recent survey of desk workers, 43 per cent say they’ve received no guidance from their leaders or organisation on how to use AI tools at work.

These models are potentially so powerful. They are remarkable in what they could provide to us as humans. And if we think that analysis and structured thinking and creativity are a net economic good, we really want to be able to distribute that as widely as we can.

Tech investor and founder of Exponential View, Azeem Azhar, looks at the impact of AI on society. I invited Azeem to the FT’s offices to find out more about what AI can and can’t do for the workplace.

How are you using AI yourself at the moment?

One of my favourites is that I have a number of different AI assistants who will attend my meetings. So one is extremely good at taking a detailed transcript, and there’s another assistant which evaluates my performance in the meetings. And I’ll get an email, and it’ll say, you did this well. You didn’t do this so well. Next time, try doing this.

What some of the academic research has shown is that the more expertise you have, the more you can get out of the system. The reason why somebody who’s senior can do better with AI than someone who perhaps is junior is because when you use a generative AI tool, it’s a little bit like delegating tasks. And who best delegates tasks? Well, people who have been delegating tasks for 15 or 20 years. That is, the senior exec.

What are the downsides that are obvious to you as someone who is in that world all the time?

One of the biggest downsides is that this is still quite a complicated technology, and I think people that have used AI know that it can also be a little bit unreliable. And when you have a complicated technology that’s unreliable, you have got to be prepared for things to go a bit askew and awry. And I think firms have to figure out how they experiment and invest at a pace while recognising that the ground is going to be shifting quite a lot.

A second issue is going to be about the temptation that companies may have to use this first and foremost as a cost-cutting exercise. And the reason they need to be a little bit careful is that this is an unstable market and an unstable environment. And so one of the things that I urge bosses to do is to be much, much more circumspect about headcount reductions, because you never know exactly where the pieces are going to fall.

AI is set to be a skills equaliser, helping weaker employees to level up. But Azeem has highlighted the complexity of adoption, and that it is CEOs who have to lead the charge.

I think what’s been different with the generative AI wave is that it is so easy to use, and it doesn’t require changing your back-end systems or replacing big contracts that you might have with enterprise software companies or whatever. Because in a lot of cases the tech companies that we already have relationships with in the enterprise, like, say, Microsoft or Google, these are the companies offering generative AI. So they can easily pitch to business leaders, saying, we’re the world’s biggest enterprise software companies, and we think this is going to change the world.

There’s been a lot of buzz, now verging on maybe even hype, around the idea that if you don’t adopt this now, you’re going to be left behind. That’s the result of both good marketing and a sort of consumer-led technology.

HR software company Lattice has made the move to becoming an AI-powered platform. I went to their HQ to see its capabilities.

So right now, I’m logged in as Alivia’s manager. And what you’ll see on the right is a summary of all of the feedback Alivia received over the past year.

Lattice’s AI software takes all the available data, including feedback and previous reviews, and learns the tone and grammar of the user. It then creates an authentic performance review.

With the best will in the world, some managers are terrible at feedback. They give bad feedback. They’re blunt. They’re clumsy. They may offend people. What can Lattice do to stop that happening?

So what we are actually maintaining is a set of standards for what good feedback looks like. It should be inclusive. It should be actionable. It should be concise. Regardless of what level of experience you have with feedback delivery, it up-levels your writing in a way that converges with best feedback-writing practices.

So it saves bad managers from themselves, essentially.

Yeah.

I love it.

Every time we’ve had advances in tech, we’ve had to work harder. Is the promise of AI that this time we’ll get it right?

We have more tech that’s supposed to simplify, that requires more tech to integrate. But our collaboration has not got any easier, because there’s too much tech. This is what I think is so powerful with AI: it really is simplifying things down from an experience standpoint.

The way that we will experience the technology is the way that we interact as humans. It doesn’t matter if you’re in system A or system B. That data is brought together behind the scenes. So when you ask it a question, it can give you an answer.

So what are you finding are the main use cases for it? And also, how are people responding in a perhaps more cautious way?

In the world of HR, you have a bunch of structured data around your employee record: your compensation, your performance, the feedback you’ve received, all of the skills you may have. And by bringing all of that together to answer questions and give you guidance, and just making your work life easier, that’s where we’re at right now with AI.

And we’re just going to see this develop faster and faster and faster, which is amazing. But then it also makes you question, how do I scale up my teams, my employees to match these fast-changing expectations? And how do we govern it?

Over the next year, tech companies will unleash the next wave of innovation on business: AI agents. These gen AI assistants won’t just tell you what to do; they will be given access to perform actions on your behalf. But are senior leaders and employees really willing to hand over their autonomy?

The question then is, how are we going to manage it? How are we going to hold it accountable? How are we going to be transparent with the decisions that we’re making? There is no handbook, so hope can’t be our strategy for getting it right. We have to hold ourselves accountable and be very transparent so that we can learn every step of the way.

And so that’s the thing for leaders, is that, how do you build trust with your employees? With communication, education, and a deep understanding of what you’re intending the AI to do.

So things are moving very quickly in the AI world. Is it too quick? Should leaders be pressing pause, or how should they best be implementing it?

It’s a great question, because one could say you move slow to go fast. The other thing is you need to be rapidly experimenting to learn along the way. What I would point to is the thing that is holding people back from going fast: their data not being in order, integrations not being set up, and people not having an understanding of what’s happening. Sort that out, and then you can move very fast, because people will see, oh, I’m getting this value. Oh, my job just got a lot easier.

I think Sarah’s reassuringly unsure about the ways that AI is going to change how we work. It’s the biggest workplace shift in our lifetimes. No wonder there’s a lot of hype and some trepidation. We will only find out what works by trial and error. And companies like Lattice are asking the questions now so that we can all learn later.

This is a very expensive technology to build. For now, the companies that are building it, they’re not passing that cost on to consumers or to customers because they want people to adopt it. And that’s generally the playbook of tech. How do you reach enough people so that you get to a point where you basically can’t live without this? And then you start to make money. That’s the phase they’re in today.

But that’s going to change, because it is so expensive to train AI systems. Tens of billions of dollars to build these huge models. And the more sophisticated they get, the more expensive that becomes. So I think the first question is, how much are you willing to pay when it’s not clear yet what the real big business benefits are?

So many leaders have gone all in on the hype around AI without really thinking about their specific organisational needs. One size doesn’t fit all. Is the key to success simply to take a step back, a deep breath, and think about where AI might truly make a difference, and where it’s not needed?

Some staff won’t want to be forced into using it, and the tech itself is still imperfect. We aren’t very patient about mistakes in the workplace, but will we all be willing to shift our behaviour to accommodate the software’s learning curve? AI is going to transform the world of work, no doubt about that. But it’s right to be a bit sceptical.
