‘I see AI as born out of surveillance’


Meredith Whittaker is not a follower of norms. It takes a few tries to get her to pick a restaurant to meet at — her suggestions include the lobby of her hotel in London and a coffee shop that doesn’t take reservations, which makes me nervous.

Eventually, she relents and chooses Your Mum’s Kitchen, a tiny family-run eatery in the basement of a Korean supermarket on a quiet road in north London. Korean cuisine is her comfort food, she says, having grown up in Los Angeles’s Koreatown neighbourhood. Even now, Korean hotpots and jjigae, a spicy kimchi stew, are her go-to recipes when she’s cooking at home in Brooklyn.

Whittaker, who is petite, with a signature sweep of dark curls streaked with grey, is arguably Silicon Valley’s most famous gadfly. Over the past few years, she has come to represent the antithesis of Big Tech, an industry that has built its wildly successful economic model mostly through “surveillance capitalism” — profiting from people’s personal data.

Whittaker, meanwhile, is the iconoclastic president of the foundation behind the popular encrypted messaging app Signal, which is funded primarily by donations and has been downloaded hundreds of millions of times by people all over the world. She is a rare tech executive who decries the excesses of corporate power, rails against what she calls a “mass surveillance business model” and lobbies for the preservation of privacy.

Trying to pin down Whittaker is like bottling lightning. She is largely itinerant, travelling the world speaking on her pet themes of privacy, organised labour and power, and is constantly fizzing with words and ideas that tend to shoot off in unexpected directions. Her day-to-day involves running a tech company, but she also publishes academic papers on the sociopolitics of AI and is an outspoken anti-surveillance activist. To her, the disparate threads form a coherent picture of what she stands for.

“I see AI as born out of the surveillance business model . . . AI is basically a way of deriving more power, more revenue, more market reach,” she says. “A world that has bigger and better AI, that needs more and more data . . . and more centralised infrastructure [is] a world that is the opposite of what Signal is providing.”

At Google, where she started her career in 2006, Whittaker witnessed the rise of this new wave of so-called artificial intelligence — the ability to pull patterns out of data to generate predictions and, more recently, to create text, images and code — as Google began to leverage the precious data trails it was harvesting from its users.

“Suddenly, everywhere, there were little courses in Google that were like, learn machine learning, apply machine learning to your thing,” she says. “We hadn’t decided it was called AI then. The branding was still kind of up in the air.”

In 2014, a visiting engineer from Harvard told Whittaker about an idea he had to use AI software to predict genocides. She remembers this as the moment she began to have ethical concerns about the technology.

She knew the software to be imperfect, shaped heavily by the human behavioural data it was trained on, which was often biased, incomplete and messy. The idea still haunts her: “I was like, how do you know that that is actually accurate, and how do you take responsibility for the fact that a prediction itself can tip the scales? And what is the role of such a thing?”

It prompted her to co-found the AI Now Institute in New York, alongside Kate Crawford, a peer at Microsoft, to research the urgent societal impacts of AI, focused on the present rather than an amorphous future. Since then, she has been involved with organising worldwide employee walkouts to protest against Google’s military contracts, and advised Lina Khan, the chair of the US Federal Trade Commission, on the link between corporate concentration of power and AI harms.

Whittaker became president of the Signal Foundation in 2022. The Signal app it runs is used in the most sensitive scenarios by militaries, politicians and CEOs as well as dissidents and whistleblowers, and the encryption techniques developed by its engineers over the past decade are used by its rivals WhatsApp and Facebook Messenger. It currently oscillates between 70mn and 100mn users every month, depending on external triggers such as the wars in Ukraine and Gaza, which cause spikes in sign-ups, she explains.

“Within Signal, we are constantly trying to collect no data,” she says. “Having to actually make the thing [when] we are dependent on large actors in the ecosystem who set the norms, own the infrastructure . . . as an intellectual proposition is really interesting.

“It’s given me a lot of analytical tools to apply in the AI space and think about the political economy from within.”


Whittaker is clearly the expert between us on the menu, and she spearheads the ordering of sundubu jjigae — a soft tofu stew with seafood that she later proclaims “the perfect food” — plump vegetable dumplings and sour-hot kimchi on the side. My ramen bowl is piled with sliced bean curd and a fried egg floats, jewel-like, on the surface. The café is unlicensed (you can bring your own booze) but Whittaker tries to interest me in barley tea. It’s even better iced on a hot day, she says wistfully.

Menu

Your Mum’s Kitchen
17 Goldhurst Terrace, London NW6 3HX

Sundubu jjigae £11.50
Ramen £5.80
Fried bean curd £2.30
Vegetable dumplings £3
Kim £1.50
Kimchi £3.50
Barley tea x2 £4.60
Ginger honey tea x3 £5.60
Total inc service £41.58

The mother-daughter duo in the open kitchen are working in practised tandem, plating and ladling out steaming bowls whose aromas permeate the compact pastel-painted room. Whittaker says it reminds her of dinners with her dad as a teenager at a student joint called Tofu House in LA, usually after an argument. It’s now been turned into a chain, she complains.

Whittaker arrived at Google straight from a degree in English literature and rhetoric at the University of California, Berkeley. Before then, the only jobs she’d had were helping out in LA jazz clubs where her dad played the trombone. “I didn’t have a career ambition, it was very happenstantial,” she says. “I see kids now whose lives are structured within an inch of every minute. And it seems really hard as a young person who’s developing and trying to figure out yourself in the world.”

The job was in customer support and involved resolving user complaints. But because this was Google in the late 2000s, the team was staffed by college graduates with humanities degrees from elite universities, including a woman who as an undergraduate had codified a Central American language that had never been mapped before. Their performance was scored on how many reported bugs the engineering team resolved, which was particularly hard to influence because her team sat in a different building from the engineers.

Whittaker decided the efficient thing to do would be to ride over to the engineering building (Google has free campus bikes) and set herself up on a couch there, so that she could collaborate directly with the engineers and solve problems in real time. It was her first taste of how large tech companies worked — if you wanted to fix something, you had to work around the bureaucracy and do it yourself, even if it meant making enemies of your managers and colleagues.

It was at Google that Whittaker also learnt the subtleties of building a global internet business. She worked closely with the internally powerful technical infrastructure team, which would go on to become Google Cloud, and became particularly interested in net neutrality — the concept of an open and democratised internet. It led her to found an open-source research group known as M-Lab, working with civil society and privacy researchers outside Google to measure the global speed and performance of the internet. Google funded M-Lab with roughly $40mn a year, which Whittaker calls a “rounding error” for the search giant. But it gave her an idea of how much it would cost an independent, community-led project to build new internet infrastructure without a Big Tech sponsor.

In 2010, through her work at M-Lab, Whittaker became involved in online security circles, getting to know digital mavericks accused of being paranoid about privacy in a pre-Snowden era. “I thought of it as cutting tributaries in the river [of Google] and releasing some of it to anarchist privacy projects, like . . . Tor,” she explains, referring to the non-profit that runs anonymous web browsers. “I was just trying to figure out how do we support the community that is effectively building prophylactics to the business model I’m beginning to understand?”

That was when she first met Moxie Marlinspike, a cryptographer and entrepreneur who had founded Signal, which she was helping to fundraise for at the time. “There just wasn’t an understanding then of what it actually meant economically to build large-scale tech . . . and there still isn’t,” Whittaker says. “People are too afraid to ask the political and economic questions.”

More than a decade later, as president of the Signal Foundation, she remains a privacy absolutist — committed to the idea of end-to-end encryption and the need for pockets of digital anonymity in an industry fuelled by the monetisation of data — despite political pushback from governments all over the world.

She has also chosen to publish a detailed breakdown of Signal’s operating costs, estimating that by 2025 Signal will require approximately $50mn a year to operate. Most of the cost goes towards maintaining the digital infrastructure required to run a real-time consumer messaging app, such as servers and storage, though the app’s end-to-end encrypted calling functionality is one of the most expensive services it provides.

“I wanted to talk about the money, and how not free tech is, if you’re not willing to monetise surveillance. What’s paying for this if you don’t know the cost?” she says. “Not that many people have access to this information, and one thing I can do is shift the narrative by speaking honestly about the economics.”

Until 2017, Whittaker had thought she could successfully mobilise change from inside the machine, building up ethical AI research and development programmes at Google in collaboration with academics at universities and companies such as Microsoft. But in the autumn of that year, a colleague contacted her about a project they were working on. They had learnt it was part of a Department of Defense pilot contract, codenamed Project Maven, that used AI to analyse video imagery and eventually improve drone strikes. “I was basically just a . . . dissent court jester,” she says, still visibly disappointed.

She drafted an open letter to Google’s chief executive, Sundar Pichai, that received more than 3,000 employee signatures, urging the company to pull out of the contract. “We believe that Google should not be in the business of war,” the letter said.

“The Maven letter was sort of like, I can’t make my name as an ethical actor redounding to Google’s benefit,” she says. “You’re talking about Google becoming a military contractor. It’s still shocking, although it’s become normalised for us, but this is a centralised surveillance company with more kompromat than anyone could ever dream of, and now they’re partnering with the world’s most lethal military, as they call themselves.

“Yeah, that was the end of my rope.”

Whittaker went on to help organise employee protests and walkouts, in which more than 20,000 Google workers participated, over the company’s handling of other ethical matters such as sexual harassment allegations against high-profile executives. Google’s management eventually opted not to renew the Pentagon contract when it expired. But Whittaker left Google in 2019, after the company presented her with a set of options that she says gave her no choice but to quit. “It was like, you can go be an administrator, doing spreadsheets and budgets for the open source office [and] stop all the shit I had been building forever.”

In recent academic papers on the closed nature of AI technology and the industrial capture of the field, Whittaker often points to workplace activism and organising within tech firms and universities as a lever to check the industry’s power over civil society, academia and governments.

“This is how I landed on labour organising and social movements. It wasn’t an ideological a priori radicalism,” she says. “Politics aside, I tried force of argument, I tried working from the inside, I tried working in government. The place that seems to have the capacity to rein in capital seems to be labour.”


Whittaker is on the road for more than 120 days a year. To stay sane, she says, she sticks to little routines wherever she is in the world: making French press coffee from a particular chicory-mushroom blend she likes, or doing a daily yoga class via video from her teacher’s New York studio. She always tries to cook dinner on the day she gets home from a trip, to feel grounded again, “like a human in the world, not a brain on a stick”.

She reads voraciously on the political history of technology, and today is animated by efforts to reshape the existing tech ecosystem using the lessons she has learned.

“The [AI] market is crazy right now. Seventy per cent of Series A [early-stage start-up] investment is coming from the hyperscalers, and the majority of that goes back to the hyperscalers,” she says, referring to cloud companies Microsoft, Google and Amazon. “It’s like a Potemkin market, it’s not a real market.”

The consequences, according to Whittaker, are a handful of centralised actors that are determining the shape of AI and, ultimately, who gets to use systems that are capable of making sensitive determinations in health, war, financial services and energy. “There’s been a real problematic assumption that you have to have a computer science degree to make decisions about medical care or education or resource distribution from public agencies,” she says.

“We are led to see these [AI systems] as a kind of . . . revolutionary inflection point in scientific progress. I don’t think they are that. They are the derivatives of massive network monopolies, and they’re a way for these monopolies to grow their reach.”

I ask if she sees any potential for the AI systems we are building now to have a positive impact, but she pushes back. “Not without radical social change,” she says — a restructuring that would disrupt the economic forces around AI and the handful of private companies that currently control it, and one that prioritises social goals over revenue and growth. Here she invokes the idea, proposed by Maria Farrell and Robin Berjon, of “rewilding the internet” — renewing the current ecosystem so that it is surrounded by “other pine trees and forest life”, a plurality of digital life.

This is where she feels that projects such as Signal play a role. “Independent alternatives [to Big Tech] are actually safe havens from those paradigms,” she says. “For me, fighting for another model of tech, as an infrastructure for dissent and honesty in the face of . . . surveillance, it’s the same project. It’s just another place to do it.”

Madhumita Murgia is the FT’s artificial intelligence editor




