Was the Polymarket Trump whale smart or lucky?
The Wall Street Journal today has an interview with “Théo”, the mystery prediction-market trader who says he’ll make nearly $50mn on Polymarket by betting on Donald Trump winning the US presidency.
It offers some interesting new information about his apparent edge:
Théo argued that pollsters should use what are known as neighbor polls that ask respondents which candidates they expect their neighbors to support. The idea is that people might not want to reveal their own preferences, but will indirectly reveal them when asked to guess who their neighbors plan to vote for.
Théo cited a handful of publicly released polls conducted in September using the neighbor method alongside the traditional method. These polls showed Harris’s support was several percentage points lower when respondents were asked who their neighbors would vote for, compared with the result that came from directly asking which candidate they supported.
To Théo, this was evidence that pollsters were—once again—underestimating Trump’s support. The data helped convince him to put on his long-shot bet that Trump would win the popular vote. At the time that Théo made those wagers, bettors on Polymarket were assessing the chances of a Trump popular-vote victory at less than 40%.
As Théo celebrated the returns on Election Night, he disclosed another piece of the analysis behind his successful wager. In an email, he told the Journal that he had commissioned his own surveys to measure the neighbor effect, using a major pollster whom he declined to name. The results, he wrote, “were mind blowing to the favor of Trump!”
Théo declined to share those surveys, saying his agreement with the pollster required him to keep the results private. But he argued that U.S. pollsters should use the neighbor method in future surveys to avoid another embarrassing miss.
“Public opinion would have been better prepared if the latest polls had measured that neighbor effect,” Théo said.
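For intuition on how the neighbour method is supposed to work, here is a toy simulation (entirely our own sketch, not Théo's private polling). It assumes that a slice of one candidate's supporters won't reveal their own preference to a pollster but will answer honestly about the voters around them; the true vote share and the "shy" rate are invented for illustration.

```python
import random

random.seed(42)

N = 100_000          # respondents in a toy poll
TRUE_TRUMP = 0.50    # assumed true Trump share (illustrative, not real data)
SHY_RATE = 0.06      # assumed share of Trump voters who won't say so directly

voters = [random.random() < TRUE_TRUMP for _ in range(N)]  # True = Trump voter

# Direct question: "shy" Trump voters tell the pollster they back Harris
direct_trump = sum(v and random.random() >= SHY_RATE for v in voters) / N

# Neighbour question: each respondent reports the Trump share among ten
# randomly drawn "neighbours", revealing what they observe locally even if
# they hide their own preference
neighbour_trump = sum(
    sum(random.random() < TRUE_TRUMP for _ in range(10)) / 10 for _ in range(N)
) / N

print(f"direct poll:    Trump {direct_trump:.1%}")     # ~47%, understates Trump
print(f"neighbour poll: Trump {neighbour_trump:.1%}")  # ~50%, recovers the true share
```

The neighbour question recovers the hidden support here only because respondents observe a representative sample of voters and report it honestly; real neighbourhoods are politically clustered, which is one reason the method can also mislead.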
Théo’s hunch has been proved right, but does the methodology stack up? According to the experts, it’s impossible to know.
“Unless the evidence is put into the public domain with tables (often missing for many US polls) it is frankly impossible to comment,” Sir John Curtice, professor of politics at Strathclyde University, told FTAV.
Though only a few papers have been published that test the accuracy of indirect opinion polls, the wisdom of crowds remains an active area of research. James Surowiecki’s 2004 pop-sociology bestseller of the same name sets out the argument that decentralised groups of independent, diverse thinkers can provide unbiased estimates of reality. More recent research — such as this paper from Roni Lehrer, Sebastian Juhl and Thomas Gschwend of Mannheim University — has suggested crowds are fairly good at guessing what “share of the population has [a] socially undesirable characteristic”.
Building on the theme is Predicting Elections: a ‘Wisdom of Crowds’ Approach by Martin Boon, co-founder of Deltapoll. His study concludes that while know-thy-neighbour polling can be more accurate than conventional surveys, the method is “more than capable of producing seriously misleading predictions”.
So-called wisdom polling outperformed the best conventional poll for the UK 2010 general election, Boon finds, but was a notably poor predictor when applied to the 2011 referendums on Welsh devolution powers and on UK voting reform.
Wisdom polls struggle when a high proportion of the electorate doesn’t understand the question, he suggests:
When our general election prediction proved accurate, most people had the advantage of both a basic understanding of British politics at general election time, and a prompted understanding of how each party had fared at the previous election. In short, they had enough information to be smart. However, this may not have been the case in the referenda; both were characterised by the electorate’s limited understanding.
Asking people to take a view on proportional representation versus first-past-the-post produced superficial answers that clustered, coin-flip-like, around the 50 per cent midpoint, Boon finds. Respondents' predictions improved in every case when they were given information with which to frame an answer, such as the result of a previous vote, though the trade-off is that prompted questions introduce potential biases. And even then, on a difficult question, conventional voting-intention polling still won out.
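To illustrate Boon's point rather than his data, consider a toy model in which respondents are asked to predict the "Yes" share in a referendum they barely understand. The 35 per cent true share, the 40 per cent anchor and the noise level below are invented for the sketch:

```python
import random

random.seed(1)

TRUE_YES = 0.35    # assumed true "Yes" share (illustrative, not real data)
N = 50_000         # respondents

# Uninformed crowd: with no grasp of the question, a guess at the national
# "Yes" share is effectively a coin flip dressed up as a percentage
uninformed = [random.uniform(0, 1) for _ in range(N)]

# Prompted crowd: respondents anchor on a supplied reference point (e.g. a
# previous result) and adjust noisily towards what they actually observe
ANCHOR = 0.40      # hypothetical prompt
prompted = [0.5 * ANCHOR + 0.5 * TRUE_YES + random.gauss(0, 0.05) for _ in range(N)]

mean = lambda xs: sum(xs) / len(xs)
print(f"true Yes share:        {TRUE_YES:.0%}")
print(f"uninformed prediction: {mean(uninformed):.0%}")  # ~50%, regardless of the truth
print(f"prompted prediction:   {mean(prompted):.0%}")    # closer, but pulled towards the anchor
```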
How informed and engaged the US electorate was in this year’s presidential election is being explored at length elsewhere, as is the possibility that systematic biases skewed conventional polls. Whether one trader’s private polling tapped sentiment more accurately than the publicly available surveys, or whether statistical noise just happened to reinforce his confidence to buy a dollar for 40c, can’t be known without seeing the data.
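On the "dollar for 40c" point, the arithmetic of the wager looks like this. The 40c entry price matches the sub-40 per cent market odds reported above; the 55 per cent private probability is a purely hypothetical stand-in, since we don't know what Théo actually believed.

```python
# Rough expected-value sketch of a binary prediction-market bet.
# The 40c price matches the sub-40% market odds reported above; the 55%
# "true" probability is a hypothetical stand-in, not Théo's actual estimate.
price = 0.40     # cost of a contract paying $1 if Trump wins the popular vote
p_true = 0.55    # bettor's (hypothetical) private probability

ev_per_dollar_staked = (p_true * 1.0 - price) / price   # expected return per $1 staked
kelly_fraction = (p_true - price) / (1 - price)         # Kelly stake on a binary contract

print(f"expected return per $1 staked: {ev_per_dollar_staked:+.0%}")  # ~+38%
print(f"Kelly fraction of bankroll:    {kelly_fraction:.0%}")         # ~25%
```

An edge of that size justifies a large stake under standard bet-sizing rules, but the same pay-off arrives whether the private estimate was insight or noise.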
Either way, the bet on Trump winning the popular vote was not quite as contrarian as the risk-and-reward of a binary market makes it appear. “A 40 per cent chance is quite high!” said Curtice:
In any event the polls were not far off. [They] probably underestimated Trump relative to Harris by 4 points and by less than that in most of the swing states. Nobody would have noticed such errors if the election had not been as close as it was.
That’s not to deny that the polls still have a bit of a problem estimating Trump — but finding the source of an error as small as the one this time around will not be easy.
Further reading:
— Take political betting markets literally, not seriously (FTAV)