Author Topic: Polling, Poll Manipulation, & Using Poll Results to Suppress Voting

Body-by-Guinness

We'll start this topic off with a great find by Doug that counts the ways aggregate polls were manipulated to show a closer 2024 race than the more accurate polls predicted, with those more accurate polls STILL underrepresenting likely Republican voters:

https://www.racket.news/p/how-americas-accurate-election-polls

Body-by-Guinness

Semi-Pros Spank Those Who Run Polls for a Living
« Reply #1 on: November 19, 2024, 05:00:01 PM »

John Tierney

The New Election Gurus

Using early-voting data, an emerging crop of number-crunchers predicted Trump’s win long before the pollsters and pundits did.

Nov 19 2024

Six days before the election, the statistician Nate Silver issued a warning to his 3.4 million followers on X: “Just Say No to analysis of early voting. It probably won’t help you to make better predictions. But you may fool yourself.” He received a prompt reply from a Utah woman, posting anonymously under the handle @DataRepublican.

“Nate, sit down,” she wrote. “You don’t know anything about early voting.”

She turned out to be right—at least about this election.

@DataRepublican had only 6,000 followers on X at the start of the campaign and hasn’t made a dime from her predictions, but she routed Silver and the rest of the pundit-pollster industry. So did her collaborators, an informal network of a half-dozen mostly anonymous and unpaid number-crunchers on X who believe in the predictive value of early-voting data. After building their own computer models to analyze early-voting trends in different states, they swapped results and strategies with one another on X, and then posted separate predictions.

While pollsters were calling the race an unpredictable tossup, these analysts fearlessly forecast a decisive Trump victory, calling Nevada and other battleground states for the now-president-elect two weeks before Election Day. By the campaign’s final weekend, they had correctly called six of the seven battleground states for Trump (they still weren’t sure about Michigan), and their forecasts of his winning margins in these states proved remarkably accurate, too. Several added predictions of the national popular vote—and correctly picked Trump to win it.

“This is an earthquake that should shake the polling industry, but we’ll see whether or not anyone recognizes it,” said one of the analysts, Christian Heiens, a Republican political consultant and podcaster who lives in Culpeper, Virginia. “Early voting has been almost totally missed by the industry, which is why they’ve been humiliated three times in a row. They don’t realize that it’s not election day anymore—it’s election season.”

The other analysts I interviewed (after contacting them on X) insisted on remaining anonymous, citing concerns that their forecasts and right-leaning political opinions could lead to complications in their communities and at their day jobs. “People get so passionate about politics these days,” said the analyst tweeting as @earlyvotedata, who lives in southern California. “I do this for fun, and I don’t want any publicity from it to affect my family or my work.”

Except for Heiens, who had worked in the Virginia state legislature and as a data analyst for Republican candidates in Virginia and Pennsylvania, all told me that they didn’t work in politics or the polling industry—the models were an unpaid sideline inspired by their fascination with numbers and their frustration with traditional polls. While pollsters rely on answers from at most a few thousand people that they hope will be representative of the electorate, these analysts draw data from tens of millions of people who have already voted by mail or in person. They don’t know how those millions voted, but they make informed guesses and project the composition of the electorate.
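
As a rough illustration of how that kind of projection can work, here is a minimal Python sketch. The party breakdown, the assumed support rates, and the vote counts are invented placeholders, not numbers from any of these analysts' actual models:

```python
# Hypothetical sketch: project a two-party margin from early-vote returns.
# All counts and support rates below are made-up illustrations, not real data.

# Early ballots returned so far, broken out by party registration.
early_returns = {"R": 1_200_000, "D": 1_050_000, "NP": 650_000}  # NP = no party

# Assumed share of each registration group voting for the Republican candidate.
# In practice such assumptions would draw on voter history, geography, and polling.
gop_support = {"R": 0.93, "D": 0.07, "NP": 0.52}

gop_votes = sum(n * gop_support[p] for p, n in early_returns.items())
dem_votes = sum(n * (1 - gop_support[p]) for p, n in early_returns.items())
total = gop_votes + dem_votes

print(f"Projected early-vote margin: R +{100 * (gop_votes - dem_votes) / total:.1f} pts")
```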

“People who respond to pollsters tend to be more partisan and engaged with politics than the average voter,” said the analyst who posts as @TonerousHyus, a former defense-industry data specialist who lives in San Antonio, Texas. “A lot of working-class voters, especially conservatives, don’t want to talk to pollsters. The early voting data gives you a better picture of the whole electorate—you can actually see who’s voting where.”

 @DataRepublican, who built the most elaborate of the group’s computer models, told me by email that she had been refining her algorithms for more than a decade by drawing on her professional experience as a software engineer developing machine-learning libraries. “I have extensive knowledge of database kernels, data indexing, and scalability,” she wrote. “If pollsters and pundits are racecar drivers, you might say that I engineer the racecars themselves.”

The analysts tailored their models to account for variation in states’ data. Some states report not only where early votes are cast but also each voter’s party registration, sex, and race. Many states will release each voter’s record in previous primaries and general elections—whether they voted, and if they voted early or on Election Day.

One of the major challenges is projecting how independent voters will break. The analysts rely on clues from each voter’s history (which primary did they vote in?) and from the voter’s address: an independent in a red county is more likely to vote Republican than is one living in a blue stronghold. “Modeling independents is very tricky, and it’s something I’ve had to learn by trial and error the past few elections,” Heiens said. “I have figured out patterns of independents who vote early, and they’re different from the ones who vote on Election Day. I’m waiting for the polling industry to catch on to that.”
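
A minimal Python sketch of those two clues—primary history and county lean—follows. The field names, weights, and probabilities are illustrative assumptions, not Heiens’s or @DataRepublican’s actual methods:

```python
# Hypothetical sketch: split independent early voters using the two clues
# described above, primary-voting history and the partisanship of the voter's
# county. The 50/50 weighting and the anchor values are arbitrary assumptions.

def lean_of_independent(voter: dict, county_gop_share: float) -> float:
    """Return an assumed probability that this independent votes Republican."""
    # Start from the partisan lean of the voter's county (0.0 - 1.0).
    p_gop = county_gop_share
    # A voter who chose a party primary reveals something about preference.
    if voter.get("last_primary") == "R":
        p_gop = 0.5 * p_gop + 0.5 * 0.85   # pull toward the Republicans
    elif voter.get("last_primary") == "D":
        p_gop = 0.5 * p_gop + 0.5 * 0.15   # pull toward the Democrats
    return p_gop

# Example: an independent with a Republican-primary history in a 60% GOP county.
print(lean_of_independent({"last_primary": "R"}, county_gop_share=0.60))  # ~0.725
```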

In states that don’t reveal early voters’ party registrations, the analysts look for trends, comparing turnout in red municipalities and counties versus blue ones. @DataRepublican’s model can drill down to each city block, enabling her to see how many people voted this year and compare it with previous elections. In Pennsylvania and Nevada, for example, she saw that turnout was sharply down in the blue neighborhoods of Pittsburgh, Philadelphia, and Las Vegas, especially among blacks and Latinos, and had surged in red rural communities.
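
A toy Python sketch of that geographic comparison appears below; the county names and vote totals are invented for illustration only:

```python
# Hypothetical sketch: compare early-vote turnout this cycle with the previous
# one, split by county lean. All figures are invented for illustration.

counties = [
    # (name, lean, early votes last cycle, early votes this cycle)
    ("Urban A", "blue", 410_000, 355_000),
    ("Urban B", "blue", 280_000, 248_000),
    ("Rural C", "red",   95_000, 118_000),
    ("Rural D", "red",   60_000,  74_000),
]

for lean in ("blue", "red"):
    prev = sum(v_prev for _, l, v_prev, _ in counties if l == lean)
    curr = sum(v_curr for _, l, _, v_curr in counties if l == lean)
    change = 100 * (curr - prev) / prev
    print(f"{lean:4s} counties: early turnout {change:+.1f}% vs. last cycle")
```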

That same pattern showed up in other states. Heiens was ridiculed on X for predicting that Kamala Harris would win Virginia by only 5 points, but her margin turned out to be a little under 6 points. On October 21, @DataRepublican predicted that Trump would carry Florida by more than 10 points—a margin that seemed ridiculously wide to Silver and most pollsters. In an exchange with a Republican enthusiast on X in early October, Silver had offered to bet $100,000 that Trump would not win Florida by more than 8 points. He’s lucky that he didn’t wind up making the bet: Trump won the state by 13 points.

Establishment pundits downplayed the significance of the red surge in early voting, pointing to the tight polls and arguing that it was misleading to compare this year’s trends with past ones because Republicans had previously been reluctant to vote early. Now that Trump was encouraging it, the argument went, the party was “cannibalizing” its own votes—racking up early ballots from dependable voters who in earlier years would have waited until Election Day.

This argument, though, didn’t jibe with the trends spotted by @DataRepublican and her collaborators. Yes, they could see that some dependable Republicans had switched to early voting, but plenty were still waiting for Election Day. Much of the early red surge came from either newly registered Republicans or from “low-propensity” voters—the ones whose history showed that they couldn’t be counted on to show up on Election Day. Meantime, the analysts’ models showed that Democrats’ fabled turnout machine was failing to bank enough early ballots from its own low-propensity voters to offset Republicans’ advantage among Election Day voters.

The early-voting analysts were spectacularly correct this election, but could their performance be due to luck—and to the fact that they were rooting for a Republican victory? Skeptics like Silver and Sean Trende, the senior voting analyst at RealClearPolitics, say that early-voting analysts tend to be partisans who let their biases creep into their projections, so they’re accurate only when their party has a good year. “The real proof,” Trende wrote on the eve of the election, “will come when Democrats start to predict good Republican years off of early voting numbers, and vice versa.”

This wasn’t such a year—not if you were following the crossfire on X between Democratic early-voting analysts and @DataRepublican and her right-leaning collaborators. The most prominent of those Democrats, Jon Ralston, is editor of the Nevada Independent news site; his fans in the media have dubbed him the “Oracle of Nevada.” His record of predicting winners in Nevada impressed even Silver and Trende, who singled him out as the only early-voting analyst who might be worth attention. Ralston held off calling Nevada until the day before the election, when he predicted that a flurry of late votes by Democrats and independents would give Harris a victory by 0.3 percent. (Trump won Nevada by 3 full points.) Other Democratic early-voting analysts called not just Nevada but most of the battleground states for Harris.

The Democrats’ forecasts were gleefully mocked—and compiled on a scorecard—by their Republican rivals on X. “Their math comprehension was so atrocious that we never took them seriously,” @DataRepublican told me. Heiens, scoffing that Ralston had never deserved his reputation, posted data showing that while Ralston had correctly called Nevada winners in previous elections, his predictions of the winning margins were even worse than those of pollsters.

“Ralston kept missing badly on the margins by overestimating Democratic performance,” Heiens said. “For too long, punditry has been dominated by hacks on the Left who want to generate enthusiasm among their voters.” Heiens, whose collaborators on X have jokingly started calling him the “Oracle of Virginia,” said his own models in previous blue-trending elections in Virginia had accurately predicted the losing margins for Republican legislative candidates—“including, unfortunately, the candidates I was working for.”

 @DataRepublican said that there was always a temptation for partisans to look for trends that favored their party, and that she had erred in the past by making too many judgments and assumptions instead of relying on computerized tools. “The future of early voting will be in tooling rather than manual analysis—building products that are capable of slicing data in many different ways,” she told me. “You can’t gaslight yourself into thinking your party is winning if your own tooling is telling you they’re losing.”

She also told me that she might soon quit her day job, shed her anonymity, and create her own forecasting website. When I asked what she hoped to accomplish, she quoted from an earlier post on X: “This is about me taking a dagger to the current state of punditry.” She’d posted that a week before the election to explain why she’d done so much unpaid work to make such risky predictions: “I have always loved numbers and it has been a personal affront to see this getting worse every election. I know I’m setting myself to fall hard if I am wrong. I am fine with that. But if I succeed, I want this type of accountability and analysis to become the new normal, and to create a new age of meritocracy.”

John Tierney is a contributing editor of City Journal and coauthor of The Power of Bad: How the Negativity Effect Rules Us and How We Can Rule It.

https://www.city-journal.org/article/the-new-election-gurus