Brexit: because statistics isn't really math

A last-minute change to my post, prompted by Friday's historic day that wiped $2 trillion off the world's equity markets. The cause? Virtually all the experts' predictions on Brexit were wrong: the pollsters got it wrong, the betting line was wrong, and the markets got it wrong. Thank the statisticians for another job well done! CNN reports, "Ahead of the 2012 U.S. elections, Nate Silver, from the website FiveThirtyEight, correctly predicted who would win all 50 states, even as pundits were saying the race was 'too close to call.' In 2008, he had also correctly projected all but one state. As this year's British election results started trickling in, Silver tweeted that the world 'may have a polling problem.' 'Polls were bad in U.S. midterms, Scottish referendum, Israeli election and now tonight in UK,' Silver said....In a commentary on FiveThirtyEight, Silver suggested that forecasters had been overconfident. 'Polls, in the UK and in other places around the world, appear to be getting worse as it becomes more challenging to contact a representative sample of voters. That means forecasters need to be accounting for a greater margin of error,' he said....Prediction models for the U.S. elections had also become more reliable, Parakilas said, something he didn't believe had happened yet in the UK...." And while they tell you NOW that the models in the UK are less accurate, the Nate Silver that CNN trumpeted as the expert is the same Nate Silver I mentioned here, who performed so badly.

The great thing about statistics is that you can explain why you were wrong after you learn that you were wrong. Here are some of the other explanations floating around as to why the predictions were so wrong:

  • there was a lot of rain in London, which could have depressed turnout among city dwellers, who tended to support Bremain.
  • Brexit voters were criticized as racists and not too bright, as all the "experts" came out for Bremain. As a result, Brexit voters were not honest when polled. I think this is a big factor, and a reason why applied statistics is more of an art than a science.
  • the inability to predict how many people would turn out to vote
  • Bremain is what the establishment wanted to win, so dissenting voices were minimized. This seems unlikely given how genuinely surprised the establishment was.
  • From Reuters, "Predicting the outcome of Thursday's referendum was harder than that of a national election because there was virtually no historical data to draw on, said David Rothschild, an economist at Microsoft Research. He said pollsters also did not pay enough attention to working class and less educated voters....Rothschild, who also is a fellow at Columbia University’s Applied Statistics Center, said he expected forecasting to improve with a transition from polls using small, random representative samples to large Internet-based ones with rich demographic data. "If I have one million respondents with a large amount [of] demographic data, I should be able to predict outcomes better, or I'm not a very good statistician," he said." OK, but right up to the end statisticians were confident of the result even though many polls were based on only a few thousand people (see the margin-of-error sketch after this list). How representative were the samples they claimed were representative, and why were so many experts wrong?
  • Statsblog.com gives its five reasons, including "Survey respondents not being a representative sample of potential voters (for whatever reason, Remain voters being more reachable or more likely to respond to the poll, compared to Leave voters)". Hard to argue with "unrepresentative samples" as an explanation AFTER you've been proven wrong, but aren't statisticians supposed to be guarding against exactly that?
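
The sample-size point from the Reuters item is worth making concrete. Below is a minimal sketch of the textbook margin-of-error formula for a simple random sample; the sample sizes are my own illustrative choices, not figures from any particular Brexit poll.

```python
# Minimal sketch: 95% margin of error for a proportion estimated from a
# simple random sample. The sample sizes below are illustrative assumptions.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) margin of error at ~95% confidence for n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 2000, 10000, 1000000):
    print(f"n = {n:>9,}: +/- {margin_of_error(n):.1%}")

# n =     1,000: +/- 3.1%
# n =     2,000: +/- 2.2%
# n =    10,000: +/- 1.0%
# n = 1,000,000: +/- 0.1%
```

Even a few thousand respondents buys only about a two-point sampling error at 95% confidence, roughly the size of the final Leave margin, and that figure assumes the sample really is representative; it says nothing about the non-response and "shy voter" problems listed above.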

Lots of excuses; of course, some or all of these arguments could be right, BUT:

  • From NYTimes, "Britain’s decision to leave the European Union on Thursday was a big surprise. As late as 6 p.m. Eastern in the United States, less than five hours before the results became clear, betting markets gave “Remain” an 88 percent chance to win the election, but it wound up losing by four percentage points....One could certainly argue that the polls were “wrong” in the sense that they tended to show a slight Remain advantage heading into the vote count. But it was clearly a distinct possibility that Brexit would win, based on the available survey data. So it’s hard to argue that this was a big polling failure, and it’s a bit strange that the financial markets appear to have been caught completely by surprise." The American Thinker responds to this stupidity, "Sure it wasn't. Once you are done rolling around on the floor in laughter at claims that this wasn't a massive polling failure, read on....Not a single one of the well known polling aggregators/predictors picked Brexit in their last-minute final projections...Thus, we had a systematic bias in the aggregated polling data that ranged from 4% to almost 11%. Individual polls leading up to the vote were publishing ridiculous results. In the week prior to the vote, 9 of the 13 polls predicted a victory for Remain ranging from 1% up to 10%. Just three polls had Leave in the lead, but just by 1% to 3% – i.e., still below the actual margin of victory – and one poll had a tie. Not a single individual poll got the result correct, or overpredicted a Leave win....The overall bias in favor of Remain was effectively uniform, which is statistically impossible if the bias was random. The bias was systematic." (A rough calculation after this list shows just how unlikely uniform misses are under purely random error.)
  • In region after region the Brexit numbers were consistently underestimated by several percentage points.
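
The "statistically impossible if the bias was random" claim in the American Thinker quote can be roughed out with a toy calculation. This is only a sketch under my own simplifying assumption that, if poll errors were purely random, each final-week poll would independently be as likely to err toward Leave as toward Remain (in reality polls share methods and house effects, so they are not truly independent).

```python
# Back-of-the-envelope check of the "systematic vs. random bias" argument.
# Assumption (mine, for illustration): random errors would favour either side
# with probability 0.5, independently across polls.
from math import comb

N_POLLS = 13        # final-week polls cited in the quote
P_REMAIN_ERR = 0.5  # chance a single poll errs toward Remain, if errors are random

# Probability that every one of the 13 polls errs on the Remain side.
p_all = P_REMAIN_ERR ** N_POLLS
print(f"P(all {N_POLLS} errors favour Remain | random) = {p_all:.5f}")  # ~0.00012

# Probability of at least k Remain-leaning errors out of N_POLLS (binomial tail).
def p_at_least(k, n=N_POLLS, p=P_REMAIN_ERR):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"P(12 or more of {N_POLLS}) = {p_at_least(12):.4f}")  # ~0.0017
```

Under that admittedly crude independence assumption, thirteen-for-thirteen misses on the same side would be roughly a one-in-8,000 event, which is the sense in which the quoted piece calls the bias systematic rather than random.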

The Telegraph has a lot more analysis, with charts and graphs, as to what went wrong: "Professor Curtice was cautious throughout the campaign, saying that “some of the polls are definitely wrong” in “a cloud of uncertainty”. There were also clear distinctions between phone and online polls - phone polls invariably scored higher results for Remain compared to online." [Comment: Aren't you glad you know this now, after the fact?] "Interestingly, the Leave vote remained constant across both phone and online - it was the “don't know” score that decreased for phone polls, and Remain seemed to be securing most of these. This was falsely encouraging for the final result. According to YouGov’s analysis, the reduction of don't knows for phone polls was because people were more likely to give an opinion when in conversation with someone, rather than admitting they didn’t know what they thought about such an important choice. Analysts dismissed the idea that different methods would reach different demographics..." So they got it wrong again. But at least their hindsight is 20/20.

Some lessons, according to the Washington Post:

  • "First, we did not see this coming. For some weeks now, Stephen Fisher and Rosalind Shorrocks have been tracking referendum forecasts. They consider a wide range of sources, from forecasting models based on polls, to citizen forecasts, to betting markets. None of these methods saw a Leave outcome as the most likely outcome."
  • "Second, this was not a systematic polling failure of the same magnitude as last year’s U.K. general election, where opinion polls badly underestimated the Conservatives’ chance of victory." So the defense is: the failure isn't as bad as when they REALLY messed up last year. That should inspire confidence.
  • "Third, we learned something about campaign dynamics in referendums — and we went wrong by believing too firmly in a claim about how voters decide. Part of the disparity between relatively close polls and relatively confident betting markets was due to the belief in status quo reversion — the idea that undecided voters will be more likely to choose the status quo option (in this case, Remain) than the alternative."
  • "Fourth, given the types of areas that voted to Leave, and given the available polling evidence, it seems likely that a majority of Britons have traded economic benefits for restrictions on people from the European Union coming to live and work in Britain. The areas which voted Leave were older, whiter, and less likely to have a university education."

Another black eye for statistics and statisticians, but you can't expect statistics to have the accuracy of mathematics; it isn't math, any more than mathematical economics is.

Here are some stories that caught my eye last week:

  • ZeroHedge reports "The percentage of new doctorate recipients without jobs or plans for future study climbed to 39% in 2014, up from 31% in 2009, according to a National Science Foundation survey. Those graduating with doctorates in the US climbed 28% in the decade ending in 2014 to an all-time high of 54,070, but the labor market - surprise surprise - has not been able to accommodate that growth. "The supply of PhD's has increased enormously and the demand in the labor market has increased but not nearly as fast. When you can import an international workforce or outsource research, you have a buyer's market," said Michael Teitelbaum, senior adviser to the Alfred P. Sloan Foundation."
  • Wow! RT notes "Around one in 10 of the students attending the largest four-year public university system in the US is homeless, while one in five cope with food insecurity, according to a new study by the California State University system."
  • I've got nothing against unions; my issue is with the terrible decisions/policies they (or anyone) support. Case in point: Chicago CBS Local reports "He crossed the line – the CPS teachers' one-day strike – out of his love for the classroom. Joseph Ocol stuck with his kids and brought home a chess championship. Tonight, he's expelled from the union and wonders if he'll even have a job, CBS 2's Brad Edwards reports. The union's decision came via certified mail, in a letter signed by Chicago Teachers Union President Karen Lewis....the CTU said in a statement: “Mr. Ocol has been informed of his member privileges and is talking to us through media, which is unfortunate. All members are well aware of what happens to strike breakers and are informed by their own peers of the process for both suspension and reinstatement. CTU is a democratically led member-organization.”"
  • CounterPunch explains how Common Core helps bust the unions.
  • HeatStreet looks into how colleges are letting students censor speech, "For many students and professors, one of the great appeals of college life is being exposed to new and different ways of thinking. But that age-old process is now under threat at schools around the country. Take the University of Northern Colorado. After two of the school’s professors asked their students to discuss controversial topics and consider opposing viewpoints, they received visits from the school’s Bias Response Team to discuss their teaching style. The professors’ students had reported them, claiming the curriculum constituted bias. These incidents, both in the 2015-2016 academic year, reflect a growing trend in higher education. College students increasingly demand to be shielded from “offensive,” “triggering” or “harmful” language and topics, relying on Bias Response Teams to intervene on their behalf. Such teams are popping up at a growing number of universities....To date, more than 100 U.S. public colleges and universities have established Bias Response Teams."
  • HeatStreet again with "Kayla-Simone McKelvey will serve 90 days in jail, five years of probation and 100 hours of community service for her role in a racially charged hoax threat issued to Kean University students. McKelvey, who is black and the former president of the New Jersey college’s Pan-African Student Union, used a fake Twitter account to send a message threatening to kill a group of black students at an on-campus rally in November. The Twitter account, @Keanuagainstblk, claimed that the anonymous user would “kill all male and female black students” at Kean and issued a bomb threat against the school. The account was quickly suspended from Twitter, but not before causing an uproar on social media. Supporters of #BlackLivesMatter across the country called on the university to take action to protect protesting students, and demanded that Kean President Dawood Farahi resign. They tried to use the threat to demonstrate that Farahi had not done enough to diffuse racial tension on campus....McKelvey told the court she was sorry she issued the threat, and that she still believes her actions helped expose racism on campus...But if McKelvey’s excuse sounds a bit strange, she’s not alone, even at Kean, in thinking that her clearly illegal actions “helped” fellow social justice warriors to bring Kean’s “systemic racism” to light. Some Kean students said that the threat’s author didn’t matter, and that the threat was still evidence of strong racial bias on campus."
  • LA Times reports that Pat Haden used an educational foundation to enrich himself and his family, "Under Haden's leadership as board chairman, however, the $25-million foundation became a lucrative source of income for him and two of his family members -- even as its scholarship spending plunged to a three-decade low and the size of its endowment stagnated, a Times investigation has found. Haden, his daughter and sister-in-law together collected about $2.4 million from the foundation for part-time roles involving as little as one hour of work per week, according to the foundation’s federal tax returns for 1999 to 2014, the most recent year available. Half of that, about $1.2 million, went to Haden. His annual board fees have been as high as $84,000; the foundation paid him $72,725 in 2014....Many foundations do not pay their board members, philanthropy experts say. The $1.5-billion California Community Foundation, for example, does not pay board members. Foundations that do compensate board members, those experts say, typically pay far less than the amounts received by Haden and his relatives. The $12.5-billion Ford Foundation paid its board chairman about $30,000 less than the Mayr foundation paid Haden in 2014. Mark Hager, an associate professor of philanthropic studies at Arizona State University, said in an email the Mayr payments to the board would be high “even for a foundation that was giving out more than $50 million in grants each year.” “I’ve never heard of fees that large,” said Adam Hirsch, a law professor at the University of San Diego who specializes in trusts."
