I've said it before: statistics isn't really math. It's an application of math, like engineering, mathematical economics, mathematical finance, and weather prediction, that uses math to claim respectability. Sure, theoretical statistics is math, akin to analysis, but actually applying statistics to numerous problems when there is no certainty that the underlying assumptions (such as the data being random) hold is just wrong. In a previous post, "Statistics Isn't Really Math", I looked at some of the problems with statistics. In particular, I cited a post at AMSTAT News (the membership magazine of the American Statistical Association) saying "Statistics, however, is not a subfield of mathematics. Like economics and physics, statistics uses mathematics in essential ways, “but has origins, subject matter, foundational questions, and standards that are distinct from those of mathematics” (Moore, 1988, p. 3). David Moore, statistics educator and former president of the American Statistical Association, gives the following four compelling reasons why statistics is a separate discipline from mathematics:
 Statistics does not originate within mathematics
 The aims and foundational controversies of statistics are unrelated to those of mathematics
 The standards of excellence in statistics differ from those of mathematics
 Statistics does not participate in the interrelationships among subfields that characterize contemporary mathematics
Statistics exists because of the need for other disciplines to examine and explain variation in their data."
That's a nice, clean, authoritative explanation from statisticians of why statistics isn't really math.
I followed that post up later with "Brexit: because statistics isn't really math", when virtually all the hacks, er, statisticians made a horrendous call on Brexit. The bad prediction wiped out trillions in market cap when the vote went against what almost every public poll predicted.
Now with the US election coming up, I figure I should highlight a couple of new pieces that caught my attention. The first is a link that was brought to my attention by a reader, courtesy of the NY Times: "When you hear the margin of error is plus or minus 3 percent, think 7 instead". Margin of error is a key term in polling statistics; except, according to this article, prepare to have your knowledge challenged: "In a new paper with Andrew Gelman and Houshmand Shirani-Mehr, we examined 4,221 late-campaign polls — every public poll we could find — for 608 state-level presidential, Senate and governor's races between 1998 and 2014. Comparing those polls' results with actual electoral results, we find the historical margin of error is plus or minus six to seven percentage points. (Yes, that's an error range of 12 to 14 points, not the typically reported 6 or 7.)". Yes, throw out everything you learned about confidence intervals. But wait, it gets worse. A link in that article takes you to a piece entitled "We Gave Four Good Pollsters the Same Raw Data. They Had Four Different Results", which is about just what the title indicates: four pollsters were given exactly the same set of polling data and produced four different predictions. This is NOT what you'd get in a math course. The reason? "Polling results rely as much on the judgments of pollsters as on the science of survey methodology. Two good pollsters, both looking at the same underlying data, could come up with two very different results. How so? Because pollsters make a series of decisions when designing their survey, from determining likely voters to adjusting their respondents to match the demographics of the electorate. These decisions are hard. They usually take place behind the scenes, and they can make a huge difference. ... Pollsters usually make statistical adjustments to make sure that their sample represents the population – in this case, voters in Florida. They usually do so by giving more weight to respondents from underrepresented groups." Got that? Pollsters tamper with, er, adjust the data as they feel like.
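To see how much room for judgment that weighting step leaves, here is a minimal sketch of demographic weighting (post-stratification) in Python. The groups, target shares, and votes below are invented purely for illustration; real pollsters weight on several variables at once.

```python
# A minimal sketch of demographic weighting (post-stratification).
# All numbers are made up for illustration, not from any real poll.

# Each respondent has a demographic group and a stated vote preference.
respondents = [
    {"group": "young", "vote": "A"},
    {"group": "young", "vote": "A"},
    {"group": "old",   "vote": "B"},
    {"group": "old",   "vote": "B"},
    {"group": "old",   "vote": "B"},
    {"group": "old",   "vote": "A"},
]

# Suppose the pollster believes the electorate is 40% young / 60% old,
# but the sample is 2/6 young and 4/6 old. Each respondent is weighted
# by (target share) / (sample share) for their group.
target = {"young": 0.40, "old": 0.60}
sample_share = {
    g: sum(r["group"] == g for r in respondents) / len(respondents)
    for g in target
}
weights = [target[r["group"]] / sample_share[r["group"]] for r in respondents]

# Weighted support for candidate A:
support_a = sum(w for r, w in zip(respondents, weights) if r["vote"] == "A")
support_a /= sum(weights)
print(f"Weighted support for A: {support_a:.1%}")
```

The unweighted sample splits 50/50, but the pollster's chosen 40/60 turnout target moves the headline number to 55%. A different, equally defensible target would give a different "result" from the same raw data, which is exactly the judgment call the quoted article describes.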
In fact, ZeroHedge has this post where polls contradict the margins of error of others, and another post looking at the methodology behind a recent Washington Post poll: "Of course, like many of the recent polls from the likes of Reuters, ABC and The Washington Post, something curious emerges when you look just beneath the surface of the headline 12-point lead." "METHODOLOGY – This ABC News poll was conducted by landline and cellular telephone Oct. 20-22, 2016, in English and Spanish, among a random national sample of 874 likely voters. Results have a margin of sampling error of 3.5 points, including the design effect. Partisan divisions are 36-27-31 percent, Democrats - Republicans - Independents." "As we've pointed out numerous times in the past, in response to Reuters' efforts to "tweak" their polls, per the Pew Research Center, at least since 1992, Democrats have never enjoyed a 9-point registration gap, despite the folks at ABC and The Washington Post somehow convincing themselves it was a reasonable margin."
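For reference, the "margin of sampling error" in that methodology note comes from the textbook formula for a simple random sample. A quick sketch with the poll's stated n = 874 of likely voters (the design-effect factor below is my own illustrative guess, not a figure from the poll's documentation):

```python
import math

# Textbook 95% margin of error for a simple random sample, applied to
# the ABC/WaPo poll's n = 874 likely voters. This assumes a truly
# random sample, which is exactly the assumption being questioned.
n = 874
p = 0.5           # worst case: maximizes p * (1 - p)
z = 1.96          # 95% confidence
moe = z * math.sqrt(p * (1 - p) / n)
print(f"Raw sampling margin of error: {moe:.1%}")  # about 3.3%

# The poll reports 3.5 points "including the design effect", an
# inflation factor that accounts for weighting. A design effect around
# 1.12 would bridge the gap (illustrative value, not from the poll):
deff = 1.12
print(f"With design effect: {moe * math.sqrt(deff):.1%}")
```

So the reported plus-or-minus 3.5 points is internally consistent with the sample size; the trouble, per the NY Times piece above, is that the historical error of such polls is roughly double that.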
Finally, there is this somewhat humorous post, "Here's The 30 Seconds After The Last Debate That CNN Would Rather You Didn't See", where CNN polling has a 52% to 39% win for Clinton in the third debate: "So when the CNN focus group was asked "did this debate help anyone make up their mind or possibly change their vote", the results did not turn out how Goebbels, er, they expected...
 5 Clinton
 10 Trump
 0 3rd Party
 6 Undecided
A much, much different result than their poll. Polling data is not the same as coin-flip data, and the situation is even worse with respect to the integrity of the data: Stanford University called attention to election fraud here. By now it should be even more obvious that statistics isn't really math: the margin of error doesn't mean what it should, and qualified statisticians with exactly the same data come up with different answers.
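The coin-flip contrast is worth making concrete: for genuinely random sampling, the textbook margin of error really does what it promises, which is precisely what the election-poll data above shows polls failing to do. A quick simulation in plain Python (the sample size and trial count are arbitrary choices for illustration):

```python
import random

# Simulate repeated random samples from a true coin-flip process and
# check how often the sample proportion lands within the textbook
# 95% margin of error. With genuinely random data it should be ~95%.
random.seed(0)
n, trials, true_p = 1000, 2000, 0.5
moe = 1.96 * (true_p * (1 - true_p) / n) ** 0.5  # about 3.1 points

inside = 0
for _ in range(trials):
    heads = sum(random.random() < true_p for _ in range(n))
    if abs(heads / n - true_p) <= moe:
        inside += 1

coverage = inside / trials
print(f"95% interval covered the truth in {coverage:.1%} of samples")
```

With polls, by contrast, the historical error runs roughly double the nominal figure, because respondents are not a random draw from the electorate: nonresponse, likely-voter screens, and weighting choices all break the coin-flip assumption.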
Here are some events that caught my eye lately.
 Poor Nigel Short can't really catch a break. After getting into trouble with the PC police for his comments on women, which got twisted and blown way out of proportion, he had a six-game match with Hou Yifan, the highest rated woman chess player in the world. The match was actually less close than the score would indicate, with Short winning the match after 5 games in which he was never really in trouble, before losing badly in the final game. Was it a gift? After all, he'd won the match, the last game wouldn't be rated, and it would be a good gesture. Whatever the reason for the one game in which Short played badly, he got punished yet again. Chess.com has a report which states, "Well, Short had secured match victory after the fifth game, and later that day, he discovered that according to official regulations the last game should not be rated. Paragraph 6.5 of the FIDE Rating Regulations says: "Where a match is over a specific number of games, those played after one player has won shall not be rated." Short had an email discussion with tournament director Loek van Wely late Friday night. Van Wely wasn't immediately convinced. In fact, two years ago, when Anish Giri had won his match before the last game with Alexey Shirov, that sixth game was rated." So Short showed clear dominance throughout the first 5 games, never being in any danger, knew the last game wasn't going to be rated before it was played, and "finished" the match with a bad loss, only to find the game WAS rated, in violation of FIDE rules. Now he's not happy. And check out the footnote-like reference to Short winning the match at Chessbase. Had he lost, there would have been a BIG story on woman beats man in grudge match.
 American Thinker with a piece on precious snowflakes scared of Halloween: "College offers round-the-clock counseling for students 'troubled' by Halloween costumes"
 EAGNews on the high school principal who told a student to remove his headphones in school, and "When the student refused, Tossman attempted to remove the headphones, which allegedly sent Penzo into a rage. “ … (T)he 18-year-old student cold-cocked the principal,” according to the news site. “Penzo continued to pounce on Tossman, socking him several times in the face, causing swelling and lacerations around both the principal’s eyes.” A prepared statement released by the school contends “The NYPD immediately responded” and took Penzo into custody."
 FiveThirtyEight reports "A new study shows that first-grade teachers consistently rate girls’ math ability below boys’ — even when they have the same achievement level and learning style. The study out today in the journal AERA Open from researchers at New York University and the University of Illinois at Urbana-Champaign seems to represent a setback for gender equity in math. A widely reported 2008 study found that girls score as well as boys do on standardized state math tests. But the latest study suggests that early in their math education, many girls run into a teacher who perceives them as being worse at the subject than they are — which could discourage some of them from heading down a path that could lead to a career in math, science or engineering.". This is surprising to me because I have the impression that most math teachers at that level are women. Unfortunately I don't have the data to back that up. Is anyone aware of the data for this?
 A powerful image of education turns up in a story on Sott.net. Take a look at the school in Afghanistan halfway down the page. I've got to believe that some teachers would find that useful in their classes.
 Geekwire with a piece "Meet the minds behind Axiomatic: An art project based in theoretical mathematics"
 Carlsen versus Nakamura in a blitz match today. Chess.com tells you how to watch it online.