Whether it’s 55/45 or 65/35, we’re still basically talking about the same thing. This race is neck and neck, and whoever gets the turnout edge will win. We’re talking about fractions of a percent in play, which is why these odds are basically a coin toss.
Edit: it looks like 538’s model is new, and Silver doesn’t like it or the guy behind it.
Social media isn’t a search engine. If an article is referring to someone by name in the title, they almost certainly have a Wikipedia page the questioner could read rather than requesting random strangers on a message board provide answers for them (in the form of multiple answers of varying bias and accuracy).
Wanting to learn isn’t the problem, it’s not spending the tiniest bit of personal effort before requesting service from other people.
Yeah. I think we take our easy navigation for granted sometimes. Like… I can get most information pretty quickly and not have a lot of trouble discerning what I need to do to get that information.
But not everyone is as “natural” at surfing. Maybe they have trouble putting things in perspective, they don’t know how to use a tool like Wikipedia, or maybe they just don’t like researching.
I’m so glad we have people that are great at keeping up with everything. But we have to remember that presenting and teaching information accurately and helpfully is a skill that we need desperately.
He’s a degen gambler who admits in his book he was gambling up to $10k a day while running 538… It never made him go “huh maybe I fucked my employees because I’m a degen gambler.”
Nate is not with 538 anymore. Disney didn’t renew his contract. However, he got to keep the model that he developed and publishes it for his newsletter subscribers. 538 had to rebuild their model from scratch this year with G. Elliott Morris.
Now Nate hosts the podcast Risky Business with Maria Konnikova, the psychologist who became a professional poker player while researching a book. It’s pretty good.
In statistical modeling you don’t really have right or wrong. You have a level of confidence in a model, a level of confidence in your data, and a statistical probability that an event will occur.
So if my model says RFK has a 98% probability of winning, then it is no more right or wrong than Silver’s model?
If so, then probability would be useless. But it isn’t useless. Probability is useful because it can make predictions that can be tested against reality.
In 2016, Silver’s model predicted that Clinton would win, which was wrong. He knew his model was wrong, because he adjusted it after 2016. Why change something that is working properly?
But for the person above to say Silver got something wrong because a lower probability event happened is a little silly. It’d be like flipping a coin heads side up twice in a row and saying you’ve disproved statistics because heads twice in a row should only happen 1/4 times.
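The coin-flip claim above is easy to check numerically. A minimal simulation (the numbers and function name are just for illustration): flipping two fair coins and counting how often both land heads should converge on the theoretical 1/4.

```python
import random

def two_heads_rate(trials=100_000, seed=0):
    """Estimate the probability of getting heads twice in a row
    by simulating many pairs of fair coin flips."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if rng.random() < 0.5 and rng.random() < 0.5  # both flips heads
    )
    return hits / trials

rate = two_heads_rate()
# With 100k trials the estimate lands very close to 1/4 = 0.25.
```

Seeing two heads in a row doesn’t disprove the 1/4 figure, and one upset election doesn’t disprove a probabilistic forecast.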
Silver made a prediction. That’s the deliverable. The prediction was wrong.
Nobody is saying that statistical theory was disproved. But it’s impossible to tell whether Silver applied theory correctly, and it doesn’t even matter. When a Boeing airplane loses a door, that doesn’t disprove physics but it does mean that Boeing got something wrong.
Comparing it to Boeing shows you still misunderstand probability. Say his model predicts 4 separate elections where each underdog candidate has a 1 in 4 chance of winning. If exactly 1 of those underdog candidates wins, then the model is likely working. But when that candidate wins, everyone will say “but he said it was only a 1 in 4 chance!”. It’s as dumb as people being surprised by rain when the forecast says 25% chance of rain. As long as you only get rain 1/4 of the time with that prediction, the model is working. Presidential elections are tricky because there are so few of them, so forecasters test their models against past data to verify they are working. But it’s just probability; it’s not saying this WILL happen, it’s saying these are the odds at this snapshot in time.
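What the comment above describes is calibration: a forecaster is “working” if, among all events it called 25% likely, roughly 25% actually happened. A small sketch with simulated forecasts (hypothetical data, not Silver’s actual model):

```python
import random

def calibration(forecasts_and_outcomes):
    """Group (predicted_probability, happened) pairs by forecast value
    and return the observed frequency of the event in each group."""
    buckets = {}
    for p, happened in forecasts_and_outcomes:
        buckets.setdefault(p, []).append(happened)
    return {p: sum(v) / len(v) for p, v in buckets.items()}

# Simulate a well-calibrated forecaster: events it calls 25% likely
# really do happen about 25% of the time.
rng = random.Random(1)
data = [(0.25, rng.random() < 0.25) for _ in range(10_000)]
obs = calibration(data)[0.25]
# obs sits near 0.25, even though each individual "upset" outcome
# looks, in isolation, like the forecaster "got it wrong".
```

The catch the comment names is real: with only a handful of presidential elections, the 25% bucket has too few entries to judge calibration directly, which is why models are validated against larger sets of past races and polls.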
Polling guru Nate Silver and his election prediction model gave Donald Trump a 63.8% chance of winning the electoral college in an update to his latest election forecast on Sunday, after a NYT-Siena College poll found Donald Trump leading Vice President Kamala Harris by 1 percentage point.
He’s just a guy analyzing the polls. The source is Fox News. He mentions in the article that tomorrow’s debate could make that poll not matter.
Should you trust Nate or polls? They’re fun but… Who is answering these polls? Who wants to answer them before even October?
So yeah, take it seriously that a poll found a lot of support for Trump exists. But it’s just a moment in time for whoever they polled. Tomorrow’s response will be a much better indication of any momentum.
It just seems strange because I don’t think that many people are on the fence. Perhaps I’m crazy, but I feel most people know exactly who they’re voting for already. Makes me wonder how representative the cross-section used as the sample set was. If it accurately represents the US, including undecided voters, then… 😮
but I feel most people know exactly who they’re voting for already
The cross-section of people you know is more politically off the fence than the nation as a whole. Those that aren’t online at all are also more undecided and less likely to interact with you.
There’s a Trump undercount in polling: Trump voters don’t trust “MSM” and therefore don’t answer calls from pollsters, or are embarrassed to admit they will vote for him.
I don’t know many people (boomers and younger) who answer the phone for numbers they don’t recognize. I would imagine that the people who do answer strange numbers tend to be out of touch. Does that bias the polls toward fools, or the lucky few who aren’t getting spammed?
The issue isn’t really people on the fence for Trump or Harris but mainly with generating turnout. After Biden’s poor debate performance, people didn’t change their mind and decide to vote for Trump, they became apathetic and maybe wouldn’t show up to vote.
Harris doesn’t need to persuade people to abandon Trump, she needs to get people excited to show up to vote.
The key to doing statistics well is to make sure you aren't changing the results with any bias. This means enough samples, a good selection of samples, and weighting the outcomes correctly. Even honest pre-election polling is hard to get right, and because of that it's easy to make things lean toward certain results if you want those results, or are getting paid to produce them.
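As a toy illustration of the weighting point (all numbers here are made up for the sketch, not real polling data): if one group responds to pollsters twice as often, the raw sample over-represents that group, and reweighting each group back to its known population share corrects the estimate.

```python
import random

rng = random.Random(42)

# Hypothetical population: 50% group A (70% support candidate X),
# 50% group B (40% support). True overall support = 0.55.
def poll(a_response_rate=0.8, b_response_rate=0.4, n=50_000):
    """Contact n people; only some respond, at group-specific rates."""
    sample = []
    for _ in range(n):
        group = "A" if rng.random() < 0.5 else "B"
        rate = a_response_rate if group == "A" else b_response_rate
        if rng.random() < rate:  # nonresponse: most of B never answers
            supports = rng.random() < (0.7 if group == "A" else 0.4)
            sample.append((group, supports))
    return sample

sample = poll()
raw = sum(s for _, s in sample) / len(sample)  # biased: ~0.60, too high

# Reweight each group to its known 50% population share.
by_group = {"A": [], "B": []}
for g, s in sample:
    by_group[g].append(s)
weighted = 0.5 * (sum(by_group["A"]) / len(by_group["A"])) \
         + 0.5 * (sum(by_group["B"]) / len(by_group["B"]))
# weighted lands near the true 0.55, where the raw number overshoots.
```

Real pollsters weight on many dimensions at once (age, education, region, past vote), and each weighting choice is a judgment call, which is exactly where the honest-but-hard and the dishonest cases can look alike.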
There's only one poll that matters, and that poll should include as large of a sample as possible, and be counted correctly. Even though some will try to prevent that from happening.
The problem was that Biden was actually trying to say something complicated and he got tripped up. Trump has always spoken at a kindergarten level because he knows he has nothing to say.