Script / Documentation
Ever lost a bet? From the lottery or sporting events to casinos or friendly wagers, you may have risked and lost some money because you hoped to win big.

But let me ask you this: How big would the payout have to be and how good would the odds need to be to gamble with your life or the lives of your loved ones?
In this lesson from Just Facts Academy about Margins of Error, we’ll show you how people do that without even realizing it. And more importantly, we’ll give you the tools you need to keep you from falling into this trap.
Ready? C’mon, what have you got to lose?
People often use data from studies, tests, and surveys to make life-or-death decisions, such as what medicines they should take, what kinds of foods they should eat, and what activities they should embrace or avoid.
The problem is that such data isn’t always as concrete as the media and certain scholars make it out to be.
Look at it this way. There are four layers to this “margin of error” cake. Let’s start with the simplest one, like this headline from the Los Angeles Times, which declares, “California sea levels to rise 5-plus feet this century, study says.”[1]

That sounds pretty scary, but the study has margins of error, and it actually predicts a sea-level rise of 17 to 66 inches.[2] In the body of the article, the reporter walks back the headline a little, but he fails to provide even a hint that the “5-plus-feet” is the upper bound of an estimate that extends all the way down to a quarter of this.[3]
Studies often have margins of error, or bounds of uncertainty, so the moment you hear someone summarize a study with a single figure, dig deeper. This is the same principle taught in Just Facts Academy’s lesson on Primary Sources: Don’t rely on secondary sources because they often reflect someone’s interpretation of the facts—instead of the actual facts.
Also, don’t assume that the authors of the primary sources will report the vital margins of error near the top of their studies. In the famed Bangladesh face mask study, for example, the authors lay down 4,000 words before they disclose a range of uncertainty that undercuts their primary finding.[4] [5] [6]
Here are a few more tips to help you critically examine margins of error.
Surveys often present their results like this:

11.5% ± 0.3
It’s quite simple. The first number is the nominal or best estimate, technically called the “point estimate.” The second number is the margin of error.
In the case of this survey,[7] it means that the best estimate of the U.S. poverty rate is 11.5%, but the actual figure may be as low as 11.2% or as high as 11.8%.
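For readers who want to see the arithmetic spelled out, here is a minimal Python sketch of that convention; the variable names and layout are ours, not the survey’s:

```python
# A minimal sketch: turning "point estimate ± margin of error" into a range.
# The figures mirror the poverty-rate example above; the names are illustrative.
point_estimate = 11.5    # best estimate, in percent
margin_of_error = 0.3    # reported margin of error, in percentage points

lower_bound = point_estimate - margin_of_error
upper_bound = point_estimate + margin_of_error

print(f"Best estimate: {point_estimate}% (range: {lower_bound}% to {upper_bound}%)")
# Best estimate: 11.5% (range: 11.2% to 11.8%)
```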
Scholarly publications often use a less intuitive convention and present their results like this:

Now, don’t let this barrage of digits intimidate you. They’re actually easy to understand once you crack the code.
The first number is the best estimate. In the case of this study,[8] it means that bisexual men are roughly 4.7 times “more likely to report severe psychological distress” than heterosexual men.
The last two numbers are the outer bounds of the study’s results after the margins of error are included. They mean that bisexual men are about 1.8 to 12.5 times more likely to report distress than heterosexual men.[9]
That’s a really broad range, especially when compared to the single figure of 4.7. Do you see why margins of error are so essential?
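To get a feel for just how broad that range is, here is a quick sketch using the figures quoted above; the comparison is our own illustration, not something the study reports:

```python
# Sketch: comparing a headline point estimate to its reported 95% confidence interval.
# Figures are the ones quoted in the text; this is illustrative arithmetic only.
point_estimate = 4.7               # headline "times more likely" figure
ci_lower, ci_upper = 1.77, 12.52   # reported 95% confidence interval

print(f"Headline figure: {point_estimate}x")
print(f"95% CI spans {ci_lower}x to {ci_upper}x, "
      f"so the upper bound is about {ci_upper / ci_lower:.0f} times the lower bound")
```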
Now, here’s something a lot of people don’t know because journalists rarely explain it or don’t understand it: Reported margins of error and ranges of uncertainty typically account for just one type of error known as sampling error.[10] [11] [12] [13] [14] [15] This is based purely on the size of the sample used for the study or survey. Generally speaking, the larger the sample, the smaller the margin of sampling error.[16]
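To illustrate that relationship, here is a small sketch that assumes a simple random sample and uses the textbook approximation for the margin of sampling error of a proportion (roughly 1.96 × √(p(1 − p)/n) at 95% confidence); the proportion and sample sizes are made up for illustration:

```python
import math

# Sketch: margin of sampling error for a proportion from a simple random sample,
# using the standard approximation MOE ≈ z * sqrt(p * (1 - p) / n) at 95% confidence.
# The proportion and sample sizes below are illustrative, not from any cited study.
def margin_of_sampling_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

p = 0.115  # an assumed proportion, roughly the poverty-rate figure above
for n in (100, 1_000, 10_000, 100_000):
    moe = margin_of_sampling_error(p, n)
    print(f"sample size {n:>7,}: margin of sampling error ≈ ±{moe * 100:.2f} percentage points")
```

Notice how the margin shrinks as the sample grows, but never disappears.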
It’s super important to be aware of this, because there are often other layers of uncertainty that aren’t reflected in sampling errors.[17] [18] [19] [20] Figures like 1.77 to 12.52 sound very specific and solid, but that can be an illusion. If you don’t understand this, you can be easily misled to believe that the results of a study are ironclad when they are not.
This brings us to the “95% CI.” What does that mean?

It stands for “95% confidence interval,”[21] and contrary to what your statistics teacher may have told you,[22] it generally means that there’s a 95% chance the upper and lower bounds of the study contain the real figure.[23] That means there’s a 5% chance they don’t.
How’s that for gambling? Would you step outside your home today if you knew there was a 1 in 100 chance you wouldn’t make it back alive? Well, even the outer bounds of most study results are less certain than that.
You see, time, money, and circumstances often limit the sizes of studies, tests, and surveys.[24] So even if their methodologies are sound, reality may lie outside the bounds of the results due to mere chance.[25]
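Here is a small simulation sketch of what that 95% figure means in the long run: if you drew many random samples and built a 95% confidence interval from each, roughly 95% of those intervals would contain the real figure, and about 5% would miss it entirely. All of the numbers are made up for illustration:

```python
import math
import random

# Simulation sketch: long-run coverage of 95% confidence intervals for a proportion.
# Every number here is invented for illustration; this is not data from any study.
random.seed(0)
true_rate = 0.115      # the "real" figure the surveys are trying to estimate
sample_size = 1_000
trials = 10_000
z = 1.96               # multiplier for a 95% confidence interval

misses = 0
for _ in range(trials):
    hits = sum(random.random() < true_rate for _ in range(sample_size))
    p_hat = hits / sample_size
    moe = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    if not (p_hat - moe <= true_rate <= p_hat + moe):
        misses += 1

print(f"Intervals that missed the real figure: {misses / trials:.1%}")  # roughly 5%
```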
On top of this, some studies measure multiple types of outcomes while failing to account for the fact that each attempt to measure a separate outcome increases the likelihood of getting a seemingly solid result due to pure chance.[26] Look at it this way: if you roll a pair of dice 12 times, you’re roughly ten times more likely to roll a 2 at least once than if you roll them just once.
Even worse, there are scholars who roll those dice behind the scenes by calculating different outcomes until they find one that provides a result they want. And that’s the only one they’ll tell you about.[27] [28]
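To see how measuring many outcomes inflates the odds of a fluke, here is a back-of-the-envelope sketch; it assumes the conventional 5% false-positive rate per outcome and fully independent outcomes, which real studies rarely have:

```python
# Sketch: chance of at least one false positive when testing multiple independent
# outcomes, each with a conventional 5% chance of a fluke "significant" result.
# Purely illustrative; real studies' outcomes are rarely fully independent.
alpha = 0.05
for outcomes in (1, 5, 12, 20):
    p_at_least_one_fluke = 1 - (1 - alpha) ** outcomes
    print(f"{outcomes:>2} outcomes measured: "
          f"{p_at_least_one_fluke:.0%} chance of at least one fluke result")
```

With 12 outcomes, the chance of at least one fluke “significant” result is close to a coin flip.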
Now, let’s take a step back and look at the layers of the cake:

- First, you have the point estimate.
- Then, you have the outer bounds, which commonly account for the margin of sampling error but no other sources of uncertainty.
- Then, you have the confidence interval percentage, or the probability that the outer bounds are correct.
We’ll get to the base layer in a moment, but now is a good time to talk about a concept called “statistical significance,” because we’ve cut through enough cake to understand it.
Study results are typically labeled “statistically significant” if the entire range of results, after accounting for the margin of sampling error at 95% confidence, is either all positive or all negative.[29] [30] [31] [32]

For example, if a medical study finds a treatment is 10% to 30% effective with 95% confidence, this is considered to be a statistically significant outcome. That’s a shorthand way of saying the result probably isn’t due to sampling error.[33]
And if a study finds that a treatment is –10% to 30% effective with 95% confidence, the result is considered to be “statistically insignificant” because the range crosses the line of zero effect.[34] [35] [36] [37] This could mean that the treatment has a positive effect, or no effect, or a negative effect.[38] [39]
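In practice, that label boils down to checking whether the range stays entirely on one side of zero. Here is a minimal sketch using the two hypothetical treatment examples above; they are illustrations from this lesson, not real study results:

```python
# Sketch: labeling a result "statistically significant" when its 95% confidence
# interval lies entirely above or entirely below zero. The intervals are the
# hypothetical examples from the text, not real study results.
def is_statistically_significant(ci_lower, ci_upper):
    return (ci_lower > 0 and ci_upper > 0) or (ci_lower < 0 and ci_upper < 0)

examples = {
    "10% to 30% effective":  (0.10, 0.30),
    "-10% to 30% effective": (-0.10, 0.30),
}
for label, (lo, hi) in examples.items():
    verdict = "significant" if is_statistically_significant(lo, hi) else "insignificant"
    print(f"{label}: statistically {verdict}")
```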
One way to sort this out is to look at the size of the study sample. If it’s relatively large, and the results are statistically insignificant, that’s a pretty good indication the effect is trivial.[40] [41] [42] [43]
Hundreds of scholars have called for ending the convention of labeling results as “statistically significant” or “insignificant.” This is because it can lead people to jump to false conclusions.[44] Nonetheless, it’s a common practice,[45] [46] [47] [48] so here are some tips to avoid such risky leaps:

- One, don’t mistake statistical significance for real-world importance. A study’s results can be statistically significant but also tiny or irrelevant.[49] [50]
- Two, don’t assume that a statistically insignificant result means there’s no difference or no effect.[51] Sometimes studies are underpowered, which means their samples are too small to detect statistically significant results.[52] In other words, there’s a major distinction between saying that a study “found no statistically significant effect” and saying “there’s no effect.”[53]
- Three, and most importantly, don’t fall into the trap of believing that a study is reliable just because the results are statistically significant.[54] That’s the final layer to the cake, and it’s where the riskiest gambling occurs.

Here’s what I mean.
The study on sea level rise we discussed—well, it’s based on a computer model,[55] a type of study that is notoriously unreliable.[56] [57] [58] [59] [60] [61]
And the study about psychological distress and sexuality—it’s an observational study,[62] which can rarely determine cause and effect, even though scholars falsely imply or explicitly claim that they do.[63] [64] [65] [66] [67]
Then there are all kinds of survey-related errors exposed in Just Facts’ lesson on Deconstructing Polls & Surveys.
Bottom line: the “margins of error” reported by journalists and scholars rarely account for the many other sources of error.
Gone are the days when you can blindly trust a study just because it is publicized by your favorite news source, appears in a peer-reviewed journal, was written by a PhD, or is endorsed by a government agency or professional association.
Incompetence and dishonesty are simply far too rampant to outsource major life decisions without critical analysis.
So don’t gamble your life on “experts” who offer solid bets that “you can’t lose.” Instead, keep it locked to Just Facts Academy, so you can learn how to research like a genius.