# How to Avoid Regression to the Mean

Published September 14th, 2023; revised October 5, 2023.

Regression to the mean is a concept that has featured in statistical literature for over a century. The term comes up in research papers, academic discussions, and casual conversations about performance, investments, or health outcomes. But what does it really mean? Let’s discuss this in detail.

**Example of Regression to the Mean**

Imagine a basketball player, let’s call him Alex, who usually makes 50% of his free throws. One night, Alex has an amazing game and makes 90% of his free throws. People are impressed and start thinking Alex has improved drastically.

However, Alex made 60% of his free throws in the next game. While this is still above his average, it is much closer to his usual 50% than the 90% he achieved the previous night. Outside observers may conclude he is slacking, while Alex himself might attribute the dip to external factors.

The initial 90% shooting performance was an outlier, an extreme event. The subsequent performance, which was closer to his average, is an example of regression to the mean.

It is a common mistake to think, “Oh, he did poorly because he was overconfident after the last game,” or “Maybe he practised less because he thought he got better.” Both explanations invent a cause where none is needed. In reality, the 60% might simply reflect the natural variability in his performance and the statistical tendency for an extreme result to be followed by a measurement closer to the mean.

Similarly, if Alex had an off night and made only 10% of his free throws, he would be expected to perform closer to his 50% average in the subsequent game. This, too, would be regression to the mean.

**What is Regression to the Mean**

Regression to the mean is a statistical phenomenon that refers to the tendency of an extremely high or low variable to move closer to the average on its next measurement or over time. This concept is often misunderstood to mean that performance will always deteriorate after a particularly good measure, or improve after a particularly bad one, but what it actually describes is the natural variability in any data series.

Imagine you are testing the performance of students in a class with a series of quizzes. If a typically average-performing student scores exceptionally high on one quiz, it is possible (or even likely) that their score on the next quiz might be closer to their average performance, which could be lower than that exceptional score. Similarly, if that student scores exceptionally low on one quiz, their next score might be closer to their average, which would be higher than that low score.

Regression to the mean does not mean that improvement or deterioration is inevitable. Instead, it is about the statistical tendency for extreme values in random distributions to be followed by less extreme values.
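The quiz example above can be simulated directly. In the sketch below (a hypothetical simulation, with the ability and noise levels chosen arbitrarily for illustration), each student has a fixed underlying ability, and each quiz score adds independent random noise. Students selected for an extreme first score average much closer to the overall mean on the second quiz, even though nobody’s ability changed:

```python
import random

random.seed(42)

# Hypothetical model: each student has a stable "true ability" (mean 70),
# and each quiz score is that ability plus independent random noise.
N = 10_000
true_ability = [random.gauss(70, 5) for _ in range(N)]
quiz1 = [a + random.gauss(0, 10) for a in true_ability]
quiz2 = [a + random.gauss(0, 10) for a in true_ability]

# Select students with an extreme first-quiz score.
top = [i for i in range(N) if quiz1[i] > 85]

mean_q1 = sum(quiz1[i] for i in top) / len(top)
mean_q2 = sum(quiz2[i] for i in top) / len(top)

print(f"Top scorers, quiz 1 average: {mean_q1:.1f}")
print(f"Same students, quiz 2 average: {mean_q2:.1f}")
# The quiz-2 average falls back toward the overall mean of 70,
# even though nobody's underlying ability changed.
```

No deterioration is modelled here; the second-quiz drop for the selected group is purely the statistical effect described above.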

**Why is Regression to the Mean a Problem**

Regression to the mean is not inherently a problem in and of itself. Instead, the issue arises when people misunderstand or fail to account for it in various contexts, which can lead to incorrect conclusions and misguided actions. Here are some ways in which regression to the mean can be problematic:

**Misinterpretation of Treatment Efficacy**

Suppose a group of patients with particularly severe disease symptoms are treated with a new drug, and their symptoms subsequently become less severe. It might be tempting to attribute the improvement to the drug. However, if we had not accounted for regression to the mean, the improvement might have occurred anyway, even without the drug.

**Performance Evaluations**

In sports, a player might have an excellent season followed by an average or below-average season. Regression to the mean might explain the difference in performance, but coaches, fans, or management might incorrectly attribute the change to other factors, like motivation or effort.

**Educational Assessment**

If students are given extra tuition because they scored poorly on a test, and then perform closer to their average on the next test, the extra tuition could be mistaken for the sole cause of the improvement when regression to the mean also played a role.

**Policy and Decision-Making**

If an intervention is implemented following a particularly extreme event (e.g., a year with a very high crime rate), and the next year sees a return to average levels, the intervention might be credited with the change when, in fact, the extreme year was an outlier and a return to the mean was likely anyway. It is an important reminder to base conclusions on evidence rather than assumptions.

**Investments**

Investors might assume that an asset performing exceptionally well will continue to do so or that a poorly performing one will continue to underperform without considering the possibility of regression to the mean.

**Research Bias**

In scientific research, failing to account for regression to the mean can lead to incorrect conclusions. This can be especially problematic in fields where it is crucial to understand the true effects of interventions, such as in medicine.

**Regression to the Mean Examples**

Here are some illustrative examples:

**Sports Performance**

An athlete may have an extraordinarily high-scoring season or game, followed by a season or game that is closer to their usual performance. This subsequent decrease in performance could simply be a return to their average rather than a decline in skill or effort.

**Medical Treatments**

Imagine patients with chronic pain. On particularly bad days, they might try a new pain relief treatment. If the next few days are better, they might attribute this improvement to the treatment, when, in fact, the earlier measurement was an outlier, and some improvement might have occurred anyway due to regression to the mean.

**Academic Testing**

A student scores exceptionally high on one standardised test. On the next test, they score closer to their average. This does not necessarily mean their abilities have declined; it could simply be regression to the mean.

**Traffic Enforcement**

Suppose a traffic department sets up speed checks on the roads where the most accidents occurred in the previous year. If accidents decrease the following year, officials may credit the speed checks. However, because those roads were selected precisely for their extreme accident counts, some of the decrease would likely have occurred anyway through regression to the mean.

**Economic Indicators**

After a year of exceptionally high economic growth, a country might experience a year of more modest growth. While many factors can contribute to changes in economic growth, a simple return to more typical levels due to regression to the mean could be one of them.

**Weather Patterns**

After a year of extreme temperatures (either very high or very low), the following year might see temperatures closer to long-term averages. While various factors (like El Niño or La Niña) can influence year-to-year variations, regression to the mean might also play a role in such changes.

**Business Sales**

A company launches a new product and experiences exceptionally high sales in the first month due to the novelty. In the subsequent months, if sales stabilise at a lower level, it could be a regression to a more sustainable mean, rather than a sign of the product’s declining popularity.

**How to Identify Regression to the Mean**

Here are some steps and considerations to help identify if regression to the mean is occurring:

**Consider the Selection Criteria**

If subjects (e.g., patients, students, athletes) are selected based on extreme values (either high or low), there is a potential for regression to the mean in subsequent measurements.

**Plot the Data**

Graphing the data can often provide a visual representation of regression to the mean. For example, you could use scatter plots to visualise paired data points from two measurements. If you notice that most extreme values on the first measurement tend to be closer to the mean on the second measurement, regression to the mean might be at play.
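As an illustration of such a plot, the sketch below (using synthetic, made-up data) draws a scatter of paired measurements with matplotlib. A fitted line with a slope below 1, flatter than the dashed slope-1 line, is the visual signature of regression to the mean:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# Hypothetical paired data: the second measurement is correlated with the
# first, but with a slope below 1 (the pattern regression to the mean produces).
first = rng.normal(50, 10, 300)
second = 0.5 * first + 25 + rng.normal(0, 8, 300)

plt.scatter(first, second, s=10, alpha=0.5)
plt.axline((50, 50), slope=1, linestyle="--", label="no regression (slope 1)")
m, c = np.polyfit(first, second, 1)
plt.axline((0, c), slope=m, label=f"fitted slope = {m:.2f}")
plt.xlabel("First measurement")
plt.ylabel("Second measurement")
plt.legend()
plt.savefig("regression_to_mean.png")
```

Points far from the mean on the first measurement tend to sit closer to the mean on the second, which is why the fitted line is flatter than the dashed one.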

**Use a Control Group**

This is especially relevant in experimental studies. If an intervention group (selected based on extreme initial values) shows changes, compare it with a control group that did not receive the intervention. If both groups show similar trends, regression to the mean might be responsible.

**Examine Repeated Measures**

If possible, obtain multiple measurements before and after an intervention or event. If values consistently gravitate towards the mean over time, regression to the mean is likely occurring.

**Consider Other Explanations**

Always consider alternative explanations for observed changes. Are there confounding variables? Was there an external event that could have caused the change?

**Statistical Analysis**

Some statistical techniques can help quantify and control for regression to the mean. For example, ANCOVA (Analysis of Covariance) can be used to adjust for initial measurements when assessing changes.
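As a minimal sketch of this idea, the snippet below performs the ANCOVA-style adjustment by hand with ordinary least squares on simulated data (all numbers, including the assumed treatment effect of −3, are invented for illustration), rather than calling a dedicated ANCOVA routine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study: 200 subjects measured before and after; half receive
# a treatment whose true effect on the follow-up score is -3.
n = 200
pre = rng.normal(50, 10, n)
group = np.repeat([0, 1], n // 2)            # 0 = control, 1 = treated
post = 0.5 * pre + 25 - 3 * group + rng.normal(0, 5, n)

# ANCOVA-style adjustment: regress the follow-up score on the baseline
# score and the group indicator via ordinary least squares.
X = np.column_stack([np.ones(n), pre, group])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
intercept, b_pre, b_group = coef

print(f"Adjusted treatment effect: {b_group:.2f}")  # close to the true -3
print(f"Baseline slope: {b_pre:.2f}")               # below 1: regression to the mean
```

Because the baseline score enters the model as a covariate, the group coefficient estimates the treatment effect with the regression-to-the-mean component of the change removed.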

**Understand the Underlying Distribution**

If you are familiar with the natural variability and distribution of the data you’re working with, you will have a better foundation to identify regression to the mean.

**Beware of Subjective Interpretations**

People tend to notice and remember extreme events more vividly. This can create a perception bias, where subsequent, less extreme events are seen as evidence of change. Being aware of this bias can help in assessing whether perceived changes are genuinely significant or just a reflection of regression to the mean, and in discerning true patterns from randomness.

**Consider the Timing**

If a measurement is taken immediately after an extreme event or value, the likelihood of encountering regression to the mean on the subsequent measurement is higher.

**Research and Domain Knowledge**

Sometimes, understanding the specifics of a domain can help in recognising regression to the mean. For example, in sports analytics, knowing the typical performance ranges of athletes can guide interpretations.

**How to Avoid Regression to the Mean**

Avoiding the pitfalls of regression to the mean involves understanding its nature and applying rigorous design and analysis strategies. While you cannot prevent the phenomenon itself (it is a natural statistical occurrence), you can take measures to ensure it does not lead to erroneous conclusions:

**Randomized Controlled Trials (RCTs)**

This is one of the most powerful tools to control regression to the mean. By randomly assigning subjects to treatment and control groups, you can ensure that the extreme scores are equally likely in both groups, which helps control for the effect of regression to the mean.
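A simulated trial, with entirely made-up numbers, shows why randomisation and a control arm matter: subjects are enrolled only if a noisy screening score is extreme, the “treatment” does nothing at all, and yet both arms improve at follow-up by a similar amount.

```python
import random

random.seed(1)

# Hypothetical trial: patients are enrolled only if a noisy screening
# score is extreme (> 80), then randomised into two arms. The "treatment"
# here does nothing, so any change is pure statistics.
patients = [random.gauss(60, 8) for _ in range(20_000)]   # true severity
screen = [s + random.gauss(0, 10) for s in patients]      # noisy screening

enrolled = [i for i in range(len(patients)) if screen[i] > 80]
random.shuffle(enrolled)
half = len(enrolled) // 2
treated, control = enrolled[:half], enrolled[half:]

def follow_up(ids):
    """Average follow-up score: true severity plus fresh measurement noise."""
    return sum(patients[i] + random.gauss(0, 10) for i in ids) / len(ids)

screen_mean = sum(screen[i] for i in enrolled) / len(enrolled)
treated_mean = follow_up(treated)
control_mean = follow_up(control)

print(f"Screening average (all enrolled): {screen_mean:.1f}")
print(f"Treated arm at follow-up:  {treated_mean:.1f}")
print(f"Control arm at follow-up:  {control_mean:.1f}")
# Both arms "improve" by a similar amount: that is regression to the
# mean, and the control arm is what makes it visible.
```

Without the control arm, the drop in the treated arm would look like a treatment effect; with it, the comparison between arms (roughly zero here) is the honest estimate.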

**Use Baseline Measurements**

Collect multiple measurements before any intervention. This provides a more stable baseline against which post-intervention measurements can be compared.

**Account for it in Analysis**

Use statistical methods like Analysis of Covariance (ANCOVA), which can adjust for baseline differences when comparing groups. By accounting for initial scores as covariates, the analysis can help control the impact of regression to the mean.

**Repeated Measurements**

Rather than relying on a single pre- and post-measurement, take several measurements over time. This approach can help tease apart true effects from natural variability in scores.

**Avoid Selecting Based on Extremes**

If possible, avoid selecting samples based on extremely high or low scores. If you must select based on extremes (e.g., in a study of very ill patients), be aware of the potential for regression to the mean in your analysis and interpretation.

**Use a Control Group**

Even if you are not doing an RCT, using a control group that does not receive the intervention can help you gauge the natural fluctuations in scores and better assess the true impact of the intervention.

**Consider External Factors**

Always consider other variables or events that might explain changes in scores or measurements. It might not always be regression to the mean or the intervention; other external factors could influence results.

**Consistent Assessment Methods**

Ensure that the measurement method remains consistent across all time points. Changing methods can introduce variability that can be mistaken for real change.


**How to Calculate the Percent of Regression to the Mean**

Calculating the percent of regression to the mean isn’t a standard procedure because regression to the mean is a phenomenon based on the idea that extreme initial measurements tend to be followed by less extreme ones. However, there is a concept in regression analysis that can give insight into the strength and direction of the relationship between two variables: the regression coefficient.

When you have two variables, say X and Y, and you fit a simple linear regression to predict Y from X, the regression coefficient (often represented as b1) tells you the change in Y for a one-unit change in X. If X and Y are both measures of the same underlying phenomenon at two different times (like test scores of students at the start and end of the year), then b1 can give insight into regression to the mean.

If b1 = 1, there is a perfect linear relationship and no regression to the mean. If b1 < 1, it indicates regression to the mean, with values closer to 0 indicating stronger regression to the mean.

Here is a rough approach to estimate the “percent of regression to the mean” using the regression coefficient:

- Fit a simple linear regression model where Y is the second measurement, and X is the initial measurement.
- Obtain b1, the regression coefficient for X.
- The “percent of regression to the mean” might be interpreted as (1−b1)×100%.
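These steps can be sketched in a few lines. In the hypothetical example below, both test scores equal a stable ability plus independent noise of equal variance, so the true slope is 0.5 and the estimated “percent of regression to the mean” should come out near 50%:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical paired test scores: start-of-year (x) and end-of-year (y),
# each equal to a stable ability plus independent noise of equal variance.
ability = rng.normal(100, 15, 500)
x = ability + rng.normal(0, 15, 500)
y = ability + rng.normal(0, 15, 500)

# Step 1-2: fit y = b0 + b1 * x and extract the regression coefficient b1.
b1, b0 = np.polyfit(x, y, 1)

# Step 3: interpret (1 - b1) * 100% as the "percent of regression to the mean".
percent_regression = (1 - b1) * 100
print(f"b1 = {b1:.2f}")
print(f"Percent of regression to the mean: {percent_regression:.0f}%")
```

With equal ability and noise variances, the theoretical slope is var(ability) / (var(ability) + var(noise)) = 0.5, so the printed percentage hovers around 50%.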

**Frequently Asked Questions**

**What is regression to the mean?**

Regression to the mean refers to the statistical phenomenon where extreme initial measurements tend to be closer to the average upon subsequent measurements. In essence, exceptionally high or low observations are likely to be followed by less extreme values, gravitating towards the mean of the data set.

**Can regression to the mean occur with dependent data?**

Yes, regression to the mean can occur with dependent data. If an extreme value is influenced by chance, then even when subsequent measurements depend on previous ones, those later observations might still gravitate towards the mean, demonstrating regression to the mean in a dependent data context.

**What is statistical regression to the mean?**

Statistical regression to the mean occurs when extreme initial observations are likely to be followed by less extreme ones. It is often observed in repeated data collection. Chance and variability play roles, leading outliers to move closer to the group average on subsequent measurements, illustrating the unpredictability of extreme scores.

**How can you control for regression to the mean?**

One can use control variables or covariates to control for regression to the mean in statistical analyses. By including these variables in regression models, researchers account for potential confounders. By comparing models with and without the control variable, the true effect of the predictor of interest can be better isolated.