
What Are Confounding Variables?

Published on August 26th, 2021, Revised on November 5, 2021

A cause-and-effect relationship in academic research involves an assumed cause and an assumed effect, but it also includes a third unmeasured variable – known as a confounding variable.

A confounding variable can potentially affect both the suspected cause and the suspected effect. To ensure your results are valid and reliable, you must consider confounding variables in testing a cause and effect relationship.

Understanding Confounding Variables

Confounding variables share a close relationship with the dependent and independent variables of any research study. They are also called confounding factors or confounders. A variable must meet the following criteria to be considered a confounding variable:

It must correlate with the independent variable, although the relationship may or may not be causal.

It should be causally related to the dependent variable.

Confounding Variable Example

You are testing the relationship between athletes’ on-field performances and muscular endurance. You collect data, and the findings reveal that an improvement in muscular endurance results in improved athlete performance.

The confounding variable in the above example is the lack of motivation: Lack of motivation causes athletes to neglect their muscular endurance training and underperform on the field.

  • Confounding variable: lack of motivation
  • Effect on the independent variable: weakened muscular endurance
  • Effect on the dependent variable: poor on-field performance

The Significance of Confounding Variables

You must take confounding variables into consideration to ensure the internal validity of your research. If you ignore them, your conclusions about the relationship between your research variables may not hold up.

For example, you could discover the absence of a cause-and-effect relationship between the variables you tested because the measured effect is caused by the confounding variable rather than the independent variable.

You find that countries with higher minimum wages have more workers employed. Does this mean that higher minimum wages lead to a higher employment rate?

This may or may not be the case. Perhaps countries with a higher minimum wage also have a better job market. It is vitally important that you assess previous employment trends in detail to ensure your analysis of the minimum wage effect on the employment rate is valid. Otherwise, you could end up observing a cause-and-effect relationship even when there is none.

Even when a causal relationship does exist, confounding variables can cause you to underestimate or overestimate the independent variable’s effect on the dependent variable.

You find that depression increases the risk of poor pregnancy outcomes in terms of birth weight. However, if you neglect other unhealthy factors such as smoking, a weakened immune system, and maternal malnutrition, you will overestimate the impact of depression on the baby’s weight.


How to Control the Effect of Confounding Variables?

There are several ways you can keep the impact of confounding variables in check. The most common methods employed for this purpose are as follows:

  • Restriction
  • Randomisation
  • Statistical control
  • Matching

Each method has its advantages and disadvantages, and you can use them for a study on any subject, including chemicals, plants, humans, and animals. Details for each method are provided below:

Restriction

In the restriction method, you restrict the study to subjects in one category of the confounding variable. This method helps ensure that all participants of the study have the same values of the potential confounder.

The restriction method ensures that the values of the potential confounding factors cannot correlate with your independent variable, so they cannot distort the cause-and-effect relationship.

Restriction Examples

  • If age is the confounder, then you could limit your study to subjects in a specific age group, e.g., participants between 20-40 years old.
  • If smoking is a confounding factor, you can restrict your research scope to either only non-smokers or smokers.
  • If sex is the confounder, restrict your study to either men or women.
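The examples above can be sketched in a few lines of Python. The subject records below are purely hypothetical; restriction is just filtering the sample to one stratum of each known confounder.

```python
# Hypothetical subject records; the fields and values are illustrative only.
subjects = [
    {"id": 1, "age": 25, "smoker": False},
    {"id": 2, "age": 52, "smoker": True},
    {"id": 3, "age": 33, "smoker": False},
    {"id": 4, "age": 38, "smoker": True},
    {"id": 5, "age": 61, "smoker": False},
]

# Restriction: keep only non-smokers aged 20-40, so smoking status and
# (broadly) age cannot vary alongside the independent variable.
restricted = [s for s in subjects if 20 <= s["age"] <= 40 and not s["smoker"]]
print([s["id"] for s in restricted])  # → [1, 3]
```

Note how quickly the sample shrinks: only two of five subjects survive two restrictions, which is exactly the sample-size drawback discussed below.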

Disadvantages of Restriction

While the restriction method is the easiest to employ, it does have some drawbacks, as listed below:

  • You can only use it when the potential subjects’ status is known with respect to the known confounders.
If the restriction is too broad, you will end up with residual confounding. For example, in a study testing the correlation between physical activity and heart disease, you could restrict the subjects to the age group between 20-40. However, with such a broad age range, your results on the risk of heart disease in that particular age group could still be confounded by age differences within it.
  • Because the restricted variable remains constant, it is not possible to measure its true impact.
  • Your sample size could be inappropriate because restriction limits the number of potential subjects you can involve in your study.
  • The findings of the research will not apply to subjects who were excluded due to restrictions.
  • Not ideal when there is a need to control multiple confounders.

Randomisation

Randomisation is the best way to control the effects of a confounding variable. This method involves randomly assigning the values of your independent variable across subjects, typically in a large randomised trial. For example, if some participants are in a control group while others are in a treatment group, you assign participants to each group at random.

With this technique applied, each subject has an equal chance of being assigned to any of the treatment options. With a sufficiently large sample size, the confounders will be distributed equally among the treatment groups. What’s more, even the unknown confounders will be equally distributed among the comparison groups.

The potential confounders will not correlate with your independent variable because they will be distributed equally among the groups. Hence, your study will not be confounded by them.

Randomisation is often the ideal way to control or reduce the impact of confounding variables because it considers all the potential confounders, which cannot be achieved through other methods.

Randomisation Example

You choose a large group of subjects to participate in your research on the effects of smoking on birth weight. You randomly assign some of them to the smoking group and the rest to the non-smoking group.

With randomisation, you can be confident that both the control group and the treatment groups will have the same average values on all the characteristics, including those not measured.
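Random assignment itself is mechanically simple. A minimal sketch in Python (the participant IDs are hypothetical): shuffle the pool, then split it in half, so every subject has an equal chance of landing in either group.

```python
import random

random.seed(0)

# Hypothetical participant IDs; shuffling then splitting gives each
# subject an equal chance of being assigned to either group.
participants = list(range(1, 101))
random.shuffle(participants)
treatment = participants[:50]
control = participants[50:]

print(len(treatment), len(control))  # → 50 50
```

With a large enough pool, both measured and unmeasured characteristics end up balanced between the two halves on average, which is exactly why randomisation handles unknown confounders.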

Benefits of Randomisation

  • It enables you to account for all the potential confounders including those you didn’t measure.
  • It is considered the best method to account for confounding variables in a study.
  • When implemented correctly, there is no need to “adjust” for confounding.

Drawbacks of Randomisation

  • It is highly complex and time-consuming.
  • It must be carried out before the data collection process begins.
  • Only subjects in the treatment group can be treated – not those in the control group.

Matching

This method involves selecting a comparison group that matches the treatment group. While the independent variable will take different values, each subject in the control group must have a counterpart in the treatment group with the same values of the potential confounding variables.

You can eliminate the possibility of a variation in results between the comparison and treatment groups by removing the differences in the confounding variables. With the confounders taken care of, you can be certain that any variation in the dependent variable results from a variation in the independent variable.

Matching Examples

  • In a case-control study of lung cancer where age is a potential confounding factor, match each case with one or more control subjects of a similar age. The comparison groups will then have the same age distribution, and there will be no confounding by age.
  • In a study that aims to measure the impact of smoking, each smoker has a non-smoking counterpart of similar age in the reference group. Confounding by age is eliminated because the two groups compared have the same age distribution.
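A simplified matching routine can be sketched in Python. This is a hypothetical greedy nearest-age match on toy records, not a full matching algorithm: each case is paired with the closest unused control by age.

```python
# Hypothetical case-control matching on age: for each case, pick the
# closest-aged control that has not been used yet (greedy matching).
cases = [{"id": "c1", "age": 34}, {"id": "c2", "age": 58}]
controls = [{"id": "k1", "age": 33}, {"id": "k2", "age": 60}, {"id": "k3", "age": 45}]

matches = {}
used = set()
for case in cases:
    best = min(
        (c for c in controls if c["id"] not in used),
        key=lambda c: abs(c["age"] - case["age"]),
    )
    used.add(best["id"])
    matches[case["id"]] = best["id"]

print(matches)  # → {'c1': 'k1', 'c2': 'k2'}
```

With several confounders, the match key would compare all of them at once, which is where matching becomes hard in practice: a close counterpart on every variable may simply not exist.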

Advantages of Matching

  • You can include more subjects than with the restriction method.

Disadvantages of Matching

  • It can be challenging to implement because each subject needs a counterpart that matches on every potential confounder.
  • Variables you cannot find a match for may themselves be confounders, and they will be left unaccounted for.

Statistical Control

If you aim to run regression analysis on the data already collected, consider adding the potential confounding variables as control variables to reduce their effects on the findings.

The regression results will reveal the impact of the possible confounders on the dependent variable, so you can separate out the effect of the independent variable.

Statistical Control Example

After gathering data about the effects of smoking on the baby weight at birth from a variety of subjects, you introduce several control variables in your regression, such as age, education, exercise levels, and quality of diet. Smoking will be included in the model as the independent variable. This will enable you to separate the effect of smoking from the influence of these four control variables on the baby’s weight in your analysis.
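The smoking example can be illustrated with a small simulated regression in Python using NumPy. Everything here is an assumption for illustration: the data are generated so that diet (the confounder) both lowers the chance of smoking and raises birth weight, and the true direct effect of smoking is fixed at -0.3 kg. A naive regression of weight on smoking alone mixes in the diet effect; adding diet as a control variable recovers the direct effect.

```python
import random
import numpy as np

random.seed(1)

# Hypothetical simulation (all numbers illustrative, not real findings):
# diet quality is a confounder that both reduces smoking probability and
# raises birth weight. True direct smoking effect is set to -0.3 kg.
n = 500
diet = [random.gauss(0, 1) for _ in range(n)]
smoking = [1.0 if random.random() < min(max(0.5 - 0.2 * d, 0.05), 0.95) else 0.0
           for d in diet]
weight = [3.4 - 0.3 * s + 0.2 * d + random.gauss(0, 0.1)
          for s, d in zip(smoking, diet)]

# Naive regression: weight on smoking alone (confounded estimate).
naive_slope = np.polyfit(smoking, weight, 1)[0]

# Statistical control: add diet as a control variable in the regression.
X = np.column_stack([np.ones(n), smoking, diet])
beta, *_ = np.linalg.lstsq(X, np.array(weight), rcond=None)

print(f"naive estimate: {naive_slope:.2f}, adjusted estimate: {beta[1]:.2f}")
```

The naive estimate overstates the harm of smoking because smokers in this simulated data also tend to have poorer diets; the adjusted coefficient lands near the true -0.3.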

Benefits of Statistical Control

  • It can be applied after the data have already been collected.
  • It allows you to control for several confounders in a single model.

Drawbacks of Statistical Control

  • Confounders that you did not observe and measure directly cannot be accounted for.

Frequently Asked Questions

There are several methods of controlling the effects of confounding variables on your research, including randomisation, restriction, statistical control, and matching.

In randomisation, you account for all possible confounders by randomly assigning the treatment to a large number of participants.

You introduce the possible confounding variables as control variables in your regression when implementing the statistical control method.

The matching technique requires you to have a matching member in the comparison group for each treatment group member. The matching subjects have the same values on any possible confounders and only vary in the independent variable.

In restriction, you restrict your sample by considering only specific subjects with the same values of possible confounders.

If you don’t consider confounding variables when testing a cause-and-effect relationship, your findings will not have internal validity. Without accounting for confounders, you may underestimate or overestimate the cause-and-effect relationship between your dependent and independent variables, or you could discover a causal relationship where none exists. For this reason, you need to make sure to account for confounders.

Confounding variables share a close relationship with both the dependent and independent variables. In research, a dependent variable is the assumed effect, while the independent variable is the assumed cause. There is a third variable, called the confounding variable, that can influence both the dependent and independent variables. Neglecting confounding variables can undermine the validity of the research.


About Owen Ingram

Ingram is a dissertation specialist. He has a master's degree in data sciences. His research work aims to compare the various types of research methods used among academicians and researchers.
