
What is Publication Bias – Causes & Examples

Published on August 22nd, 2023; revised on October 5, 2023

In research and scholarly communication, publication bias stands as an impediment to a holistic understanding of phenomena. It is an essential topic for anyone involved in research, whether as a creator, consumer, or evaluator of studies.

Example: Antidepressant Efficacy

Imagine that 100 different studies are conducted on the efficacy of a new antidepressant drug.

  • 70 of these studies find no significant difference between the drug and a placebo.
  • 30 of these studies find a significant benefit of the drug over the placebo.

Now, because journals often prioritise publishing positive results:

  • 25 of the 30 positive studies get published.
  • Only 5 of the 70 non-significant studies get published.

If a researcher or clinician then conducts a literature review or meta-analysis based only on the published studies, they would find:

  • 25 positive studies
  • 5 non-significant studies

Based on these numbers, it would appear that the overwhelming majority of studies (83%) support the efficacy of the drug. This would be a distorted view, as the true picture from all conducted studies is that only 30% found a positive effect.
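The arithmetic above can be sketched in a few lines of Python, using the hypothetical study counts from the example:

```python
# Hypothetical study counts from the antidepressant example above.
positive_conducted = 30   # studies finding a significant benefit
null_conducted = 70       # studies finding no significant difference
positive_published = 25   # positive studies that reach print
null_published = 5        # null studies that reach print

true_rate = positive_conducted / (positive_conducted + null_conducted)
apparent_rate = positive_published / (positive_published + null_published)

print(f"True positive rate:     {true_rate:.0%}")      # 30%
print(f"Apparent positive rate: {apparent_rate:.0%}")  # 83%
```

The literature's apparent 83% success rate is an artefact of which studies were published, not of the drug.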

What if the actor-observer bias also plays a role? This bias is the tendency to attribute one's own behaviour to situational factors while attributing others' behaviour to inherent personality traits. In a research setting, researchers may credit positive findings to the soundness of their own methods while blaming negative findings on external factors, such as the drug itself.

What is Publication Bias – Definition

Publication bias refers to the phenomenon where the outcome of a study influences the likelihood of its being published. In other words, studies with positive, statistically significant, or novel results are more likely to be published than studies with negative or null results. Unlike many cognitive biases, it can operate quite openly in the research community rather than remaining hidden or unconscious.

Confirmation bias can further exacerbate the issue. Researchers might lean towards confirming what they already believe and prefer studies that validate their pre-existing beliefs.

The ceiling effect is another concern. If most results in a field cluster near the upper limit of what is possible, minor variations can be blown out of proportion, leading to skewed perceptions of efficacy.

The affinity bias, which refers to our unconscious tendency to connect with others with similar backgrounds or interests, can also shape the types of research deemed valuable. For instance, reviewers might favour studies that align more closely with their areas of interest or expertise.

This can lead to a skewed representation of the evidence in the literature, as the published studies are not a representative sample of all the research conducted on a particular topic.

Causes of Publication Bias

Publication bias can distort the perceived efficacy or strength of evidence on a particular topic in the literature. Understanding the origins and impacts of these biases makes it easier to evaluate sources critically.

The causes of publication bias include:

Journal Editors and Reviewers’ Preferences

Journals tend to favour publishing studies with positive results, often deemed more newsworthy or impactful. Editors might believe that positive findings advance the field more than negative or null findings.

Researchers’ Bias

Researchers might not submit studies with negative or null results for publication because they feel that these studies are less valuable or they anticipate rejection by journals. This phenomenon is sometimes called the “file drawer problem,” where negative results are left in the researcher’s file drawer rather than published.

Funding and Sponsorship

Research sponsored by entities with a vested interest (e.g., pharmaceutical companies) might only choose to publish studies that show their product in a favourable light.

Academic Pressure

The “publish or perish” culture in academia can pressure researchers to produce “significant” findings. Positive results are often perceived as more publishable, which can influence the types of studies researchers conduct and which results they decide to write up and submit.

Statistical Significance

Studies that don’t reach the standard p<0.05 significance level might not be considered important or meaningful, even if they are practically significant or inform the scientific community about what doesn’t work.

Outcome Reporting Bias

Even if a study is published, the researchers might choose to only report on the significant outcomes while omitting or downplaying non-significant outcomes.


Citation Bias

Studies with positive results are often cited more than those with negative results, leading to a skewed representation in the literature.

Inadequate Peer Review

Some journals might have a lax peer-review process for “exciting” positive results, allowing them to be published more easily.

Misunderstanding of the Value of Negative Results

Both researchers and publishers might not understand the value of negative results. Negative results can provide valuable information about what does not work, helping to refine hypotheses and guide future research.

Replication Studies

There is often less interest in replication studies, especially if they fail to replicate original findings. This can mean that false positives remain unchallenged in the literature.

Language Bias

Studies published in English are more widely accessible and may have higher chances of getting cited. This can lead to a bias where valuable studies in other languages remain underrepresented.

Why is Publication Bias a Problem?

Publication bias is a significant concern in scientific research for several reasons:

Distorted Scientific Record

When only studies with positive, statistically significant results are published, the scientific literature does not reflect the true state of knowledge on a topic. This can lead researchers to believe that more evidence supports a particular hypothesis than actually exists.

Wasted Resources

If researchers are unaware of studies with negative or null results, they might duplicate efforts, wasting time and resources that could have been better spent on other inquiries.

Compromised Meta-Analyses

Meta-analyses pool data from multiple studies to provide an overall effect size or association. If the studies included are biased towards positive findings, then the meta-analysis results will also be skewed. This can lead to false conclusions and mistaken inferences about the strength or nature of an effect.
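A toy fixed-effect meta-analysis makes the distortion concrete. The effect sizes and standard errors below are invented for illustration; the pooled estimate is simply an inverse-variance-weighted mean of the study effects, and dropping the null studies shifts it upward:

```python
def pooled(effects, ses):
    """Fixed-effect pooled estimate: inverse-variance-weighted mean."""
    weights = [1 / s**2 for s in ses]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical studies: two positive, three null/negative, equal precision.
all_effects = [0.40, 0.35, 0.05, 0.00, -0.05]
all_ses     = [0.10, 0.10, 0.10, 0.10, 0.10]

published = pooled(all_effects[:2], all_ses[:2])  # only positives survive
complete  = pooled(all_effects, all_ses)          # every study counted

print(f"Published-only estimate: {published:.3f}")  # 0.375
print(f"All-studies estimate:    {complete:.3f}")   # 0.150
```

With equal weights the pooled estimate is just the mean, so selective publication more than doubles the apparent effect in this sketch.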

Encourages P-Hacking and Questionable Research Practices

Knowing that positive results are more likely to be published, researchers might engage in “p-hacking” or data dredging, where they search through their data for any statistically significant pattern, even if it wasn’t the original hypothesis being tested. Such practices can lead to spurious findings.
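The mechanics can be illustrated with a short simulation (not from the article): run many z-tests on pure noise, where no true effect exists, and roughly 5% will clear the p < 0.05 bar by chance. A researcher who tests enough outcomes is therefore almost guaranteed to find a "significant" one somewhere:

```python
import random
from statistics import NormalDist, mean

random.seed(42)
nd = NormalDist()

def null_p_value(n=30):
    """Two-sided p-value of a z-test on a sample drawn from a null
    (mean-zero, unit-variance) distribution, i.e. no real effect."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = mean(sample) * n**0.5          # known sigma = 1, so SE = 1/sqrt(n)
    return 2 * (1 - nd.cdf(abs(z)))

trials = 1000
false_positives = sum(null_p_value() < 0.05 for _ in range(trials))
print(f"'Significant' results on pure noise: {false_positives}/{trials}")
# Expect roughly 5% of null tests to come out significant by chance alone.
```

If only those chance "hits" are written up and published, the literature fills with spurious effects.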

Therapeutic Misjudgment

In medical research, publication bias can be particularly dangerous. If studies showing a drug or treatment is ineffective (or harmful) are not published, clinicians and patients may be misled about its true efficacy and safety.

Erosion of Public Trust

When the public or policymakers realise that scientific literature is biased or incomplete, it can erode trust in science and scientists. This can have implications for funding, policy decisions, and public opinion on contentious issues.

Stifling of Novel Theories and Findings

Unconventional or unexpected findings may be less likely to be published because they go against the prevailing wisdom or do not show a strong effect, stifling novel lines of inquiry. The Pygmalion effect, a phenomenon where higher expectations lead to increased performance, could also play a role.

If researchers expect positive outcomes and believe in their efficacy, they might unconsciously produce more successful outcomes. In contrast, less conventional approaches might not receive the same expectation or effort, leading them to be sidelined.

Economic Implications

For fields like economics and finance, where research can directly inform policy and business decisions, publication bias can lead to misguided policies or strategies based on an incomplete understanding of the true effects.

Publication Bias Examples

Here are some examples to illustrate publication bias:

  • A drug company funds 10 studies to test the efficacy of a new drug. Nine studies show that the drug is no better than a placebo, but one study shows a significant positive effect. Only the study showing the positive effect gets published, giving the impression that the drug is effective when the overall evidence suggests otherwise.
  • Researchers conduct multiple studies on a new psychotherapy technique. Studies that find the technique to be effective are quickly published in prominent journals, while studies finding no effect remain unpublished. As a result, the published literature suggests the technique is more effective than it might be in reality.
  • Numerous studies have been done on the health benefits of a particular food. Studies showing benefits (like reduced risk of a certain disease) are more likely to be published than those showing no effect, leading the public to believe there’s stronger evidence for the health benefits of that food than there might be.
  • Systematic reviews and meta-analyses aim to summarise all the available evidence on a particular topic. However, if the underlying studies are affected by publication bias, these reviews can also be skewed. Tools like the “funnel plot” are sometimes used to detect potential publication bias in systematic reviews.
  • A researcher conducts multiple experiments on a hypothesis. Only the experiments with significant results are published, while the rest are left in the “file drawer.” This can lead to an over-representation of significant findings in the literature.
  • Studies that show significant economic effects of a policy are more likely to be published than those that don’t find significant effects. As a result, policymakers might get a skewed view of the actual impact of the policy.
  • Studies that replicate previous work and confirm its findings are less likely to be published than those with novel results. This can be a problem because replication is an important part of the scientific method.
  • In some fields, studies that contradict the prevailing theory or wisdom may face more scrutiny or hurdles to publication, leading to the under-representation of such contradictory evidence.


How to Avoid Publication Bias

This bias can skew the overall scientific understanding of a particular field. Here are several steps and strategies to avoid or reduce publication bias:

Register Clinical Trials

By registering clinical trials in advance (for example, on public registries such as ClinicalTrials.gov), researchers commit to making the outcomes public regardless of the results.

Support Open Access

Open-access journals and platforms can help ensure that research, regardless of its findings, is accessible to anyone.

Promote Negative Result Publications

Journals and researchers should give equal importance to negative results. Some journals are dedicated specifically to publishing negative results.

Systematic Reviews and Meta-analyses

These methods can be used to aggregate data from several studies, even unpublished ones, to get a more accurate overall picture. When conducting meta-analyses, it’s essential to use techniques like the funnel plot to detect potential publication bias.

Preprint Archives

Encourage using preprint archives, where researchers can upload drafts of their studies before they are peer-reviewed. This ensures the wider availability of results, irrespective of their final publication status.

Data Sharing

Encouraging data sharing allows other researchers to conduct secondary analyses or combine datasets to detect patterns that might have been missed in individual studies.

Transparent Reporting

Using guidelines like the CONSORT statement for randomised trials or the STROBE statement for observational studies can help ensure that research is transparently reported.

Peer Review Process

Journals should ensure that their peer review process is blind and objective, focusing on the quality of the research methodology rather than the results themselves.

Educate Stakeholders

Training researchers, peer reviewers, and editors about the risks and implications of publication bias can create awareness and drive change.

Statistical Tools

Tools like funnel plots, trim-and-fill methods, and Egger’s test can be used to detect and adjust for publication bias in systematic reviews.
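As a rough sketch of how Egger's test works (the effect sizes and standard errors below are made up for illustration): each study's standardized effect (effect / SE) is regressed on its precision (1 / SE), and an intercept far from zero signals funnel-plot asymmetry. A real analysis would also test the intercept against its standard error; this sketch only computes the point estimate.

```python
from statistics import mean

# Hypothetical studies: smaller studies (larger SE) report larger effects,
# the classic signature of possible publication bias.
effects = [0.50, 0.45, 0.40, 0.42, 0.30, 0.28]  # hypothetical effect sizes
ses     = [0.30, 0.25, 0.20, 0.15, 0.10, 0.05]  # hypothetical standard errors

y = [e / s for e, s in zip(effects, ses)]  # standardized effects
x = [1 / s for s in ses]                   # precisions

# Ordinary least-squares slope and intercept.
xm, ym = mean(x), mean(y)
slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) \
        / sum((xi - xm) ** 2 for xi in x)
intercept = ym - slope * xm
print(f"Egger intercept: {intercept:.2f}")
# A clearly positive intercept here hints at small-study effects.
```

Library implementations (e.g. in meta-analysis packages) wrap this same regression with a significance test on the intercept.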

Replication Studies

Encourage the replication of studies, especially the significant ones, to verify findings.


Funding Policies

Funders should emphasise the importance of publishing all results and ensure that funding is not biased towards studies with expected positive outcomes.

Institutional Repositories

Universities and institutions can maintain repositories where researchers can deposit datasets and results, regardless of whether they lead to publications.

Post-Publication Peer Review

Some platforms allow for post-publication commentary and review, which can bring attention to both significant and non-significant findings.

Acknowledge Bias

Recognise that no system is perfect. Being transparent about potential biases, even while trying to minimise them, is crucial for integrity in research.

Frequently Asked Questions 

What is publication bias?

Publication bias occurs when studies with positive or significant results are more likely to be published than those with negative or non-significant findings. This can skew the scientific understanding, as the published literature doesn’t represent the full range of research outcomes on a given topic.

How can publication bias be detected?

Researchers often use funnel plots, which visually represent study size versus effect size, to assess publication bias. Asymmetry in the plot suggests potential bias. Statistical methods like Egger’s regression test can further quantify bias. Comprehensive literature searches, including grey literature, can also help identify potential omissions.

How is publication bias assessed in systematic reviews?

In systematic reviews, publication bias can be assessed using funnel plots to visualise study size against effect size. Asymmetry in these plots suggests possible bias. Statistical tests like Egger’s regression or the Begg-Mazumdar test help quantify this bias. Checking grey literature and trial registries can identify missing studies.

How can publication bias be avoided?

To avoid publication bias, register studies in advance, ensuring all outcomes are reported. Promote open access and publish negative results. Use preprint archives for broader dissemination. Encourage data sharing, transparent reporting, and support journals that prioritise rigorous methodology over sensational results. Educate stakeholders about the implications of selective publication.

How does a funnel plot work?

Funnel plots graph study effect size against precision (often sample size or its inverse, standard error). In the absence of bias, the plot resembles an inverted funnel. Asymmetry suggests potential publication bias, where smaller studies with negative or non-significant results are missing. However, asymmetry can also arise from non-bias-related factors.

About Carmen Troy

Troy has been the leading content creator for ResearchProspect since 2017. He loves to write about the different types of data collection and data analysis methods used in research.