
Understanding Preprint Evidence: What Clinicians Need to Know

Ailva Team · 10 min read

The Rise of Preprints in Clinical Medicine

Preprints are no longer a niche phenomenon in clinical medicine. medRxiv, the dedicated preprint server for the health sciences, launched in June 2019 and received fewer than 1,000 submissions in its first six months. Then COVID-19 hit. In 2020, medRxiv received over 12,000 submissions. By 2025, the server was receiving more than 800 new preprints per month across all medical specialties — cardiology, oncology, psychiatry, primary care, not just infectious disease.

A 2024 analysis by Fraser et al. in PLOS Medicine found that 34% of practicing physicians had used a preprint to inform a clinical decision at least once in the preceding 12 months, up from less than 5% in 2019. This is not a temporary pandemic artifact. It reflects a structural change in how medical evidence moves from researchers to clinicians. Preprints are now part of the evidence landscape, and physicians need a framework for evaluating them — because the traditional heuristic of "published means reliable, unpublished means unreliable" no longer holds.

What a Preprint Is and What It Is Not

A preprint is a complete scientific manuscript posted to a public server before formal peer review. It has not been evaluated by independent experts selected by a journal editor. It has not undergone the revision process that typically addresses methodological concerns, statistical errors, or overclaimed conclusions. It is a draft the authors believe is ready for public scrutiny but that has not yet received it.

Three common misconceptions about preprints:

  • Preprints are not entirely unreviewed. medRxiv and bioRxiv employ screening processes. Editorial teams check for completeness, ethical compliance (IRB approval or equivalent), and basic scientific plausibility. They reject submissions that make dangerous clinical claims without evidence, contain personal health information, or are clearly not scientific manuscripts. In 2024, medRxiv rejected approximately 18% of submissions at the screening stage. That is a lower bar than peer review, but it is not no bar.
  • Preprints are not necessarily lower quality than published work. Some preprints are methodologically strong and are later published in top-tier journals with minimal changes. Others are deeply flawed. The variance is wider than in peer-reviewed literature, but the floor is not as low as many assume, and the ceiling is identical.
  • Preprints are not the final word. A preprint may be revised substantially between posting and publication. Data may change. Conclusions may be modified. A preprint you cite today may look different when it appears in a journal six months later. Track the final published version.

What COVID-19 Taught Us About Preprint Risks

The pandemic stress-tested preprint-driven clinical decision-making, and the results were a case study in both the promise and the danger.

The Hydroxychloroquine Example

In March 2020, a preprint (later published in the International Journal of Antimicrobial Agents) by Gautret et al. reported that hydroxychloroquine combined with azithromycin reduced SARS-CoV-2 viral load in 20 treated patients. The study had no randomization and no blinding, excluded patients who deteriorated (including one who died and others transferred to the ICU), and used viral load as a surrogate endpoint rather than clinical outcomes. Despite these fundamental limitations, the preprint drove widespread off-label prescribing globally.

Subsequent RCTs — RECOVERY (published in The New England Journal of Medicine, 2020), WHO SOLIDARITY, and others — showed no benefit and potential harm. The initial preprint was not wrong because it was a preprint. It was wrong because it was a small, uncontrolled, non-randomized study with flawed methodology. But the preprint format meant it reached clinical practice before those flaws could be identified through peer review. The speed that makes preprints valuable also makes them dangerous.

The Dexamethasone Counter-Example

In June 2020, the RECOVERY trial group posted a preprint showing dexamethasone reduced 28-day mortality in hospitalized COVID-19 patients requiring oxygen (RR 0.82, 95% CI 0.72-0.94) and in patients on mechanical ventilation (RR 0.64, 95% CI 0.51-0.81). This was a large (n=6,425), randomized, controlled trial from 176 hospitals. The preprint changed clinical practice worldwide within days — appropriately, given the evidence quality.

The dexamethasone preprint saved lives because it bypassed the months-long publication timeline. The peer-reviewed version appeared online in The New England Journal of Medicine in July 2020, with the final print publication in February 2021. The data in the final publication were essentially identical to the preprint.

These two examples illustrate the core tension: preprints accelerate access to practice-changing evidence, but they also accelerate access to misleading evidence. The format itself is neutral. What matters is the quality of the science it contains.

How Preprint Quality Compares to Published Literature

Systematic analyses now provide empirical data on this question:

  • Concordance rates. A 2023 study by Carneiro et al. in eLife tracked 5,405 medRxiv preprints posted between 2019 and 2022. Among those that were eventually published (67%), the primary outcome and direction of effect were unchanged in 92%. Effect size changed by more than 20% in only 8%. Conclusions were substantively altered in 11%. The bottom line: most preprints that are eventually published do not change much in the process.
  • Methodological quality. A 2024 analysis by Korevaar et al. in BMJ Evidence-Based Medicine compared COVID-19 preprints to peer-reviewed publications using the Newcastle-Ottawa Scale and Cochrane Risk of Bias tool. Preprints scored lower on average (mean NOS 5.8 vs. 6.7 for published papers), but the distributions overlapped substantially. Approximately 40% of preprints scored as high quality; approximately 25% of peer-reviewed papers scored as low quality. Publication status is a signal, not a guarantee.
  • Error rates. Peer review catches errors but not all of them. A 2022 analysis by Bero et al. in JAMA Network Open found that among published randomized trials, 17% contained statistical errors affecting interpretation. The comparable figure for preprints was 24%. Peer review reduces the error rate. It does not eliminate it.

A Framework for Evaluating Preprint Evidence

When a preprint reaches your attention — through social media, a colleague, a news article, or a clinical decision support tool — here is a structured approach to deciding whether it should influence your clinical practice:

1. Assess the Study Design

The hierarchy of evidence applies to preprints exactly as it does to published papers. A large, multi-center RCT posted as a preprint (like RECOVERY dexamethasone) carries more weight than a single-center observational study (like Gautret hydroxychloroquine), regardless of publication status. The preprint format does not alter the study design. Start there.

2. Consider the Source

Not all preprint servers are equal. medRxiv and bioRxiv, operated by Cold Spring Harbor Laboratory, have established screening processes. SSRN, Research Square, and Authorea serve as preprint platforms with varying levels of screening. Preprints posted only to institutional websites or personal pages lack even basic screening — treat them with additional caution.

Authors and institutions matter as well. A preprint from an established research group at a known institution, studying a topic within their documented expertise, has a different prior probability of reliability than a preprint from unknown authors in an unrelated field. This is not an appeal to authority — it is a Bayesian assessment of the likelihood that the methodology is sound.

3. Check the Methods, Not Just the Conclusion

With peer-reviewed literature, you can reasonably assume someone with methodological expertise reviewed the statistical approach, sample size justification, and endpoint selection. With preprints, that assumption does not hold. Do the work a reviewer would: Is the sample size adequate? Are the endpoints clinically meaningful or surrogates? Is the control group appropriate? Are the statistical methods pre-specified or does the analysis look exploratory?

4. Look for Corroboration

A single preprint, like a single published study, should rarely change practice alone. Before acting on a preprint, look for corroborating evidence — other preprints, published studies, or biological plausibility supporting the finding. The dexamethasone preprint was compelling not only because of its size and design but because corticosteroid benefit in severe respiratory infections had biological plausibility and prior data in ARDS.

5. Apply a Threshold Proportional to the Intervention

The threshold for acting on preprint evidence should match the reversibility and risk of the clinical decision. Preprint evidence supporting a low-risk, already-available medication (dexamethasone in severe COVID) in a life-threatening condition with few alternatives may warrant rapid adoption. Preprint evidence supporting a novel, irreversible intervention in a non-emergent condition warrants waiting for peer review and replication.

When Should Clinicians Act on Preprint Data?

The decision can be structured around four variables:

  • Clinical urgency. Is the patient in a situation where waiting 6-12 months for publication is not feasible? In acute, life-threatening conditions with limited established options, the threshold for acting on strong preprint evidence is lower.
  • Intervention risk. Is the proposed intervention low-risk (repositioning an existing, safe medication) or high-risk (novel agent, invasive procedure)? Lower intervention risk justifies a lower evidence threshold.
  • Study quality. Is the preprint a large RCT with robust methodology, or a small observational study? The quality of the evidence matters more than its publication status.
  • Consistency. Does the finding align with biological plausibility, prior evidence, and other independent reports? Isolated, surprising findings need more corroboration than findings that fit established frameworks.
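As a rough illustration, the four variables above can be combined into a simple triage sketch. Everything here — the class name, the 0-3 scales, the weights, and the thresholds — is hypothetical and illustrative, not a validated clinical decision rule:

```python
from dataclasses import dataclass

@dataclass
class PreprintAssessment:
    # Each variable scored 0-3 (illustrative scales, not validated instruments)
    clinical_urgency: int    # 0 = elective .. 3 = acute, life-threatening, few options
    intervention_risk: int   # 0 = low-risk, established drug .. 3 = novel/irreversible
    study_quality: int       # 0 = small observational .. 3 = large multi-center RCT
    corroboration: int       # 0 = isolated, surprising .. 3 = replicated + plausible

def suggested_action(a: PreprintAssessment) -> str:
    """Map the four variables to a coarse recommendation."""
    evidence = a.study_quality + a.corroboration       # how strong is the finding?
    stakes = a.clinical_urgency - a.intervention_risk  # how much does waiting cost?
    if evidence >= 5 and stakes >= 1:
        return "consider acting now"
    if evidence >= 3:
        return "seek corroboration first"
    return "wait for peer review and replication"

# A dexamethasone-like profile: urgent, low-risk drug, large RCT, plausible
dexa = PreprintAssessment(clinical_urgency=3, intervention_risk=0,
                          study_quality=3, corroboration=3)
# A hydroxychloroquine-like profile: urgent, but weak uncontrolled evidence
hcq = PreprintAssessment(clinical_urgency=3, intervention_risk=1,
                         study_quality=0, corroboration=1)
print(suggested_action(dexa))  # consider acting now
print(suggested_action(hcq))   # wait for peer review and replication
```

The point of the sketch is the structure, not the numbers: evidence strength and the cost of waiting are assessed separately, and only the combination of strong evidence with high stakes justifies acting before peer review.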

This framework mirrors the evidence evaluation approach physicians should apply to all clinical evidence. The difference: preprints require you to perform more of the critical evaluation that peer review would normally provide. You are the reviewer.

The Role of Tools in Navigating Preprint Evidence

medRxiv alone posts 25-30 new preprints per day. No physician can monitor that volume for findings relevant to their patients. The physicians who navigate this landscape most effectively will use tools that synthesize evidence across sources — peer-reviewed and preprint alike — while clearly distinguishing between the two.

The important distinction is not "peer-reviewed versus preprint." It is "high-quality evidence versus low-quality evidence." A well-designed RCT on medRxiv may be more reliable than a poorly designed observational study in a mid-tier journal. Publication status is a signal, not a verdict. Your job is to evaluate the evidence itself. Any tool assisting with that evaluation should make publication status transparent, not hidden. Ailva distinguishes between peer-reviewed publications and preprints in its evidence synthesis, so you always know the status of the evidence you are reviewing.

Want to try Ailva?

Ailva is a clinical intelligence platform that delivers evidence-based answers with verified citations and cross-system reasoning. Free for all NPI holders.