We have written before about what confounding variables are; nonetheless, we came up short. An article published on www.medscape.org explains this subject better, so we have copied its introduction for you below. Unfortunately, research on running is full of confounding variables because of the complexity of running.
How Do You Know Which
Health Care Effectiveness Research You Can Trust? A Guide to Study Design for
the Perplexed CME
Stephen B. Soumerai, ScD; Douglas Starr, MS; Sumit
Majumdar, MD, MPH, FRCPC
Editor’s Note: The purpose of this Editor’s Choice article is translational in
nature. It is intended to illustrate some of the most common examples of
potential study bias to help policy makers, journalists, trainees, and the
public understand the strengths and weaknesses of various types of healthcare
research and the kinds of study designs that are most trustworthy. It is
neither a comprehensive guide nor a standard research methods article. The
authors intend to add to these examples of bias in research designs in future
brief and easy-to-understand articles designed to show both the scientific
community and the broader population why caution is needed in understanding and
accepting the results of research that may have profound and long-lasting
effects on health policy and clinical practice.
Evidence is mounting that publication in a
peer-reviewed medical journal does not guarantee a study’s validity.[1] Many studies of healthcare
effectiveness do not show the cause-and-effect relationships that they claim.
They have faulty research designs. Mistaken conclusions later reported in the
news media can lead to wrong-headed policies and confusion among policy makers,
scientists, and the public. Unfortunately, little guidance exists to help
distinguish good study designs from bad ones, the central goal of this article.
There have been major reversals of study findings
in recent years. Consider the risks and benefits of postmenopausal hormone
replacement therapy (HRT). In the 1950s, epidemiological studies suggested
higher doses of HRT might cause harm, particularly cancer of the uterus.[2] In subsequent
decades, new studies emphasized the many possible benefits of HRT, particularly
its protective effects on heart disease — the leading killer of North American
women. The uncritical publicity surrounding these studies was so persuasive
that by the 1990s, about half the postmenopausal women in the United States
were taking HRT, and physicians were chastised for under-prescribing it. Yet in
2003, the largest randomized controlled trial (RCT) of HRT among postmenopausal
women found small increases in breast cancer and increased risks of heart
attacks and strokes, largely offsetting any benefits such as fracture
reduction.[3]
The reason these studies contradicted each other
had less to do with the effects of HRT than with differences in study designs,
particularly whether they included comparable control groups and data on
preintervention trends. In the HRT case, health-conscious women who chose to
take HRT for health benefits differed from those who did not — for reasons of
choice, affordability, or pre-existing good health.[4] Thus, although most
observational studies showed a “benefit” associated with taking HRT, findings
were undermined because the study groups were not comparable. These fundamental
nuances were not reported in the news media.
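To make this selection problem concrete, here is a minimal simulation sketch of our own (not from the article; all variable names and numbers are hypothetical). It shows how a "healthy-user" bias can produce an apparent benefit in a naive observational comparison even when the true treatment effect is zero, and how randomized assignment removes that bias.

```python
# Illustrative sketch only: healthier women are more likely to choose HRT,
# so a naive observational comparison shows a spurious "benefit" even though
# the true treatment effect is set to zero. Randomization breaks that link.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

health = rng.normal(size=n)                # unmeasured baseline health (confounder)
p_hrt = 1 / (1 + np.exp(-2 * health))      # healthier women more likely to take HRT
took_hrt = rng.random(n) < p_hrt

true_effect = 0.0                          # assume HRT truly does nothing here
outcome = health + true_effect * took_hrt + rng.normal(size=n)  # e.g. a heart-health score

# Naive observational contrast: confounded by baseline health
obs_diff = outcome[took_hrt].mean() - outcome[~took_hrt].mean()

# Randomized assignment: same population, treatment independent of health
randomized = rng.random(n) < 0.5
rct_outcome = health + true_effect * randomized + rng.normal(size=n)
rct_diff = rct_outcome[randomized].mean() - rct_outcome[~randomized].mean()

print(f"Observational 'effect': {obs_diff:+.2f}")  # large spurious benefit
print(f"Randomized effect:      {rct_diff:+.2f}")  # close to the true effect of zero
```

In this toy setup the observational contrast mostly reflects who chose the treatment, not what the treatment did, which is exactly why comparable control groups matter.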
Another pattern in the evolution of science is that
early studies of new treatments tend to show the most dramatic, positive health
effects, and these effects diminish or disappear as more rigorous and larger
studies are conducted.[5] As these positive effects
decrease, harmful side effects emerge. Yet the exaggerated early studies, which
by design tend to inflate benefits and underestimate harms, have the most
influence.
Rigorous design is also essential for studying
health policies, which essentially are huge real-world experiments.[1] Such policies, which may
affect tens of millions of people, include insurance plans with very high
patient deductible costs or Medicare’s new economic penalties levied against
hospitals for “preventable” adverse events.[6] We know little about
the risks, costs, or benefits of such policies, particularly for the poor and
the sick. Indeed, the most credible literature syntheses conducted under the
auspices of the international Cochrane Collaboration commonly exclude from
evidence 50% to 75% of published studies because they do not meet basic
research design standards required to yield trustworthy conclusions (eg, lack
of evidence for policies that pay physicians to improve quality of medical
care).[7,8]
This article focuses on a fundamental question:
which types of healthcare studies are most trustworthy? That is, which study
designs are most immune to the many biases and alternative explanations that
may produce unreliable results?[9] The key question is
whether the health “effects” of interventions — such as drugs, technologies, or
health and safety programs — are different from what would have happened anyway
(ie, what happened to a control group). Our analysis is based on more than 75
years of proven research design principles in the social sciences that have
been largely ignored in the health sciences.[9] These simple
principles show what is likely to reduce biases and systematic errors. We will
describe weak and strong research designs that attempt to control for these
biases. Those examples, illustrated with simple graphics, will emphasize 3
overarching principles:
- No study is perfect. Even the most rigorous research design can
be compromised by inaccurate measures and analysis, unrepresentative
populations, or even bad luck (“chance”). But we will show that most
problems of bias are caused by weak designs yielding exaggerated effects.
- “You can’t fix by analysis what you bungled by design.”[10] Research design is too often neglected, and strenuous statistical machinations are then needed to “adjust for” irreconcilable differences between study and control groups. We will show that such differences are often more responsible for any observed “effects” than is the health service or policy of interest (see the sketch after this list).
- Publishing innovative but severely
biased studies can do more harm than good.
Sometimes researchers may publish overly definitive conclusions using
unreliable study designs, reasoning that it is better to have unreliable
data than no data at all and that the natural progression of science will
eventually sort things out. We do not agree. We will show how single,
flawed studies, combined with widespread news media attention and advocacy
by special interests, can lead to ineffective or unsafe policies.[1]
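As a companion to the earlier sketch, here is a second illustrative simulation of our own (again hypothetical, not the authors' analysis) for the point that analysis cannot rescue a flawed design: regressing the outcome on the treatment and a measured covariate still yields a biased estimate when an unmeasured confounder drives both treatment choice and outcome.

```python
# Illustrative sketch only: "adjusting" for a measured covariate (age) does not
# remove bias from an unmeasured confounder (baseline health) that influences
# both who gets treated and the outcome. All names and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

age = rng.normal(size=n)                   # measured covariate
health = rng.normal(size=n)                # unmeasured confounder
treated = rng.random(n) < 1 / (1 + np.exp(-(1.5 * health - 0.5 * age)))

true_effect = 0.0
outcome = 1.0 * health - 0.3 * age + true_effect * treated + rng.normal(size=n)

# "Adjusted" analysis: ordinary least squares on treatment and age only
X = np.column_stack([np.ones(n), treated.astype(float), age])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"Adjusted treatment estimate: {coef[1]:+.2f}  (true effect is {true_effect})")
```

The "adjusted" estimate remains far from the true effect of zero; only a design that makes the groups comparable, such as randomization or a credible comparison with preintervention trends, removes the bias.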
The case examples in this article describe how some
of the most common biases and study designs affect research on important health
policies and interventions, such as comparative effectiveness of various
medical treatments, cost-containment policies, and health information
technology.
The examples include visual illustrations of common
biases that compromise a study’s results, weak and strong design alternatives,
and the lasting effects of dramatic but flawed early studies. Generally,
systematic literature reviews provide more conservative and trustworthy
evidence than any single study, and conclusions of such reviews of the broad
evidence will also be used to supplement the results of a strongly designed
study. Finally, we illustrate the impacts of the studies on the news media,
medicine, and policy.