Meta-analysis

In statistics, a meta-analysis combines the results of several studies that address a set of related research hypotheses. In its simplest form, this is done by identifying a common measure of effect size, of which a weighted average might be the output of the meta-analysis. Here the weighting might be related to sample sizes within the individual studies. More generally, there are other differences between the studies that need to be allowed for, but the general aim of a meta-analysis is to estimate the true "effect size" more powerfully than is possible in a single study under a given single set of assumptions and conditions.

Meta-analyses are often, but not always, important components of a systematic review procedure. Here it is convenient to follow the terminology used by the Cochrane Collaboration, and use "meta-analysis" to refer to statistical methods of combining evidence, leaving other aspects of 'research synthesis' or 'evidence synthesis', such as combining information from qualitative studies, for the more general context of systematic reviews.

The term "meta-analysis" was coined by Gene V. Glass
Gene V. Glass
Gene V Glass is an American statistician and researcher working in educational psychology and the social sciences. He coined the term "meta-analysis" and illustrated its use in 1976 while a faculty member at the University of Colorado Boulder...

.

History


The first meta-analysis was performed by Karl Pearson in 1904, in an attempt to overcome the problem of reduced statistical power in studies with small sample sizes; analyzing the results from a group of studies can allow more accurate data analysis. However, the first meta-analysis of all conceptually identical experiments concerning a particular research issue, conducted by independent researchers, has been identified as the 1940 book-length publication Extra-sensory perception after sixty years, authored by Duke University psychologists J. G. Pratt, J. B. Rhine, and associates. This encompassed a review of 145 reports on ESP experiments published from 1882 to 1939, and included an estimate of the influence of unpublished papers on the overall effect (the file-drawer problem). Although meta-analysis is widely used in epidemiology and evidence-based medicine today, a meta-analysis of a medical treatment was not published until 1955. In the 1970s, more sophisticated analytical techniques were introduced in educational research, starting with the work of Gene V. Glass, Frank L. Schmidt and John E. Hunter.
Gene V Glass was the first modern statistician to formalize the use of meta-analysis, and is widely recognized as the modern founder of the method. The online Oxford English Dictionary lists the first usage of the term in the statistical sense as 1976, by Glass. The statistical theory surrounding meta-analysis was greatly advanced by the work of Nambury S. Raju, Larry V. Hedges, Harris Cooper, Ingram Olkin, John E. Hunter, Jacob Cohen, Thomas C. Chalmers, Robert Rosenthal and Frank L. Schmidt.

Advantages of meta-analysis


Advantages of meta-analysis (for example, over classical literature reviews or simple overall means of effect sizes) include:
  • It shows whether the results are more varied than would be expected from the sample diversity alone.
  • It allows derivation and statistical testing of overall factors or effect size parameters in related studies.
  • It supports generalization to the population of studies.
  • It offers the ability to control for between-study variation.
  • Moderators can be included to explain variation between studies.
  • It has higher statistical power to detect an effect than any single study.
  • It helps deal with information overload: the large number of articles published each year.
  • Because it combines several studies, it is less influenced by local findings than any single study.
  • It makes it possible to show whether a publication bias exists.

Steps in a meta-analysis


1. Formulation of the problem

2. Search of literature

3. Selection of studies ('incorporation criteria')
  • Based on quality criteria, e.g. the requirement of randomization and blinding in a clinical trial
  • Selection of specific studies on a well-specified subject, e.g. the treatment of breast cancer.
  • Decide whether unpublished studies are included to avoid publication bias (file drawer problem)


4. Decide which dependent variables or summary measures are allowed. For instance:
  • Differences (discrete data)
  • Means (continuous data)
  • Hedges' g is a popular summary measure for continuous data that is standardized in order to eliminate scale differences, but it incorporates an index of variation between groups (a short computational sketch is given at the end of this section):

$$\delta = \frac{\mu_t - \mu_c}{\sigma},$$

in which $\mu_t$ is the treatment mean, $\mu_c$ is the control mean, and $\sigma^2$ the pooled variance.

5. Model selection (see next paragraph)

For reporting guidelines, see the QUOROM statement.
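
The following is a minimal sketch of how Hedges' g might be computed for a single study, assuming two independent samples; the function name, the small-sample bias correction shown, and the simulated data are illustrative rather than taken from any particular source.

```python
# A minimal sketch: Hedges' g for one study from two independent samples.
import numpy as np

def hedges_g(treatment, control):
    """Standardized mean difference with the small-sample bias correction."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    n_t, n_c = len(t), len(c)
    # Pooled variance across the two groups
    pooled_var = ((n_t - 1) * t.var(ddof=1) + (n_c - 1) * c.var(ddof=1)) / (n_t + n_c - 2)
    d = (t.mean() - c.mean()) / np.sqrt(pooled_var)
    # Hedges' small-sample correction factor J
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    return j * d

# Example: a hypothetical study with 20 treated and 20 control participants
rng = np.random.default_rng(0)
print(hedges_g(rng.normal(0.5, 1, 20), rng.normal(0.0, 1, 20)))
```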

Meta-regression models


Generally, three types of models can be distinguished in the literature on meta-analysis: simple regression, fixed effect meta-regression and random effects meta-regression.

Simple regression


The model can be specified as

$$y_j = \beta_0 + \beta_1 x_{1j} + \beta_2 x_{2j} + \cdots + \varepsilon_j,$$

where $y_j$ is the effect size in study $j$ and $\beta_0$ (the intercept) is the estimated overall effect size. The variables $x_{ij}$ specify different characteristics of the study, and $\varepsilon_j$ specifies the between-study variation. Note that this model does not allow specification of within-study variation.

Fixed-effect meta-regression


Fixed-effect meta-regression assumes that the true effect size $\theta$ is normally distributed with $\mathcal{N}(\theta, \sigma_\theta)$, where $\sigma^2_\theta$ is the within-study variance of the effect size. A fixed-effect meta-regression model thus allows for within-study variability, but no between-study variability, because all studies have the identical expected fixed effect size $\theta$, i.e. $\varepsilon_j = 0$. Note that for "fixed effect" no plural is used (in contrast to "random effects"), as only one true effect across all datasets is assumed. The model is

$$y_j = \beta_0 + \beta_1 x_{1j} + \beta_2 x_{2j} + \cdots + \eta_j, \qquad \eta_j \sim \mathcal{N}(0, \sigma^2_{\theta_j}).$$

Here $\sigma^2_{\theta_j}$ is the variance of the effect size in study $j$.
Fixed-effect meta-regression ignores between-study variation. As a result, parameter estimates are biased if between-study variation cannot be ignored. Furthermore, generalizations to the population are not possible.

Random effects meta-regression


Random-effects meta-regression rests on the assumption that $\theta$ in $\mathcal{N}(\theta, \sigma_j)$ is a random variable following a (hyper-)distribution $\mathcal{N}(\theta, \sigma_\theta)$. A random-effects meta-regression is called a mixed-effects model when moderators are added to the model. The model is

$$y_j = \beta_0 + \beta_1 x_{1j} + \beta_2 x_{2j} + \cdots + \eta_j + \varepsilon_j.$$

Here $\sigma^2_j$ is the variance of the effect size in study $j$. The between-study variance $\sigma^2_\theta$ is estimated using common estimation procedures for random-effects models, such as the restricted maximum likelihood (REML) estimator.
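
As a rough illustration of the difference between fixed-effect and random-effects weighting, the sketch below pools hypothetical per-study effect sizes with inverse-variance weights, estimating the between-study variance with the DerSimonian–Laird method-of-moments estimator rather than REML simply to keep the example dependency-free; all variable names and numbers are made up.

```python
# A minimal sketch of an intercept-only random-effects model (the special case
# of random-effects meta-regression with no moderators).
import numpy as np

def random_effects_pool(y, v):
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * y) / np.sum(w)             # fixed-effect pooled estimate
    q = np.sum(w * (y - fixed) ** 2)              # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # DerSimonian-Laird between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical effect sizes and within-study variances for five studies
y = [0.30, 0.12, 0.55, 0.20, 0.41]
v = [0.020, 0.050, 0.080, 0.030, 0.060]
print(random_effects_pool(y, v))
```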

Which model to choose


The simple regression model does not allow for within-study variation, and therefore yields significant results too easily. The fixed-effect regression model does not allow for between-study variation, and likewise yields significant results too easily. The random- or mixed-effects model allows for both within-study and between-study variation and is therefore the most appropriate model to choose. Whether there is between-study variation can be tested by testing whether the effect sizes are homogeneous. If this test indicates that the effect sizes are homogeneous, fixed-effect meta-regression might seem appropriate; however, the test often does not have enough power to detect between-study variation. Apart from this lack of power, the fixed-effect assumption of homogeneous effect sizes is difficult to justify, because it amounts to assuming that all studies are exactly the same, whereas in practice no two studies are exactly alike. To cope with the fact that each study is different (different sample, different time, different place, and so on), a random- or mixed-effects model is generally the appropriate model to choose and gives the most reliable results. A sketch of this homogeneity test follows.
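
A minimal sketch of the homogeneity test referred to above, assuming the same kind of hypothetical effect sizes and within-study variances as before: Cochran's Q is compared with a chi-square distribution with k − 1 degrees of freedom, and I² summarizes the proportion of variation beyond sampling error.

```python
# A minimal sketch: test whether the effect sizes are homogeneous.
import numpy as np
from scipy.stats import chi2

y = np.array([0.30, 0.12, 0.55, 0.20, 0.41])   # hypothetical effect sizes
v = np.array([0.020, 0.050, 0.080, 0.030, 0.060])  # hypothetical within-study variances
w = 1.0 / v
fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - fixed) ** 2)                # Cochran's Q
k = len(y)
p_value = chi2.sf(q, df=k - 1)                  # small p suggests heterogeneity
i2 = max(0.0, (q - (k - 1)) / q) * 100          # I^2: % of variation beyond sampling error
print(q, p_value, i2)
```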

Applications in modern science


Modern statistical meta-analysis does more than just combine the effect sizes of a set of studies. It can test if the outcomes of studies show more variation than the variation that is expected because of sampling different research participants. If that is the case, study characteristics such as measurement instrument used, population sampled, or aspects of the studies' design are coded. These characteristics are then used as predictor variables to analyze the excess variation in the effect sizes. Some methodological weaknesses in studies can be corrected statistically. For example, it is possible to correct effect sizes or correlations for the downward bias due to measurement error or restriction on score ranges.
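
As one concrete example of such a statistical correction, the short sketch below applies Spearman's classical correction for attenuation to an observed correlation; the function name and reliability values are hypothetical.

```python
# A minimal sketch of one artifact correction: disattenuating an observed
# correlation for measurement error in both variables.
def correct_for_attenuation(r_observed, reliability_x, reliability_y):
    """Spearman's correction for attenuation."""
    return r_observed / (reliability_x * reliability_y) ** 0.5

# Hypothetical values: observed r = .30 with reliabilities .80 and .70
print(correct_for_attenuation(0.30, 0.80, 0.70))  # ≈ 0.40
```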

Meta-analysis can be done with single-subject designs as well as group research designs. This is important because much of the research on low-incidence populations has been done with single-subject research designs. Considerable dispute exists over the most appropriate meta-analytic technique for single-subject research.

Meta-analysis leads to a shift of emphasis from single studies to multiple studies. It emphasizes the practical importance of the effect size instead of the statistical significance of individual studies. This shift in thinking has been termed "meta-analytic thinking". The results of a meta-analysis are often shown in a forest plot.

Results from studies are combined using different approaches. One approach frequently used in meta-analysis in health care research is termed the 'inverse variance method'. The average effect size across all studies is computed as a weighted mean, whereby the weights are equal to the inverse variance of each study's effect estimator. Larger studies and studies with less random variation are given greater weight than smaller studies. Other common approaches include the Mantel–Haenszel method and the Peto method.
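
A minimal sketch of two of the approaches named above, with entirely hypothetical study data: an inverse-variance weighted mean of effect sizes, and the Mantel–Haenszel pooled odds ratio for a set of 2×2 tables.

```python
# A minimal sketch: inverse-variance pooling and the Mantel-Haenszel odds ratio.
import numpy as np

def inverse_variance_mean(effects, variances):
    """Weighted mean with weights equal to the inverse of each study's variance."""
    w = 1.0 / np.asarray(variances, float)
    return np.sum(w * np.asarray(effects, float)) / np.sum(w)

def mantel_haenszel_or(tables):
    """tables: list of (a, b, c, d) = (treatment events, treatment non-events,
    control events, control non-events) per study."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Hypothetical studies
print(inverse_variance_mean([0.30, 0.12, 0.55], [0.02, 0.05, 0.08]))
print(mantel_haenszel_or([(12, 88, 20, 80), (8, 92, 15, 85)]))
```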

A recent approach to studying the influence that weighting schemes can have on results has been proposed through the construct of gravity, which is a special case of combinatorial meta-analysis.

Signed differential mapping is a statistical technique for meta-analyzing studies on differences in brain activity or structure which used neuroimaging techniques such as fMRI, VBM or PET.

Weaknesses


Some have argued that a weakness of the method is that sources of bias are not controlled by the method: a good meta-analysis of badly designed studies will still result in bad statistics, according to Robert Slavin. Slavin has argued that only methodologically sound studies should be included in a meta-analysis, a practice he calls 'best evidence meta-analysis'. Other meta-analysts would include weaker studies, and add a study-level predictor variable that reflects the methodological quality of the studies to examine the effect of study quality on the effect size. However, Glass argued that the better approach preserves variance in the study sample, casting as wide a net as possible, and that methodological selection criteria introduce unwanted subjectivity, defeating the purpose of the approach.

File drawer problem


Another weakness of the method is the heavy reliance on published studies, which may create exaggerated outcomes, as it is very hard to publish studies that show no significant results. For any given research area, one cannot know how many studies have been conducted but never reported and the results filed away.

This file drawer problem results in a distribution of effect sizes that is biased, skewed or completely cut off, creating a serious base rate fallacy, in which the significance of the published studies is overestimated. For example, if there were fifty tests and only ten yielded significant results, then the real outcome is only 20% as significant as it appears, because the other 80% of tests were not submitted for publication or were rejected by publishers as uninteresting. This should be seriously considered when interpreting the outcomes of a meta-analysis.

This can be visualized with a funnel plot, a scatter plot of effect sizes against a measure of study precision such as sample size or standard error. There are several procedures available that attempt to correct for the file drawer problem once it is identified, such as estimating the cut-off part of the distribution of study effects.
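
A minimal sketch of a funnel plot, assuming made-up per-study effect sizes and standard errors and that matplotlib is available; a gap in one corner of the funnel, relative to the pooled estimate, is the kind of asymmetry that can hint at a file drawer problem.

```python
# A minimal sketch: funnel plot of effect sizes against standard errors.
import numpy as np
import matplotlib.pyplot as plt

effects = np.array([0.35, 0.50, 0.10, 0.42, 0.28, 0.60, 0.22])       # hypothetical
std_errors = np.array([0.05, 0.20, 0.08, 0.15, 0.10, 0.25, 0.12])    # hypothetical
pooled = np.average(effects, weights=1.0 / std_errors**2)            # inverse-variance mean

plt.scatter(effects, std_errors)
plt.axvline(pooled, linestyle="--")
plt.gca().invert_yaxis()          # precise (small-SE) studies plotted at the top
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.title("Funnel plot (hypothetical data)")
plt.show()
```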

Other weaknesses are Simpson's paradox (two smaller studies may point in one direction while the combined study points in the opposite direction); the coding of an effect is subjective; the decision to include or reject a particular study is subjective; there are two different ways to measure effect (correlation or standardized mean difference); the interpretation of effect size is purely arbitrary; it has not been determined whether the statistically most accurate method for combining results is the fixed-effect model or the random-effects model; and, for medicine, the underlying risk in each studied group is of significant importance, and there is no universally agreed-upon way to weight the risk.

The Rind et al. controversy provides an example of an application of meta-analysis in which many components of the analysis were subsequently criticized.

Dangers of agenda-driven bias


The most severe weakness and abuse of meta-analysis often occurs when the person or persons doing the meta-analysis have an economic, social, or political agenda, such as the passage or defeat of legislation. People with these types of agendas may be more likely to abuse meta-analysis due to personal bias. For example, researchers favorable to the author's agenda are likely to have their studies "cherry-picked" while those not favorable will be ignored or labeled as "not credible". In addition, the favored authors may themselves be biased or paid to produce results that support their overall political, social, or economic goals, for example by selecting small favorable data sets and not incorporating larger unfavorable data sets.

If a meta-analysis is conducted by an individual or organization with a bias or a predetermined desired outcome, it should be treated as highly suspect or as having a high likelihood of being "junk science". From an integrity perspective, researchers with a bias should avoid meta-analysis and use a less abuse-prone (or independent) form of research.

A 2011 study done to disclose possible conflicts of interests in underlying research studies used for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interests in the studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals; 15 from specialty medicine journals, and 3 from the Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed an aggregate of 509 randomized controlled trials (RCTs). Of these, 318 RCTs reported funding sources with 219 (69%) industry funded. 132 of the 509 RCTs reported author conflict of interest disclosures, with 91 studies (69%) disclosing industry financial ties with one or more authors. The information was, however, seldom reflected in the meta-analyses. Only two (7%) reported RCT funding sources and none reported RCT author-industry ties. The authors concluded “without acknowledgment of COI due to industry funding or author industry financial ties from RCTs included in meta-analyses, readers’ understanding and appraisal of the evidence from the meta-analysis may be compromised.”

Comparison of meta-analysis to the scientific method


Francis Bacon described a method of procedure for advancing the physical sciences.

“Aphorism 106: In forming our axioms from induction, we must examine and try whether the axiom we derive be only fitted and calculated for the particular instances from which it is deduced, or whether it be more extensive and general. If it be the latter, we must observe, whether it confirms its own extent and generality by giving surety, as it were, in pointing out new particulars, so that we may neither stop at actual discoveries, nor with a careless grasp catch at shadows and abstract forms, instead of substances of a determinate nature: and as soon as we act thus, well authorized hope may with reason, be said to beam upon us.”

George Boole gave a similar description.

“The study of every department of physical science begins with observation; it advances by the collation of facts to a presumptive acquaintance with their connecting law, the validity of such presumption it tests by new experiments so devised as to augment, if the presumption be well founded, its probability indefinitely; and finally, the law of the phenomenon having been with sufficient confidence determined, the investigation of causes, conducted by the due mixture of hypothesis and deduction, crowns the inquiry.” (Boole, 1958, p. 402)

In both descriptions there are three steps: first, assemble data; second, formulate an explanatory physical law; and third, test the proposed physical law in future experiments. In a meta-analysis the first two steps are carried out, but the third step is modified. Because meta-analysis is retrospective, no data are gathered after the formulation of the physical law, and so the physical law is confirmed using data that were known at the time it was formulated. This requires a change from the usual notion of probability:
“Probability is expectation founded upon partial knowledge. A perfect acquaintance with all the circumstances affecting the occurrence of an event would change expectation into certainty, and leave neither room nor demand for a theory of probabilities.”(Boole, 1958, p. 402)
Statistical significance in a hypothesis test is the probability of rejecting the null hypothesis when it is true. In the scientific method, statistical significance is the probability of a future event. In a meta-analysis, statistical significance is the probability of a past event.

In a meta-analysis the analyst has “perfect acquaintance with all the circumstances affecting the occurrence” of any event defined by the data at the time the hypotheses are specified. So there is no uncertainty, and the probabilities of such events, using Boole’s notion of probability, would be zero or one. The procedure in meta-analysis is to simulate the necessary incompleteness of knowledge by calculating the power and statistical significance as if none of the data were known to the analyst at the time the hypotheses were specified. A meta-analysis hypothesis test is, within the context of the scientific method of Bacon and Boole, a simulated hypothesis test.

See also



  • Epidemiologic methods
  • Newcastle–Ottawa scale
  • Reporting bias
  • Review journal
  • Study heterogeneity
  • Systematic review

Further reading

. Explores two contrasting views: does meta-analysis provide "objective, quantitative methods for combining evidence from separate but similar studies" or merely "statistical tricks which make unjustified assumptions in producing oversimplified generalisations out of a complex of disparate studies"?
  • Wilson, D. B., & Lipsey, M. W. (2001). Practical meta-analysis. Thousand Oaks: Sage publications. ISBN 0761921680
  • O'Rourke, K. (2007) Just the history from the combining of information: investigating and synthesizing what is possibly common in clinical observations or studies via likelihood. Oxford: University of Oxford, Department of Statistics. Gives technical background material and details on the "An historical perspective on meta-analysis" paper cited in the references.
  • Owen, A. B. (2009). "Karl Pearson's meta-analysis revisited". Annals of Statistics, 37 (6B), 3867–3892. Supplementary report.
  • Ellis, Paul D. (2010). The Essential Guide to Effect Sizes: An Introduction to Statistical Power, Meta-Analysis and the Interpretation of Research Results. United Kingdom: Cambridge University Press. ISBN 0521142466
  • Bonett, D.G. (2009). Meta-analytic interval estimation for standardized and unstandardized mean differences, Psychological Methods, 14, 225-238.

External links


Software