Epidemiology of Wound Care: the High Stakes
The economic burden of chronic wound care for the United States is sobering. Currently, more than 70 million acute surgical wounds and approximately 7 million patients with chronic wounds require care every year.4 For pressure ulcer care alone, the cost is estimated to approach $11 billion annually.5
The population is aging. By 2030, 19% of Americans will be >65 years old, a group that includes 9 million frail elders.6 Age is a risk factor for chronic wounds. Diabetes care also will challenge the American health care system. Currently, 29.1 million Americans are diabetic (mostly type 2); 86 million are prediabetic. Diabetes is a risk factor for wounds and the seventh leading cause of death in the United States.7
Chronic wound prevalence is not just an American issue. A recent market forecast report8 described global predictions: chronic wounds (pressure, venous, and diabetic ulcers) are on course to increase globally from 40 million to >60 million by 2017. The need for good evidence on which to base wound care is imperative, but available high-level evidence is rare. Calls for quality conduct and reporting of randomized controlled trials (RCTs) are occurring in the wound literature.9
Evidence-Based Practice (EBP) and How/Why it is Used
EBP is a term describing a problem-solving approach to health care delivery that crosses all disciplines10; it involves the conscientious, explicit, and judicious use of current best evidence in making decisions about clinical patient care.11 The literature supports that best evidence should be integrated with patient/family preferences and values, individual clinical expertise, and the patient’s clinical context.10,12,13
EBP is critical to current and future clinical care, especially in light of the Affordable Care Act. According to expert researchers and clinicians,10-12 EBP can lead to the highest quality of care and the best patient outcomes. Interventions for patients with clinical conditions have to be selected with effectiveness in mind. Scarce resources will become more so as the patient burden in the nation increases.4,10 In addition, in contemporary care reimbursement for interventions is provided based on clinical outcomes. The legal community also is aware of recommended approaches. Clinicians will be expected to keep current with effective techniques and interventions and provide legally defensible care. Patients seek advice from many sources, too. Clinicians will be expected to explain why they do or do not select a treatment. The specialty of wound care will be heavily impacted by these factors because extensive explanations are time-consuming, and time is money in this costly arena.10,12
The process of EBP is first enacted by obtaining the best research evidence available. In the hierarchy of EBP, in which clinical evidence is ranked according to its freedom from bias, meta-analyses and SRs are ranked at the top for describing quantitative effectiveness.14 If feasibility, appropriateness, and meaningfulness are the goals, qualitative meta-syntheses provide the strongest evidence.15 Researchers will have synthesized numerous high-quality study (RCT) or clinical trial (CT) results from the literature.12 EBP researchers then approach the literature review based on the PICO or PICOT format: that is, the heart of the SR is a focused question based on a population or group of participants regarding an intervention of interest. The intervention is compared to a control or standard treatment. The outcomes of the intervention are specified. Some researchers include timing issues as well. A sample PICOT question for a SR may read: “In patients with chronic wounds (eg, diabetic foot ulcers), how does use of negative pressure wound therapy (NPWT) affect rate and quality of wound healing in the first 2 weeks of use?” Researchers then perform a systematic search of the literature using clearly specified search terms (Medical Subject Headings, or MeSH) in a variety of databases (eg, Medline, EMBASE, CINAHL), and RCTs and CTs that match the search criteria are retrieved and reviewed.
Using a mnemonic should not imply this is a simplistic activity. The protocol process used in SRs is highly rigorous. Inclusion/exclusion criteria and terminology are thoroughly discussed and explicitly defined. The entire process must be clearly and succinctly detailed in any SR.16,17
The EBP approach is not without critics. In a narrative review, Berguer18 argues that the evidence-based medicine (EBM) approach, for example, implies only one right way. Proponents suggest EBM embodies scientific truth, and those who critique it must be opting for “non-evidential unscientific data.”18 Berguer18 cautions that users need to examine how “best evidence” is selected, how pertinent best-evidence recommendations are to individual patients, and how the epistemic limitations of RCTs relate to the mechanics of disease.
The Language of Research: Helpful Terminology
Research and EBP use terminology with which clinicians may not be familiar. Just as wound care providers have to learn the correct terms to document wound assessment, they will have to learn new words to comprehend SRs. Table 1 contains a list of selected commonly used terms and definitions for easy reference.
SR. A SR is a structured comprehensive synthesis of the research literature to determine the best research evidence to address a health care question (frequently an intervention). Most often groups of researchers (at least 2) conduct the quantitative SR using meta-analysis (if appropriate).19,20 A qualitative SR summarizes primary quantitative studies but does not combine the study results using statistical methods.21 This latter form should not be confused with synthesis of qualitative research studies, a process called meta-synthesis.
Also known as a research synthesis, a SR has well-defined characteristics17,21:
- Clearly describes objectives and focused questions
- Specifies explicit inclusion/exclusion criteria before work begins
- Involves an exhaustive search to identify published and unpublished relevant studies
- Appraises the validity and quality of studies and reporting of inclusion/exclusion choices
- Includes a data analysis of included studies
- Presents synthesized extracted findings
- Provides transparent explicit reporting of methodology used to conduct the review.19
To promote rigor and transparency and to reduce potential error, a protocol for the SR is developed a priori. The finished SR publication must address explicitly all the multiple steps and decisions. At least 2 reviewers conduct the data extraction.22
Two international groups commonly conduct SRs related to health care. The Cochrane Collaboration usually involves clinical effectiveness questions utilizing CTs and RCTs.23 Another group, Joanna Briggs Institute, is more focused on nursing issues that include a greater focus on qualitative research as well as the publication of meta-syntheses.24
The plethora of SRs across health care disciplines is so substantial some researchers are doing SRs of SRs to identify trends, strengths, and gaps in areas of health care.25,26 The literature supports that SRs have the power to change practice positively when done well. For example, most clinical practice guidelines are a combination of clinical experience, expert opinion, and research evidence. Narrative reviews27 note many guidelines rely on SRs to bolster the evidence base of the guideline.
Meta-analysis. Meta-analysis refers to statistical techniques for combining results from distinct clinical studies. In a sense, a meta-analysis is conducting research on existing research. Meta-analysis is usually conducted to answer a clinical question and should employ research studies that closely match the topic of interest. Gene Glass, who coined the term, called meta-analysis the analysis of analyses.28
The purpose of a meta-analysis is to gather and combine information from research studies to gain higher statistical power for some common metric (a single numerical value of overall treatment effect) across the studies.29 Many wound care RCTs sample too few patients to tell if clinically important outcome differences are statistically significant. If P is the probability of incorrectly rejecting the null hypothesis (ie, the hypothesis that the treatment tested has no effect), the recognized acceptable P value for statistical significance is P <0.05; that is, differences this large would be found by chance alone only 5% of the time (1 time in 20 replications of the RCT). Combining small studies all measuring the same outcome in the same way adds to the statistical power: the probability the test correctly rejects the null hypothesis when it is false. This allows reviewers to report significant trends not observable in any one of the individual RCTs. An essential assumption in meta-analysis is that all trials measure a common treatment effect and any observed differences between the trials are due primarily to chance. This common treatment effect can be estimated, for example, as a weighted average of the treatment effects in the individual trials.30
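The weighted-average estimate described above can be sketched numerically. The following is a minimal Python illustration of fixed-effect (inverse-variance) pooling; the log odds ratios and standard errors are hypothetical values invented for illustration, not data from any real wound care trial.

```python
import math

# Hypothetical log odds ratios and standard errors from 3 small RCTs
# (illustrative numbers only, not taken from any real wound care trial)
effects = [0.42, 0.55, 0.30]   # log(OR); positive values favor the intervention
ses = [0.35, 0.40, 0.30]       # standard error of each trial's log(OR)

# Fixed-effect (inverse-variance) pooling: each trial is weighted by
# 1/SE^2, so larger, more precise trials contribute more to the average.
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% CI for the pooled effect, narrower than any single trial's CI,
# which is the gain in precision from combining studies
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
print(f"pooled log(OR) = {pooled:.3f}, 95% CI ({ci_low:.3f}, {ci_high:.3f})")
```

In this invented example, none of the 3 trials is individually significant at P <0.05 (each trial's own 95% CI crosses 0), yet the pooled 95% CI excludes 0, illustrating how combining studies can reveal a significant trend not observable in any one RCT.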
Critical to meta-analysis is the inclusion of all relevant primary studies (significant/not significant, published/not published). Narrative descriptions31 note unpublished studies, doctoral dissertations, and, if possible, data from primary researchers should be included to enhance the quality of the meta-analysis; therefore, searching for all available studies can be quite complex.22
Notably, investigators have to make critical choices in conducting a meta-analysis. Studies have to be selected on a set of well-defined objective criteria. Incomplete data issues have to be addressed, data have to be analyzed appropriately, and correct data entered (eg, no typographical errors). Researchers must choose to address publication bias (or not), including the impact of retracted studies.
Because of the ability of SRs and meta-analyses to support best patient management decisions (if done well) or to deleteriously erode quality care (if done poorly), standards for the conduct and reporting of SRs and meta-analyses have been published.32 This statement, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA),33 and the Institute of Medicine34 recommended standards for developing SRs provide guidance for research authors and clinician users. Synthesized meta-analyses are being conducted to allow other researchers to identify trends and weaknesses inherent in published studies when meta-analysis is the study focus.35
Strengths and Weaknesses of SR and Meta-analysis
Narrative literature reviews3,14,32,36,37 support that the strengths of SRs and meta-analyses are substantive. Combining studies in SRs and meta-analyses increases sample size and gives more precise estimates of effect size (increasing power and precision); clinicians can evaluate a body of evidence pertinent to a clinical question in a facilitated way because information is encapsulated into a single result36 that includes all eligible studies related to the clearly defined topic. Statistics such as Cochran’s Q statistic and the Inconsistency Index (I2) can be applied to assess clinical variation or heterogeneity; users must understand how to interpret them, but these statistics help with understanding the big picture.14
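Cochran’s Q and I2 can be computed directly from the per-trial effects and inverse-variance weights. A minimal sketch, again using invented effect estimates: Q sums the weighted squared deviations of each trial from the pooled effect, and I2 expresses what share of that variation exceeds what chance alone would predict.

```python
# Cochran's Q and the Inconsistency Index (I^2) from per-trial effects.
# Hypothetical effect estimates (eg, log odds ratios) and standard errors;
# the numbers are illustrative only.
effects = [0.10, 0.80, 0.45]
ses = [0.20, 0.25, 0.22]

weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Q: weighted sum of squared deviations of each trial from the pooled effect
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))

# I^2: percentage of variation across trials beyond what chance predicts;
# degrees of freedom = number of trials - 1
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

Here roughly 59% of the observed variation would be attributed to heterogeneity rather than chance; commonly cited rough benchmarks treat I2 values near 25%, 50%, and 75% as low, moderate, and high heterogeneity, respectively.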
When done well, SRs and meta-analysis can result in conclusions more informative than any individual trials they combine30; meta-analysis should act to decrease bias because all included trials will likely not be affected equally by a source of bias. Meta-analysis can include studies (trials) that do and do not feature statistically significant results; the latter may have not been published because of a lack of statistical power or small sample size.13 As such, using both kinds of studies increases external validity (real world applicability).
Because meta-analysis uses objective statistical techniques, researcher bias should be much less than what would be present in a traditional narrative review.38 In summary, a well-designed and well-conducted meta-analysis can provide valuable information for clinicians, researchers, and policy-makers.1
Narrative reviews29,30,39-43 also note SRs and meta-analysis have potential flaws or criticisms. The problematic issues raised about SRs and meta-analyses are serious given their power to positively inform or seriously degrade good clinical treatment. The reality is even good SRs with meta-analysis cannot summarize a research field. By definition, the literature suggests meta-analysis should synthesize effect sizes and not report just a summary effect. A reported summary effect that ignores heterogeneity is “missing the point of the synthesis.”39
Another issue is the file drawer effect. Because unpublished studies may be missed in a meta-analysis, the use of primarily published studies may overestimate true effect size. Meta-analysis has the goal of accurately synthesizing all existing data, so experts suggest attempts to obtain unpublished trials have to be described. Otherwise, publication bias may occur.44-46
Publication bias is defined as a bias against publishing negative findings (eg, those that do not achieve statistical significance). Several ways exist to identify publication bias in SRs. The most common approach is the funnel plot, which visually identifies possible bias. Statistical tests can be used, including Rosenthal’s Fail-Safe-N and the “trim and fill” analysis.45,46
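Rosenthal’s Fail-Safe N has a simple closed form: it estimates how many unpublished null-result studies sitting in “file drawers” would be needed to render a significant combined result nonsignificant. A sketch with hypothetical one-tailed Z scores from 5 invented published trials:

```python
# Rosenthal's Fail-Safe N: how many unpublished null-result studies would
# be needed to overturn a significant combined result.
# Hypothetical one-tailed Z scores from k = 5 trials (illustrative only).
z_scores = [2.1, 1.8, 2.5, 1.9, 2.3]
k = len(z_scores)

# Critical z for one-tailed alpha = .05 is 1.645; 1.645^2 is about 2.706
z_crit_sq = 1.645 ** 2
fail_safe_n = (sum(z_scores) ** 2) / z_crit_sq - k
print(f"Fail-Safe N is approximately {fail_safe_n:.1f}")
```

A commonly cited tolerance rule deems a result robust when Fail-Safe N exceeds 5k + 10 (35 for these 5 hypothetical studies); the value here clears that threshold only marginally, which is exactly the kind of nuance a funnel plot or “trim and fill” analysis would then probe further.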
The “apples and oranges” problem (also called clinical heterogeneity) is another potential flaw. If dissimilar clinical trials are combined in the SR and meta-analysis, the ultimate meaning of results is potentially threatened. Clinical trials in a meta-analysis will differ somewhat in their characteristics. The challenge for researchers is to decide just how similar they need to be. Data from chronic and parasitic wounds may appear statistically homogeneous, but it may not be clinically relevant to combine such data to inform clinical decisions about one of these wound types. If chosen comparisons are not logical, the validity of the results is eroded. The literature supports that sources of heterogeneity have to be investigated and identified.47
Quality of clinical trials used in SR with meta-analysis matters — the flaw is called garbage in-garbage out. Inclusion criteria of selection must be clear and applied fastidiously. Close attention to bias and individual study quality is paramount. One approach is to set a quality threshold (ie, explicit selection criteria established in advance that is logically and systematically applied).2 A meta-analysis of poor quality studies cannot yield good results in most circumstances.40
Another potential flaw is important studies may be left out of the SR. Good researcher judgment, transparently applied, must be used to make sure all studies meeting the inclusion criteria to address the SR objective are included and studies subjected to combined analysis are similar enough to yield interpretable results.
Potential exists for meta-analyses to disagree with large-scale randomized trials. Clinician users need to beware of discounting the results of the meta-analysis or individual clinical trials. This is not a scenario of the large clinical trial or meta-analysis being right or wrong. Upon closer review, it is likely something inherent in the 2 publications, such as patient risk factors or co-interventions, differ significantly.39
Another problem or flaw with SR and meta-analysis is poorly performed work. Meta-analysis is very complex; researcher mistakes are inevitable. Reviewers and clinician users need to consider the impact of these errors on the validity of the SR and meta-analysis. Research suggests the methodological qualities of SRs vary considerably.42 A large potential for bias exists in the selection and interpretation of data in retrospective research. Narrative reviews note meta-analysis can be influenced by biases inherent in data-derived analyses.40
Another problem is variation of standards of treatment over time. “Usual care” controls add 2 sources of error variability to a SR. First, the actual interventions applied in the study are omitted from the SR. Second, interventions representing or concomitant to “usual care” change over time and across settings, with no consistent meaning to readers. The influence of concomitant treatments in standard treatment protocols may influence meta-analysis interpretation.40
SRs also can differ substantially from real world practice. For example, clinical practice guidelines are usually consistent with clinical practices in disciplines and are an advancement of best practices. SRs potentially reflect the biases and philosophies of the authors who control the choice of included studies,41 which can lead to stark differences in practical usage and clinical relevance.
Due to limited availability of relevant RCTs, the recognized gold standard for reducing bias in subject treatment assignment, researchers may resort to other experimental studies that are not RCTs. Researchers must acknowledge biases inherent in the chosen designs and make definitive recommendations about practice only with great caution.22 Because publication bias occurs more frequently in small trials, meta-analyses based only on small studies cannot be trusted.35
Meta-analyses on the same topic even can have discrepant results. This discrepancy is likely a multifactorial problem and can be due to differences in inclusion/exclusion criteria for study design, outcomes, populations, interventions, settings, definitions, and other factors.48
To avoid some of these concerns, researchers conducting meta-analysis should adhere to a few basic requirements. Given the need for a conclusion about a treatment, the minimum number of studies needed is 2.37 In addition, the usefulness of small meta-analyses (<200 to 300 events) to guide practice is very limited.40 Also, because of the limitations of meta-analyses and when adequately powered RCTs exist on a topic, the meta-analysis should not be given preferential treatment.49
In summary, with all the limitations or flaws inherent in SRs and meta-analysis noted in the literature, it is important to remember to “not forget to critique the critique.”50
Critically Evaluating SRs
Although utilization of SRs by busy clinicians seems challenging, key strategies for critical assessment, described in the literature, can help with evaluation and critique.13,16,22,26,32,36 The clinician needs to ask 8 yes-no questions:
- Objective(s) of the SR: Do the included studies meet the SR objective(s)?
- Inclusion/exclusion criteria: Do the studies in the SR match inclusion/exclusion intervention criteria?
- Quality of studies: Is there critical appraisal of included studies for “quality indicators” such as adequate randomization, allocation concealment, dropout rates, reporting accuracy, blinding, and appropriate statistical analyses?32
- Data extraction/synthesis: Were data from the included studies extracted correctly and synthesized appropriately? Are inconsistencies and problems explained?
- Homogeneity: Do the foci of interest (eg, types of wounds) pass the similarity test (comparing apples to apples)?
- Accuracy of results: Has a clear and accurate summary of each included RCT been provided in the Results text and analyses?
- Interpretation: Are descriptions or implications of the SR supported by the data provided? Are limitations acknowledged?
- Consistency: Do the Abstract and Conclusions of the SR reflect the SR’s significant RCTs results appropriately?
Although this checklist does not guarantee appropriate differentiation of good quality versus erroneous SRs with meta-analyses, it does help clinicians analyze them with clarity.
One of the challenges for clinician users of SRs with meta-analysis is to understand why certain statistics are reported. In addition, clinicians must be able to interpret graphical or visual displays of results. The choice of effect size statistic depends on the nature of the variable being targeted (eg, effect of a wound intervention). For dichotomous (yes/no) variables (developed a disease; did not develop a disease), the literature suggests the most commonly used effect size estimate is the odds ratio (OR). Risk ratio or relative risk also can be used, but OR usually is utilized. For continuous variables (eg, decrease in wound size or pain intensity scales), the most commonly utilized presentation is the standardized mean difference (SMD): the mean difference between groups (trials) divided by the pooled standard deviation (SD) of the groups, also called Cohen’s d.13 Mean difference also can be used.
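The two effect size computations just described can be sketched with invented trial numbers: a hypothetical 2x2 healing table for the OR, and hypothetical means and SDs for the SMD. None of the values below come from a real study.

```python
import math

# Dichotomous outcome (healed vs not healed): a made-up 2x2 table
healed_tx, not_healed_tx = 30, 20   # treatment arm
healed_ct, not_healed_ct = 18, 32   # control arm

odds_tx = healed_tx / not_healed_tx          # odds of healing, treatment
odds_ct = healed_ct / not_healed_ct          # odds of healing, control
odds_ratio = odds_tx / odds_ct               # OR > 1 favors treatment

# Continuous outcome (eg, reduction in wound area, cm^2): made-up values
mean_tx, sd_tx, n_tx = 4.2, 1.1, 50
mean_ct, sd_ct, n_ct = 3.5, 1.3, 50

# SMD (Cohen's d): mean difference divided by the pooled SD of the groups
pooled_sd = math.sqrt(((n_tx - 1) * sd_tx ** 2 + (n_ct - 1) * sd_ct ** 2)
                      / (n_tx + n_ct - 2))
smd = (mean_tx - mean_ct) / pooled_sd
print(f"OR = {odds_ratio:.2f}, SMD = {smd:.2f}")
```

Because the SMD divides by the pooled SD, it is unitless, which is what allows trials that measured the same continuous outcome on different scales to be combined.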
The visual (graphical) display of the meta-analysis is called a forest plot. A forest plot offers valuable information: the number of RCTs and subjects reporting a specific outcome favoring the subject intervention and whether each RCT result was statistically significant at P <0.05, as well as overall significance of the result and potential lack of data homogeneity. It is beyond the scope of this article to discuss interpretation, but excellent, easy-to-understand resources are available for clinicians.13
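The quantities a forest plot displays can be reproduced from per-trial estimates. The sketch below, using invented log odds ratios and standard errors, computes each trial’s 95% CI and whether it crosses the null reference line (OR = 1), which is what the plot shows as squares and whiskers against a vertical line.

```python
import math

# Per-trial point estimates and SEs (hypothetical values only), ie,
# the numbers a forest plot draws as squares and 95% CI whiskers
trials = {
    "Trial A": (0.60, 0.25),
    "Trial B": (0.35, 0.40),
    "Trial C": (0.50, 0.20),
}

results = {}
for name, (log_or, se) in trials.items():
    lo, hi = log_or - 1.96 * se, log_or + 1.96 * se
    # A trial is significant at P < .05 when its CI excludes the null
    # (log OR = 0, ie, OR = 1, the plot's vertical reference line)
    results[name] = (math.exp(lo), math.exp(hi), lo > 0 or hi < 0)

for name, (or_lo, or_hi, sig) in results.items():
    print(f"{name}: OR 95% CI ({or_lo:.2f}, {or_hi:.2f}) significant={sig}")
```

In this invented set, Trials A and C are individually significant while Trial B is not; a full forest plot would add a pooled diamond summarizing all three, along with the heterogeneity statistics described earlier.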
Although utilization of SRs and meta-analysis may seem daunting, the literature also contains articles targeting critical appraisal and clinical application. The reader is encouraged to read more about these processes. The best way to learn to appraise and apply SR with meta-analysis is to do it.16,29,36,43
SRs and meta-analyses are valuable processes that, without additional resources, permit exploration of treatment benefits from previously completed studies. Clinicians reading their results should have a clear understanding of their strengths and weaknesses, as illustrated in the following sections of this article. Knowledge works only if used. The following provides examples of getting optimal results from SRs even when they have issues.
The Role of SRs in Informing Clinical Practice
Taking a critical thinking approach to reading relevant SRs can enhance their usefulness in improving wound care outcomes. The literature51,52 provides guidance on using clinical expertise with a healthy dose of critical thinking gleaned from the authors’ experience as Cochrane23 or Joanna Briggs reviewers.24 Below are some ways to use the critical thinking checklist to improve benefits from a SR or meta-analysis to inform clinical practice decisions.
Look past the abstract. Ideally, best available RCT evidence supporting effects of the SR’s topic intervention on its subject outcome for each clinical indication addressed by the SR is detailed in the Results and Analysis sections. This is where clinical expertise is useful, sifting out results relevant to a challenging patient or wound. To get the most value from a SR, wound care providers need to find clinically relevant RCT content that meets the needs of current patients and wounds. Rather than accepting a SR abstract on faith, diligent wound care professionals will stay focused on finding RCTs that match their patient’s characteristics, capabilities, and wound care needs to glean relevant RCT evidence that may help improve wound and patient outcomes. If a SR abstract omits significant results from RCTs most relevant to the patient at hand, an astute decision maker will likely find them in its Results or Analyses.
Do not wait for perfection. One should not be discouraged from using evidence in a SR that concludes there is insufficient evidence to support an intervention. Many SRs teach clinicians how to aim for perfect evidence to inform clinical decisions. Patients and practitioners who cannot wait for perfect evidence can still find valuable evidence in such SRs to inform choices of care. Authors51,52 suggest the scientific method will sort out SR flaws and improve evidence-based guidelines of care to optimize results. Today’s patient needs the best possible help now, informed with the best evidence available. One or more relevant RCTs in the Results section of a SR may better inform clinical decisions than 1,000 less-informed opinions or selected cases. Wise professionals can use the best quality of evidence found in SRs to inform clinical decisions instead of giving up on SRs that conclude evidence is insufficient to inform their decisions.
Watch for red flags. Common flaws, discrepancies, or errors such as those described can warn observant readers a SR may not be robust. Inaccurate or inconsistent elements of a SR diminish its credibility. Even if SR conclusions do not reflect its results, one still can check accuracy, then use its relevant, valid RCT results to optimize wound care outcomes for relevant patients.