
Pressure Ulcer Prevention Algorithm Content Validation: A Mixed-methods, Quantitative Study

Empirical Studies


Index: Ostomy Wound Manage. 2015;61(4):48–57


Translating pressure ulcer prevention (PUP) evidence-based recommendations into practice remains challenging for a variety of reasons, including the perceived quality, validity, and usability of the research or the guideline itself. Following the development and face validation testing of an evidence-based PUP algorithm, additional stakeholder input and testing were needed. Using convenience sampling methods, wound care experts attending a national wound care conference and a regional wound ostomy continence nursing (WOCN) conference and/or graduates of a WOCN program were invited to participate in an Institutional Review Board-approved, mixed-methods quantitative survey with qualitative components to examine algorithm content validity. After participants provided written informed consent, demographic variables were collected and participants were asked to comment on and rate the relevance and appropriateness of each of the 26 algorithm decision points/steps using standard content validation study procedures. All responses were anonymous. Descriptive summary statistics, mean relevance/appropriateness scores, and the content validity index (CVI) were calculated. Qualitative comments were transcribed and thematically analyzed. Of the 553 wound care experts invited, 79 (average age 52.9 years, SD 10.1; range 23–73) consented to participate and completed the study (a response rate of 14%). Most (67, 85%) were female, registered (49, 62%) or advanced practice (12, 15%) nurses, and had >10 years of health care experience (77, 92%). Other health disciplines included medical doctors, physical therapists, nurse practitioners, and certified nurse specialists. Almost all had received formal wound care education (75, 95%). On a Likert-type scale of 1 (not relevant/appropriate) to 4 (very relevant and appropriate), the average score for the entire algorithm/all decision points (N = 1,912) was 3.72 with an overall CVI of 0.94 (out of 1).
The only decision point/step recommendation with a CVI of ≤0.70 was the recommendation to provide medical-grade sheepskin for patients at high risk for friction/shear. Many positive comments and substantive suggestions for minor modifications, including color, flow, and algorithm orientation, were received. The high overall and individual item rating scores and CVI further support the validity and appropriateness of the PUP algorithm with the addition of the minor modifications. The generic recommendations facilitate individualization, and future research should focus on construct validation testing.



Efforts aimed at reducing barriers to implementing evidence-based protocols of patient care have continued unabated since the Institute of Medicine’s1 (IOM) document Crossing the Quality Chasm highlighted the imperative that patients receive care based on the best available scientific knowledge and that care not vary illogically from clinician to clinician or from place to place. Since that time, many review and observational studies have shown implementing pressure ulcer prevention (PUP) protocols of care, including standardization of pressure ulcer (PU)-specific interventions and documentation, reduces their prevalence in acute care,2,3 long-term care,3-5 and home care populations.6 However, for a wide variety of reasons, knowledge translation of PU best practice recommendations into practice remains a challenge.2 The perceived quality and usability of the research or the guideline itself is an important barrier to its implementation. Processes used for the development of guidelines have varied considerably. Most are developed by groups or panels and not evaluated or tested by stakeholders, resulting in sometimes conflicting recommendations.7 For example, in a review8 of 5 wound care guidelines developed by and/or for physicians, it was observed that many are difficult for clinicians not versed in guideline appraisal to evaluate and that ratings for many development aspects were very low. End-user concerns about the relevance or validity of guidelines can be addressed when guidelines are developed using a process more closely aligned with recent IOM guidelines, soliciting input from relevant stakeholders through a rigorous, established process toward establishing validity.7 Usability is facilitated when large amounts of information can be captured in a step-by-step process or algorithm.9

  Recently, a PUP algorithm was developed based on a systematic literature review and face validation by wound care experts.10 Following an on-admission risk assessment using a valid and reliable instrument, the end user is guided toward modifiable risk factors included in the Braden Scale score and interventions designed to address them (see Figure 1).

  Using a systematic review of published evidence from 2007 to 2013 and Strength of Recommendation Taxonomy (SORT),11 the authors developed a 1-page algorithm with 26 distinct steps/decision points. Each step/recommendation was developed with study quality ratings for all identified publications and resultant strength of recommendation. As part of the external review and validation process, a face validation was conducted among 12 wound care experts. In a process reported by Lynn12 and Waltz and Bausell,13 experts rated content for validity using a 4-point Likert scale (1 = not relevant/appropriate, 4 = very relevant/appropriate). The algorithm’s overall mean score was 3.6 (SD 0.8) with a content validity index (CVI) of 0.89 (out of a possible 1), indicating strong content validity. Qualitative comments were analyzed, and minor revisions were made to the algorithm. However, additional stakeholder input and testing were needed to determine the content validity of this algorithm. The purpose of this prospective, descriptive study was to examine the content validity of the algorithm using a larger sample size of interdisciplinary wound care experts.


Design. A mixed-methods, quantitative survey design with qualitative components was used to obtain content validation data for the PUP algorithm. Institutional Review Board (IRB) approval was obtained from Holy Family University (Philadelphia, PA).

  Sample and setting. Wound care providers were invited to participate in the study using convenience sampling methods. Sample inclusion criteria were relatively broad to encourage participation by a wide range of providers. Criteria included: 1) licensed health care professional or clinical researcher with wound care background; 2) able to speak, read, and write in English; and 3) willing to review the algorithm and provide input, abiding by a confidentiality expectation. Wound care background meant substantive (>5 years) wound care experience and/or formal wound care education, preferably with board certification in wound care. Participants did not receive financial compensation. Health care clinicians attending a national interdisciplinary conference, a regional conference for wound, ostomy, and continence nurses (WOCN), and/or graduates of a WOCN program were invited to participate. Data collection occurred over a period of 6 months. A total of 553 health care providers were invited to participate. Conference attendees were invited via email before the conference or with a posting or personal invitation during the conferences. Graduates of a WOCN program received an emailed invitation. If they agreed to participate, they were emailed a copy of the consent form; the algorithm and study instrument were sent via regular mail. The consent form could be returned via email to ensure confidentiality of the actual survey responses. Volunteers were asked to send the completed instruments via regular mail to the researchers within 1 week, but the mail was monitored for a period of 6 months.

  Ethical considerations. Following IRB approval procedures, study volunteers were asked to read a consent form and provide written informed consent. Signed consent forms were collected and the algorithm surveys were distributed or sent by mail. Participants were asked to return consent forms within 1 week. No participant identification or linkage between consent and data form was possible. All consent forms and survey responses were collected and stored in a locked file cabinet of the first author.

  Instrumentation. The data collection survey consisted of a paper-and-pencil instrument comprising an 18-item demographic data form, a content validation questionnaire containing 26 statements matched to each of the 26 decision points/steps of the PUP algorithm, and a final segment with 2 open-ended questions asking for overall comments about the algorithm and the research process itself. The content validation survey asked participants to review the entire algorithm, then read the statements related to each of the decision points/steps and rate their level of agreement with the relevance (appropriateness) of the item. A 4-point rating scale was used as per Lynn12 and Waltz and Bausell13: 4 = very relevant and appropriate; 3 = relevant but needs minor alteration; 2 = unable to assess relevance without revision; 1 = not relevant/appropriate. For each statement, participants were asked to add written comments noting any omission(s), suggest changes to improve clarity/succinctness, present an alternative, and provide literature references if possible.

  Data collection procedures. At the national meeting, participants were directed by signage to an adjacent room at a specified time. Following attention to ethical considerations and a brief oral introduction about the history of the algorithm development and purpose of the study, participants signed and returned the consent forms and received the algorithm and the survey response form. Participants reviewed the algorithm then provided validation ratings and narrative comments. All participants completed and returned the survey before leaving the room. Attendees at the WOC regional nursing conference and graduates of the WOC nursing program received the same introduction in writing in addition to the consent form, algorithm, and data collection instrument and were asked to complete and return them during the meeting or via regular mail. The data collection process took approximately 45 minutes. Time to complete the study for those who mailed the forms could not be verified, but no comments were added about excessive time.

  Data analysis. All variables were coded and entered into SPSS® Version 19.0 (IBM, New York, NY) for analysis. Descriptive summary statistics were calculated for all demographic variables. Mean scores and the CVI were calculated for each of the 26 individual algorithm components and the entire algorithm. CVI was calculated by grouping very relevant/relevant (ratings 3 and 4) and not relevant/unable to assess relevance (ratings 1 and 2). The proportion of items rated 3 and 4 was used to calculate the CVI; validity was indicated by a score of >0.70 (scale 0 to 1.0).14,15
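  The mean score and CVI calculations described above can be sketched as follows. The ratings and item names below are hypothetical, illustrative values only, not the study's raw data; the 0.70 validity threshold and the grouping of ratings 3–4 as relevant follow the procedure cited in the text.

```python
def item_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4
    (relevant/very relevant) on the 4-point scale."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

def mean_score(ratings):
    """Mean relevance/appropriateness score for an item."""
    return sum(ratings) / len(ratings)

# Hypothetical ratings for two decision points from five experts
item_ratings = {
    "admission_risk_assessment": [4, 4, 3, 4, 4],
    "medical_grade_sheepskin":   [2, 3, 1, 2, 4],
}

for item, ratings in item_ratings.items():
    cvi = item_cvi(ratings)
    verdict = "valid" if cvi > 0.70 else "needs review"
    print(f"{item}: mean={mean_score(ratings):.2f}, CVI={cvi:.2f} ({verdict})")
```

The overall CVI reported in the study is the same proportion computed across all item ratings pooled together.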

  Qualitative comments regarding individual decision statements/steps and overall processes were transcribed and thematically analyzed using qualitative data reduction techniques. Qualitative comments about individual decision steps were substantive, and a frequency count method was used to identify the most commonly occurring themes.
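  As a minimal illustration of the frequency count step (the theme labels below are invented for the example, not the study's actual codes):

```python
from collections import Counter

# Hypothetical coded comment themes, one entry per participant comment
coded_comments = [
    "remove 'usually' from 24-hour statement",
    "sheepskin unfamiliar in US practice",
    "sheepskin unfamiliar in US practice",
    "add dietitian consult",
    "sheepskin unfamiliar in US practice",
    "add dietitian consult",
]

theme_counts = Counter(coded_comments)

# Report themes in descending order of frequency
for theme, count in theme_counts.most_common():
    print(f"{count}x {theme}")
```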


Demographics. Of the 553 providers invited, 79 consented to participate and completed the study (a response rate of 14%). All common wound care-associated disciplines were represented.

  The majority (67, 85%) of participants were female, average age 52.9 years (SD 10.1; range 23–73), and practicing in the United States (95%). Most were registered nurses (49, 62%) or advanced practice nurses (12, 15%). Other health disciplines included medical doctors, physical therapists, nurse practitioners, and certified nurse specialists (see Table 1). The majority of participants spoke English only (61, 77%) and had received their health care education in the US (71, 90%); 70 (89%) had a baccalaureate degree or above. Almost all participants had received formal wound care education (75, 95%); >75% of nurses were board-certified in wound care. Most participants were highly experienced; 77 (92%) had 10 or more years of health care experience. Geographically, most participants were from the northeastern US (53, 68%), practicing in both urban (30, 38%) and suburban (46, 58%) settings. Only 13 (17%) respondents practiced in a rural area. Fifty-eight (nearly 75%) encountered more than 10 patients weekly who had or were at risk for PUs. The top 3 types of wounds most commonly managed were PUs (67, 89%), lower extremity ulcers (61, 77%), and diabetic foot ulcers (42, 53%) (see Table 1).

  Quantitative analysis. The calculated average item relevance/appropriateness score for the entire algorithm/all decision points (N = 1,912) was 3.72, with an overall CVI of 0.94 (out of 1). The only decision point/step recommendation with a CVI of ≤0.70 was the recommendation to provide medical-grade sheepskin for patients with activity/mobility limitations who are at high risk for friction/shear (see Table 2). Otherwise, quantitative data analysis supports that the algorithm components and inherent decision processes are appropriate.

  Qualitative analysis. Respondents’ comments about the PUP algorithm were both positive and negative (see Table 3 and Table 4). Qualitative analysis of overall comments generated themes (see Table 3) regarding algorithm strengths such as simplicity, good use of color, necessity, and flexibility. Other potential uses for patient education, quality improvement, possibility of individualization, and aligning with other guidelines or algorithms were identified. Negative themes included complexity, confusion with color use, omission of components, support surface clarification, education issues, and problems with specifics such as directions.

  All participants provided written comments, necessitating a more quantitative approach to the data analysis; a decision was made to generate frequency counts by item to highlight the most pressing themes (see Table 4 for items with a frequency count >5). Four items received the largest number of comments. For admission assessment and documentation, 12 study participants indicated the word usually should be removed from the “within 24 hours” statement. The timing and content of staff, caregiver, and patient education also raised concerns. Twenty-nine participants questioned the recommendation regarding the use of medical-grade sheepskin. Finally, 15 participants commented that the need to obtain a dietitian consult should be included for patients with a less-than-optimal nutritional status.


The history of the development of the PUP algorithm via systematic review and face validation was described in an earlier publication.10 The current study provided content validation data and an overview of strengths and areas of challenge. The rating scores (average score 3.72 out of 4) and CVI (0.94 out of 1) of the PUP algorithm for use in adults were strong, suggesting the components were appropriate to the purpose of the instrument. Only 1 practice recommendation, the recommendation to use medical-grade sheepskin for patients with activity/mobility limitations and/or high risk for friction or shear, received a low score. This recommendation also received a low appropriateness score in the earlier face validation study.10 Ironically, this was one of a handful of recommendations supported by several high-quality, mostly Australian, studies and carrying an overall A strength of recommendation. Study participants were concerned about this recommendation because most practitioners in the US are not familiar with medical-grade sheepskin and may have used synthetic sheepskins only. Because the literature-based quality of the research underlying this recommendation is good (A-strength level of evidence) and the algorithm can be used in other countries, the recommendation was not removed but included in smaller print and with an “if available” footnote.

  By design, algorithms are ideal for specifying appropriate management strategies, communicating complex series of conditional statements, and helping transfer research into clinical practice, but they are not exhaustive.9 The qualitative comments from this and other content validation studies16 suggest a constant tension between clinicians’ need for simple, easy-to-follow directions and their desire for more detail and guidance. Study participants were pleased with the easy-to-follow steps and were interested in algorithm pocket guides, yet for a number of action steps they would have liked more details.

  Following a careful review of all quantitative and qualitative results, several minor algorithm modifications were made. Specifically, concerns about information flow, design, and colors centered mainly on the admission assessment of current or recent history of limited mobility, which was preceded by “Not at risk and intact skin.” The latter was removed because it did not provide any actionable information and caused confusion (see Figure 1). This also facilitated a change in the color of the decision step/point, making it easier to identify as part of an admission assessment decision point.

  Nineteen participants indicated an admission assessment should be conducted within 24 hours and not, as originally stated, “usually within 24 hours.” The original algorithm version did not have any time designation because time recommendations in the literature and face validation study participant opinions varied; hence, the “usually within 24 hours” recommendation.10 However, because participants in the current study were less ambiguous about this statement and a risk assessment “at admission” has been shown to reduce the incidence of PUs17 and is now commonly recommended in all health care facilities,18,19 the word usually was removed during the final algorithm revision.

  Concerns about the timing of education were addressed by moving that step to the top left corner as a visual reminder that education about risk and skin assessment should precede all processes. Finally, the colors were standardized and box shapes edited to match current standards9,20 (see Figure 1).

  With respect to the need for more details and directions (eg, type of moisturizer or high-quality foam or need to obtain a dietary consult), it is important to note the algorithm is generic. Although the information contained within the algorithm cannot be changed without compromising validity, facilities interested in incorporating more specific evidence-based recommendations are encouraged to review the published evidence upon which the recommendations are based for further refinement to suit their protocols of care.10

  After completing a case study using ethnographic methods to examine decision-making in nursing, Rycroft-Malone et al21 reported tension exists between the standardization demanded of evidence-based practice and individualizing decision-making. The authors suggested the use of protocols and guidelines may be dependent on incorporating nurses’ decision-making processes into the context of the work environment. Thus, giving nursing staff the opportunity to individualize specific evidence-based intervention recommendations (such as types of high-density foam and protective barrier creams) may facilitate adoption of this algorithm and help standardize care.

  Study participant verbal comments were generally positive, especially regarding the focus on and organization of modifiable risk factors. These comments echo the results of a recent consensus study22 to construct a theoretical model for identifying the etiological factors of PUs. The authors concluded the local approach to risk reduction should be determined by production mechanism (eg, those that address pressure, friction, shear, and moisture) and modifiable etiological factors (less-than-optimal nutritional status).

  Because the decision points were based on best available evidence10 and the content validation ratings by stakeholders supported their appropriateness, the algorithm is, to the authors’ knowledge, the first algorithm targeting PUP in adults that is strongly evidence-based. Only the Association for the Advancement of Wound Care23 PU clinical practice guideline, which includes PUP recommendations, has been formally content-validated. In addition, qualitative comments were generally supportive and positive. The few negative comments were used to tweak the structure of the algorithm to support its best usage.


The PUP algorithm was designed to focus on adults and cannot be recommended for safe use in pediatric and/or neonatal populations. Relatedly, device-related PUs, a major issue in pediatric skin care, are not addressed. Suspected deep tissue injury (DTI) also was not incorporated because the associated evidence base is limited and, at this time, a DTI is considered a PU. Although the previously reported evidence and face validation combined with the current content validation study results are important steps in building evidentiary support for the algorithm’s use, future research to test its utility and construct validity is needed.

  Another potential study limitation is the low invitation response rate (14%) and potential nonresponse bias. Results of a meta-analysis conducted by Shih and Fan24 showed reported survey response rates to regular and email invitations range from 7% to 89%, averaging around 40%, and email reminders are less effective than regular mail reminders. Results of randomized, controlled survey research25 show physician survey response rates are lower than nurse response rates. The primary recruitment method in the current study was via email and email reminders, but other study design factors preclude response rate comparisons. For the national meeting, in addition to being interested in participating, potential participants had to be available during the data collection times. Not all persons registered — and invited — were present for the entire conference. Also, data collection occurred during regular conference session hours. As a result, potential study participants would have had to forego attending an educational session. Similarly, at the regional meeting, many attendees expressed an interest, but time was limited and the survey may have been lost among other meeting-related paperwork. More consent forms (16) were collected at the meeting than completed surveys obtained via mail. The optimal method for collecting this type of study data requires further research.


 A PUP algorithm was developed based on a rigorous systematic review process and subsequently face validated.10 The results of the current study of content validation involving 79 wound care experts were similar to face validation results. All except 1 of the 26 steps/items had a high CVI (average 0.94), supporting that the PUP algorithm is valid and appropriate with the addition of the minor modifications. To the authors’ knowledge, this is the first PUP algorithm based on systematic review, face validation, and formal content validation. Future research should focus on construct validation.


The authors gratefully acknowledge the willingness of all study participants to share their expertise, experience, and time; the study support and guidance provided by Ana Maria Catanzaro, RN, MSN, MA, MHSC, PhD; and the data entry assistance of Danielle Devine, MSN, RN.


This study was supported by a research grant from Convatec Inc (Skillman, NJ) to Holy Family University School of Nursing and Allied Health Professions, Philadelphia, PA.


Ms. van Rijswijk is a Mentor, W. Cary Edwards School of Nursing, Thomas Edison State College, Trenton, NJ; a doctoral student, Department of Nursing, West Chester University, West Chester, PA; and Clinical Editor, Ostomy Wound Management. Dr. Beitz is a Professor of Nursing, School of Nursing, Rutgers University-Camden, Camden, NJ. Please address correspondence to: Lia van Rijswijk, MSN, RN, CWCN, 210 S. Chancellor Street, Newtown, PA 18940; email: