A New In vivo Test Method to Compare Wound Dressing Fluid Handling Characteristics and Wear Time
A refined in vivo artificial wound model utilizing artificial wound fluid (AWF) was developed to overcome important limitations of in vitro models, which do not capture adhesive-skin interactions, movement, pressure, shear, and varying environmental conditions. Using this model, a study with primary endpoints of wear time and fluid handling capacity was conducted to compare two foam dressings: a high performance foam (HPF) dressing and an adhesive foam (AAF) dressing.
A 10-cm2 artificial wound bed, created using a nonstick absorbent pad, was applied to the lower back of 24 healthy volunteers and the tip of a 23-gauge catheter was inserted into the pad to administer AWF. The pad and catheter were secured in place with paper tape and covered with the test dressings. This model, with 12 hourly 1.0-mL AWF infusions per day, was used to simulate highly exudating wounds. The HPF dressing absorbed 75% more fluid before failure and remained intact for a median of 6.1 days compared to 3.5 days for the AAF dressing (P <0.001, Cox proportional hazard model). Comparisons between dressing outcomes using this model and previously published in vitro results suggest the model may be valid and reliable. Studies to ascertain the ability of this model to predict clinical dressing performance and research to compare other wound outcomes that affect dressing change frequency and cost (eg, healing) are needed.
Potential Conflicts of Interest: Mr. Lutz is a former employee of 3M Health Care (St. Paul, MN) and received financial compensation for preparing this manuscript. Ms. Zehrer, Ms. Solfest, and Ms. Walters are employed by 3M Health Care.
Chronic wounds such as pressure ulcers, diabetic foot ulcers, and venous ulcers typically exhibit some degree of drainage.1 The amount of drainage can vary from scant to copious depending on a variety of factors such as wound size, stage of healing, the presence of necrotic tissue, and numerous local, systemic, and practical issues of care.1,2
According to Rolstad and Ovington,3 a fundamental performance parameter critical for all wound dressings is to prevent tissue desiccation by providing a moist wound healing environment. At the same time, the wound should not become overly wet and cause periwound skin maceration. Thus, the authors conclude that dressings must have sufficient fluid handling capacity to reduce the risk of maceration and still provide a moist environment for healing. In a retrospective study,4 400 participants with pressure, diabetic, or venous ulcers with varying amounts of exudate (ranging from none to large) in a variety of healthcare settings were less likely to be healed after 3 months of care if their wounds were not provided an appropriate exudate management dressing when compared to wounds receiving appropriate exudate management.
Fluid handling capacity also may affect the functional wear time of dressings. Extended wear time is important not only in minimizing disruption of healing wounds, but also in controlling the overall cost of wound treatment. In an economic model of data derived from 26 studies of three pressure ulcer (n = 519) and three venous ulcer (n = 883) protocols of care, Kerstein et al5 found that over 12 weeks of modeled care, differences in the cost per ulcer healed were influenced by both healing outcome and differences in the frequency of dressing changes. Thus, the authors concluded that defining wound care costs solely on the cost of products used (and not taking wear time or healing outcome into account) is inaccurate and may lead to false assumptions of economy.
Several advanced adhesive dressing technologies, including hydrocolloid, transparent absorbent, and foam, have been developed over the past three decades to help meet the clinical need for moisture-retentive absorbent adhesive wound dressings with extended wear time. However, information about the comparative effectiveness (healing outcome) of these dressings, as well as their performance (wear time), remains limited. Chronic wound patients usually have complex comorbidities requiring multiple therapies; these patients are difficult to follow as they move through the healthcare system. Rarely is a patient hospitalized for an extended period of time simply for a chronic wound.
Clinical measurement of dressing wear time (performance) is extremely challenging due to the high degree of variability that exists among patients, wounds, clinical practice, and established care protocols. Even for wounds with similar etiology, test condition variability makes estimates of clinical wear time difficult to obtain. As a result, clinicians, product engineers, and regulatory bodies have sought methods of comparing the performance of absorbent adhesive dressings—ie, an in vivo test method that bridges the gap between purely in vitro and clinical study models. One such test method has been proposed and tested with hydrocolloid and transparent absorbent dressings.6-9 The test involves the creation of artificial wound models (AWM) on the lower backs of healthy adult volunteers and intermittently infusing artificial wound fluid (AWF) into the models over a predefined study period. Although this test is not intended to be a substitute for clinical trials, it can provide meaningful insight into potential wear times and fluid handling capacities of absorbent adhesive dressings.
The objective of this study was to assess the wear time and fluid handling capacity of two foam dressings: Tegaderm™ High Performance Foam adhesive dressing (3M Health Care, St. Paul, MN) [HPF] and Allevyn™ adhesive foam dressing (Smith & Nephew, Largo, FL) [AAF], using a refinement of the previously described in vivo AWM procedure.
Relatively little information has been published on the use of nonpatient models to evaluate absorbent wound dressing performance. In 1997, while studying the fluid handling properties of hydrocolloid dressings, Thomas and Loveless10 proposed using the sum of dressing absorbency and moisture vapor transmission rate (MVTR), as determined by the Paddington Cup method, for comparing total fluid handling capacity of dressings. In 1998, Sprung et al11 described a similar procedure and, using an immersion test, further demonstrated that absorption characteristics of some wound dressings (hydrogels and hydrocolloids) can be influenced by the choice of fluid used in the test (wound fluid, saline, or water).
Briefly, the Paddington Cup method (as codified in the European Standard EN13726-1)12 measures the mass of moisture vapor lost through semipermeable dressings at standard laboratory conditions of 37˚ ± 1˚ C and 20% relative humidity. The test utilizes a Paddington Cup apparatus that consists of an open and a closed chamber, each with an orifice area of 10 cm2. A sample of the test dressing is placed between the two chambers, which are clamped together with a moisture-tight seal. The closed chamber contains a known quantity of a solution of sodium and calcium chloride that is isotonic to human serum. The fluid is in contact with the wound contact portion of the dressing and the upper surface of the dressing is exposed to ambient air. At predetermined times during the study, the amount of moisture lost through the dressing is determined by gravimetric analysis.
In 2007, Thomas13 used the Paddington Cup method to compare the performance of two polyurethane foam dressings. In this study, a foam dressing with a high MVTR and another with a low MVTR were compared over a 7-day test period using both an inverted and an upright technique. In the inverted position, the dressing was in contact with the test fluid throughout the entire procedure, modeling fluid handling capability of the dressings under highly exudating wound conditions. In the upright position, the dressing was only in contact with moisture vapor from the test fluid, thus modeling fluid handling (or retention) capability under low exudating or dry wound conditions. The results then were graphed to show the range of expected fluid handling capability of the dressings over the 7-day test period. Superimposed on these graphs were anticipated volumes of exudate produced from a variety of wound types based on literature observations. Results showed that the foam dressing with the higher MVTR was projected to be capable of handling exudate volumes from a wide variety of wounds, whereas the dressing with the lower MVTR would be saturated within about 24 hours. Although novel, this approach failed to model clinically important adhesive-skin interactions that can lead to dressing failure before full saturation.
In 2001, Thomas and Fram14 described the development of an AWM for in vitro testing of absorbent wound dressings. The model was automated for fluid delivery and strike-through detection; however, it required a specially constructed apparatus, making it difficult to replicate outside of their test laboratory. In 2007, Young et al15 described a unique in vitro ultrasound method for observing and measuring fluid absorption dynamics of polyurethane foam dressings. In their model, an ultrasound transducer probe was applied to the absorbent surface of a dressing to generate visual images that followed the absorption and transduction of a 5-mL bolus of isotonic fluid through the dressing. The ultrasound images were analyzed for the presence or absence of moisture and the time required for the dressing to fully absorb the fluid and then to transpire it through the film backing. The technique did not provide a quantitative measurement of fluid handling capacity of dressings and was limited by the availability of specialized ultrasound equipment and its utilization of a single large bolus application of a simple isotonic fluid to the dressing.
Over time, the Paddington Cup method has become the gold standard for measuring and comparing fluid handling capacities of dressings. However, as Thomas and Young16 point out, numerous limitations of the method must be considered. First, this method exposes a relatively small sample of dressing (10 cm2) to an effective wound area of equivalent size when clinically, wound dressings are typically much larger than the wound over which they are placed. Second, the test is destructive in nature because the dressings need to be cut to fit within the test apparatus. Many dressings are constructed with multiple and interactive layers, so it is impossible to predict how cutting and rigidly fixing these dressings in a test apparatus might affect overall dressing performance. Third, the Paddington Cup method does not take into account numerous relevant clinical variables that may influence dressing performance, the most important being adhesive-skin interactions, which are difficult to model in vitro. Fourth, the Paddington Cup method is conducted under tightly controlled laboratory temperature (37° C/99° F) and relative humidity (RH) (20%) conditions that may not be entirely clinically relevant, particularly when clothing and bed linen cover the dressing.14 Finally, the test method utilizes a simple isotonic solution of sodium and calcium chloride, whereas wound fluid is a complex mixture of electrolytes, carbohydrates, proteins, and lipids.17 Sprung et al11 demonstrated that absorption characteristics of both hydrogel and hydrocolloid dressings are differentially affected by the choice of test fluid (wound fluid, saline, or water), although this has yet to be replicated with adhesive foam dressings.
Given the limitations of the Paddington Cup and other in vitro test methods that have been developed thus far, it would be advantageous for product engineers and developers to have available an easy-to-perform test that overcomes these limitations.
Methods and Procedures
Model overview. This in vivo test method is intended to provide better real-world estimates of fluid handling and wear time capabilities of absorbent dressings than purely in vitro methods such as the Paddington Cup method.12,16 Each test dressing is placed over an AWM constructed on the lower backs of healthy volunteers. AWF is intermittently infused into the AWM at a predetermined amount, frequency, and duration and allowed to be absorbed into the dressing much like a dressing placed over a chronic wound. The study is typically conducted over approximately 1 week or until test dressing failure. During the test period, dressings are regularly monitored and assessed for edge lift, adhesive failure, and leakage.
Ethical considerations. This study conformed to the ethical guidelines of the 1975 Declaration of Helsinki and was conducted in compliance with the Good Clinical Practice guidelines and the Health Insurance Portability and Accountability Act regulations.18-20 An ethics review was conducted before the start of the study, and all participants provided written informed consent before study enrollment.
The artificial wound model. No unusual or difficult-to-obtain materials are needed to construct the artificial wounds (see Table 1). The artificial wound was made from a nonstick pad, folded in half to form a 1-inch x 1.5-inch (2.54 cm x 3.81 cm) artificial wound bed, approximately 10 cm2, the size of a typical chronic wound.5 The tip of a 23-gauge catheter cannula was inserted into the folded pad to diffuse the fluid and secured in place with paper tape. Study participants self-infused fluid into the model at hourly intervals using a needleless graduated syringe (see Figure 1).
Test fluid. The test method can accommodate a variety of fluids and has been used successfully by the authors with normal saline, phosphate-buffered saline, and AWF. For this experiment, AWF was used because it approximates the macronutrient and electrolyte composition of chronic wound fluid17,21 and contains a preservative system that keeps it stable and aesthetically acceptable to subjects throughout a typical week-long test (see Table 2).
The use of a skin-friendly preservative system such as Germaben® II (International Specialty Products, Wayne, NJ) is recommended when formulating AWF to suppress microbial growth and odor during testing (author experience). Although suppression of microbial growth may not fully replicate clinical conditions, and it is possible that some organisms may feed on some types of carbohydrate or protein-based dressing materials (eg, alginates, hydrocolloids, and pectins), it is doubtful that such an effect would be observed with fully synthetic dressings such as those utilized in this experiment.
Test sites. The lower lumbar region was chosen for this experiment because it is easily accessed, accommodates two wound dressings, and is inconspicuous when covered by clothing. Other anatomical locations such as the volar forearm,6,8 elbow (unpublished data), and calf (unpublished data) have been successfully used and can be accommodated by the model, but the lower lumbar region works best for ease of dressing application, ease of participant fluid injections, and daily dressing assessments. Additionally, the lower lumbar region exposes the dressings to external pressures, friction, and shear forces that one might expect during normal dressing wear with ambulatory patients, especially in a supine position during sleep. Therefore, no special adjustments were made to protect the dressings during daily wear.
Study design. A paired study design was used. Each participant received both test dressings alternately placed on the left and right side of the lumbar region. An adaptive design was deemed appropriate because dressing wear time within this procedure was unknown. Previous studies using hydrocolloid and transparent absorbent dressings (unpublished data) found that a sample size of 12 to 24 participants provides useful data. Therefore, a sample size of 24 volunteers was planned with an interim analysis of the data after 12 participants had completed the study. Termination at that time was allowed based on the primary analysis of time (in hours) to product failure using the O’Brien-Fleming stopping boundary, with P = 0.0056 at 50% of the planned data.
Procedures. To minimize possible confounding variables, the use of lotions, creams, or any other skin contact materials on the test sites was prohibited for at least 1 day before the start of the study. Volunteers also were asked to refrain from strenuous exercise during the study and from getting the dressings excessively wet (eg, no tub bathing or showering directly on dressings).
One coordinator implemented the study. On the first day of the study, this coordinator cleansed the test areas with a no-rinse skin cleanser, constructed the AWMs, and prepared the infusion cannulae by pre-flushing them with AWF. Maintenance of sterile technique was not required. The study coordinator also educated the volunteers on infusing AWF into the model and provided an infusion log to record AWF administration. All study participants received a sufficient quantity of AWF (in a small capped bottle).
The exact amount and frequency of AWF infusions is based on the study objectives and the wound type to be modeled. Using previously documented clinical observations,22 AWF administration frequency and dose were determined for minimally, moderately, and highly exuding wound models (see Table 3). For this test, a highly exudating wound model was chosen, requiring 12 hourly injections of 1.0 mL of AWF per day (12.0 mL/day). This dose was also consistent on an equivalent wound size basis with the highest level of exudate observed in a study of 10 venous ulcers (1.2 g/cm2/24 hours).23
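The consistency between the chosen infusion schedule and the cited clinical exudate rate can be verified with simple arithmetic. The sketch below is illustrative only; an AWF density of approximately 1 g/mL is assumed (not stated in the source) so that grams and milliliters are interchangeable:

```python
# Illustrative sanity check of the highly exudating dose selection.
# Assumption (not from the source): AWF density ~1 g/mL, so g ~= mL.
wound_area_cm2 = 10.0      # area of the artificial wound bed
exudate_rate = 1.2         # g/cm2/24 h, highest level observed in venous ulcers
daily_mass_g = exudate_rate * wound_area_cm2     # expected exudate mass per day

infusions_per_day = 12
dose_ml = 1.0
daily_volume_ml = infusions_per_day * dose_ml    # AWF delivered per day

print(daily_mass_g, daily_volume_ml)  # 12.0 12.0 -- the schedule matches the rate
```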
For this experiment, dressing failure was defined as leakage of the infused AWF, dressing lift sufficient to expose the artificial wound, or the dressing falling off. Volunteers were asked to monitor the wound model for signs of dressing failure, and the study coordinator evaluated the dressings every day except on weekends. Signs of dressing failure and the date were recorded. Typically, volunteers detected dressing failure when leakage caused wetness of the surrounding skin or clothing. When dressing failure occurred, the volunteer or study coordinator removed the failed dressing and continued infusions into the remaining test dressing until it also failed.
Dressings remained in place until failure for a maximum of 87 infusions (12 infusions per day on days 1 through 7, plus three infusions the morning of day 8). The test procedures (eg, infusion dosage, frequency, and duration) were chosen to ensure a sufficient number of failures within the limited study duration in order to ensure some measure of assay sensitivity—ie, the ability to detect statistical differences in dressing fluid handling capacity. Dressings are rarely left in place for more than 7 days, so slightly exceeding 7 days of testing allows for observations up to and including 7 days of wear time.
Outcome measures. The three main outcomes of interest for this study were dressing wear time, volume of fluid administered until dressing failure, and the proportion of intact study dressings at specific times during the study.
Data and statistical analysis. Because dressing removal may be due to factors other than dressing failure (eg, end of study period, accidental removal), survival analysis methods were used to compare dressing wear times while including all dressing removals (also known as censored cases). In general, if the majority of dressings fail during the course of the study, a median survival time may be computed (time to 50% dressing failure). If all dressings fail during the course of the study, a mean survival time may be computed.
Both the wear time (in hours) and number of AWF doses until dressing failure were compared between dressing types using a Cox proportional hazard model that accounted for the clustered data using a robust sandwich covariance matrix estimate. If the dressing remained intact at the end of the study, or if the dressing was removed for reasons not associated with dressing performance (eg, caught on clothing), the time to those events were treated as censored. The primary test of difference was a one-sided Wald test, with a null hypothesis of the hazard ratio equal to 1.0. The nonparametric Log-rank test was used to compare survival curves and estimates of the proportion of dressings remaining intact after a specific time or fluid exposure were computed using the Kaplan-Meier or actuarial method with 24-hour or 12-dose intervals.
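The Cox model with a robust sandwich covariance estimate requires a statistics package, but the Kaplan-Meier step of the analysis can be sketched in a few lines. The code below is a minimal illustration, not the authors' analysis code; function and variable names are the writer's own, and the example data are hypothetical:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate with right censoring.

    times:  observation time for each dressing (hours, days, or doses)
    events: 1 if the dressing failed at that time, 0 if censored
            (eg, intact at study end or removed for unrelated reasons)
    Returns a list of (time, survival_probability) steps.
    """
    n_at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):       # walk distinct observed times in order
        failures = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if failures:
            surv *= 1 - failures / n_at_risk
            curve.append((t, surv))
        n_at_risk -= times.count(t)    # everyone observed at t leaves the risk set
    return curve

def median_survival(curve):
    """First time the survival estimate drops to 0.5 or below (None if not reached)."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None

# Hypothetical wear times (days) for five dressings; two were censored (event = 0).
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
print([(t, round(s, 3)) for t, s in curve])  # [(1, 0.8), (2, 0.6), (3, 0.3)]
print(median_survival(curve))                # 3
```

Censoring is what distinguishes this estimator from a naive average: a dressing removed intact still contributes person-time to the risk set up to its removal without being counted as a failure.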
Participants. Twenty-four (24) volunteers (four men and 20 women) were enrolled and completed the study. The interim primary analysis on wear time (in hours) to product failure resulted in a P value that did not cross the predefined O’Brien-Fleming stopping boundary. The one-sided Wald test with the null hypothesis that the hazard ratio was equal to 1.0 resulted in a P value equal to 0.0364. Therefore, all planned volunteers were enrolled into the study.
Reason for dressing removal. The primary reason for removal of both dressings was leakage or dressing edge lift sufficient to expose the artificial wound bed. This was lower for HPF (n = 14 out of 24, 58%) than for the AAF dressing (n = 22 out of 24, 92%). Only one (4%) AAF dressing survived to the end of the study, compared to nine (38%) HPF dressings. One dressing (4%) from each group was removed for reasons other than leakage/lift: one was mistakenly removed by the volunteer when the comparator dressing failed, and the other for unknown reasons.
Wear time to dressing failure. The median wear time (survival time) for the HPF dressing was 6.1 days. The median wear time to failure for the AAF dressing was 3.5 days (Log-Rank test, P = 0.0018) (see Figure 2). The end-of-study survival rate (approximately 7 days) was estimated to be 39% (10% SE) for the HPF and 8% (6% SE) for the AAF dressing. The hazard ratio estimate was 0.36—ie, the instantaneous risk of HPF dressing failure based on wear time measurements was 36% of that for the AAF dressing. The hazard ratio was significantly smaller than 1.0 (one-sided P <0.001).
Cumulative doses to dressing failure. The estimated median doses to dressing failure based on the Kaplan-Meier method was 78.0 (78.0 mL) for the HPF and 44.5 (44.5 mL) for the AAF dressing. The proportion of intact dressings over doses administered (the survival curve) was significantly different between the two dressings (Log-Rank test, P = 0.0023) (see Figure 3). The survival rate to 84 doses (84.0 mL) at the end of the study was estimated to be 46% (10% SE) for the HPF and 28% (9% SE) for the AAF dressing.
The hazard ratio estimate was 0.37—ie, the instantaneous risk of HPF dressing failure based on the volume of AWF infused into the wound model was 37% of that for the AAF dressing (one-sided P <0.001).
The main utility of the in vivo AWM used in this experiment is its ability to compare the wear time and fluid handling performance of wound dressings under conditions that are stable over time and can be easily replicated. Over the last decade, the authors have performed this test more than 40 times using a variety of wound dressings and test conditions and have progressively refined and standardized the technique to the procedure presented in this paper. The model has previously shown utility for predicting wear time performance of dressings in a clinical environment. In a study conducted by Atwood et al,24 the model was used to measure in vivo wear time of a hydrocolloid dressing and a transparent absorbent dressing applied to the lower backs of 18 healthy volunteers and adapted to simulate a moderately draining wound (12 infusions of 0.7 mL AWF per day). The model differed slightly from the current study in that it also incorporated once-daily exposure to simulated diaphoretic and incontinent conditions. Results showed a mean (SD) wear time of 2.5 (1.16) days for the transparent absorbent dressing and 0.8 (0.77) days for the hydrocolloid dressing.
In a clinical study with the same two dressings on 72 patients with Stage II and shallow Stage III pressure ulcers, Brown-Etris et al25 found a mean (SD) wear time of 5.7 (2.55) days for the transparent absorbent and 4.7 (2.29) days for the hydrocolloid dressing. Although dressing wear times were longer on actual wound patients, directionally the results were consistent with the results of the in vivo model.
Both dressings tested in the current experiment have recently been modified to enhance fluid handling capabilities under highly exudating conditions while retaining a moist environment under drier conditions.16,26 To date, no clinical studies involving the HPF dressing have been published and only one market surveillance study27 involving 84 patients with mixed etiology chronic wounds has been published for the AAF dressing (113 additional patients in the same study received various other adhesive and nonadhesive forms of the same brand of foam dressing). Dressing wear time was not reported in this market surveillance study, but the authors found across all of the dressings studied that 90% of dressing removals were part of the regular dressing routine and only 6% were removed for leakage. Only 24% of wounds in this study were classified as highly exudative; the majority (76%) were classified as having either moderate (50%), slight (24%), or no drainage (2%). Therefore, it is difficult to draw any interpretive conclusions about the results of this market surveillance study relative to the more highly exudative artificial wounds utilized in this in vivo model study.
Thomas and Young16 used the results of data generated with the Paddington Cup method to predict how the AAF dressing might perform under a range of exudate levels and compared the data to literature observations of exudate levels in a variety of chronic wounds.23,28 Based on these clinical observations, it was predicted that this dressing should be able to handle “typical” clinical wound situations. However, chronic wounds vary considerably in their production of exudate, and the literature is sparse in terms of understanding the full range of exudate levels likely to be encountered in a variety of clinical settings and wound etiologies. The results of the current in vivo study are a good extension of Thomas and Young’s16 in vitro study, as the current study compares in vivo performance of the dressings near the upper end of what they predicted the AAF dressing is likely to handle with actual wounds. They predicted that, in a clinical environment, the AAF dressing is likely to handle 16 g/10 cm2 of exudate per 24 hours at day 1 to 13 g/10 cm2/24 hours at day 7.
In the current study, the estimated median dose of AWF until failure (dose to 50% of dressing failures) for the AAF dressing was 44.5 mL, similar to the maximum dose of fluid that Thomas and Young16 predicted the dressing could handle at 3.3 days using the Paddington Cup method. This comes remarkably close to the estimated median survival time of 3.5 days for the same dressing in the current in vivo study. Therefore, these observations lend credence to the usefulness of this in vivo test method for comparing total fluid handling capacity of absorbent dressings.
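A rough back-of-the-envelope check makes the agreement between the two methods concrete. This sketch is illustrative only; an AWF density of about 1 g/mL is assumed (not stated in the source) so that milliliters and grams are interchangeable:

```python
# Average daily fluid handling implied by the in vivo median results for the
# AAF dressing, compared with the Paddington-Cup-based prediction.
median_dose_ml = 44.5    # median cumulative AWF at failure (this study)
median_days = 3.5        # median wear time to failure (this study)
implied_daily_rate = median_dose_ml / median_days   # mL/24 h

predicted_g_per_day = (13.0, 16.0)  # g/10 cm2/24 h, Thomas and Young's range

print(round(implied_daily_rate, 1))  # 12.7
```

The implied in vivo rate of about 12.7 mL/24 h sits just at the lower end of the 13 to 16 g/10 cm2/24 hours predicted in vitro, consistent with the agreement described in the text.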
As discussed by Thomas and Young,16 in vitro test methods have numerous limitations, including overestimating or underestimating dressing fluid handling capabilities, which makes extrapolation of results to a clinical setting difficult.13,16 The in vivo model utilized in this experiment overcomes many of these limitations, making it a good in vivo alternative for modeling comparative dressing wear time and fluid handling performance.
The artificial wound bed for this in vivo model measured approximately 10 cm2, which is within the range of baseline wound sizes encountered in a variety of published chronic wound studies.5 Both dressings tested in this study have an absorbent pad size of 10 cm x 10 cm (100 cm2), which is a 10:1 dressing-to-wound size ratio and allows for both vertical and lateral wicking of fluid within the dressing. Lateral wicking is an important component of the test because it increases the effective surface area for evaporative water loss, which is artificially constrained in the Paddington Cup model. Furthermore, the dressings in this in vivo model are used without modification (cutting), underscoring the method’s ability to better serve as a real-world model of dressing performance than the Paddington Cup method.
A major limitation of all in vitro methods developed thus far is that they do not take into account several relevant clinical variables that may influence dressing wear time, including exposure to sweat, oils, loose surface skin cells, movement, friction, pressure, and shear. If there is sufficient dressing edge lift, a channel will form between the pad and the outer edge of the dressing and leakage will occur well before the dressing reaches its full fluid handling capacity. Also, unlike dressing use in a stationary test apparatus, during clinical use the dressing may not always lie parallel to the pull of gravity, which could influence the position of the absorbed fluid within the dressing, possibly pulling it toward one edge. This could affect evaporative water loss characteristics and may cause leakage if the pooling occurs in an area with substantial dressing edge lift. With this in vivo model, the dressings are placed on the lower backs of ambulatory subjects; doing so more closely replicates movement and adhesive-skin interactions likely to be encountered under clinical use.
In a clinical situation, the environmental conditions to which dressings are exposed likely will vary over time and differ considerably from the standardized laboratory values used in most in vitro methods. Thomas and Fram14 measured temperature and RH beneath the bedclothes of three hospitalized patients and found average conditions of 26.5° C/79.7° F and 50% RH, both significantly different from the standard Paddington Cup test conditions of 37° C/99° F and 20% RH. Therefore, side-by-side modeling of dressings in an anatomical location where they are routinely covered by clothing and exposed to normal and varying environmental conditions more closely replicates actual use conditions than in vitro methods.
One of the salient features of this in vivo model is the use of AWF rather than a simple isotonic salt solution. Sprung et al11 reported that the choice of test fluid (wound fluid, saline, or water) differentially affected the amount of fluid hydrogel and hydrocolloid dressings could absorb in an immersion test procedure. Interestingly, no clear trends emerged in terms of whether the dressings absorbed more or less wound fluid compared to normal saline or water. Some dressings absorbed more wound fluid while others absorbed less, but the data clearly showed the importance of selecting a test fluid that closely models chronic wound fluid when comparing absorbent dressings. The AWF utilized in this in vivo wound model was formulated to closely resemble chronic wound fluid, not only in ionic strength, but also in viscosity and macronutrient composition.17
Additionally, as discussed by Thomas and Young,16 it is reasonable to expect that as moisture from wound fluid is transpired through semipermeable absorbent dressings, solutes may accumulate at the inner surface of the transparent film backing. These solutes, particularly the larger organic compounds, may partially occlude the backing and gradually decrease dressing moisture vapor permeability. Such an effect would not be observed with a simple isotonic solution of sodium and calcium chloride, as utilized in the Paddington Cup and most other in vitro methods.12
Limitations and Benefits of the In Vivo Test Method
As with any model used to compare potential clinical performance of absorbent wound dressings, the main limitation of this test is that it is simply a model and outcomes may not reflect dressing physical performance in a clinical environment. Wounds are highly variable and studies conducted on healthy human volunteers cannot be used to predict exact clinical outcomes of dressing wear time or fluid handling capacity. However, this limitation also highlights the main benefit of the test in that it provides an in vivo method to quickly and easily compare the fluid handling and wear time performance of absorbent dressings in a stable environment that is relatively free of confounding factors that can complicate or bias clinical evaluations of dressings. Thus, this in vivo model facilitates side-by-side comparisons of dressing fluid handling and wear time performance with a relatively small number of subjects and in a relatively short timeframe with more clinically meaningful results than can be obtained by in vitro methods. The model also does not assess other variables that may affect patient outcomes, such as healing.
In this in vivo experiment simulating highly exudating wounds, the fluid handling capacity of the HPF dressing was significantly better than that of the AAF dressing. The HPF absorbed 75% more fluid before it failed and remained intact for a median of 6.1 days compared to 3.5 days for the AAF dressing. Although projecting exact dressing performance into a clinical environment is beyond the scope of this model, these results suggest that on highly exudating wounds, the HPF dressing likely will require fewer dressing changes than the AAF dressing. The model described overcomes important limitations of current in vitro dressing performance tests. Additional clinical studies to ascertain the ability of this model to predict wear time performance of dressings in clinical practice are warranted, and studies to compare wound outcomes are needed.
Mr. Lutz is a freelance medical writer, Lutz Consulting LLC, Buellton, CA. Ms. Zehrer is a research specialist; Ms. Solfest is a clinical study coordinator; and Ms. Walters is a biostatistical specialist, 3M Health Care, St. Paul, MN. Please address correspondence to: Jim Lutz, MS, CCRA, Lutz Consulting LLC, 411 Dogwood Drive, Buellton, CA 93427-6810; email: email@example.com.