
Thursday, July 25, 2019

Supply Chain Delays in Antimicrobial Administration After the Initial Clinician Order and Mortality in Patients With Sepsis
Objectives: There is mounting evidence that delays in appropriate antimicrobial administration are responsible for preventable deaths in patients with sepsis. Herein, we examine the association between potentially modifiable antimicrobial administration delays, measured by the time from the first order to the first administration (antimicrobial lead time), and death among people who present with new onset of sepsis. Design: Observational cohort and case-control study. Setting: The emergency department of an academic, tertiary referral center during a 3.5-year period. Patients: Adult patients with new onset of sepsis or septic shock. Interventions: None. Measurements and Main Results: We enrolled 4,429 consecutive patients who presented to the emergency department with a new diagnosis of sepsis. We defined 0–1 hour as the gold standard antimicrobial lead time for comparison. Fifty percent of patients had an antimicrobial lead time of more than 1.3 hours. For an antimicrobial lead time of 1–2 hours, the adjusted odds ratio of death at 28 days was 1.28 (95% CI, 1.07–1.54; p = 0.007); for an antimicrobial lead time of 2–3 hours was 1.07 (95% CI, 0.85–1.36; p = 0.6); for an antimicrobial lead time of 3–6 hours was 1.57 (95% CI, 1.26–1.95; p < 0.001); for an antimicrobial lead time of 6–12 hours was 1.36 (95% CI, 0.99–1.86; p = 0.06); and for an antimicrobial lead time of more than 12 hours was 1.85 (95% CI, 1.29–2.65; p = 0.001). Conclusions: Delays in the first antimicrobial execution, after the initial clinician assessment and first antimicrobial order, are frequent and detrimental. Biases inherent to the retrospective nature of the study apply. Known biologic mechanisms support these findings, which also demonstrate a dose-response effect. In contrast to the elusive nature of sepsis onset and sepsis onset recognition, antimicrobial lead time is an objective, measurable, and modifiable process. 
This work has been performed at the Virginia Commonwealth University Medical Center. A limited abstract of this research was presented, in part, at the CHEST International Congress, Bangkok, Thailand, April 11, 2019. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal). The authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: mkashiouris@vcu.edu Copyright © 2019 by the Society of Critical Care Medicine and Wolters Kluwer Health, Inc. All Rights Reserved.
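The adjusted odds ratios and 95% CIs above come from a multivariable model, but the underlying arithmetic of an odds ratio and its Wald confidence interval can be sketched in a few lines. This is a minimal, unadjusted illustration with invented 2×2 counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed deaths, b = exposed survivors,
    c = unexposed deaths, d = unexposed survivors."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal cell counts
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 120/400 deaths with long lead time vs 90/450 with short
or_, lo, hi = odds_ratio_ci(120, 280, 90, 360)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 1.71 1.25 2.35
```

A CI excluding 1.0 (as here) corresponds to p < 0.05, which is how the abstract's significant lead-time strata should be read.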
Characteristics of Rapid Response Calls in the United States: An Analysis of the First 402,023 Adult Cases From the Get With the Guidelines Resuscitation-Medical Emergency Team Registry
Objectives: To characterize the rapid response team activations, and the patients receiving them, in the American Heart Association-sponsored Get With The Guidelines Resuscitation-Medical Emergency Team cohort between 2005 and 2015. Design: Retrospective multicenter cohort study. Setting: Three hundred sixty U.S. hospitals. Patients: Consecutive adult patients experiencing rapid response team activation. Interventions: Rapid response team activation. Measurements and Main Results: The cohort included 402,023 rapid response team activations from 347,401 unique healthcare encounters. Respiratory triggers (38.0%) and cardiac triggers (37.4%) were most common. The most frequent interventions (pulse oximetry, 66.5%; other monitoring, 59.6%; and supplemental oxygen, 62.0%) were noninvasive. Fluids were the most common medication ordered (19.3%), but new antibiotic orders were rare (1.2%). More than 10% of rapid response team activations resulted in code status changes. Hospital mortality was over 14% and increased with subsequent rapid response activations. Conclusions: Although patients requiring rapid response team activation have high inpatient mortality, most rapid response team activations involve relatively few interventions, which may limit these teams’ ability to improve patient outcomes. This study was performed at The University of Chicago, Chicago, IL. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal). Dr. Lyons’ institution received funding from a National Institutes of Health (NIH) T32 grant (5T32 HL007317). Drs. Lyons, Chan, and Churpek received support for article research from the NIH. Dr. Edelson’s institution received funding from EarlySense, Tel Aviv, Israel and Philips Healthcare, Andover, MA. Drs.
Edelson and Churpek disclosed funding from a pending patent (ARCD.P0535US.P2) for risk stratification algorithms for hospitalized patients. Dr. Chan is supported by a research grant award from the National Institutes of Health (1R01 HL123980). Ms. Praestgaard’s institution received funding from the American Heart Association. Dr. Churpek received support from the National Institutes of Health, and he is supported by a career development award from the National Heart, Lung, and Blood Institute (K08 HL121080). The remaining authors have disclosed that they do not have any potential conflicts of interest. Address requests for reprints to: Matthew M. Churpek, University of Chicago Medical Center, Section of Pulmonary and Critical Care Medicine, 5841 South Maryland Avenue, MC 6076, Chicago, IL 60637. E-mail: matthew.churpek@uchospitals.edu
B-Mode Ultrasound Findings in a Patient With Suspected Pulmonary Gangrene
Objectives: Lung ultrasound has shown increasing diagnostic value in many lung diseases and has become an efficient tool in the management of dyspnea. In the present case report, we describe a new ultrasound feature of potential interest. Data Sources: Clinical observation of a patient. Study Selection: Case report. Data Extraction: Data were extracted from medical records, after obtaining consent from the patient’s family. Illustrations were extracted from the imaging software and a video device. Data Synthesis: A 56-year-old man was admitted with pneumonia with an adverse outcome. Lung ultrasound, a method increasingly considered a bedside gold standard in critically ill patients due to its overwhelming advantages, was the only tool able to specify the lung injuries. We describe herein a distinctive sign unequivocally evoking a destructive process suggestive of pulmonary gangrene, a variant of the fractal sign combining a lung consolidation with an underlying heterogeneous free fluid. Conclusions: Lung ultrasound may help highlight pulmonary gangrene, a poorly known disease, with this new ultrasonographic description. The next step will be to ascertain the relation between this new ultrasound feature and pulmonary gangrene and to assess how this bedside diagnosis could impact the prognosis of the disease. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal). Dr. Lichtenstein received funding from Novartis (board fees) and honoraria from Boehringer and Servier. The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: m.echivard@neuf.fr
Ultrasonographic Detection of Micro-Bubbles in the Right Atrium to Confirm Peripheral Venous Catheter Position in Children
Objectives: In pediatric patients, indwelling peripheral venous catheters are sometimes displaced to extravascular positions, causing infiltration or extravasation. No reliable techniques are available to confirm accurate IV catheterization. However, ultrasonographic detection of micro-bubble turbulence in the right atrium after saline injection has been reported to be useful in confirming central venous catheter positions in both adults and children. This study evaluated whether this micro-bubble detection test can offer better confirmation of peripheral venous catheter positions compared with the smooth saline injection technique in pediatric patients. Design: Randomized controlled study. Setting: Single tertiary PICU. Patients: Pediatric patients (weighing < 15 kg) who already had or required a peripheral venous catheter. Interventions: Patients were randomly allocated to either of two groups (150 patients per group), undergoing either the micro-bubble detection test (M group) or the smooth saline injection test (S group). Measurements and Main Results: The peripheral venous catheters were confirmed to be located intravascularly in the final position in 137 and 139 patients in the M and S groups, respectively. In properly located catheters, the tests were positive in 100% (n = 137/137; sensitivity, 100%; 95% CI, 97.8–100), and in 89% (n = 124/139; 95% CI, 82.8–93.8) of the M and S groups, respectively (p = 0.0001). Among the catheters located in extravascular positions, the tests were negative in 100% (n = 13/13; specificity, 100%; 95% CI, 79.4–100), and in 64% (n = 7/11; 95% CI, 30.8–89.1) of the M and S groups, respectively (p = 0.017). Conclusions: The micro-bubble detection test is a useful technique for detecting extravasation and confirming proper positioning of peripheral IV catheters in pediatric patients. The authors have disclosed that they do not have any potential conflicts of interest.
For information regarding this article, E-mail: t-k-s-t@koto.kpu-m.ac.jp
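The sensitivity and specificity figures above are binomial proportions with confidence intervals. As a rough sketch, a Wilson score interval (which differs slightly from the exact Clopper-Pearson intervals journals typically report) can be computed in plain Python:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion successes/n."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Sensitivity in the S group: 124 positive tests in 139 properly located catheters
lo, hi = wilson_ci(124, 139)
print(f"{124/139:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

For 124/139 this gives roughly 83.0-93.4%, close to the exact 82.8-93.8% interval quoted in the abstract; the Wilson form is used here only because it is short enough to show inline.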
Use of Cell Cycle Arrest Biomarkers in Conjunction With Classical Markers of Acute Kidney Injury
Objectives: Decreased urine output and/or increased serum creatinine may herald the development of acute kidney injury or reflect normal physiology. In this secondary analysis of the Sapphire study, we examined biomarkers of cell cycle arrest in the settings of oliguria and/or azotemia to improve risk assessment when used with conventional indices in predicting severe acute kidney injury (Kidney Disease: Improving Global Outcomes 3 defined by the need for renal replacement therapy or changes in urine output, serum creatinine or both) or death. Design: Prospective, international, Sapphire study. Setting: Academic Medical Center. Patients: Patients without acute kidney injury Kidney Disease: Improving Global Outcomes stage 2 or 3. Interventions: None. Measurements and Main Results: The primary endpoint was development of severe acute kidney injury or death within 1 week. Secondary analysis examined the relationship between tissue inhibitor of metalloproteinases-2 ([TIMP-2]) and insulin growth factor binding protein 7 ([IGFBP7]) and 9-month death or dialysis conditioned on progression to stage 2–3 acute kidney injury within 1 week. Seventy-nine patients reached the primary endpoint and were more likely to be surgical, with higher nonrenal Acute Physiology and Chronic Health Evaluation III scores and more chronic kidney disease. Stage 1 urine output, serum creatinine, and urinary [TIMP-2]•[IGFBP7] greater than 2.0 were all predictive of progression to the primary endpoint independent from nonrenal Acute Physiology and Chronic Health Evaluation III score. Combinations of predictors increased the hazard ratios considerably (from 2.17 to 4.14 to 10.05, respectively). In the presence of acute kidney injury (stage 1), [TIMP-2]•[IGFBP7] greater than 2.0 leads to an increased risk of death or dialysis at 9 months even in the absence of progression of acute kidney injury (stage 2–3) within 7 days.
Conclusions: Cell cycle arrest biomarkers, TIMP-2 and IGFBP7, improve risk stratification for severe outcomes in patients with stage 1 acute kidney injury by urine output, serum creatinine or both, with risk increasing with each acute kidney injury indicator. Longer-term outcomes demonstrate that the associated risk of a [TIMP-2]•[IGFBP7] greater than 2.0 is equivalent to acute kidney injury progression even where no progression from stage 1 acute kidney injury is observed. For the full list of Sapphire Investigators, see Appendix 1 (Supplemental Digital Content 6, http://links.lww.com/CCM/E860). Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal). Supported, in part, by Astute Medical. Drs. Joannidis’, Forni’s, Koyner’s and Kellum’s institutions received funding from Astute Medical. Dr. Haase received funding from Astute Medical and lecture fees from Abbott and Alere. Dr. Koyner’s institution received funding from the University of Chicago (for enrolling patients into this prospective observational cohort study), and he received funding from Astute Medical (consulting). Drs. Shi and Chawla received funding from Astute Medical (consultant). Dr. Kellum received support for article research from Astute Medical. Dr. Kashani disclosed that he does not have any potential conflicts of interest. For information regarding this article, E-mail: kellum@pitt.edu This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.
Patient Outcomes and Cost-Effectiveness of a Sepsis Care Quality Improvement Program in a Health System
Objectives: Assess patient outcomes in patients with suspected infection and the cost-effectiveness of implementing a quality improvement program. Design, Setting, and Participants: We conducted an observational single-center study of 13,877 adults with suspected infection between March 1, 2014, and July 31, 2017. The 18-month period before and after the effective date for mandated reporting of the sepsis bundle was examined. The Sequential Organ Failure Assessment score and culture and antibiotic orders were used to identify patients meeting Sepsis-3 criteria from the electronic health record. Interventions: The following interventions were performed: 1) multidisciplinary sepsis committee with sepsis coordinator and data abstractor; 2) education campaign; 3) electronic health record tools; and 4) a Modified Early Warning System. Main Outcomes and Measures: Primary health outcomes were in-hospital death and length of stay. The incremental cost-effectiveness ratio was calculated and the empirical 95% CI for the incremental cost-effectiveness ratio was estimated from 5,000 bootstrap samples. Results: In multivariable analysis, the odds ratio for in-hospital death in the post- versus pre-implementation periods was 0.70 (95% CI, 0.57–0.86) in those with suspected infection, and the hazard ratio for time to discharge was 1.25 (95% CI, 1.20–1.29). Similarly, a decrease in the odds for in-hospital death and an increase in the speed to discharge was observed for the subset that met Sepsis-3 criteria. The program was cost saving in patients with suspected infection (–$272,645.7; 95% CI, –$757,970.3 to –$79,667.7). Cost savings were also observed in the Sepsis-3 group. Conclusions and Relevance: Our health system’s program designed to adhere to the sepsis bundle metrics led to decreased mortality and length of stay in a cost-effective manner in a much larger catchment than just the cohort meeting the Centers for Medicare and Medicaid Services measures.
Our single-center model of interventions may serve as a practice-based benchmark for hospitalized patients with suspected infection. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal). Dr. Afshar received funding from the National Institutes of Health (NIH)/National Institute on Alcohol Abuse and Alcoholism (K23 AA024503). Dr. Churpek received funding from the NIH/National Institute of General Medical Sciences (R01 GM 123193), NIH/National Heart, Lung, and Blood Institute (K08 HL121080), the American Thoracic Society Foundation Recognition Award for Early Career Investigators, and from a pending patent (ARCD.P0535US.P2). He received support for article research from the NIH. The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: Majid.afshar@lumc.edu
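The empirical 95% CI above was estimated from 5,000 bootstrap samples. The general percentile-bootstrap recipe (resample with replacement, recompute the statistic, take the empirical 2.5th and 97.5th percentiles) can be sketched as follows, using synthetic per-patient cost differences rather than the study's data:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.fmean, n_boot=5000, alpha=0.05, seed=42):
    """Percentile bootstrap CI: resample with replacement n_boot times,
    recompute the statistic each time, and take the empirical
    alpha/2 and 1-alpha/2 percentiles of the resampled statistics."""
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(stat(rng.choices(data, k=n)) for _ in range(n_boot))
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Synthetic cost differences (post minus pre), centered below zero,
# standing in for the study's per-patient cost data
rng = random.Random(0)
diffs = [rng.gauss(-300, 1500) for _ in range(500)]
lo, hi = bootstrap_ci(diffs)
print(lo, hi)
```

An interval lying entirely below zero, as in the abstract's (–$757,970.3 to –$79,667.7), is what supports the "cost saving" conclusion.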
External Validation of Two Models to Predict Delirium in Critically Ill Adults Using Either the Confusion Assessment Method-ICU or the Intensive Care Delirium Screening Checklist for Delirium Assessment
Objectives: To externally validate two delirium prediction models (early prediction model for ICU delirium and recalibrated prediction model for ICU delirium) using either the Confusion Assessment Method-ICU or the Intensive Care Delirium Screening Checklist for delirium assessment. Design: Prospective, multinational cohort study. Setting: Eleven ICUs from seven countries in three continents. Patients: Consecutive, delirium-free adults admitted to the ICU for greater than or equal to 6 hours in whom delirium could be reliably assessed. Interventions: None. Measurements and Main Results: The predictors included in each model were collected at the time of ICU admission (early prediction model for ICU delirium) or within 24 hours of ICU admission (recalibrated prediction model for ICU delirium). Delirium was assessed using the Confusion Assessment Method-ICU or the Intensive Care Delirium Screening Checklist. Discrimination was determined using the area under the receiver operating characteristic curve. The predictive performance was determined for the Confusion Assessment Method-ICU and Intensive Care Delirium Screening Checklist cohort, and compared with both prediction models’ original reported performance. A total of 1,286 Confusion Assessment Method-ICU–assessed patients and 892 Intensive Care Delirium Screening Checklist–assessed patients were included. Compared with the area under the receiver operating characteristic curve of 0.75 (95% CI, 0.71–0.79) in the original study, the area under the receiver operating characteristic curve of the early prediction model for ICU delirium was 0.67 (95% CI, 0.64–0.71) for delirium as assessed using the Confusion Assessment Method-ICU and 0.70 (95% CI, 0.66–0.74) using the Intensive Care Delirium Screening Checklist. 
Compared with the original area under the receiver operating characteristic curve of 0.77 (95% CI, 0.74–0.79), the area under the receiver operating characteristic curve of the recalibrated prediction model for ICU delirium was 0.75 (95% CI, 0.72–0.78) for assessing delirium using the Confusion Assessment Method-ICU and 0.71 (95% CI, 0.67–0.75) using the Intensive Care Delirium Screening Checklist. Conclusions: Both the early prediction model for ICU delirium and the recalibrated prediction model for ICU delirium were externally validated using either the Confusion Assessment Method-ICU or the Intensive Care Delirium Screening Checklist for delirium assessment. For each prediction model, both assessment tools showed similar moderate-to-good performance. These results support the use of either the early prediction model for ICU delirium or recalibrated prediction model for ICU delirium in ICUs around the world regardless of whether delirium is evaluated with the Confusion Assessment Method-ICU or Intensive Care Delirium Screening Checklist. This work was performed at the Radboud University Medical Center, Tufts Medical Center, The Canberra Hospital, Antwerp University Hospital, Erasmus Medical Center, Jeroen Bosch Ziekenhuis, Rigshospitalet, Mount Sinai Hospital, Medisch Spectrum Twente, Hospital Espírito Santo, and University Medical Centre Utrecht. Drs. Wassenaar, Schoonhoven, Donders, Pickkers, and van den Boogaard contributed to study concept and design. Drs. Wassenaar, Devlin, van Haren, Slooter, Jorens, van der Jagt, Simons, Egerod, Burry, and Beishuizen, and Mr. Matos contributed to acquisition of data. Drs. Wassenaar, Donders, and van den Boogaard contributed to statistical analysis. Drs. Wassenaar, Schoonhoven, Donders, Pickkers, and van den Boogaard contributed to analysis and interpretation of data. Dr. Wassenaar contributed to drafting of the article. Drs.
Schoonhoven, Devlin, van Haren, Slooter, Jorens, van der Jagt, Simons, Egerod, Burry, and Beishuizen, Mr. Matos, and Drs. Donders, Pickkers, and van den Boogaard contributed to critical revision of the article for important intellectual content. Drs. Schoonhoven, Pickkers, and van den Boogaard contributed to study supervision. All authors read and approved the final article. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal). Dr. Pickkers received funding from AM-Pharma, Adrenomed, Exponential Biotherapies, and Baxter Consultation (speaking fee). The remaining authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: mark.vandenboogaard@radboudumc.nl
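The area under the receiver operating characteristic curve used throughout this validation has a convenient probabilistic reading: the chance that a randomly chosen delirious patient receives a higher predicted risk than a randomly chosen non-delirious one. A minimal sketch via the equivalent Mann-Whitney pair count, using toy scores rather than either model's predictions:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: label 1 = delirium, higher score = higher predicted risk
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]
print(auroc(labels, scores))  # prints 0.9166666666666666 (11/12 pairs ordered correctly)
```

On this scale, the abstract's values of roughly 0.67-0.75 sit in the moderate-to-good discrimination range, well above the 0.5 of a random score.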
Declining Mortality of Cirrhotic Variceal Bleeding Requiring Admission to Intensive Care: A Binational Cohort Study
Objectives: We aimed to describe changes over time in admissions and outcomes, including length of stay, discharge destinations, and mortality of cirrhotic patients admitted to the ICU for variceal bleeding, and to compare them with the outcomes of those with other causes of ICU admissions. Design: Retrospective analysis of data captured prospectively in the Australian and New Zealand Intensive Care Society Centre for Outcome and Resource Evaluation Adult Patient Database. Setting: One hundred eighty-three ICUs in Australia and New Zealand. Patients: Consecutive admissions to these ICUs for upper gastrointestinal bleeding related to varices in patients with cirrhosis between January 1, 2005, and December 31, 2016. Interventions: None. Measurements and Main Results: ICU admissions for variceal bleeding in cirrhotic patients accounted for 4,003 (0.6%) of all 720,425 nonelective ICU admissions. The proportion of ICU admissions for variceal bleeding fell significantly from 0.8% (83/42,567) in 2005 to 0.4% (53/80,388) in 2016 (p < 0.001). Hospital mortality rate was significantly higher among admissions for variceal bleeding compared with nonelective ICU admissions (20.0% vs 15.7%; p < 0.0001), but decreased significantly over time, from 24.6% in 2005 to 15.8% in 2016 (annual decline odds ratio, 0.93; 95% CI, 0.90–0.96). There was no difference in the reduction in mortality from variceal bleeding over time between liver transplant and nontransplant centers (p = 0.26). Conclusions: Admission rate to ICU and mortality of cirrhotic patients with variceal bleeding have declined significantly over time compared with other causes of ICU admissions, with outcomes comparable between liver transplant and nontransplant centers. Dr. Majeed helped with drafting of the article, interpretation of the data, and study concept. Dr. Majumdar helped with preparation and critical review of the article. Dr.
Bailey helped with data acquisition, statistical analysis, and critical review of the article. Drs. Kemp and Bellomo helped with preparation and critical review of the article. Mr. Pilcher helped with data acquisition, review of the article, interpretation of the data, and study concept. Dr. Roberts helped with preparation and critical review of the article, and study concept. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal). Supported, in part, by grants from the Alfred Hospital Department of Gastroenterology and the Australian and New Zealand Intensive Care Research Centre. The authors have disclosed that they do not have any potential conflicts of interest. For information regarding this article, E-mail: a.majeed@alfred.org.au
Antimicrobial Disposition During Pediatric Continuous Renal Replacement Therapy Using an Ex Vivo Model
Objectives: Little is known on the impact of continuous renal replacement therapy on antimicrobial dose requirements in children. In this study, we evaluated the pharmacokinetics of commonly administered antimicrobials in an ex vivo continuous renal replacement therapy model. Design: An ex vivo continuous renal replacement therapy circuit was used to evaluate drug-circuit interactions and determine the disposition of five commonly used antimicrobials (meropenem, piperacillin, liposomal amphotericin B, caspofungin, and voriconazole). Setting: University research laboratory. Patients: None. Interventions: Antimicrobials were administered into a reservoir containing whole human blood. The reservoir was connected to a pediatric continuous renal replacement therapy circuit programmed for a 10 kg child. Continuous renal replacement therapy was performed in the hemodiafiltration mode and in three phases correlating with three different continuous renal replacement therapy clearance rates: 1) no clearance (0 mL/kg/hr, to measure adsorption), 2) low clearance (20 mL/kg/hr), and 3) high clearance (40 mL/kg/hr). Blood samples were drawn directly from the reservoir at baseline and at 5, 20, 60, and 180 minutes during each phase. Five independent continuous renal replacement therapy runs were performed to assess inter-run variability. Antimicrobial concentrations were measured using validated liquid chromatography-mass spectrometry assays. A closed-loop, flow-through pharmacokinetic model was developed to analyze concentration-time profiles for each drug. Measurements and Main Results: Circuit adsorption of antimicrobials ranged between 13% and 27%. Meropenem, piperacillin, and voriconazole were cleared by the continuous renal replacement therapy circuit and clearance increased with increasing continuous renal replacement therapy clearance rates (7.66 mL/min, 4.97 mL/min, and 2.67 mL/min, respectively, for high continuous renal replacement therapy clearance). 
Amphotericin B and caspofungin had minimal circuit clearance and did not change with increasing continuous renal replacement therapy clearance rates. Conclusions: Careful consideration of drug-circuit interactions during continuous renal replacement therapy is essential for appropriate drug dosing in critically ill children. Antimicrobials have unique adsorption and clearance profiles during continuous renal replacement therapy, and this knowledge is important to optimize antimicrobial therapy. Drs. Purohit and Elkomy contributed equally as first authors of this article. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal). Supported, in part, by a grant from the Child Health Research Institute, Lucile Packard Foundation for Children’s Health and the Stanford Clinical and Translational Science Award UL1 TR001085. Dr. Purohit disclosed that this work was supported by the National Institutes of Health and the Child Health Research Institute, Lucile Packard Foundation for Children’s Health and the Stanford Clinical and Translational Science Award UL1 TR001085. Dr. Drover received funding from Masimo. The remaining authors have disclosed that they do not have any potential conflicts of interest. This work was performed at Stanford University School of Medicine. For information regarding this article, E-mail: drpurohit22@gmail.com
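In a closed-loop circuit like the one described, clearance can be estimated from the mono-exponential decline of reservoir concentrations: if C(t) = C0·e^(-kt) in a well-mixed single compartment, then CL = k·V. The study fitted a formal closed-loop, flow-through pharmacokinetic model; the sketch below is a deliberate simplification with made-up concentrations and an assumed reservoir volume, shown only to illustrate the CL = k·V arithmetic:

```python
import math

def clearance_from_decay(times_min, concs, volume_ml):
    """Estimate clearance (mL/min) assuming first-order decline,
    C(t) = C0 * exp(-k t): k is minus the least-squares slope of
    ln C versus t, and CL = k * V for a well-mixed compartment."""
    logs = [math.log(c) for c in concs]
    n = len(times_min)
    t_mean = sum(times_min) / n
    l_mean = sum(logs) / n
    num = sum((t - t_mean) * (l - l_mean) for t, l in zip(times_min, logs))
    den = sum((t - t_mean) ** 2 for t in times_min)
    k = -num / den  # elimination rate constant (1/min)
    return k * volume_ml

# Hypothetical reservoir concentrations (mg/L) at the study's sampling times,
# with an assumed 1,000 mL reservoir volume
times = [0, 5, 20, 60, 180]
concs = [50.0, 48.1, 42.8, 31.0, 12.9]
cl = clearance_from_decay(times, concs, volume_ml=1000)
print(round(cl, 1))  # → 7.5 (mL/min), same order as the clearances reported above
```

Adsorption, by contrast, shows up as concentration loss in the zero-clearance phase, which is why the study measured that phase separately.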
Implementation of a Bundled Consent Process in the ICU: A Single-Center Experience
Objectives: A bundled consent process, where patients or surrogates provide consent for all commonly performed procedures on a single form at the time of ICU admission, has been advocated as a method for improving both rates of documented consent and patient/family satisfaction, but there has been little published literature about the use of bundled consent. We sought to determine how residents in an academic medical center with a required bundled consent process actually obtain consent and how they perceive the overall value, efficacy, and effects on families of this approach. Design: Single-center survey study. Setting: Medical ICUs in an urban academic medical center. Subjects: Internal medicine residents. Interventions: We administered an online survey about bundled consent use to all residents. Quantitative and qualitative data were analyzed. Measurements and Main Results: One hundred two of 164 internal medicine residents (62%) completed the survey. A majority of residents (55%) reported grouping procedures and discussing general risks and benefits; 11% reported conducting a complete informed consent discussion for each procedure. Respondents were divided in their perception of the value of bundled consent, but most (78%) felt it scared or stressed families. A minority (26%) felt confident that they obtained valid informed consent for critical care procedures with the use of bundled consent. An additional theme that emerged from qualitative data was concern regarding the validity of anticipatory consent. Conclusions: Resident physicians experienced with the use of bundled consent in the ICU held variable perceptions of its value but raised concerns about the effect on families and the validity of consent obtained with this strategy. Further studies are needed to explore what constitutes best practice for informed consent in critical care. Supplemental digital content is available for this article.
Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal). Dr. Stevens is supported by the Agency for Healthcare Research and Quality Grant 5K08HS024288 and a Clinical Scientist Development Award from the Doris Duke Charitable Foundation. The remaining authors have disclosed that they do not have any conflicts of interest. For information regarding this article, E-mail: aanandai@bidmc.harvard.edu
