Sunday, July 7, 2019

Simulation in Healthcare

Quantification of Student Radiographic Patient Positioning Using an Immersive Virtual Reality Simulation
Introduction Immersive virtual reality (VR) simulation environments facilitate novel ways for users to visualize anatomy and quantify performance relative to expert users. The ability of software to provide positional feedback before a practitioner progresses with subsequent stages of examinations has broad implications for primary and allied healthcare professionals, particularly with respect to health and safety (eg, exposure to x-rays). The effect of training student radiographers (radiology technicians) with a VR simulation environment was quantitatively assessed. Methods Year 1 radiography students (N = 76) were randomly split into 2 cohorts, each of which was trained to perform the same tasks relating to optimal hand positioning for projection x-ray imaging; group 1 was trained using the CETSOL VR Clinic software, whereas group 2 was trained using conventional simulated role-play in a real clinical environment. All participants completed an examination 3 weeks after training. The examination required both posterior-anterior and oblique hand x-ray positioning tasks to be performed on a real patient model. The analysis of images from the examination enabled quantification of the students' performance. The results were contextualized through a comparison with 4 expert radiographers. Results Students in group 1 performed on average 36% better in digit separation (P < 0.001), 11% better in palm flatness (P ≤ 0.001), and 23% better in central ray positioning onto the third metacarpal (P < 0.05). Conclusion A significant difference in patient positioning was evident between the groups; the VR clinic cohort demonstrated improved patient positioning outcomes. The observed improvement is attributed to the inherent task deconstruction and variety of visualization mechanisms available in immersive VR environments. Reprints: Daniel Sapkaroski, BBioMed, MMedRad, Monash University School of Medicine, Nursing and Health Science, Melbourne, Australia (e-mail: daniel.sapkaroski@monash.edu). The authors declare no conflict of interest. The cost of this study was approximately AUD $10,000, funded by a grant awarded by the Victorian Medical Radiation Practitioners Education Trust (VMRPET). The funding allowed a dedicated laboratory to be built, consisting of 5 high-end PCs, 5 Oculus Rift virtual reality headsets, and Leap Motion controllers. Ethics approval was obtained from Monash University Human Research Ethics Committee (MUHREC) CF16/661 – 2016000317. © 2019 Society for Simulation in Healthcare
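The abstract names geometric positioning metrics (digit separation, palm flatness, central ray position) quantified from examination images but does not describe the image-analysis method. Purely as an illustration, here is a minimal numpy sketch of how two such metrics could be computed from hand landmark coordinates; the landmark layout, coordinate conventions, and all values below are hypothetical assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch of positioning metrics like those named in the abstract.
# Landmark layout, units, and values are illustrative assumptions only.
import numpy as np

def digit_separation_angles(mcp, tips):
    """Angles (degrees) between adjacent finger direction vectors.

    mcp  -- (5, 2) metacarpophalangeal joint coordinates, thumb to little finger
    tips -- (5, 2) fingertip coordinates in the same finger order
    """
    vectors = tips - mcp
    units = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    cosines = np.sum(units[:-1] * units[1:], axis=1)  # adjacent-finger pairs
    return np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))

def palm_flatness_rms(points3d):
    """RMS deviation of palm landmarks from their best-fit plane (0 = flat)."""
    centered = points3d - points3d.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]  # smallest right-singular vector
    return float(np.sqrt(np.mean((centered @ normal) ** 2)))

# Illustrative data: a slightly spread hand and a nearly flat palm surface
mcp = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]], dtype=float)
tips = np.array([[-1, 3], [0.8, 4], [2, 4.2], [3.2, 4], [5, 3]], dtype=float)
palm = np.random.default_rng(0).normal(scale=[1, 1, 0.02], size=(20, 3))

print("digit separation (deg):", np.round(digit_separation_angles(mcp, tips), 1))
print("palm flatness RMS:", round(palm_flatness_rms(palm), 4))
```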
Simulation-Based Event Analysis Improves Error Discovery and Generates Improved Strategies for Error Prevention
Introduction An adverse event (AE) is a negative consequence of health care that results in unintended injury or illness. This study investigates whether simulation-based event analysis differs from traditional event analysis in uncovering root causes and generating recommendations when analyzing AEs in hospitalized children. Methods Two simulation scenarios were created based on real-life AEs identified through the hospital's Safety Reporting System. Scenario A involved an error of commission (inpatient drug error), and scenario B involved detecting an error that had already occurred (drug infusion error). Each scenario was repeated 5 times, each time with different volunteer clinicians. Content analysis, using deductive and inductive approaches to coding, was applied to the debriefing data. Causes and recommendations were compiled and compared with those of the traditional event analysis. Results Errors were reproduced in 60% (3/5) of scenario A simulations. In scenario B, participants identified the error in 100% (5/5) of simulations (average time to error detection = 15 minutes). Debriefings identified reasons for errors, including product labeling, memory aid interpretation, and lack of standard work for patient handover. To prevent error, participants suggested improved drug labeling, specialized drug kits, alert signs, and handoff checklists. Compared with traditional event analysis, simulation-based event analysis revealed unique causes for error and new recommendations. Conclusions Using simulation to analyze AEs increased unique error discovery and generated new recommendations. This method differs from traditional event analysis because of the immediate clinician debriefings in the clinical environment. Hospitals should consider simulation-based event analysis as an important addition to the traditional process. Reprints: Anna-Theresa Lobos, MD, FRCPC, Division of Critical Care, Department of Pediatrics, University of Ottawa, CHEO, 401 Smyth Rd, Ottawa, Ontario, K1H 8L1 (e-mail: alobos@cheo.on.ca). The authors declare no conflict of interest. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (www.simulationinhealthcare.com). © 2019 Society for Simulation in Healthcare
Comparing the Learning Effectiveness of Healthcare Simulation in the Observer Versus Active Role: Systematic Review and Meta-analysis
Summary Statement The benefits of observation in simulation-based education in healthcare are increasingly recognized. However, how it compares with active participation remains unclear. We aimed to compare the effectiveness of observation with that of active participation through a systematic review and meta-analysis. Effectiveness was defined using Kirkpatrick's 4-level model, namely, participants' reactions, learning outcomes, behavior changes, and patient outcomes. The peer-reviewed search strategy included 8 major databases and gray literature. Only randomized controlled trials were included. A total of 13 trials were included (426 active participants and 374 observers). There was no significant difference in reactions (Kirkpatrick level 1) to training between groups, but active participants learned (Kirkpatrick level 2) significantly better than observers (standardized mean difference = −0.2, 95% confidence interval = −0.37 to −0.02, P = 0.03). Only one study reported behavior change (Kirkpatrick level 3) and found no significant difference. No studies reported effects on patient outcomes (Kirkpatrick level 4). Further research is needed to understand how to effectively integrate and leverage the benefits of observation in simulation-based education in healthcare. Reprints: Megan Delisle, MD, Department of Surgery, University of Manitoba, Room 344-825 Sherbrook St, Health Sciences Centre, Winnipeg, Manitoba, Canada R3A 1R9 (e-mail: megandelisle@gmail.com). Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (www.simulationinhealthcare.com). © 2019 Society for Simulation in Healthcare
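For context on the statistic reported above, the following is a minimal sketch of how a standardized mean difference (here the bias-corrected Hedges' g) and its 95% confidence interval are computed for a single two-arm trial before pooling. The group means, standard deviations, and sample sizes are hypothetical placeholders, not data from the review.

```python
# Minimal sketch: standardized mean difference (Hedges' g) with a 95% CI
# for one two-arm trial. All numbers below are hypothetical placeholders.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference and its 95% CI."""
    # pooled standard deviation across the two arms
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction factor
    g = j * d
    # standard large-sample variance approximation for g
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    se = math.sqrt(var)
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical example: arm 1 = observers, arm 2 = active participants,
# so a negative g favors active participants (the review's sign convention).
g, ci = hedges_g(m1=72.0, sd1=10.0, n1=30, m2=75.0, sd2=9.0, n2=32)
print(f"g = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```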
PEARLS for Systems Integration: A Modified PEARLS Framework for Debriefing Systems-Focused Simulations
Summary Statement Modern healthcare organizations strive for continuous improvement in systems and processes to ensure safe, effective, and cost-conscious patient care. However, systems failures and inefficiencies lurk in every organization, often emerging only after patients have experienced harm or delays. Simulation and debriefing, focused on identifying systems gaps, can proactively lead to improvements in safety and quality. Systems-focused debriefing requires a different approach than traditional, learner-focused debriefing. We describe PEARLS for Systems Integration, a conceptual framework, debriefing structure, and script that facilitators can use for systems-focused debriefing. The framework builds on Promoting Excellence And Reflective Learning in Simulation (PEARLS), using common debriefing strategies (plus/delta, focused facilitation, and directive feedback) in a modified format, with new debriefing scripts. PEARLS for Systems Integration offers a structured framework, adaptable for debriefing systems-focused simulations, to identify systems issues and maximize improvements in patient safety and quality. Reprints: Mirette Dubé, MSc, Alberta Health Services and University of Calgary, Foothills Medical Centre, 1403 29th Street NW, McCaig Tower, Room 04154, Calgary, AB, T2N 2T9 (e-mail: Mirette.Dube@albertahealthservices.ca). V.G., W.E., and A.C. receive remuneration as faculty for the Debriefing Academy, which teaches debriefing courses. © 2019 Society for Simulation in Healthcare
Development and Evaluation of a Cognitive Aid Booklet for Use in Rapid Response Scenarios
Introduction Rapid response teams (RRTs) have become ubiquitous among hospitals in North America, despite a lack of robust evidence supporting their effectiveness. Many RRTs do not yet use cognitive aids during these high-stakes, low-frequency scenarios, and there are no standardized cognitive aids that are widely available for RRTs on medicine patients. We sought to design an emergency manual to improve resident performance in common RRT calls. Methods Residents from the New York University School of Medicine Internal Medicine Residency Program were asked to volunteer for the study. The intervention group was provided with a 2-minute scripted informational session on cognitive aids as well as access to a cognitive aid booklet, which they were allowed to use during the simulation. Results Resident performance was recorded and scored, using a predefined scoring card, by a physician blinded to the purpose of the study. Residents in the intervention group performed significantly better in the simulated RRT, both by overall score (mean score = 7.33/10 vs 6.26/10, P = 0.02) and by performance on the 2 critical interventions: giving the correct dose of naloxone (89% vs 39%, P < 0.001) and checking the patient's blood glucose level (93% vs 52%, P = 0.001). Conclusions In a simulated scenario of opiate overdose, internal medicine residents who used a cognitive aid performed better on critical tasks than those residents who did not have a cognitive aid. The use of an appropriately designed cognitive aid with sufficient education could improve performance in critical scenarios. Reprints: Charles Madeira, MD, Department of Veterans' Affairs, New York Harbor Healthcare System, Department of Internal Medicine, 423 E 23rd St, New York, NY, 10010 (e-mail: charles.madeira@va.gov). The authors declare conflict of interest. Supported by the Division of Pulmonary and Critical Care Medicine, Department of Veterans' Affairs, New York Harbor Healthcare System. All work was performed at the Department of Veterans' Affairs, New York Harbor Healthcare System. The work should be attributed to the Division of Pulmonary and Critical Care Medicine, Department of Veterans' Affairs, New York Harbor Healthcare System. © 2019 Society for Simulation in Healthcare
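The abstract compares success proportions on the critical interventions between the two groups but does not state which statistical test was used. As an illustration only, here is a minimal sketch of one standard way to test such a difference, Fisher's exact test on a 2×2 table; the counts are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: testing a difference in success proportions between two
# groups with Fisher's exact test. Counts are hypothetical placeholders;
# the abstract does not specify which test the authors used.
from scipy.stats import fisher_exact

# rows: intervention group, control group; columns: (successes, failures)
intervention = (25, 3)   # e.g., 25 of 28 gave the correct naloxone dose
control = (11, 17)       # e.g., 11 of 28 gave the correct dose

odds_ratio, p_value = fisher_exact([intervention, control])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```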
Physician Versus Nonphysician Instruction: Evaluating an Expert Curriculum-Competent Facilitator Model for Simulation-Based Central Venous Catheter Training
Introduction Healthcare simulation supports educational opportunities while maintaining patient safety. To reduce costs and increase the availability of training, a randomized controlled study evaluated central venous catheter (CVC) insertion training in the simulation laboratory with nonphysician competent facilitators (NPCFs) as instructors. Methods A group of learners naive to central line placement participated in a blended curriculum consisting of interactive online materials and simulation-based training. Learners were randomized to training with NPCFs or attending physician faculty. The primary outcome was simulated CVC insertion task performance, graded with a validated checklist by blinded physician reviewers. Learner knowledge and satisfaction were also evaluated. Analysis was conducted using noninferiority testing. Results Eighty-five students, 11 attending physicians, and 7 NPCFs voluntarily participated. Noninferiority testing found the CVC insertion performance of NPCF-trained learners to be noninferior to that of physician-trained learners [null hypothesis of inferiority rejected at an 8% noninferiority margin (P < 0.01)]. In addition, no difference was found between the 2 groups on pre/post knowledge scores, self-reported learner comfort, course satisfaction, or instructor satisfaction. Conclusions An introductory CVC curriculum can be taught to novice learners by carefully trained and supported NPCFs, achieving skill and knowledge outcomes similar to those of learners taught by physicians. Reprints: Andrew Musits, MD, MS, Lifespan Medical Simulation Center, One Hoppin St, Suite 106, Providence, RI 02903 (e-mail: Andrew_Musits@brown.edu). The authors declare no conflict of interest. This study was determined to have exempt status by the Human Research Protection Office at the University of Pittsburgh and approved by the research on medical students committee at the University of Pittsburgh Medical School. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (www.simulationinhealthcare.com). © 2019 Society for Simulation in Healthcare
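The noninferiority analysis above rejects a null hypothesis of inferiority at a prespecified margin. A minimal sketch of one common approach, a one-sided z-test on the difference in mean checklist scores, follows; the scores, sample sizes, and margin handling are assumptions for illustration, not the study's published procedure.

```python
# Minimal sketch: one-sided noninferiority test on the difference in mean
# checklist scores (new-instructor arm minus reference arm). All numbers
# are hypothetical placeholders, not the study's data.
import math
from scipy.stats import norm

def noninferiority_test(mean_new, sd_new, n_new,
                        mean_ref, sd_ref, n_ref, margin):
    """H0: mean_new - mean_ref <= -margin (new arm is inferior).
    Rejecting H0 supports noninferiority."""
    diff = mean_new - mean_ref
    se = math.sqrt(sd_new**2 / n_new + sd_ref**2 / n_ref)
    z = (diff + margin) / se
    p = norm.sf(z)             # one-sided p-value
    lower = diff - 1.645 * se  # one-sided 95% lower confidence bound
    return z, p, lower

# Hypothetical example: checklist scores on a 0-100 scale, 8-point margin
z, p, lb = noninferiority_test(81.0, 12.0, 42, 82.5, 11.0, 43, margin=8.0)
print(f"z = {z:.2f}, one-sided p = {p:.4f}, lower bound = {lb:.1f}")
print("noninferior" if lb > -8.0 else "inconclusive")
```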
Educational Interventions to Enhance Situation Awareness: A Systematic Review and Meta-analysis
Summary Statement We conducted a systematic review to evaluate the comparative effectiveness of educational interventions on health care professionals' situation awareness (SA). We searched MEDLINE, CINAHL, HW Wilson, ERIC, Scopus, EMBASE, PsycINFO, PsycARTICLES, Psychology and Behavioural Science Collection, and the Cochrane Library. Articles were included if they reported a targeted SA intervention, or a broader intervention incorporating SA, together with an objective outcome measure of SA. Thirty-nine articles were eligible for inclusion; of these, 4 reported targeted SA interventions. Simulation-based education (SBE) was the most prevalent educational modality (31 articles). Meta-analysis of trial designs (19 articles) yielded a pooled moderate effect size of 0.61 (95% confidence interval = 0.17 to 1.06, P = 0.007, I² = 42%) in favor of SBE as compared with other modalities and a nonsignificant moderate effect in favor of additional nontechnical skills training (effect size = 0.54, 95% confidence interval = −0.18 to 1.26, P = 0.14, I² = 63%). Though constrained by the number of articles eligible for inclusion, our results suggest that, in comparison with other modalities, SBE yields better SA outcomes. Reprints: Nuala Walshe, RN, MTLHE, School of Nursing and Midwifery, Brookfield Health Science Complex, University College Cork, Cork T12 K8AF, Ireland (e-mail: n.walshe@ucc.ie). The authors declare no conflict of interest. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (www.simulationinhealthcare.com). © 2019 Society for Simulation in Healthcare
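For readers unfamiliar with the pooled effect size and I² heterogeneity statistic quoted above, here is a minimal sketch of DerSimonian-Laird random-effects pooling, a standard method for this kind of meta-analysis (the abstract does not state which pooling model the authors used). The per-study effects and variances are hypothetical placeholders, not the review's data.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of per-study
# effect sizes, with the I^2 heterogeneity statistic. Inputs are
# hypothetical placeholders, not the review's data.
import numpy as np

def random_effects_pool(effects, variances):
    """Pooled effect, its 95% CI, and I^2 (% variance from heterogeneity)."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1 / variances                          # fixed-effect weights
    k = len(effects)
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)         # between-study variance
    w_star = 1 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical example with 5 studies
est, ci, i2 = random_effects_pool(
    effects=[0.9, 0.4, 0.7, 0.2, 0.8],
    variances=[0.05, 0.08, 0.04, 0.10, 0.06])
print(f"pooled = {est:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```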
Virtual Standardized Patient Simulation: Case Development and Pilot Application to High-Value Care
Introduction High-value care (HVC) suggests that good history taking and physical examination should lead to risk stratification that drives the use or withholding of diagnostic testing. This study describes the development of a series of virtual standardized patient (VSP) cases and provides preliminary evidence supporting their ability to provide experiential learning in HVC. Methods This pilot study used VSPs, or natural language processing–based patient avatars, within the USC Standard Patient platform. Faculty consensus was used to develop the cases, including the optimal diagnostic testing strategies, treatment options, and scored content areas. First-year resident physician learners experienced two 90-minute didactic sessions before completing the cases in a computer laboratory, using typed text to interview the avatar for history taking and then completing physical examination, differential diagnosis, diagnostic testing, and treatment modules for each case. Learners chose a primary and 2 alternative “possible” diagnoses from a list of 6 to 7 choices, diagnostic testing options from an extensive list, and treatments from a brief list of 6 to 9 choices. For the history-taking module, both faculty and the platform scored the learners, and faculty assessed the appropriateness of avatar responses. Four randomly selected learner-avatar interview transcripts for each case were double-rated by faculty for interrater reliability calculations. Intraclass correlations were calculated for interrater reliability, and Spearman ρ was used to determine the correlation between the platform and faculty rankings of learners' history-taking scores. Results Eight VSP cases were experienced by 14 learners. Investigators reviewed 112 transcripts (4646 learner query-avatar responses). Interrater reliability means were 0.87 for learner query scoring and 0.83 for avatar response assessment. Mean learner success for history taking was scored by the faculty at 57% and by the platform at 51% (ρ correlation of learner rankings = 0.80, P = 0.02). The mean avatar appropriate response rate was 85.6% across all cases. Learners chose the correct diagnosis within their 3 choices 82% of the time, ordered a median (interquartile range) of 2 (2) unnecessary tests, and completed 56% of optimal treatments. Conclusions Our avatar appropriate response rate was similar to past work using similar platforms. The simulations give detailed insights into the thoroughness of learner history taking and testing choices and, with further refinement, should support learning in HVC. Reprints: William F. Bond, MD, MS, 1306 N Berkeley Ave, Peoria, IL 61603 (e-mail: william.bond@jumpsimulation.org). Author Thomas B. Talbot has a software company, Medical Mechanica, LLC, that provides customizations to the virtual patient platform. Open-source cases authored for this study are available under a Creative Commons 4.0 license. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (www.simulationinhealthcare.com). © 2019 Society for Simulation in Healthcare
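The platform-versus-faculty comparison above uses Spearman ρ on learners' history-taking scores, which the abstract states directly. A minimal sketch of that computation with scipy follows; the two score lists are hypothetical placeholders standing in for the 14 learners' faculty and platform scores.

```python
# Minimal sketch: rank correlation between faculty and platform scoring of
# the same learners. Score lists are hypothetical placeholders, not study data.
from scipy.stats import spearmanr

faculty_scores  = [57, 62, 48, 71, 55, 60, 44, 66, 53, 58, 49, 64, 51, 59]
platform_scores = [51, 58, 45, 66, 50, 57, 40, 60, 47, 52, 46, 61, 44, 55]

rho, p_value = spearmanr(faculty_scores, platform_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```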
Prior Participation in Simulation Events Is Associated With In-Simulation Team Performance Among Emergency Medical Services Professionals
Introduction Prior evidence has supported the use of high-fidelity simulation in initial emergency medical services (EMS) education, but there is a dearth of research on whether experienced EMS professionals can also benefit from it. We sought to examine simulation use and years of practice as predictors of in-simulation team performance among EMS professionals. We hypothesized that both prior participation in simulation events and accumulated years of practice would predict in-simulation performance. Methods This cross-sectional study was conducted as part of a simulation-based EMS competition. Paramedic and physician teams were tested. Participants' years of EMS and healthcare practice and their prior participation in simulation events were assessed with a survey and correlated with performance in the competition. Results Participants were 120 EMS professionals from 51 teams, representing 75% of all competitors. They had on average 8.03 years of healthcare practice and 5.71 years of EMS practice and had previously participated in 4.34 simulation events. Prior participation in simulation events correlated significantly with EMS in-simulation performance at the team level (r = 0.40–0.59). In contrast, neither the years of healthcare practice nor the years of EMS practice significantly predicted in-simulation team performance. Furthermore, there was no interaction of simulation use and years of practice. Conclusions The benefits of simulation use are not limited to initial EMS education but extend to experienced professionals as well. Even individuals who have been working in the field for many years may benefit from high-fidelity simulation. Future research should examine whether this also translates into better clinical performance. Reprints: Peter Gröpel, PhD, Department of Applied Psychology: Work, Education, and Economy, University of Vienna, Universitaetsstrasse 7, 1010 Vienna, Austria (e-mail: peter.groepel@univie.ac.at). The authors declare no conflict of interest. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (www.simulationinhealthcare.com). © 2019 Society for Simulation in Healthcare
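A team-level correlation like the one reported above implies aggregating individual survey responses within teams before correlating them with team performance. As an illustration only, here is a minimal sketch of that aggregation step followed by a Pearson correlation; the team IDs, survey values, and performance scores are hypothetical placeholders, not study data.

```python
# Minimal sketch: aggregate individual survey responses to team means, then
# correlate with team performance. All values are hypothetical placeholders.
from collections import defaultdict
from scipy.stats import pearsonr

# (team_id, prior_simulation_events) for individual participants
participants = [(1, 3), (1, 5), (2, 8), (2, 6), (3, 1), (3, 2), (4, 7), (4, 9)]
team_scores = {1: 62.0, 2: 78.5, 3: 50.0, 4: 81.0}  # performance per team

by_team = defaultdict(list)
for team_id, events in participants:
    by_team[team_id].append(events)

team_means = {t: sum(v) / len(v) for t, v in by_team.items()}
x = [team_means[t] for t in sorted(team_scores)]
y = [team_scores[t] for t in sorted(team_scores)]

r, p = pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.3f}")
```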
Training Cesarean Section: A Scoping Review
Summary Statement This scoping review examines the existing literature on educational strategies in the training of cesarean section. A systematic search was carried out in relevant databases, identifying 28 studies for inclusion. Thematic analysis revealed the following training strategies: simulation-based training (team training, in situ training, technical training), simulators (low-fidelity simulators, high-fidelity simulators), clinical training, e-learning or videos, classroom-based learning (lectures, small groups), and assessment (assessment programs/interventions, assessment of learners). Simulation-based training provides a unique milieu for training in a safe and controlled environment. Simulation-based team training is widely accepted and used in obstetrics and improves nontechnical skills, which are important in emergency cesarean section. High-fidelity simulators are advanced and realistic, but because of their expense, low-fidelity simulators may provide a reasonable alternative for training surgical skills. The literature on training and assessment of surgical skills in relation to cesarean section is sparse, and more studies are warranted. Reprints: Diana B. Zetner, MD, Copenhagen Academy for Medical Education and Simulation, Capital Region, Blegdamsvej 9, 2200 Copenhagen, Denmark (e-mail: diana.zetner@gmail.com). The study was performed at Copenhagen Academy for Medical Education and Simulation, Rigshospitalet, Capital Region, Blegdamsvej 9, 2200 Copenhagen, Denmark. The Independent Research Fund Denmark and Ebba Celinders Foundation made this project possible by funding a research year in which this review was performed. Equipment was provided by Copenhagen Academy for Medical Education and Simulation, Capital Region, Copenhagen, Denmark. The authors declare no conflict of interest. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (www.simulationinhealthcare.com). © 2019 Society for Simulation in Healthcare
