
Thursday, October 3, 2019

Patient Satisfaction With Medical Student Participation in a Longitudinal Integrated Clerkship: A Controlled Trial
Purpose: To determine whether longitudinal student involvement improves patient satisfaction with care. Method: The authors conducted a satisfaction survey of patients followed by 10 students enrolled in 2016-2017 in the Veterans Affairs Longitudinal Undergraduate Medical Education (VALUE) program, a longitudinal integrated clerkship (LIC) at the Minneapolis Veterans Health Care System. Students were embedded in an ambulatory practice with primary preceptors who assigned students a panel of 14 to 32 patients to follow longitudinally in inpatient and outpatient settings. Control patients, matched on disease severity, were chosen from the preceptor’s panel. Two to five months after the students completed the VALUE program, the authors conducted a phone survey of the VALUE and control patients using a validated, customized questionnaire. Results: Results are reported from 97 VALUE patients (63% response rate) and 72 controls (47% response rate) who had similar baseline characteristics. Compared to control patients, VALUE patients reported greater satisfaction with explanations provided by their health care provider, their provider’s knowledge of their personal history, and their provider’s looking out for their best interests (P < 0.05). Patients in the VALUE panel selected the top category more often than control patients for overall satisfaction with their health care (65% vs 43%, P < 0.05). Conclusions: The results of this controlled trial demonstrate that VALUE student longitudinal participation in patient care improves patient satisfaction and patient-perceived quality of health care for VALUE patients compared to controls matched by primary care provider and disease severity. These findings may have implications outside the Veterans Administration population. Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A759. Acknowledgements: The authors wish to thank Dr. Mark Rosenberg, Dr. Areef Ishani, and Dr. 
Anne Pereira for critically reviewing this report. The authors also would like to thank Dr. Julia Langer for conducting focus groups, interns from the University of Minnesota School of Public Health for conducting interviews, and Dr. Tom Rector for assisting with survey design. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: The Minneapolis VA Health Care System Institutional Review Board reviewed the study design and deemed it program evaluation and thus exempt from full review. Disclaimer: The views in the report are those of the authors and do not necessarily represent the views of the Veterans Administration or the United States government. Correspondence should be addressed to Nacide G. Ercan-Fang, One Veterans Drive, Minneapolis, MN 55417; telephone: (612) 725-2085; email: ercan001@umn.edu; alternate email: Nacide.Ercan-Fang@va.gov. Written work prepared by employees of the Federal Government as part of their official duties is, under the U.S. Copyright Act, a “work of the United States Government” for which copyright protection under Title 17 of the United States Code is not available. As such, copyright does not extend to the contributions of employees of the Federal Government. © 2019 by the Association of American Medical Colleges
A Signal Through the Noise: Do Professionalism Concerns Impact the Decision-Making of Competence Committees?
Purpose: To characterize how professionalism concerns influence individual reviewers’ decisions about resident progression using simulated competence committee (CC) reviews. Method: In April 2017, the authors conducted a survey of 25 Royal College of Physicians and Surgeons of Canada emergency medicine residency program directors and senior faculty who were likely to function as members of a CC (or equivalent) at their institution. Participants took a survey with 12 resident portfolios, each containing hypothetical formative and summative assessments. Six portfolios represented residents progressing as expected (PAE) and 6 represented residents not progressing as expected (NPAE). A professionalism variable (PV) was developed for each portfolio. Two counterbalanced surveys were developed in which 6 portfolios contained a PV and 6 portfolios did not (for each PV condition, 3 portfolios represented residents PAE and 3 represented residents NPAE). Participants were asked to make progression decisions based on each portfolio. Results: Without PVs, the consistency of participants giving scores of 1 or 2 (i.e., little or no need for educational intervention) to residents PAE and to those NPAE was 92% and 10%, respectively. When a PV was added, the consistency decreased by 34% for residents PAE and increased by 4% for those NPAE (P = .01). Conclusions: When reviewing a simulated resident portfolio, individual reviewer scores for residents PAE were responsive to the addition of professionalism concerns. Considering this, educators using a CC should have a system to report, collect, and document professionalism issues. Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A752. Funding/Support: This study was funded (grant number: 2016-SP-22) by the Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada. Other disclosures: None reported. 
Ethical approval: This study (protocol number: 20170202) was approved on June 8, 2017, by the Ottawa Health Science Network Research Ethics Board. Correspondence should be addressed to Scott Odorizzi, Department of Emergency Medicine, University of Ottawa, 1053 Carling Ave., E-Main, Room EM-206, Box 227, Ottawa, Ontario, K1Y 4E9, Canada; telephone: 1-613-798-5555, ext. 19255; email: SOdorizzi@toh.ca. © 2019 by the Association of American Medical Colleges
Clinical Reasoning as a Core Competency
Diagnostic error is a challenging problem; addressing it effectively will require innovation across multiple domains of health care, including medical education. Diagnostic errors often relate to problems with clinical reasoning, which involves the cognitive and relational steps up to and including establishing a diagnostic and therapeutic plan with a patient. However, despite a call from the National Academies of Sciences, Engineering, and Medicine for medical educators to improve the teaching and assessment of clinical reasoning, the creation of explicit, theory-informed clinical reasoning curricula, faculty development resources, and assessment tools has proceeded slowly in both undergraduate and graduate medical education. To accelerate the development of this critical element of health professions education, and to promote needed research and innovation in clinical reasoning education, the Accreditation Council for Graduate Medical Education (ACGME) should revise its core competencies to include clinical reasoning. The core competencies have proven to be an effective means of expanding educational innovation across the United States and ensuring buy-in across a diverse array of institutions and disciplines. Reformulating the ACGME core competencies to include clinical reasoning would spark much-needed educational innovation and scholarship in graduate medical education, as well as collaboration across institutions in this vital aspect of physicianship, and ultimately, could contribute to a reduction of patient suffering by better preparing trainees to build individual, team-based, and system-based tools to monitor for and avoid diagnostic error. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: Reported as not applicable. Disclaimer: The views expressed are those of the authors and not necessarily those of the Department of Defense, Veterans Affairs, or other federal agencies. Correspondence should be addressed to Denise M.
Connor, San Francisco VA Medical Center, 4150 Clement St., San Francisco, CA 94121; telephone: (415) 221-4810, ext. 26427; email: denise.connor@ucsf.edu; Twitter: @Denise_M_Connor. Written work prepared by employees of the Federal Government as part of their official duties is, under the U.S. Copyright Act, a “work of the United States Government” for which copyright protection under Title 17 of the United States Code is not available. As such, copyright does not extend to the contributions of employees of the Federal Government. © 2019 by the Association of American Medical Colleges
Is Academic Medicine Ready for Term Limits?
The use of term limits in politics and business has been proposed as a means to refresh leadership, encourage innovation, and decrease gender and racial disparities in positions of power. Many U.S. states and the executive boards of businesses have incorporated them into their constitutions and bylaws; however, studies in politics and business have shown that implementing term limits has mixed results. Specifically, research in politics has shown that term limits have had a minimal effect on the number of women and minorities elected to office, while research in business indicates term limits do increase innovation. Additionally, term limits may have unintended negative consequences, including inhibiting individuals from developing deep expertise in a specific area of interest and destabilizing institutions that endure frequent turnover in leaders. Given this conflicting information, it is not surprising that academic medical centers (AMCs) in the United States have not widely incorporated term limits for those holding positions of power, including deans, presidents, provosts, and department heads. Notably, a few AMCs have incorporated such limits for some positions, and faculty have viewed these positively for their ability to shape a more egalitarian and collaborative culture. Drawing on studies from academic medicine, politics, and business, the author examines arguments both for and against instituting term limits at AMCs. The author concludes that despite strong arguments against term limits, they deserve attention in academic medicine, especially given their potential to help address gender and racial disparities and to encourage innovation. Acknowledgments: The author wishes to thank Dr. Alex Foster and Dr. Michelle Noelck for their helpful suggestions and edits. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: Reported as not applicable. Correspondence should be addressed to Jared P.
Austin, Mail code: CDRC, 707 SW Gaines St., Portland, Oregon 97239; telephone: (503) 494-6300; email: austinja@ohsu.edu. © 2019 by the Association of American Medical Colleges
Teaching Motivational Interviewing to Medical Students: A Systematic Review
Purpose: Medical students must be prepared to work with patients with maladaptive health behaviors and chronic health conditions. Motivational interviewing (MI) is an evidence-based, patient-centered, directive communication style designed to help patients address behaviors that are detrimental to their health (e.g., substance abuse, poor diet). In this study, the authors systematically reviewed the evidence pertaining to MI curricula in medical schools. Their aims were to describe the pedagogical and content-related features of MI curricular interventions and to assess the effectiveness of the interventions and the quality of the research evidence. Method: In March 2019, the authors searched databases, seeking studies on MI in medical schools. They manually extracted descriptive information, used the Medical Education Research Study Quality Instrument (MERSQI) to assess the quality of the included studies, and synthesized the included studies’ results. Results: Sixteen studies met inclusion criteria. The majority of included studies were pre-post evaluation designs; the most rigorous were randomized controlled trials. MI curricula were heterogeneous, varying in timing, content, pedagogical approaches, and outcomes measured. Conclusions: The results of this review suggest that the implementation of MI curricula in medical schools can be feasible and effective, and that students can achieve beginning levels of proficiency. The results support the inclusion of MI in undergraduate medical education curricula and highlight next steps to advance this area of medical education research: achieving consensus around essential early MI skills that should be taught in medical schools and identifying the most effective scaffolding strategies to teach this complex mode of communication. Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A754. Acknowledgments: The authors wish to thank C. 
Scott Dorris for his guidance in conducting the literature search and Drs. Carrie Chen and Ming-Jong Ho for their very helpful critiques of an earlier version of this report. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: Reported as not applicable. Data: All data for this review were obtained from published manuscripts. Correspondence should be addressed to Dr. Stacey Kaltman, Department of Psychiatry, Georgetown University School of Medicine, 2115 Wisconsin Avenue, NW, Suite 120, Washington, DC 20007; telephone: (202) 687-6571; email: sk279@georgetown.edu. © 2019 by the Association of American Medical Colleges
The Not Underrepresented Minorities: Asian Americans, Diversity, and Admissions
Several lawsuits have recently been filed against U.S. universities; the plaintiffs contend that considerations of race and ethnicity in admissions decisions discriminate against Asian Americans. In prior cases brought by non-Latino whites, the U.S. Supreme Court has upheld these considerations, arguing that they are crucial to a compelling interest to increase diversity. The dissenting opinion, however, concerns the possibility that such policies disadvantage Asian Americans, who are considered overrepresented in higher education. Here, the authors explain how a decision favoring the plaintiffs would affect U.S. medical schools. First, eliminating race and ethnicity in holistic review would undermine efforts to diversify the physician workforce. Second, the restrictions on considering race/ethnicity in admissions decisions would not remedy potential discrimination against Asian Americans that arises from implicit biases. Third, such restrictions would exacerbate the difficulty of addressing the diversity of experiences within Asian American subgroups, including recognizing those who are underrepresented in medicine. The authors propose that medical schools engage Asian Americans in diversity and inclusion efforts and recommend the following strategies: incorporate health equity into the institutional mission and admissions policies; disaggregate data to identify underrepresented Asian subgroups; include Asian Americans in diversity committees and support faculty who make diversity work part of their academic portfolio; and enhance the Asian American faculty pipeline through support and mentorship of students. Asian Americans will soon comprise one-fifth of the U.S. physician workforce and should be welcomed as part of the solution to advancing diversity and inclusion in medicine, not cast as the problem. Acknowledgments: The University of California Davis Office of Diversity, Equity and Inclusion supports Dr.
Ton’s efforts in promoting policies and practices that foster a climate of inclusion and diversity. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: Reported as not applicable. Correspondence should be addressed to Michelle Ko, Department of Public Health Sciences, Medical Sciences 1C, Rm 127, One Shields Avenue, Davis, CA, 95616; telephone: (530) 752-7709; email: mijko@ucdavis.edu; Twitter: @michelleko2d. © 2019 by the Association of American Medical Colleges
Evaluating a Center for Interprofessional Education via Social Network Analysis
Centers and institutes are created to support interdisciplinary collaboration. However, all centers and institutes face the challenge of how best to evaluate their impact since traditional counts of productivity may not fully capture the interdisciplinary nature of this work. The authors applied techniques from social network analysis (SNA) to evaluate the impact of a center for interprofessional education (IPE), a growing area for centers because of the global emphasis on IPE. The authors created networks based on the connections between faculty involved in programs supported by an IPE center at Virginia Commonwealth University from 2014 to 2017. They used mathematical techniques to describe these networks and the change in the networks over time. The results of these analyses demonstrated that, while the number of programs and involved faculty grew, the faculty maintained a similar density of connections among members. Additional faculty clusters emerged, and certain key faculty were important connectors between clusters. The analysis also confirmed the interprofessional nature of faculty collaboration within the network. SNA added important evaluation data beyond typical metrics such as counts of learners or faculty. This approach demonstrated how a center was evolving and what strategies might be needed to support further growth. With further development of benchmarks, SNA could be used to evaluate the effectiveness of centers and institutes relative to each other. SNA should guide strategic decisions about the future of centers and institutes as they strive to meet their overarching goal of tackling a social challenge through interdisciplinary collaboration. Acknowledgments: None. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: Reported as not applicable. Disclaimers: None.
Previous presentations: Portions of this work were presented at the Collaborating Across Borders conference in Banff, Canada, October 3, 2017; the All Together Better Health conference in Auckland, New Zealand, September 4, 2018; and the Emswiller Interprofessional Symposium in Richmond, Virginia, February 3, 2018. Correspondence should be addressed to Alan W. Dow, Virginia Commonwealth University Center for Interprofessional Education and Collaborative Care, Box 980071, Richmond, VA 23298-0071; telephone: (804) 828-2898; e-mail: alan.dow@vcuhealth.org. © 2019 by the Association of American Medical Colleges
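The network measures this abstract describes (overall connectedness, clusters, and key "connector" faculty) correspond to standard SNA metrics: graph density and betweenness centrality. The sketch below is purely illustrative, not the authors' analysis; it uses a small hypothetical co-participation network (an edge links two faculty who worked on the same IPE program) and computes both metrics with Brandes' algorithm, using only the Python standard library.

```python
from collections import deque

def density(G):
    """Fraction of possible ties present in an undirected graph
    (G is a dict: node -> set of neighbors)."""
    n = len(G)
    m = sum(len(nbrs) for nbrs in G.values()) // 2  # each edge counted twice
    return 2 * m / (n * (n - 1)) if n > 1 else 0.0

def betweenness(G):
    """Unnormalized betweenness centrality via Brandes' algorithm.
    High scores mark 'connector' members who bridge clusters."""
    bc = {v: 0.0 for v in G}
    for s in G:
        stack, pred = [], {v: [] for v in G}
        sigma = {v: 0 for v in G}; sigma[s] = 1      # shortest-path counts
        dist = {v: -1 for v in G}; dist[s] = 0
        queue = deque([s])
        while queue:                                  # BFS from source s
            v = queue.popleft()
            stack.append(v)
            for w in G[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in G}
        while stack:                                  # accumulate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}          # undirected: halve

# Hypothetical network: faculty A, B, C form one cluster; C bridges to D and E.
faculty_net = {
    "A": {"B", "C"}, "B": {"A", "C"},
    "C": {"A", "B", "D"}, "D": {"C", "E"}, "E": {"D"},
}
print(density(faculty_net))            # 0.5: half of all possible ties exist
bc = betweenness(faculty_net)
print(max(bc, key=bc.get))             # "C", the connector between clusters
```

Tracking these two numbers over time mirrors the abstract's findings: growth in members with stable density, plus a few high-betweenness faculty holding the clusters together.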
Validity Evidence for a Knowledge Assessment Tool for a Mastery Learning Scrub Training Curriculum
Purpose: To examine the validity evidence for a scrub training knowledge assessment tool to demonstrate the utility and robustness of a multimodal, entrustable professional activity (EPA)-aligned, mastery learning scrub training curriculum. Method: Validity evidence was collected for the knowledge assessment used in the scrub training curriculum at Stanford University School of Medicine from April 2017–June 2018. The knowledge assessment had 25 selected response items that mapped to curricular objectives, EPAs, and operating room policies. A mastery passing standard was established using the Mastery Angoff and Patient-Safety approaches. Learners were assessed pre-curriculum, post-curriculum, and 6 months after the curriculum. Results: From April 2017–June 2018, 220 medical and physician assistant students participated in the scrub training curriculum. The mean pre- and post-curriculum knowledge scores were 74.4% (SD = 15.6) and 90.1% (SD = 8.3), respectively, yielding a Cohen’s d = 1.10, P <.001. The internal reliability of the assessment was 0.71. Students with previous scrub training performed significantly better on the pre-curriculum knowledge assessment than those without previous training (81.9% [SD = 12.6] vs. 67.0% [SD = 14.9]; P <.001). The mean item difficulty was 0.74, and the mean item discrimination index was 0.35. The Mastery Angoff overall cut score was 92.0%. Conclusions: This study describes the administration of and provides validity evidence for a knowledge assessment tool for a multimodal, EPA-aligned, mastery-based curriculum for scrub training. The authors support the use of scores derived from this test for assessing scrub training knowledge among medical and physician assistant students. Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A753. 
Acknowledgments: The authors wish to thank perioperative services at Stanford Hospital and Clinics, especially Susan Gates, RN, BSN, whose ongoing commitment to student education and patient safety are exemplary. Funding/Support: The production of the scrub training curriculum video was funded by the Stanford Teaching and Mentoring Academy Innovations Grant at the Stanford University School of Medicine. Other disclosures: None reported. Ethical approval: The institutional review boards at Stanford University and the University of Illinois at Chicago approved this study. Previous presentations: Results from this study were previously presented at the 19th Annual Master of Health Professions Education (MHPE) Summer Conference, Chicago, Illinois, July 2018. This study was submitted as a master thesis at the University of Illinois at Chicago and accepted for partial requirement for the Master of Health Professions Education degree. Correspondence should be addressed to Brittany N. Hasty, Department of Surgery, Stanford University School of Medicine, 300 Pasteur Dr., H3591B, Stanford, CA, 94305; telephone: (650) 724-6490; email: bhasty@stanford.edu; Twitter: @brittnhasty. © 2019 by the Association of American Medical Colleges
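The effect size reported in the abstract above can be illustrated with the standard pooled-SD formula for Cohen's d, using the reported pre- and post-curriculum means and SDs. This is a formula illustration only: simple pooling gives roughly 1.26 here, whereas the authors report d = 1.10, presumably because their computation accounts for the paired pre/post design (e.g., using the SD of score differences).

```python
from math import sqrt

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d with a simple pooled standard deviation
    (assumes equal group sizes)."""
    pooled_sd = sqrt((sd1**2 + sd2**2) / 2)
    return (mean2 - mean1) / pooled_sd

# Reported pre- and post-curriculum knowledge scores: 74.4 (SD 15.6)
# and 90.1 (SD 8.3)
d = cohens_d(74.4, 15.6, 90.1, 8.3)
print(round(d, 2))  # 1.26 under simple pooling
```

By the usual rule of thumb (0.2 small, 0.5 medium, 0.8 large), either value indicates a large pre-to-post gain.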
Illuminating Shadows: The Power of Learning by Observing
“Shadowing” refers to the practice of a student following an instructor. Although the term implies less light, rather than more, shadowing as an instructional modality in medical education can illuminate, stimulate, and move students to emulate what they might otherwise never observe. In this Invited Commentary, the author reports on a teaching encounter on the wards in which shadowing was the principal modality used. This modality is situated within the spectrum of approaches to learning—from experiential learning to passive learning. Based upon personal experience as both a teacher and learner, the author identifies the unique value of shadowing, including its value in influencing career choice. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: Reported as not applicable. Correspondence should be addressed to Jack Ende, MD, MACP, Penn Medicine, 3400 Spruce St., 5034 W. Gates Pavilion, Philadelphia, PA 19104; telephone: (215) 614-0928; email: jack.ende@uphs.upenn.edu. © 2019 by the Association of American Medical Colleges
Using Team Census Caps to Optimize Education, Patient Care, and Wellness: A Survey of Internal Medicine Residency Program Directors
Purpose: To discover whether internal medicine (IM) residency program directors use lower-than-required caps on general medicine wards, critical care units, and inpatient subspecialty wards, describe justifications for lower-than-required general medicine ward caps and strategies for when caps have been exceeded or the number of patients is a detriment to critical thinking or education, and assess whether caps were associated with program characteristics. Method: From August–December 2016, the Association of Program Directors in Internal Medicine surveyed all member program directors about team caps and their effects on the learning environment. Responses were appended with publicly available or licensed third-party data. Programs were categorized by type, size, and region. Results: Overall response rate was 65.7% (251/382 programs). Nearly all (244/248; 98.4%) reported caps for general medicine ward teams (mean = 17.0 [SD = 4.2]). Fewer (171/247; 69.2%) had caps for critical care (mean = 13.8 [SD = 5.4]). Fewer still (131/225; 58.2%) had caps for subspecialty teams (mean = 14.8 [SD = 6.0]). Fewer 1st quartile programs (0–28 residents) reported having caps on inpatient subspecialty teams (P < .001). Directors reported higher caps compromised education (109/130; 83.8%), patient care (89/130; 68.5%), and/or resident wellness (77/130; 59.2%). Nonteaching services (181/249; 72.7%), patient transfers (110/249; 44.2%), or “back up” residents (67/249; 26.9%) were used when caps were reached or the number of patients was detrimental to critical thinking or education. Conclusions: IM program directors frequently exercise discretion when setting caps. Accrediting bodies should explicitly encourage such adjustments and allow differentiation by setting. Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A756.
Acknowledgments: The authors wish to thank the Mayo Clinic Survey Research Center and the Association of Program Directors in Internal Medicine (APDIM) Survey Committee. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: This study was exempted from human subjects research review by both the Mayo Clinic Institutional Review Board (ID#: 08-007125) and The George Washington University Institutional Review Board (ID# 101757). Data: The number of filled residency programs was obtained from publicly available data (the Accreditation Council for Graduate Medical Education Accreditation Data System, https://apps.acgme.org/ads/Public/Programs/Search). Residency program type was obtained from the American Medical Association (AMA) Fellowship and Residency Electronic Interactive Database Access (FREIDA) system, for which the Alliance for Academic Internal Medicine (the legal entity under which APDIM exists) obtains a data license and user agreement to receive select variables from the AMA and then match residency programs from its survey database to the FREIDA data, prior to deidentifying the survey results. For the 2016 survey data, the data license was executed on January 11, 2017. Geographic region of the residency program was based on U.S. Census Bureau data online, which are publicly available online (https://www.census.gov/geographies/reference-files/2016/demo/popest/2016-fips.html) and which were entered into the survey database prior to deidentifying the results. Correspondence should be addressed to Heather S. Laird-Fick, 965 Fee Rd., A106 East Fee Hall, East Lansing, MI 48824; telephone: (517) 355-0264; email: lairdhea@msu.edu. © 2019 by the Association of American Medical Colleges
