
Wednesday, July 31, 2019

Technological Pedagogical and Content Knowledge Among Medical Educators: What Is Our Readiness to Teach With Technology?
Purpose: This study aimed to empirically assess medical educator knowledge of pedagogy and technology to inform the direction of medical school faculty development efforts. Method: The Technological Pedagogical and Content Knowledge framework (TPACK) survey is a validated instrument for understanding educators’ knowledge of content (CK), pedagogy (PK), and technology (TK) in teaching. A modified version of the TPACK was administered to medical educators (N = 76) at two public institutions, the University of California, Irvine (UC Irvine) and the University of Colorado (CU). Results: Independent-samples t-tests compared TK with PK and CK within each institution. The mean of TK (UC Irvine: 3.4; CU: 3.4) differed significantly from both the mean PK for a didactic session (UC Irvine: 3.9; CU: 4.4) and the mean PK for a clinical setting (UC Irvine: 4.0; CU: 4.4), P < .01. Similarly, the means of TK and CK (UC Irvine: 4.5; CU: 4.7) differed significantly, P < .01. A Wilcoxon Rank Sum test indicated that the CU PK for a didactic session (mean: 4.4) was greater than the UC Irvine PK for a didactic session (mean: 3.9), P < .01. Similarly, the CU PK for a clinical setting (mean: 4.4) was greater than the UC Irvine PK for a clinical setting (mean: 4.0), P < .01. Conclusions: If medical schools continue to adopt technology within their curricula, there is a clear need for faculty development programs that focus on how medical educators can teach with technology. Acknowledgements: The authors wish to thank Monica McNulty, Senior Professional Research Assistant/Data Analyst at the University of Colorado School of Medicine, for her contributions to the data analysis. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: This study was approved by the institutional review boards of the University of California, Irvine and the University of Colorado as exempt. Correspondence should be addressed to Julie Youm, University of California, Irvine School of Medicine, 836 Health Sciences Road, Irvine, CA 92697; email: jyoum@uci.edu. © 2019 by the Association of American Medical Colleges
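As a quick illustration of the statistical comparisons this abstract names (independent-samples t-tests and a Wilcoxon rank sum test), the Python sketch below runs both with scipy on hypothetical 1-5 Likert ratings. The arrays, sample sizes, and seed are invented stand-ins for illustration only, not the study data.

```python
# Sketch of the comparisons described in the TPACK abstract above.
# All ratings are hypothetical stand-ins, NOT the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tk = rng.integers(1, 6, size=40)        # hypothetical 1-5 TK self-ratings
pk = rng.integers(2, 6, size=40)        # hypothetical 1-5 PK self-ratings

# Within-institution comparison of TK vs. PK (independent-samples t-test)
t_stat, p_val = stats.ttest_ind(tk, pk)
print(f"TK vs. PK: t = {t_stat:.2f}, p = {p_val:.4f}")

# Between-institution comparison of PK (Wilcoxon rank sum test)
pk_other = rng.integers(2, 6, size=36)  # hypothetical second institution
z_stat, p_val = stats.ranksums(pk, pk_other)
print(f"PK site A vs. site B: z = {z_stat:.2f}, p = {p_val:.4f}")
```

The rank sum test is a natural choice for the between-school comparison because Likert ratings are ordinal and need not be normally distributed.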
Learning Conversations: An Analysis of Their Theoretical Roots and Their Manifestations of Feedback and Debriefing in Medical Education
Feedback and debriefing are experience-informed dialogues upon which experiential models of learning often depend. Efforts to understand each have largely been independent of each other, thus splitting them into potentially problematic factions. Given their shared purpose of improving future performance, the authors asked whether efforts to understand these dialogues are, for theoretical and pragmatic reasons, best advanced by keeping these concepts unique, or whether some unifying conceptual framework could better support educational contributions and advancements in medical education. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: Reported as not applicable. Correspondence should be addressed to Walter Tavares, Wilson Centre, 200 Elizabeth St., 1ES-565, Toronto, Ontario, Canada M5G 2C4; telephone: 416-340-3646; email: walter.tavares@utoronto.ca; Twitter: @WalterTava. © 2019 by the Association of American Medical Colleges
Attitudes Towards Physicians Requiring Remediation: One-of-Us or Not-Like-Us?
Purpose: The data for this paper were collected as part of a larger project exploring how the medical profession conceptualizes the task of supporting physicians struggling with clinical competency issues. In this paper, the authors focus on a topic that has been absent in the literature thus far—how physicians requiring remediation are perceived by those responsible for organizing remediation and by their peers in general. Method: Using a constructivist grounded theory approach, the authors conducted semi-structured interviews with 17 remediation stakeholders across Canada. Given that health is a provincial responsibility in Canada, the authors purposively sampled stakeholders from across provincial and language borders, and across the full range of organizations that could be considered as participating in the remediation of practicing physicians. Results: Interviewees expressed mixed, sometimes contradictory, emotions towards and perceptions of physicians requiring remediation. They also noted that their colleagues, including physicians in training, were not always sympathetic to their struggling peers. Conclusions: The medical profession’s attitude towards those who struggle with clinical competency—as individuals and as a whole—is ambivalent at best. This ambivalence grows out of psychological and cultural factors, and may be an undiscussed factor in the profession’s struggle to deal adequately with underperforming members. To contend with the challenge of remediating practicing physicians, the profession needs to address this ambivalence and its underlying causes. Funding/Support: Financial support for the preparation of this study was generously provided by a Medical Education Research Grant from the Royal College of Physicians and Surgeons of Canada. Other disclosures: None reported. Disclaimer: The views expressed in this manuscript are those of the authors and do not necessarily reflect those of the Department of Defense or other American federal agencies. Ethical approval: This study was approved by the University of British Columbia Behavioral Research Ethics Board (protocol no. H16-00529). Previous presentations: Some of these findings were reported at the 2017 meeting of the Association for Medical Education in Europe, Helsinki, Finland, and the 2018 Canadian Conference on Medical Education, Halifax, Nova Scotia, Canada. Correspondence should be addressed to Gisèle Bourgeois-Law, Royal Jubilee Hospital, Coronation Annex, 1952 Bay Street, Victoria, British Columbia V8R 1J8, Canada; email: gisele.bourgeoislaw@ubc.ca. Written work prepared by employees of the Federal Government as part of their official duties is, under the U.S. Copyright Act, a “work of the United States Government” for which copyright protection under Title 17 of the United States Code is not available. As such, copyright does not extend to the contributions of employees of the Federal Government. © 2019 by the Association of American Medical Colleges
What’s Next? Developing Systems of Assessment for Educational Settings
No abstract available
Medical Spanish Standardization in U.S. Medical Schools: Consensus Statement From a Multidisciplinary Expert Panel
Medical Spanish (MS) education is in growing demand from U.S. medical students, providers, and health systems, but there are no standard recommendations for how to structure the curricula, evaluate programs, or assess provider performance or linguistic competence. This gap in medical education and assessment jeopardizes health care communication with Hispanic/Latino patients and poses significant quality and safety risks. The National Hispanic Health Foundation and University of Illinois College of Medicine convened a multidisciplinary expert panel in March 2018 to define national standards for the teaching and application of MS skills in patient-physician communication, establish curricular and competency guidelines for MS courses in medical schools, propose best practices for MS skill assessment and certification, and identify next steps needed for the implementation of the proposed national standards. Experts agreed on the following consensus recommendations: (1) create a Medical Spanish Taskforce to, among other things, define educational standards, (2) integrate MS educational initiatives with government-funded research and training efforts as a strategy to improve Hispanic/Latino health, (3) standardize core MS learner competencies, (4) propose a consensus core curricular structure for MS courses in medical schools, (5) assess MS learner skills through standardized patient encounters and develop a national certification exam, and (6) develop standardized evaluation and data collection processes for MS programs. MS education and assessment should be standardized and evaluated with a robust interinstitutional medical education research strategy that includes collaboration with multidisciplinary stakeholders to ensure linguistically appropriate care for the growing Spanish-speaking U.S. population. Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A724. Funding/Support: The Medical Spanish Summit was funded in part by the National Hispanic Health Foundation, the Josiah Macy Jr. Foundation, Baylor University, the University of Illinois Hospital & Health Sciences System, the Certification Commission for Healthcare Interpreters, and the Research Institute of United States Spanish. Other disclosures: P. Ortega receives author royalties from Saunders (an imprint of Elsevier) for a textbook. N. Pérez receives author royalties from University of Texas Medical Branch. Ethical approval: Reported as not applicable. Disclaimer: The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the authors’ or expert panelists’ respective institutions. Correspondence should be addressed to Pilar Ortega, University of Illinois at Chicago, College of Medicine, 808 S. Wood St., Suite 990, Chicago, IL 60612; email: POrtega1@uic.edu; Twitter: @pilarortegamd. © 2019 by the Association of American Medical Colleges
Future Research in Feedback: How to Use Feedback and Coaching Conversations in a Way That Supports Development of the Individual as a Self-Directed Learner and Resilient Professional
No abstract available
Can Non-Clinician Raters Be Trained to Assess Clinical Reasoning in Post-Encounter Patient Notes?
Purpose: Clinical reasoning is often assessed through patient notes (PN) following standardized patient (SP) encounters. While non-clinicians can score PNs using analytic tools such as checklists, these do not sufficiently encompass the holistic judgments of clinician faculty. To better model faculty judgments, the authors developed checklists with faculty-specified scoring formulas embedded in a spreadsheet, and studied the resulting inter-rater reliability (IRR) of non-clinician raters (SPs and medics) and student pass/fail status. Method: In Study-1 (pilot phase), non-clinician and faculty raters rescored PNs of 55 third-year medical students across 5 cases of the 2017 Graduation Competency Examination (GCE) to determine IRR. In Study-2, non-clinician raters scored all notes of the 5-case 2018 GCE (178 students). Faculty rescored all notes of failing students and could modify formula-derived scores when they deemed it appropriate. Faculty also rescored and corrected scores of additional notes, for a total of 90 notes (3 cases, including failing notes). Results: Mean overall percent exact agreement between non-clinician and faculty ratings was 87% (weighted kappa .86) and 83% (weighted kappa .88) for Study-1 and Study-2, respectively. SP and medic IRRs did not differ significantly. Four students failed the note section in 2018; three passed after faculty corrections. Few corrections were made to non-failing students’ notes. Conclusions: Non-clinician PN raters using checklists and scoring rules may provide a feasible alternative to faculty raters for low-stakes assessments and for the bulk of well-performing students. Faculty effort can be targeted strategically at rescoring the notes of low-performing students and providing more detailed feedback. Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A719. Acknowledgments: The authors thank the standardized patients, skills instructors, and staff of the University of Illinois at Chicago Dr. Allan L. and Mary L. Graham Clinical Performance Center, without whom this study would not have been possible. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: This study was approved by the institutional review board of the University of Illinois at Chicago. Correspondence should be addressed to Rachel Yudkowsky, UIC-COM Department of Medical Education 986 CMET, 808 S. Wood Street, MC 591, Chicago, IL 60612; telephone: 312-996-3598; email: rachely@uic.edu. © 2019 by the Association of American Medical Colleges
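For readers curious about the agreement statistics reported in this abstract, the short sketch below computes percent exact agreement and a weighted kappa on hypothetical paired ratings. The abstract does not say which weighting scheme was used, so linear weights are an assumption here.

```python
# Agreement statistics on hypothetical faculty vs. non-clinician scores.
import numpy as np
from sklearn.metrics import cohen_kappa_score

faculty      = np.array([4, 3, 5, 2, 4, 4, 3, 5, 1, 4])  # hypothetical scores
nonclinician = np.array([4, 3, 5, 3, 4, 4, 3, 5, 2, 4])  # hypothetical scores

exact_agreement = np.mean(faculty == nonclinician)
# Linear weighting is assumed; the abstract does not specify the scheme.
kappa = cohen_kappa_score(faculty, nonclinician, weights="linear")

print(f"Percent exact agreement: {exact_agreement:.0%}")
print(f"Weighted kappa: {kappa:.2f}")
```

Weighted kappa is a sensible complement to simple agreement here because it credits near-misses on an ordinal score scale and corrects for chance agreement.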
A Continuum of Innovation: Curricular Renewal Strategies in Undergraduate Medical Education, 2010-2018
Purpose: Since 2010, medical schools across the United States have engaged in a new cycle of curricular revision and renewal for their undergraduate medical curricula. But what structures, features, and trends have emerged in U.S. medical schools as a result of deliberate curricular redesign efforts? The authors present an analysis of the ways medical schools have reorganized their curricula to prepare students for the growing complexity of medical practice. Method: This study drew from a total pool of 40 U.S. MD-granting programs, of which 25 met the inclusion criteria. The authors used a qualitative coding approach to materials from the undergraduate medical education (UME) program websites to identify four dimensions of strategies that these programs used to renew their curricula. Results: The analysis of the curricular maps and website content of the UME programs provided evidence for a continuum approach to describing innovation strategies: 96% of schools employed a cohort-based linear pathway, 80% used thematic basic science blocks, 47% placed their Step 1 exams outside of the second year, and 68% moved their clerkships to the second year. Conclusion: The Continuum of Innovation strategies can help programs make deliberate curricular changes that are consistent with emerging needs in the field. This study and future research may be useful for UME programs with limited resources by providing consensus practices for planning curricular changes in ways that best serve their institutions. Acknowledgements: The authors wish to thank Cha-Chi Fung, PhD, Anne Vo, PhD, Cathy Jalali, PhD, and Amanda Frataccia, MSMI, for their support and efforts in the creation of this manuscript. Funding/Support: None reported. Other disclosures: None reported. Ethical approval: Reported as not applicable. Correspondence should be addressed to Daniel A. Novak, 1975 Zonal Avenue, Keith Administration Building 214, Los Angeles, CA 90033; telephone: 323-442-0475; email: dnovak@usc.edu. © 2019 by the Association of American Medical Colleges
The Patient Experience Debrief Interview: How Conversations With Hospitalized Families Influence Medical Student Learning and Reflection
Purpose: To determine the effect of patient debrief interviews on pediatric clerkship student depth of reflection and learning. Method: The authors conducted a multi-institutional, mixed-methods, cluster randomized trial among pediatric clerkship students from July 2016 to February 2017. Intervention students completed a debrief interview with a patient-caregiver, followed by a written reflection on the experience. Control students completed a written reflection on a memorable patient encounter. Three blinded authors scored written reflections according to the 4-level REFLECT rubric to determine depth of reflection. Inter-rater reliability was examined using kappa. REFLECT scores were analyzed using a chi-squared test; essays were analyzed using content analysis. Results: Eighty percent of eligible students participated. One hundred eighty-nine essays (89 control, 100 intervention) were scored. Thirty-seven percent of the control group attained reflection and critical reflection, the two highest levels of reflection, compared to 71% in the intervention group; 2% of the control group attained critical reflection, the highest level, compared to 31% in the intervention group (χ2(3, N=189) = 33.9, P < .001). Seven themes were seen across both groups, three focused on physician practice and four focused on patients. Patient-centered themes were more common in the intervention group whereas physician-focused themes were more common in the control group. Conclusions: Patient debrief interviews offer a unique approach to deepen self-reflection through direct dialogue and exploration of patient-caregiver experiences during hospitalization. Acknowledgments: The authors are grateful to additional members of the Medical Education Staff and Pediatric Clerkship Children’s National Medical Center staff who collected reflection essays, conducted focus groups, and blinded and de-identified data, including Wilhelmina Bradford, Kenya Spencer, Rachel Sarnacki, Joyce Campbell, Craig DeWolfe, MD, MEd, Clarissa Dudley, MD, MPH, and Gabrina Dixon, MD, MEd. Funding/Support: The authors gratefully acknowledge educational grant funding from the Council of Medical Student Education in Pediatrics and Association of American Medical Colleges Northeastern Group on Educational Affairs. Ethical approval: Institutional Review Board approval to conduct this study was obtained from Children’s National Medical Center. Previous presentations: The abstract of an earlier version of this article was presented at the Council of Medical Education in Pediatrics annual meeting, April 2018, Denver, CO; Northeastern Group on Educational Affairs annual meeting, April 2018, Hofstra, NY; Pediatric Hospital Medicine annual meeting, July 2017, Nashville, TN; and the Pediatric Academic Societies annual meeting, May 2018, Toronto, Ontario, Canada. Correspondence should be addressed to Ian S. Chua, 111 Michigan Ave NW, Washington, DC, 20010; telephone: 202-476-4143; email: ichua@childrensnational.org. © 2019 by the Association of American Medical Colleges
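The chi-squared comparison reported above can be sketched as follows. The cell counts are hypothetical reconstructions chosen only to match the reported marginals (89 control and 100 intervention essays, roughly 37% vs. 71% at the two highest levels, and 2% vs. 31% at critical reflection); the split between the two lowest levels is guessed, so the statistic comes out near, not exactly at, the reported 33.9.

```python
# 2 groups x 4 REFLECT levels => df = (2-1)*(4-1) = 3, matching the abstract.
# Counts are hypothetical reconstructions, not the published data table.
from scipy.stats import chi2_contingency

#               level 1  level 2  reflection  critical reflection
control      = [   20,      36,      31,          2]   # n = 89
intervention = [    8,      21,      40,         31]   # n = 100

chi2, p, dof, expected = chi2_contingency([control, intervention])
print(f"chi2({dof}, N=189) = {chi2:.1f}, p = {p:.2g}")
```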
Effects of Moving the United States Medical Licensing Examination Step 1 After Core Clerkships on Step 2 Clinical Knowledge Performance
Purpose: To investigate the effect of a change in USMLE Step 1 timing on Step 2 Clinical Knowledge (CK) scores, the effect of lag time on Step 2 CK performance, and the relationship of incoming MCAT score to Step 2 CK performance pre- and post-change. Method: Four LCME-accredited schools that moved Step 1 after core clerkships between academic years 2008–2009 and 2017–2018 were analyzed in a pre-post format. Standard t-tests were used to examine the change in Step 2 CK scores pre- and post-change. Tests of differences in proportions were used to evaluate whether Step 2 CK failure rates differed between curricular change groups. Linear regressions were used to examine the relationships among Step 2 CK performance, lag time, incoming MCAT score, and curricular change group. Results: Step 2 CK performance did not change significantly (P = .20). Failure rates remained highly consistent (pre-change: 1.83%; post-change: 1.79%). The regression indicated that lag time had a significant effect on Step 2 CK performance, with scores declining as lag time increased. The regression also yielded small but significant interaction effects between incoming MCAT score and curricular change group on Step 2 CK performance. Students with lower incoming MCATs tended to perform better on Step 2 CK when Step 1 was after clerkships. Conclusions: Moving Step 1 after core clerkships appears to have had no significant impact on Step 2 CK scores or failure rates, supporting the argument that such a change is noninferior to the traditional model. Students with lower MCAT scores benefit most from the change. Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A725. Acknowledgments: The authors wish to thank Colleen Ward for her instrumental contributions throughout the planning and development of this manuscript. Funding/Support: The University of Michigan School of Medicine, Vanderbilt School of Medicine, and New York University School of Medicine have Accelerating Change in Medical Education grants from the American Medical Association. Virginia Commonwealth University School of Medicine receives funding from the American Medical Association for S.A. Santen’s consultation on the Accelerating Change in Medical Education grant. Other disclosures: D. Jurich, M. Paniagua, and M.A. Barone work for the National Board of Medical Examiners, the organization that administers the United States Medical Licensing Examinations examined in this study. Ethical approval: This study was reviewed and determined to be exempt by the American Institutes for Research (project number EX00398; June 27, 2016). Disclaimer: The views expressed are those of the authors and do not reflect the official policy or position of their universities, the National Board of Medical Examiners, the Department of Defense, the United States Air Force, or the United States Government. Previous presentations: Data contained in Table 1 have been modified from tables in Jurich D, Daniel M, Paniagua M, et al. Moving the United States Medical Licensing Examination Step 1 After Core Clerkships: An Outcomes Analysis. Acad Med. 2019;94:371–377; and Pock A, Daniel M, Santen SA, Swan-Sein A, Fleming A, Harnik V. Challenges Associated With Moving the United States Medical Licensing Examination (USMLE) Step 1 to After the Core Clerkships and How to Approach Them. Acad Med. 2019;94:775–780.
Correspondence should be addressed to Michelle Daniel, University of Michigan Medical School, 6123 Taubman Health Sciences Library, 1135 Catherine St., SPC 5726, Ann Arbor, MI 48109; telephone: (401) 525-0251; email: micdan@med.umich.edu; Twitter:@Emergdoc1975. Written work prepared by employees of the Federal Government as part of their official duties is, under the U.S. Copyright Act, a “work of the United States Government” for which copyright protection under Title 17 of the United States Code is not available. As such, copyright does not extend to the contributions of employees of the Federal Government. © 2019 by the Association of American Medical Colleges
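To make the regression design in the Step 1 timing study concrete, here is a hedged sketch of a comparable model in statsmodels: Step 2 CK score regressed on lag time plus an MCAT-by-curricular-group interaction. The data frame, coefficients, and variable names are synthetic illustrations under assumed scales, not the authors' dataset or code.

```python
# Synthetic illustration of a lag-time + MCAT x curricular-group regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "mcat": rng.normal(510, 6, n),        # hypothetical incoming MCAT scores
    "lag_months": rng.uniform(1, 18, n),  # hypothetical Step 1-to-Step 2 CK lag
    "post_change": rng.integers(0, 2, n), # 1 = Step 1 taken after clerkships
})
# Build a synthetic outcome with a declining lag effect and a small
# interaction favoring lower-MCAT students in the post-change group.
df["step2ck"] = (240 + 0.8 * (df.mcat - 510) - 0.5 * df.lag_months
                 + 0.3 * df.post_change * (510 - df.mcat)
                 + rng.normal(0, 8, n))

model = smf.ols("step2ck ~ lag_months + mcat * post_change", data=df).fit()
print(model.summary().tables[1])  # coefficient table, incl. mcat:post_change
```

In this setup a negative lag_months coefficient mirrors the reported decline with increasing lag, and a negative mcat:post_change term is what "lower-MCAT students benefit more" would look like.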
