Perspective

RMMJ Rambam Maimonides Medical Journal Rambam Health Care Campus 2022; 13(3): e0023. ISSN: 2076-9172
Published online 2022 July 31. doi: 10.5041/RMMJ.10480

Quality Assurance of Undergraduate Medical Education in Israel by Continuous Monitoring and Prioritization of the Accreditation Standards

Jochanan Benbassat, M.D.,1* Reuben Baumal, M.D.,2 and Robert Cohen, Ph.D.3

1Department of Medicine (retired), Hadassah—Hebrew University Medical Centre, Jerusalem, Israel
2Department of Laboratory Medicine and Pathobiology (retired), University of Toronto, Toronto, Ontario, Canada
3Center of Medical Education (retired), Hebrew University—Hadassah Faculty of Medicine, Jerusalem, Israel

*To whom correspondence should be addressed. E-mail: Jochanan.bengassag@gmail.com

Abstract

External accreditation reviews of undergraduate medical curricula play an important role in their quality assurance. However, these reviews occur only at 4–10-year intervals and are not optimal for the immediate identification of problems related to teaching. Therefore, the Standards of Medical Education in Israel require medical schools to engage in continuous, ongoing monitoring of their teaching programs for compliance with accreditation standards. In this paper, we propose the following: (1) that this monitoring be assigned to independent medical education units (MEUs), rather than to an infrastructure of the dean’s office, and that such MEUs be part of the school governance and draw their authority from university institutions; and (2) that the differences in the importance of the accreditation standards be addressed by discerning among the “most important” standards that have been shown to improve student well-being and/or patient health outcomes; “important” standards associated with student learning and/or performance; “possibly important” standards with face validity or conflicting evidence for validity; and “least important” standards that may lead to undesirable consequences. According to this proposal, MEUs will evolve into entities dedicated to ongoing monitoring of the education program for compliance with accreditation standards, with the authority to implement interventions. Hopefully, this will provide MEUs and faculty with the common purpose of meeting accreditation requirements, and an agreed-upon prioritization of accreditation standards will improve their communication and recommendations to faculty.

Keywords: Accreditation of medical schools, Israel, medical education, quality assurance

INTRODUCTION

To assure and improve the quality of their undergraduate programs, medical schools have established departments/offices for education (referred to as medical education units, MEUs),1 and most countries conduct periodic external reviews to ascertain that medical schools meet predetermined accreditation standards.2–5 In Israel, the Council for Higher Education is the official national authority on issues related to higher education. It evaluates the quality of teaching at the various institutions, including the six medical schools, using the Standards of Medical Education in Israel (SMEI).5

The function, structure, and staffing of MEUs vary among medical schools in the West. In addition to quality assurance, MEUs are expected to contribute to other aspects of education, such as integration of new methods of instruction and their evaluation, enhancing scholarly activity, creating and monitoring the institution’s vision and mission, and recommending reforms and innovations. Some MEUs are subdivisions of the office of the Dean; others are independent academic departments; and still others, such as in Sydney, Australia,6 have evolved into centers or schools of public health. In 2008, North American MEUs employed on average five professional and faculty staff who were supported by university funds, research and training grants, and contracts with other institutions.7

The external reviews for accreditation require medical schools to perform a self-evaluation of their programs. This self-evaluation helps the accreditation committee prepare for site visits that include reviews of documentation, inspections, and meetings with faculty and students. After visits, the committee provides the deans with its initial findings, and, several months later, with final recommendations. Although its main purpose is to improve the educational processes in medical schools, accreditation is subject to several types of uncertainties and criticism.

Firstly, accreditation visits in North America occur at 4–10-year intervals and do not identify problems promptly as they occur.8

Secondly, the accreditation standards are not equally important. Experts reportedly agreed that only 14 of the 150 standards of the World Federation for Medical Education (WFME) were essential, and disagreed regarding the importance of the remaining standards.9 The UK General Medical Council (GMC) and the WFME distinguish between standards that “must” and those that “should” be met. The Liaison Committee on Medical Education (LCME) discerns between standards that, if not complied with, place a teaching program at “immediate” or at “lesser” risk. Nevertheless, we know of no agreed-upon taxonomy of standards, and consequently the differences in their importance do not figure meaningfully in the accreditation process.

Thirdly, a 2021 review of the literature of the impact of accreditation on medical teachers indicated that even though faculty and students recognized the merits of accreditation (e.g. switching to active learning), they also recognized its unintended negative consequences (e.g. faculty distraction from teaching in favor of accreditation bureaucracy). Faculty and students thought that a dedicated unit overseeing the quality assurance and preparation for accreditation would improve the management of the curriculum.10

Finally, accreditation and re-accreditation have been implemented in North America for 80 years. In Israel, however, only a single accreditation review was conducted, in 2007, of the then four medical schools; only recently did the two newly founded medical schools have their first accreditation visit, along with the older schools of medicine. There is no established tradition for the implementation of the SMEI,5 for self-study of the curriculum, or for discerning between important and less important standards.

In this paper, we propose that MEUs are assigned the task of overseeing the preparation for accreditation by continuous self-evaluation/monitoring of the implementation of the teaching programs in the medical school. Indeed, such continuous monitoring has already been shown in ten United States medical schools to improve the learning environment, career advising, teaching the physical examination, clerkship feedback, and communication with faculty and other stakeholders.11 Furthermore, we suggest a four-level classification of standards according to the strength of evidence for their importance, derived from published review articles. Although the review of the literature was only preliminary, we hope that our suggestions will open a discussion of the function, structure, staffing, funding, and expectations from MEUs, and of the relative importance and need for prioritization of the accreditation standards.

MEDICAL EDUCATION UNITS IN ISRAEL

In 2016, MEUs in Israel were either independent departments, units of the office of the Dean, or combinations thereof, and they varied in the number of full-time and part-time academic (MD and PhD) staff.12 Beyond other activities, Israeli MEUs conducted workshops for faculty development and were involved in the teaching of the behavioral sciences and clinical skills. In addition, there were independently staffed units of two or more full-time faculty/professionals who reported to the office of the Dean and advised on student assessment, implemented multiple-choice tests, provided faculty with feedback based on students’ rating of teaching, and offered multimedia, simulations, and support in computer use.12

As of January 2022, all six of Israel’s medical schools monitored the quality of teaching based on student ratings of instruction; in three schools, student debriefing, focus groups, and faculty reports were also used. Medical schools did not implement a continuous review of compliance with accreditation standards, as proposed in 2015 by Barzansky et al.,8 and as required by the SMEI (standard 1.1) to engage “in ongoing … continuous quality improvement processes … [and] ensure effective monitoring of the medical education program’s compliance with accreditation standards.”5 Barzansky et al.8 raised the question of whether this monitoring should be guided by all or only selected accreditation standards and, if the latter, how these should be chosen. In the following sections, we attempt to answer this question by proposing a prioritization of the standards of accreditation based on the strength of evidence for their importance.

PROPOSED PRIORITIZATION OF ACCREDITATION STANDARDS BY STRENGTH OF EVIDENCE FOR THEIR VALIDATION

A straightforward validation of the accreditation standards would demonstrate their association with student well-being and patient health outcomes. However, until 2000, most measures of teaching addressed only their face validity and their association with student learning and satisfaction, and only 0.7% of the studies assessed patient outcomes.13 Only in the last two decades did research use patient health outcomes for validation of teaching programs, and the advent of electronic medical records offers potential use of big data to improve care by linking clinical outcomes to educational programs.

We propose a four-tier prioritization of the SMEI5 according to the level of their validation in the literature (Table 1). Level 1 contains the “most important” standards shown to be associated with student well-being and, in practicing doctors, with improved patient health outcomes. Level 2 contains “important” standards associated with student learning and/or performance. Level 3 consists of “possibly important” standards with face validity or conflicting evidence for validity, and level 4 comprises the “least important” standards, which are subject to controversy and may lead to unintended adverse consequences.

Table 1. Proposed Classification of the Standards for Medical School Accreditation by Strength of Validation.

Level 1: Most Important Accreditation Standards
The SMEI require a “professional, respectful, and intellectually stimulating academic and clinical environment” (standard 3) that “allows medical students to report … incidents of harassment or abuse without fear of retaliation” (standard 3.5).5 As early as 1973, Atkinson noted that preceptors of the clerkship rotations varied between those viewing students as subordinates “… [whose] progress towards qualification was … a long obstacle race” and those viewing learners as student-physicians “treated in an egalitarian manner, and … being groomed for full professional status as soon as possible.”68 This impression is supported by the variability in students’ appreciation of their learning environment among different medical schools.14,15 The SMEI requirement is consistent with evidence that student learning environment assessments are inversely associated with student burnout14 and correlate with student learning,15,16 quality of life, resilience, positive attitudes towards the course, preparedness for practice, and well-being.17,18 Evidence also suggests that the learning environment, rather than students’ personality traits, is the main source of students’ distress.19 As late as 2019, it was reported that student humiliation20 and neglect21 by faculty were frequent in clinical teaching settings. We believe that it is impossible to ignore students’ distress while teaching them how to be sensitive to patients’ distress and, if medical students are humiliated, it is equally impossible to teach them how to respect patients. Therefore, we consider the quality of the learning environment and student experiences during the clerkship rotations in terms of their perceived relationship with their preceptors as the most important standard of accreditation.

The SMEI require “instruction and assessment of students’ communication skills with patients, families, colleagues and other health professionals” (standard 7.8)5 and “… the use of … simulations equipment and facilities” (standard 5.5).5 Patient health outcomes improved when practicing doctors were taught communication skills22–24 and used simulations during their training.25–27 Accreditation also requires “that the assessment of student achievement employs a variety of measures of knowledge, competence, and performance, systematically and sequentially applied throughout the medical school” (standard 9.1).5 This requirement is supported by evidence that examination performance in medical school predicts internship performance, performance on the United States Medical Licensing Examination (USMLE), and clinical practice.28,29 Academic achievements before admission to medical school have also been shown to predict grades on preclinical examinations, assessments during the clerkship rotations, and post-graduate evaluations.30,31 There is also evidence that patients treated by certified cardiologists32 and anesthesiologists33 who had passed board examinations have better health outcomes than patients treated by non-certified care providers.

Examinations not only assess students’ knowledge, skills, and attitudes, they also affect learning, because students perceive the content of examinations as reflecting faculty priorities.69 Evidence suggests that examinations are more powerful drivers of student learning than instructional format.70 Hence the need for a variety of measures of competence, such as supervised patient interviews, long case presentations, objective structured clinical examinations, high-fidelity simulations, assessments of students’ professionalism, and the ability for self-directed learning.

The SMEI require “… an effective system of personal counseling for its medical students that includes programs to promote their well-being and to facilitate their adjustment to the physical and emotional demands of medical education” (standard 11.5).5 This requirement is consistent with the report that student well-being initiatives aimed at reducing stressors, upgrading the learning environment, managing stress, and using psychological and emotional support led to an 85% reduction in depression rates and a 75% decrease in anxiety rates in first-year medical students during a 10-year follow-up.34

Level 2: Important Accreditation Standards
The accreditation standards require that “methods of pedagogy utilized for each segment of the curriculum, as well as for the entire curriculum, [be] subjected to periodic evaluation” (standard 8.4).5 There is evidence that using online lectures,35 promoting self-directed learning,36 teaching evidence-based medicine,37,38 and teaching decision-support systems45 improve learning, knowledge, and attitudes. The COVID-19 pandemic has affected the delivery of medical education, with a shift towards online teaching platforms. It has been suggested that online teaching methods be incorporated within traditional face-to-face medical education, thereby maximizing the benefits of both and promoting the shift in medical practice toward virtual consultations.71

Problem-based learning (PBL) is one of the most studied methods of pedagogy. A review of the 1972–1992 literature indicated that, when compared with conventional instruction, PBL is more enjoyable and its graduates perform as well on clinical examinations and faculty evaluations; but they score lower on basic sciences examinations, with gaps in the knowledge base that could affect practice outcomes.39 More recent studies have similarly indicated that PBL has positive effects on physician competence40 and the learning environment.41 A 2010 review indicated that 12 of 15 studies found no differences between PBL and traditional learning in knowledge acquisition; however, a few studies found improved clerkship or residency performance.42 Finally, a 2019 review indicated that merging traditional lecture-based teaching and PBL led to better student performance and satisfaction than either PBL or traditional teaching alone.43

Standard 6.1 requires that “[t]he curriculum provides a broad-base education in … various ethical, cultural, behavioral and socioeconomic subjects pertinent to medicine,”5 and standard 7.7 requires specifying “how students are prepared for their role in addressing the medical consequences of common societal problems, for example, providing instruction in the diagnosis, prevention, appropriate reporting and treatment of violence and abuse. Students are instructed in the social determinants of health.”5 A recent literature review indicated that most reviewed studies concluded that teaching the social determinants of health was effective in terms of student performance or self-reported ability to identify social determinants of health.44

Accreditation standards require that each medical student be “assessed and provided with formative feedback early enough to allow sufficient time for remediation” (standard 9.7).5 There is undisputed evidence that formative examinations improve clinical performance,46 learning,47 and professional behavior.72 The SMEI also require that “[t]he faculty members of a medical school are qualified through their education, training, experience, and continuing professional development” (standard 4); that the “recruitment and development of a medical school’s faculty takes into account its mission, the diversity of its student body, and the populations that it serves” (standard 4.2); and that “[o]pportunities for professional development are provided to enhance faculty members’ skills and leadership abilities in teaching and research” (standard 4.4).5 A recent review of studies of staff-development programs indicated that participants rated most of these programs highly, and some of them also reported enhanced confidence and comfort with their teaching, higher student ratings, and improved academic ability in terms of publications and conference presentations.48

Standard 4.4 states: “Faculty members receive feedback on teaching.”5 Although a subject of controversy, students’ ratings of teaching agree with several credible indicators of teaching effectiveness: student learning, student comments, alumni ratings, and ratings of teaching by outside observers.49 Furthermore, students’ ratings have been reported to discern between individual teachers,50 and to improve teaching programs,51 performance of individual teachers,49 and clinical teaching.52 On the other hand, students’ ratings may be influenced by factors unrelated to teaching effectiveness, such as course workload,66 student motivation for taking the course, and anticipated success in examinations.67 However, while students’ feedback on courses, clinical teaching, and individual teachers may lead to improved teaching performance, using students’ ratings of individual instructors to inform and influence academic promotions may have undesirable consequences, as discussed in the last paragraph of the section Level 4: Least Important Standards.

Currently, clinical training is performed through bedside teaching in hospitals and field exercises in the community. Standard 6.5 requires that “[i]nstruction and experience in patient care are provided in both ambulatory and hospital settings.”5 Some medical schools have introduced into their programs “integrated clerkships,” a 6–12-month experience in a single general practice setting. Students are expected to follow their patients through the entire healthcare continuum, including hospital admission, to meet the curriculum requirements in the various medical disciplines. Comparative studies have indicated that students rated a year-long, integrated clerkship higher than the traditional, block clerkships with respect to teaching, feedback, role-modeling, and patient-centered experiences; students of integrated clerkships outperformed those of block clerkships in clinical skills and performed similarly on the USMLE.53 To the best of our knowledge, while all medical schools in Israel include primary care clerkship rotations, no medical school has substituted block clerkship rotations with longitudinal integrated clerkships.

Level 3: Possibly Important Accreditation Standards
Standard 6.1 requires that “[a] medical school defines its objectives and makes them known to all medical students and faculty.”5 The need for pre-determined learning objectives has compelling face validity because intended outcomes underpin all teaching, learning, and assessment activities. However, the association between formal objectives and student outcomes is uncertain. While defining learning objectives has been reported to improve student learning,54 another study showed that providing learning objectives did not improve students’ performance in an emergency ward,55 and using learning objectives did not enhance ward evaluations, examination success, or student satisfaction.56

As stated earlier, standard 8.4 requires that “methods of pedagogy utilized for each segment of the curriculum, as well as for the entire curriculum, [be] subjected to periodic evaluation.”5 Evidence suggests that web-based instruction,57 flipped classrooms,58 case-based learning,59 and small-group teaching60 are at least as effective as traditional learning in improving healthcare professionals’ behavior.

Finally, the requirement for “… a sufficient number of faculty in leadership roles and of senior administrative staff with the skills, time, and administrative support necessary to achieve the goals of the medical education program” (standard 2)5 has compelling face validity. Even a program with a superb curriculum cannot maintain itself without resources and governance. It makes sense that student services affect learners’ well-being, and efforts to improve the quality of education will affect students’ learning.

Level 4: Least Important Accreditation Standards
Accreditation standards require medical schools to implement admission policies aimed at selecting applicants with academic, personal, and emotional attributes necessary for them to become competent physicians (standards 10.1–10.5).5 There is undisputed evidence that students with top academic achievements before admission to medical school outperform other students not only during the first three years in medical school but also during the clerkship rotations.30,31

However, the different attempts to identify the applicants’ attributes deemed necessary for becoming a competent physician have led to the present wide variability in admission policies. On the one hand, these attempts respond to social expectations. They attest to the mission and values of the medical school, and a 2020 Dutch study found that applicants admitted via a selection procedure for personal attributes outperformed initially rejected lottery-admitted students by 12%–19%.61 However, a different study, also from the Netherlands, found that selected students did not outperform lottery-admitted students and questioned the justification of the expensive selection procedure.62 Furthermore, a 2016 systematic review of the literature found that the few longitudinal predictive validity studies available lacked sufficient detail regarding the outcome variables,63 and it has been argued that a declared quest for personal attributes may affect the self-esteem of rejected applicants, particularly if they are left wondering if indeed there is something wrong with their character.64 Finally, society needs not only clinicians but also researchers and a variety of other medical specialists. Different careers require different personal attributes.65

We stated earlier that students’ ratings of individual teachers (standard 4.4) may provide useful feedback and improve teaching effectiveness.49 However, such feedback may also be biased by workload, student motivation, and anticipated success on examinations. Therefore, while student ratings of courses and student feedback to individual teachers should be considered an important standard, we believe that the use of student ratings to inform decisions for academic promotions may be humiliating and contribute to student–faculty alienation, and should be considered among the least important standards.

DISCUSSION

Two suggestions emerge from the presented overview. The first is to assign to MEUs the task of monitoring the implementation of the curriculum. Beyond ascertaining its accord with accreditation standards, MEUs would attend to the relationship of their medical school with the regulatory authorities (Ministry of Health) and the professional authorities (Scientific Council).

In 2010, Chassin et al.73 proposed four criteria for measuring quality of patient care. These criteria require evidence that the measure, firstly, is associated with improved clinical outcomes; secondly, reveals whether the evidence-based care process was provided; thirdly, addresses a process proximate to the outcome (e.g. appropriately administered medications, rather than appropriate diagnostic tests); and fourthly, has few or no unintended adverse consequences. We suggest applying the first, third, and fourth of these criteria to the SMEI and using the proposed four-level classification of the accreditation standards in the monitoring of teaching programs of Israeli medical schools.

Non-compliance with Level 1, the “most important” standards (associated with student well-being and/or improved patient health outcomes) and Level 2, “important” standards (associated with student learning and/or performance), would require urgent attention, and their correction should take precedence over non-compliance with the remaining standards. For example, earlier in this paper, we referred to our belief that the perceived quality of the clinical learning environment is the most important standard of accreditation. The MEUs can obtain insight into this environment through student debriefing, focus groups, and student surveys aimed at obtaining information on students’ reflections on what they find difficult, their experiences, critical incidents, learner–faculty relationship, and the degree to which faculty support students in distress at all times and especially during clinical rotations. Negative student perceptions of their learning environment would justify immediate remediation.

The proposed prioritization is consistent with the recommendation of “evidence-guided education,”74 whereby the choice of learning objectives and teaching content should be derived from patient health outcomes, rather than from tradition and opinion. However, our proposal is only partly consistent with previously identified important accreditation standards.9,75 Similar to our proposal, the previously identified important accreditation standards were teaching clinical skills and assessment of students’ learning. Unlike our proposal, they did not identify as important students’ perceptions of their learning environment.

Continuous monitoring of the implementation of the teaching program would help ensure favorable outcomes of external evaluations by accreditation and re-accreditation committees. However, even when faculty understand the importance of meeting these standards, criticism is likely to generate confrontations. We have repeatedly heard faculty blame MEU members for being oblivious to the realities of clinical practice, and MEU members claim that clinicians are ignorant of the basic principles of teaching. Defining continuous monitoring of the curriculum, and of the degree of its accord with accreditation standards, as one of the MEU functions may reduce this polarization, since both MEU and faculty members would be united in a common purpose, to wit, meeting accreditation standards.

LOOKING BEYOND ISRAEL

Discussions of the function of MEUs, and of the relative importance of the teaching standards, are also germane to countries with a longer tradition of accreditation reviews than Israel’s. Hopefully, such discussions will lead to an agreement regarding MEUs’ authority to implement the accreditation standards, and to rapport between MEUs and the office of the Dean.

Monitoring of the curriculum is of no value without a mechanism in the medical school and university hierarchy for ensuring that the elicited information is promptly acted upon. Hence, we propose the creation of MEUs, with appropriately trained staff and a budget, whose foremost function would be the continuous evaluation of the implementation of the teaching program and helping faculty correct detected flaws. The MEUs would be part of the governance of the medical school and have the authority to implement interventions.

However, we have no definite answer to the question of from whom MEUs would draw their authority. The term “self-evaluation” implies that they would have the backing and support of the Dean. However, in Israel, deans are elected for short terms, and most have limited knowledge of how to assess teaching programs. To be effective, MEUs cannot risk a veto by the Dean, particularly one that cancels a specific effort to improve the educational process. Therefore, policies need to be developed that would establish a meaningful role for MEUs in medical schools. For example, MEUs may draw authority from university institutions that would rule in cases of disagreement between the MEU and the Dean. Hopefully, such cases would be rare and exceptional; however, we feel that MEUs, although part of the school governance, should not be subordinate to the office of the Dean.

Certainly, the proposed taxonomy of accreditation standards will generate criticism and various degrees of disagreement. However, we believe that some type of categorization of the accreditation standards is needed to discern between their importance and to identify standards that may lead to undesirable consequences. Specifically, future research should explore the following three areas of uncertainty: firstly, how the current block clerkship rotations compare with integrated clerkships in providing students with clinical training and with exposure to patients with common disorders; secondly, whether the quest for non-academic attributes in medical school applicants justifies its cost; and finally, how to assess the contribution of individual faculty members to the implementation of the undergraduate teaching program.

Acknowledgments

The authors thank Julie Van and Ella Fitzpatrick for their assistance in preparing the manuscript.

Abbreviations

GMC General Medical Council
LCME Liaison Committee on Medical Education
SMEI Standards of Medical Education in Israel
MEUs medical education units
PBL problem-based learning
USMLE United States Medical Licensing Examination
WFME World Federation for Medical Education

Footnotes

Disclosure: The authors are responsible for the opinions presented in this paper. These opinions do not reflect the views and policies of the institutions with which the authors are or were affiliated.

Conflict of interest: No potential conflict of interest relevant to this article was reported.

REFERENCES
1.
Davis MH, Karunathilake I, Harden RM. AMEE Education Guide no 28: the development and role of departments of medical education. Med Teach. 2005;27:665–75. 10.1080/01421590500398788.
2.
Liaison Committee on Medical Education (LCME). Functions and structure of a medical school. Standards for accreditation of medical education programs leading to the MD degree. Mar 2017 [accessed May 25, 2022]. Available at: https://medicine.vtc.vt.edu/content/dam/medicine_vtc_vt_edu/about/accreditation/2018-19_Functions-and-Structure.pdf.
3.
General Medical Council. Tomorrow’s doctors. London, UK: General Medical Council; 2003 [accessed May 25, 2022]. Available at: https://www.educacionmedica.net/pdf/documentos/modelos/tomorrowdoc.pdf.
4.
World Federation for Medical Education. Basic medical education WFME global standards for quality improvement. 2020 Revision. [accessed May 25, 2022]. Available at: https://wfme.org/wp-content/uploads/2020/12/WFME-BME-Standards-2020.pdf.
5.
Council for Higher Education in Israel. Committee for the Evaluation of Medical Schools in Israel. Standards of medical education in Israel. [accessed May 22, 2022]. Available at: https://che.org.il/wp-content/uploads/2020/09/Standards-and-Elements-ENGLISH.pdf.
6.
School of Population Health, University of New South Wales. Website. 2022 [accessed May 25, 2022]. Page last updated May 24, 2022. Available at: https://sph.med.unsw.edu.au/about-us/.
7.
Gruppen L. Creating and sustaining centres for medical education research and development. Med Educ. 2008;42:121–3. 10.1111/j.1365-2923.2007.02931.x.
8.
Barzansky B, Hunt D, Moineau G, et al. Continuous quality improvement in an accreditation system for undergraduate medical education: benefits and challenges. Med Teach. 2015;37:1032–8. 10.3109/0142159x.2015.1031735.
9.
van Zanten M, Boulet JR, Greaves I. The importance of medical education accreditation standards. Med Teach. 2012;34:136–45. 10.3109/0142159x.2012.643261.
10.
Choa G, Arfeen Z, Chan SC, Rashid MA. Understanding impacts of accreditation on medical teachers and students: a systematic review and meta-ethnography. Med Teach. 2022;44:63–70. 10.1080/0142159x.2021.1965976.
11.
Hedrick JS, Cottrell S, Stark D, et al. A review of continuous quality improvement processes at ten medical schools. Med Sci Educ. 2019;29:285–90. 10.1007/s40670-019-00694-5.
12.
Reis S, Urkin J, Nave R, et al. Medical education in Israel 2016: five medical schools in a period of transition. Isr J Health Policy Res. 2016;5:45. 10.1186/s13584-016-0104-5.
13.
Prystowsky JB, Bordage G. An outcome research perspective on medical education: the predominance of trainee assessment and satisfaction. Med Educ. 2001;35:331–6. 10.1046/j.1365-2923.2001.00910.x.
14.
Dyrbye LN, Thomas MR, Harper W, et al. The learning environment and medical student burnout: a multicenter study. Med Educ. 2009;43:274–82. 10.1111/j.1365-2923.2008.03282.x.
15.
Wayne SJ, Fortner SA, Kitzes JA, Timm C, Kalishman S. Cause or effect? The relationship between student perception of the medical school learning environment and academic performance on USMLE Step 1. Med Teach. 2013;35:376–80. 10.3109/0142159x.2013.769678.
16.
Van Hell EA, Kuks JB, Cohen-Schotanus J. Time spent on clerkship activities by students in relation to their perceptions of learning environment quality. Med Educ. 2009;43:674–9. 10.1111/j.1365-2923.2009.03393.x.
17.
Chan CYW, Sum MY, Tan GMY, Tor PC, Sim K. Adoption and correlates of the Dundee Ready Educational Environment Measure (DREEM) in the evaluation of undergraduate learning environments - a systematic review. Med Teach. 2018;40:1240–7. 10.1080/0142159x.2018.1426842.
18.
Helou MA, Keiser V, Feldman M, Santen S, Cyrus JW, Ryan MS. Student well-being and the learning environment. Clin Teach. 2019;16:362–6. 10.1111/tct.13070.
19.
Tackett S, Wright S, Lubin R, Li J, Pan H. International study of medical school learning environments and their relationship with student well-being and empathy. Med Educ. 2017;51:280–9. 10.1111/medu.13120.
20.
Barrett J, Scott KM. Acknowledging medical students’ reports of intimidation and humiliation by their teachers in hospitals. J Paediatr Child Health. 2018;54:69–73. 10.1111/jpc.13656.
21.
Buery-Joyner SD, Ryan MS, Santen SA, Borda A, Webb T, Cheifetz C. Beyond mistreatment: learner neglect in the clinical teaching environment. Med Teach. 2019;41:949–55. 10.1080/0142159x.2019.1602254.
22.
Charlton CR, Dearing KS, Berry JB, Johnson MJ. Nurse practitioners’ communication styles and their impact on patient outcomes: an integrated literature review. J Am Acad Nurse Pract. 2008;20:382–8. 10.1111/j.1745-7599.2008.00336.x.
23.
Zolnierek KBH, DiMatteo MR. Physician communication and patient adherence to treatment: a meta-analysis. Med Care. 2009;47:826–34. 10.1097/mlr.0b013e31819a5acc.
24.
Tavakoly Sany SB, Peyman N, Behzhad F, Esmaeily H, Taghipoor A, Ferns G. Health providers’ communication skills training affects hypertension outcomes. Med Teach. 2018;40:154–63. 10.1080/0142159x.2017.1395002.
25.
Mundell WC, Kennedy CC, Szostek JH, Cook DA. Simulation technology for resuscitation training: a systematic review and meta-analysis. Resuscitation. 2013;84:1174–83. 10.1016/j.resuscitation.2013.04.016.
26.
Cook DA, Hamstra SJ, Brydges R, et al. Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med Teach. 2013;35:e867–98. 10.3109/0142159x.2012.714886.
27.
Zendejas B, Brydges R, Wang AT, et al. Patient outcomes in simulation-based medical education: a systematic review. J Gen Intern Med. 2013;28:1078–89. 10.1007/s11606-012-2264-5.
28.
Terry R, Hing W, Orr R, Milne N. Do coursework summative assessments predict clinical performance? A systematic review. BMC Med Educ. 2017;17:40. 10.1186/s12909-017-0878-3.
29.
Hecker KG, Donahue M, Kaba A, Veale P, Coderre S, McLaughlin K. Summative assessment of interprofessional “collaborative practice” skills in graduating medical students: a validity argument. Acad Med. 2020;95:1763–9. 10.1097/acm.0000000000003176.
30.
Wouters A, Croiset G, Schripsema NR, et al. A multi-site study on medical school selection, performance, motivation and engagement. Adv Health Sci Educ Theory Pract. 2017;22:447–62. 10.1007/s10459-016-9745-y.
31.
McManus I, Dewberry C, Nicholson S, Dowell JS, Woolf K, Potts HWW. Construct-level predictive validity of educational attainment and intellectual aptitude tests in medical student selection: meta-regression of six UK longitudinal studies. BMC Med. 2013;11:243. 10.1186/1741-7015-11-243.
32.
Norcini JJ, Kimball HR, Lipner RS. Certification and specialization: do they matter in the outcome of acute myocardial infarction? Acad Med. 2000;75:1193–8. 10.1097/00001888-200012000-00016.
33.
Silber JH, Kennedy SK, Even-Shoshan O, et al. Anesthesiologist board certification and patient outcomes. Anesthesiology. 2002;96:1044–52. 10.1097/00000542-200205000-00004.
34.
Slavin S. Reflections on a decade leading a medical student well-being initiative. Acad Med. 2019;94:771–4. 10.1097/acm.0000000000002540.
35.
Tang B, Coret A, Qureshi A, Barron H, Ayala AP, Law M. Online lectures in undergraduate medical education: scoping review. JMIR Med Educ. 2018;4:e11. 10.2196/mededu.9091.
36.
Murad MH, Coto-Yglesias F, Varkey P, Prokop LJ, Murad AL. The effectiveness of self-directed learning in health professions education: a systematic review. Med Educ. 2010;44:1057–68. 10.1111/j.1365-2923.2010.03750.x.
37.
Ahmadi SF, Baradaran HR, Ahmadi E. Effectiveness of teaching evidence-based medicine to undergraduate medical students: a BEME systematic review. Med Teach. 2015;37:21–30. 10.3109/0142159X.2014.971724.
38.
Simons MR, Zurynski Y, Cullis J, Morgan MK, Davidson AS. Does evidence-based medicine training improve doctors’ knowledge, practice, and patient outcomes? A systematic review of the evidence. Med Teach. 2019;41:532–8. 10.1080/0142159x.2018.1503646.
39.
Albanese MA, Mitchell S. Problem-based learning: a review of literature on its outcomes and implementation issues. Acad Med. 1993;68:52–81. 10.1097/00001888-199301000-00012.
40.
Koh GC, Khoo HE, Wong ML, Koh D. The effects of problem-based learning during medical school on physician competency: a systematic review. CMAJ. 2008;178:34–41. 10.1503/cmaj.070565.
41.
Qin Y, Wang Y, Floden RE. The effect of problem-based learning on improvement of the medical educational environment: a systematic review and meta-analysis. Med Princ Pract. 2016;25:525–32. 10.1159/000449036.
42.
Hartling L, Spooner C, Tjosvold L, Oswald A. Problem-based learning in pre-clinical medical education: 22 years of outcome research. Med Teach. 2010;32:28–35. 10.3109/01421590903200789.
43.
Jiménez-Saiz R, Rosace D. Is hybrid-PBL advancing teaching in biomedicine? A systematic review. BMC Med Educ. 2019;19:226. 10.1186/s12909-019-1673-0.
44.
Doobay-Persaud A, Adler MD, Bartell TR, et al. Teaching the social determinants of health in undergraduate medical education: a scoping review. J Gen Intern Med. 2019;34:720–30. 10.1007/s11606-019-04876-0.
45.
Jaspers MWM, Smeulers M, Vermeulen H, Peute LW. Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings. J Am Med Inform Assoc. 2011;18:327–34. 10.1136/amiajnl-2011-000094.
46.
Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012:CD000259. 10.1002/14651858.CD000259.pub3.
47.
Saint DA, Horton D, Yool A, Elliott A. A progressive assessment strategy improves student learning and perceived course quality in undergraduate physiology. Adv Physiol Educ. 2015;39:218–22. 10.1152/advan.00004.2015.
48.
Alexandraki I, Rosasco RE, Mooradian AD. An evaluation of faculty development programs for clinician–educators: a scoping review. Acad Med. 2021;96:599–606. 10.1097/acm.0000000000003813.
49.
Kulik JA. Student ratings: validity, utility, and controversy. New Directions for Institutional Research. 2001;(109):9–25. 10.1002/ir.1.
50.
Boerboom TB, Mainhard T, Dolmans DH, Scherpbier AJ, Van Beukelen P, Jaarsma AD. Evaluating clinical teachers with the Maastricht clinical teaching questionnaire: how much ‘teacher’ is in student ratings? Med Teach. 2012;34:320–6. 10.3109/0142159x.2012.660220.
51.
Goldfarb S, Morrison G. Continuous curricular feedback: a formative evaluation approach to curricular improvement. Acad Med. 2014;89:264–9. 10.1097/acm.0000000000000103.
52.
Peters WG, van Coppenolle L, Scherpbier AJ. Combined student ratings and self-assessment provide useful feedback for clinical teachers. Adv Health Sci Educ Theory Pract. 2010;15:315–28. 10.1007/s10459-009-9199-6.
53.
Walters L, Greenhill J, Richards J, et al. Outcomes of longitudinal integrated clinical placements for students, clinicians, and society. Med Educ. 2012;46:1028–41. 10.1111/j.1365-2923.2012.04331.x.
54.
Slaughenhoupt BL, Lester RA, Rowe JM, Wollack JA. Design, implementation, and evaluation of a new core learning objectives curriculum for a urology clerkship. J Urol. 2011;186:1417–21. 10.1016/j.juro.2011.05.076.
55.
Wyte C, Pitts F, Cabel JA, Yarnold PF, Bare A, Adams SL. Effect of learning objectives on the performances of students and interns rotating through an emergency department. Acad Med. 1995;70:1145–6. [PubMed]
56.
McLaughlin K, Coderre S, Woloschuk W, Lim T, Muruve D, Mandin H. The influence of objectives, learning experiences and examination blueprint on medical students’ examination preparation. BMC Med Educ. 2005;5:39. 10.1186/1472-6920-5-39.
57.
Pei L, Wu H. Does online learning work better than offline learning in undergraduate medical education? A systematic review and meta-analysis. Med Educ Online. 2019;24:1666538. 10.1080/10872981.2019.1666538.
58.
Hew KF, Lo CK. Flipped classroom improves student learning in health professions education: a meta-analysis. BMC Med Educ. 2018;18:38. 10.1186/s12909-018-1144-z.
59.
Thistlethwaite JE, Davies D, Ekeocha S, et al. The effectiveness of case-based learning in health professional education. A BEME systematic review: BEME Guide no 23. Med Teach. 2012;34:e421–44. 10.3109/0142159x.2012.680939.
60.
Reimschisel T, Herring AL, Huang J, Minor TJ. A systematic review of the published literature on team-based learning in health professions education. Med Teach. 2017;39:1227–37. 10.1080/0142159x.2017.1340636.
61.
Schreurs S, Cleutjens KBJM, Cleland J, Oude Egbrink MGA. Outcomes-based selection into medical school: predicting excellence in multiple competencies during the clinical years. Acad Med. 2020;95:1411–20. 10.1097/acm.0000000000003279.
62.
Wouters A. Effects of medical school selection on student motivation: a Ph.D. thesis report. Perspect Med Educ. 2018;7:54–7. 10.1007/s40037-017-0398-1.
63.
Patterson F, Knight A, Dowell J, Nicholson S, Cousans F, Cleland J. How effective are selection methods in medical education? A systematic review. Med Educ. 2016;50:36–60. 10.1111/medu.12817.
64.
Norman G. The morality of medical school admissions. Adv Health Sci Educ Theory Pract. 2004;9:79–82. 10.1023/b:ahse.0000027553.28703.cf.
65.
Benbassat J. Assessments of non-academic attributes in applicants for undergraduate medical education: an overview of advantages and limitations. Med Sci Educ. 2019;29:1129–34. 10.1007/s40670-019-00791-5.
66.
Donnon T, Delver H, Beran T. Student and teaching characteristics related to ratings of instruction in medical sciences graduate programs. Med Teach. 2010;32:327–32. 10.3109/01421590903480097.
67.
Svanum S, Aigner C. The influences of course effort, mastery and performance goals, grade expectancies, and earned course grades on student ratings of course satisfaction. Br J Educ Psychol. 2011;81:667–9. 10.1111/j.2044-8279.2010.02011.x.
68.
Atkinson P. Worlds apart. Learning environments in medicine and surgery. Br J Med Educ. 1973;7:218–24. 10.1111/j.1365-2923.1973.tb02237.x.
69.
Cilliers FJ, Schuwirth LW, Adendorff HJ, et al. The mechanism of impact of summative assessment on medical students’ learning. Adv Health Sci Educ Theory Pract. 2010;15:695–715. 10.1007/s10459-010-9232-9.
70.
Raupach T, Brown J, Anders S, Hasenfuss G, Harendza S. Summative assessments are more powerful drivers of student learning than resource-intensive teaching formats. BMC Med. 2013;11:61. 10.1186/1741-7015-11-61.
71.
Dost S, Hossain A, Shehab M, et al. Perceptions of medical students towards online teaching during the COVID-19 pandemic: a national cross-sectional survey of 2721 UK medical students. BMJ Open. 2020;10:e042378. 10.1136/bmjopen-2020-042378.
72.
Lerchenfeldt S, Mi M, Eng M. The utilization of peer feedback during collaborative learning in undergraduate medical education: a systematic review. BMC Med Educ. 2019;19:321. 10.1186/s12909-019-1755-z.
73.
Chassin MR, Loeb JM, Schmaltz SP, Wachter RM. Accountability measures—using measurement to promote quality improvement. N Engl J Med. 2010;363:683–8. 10.1056/nejmsb1002320.
74.
Glick TH. Evidence-guided education: patients’ outcome data should influence our teaching priorities. Acad Med. 2005;80:147–51. 10.1097/00001888-200502000-00008.
75.
Hunt D, Migdal M, Waechter DM, Barzansky B, Sabalis RF. The variables that lead to severe action decisions by the liaison committee on medical education. Acad Med. 2016;91:87–93. 10.1097/acm.0000000000000874.