Innovation Report

The Effect of Rubric-Guided, Focused, Personalized Coaching Sessions and Video-Recorded Presentations on Teaching Skills Among Fourth-Year Medical Students: A Pilot Study

Vatche Tchekmedyian, MD, MEd, Helen M. Shields, MD, Stephen R. Pelletier, PhD, and Valeria C. Pazo, MD

Abstract

Problem
As medical students become residents, teaching becomes an expected and integral responsibility. Yet, training-for-teaching opportunities are lacking. In 2014, the authors designed a pilot study using rubric-guided, focused, personalized coaching sessions and video-recorded presentations to improve student teaching skills among fourth-year students at Harvard Medical School.

Approach
In 2014–2015, the authors recruited students from an elective on how to tutor preclinical students for the pilot, which consisted of four phases: a precoaching teaching presentation, a 30- to 45-minute coaching session, a postcoaching teaching presentation, and blinded reviewer ratings. Students' pre- and postcoaching presentations were video recorded. Using a scoring rubric for 15 teaching skills, students rated their pre- and postcoaching videos. Blinded reviewers also rated the pre- and postcoaching presentations using the same rubric with an additional category to gauge their overall impression.

Outcomes
Fourteen students completed all four phases of the pilot. Students' ratings demonstrated statistically significant improvement in several teaching skills, including presentation content (P < .001), rate of speech (P = .001), and opening statement and learning objectives (P = .004). Blinded reviewers' ratings demonstrated statistically significant improvements in several teaching skills, including opening statement and learning objectives (P < .001), overall impression (P = .001), and conclusion and summary of learning objectives (P = .004). Students provided largely positive comments on the interventions.

Next Steps
The authors will work toward addressing limitations in the rubric, using coaching in different teaching settings, addressing the interventions' generalizability, training coaches, and performing additional evaluations.

Please see the end of this article for information about the authors. Correspondence should be addressed to Helen M. Shields, 1620 Tremont St., Boston, MA 02120; telephone: (617) 525-9315; e-mail: hmshields@partners.org.

Copyright © 2017 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the Association of American Medical Colleges. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

Acad Med. 2017;92:1583–1589. First published online April 18, 2017. doi: 10.1097/ACM.0000000000001686

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A446.

Problem

As medical students transform into residents, teaching becomes an expected and integral new responsibility. The Accreditation Council for Graduate Medical Education has recognized and formalized program requirements to help residents succeed in this new role, and residency programs have responded by increasing the number of programs aimed at improving resident teaching skills.1
A strong case has been made for beginning training for teaching during medical school.2 Yet, these training opportunities are lacking.2 The medical education literature does, however, provide some examples of student-as-teacher-type programs that have been created and implemented in medical schools to provide a formal pedagogy for teaching skills.3 For example, a 2008 survey showed that only 44% of MD-granting U.S. schools had formal student-as-teacher curricula.3 The format of these programs commonly included small-group work, lectures, and role playing, but less commonly included observation of actual and simulated teaching sessions.3 Evaluation of these programs included peer-to-peer evaluation, direct observation and/or videotaping, and student satisfaction surveys.3 However, documentation of objective improvement in student abilities as a result of these programs is rare. Furthermore, while student satisfaction surveys are often used to evaluate these programs, few studies compare a student's subjective sense of personal improvement as a teacher against any objective measures of improvement.

Given its successful use in improving skills in surgery and other fields, coaching may have the potential to improve teaching skills in medicine.4,5 In 2014, we designed a pilot study using rubric-guided, focused, personalized coaching sessions and video-recorded presentations to improve student teaching skills among fourth-year students at Harvard Medical School. Our evaluations of the students' teaching skills used both self-assessment and blinded reviewer pre- and postcoaching ratings.

Approach

The pilot study was approved by the institutional review board at Harvard Medical School in 2014.

Student participants

In 2014–2015, we recruited fourth-year medical students for the pilot from the senior preceptorship course, a two-credit elective for fourth-year students designed to teach them how to tutor preclinical students, run by V.C.P. at Harvard Medical School. During this time, there were 33 full-time students enrolled in the elective and 20 students auditing the course. Students who participated in the pilot did so during their free time. We obtained informed consent from the students who agreed to participate in the pilot.

Coaching faculty and blinded reviewers

Two authors (H.M.S. and V.T.) coached each of the participating students together (see below). H.M.S. was known to the students as the director of the gastrointestinal pathophysiology course from the second preclinical year, and V.T. was known to the students who had rotated through Brigham and Women's Hospital as the chief resident in medicine.

We recruited 10 physician educators to be blinded reviewers. The reviewers did not have prior contact with the participating students and were blinded to whether they were rating a pre- or postcoaching video. We gave each reviewer a $100 gift card for participating in the pilot.

The four phases of the pilot

Phase 1. In the first phase, we asked students who agreed to participate to choose one of five selected articles from JAMA's The Rational Clinical Examination: Evidence-Based Clinical Diagnosis6 and prepare a 10-minute presentation on the content as if they were on teaching rounds.
Students had between two and eight weeks to prepare their presentation and were free to choose how they presented the material (lecture, PowerPoint, whiteboard, some combination thereof, etc.). We did not provide the students with further guidance at this point, as we wanted the experience to simulate a real-life scenario and to study each student's true baseline teaching skills. Students gave their teaching presentations live to the coaches (H.M.S. and V.T.) and were video recorded using an iPhone 5S (Apple Inc., Cupertino, California).

Phase 2. In the second phase, the students and the two coaches independently reviewed the recorded presentations and rated them using the scoring rubric (see below). The coaches compared their independent ratings to determine which teaching skills to focus on during the coaching session. If there were differences between the two coaches, they discussed which teaching skills to focus on and came to an agreement prior to the coaching session. Each student met with the coaches for a 30- to 45-minute coaching session within three weeks of giving their initial teaching presentation. The sessions followed a three-step feedback system: (1) asking for the student's self-assessment; (2) providing the coaches' assessment of the student's strengths; and (3) analyzing teaching skills that needed improvement (using specific examples from the student's video) and providing concrete, practical strategies and solutions that the student could use to improve these skills. The coaches summarized the notes from the coaching session in a Microsoft Word 2007 document (Microsoft Corporation, Redmond, Washington) and sent it to the student within a week. In addition to the session notes, the coaches also attached Shields' article "Teaching Well Matters: Tips for Becoming a Successful Medical Teacher."7

Phase 3. During the third phase, which occurred within two to eight weeks of the initial presentation, students were video recorded as they presented a different article from The Rational Clinical Examination: Evidence-Based Clinical Diagnosis to the coaches.6 Students then reviewed their postcoaching presentation and rated it using the same scoring rubric they had used when rating their precoaching presentation.

Phase 4. During the fourth phase, we assigned all of the pre- and postcoaching videos a random number and sent them to the blinded reviewers. A total of three reviewers rated each video using the scoring rubric, with the same set of three reviewers rating a student's pre- and postcoaching videos.

Scoring rubric

Chart 1 shows the scoring rubric used by the students, faculty coaches, and blinded reviewers, which we modified from a validated scoring system.8 The rubric evaluated 15 specific teaching skills within the categories of nonverbal delivery, verbal delivery, visual aids, organization, and presentation content, using a five-point rating scale (where 1 = poor and 5 = excellent). Participants could also select a rating of not applicable (or 6). Specific criteria were included on the rubric to define each rating for each skill. In their precoaching assessment, students also had a free-text entry where they could describe any specific issues they wanted to review during the coaching session.
In their postcoaching assessment, students were able to assign a rating (on a five-point rating scale, where 1 = least helpful and 5 = most helpful) to each of the four major interventions (viewing their video, using the scoring rubric, the coaching session, and the notes from the coaching session) to indicate how valuable each was.

We also added the following question to the blinded reviewers' rubrics to gauge their overall assessment of the students' teaching skills: "Overall Impression: Rank the overall quality of the teaching presentation in the student video with a score ranging from one to five (one signifying poor, and five signifying excellent)."

Statistical analyses

One author (S.R.P.) performed statistical analyses on both the student and the blinded reviewer data (with the students' ratings serving as subjective measures and the blinded reviewers' ratings serving as objective measures) using IBM SPSS Statistics V.22 (IBM Corporation, Armonk, New York). (We did not analyze the coaches' data because the second presentation was not formally graded by the coaches as the first had been.) Means of both student and blinded reviewer ratings were obtained for each of the 15 rubric questions for both the pre- and postcoaching videos. Paired t tests were performed on these means for the pre- and postcoaching scores for both the students' self-assessments and the blinded reviewers' (three per video) evaluations. "Not applicable" answers were not included in the statistical analyses. We considered P < .05 to be significant.
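To make the comparison concrete, the sketch below shows one way a paired pre/post analysis of rubric ratings for a single teaching skill could be run, excluding "not applicable" responses as described above. It is a minimal illustration only, not the SPSS workflow used in the study: the example ratings are invented, the variable names are ours, and the Cohen's d calculation is an assumption, since the report does not state which effect-size formula was applied.

```python
# Minimal sketch (invented data; not the study's SPSS workflow): paired pre/post
# comparison of rubric ratings for one teaching skill, excluding "not applicable"
# ratings (coded 6), mirroring the analysis described above.
import numpy as np
from scipy import stats

NOT_APPLICABLE = 6  # rubric code treated as missing in the analyses

# Hypothetical 1-5 ratings for one skill, one value per student (pre/post pairs).
pre = np.array([3, 4, 3, 6, 4, 3, 4, 3, 4, 3, 4, 3, 4, 4])
post = np.array([4, 5, 4, 6, 5, 4, 5, 4, 4, 4, 5, 4, 5, 5])

# Keep only students with a usable (1-5) rating at both time points.
usable = (pre != NOT_APPLICABLE) & (post != NOT_APPLICABLE)
pre, post = pre[usable], post[usable]

# Two-sided paired t test on the pre- vs. postcoaching ratings (alpha = .05).
t_stat, p_value = stats.ttest_rel(pre, post)

# One common paired effect size (Cohen's d on the difference scores); this
# particular formula is an assumption, not taken from the report.
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"pre mean {pre.mean():.2f}, post mean {post.mean():.2f}")
print(f"paired t = {t_stat:.2f}, P = {p_value:.3f}, d = {cohens_d:.2f}")
```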
Chart 1
Scoring Rubric Used in a Pilot Study Using Rubric-Guided, Focused, Personalized Coaching Sessions and Video-Recorded Presentations to Improve Teaching Skills Among Fourth-Year Medical Students, Harvard Medical School, 2014–2015 (a)

Ratings (b): 1 = poor, 2 = fair, 3 = satisfactory, 4 = good, 5 = excellent, 6 = not applicable.

Nonverbal delivery

Eye contact
1: Does not attempt to look at the audience.
2: Infrequent eye contact; does not scan the audience.
3: Regularly makes eye contact with part of the audience; occasionally scans the audience.
4: Consistent eye contact for most of the presentation; regularly scans the room.
5: Maintains constant eye contact with the audience; regularly scans the audience.

Facial expression
1: Lacks expression; deadpan during the entire presentation.
2: Minimal expression; significant amount of deadpan.
3: Uses expressions with minimal variation; some deadpan.
4: Expression assists with presentation and varies; minimal deadpan.
5: Expression assists with presentation and varies; no deadpan.
6: Unable to assess based on video recording.

Composure
1: Obvious anxiety leading to long pauses and frequent confusion with the material.
2: Anxiety is present and distracts significantly from the presentation; some confusion with the material.
3: Some anxiety is present and detracts minimally from the presentation; minimal confusion with the material.
4: Mostly at ease; some anxiety is present although it does not detract significantly from the presentation; no confusion with the material.
5: At ease; speaker enjoys audience interaction; no confusion with the material.
6: Unable to assess based on video recording.

Posture and mannerisms
1: Continuously slumping or shifting weight with constant distracting mannerisms.
2: Frequent slumping; regular use of distracting mannerisms.
3: Occasionally slumps; some distracting mannerisms.
4: Stands straight; minimal distracting mannerisms.
5: Stands straight; natural hand gestures assist in presentation; no distracting mannerisms.

Verbal delivery

Enthusiasm and pitch
1: No interest or enthusiasm; displays negativity toward topic with constant monotone voice.
2: Minimal interest or enthusiasm; significant amount of monotone voice.
3: Moderate interest or enthusiasm; some pitch variance.
4: Good interest or enthusiasm; good pitch variance.
5: Demonstrates strong positive feelings about the topic; uses voice or pitch efficiently to emphasize points throughout the presentation.

Articulation and vocalized pauses ("uh," "um," etc.)
1: Unintended pauses are present throughout the presentation and are distracting; frequent mispronunciations or mumbling.
2: Frequent unintended pauses; some mispronunciations or mumbling.
3: Some unintended pauses; minor mispronunciations or mumbling.
4: Rare unintended pauses; no mispronunciations; minimal mumbling.
5: No unintended pauses; no mispronunciations or mumbling.

Rate of speech
1: Too fast (talk could not be understood) or too slow (audience became disengaged).
2: Tendency to talk too fast or too slow such that the presentation is difficult to understand or unengaging.
3: Talks too fast or too slow, but this only minimally interferes with the presentation.
4: Appropriate rate for the audience to maintain understanding and attention.
5: Uses varying rate to correspond to difficulty of topics and modulates depending on audience reception.

Volume
1: So poorly heard that points were lost.
2: Significant difficulty hearing the presentation.
3: Some difficulty hearing the presentation.
4: Appropriate volume for most of the presentation.
5: Appropriate volume for all of the presentation.

Visual aids

Slide effectiveness
1: Slides are poorly constructed, disorganized, and difficult to read.
2: Most slides are ineffective; slides are too wordy, lack of variation between slides, and graphs/tables are not described.
3: Most slides are effective; some slides have too much/too little, and little variation between slides.
4: Most slides are effective; slides are easy to read, graphs/tables are well described, and slides correspond well to talk.
5: All slides are effective; slides are easy to read, graphs/tables are well described, and slides correspond well to talk.
6: Did not use slides or slides could not be evaluated based on video recording.

Use of white or chalk board
1: White or chalk board content is messy and difficult to understand; excessive time spent writing on the board; board use detracts significantly from the presentation.
2: White or chalk board content is messy but understandable; board use detracts slightly from the presentation.
3: White or chalk board content neither detracts from nor adds to the presentation.
4: White or chalk board content is clear; information is diagramed slowly but effectively; board use slightly enhances the presentation.
5: White or chalk board content is clear and organized; information is diagramed quickly and effectively; board use greatly enhances the presentation.
6: Did not use a white or chalk board.

Reliance on visual aids
1: Reads off visual aids constantly.
2: Reads off visual aids frequently.
3: Reads off visual aids sometimes.
4: Reads off visual aids infrequently.
5: Does not read off visual aids at all.
6: Did not use visual aids.

Organization

Opening statement and learning objectives
1: No opening statement; learning objectives are not explicitly stated.
2: Minimal opening statement; learning objectives are unclear.
3: Opening statement is made, but learning objectives mentioned are incomplete.
4: Good opening statement with clearly stated learning objectives.
5: Effective opening statement with clearly stated learning objectives; makes reference back to the objectives throughout the presentation.

Organization and transitions
1: Disorganized and difficult to follow.
2: Disorganized; talk goes back and forth between points without clear order.
3: Relatively organized; only minimal illogical shifting back and forth between points.
4: Presentation follows logical flow, but transitions are abrupt.
5: Clear logical flow with smooth transitions between points.

Conclusion and summary of learning objectives
1: No conclusion or summary is given.
2: Conclusion is given, but learning objectives are not summarized.
3: Conclusion and summary of learning objectives is given, but reviewed too quickly or shallowly for impact.
4: Strong conclusion; learning objectives are adequately summarized.
5: Excellent conclusion; learning objectives are summarized clearly and all are reemphasized.

Presentation content
1: All points are poorly described; important points are omitted.
2: Most key points are poorly described; some points are omitted.
3: Key points are adequately described; no points are omitted.
4: Key points are well described, although some explanations lack depth and clarity.
5: Thoroughly explains all key points; makes essential points clear.

(a) Modified from Peeters MJ, Sahloff EG, Stone GE. A standardized rubric to evaluate student presentations. Am J Pharm Educ. 2010;74:171.
(b) Ratings of 6 (or not applicable) were not included in the statistical analyses.

Outcomes

Of the 53 students taking the senior preceptorship course in 2014–2015, 18 (34%) agreed to participate in the pilot study and gave informed consent. Of those 18 students, 15 (83%) completed the first two phases of the study, and 14 (78%) completed all four phases. The 4 students who withdrew from the pilot did so because of scheduling difficulties and time constraints. We only included the 14 students who completed all four phases in the statistical analyses. These 14 students matched in obstetrics–gynecology (1; 7%); ear, nose, and throat (1; 7%); dermatology (1; 7%); pathology (1; 7%); anesthesia (1; 7%); radiation oncology (1; 7%); and internal medicine (8; 57%).

Supplemental Digital Appendix 1 (at http://links.lww.com/ACADMED/A446) shows the frequency with which each of the different teaching skills was discussed during the coaching sessions. On average, each coaching session focused on seven teaching skills. The six most frequently discussed skills were presentation content (11/14; 78%), opening statement and learning objectives (11/14; 78%), reliance on visual aids (11/14; 78%), use of white or chalk board (10/14; 71%), slide effectiveness (9/14; 64%), and rate of speech (9/14; 64%).

Students' pre- and postcoaching self-assessment ratings demonstrated statistically significant improvement in the following teaching skills: presentation content (P < .001), rate of speech (P = .001), opening statement and learning objectives (P = .004), composure (P = .01), enthusiasm and pitch (P = .01), volume (P = .02), slide effectiveness (P = .03), conclusions and summary of learning objectives (P = .04), and reliance on visual aids (P = .04) (see Table 1 for pre- and postcoaching means). In comparison, analysis of the blinded reviewers' pre- and postcoaching ratings demonstrated statistically significant improvements in the following teaching skills: opening statement and learning objectives (P < .001), overall impression (P = .001), conclusion and summary of learning objectives (P = .004), slide effectiveness (P = .002), reliance on visual aids (P = .009), facial expression (P = .003), organization and transitions (P = .007), articulation and vocalized pauses ("uh," "um," etc.)
(P = .02), and enthusiasm and pitch (P = .04) (see Table 2 for pre- and postcoaching means). There was a slight shift toward increased time spent preparing the teaching presentations with the postcoaching presentations (see Supplemental Digital Appendix 2 at http://links.lww.com/ ACADMED/A446). Table 1 Student Pre- and Postcoaching Self-Assessment Ratings, From a Pilot Study Using Rubric-Guided, Focused, Personalized Coaching Sessions and Video-Recorded Presentations to Improve Teaching Skills Among Fourth-Year Medical Students, Harvard Medical School, 2014–2015 Teaching skill Nonverbal delivery   Eye contact   Facial expression  Composure   Posture and mannerisms Verbal delivery   Enthusiasm and pitch  Articulation and vocalized pauses (“uh,” “um,” etc.)   Rate of speech  Volume Visual aids   Slide effectiveness  Use of white or chalk board   Reliance on visual aids Organization  Opening statement and learning objectives  Organization and transitions  Conclusions and summary of learning objectives Presentation content Pre or Post Pre Post Pre Post Pre Post Pre Post Pre Post Pre Post Pre Post Pre Post Pre Post Pre Post Pre Post Pre Post Pre Post Pre Post Pre Post Mean (standard n deviation)a 14 3.64 (0.84) 14 4.21 (0.58) 14 3.92 (1.00) 14 4.62 (0.51) 14 3.43 (0.76) 14 4.21 (0.58) 14 3.71 (1.07) 14 4.21 (0.58) 14 3.86 (0.66) 14 4.64 (0.50) 14 2.93 (0.83) 14 3.57 (0.76) 14 3.57 (0.51) 14 4.43 (0.65) 14 4.00 (0.78) 14 4.64 (0.50) 6 3.67 (0.82) 6 4.33 (0.52) 5 4.30 (0.58) 5 4.67 (0.58) 12 3.08 (1.08) 12 4.00 (0.74) 14 3.57 (0.94) 14 4.64 (0.63) 14 4.21 (1.05) 14 4.64 (0.50) 14 3.64 (0.84) 14 4.36 (0.63) 14 3.43 (0.65) 14 4.64 (0.50) ∆ mean Effect (1st – 2nd) P value sizeb –0.57 –0.69 –0.79 –0.50 .06 0.37 .10 0.37 .01 0.50 .15 0.28 –0.79 –0.64 –0.86 –0.64 .01 0.55 .08 0.37 .001 0.59 .02 0.44 –0.67 –0.33 –0.92 .03 0.43 .42 0.41 .04 0.44 –1.07 –0.43 –0.71 .004 0.56 .17 0.25 .04 0.44 –1.21 < .001 0.72 aOn a five-point rating scale, where 1 = poor and 5 = excellent. bThe standard interpretation of effect size is as follows: small = 0.20, medium = 0.50, and large = 0.80. Nine (64%) of the 14 students provided free-text comments on the rubric questionnaire when scoring their postcoaching videos. Students provided largely positive comments. For example, one student wrote: The sequence of the precoaching talk, coaching, and postcoaching talk was very helpful in improving my teaching skills. Another student wrote: The video and personalized coaching were good to reorient me to the qualities that were desirable in a brief teaching session. I say reorient because I had already heard all of the advice, but looking critically at my teaching with a fresh set of eyes allowed me to appreciate poor habits that I had [built] up and was relying on (several of which were learned and based on prior teaching I witnessed on the wards I had mistakenly assumed to be the desirable standard). Academic Medicine, Vol. 92, No. 11 / November 2017 1587 Copyright © by the Association of American Medical Colleges. Unauthorized reproduction of this article is prohibited. 
Table 2
Blinded Reviewer Pre- and Postcoaching Ratings, From a Pilot Study Using Rubric-Guided, Focused, Personalized Coaching Sessions and Video-Recorded Presentations to Improve Teaching Skills Among Fourth-Year Medical Students, Harvard Medical School, 2014–2015

Each entry lists n (a), the precoaching and postcoaching means (standard deviations) (b), the Δ mean (precoaching minus postcoaching), the P value, and the effect size (c).

Nonverbal delivery
Eye contact: n = 42; pre 3.86 (1.03); post 4.19 (0.83); Δ –0.33; P = .12; effect size 0.18
Facial expression: n = 38; pre 4.18 (0.80); post 4.55 (0.60); Δ –0.37; P = .003; effect size 0.18
Composure: n = 42; pre 3.93 (0.87); post 4.17 (0.82); Δ –0.24; P = .12; effect size 0.14
Posture and mannerisms: n = 41; pre 4.12 (0.60); post 4.32 (0.69); Δ –0.20; P = .07; effect size 0.15

Verbal delivery
Enthusiasm and pitch: n = 41; pre 3.88 (0.95); post 4.19 (0.84); Δ –0.32; P = .04; effect size 0.17
Articulation and vocalized pauses ("uh," "um," etc.): n = 41; pre 3.46 (0.71); post 3.78 (0.82); Δ –0.32; P = .02; effect size 0.20
Rate of speech: n = 42; pre 3.88 (0.71); post 4.07 (0.81); Δ –0.19; P = .17; effect size 0.12
Volume: n = 42; pre 4.38 (0.70); post 4.45 (0.71); Δ –0.07; P = .50; effect size 0.05

Visual aids
Slide effectiveness: n = 18; pre 3.61 (0.70); post 4.33 (0.77); Δ –0.72; P = .002; effect size 0.25
Use of white or chalk board: n = 13; pre 3.61 (1.12); post 4.08 (1.19); Δ –0.46; P = .19; effect size 0.07
Reliance on visual aids: n = 34; pre 3.29 (1.03); post 3.73 (0.86); Δ –0.44; P = .009; effect size 0.07

Organization
Opening statement and learning objectives: n = 42; pre 3.38 (1.13); post 4.24 (0.88); Δ –0.86; P < .001; effect size 0.39
Organization and transitions: n = 42; pre 3.95 (1.03); post 4.43 (0.70); Δ –0.48; P = .007; effect size 0.26
Conclusions and summary of learning objectives: n = 40; pre 3.15 (1.44); post 3.72 (1.24); Δ –0.58; P = .004; effect size 0.21

Presentation content: n = 42; pre 3.83 (0.88); post 4.12 (0.92); Δ –0.29; P = .06; effect size 0.16
Overall impression: n = 42; pre 3.24 (1.14); post 3.91 (0.96); Δ –0.67; P = .001; effect size 0.30

(a) Three blinded reviewers rated each student, with the same three reviewers rating a student's pre- and postcoaching video.
(b) On a five-point rating scale, where 1 = poor and 5 = excellent.
(c) The standard interpretation of effect size is as follows: small = 0.20, medium = 0.50, and large = 0.80.

All verbatim comments can be found in Supplemental Digital Appendix 3 (at http://links.lww.com/ACADMED/A446).

Students also rated the pilot's four major interventions (see above) in their postcoaching assessment. Of the four, students found the coaching session to be the most helpful intervention (mean rating, 4.86), followed closely by viewing the presentation videos (4.71). The full results can be seen in Supplemental Digital Appendix 4 (at http://links.lww.com/ACADMED/A446).

Next Steps

Coaching is well studied in many areas, from sports to performance art.9 Using the Doctor Coach framework developed by Gifford and Fall5 at the Geisel School of Medicine at Dartmouth, which introduced coaching to medical education, we designed and tested a pilot study to assess whether rubric-guided, focused, personalized coaching sessions using readily accessible video-recording technology could improve teaching skills among fourth-year medical students. Students' self-assessment ratings were compared with blinded reviewer assessments using the same scoring rubric. With 30 to 45 minutes of personalized and directed coaching, students showed significant improvements in several teaching skills as rated by both themselves and the blinded reviewers.
Our experience with this pilot and the data collected from it have informed several next steps, including addressing the limitations of the rubric, using coaching in different teaching settings, addressing the generalizability of our work, training coaches, and evaluating the impact of the teaching presentations on learners and of practice on coaching and improvement in teaching skills.

After completion of the pilot, we recognized some limitations of the rubric. For example, the rubric did not address audience participation and engagement, as we initially feared the video recordings would poorly capture this dimension. However, audience engagement is critical to a successful teaching presentation and can be done well or poorly, making it a measurable and potentially coachable skill. We also recognized the heavy weight placed on the nonverbal and verbal delivery categories in our rubric, whereas the organization and presentation content categories may have benefited from expansion. We plan to use our experience to modify and improve the rubric.

The scoring rubric and coaching sessions were aimed at improving teaching in a single medium: a short, focused presentation on rounds. However, we would like to generalize this concept by creating adaptable rubrics that could be applied to a wide array of teaching settings and thus could assist with coaching across the spectrum of medical education settings. Accordingly, we would like to optimize different versions of the rubric for peer-to-peer teaching, bedside teaching, lectures, and case presentations.

We have discussed the challenges of generalizing the interventions in this pilot to larger populations of medical students. Our pilot enrolled student volunteers from a fourth-year elective, who participated during their free time. However, through the use of trained coaches, our interventions could be structured to occur during in-class sessions, either as part of an existing medical education elective or as part of a new, broadly available course. Because we had an enriched study population by virtue of students opting into the study, it will also be important to show a positive impact of coaching on teaching skills in a more general group of medical students.

We have also discussed the generalizability of our work as it relates to how to train effective coaches. H.M.S. and V.T. were the only coaches used in this pilot study. These coaches brought two distinct perspectives to the students: H.M.S. as an experienced medical educator, and V.T. as a recent graduate of internal medicine training with an interest in medical education. However, we believe medical educators, as well as students, could be trained to be coaches. Thus, another next step is to develop a training system to foster this development.

Lastly, although we studied objective and subjective data on the teaching presentations, we did not study how the presentations affected learners. Were the objectively improved presentations actually more effective at teaching learners content and helping them retain it? This will be an important next step in studying our interventions. It will also be important for us to study the importance of practice as it relates to both coaching and improvement in teaching skills.
In conclusion, our pilot study showed that rubric-guided, focused, personalized coaching sessions and video-recorded presentations could be effective tools for improving teaching skills among fourth-year medical students. We hope to continue our work by improving the rubric, applying coaching to different teaching settings, making these interventions more widely generalizable, developing faculty and students as coaches, and performing additional evaluations of the impact of the teaching presentations on learners and of practice on coaching and improvement in teaching skills.

Acknowledgments: The authors wish to thank Nancy and Elliot Comenitz for their generosity in funding the study. The authors would also like to thank Dr. Marshall Wolf, Dr. Joel Katz, and Dr. Fidencio Saldana for their guidance and support, as well as all the blinded reviewers, including Dr. José Figueroa, Dr. Ravi Patel, Dr. Thomas Finn, Dr. Jaidip Chakravartti, Dr. Stephen Kidd, Dr. Chadi Cortas, Dr. Alyssa Perez, Dr. Nadaa Ali, Dr. Kyle Morawski, and Dr. Srividya Bhadriraju. Finally, the authors would like to thank all the students who participated in the study for their time, positivity, and eagerness to learn and improve their teaching skills.

Funding/Support: The study was funded by the Nancy and Elliot Comenitz Medical Education Fellowship Fund.

Other disclosures: None reported.

Ethical approval: The study was approved by the institutional review board at Harvard Medical School, Boston, Massachusetts, in 2014.

Previous presentations: Preliminary data were previously presented at The Academy at Harvard Medical School's Medical Education Day on October 27, 2015, at Harvard Medical School, Boston, Massachusetts.

V. Tchekmedyian is former Nancy and Elliot Comenitz Medical Education Fellow, Brigham and Women's Hospital, Boston, Massachusetts, and is now hematology–oncology fellow, Memorial Sloan Kettering Cancer Center, New York, New York.

H.M. Shields is professor of medicine and associate chief, Division of Medical Communications, Department of Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts.

S.R. Pelletier is senior project manager, Center for Evaluation, Harvard Medical School, Boston, Massachusetts.

V.C. Pazo is hospitalist, Brigham and Women's Hospital, and instructor in medicine, Harvard Medical School, Boston, Massachusetts.

References
1 Morrison EH, Friedland JA, Boker J, Rucker L, Hollingshead J, Murata P. Residents-as-teachers training in U.S. residency programs and offices of graduate medical education. Acad Med. 2001;76(10 suppl):S1–S4.
2 Dandavino M, Snell L, Wiseman J. Why medical students should learn how to teach. Med Teach. 2007;29:558–565.
3 Soriano RP, Blatt B, Coplit L, et al. Teaching medical students how to teach: A national survey of students-as-teachers programs in U.S. medical schools. Acad Med. 2010;85:1725–1731.
4 Min H, Morales DR, Orgill D, Smink DS, Yule S. Systematic review of coaching to enhance surgeons' operative performance. Surgery. 2015;158:1168–1191.
5 Gifford KA, Fall LH. Doctor coach: A deliberate practice approach to teaching and learning clinical skills. Acad Med. 2014;89:272–276.
6 Simel DL, Rennie D, eds. The Rational Clinical Examination: Evidence-Based Clinical Diagnosis. New York, NY: McGraw-Hill; 2009.
7 Shields HM. Teaching well matters: Tips for becoming a successful medical teacher. Gastroenterology. 2012;143:1129–1132.
8 Peeters MJ, Sahloff EG, Stone GE. A standardized rubric to evaluate student presentations. Am J Pharm Educ. 2010;74:171.
9 Gawande A. Personal best: Top athletes and singers have coaches. Should you? New Yorker. October 3, 2011:44–53.