Enhancing clinical reasoning skills in medical students through team-based learning: a mixed-methods study

Study design

A mixed-methods explanatory sequential design was used to integrate the quantitative and qualitative evaluations (Fig. 1) [25,26,27]. The qualitative assessment was intended to support and explain the results of the quantitative assessment. This evaluation was based on the US National Institutes of Health guidelines, which advocate a mixed-methods approach to research “to improve the quality and scientific power of data” and to better address the complexity of issues in health science education [28, 29].

Establishing a control group in medical education studies is challenging, as withholding an intervention may deny some students equal educational opportunities [30,31,32]. When a control group is not feasible, an alternative approach is needed to obtain reliable research results. Because this study was a before-and-after evaluation of an educational intervention without a control group, a mixed-methods sequential explanatory design was used to integrate the qualitative and quantitative research. Although this design cannot compensate for the lack of a control group, it allows the researchers to better understand the experimental results by incorporating medical students’ perspectives [25,26,27,28,29].

Quantitative study

A cross-sectional study was conducted using objective tests and questionnaires to investigate the effectiveness of TBL for teaching clinical reasoning skills to medical students. The quantitative data comprised student performance on the objective tests and self-assessed clinical reasoning competency measured by questionnaires before and after the TBL educational intervention.

Qualitative study

Following the quantitative evaluation, a qualitative evaluation was conducted to examine students’ self-perceptions of their learning, covering aspects such as attention, perception, memory, reasoning, decision-making, problem-solving, information processing, and critical thinking, as well as the perceived advantages of TBL for learning clinical reasoning skills [19, 33, 34]. The qualitative data were collected using an open-ended questionnaire, and content analysis was used to investigate the benefits of TBL for learning clinical reasoning skills.

Fig. 1 Visual diagram of the mixed-methods sequential explanatory design

Context and participants

This study was conducted at a single facility, the Yokohama City University School of Medicine in Japan. To keep the target population homogeneous, the study included all 92 fourth-year medical students who participated in the TBL course on “Symptoms and Pathophysiology” at the Yokohama City University School of Medicine from September to October 2023. Participation in the study was voluntary, and informed consent was obtained from all students. Students were assured of their right to withdraw from the study at any time.

Participants were randomly assigned to 20 groups of 4–5 medical students per group using Microsoft Excel 2019. Allocation was not blinded to medical students or faculty members. To minimize bias from non-blinded allocation, the group assignment was designed to ensure diversity in academic performance and experience levels. Instructional activities and evaluations were the same in all groups, and the group assignment was randomly changed midway through the course, after completion of the first five TBL units, to reduce fixed group dynamics and promote interaction with a broader range of peers. In TBL, which is based on the concept of peer-assisted learning, in which students acquire knowledge-based skills through active help and support from matched-status individuals in similar social groupings, small groups of 4–7 students are reported to be the ideal group size [20, 35, 36].

The TBL course on “Symptoms and Pathophysiology” is required for all students and covers the major clinical symptoms over 2 months. The course aims to teach medical students the mechanisms and causes of major clinical symptoms and their pathophysiology, which often span multiple specialties, and to develop clinical reasoning skills. Although all participants had already received lectures in basic and clinical medicine by the fourth year, these were primarily didactic. The TBL course supplemented this foundational knowledge by integrating interdisciplinary and practical applications, serving as the basis for evaluating its role in fostering clinical reasoning skills. Medical students for whom complete data were not available on the objective tests and questionnaires before and after the educational intervention were excluded from the study.
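To illustrate the grouping procedure, the following is a minimal sketch, not the authors’ actual Excel 2019 procedure, of how 92 students could be randomly distributed into 20 teams of 4–5 members; the student identifiers and random seed are hypothetical.

```python
# Minimal sketch (hypothetical, not the authors' Excel procedure): randomly
# assign 92 students to 20 teams of 4-5 members via shuffle + round-robin.
import random

random.seed(2023)  # hypothetical seed, fixed only for reproducibility
students = [f"S{i:03d}" for i in range(1, 93)]  # 92 anonymized student IDs
random.shuffle(students)

n_teams = 20
teams = {t: [] for t in range(1, n_teams + 1)}
for idx, student in enumerate(students):
    teams[idx % n_teams + 1].append(student)  # yields 12 teams of 5 and 8 teams of 4

for team, members in teams.items():
    print(team, members)
```

The round-robin step guarantees that team sizes differ by at most one; in the actual study, assignments were additionally balanced for academic performance and experience, which a simple shuffle does not capture.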

Instruction design and procedures

In Japan, the national core curriculum for undergraduate medical education was updated in 2022, introducing a list of possible diagnoses for common signs, symptoms, and pathophysiological findings to be learned during the 6-year undergraduate curriculum [21, 37]. For the TBL course on “Symptoms and Pathophysiology” at Yokohama City University School of Medicine, the Committee of the Division for the Promotion of Medical Education, in coordination with each department, selected 10 clinical symptoms (shock, dyspnea, trauma/burn, dysphagia, back pain, arthralgia/arthritis, nausea/vomiting, rash, general malaise, and abdominal pain) from the major clinical symptoms covered in the core curriculum. One unit was assigned per symptom, and a total of 10 units were covered over a 2-month period.

Each unit of the TBL course lasted 240 min and was conducted in three phases: “Preparation (Pre-class),” “Readiness assurance process (RAP),” and “Application exercises” (Fig. 2) [14,15,16].

Phase 1: preparation (Pre-class) (60 min)

Prior to class, medical students were provided with detailed class materials and reference books. They were required to review these resources individually to familiarize themselves with the content and prepare for in-class activities.

Phase 2: readiness assurance process (RAP) (100 min)

Students completed an Individual Readiness Assurance Test (IRAT) followed by a Group Readiness Assurance Test (GRAT). Each assessment lasted 20 min and was graded by faculty members, who facilitated feedback on the assessments to address gaps in understanding and reinforce key concepts. After completing the IRAT, students took the GRAT, which contained the same questions, to consolidate their knowledge through group discussion. All teams then presented their answers to the whole class, and each team confirmed its members’ understanding by explaining the rationale for its answers (30 min). Finally, the faculty member provided feedback on the students’ answers, and the medical students compared their team’s answers with those of other teams to identify what was lacking in their team or individual responses (30 min).

Phase 3: application exercises (80 min)

In this phase, medical students worked on applied problems at the level of the clinical problems on the national examination, which tested problem-solving skills, in the same format as the GRAT (20 min). The medical students worked on each problem in groups and shared their thinking process, and the faculty member gave feedback (40 min). In addition, a peer evaluation was conducted at the end of each unit (20 min). In the peer evaluation, team members evaluated each other’s learning efforts and contributions to the team and were expected to recognize their roles as members of the team. Contribution to the team was defined as presenting one’s thought process toward the group’s solution. The peer evaluation used criteria such as clarity of ideas, active participation, and the quality of students’ contributions to problem-solving [14,15,16]. A standardized evaluation form was used to ensure consistency, and medical students focused on observable behaviors to reduce subjectivity [14,15,16]. Faculty members guided the evaluation process and reviewed the results for fairness [14,15,16]. The voting system awarded 5 points to the most active student and 2 points to the second most active student, thereby fostering collaboration and accountability.
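As a worked illustration of the voting arithmetic described above, the sketch below tallies hypothetical peer-evaluation votes, awarding 5 points to each member’s first choice and 2 points to the second; all identifiers and votes are invented.

```python
# Minimal sketch (hypothetical data): tally peer-evaluation votes in which each
# team member awards 5 points to the most active peer and 2 points to the
# second most active peer.
from collections import Counter

# Each tuple is (most_active, second_most_active); identifiers are illustrative.
votes = [
    ("S001", "S003"),
    ("S003", "S001"),
    ("S001", "S004"),
    ("S002", "S001"),
]

scores = Counter()
for first, second in votes:
    scores[first] += 5   # most active student
    scores[second] += 2  # second most active student

print(scores.most_common())  # [('S001', 12), ('S003', 7), ('S002', 5), ('S004', 2)]
```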

Fig. 2 The steps in the TBL course

One to three faculty members per symptom were assigned to the TBL course from among 18 faculty members at postgraduate year (PGY) 9 or above (median: PGY 21.5 [range 9–35]) affiliated with Yokohama City University Hospital. Faculty development sessions were held immediately before each TBL unit to ensure standardized instruction (Supplement 1) [16]. A faculty member also provided a pre-course orientation to the medical students to explain the importance of TBL.

Data collection

Quantitative data

As the TBL course is integrated into the curriculum of Yokohama City University School of Medicine, this research was conducted on all fourth-year medical students who participated in the course. For the quantitative data, the required sample size, calculated for a Wilcoxon signed-rank test of the difference between the pre- and post-intervention means, was 94 students in total, assuming a significance level of 0.05, a power of 0.8, and an effect size of 0.3. In this study, a total of 92 students distributed across 20 groups were included in the analysis.
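The software used for the sample size calculation is not stated. The sketch below shows one common approximation under the stated parameters (α = 0.05, power = 0.8, effect size = 0.3): compute the paired t-test sample size and inflate it by the asymptotic relative efficiency (ARE) of the Wilcoxon signed-rank test under normality (3/π ≈ 0.955), which gives an estimate close to the reported total of 94 (the exact value depends on the software and correction used).

```python
# Minimal sketch (not the authors' actual calculation): approximate the sample
# size for a paired Wilcoxon signed-rank test by computing the paired t-test
# sample size and dividing by the Wilcoxon test's asymptotic relative
# efficiency (ARE) under normality, 3/pi ~ 0.955.
import math
from statsmodels.stats.power import TTestPower

effect_size = 0.3  # stated effect size
alpha = 0.05       # stated significance level
power = 0.80       # stated power

# Required n for a one-sample / paired t-test with these parameters
n_ttest = TTestPower().solve_power(effect_size=effect_size, alpha=alpha,
                                   power=power, alternative="two-sided")

are_wilcoxon = 3 / math.pi                      # ARE of Wilcoxon vs. t-test under normality
n_wilcoxon = math.ceil(n_ttest / are_wilcoxon)  # inflate and round up

print(f"paired t-test n ~ {n_ttest:.1f}; Wilcoxon-adjusted n = {n_wilcoxon}")
```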

Performance evaluation based on the script concordance test (SCT)

Each student’s performance was evaluated with the SCT before the first TBL unit and after the last TBL unit. The SCT is a validated tool for evaluating diagnostic reasoning and adaptability in high-fidelity, real-world clinical situations [5, 38, 39]. It assesses how certainty in a diagnostic hypothesis evolves as new clinical information, such as patient background, symptoms, and findings, is introduced [5, 38, 39]. The SCT has demonstrated reliability and effectiveness in evaluating components of clinical reasoning, such as differential diagnosis, appropriate diagnosis, management, and treatment [5, 38, 39]. It allows the respondent to answer a large number of questions in a short period of time [5, 38, 39]. In addition, to ensure reliability, it is recommended that SCT questions cover at least 10 clinical symptoms with 3–5 questions per symptom [40]. Therefore, in the SCT, each of the 10 major clinical symptoms covered in TBL was quantitatively evaluated on a 30-point scale. In this study, the SCT included a pre-test and a post-test of 30 questions each, with students randomly assigned to one of two test sets to minimize familiarity with the test and enhance the reliability of the evaluation (Fig. 3). The SCT questions were determined through focus group discussions among the faculty members of the Department of General Medicine, Yokohama City University School of Medicine, based on the diseases covered in the core curriculum (Supplement 2) [21].

Fig. 3 The SCT included a pre-test and a post-test of 30 questions each, with students randomly assigned to one of two test sets
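The text does not specify how individual SCT items were scored. For illustration only, the following sketch implements the aggregate (partial-credit) scoring rule commonly used for SCTs, in which the expert panel’s modal answer earns full credit and other answers earn credit proportional to how many panelists chose them; the panel responses, function name, and student answer are hypothetical.

```python
# Minimal sketch (hypothetical data; the study's actual scoring rule is not
# described): aggregate partial-credit scoring commonly used for SCT items.
from collections import Counter

def sct_item_credits(panel_answers):
    """Return the partial credit earned by each possible response to one item."""
    counts = Counter(panel_answers)
    modal_count = max(counts.values())
    return {answer: n / modal_count for answer, n in counts.items()}

# Hypothetical panel of 10 experts rating one item on a -2..+2 Likert scale
panel = [+1, +1, +1, +1, +1, 0, 0, +2, +2, -1]
credits = sct_item_credits(panel)

student_answer = 0
print(credits.get(student_answer, 0.0))  # 2 of 10 panelists chose 0 -> 2/5 = 0.4 credit
```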

Self-assessed clinical reasoning competency

Self-assessed clinical reasoning competency before and after the TBL educational intervention was compared. Questionnaires were distributed to the students before and after the intervention, and each participating medical student was given an identification number to preserve anonymity.

The data were collected using a self-administered 7-point Likert scale questionnaire. Before and after the TBL educational intervention, we investigated self-evaluated competency in clinical reasoning using the following six items: (1) recalling the appropriate history, physical examination, and tests for clinical hypothesis generation; (2) recalling an appropriate differential diagnosis from the patient’s chief complaint; (3) appropriately verbalizing points that fit or do not fit the recalled differential diagnosis; (4) appropriately verbalizing and reflecting on one’s own mistakes; (5) selecting keywords from the patient’s overall presentation; and (6) practicing the appropriate clinical reasoning process (Supplement 3). The self-assessment of clinical reasoning competency investigated in this study was based on a report by Cooper et al. [4] and was determined through discussions among the faculty members of our department. The 7-point Likert scale ranged from 1 (not at all confident) to 7 (very confident).

Data analysis

All statistical analyses were performed using SPSS Statistics for Windows 26.0 (IBM Corp., Armonk, NY, USA). P values < 0.05 were considered statistically significant. Continuous data were expressed as the mean ± standard deviation (SD) unless otherwise indicated. The results of the performance evaluations based on the SCT and the self-assessed clinical reasoning competency before and after the educational intervention were compared using the Mann-Whitney U-test and Wilcoxon signed-rank test, respectively.
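For illustration, the sketch below shows how the two comparisons described above could be run in Python with SciPy rather than SPSS; the score arrays are randomly generated placeholders, not study data.

```python
# Minimal sketch (hypothetical data, SciPy instead of SPSS): the unpaired
# Mann-Whitney U test for the SCT scores and the paired Wilcoxon signed-rank
# test for the self-assessed clinical reasoning ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical SCT scores (0-30 scale) on the pre-test and post-test sets
sct_pre = rng.integers(10, 26, size=92)
sct_post = rng.integers(14, 31, size=92)
u_stat, u_p = stats.mannwhitneyu(sct_pre, sct_post, alternative="two-sided")

# Hypothetical paired 7-point Likert self-assessments for the same students
likert_pre = rng.integers(1, 8, size=92)
likert_post = np.clip(likert_pre + rng.integers(-1, 3, size=92), 1, 7)
w_stat, w_p = stats.wilcoxon(likert_pre, likert_post)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.4f}")
print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.4f}")
```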

Qualitative data

Qualitative data were used to assess the acquisition of higher-order intellectual skills, and the results were integrated with the quantitative results in a mixed-methods sequential explanatory design [25,26,27, 41, 42]. An open-ended questionnaire, designed according to the study objectives, was used to investigate students’ self-perception of their learning experience and of the advantages of TBL for learning clinical reasoning skills. The content of the questionnaire was decided through discussions between two faculty members (KI and KS), following key principles such as ensuring alignment with the study objectives, focusing on critical aspects of clinical reasoning, and framing questions in a manner that encouraged in-depth and reflective responses [29]. After informed consent was obtained, the medical students were asked the following open-ended questions: “What elements of TBL influence improved clinical reasoning? Why do you think so?” [29, 41, 42] The sample comprised all medical students included in the quantitative evaluation [29]. Names and other identifiers were removed from the questionnaires, and the statements were tabulated [29]. All personal identifiers, such as names and student identification numbers, were redacted during transcription, and any identifying details accidentally included in responses were removed during this process. The anonymized data were stored in a secure, password-protected database accessible only to the researchers. The faculty members did not disclose information about their personal attitudes and behaviors [29, 41, 42]. The teams were debriefed after completing the questionnaire survey [29, 41, 42]. Each participant completed the survey only once, and participants were not asked to review their transcripts or provide feedback [29, 41, 42].

Content analysis was used to analyze the themes identified in the qualitative research (Table 1) [29, 41,42,43,44]. The framework for content analysis was guided by the principles of inductive category development, as outlined by Wesley [43] and by Creswell and Plano Clark [29], focusing on deriving categories directly from the data to ensure relevance and alignment with the study objectives. A preliminary analytic template was developed as a starting point for the analysis. Two researchers (KI and NT) independently read all open-ended questionnaire transcripts and performed the initial coding [29, 41,42,43,44]. KI and NT then discussed and refined the initial codes through iterative consensus building to ensure consistency and alignment with the data [29, 41,42,43,44]. Disagreements in coding were resolved by consensus, and input was sought from KS, a third researcher with extensive experience in qualitative research, when necessary [29, 41,42,43,44]. To ensure the quality of the research, the two researchers (KI and NT) triangulated the analysis by identifying, discussing, and agreeing on the coding of the descriptors [29, 41,42,43,44]. After coding, similar codes were grouped into themes and subcategories, which were further refined and validated through regular discussions with KS to enhance the credibility of the findings [29, 41,42,43,44]. To ensure thematic saturation, a critical step in qualitative research, data collection and analysis were conducted until no new themes, subcategories, or insights emerged [29, 43]. This process ensured that the data adequately captured the range of students’ experiences and perceptions and was verified through iterative coding and consensus building among the researchers [29, 43]. The findings were reported using the consolidated criteria for reporting qualitative research (COREQ) checklist [44].

Table 1 Steps in the qualitative content analysis to investigate students’ self-perception of their learning experience and of the advantages of team-based learning (TBL) for learning clinical reasoning skills

The analytic categories were set according to the six levels of Fink’s taxonomy of significant learning (learning how to learn, caring, human dimension, integration, application, and foundational knowledge) (Supplement 4) [45]. After open coding, similar codes were classified into subcategories. Each subcategory was carefully examined to determine its alignment with one of the six levels of Fink’s taxonomy of significant learning. “Learning how to learn” included codes related to self-regulated learning strategies, whereas “caring” included emotional engagement and motivation toward learning clinical reasoning [45]. “Human dimension” included teamwork and peer interactions. “Integration” captured the ability to connect ideas across clinical contexts. “Application” included the practical use of clinical reasoning skills in problem-solving, and “foundational knowledge” focused on the acquisition of core medical concepts [45]. We analyzed the concepts in each of the six levels of Fink’s taxonomy and calculated the number of analysis units for each concept [45]. We also grouped similar codes as themes and assigned them to the corresponding dimension of the cognitive process [45].
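As a small illustration of the counting step, the sketch below tallies hypothetical coded analysis units against the six levels of Fink’s taxonomy; the coded units are invented and do not reflect the study’s data.

```python
# Minimal sketch (hypothetical codes): count analysis units per level of
# Fink's taxonomy after each coded unit has been assigned to one level.
from collections import Counter

fink_levels = ["foundational knowledge", "application", "integration",
               "human dimension", "caring", "learning how to learn"]

# Hypothetical assignments of coded analysis units to taxonomy levels
coded_units = ["application", "application", "integration",
               "foundational knowledge", "human dimension", "caring",
               "learning how to learn", "application"]

units_per_level = Counter(coded_units)
for level in fink_levels:
    print(f"{level}: {units_per_level.get(level, 0)} units")
```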

Ethics statement

This research was performed in accordance with the Declaration of Helsinki and approved by the Ethics Review Committee of the Yokohama City University School of Medicine on August 31, 2023 (approval number: 2023-010). The study procedures were explained to the medical students, and each participant provided informed consent for participation. Although the researcher who administered the consent was also a class teacher, it was made clear to the medical students that participation in this study would not affect their grade evaluations. This study was registered in the University Hospital Medical Information Network Clinical Trials Registry (UMIN000053523).
