1Department of Surgery, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand, 2Siriraj Health Science Education Excellence Center, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand, 3Undergraduate Education Unit, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand.
*Corresponding author: Cherdsak Iramaneerat E-mail: cherdsak.ira@mahidol.ac.th
Received 30 July 2025 Revised 17 August 2025 Accepted 18 August 2025 ORCID ID: http://orcid.org/0000-0002-2750-3981 https://doi.org/10.33192/smj.v77i11.276829
All material is licensed under terms of the Creative Commons Attribution 4.0 International (CC-BY-NC-ND 4.0) license unless otherwise stated.
ABSTRACT
Objective: The objective of this study was to compare the administration of the Multiple Mini-Interview (MMI) between the online and onsite platforms in terms of validity evidence.
Materials and Methods: We retrospectively reviewed records of past MMI scores, examination materials, and participant questionnaire responses over a six-year period at the Faculty of Medicine Siriraj Hospital, Mahidol University. A total of eight MMI administrations were included, four onsite and four online. Validity evidence was assessed based on three key sources: test content, response processes, and internal structure.
Results: Over six years, eight MMIs were conducted, with 237 of 340 candidates admitted to the medical school. Content analysis of the test specification tables indicated that both onsite and online platforms adequately addressed similar objectives. Participant satisfaction ratings were comparable between onsite and online MMIs. Qualitative analysis revealed minor issues in a few stations regarding clarity of instructions and scoring criteria. Additionally, some online MMI stations showed discrepancies between task time requirements and allotted time, and more technical issues were reported online. Score analysis showed that both the highest and average scores from online MMIs were slightly lower than those from onsite MMIs. However, both formats yielded moderately reliable test scores (Cronbach's alpha of 0.49–0.51).
Conclusion: The online MMI is a viable alternative to the traditional onsite MMI. Both platforms effectively covered the same assessment objectives and yielded comparable reliability and participant satisfaction.
Keywords: Medical student selection; multiple mini-interview; validity; reliability (Siriraj Med J 2025; 77: 768-776)
INTRODUCTION
Interviewing plays a central role in the selection process of medical students. While various methods are used during admissions, such as pre-admission grades, academic records, aptitude tests, reference letters, and interviews, the interview remains a key tool for evaluating non-cognitive attributes that medical schools seek as a foundation for producing high-quality doctors. Traditionally, personal interviews involved a small number of interviewers engaging candidates in unstructured interview questions to evaluate attitudes. However, this format has been criticized for its low reliability, lack of standardization, susceptibility to interviewer bias, and limited generalizability across different contexts.1-3 To address these limitations, the Multiple-Mini Interview (MMI) was developed by McMaster University as an alternative to traditional interviews.2
In the MMI, candidates rotate through a series of stations, each designed to assess specific personal attributes. These assessments are based on candidates' performance in a task or response to questions. Trained interviewers observe and give scores using a standardized rating scale.2,3 The MMI has been widely adopted by health professional schools. A systematic review revealed that undergraduate health professional schools typically use five to 12 stations (average = 9.2 stations), with each station lasting five to ten minutes (average = 7.3 minutes), and staffed by one to two interviewers per station.3 Psychometric studies have shown that MMI scores demonstrate high internal consistency reliability (intra-station inter-item correlation = 0.96) and moderate to high inter-station reliability (ranging from 0.59 to 0.87).3-5 In general, MMIs with a greater number of stations yielded higher reliability.2,6 Importantly, MMI scores show low correlations with past academic performance and personal interview scores,2,7 but are strongly correlated with judgment and decision-making abilities (r = 0.75).3 MMI scores have also been shown to predict Objective Structured Clinical Examination (OSCE) scores, licensing examination performance, clinical decision-making performance, and clerkship performance measures.8,9 Although the MMI process is often perceived as more stressful than traditional interviews, the majority of candidates report preferring the MMI to traditional interviews.10,11
With increasing evidence suggesting that problems among medical students during their study might relate to non-academic issues, such as burnout12, our medical school looked for an innovative approach to medical student selection beyond academic readiness. With the intention of recruiting students whose personal characteristics fit the medical school environment, the Faculty of Medicine Siriraj Hospital introduced the MMI into medical student selection in 2018. We initially administered the MMI in an onsite format. Due to the COVID-19 pandemic, the MMI transitioned to an online format during 2021–2022, before returning to onsite administration in 2023 as public health conditions improved.
Although internet-based MMIs (iMMIs) have been described in the literature since 2013, often involving multiple sessions with questions drawn from large item banks to reduce costs for international candidates13, our online approach differed. At our institution, a single-session online MMI was implemented because multiple sessions could increase the risk of station content being shared between candidates participating in different sessions, potentially compromising the fairness of this highly competitive, high-stakes process. To ensure comparability and fairness in candidate evaluation, all applicants completed the same set of tasks and interview questions within a single session. The objective of this study was to compare the administration of the MMI between the onsite and online platforms in terms of validity evidence.
MATERIALS AND METHODS
According to Messick's framework, validity refers to the degree to which evidence supports the intended interpretation and use of test scores.14 A comprehensive validity study would demonstrate evidence from five sources: test content, response processes, internal structure, relations to other variables, and consequences. However, some sources of validity evidence, namely response processes and consequences, are rarely reported in the literature.15 This study focused on three sources of validity evidence: test content, response processes, and internal structure.14 Test content refers to the extent to which MMI scenarios reflect the competencies expected of future medical professionals. Response processes examine whether candidates approach the tasks in ways that align with the intended constructs. Internal structure involves analyzing the relationships among scores across stations to assess consistency and determine whether the MMI reliably measures the targeted attributes. We employed a mixed-methods approach, incorporating both quantitative and qualitative data to gather validity evidence. We did not analyze relations to other variables or consequences, because attributes observed in medical students years after admission cannot be validly attributed to the MMI administration platform alone; many other factors in the curriculum could influence them more strongly.
After obtaining ethical approval from the Institutional Review Board at our medical school (COA no. Si 160/2023), we retrospectively reviewed past MMI score records, examination documents, and participants' questionnaire responses over a period of six years. The data included eight MMI administrations, of which four were conducted onsite and four online. In Thailand, multiple rounds of student selection are held for admission into medical schools. The Faculty of Medicine Siriraj Hospital employs the MMI in selection rounds aimed at students with specific achievements or those who had already obtained a bachelor's degree. These rounds are highly competitive and allow the faculty to select candidates who best align with the program's criteria. MMIs were conducted once annually from 2018 to 2020 and twice annually from 2021 onward. However, in 2023, only the first MMI session was included in this study, as the second session occurred after data analysis had been completed. Over the six-year study period, 340 candidates participated in MMIs, and 237 were admitted to the MD program at the Faculty of Medicine Siriraj Hospital.
We retrospectively reviewed test scores from all eight MMIs, comparing onsite and online MMIs in terms of score distribution, average score, and internal consistency reliability. All statistical analyses were conducted in PASW Statistics 18.0.16
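The score analyses were run in PASW Statistics; the exact procedures are not described in the original report. As an illustration only, the following Python sketch shows how the same quantities (score range, average, and Cronbach's alpha) could be computed from station-level scores. The file name, column layout (one row per candidate, one column per station, scores in percent), and the use of the unadjusted alpha formula are assumptions, not the authors' actual workflow.

```python
import pandas as pd

def cronbach_alpha(station_scores: pd.DataFrame) -> float:
    """Cronbach's alpha for a candidates-by-stations matrix of scores."""
    k = station_scores.shape[1]                          # number of stations
    item_vars = station_scores.var(axis=0, ddof=1)       # per-station score variance
    total_var = station_scores.sum(axis=1).var(ddof=1)   # variance of candidates' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical input: one row per candidate, one column per station, scores in percent.
scores = pd.read_csv("mmi_scores_2021_round1.csv")

per_candidate = scores.mean(axis=1)                      # each candidate's overall percentage
print(f"Lowest: {per_candidate.min():.1f}%  "
      f"Highest: {per_candidate.max():.1f}%  "
      f"Average: {per_candidate.mean():.1f}%")
print(f"Cronbach's alpha: {cronbach_alpha(scores):.3f}")
```

Treating each station as an "item" in this way is the conventional approach for MMI internal consistency, with each candidate's total score formed by summing (or averaging) across stations.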
We conducted a document analysis of the test specification blueprints for all MMI sessions to compare content coverage. Additionally, we analyzed responses from the open-ended sections of post-test surveys completed by candidates, interviewers, and administrative staff. Comments and feedback were summarized and categorized into themes.
RESULTS
The characteristics of the eight MMI sessions included in this review are summarized in Table 1. Each session consisted of 8–12 stations, with 1–6 rest stations, and each active station lasted 8 minutes. The total duration of each MMI ranged from 1.5 to 2.5 hours. Across cohorts, around 51%–81% of candidates were selected for admission. However, these selection rates were influenced by a fixed admission quota for each round and therefore do not necessarily reflect differences in candidate competency across cohorts.
Content analysis of the test specification tables revealed that both onsite and online platforms covered similar objectives. The candidate attributes targeted across all stations included honesty, altruism, creativity, motivation for medical study, responsibility, self-directed learning, social media literacy, critical thinking skills, teamwork skills, communication skills, perseverance, stress management, growth mindset, situational awareness, and decision-making. Table 2 shows the comparison of test content coverage between the two platforms.
TABLE 1. Characteristics of the MMIs included in the analysis.
Exam | Platform | Candidates | Passing candidates | Number of stations |
2018 | onsite | 29 | 21 (72.41%) | 8 |
2019 | onsite | 24 | 21 (87.5%) | 8 |
2020 | onsite | 49 | 25 (51.02%) | 8 |
2021 (round 1) | online | 31 | 25 (80.65%) | 8 |
2021 (round 2) | online | 14 | 7 (50%) | 8 |
2022 (round 1) | online | 53 | 40 (75.47%) | 12 |
2022 (round 2) | online | 13 | 8 (61.54%) | 12 |
2023 | onsite | 127 | 90 (70.86%) | 10 |
TABLE 2. Comparison of test specification between two MMI platforms.
Exam | Platform | Honesty | Altruism | Communication | Innovation/creativity | Motivation | Stress management | Perseverance | Responsibility | Critical thinking | Growth mindset | Self-directed learning | Teamwork | Decision making | Media literacy |
2018 | onsite | |
2019 | onsite | |
2020 | onsite | |
2023 | onsite | |
2021 (round 1) | online | |
2021 (round 2) | online | |
2022 (round 1) | online | |
2022 (round 2) | online | |
With more experience in conducting the MMI, administrators tended to expand content coverage to more domains. Almost all objectives were covered by both platforms; the only exception was teamwork, which was not covered on the onsite platform because the number of stations was reduced when the MMI changed from online back to onsite. Although both platforms evaluated similar competencies, the nature of the tasks differed. Onsite MMIs allowed interviewers to observe candidates interacting with standardized patients and/or using onsite instruments and tools (e.g., video clips, part-task trainers, mannequins, puzzles, or cameras). In contrast, online MMIs had limited access to such resources. Therefore, online station design required greater creativity and relied on video clips, pen-and-paper tasks, and online tools.
Validity evidence related to response processes can be categorized into two types: direct methods, which capture real-time thought processes, and indirect methods, which infer cognitive engagement through participant reflection.17 Due to feasibility constraints, we employed an indirect approach, using post-test satisfaction surveys from candidates, interviewers, and administrative staff as indirect evidence of how effectively the MMI assessed the intended competencies.
Quantitative analysis of the survey data demonstrated overall satisfaction with the assessment procedure. Two key aspects were evaluated: (1) satisfaction with test content and (2) satisfaction with test administration. Content satisfaction was measured using four items on the relevance and appropriateness of the MMI content. Satisfaction with test administration was assessed using six items in online MMIs and three items in onsite MMIs, focusing on organization, equipment, station arrangement, assistance, and internet connectivity. All items were rated on a five-point Likert scale, where five represented excellent and one indicated that improvement was required. The survey rating data are summarized in Table 3.
Qualitative analysis of open-ended survey responses revealed three key themes: test instructions, time management, and technical issues.
Theme 1: Test Instructions
A few MMI stations in each administration received feedback suggesting improvements in the clarity of instructions or scoring criteria. The frequency of this issue was similar between the online and onsite formats. One interviewer noted that a candidate appeared to misunderstand the general instructions for an online MMI. On the other hand, many comments from candidates expressed appreciation for the MMI format, stating that it effectively showcased their potential to become competent medical students.
Theme 2: Time Management
Many candidates and interviewers commented on a mismatch between task demands and the time allotted per station. Most felt the time was insufficient to complete the tasks, while a few reported having too much time. A unique challenge in the online MMI was the delivery of the one-minute warning. Unlike the onsite format, which used a shared audible signal, the online version relied on a small pop-up notification, some of which were reportedly missed. Nearly all time-related concerns were associated with the online MMIs, except for a single issue reported during the initial onsite implementation in 2018.
Theme 3: Technical Issues
Numerous comments highlighted how technical issues could hinder candidates' ability to demonstrate their true potential. Reported issues included unfamiliarity with software, unstable internet connection, difficulty handling digital station instruments, and suboptimal performance by standardized patients (SPs). Online MMIs, in particular, received more feedback about these technical issues.
TABLE 3. Satisfaction ratings of the MMIs.
Onsite MMI | | | Online MMI | | |
Exam | Test content, Mean (SD) | Test administration, Mean (SD) | Exam | Test content, Mean (SD) | Test administration, Mean (SD) |
2018 | 3.81 (0.34) | 4.44 (0.13) | 2021 (round 1) | 3.72 (0.32) | 4.19 (0.15) | |
2019 | 4.33 (0.09) | 4.47 (0.13) | 2021 (round 2) | 4.28 (0.22) | 4.80 (0.17) | |
2020 | 4.32 (0.23) | 4.70 (0.14) | 2022 (round 1) | 4.23 (0.15) | 4.48 (0.09) | |
2023 | 4.32 (0.17) | 4.64 (0.07) | 2022 (round 2) | 4.41 (0.11) | 4.61 (0.14) | |
Average | 4.20 (0.21) | 4.56 (0.12) | Average | 4.16 (0.20) | 4.52 (0.14) |
Both interviewers and candidates reported experiencing higher stress levels during online MMIs due to the added challenge of managing multiple tasks during the online interview.
Three core components of internal structure are dimensionality, measurement invariance, and reliability.18 This study focused on measurement invariance and reliability. For measurement invariance, we examined score distributions and average scores across the various MMI administrations. We also calculated internal consistency reliability for each round of MMI scores.
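For reference, with each station treated as an item, Cronbach's alpha for an MMI with k stations is

\[
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_X^{2}}\right),
\]

where \(\sigma_i^{2}\) is the variance of scores at station i and \(\sigma_X^{2}\) is the variance of candidates' total scores. This is the standard definition; the specific adjustment applied to the "adjusted" alpha values reported in this study is not detailed here.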
Table 4 presents the lowest, highest, and average scores of all eight MMIs. For onsite MMIs, the aggregated data showed a minimum score of 63.5%, a maximum of 90.9%, and an average of 80.1%. For online MMIs, the lowest score was 64.1%, the highest 86.3%, and the average 76.8%. The highest and average scores obtained from online MMIs appeared slightly lower than those from onsite MMIs.
The adjusted Cronbach's alpha values for all eight MMI administrations are presented in Table 5. The average Cronbach's alpha for the onsite MMIs was 0.49, while the average for the online MMIs was 0.51. A line graph of the adjusted Cronbach's alpha values for all eight administrations showed that both platforms yielded similar levels of reliability, ranging between 0.36 and 0.61 (Fig 1). These results indicate that the two platforms had a similar level of internal consistency reliability.
TABLE 4. Score distribution and average scores.
Onsite MMI | | | | Online MMI | | | |
Exam | Lowest score (%) | Highest score (%) | Average score (%) | Exam | Lowest score (%) | Highest score (%) | Average score (%) |
2018 | 66.1 | 92.3 | 83.2 | 2021 (round 1) | 67.7 | 88.9 | 79.6 | |
2019 | 67.3 | 86.9 | 78.8 | 2021 (round 2) | 69.8 | 84.2 | 75.7 | |
2020 | 60.7 | 93.5 | 80.6 | 2022 (round 1) | 60.9 | 89.0 | 78.7 | |
2023 | 60.0 | 90.8 | 77.7 | 2022 (round 2) | 57.8 | 83.2 | 73.0 | |
Average | 63.5 | 90.9 | 80.1 | Average | 64.1 | 86.3 | 76.8 | |
TABLE 5. Internal consistency reliability of MMI scores.
Onsite MMI | Online MMI | |||||
Exam | Number of stations | Cronbach’s Alpha (adjusted) | Exam | Number of stations | Cronbach’s Alpha (adjusted) | |
2018 | 8 | 0.458 | 2021 (round 1) | 8 | 0.492 | |
2019 | 8 | 0.363 | 2021 (round 2) | 8 | 0.437 | |
2020 | 8 | 0.604 | 2022 (round 1) | 12 | 0.612 | |
2023 | 10 | 0.547 | 2022 (round 2) | 12 | 0.504 | |
Average | | 0.49 | Average | | 0.51 |
DISCUSSION
This study compared the traditional onsite format and the newer online format of the MMI to assess their validity and reliability in the context of medical student selection. We focused on three sources of validity evidence, including test content, response processes, and internal structure.14
Analysis of the test specification tables revealed that both MMI platforms were comparable in their coverage of objectives, targeting a range of personal characteristics important in medical education. Onsite MMI implementation was relatively straightforward, as medical teachers were already familiar with the OSCE format, which closely resembles the onsite MMI. In contrast, implementing an online MMI presented unique challenges. Each station required careful task design and scoring criteria to meet assessment objectives within a virtual environment. Observing candidates remotely demanded creativity and extensive reliance on digital technologies, such as teleconferencing platforms, internet connectivity, file-sharing tools, and video recording capabilities. Candidates were limited to basic tools such as pens, pencils, and paper, unlike the onsite MMI, which could use mannequins, part-task trainers, and standardized patients. While the online MMI was able to meet test content coverage, it required more effort to do so. Additionally, because of its heavy dependence on digital technology, the online format inadvertently assessed candidates' technological proficiency, with those more adept with digital tools appearing less stressed during the process.
We employed both quantitative and qualitative data to evaluate how effectively each platform allowed candidates to demonstrate their competencies, free from interference by administrative factors. Quantitative analysis of satisfaction ratings from participants indicated high satisfaction with both test content and administration. However, ratings tended to dip when a new interview format was first introduced. This trend was observed in the 2018 cohort (initial onsite MMI) and in the first round of the 2021 cohort (initial online MMI), likely due to anxiety associated with unfamiliar testing environments. Despite this, no significant impact on test scores was observed (Table 4). Satisfaction ratings were comparable between platforms, with a mean of 4.38 (SD = 0.16) for onsite MMIs and 4.34 (SD = 0.17) for online MMIs. Despite comparable satisfaction with the overall testing process, qualitative analysis revealed important insights into the response processes. Three major themes emerged from participants' qualitative responses: test instructions, time management, and technical issues.
Although MMIs have been discussed extensively in the literature for many years2, they remain relatively new and unfamiliar to candidates and interviewers in Thailand, necessitating significant communication efforts during implementation. For each cohort, detailed instructions were provided in both written and verbal formats, along with opportunities for participants to seek clarification. Despite these measures, a few participants still reported confusion, particularly regarding task-specific instructions at individual stations. This underscores the need for thorough item review and for clarity in all task instructions and scoring criteria to support response process validity.
Feedback on time management revealed two important lessons. First, pilot testing is essential for detecting mismatches between task demands and time allocation, allowing adjustments before actual administration. Second, managing timing across geographically dispersed participants posed a unique challenge for the online format. Unlike onsite MMIs, where a single shared signal minimizes distraction, the online platform relied on individual notifications. This calls for administrators to develop carefully planned communication strategies to preserve validity in the response process.
Technical issues emerged as a third key theme and potential threat to response process validity. While onsite MMIs occasionally involved equipment and computers, technical problems were minimal and manageable within a controlled environment. In contrast, online MMIs were more vulnerable to external factors such as varying internet connectivity and access to suitable equipment in remote locations. Managing these issues required flexibility on the part of the administrators, including rescheduling. Consequently, both candidates and interviewers reported greater stress during online MMIs.
Evaluation of MMI test scores revealed slightly lower scores in the online format compared to the onsite format. Given that both formats were designed to address similar competencies and administered to candidates of comparable educational levels, we hypothesized that the lower scores in the online MMIs may be attributed to validity issues related to response processes, such as minor technical difficulties during online administration, which may have increased candidate stress and perceived task difficulty.
Analysis of internal consistency reliability supported findings from previous research regarding the relationship between the number of stations and test reliability. Our study showed average Cronbach’s alpha values of 0.47, 0.55, and 0.56 for MMIs with 8, 10, and 12 stations, respectively. These findings are consistent with prior literature suggesting reliability improves with an increasing number of stations, typically plateauing around 10 stations.12,19 However, the reliability observed in our context was lower than that reported in other studies, which reported Cronbach’s alpha values ranging from 0.69-0.98.3,20 We hypothesize that enhancing the design of the scoring rubric and improving rater training could improve reliability in our context. Notably, internal consistency reliability was comparable between onsite and online MMIs.
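As a rough illustration (not an analysis reported in this study), the Spearman-Brown prophecy formula relates reliability to test length. Starting from the 8-station average alpha of about 0.47 and lengthening to 12 comparable stations (n = 12/8 = 1.5) predicts

\[
\rho' \;=\; \frac{n\rho}{1+(n-1)\rho} \;=\; \frac{1.5 \times 0.47}{1 + 0.5 \times 0.47} \;\approx\; 0.57,
\]

which is close to the observed 12-station average of 0.56, consistent with the expected gain from adding stations.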
In summary, our findings suggest that online MMIs, with thoughtful adaptation of tasks at some stations, can effectively assess the same test objectives as traditional onsite MMIs. Both platforms demonstrated comparable levels of reliability and participant satisfaction. However, the online MMI required more technical resources and posed greater logistical challenges, contributing to higher participant stress and slightly lower average scores.
Our experience indicates that while the online MMI is a viable alternative when in-person testing is not feasible, under normal circumstances, onsite MMIs remain easier to manage and are less likely to cause stress among participants. Future efforts to enhance MMI quality should focus on improving scoring rubrics and rater training to strengthen internal consistency reliability. Based on these findings, we recommended transitioning back to onsite MMI as soon as pandemic restrictions allowed (beginning in 2023).
There are several limitations to the generalizability of the findings from this study. First, the study was conducted at a single medical school, evaluating a specific set of candidate characteristics. Therefore, the test specifications, station designs, and evaluation criteria may differ from those used at other institutions. Second, the online MMI was implemented rapidly in response to the COVID-19 pandemic, with limited preparation time and minimal rater training. These constraints may have influenced the outcomes; with more extensive planning, better technological infrastructure, and enhanced training, online MMIs could yield different results. Additionally, although the study benefited from a large sample size (340 candidates) collected over six years across eight MMI cohorts, this extended period may have introduced variability. Changes in student characteristics, preparedness, technological familiarity, and participant attitudes over time could have influenced the results. Furthermore, we acknowledge that we did not provide the complete set of validity evidence suggested by Messick's framework; we provided evidence for only three aspects. The two remaining sources of validity evidence, relations to other variables and consequences, remain to be explored in future studies.
CONCLUSION
The online MMI is a viable alternative to onsite administration when bringing candidates and interviewers together in person is difficult. With appropriate task modifications, the online format can fulfill the same test specifications and achieve comparable score reliability. Participants reported overall satisfaction with both MMI formats, though some raised minor concerns about technical issues during the online sessions, which appeared to slightly increase the perceived difficulty and resulted in marginally lower average scores. In summary, both online and onsite MMIs showed no significant difference in their validity evidence related to test content, response processes, and internal structure. However, administrators must contend with additional difficulties in setting up an online MMI.
The datasets generated and analyzed in this study are not publicly available; however, they can be obtained from the corresponding author upon reasonable request.
ACKNOWLEDGEMENT
We thank Siriraj Health Science Education Excellence Center for their support.
DECLARATIONS
This study received financial support from an educational research grant from the Faculty of Medicine Siriraj Hospital, Mahidol University.
All the authors confirm that they have no personal or professional conflicts of interest to declare relating to any aspect of this research study.
Not applicable.
Conceptualization and methodology, C.I.; Data collection, O.U.; Data analysis, C.I.; Manuscript preparation, C.I.; Critical review and editing, C.I. and P.M. All authors reviewed the results and approved the final version of the manuscript.
Artificial intelligence was not used in the preparation of the manuscript. All study concepts, analysis, interpretation, and writing were carried out by the authors.
REFERENCES
1. Eva KW, Macala C, Fleming B. Twelve tips for constructing a multiple mini-interview. Med Teach. 2019;41(5):510-16.
2. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: the multiple mini-interview. Med Educ. 2004;38(3):314-26.
3. Rees EL, Hawarden AW, Dent G, Hays R, Bates J, Hassell AB. Evidence regarding the utility of multiple mini-interview (MMI) for selection to undergraduate health programs: A BEME systematic review: BEME Guide No. 37. Med Teach. 2016;38(5):443-55.
4. Hecker K, Donnon T, Fuentealba C, Hall D, Illanes O, Morck DW, et al. Assessment of applicants to the veterinary curriculum using a multiple mini-interview method. J Vet Med Educ. 2009;36(2):166-73.
5. Uijtdehaage S, Doyle L, Parker N. Enhancing the reliability of the multiple mini-interview for selecting prospective health care leaders. Acad Med. 2011;86(8):1032-9.
6. Sebok SS, Luu K, Klinger DA. Psychometric properties of the multiple mini-interview used for medical admissions: findings from generalizability and Rasch analyses. Adv Health Sci Educ Theory Pract. 2014;19(1):71-84.
7. Eva KW, Reiter HI, Rosenfeld J, Trinh K, Wood TJ, Norman GR. Association between a medical school admission process using the multiple mini-interview and national licensing examination scores. JAMA. 2012;308(21):2233-40.
8. Eva KW, Reiter HI, Rosenfeld J, Norman GR. The ability of the multiple mini-interview to predict preclerkship performance in medical school. Acad Med. 2004;79(10 Suppl):S40-2.
9. Reiter HI, Eva KW, Rosenfeld J, Norman GR. Multiple mini-interviews predict clerkship and licensing examination performance. Med Educ. 2007;41(4):378-84.
10. Dowell J, Lynch B, Till H, Kumwenda B, Husbands A. The multiple mini-interview in the U.K. context: 3 years of experience at Dundee. Med Teach. 2012;34(4):297-304.
11. McAndrew R, Ellis J. An evaluation of the multiple mini-interview as a selection tool for dental students. Br Dent J. 2012;212(7):331-5.
12. Thamwiriyakul N, Thamissarakul S, Wannapaschaiyong P. Association between grit and burnout among clinical medical students. Siriraj Med J. 2025;77(2):175-82.
13. Tiller D, O'Mara D, Rothnie I, Dunn S, Lee L, Roberts C. Internet-based multiple mini-interviews for candidate selection for graduate entry programmes. Med Educ. 2013;47(8):801-10.
14. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 2014.
15. Cizek GJ. Sources of validity evidence for educational and psychological tests: A follow-up study. Paper presented at the Annual Meeting of the National Council on Measurement in Education, Denver, CO, May 2010.
16. PASW Statistics for Windows, version 18.0 [program]. Chicago: SPSS Inc.; 2009.
17. Padilla JL, Benítez I. Validity evidence based on response processes. Psicothema. 2014;26(1):136-44.
18. Rios J, Wells C. Validity evidence based on internal structure. Psicothema. 2014;26(1):108-16.
19. Eva KW, Reiter HI, Trinh K, Wasi P, Rosenfeld J, Norman GR. Predictive validity of the multiple mini-interview for selecting medical trainees. Med Educ. 2009;43(8):767-75.
20. Pau A, Jeevaratnam K, Chen YS, Fall AA, Khoo C, Nadarajah VD. The Multiple Mini-Interview (MMI) for student selection in health professions training - a systematic review. Med Teach. 2013;35(12):1027-41.