Development of the Evaluation Form for a Patient Report on Difficult Case in Primary Care

Jirawadee Chumpol
Sasiton Kanchanasuvarna
Thanaporn Daengjai
Panitee Poonpetcharat

Abstract

Background: Medical students’ patient reports are narrative and vary in the opinions they express. An assessment form for evaluating report quality is therefore essential for effective and fair assessment.


Objective: To develop and evaluate the quality of an assessment form for medical students’ patient reports on difficult cases in primary care.


Methods: Academic staff developed a scoring rubric as a guideline for the aspects to be assessed and then applied it to medical students’ reports. Fifteen group reports by 6th-year medical students in the Family and Community Medicine course (RAID 615), academic year 2021, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Thailand, were assessed by 3 staff members selected by purposive sampling. The validity and reliability of the assessment form were then evaluated.


Results: The scoring rubric assessment form comprised 10 items, each scored on 3 levels. The index of item-objective congruence (IOC) ranged from 0.67 to 1.00, and construct validity, assessed via inter-rater reliability (IRR) using the intraclass correlation coefficient (ICC), was moderate.
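The two statistics reported above are standard psychometric measures. As a minimal, illustrative sketch (the ratings below are invented for demonstration and are not the study's data), IOC is the mean of experts' +1/0/−1 congruence ratings per item (Rovinelli & Hambleton), and ICC(2,1) is the two-way random-effects, absolute-agreement, single-rater intraclass correlation (Koo & Li, 2016):

```python
def ioc(expert_scores):
    """Index of item-objective congruence: each expert rates an item
    +1 (congruent), 0 (unsure), or -1 (not congruent); IOC is the mean
    rating. Items with IOC >= 0.5 are conventionally retained."""
    return sum(expert_scores) / len(expert_scores)

def icc_2_1(ratings):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rater
    intraclass correlation. `ratings` is a list of subjects (reports),
    each a list of the same k raters' scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]                  # per-subject means
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]  # per-rater means
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)     # between-subjects SS
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)     # between-raters SS
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ms_r = ss_rows / (n - 1)                                   # mean squares
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Illustrative only: one rubric item judged by three experts.
print(round(ioc([1, 1, 0]), 2))   # 0.67

# Illustrative only: four reports each scored by the same three raters.
scores = [[9, 10, 9], [5, 6, 5], [7, 8, 7], [3, 4, 3]]
print(round(icc_2_1(scores), 2))  # 0.95
```

By the conventional Koo & Li bands, an ICC between 0.50 and 0.75 would be read as "moderate" reliability, as reported in this study.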


Conclusions: The assessment form is effective in use. However, raters’ understanding of how to use the assessment form affects the quality of the assessment.



Article Details

How to Cite
Chumpol J, Kanchanasuvarna S, Daengjai T, Poonpetcharat P. Development of the Evaluation Form for a Patient Report on Difficult Case in Primary Care. Rama Med J [Internet]. 2022 Dec. 28 [cited 2024 Apr. 26];45(4):35-44. Available from: https://he02.tci-thaijo.org/index.php/ramajournal/article/view/259391
Section
Original Articles

References

Reddy YM, Andrade H. A review of rubric use in higher education. Assessment & Evaluation in Higher Education. 2010;35(4):435-448. doi:10.1080/02602930902862859

Piyaphimonsi C. Scoring rubrics, 2001. Accessed September 23, 2022. http://www.watpon.in.th/Elearning/mea5.htm

Saleh S. Scoring rubrics, 2005. Accessed September 23, 2022. https://shorturl.asia/oS1c4

Ritcharoon P. Scoring rubrics: the tool for teachers to accurately and fairly evaluate learning outcomes. STOU Education Journal. 2019;12(1):1-16. Accessed September 23, 2022. https://so05.tci-thaijo.org/index.php/edjour_stou/article/view/151008/140274

Rovinelli RJ, Hambleton RK. On the use of content specialists in the assessment of criterion-referenced test item validity. Tijdschrift voor Onderwijsresearch. 1977;2(2):49-60.

Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15(2):155-163. doi:10.1016/j.jcm.2016.02.012

Mertler CA. Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation. 2000;7(25):1-8. doi:10.7275/gcy8-0w24

Siladech C. The Development of Mathayomsuksa 3 English Portfolio Assessment. Unpublished doctoral dissertation, Srinakharinwirot University; 1997.

Pinyoanuntapong B. Measurement and Evaluation of Learning Outcome. (New Assessment). Faculty of Education, Srinakharinwirot University; 2004.

Apaikawee D, Tuksino P. The results of scoring of essay test by different groups of rater and scoring designs. In: The 27th Thailand Measurement Evaluation and Research Conference. 2019:108-124. Accessed September 23, 2022. https://profile.yru.ac.th/storage/journals/aec9e8669753f84464d9a1bacd8d1760.pdf

Fahim M, Bijani H. The effects of rater training on raters’ severity and bias in second language writing assessment. Iranian Journal of Language Testing. 2011;1(1):1-16. Accessed September 23, 2022. https://www.ijlt.ir/article_114349_e13647117bb44051247e053c09eddb89.pdf

Saal FE, Downey RG, Lahey MA. Rating the ratings: assessing the psychometric quality of rating data. Psychological Bulletin. 1980;88(2):413-428. doi:10.1037/0033-2909.88.2.413