A COMPARISON OF DETECTING DIFFERENTIAL ITEM FUNCTIONING IN GRADE 12 O-NET RESULTS BETWEEN THE IRT-LR AND SIBTEST METHODS

Authors

  • พิจักษณา กาวี, Burapha University, Chonburi, Thailand
  • ปิยะทิพย์ ประดุจพรม, Major in Research and Statistics in Cognitive Science, College of Research Methodology and Cognitive Science, Burapha University, Chonburi, Thailand

Keywords

Differential Item Functioning, Item Response Theory, Ordinary National Education Test, IRT-LR Method, SIBTEST Method

Abstract

Differential item functioning (DIF) and item bias are distinct concepts. DIF is established through statistical procedures, whereas item bias concerns the fairness of the test: items flagged for DIF are submitted to content analysis and judged by experts. The goals of this research were to analyze the quality of the test (the a, b, and c parameters), to check the reliability index and construct validity of the test before and after DIF items were eliminated, and to compare the Type I and Type II error rates between the IRT-LR and SIBTEST methods on the Grade 12 O-NET. The instruments used in this analysis were the tests of the eight learning areas of the curriculum. The results were as follows: 1) For the O-NET before and after the elimination of DIF items, analyzed with the three-parameter IRT model, the item discrimination (a) and item difficulty (b) values of the eight learning-area tests differed significantly, while the guessing parameter (c) did not exceed 0.30 for any item. The reliability index after elimination differed significantly from that before elimination, and the construct validity analysis showed that the tests of two learning areas were consistent with the empirical data. 2) Of the 430 items examined before elimination, the IRT-LR method flagged 256 items (59.53%) as showing DIF, whereas the SIBTEST method flagged 79 items (18.37%). 3) In a direct comparison before elimination, the IRT-LR method detected DIF in 41.86% more items than the SIBTEST method, and 65 items (15.12%) were flagged by both methods (p < 0.05); the Type I and Type II error rates of the two methods also differed.
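
To make the IRT-LR approach concrete, the sketch below illustrates its core logic under simplifying assumptions that are not from the paper: a 2PL model for a single studied item, simulated data in which latent ability is known, and two examinee groups. It fits a compact model (item parameters constrained equal across groups) and an augmented model (parameters free per group) and compares them with a likelihood-ratio chi-square; a real analysis would fit the full 3PL by marginal maximum likelihood with anchor items, as software such as IRTPRO does.

```python
# Minimal IRT-LR sketch for one studied item (simulation, not the paper's analysis).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

def p2pl(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_loglik(params, theta, y):
    a, b = params
    p = np.clip(p2pl(theta, a, b), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Simulated data: reference (R) and focal (F) groups answer one item.
n = 1000
theta_R, theta_F = rng.normal(0, 1, n), rng.normal(0, 1, n)
y_R = (rng.random(n) < p2pl(theta_R, a=1.2, b=0.0)).astype(int)  # no DIF here...
y_F = (rng.random(n) < p2pl(theta_F, a=1.2, b=0.5)).astype(int)  # ...b shifted: DIF

# Compact model: one (a, b) shared by both groups.
theta, y = np.concatenate([theta_R, theta_F]), np.concatenate([y_R, y_F])
fit_c = minimize(neg_loglik, x0=[1.0, 0.0], args=(theta, y))

# Augmented model: separate (a, b) per group.
fit_R = minimize(neg_loglik, x0=[1.0, 0.0], args=(theta_R, y_R))
fit_F = minimize(neg_loglik, x0=[1.0, 0.0], args=(theta_F, y_F))

# G^2 = 2 * (logL_augmented - logL_compact); df = 2 freed parameters (a and b).
g2 = 2 * (fit_c.fun - (fit_R.fun + fit_F.fun))
print(f"G^2 = {g2:.2f}, p = {chi2.sf(g2, 2):.4f}")  # small p -> flag item as DIF
```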
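The SIBTEST statistic can be sketched in a similarly simplified way. The hypothetical function below computes beta-hat as a focal-group-weighted difference in studied-item means across rest-score strata, tested with a normal z statistic; it omits Shealy and Stout's regression correction of the stratum means, so it shows the structure of the statistic rather than the full procedure.

```python
# Simplified SIBTEST beta-hat for one studied item (no regression correction).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def sibtest_beta(y_R, rest_R, y_F, rest_F, min_n=2):
    """beta-hat: focal-weighted difference in item means across rest-score strata."""
    beta, var = 0.0, 0.0
    n_F = len(y_F)
    for k in np.intersect1d(rest_R, rest_F):      # matching-score levels in both groups
        r, f = y_R[rest_R == k], y_F[rest_F == k]
        if len(r) < min_n or len(f) < min_n:
            continue                              # skip sparse strata
        w = len(f) / n_F                          # focal-group density at level k
        beta += w * (r.mean() - f.mean())
        var += w**2 * (r.var(ddof=1)/len(r) + f.var(ddof=1)/len(f))
    z = beta / np.sqrt(var)
    return beta, z, 2 * norm.sf(abs(z))           # two-sided p-value

# Toy data: rest scores on a 20-item matching subtest, one studied item.
n = 1500
rest_R = rng.binomial(20, 0.6, n)                 # reference group
rest_F = rng.binomial(20, 0.6, n)                 # focal group, same ability
p_R = 1/(1 + np.exp(-(rest_R - 12)/3))            # item depends on ability...
p_F = 1/(1 + np.exp(-(rest_F - 12)/3)) - 0.08     # ...plus uniform DIF against F
y_R = (rng.random(n) < p_R).astype(int)
y_F = (rng.random(n) < np.clip(p_F, 0, 1)).astype(int)

beta, z, p = sibtest_beta(y_R, rest_R, y_F, rest_F)
print(f"beta-hat = {beta:.3f}, z = {z:.2f}, p = {p:.4f}")  # small p -> DIF flagged
```

In both sketches a small p-value flags the studied item, mirroring the p < 0.05 criterion used in the abstract.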

Published

2018-07-20

How to Cite

กาวี พ., & ประดุจพรม ป. (2018). A COMPARISON OF DETECTING DIFFERENTIAL ITEM FUNCTIONING IN GRADE 12 O-NET RESULTS BETWEEN THE IRT-LR AND SIBTEST METHODS. Academic Journal Phranakhon Rajabhat University, 9(2), 224–241. Retrieved from https://so01.tci-thaijo.org/index.php/AJPU/article/view/133166

Section

Research Article