Validity and reliability of research instruments: Principles and actual practice

Vol. 23, No. 4 (2013)

Abstract
The aim of this paper was to explore how the principles of estimating the validity and reliability of research instruments, as described in respected methodology textbooks, are satisfied in actual practice. The initial parts of the paper delineate the theoretical framework and describe the concepts of validity and reliability. The subsequent sections explain the process of analysis and its findings. The Journal of Educational Research was chosen as the research focus, and a sample of 56 randomly selected articles from it was inspected. The analysis revealed that the large majority (91 %) of the research instruments used in these articles were scales and tests; the rest were questionnaires, observation schemes and interviews. Surprisingly, validity was documented for only 26 of the instruments; the remaining instruments were standardized tests or were only face-validated. As far as scales are concerned, construct validity was documented by means of factor analyses. Content validity and face validity were used with tests, questionnaires and interviews. We consider the infrequent combination of two sources of validity (e.g., construct and concurrent, or construct and discriminant) to be a weak element of the validation processes in the sample of studies. Reliability was documented for 80 % of the research instruments; the most frequent method of calculation was Cronbach’s alpha. Inter-rater reliability was used in observations and tests; test-retest reliability was used to control the stability of the pretest-posttest measuring instrument. The reliability coefficients in most studies exceeded 0.80. Throughout the analysis it was corroborated that, when judging validity and reliability, one has to critically consider the specific conditions of each research study before expressing an evaluative statement.
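The abstract identifies Cronbach’s alpha as the most frequent reliability estimate in the reviewed studies. As a minimal illustration of that statistic (a sketch with made-up scores, not data from any reviewed study), the standard formula α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜ) can be computed with NumPy:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents answering a 4-item Likert-type scale
scores = np.array([
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 3))  # → 0.953
```

A coefficient above 0.80, as reported for most of the sampled instruments, is conventionally read as adequate internal consistency.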

Keywords:
metastudy; validity; reliability; research instrument; research study
References

Bang, H. J. (2011). Newcomer immigrant students’ perspectives on what affects their homework experiences. The Journal of Educational Research, 104(6), 408–419. https://doi.org/10.1080/00220671.2010.499139

Disman, M. (1993). Jak se vyrábí sociologická znalost [How sociological knowledge is produced]. Praha: Vydavatelství Karolinum.

Duatepe-Paksu, A., & Ubuz, B. (2009). Effects of drama-based geometry instruction on student achievement, attitudes, and thinking levels. The Journal of Educational Research, 102(4), 272–286. https://doi.org/10.3200/JOER.102.4.272-286

Edmonds, E., O’Donoghue, C., Spano, S., & Algozzine, R. F. (2009). Learning when school is out. Journal of Educational Research, 102(3), 213–221. https://doi.org/10.3200/JOER.102.3.213-222

Elliot, J. (2012). Using narrative in social research. Qualitative and quantitative approaches. Los Angeles: Sage.

Handelsman, M. N., Briggs, W. L., Sullivan, N., & Towler, A. (2005). A measure of college student course engagement. Journal of Educational Research, 98(3), 184–189. https://doi.org/10.3200/JOER.98.3.184-192

Hendl, J. (2005). Kvalitativní výzkum. Základní metody a aplikace [Qualitative research: Basic methods and applications]. Praha: Portál.

Hong, E., & Milgram, R. M. (2000). Homework: Motivation and learning preference. Westport, CT: Bergin & Garvey.

Hoover-Dempsey, K. V., Battiato, A. C., Walker, J. M., Reed, R. P., De-Long, J. M., & Jones, K. P. (2001). Parental involvement in homework. Educational Psychologist, 36(3), 195–209. https://doi.org/10.1207/S15326985EP3603_5

Hopkins, K. D. (1998). Educational and psychological measurement and evaluation (8th ed.). Boston: Allyn and Bacon.

Janík, T., & Miková, M. (2006). Videostudie: výzkum výuky založený na analýze videozáznamu [Video study: Research on teaching based on the analysis of video recordings]. Brno: Paido.

Kline, P. (2000). Handbook of psychological testing (2nd ed.). London: Routledge.

Koh, C. K., Wang, J., Tan, O. S., Liu, W. C., & Ee, J. (2009). Bridging the gaps between students’ perceptions of group project work and their teachers’ expectations. Journal of Educational Research, 102(5), 334–347.

Madrid, L. S., Canas, M., & Ortega-Medina, M. (2007). Effects of team competition versus team cooperation in classwide peer tutoring. Journal of Educational Research, 100(3), 155–160. https://doi.org/10.3200/JOER.100.3.155-160

Najvar, P., Najvarová, V., Janík, T., & Šebestová, S. (2011). Videostudie v pedagogickém výskumu [Video studies in educational research]. Brno: Paido.

Prinz, R. J., Foster, S. L., Kent, R. N., & O’Leary, K. D. (1979). Multivariate assessment of conflict in distressed and non-distressed mother-adolescent dyads. Journal of Applied Behavior Analysis, 12(4), 691–700. https://doi.org/10.1901/jaba.1979.12-691

Salvia, J., & Ysseldyke, J. E. (1998). Assessment (7th ed.). Boston: Houghton Mifflin Company.

Seitsinger, A. (2005). Service learning and standards-based instruction in middle schools. Journal of Educational Research, 98(1), 19–30.

Shih, S. S. (2009). An examination of factors related to Taiwanese adolescents’ reports of avoidance strategies. Journal of Educational Research, 102(4), 377–388. https://doi.org/10.3200/JOER.102.5.377-388

Standards for educational and psychological testing. (1999). Washington: American Educational Research Association.

Suarez-Orozco, C., & Suarez-Orozco, M. (2001). Children of immigration. Cambridge, MA: Harvard University Press.
