Determining standard of academic potential based on the Indonesian Scholastic Aptitude Test (TBS) benchmark

Idwin Irma Krisna, Centre of Educational Assessment, Jakarta, Indonesia
Djemari Mardapi, Universitas Negeri Yogyakarta, Indonesia
Saifuddin Azwar, Universitas Gadjah Mada, Indonesia

Abstract


This article aims to classify the results of the Indonesian Scholastic Aptitude Test, or Tes Bakat Skolastik (TBS), on each subtest and to describe the scholastic aptitude represented at each benchmark. The subjects of the study were 36,125 prospective students who took the selection test at several universities. Data analysis began by estimating examinees' abilities using Item Response Theory; the benchmarking process was then carried out using the scale anchoring method, implemented with ASP.NET web server technology. The results of the research are four benchmarks (based on cut-off scores) for each subtest, the characteristics that differentiate potential at each benchmark, and the measurement error at each benchmark. The anchored items describe the scholastic aptitude potential clearly and are distinctive enough to separate the potential at a lower benchmark from that at a higher one. At a higher benchmark, a higher level of reasoning is required to analyze and process the relevant information so that the examinee can solve the problem with the correct solution. The items anchored at the lower benchmarks of the three subtests tend to be few, so the measurement error at those benchmarks tends to remain higher than at the higher benchmarks.
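To make the benchmarking step concrete, the sketch below illustrates the scale anchoring idea with a two-parameter logistic (2PL) IRT model in Python. It is not the implementation used in the study (which ran on ASP.NET web server technology); the probability thresholds (at least .65 at a benchmark, at most .50 at the benchmark below) follow common NAEP-style conventions and are assumptions here, as are the item parameters and cut-off points.

# Minimal sketch (not the study's implementation) of scale anchoring
# with a 2PL IRT model; thresholds and item parameters are illustrative.
import numpy as np

def p_correct(theta, a, b):
    # 2PL probability of a correct response at ability level theta
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def anchored_items(item_params, benchmarks, p_high=0.65, p_low=0.50):
    # For each benchmark, collect items likely answered correctly at that
    # level but unlikely at the benchmark below (NAEP-style anchoring rule).
    result = {}
    for i, theta in enumerate(benchmarks):
        anchored = []
        for item_id, (a, b) in item_params.items():
            p_here = p_correct(theta, a, b)
            p_below = p_correct(benchmarks[i - 1], a, b) if i > 0 else 0.0
            if p_here >= p_high and p_below <= p_low:
                anchored.append(item_id)
        result[theta] = anchored
    return result

# Hypothetical discrimination (a) and difficulty (b) parameters for one subtest
items = {"V01": (1.2, -1.0), "V02": (0.9, 0.2), "V03": (1.5, 1.1)}
cutoffs = [-1.0, 0.0, 1.0, 2.0]  # four illustrative benchmark points on theta
print(anchored_items(items, cutoffs))

Run on these hypothetical parameters, the rule assigns each item to the lowest benchmark at which examinees are likely to answer it correctly, which is the kind of item pool from which benchmark descriptors are written.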

Keywords


Indonesian Scholastic Aptitude Test (TBS); benchmark; scholastic aptitude






DOI: https://doi.org/10.21831/reid.v2i2.8465





This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.



