Measuring instructional design competencies of future professionals: construct validity of the ibstpi® standards

Yalçın Y., Ursavaş Ö. F., Klein J. D.

ETR&D-EDUCATIONAL TECHNOLOGY RESEARCH AND DEVELOPMENT, vol.69, no.3, pp.1701-1727, 2021 (SSCI)

  • Publication Type: Article
  • Volume: 69 Issue: 3
  • Publication Date: 2021
  • DOI: 10.1007/s11423-021-10006-7
  • Journal Indexes: Social Sciences Citation Index (SSCI), Scopus, Academic Search Premier, IBZ Online, Periodicals Index Online, EBSCO Education Source, Education Abstracts, Educational research abstracts (ERA), ERIC (Education Resources Information Center), INSPEC, MLA - Modern Language Association Database, Psycinfo
  • Page Numbers: pp.1701-1727
  • Keywords: Instructional design, Competency, Measurement, Construct validity, ibstpi, PROJECT-MANAGEMENT, SKILLS
  • Recep Tayyip Erdoğan University Affiliated: Yes


The competencies that constitute the instructional design profession, and how these competencies should be measured, have been of interest to researchers for many years. Among the competency sets developed to date, the ibstpi® instructional designer competencies have been widely used by researchers, practitioners, educational programs, and employers in the field of Instructional Design and Technology. The purpose of this study was to investigate whether the ibstpi® instructional design competency set could be used as a self-report instrument to measure instructional design competencies. For this purpose, we assigned a five-point scale to the ibstpi® competency set and investigated the construct validity of the resulting instrument by administering it to a sample of future professionals. We used a robust method to translate the competency set into Turkish and recruited 820 junior and senior students who were enrolled in a degree program in Computer Education and Instructional Technology in Turkey. The conceptual framework of the competency set was used to develop three models, which were tested via a series of Confirmatory Factor Analyses. All three models showed a good fit to the data, and the factors had above-satisfactory internal consistency reliability and convergent validity. Although the factors in the three models did not show perfect discriminant validity, we argue that the instrument can be used to measure instructional design competencies. A discussion and implications of the findings are provided.