Matthew J. Madison


Research

My main research goals lie in the advancement of multivariate psychometric models. Specifically, I have focused my research on a class of contemporary item response models called diagnostic classification models (DCMs). In addition to methodological research, I collaborate with applied researchers to use DCMs and other psychometric models to answer critical questions in educational contexts.


Recently, I have focused my research in a few different areas. One such area is the development and application of longitudinal DCMs. Longitudinal DCMs support categorical and criterion-referenced interpretations of growth and can be useful in evaluating intervention effects and studying learning progressions. For more on longitudinal DCMs, including a new R package, see the sections below.


I have also focused my research on issues related to the implementation of DCMs. One such issue is item influence: on the short assessments common in the DCM literature, individual items can have a disproportionate impact on, or can entirely determine, classification. We developed four indices to quantify item influence; see the Item Influence section below.

Longitudinal DCMs

Differing from classical test theory, item response theory, and student growth percentiles, which support norm-referenced interpretations of growth, longitudinal DCMs support categorical and criterion-referenced interpretations of growth. I have three recent articles that detail these developments:


Madison, M. J., & Bradshaw, L. P. (2018). Assessing growth in a diagnostic classification model framework. Psychometrika, 83(4), 963-990.


Madison, M. J., & Bradshaw, L. P. (2018). Evaluating intervention effects in a diagnostic classification model framework. Journal of Educational Measurement, 55(1), 32-51.


Madison, M. J. (2019). Reliably assessing growth with longitudinal diagnostic classification models. Educational Measurement: Issues and Practice.


The Psychometrika article details the foundations of the Transition Diagnostic Classification Model (TDCM). The TDCM is a general longitudinal DCM that combines latent transition analysis (LTA) with the Log-linear Cognitive Diagnosis Model (LCDM; Henson, Templin, & Willse, 2009). Via simulation, we show that the TDCM provides accurate and reliable classifications in a pre-test and post-test setting and is robust in the presence of item parameter drift. The Journal of Educational Measurement article extends the TDCM to multiple groups, thereby enabling the examination of group-differential growth in attribute mastery and the evaluation of intervention effects. The utility of the multigroup TDCM was demonstrated in the evaluation of an innovative instructional method in mathematics education. The EM:IP article introduces reliability measures for longitudinal DCMs.
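The two components the TDCM combines can be sketched briefly. This is an illustrative rendering of the standard LCDM item response function and a generic LTA transition component, using common notation rather than the articles' exact parameterization:

```latex
% LCDM item response function: probability that an examinee with
% attribute profile \alpha_c answers item i correctly.
P(X_i = 1 \mid \boldsymbol{\alpha}_c)
  = \frac{\exp\{\lambda_{i,0} + \boldsymbol{\lambda}_i^{\top}\,
          \mathbf{h}(\boldsymbol{\alpha}_c, \mathbf{q}_i)\}}
         {1 + \exp\{\lambda_{i,0} + \boldsymbol{\lambda}_i^{\top}\,
          \mathbf{h}(\boldsymbol{\alpha}_c, \mathbf{q}_i)\}}

% LTA transition component: probability of moving from attribute
% profile c at time t to profile c' at time t+1.
\tau_{cc'} = P\left(\boldsymbol{\alpha}^{(t+1)} = \boldsymbol{\alpha}_{c'}
             \mid \boldsymbol{\alpha}^{(t)} = \boldsymbol{\alpha}_{c}\right)
```

Here \(\lambda_{i,0}\) is an item intercept, \(\boldsymbol{\lambda}_i\) collects the main-effect and interaction parameters for item i, and \(\mathbf{h}(\cdot)\) maps the attribute profile and the item's Q-matrix entries \(\mathbf{q}_i\) to the effects active for that item. The TDCM estimates the LCDM measurement model at each time point while the transition probabilities \(\tau_{cc'}\) capture categorical growth in attribute mastery.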

TDCM R Package

All three articles cited above used Mplus to estimate the TDCM. Mplus provides tremendous flexibility, but its TDCM syntax is tedious to write. To make the TDCM easier to access, we developed the TDCM R package (Madison, Haab, Jeon, & Cotterell, 2024). It uses the CDM package (George et al., 2016) as a foundation and adds TDCM functionality. The package was recently published on CRAN, where the full release and documentation are available. The videos below demonstrate the core and extended functionalities. Email me at mjmadison@uga.edu with questions or comments.

Item Influence

When analyzing or constructing assessments scored by DCMs, understanding how each item influences attribute classifications can clarify the meaning of the measured constructs, facilitate appropriate construct representation, and identify items contributing minimal utility. On the short assessments common in the DCM literature, item influence becomes paramount, as individual items can have a disproportionate impact on, or entirely determine, classification. We developed four indices to quantify item influence and distinguish them from other available item and test measures. We use simulation methods to evaluate each index and provide interpretation guidelines, followed by a real-data application illustrating their use in practice. We also discuss theoretical considerations regarding when influence presents a psychometric concern, along with practical matters such as how the indices function when reducing influence imbalance.


Jurich, D., & Madison, M. J. (2023). Measuring item influence for diagnostic classification models. Educational Assessment.


A function to compute item influence measures is included in the TDCM package (Madison, Haab, Jeon, & Cotterell, 2024). Check the documentation, pages 4-5, for a demonstration of the function on sample data.

NCME 2025

  • NCME Presentation Slides (pdf)
  • NCME Presentation Paper (pdf)

Copyright © 2024, Matthew J. Madison. All Rights Reserved.
