Estimating Effects of Non-Participation on State NAEP Scores Using Empirical Methods

The primary objectives of NAEP tests are to accurately monitor the progress of defined groups of students over time and to measure valid differences in scores between student groups at a single point in time. In this context, valid differences in scores are those linked to “real” differences in student knowledge as measured on achievement tests.

The NVS Panel previously sponsored analysis directed toward estimating the potential bias from changing exclusion rates in NAEP tests. This study takes up a second threat to the validity of scores, one arising from differential and changing participation rates of schools and students in NAEP testing. Non-participation can arise either from a sampled student's absence or refusal to participate (student non-participation) or from a principal's decision not to allow the school to participate (school non-participation). School participation was voluntary until the 2003 test, when participation of sampled schools became mandatory by federal statute. However, student participation continues to be voluntary.

This study has the following objectives:

  • To compile and examine student and school non-participation rates across states, along with factors that might explain non-participation patterns across states and their potential for bias;
  • To treat the 2002–2003 4th and 8th grade state scores as a natural experiment to estimate the extent of possible bias;
  • To develop statistical models that account for the pattern of 696 state NAEP scores from 1990–2003, and to assess whether the pattern of non-participation is a significant explanatory factor in this pattern of state NAEP scores (an illustrative sketch of such a model follows this list); and
  • To compare estimates of bias from these methods to the bias from worst-case scenarios estimated by McLaughlin (2004).
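
For illustration only, the sketch below shows one way a panel model of this kind might be specified: an ordinary least squares regression of state scores on a non-participation-rate covariate with state and year fixed effects, fit to synthetic data. The variable names, data, and specification are assumptions made for exposition, not the models actually estimated in this study.

```python
# Minimal illustrative sketch (not the study's actual model): regress state NAEP
# scores on a non-participation-rate covariate with state and year fixed effects.
# All data below are synthetic; column names and specification are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic panel: a handful of states observed in several assessment years.
states = ["AL", "CA", "NY", "TX"]
years = [1996, 2000, 2003]
rows = []
for state in states:
    state_effect = rng.normal(0, 5)              # unobserved state-level shift
    for year in years:
        nonpart_rate = rng.uniform(0.05, 0.25)   # combined school/student non-participation
        score = (250 + state_effect + 1.5 * (year - 1996)
                 - 20 * nonpart_rate + rng.normal(0, 2))
        rows.append({"state": state, "year": year,
                     "nonpart_rate": nonpart_rate, "score": score})
df = pd.DataFrame(rows)

# Fixed-effects specification: score ~ non-participation rate + state and year dummies.
model = smf.ols("score ~ nonpart_rate + C(state) + C(year)", data=df).fit()
print(model.summary())
```

In a specification of this form, the coefficient on the non-participation rate summarizes whether states and years with higher non-participation tend to show systematically different scores once state and year effects are held constant.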