Spring brings a new round of tests for New York's students, reigniting debate about the quality of those tests ["Common Core is upsetting to kids," Letters, April 21].
Taxpayers should consider the lack of transparency with which testing is managed by the state and its contractor, Pearson Inc.
Last year's tests brought a range of problems: topics not taught to students, questions inappropriate for grade levels, misalignment with the Common Core standards, and now-notorious questions that had more than one correct answer, or none.
Are the 2014 tests any better? Only state education officials know, and they're not telling. They have prohibited administrators and teachers from discussing the tests except in the most general terms. Moreover, the state has not released information needed for parents and educators to determine the quality of the tests.
Test developers rightly bear the burden of demonstrating how well their tests work. This should entail studies with statistical procedures that evaluate a test vis-à-vis such nerdy concepts as construct validity and internal consistency reliability. To what extent does the test measure what it purports to, and nothing else? To what extent do items that are supposed to measure the same thing actually do so?
Answers to such questions establish the extent to which confidence is warranted.
Bruce Torff, Locust Valley
Editor's note: The writer is a professor of teaching, literacy and leadership at Hofstra University.