File photo of children in a classroom. (Aug. 6, 2012) Credit: iStock

For the first time, public school districts have been given initial ratings, or "growth scores," for their teachers, plucked from two years of standardized testing by the State Department of Education. The data are deeply flawed in many cases. That's not good news, but it's not the full story.

The system is being set up with enough time to iron out the kinks. Most districts won't judge teachers based on growth scores until next year. And the information to be gleaned from them -- judging by how much more complete the 2011-12 data are than the 2010-11 numbers -- is rapidly improving and should continue to do so.

Getting and learning to use this information is a crucial step in implementing the performance-based teacher evaluations that were the subject of political fights over the past two years. The state agreed to create its evaluation system in exchange for competitive grants from the federal Race to the Top program. Student achievement on state tests in English and math will count as 20 percent of a teacher's annual evaluation in grades three through eight. Another 20 percent of a teacher's rating will be based on locally selected assessments. The criteria for the rest of a teacher's score will be designated by individual districts, which have until Jan. 1 to complete their plans. Those that fail to do so could lose annual increases in state aid.

One problem with the data districts are getting is that as many as half of the students in certain grades could not be matched to specific teachers in the 2010-11 testing. The hitch is that the schools weren't required by law to submit complete data matching students with teachers for that year's process, though they were encouraged to do so. State officials say this "linkage data" for 2011-12 tests has improved to cover 75 to 80 percent of students, and will be even better next year.

The other problem, one that won't ever be solved perfectly, is that principals are seeing growth scores that occasionally defy what they personally know about the skills of certain teachers. That's bound to happen with sample sizes as small as single classrooms, though it shouldn't happen often, and teachers who do well on the other 60 to 80 percent of their evaluations shouldn't have much to fear from it. Experts, even those skeptical of judging teachers this way, agree the system New York is putting in place to measure student achievement year over year is "state of the art."

The challenge is to develop as sharp a tool as possible. On Long Island, about 75 percent of districts have submitted plans, and it's up to the State Education Department to assess and approve them. Then the task, for all involved, will be implementing, monitoring and improving the systems. That's the educators' job, but parents have a role, too. They can access the evaluations of their kids' current teachers, feed the community grapevine about which ones are soaring or swooning, and speak up if they feel their child's instructors are substandard. That's the kind of pressure that can spur administrators to hire, train, keep and reward the best possible teachers.