Suppose they gave a test and nobody took it. Three states, Nevada, Montana, and North Dakota, are currently attempting to right their assessment ships after their newly prescribed computer-based Common Core tests crashed and burned. At least around the edges. Software problems and technical glitches have kept students in these states from completing the federally mandated exams. The U.S. Department of Education has not backed off its expectation that ninety-five percent of students be tested by the end of the year. It must be done, on earth as it is in Washington.
And just what are the results going to show? My guess is that high-stakes testing is not a viable way to get fair and accurate data on student achievement. But that has never really been the aim of these tests. Teachers know how their students are doing without sitting them in front of a computer for hours at a stretch to measure their capacities and abilities. The simple fact that these tests are given in March and April, while the school year doesn't end for another month or two, tells us that these supposedly summative measures are nothing of the sort. If you want to find out what a kid knows at the end of a year, you ask the kid at the end of the year. Instead, we come back from our spring break, lash them to their chairs, and ask them a battery of questions designed by companies trying to deliver statistics to government officials, not to teachers and administrators. Yes, it will be interesting to have scores from these tests while the students are still in the grade for which they took them, but how meaningful will that be for the teacher, student, and parent looking at the aggregate score from a week those students spent doing something they tend not to do for weeks at a stretch: taking tests?
What will they get? A picture of how the testing system is working, and a chance for the companies that sell their services to school districts across the country to fix the software and network issues that cropped up on these most recent go-rounds. Far from getting an accurate snapshot of how each student is performing in their studies, we will find out that some vast chunk of a percentile is under-performing, to a degree that shows we have somehow failed: our students must apply themselves more fully, and their teachers must commit themselves to creating better test subjects for these wacky experiments in computer-adaptive assessment. I didn't think there would ever come a time when I would miss those pages of bubbles and newsprint booklets full of questions. How friendly and benign they suddenly seem by comparison. What do I suggest instead? The old standard: D) None of the above.