We’ve gone down the rabbit trail of standardized testing for too long.
But now the release of the End of Course (EOC) algebra and geometry scores has shone a surprising light on the system, casting doubt on testing’s ability to provide the trustworthy information its advocates have touted.
For a test to be the final arbiter in decisions like whether students graduate or teachers get fired, we had better be sure it accurately assesses both teacher instruction and student growth. But can one test realistically perform both of these complicated tasks? Can’t students learn even from a bad teacher? Can they fail to learn from a good one?
The EOC exams assess specific math courses. In contrast, the HSPE/WASL tested “problem solving,” a skill that was never clearly defined and never really connected to what students do in class.
At my high school, our passing rates hovered in the 25-35% range for ten years. Schools get labeled as failures on data like this. For today’s comparison we’ll use data from the last two years, in which the same teachers taught the same curriculum to the same student demographics.
How do the results compare? The differences are staggering. In 2010, our math HSPE passing rate was 36.1%. Last June, 60.5% of students who took the algebra EOC passed it, and 69.2% passed the geometry EOC. Essentially, our rates doubled in one year. (And these scores are actually misleadingly low, because the denominators include students who didn’t even take the test. Why does that make sense? Ask a testing-industry lobbyist or your nearest ed-reformer.)
And what is the only significant difference between years?
It’s the test. When we finally gave a test that presumes to assess what the course actually teaches, the passing rate skyrocketed. Comparable gains occurred around the state.
It seems all the teachers got a whole lot better all at the same time. That must be some hairy professional development. The truth is, of course, we’ve been working our tails off the whole time; the new tests just reward it more.
Think about the panic that ensued each year with the release of WASL scores. How much of that panic was fostered by a test with little relation to actual student learning? What purpose did all of the labor and money that went into the Testing Decade serve?
Further, had we been forced to accept a system where teachers were held “accountable” by test scores, as some continue to demand, some of the same teachers behind these doubled passing rates could have been fired over the earlier low ones.
And this reveals the fatal flaw in every argument that trusts testing to guide our educational policy decisions. If these tests are so unreliable that scores will double just by using a different one, how can we possibly use them as an authoritative source for decisions about school choice, teacher accountability, merit pay, school “failures”, the need for more “training” and “better” teachers, or any other hare-brained solutions for manufactured crises?
For example, do we really know for sure that our nation “lags” behind other nations, as we are so often told? That American education is in “crisis”? On which test do we lag behind? Which students took it? How reliable an instrument is it?
Once you discover how much of your perspective on education has been shaped by testing data that is largely meaningless, you may be open to the possibility that the real issues schools face lie on an entirely different plane from the one most people talk about.
The truth is, schools will reflect the surrounding culture. If education is struggling, examine our society for the reasons. It makes little sense to expect the schoolhouse to stand as a shining beacon of light in the midst of a culture in economic, moral, social, and family decay.
We’ve gone so far, forging ahead with excessive exertions and expenditures, and we don’t even realize we’re on the wrong trail.
Standardized testing? You fail.