The results of the PSSA exams that were taken last
April by all Pennsylvania students in grades 3-8 were recently released to
great consternation across the Commonwealth. To its credit, the recent CDT article
was considerably more nuanced, but almost universally, the headlines have been
something along the lines of “PA Math test scores drop.”
This is actually not a true statement. What is true is that results have been released from entirely new exams covering new material, material taught largely for the first time, with new and unevenly distributed resources, by teachers who have received varying levels of professional preparation.
It is as if we decided this year to measure ‘fruit
production’ by counting oranges, when in past years we counted apples; the
results are not comparable. Schools and teachers are now being evaluated in
part by how many oranges they produce, and since there happen to be fewer oranges
than apples, it gives the false impression that schools and teachers aren’t ‘performing’
as well as in past years. The cynical side of me wonders whether this was intentional.
And by the way, we count the oranges before the
harvest is even complete! Of course, this
problem has existed for years. The PSSA exams are given in mid-April, barely 80% of the way through the school year. So every year, schools face a dilemma: do they try to
cram a year’s worth of material into eight months, or just accept the fact that
students will be tested on material they have yet to see? Who would design such a system?
The other important thing to know, especially for
parents, is that these scores are not a good indication of how well your child
is doing in school. For one thing, they attempt to measure only a very narrow range of what is important in your child’s education. Your child’s teacher has far better tools at her disposal, including her own observations, to help you understand how well your child is doing.
Part of the reason this is true (and this may seem hard to believe) is that these exams were not designed for the purpose of evaluating individual students, and certainly not their teachers! The tests were designed to give us an indication of how well schools are doing in the aggregate, as part of the federal mandate under No Child Left Behind. One indication of the unreliability of these scores is that your child’s teacher will tell you that your child’s score “falls within a range,” a range that is actually quite large. Given the better alternatives, I would go so far as to say that even sharing the results of these tests with individual students and parents borders on educational malpractice. But that’s just me.
Ironically, all this testing has told us little that
we didn’t already know if we were paying attention. Has anyone been surprised
to discover that a particular school is struggling? All you really had to do was look up the school’s zip code and check the community’s median income.
Also (and this is deeply ironic), if the purpose of these exams is to measure how well schools are doing in the aggregate, all this testing is unnecessary! Has no one heard of statistical sampling? You could get the same results by testing a small percentage of randomly selected students, without the fortress-like test security, without the tears of special ed students being tested two grade levels above their IEP, and without disrupting a full twelve days on the school calendar.
Instead of every student taking nine 2-hour tests, you could randomly divide
the students into nine groups, and have each student take one 2-hour test.
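For the curious, here is a minimal sketch, in Python, of what that random division might look like. Everything in it is hypothetical: the 450-student school, the assign_sections name, and the seed are made up purely for illustration, and none of it reflects how the state actually builds or administers its tests.

    import random

    # Purely illustrative matrix-sampling sketch (hypothetical numbers, not the
    # state's actual procedure): instead of every student sitting all nine
    # 2-hour sections, each student is randomly assigned exactly one section,
    # and school-level results are estimated from the pooled responses.

    NUM_SECTIONS = 9

    def assign_sections(student_ids, seed=None):
        """Randomly deal students into NUM_SECTIONS groups of nearly equal size."""
        rng = random.Random(seed)
        shuffled = list(student_ids)
        rng.shuffle(shuffled)
        # Round-robin dealing keeps the group sizes within one student of each other.
        return {section: shuffled[section::NUM_SECTIONS] for section in range(NUM_SECTIONS)}

    students = ["student_%04d" % i for i in range(450)]   # one hypothetical school
    groups = assign_sections(students, seed=2015)
    for section, members in sorted(groups.items()):
        print("Section %d: %d students, 2 hours of testing each" % (section + 1, len(members)))

Each student would spend two hours testing instead of eighteen, and the school would still get usable aggregate numbers for every section.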
The bottom line: the education of our children is
important. These tests aren’t. Take the results with a grain of salt.
P.S. The answer to the question above (who would design such a system?) is our unelected State Board of Education, which appears to be firmly entrenched in an educational philosophy that would have been more appropriate in the middle of the last century.
P.P.S. I wrote this last April, but forgot to publish it. oops...