Michael Jones brandishes his credentials when attacking Peter Cowley, author of the Fraser Institute’s school rankings. Mr. Jones wants us to know he has a Master’s degree in education. Thus, we are supposed to believe his opinions.
Yet he does not bother to quote even two sentences verbatim from the Fraser Institute's report on schools, nor even to paraphrase a single sentence fairly and accurately.
His advanced degree did not save him from setting up a straw man by imagining that Mr. Cowley somehow "pits" a private school "against" an inner-city school. On the contrary, the institute deliberately praised one inner-city school for improving its numbers from one year to the next.
Mr. Jones repeats a tired cliché when he complains that the Foundation Skills Assessment tests impose a "one test fits all" program. How often has this standard gripe been repeated as if it were a decisive, knock-down argument?
Unfortunately, any comparative testing must, of necessity, "fit all." Any comparative measure of income, longevity, horsepower and the like must use a test that "fits all."
Without a standardized "one" test, comparisons in any field of research are invalid.
For no clear reason, Mr. Jones rejects the FSA tests but embraces another form of testing, the Integrated Resource Package. He acknowledges that the IRP is also "standardized testing." So we shall have standardized testing of some kind after all.
The only way to reject standardized testing completely is to reject the very idea of comparing schools in the first place. People who want to keep the public in the dark about comparative school performance should simply say so honestly.
They should not hide behind the wrongheaded complaint that there is inherently something wrong with one test that fits all.
Could we ever learn anything from a test that does not fit all?
Greg Lanning