But I have joined in on a couple of occasions, because this whole “progress” issue has such a stink to it that even an electives teacher like myself is under a moral obligation to look into where it’s coming from.
What I can do, and have done already, is ask some questions, and I can also be grateful for the responses and happy that I have a site where I can post them.
The first question I asked on a couple of listservs was this:
I've been hearing a lot of parents and teachers saying their school got an A or a B or a D, and I'm not sure they've understood that these letter grades are assigned to percentiles, not scores in the way we use them every day to grade papers.
In the DOE school grading system, an "A" does not mean 90 or above, but rather that a school is in the top 15%, and so on, thus:

85 or above = A
85 - 45 = B
45 - 15 = C
15 - 5 = D
Below 5 = F

This table is from a printed report called "Educator Guide: the NYC Progress Report", and it was given to me by a teacher trainer who takes administrative classes with superintendents. She got it there. When we checked the online version today, this table does not appear. Another table is used, on p.29. It gives rather:

67.6 - 100 = A
48.8 - 67.5 = B
35.1 - 48.8 = C
28.9 - 35.1 = D
0 - 28.9 = F

Apart from the fact that neither of us [editorial comment: nor a reporter who subsequently contacted me about this] can understand why one table appears in the printed version and another online (maybe one version is for HS and another for another level, I'm not sure), I am very troubled by the ambiguity the DOE has created by using LETTER grades in the first place. The general public looking at a letter grade does not think of statistics and percentiles. They see what they are used to: that an A is usually in the 90s, B in the 80s, C in the 70s, D in the upper 60s, and F below 65, or failing. I am sure a less ambiguous system could have been used so that no one could possibly confuse the percentile groups they've labeled A - F with the normal and widespread meaning of letter grades.
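The gap between these cutoffs and everyday grading can be made concrete. Here is a minimal sketch in Python (my own illustration, not anything the DOE publishes; the function name is mine) that applies the online table's cutoffs as quoted above:

```python
def letter_grade(score):
    """Map a 0-100 Progress Report score to a letter grade using
    the cutoffs from the online table (p. 29), as quoted in the post."""
    if score >= 67.6:
        return "A"
    elif score >= 48.8:
        return "B"
    elif score >= 35.1:
        return "C"
    elif score >= 28.9:
        return "D"
    else:
        return "F"

# A score of 70 earns an "A" under these cutoffs, even though a 70
# on a student's paper would be read as a middling C.
print(letter_grade(70))   # A
print(letter_grade(66))   # B, though 66 on a paper would mean a D
print(letter_grade(40))   # C, though 40 on a paper is a clear fail
```

The point of the sketch is simply that the same number produces very different letters depending on which convention the reader assumes, which is exactly the ambiguity described above.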
More obfuscation and confusion on top of the contortions of the computations themselves.
The teacher trainer also told me this: "The progress report is an attempt by the DOE to show NYS that NYC is meeting the mandates of NCLB. Thus the huge number of grades that seem above average (A's and B's) but are actually pretty darn low."
I do admit to not knowing a whole lot about this grading project/fiasco, but my reaction is visceral: I believe there is a purposeful effort on the part of the DOE to misinform the general public. Can someone tell me if I am at least getting the facts right, or is there something I am not understanding?
One response came from Gary, who not only linked to a parody he had just written, but drew my attention to Leonie Haimson’s 2004 testimony that included a section on “Creative Confusion.” Very helpful in understanding the general ideology, but not specifically dealing with the weird use of letter grades that represent sets of percentages far different from what the hoi polloi are used to, like A=90s, B=80s, etc.
I asked another question in November, this time on comparisons of test grades from a long time ago to the present. I was basically interested in finding out if there’s been a measurable “dumbing down” of the population over time.
Has anyone ever administered a test, let's say for 8th grade math, written in the 1960s to a current 8th-grade class to see what kids know now versus what they knew 40 years ago?

This led to some back and forth on how some tests are somewhat comparable over time (e.g., the Long-Term Trend NAEP, explained by Ravitch) while others (like the city and state tests) are not. I looked at the NAEP site as she suggested and found you still cannot get very accurate long-term comparisons, since accommodations were allowed in some years but not in all.
Yesterday I read a post on NYCEducator that brought to mind some other questions about testing and the obfuscation thereof:
Kids taking a certain exam this year, say, will be different from the kids taking the same exam next year, right? So what is the point of comparing this year's scores against next year's scores in the first place? Different set of kids means different set of test-takers. What good does it do comparing one set of test-takers against a later set?

Shouldn't progress be measured by comparing how well the same set of kids do from one year to the next? I know that's not really possible, so what's the point of these comparisons anyway?

And here are some of the answers to those 5 questions, first from xkaydet65:
If you deal with the Elem and Int schools you can follow a kid's progress. Does he go from a 2 to a 3 or vice versa? (BTW a change of 2 mult. choice answers can change a high 2 to a 3 and a low 3 to a 2.)

Then from Schoolgal:
For you HS folks there is no way to compare results. The tests are different and the kids are different. Yet some people try to do that and they don't work for Bloomklein. Colleges trumpet increases in the SAT scores of their frosh classes from year to year even though the exams change. This is probably the paradigm that the DoE is working with.
Elementary and Jr. HS students can be tracked. And it is true how one wrong answer can be the difference between a 2 and a 3.

And from 15 more years:
However our principal informed us that the report card measurement will no longer look at the grade as much as the raw score. So if a child got a high 3 or high 4 two years in a row, they will see it as zero improvement rather than meeting or exceeding the standards. That is why schools with very high scores are getting Bs or Cs.
What is a teacher or principal supposed to do with this crazy method?
Schools in the high 80s that are pressuring their teachers to bring scores up will only be hurting themselves in the future once they hit the ceiling. Yet the most violent schools got As because their low scores showed some improvement. If I knew how to spell in Yiddish, I would say this was one cockamayme plan. (my apologies to the Yiddish-speaking readers)
[editorial note: the alternative spelling is kakameyme, which brings up an unfortunate comparison between cock and kaka.....]
This business of kids not improving from year to year is a big crock of hooey. If they were taking the same exact test every year, then yes, as they go from 5th grade to 6th and onward, you naturally would expect a child to show growth. BUT plenty of kids top out in 5th or 6th grade. The ELA in 7th grade is much easier than the ELA in 8th grade, so it is almost expected for a child to remain static, if not drop a bit, if he is a struggling student to begin with. And what if a child is sick the day of the test? There are so many variables to consider that the concept is ridiculous and ill-conceived.
The mantra coming out of Tweed these days is “accountability,” and Bloom/Klein have made much of testing and test score comparisons to convince people that they know how to make schools accountable.
But it is clear that from almost every possible angle – different tests, different test-takers and what side of the bed they woke up on, different amounts of accommodation, different degree and quality of prep, different testing conditions, and even different morality in marking – comparisons of test scores do not stand up to scrutiny or further the cause of accountability.
Test score data is useful for one thing only: Spin. And that’s the farthest away from accountability you can get.
What are they really trying to do with our schools and with our kids?
New link!! Thanks to Ednotesonline, here's a Test you can take, put out by Prof. Celia Oyler of Columbia TC, on the KleinLieb school grading system. Find out if you already know all there is to know about this colossal boondoggle and if it will ever in a hundred million years be an accurate, fair, or relevant tool for judging NYC schools.