Knute, at least it's noted.

In higher ed, grading (IME) isn't standardized at all, nor does it have to be explicitly stated at a lot of institutions-- that is, the faculty member can decide. In fact, if s/he doesn't put it in writing at the start of the term, it can change mid-course.

(Yes, I find such tactics fairly deplorable, myself.)

I was a big fan of the first practice Moomin described, and from there I used a straight 90/80/70 percentage-based scale with my own classes. The only part students found confusing was that I applied it on an assignment basis rather than a term-long, cumulative one: the top score on any one assignment became 100% for that assignment. I then let the chips fall where they would with respect to the larger summation of student scores.
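A minimal sketch of that per-assignment curve, assuming simple 90/80/70 letter cutoffs (the function and variable names here are illustrative, not from the original post):

```python
def curve_assignment(raw_scores):
    """Curve one assignment: the top raw score becomes 100%,
    then apply straight 90/80/70 letter cutoffs to each student."""
    top = max(raw_scores)

    def letter(pct):
        if pct >= 90:
            return "A"
        if pct >= 80:
            return "B"
        if pct >= 70:
            return "C"
        return "D/F"

    # Each entry: (raw score, curved percentage, letter grade)
    return [(s, round(100 * s / top, 1), letter(100 * s / top))
            for s in raw_scores]
```

For example, on an assignment where the best raw score was 45/50, a student with 40 points sits at 88.9% (a B) rather than the uncurved 80%. Note how a bad question everyone missed drops out automatically, since the top scorer likely missed it too.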

Generally speaking, this worked to the advantage of students, and I also told them that if they all earned A's, I'd be happy with that. Usually, grades followed a normal distribution anyway, for whatever that is worth. I tracked my stats pretty carefully, and I taught the same service courses a lot, so I had a fair amount of data.

I found that it actively encouraged a positive learning community within a class and fostered healthy (rather than unhealthy) competition. It also tended to make students much more accepting of their own mistakes, since I told them at the outset that I wasn't perfect either and didn't expect to be-- this kind of grading scheme automatically ditches bad questions, since the 'top scorer' probably won't get them right either.

I find it sad that more instructors don't use such a scale. I've never taught a class in which someone didn't earn an A, after all. What was funny was that the top marks on assessed items generally didn't go to just one student-- it was often a rotating cast of the top 4-6 students in a class of 35 or 40.



DD's high school used a scale where an A+ started at 98, an A- at 91, and so on.



Schrödinger's cat walks into a bar. And doesn't.