The value of testing based on the curriculum in use in that program is that no extra side-teaching should be needed on curriculum-specific terminology or methods from previous grades if they come up in the course of new instruction. It should, hypothetically, be possible to establish exact equivalence to the performance of, and expectations for, a student who did receive instruction in the course being skipped. Schools feel much more comfortable with skipping when they can establish these kinds of equivalences. No one has to compare the scope and sequence of the curriculum the assessment tool was drawn from against the scope and sequence of the curriculum in which the skip is occurring. There should be no gaps at all.

I understand where you are likely coming from--that missing the cutoff on the test because you have a different, but equally effective, method for solving a problem, or because you don't know the exact vocabulary they use for a skill or concept you can otherwise demonstrate mastery of, seems like an assessment of something other than math skills--and I don't disagree. But it's also not totally irrational to use CBA this way if the SSA is going to be essentially unsupported--that is, if no special accommodations are going to be made for the skipped student, such as catching them up on gaps where they appear. And it really is the most straightforward way of predicting their likelihood of success in this grade and in this curriculum, which is actually going to be their core instruction.


...pronounced like the long vowel and first letter of the alphabet...