I think this essay makes an important point: the College Board is making big changes in the SAT without any evidence that the new test will better predict success in college.
http://isteve.blogspot.com/2014/04/sat-new-test-hasnt-been-tested.html
SAT: The new test hasn't been tested
by Steve Sailer
April 16, 2014
Looking through the couple hundred pages of verbiage that the College Board has released about their revisions to the SAT, I haven't found any evidence that they've tested the new test they've announced. It wouldn't be terribly hard to carry out research to see what kinds of questions predict college performance best, but they don't seem to have done any research whatsoever involving potential questions. They've conducted various market research studies (focus groups, surveys, etc.) of what various people say they want in the SAT, but they have done nothing to see if what they've announced will actually work.
There's an amusing irony here: the SAT is a test used to predict how individuals do. But, as for predicting how the predictor is going to work, well, we'll just have to wing it. This strikes me as fundamentally irresponsible -- nearly a couple of million kids per year take the SAT -- but all too typical of contemporary elites in America.
...
This is the fundamental problem, actually.
Well, it's not limited to the College Board, either-- ACT dances around it, as well. The
real reason why colleges are going test-optional-- or, as Dude and I discussed earlier, relying upon their OWN metrics with incoming students-- is that standardized testing is HORRIBLE at predicting college success, and every successive iteration seems to make the connection more tenuous still.
Oh. Tenuous. Another of those strange words, I suppose.
And while yes, I expect that the top 10% is likely to be relatively unfazed by the shift, the
real problem is when you have teens who are living in a RADICALLY different mental/cognitive space from those writing test items. Just because something
seems to be good at differentiating the center of the distribution curve (say the 5th through 95th percentiles) doesn't mean that weird things can't occur outside of that range.
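To put some (entirely made-up) numbers on that: here's a toy simulation of a ceiling effect. The latent "ability" scale, the noise level, and the ceiling location are all illustrative assumptions, not anything derived from the actual SAT-- the point is just that a test can correlate strongly with ability across the whole range and still tell you almost nothing about who's who at the top.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical latent ability (standard normal) and a noisy test score
# with a hard ceiling at 1.8 SD (roughly the 96th percentile).
# Both parameters are invented for illustration.
ability = rng.normal(size=n)
score = np.minimum(ability + rng.normal(scale=0.3, size=n), 1.8)

overall_r = np.corrcoef(ability, score)[0, 1]          # validity, full range
top = ability > np.quantile(ability, 0.98)             # top 2% of ability
tail_r = np.corrcoef(ability[top], score[top])[0, 1]   # validity in the tail

print(f"correlation, everyone: {overall_r:.2f}")
print(f"correlation, top 2%:   {tail_r:.2f}")
```

The full-sample correlation comes out high, while the top-2% correlation collapses, because nearly everyone in the tail piles up at the ceiling. A validity study run on the middle of the distribution would never see the problem.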
What was the admission rate at the Ivies this year, again?
RIGHT.
So this kind of revamping stands (at least potentially) to harm the very top of the distribution by making the test questions
unanswerable if you know TOO MUCH.
I've seen this again and again and again with DD-- the SAT already had issues this way, and everything I have seen of the 'rewrite' thus far indicates that it
elevates ambiguity by trying to be much more "clever" than before... but the problem is that when a test like that gets HARDER the more you know, it's not measuring performance or potential very well for the top __th percentile, whatever that turns out to be. The questions get harder when you can see them in ways that are CORRECT, but which the test writers never anticipated.
Beta testing doesn't give you sufficient numbers to really KNOW that you have a big problem on your hands there until it happens during rollout.
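The arithmetic on that is stark. The pilot sizes below are invented for illustration-- I have no idea what the College Board's actual field-trial N is-- but even a respectable beta sample contains only a handful of kids in the extreme right tail:

```python
# Hypothetical pilot sizes -- illustrative only, not the College Board's figures.
# Expected count of students above a given percentile in a pilot of size N
# is simply N * (1 - percentile/100).
for pilot_n in (2_000, 15_000):
    for pct in (99.0, 99.9):
        expected = pilot_n * (1 - pct / 100)
        print(f"pilot of {pilot_n:>6}: ~{expected:>5.1f} students "
              f"above the {pct}th percentile")
```

With two to twenty students above the 99.9th percentile, an item that misbehaves only for them is statistically invisible in the pilot data-- you find out during the operational rollout, when millions take it.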
This part of things (IMO) stands to harm MG+ students the most. Probably increasingly so with increasing LOG.
Not to mention the fact that validation here doesn't mean what they think it does. It's the dumbest idea EVER to 'align' the SAT with the
high school curriculum rather than with what college faculty say is deficient in incoming freshmen. I expect that this move will simply make that gap all the more apparent, myself.
Hello? Secondary education? Yeah-- you're.not.listening.