I'm wishing I'd paid closer attention in stats class. I found this link with information about the standard deviation of RIT scores. It says:
"At each grade the standard deviation (for reading) is between 14 and 15. That means that if a student takes the test repeatedly on the same day, his/her scores will fall within 7-7.5 points on either side of the reported RIT score.
It's not you, it's them! That is absolutely not what a standard deviation means. In fact, if you think about it, it couldn't possibly mean that. Suppose someone takes the test at 9am and gets a score of N. According to the claim above, if they take the test again at 10am, 11am, ..., 11pm, every subsequent score will land between N - 7.5 and N + 7.5. That is, magically, the very first sitting managed to produce a score that sits in the middle, not an outlier, among the set of all possible scores that student could produce. How is that supposed to happen?
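A quick simulation makes the point concrete. This is a toy model of my own, not anything from the link: I assume each sitting yields the student's true score plus normal retest noise, with made-up numbers (TRUE_SCORE = 210, NOISE_SD = 5). Under any model like this, the first score has no special claim to being the center of the student's possible scores:

```python
import random

random.seed(1)

# Toy retest model (my assumption, not the link's): each sitting is the
# student's true score plus normal noise. Both numbers are made up.
TRUE_SCORE = 210
NOISE_SD = 5

def one_day():
    """Simulate a first score and 14 retakes, and check whether every
    retake lands within 7.5 points of that first score."""
    first = random.gauss(TRUE_SCORE, NOISE_SD)
    retakes = [random.gauss(TRUE_SCORE, NOISE_SD) for _ in range(14)]
    return all(abs(s - first) <= 7.5 for s in retakes)

trials = 10_000
hits = sum(one_day() for _ in range(trials))
print(f"days where every retake stayed within 7.5 of the first score: "
      f"{hits / trials:.1%}")
# With these numbers the fraction comes out well under half: on most
# simulated days, at least one retake escapes the claimed band.
```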
The standard deviation describes how much variation there is among the test scores of a group of subjects, in this case the group the test was normed on. It says nothing about how much an individual's score may vary from sitting to sitting. (You *could* describe that individual variation with a standard deviation - psychometricians call it the standard error of measurement - but I looked at the link, and that's not what it's reporting.)
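For contrast, here's a sketch of what an SD of 14-15 does describe: the spread of scores across many different students. The mean and SD below are illustrative stand-ins, not figures from the link; the familiar "about 68% within one SD" reading assumes roughly normal scores.

```python
import random
import statistics

random.seed(1)

# Hypothetical norming group: scores from many different students.
# MEAN_RIT and GROUP_SD are illustrative placeholders, not real norms.
MEAN_RIT = 210
GROUP_SD = 14.5

scores = [random.gauss(MEAN_RIT, GROUP_SD) for _ in range(100_000)]

print(f"group SD: {statistics.stdev(scores):.1f}")  # ~14.5, by construction

# The textbook reading of that SD: about 68% of *students* score within
# one SD of the group mean (for an approximately normal distribution).
within = sum(abs(s - MEAN_RIT) <= GROUP_SD for s in scores) / len(scores)
print(f"students within one SD of the mean: {within:.0%}")  # ~68%
```

The SD here is a statement about the spread of the group, which is exactly why it can't tell you how one student's score would bounce around on repeated testing.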