Member (Joined: Apr 2014, Posts: 4,076, Likes: 6)
Once upon a time, some examiners observed anecdotal patterns of depressed scores in Arithmetic, Coding, and Digit Span (some also include Information), in the context of at least average performance on the remaining subtests, all associated with inattention and impulsivity (aka ADD/ADHD). If a particular examiner thought there was a question of ADHD, and they also believed the ACID profile was legitimate (newsflash: it's not), they might throw in the optional Arithmetic subtest to validate their behavioral observations of inattentiveness and impulsivity.
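Just to make the profile arithmetic concrete (and emphatically not to endorse it, since as noted the profile isn't diagnostically valid), here is a minimal Python sketch; the 3-point gap threshold and the illustrative scores are my own assumptions, not Wechsler scoring rules:

```python
# Purely illustrative: the ACID profile is not diagnostically valid.
# The 3-point gap threshold and the sample scores are assumptions
# for this sketch, not Wechsler scoring rules.
ACID = {"Arithmetic", "Coding", "Information", "Digit Span"}

def acid_like_profile(scaled: dict[str, int], gap: float = 3.0) -> bool:
    """True if the mean of the ACID subtests trails the mean of the rest."""
    acid = [s for name, s in scaled.items() if name in ACID]
    rest = [s for name, s in scaled.items() if name not in ACID]
    if not acid or not rest:
        return False
    return sum(rest) / len(rest) - sum(acid) / len(acid) >= gap

scores = {"Similarities": 14, "Vocabulary": 15, "Block Design": 13,
          "Matrix Reasoning": 14, "Arithmetic": 9, "Coding": 8,
          "Information": 10, "Digit Span": 9}
print(acid_like_profile(scores))  # True: ACID mean 9.0 vs. 14.0 for the rest
```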
Member (Joined: Apr 2014, Posts: 4,076, Likes: 6)
Yes. That is correct. In some cases, this is described as intrasubtest scatter, and it may suggest that the measure is a low estimate of ability, that the examinee was not fully engaged or attending, or that their access to instruction has been inconsistent (which can happen with gifted youngsters, because they explore knowledge on their own, in sequences not typical of the general population).
I have also seen gifted students exhibit scatter because they employed a simpler problem-solving approach on easier items, and switched to a more efficient, higher-level approach only after failing a number of items. One student I recall nearly hit a ceiling on the Memory for Beads subtest of the old Stanford-Binet 4, because he was using a brute-force rote-memory approach, but then succeeded at quite a number of items in a row when he changed to an approach involving grouping and mini-patterns. (Not sure if my explanation of his approach is clear, but) the takeaways are 1) his actual working memory was much higher than the scaled score represented, and 2) I would not have derived as much information from this test if I had not explored the reason for his abrupt increase in performance near the end of the test. The qualitative information about his thinking and reasoning was much more interesting than the scaled score.
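To make "intrasubtest scatter" concrete, here is a minimal sketch under assumed conventions (items administered easiest to hardest, responses recorded as 1 = pass, 0 = fail); the metric, passes occurring after the first failure, is a toy definition, not a published scoring rule:

```python
# A toy scatter metric, assuming items run easiest to hardest and
# responses are 1 (pass) / 0 (fail). Not a published scoring rule.
def intrasubtest_scatter(responses: list[int]) -> int:
    """Count items passed after the examinee's first failure."""
    if 0 not in responses:
        return 0  # no failures, so no scatter by this definition
    first_fail = responses.index(0)
    return sum(responses[first_fail + 1:])

# Shaped like the bead-memory anecdote above: early misses under a
# brute-force strategy, then a run of passes after switching approach.
responses = [1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0]
print(intrasubtest_scatter(responses))  # 5
```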
Member (Joined: May 2011, Posts: 269)
That makes sense and is fascinating. Thank you!
OP · Junior Member (Joined: Apr 2014, Posts: 12)
Thanks for all the info... it is fascinating.
Member (Joined: Jul 2010, Posts: 480)
OK, so extended norms are being over-used, but the Pearson link posted earlier says the highest score in the norming sample was 151. So how is a score from 150 up to where the extended norms kick in valid? I do understand it's all pretty timey-wimey, wibbly-wobbly above the 99.9th percentile, but extrapolating beyond the top end of your sample seems even more suspect.
Member (Joined: Apr 2014, Posts: 4,076, Likes: 6)
And that 151 also became, de facto, the top score in the gifted validation sample under the standard norms. That is to say, we have no idea whether that 151 was really a 151, or should have been a 161 or a 181. The point of the exercise with extended norms is to more finely distinguish all the examinees who bunch up between about 140 and 151 (if you factor in the confidence interval). If you look at the increased spread from standard to extended norms (e.g., the FSIQ top score goes from 151 to 159, the VCI from 155 to 188), clearly not all 150s are created equal. Extended norms are also a way of rescoring the raw data into scaled scores above 19. They overlay the standard norms rather than purely replacing them (you can think of the 151 on the standard norms as 151+). The rule requiring two maximum scaled scores is supposed to restrict use of extended norms to those children whose confidence intervals include the maximum Index score under the standard norms. But I grant you that the extended norms have more clinical utility than strict psychometric robustness. I'm sure there was a fair amount of fun with curve smoothing involved.
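As a structural sketch of the mechanics just described (the two-at-ceiling eligibility rule, and re-scoring raw scores into scaled scores above 19): the 19 ceiling and the eligibility rule come from the discussion above, but the lookup-table entries, subtest, and raw scores below are entirely hypothetical, since the real tables live in Pearson's technical report.

```python
# Sketch only. The 19 ceiling and the "at least two subtests at the
# maximum scaled score" rule come from the discussion above; the
# lookup-table numbers are invented, not Pearson's.
MAX_SCALED = 19  # scaled-score ceiling under the standard norms

def extended_norms_applicable(scaled: dict[str, int]) -> bool:
    """Eligibility rule: at least two subtests at the 19 ceiling."""
    return sum(1 for s in scaled.values() if s == MAX_SCALED) >= 2

# Hypothetical raw-score -> extended scaled-score lookup for one
# subtest. Structurally, extended norms re-score the same raw data
# into scaled scores above 19, overlaying the standard norms.
EXTENDED_TABLE = {("Vocabulary", 58): 21, ("Vocabulary", 60): 23}

def extended_scaled(subtest: str, raw: int, standard_scaled: int) -> int:
    """Fall back to the standard scaled score if no extended entry applies."""
    return EXTENDED_TABLE.get((subtest, raw), standard_scaled)

scaled = {"Vocabulary": 19, "Similarities": 19, "Information": 17}
print(extended_norms_applicable(scaled))      # True
print(extended_scaled("Vocabulary", 58, 19))  # 21 (hypothetical)
```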
Pearson also really had to release some technical guidance, because people were starting to make their own rules for this. Plus, more cynically, the Stanford-Binet LM had been clinging to the gifted eval market for a long time, and part of its appeal was undoubtedly the possibility of scores in the 200s. The extreme age of the LM created a hole in the market, which no instrument had definitively captured.
I hope what I just wrote makes sense, because it is definitely past my bedtime!
Member (Joined: Jul 2010, Posts: 480)
Thank you, it does make sense. It's like above-grade testing instead of grade-level testing. Is that a fair comparison?
I don't know that ranking the kids in the over-150 range matters as much as the individual child's personality and quirks (which is probably true of all children, now that I think about it).
Junior Member (Joined: Jun 2011, Posts: 4)
That's what happened with my son: twice he got 60 out of 60 on the math part of the CogAT. We have no idea what it means, because what would he have gotten if there had been 80 or 100 questions?
I always scored at the 13th-grade level on my school's state standardized tests, from 6th grade (the first year I took them) on. They never raised the maximum level, and they never told us our raw scores.
My opinion is that if your child is tested, especially if you are paying for it yourself, they had better give you a raw score too. The same holds for SAT subject tests: on some tests you could get one question wrong and not get an 800, while on others you could get seven wrong and still get an 800. The raw score will tell you more than the converted reported score.
(Note that the G&T criteria for JH CTY seem very low to me, compared to what little public schools give much more gifted kids.)
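On the form-dependence point above: here are two entirely made-up conversion tables for hypothetical forms of the same test, showing how one form's curve can forgive seven misses at the 800 ceiling while another's forgives none. All numbers are invented for the sketch.

```python
# Invented conversion tables for two hypothetical forms of one test,
# illustrating why a raw score adds information near the ceiling.
FORM_LENIENT = {raw: 800 for raw in range(53, 61)}  # 53-60 all map to 800
FORM_LENIENT.update({52: 790, 51: 780})
FORM_HARSH = {60: 800, 59: 790, 58: 770, 57: 760}   # one miss drops the score

for raw in (60, 59, 53):
    print(raw, FORM_LENIENT[raw], FORM_HARSH.get(raw, "off this toy table"))
# 60 800 800
# 59 800 790
# 53 800 off this toy table
```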
|
|
|
|
Member (Joined: Apr 2014, Posts: 4,076, Likes: 6)
"The raw score will tell you more than the converted reported score."
Not really. The converted score is the part that tells you where the student stands in comparison to the national norm population (hence, are they statistically unusual enough to be considered gifted). It also compensates for differences in forms of the same test (such as for the SAT/ACT/GRE, etc.), by rooting them in the same normative sample. I do, however, always include raw scores in my score tables, in addition to all the converted scores, because I would like that information if I were receiving the report, both as a parent and as a professional. I would agree that the raw score provides additional information about ceiling effects.
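A quick worked example of the normative meaning the converted score carries, assuming the usual deviation-IQ convention (mean 100, SD 15):

```python
# Converted (standard) scores locate an examinee on the norm
# population's distribution; here, the usual deviation-IQ scale
# with mean 100 and SD 15.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
for score in (100, 130, 145, 151):
    print(score, f"{iq.cdf(score):.4%}")
# 100 50.0000%
# 130 97.7250%
# 145 99.8650%
# 151 99.9663%
```

A raw score alone can't tell you any of that; by itself it mostly tells you about distance from the test's own ceiling.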
Junior Member (Joined: Dec 2018, Posts: 20)
I know that this is a very old post, but anybody claiming that inclusion in the extended norms sample required 98th %ile in VCI and PRI, a 99.9th %ile GAI, and a ceiling raw score in at least one subtest is clearly having you on. According to Technical Report #7, norming data came from both the original standardization sample and data provided by the NAGC. NAGC data included a VCI of 110, PRI of 102, FSIQ of 118, and GAI of 120. Even the subject of the case study provided with the report (with a GAI of 208, although the reliability of norming data is likely quite dubious in that range) did not attain the maximum raw score on any subtest.