
My son is taking the WISC-V soon, and I'm trying to learn all I can about the test. After reading the extended norms report, I'm left with a couple of questions:

If I understand correctly, the WISC-V subtests are all normally distributed and imperfectly correlated with one another, so composite indices derived from multiple subtests should have lower standard deviations than the average of their components' standard deviations (for example, an index averaging two subtests, each with mean = 10 and standard deviation = 3, should have a mean of 10 but a standard deviation less than 3). This makes sense to me, as a student averaging +1 SD on two sufficiently distinct but equally relevant tasks should earn a composite score above +1 SD. Indices such as the VCI and FRI, both derived from two subtests, seem to support this: a sum of 38 on either index, i.e., an average score of 19 (+3 SD), yields a composite score of 155 (+3.67 SD). The effect is even stronger in the GAI, derived from five subtests: an average subtest score of 19 yields a GAI of 160 (+4 SD). However, when three more subtests are added to the GAI to make the EGAI, the effect stays the same (a mean of 19 still maps to +4 SD). The same holds for the VCI versus the VECI, and for the GAI, CPI, and FSIQ. Was this done to maintain consistency in interpretation, or were the additional subtests designed with substitution in mind? In the last case, the FSIQ seems to be an average of the other two top-level indices.
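For what it's worth, the arithmetic behind that shrinkage can be sketched directly. The pairwise correlation below is hypothetical, chosen only so the numbers line up with the 19 → 155 example above:

```python
import math

def composite_z(z_subtests, r):
    """Z-score of a composite formed by summing n equally weighted
    standardized subtest scores with a common pairwise correlation r.

    Var(sum of n standardized scores) = n + n*(n-1)*r, so the sum is
    divided by sqrt(n + n*(n-1)*r) to put the composite back on a
    unit-SD scale. With r < 1, this denominator is less than n, so a
    uniform +3 SD performance maps to a composite above +3 SD."""
    n = len(z_subtests)
    return sum(z_subtests) / math.sqrt(n + n * (n - 1) * r)

# Two subtests at 19 (z = +3 each), hypothetical correlation r = 0.34:
z2 = composite_z([3, 3], r=0.34)    # ~ +3.67, i.e. an index near 155

# Five subtests at z = +3 with the same pairwise correlation: the
# composite climbs further, as with the GAI.
z5 = composite_z([3] * 5, r=0.34)
```

The real subtest intercorrelations are published in the technical manual and vary by pair, so this is only meant to show the direction and rough size of the effect.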

I am also struggling to understand why the confidence intervals listed are the same size throughout the scaled score continuum for every index. From what I've read on Item Response Theory, the standard error of measurement is the inverse square root of the test's information function (which I assume is high around the average score of 100 and tapers off at the extremes, since the test is designed to work best around the population average). I took the expected score shifting upward within the confidence interval at higher scores as an indication of the information function bottoming out and scores regressing to the mean, but the size of the SEM appears to be constant. The gifted sample undoubtedly helps in providing more information at the upper extreme, but even so, I can't imagine why the SEM wouldn't change across such a large scale.

I am neither a psychologist nor a statistician, so anything I've written here could be erroneous; nevertheless, any help would be appreciated.

Thanks in advance.

Last edited by OldManDan; 08/12/23 02:06 PM. Reason: Replaced Greek letters

Thanks for the resources, Indigo! His test isn't until a couple of weeks later, so we still have some time to ease him into the process. My wife and I are both avid gamers along with our son, so that's how we're planning to explain the test to him.

OMD, you are very astute. As it happens, yes, the additional VCI subtests that are included in the VECI are allowable substitutions (and likewise the EGAI, etc.), but only intended for use as substitutions in the FSIQ. That is, substitutions are not allowed for any other index-level score.

As to the CIs, Pearson chose to base them on estimated true scores (using the standard error of estimation (SEE)), as a partial correction for regression to the mean. The size of the SEMs appears constant largely because they list them as integers. In the tables included in the technical manual, it is apparent that there is variation by age (you can see this in the freely-downloadable expanded indices technical report (#5), too). Although not expressly listed, it can be inferred from the tables on significant differences between measures that the SEMs also vary by ability.
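The true-score adjustment works roughly like this sketch (the reliability value below is a hypothetical round number, not Pearson's published coefficient):

```python
import math

def true_score_ci(observed, reliability, mean=100.0, sd=15.0, z=1.96):
    """Confidence interval centered on the estimated true score,
    using the standard error of estimation (SEE) rather than the SEM.

    The estimated true score regresses the observed score toward the
    mean in proportion to the reliability; the interval half-width is
    z * SEE, where SEE = SD * sqrt(r) * sqrt(1 - r)."""
    true_est = mean + reliability * (observed - mean)
    see = sd * math.sqrt(reliability) * math.sqrt(1.0 - reliability)
    return (true_est - z * see, true_est + z * see)

# With a hypothetical reliability of 0.96, an observed 155 is pulled
# slightly toward 100 before the interval is built around it, which
# is why the printed intervals sit asymmetrically around the
# observed score:
lo, hi = true_score_ci(155, 0.96)
```

The interval is symmetric around the estimated true score (152.8 here), not around the observed 155, which produces exactly the upward/downward shift within the CI that was noticed in the earlier post.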

I expect that the variation (across both age and ability) was judged sufficiently minor that the functional impact of simplicity (rounding to whole numbers) was considered more important than that of precision.

...pronounced like the long vowel and first letter of the alphabet...

We wrapped up the testing process last week, and everything went smoothly, thanks to the resources provided by Indigo and other info found on this forum. We haven't received the detailed report yet, but we know he qualified for DYS!

I have a few more questions after doing some more research, if you don't mind indulging my curiosity a bit further:

From what I've since read about factor analysis, it picks the axis that minimizes the average shortest distance between each data point and the axis, then projects each data point onto that axis along its shortest path. It then scales the projected point toward the mean by the average of the variances of the data along axes orthogonal to the chosen axis (the unexplained variance), extracting the correlation among the variables by eliminating the correlation among the error vectors (the vectors pointing from the final projected, scaled point back to the original data point). This also implies that the result should stop changing as more variables with similar correlations to the original group are added, as there is only so much correlation to extract.
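Here's a minimal numerical sketch of the projection idea as I understand it, using simulated data with a single shared factor (all loadings are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated "subtest" scores driven by one shared factor g,
# plus independent error; 500 simulated examinees:
n = 500
g = rng.normal(size=n)
x1 = 0.8 * g + 0.6 * rng.normal(size=n)
x2 = 0.8 * g + 0.6 * rng.normal(size=n)
X = np.column_stack([x1, x2])
X -= X.mean(axis=0)

# First principal axis: the direction minimizing the summed squared
# perpendicular distances, i.e. the top eigenvector of the
# covariance matrix.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
axis = eigvecs[:, -1]                    # largest eigenvalue last

# Projecting each point onto the axis gives a "factor score" up to
# scale; the residual (orthogonal) variance is the unexplained part.
scores = X @ axis
explained = eigvals[-1] / eigvals.sum()
```

(Strictly speaking this is principal components rather than common factor analysis, but the geometry of "project, then treat the orthogonal leftover as unexplained variance" is the same picture I'm describing above.)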

Data from the technical and interpretive manual show that the correlations between indices and their two primary subtests are very high, often above 0.9 (for example, SI and VC both correlate above 0.9 with the VCI), while correlations between indices and their extra subtests are much lower, usually around 0.7. This suggests to me that the indices were computed from only their primary subtests, and that the extra subtests were kept around because they correlate with the primary subtests about as strongly as the primary subtests correlate with each other (which would suggest the average unexplained variance wouldn't change significantly if they were substituted into the FSIQ, but could change sizably if substituted into an index score, since an index has only two subtests to begin with). Is this why substitutions are only allowed for the FSIQ (and why the average score < composite score effect described in my original post doesn't apply to the expanded indices)?
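There even seems to be a simple closed form behind that 0.9 pattern: if an index is just the (standardized) sum of its two primary subtests, the subtest–index correlation follows from the subtest intercorrelation alone. The 0.62 below is a hypothetical round number, not a manual value:

```python
import math

def subtest_index_corr(r):
    """Correlation between a standardized subtest X1 and an index
    formed by summing it with one other subtest X2, when
    corr(X1, X2) = r:

        corr(X1, X1 + X2) = (1 + r) / sqrt(2 * (1 + r))
                          = sqrt((1 + r) / 2)
    """
    return math.sqrt((1.0 + r) / 2.0)

# If the two primary subtests correlate around 0.62, each correlates
# with their shared index at about 0.90 -- roughly the pattern in the
# manual, purely because the subtest is "inside" its own index:
print(subtest_index_corr(0.62))
```

So a ~0.9 subtest–index correlation may be partly an artifact of the part–whole relationship rather than evidence of anything extra.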

However, the GAI and CPI have middling correlation with each other but high correlation with the FSIQ (analogous to VC and SI with the VCI), yet the FSIQ is a direct average of those two indices (whereas the VCI gets a boost); is this due to the FSIQ explaining most of the correlation among subtests within the GAI and CPI separately?

You are correct that the WISC-V indices were derived from only their first two subtests. In addition, the GAI/FSIQ do not include all of the subtests that contribute to the primary indices, and the FSIQ further does not include all of the CPI (WMI/PSI) subtests. (The GAI expressly does not include any of them, of course.) GAI has the two primary verbal subtests, the two primary fluid reasoning subtests, and the first visual spatial subtest (block design). The FSIQ adds one subtest each from WMI (digit span) and PSI (coding).

Which, you'll notice, means that the FSIQ is not the average of the GAI and the CPI, consisting as it does of 5/7 GAI subtests, and 2/7 CPI subtests. (If you've been looking at documents pertaining to the WISC-IV, I can see how this detail may have slipped by, since it changed on the test revision.) In fact, the GAI/FSIQ are not even evenly constituted from the three indices considered more reasoning-focused, since VS only has one subtest contributing.
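Laying the composition out as sets makes those fractions explicit (subtest names per the primary battery, as described above):

```python
# Subtest membership as described above: GAI = two verbal, two fluid
# reasoning, and Block Design; FSIQ adds Digit Span (WMI) and
# Coding (PSI). CPI is all four WMI/PSI primary subtests.
GAI = {"Similarities", "Vocabulary", "Block Design",
       "Matrix Reasoning", "Figure Weights"}
CPI = {"Digit Span", "Picture Span", "Coding", "Symbol Search"}
FSIQ = GAI | {"Digit Span", "Coding"}

# 5 of the 7 FSIQ subtests come from the GAI and only 2 from the CPI,
# so the FSIQ cannot be a simple average of the two index scores:
print(len(FSIQ & GAI), len(FSIQ & CPI), len(FSIQ))  # 5 2 7
```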

If you feel like digging through this (), intercorrelation tables by age for the indices and subtests begin on page 48. You'll see that, generally, the FSIQ has the best intercorrelation numbers with the GAI, followed by the VCI/FRI (for obvious reasons). The VSI and CPI come after that.

I have a feeling I've missed one of your questions somewhere along the way, so please ask again if not satisfied!


Thanks for the kind words and for shedding more light on the FSIQ, Aeh.

I had originally assumed that the FSIQ was derived from the five first-level factors in their entirety and that some fancy extrapolation was involved in reducing the number of subtests involved in the score calculation, but knowing that it's computed from only the core seven subtests clears a lot of things up.

In hindsight, I believe I meant to say that the FSIQ seemed like a weighted average of the GAI and CPI even though it's based on more (and more unique) information, which in my mind seemed to contradict what happened when two subtests combined to form an index whose score is greater than the average of the two contributing subtest scores. Going from GAI and CPI to FSIQ seemed similar to what the expanded VCI did with the VCI: refining what's already there versus changing the foundation.

Now knowing that the FSIQ takes bits and pieces from the other indices to form the most complete picture from a leaner amount of information, the weighted average makes more sense. It also agrees with how the WISC-IV has a more lenient average-score threshold for a given FSIQ than the WISC-V, since its FSIQ is derived from ten subtests. Now I'm curious as to why the expanded scores don't include an expanded FSIQ, unless adding the extra information would change the interpretation by changing the average (and sum) score thresholds.

Now for the real challenge: getting DS excited about more schoolwork (we promised to buy him a $60 game if he gave his best on the test, and now he is thoroughly engrossed in it)!