OP | Junior Member | Joined: Nov 2016 | Posts: 35
I'm looking for information on the benefits of certain testing combinations, CogAT + Iowa in this case, and why they might be trusted for gifted identification within school districts.
For many years, our district used the combo of MAP and OLSAT to qualify students for its gifted accelerated program (9th stanine required on all; see the note on stanine cutoffs after this post). A couple of years ago, the system was overhauled. The district now gives the following tests to every third grader: CogAT, Iowa reading, Iowa math, Iowa social studies, and Iowa science. The Iowa tests are used to identify GT students in each subject area for enrichment at their home schools, but the gifted accelerated program is only for those who score in the 9th stanine on CogAT, Iowa math, and Iowa reading. There are sometimes more qualifiers than available spots, so students are rank-ordered and offered spots in that order--which means some qualifiers are left out.
My questions to anyone who is very familiar with these tests:
1) What might be the reasons the district chose to use Iowa as a better indicator of readiness for acceleration (or GT services per subject) than MAP? Assuming they are giving grade-level Iowa tests, it seems on the surface that MAP would be a better indicator. Then again, I am familiar with MAP and not at all familiar with Iowa.
2) What are possible reasons for the switch from OLSAT to CogAT? Is one considered more reliable than the other?
3) Is this combination of CogAT + Iowa widely used and known to be a good indicator of which students might be more successful in an accelerated program?
Our district changed its entire identification process but did not publicize any reasons for the switch. I now have a 3rd grader who has qualified for the accelerated program, and he would have under the old system as well (his MAP scores are always 9th stanine in both reading and math). But I know of other students who have YEARS of 9th-stanine MAP history and have demonstrated class performance above grade level, yet were not offered a spot in the accelerated program because an Iowa math or reading score was not high enough. Seeing these children fall through the cracks when I feel they should have been offered an opportunity makes me wonder why the district might think this new combination of testing is better.

In addition, my older child was tested last year as a 5th grader, and his Iowa reading result was a full 20 percentile points LOWER than his MAP results have been for many years. He is always 96th-99th on reading MAP, which has held true from kindergarten through 5th grade, but was 79th on Iowa reading. All of this makes me look at Iowa in a suspicious light. Any insight is appreciated.
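As a point of reference for the "9th stanine" and 96th-percentile cutoffs that come up throughout this thread, here is a minimal sketch of the conventional percentile-rank-to-stanine mapping. It assumes the standard normal-curve stanine distribution (4-7-12-17-20-17-12-7-4 percent per stanine); exact cut points vary slightly by publisher, so the boundaries below are approximate rather than taken from any particular test manual.

```python
# Approximate mapping from national percentile rank (PR) to stanine, using the
# conventional normal-curve stanine percentages 4-7-12-17-20-17-12-7-4.
# Exact cut points vary slightly by publisher; this is illustrative only.

# Upper cumulative percentile for stanines 1 through 8; anything above the
# last boundary falls in stanine 9 (roughly the top 4%).
STANINE_UPPER_BOUNDS = [4, 11, 23, 40, 60, 77, 89, 96]

def percentile_to_stanine(percentile_rank: float) -> int:
    """Map a national percentile rank (1-99) to an approximate stanine (1-9)."""
    for stanine, upper in enumerate(STANINE_UPPER_BOUNDS, start=1):
        if percentile_rank <= upper:
            return stanine
    return 9

if __name__ == "__main__":
    # Scores mentioned in this thread: a 79th PR lands around stanine 7,
    # a 93rd around stanine 8, and only roughly the 97th and above reach stanine 9.
    for pr in (79, 93, 97, 99):
        print(f"PR {pr:>2} -> approx. stanine {percentile_to_stanine(pr)}")
```

On this mapping, a "9th stanine on all tests" requirement and a 96th-percentile cutoff are roughly the same bar: the top ~4 percent of the norm group.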
Member | Joined: Apr 2014 | Posts: 4,078 | Likes: 8
1. I would suspect that the switch to ITBS is to align with the CogAT, as they are co-normed, unlike MAP and OLSAT, or MAP and any other ability measure. MAP does have the advantage of adaptive testing, and thus hypothetical access to items through the upper reaches of fifth grade (for a third grader testing on the 2-5 version), but the Iowas have the advantage--which is not insignificant--of co-norming.
2. The norms for CogAT/Iowas are about 3-4 years newer than those for OLSAT/SAT-10. As above, co-normed instruments for ability and achievement are generally preferred, and those are the respective co-normed pairings for the CogAT and the OLSAT. If the district doesn't want the SAT-10, then the OLSAT doesn't allow for any other co-normed option. I also tend to favor the level of ongoing research feeding into the CogAT.
3. Yes. Though of course, all group-administered instruments have their flaws when assessing extremely low-incidence outliers (such as 2e or PG).
...pronounced like the long vowel and first letter of the alphabet...
OP | Junior Member | Joined: Nov 2016 | Posts: 35
Thank you, this information helps. I don't know enough about testing to know which tests align or are co-normed with others, which norms are older/newer, etc. I assumed the district had a reason for switching, but in the research I'd tried to do on my own I wasn't finding much useful information.
I have also observed that in the past couple of years, since the identification testing changed, the elementary gifted accelerated program has nearly half as many participants as before. I suppose it could be that more families are declining the program offer, but my gut tells me that the district suddenly had only half as many qualifiers when the testing requirements changed. Now that my younger child is at the grade level for testing, I'm seeing the personal effects of the change among his peers. The new testing caught him in the gifted/accelerated net, but it seems to be leaving out several kids who would be ideal candidates for this program and who clearly need more than the regular classroom can offer.

And after my older child's relatively poor performance on Iowa reading last year, I'm just asking myself if this system is flawed. This is a child who is most often 98th/99th on MAP and always has been, and upon entering middle school this year got the highest language arts/reading pretest score in the entire 6th grade (out of 378 kids)...but scored in the 79th percentile on Iowa. But if a lot of districts use this combo and it is generally reliable, I guess it just...is what it is. Any change in the system is going to cause some ripples, and maybe this one just means that the accelerated program shrinks because it's harder for kids to qualify--although I'm not sure why so many are performing lower than their MAP norms on Iowa if it's a grade-level test. I may never know! Maybe I should be doubting the validity of MAP instead!
Member | Joined: Feb 2014 | Posts: 336
CogAT/ITBS is used by almost all local districts here, too, but I have gotten more and more skeptical about it as I watch what happens with kids we know and with our program.
My DD's class is a 3-grade split, so kids identified in 3rd are in the class starting in 4th. Kids who aren't identified in 3rd can apply to re-test. Every year there are nearly as many kids entering in 6th as in 4th. Why? These kids did not qualify to enter in 4th, and did not qualify to enter in 5th, but qualified to enter in 6th. Every year the class is structured something like this: 5 fourth graders, 9 fifth graders, and 13 sixth graders. Supposedly they take the best-qualified kids regardless of grade, but somehow fail to identify the 13 highest-scoring 6th graders when they are in the 4th grade. And there are only ~80 6th graders in the school, so by 6th quite a large percentage of them are in the program.
OP | Junior Member | Joined: Nov 2016 | Posts: 35
That does sound odd. I know very little about testing and the tendency of kids' scores to go up or down as they get older, but I do have one personal experience I can share that relates.
My older child was tested as a 3rd grader, as all students in our district are. At the time they used OLSAT and now they use CogAT, but those are fairly similar if I understand correctly. When he took OLSAT in 3rd grade, he scored in the 93rd percentile (the cutoff in our district is 96th for both academic and intellectual/reasoning tests). Then, to our surprise, last year in 5th grade our son was nominated for testing again by our school's GT teacher, for potential entry into the program as a 6th grader. That time, he took CogAT and scored in the 98th percentile. A big change, and suddenly a qualifying score (he did not qualify on the ITBS tests, but we were surprised at the increase on the other side). I did a little research on CogAT and found that it is a test of learned reasoning/problem-solving skills. I chalked his gain up to an additional two years of education and the push of the high groups in his classes, plus maturity, plus a better ability to gauge what the most "normal" answer would be on certain questions (we suspect his avid reading of above-level texts and outside-the-box thinking may have contributed to a lowered verbal score the first time around).
Anyway, that's just what happened to us. I suppose it could also be that there's some test prep going on in your area, once parents realize their kids aren't getting in on their own the first time around. Or maybe the quality of education in your 3rd, 4th, and 5th grade classes is just that good, and by the time they take the ITBS academic tests in that last year, they're extremely well-prepared. Just some ideas. I have no professional experience in this area.
Member | Joined: Apr 2014 | Posts: 4,078 | Likes: 8
I'm going to link back to an earlier post of mine regarding regression to the mean and repeated testing: http://giftedissues.davidsongifted....sion_to_the_Mean_Why_tes.html#Post230897

Although the main discussion point there was how scores fall over time, one may also look at it from the angle simply of: if you test enough times, eventually you will get an unusually high result that isn't statistically representative of the bulk of the test scores, but has a better chance of meeting your cutoff criteria. So if the district allows multiple entry years for the same pool of children, and especially if the general population is just outside of the cutoff range, then testing every child who has not previously qualified, while keeping every child who has in the program, means that one will accumulate many more than the nominal expected percentage of the school in GT.

Quote: "The “or” is not defensible, however, when both tests are assumed to measure the same construct. For example, the test scores may represent multiple administrations of the same ability test or consecutive administrations of several different ability tests. Error of measurement is defined as the difference between a particular test score for an individual and the hypothetical mean test score for that individual that would be obtained if many parallel forms of the test could be administered. The highest score in a set of presumably parallel scores is actually the most error-encumbered score in that set. Therefore, unless one has a good reason for discounting a particular score as invalid, taking the highest of two or more presumably parallel test scores will lead to even more regression to the mean than would be observed by using just one score."

When he says "lead to more regression to the mean" at the end of this passage, he is referring to GT-identified students later testing below GT level. So the author of the CogAT specifically does not recommend repeated administration, where any qualifying score gets and keeps you in, as a defensible criterion for GT entry. His recommendation for repeated testing is actually to average the scores but use a lower cut score, and to make decisions on long-term GT placement only after consistently high averaged scores. Alternatively, make GT placement more about state than trait, i.e., the presenting needs and skills of the student from year to year. (A rough simulation of the "take the highest score" effect is sketched below.)
...pronounced like the long vowel and first letter of the alphabet...
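To make the quoted point concrete, here is a minimal Monte Carlo sketch of what happens under an assumed parallel-forms reliability and a top-4% cutoff when a student can retest and the highest score counts. The reliability value, the cutoff, and all names in the code are illustrative assumptions, not figures from the CogAT manual or from the linked post.

```python
# Minimal simulation of the "highest of several parallel scores" problem.
# The reliability and cutoff values are illustrative assumptions only.
import random
import statistics

random.seed(42)

N_STUDENTS = 100_000
RELIABILITY = 0.85          # assumed parallel-forms reliability of the test
CUTOFF_PERCENTILE = 0.96    # top-4% style cutoff, roughly a 9th-stanine bar

# Decompose unit-variance observed scores into true score + measurement error,
# so that the correlation between two parallel forms is about RELIABILITY.
true_sd = RELIABILITY ** 0.5
error_sd = (1 - RELIABILITY) ** 0.5

def observed(true_score: float) -> float:
    """One administration of a parallel form: true score plus measurement error."""
    return true_score + random.gauss(0, error_sd)

students = [random.gauss(0, true_sd) for _ in range(N_STUDENTS)]

# Define the cutoff on the single-administration score distribution.
single = sorted(observed(t) for t in students)
cutoff = single[int(CUTOFF_PERCENTILE * N_STUDENTS)]

def qualify_rate(rule) -> float:
    """Fraction of students whose score under the given rule meets the cutoff."""
    return sum(rule(t) >= cutoff for t in students) / N_STUDENTS

one_test = qualify_rate(lambda t: observed(t))
best_of_3 = qualify_rate(lambda t: max(observed(t) for _ in range(3)))
mean_of_3 = qualify_rate(lambda t: statistics.mean(observed(t) for _ in range(3)))

print(f"qualify on one test:      {one_test:.3%}")   # near the nominal ~4%
print(f"qualify on best of three: {best_of_3:.3%}")  # noticeably inflated
print(f"qualify on mean of three: {mean_of_3:.3%}")  # slightly below nominal
```

Under these assumptions, the best-of-three rule admits noticeably more than the nominal ~4%, and the admitting score is the most error-laden of the set, so those admissions are the ones most likely to regress on a later retest. Averaging repeated scores keeps the rate near (in fact slightly below) the nominal one, which is why an averaged-score rule is typically paired with a lower cut score.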