I know you didn't actually ask about these, but reading about your DC reminded me. FYI:
"Center for Technology & Disability Studies" has educator resources and direct services (AT eval). There appear to be some resources about AT tools on the site.
https://uwctds.washington.edu/consultation Similar center exist at a number of major universities.

"Center on Technology & Disability" is a DOE site with instructional modules for educators, advocacy resources, etc. on AT.
http://www.ctdinstitute.org

As to your actual questions, in no particular order:

I would hesitate to use the WISC-IV at this point, as its norms are quite old. I actually think the WISC-V does a better job of picking up strengths for most 2e children (although your child may be one of the exceptions). There are supplementary subtests and index scores on the -V, though, that can be used to fill in some of those gaps. For example, the VECI is an expanded VCI that adds back in some of the verbal subtests that were part of the -IV. Most of the old -IV subtests are still available on the -V as optional subtests. A thoughtful evaluator should have reviewed her old testing, noted places where the change in how subtests contribute to index scores might affect her new composite scores, and considered whether to add some of those subtests back in, both to allow a closer one-to-one comparison and to be able to interpret current testing in the context of previous testing.

To be fair, I always want to see all old testing numbers, not because they necessarily represent that child's "true" potential, but because they track changes (or stability) in function on standardized tasks over time. (Though I rarely put them in a chart in my eval reports, unless there's a TBI, a neurodegenerative disorder, or marked improvement after an effective intervention, and I'm trying to demonstrate a trajectory.) Any standardized test is only a sample of a small subset of skills, at one moment in time.

As to how this applies to evaluator selection: I would want to know that clinical interpretation was driven by sensitive assessment of the child as a whole, including history, interviews, and clinical observations of test and naturalistic (e.g., classroom) behavior, and not purely by numbers.

An evaluator who is part of a larger clinical group might be preferable, so that the audiologist, OT, neurologist, and gastro all communicate with the person doing the neuropsych. That would point to a hospital-affiliated group.

I think screening prospective evaluators by running her past testing by them might be a good idea, as it will tell you something about how they interpret test data (numbers-driven or child-driven).

I would ask how they design evaluations, and especially what they do when "odd" test results emerge during the course of testing. Do they follow up on these? Test the limits? Do further testing? Ask confirmatory questions about IRL (in school/home/community) behavior/performance? (By which I mean, do they spontaneously come up with these further actions? If you ask any reputable evaluator whether they do these things specifically, of course they will answer yes!)

What's their experience with low-incidence profiles? Multiple disabilities? See if you can get a bead on how they feel about unusual, creative, or quirky kids; evaluators who enjoy originals are often better able to pick up on their strengths and to get optimal performance from them. (After all, most people perform better when they feel liked and appreciated.)

