I have the current software and could run the numbers (I'd need age at testing plus the actual raw scores), but you should be able to get the scores from the tester -- providing them is the tester's job. The grade-equivalent estimates you can read off as you go are really only very rough indicators of "above, below, way above," etc.

The deal with the normative-update software is that it can use different standard deviations above and below the mean, which matters most when you're testing kids at ages where most of the norming sample hasn't developed much skill at all (so the distribution is skewed rather than normal). That has tended to pull down some of the ridiculously high scores, and it has produced some very odd composites -- I've had to put some very long explanations in reports about why a composite is actually *lower* than the three above-average scores that went into it.
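For anyone curious about the arithmetic behind that oddity, here is a minimal sketch of a "two-piece" normal conversion -- one SD below the norm-group mean, a different SD above it. This is not the publisher's actual algorithm; every name and parameter below is invented purely for illustration. The key point it demonstrates is that the composite is normed against its *own* (separately estimated, also skewed) distribution, not computed from the subtest standard scores, so it can legally come out lower than every score that feeds it:

```python
def standard_score(raw, mean, sd_below, sd_above):
    """Convert a raw score to a standard score (mean 100, SD 15),
    using different spreads above and below the norm-group mean."""
    sd = sd_above if raw >= mean else sd_below
    z = (raw - mean) / sd
    return 100 + 15 * z

# Three subtests: the norm group's distribution is skewed, with a
# compressed upper tail (hypothetical numbers throughout).
subtests = [standard_score(raw=46, mean=40, sd_below=12, sd_above=6)
            for _ in range(3)]          # each comes out 115.0

# The composite is converted against its own norm parameters, which
# are estimated from the sample independently of the subtests' -- so
# the two sets of parameters need not be mutually consistent.
composite = standard_score(raw=138, mean=120, sd_below=30, sd_above=27)

print(subtests)   # [115.0, 115.0, 115.0] -- all above average
print(composite)  # 110.0 -- lower than every score that went into it
```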

I've seen gifted-perfectionistic kids do similar things on fluency tests -- I sort of "cheer them on" and remind them to just slash fast through the item they don't want and keep going (since I'm not trying to measure their ability to stay on task without reminders, this is legal). It's also useful information that a kid either can't stay on task, or is so pulled by perfectionism that they can't tolerate making a mistake.

If I were writing the report with clear evidence that a kid had "perfectionisted" on the fluency test, I think I would report both the Broad Math and the Brief Math composites (Brief doesn't include fluency), and I would discuss the specifics of the performance and why I felt it called the validity of the fluency score into question. I'm sure your tester could do that, too.