It's not clear to me whether this was an age-normed instrument, a grade-normed instrument, a criterion-referenced instrument, or a curriculum-based assessment, nor whether you are actually reporting percentage-correct or percentile results. So I'm not ready to assume that the difficult books give him no more trouble than the easier ones.
If this were an age-normed instrument with different possible start points (grade 1, grade 2, grade 5) on the same test, then it would make perfect sense that he performed at the same percentile regardless of start points. No matter where he started the test, he would be compared to other children his age (or possibly his grade), so he would continue to fall at the same relative performance level. That is, he performed better than 82 out of 100 children HIS AGE would be expected to perform on the grade 1 test, 84 out of 100 HIS AGE on the grade 2 test, and 85 out of 100 HIS AGE on the grade 5 test. This would not mean that he had the same number of right answers on grade 5 as on grade 1, since presumably other six-year-olds (or first graders) also received fewer correct marks on the grade 5 test than on the grade 1 test.
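That age-norm arithmetic can be sketched in a few lines of Python. All the peer scores below are invented for illustration; the point is just that the same child can land at the same percentile on every test level while getting very different raw scores, because the comparison group (same-age peers) also drops off on the harder material:

```python
def percentile_rank(score, peer_scores):
    """Percent of peers scoring strictly below `score`."""
    below = sum(1 for s in peer_scores if s < score)
    return 100 * below / len(peer_scores)

# Invented raw scores for ten same-age peers on each test level.
# Everyone's raw scores fall as the material gets harder.
peers = {
    "grade 1 test": [18, 20, 22, 24, 25, 26, 27, 28, 29, 30],
    "grade 2 test": [10, 12, 14, 16, 18, 19, 20, 22, 24, 26],
    "grade 5 test": [2, 3, 4, 5, 6, 7, 8, 10, 12, 15],
}

# The child's (invented) raw scores: far fewer right answers on
# the grade 5 test, yet the same standing relative to age peers.
child = {"grade 1 test": 29, "grade 2 test": 23, "grade 5 test": 11}

for test, score in child.items():
    print(test, percentile_rank(score, peers[test]))  # 80.0 each time
```

Identical percentiles, three very different raw scores: that's all an age-normed result tells you.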
If she used the respective grade norms to determine these percentile scores, that would be a different story. That is, better than 82 out of 100 FIRST GRADERS, better than 84 out of 100 SECOND GRADERS, and better than 85 out of 100 FIFTH GRADERS. In that case he'd be holding the same relative standing even against much older comparison groups, which would be a far stronger result.
Oh, and I should note that many of the assessments or placement tests associated with common reading curricula have negligible psychometric power; that is, they're not very reliable measures of actual reading ability.
Do you know which of these situations, if any, applies?