First, Duuuuuuuuuuuude!!! [Insert aggrieved tone and moan of pain]

Second, I am not a researcher, let alone a neuroscientist, just someone who reads this stuff as part of my work. So this is just a summary of the thoughts/questions I would have reading a paper like this, not a scientific assessment. The short answer: interesting paper, totally irrelevant. If you just tripped over this, cool. If a school is sending you this to prove that Arrowsmith works for LDs, that's a problem.

If you're interested in where I'm pulling that view from, here's the long answer:

Source: The first thing I look at is who funded the research and who published it (which can be a lot easier to judge than who did it). In this case, the publisher is Heliyon, a new web journal which does not yet have a reputation or impact factor, positive or negative. At this time, it is considered a legitimate place to publish with peer review, if not a particularly prestigious one. The funding source suggests the research was a grad student project, part of a 4-month external internship. The longish list of authors suggests the project was likely part of a larger piece of ongoing research. About half the authors look like students; the other half come from a well-regarded research university and seem to be cited by other researchers pretty often. All that to say, the research would appear to be legitimately sourced.

Participants: The research involves a pilot study. Basically, they are seeing if it is feasible to measure what they think they want to measure. There were only 10 research subjects, which means nothing can be generalized from this other than whether the methodology seemed to work well enough to try it with a real group. The test subjects were compared to normal controls, not people with the same kind of brain injury, which means you can't make any kind of conclusion as to whether there is a relation between the intervention used and any changes. In other words, if there actually were measured improvements, would those improvements have happened anyway? This study can't tell us that.

Findings: Even with this tiny group and no comparable injured group to measure them against, the authors still had trouble finding statistically significant changes post-intervention. They were picking and choosing bits and pieces among the data to find pieces that moved, as the overall picture did not. That doesn't mean the intervention didn't work, but rather that if there are effects, they are much too small or too infrequent to jump out in such a tiny sample.

Relevance: The biggest question is always, do the findings have meaning in the real world? In this case, if the intervention caused changes in the brain, and those changes related to improved scores in the research testing, would those improved scores relate to actual improved function doing everyday tasks in the real world? These are questions way, way beyond the remit of this pilot study. It just doesn't go there, and has many years of work to do before it could.

Relevance to you: The study has nothing whatsoever to do with LDs. It doesn't refer to or build on any research that has anything to do with LDs. Even if it proved that the intervention restores function after a brain injury, it would then need to make an evidence-based case that this is the same as remediating a healthy mind that's wired differently. There's nothing here that makes any kind of attempt to do that.