I am not at all surprised at the publication of something that seems so poorly vetted. This is why ONE publication of a finding is "interesting" and five begin to be "convincing."


Often the 'independent' reviewers are nowhere near as unbiased or detached as they theoretically ought to be (in a small field it is all the more potentially incestuous). It's possible that this was a soft-reviewed paper, or that the journal only managed to line up a single 'qualified' reviewer, who happened to be too busy to REALLY review the paper, or who tossed it to a borderline-competent or green graduate student or post-doc. That happens-- even in big-name journals like Analytical Chemistry or Neuroscience, occasionally you'll find a publication that is cringe-worthy.

I wouldn't assume the problem is you-- that if you had enough expertise it would look 'better' to you. Chances are good that it would be even more obvious how awful the methodology or data analysis actually is. wink

I agree with Val here in general terms. The social science disciplines all too often train people to look at correlation and assume causative linkages, and the physical sciences tend to train people to avoid that very natural human impulse at all costs. <SIGH>

As every physical scientist learns: the plural of anecdote is NOT 'data' in any sense of the term.

Therefore, the so-called conclusions drawn from cherry-picked anecdotes are little more than pet conjectures, because the experimental design is frequently biased to such an extreme that it isn't even POSSIBLE to call any of the variables truly dependent or independent. <shrug>

I know a lot of physical scientists who snark pretty openly in private about social (airquotes) 'scientists' and experimental design or statistics... ('oh, look, isn't that cute?? They tried to use a two-tailed analysis here... how sweet that they tried... too bad that they don't explain why they dumped four data points from each trial.... Hmmm...'). This is what gave birth to something called The Journal of Irreproducible Results. Think of it as MAD magazine for geeks. wink

My own graduate group was going to do a study once for them, I recall. Head circumference against a number of other things-- caffeine consumption, IQ, shoe size, GPA, number of siblings... We got quite nice correlations by plotting inverses against one another... <giggling> And the nice thing is, we just invalidated anything that didn't fit-- oh, sure, sometimes we had to LOOK for a reason to disqualify a study participant. But most people have corrected vision or piercings, or eat meat, or have a family member who is a Republican/atheist/nudist or something... grin
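
(If you want to see just how cheap that game is, here's a minimal Python sketch-- all synthetic data, no relation to anything we actually measured. Start with sixty points of pure noise, 'disqualify' whichever participant hurts the correlation most, and repeat twenty times:)

```python
# Toy demonstration: "disqualifying" inconvenient participants until a
# correlation appears in data that is, by construction, pure noise.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=60)   # stand-in for head circumference (fake, standardized)
y = rng.normal(size=60)   # stand-in for caffeine intake (fake, independent of x)

def pearson_r(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"honest r, all 60 points: {pearson_r(x, y):+.2f}")

# Greedy cherry-picking: drop whichever single point most improves |r|,
# then invent a "reason" for the disqualification after the fact.
for _ in range(20):
    worst = max(range(len(x)),
                key=lambda i: abs(pearson_r(np.delete(x, i), np.delete(y, i))))
    x, y = np.delete(x, worst), np.delete(y, worst)

print(f"'cleaned' r, 40 points:  {pearson_r(x, y):+.2f}")
```

Run it and the 'cleaned' correlation comes out looking downright publishable, even though x and y were generated independently.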

In all seriousness, though, I am very wary of ANYTHING plotted against even a simple mathematical transform of another quantity. Those relationships are frequently artifacts if there isn't a clear mechanistic reason posited or known for why the relationship should exist in the first place.
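
(One classic, well-documented version of this artifact is Pearson's 'spurious correlation' between ratios that share a denominator. Again, just a toy simulation, not data from any real paper: take three quantities generated completely independently of one another and divide two of them by the third. The shared denominator alone typically manufactures a correlation of roughly +0.5 here.)

```python
# Transform artifact: ratio variables sharing a denominator.
# X, Y, Z are generated independently, yet X/Z and Y/Z correlate.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
x = rng.uniform(1, 2, n)   # three mutually independent quantities
y = rng.uniform(1, 2, n)
z = rng.uniform(1, 2, n)

print(f"corr(X, Y)     = {np.corrcoef(x, y)[0, 1]:+.3f}")          # ~0, as it should be
print(f"corr(X/Z, Y/Z) = {np.corrcoef(x / z, y / z)[0, 1]:+.3f}")  # ~+0.5, pure artifact
```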


Now, does social science make for thought-provoking and sometimes insightful reading? Of course. I'm not saying there aren't usually some interesting perspectives stated. But I tend to view most of it with a pretty jaded and critical eye. Mistakes or oversights in the statistics or the experimental design are the first things I look for in judging how much stock to put in the conclusions section. I always consider whether a paper is a reasonable example of a rigorous investigation... or more of an op-ed piece written by an expert in the field. MOST of the literature in this field is the latter, unfortunately. That's an observation as much as a criticism-- there are reasons why sampling is so hard here. The same thing is true in the medical literature for rare genetic conditions.

smile


Schrödinger's cat walks into a bar. And doesn't.