A warning against confirmation bias is missed because of confirmation bias
Lior Pachter is an absolute genius. Please read his post, The relative impact factor of glamour journals is 2.166. Read it now, before you look through the rest of this post, because I will ruin it.
Done? Did you love it? If you loved it, is that because it confirms that the impact factor is a terrible metric? Is that because it confirms that PLOS ONE doesn't have good articles? Did you love it because of the study design?
If you answered "yes" to any of the above, you probably missed the "supplementary" part of Lior's post. You probably didn't click on the "data available here" link. Don't worry, you are not alone. Lior didn't just do one experiment. He used us as subjects on Twitter, and then, more importantly, he used us as subjects in his post. It's not at all a post about the impact factor.
Lior's post is a brilliant illustration of confirmation bias. He is pretty clear about it. But confirmation bias is so strong that most of us missed it. I missed it too. I was going to ask Lior why the results for Cell and Nature, compared to Nature Biotech, were so strange. Before asking on Twitter, I looked carefully at the "supplementary" on Lior's post and only then realized the true genius behind it.
So, again, if you haven't done so yet, take a look at Lior's dataset and read his "supplementary" section.
And, truly, don't feel too bad. As I scan the Twitter buzz around Lior's post, it seems that 90-99% of people didn't look at the supplementary. I have been talking for years about the travesty of relegating methods and data to the supplementary section. Until today, I didn't have anything concrete to illustrate the point. Now, thanks to Lior, I do.