Corlett Novis investigates the problems facing science publishing
We trust scientists to give us something incredibly vital: the truth.
In recent years, however, the truth seems to have become less and less important.
Conflicts of interest, perverse incentives, poorly designed methods and sensationalized results are just a few of the many unseemly warts blemishing the face of modern science and turning it into an enterprise which increasingly encourages misrepresentation and unreliability.
But will these warts start disappearing any time soon?
Publish or Perish
“Over time the most successful people will be those who can best exploit the system” — Paul Smaldino of the University of California Merced
You may think that the biggest threat to scientific integrity is fraud, an uncommon practice of fabricating results to get published. However, only about 0.05% of all science publications are considered fraudulent, making it a very fringe issue. Instead, the problems of scientific integrity are much more endemic and entrenched. They even reach all the way up to the most prestigious journals.
Recent research found that the most prestigious and respected journals “struggle to reach even average reliability”. In fact, the study found that more prestigious journals tended to be less reliable, because their publications were more likely to be sensationalised and misrepresented in order to garner interest and seem more groundbreaking. These studies also tended to have weaker methodologies, because researchers prioritise striking outcomes over robust methods.
So why would scientists look for quick fixes and misrepresent their data? Today, scientists are pushed to publish as frequently as possible, and in the best journals they can, in order to sustain their careers: the so-called “publish or perish” incentive. Because they are pushed to publish so much, and because journals reward flashy, interesting results, they often choose simple methodologies that will yield the results they want. Worse still, those results are then made to sound more interesting and promising than they really are.
In short, publishing actively biased research is good for scientists, but bad for science.
Scientists get farther by exploiting the system, and there are many good examples of this. Firstly, good science should often lead to inconclusive or negative results, since hypotheses are frequently (and quite excusably) wrong, yet the majority of published studies report positive results. This is known as publication bias, and it has emerged because scientists are incentivised to publish successful studies.
Secondly, many scientists accept money from companies to do research on their products, a practice which creates a huge conflict of interest. A good example came when the University of Maryland published a study into the effects of Quarter Fresh chocolate milk that was funded by the company itself.
Finally, data will often be fudged or drawn towards questionable conclusions to make the study seem more impactful than it really is or to make the results positive.
Furthermore, the constant need to “publish or perish” can push scientists to their limits and has been linked to mental health issues for science researchers.
But what about peer review? In academic publishing, submissions are critically inspected by other researchers through “peer review” to make sure that what gets published is good-quality science. Ideally this should weed out bad research, but unfortunately this often isn’t the case.
The Peer Review Pandemic
The science publishing industry is one of the most lucrative in the world, worth billions of dollars, a fact which on its own is incredibly suspicious. You may not have known it, but science publishing is the most profitable branch of the publishing industry, with a huge 36% profit margin that exceeds even those of the film and recording industries. This is because, unlike other industries, science publishing doesn’t need to pay its contributors or, in a sense, many of its editors. Not only are scientists expected to publish their work for free in exchange for exposure and prestige, the working scientists who conduct peer review are also expected to work on a pro bono basis. In other words, scientists are paid (largely) by the government to do research, are then expected to hand that work over to private publishers for free, and the work is then sold back to them at a huge profit; worst of all, with very little reward going to the researchers who did the work in the first place.
This exploitative business practice is one reason why Berkeley biologist Michael Eisen once wrote in a 2003 article for the Guardian that the state of science publishing, which was as “perverse and needless” then as it is today, “should be a public scandal”.
Partially because peer reviewers have no incentive to do their reviewing well, peer review itself has been shown to be next to useless as a means of quality control. Studies from 1982, 2000, 2002, 2006, 2010 and even 2018 (mentioned earlier) have found, time and time again, that peer review is ineffective and, in some cases, little better than random chance.
With all of this in mind, it’s no surprise that the Lancet editor Richard Horton once called the process of modern peer review:
“unjust, unaccountable … often insulting, usually ignorant, occasionally foolish, and frequently wrong.”
The Replication Crisis
None of this is to say that science is not credible, or that there is a better alternative. It’s quite the opposite: science is the most credible institution we have, full of incredible expertise and insight about the world around us. It is because of science’s important and hard-earned authority that we need to be aware of its current shortcomings in order to promote and sustain good practice in science publishing.
With this in mind, we can turn to the most troubling symptom of all. In recent times, replication, another vital aspect of science, has been facing its own extensive crisis.
Replication is an important part of making sure that results are reliable and sound: by carrying out research a second time, it is possible to see whether there were any issues with the original study. But today, many new and important studies aren’t being replicated. A 2015 study found that 40% of the most important psychiatric research ever done had never been replicated, and of the studies that had been, many found either contradictory results, or the same results as the original but with much smaller effects.
Partly this crisis is due to the fact that some studies cannot be replicated, but in most cases the real issue is a lack of incentive.
Simply put, no one wants to fund or publish replication studies because they are perceived as less important than the original work. Scientists have no incentive to carry them out, as replication studies are much less likely to be published or funded, and so most never bother to check each other’s work through replication, despite its vital role in quality control.
The result is that many of the most important studies are never double checked by their peers.
Because of these issues, and many others, the world of science publishing is in desperate need of remodelling, but it’s not all bad news: many institutions, from Nature to the Wellcome Trust, are calling for reforms to the way science is carried out. Some are calling for peer reviews to be made public, whilst others want governments to better regulate the exploitative science publishing industry. With the rise of misinformation in recent years, however, there is one thing all parties can agree on: good scientific practice is more important now than it has ever been.
Image credits: Shutterstock