Author: Jayashree Rajagopalan
Source: Editage Insight
Date: December 14, 2017
Picture: Lex Bouter


How each stakeholder in scholarly publishing can help fight the problem of sloppy science

We are in conversation with Prof. Lex Bouter, Professor of Methodology and Integrity at VU University Amsterdam. In the previous segment, Prof. Bouter introduced the Netherlands Research Integrity Institute (NRIN) and emphasized the need to educate researchers about ethical guidelines.

In this segment, we take a closer look at the topic of research integrity. Prof. Bouter shares some of his personal views on the topic and talks about the root cause of the irreproducibility crisis. The highlight of this conversation is the section where he shares his views on what each stakeholder in scholarly publishing could do to fight the monster of sloppy science – this includes scientists themselves, journals, publishers, libraries, and academic institutions.

More about Prof. Bouter: Prof. Lex Bouter has had a distinguished career as an academic, teacher, and research integrity specialist. He is among the 400 most influential biomedical researchers in the world. He has performed various roles, including those of professor, PhD supervisor, and rector; been affiliated in senior roles with several academic and scientific organizations; and has authored or co-authored nearly 700 publications. In 2017, he organized and co-chaired the 5th World Conference on Research Integrity in Amsterdam and became chair of the World Conferences on Research Integrity Foundation.


Your survey-based research about questionable research practices (QRPs) sounds really interesting, especially since it sought to capture participants’ perceptions of QRPs. What is also interesting is the choice of sample – you chose attendees of international research integrity conferences. Could you tell us more about your research?

In that survey we asked how often the respondents believed that 60 specific major and minor misbehaviors occur. We also asked what the impact of each misbehavior on validity would be. Subsequently, these scores were multiplied and the misbehaviors were ranked (a small illustrative sketch of this scoring follows the list), leading to the following Top 5:

  1. Insufficiently supervise or mentor junior coworkers
  2. Insufficiently report study flaws and limitations
  3. Keep inadequate notes of the research process
  4. Turn a blind eye to putative breaches of research integrity by others
  5. Ignore basic principles of quality assurance
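
To make the aggregation concrete, here is a minimal sketch in Python of the frequency-times-impact scoring described above; the numeric scores are invented purely for illustration and do not come from the survey.

    # A minimal sketch of the ranking procedure: each misbehavior gets a
    # perceived-frequency score and a perceived-impact-on-validity score,
    # the two are multiplied, and items are ranked by the product.
    # The numeric scores below are invented, not survey data.

    # (misbehavior, perceived frequency, perceived impact on validity)
    responses = [
        ("Insufficiently supervise or mentor junior coworkers", 4.3, 3.9),
        ("Insufficiently report study flaws and limitations", 4.1, 3.8),
        ("Keep inadequate notes of the research process", 4.0, 3.7),
        ("Fabricate or falsify data", 1.2, 5.0),  # rare but huge impact
        ("Plagiarize text", 3.8, 1.4),            # frequent but low impact
    ]

    # Multiply the two scores and rank the misbehaviors by the product.
    ranked = sorted(responses, key=lambda r: r[1] * r[2], reverse=True)
    for rank, (item, freq, impact) in enumerate(ranked, start=1):
        print(f"{rank}. {item} (score = {freq * impact:.1f})")

This kind of composite score explains the pattern described next: an item can rank low either because it is rare or because its impact on validity is small.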

It was striking that the three classical ‘deadly sins’ of research integrity only appeared in the lower half of the ranking of the 60 items. Fabrication and falsification ranked low because, if they occur, their impact is believed to be huge, but their prevalence is assumed to be low to very low. And plagiarism was thought to occur frequently but also to have very little impact on validity.

Currently we are repeating the survey among all active scientists in Amsterdam. That will provide insight into the views of scientists who are not experts on research integrity. The larger numbers in that survey will also enable us to study differences between disciplinary fields and academic ranks.

If irreproducibility and unreliable results are two of the biggest problems we are facing today, what would you identify as the root cause of these problems?

Indeed, in recent years it has become clear that the rate of reproducibility of published scientific studies is typically between 10% and 40%. This ‘replication crisis’ came as a shock to many, both within and outside academia. The causes are probably diverse and not yet adequately investigated. But it is very probable that selective reporting is a main driver of the current low levels of reproducibility. Positive and spectacular findings are accepted by high-impact journals much more easily, are cited much more often, and frequently get substantial media attention. All of these are wonderful for the career prospects of scientists. Negative findings are so unpopular that they are often not reported at all, which we label publication or reporting bias. The consequence is that the published body of evidence is strongly biased and will substantially overestimate effects and associations. Small studies with positive outcomes, especially, will predominantly be chance findings. Additionally, the temptation to make results look better by utilizing questionable research practices (or worse) can be substantial. This distorts the published record further.
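
The inflation mechanism Prof. Bouter describes can be illustrated with a small simulation: if the true effect is zero and only clearly positive results of small studies get published, the published record shows a spurious positive effect. The sample size, study count, and selection rule below are illustrative assumptions, not figures from any real literature.

    # Simulate publication bias: the true effect is zero, but only clearly
    # positive results of small studies get "published".
    import random
    import statistics

    random.seed(42)

    def run_study(n=20, true_effect=0.0):
        """Return the observed mean effect of one small study (unit-variance noise)."""
        return statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))

    all_studies = [run_study() for _ in range(10_000)]

    # Crude selection rule: 1.96 / sqrt(20) approximates the one-sided
    # p < 0.025 cutoff for a study of 20 observations.
    threshold = 1.96 / (20 ** 0.5)
    published = [e for e in all_studies if e > threshold]

    print("true effect:             0.0")
    print(f"mean of all studies:    {statistics.mean(all_studies):+.3f}")
    print(f"mean of published only: {statistics.mean(published):+.3f}")
    print(f"fraction published:     {len(published) / len(all_studies):.1%}")

The mean over all simulated studies hovers near zero, while the mean over the “published” subset is clearly positive: exactly the overestimation of effects and associations described above.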

The resulting low levels of reproducibility are wasteful in the sense that resources were spent on the production of these false leads in the scientific literature. It is also unethical when animals or humans have been burdened for unpublished studies or for published false positive findings. In theory the solution is easy and takes the form of ensuring that all research findings are published and the whole process is transparent, meaning that all steps can be checked and reconstructed. Studies need to be preregistered and a full protocol must be uploaded in a repository before the start of data collection. Similarly, a data-analysis plan, analysis syntax, data sets, and full results need to be uploaded. Amendments and changes are possible but should always leave traces, thus enabling users to identify actions that were potentially data-driven. While ideally these elements of transparency are publicly accessible, there are many situations where delayed, conditional, or incomplete access is indicated. But that does not detract from the principle of full transparency: even the process and outcomes of highly classified research for the defense industry should, if necessary, be made available for a thorough check by an investigation committee that is bound by confidentiality.
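
As a toy illustration of the principle that amendments should always leave traces, the sketch below keeps a hash-chained log of changes to an analysis plan, so the history can be reconstructed but not silently rewritten. The field names and workflow are assumptions made for this example and do not describe any actual preregistration repository.

    # A toy tamper-evident amendment log: every change records a timestamp,
    # a hash of the new plan, and the hash of the previous entry, so the
    # chain of amendments can be reconstructed but not silently rewritten.
    # Field names and workflow are illustrative assumptions only.
    import hashlib
    import json
    from datetime import datetime, timezone

    def append_amendment(log, description, plan_text):
        """Append a hash-chained amendment record to an in-memory log."""
        prev_hash = log[-1]["entry_hash"] if log else "GENESIS"
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "description": description,
            "plan_sha256": hashlib.sha256(plan_text.encode()).hexdigest(),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        log.append(entry)
        return entry

    log = []
    append_amendment(log, "Initial preregistered analysis plan", "two-sample t-test, n=40")
    append_amendment(log, "Amendment: non-normal data, switch test", "Mann-Whitney U, n=40")
    print(json.dumps(log, indent=2))

Because each entry incorporates the hash of its predecessor, any retroactive change to an earlier amendment breaks the chain, which is what makes potentially data-driven decisions identifiable after the fact.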

That’s some solid advice, Prof. Bouter. Now, in an NRIN blog post, you say that “we first and foremost need to fight the monster of sloppy science and the underlying perverse incentives and flaws in research culture.” How could each stakeholder in scholarly publishing go about doing this?

The most important stakeholders are the scientists themselves. Breaches of research integrity and sloppy science result from their professional behavior. They are responsible for that behavior, as they are for fostering research integrity in their own work and that of their colleagues, first and foremost the PhD students and others they supervise or mentor. But the behavior of scientists is of course to a large extent driven by what happens in their environment. Important determinants can be identified both in the local research climate and in the system of science at large. Sadly, some of these can act as perverse incentives. Empowering the scientist and optimizing the incentives is the responsibility of the other stakeholders. Together they should make it easier to live up to the standards and more difficult to misbehave.

Research institutions have a series of duties in this context. They should make available adequate training, good facilities, a robust system for guarding the quality of research (codes, guidelines, audits, etc.), and good supervisors, and they should have fair procedures for handling allegations of breaches of research integrity. Furthermore, research institutions ought to foster a research climate that stimulates the ‘blame-free’ reporting of errors and constructive discussions of the daily dilemmas scientists face. Finally, research institutions must develop a fair and balanced set of criteria for the recruitment and promotion of their scientific staff; these criteria should not convey the message that only high-impact publications and citation parameters like the Hirsch index matter.

Funding agencies should demand that research institutions fulfill the duties outlined above and that the study is executed with full transparency and according to the study proposal that was granted. Funding agencies can easily take measures that scientists and their institutions don’t like, as applicants are eager for grants and willing to sign demanding contracts. Increasingly, funding agencies make effective use of the power they have to strengthen the quality and integrity of research.

Scientific journals and their publishers also have an important role to play. They should adopt as much of the Transparency and Openness Promotion (TOP) Guidelines as they can. Journals should also deal with research integrity issues fairly and adequately, preferably following the guidance provided by the Committee on Publication Ethics (COPE). With a view to preventing selective reporting, it is important that scientific journals are not distracted by the results of a study and decide about its publication only on the basis of the importance of the study question and the soundness of its methods. This can, for instance, be done by adopting the format of Registered Reports. Finally, journals have the important task of making the publication process more transparent and the documentation of research more complete. In the digital era, they no longer need to struggle with page limitations. Promising developments concern the use of preprints, open peer review, and post-publication peer review. It’s too early to predict where we’re heading, but the disruptive innovation by PubPeer and the novel approaches that journals like PeerJ and F1000Research take are quite interesting to follow.

If you could set up an NRIN group in another country to support the future growth of scientific research, sustainable publishing, and best and ethical practices in research and publishing, which country would you look at and why?

I have no ambition to set up NRIN clones in other countries. But I would certainly encourage colleagues abroad to consider starting a Research Integrity Network (RIN) in their own country or continent. We would be happy to share our experiences and to help where we can. The content of our website can also be used by everyone. There is no need to copy it, although adding a local flavor can be important. We know of some colleagues in Europe, Africa, South America, and Asia who are contemplating starting a RIN. And of course some partly similar networks and websites already exist, such as the Asian Pacific Research Integrity (APRI) Network, the European Network of Research Integrity Offices (ENRIO), and the Ethics Collaborative Online Resource Environment (EthicsCORE).

It’s difficult to say which country or continent needs a RIN most, as there is very little evidence on geographic inequalities in research integrity. But some recent evidence suggests that in low- and middle-income countries (LMICs) the challenges are somewhat larger. Given the quickly growing volume of scientific research, China and India could be good candidates for starting a RIN. In that sense, it seems appropriate that the 6th World Conference on Research Integrity will be held on June 2–5, 2019, in Hong Kong.

That brings us to the end of this interview. Thank you, Prof. Bouter, for sharing your thoughts and such valuable advice!


