
Putting the peer-review process to the test 

A recent article published in Nature highlights the overwhelming number of peer-reviewed research papers that were retracted in 2023. To some, this may appear to be cause for concern about how scholarly research is conducted, the legitimacy of the peer-review process and how research is used to shape policy that affects society.

Because scientific findings are often reported in the media, journalists may end up writing about research that is later retracted. That is a problem: it creates a false perception that research itself is flawed, effectively throwing the “baby out with the bath water.”

But research errors by reputable researchers do occur; they are a product of the discovery process. Because a researcher’s reputation is critical to preserving scientific integrity, which is necessary, for example, to gain and maintain research funding, retractions follow whenever published errors come to light. The recent action by the Dana-Farber Cancer Institute, retracting six studies, reflects how seriously such steps are taken to uphold scientific integrity.

However, some researchers exploit the peer-review process to their advantage, incentivized to jump aboard this publication “Titanic” for personal gain.

As the Nature article notes, the preponderance of retractions, around 80 percent, was in journals owned by Hindawi. Many of these papers also appeared in journal issues classified as “special,” with guest editors who may not be beholden to the standards the journals wish to maintain.

Journals that publish research are only as reputable as the peer-review process that vets and assesses the contributions, as well as the integrity and knowledge of the editorial board that executes it. Some general questions that must be addressed to ensure the rigor of studies include: 

1) Has the scientific method been employed for conducting the research? 

2) Do the research findings move the body of knowledge forward in meaningful ways? 

3) Is the research that is reported reproducible such that independent researchers using the same scientific methods can replicate the findings? 

Without positive answers to such questions, a shadow of uncertainty over the reported results will make it difficult for other researchers to build upon the findings. Much as a house must be built on a strong and reliable foundation, research advances rely on the peer-review process to provide that structure.

A deeper dive into the retractions shows that the countries with the highest retraction rates include Saudi Arabia, Pakistan, Russia, China, Egypt, Malaysia, Iran and India.

The good news is that these absolute rates are still somewhat low when measured in retractions per 10,000 papers published, with Saudi Arabia’s rate around 30 per 10,000 papers, or 0.3 percent.

The ideal retraction rate is zero. However, errors in research processes do occur; the peer-review process is not infallible. 

Should this tarnish the perception of scientific research in the public eye? Absolutely not. 

What it does is force journalists and others who report on study findings in the media not to accept findings blindly just because they appear in peer-reviewed journals. Every such paper must be evaluated based on the past record of its authors and the methods used to reach its conclusions.

This places an additional burden on journalists, who may be ill-equipped to make such assessments of every compelling study published. But the public relies on journalists to communicate research findings. So what can journalists take into consideration when vetting the integrity of studies and findings they personally lack the expertise to evaluate?

The peer-review process works provided the participants are invested in it. When a research finding is reported in a journal, journalists may need to review its editorial board. Is the journal’s editor-in-chief sufficiently well-established with a record of success to warrant such a position? Are senior editors similarly qualified? This provides some context for the peer-review process in such journals and establishes a basis for its legitimacy and rigor. 

It may also force journalists to seek additional evaluations from independent researchers who can assess published research findings, rather than taking the study findings, or a summary of them, at face value and as indisputable fact. Even getting feedback on the quality of journals can be helpful to journalists.

Along these lines, a scientific journal’s reputation is often based on the history of the journal and the quality of the editorial board. There are no fast tracks to gain such legitimacy — reputations are earned, not endowed or bought. 

Asking questions about the reproducibility of the research is another way to assess its legitimacy. Every researcher will perform their own set of replications to ensure that their findings are indeed valid. This should be reported in some form in the peer-reviewed manuscript. Journalists should be encouraged to ask such questions, delving not into the scientific details but into the scientific process. 

Retractions are certainly a red flag, but they are also an important sentinel for when research findings may not be what they appear to be. Much as scammers make offers that are “too good to be true” to extort money from unsuspecting victims, the same may be true of some research findings that promise more than the sound reasoning of qualified people would accept and believe.

The peer-review process works. Adding a smidgen of sensibility when assessing research findings helps as well. 

Sheldon H. Jacobson, Ph.D., is a professor of computer science at the University of Illinois Urbana-Champaign. A data scientist, he applies his expertise in data-driven risk-based decision-making to evaluate and inform public policy. 
