JIF: Peer Review Thoroughness and Helpfulness by sML

A tranquil, pure heart perceives things and people; I write plainly and freshly. Idle books and idle talk nourish an idle mind; an idle pen idly records idle lives. A life free of worry teaches one to cherish it; sustaining one another, every word rings true.

How should publishers monitor when peer reviewers or editors make citation requests? Who calls the shots: authors, peer reviewers, or editors? As reported below, “one in five academics in various social science and business fields reported being asked to pad their papers with superfluous citations.” And, as Hodgkinson puts it, “You’re making a decision on something that’s not a characteristic of the article itself.”

**

Computer Science > Digital Libraries

[Submitted on 20 Jul 2022 (v1), last revised 19 Aug 2022 (this version, v3)]

Journal Impact Factor and Peer Review Thoroughness and Helpfulness: A Supervised Machine Learning Study

Anna Severin, Michaela Strinzel, Matthias Egger, Tiago Barros, Alexander Sokolov, Julia Vilstrup Mouatt, Stefan Müller

The journal impact factor (JIF) is often equated with journal quality and the quality of the peer review of the papers submitted to the journal. We examined the association between the content of peer review and JIF by analysing 10,000 peer review reports submitted to 1,644 medical and life sciences journals. Two researchers hand-coded a random sample of 2,000 sentences. We then trained machine learning models to classify all 187,240 sentences as contributing or not contributing to content categories. We examined the association between ten groups of journals defined by JIF deciles and the content of peer reviews using linear mixed-effects models, adjusting for the length of the review. The JIF ranged from 0.21 to 74.70. The length of peer reviews increased from the lowest JIF group (median number of words 185) to the highest JIF group (387 words). The proportion of sentences allocated to different content categories varied widely, even within JIF groups. For thoroughness, sentences on 'Materials and Methods' were more common in the highest JIF journals than in the lowest JIF group (difference of 7.8 percentage points; 95% CI 4.9 to 10.7%). The trend for 'Presentation and Reporting' went in the opposite direction, with the highest JIF journals giving less emphasis to such content (difference -8.9%; 95% CI -11.3 to -6.5%). For helpfulness, reviews for higher JIF journals devoted less attention to 'Suggestion and Solution' and provided fewer 'Examples' than lower impact factor journals. No or only small differences were evident for other content categories. In conclusion, peer review in journals with higher JIF tends to be more thorough in discussing the methods used but less helpful in terms of suggesting solutions and providing examples. Differences were modest and variability high, indicating that the JIF is a poor predictor of the quality of peer review of an individual manuscript.

Comments: 44 pages
Subjects: Digital Libraries (cs.DL); Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:2207.09821 [cs.DL]
  (or arXiv:2207.09821v3 [cs.DL] for this version)
  https://doi.org/10.48550/arXiv.2207.09821

Submission history

From: Matthias Egger
[v1] Wed, 20 Jul 2022 11:14:15 UTC (1,441 KB)
[v2] Sat, 23 Jul 2022 17:39:02 UTC (2,265 KB)
[v3] Fri, 19 Aug 2022 05:11:10 UTC (1,050 KB)
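The classification step the abstract describes (hand-code a sample of sentences, then train models to label every remaining sentence) can be pictured with a minimal sketch. This is not the authors' code; it assumes scikit-learn, a single binary label for one content category ('Materials and Methods'), and two hypothetical CSV files for the coded and uncoded sentences.

```python
# Minimal sketch of the classification step, not the study's actual code.
# Assumed (hypothetical) inputs:
#   hand_coded_sentences.csv - the hand-coded sample ("sentence", "materials_and_methods" as 0/1)
#   all_review_sentences.csv - the remaining review sentences ("sentence")
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

coded = pd.read_csv("hand_coded_sentences.csv")
corpus = pd.read_csv("all_review_sentences.csv")

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)

# Check out-of-sample performance on the hand-coded sample before labelling everything.
f1 = cross_val_score(clf, coded["sentence"], coded["materials_and_methods"], cv=5, scoring="f1")
print(f"5-fold F1: {f1.mean():.2f}")

# Fit on the full coded sample and label every sentence in the corpus.
clf.fit(coded["sentence"], coded["materials_and_methods"])
corpus["materials_and_methods"] = clf.predict(corpus["sentence"])
```

The per-review share of sentences in each category could then be related to JIF deciles with a linear mixed-effects model (for example, a random intercept per journal, adjusting for review length), as the abstract describes.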

**

NATURE INDEX
03 May 2023

Researchers who agree to manipulate citations are more likely to get their papers published

Data suggest that these researchers are more willing to publish in journals that participate in such coercion.

Dalmeet Singh Chawla


Pressure from editors to include extra citations is a frequent problem in scientific publishing, surveys suggest. Credit: LumiNola/Getty

Researchers who are coerced by editors into adding superfluous citations in their manuscripts are more likely to succeed in publishing papers than are those who resist, finds a study published in Research Policy [1].

The paper focuses on 1,169 scholars who reported in a survey done in 2012 that they had been asked to add superfluous citations, of whom 1,043 complied, adding at least one citation.

When scientists are coerced into padding their papers with citations, the journal editor might be looking to boost either their journal’s or their own citation counts, says study author Eric Fong, who studies research management at the University of Alabama in Huntsville. In other cases, peer reviewers might try to persuade authors to cite their work. Citation rings, in which multiple scholars or journals agree to cite each other excessively, can be harder to spot, because there are several stakeholders involved, instead of just two academics disproportionately citing one another.

Although the survey data were collected around a decade ago, the findings are just as relevant today and the situation is unlikely to have changed significantly, says Fong.


In previous work based on the 2012 survey, Fong and his Huntsville colleague Allen Wilhite found that one in five academics in various social science and business fields reported being asked to pad their papers with superfluous citations [2]. And in a 2017 survey by the same authors, women were less likely than men to be persuaded to add unjustifiable citations, but more likely to include honorary authors, potentially because of power dynamics between themselves and more senior colleagues [3].

There are legitimate circumstances in which a peer reviewer or an editor might ask for a certain paper to be cited, particularly in fields with only a handful of researchers. “I think if a researcher is coerced, then they should seek the advice of trusted scholars and their supervisor,” Fong says. “If editors become aware that their coercive requests are being shared, maybe that would dissuade them from future coercion.”

Removing incentives

According to the latest study, researchers who agreed to comply fully with editors’ coercive requests had an acceptance rate of 85%, compared with 77% for those who complied only partially and 39% for those who refused to comply.

Researchers who cave in and add undeserved citations to their papers are likely to publish in the same journals in the future and to engage in citation manipulation repeatedly. More than 68% of researchers who weren’t coerced agreed that they would be less likely to submit to coercing journals in the future, compared with 47% of those who were coerced.

Fong says that one solution to the problem is that journal self-citations should be excluded from metrics such as the journal impact factor, which is sometimes misused to evaluate scientists and their work. That, he notes, would give journal editors fewer reasons to ask for more citations to other papers published in their title.

Matt Hodgkinson, research integrity manager at the UK Research Integrity Office in London and an elected council member of the Committee on Publication Ethics (COPE), a UK-based non-profit organization, agrees. “If we want to reduce these distorting practices, then we have to take away the incentive to do them,” he says. “It’s a systemic corruption of the process.”
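To put a rough number on the incentive Fong and Hodgkinson describe: the two-year impact factor is simply the citations received in a year to the journal's items from the two previous years, divided by the number of citable items from those years. A toy calculation with made-up figures shows how much journal self-citations can contribute, and what excluding them would do:

```python
# Toy illustration with made-up numbers: how excluding journal self-citations
# changes a two-year impact factor.
def impact_factor(cites_to_prior_two_years, journal_self_cites, citable_items,
                  exclude_self_cites=False):
    numerator = cites_to_prior_two_years
    if exclude_self_cites:
        numerator -= journal_self_cites
    return numerator / citable_items

# Hypothetical journal: 400 citations to its last two years of papers, 60 of them
# from the journal itself, and 100 citable items published in those two years.
print(impact_factor(400, 60, 100))                           # 4.0 with self-citations
print(impact_factor(400, 60, 100, exclude_self_cites=True))  # 3.4 without them
```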


Hodgkinson says that editors who are making decisions on manuscripts on the basis of coercion are distorting the literature. “You’re making a decision on something that’s not a characteristic of the article itself.”

“I think there is still a lot of work to be done on educating individual journal editors,” Hodgkinson says. But, he adds, publishers and journals should actively monitor when peer reviewers or editors make citation requests.
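What that monitoring might look like is left open. As a purely illustrative heuristic, and not any existing publisher tool, one could scan decision letters and review reports for sentences that both request citations and name the journal itself, then route the hits to an integrity team for human judgement:

```python
# Illustrative heuristic only, not an existing publisher system: flag sentences in a
# decision letter or review that both request citations and name the journal itself.
import re

def flag_self_citation_requests(text, journal_names):
    request = re.compile(r"\b(cite|cited|citation[s]?|reference[s]?)\b", re.IGNORECASE)
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if request.search(sentence) and any(n.lower() in sentence.lower() for n in journal_names):
            hits.append(sentence.strip())
    return hits

# Example using the wording of the rejection letter discussed below.
letter = ("While the subject is within the scope of the journal, there are only four "
          "citations to past papers published in IJHE out of 150 references cited.")
print(flag_self_citation_requests(letter, ["IJHE", "International Journal of Hydrogen Energy"]))
```

A real system would still need humans to separate legitimate, specific suggestions from blanket demands to cite the journal.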

Detecting and flagging

In February, a journal run by Dutch publishing giant Elsevier drew a lot of criticism after it stated in a rejection letter to the author of a manuscript that one of the reasons their paper was rejected was that they had not included enough citations of papers published in the journal. The case came to light after the rejection letter was circulated widely on social media.

At the time, Elsevier clarified that it had a clear policy on citation manipulation, which states that editors must not attempt to boost the journal rankings by artificially inflating any metrics.

The onus is on publishers to take action, Hodgkinson says. If they don’t, the community needs to respond and flag these issues to bodies such as COPE, which can provide oversight.

Jonathan Wren, a bioinformatician at the Oklahoma Medical Research Foundation in Oklahoma City, contends that researchers who engage in citation manipulation should be named and shamed.

Wren is in the process of developing a tool that can automatically detect and flag excessive referencing between journals and individuals.
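Wren's tool is still in development, so the following is only a toy sketch of the general idea: measure how concentrated each journal's outgoing references are on any single other journal and flag pairs above a threshold. Detecting full citation rings would require graph-level analysis on top of this.

```python
# Toy sketch of screening for excessive cross-referencing between journals;
# not Wren's tool, just the general idea with made-up data.
from collections import Counter

def flag_citation_pairs(references, threshold=0.25):
    """references: iterable of (citing_journal, cited_journal) pairs.
    Flags pairs where more than `threshold` of a journal's outgoing
    references point at a single other journal."""
    outgoing = Counter(citing for citing, _ in references)
    pairs = Counter(references)
    return [(citing, cited, round(n / outgoing[citing], 2))
            for (citing, cited), n in pairs.items()
            if citing != cited and n / outgoing[citing] > threshold]

refs = ([("Journal A", "Journal B")] * 30
        + [("Journal A", "Journal C")] * 6
        + [("Journal A", "Journal D")] * 4)
print(flag_citation_pairs(refs))  # [('Journal A', 'Journal B', 0.75)]
```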

doi: https://doi.org/10.1038/d41586-023-01532-w

References

1. Fong, E. A., Patnayakuni, R. & Wilhite, A. W. Res. Pol. 52, 104754 (2023).
2. Wilhite, A. W. & Fong, E. A. Science 335, 542–543 (2012).
3. Fong, E. A. & Wilhite, A. W. PLoS ONE 12, e0187394 (2017).

**

Elsevier journal under fire for rejecting paper that didn’t cite enough of its old papers
Publisher says it has policies against artificially increasing journal metrics
by Dalmeet Singh Chawla, special to C&EN

February 10, 2023



The International Journal of Hydrogen Energy is published by Elsevier on behalf of the International Association for Hydrogen Energy. Credit: Elsevier

A scholarly journal run by the Dutch publishing giant Elsevier has come under scrutiny for rejecting a paper submitted for publication because, among other reasons, it didn’t cite enough of the journal’s previously published papers.

The rejection letter, from the International Journal of Hydrogen Energy (IJHE), which is published by Elsevier on behalf of the International Association for Hydrogen Energy, reads: “while the subject is within the scope of the journal, there are only four citations to past papers published in IJHE out of 150 references cited.”

The letter came to light after microbiologist-turned-scientific integrity expert Elisabeth Bik posted it on Twitter on Jan. 19.

Several researchers took to Twitter to criticize the wording of the letter. “Asking for more citations of the journal is inappropriate,” tweeted Jeff Catalano, a geochemist at Washington University in St. Louis. “While papers that are a poor fit often lack citations from the journal, this is not a predictor of fit.”

If a paper doesn’t cite any studies from a journal, that should be a flag to check if the paper is within the journal’s scope, Catalano tells C&EN. But “this shouldn’t be an acceptable deciding factor on an acceptance or rejection,” he says.

Nabi Ullah, a chemist at the University of Lodz who wrote the rejected paper, says it’s not the first time he has been asked to pad his manuscript with citations. “Previously, they just requested and did not reject our articles,” he says. “But this time they rejected our article, and once I saw the letter and read it I was surprised.”

Ullah subsequently submitted his paper to a different journal. “This makes me very disappointed, and I feel it is against science integrity,” he says.

“The obvious implication is that they are gaming the system,” says Guy Gratton, a professor of aviation and the environment at Cranfield University who also posted on Twitter about the rejection letter. “They’re simply trying to get more citations to up the Journal Impact Factor.” The impact factor is a controversial metric that is used to judge the significance of journals.

Darren Broom, product manager at Hiden Isochema, a company that makes instruments for sorption analysis, tweeted that he and his colleagues received a similar request when publishing in the IJHE. “I think we pushed back against the request & it was fine, but I wish they would not do this,” he wrote on Twitter. Broom declined a request for an interview with C&EN.

In a statement, Elsevier communications director David Tucker says the publisher takes manipulation of citation metrics seriously. “Elsevier has a clear policy around editors influencing citations to their journals: ‘The editor must not attempt to influence the journal’s ranking by artificially increasing any journal metric,’” he says. “We take coercive citation practices and peer-review manipulation very seriously and have retracted 131 papers from Elsevier journals over the last three years for peer-review manipulation.”

Referring specifically to the rejection letter circulating on Twitter, Tucker notes that it was issued without the knowledge of the IJHE’s editor-in-chief. “The Editor-in-Chief does not agree with the wording used and doesn’t in any way promote citations to the journal as an acceptance criterion. He is now talking with the other Editors of the journal to reinforce this point.”

But Gratton noted on Twitter that he previously received a similar request from another Elsevier journal. “The wording looked almost identical to the wording a group of colleagues and I had in a rejection about three years ago,” Gratton tells C&EN. “It was a paper [submitted] to one of Elsevier’s engineering journals.” He recalls that he and his colleagues were asked to add three or four citations from papers already published by the same journal before the manuscript would be considered for peer review.

“We decided that this was a piece of cynical game playing we weren’t prepared to get involved with, so we simply pulled it and submitted to another journal,” Gratton adds. He also notes that the journal didn’t specify which papers should be cited; it simply demanded more citations to papers from the same title. “This sort of behavior has no place. It’s cheating,” he says.

Peer reviewers asking authors to cite the reviewers’ own work is a similar problem. In 2019, Elsevier launched an investigation into hundreds of peer reviewers accused of manipulating citations by asking authors of submitted papers to pad them with citations to the reviewers’ own papers. The investigation was launched after a study by Elsevier staff found that, of the 55,000 peer reviewers examined, fewer than 1% consistently had their own papers referenced by the studies they refereed.
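The Elsevier screening described above amounts to asking, for each reviewer, how often the manuscripts they referee end up citing the reviewer's own work. The details of that analysis are not public; a hedged sketch with hypothetical data structures might look like this:

```python
# Hypothetical sketch, not Elsevier's actual analysis: for each reviewer, the share of
# refereed manuscripts that cite the reviewer's own papers, flagging consistent outliers.
def reviewer_self_citation_rates(assignments, threshold=0.5, min_reviews=5):
    """assignments: list of dicts with keys 'reviewer', 'reviewer_papers' (set of paper IDs),
    and 'manuscript_refs' (set of cited paper IDs). Returns flagged reviewers and their rates."""
    stats = {}
    for a in assignments:
        hit = bool(a["reviewer_papers"] & a["manuscript_refs"])
        n, k = stats.get(a["reviewer"], (0, 0))
        stats[a["reviewer"]] = (n + 1, k + hit)
    return {r: k / n for r, (n, k) in stats.items() if n >= min_reviews and k / n > threshold}

# Toy example: reviewer R1 is cited in 5 of the 6 manuscripts they refereed.
toy = ([{"reviewer": "R1", "reviewer_papers": {"p1"}, "manuscript_refs": {"p1", "x"}}] * 5
       + [{"reviewer": "R1", "reviewer_papers": {"p1"}, "manuscript_refs": {"y"}}])
print(reviewer_self_citation_rates(toy))  # {'R1': 0.833...}
```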

Academics say abuse and manipulation will continue as long as current incentive systems—which encourage publishing as many papers as possible in so-called prestigious publications—are in place.

“If we could find a way to value quality and have people slow down and not try to publish the smallest increment of results or not publish excessively, that would be helpful,” Catalano says.

Chemical & Engineering News
ISSN 0009-2347
Copyright © 2023 American Chemical Society 
https://cen.acs.org/policy/publishing/Elsevier-journal-under-fire-for-rejecting-paper/101/web/2023/02? 

 
