
"But is it peer reviewed?" - Fraud and Science

Source: psychologytoday.com


Cheating in the sciences may be the most serious form of fraud.

Cheating among students grabs the headlines one day. Everyone knows about cheating in the business world: think Libor.

A smaller story about cheating, on an inside page of the NY Times, is equally upsetting, if only because it comes from an unexpected quarter.

Now the culprits are research scientists.

More than 2,000 papers were analyzed: scientific findings that were published in journals but later retracted. When the retractions in biomedical and life-sciences publications were first reported, the reason offered was that the researchers had made honest mistakes.

When looked at more closely, it turns out that three-quarters of the retracted papers were fraudulent. Researchers had actively falsified data; these were not computational mistakes.

Benjamin Druss, an Emory University professor of health policy, notes that while the number of retracted papers is high in absolute terms, the proportion of all published research affected is low: about one in 10,000 papers.

Cheating in schools is a serious matter because it undermines the integrity of education. A real education means a student has learned the subject matter. There is no way of knowing what a cheater has learned. More seriously, it makes every potential employer question what an applicant knows and will do.

Cheating in business undercuts the trust that is necessary for efficient economic functions. There is no way to know the true market value of goods or services when the price has been manipulated.

Cheating in the sciences may be the most serious form of fraud. The modern world rests upon society accepting the outcomes of scientific research. Without trust in the scientific procedure, there is no way to know the quacks from the experts. Society can no longer sift through competing claims. Who knows which is snake oil and which a true medicine?

Underlying the streak of cheating in education, business and science is a single factor: the pressure to succeed and the steep cost of failure. One of the authors of the study of fraud in biomedical and life-sciences research, Arturo Casadevall of Albert Einstein College of Medicine, says that the increase in fraudulent papers is due to a winner-take-all atmosphere in the field. He notes that getting a paper published in a major journal is the difference between heading a lab and heading for the door.

[...]

Read the full article at: psychologytoday.com




Peer review: a flawed process at the heart of science and journals
By Richard Smith | Journal of the Royal Society of Medicine

Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have.

When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new ‘disease’, female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). ‘But,’ the news editor wanted to know, ‘was this paper peer reviewed?’ The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)

WHAT IS PEER REVIEW?

My point is that peer review is impossible to define in operational terms (an operational definition is one whereby if 50 of us looked at the same process we could all agree most of the time whether or not it was peer review). Peer review is thus like poetry, love, or justice. But it is something to do with a grant application or a paper being scrutinized by a third party—who is neither the author nor the person making a judgement on whether a grant should be given or a paper published. But who is a peer? Somebody doing exactly the same kind of research (in which case he or she is probably a direct competitor)? Somebody in the same discipline? Somebody who is an expert on methodology? And what is review? Somebody saying ‘The paper looks all right to me’, which is sadly what peer review sometimes seems to be. Or somebody poring over the paper, asking for raw data, repeating analyses, checking all the references, and making detailed suggestions for improvement? Such a review is vanishingly rare.

What is clear is that the forms of peer review are protean. Probably the systems of every journal and every grant-giving body are different in at least some detail; and some systems are very different. There may even be some journals using the following classic system. The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you’d expect by chance.[1]
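
The ‘classic system’ is, in effect, a small decision procedure, so it can be simulated. Below is a minimal sketch in Python: the editorial rule is taken from the paragraph above, but the quality scores, the noise model, and the noise level are all assumptions, tuned so that two reviewers agree only slightly more often than chance.

    import random

    def review(quality, noise=2.0):
        # One reviewer's verdict: True means 'publish'. The reviewer sees the
        # paper's underlying quality (between 0 and 1) only through heavy
        # noise. The noise level is an assumption, chosen so that two
        # reviewers agree only slightly more often than chance (about 51%).
        perceived = quality + random.uniform(-noise, noise)
        return perceived > 0.5

    def classic_system(quality):
        # The 'classic' editorial rule: two reviewers decide if they agree;
        # a third reviewer breaks any tie.
        a, b = review(quality), review(quality)
        return a if a == b else review(quality)

    random.seed(1)
    trials = 100_000
    good = sum(classic_system(random.uniform(0.5, 1.0)) for _ in range(trials))
    bad = sum(classic_system(random.uniform(0.0, 0.5)) for _ in range(trials))
    print(f"good papers published: {good / trials:.1%}")  # roughly 59%
    print(f"bad papers published:  {bad / trials:.1%}")   # roughly 41%

With reviewer agreement this close to chance, the rule publishes roughly two in five bad papers and rejects roughly two in five good ones, which is not far from the coin toss described above.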

That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked ‘publish’ and ‘reject’. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal composed only of papers that had failed peer review and see if anybody noticed.

I wrote back: ‘How do you know I haven’t already done it?’

[...]

THE DEFECTS OF PEER REVIEW

So we have little evidence on the effectiveness of peer review, but we have considerable evidence on its defects. In addition to being poor at detecting gross defects and almost useless for detecting fraud it is slow, expensive, profligate of academic time, highly subjective, something of a lottery, prone to bias, and easily abused.

Slow and expensive

Many journals, even in the age of the internet, take more than a year to review and publish a paper. It is hard to get good data on the cost of peer review, particularly because reviewers are often not paid (the same, come to that, is true of many editors). Yet there is a substantial ‘opportunity cost’, as economists call it, in that the time spent reviewing could be spent doing something more productive—like original research. I estimate that the average cost of peer review per paper for the BMJ (remembering that the journal rejected 60% without external review) was of the order of £100, whereas the cost of a paper that made it right through the system was closer to £1000.

The cost of peer review has become important because of the open access movement, which hopes to make research freely available to everybody. With the current publishing model peer review is usually ‘free’ to authors, and publishers make their money by charging institutions to access the material. One open access model is that authors will pay for peer review and the cost of posting their article on a website. So those offering or proposing this system have had to come up with a figure—which is currently between $500 and $2500 per article. Those promoting the open access system calculate that at the moment the academic community pays about $5000 for access to a peer reviewed paper. (The $5000 is obviously paying for much more than peer review: it includes other editorial costs, distribution costs—expensive with paper—and a big chunk of profit for the publisher.) So there may be substantial financial gains to be had by academics if the model for publishing science changes.
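
To make the scale of those possible gains concrete, here is a back-of-the-envelope sketch using the article's per-paper figures; the annual volume of papers is a hypothetical input, not something the article states.

    # Rough comparison of the two publishing models described above.
    # Per-paper figures come from the article; the volume is hypothetical.
    papers_per_year = 10_000                       # hypothetical volume
    subscription_cost = 5_000                      # $ per paper, current model
    fee_low, fee_high = 500, 2_500                 # $ proposed author-pays range

    current = papers_per_year * subscription_cost
    oa_low, oa_high = papers_per_year * fee_low, papers_per_year * fee_high

    print(f"current model:     ${current:,}")                 # $50,000,000
    print(f"author-pays model: ${oa_low:,} to ${oa_high:,}")  # $5,000,000 to $25,000,000
    print(f"possible savings:  ${current - oa_high:,} to ${current - oa_low:,}")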

There is an obvious irony in people charging for a process that is not proved to be effective, but that is how much the scientific community values its faith in peer review.

Inconsistent

People have a great many fantasies about peer review, and one of the most powerful is that it is a highly objective, reliable, and consistent process. I regularly received letters from authors who were upset that the BMJ rejected their paper and then published what they thought to be a much inferior paper on the same subject. Always they saw something underhand. They found it hard to accept that peer review is a subjective and, therefore, inconsistent process. But it is probably unreasonable to expect it to be objective and consistent. If I ask people to rank painters like Titian, Tintoretto, Bellini, Carpaccio, and Veronese, I would never expect them to come up with the same order. A scientific study submitted to a medical journal may not be as complex a work as a Tintoretto altarpiece, but it is complex. Inevitably people will take different views on its strengths, weaknesses, and importance.

So, the evidence is that if reviewers are asked to give an opinion on whether or not a paper should be published they agree only slightly more than they would be expected to agree by chance. (I am conscious that this evidence conflicts with the study of Stephen Lock showing that he alone and the whole BMJ peer review process tended to reach the same decision on which papers should be published. The explanation may be that, as the editor who had designed the BMJ process and appointed its editors and reviewers, he had fashioned them in his image, so it is not surprising that they made similar decisions.)
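
Agreement ‘beyond chance’ of this kind is conventionally measured with Cohen's kappa, where 0 means agreement no better than chance and 1 means perfect agreement. Here is a short sketch; the counts are hypothetical, not data from any study Smith cites.

    def cohens_kappa(both_yes, a_only, b_only, both_no):
        # Cohen's kappa for two reviewers' publish/reject verdicts:
        # (observed agreement - chance agreement) / (1 - chance agreement)
        n = both_yes + a_only + b_only + both_no
        observed = (both_yes + both_no) / n
        # Chance agreement, from each reviewer's marginal 'publish' rate.
        p_a = (both_yes + a_only) / n
        p_b = (both_yes + b_only) / n
        chance = p_a * p_b + (1 - p_a) * (1 - p_b)
        return (observed - chance) / (1 - chance)

    # Hypothetical verdicts on 100 papers: 60 agreements, 40 splits.
    print(round(cohens_kappa(both_yes=30, a_only=22, b_only=18, both_no=30), 2))  # 0.2

A kappa near 0.2, despite 60% raw agreement, is what ‘only slightly more than chance’ looks like in numbers.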

Sometimes the inconsistency can be laughable. Here is an example of two reviewers commenting on the same paper.

"Reviewer A: `I found this paper an extremely muddled paper with a large number of deficits’"

"Reviewer B: `It is written in a clear style and would be understood by any reader’."


[...]

Read the full article at: rsmjournals.com


