Science and the significant problem of spin and storytelling

Simon Gandevia

What do fairy tales and scientific papers have in common? Consider the story of Rumpelstiltskin.

A poor miller tries to impress the king by claiming his daughter can spin straw into gold. The greedy king locks the girl away and orders her to spin the gold. She fails, until the imp Rumpelstiltskin comes to her rescue.

In science, publishers and editors of academic journals prefer to publish tangible new findings – gold – rather than confirmations or refutations of already published research. This emphasis on novelty requires the production of “significant” results – usually meaning “statistically significant” ones.

In a typical null-hypothesis significance test, this means obtaining a probability (p) value below a specified threshold. In biology and medicine, the accepted cutoff is usually 0.05 (a 5% chance, or one in 20), and its use is routinely documented in the methods section of publications. Other fields of science, such as genetics and physics, use stricter probability thresholds. But the requirement for a threshold remains.
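The threshold logic can be made concrete with a minimal simulation. The sketch below – purely illustrative, with made-up data that does not come from any study discussed here – runs a two-sample permutation test and compares the resulting p value against the conventional 0.05 cutoff:

```python
import random
import statistics

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sample permutation test: estimate the probability of seeing a
    difference in group means at least as large as the observed one, if
    group labels were actually meaningless (the null hypothesis)."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign group labels
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            count += 1
    return count / n_perm

# Illustrative (invented) measurements for two small groups.
control   = [4.1, 3.8, 5.0, 4.6, 4.3, 3.9]
treatment = [4.9, 5.2, 4.4, 5.6, 4.8, 5.1]

p = permutation_test(control, treatment)
verdict = "significant at the 0.05 threshold" if p < 0.05 else "not significant"
print(f"p = {p:.3f} -> {verdict}")
```

The point of the hard threshold is exactly this last line: the result is declared significant or not by a single comparison, with no room for a value of 0.06 to be “almost” on the right side.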

How do researchers create the illusion of a significant finding when its probability value falls close to, but on the wrong side of, the specified threshold – for example, a probability of 0.06? Spin it, spin the story! It is the tale of Rumpelstiltskin in modern clothes.

Here are more than 500 examples of the pretzel logic researchers have used to claim significance despite p values above 0.05. It would be funny if it weren’t for the sheer scientific confusion the stories cause.

In recent years, the practice of claiming importance for such results has been termed “spin” – more formally, “reporting that could distort the interpretation of results and mislead readers.”

Increasingly, academics are measuring and analysing the prevalence of spin. As part of our development of the Quality Output Checklist and Content Assessment (QuOCCA) as a tool for assessing research quality and reproducibility, my colleagues and I measured the frequency of spin in three leading journals: the Journal of Physiology, the British Journal of Pharmacology and the Journal of Neurophysiology.

We found that when probability values were presented in the results section of a publication but were not statistically significant (greater than 0.05 but less than 0.10), the authors talked up the findings and spun a story in about 55%–65% of publications. Often, they described the non-significant results as a “trend” toward significance. Thus, straw is spun into gold! And it is gold for researchers, editors, publishers and universities.

Spinning marginal probability values is a shonky – that is, dubious, for our friends outside Australia – scientific practice. It shows the authors’ failure to appreciate the need for an absolute threshold for claiming the existence (or not) of an effect, or for supporting (or not) a hypothesis. It reveals an ingrained, and possibly incorrigible, capacity for bias. Furthermore, such authors seem unaware that a probability value of, say, 0.07 is not a “trend” toward significance: adding further samples or participants will not necessarily drive the probability value below the 0.05 threshold.
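This last point can be checked with a quick null simulation – again purely illustrative, with invented numbers unrelated to any paper discussed here. When there is no true effect, p values are roughly uniformly distributed, so the fraction of “significant” results stays near 5% no matter how much the sample size grows; a borderline p value is not on a downward trajectory that more data will complete:

```python
import math
import random

def two_sample_p(n, rng):
    """z-test p-value for two groups of size n drawn from the SAME
    standard normal distribution, so the null hypothesis is true
    by construction."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    # Two-sided p value from the standard normal CDF (via math.erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rng = random.Random(42)
trials = 2000
for n in (20, 40, 80):  # doubling the per-group sample size each time
    frac = sum(two_sample_p(n, rng) < 0.05 for _ in range(trials)) / trials
    print(f"n = {n:3d} per group: fraction of p < 0.05 is {frac:.3f}")
```

Each line of output hovers around 0.05: enlarging the sample does not convert near-misses into hits when no effect exists, which is precisely why a p of 0.07 cannot honestly be reported as a trend.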

The number of instances of spin in a publication is theoretically unlimited; any value above 0.05 can be spun. However, although our previous surveys of the three journals sometimes found more than one example of spin within a single article, such cases seemed unusual.

A 2022 paper in the British Journal of Pharmacology, entitled “Deferiprone attenuates neuropathology and improves outcome following traumatic brain injury,” demolished that impression. At least 25 times, the authors talked up results associated with a probability value greater than 0.05. The offending passages use phrases such as: “did not reach significance but showed a strong trend (p=0.075)”; “neurological preservation was observed, but was not significant”; “no significant changes were seen in proBDNF despite an increasing trend.”

In this paper, by Daglas and colleagues, many of the probability values lie between 0.05 and 0.10, but even values above 0.10 were described as “trends.” These included values of 0.11, 0.14, 0.16, 0.17, 0.23 and 0.24. The authors did not respond to my request for comment.

As the 2024 Paris Olympics get underway, it is tempting to ask: does this paper set a world record for scientific spin? Send us your nominations in the comments.

What should be done about the proliferation of spin about probability values? The question is part of a much bigger problem. All levels of the scientific “industry” know about the problems caused by the publication of shonky science, but efforts at control and improvement are piecemeal and hampered by self-interest. Education about scientific publication and mandatory requirements before publication are potentially useful measures.

The messages from Rumpelstiltskin should be that spinning straw can lead to trouble, and that science should not be fiction.

Simon Gandevia is deputy director of Neuroscience Research Australia.

Like Retraction Watch? You can make a tax-deductible contribution to support our work, follow us on Twitter, like us on Facebook, add us to your RSS reader, or subscribe to our daily digest. If you find a retraction that’s not in our database, you can let us know here. For comments or feedback, email us at [email protected].



