Dear readers, I have some news: this is my final Science Fictions newsletter. It’s been a really fun year, and I hope you’ve found these weekly science articles helpful.
Since it’s the last one, I’d like to offer a few parting thoughts on what I look for when I read about new scientific studies in the media: what I pay attention to in order to be a skeptical, well-informed reader of science news.
Correlation and Causality
This is such a fundamental point that it has become a cliché. It’s one of the first things you learn in a high-school philosophy course: correlation is not the same as causation.
Everyone, especially scientists, knows that you can’t draw conclusions about the cause of anything from a study that contains only correlational data. And yet people still make this mistake. When I write about science, I have to say it over and over again: “This is just an observational study where people fill out questionnaires. It is not an experiment, even if the researchers talk as though it were.”
The fact that so many people who should know better still make this mistake (or, at best, pay lip service to the rule, admitting that they can’t draw causal conclusions while writing their papers as if they could) suggests that the error is deeply rooted in our psychology and in the way scientists conduct research.
Essentially, scientists almost always want to ask cause-and-effect questions. For example, “Does taking vitamin D improve your health?” is a causal question, but often all we have is a correlational study (at least until a randomized controlled trial can be done). So you can understand why scientists end up using causal language: it’s what they really want to say.
But if you base causal language on correlational studies, you can end up making mistakes. Vitamin D is a good example: observational studies seem to show benefits from supplementation; randomized controlled trials do not. I know which I’d trust more (to be clear: it’s the randomized ones!).
There is nothing wrong with correlational research; in some contexts it can be very revealing. But a lot of it just confuses us, and it would be nice if scientists didn’t create even more confusion when describing their own research.
Beware of Press Releases
As a journalist, your inbox fills up with press releases and all sorts of strange messages from PR companies asking you to interview an “expert” about “the frontiers of bathroom science” or some similar topic that no one has ever heard of.
The reason they send so many emails is that press releases have a lot of influence: since journalists often don’t have the time or the skills to read and critically assess a scientific paper, much of the science reporting you see in the media is based largely on the information in the press release.
This leaves a lot of room for distorting the truth when describing research. No one checks press releases for accuracy (let alone peer-reviews them); you can say more or less whatever you want.
And many do just that. A good example is a press release from earlier this year about vaping, which strongly implied that the study involved humans when it was in fact conducted on mice. There are countless other examples, and there are even studies showing that press releases containing exaggerated claims tend to lead to exaggerated news reports, while more honest press releases lead to more accurate coverage.
So what’s my advice? If you’re really interested in how accurate a media report is, look up the press release on sites like EurekAlert and ScienceDaily (which publish science press releases) to see whether the story is simply a repetition of the press release or whether the journalist has independently sought comments from other scientists (or provided a critical perspective themselves).
Failing that, you can check the Science Media Centre website, which regularly collects scientists’ reactions to research currently in the news. Those scientists may not be critical; they might think the new study is great. But it’s a very useful check on whether the press release has been exaggerated and whether media coverage of the paper has gone too far.
Take a Deep Breath
I’ve often written about the “replication crisis”: the serious problem in science whereby many research results cannot be reproduced by independent researchers. This is often because the original studies were riddled with flaws: small sample sizes, statistical errors, data errors and, in some extreme cases, outright fraud.
In the decade or so since the replication crisis began to be discussed, many reforms have made individual studies somewhat more reliable (by having scientists pre-register their plans, share their data openly, and so on). However, individual studies are still just that: individual studies.
Just because a study is published in a scientific journal does not mean it is immune from criticism. In fact, it may have been seen by only two or three other researchers, who could easily have missed some of its problems.
So when a new study comes out, it’s best to take a deep breath and wait for the debate. Very often, other scientists notice problems and write about them online.
Authors of breathless social media posts (not to mention news articles) about new scientific studies are routinely embarrassed when the research they were promoting turns out to have a serious flaw that undermines its results.
Don’t be like those authors. Take your time, and wait for the scientific community to weigh in on a study before getting too excited.
Take Advice from NASA
NASA has a system called Technology Readiness Levels (TRLs), a scale from one to nine that engineers use to communicate what stage of development a particular piece of space technology has reached. At the lower levels, the technology is still in its infancy: perhaps it is just a phenomenon that has been observed in the laboratory.
The further you go up the TRL scale, the closer the technology gets to real-world use: at TRL 7, for example, a prototype of the technology has been deployed in space. At the highest level, TRL 9, the technology has been proven on actual space missions.
Scientific studies can be thought of in a similar way. “Preclinical” studies on rats and mice? They are often a necessary start, but they sit low on a scientific TRL scale. Observational studies in humans? Somewhere in the middle. High-quality randomized controlled trials involving thousands of people? Much closer to the top.
When you see a new scientific study covered in the media, ask yourself where it might rank on this scientific TRL scale. If it’s close to the bottom, it might be interesting, but you probably shouldn’t bet your life savings on it. If, however, you see several studies at the higher end of the scale, you can be much more confident that what they report is true.
This will never be a precise method: it is obviously better to be a subject-matter expert who can evaluate all the details of a study than to rely on a scale like this. But thinking about whether the evidence for a particular phenomenon (a diet, a training method, a treatment) is still preliminary or more ready for prime time can help you avoid pinning your hopes on studies that may later turn out to fail.
Parting
I am so grateful to everyone who read this newsletter and my other science writing in 2023, and especially to those who got in touch to discuss the topics I covered. I wish you all a Merry Christmas and a Happy New Year. Goodbye!
Source: I News
