Our confusion is understandable – medical research is carried out by experts with advanced technical knowledge, and most of us don't have that knowledge.
So how do you know whether a study is reliable or not?
Hassan Vally, an associate professor in Public Health at Melbourne's La Trobe University, has written a helpful article on this topic for the SciBlogs website.
He tells Jesse Mulligan the reason we often see contradictory results is that there are a lot of variables at play while scientists are still gathering evidence and conducting studies to settle a question.
“Sometimes it takes a little while for the true picture to emerge so we kind of have to be a little bit patient and let people do a number of studies and see what all of these collectively are telling us, but these are the frustrations really, saying something is good for you one day and bad the next.”
While the media do play a role in the problem, Vally says he doesn't believe they're completely to blame.
The media's tendency to look for the dramatic side of the story is understandable, he says.
“The attraction for the media, I guess, is to present the most emotional and most scary aspect of the story because that’s what gets people interested, and at the end of the day, people in the media want to write interesting stories that get people to click on links or buy newspapers.
“I do feel like scientists have to do better, and pay more attention to how we communicate our findings to help the media and of course help the general public who consume these stories.”
How to check the reliability of medical studies
Vally's checklist to help people become responsible consumers of scientific studies and make their own assessments of what they read:
Is it peer-reviewed?
When a scientist wants their work to be published in a journal, the journal assesses the quality of the science by sending it to experts, who decide whether it’s good enough to be published or whether there are flaws.
"We need to make sure that when you’re reading an article, in a newspaper or the internet, that it’s based on work that’s been peer-reviewed," Vally says.
"It’s not a perfect system, but it’s the best we’ve got – I think sometimes people compare it to, you know... People say democracy is not perfect but the alternative is much worse and so peer-review has its flaws but if it’s been published in a top tier journal you can at least rest assured that it’s been scrutinised pretty well by some experts in that field."
Was it conducted on humans?
Often we read about studies because we wonder what implications they have for us.
"The only way research can be applicable to you and me as human beings is if that research was conducted on humans. And that’s not to discount other research in laboratory or animals, but that’s the early stages of the scientific discovery process, if you like, and it’s only when we do research in humans that we can be confident that that actually could have some meaning for us."
Is there proven causation?
It's important to remember that correlation doesn't equal causation. This often comes up in observational studies, which collect data on a population and then look for links between two variables, Vally says.
"Whenever you do that research you can never be sure, when you see two variables that are associated with each other, whether they're just correlated – so they just move together coincidentally – or whether one variable causes the other."
Causation is hard to prove, he says.
"I like to think of it like you’re watching those courtroom dramas on television, and you’ve got the lawyer trying to prove that someone’s murdered someone and they build a case based on lots of different lines of evidence and some evidence is stronger than others."
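The correlation-versus-causation point can be made concrete with made-up numbers. In this sketch (a hypothetical illustration, not from any study Vally cites), two variables track each other closely only because both depend on a hidden third factor:

```python
# Hypothetical illustration: two variables can be strongly correlated
# without one causing the other, because both depend on a third factor.
import random

random.seed(42)

# Temperature is the hidden common cause (a confounder).
temperature = [random.uniform(10, 35) for _ in range(1000)]

# Ice-cream sales and sunburn cases both rise with temperature,
# plus independent noise -- neither causes the other.
ice_cream = [t * 2.0 + random.gauss(0, 3) for t in temperature]
sunburn = [t * 1.5 + random.gauss(0, 3) for t in temperature]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream, sunburn)
print(f"correlation: {r:.2f}")  # strongly correlated, yet no causal link
```

An observational study that only measured ice-cream sales and sunburn would see a strong association; it takes more evidence, of the kind Vally's courtroom analogy describes, to rule out confounders like this.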
What is the size of the effect?
Most risk estimates in studies are presented as relative risk, Vally says, and the way risks are expressed can mislead.
"We know that the brain is not very good at understanding what that risk actually means, and what the brain does is it overestimates how much of a hazard that is.
"Now the real size of the problem is usually not as dramatic as that, so we need to start understanding risk better and we need to start expressing risk as absolute risk, which is looking at your actual chance of getting ill – and once you start doing this, these risks start sounding a whole lot less scary."
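To make the relative-versus-absolute distinction concrete, here is a rough sketch with made-up numbers (not figures from any study Vally cites):

```python
# Hypothetical numbers: a "50% increased risk" headline can describe
# a very small change in absolute terms.
baseline_risk = 0.02   # 2 in 100 people affected without the exposure
relative_risk = 1.5    # the "50% higher risk" from the headline

exposed_risk = baseline_risk * relative_risk      # 3 in 100
absolute_increase = exposed_risk - baseline_risk  # 1 extra case per 100

print(f"Relative: +{relative_risk - 1:.0%}")
print(f"Absolute: +{absolute_increase:.0%} (from {baseline_risk:.0%} to {exposed_risk:.0%})")
```

The same finding reads as "risk up 50%" or "one extra case per hundred people" – the second framing is what Vally means by expressing risk as absolute risk.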
Is the finding corroborated by other studies?
You shouldn't believe studies that have no backing or have only been done once, Vally says.
"Because whenever you’re studying humans and the way the human system works, it’s very complex, there’s lots and lots of variables."
This also explains the contradictory results we often see from one study to another, he says.
"The reason that happens is because we overreact or over-interpret a single study on its own, because these studies can vary quite a bit.
"So to stop us from bouncing around and having to change our minds all the time, what we need to do is make sure when we’re reading a study [ask] what the context is, is that one study out of many that have all shown the same thing or is this the first study that’s shown this, [if so then] we better be a little bit cautious about how we interpret it."