
Advice for journalists who write about epidemiologic studies

Epidemiology is the scientific discipline that aims to identify the occurrence and determinants of health and disease in human populations and to apply this knowledge to the control of health problems. It systematically uses population experience concerning exposures (social factors, lifestyle factors, environmental factors, treatments, etc.) and health, and it draws on both experimental and observational methods.

As in other areas of scientific research, our findings are not always right; they are seldom completely right and sometimes simply wrong. Many of our observations will not be replicated by others. Our findings may suggest causal links, but no single study can claim causality, whether it is experimental or observational in design. Conclusions are based on a body of literature, and journalists should always ask the researcher what previous studies have found and what the new study adds to the evidence. If the study is the first of its kind, it will usually lead to nothing more than a reformulated hypothesis that requires further testing.

Journalists – and researchers – should avoid describing differences between exposed and unexposed groups simply as “statistically significant”; they should consider the strength of the evidence against the null hypothesis. Confidence intervals should always be examined, and the issue is not simply whether they include the null result, but what the clinical or public health implications would be if the true effect lay anywhere within the range of values they bound.

Many P-values arise from researchers searching for significance – so-called “data-dredging”. Journalists should therefore be highly skeptical of sub-group analyses of randomized trials. Any observed sub-group (i.e. interaction) effect should be supported by a formal statistical test indicating the strength of the evidence for interaction. When no main effect of the exposure is found in the overall study population, some researchers resort to hunting for a sub-group within which strong evidence of association appears. In these circumstances the sub-group effect is unlikely to be real, but it unfortunately becomes more publishable.
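To see how easily chance alone produces such findings, consider a small simulation – a sketch in Python in which no sub-group has any real effect (the number of sub-groups, the sample sizes, and the 0.05 threshold are illustrative assumptions, not taken from any particular study):

```python
# With no true effect anywhere, testing many sub-groups still yields
# "significant" P-values by chance alone. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subgroups = 20      # e.g. age bands, sexes, regions, ...
n_per_arm = 100       # participants per arm within each sub-group

false_positives = 0
for _ in range(n_subgroups):
    # Both arms are drawn from the same distribution: the null is true.
    treated = rng.normal(0.0, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_subgroups} null sub-groups reached P < 0.05")
```

At a 5% threshold, roughly one in twenty truly null comparisons will look “significant”, which is exactly why an isolated sub-group finding deserves skepticism.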

More important than P-values is the authors’ ability to address the major sources of error, particularly in observational studies, in addition to the strength of evidence against the null hypothesis. The most important of these errors (biases) are confounding, selection bias and information bias.

Confounding results from associations among the many risk factors for a given health problem; eating a healthy diet, for example, is associated with many other behavioral characteristics that may themselves be causally related to the disease under study. Is the association between vitamin C and the risk of infections really due to vitamin C, or to something else associated with vitamin C intake – for example, poverty and over-crowding increasing susceptibility to infections?
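The vitamin C example can be made concrete with a toy simulation (a Python sketch with invented numbers: here poverty lowers vitamin C intake and raises infection risk, while vitamin C itself has no effect at all):

```python
# Confounding: poverty drives both low vitamin C intake and infection risk.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
poor = rng.random(n) < 0.3                         # the confounder
vit_c = rng.normal(60 - 20 * poor, 10)             # intake depends on poverty only
infection = rng.random(n) < (0.05 + 0.10 * poor)   # risk depends on poverty only

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print("crude correlation:  ", round(corr(vit_c, infection), 3))
print("among the poor:     ", round(corr(vit_c[poor], infection[poor]), 3))
print("among the non-poor: ", round(corr(vit_c[~poor], infection[~poor]), 3))
```

The crude data show that low vitamin C “predicts” infection, yet within each stratum of the confounder the association vanishes: the appearance of an effect was produced entirely by poverty.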

Selection bias can occur even in a perfectly designed study, because people may decline the invitation to take part, or may tire of the study and leave before it is over. A randomized trial of, for example, colon cancer screening needs to run for many years to show whether screening is beneficial or not. During those years many people will drop out, and those who drop out may well have a different risk of colon cancer from those who remain in the study.
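The effect of such differential dropout can be illustrated with a hypothetical screening trial in which screening truly does nothing (a Python sketch; all probabilities are assumptions chosen only to make the mechanism visible):

```python
# Selection bias: dropout related to risk distorts a completers-only analysis.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
screened = rng.random(n) < 0.5                             # randomized arms
high_risk = rng.random(n) < 0.2                            # baseline risk factor
cancer = rng.random(n) < np.where(high_risk, 0.08, 0.02)   # screening has no effect

# Assume low-risk participants in the screening arm tire of it and leave,
# while everyone else tends to stay.
stay_prob = np.where(screened & ~high_risk, 0.5, 0.9)
stays = rng.random(n) < stay_prob

for arm, name in [(screened, "screened"), (~screened, "control")]:
    completers = arm & stays
    print(name, "cancer risk among completers:", round(cancer[completers].mean(), 4))
```

Although randomization guaranteed equal risk in both arms, the completers-only comparison makes screening look harmful, purely because leaving the study was related to risk.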

Information bias results from working with imperfect information on exposures or health problems. Asking people to recall their diet and intake of vitamin C is not very accurate, even for recent exposures. Measuring vitamin C in the blood yields only a snapshot of exposure – one that covers neither the entire exposure time window of interest nor vitamin C levels in body tissues. Our data on health and diseases also come with uncertainties: even the best experts will often disagree when diagnosing diseases, especially if the diagnoses are not based on well-defined and well-recorded diagnostic criteria.
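Even non-differential error of this kind is not harmless: random noise in an exposure measurement tends to dilute a real association toward the null. A minimal Python sketch (with invented numbers) shows the mechanism:

```python
# Information bias: a noisy "snapshot" measurement attenuates a true association.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
true_intake = rng.normal(60, 10, n)
outcome = 0.5 * true_intake + rng.normal(0, 10, n)   # a genuine association
snapshot = true_intake + rng.normal(0, 15, n)        # error-laden measurement

print("correlation with true intake:", round(np.corrcoef(true_intake, outcome)[0, 1], 2))
print("correlation with snapshot:   ", round(np.corrcoef(snapshot, outcome)[0, 1], 2))
```

The same underlying relationship looks much weaker when seen through the noisy snapshot, so studies with poor exposure measurement can miss effects that are really there.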


A study’s credibility rests much more on how the authors address these sources of error than on the calculated P-values. Seldom is it possible to address more than a fraction of these uncertainties in a single study, and it may take time (often years) to collect new data to assess whether the findings of a particular study are real or due to bias or chance. For this reason, epidemiologists publish their data so that others can take a critical look at their findings. The aim is not to confirm results (be wary of authors who use this term); the aim is to decide whether identified or postulated associations are due to chance or bias, or whether they stand up under further testing.

In most scientific disciplines, publications generally reach only the scientific community, and interesting ideas emerge and disappear without the public necessarily hearing about them. Epidemiologists, however, address issues of great public concern: how we create a safe environment, how we make sure we get the best medical treatment, how we can prevent serious diseases, how we should live our lives, and so on. For that reason many of our papers reach the public media even though the findings are only preliminary, and unfortunately they often leave the public needlessly alarmed or unduly optimistic. The public will, we hope, develop a healthy skepticism towards such results, especially with a little help from journalists. A new cure for cancer is a good story in the media, but try to recall a year in recent decades when that promise was not made. It is essential for journalists to realize that researchers, and not just the studies they conduct, have biases too. Hyping the importance of findings may reflect pressure from funders (e.g. the pharmaceutical industry) and researchers’ need to renew grants and to promote their field of activity over other areas. Perhaps the best examples in the last decade come from genetic epidemiology and personalized medicine, where we are still waiting for the promised impact. There was a time when funding was available for almost anyone who could spell DNA.

Please ask critical questions and resist the sensational headline. Read the whole paper, not just the section called “What this study shows”. Above all, ask what other studies have found.

Epidemiologists attach increasing importance to systematic reviews and meta-analyses, that is, analyses that take into consideration all available studies on a given topic. These provide a comprehensive view of what is known to date, rather than focusing on the “hot off the press” study, which may be misleading in view of the sources of error discussed above.
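The basic arithmetic behind a simple (fixed-effect) meta-analysis is easy to show. In this Python sketch the five study results and their standard errors are entirely hypothetical; real meta-analyses also assess heterogeneity and study quality:

```python
# Inverse-variance pooling of (hypothetical) log relative risks from five studies.
import numpy as np

log_rr = np.array([0.10, -0.05, 0.20, 0.08, 0.15])   # invented study estimates
se     = np.array([0.12,  0.15, 0.20, 0.10, 0.18])   # invented standard errors

w = 1 / se**2                            # more precise studies get more weight
pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f} to {np.exp(hi):.2f})")
```

Each study contributes in proportion to its precision, so a single small but striking study cannot dominate a body of more precise evidence.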

It is important that we share our findings with the public, but we want to do this in a balanced and cautious way.

Rodolfo Saracci has written a very short introduction to epidemiology [1], which may be a good starting point for journalists who cover epidemiologic results in the public media.

Jørn Olsen, Shah Ebrahim, Neil Pearce, Cesar Victora


1. Saracci R. Epidemiology: A Very Short Introduction. Oxford: Oxford University Press; 2010. ISBN 978-0-19-954333-5.