
Back to School

The really old-school epidemiologists will remember how influential members of the profession in the 1980s complained about all the new methods flooding our scientific papers: log-linear models, later logistic regression and then Cox models.

Epidemiologists had been served well in aetiological studies by doing stratified analyses, at first using Cochran’s method of weighting stratum-specific estimates into a common measure of association (Cochran, 1954). Later the Mantel-Haenszel method took over (Mantel and Haenszel, 1959), as sketched below. All of this could be done with pencil and paper or with the growing power of pocket calculators – anyone remember the HP-41C? What these new methods added to our toolbox, beyond making calculations simple once they entered standard computer packages, was not clear in the beginning. What had we gained that we did not have before in our search for determinants of disease? That was the question asked, and the assumed answer was: not much. Indeed, in 1981 the London School of Hygiene & Tropical Medicine Masters in Epidemiology taught only six (virtually incomprehensible) sessions on the use of GLIM – but we did learn Cochran’s method and how to compare survival curves. All will probably agree now that the new modelling methods were and still are important. These methods, together with powerful computers and better funding, have produced much more robust research information for use in public health and clinical practice.
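
For readers who never had to do this by hand, here is a minimal sketch of the Mantel-Haenszel pooled odds ratio in Python; the counts and the function name are our own, purely illustrative, not data from any real study.

```python
# Mantel-Haenszel pooled odds ratio for a set of stratified 2x2 tables:
# OR_MH = sum_i(a_i * d_i / n_i) / sum_i(b_i * c_i / n_i)

def mantel_haenszel_or(strata):
    """strata: list of (a, b, c, d) tuples, one 2x2 table per stratum,
    where a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    numerator = 0.0
    denominator = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d           # total subjects in this stratum
        numerator += a * d / n      # stratum contribution ad/n
        denominator += b * c / n    # stratum contribution bc/n
    return numerator / denominator

# Two hypothetical strata (e.g., two age groups)
tables = [(10, 20, 5, 40), (8, 15, 4, 30)]
print(mantel_haenszel_or(tables))  # pooled odds ratio across strata
```

Exactly this arithmetic, two sums and a division per stratum, is what made the method feasible on a pocket calculator.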

In recent years, a whole set of new design tools, analytical tools and philosophies of science has come into epidemiology. We have moved an important part of our resources from deductive science to inductive methods, collecting huge amounts of data in the hope that we can somehow make sense of the patterns they hide. We have adopted Mendelian randomization as a new way of using newly gathered data on genetic variation to understand environmental determinants of disease. We have advanced methods for simulating and imputing missing data to the point where we start to doubt whether we need real data at all. We have developed new methods for assessing the reliability of diagnostic tests and their contribution to the accurate prediction of outcomes. We have seen new ways of causal reasoning using DAGs. We have tools for adjusting for loss to follow-up or non-compliance with the protocol using inverse probability weighting (sketched below), and we have seen many new and creative designs using family-based data. The structure of the case-control study depends on what we want to estimate, as Miettinen told us in the 1970s – a lesson that is sinking in, but slowly. Perhaps most important, in some parts of research, is the introduction of Bayesian analytical principles, which bring statistical analyses closer to the common ways of making scientific inference.
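
As one illustration of these tools, here is a minimal sketch of inverse probability weighting for loss to follow-up, run on simulated data; the covariates, the dropout mechanism and all variable names are invented for the example, and a real analysis would also have to check the fit of the weight model.

```python
# Sketch of inverse probability weighting (IPW) for loss to follow-up.
# Idea: model each subject's probability of remaining under observation,
# then up-weight retained subjects by the inverse of that probability.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(50, 10, 500),
    "smoker": rng.integers(0, 2, 500),
})
# Simulated follow-up: older subjects and smokers drop out more often
p_stay = 1 / (1 + np.exp(-(3 - 0.04 * df["age"] - 0.5 * df["smoker"])))
df["followed_up"] = rng.random(500) < p_stay

# Model P(followed up | baseline covariates)...
model = LogisticRegression().fit(df[["age", "smoker"]], df["followed_up"])
p_hat = model.predict_proba(df[["age", "smoker"]])[:, 1]

# ...and weight each retained subject by 1 / P(followed up | covariates),
# so the complete-case sample again resembles the full cohort.
df["ipw"] = np.where(df["followed_up"], 1.0 / p_hat, 0.0)
print(df.loc[df["followed_up"], "ipw"].describe())
```

Under the usual assumption that dropout is explained by the measured covariates, the weighted complete-case analysis recovers what the full cohort would have shown.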

The slew of new methods may be seen as yet another set of devices that scientific reviewers can use to annoy less sophisticated epidemiologists seeking publication, but do they advance our production of useful information for public health practitioners? We believe the answer is yes. There are important new tools available, and we should go back to school to learn about them. There are, of course, many good studies that can be done using standard designs and simple ways of analyzing data. But even for these studies, new principles of sensitivity analysis will hopefully remove some of the unreasonable reliance on P-values, and there are important new studies that can only be done by adopting new techniques.

We know that new methods are adopted much more rapidly in the parts of the world with good universities, while other parts are still using methods from a past century. The IEA is investing in bringing these new methods to less developed parts of the world: it has recently conducted epidemiological methods courses in Jaipur, India (2009), Riyadh, Saudi Arabia (2010) and Blantyre, Malawi (2011), and an advanced course will be held in Peru in May 2012. The IJE is also playing a role by publishing “tutorials” that aim to present the essence of these new methods with essential citations – not just to books and papers but also to software – for those who would like to learn more. Many of these methods have not yet found their way into the standard software packages that epidemiologists use (SAS, Stata, SPSS, R).

Jørn Olsen, Cesar Victora, Neil Pearce, Shah Ebrahim


References:

  • Cochran, W.G. 1954. Some methods for strengthening the common χ² tests. Biometrics 10: 417–451.
  • Mantel, N., and W. Haenszel. 1959. Statistical aspects of the analysis of data from retrospective studies of disease. J. Natl. Cancer Inst. 22: 719–748.