Dickersin K, Chalmers F (2014). Thomas C Chalmers (1917-1995): a pioneer of randomized clinical trials and systematic reviews.

© Kay Dickersin, Center for Clinical Trials, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland 21205, USA. Email: kdicker3@jhu.edu.


Cite as: Dickersin K, Chalmers F (2014). Thomas C Chalmers (1917-1995): a pioneer of randomized clinical trials and systematic reviews. JLL Bulletin: Commentaries on the history of treatment evaluation (https://www.jameslindlibrary.org/articles/thomas-c-chalmers-1917-1995/)


Introduction

Few people have contributed as much as Thomas (Tom) Chalmers to the fair assessment of treatment effects. His research was wide ranging, encompassing both randomized comparisons of specific treatments and their effects and studies of how randomized clinical trials and systematic reviews of randomized trials are, and should be, conducted and reported. Tom Chalmers’ research influenced many who have reflected publicly on this (see, for example, Maclure 1996a; Maclure 1996b; Lau 1996; Knatterud 1996; Greenhouse 1996; Sackett 2008; Memorial Meeting), and many others who have not.

Tom Chalmers, a son and grandson of doctors, grew up in Forest Hills, New York, in Queens. He attended Yale University in 1939 and received his MD degree from Columbia University College of Physicians and Surgeons in 1943. He was an intern at Presbyterian Hospital in New York City and completed his residency at Harvard Medical Services of the Boston City Hospital in 1947.

Upon completion of his residency, Tom practised as a primary care internal medicine physician in Cambridge, Massachusetts, and also worked at the Thorndike Memorial Laboratory. He held appointments at Harvard Medical School from 1941 to 1975, and was Professor of Medicine at Tufts from 1961 to 1968. In 1968, he moved to the Washington, DC area, where he began work as head of the Research and Education Program of the Veterans Administration. Shortly afterwards, he was appointed Associate Director for Clinical Care and Director of the Clinical Center at the National Institutes of Health in Bethesda, Maryland. During this time he continued to work in an academic setting, as Professor of Medicine at George Washington School of Medicine.

In 1973, Tom began a 10-year term as President of Mount Sinai Medical Center and Dean of Mount Sinai Medical School in New York City. While continuing to promote the methodology of clinical trials there through the Clinical Trials Unit that he established, he is also credited with creating departments of Biomathematical Sciences and of Geriatrics and Adult Development. The Department of Geriatrics, founded in response to what he described as “the increasing numbers of aging people and escalating medical costs [that] are creating a crisis with all sorts of ethical, medical, and moral problems,” was one of the first of its kind at an American medical school. Under Tom’s leadership, the Mount Sinai Medical School not only became established as a leader in the study of biostatistics and geriatrics, but also became one of the highest ranked medical schools in the country.

Tom spent a 1-year sabbatical at the Harvard School of Public Health in 1983-1984 with the statistician Fred Mosteller (Petrosino 2004). He returned to Harvard in 1987 as part of the Health Care Technology Assessment Group, after completing his tenure as Chief of the Clinical Trials Unit at the Mount Sinai Medical Center. While in Boston, he also held a 5-year post as Veterans Administration (VA) Distinguished Physician at the Boston VA Medical Center at Jamaica Plain (1987-1992). After leaving Harvard in 1992 he returned to Tufts University Medical Center as Adjunct Professor of Medicine.

In the early 1990s, a business, MetaWorks, was launched in response to a contract from a pharmaceutical company to perform a meta-analysis of research on the company’s calcium channel blocker drug. Tom was approached by MetaWorks in the summer of 1992 to serve as its chairman, and remained involved with the company as a major shareholder until his death. After Tom died, the company served from 1997 to 2002 as one of the twelve Evidence-Based Practice Centers designated by the Agency for Healthcare Research and Quality.

Asking questions about the effects of inadequately tested treatments

Tom’s questioning approach to standard clinical practice began early in his clinical career. In 1951 he began his military service in Kyoto, Japan, as the principal investigator (Chalmers et al. 1955) in a randomized factorial trial (Doll 2003) to assess the effects of then standard treatments for hepatitis among soldiers in the Korean War. The study found no evidence that the prolonged bed rest that was commonly prescribed at the time promoted recovery. The detailed, 70-page report is remarkable. Years later, the clinical epidemiologist David Sackett recounted the paper’s impact on him: “Reading this paper not only changed my treatment plan for my patient. It forever changed my attitude toward conventional wisdom, uncovered my latent iconoclasm, and inaugurated my career in what I later labeled ‘clinical epidemiology’” (Sackett 2008).

After returning from Japan, Tom continued his military service as a member of the metabolic unit of the Army Medical Services Graduate School at Walter Reed Medical Center in Washington, DC. He went on to serve as Chief of Medical Services at Lemuel Shattuck Hospital in Boston, Massachusetts, from 1955 to 1968, and also continued to practise clinically and to do research in hepatology. He was obsessed with documentation and analysis, and challenged conventional clinical wisdom repeatedly.

‘Randomize the first patient’

Tom eventually left clinical practice to concentrate on refining and promoting randomized clinical trials, and on ensuring that his students, colleagues and anyone else who would listen were encouraged to think critically and to question textbooks and other sources of authority in medicine. It was during this era that he began writing and speaking about randomization as a decision-making technique (Chalmers 1967), and first proposed (Chalmers 1968) and subsequently illustrated how rigorous evaluation of new treatments could be integrated from the moment of their introduction into clinical practice by ‘randomizing the first patient’ (Grace et al. 1966; Chalmers 1975a). Tom was active and prominent in many aspects of the field of clinical trials. Indeed, his 1977 letter to the editor of the New England Journal of Medicine (Chalmers 1977), exhorting his colleagues to “randomize the first patient”, showed prescience about trial registration when he wrote: “Also, there ought to be at the very least some better method of centrally recording the sporadic individual trials now going on.” His colleagues from those days believe that Tom’s primary impact lay in his lifelong encouragement of the medical profession to insist on evidence from clinical trials, and in his advice to those performing clinical trials on how to make them better (Knatterud 1996).

Exposing deficiencies in clinical trials

Tom’s teaching emphasized the scientific and ethical strengths of randomized clinical trials (Chalmers 1975b; 1975c), but he also drew attention to the weaknesses of many randomized studies (Chalmers et al. 1983; Moskowitz et al. 1983). His interest in the statistical power of controlled trials was longstanding (Zubrod et al. 1960). In the late 1970s he led a review of the statistical power of clinical trials published in major medical journals and showed that a high proportion were much too small to confirm or exclude treatment effects of clinical importance (Freiman et al. 1978). In retrospect, Tom judged this paper to have had a net negative effect because it led some clinicians to cite it as an excuse for not embarking on randomized controlled trials to assess poorly evaluated aspects of their practice.
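To illustrate the problem documented by Freiman and colleagues, the minimal sketch below (not from Tom’s papers; it uses Python’s standard library, the usual normal approximation for a two-sided comparison of two proportions, and invented numbers) estimates how little power a small two-arm trial has to detect a clinically important difference in event rates.

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p_control, p_treatment, n_per_arm, alpha=0.05):
    """Approximate power of a two-arm trial comparing event rates,
    using the normal approximation for a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p_control + p_treatment) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se_alt = sqrt(p_control * (1 - p_control) / n_per_arm
                  + p_treatment * (1 - p_treatment) / n_per_arm)
    effect = abs(p_control - p_treatment)
    z = (effect - z_alpha * se_null) / se_alt
    return NormalDist().cdf(z)

# A hypothetical trial with 50 patients per arm, hoping to detect a
# reduction in event rate from 30% to 20%, has power of only about 0.2.
print(round(power_two_proportions(0.30, 0.20, 50), 2))
```

With 50 patients per arm the estimated power is roughly 20%, far below the conventional 80% target; this is the kind of shortfall that Freiman, Tom and their colleagues identified in a high proportion of published trials.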

Concerned by the poor quality of the design, execution, analysis and reporting of clinical trials, Tom led an effort to promote improvements using a multi-item assessment to judge trial quality (Chalmers et al. 1981). Although the checklist was complicated and its usefulness challenged by the CONSORT Group (Begg et al. 1996), this early effort, together with Tom’s key role in organizing the SORT Group, one of two groups working concurrently on reporting standards for clinical trials (SORT Group 1994), eventually led to the successful consensus reached by the CONSORT Group (Begg et al. 1996). A paper Tom co-authored with Henry Sacks and Harry Smith compared the results of studies using randomized controls with those of studies that had used historical controls (Sacks et al. 1982). This paper was a reminder of how important it is to take steps to reduce allocation biases in tests of treatments, and another published the following year analyzed bias in treatment assignment in clinical trials (Chalmers et al. 1983).

Systematic reviews, meta-analyses and reporting bias

The problem of trials with inadequate sample sizes exposed in the paper by Jenny Freiman, Tom and others (Freiman et al. 1978) was one element in a growing recognition that more statistically reliable estimates of treatment effects could be obtained by synthesizing the results of similar trials using meta-analysis, a term that had recently been introduced by Gene Glass, a social scientist in the USA (Glass 1976). From approximately 1978 to 1988, while at Mount Sinai Medical School in New York, Tom led a seminal program of research, funded by the National Library of Medicine, on ‘Technical Evaluation of the Clinical Literature’ (LM-03116), which made extensive use of meta-analysis (Baum et al. 1981; Blackburn et al. 1982). This grant also resulted in numerous publications on reporting bias (see, for example, Dickersin et al. 1987), a phenomenon to which Tom had first drawn attention in 1965 (Chalmers et al. 1965); on searching MEDLINE for clinical trials relevant to systematic reviews (Dickersin et al. 1985); and on the need for meta-analysis generally (Chalmers 1982; Chalmers et al. 1987).

In this regard, Tom helped to initiate, and contributed substantially to, the wave of meta-analyses in medicine that took off during the late 1970s and 1980s, following his initial illustrations of systematic reviews of the available data on serum hepatitis (Chalmers et al. 1965), transaminase tests in liver disease (Clermont and Chalmers 1967), portacaval shunts (Grace et al. 1966), emergency surgical treatment of bleeding ulcer (Chalmers et al. 1970), trials of Vitamin C for the common cold (Chalmers 1975d), and trials of anticoagulants in myocardial infarction (Chalmers et al. 1977). Tom and his colleagues showed how synthesizing the results of systematically collected, similar but separate studies could yield more useful information than examining study findings one by one, and provide statistically more robust estimates of treatment effects (Chalmers 1982).

Tom’s role as a lecturer at the Harvard School of Public Health during his 1983-1984 sabbatical, and his other teaching roles, are particularly noteworthy in terms of his leadership in establishing and refining meta-analysis as a necessary method for assessing treatment effects. At Harvard, he co-taught a seminar in meta-analysis with Fred Mosteller (Petrosino 2004), and initiated collaborations with Alessandro Liberati, Anne Jacquotte, Marc Buyse, and Kay Dickersin, among others. Tom was known for encouraging and publishing with students and colleagues across a wide variety of clinical specialties, one of whom later judged Tom’s greatest achievement to have been his cultivation of the careers of countless young investigators as “a most generous and humble mentor” (Lau 1996). This mentorship lives on in the Student Scholarship Program established by the Society for Clinical Trials, and in the Thomas C. Chalmers Prize awarded each year by the Cochrane Collaboration.

From then until the end of his life, Tom’s main research preoccupation was improving the methods of research synthesis and of the studies on which syntheses were based (Chalmers et al. 1987; Sacks et al. 1987; Chalmers 1988). He and his colleagues at Mount Sinai Medical School in New York and in Boston were among the most productive medical users of meta-analysis (Messer et al. 1983; Sacks et al. 1985; Himel et al. 1986; Hine et al. 1989).

Probably the most significant of Tom’s work on research synthesis was the series of retrospective cumulative meta-analyses of randomized trials (Lau et al. 1992), in particular those analysing treatments for myocardial infarction. Comparisons of these analyses with what had been written in contemporary textbooks and review articles showed that valid advice on some lifesaving treatments had been delayed for more than a decade, and that other forms of care had been promoted long after they had been shown to be harmful (Antman et al. 1992; Rennie and Chalmers I 2009). This report made it abundantly clear that the failure of researchers to prepare reviews of therapeutic research systematically could have very real human costs (Chalmers I et al. 1992).
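As a rough illustration of what a cumulative meta-analysis does, the sketch below (a simplified example rather than the method as implemented by Tom’s group: it uses fixed-effect, inverse-variance pooling of log odds ratios and entirely invented trial data) adds trials one at a time in chronological order and reports the pooled estimate after each addition, making it possible to see, in retrospect, when the accumulating evidence became convincing.

```python
from math import exp, log, sqrt

# Hypothetical trials: (year, events and total in treated arm, events and total in control arm).
# The numbers are invented purely for illustration.
trials = [
    (1972, 5, 100, 10, 100),
    (1975, 8, 150, 15, 150),
    (1980, 20, 400, 35, 400),
    (1986, 60, 1500, 90, 1500),
]

def cumulative_meta_analysis(trials):
    """Pool log odds ratios with fixed-effect, inverse-variance weights,
    adding one trial at a time in chronological order."""
    sum_weights = 0.0
    sum_weighted_effects = 0.0
    for year, d_t, n_t, d_c, n_c in sorted(trials):
        log_or = log((d_t * (n_c - d_c)) / (d_c * (n_t - d_t)))
        var = 1 / d_t + 1 / (n_t - d_t) + 1 / d_c + 1 / (n_c - d_c)
        sum_weights += 1 / var
        sum_weighted_effects += log_or / var
        pooled = sum_weighted_effects / sum_weights
        halfwidth = 1.96 / sqrt(sum_weights)
        print(f"{year}: pooled OR {exp(pooled):.2f} "
              f"(95% CI {exp(pooled - halfwidth):.2f} to {exp(pooled + halfwidth):.2f})")

cumulative_meta_analysis(trials)
```

Displays of this kind were what allowed Tom and his colleagues to show that the pooled evidence favouring some treatments for myocardial infarction had become convincing years before textbooks and review articles recommended them (Antman et al. 1992).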

Later years

Tom loved traveling and attended many national and international meetings and events. Around mid-1993 he developed edema in one leg, which turned out to be due to metastatic prostate cancer. Up to the week he died of the disease in 1995, he continued to juggle the final stages of multiple projects while scheming about the next question to tackle. For example, in an interview in 1993 (two years before Tom’s death), Malcolm Maclure (1996a; 1996b) asked whether he thought the first course in medical school should be ‘the Anatomy of Evidence’. Tom replied as follows:

I have become convinced that this is the last crusade I am going on: the first two years of medical school have got to be changed. Students are spending more and more time understanding the difficult aspects of molecular biology, but we are kidding ourselves to think they use their knowledge of DNA – in my day it was the Krebs cycle – in making clinical decisions at the bedside. They do not. They make clinical decisions based on how the last patient did, how their friends are treating patients and what the latest article by an authority says they should do. And we have got repeated evidence now that authorities are way behind with regard to the data in clinical trials [citing one of his most influential papers (Antman et al. 1992) in support of this assertion].

Tom’s obsession with testing treatments was reflected in his personal and family life. He randomized his route to work through the city of Boston to identify the quickest average journey time. He based the timing of when to drink decaffeinated rather than caffeinated coffee on blinded randomized studies done with Frankie Talcott, his wife of 53 years, and chose his wine after blinded tastings. His four children and six grandchildren have all been left with a keen sense of the importance of seeking reliable evidence. The blinded wine tastings continue to this day, and the cheaper wines continue to do very well! So when the time came to decide on a treatment plan for Tom’s metastatic prostate cancer, it came as no surprise that he insisted on being treated within a randomized trial.

Tom Chalmers’ papers have been archived by the National Library of Medicine’s History of Medicine Division.

Laura Bothwell’s blog discusses Tom Chalmers’ role in the advent of randomized controlled trials (RCTs) in medical research: Thomas C. Chalmers: clinical research pragmatist.

Acknowledgements

The authors are grateful to Marc Buyse, Joseph Cappelleri, Harvey Himel, Anne Jacquotte, Bruce Kupelnick, Joseph Lau, Malcolm Maclure, and Libby Wright for comments on an earlier draft of this article.

This James Lind Library article has been republished in the Journal of the Royal Society of Medicine 2015;108:237-241.

References

Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC (1992). A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. JAMA 268:240-48.

Baum ML, Anish DS, Chalmers TC, Sacks HS, Smith H, Fagerstrom RM (1981). A survey of clinical trials of antibiotic prophylaxis in colon surgery: Evidence against further use of no-treatment controls. N Engl J Med 305:795-799.

Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, Pitkin R, Rennie D, Schulz KF, Simel D, Stroup DF (1996). Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA 276:637-639.

Blackburn BA, Smith H, Chalmers TC (1982). The inadequate evidence for short hospital stay after hernia or varicose vein stripping surgery. Mt Sinai J Med (NY) 49:383-390.

Chalmers I, Dickersin K, Chalmers TC (1992). Getting to grips with Archie Cochrane’s agenda: a register of all randomized controlled trials. BMJ 305:786-788.

Chalmers TC (1967). The ethics of randomization as a decision-making technique and the problem of informed consent. Report of 14th conference of cardiovascular training grant xerogram directors, June 3-4, 1967, Bethesda MD: National Heart Institute, pp 87-93. Reprinted in: Contemporary Issues in Bioethics. Beauchamp TL, Walters L eds (1978). Encino CA: Dickenson Publishing Co., pp 426-9.

Chalmers TC (1968). When should randomization begin? Lancet 1:858.

Chalmers TC, Sebestyen CS, Lee S (1970). Emergency surgical treatment of bleeding peptic ulcer: An analysis of the published data on 21,130 patients. Trans Am Clin Climatol Assoc 82: 188-99.

Chalmers TC (1975a). Randomization of the first patient. Med Clin North Am 59:1035-8.

Chalmers TC (1975b). Ethical aspects of clinical trials. Am J Ophthalmol 79:2-7.

Chalmers TC (1975c). Randomized controlled clinical trial. National Eye Institute Workshop for Ophthalmologists. Ethical aspects of clinical trials. Am J Ophthalmol 79:753-8.

Chalmers TC (1975d). Effects of ascorbic acid on the common cold. An evaluation of the evidence. Amer J Med 58:532-6.

Chalmers TC (1977). Randomize the first patient. N Engl J Med 296:107.

Chalmers TC (1982). Combinations of data from randomized control trials. Discussion. Proceedings of Current Topics in Biostatistics and Epidemiology. A memorial symposium in honor of Jerome Cornfield. Biometrics 38:250-3.

Chalmers TC, ed (1988). Data analysis for clinical medicine: the quantitative approach to patient care in gastroenterology.  New York: International University Press.

Chalmers TC, Eckhardt RD, Reynolds WE, Cigarroa JG, Deane N, Reifenstein RW, Smith CW, Davidson CS (1955). The treatment of acute infectious hepatitis. Controlled studies of the effects of diet, rest, and physical reconditioning on the acute course of the disease and on the incidence of relapses and residual abnormalities. Journal of Clinical Investigation 34:1163-1235.

Chalmers TC, Koff RS, Grady GF (1965). A note on fatality in serum hepatitis. Gastroenterology 49:22-6.

Chalmers TC, Matta RJ, Smith H, Kunzler A-M (1977). Evidence favoring the use of anticoagulants in the hospital phase of acute myocardial infarction. N Engl J Med 297:1091-96.

Chalmers TC, Smith H, Blackburn B, Silverman B, Schroeder B, Reitman D, Ambroz A (1981). A method for assessing the quality of a randomized control trial. Controlled Clin Trials 2:31-49.

Chalmers TC, Celano P, Sacks HS, Smith H (1983). Bias in treatment assignment in controlled clinical trials. N Engl J Med 309:1358-61.

Chalmers TC, Berrier J, Sacks HS, Levin H, Reitman D, Nagalingam R (1987). Meta-analysis of clinical trials as a scientific discipline. II. Replicate variability and comparison of studies that agree and disagree. Stat Med 6:733-44.

Clermont RJ, Chalmers TC (1967). The transaminase tests in liver disease. Medicine 46: 197-207.

Dickersin K, Hewitt P, Mutch L, Chalmers I, Chalmers TC (1985). Perusing the literature: comparison of MEDLINE searching with a perinatal trials database. Controlled Clinical Trials 6:306-317.

Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H (1987). Publication bias and clinical trials. Controlled Clin Trials 8:343-53.

Doll R (2003). Controlled trials testing two or more treatments simultaneously. JLL Bulletin: Commentaries on the history of treatment evaluation (https://www.jameslindlibrary.org/articles/controlled-trials-testing-two-or-more-treatments-simultaneously/).

Freiman J, Chalmers TC, Smith H Jr, Kuebler RR (1978). The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial. N Engl J Med 299:690-4.

Glass G (1976). Primary, secondary and meta-analysis of research. Educational Researcher 10: 3-8.

Grace ND, Muench H, Chalmers TC (1966). The present status of shunts for portal hypertension in cirrhosis. Gastroenterology 50:684-691.

Greenhouse SW (1996). Tributes to Dr. Thomas C. Chalmers. Controlled Clinical Trials 17:474-5.

Himel HN, Liberati A, Gelber RD, Chalmers TC (1986). Adjuvant chemotherapy for breast cancer: A pooled estimate based on results from published randomized control trials. JAMA 256:1148-1159.

Hine LK, Laird N, Hewitt P, Chalmers TC (1989). Meta-analytic evidence against prophylactic use of lidocaine in acute myocardial infarction. Archives of Internal Medicine 149:2694-8.

Knatterud G (1996). Tributes to Dr. Thomas C. Chalmers. Controlled Clinical Trials. 17:473-4.

Lau J (1996). Tribute to Thomas Chalmers. Annals of Internal Medicine 124:696.

Lau J, Antman EM, Jimenez-Silva J, Mosteller F, Chalmers TC (1992). Cumulative meta-analysis of therapeutic trials for myocardial infarction. N Engl J Med 327:248-54.

Maclure M (1996a). Dr. Tom Chalmers, 1917-1995: the trials of a randomizer. Canadian Medical Association Journal 155:757-760.

Maclure M (1996b). Dr. Tom Chalmers, 1917-1995: the trials of a randomizer. Canadian Medical Association Journal 155:986-988.

Messer J, Reitman D, Sacks HS, Smith H, Chalmers TC (1983). Association of adrenocorticosteroid therapy and peptic-ulcer disease. N Engl J Med 309:21-24.

Moskowitz G, Chalmers TC, Sacks HS, Fagerstrom RM, Smith H Jr (1983). Deficiencies of clinical trials of alcohol withdrawal. Alcoholism 7:42-46.

Petrosino A (2004). Charles Frederick [Fred] Mosteller (1916-2006). JLL Bulletin: Commentaries on the history of treatment evaluation (https://www.jameslindlibrary.org/articles/charles-frederick-fred-mosteller-1916-2006/).

Rennie D, Chalmers I (2009). Exposing the dangers to patients of medical reviews and textbooks that ignore scientific principles. JLL Bulletin: Commentaries on the history of treatment evaluation (https://www.jameslindlibrary.org/articles/exposing-the-dangers-to-patients-of-medical-reviews-and-textbooks-that-ignore-scientific-principles/)

Sackett D (2008). A 1955 clinical trial report that changed my career. JLL Bulletin: Commentaries on the history of treatment evaluation (https://www.jameslindlibrary.org/articles/a-1955-clinical-trial-report-that-changed-my-career/)

Sacks H, Chalmers TC, Smith H (1982). Randomized versus historical controls for clinical trials. Am J Med 72:233-40.

Sacks HS, Chalmers TC, Berk AA, Reitman D (1985). Should mild hypertension be treated? An attempted meta-analysis of the clinical trials. Mt Sinai J Med 52:265-270.

Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC (1987). Meta-analysis of randomized controlled trials. N Engl J Med 316:450-455.

Standards of Reporting Trials (SORT) Group (1994). A proposal for structured reporting of randomized controlled trials. JAMA 272:1926-1931.

Zubrod CG, Schneiderman M, Frei E, Brindley C, Gold GL, Shnider B, Oviedo R, Gorman J, Jones R, Jonsson U, Colsky J, Chalmers T, Ferguson B, Dederick M, Holland J, Selawry O, Regelson W, Lasagna L, Owens AH (1960). Appraisal of methods for the study of chemotherapy in man: Comparative therapeutic trial of nitrogen mustard and thiophosphoramide. Journal of Chronic Diseases 11:7-33.

View the complete bibliography for Thomas C Chalmers.