2.5 Bias introduced after looking at study results

Biases can be introduced when knowledge of the results of studies influences analysis and reporting decisions, for example, when studies stop earlier than planned or when there is bias in the selection of the treatment outcomes analysed.

Bias results from processes that tend to produce information that departs systematically from the truth. Avoiding bias is relevant when analysing the results of studies statistically. Analysis biases may be introduced during the design of studies, when decisions about which analyses to do might lead to the favouring of one of the treatments compared with another. These might include decisions about how to deal with data for participants who do not adhere to their allocated intervention, how to analyse data from those who experience other outcomes before the main outcome for the study, or how particular outcomes are defined, counted and combined in the analyses. These design biases are akin to those that can arise when the choice of the comparator to test in the study has been biased so that the eventual results will be unduly favourable to the newer treatment.

Things can get much worse after study results have been inspected. Changes might then be made to how the analyses will be done or reported, with foreknowledge of how these changes will favour one or other of the treatments compared. If these changes occur between the collection of the study data and their eventual reporting, the reader of the published results might be misled, especially if the changes are not clearly described and explained.

Biased analyses before the planned end of a study

Biases after looking at study results can occur both after formal statistical analyses and through more informal routes. For example, if the researchers are collecting or observing outcome data because they are providing the treatments for participants in the study, they may get a sense of the accumulating results, such as which patients are doing particularly well or badly. This might lead them to alter the planned analyses, for instance by changing what they feel is the “most important” outcome, choosing an earlier time point as the main one to emphasise, or dividing the data in different ways in subgroup analyses. One way to avoid this is by keeping the researchers and the practitioners blind (masked) to the treatment allocated to each participant.

When study results are being analysed more formally, different problems can arise. Such biases might occur before or after the study has reached its intended completion.

During a study, accumulating results might be examined to see if there is clear evidence of benefit or harm for one intervention, which might make it unethical to continue the study. On the other hand, it may become clear that the effect that was hoped for is not achievable in the study and that it would be better to stop the study for futility rather than to continue to recruit participants to a study that will use resources but will not resolve the initial uncertainties. These early stopping decisions can lead to bias when the interim results happen to be high or low simply by chance, especially if there is a vested interest in closing the study and turning these interim results into its final results (Trotta 2008).
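The way that chance highs at an interim look can turn into exaggerated final results can be illustrated with a small simulation. The sketch below is purely hypothetical: the true effect, sample sizes and stopping threshold are assumptions chosen for illustration, not taken from any real trial. It repeatedly simulates a two-arm trial with a modest true benefit, stops at a halfway look whenever the interim estimate happens to cross a conventional significance threshold, and compares the average effect reported by the trials that stopped early with the truth.

```python
import random
import statistics

# Hypothetical numbers throughout -- an illustrative sketch, not a model of any real trial.
TRUE_EFFECT = 0.2   # assumed true mean difference favouring the new treatment
SD = 1.0            # assumed standard deviation of the outcome
N_PER_ARM = 200     # planned number of participants per arm
INTERIM_Z = 1.96    # nominal threshold applied at the single halfway look

def simulate_trial():
    """Return (estimate at the halfway look, estimate at the full planned size)."""
    treated = [random.gauss(TRUE_EFFECT, SD) for _ in range(N_PER_ARM)]
    control = [random.gauss(0.0, SD) for _ in range(N_PER_ARM)]
    half = N_PER_ARM // 2
    interim = statistics.mean(treated[:half]) - statistics.mean(control[:half])
    final = statistics.mean(treated) - statistics.mean(control)
    return interim, final

random.seed(1)
se_interim = SD * (2 / (N_PER_ARM // 2)) ** 0.5
stopped_early, ran_to_end = [], []
for _ in range(5000):
    interim, final = simulate_trial()
    if interim / se_interim > INTERIM_Z:
        stopped_early.append(interim)   # the interim result becomes the reported result
    else:
        ran_to_end.append(final)        # the trial continues to its planned size

print(f"true effect:                    {TRUE_EFFECT:.2f}")
print(f"average effect, stopped early:  {statistics.mean(stopped_early):.2f}")
print(f"average effect, ran to the end: {statistics.mean(ran_to_end):.2f}")
```

In this sketch, the trials that stop early report, on average, a larger effect than the truth, even though every simulated trial studied the same underlying treatment; the fewer the participants or events available at the interim look, the larger this exaggeration tends to be.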

One way to avoid biases that might arise if the researchers themselves are responsible for these interim decisions is to have an independent Data Monitoring Committee consider the accumulating results. The committee can agree guidelines for deciding when to make interim analyses available to an oversight group for the study, such as a Trial Steering Committee (Grant 2005).

Sometimes, interim results may be presented more publicly, to allow practitioners and potential participants in the trial to make up their own minds about whether or not to continue with the study. This happened with the preliminary results of the ISIS-2 trial of aspirin and streptokinase for people having a heart attack (myocardial infarction): the Trial Steering Committee published a half-page interim report showing benefits reported to them the previous month. These results showed a reduction in the risk of death in the short term among patients who had received streptokinase within 4 hours of experiencing symptoms of heart attack (ISIS-2 1987). Despite this information, some insufficiently persuaded clinicians continued to recruit patients to the trial within this time window, as well as other patients who had presented more than 4 hours after their symptoms had begun (ISIS-2 1988).

Biased analyses after the planned end of a study

At the end of a study, changes to the analyses after looking at the results can lead to bias through:

  • changes in the designated primary outcome, or in how outcomes are defined or combined in composite outcomes;
  • introduction or modification of subgroup analyses, in which different groups of participants are analysed separately, perhaps to highlight the presence or absence of benefit in certain types of person or setting. In addition to the problems of bias in these analyses, chance might mean that the findings are not a reliable guide to the truth (Counsell 1994, Clarke 2001);
  • selective reporting of particular outcomes, analyses or treatment comparisons. For example, in a study comparing three treatments, there are seven different ways in which the treatments might be compared (one way of counting these is sketched after this list). This gives researchers opportunities to highlight some comparisons over others, based purely on their results; and
  • changes to the statistical techniques, such as the introduction of adjustments for differences in the baseline characteristics of the participants when such adjustments had not been pre-planned or pre-specified.
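To make the three-treatment example concrete, here is one plausible way of counting the seven possible comparisons. The essay does not spell out the enumeration, so this particular breakdown (three pairwise comparisons, three of one treatment against the other two combined, and one comparison of all three arms at once) is an assumption offered for illustration.

```python
from itertools import combinations

# Hypothetical three-arm trial with treatments A, B and C.
treatments = ["A", "B", "C"]

# Three pairwise comparisons: A vs B, A vs C, B vs C.
pairwise = [f"{a} vs {b}" for a, b in combinations(treatments, 2)]

# Three comparisons of one treatment against the other two combined.
one_vs_rest = [
    f"{t} vs {' + '.join(other for other in treatments if other != t)}"
    for t in treatments
]

# One overall comparison of all three arms at once.
overall = [" vs ".join(treatments)]

comparisons = pairwise + one_vs_rest + overall
for comparison in comparisons:
    print(comparison)
print(f"{len(comparisons)} possible comparisons")
```

However the seven are counted, the point is the same: each additional possible comparison is an additional opportunity to report only the one that happens to look most favourable.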

The potential impact of some of these biases has been studied, and some of these studies have themselves been considered in systematic reviews. For example, systematic reviews by Kerry Dwan and colleagues have brought together information on how the methods used in the analyses and reporting of randomised trials changed between the design phase of the trial and the publication of its results.

In their most recent review, they found 22 studies (containing more than 3000 randomised participants) published between 2000 and 2013 that identified discrepancies in statistical analyses (8 studies), composite outcomes (1), the handling of missing data (3), unadjusted versus adjusted analyses (3), handling of continuous data (3) and subgroup analyses (12). They concluded that discrepancies in analyses between publications and other study documentation were common, but not discussed in the trial reports (Dwan 2014). In their systematic reviews of studies of selective reporting, comparisons of trial publications with protocols showed that 40-62% of studies had at least one primary outcome that was changed, introduced, or omitted (Dwan 2011; Dwan 2013).

In systematic reviews of the impact of early stopping, Montori and colleagues in 2005 and Bassler and colleagues in 2010 showed how early stopping might bias conclusions about the effects of treatments. The Montori review included 143 randomised trials stopped early for benefit; 92 of these had been published in 5 high-impact, influential medical journals and, on average, the trials had recruited about two-thirds of their planned sample size. Montori et al concluded that randomised trials stopped early for benefit were becoming more common, often failed to report adequately the relevant information about the decision to stop early, and showed implausibly large treatment effects, particularly when the number of events was small. They wrote that “clinicians should view the results of such trials with scepticism” (Montori 2005). Five years later, Bassler and colleagues compared 91 randomised trials that were stopped early with 424 matched trials that were not, finding a pooled ratio of relative risks of 0.71 (95% confidence interval, 0.65-0.77). This showed that the effect estimates in the trials that stopped early were, on average, more favourable to the treatments than those from similar trials that did not stop early: for illustration, if the trials that ran to completion estimated a relative risk of 0.80, comparable trials stopped early would, on average, have reported one of about 0.57 (0.71 × 0.80) (Bassler 2010).

If users of study reports are to have confidence in them, they need to be reassured that bias was not introduced into the results after the early results had been seen. Although the aforementioned reviews show that protocols are no guarantee against this, access to a protocol or a study’s statistical analysis plan might identify any changes that were made; and, since 2013, guidance on the structured reporting of protocols has been available from the SPIRIT group (Chan 2013). In relation to the choice of outcomes to analyse and report, those designing studies should consider the use of core outcome sets as the minimum that they should measure, analyse and report in all trials in a particular condition. Work by the COMET initiative has already identified 200 such outcome sets (Gargon 2014), which are now available through the COMET database www.cometinitiative.org/studies/search.

It is tempting for people to change their views on what is important about a study after they have knowledge of its results. Such biases need to be avoided by careful planning of which analyses will be done, and by clear explanation of any changes that were made to those plans and the reasons for them.


The text in these essays may be copied and used for non-commercial purposes on condition that explicit acknowledgement is made to The James Lind Library (www.jameslindlibrary.org).