
Estimates will be unbiased, meaning that if the study were to be repeated many times according to the same protocol, the average estimate would equal the true treatment effect. These are by no means the only relevant criteria for assessing the performance of a trial design. Other metrics include, for example, the accuracy of estimation. ADs usually perform considerably better than non-ADs in terms of these other criteria, which are also of more direct interest to patients.

The analysis of an AD trial often involves combining data from different stages, which can be done, for example, with combination test methods such as the inverse normal approach. It is still possible to compute the estimated treatment effect, its CI and a p value. If, however, these quantities are naively computed using the same methods as in a fixed-design trial, then they often lack the desirable properties mentioned above, depending on the nature of the adaptations employed [ 72 ].
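To make the combination step concrete, here is a minimal sketch of one widely used option, the inverse normal combination test with pre-specified weights; the choice of method, the stage-wise p values and the sample sizes are illustrative assumptions rather than anything prescribed by this paper.

```python
from math import sqrt
from scipy.stats import norm

# Illustrative one-sided p values from the two stages (assumed numbers)
p1, p2 = 0.09, 0.03

# Pre-specified weights, here proportional to the square root of each stage's planned size
n1, n2 = 50, 100
w1, w2 = sqrt(n1 / (n1 + n2)), sqrt(n2 / (n1 + n2))  # so that w1**2 + w2**2 == 1

# Inverse normal combination: convert p values to z scores, then combine with fixed weights
z_comb = w1 * norm.isf(p1) + w2 * norm.isf(p2)
p_comb = norm.sf(z_comb)  # combined one-sided p value

print(f"combined z = {z_comb:.3f}, combined p = {p_comb:.4f}")
```

Because the weights are fixed before the trial starts, the combined statistic retains a standard normal distribution under the null hypothesis even if the second stage was modified after the interim look, which is one reason combination approaches are popular for analysing AD trials.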

This is because the statistical distribution of the estimated treatment effect can be affected, sometimes strongly, by an AD [ 73 ]. The CI and p value usually depend on the treatment effect estimate and are, thus, also affected. As an example, consider a two-stage adaptive RCT that can stop early if the experimental treatment is doing poorly against the control at an interim analysis, based on a pre-specified stopping rule applied to data from patients assessed during the first stage.

If the trial is not stopped early, the final estimated treatment effect calculated from all first- and second-stage patient data will be biased upwards. This is because the trial will stop early for futility at the first stage whenever the experimental treatment is—simply by chance—performing worse than average, and no additional second-stage data will be collected that could counterbalance this effect via regression to the mean. The bottom line is that random lows are eliminated by the stopping rule but random highs are not, thus, biasing the treatment effect estimate upwards.
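The following simulation sketch illustrates this conditional bias numerically; the per-arm sample sizes, the futility rule (stop if the interim z statistic is below zero) and the null scenario are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n1, n2 = 50, 50        # per-arm sample sizes for stages 1 and 2 (assumed)
true_delta = 0.0       # true treatment effect: null scenario
futility_z = 0.0       # stop for futility if the interim z statistic falls below this (assumed rule)
n_sim = 100_000

final_estimates = []
for _ in range(n_sim):
    trt1, ctl1 = rng.normal(true_delta, 1, n1), rng.normal(0.0, 1, n1)
    z1 = (trt1.mean() - ctl1.mean()) / np.sqrt(2 / n1)
    if z1 < futility_z:
        continue  # stopped early for futility: no final estimate is produced
    trt2, ctl2 = rng.normal(true_delta, 1, n2), rng.normal(0.0, 1, n2)
    # Naive estimate pooling all first- and second-stage data, as in a fixed design
    est = np.r_[trt1, trt2].mean() - np.r_[ctl1, ctl2].mean()
    final_estimates.append(est)

print(f"true effect: {true_delta}, mean naive estimate in completed trials: "
      f"{np.mean(final_estimates):.3f}")
```

Averaged over the trials that complete both stages, the naive estimate comes out clearly above the true value of zero, which is exactly the upward conditional bias described above.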

Fig.: Illustration of bias introduced by early stopping for futility, based on 20 simulated two-arm trials with no true treatment effect. The trajectories of the test statistics (a standardised measure of the difference between treatments) are subject to random fluctuation. Two trials (red) are stopped early because their test statistics fall below a pre-defined futility boundary (blue cross) at the interim analysis. Allowing trials with random highs at the interim to continue while terminating trials with random lows early leads to an upward bias of the average treatment effect.

This phenomenon occurs for a wide variety of ADs, especially when first-stage efficacy data are used to make adaptations such as discontinuing arms. Therefore, we provide several solutions that lead to sensible treatment effect estimates, CIs and p values from AD trials.

When stopping rules for an AD are clearly specified (as they should be), a variety of techniques are available to improve the estimation of treatment effects over naive estimators, especially for group-sequential designs.

One approach is to derive an unbiased estimator [ 74 – 77 ]. Though unbiased, such estimators will generally have a larger variance and, thus, be less precise than other estimators. A second approach is to use an estimator that reduces the bias compared with the methods used for fixed-design trials but does not necessarily eliminate it completely.

Examples of this are the bias-corrected maximum likelihood estimator [ 78 ] and the median unbiased estimator [ 79 ]. Another alternative is to use shrinkage approaches for trials with multiple treatment arms [ 36 , 80 , 81 ]. In general, such estimators substantially reduce the bias compared to the naive estimator. Although they are not usually statistically unbiased, they have lower variance than the unbiased estimators [ 74 , 82 ].

In trials with time-to-event outcomes, continuing follow-up until the planned end of the trial can markedly reduce the bias in treatment arms discontinued at an interim analysis [ 83 ]. An improved estimator of the treatment effect is not yet available for all ADs. In such cases, one may empirically adjust the treatment effect estimator via bootstrapping [ 84 ], i.e. by repeatedly simulating trials run according to the same design and adaptation rules in order to estimate, and then subtract, the bias of the naive estimator. Simulations can then be used to assess the properties of this bootstrap estimator. The disadvantage of bootstrapping is that it may require a lot of computing power, especially for more complex ADs.
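The following sketch illustrates the general bootstrap idea for the simple two-stage futility design used earlier; it is a simplified illustration of the principle, not the specific algorithm of reference [ 84 ], and all numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_trial(delta, n1=50, n2=50, futility_z=0.0):
    """Simulate one two-stage trial with a futility stop; return the naive
    pooled estimate, or None if the trial stopped at the interim."""
    trt1, ctl1 = rng.normal(delta, 1, n1), rng.normal(0.0, 1, n1)
    z1 = (trt1.mean() - ctl1.mean()) / np.sqrt(2 / n1)
    if z1 < futility_z:
        return None
    trt2, ctl2 = rng.normal(delta, 1, n2), rng.normal(0.0, 1, n2)
    return np.r_[trt1, trt2].mean() - np.r_[ctl1, ctl2].mean()

def bootstrap_adjust(observed_est, n_boot=5_000):
    """Plug the observed estimate in as the 'true' effect, re-simulate the same
    design many times, estimate the bias of the naive estimator among trials
    that complete both stages, and subtract it from the observed estimate."""
    sims = [run_trial(observed_est) for _ in range(n_boot)]
    sims = [s for s in sims if s is not None]
    bias_hat = np.mean(sims) - observed_est
    return observed_est - bias_hat

# Example: a completed trial whose naive estimate was 0.25 (illustrative value)
print(f"bias-adjusted estimate: {bootstrap_adjust(0.25):.3f}")
```

Simulating many trials analysed with this adjusted estimator, rather than a single observed value, is then the natural way to check how much bias and variance it actually has for the design at hand.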

For some ADs, there are CIs that have the correct coverage level taking into account the design used [ 18 , 19 , 85 , 86 ], including simple repeated CIs [ 87 ]. If a particular AD does not have a method that can be readily applied, then it is advisable to carry out simulations at the design stage to see whether the coverage of naively computed CIs deviates considerably from the planned level. If it does, a bootstrap procedure can be applied for a wide range of designs, provided this is not too computationally demanding.

A p value is often presented alongside the treatment effect estimate and CI, as it helps to summarise the level of evidence against the null hypothesis. Computing a p value requires a rule for ordering the possible trial outcomes from least to most extreme; in a fixed-design trial, this ordering is simply given by the magnitude of the test statistic.


However, in an AD that allows early stopping for futility or efficacy, it is necessary to distinguish between the different ways in which the null hypothesis might be rejected [ 73 ]. There are several ways that data from an AD may be ordered, and the resulting p value (and also the CI) may depend on which ordering is used. Thus, it is essential to pre-specify which method will be used and to give some consideration to the sensitivity of the results to this choice. The total probability of rejecting the null hypothesis (the type I error rate) is an important quantity in clinical trials, especially for phase III trials, where a type I error may mean an ineffective or harmful treatment will be used in practice.

In some ADs, a single null hypothesis is tested but the actual type I error rate differs from the planned level specified before the trial unless a correction is performed. As an example, if unblinded data (with knowledge or use of treatment allocation, such that the interim treatment effect can be inferred) are used to adjust the sample size at the interim, then the inflation of the planned type I error rate can be substantial and needs to be accounted for [ 16 , 34 , 35 , 88 ].
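A small simulation sketch can illustrate this inflation; the sample sizes and the 'promising zone' rule below are assumptions chosen for illustration, and the final analysis deliberately uses the naive fixed-design test without any correction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_sim = 500_000
n1, n2_planned, n2_boosted = 50, 50, 200     # per-arm sample sizes (assumed)
crit = norm.isf(0.025)                       # one-sided 2.5% critical value

# Stage-wise z statistics under the null hypothesis of no treatment effect
z1 = rng.standard_normal(n_sim)
z2 = rng.standard_normal(n_sim)

# Unblinded rule (assumed): quadruple the second stage if the interim result looks 'promising'
n2 = np.where((z1 > 0.0) & (z1 < 1.5), n2_boosted, n2_planned)
n_total = n1 + n2

# Naive final statistic that pools all data as if the second-stage size had been fixed
z_final = np.sqrt(n1 / n_total) * z1 + np.sqrt(n2 / n_total) * z2

print(f"empirical type I error: {np.mean(z_final > crit):.4f} (nominal 0.025)")
```

With this particular rule the empirical type I error comes out above the nominal 2.5%, and more aggressive effect-driven rules can inflate it further; pre-specified adjusted analyses, such as weighted combination tests, restore control.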

On the other hand, blinded sample size re-estimation (done without knowledge or use of treatment allocation) usually has a negligible impact on the type I error rate and inference when performed with a relatively large sample size, but inflation can still occur [ 89 , 90 ]. In some ADs, multiple hypotheses are tested, e.g. when several experimental arms are compared against a common control. In any AD or non-AD trial, the more often null hypotheses are tested, the higher the chance that one will be incorrectly rejected, so an adjustment for multiple testing may be needed to control the overall type I error rate. This can sometimes be done with relatively simple methods [ 95 ]; however, it may not be possible for all multiple testing procedures to derive corresponding useful CIs.
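As a concrete illustration of why such an adjustment matters, the following sketch simulates several experimental arms sharing one control under the global null and compares the familywise error of unadjusted testing with a simple Bonferroni correction; Bonferroni is used here purely for illustration and is not necessarily the procedure of reference [ 95 ].

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n_sim, n_arms, n_per_arm = 200_000, 3, 100   # assumed numbers for illustration
alpha = 0.025                                # one-sided level per comparison

# Arm means under the global null (all true effects zero), with one shared control
control = rng.normal(0.0, 1 / np.sqrt(n_per_arm), n_sim)
treatments = rng.normal(0.0, 1 / np.sqrt(n_per_arm), (n_arms, n_sim))
z = (treatments - control) / np.sqrt(2 / n_per_arm)

# Probability of at least one false rejection across the three comparisons
unadjusted = np.mean((z > norm.isf(alpha)).any(axis=0))
bonferroni = np.mean((z > norm.isf(alpha / n_arms)).any(axis=0))
print(f"familywise type I error, unadjusted: {unadjusted:.3f}; Bonferroni: {bonferroni:.3f}")
```

Sharper procedures, such as Dunnett-type tests that exploit the correlation induced by the shared control, are less conservative than Bonferroni while still controlling the familywise error rate.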

In a multi-arm multi-stage (MAMS) setting, adjustment is viewed as being particularly important when the trial is confirmatory and when the research arms are different doses or regimens of the same treatment, whereas in some other cases it might not be considered essential, e.g. when the arms involve distinct treatments that could have been investigated in separate trials. When deciding whether to adjust for multiplicity, it may help to consider what adjustment would have been required had the same comparisons been conducted as separate two-arm trials.

Regulatory guidance is commonly interpreted as encouraging strict adjustment for multiple testing within a single trial [ 97 – 99 ]. While this paper focuses on frequentist (classical) statistical methods for trial design and analysis, there is also a wealth of Bayesian AD methods [ ] that are increasingly being applied in clinical research [ 23 ]. Bayesian statistics and adaptivity go very well together [ 4 ]. For instance, taking multiple looks at the data is statistically unproblematic, as it does not have to be adjusted for separately in a Bayesian framework. Although Bayesian statistics is by nature not concerned with type I error rate control or p values, it is common to evaluate and report the frequentist operating characteristics of Bayesian designs, such as power and type I error rate [ — ].
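As a hedged sketch of what such an evaluation can look like, the code below simulates a deliberately simple (non-adaptive) two-arm Bayesian design with a posterior-probability decision rule and estimates its frequentist type I error and power; the priors, sample size, response rates and decision threshold are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def trial_success(p_ctl, p_trt, n=60, threshold=0.975, n_post=4_000):
    """Simulate one two-arm binary trial and apply a simple Bayesian rule:
    declare success if Pr(p_trt > p_ctl | data) > threshold, using
    independent Beta(1, 1) priors for the two response probabilities."""
    x_ctl = rng.binomial(n, p_ctl)
    x_trt = rng.binomial(n, p_trt)
    post_ctl = rng.beta(1 + x_ctl, 1 + n - x_ctl, n_post)
    post_trt = rng.beta(1 + x_trt, 1 + n - x_trt, n_post)
    return np.mean(post_trt > post_ctl) > threshold

def success_rate(p_ctl, p_trt, n_sim=2_000):
    """Frequentist operating characteristic: how often the rule declares success."""
    return np.mean([trial_success(p_ctl, p_trt) for _ in range(n_sim)])

print(f"type I error (0.3 vs 0.3): {success_rate(0.3, 0.3):.3f}")
print(f"power        (0.3 vs 0.5): {success_rate(0.3, 0.5):.3f}")
```

The same simulation approach extends directly to adaptive Bayesian designs: each simulated trial simply applies the planned interim decision rules as well, and the long-run proportions of correct and incorrect success declarations are reported as the design's operating characteristics.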

Moreover, there are some hybrid AD methods that blend frequentist and Bayesian aspects [ — ]. Besides these statistical issues, the interpretability of results may also be affected by the way triallists conduct an AD trial, in particular with respect to mid-trial data analyses. Using interim data to modify study aspects may raise anxiety among some research stakeholders because of the potential introduction of operational bias.


Knowledge, leakage or mere speculation about interim results could alter the behaviour of those involved in the trial, including investigators, patients and the scientific community [ , ]. Hence, it is vital to describe the processes and procedures put in place to minimise potential operational bias, and triallists, as well as consumers of trial reports, should give these issues due consideration. The importance of confidentiality and models for monitoring AD trials have been discussed [ 46 , ].

Inconsistencies in the conduct of the trial across different stages can introduce heterogeneity into the trial. As an example, modifications of the eligibility criteria might lead to a shift in the patient population over time, and results may then depend on whether patients were recruited before or after the interim analysis.


Consequently, the ability to combine results across independent interim stages to assess the overall treatment effect becomes questionable. Heterogeneity between the stages of an AD trial could also arise when the trial begins recruiting from a limited number of sites in a limited number of countries, which may not be representative of all the sites that will be used once recruitment is up and running [ 55 ].

Difficulties faced in interpreting research findings in the presence of heterogeneity across interim stages have been discussed in detail [ — ]. Although it is hard to distinguish heterogeneity arising by chance from heterogeneity influenced by operational bias, we believe there is a need to explore stage-wise heterogeneity by presenting key patient characteristics and results by independent stage and treatment group. High-quality reporting of results is a vital part of running any successful trial [ ]. The reported findings need to be credible, transparent and repeatable.

Where there are potential biases, the report should highlight them, and it should also comment on how sensitive the results are to the assumptions made in the statistical analysis. Much effort has been made to improve the reporting quality of traditional clinical trials. Recent work has discussed the reporting of AD trials with examples of and recommendations for minimum standards [ — ] and identified several items in the CONSORT check list as relevant when reporting an AD trial [ , ]. Mindful of the statistical and operational pitfalls discussed in the previous section, we have compiled a list of 11 reporting items that we consider essential for AD trials, along with some explanations and examples.

Given the limited word counts of most medical journals, we acknowledge that a full description of all these items may need to be included as supplementary material. However, sufficient information must be provided in the main body, with references to additional material. This will enable readers and reviewers to gauge the appropriateness of the design and interpret its findings correctly. Research objectives and hypotheses should be set out in detail, along with how the chosen AD suits them.

Reasons for using the more established ADs have been discussed in the literature; the choice of routinely used ADs, such as the continual reassessment method (CRM) for dose escalation or group-sequential designs, should be self-evident and need not be justified every time. A trial report should not only state the type of AD used but also describe its scope adequately. This allows the appropriateness of the statistical methods used to be assessed and the trial to be replicated. The scope relates to what the adaptation(s) encompass, such as terminating futile treatment arms or selecting the best-performing treatment in a MAMS design.

The scope of ADs with varying objectives is broad and can sometimes include multiple adaptations aimed at addressing multiple objectives in a single trial. In addition to reporting the overall planned and actually recruited sample sizes, as in any RCT, AD trial reports should provide information on the timing of interim analyses, e.g. in terms of the number of patients or events included at each analysis.

Transparency with respect to adaptation procedures is crucial [ ]. Hence, reports should include the decision rules used, their justification and timing, as well as the frequency of interim analyses. It is important for the research team, including the clinical and statistical researchers, to discuss adaptation criteria at the planning stage and to consider the validity and clinical interpretation of the results. Some ADs, however, may require simulation work under a number of scenarios to establish their statistical properties. It is important to provide clear simulation objectives, a rationale for the scenarios investigated and evidence showing that the desired statistical properties have been preserved.

The simulation protocol and report, as well as any software code used to generate the results, should be made accessible. In addition, traditional naive estimates could be reported alongside adjusted estimates. Whenever data from different stages are combined in the analysis, it is important to disclose the combination method used as well as the rationale behind it. Reporting items such as key patient characteristics and results broken down by stage and treatment group, if appropriate for the design used, could provide some form of assurance to the scientific research community.

Nonetheless, differentiating between randomly occurring and design-induced heterogeneity or population drift is tough, and even standard fixed designs are not immune to this problem. Prospective planning of an AD is important for credibility and regulatory considerations [ 41 ]. However, as in any non-AD trial, events not envisaged at the planning stage may arise during the course of the trial and call for changes to the design that are outside the scope of the a priori planned adaptations, or there may be a failure to implement planned adaptations.

Questions may be raised regarding the implications of such unplanned ad hoc modifications. Is the planned statistical framework still valid? Were the changes driven by potential bias? Are the results still interpretable in relation to the original research question? Thus, any unplanned modifications must be stated clearly, with an explanation as to why they were implemented and how they may impact the interpretation of trial results. As highlighted earlier, adaptations should be motivated by the need to address specific research objectives.

In the context of the trial conducted and its observed results, triallists should discuss the interpretability of the results in relation to the original research question(s). In particular, consideration should be given to whom the study results apply. For instance, subgroup selection, enrichment and biomarker ADs are motivated by the need to characterise the patients who are most likely to benefit from investigative treatments. Thus, the final results may apply only to patients with specific characteristics and not to the general or enrolled population. What worked well? What went wrong? What could have been done differently?

We encourage the discussion of all positive, negative and perhaps surprising lessons learned over the course of an AD trial. Sharing practical experiences with AD methods will help inform the design, planning and conduct of future trials and is, thus, a key element in ensuring researchers are competent and confident enough to apply ADs in their own trials [ 27 ]. For novel cutting-edge designs especially, we recommend writing up and publishing these experiences as a statistician-led stand-alone paper.

Otherwise, retrieving and identifying AD trials in the literature and clinical trial registers will be a major challenge for researchers and systematic reviewers [ 28 ]. We wrote this paper to encourage the wider use of ADs with pre-planned opportunities to make design changes in clinical trials. Although there are a few practical stumbling blocks on the way to a good AD trial, they can almost always be overcome with careful planning.

We have highlighted some pivotal issues around funding, communication and implementation that occur in many AD trials. When in doubt about a particular design aspect, we recommend looking up and learning from examples of trials that have used similar designs. As AD methods are beginning to find their way into clinical research, more case studies will become available for a wider range of applications. Practitioners clearly need to publish more of their examples.


Over the last two decades, we have seen and been involved with dozens of trials where ADs have sped up, shortened or otherwise improved the research. That is, however, not to say that all trials should be adaptive. Under some circumstances, an AD would be nonsensical, e.g. when the primary outcome takes so long to observe relative to the pace of recruitment that no interim data would be available in time to inform any adaptation. Moreover, it is important to realise that pre-planned adaptations are a safeguard against shaky assumptions at the planning stage, not a means to rescue an otherwise poorly designed trial.

ADs indeed carry a risk of introducing bias into a trial. That being said, avoiding ADs for fear of biased results is uncalled for. The magnitude of the statistical bias is practically negligible in many cases, and there are methods to counteract it. The best way to minimise operational bias (which is by no means unique to ADs) is by rigorous planning and transparency.

Measures such as establishing well-trained and well-informed independent data monitoring committees (IDMCs) and keeping triallists blind to changes wherever possible, as well as clear and comprehensive reporting, will help build trust in the findings of an AD trial. The importance of accurately reporting all design specifics, as well as the adaptations made and the trial results, cannot be overemphasised, especially since clear and comprehensive reports facilitate learning for future AD or non-AD trials.

Working through our list of recommendations should be a good starting point. These reporting items are currently being formalised, with additional input from a wide range of stakeholders, as an AD extension to the CONSORT reporting guidance and checklist.

References

Fundamentals of clinical trials, 4th ed. New York: Springer;
Shih WJ. Plan to be flexible: a commentary on adaptive designs. Biometrical J.
Berry Consultants. What is adaptive design? Accessed 7 Jul
Campbell G. Similarities and differences of Bayesian designs and adaptive designs for medical devices: a regulatory view. Stat Biopharm Res.
Chow SC, Chang M. Adaptive design methods in clinical trials, 2nd ed.
Morgan CC. Sample size re-estimation in group-sequential response-adaptive clinical trials. Stat Med.
Speeding up the evaluation of new agents in cancer. J Natl Cancer Inst.
Zohar S, Chevret S. J Biopharm Stat.
Sverdlov O, Wong WK. Ther Innov Regul Sci.
Drug Inf J.
Stallard N, Todd S. Stat Methods Med Res.
Statistical consideration of adaptive methods in clinical development.
Maintaining confidentiality of interim data to enhance trial integrity and credibility. Clin Trials.
Selection and bias—two hostile brothers.
Type I error rate control in adaptive designs for confirmatory clinical trials with treatment selection at interim. Pharm Stat.
Graf AC, Bauer P. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.
Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.
Simultaneous confidence intervals that are compatible with closed testing in adaptive designs.
The practical application of adaptive study design in early phase clinical trials: a retrospective analysis of time savings. Eur J Clin Pharmacol.
Twenty-five years of confirmatory adaptive designs: opportunities and pitfalls.
Dose escalation methods in phase I cancer clinical trials.


Chevret S. Bayesian adaptive clinical trials: a dream for statisticians only?
Jaki T. Uptake of novel statistical methods for early-phase clinical studies in the UK public sector.
Adaptive design: results of survey on perception and use.
Missing steps in a staircase: a qualitative study of the perspectives of key stakeholders on the use of adaptive designs in confirmatory trials.
Cross-sector surveys assessing perceptions of key stakeholders towards barriers, concerns and facilitators to the appropriate use of adaptive designs in confirmatory trials.

Adaptive designs undertaken in clinical research: a review of registered clinical trials.
Effects of ranolazine with atenolol, amlodipine, or diltiazem on exercise tolerance and angina frequency in patients with severe chronic angina: a randomized controlled trial. J Am Med Assoc.
Telmisartan and insulin resistance in HIV (TAILoR): protocol for a dose-ranging phase II randomised open-labelled trial of telmisartan as a strategy for the reduction of insulin resistance in HIV-positive individuals on combination antiretroviral therapy. BMJ Open.
A generalized Dunnett test for multi-arm multi-stage clinical studies with treatment selection.
Adaptive randomized study of idarubicin and cytarabine versus troxacitabine and cytarabine versus troxacitabine and idarubicin in untreated patients 50 years or older with adverse karyotype acute myeloid leukemia. J Clin Oncol.
Adaptive increase in sample size when interim results are promising: a practical guide with examples.

Jennison C, Turnbull BW. Adaptive sample size modification in clinical trials: start small then ask for more?
Empirical Bayes estimation of the selected treatment mean for two-stage drop-the-loser trials: a meta-analytic approach.
Developing a Bayesian adaptive design for a phase I clinical trial: a case study for a novel HIV treatment.
Wellcome Trust. Joint Global Health Trials scheme.
National Institutes of Health.
European Medicines Agency. Reflection paper on methodological issues in confirmatory clinical trials planned with an adaptive design.
Adaptive design clinical trials for drugs and biologics: guidance for industry (draft).
Adaptive designs for medical device clinical studies: guidance for industry and Food and Drug Administration staff.
Clin Investig.
Adaptive clinical trial designs for European marketing authorization: a survey of scientific advice letters from the European Medicines Agency.
The independent statistician for data monitoring committees.
Gallo P. Operational challenges in adaptive design implementation.
A proposed charter for clinical trial data monitoring committees: helping them to do their job well.
Views on emerging issues pertaining to data monitoring committees for adaptive trials.
A practical guide to data monitoring committees in adaptive trials.
Data monitoring committees—expect the unexpected. N Engl J Med.
Recommendations for data monitoring committees from the Clinical Trials Transformation Initiative.
Trial steering committees in randomised controlled trials: a survey of registered clinical trials units to establish current practice and experiences.
Exploring the role and function of trial steering committees: results of an expert panel meeting.
What are the roles and valued attributes of a trial steering committee? Ethnographic study of eight clinical trials facing challenges.

Group sequential methods and software applications. Am Stat.
Tymofyeyev Y. Practical considerations for adaptive trial design and implementation. New York: Springer;
Quinlan J, Krams M. Implementing adaptive designs: logistical and operational considerations.
Adaptive design methods in clinical trials—a review. Orphanet J Rare Dis.
Adaptive designs for confirmatory clinical trials.
Barriers and opportunities for implementation of adaptive designs in pharmaceutical product development.
Practical considerations and strategies for executing adaptive clinical trials.
Curtin F, Heritier S. The role of adaptive trial designs in drug development. Expert Rev Clin Pharmacol.
Implementation of adaptive methods in early-phase clinical trials. J Diabetes Sci Technol.
An overview of statistical approaches for adaptive designs and design modifications. Biom J.
Wassmer G, Brannath W. Group sequential and confirmatory adaptive designs in clinical trials. Heidelberg: Springer;
Bias and trials stopped early for benefit.
Analysis following a sequential test. In: Group sequential methods with applications to clinical trials.
Parameter estimation following group sequential hypothesis testing.
Liu A, Hall WJ. Unbiased estimation following a group sequential test.

Bowden J, Glimm E. Unbiased estimation of selected treatment means in two-stage trials.
Conditionally unbiased and near unbiased estimation of the selected treatment mean for multistage drop-the-losers trials.
Whitehead J. On the bias of maximum likelihood estimation following a sequential test.
Jovic G, Whitehead J. An exact method for analysis following a two-stage phase II cancer clinical trial.
Carreras M, Brannath W. Shrinkage estimation in two-stage adaptive designs with midtrial treatment selection.
Estimation in multi-arm two-stage trials with treatment selection and time-to-event endpoint.
Bowden J, Wason J. Identifying combined design and analysis procedures in two-stage trials with a binary end point.