Clinical trial design

Andrew Hooker, Mats Karlsson, Joakim Nyberg

There are two principal ways in which models can be used to evaluate and optimize clinical and pre-clinical experiments. The first is simulation of a set of proposed designs from a model (or set of models), followed by evaluation of the resulting data sets using metrics of interest. The simulations, repeated many times with different random seeds, provide information about the expected outcomes of the candidate designs (for example, measures of the precision and bias of parameter estimates, or the power to detect a drug effect of a specific size). With this methodology we have investigated, for example, differences in randomization schemes for dose-finding trials, where we found that dose-randomized trials are more powerful for characterizing the underlying relationship than concentration-randomized trials. In most instances, this increase in power can be achieved with a similar or lower number of observed side effects.
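The simulation-based approach above can be sketched in a few lines. This is a deliberately minimal, hypothetical example: a two-arm parallel trial with a normally distributed endpoint, analyzed with a t-test, where the effect size, variability and sample sizes are illustrative assumptions rather than values from our studies. Repeating the trial simulation with different random draws and counting how often the effect is detected gives the empirical power of the design.

```python
# Hypothetical sketch: estimating the power of a two-arm parallel design
# by repeated clinical trial simulation. Effect size, variability and
# sample sizes are illustrative assumptions, not values from the text.
import numpy as np
from scipy import stats

def simulate_trial_power(n_per_arm=30, drug_effect=5.0, sd=10.0,
                         n_sim=2000, alpha=0.05, seed=1):
    """Simulate n_sim trials and return the fraction in which a
    two-sample t-test detects the drug effect (empirical power)."""
    rng = np.random.default_rng(seed)
    detected = 0
    for _ in range(n_sim):
        placebo = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(drug_effect, sd, n_per_arm)
        _, p = stats.ttest_ind(treated, placebo)
        if p < alpha:
            detected += 1
    return detected / n_sim

power = simulate_trial_power()
print(f"empirical power: {power:.2f}")
```

In practice the simulation model would be a nonlinear mixed-effects pharmacometric model and the analysis step a full model fit, but the structure of the computation — simulate, analyze, tabulate over replicates — is the same.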

The second way of evaluating and optimizing trial designs is through optimal experimental design methodology. These methods typically rely on calculation of an information matrix (e.g. the Fisher information matrix), which characterizes the information content of any candidate design. Each design evaluation is much quicker than a clinical trial simulation, so one can map the landscape of possible designs (within constraints) available for an experiment, and even optimize a design based on this information. We have developed methods and software (PopED) that implement both local and global design criteria (e.g. E-family optimal designs), which take into account the underlying uncertainty in a pharmacometric model description of a biological system. Additionally, while optimal design is often focused on the sampling times of an experiment, the methodology can be applied to other aspects of a trial design, such as the dose administered or the lengths of the run-in, treatment and wash-out phases; these aspects are investigated in our research. Further, we have investigated the extension of optimal design methodology to optimize a study for other quantities of interest, such as power, as opposed to the traditional optimization based on the uncertainty of model parameter estimates.
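The information-matrix idea can be illustrated with a toy example. The sketch below uses a monoexponential model y = A·exp(-k·t) with additive error and the simple fixed-effects Fisher information matrix FIM = (1/σ²) Σₜ g(t)g(t)ᵀ, where g is the gradient of the model with respect to the parameters; this is a deliberate simplification of the population FIM that tools like PopED compute, and the model, parameter values and candidate schedules are all illustrative assumptions. Comparing log det(FIM) (the D-optimality criterion) for two candidate sampling schedules shows how designs can be ranked without simulating a single trial.

```python
# Hypothetical sketch: ranking two sampling schedules by D-optimality,
# log det(FIM), for y = A*exp(-k*t) with additive error. This is a
# simplified fixed-effects FIM, not the full population FIM of PopED.
import numpy as np

def fim(times, A=100.0, k=0.2, sigma=5.0):
    t = np.asarray(times, dtype=float)
    dy_dA = np.exp(-k * t)               # sensitivity w.r.t. A
    dy_dk = -A * t * np.exp(-k * t)      # sensitivity w.r.t. k
    J = np.column_stack([dy_dA, dy_dk])  # n_samples x n_params Jacobian
    return J.T @ J / sigma**2

def d_criterion(times):
    """log det of the FIM: larger means a more informative design."""
    sign, logdet = np.linalg.slogdet(fim(times))
    return logdet if sign > 0 else -np.inf

early = [0.5, 1.0, 1.5, 2.0]    # all samples taken early
spread = [0.5, 2.0, 5.0, 10.0]  # samples spread across the decay
print(d_criterion(early), d_criterion(spread))
```

Because each evaluation is only a small matrix computation, thousands of candidate schedules can be screened, or the criterion handed directly to a numerical optimizer, in the time a single clinical trial simulation would take.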

The two approaches can be combined to evaluate and explore adaptive optimal designs. In these trial designs, interim analyses are used to update the models describing the system under investigation, and this updated information is then used to re-optimize the design for the next cohort of patients entering the study. With combined simulation and optimization one can explore the adaptation and optimization rules to be used in an adaptive trial. We are currently developing such a tool (MBAOD) and are investigating the use of these designs in, for example, pediatric bridging studies and time-to-event studies.
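The adaptive loop described above can be sketched as follows. Again the model, parameter values and the one-sample-per-cohort "design" are illustrative simplifications, not the MBAOD implementation: after each simulated cohort, the model is re-fitted to all accumulated data (the interim analysis), and the next cohort's sampling time is re-optimized under the updated estimate. For y = A·exp(-k·t), the sensitivity to k, |A·t·e^(-kt)|, is maximized at t = 1/k, which gives a closed-form "optimal" next sample under the current estimate.

```python
# Hypothetical sketch of an adaptive optimal design loop: simulate a
# cohort, re-estimate the model at an interim analysis, re-optimize the
# next cohort's design. A one-sample cohort and a monoexponential model
# are illustrative simplifications of what a tool like MBAOD does.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
A_TRUE, K_TRUE, SIGMA = 100.0, 0.3, 2.0  # "true" system, assumed values

def model(t, A, k):
    return A * np.exp(-k * t)

def optimal_time(k_hat):
    """|dy/dk| = A*t*exp(-k*t) is maximized at t = 1/k, so the most
    informative single sample for k is placed at t = 1/k_hat."""
    return 1.0 / k_hat

# initial (non-optimized) cohort
times = [0.5, 1.0, 4.0, 8.0]
obs = list(model(np.array(times), A_TRUE, K_TRUE)
           + rng.normal(0, SIGMA, len(times)))

for cohort in range(3):
    # interim analysis: re-fit the model to all data collected so far
    (A_hat, k_hat), _ = curve_fit(model, np.array(times), np.array(obs),
                                  p0=[80.0, 0.1])
    # re-optimize the sampling time for the next cohort
    t_next = optimal_time(k_hat)
    times.append(t_next)
    obs.append(model(t_next, A_TRUE, K_TRUE) + rng.normal(0, SIGMA))

print(f"final estimate k = {k_hat:.3f} (true {K_TRUE})")
```

Wrapping this whole loop in an outer simulation layer, and repeating it many times, is what lets one evaluate the adaptation and optimization rules themselves before committing to them in a real trial.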