
The middle column is the hypothesized model of the velocity (first time derivative) of x, a dynamic quantity. Solving such models involves finding a method that ends up with x by itself on the left of the equal sign and a function of time on the right. More complicated types of models are used by mathematical biologists to describe and predict the behavior of microscopic biological processes such as intracellular metabolism or nerve conduction [3].

Statistical Models

Statistical models are models that have at least one random (stochastic) variable. Going forward, mechanistic biological models will probably be a mixture of deterministic and stochastic variables. Most statistical models describe the behavior of the mean of y as a function of other variables, including time. Unless there is a specific experimental design model such as analysis of variance (ANOVA), statistical models tend to be constructed after the behavior of the data is known. A serious loss of information can occur if the wrong model is chosen. Here are some typical statistical models of data varying in time, where T is the total time of observation:

Mean model:           $y_t = \mu_t + \epsilon_t$
ANOVA model:          $y_t = \mu_t + \epsilon$
Straight-line model:  $y_t = \beta_0 + \beta_1 t + \epsilon$
Polynomial model:     $y_t = \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \cdots + \epsilon$
Trigonometric model:  $y_t = \beta_0 + \beta_1 \cos(2\pi t/T) + \beta_2 \sin(2\pi t/T) + \beta_3 \cos(4\pi t/T) + \beta_4 \sin(4\pi t/T) + \cdots + \epsilon$
Log-linear model:     $y_t = e^{\beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \cdots} + \epsilon$
Sigmoidal model:      $y_t = \beta_0 \beta_2 / (\beta_0 + (\beta_2 - \beta_0)e^{-\beta_1 t}) + \epsilon$
Sine model:           $y_t = \beta_0 + \beta_2 \sin(\beta_1 t) + \epsilon$
OU model:             $y_t = e^{-\alpha(t-s)} y_s + \mu(1 - e^{-\alpha(t-s)})$

In all of these models, ε is the random measurement error, the residual difference between the model mean and the observed y. This error is assumed to be Gaussian with mean zero and variance σε², and each measurement error is assumed to be statistically independent of all the others. Since the rest of the model is usually deterministic, in most fields of measurement it is the error term alone that makes y a random variable. In biology, however, there are many additional sources of biological variation. In the OU model, where the biological Brownian motion B also contributes variation in a complicated way, mechanistic interpretation is much easier through the SDE. Note that all the parameters in these models are independent of time, except for the means μ_t in the first two models.
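For readers who want to see the OU model in action, a minimal R sketch follows. It simulates the process exactly on an equally spaced grid, under the assumption that σ denotes the stationary standard deviation of the process; the function name and parameter values are illustrative, not taken from this chapter.

    # Exact simulation of the OU model on an equally spaced time grid
    # (a sketch; assumes sigma is the stationary SD of the process).
    simulate_ou <- function(n, dt, mu, alpha, sigma, y0 = mu) {
      y <- numeric(n)
      y[1] <- y0
      gam <- exp(-alpha * dt)              # autocorrelation between adjacent points
      sd_cond <- sigma * sqrt(1 - gam^2)   # SD of the innovation at each step
      for (i in 2:n) {
        y[i] <- gam * y[i - 1] + mu * (1 - gam) + rnorm(1, 0, sd_cond)
      }
      y
    }

    set.seed(1)
    path <- simulate_ou(n = 31, dt = 1, mu = 3, alpha = 0.03, sigma = 0.5)

Large α makes gam nearly zero, so successive points are nearly independent; small α makes gam close to 1, producing the strong autocorrelation discussed below.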

The "mean model" is not really a model; it represents simply the calculation of the mean and standard deviation at each time point, a typical biologist approach. These are descriptive statistics and do not lend themselves to efficient statistical analyses. The next five models are all special cases of the linear model; that is, they are linear in the unknown parameters, not in the variables. ANOVA is a model that parameterizes the mean for each combination of the experimental factors; there are many equivalent ways to parameterize these means. The fundamental difference between the mean model and ANOVA is that for the latter, the standard deviations are assumed to be equal, leading to a more precise estimate of the means, if the assumption is true. In the representation of ANOVA in the example below, the cell-mean model is used, meaning that a mean is estimated for each combination and any modeling of those means is done using contrasts of those means (shown above). Given the data and a linear model, a system of equations, one for each measurement, can be solved using linear algebra to get OLS estimates of the parameters.

The last three models are nonlinear and require iterative methods to solve for the parameters, which means repeated guessing guided by mathematical techniques such as calculus. This requires a computer and a stopping rule to determine when a guess is close enough. Sometimes it is not possible, or is very difficult, to get the algorithm to find the solution; this depends on the nature of the model and the nature of the data. Statisticians prefer to avoid nonlinear models for this reason. Note that the OU model is nonlinear but can be "linearized" in special cases.
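A hedged sketch of such an iterative fit, using R's nls() on the sine model above (again assuming t and y are in the workspace), shows where the difficulty arises: starting values must be guessed, and a poor guess can keep the algorithm from converging.

    # Iterative nonlinear least-squares fit of the sine model
    # y = b0 + b2*sin(b1*t) + e; starting values are rough guesses
    fit_sine <- nls(y ~ b0 + b2 * sin(b1 * t),
                    start = list(b0 = mean(y), b1 = 0.2, b2 = sd(y)))
    summary(fit_sine)   # iterates until the change in fit is below tolerance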

Monte Carlo Data Generation

Most software packages have random number generators. These are actually pseudorandom numbers, because all of the algorithms are deterministic [28]. Monte Carlo simulation is simply the generation of such numbers to suit a particular purpose. The two most common uses of Monte Carlo methods are the generation of random data to evaluate or display statistical results and the solution of deterministic equations that are too difficult to solve by mathematical methods. Commercial simulators use Monte Carlo methods as well.
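A two-line R demonstration makes the point: seeding the generator makes the "random" stream exactly reproducible, precisely because the algorithm is deterministic.

    set.seed(42)   # fix the state of the deterministic generator
    rnorm(3)       # three "random" Gaussian numbers
    set.seed(42)
    rnorm(3)       # exactly the same three numbers again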

All of the data used in the examples below were generated by this method. Each of the example experiments used the same data sets for a given autocorrelation and response percentage, starting with the first observation in each group. The underlying probability distribution was the OU model using different means: the control (no-effect) group (n = 500) had all constant parameters, and the experimental group (n = 500) had the same parameters except that its mean followed a quadratic model in time deviating from the controls after t = 0, with δ assigned randomly as 0 or 1, taking the value 1 with probability p of responding to the intervention:

Parameter   Control Group   Experimental Group
μ           β0              β0 + δ(β1 t + β2 t²)

The values of β0, β1, and β2 were 3, 1/5, and −2/300, respectively; σ was 0.5, and α was either 3, to simulate independence, or 0.03, to simulate a value similar to those observed previously in liver tests [29,30]. The response proportion p was varied among 1.0, 0.5, and 0.1 to show the effect of partial responses, which are common in biology. A measurement error with a CV of 10% (i.e., σε = 0.1μ) was added, except that the added noise was based on the actual measured value rather than the average measured value, on the assumption that the raw data were log-Gaussian, like an enzyme [18,31]. The parameters of the quadratic response model were chosen so that its maximum, reached halfway between the ends of the time interval, lies 3σ above the control group mean.
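A sketch of this generation scheme in R is given below. It restates the parameters just listed, but the time grid and the exact handling of the log-Gaussian measurement error are assumptions, since they are not fully specified here.

    set.seed(123)
    n_per_group <- 500; times <- 0:30         # assumed grid over [0, 30]
    b0 <- 3; b1 <- 1/5; b2 <- -2/300          # quadratic response parameters
    sigma <- 0.5; alpha <- 0.03; p <- 0.5     # alpha = 3 simulates independence
    cv <- 0.10                                # 10% measurement-error CV

    gam <- exp(-alpha * diff(times))          # step-to-step autocorrelations
    make_subject <- function(delta) {
      mu <- b0 + delta * (b1 * times + b2 * times^2)  # mean curve over time
      y <- numeric(length(times))
      y[1] <- rnorm(1, mu[1], sigma)
      for (i in 2:length(times)) {            # exact OU transition about mu
        y[i] <- gam[i - 1] * (y[i - 1] - mu[i - 1]) + mu[i] +
          rnorm(1, 0, sigma * sqrt(1 - gam[i - 1]^2))
      }
      y * exp(rnorm(length(y), 0, cv))        # multiplicative log-Gaussian error
    }

    control <- t(sapply(rep(0, n_per_group), make_subject))
    delta   <- rbinom(n_per_group, 1, p)      # responder indicator per subject
    treated <- t(sapply(delta, make_subject))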

The first 50 samples in each group of the simulated data are shown in Figures 5 and 6 for α = 3 and α = 0.03, respectively. In Figure 5, at 100% response, the quadratic response is very clear and diminishes with decreasing

Figure 5 Examples of the simulated data for α = 3 and responses 100%, 50%, and 10%. (Panels: Control Group and Experimental Group; x-axis: Time.)

Figure 6 Examples of the simulated data for α = 0.03 and responses 100%, 50%, and 10%. (Panels: Control Group and Experimental Group; x-axis: Time.)

response. No statistician is needed here. In Figure 6 the signals have a very different appearance, even though only α differs. The strong autocorrelation mutes the variation in the signal and causes it to lag in time. There is little visual difference among the different responses. A look at the plots might suggest that there is no signal at all, yet the mean signal is exactly the same as in Figure 5. Standard statistical methods currently used in drug development may not find this signal. The examples below are intended to illustrate some of the issues; it will clearly take more sophisticated methods, not used in this chapter, to detect this signal properly under autocorrelation.

The following statistical models were used for the experimental effect, with t set to zero at all time points on the right-hand side for the control group:

Model 5:  $y_t - y_0 = \gamma(y_s - y_0) + \mu_t + \epsilon$
Model 6:  $y_t = \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \epsilon$
Model 7:  $y_t = \gamma y_s + \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \epsilon$
Model 8:  $y_t = \gamma y_0 + \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \epsilon$
Model 9:  $y_t - y_0 = \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \epsilon$
Model 10: $y_t - y_0 = \gamma(y_s - y_0) + \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \epsilon$

These models are compared in the simulated experiments below. Models 1 to 10 used the lm() function in R [32], which performs OLS estimation. Models 11 to 20 are the same models, respectively, except that the linear mixed-effect model lme() in R, with restricted maximum likelihood estimation, was used, allowing random baselines (intercepts). Model 1 is ANOVA, model 2 is ANOVA covariate-adjusted for the previous observation, model 3 is ANOVA covariate-adjusted for the baseline, model 4 is ANOVA for the change from baseline, and model 5 is ANOVA for the change from baseline covariate-adjusted for the previous change from baseline. Models 6 to 10 replace the means with fourth-degree polynomials in time.
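To make the setup concrete, here is a hedged sketch of how models 9 and 19 might be specified in R, assuming a long-format data frame dat with columns y, y0, t, and a subject identifier id (the chapter's actual code is not shown):

    library(nlme)   # provides lme()

    # Model 9: OLS polynomial fit for the change from baseline
    m9 <- lm(I(y - y0) ~ t + I(t^2) + I(t^3) + I(t^4), data = dat)

    # Model 19, its mixed-effect counterpart: a random intercept (baseline)
    # for each subject, fit by restricted maximum likelihood
    m19 <- lme(I(y - y0) ~ t + I(t^2) + I(t^3) + I(t^4),
               random = ~ 1 | id, data = dat, method = "REML")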

These models were chosen because they are typical of what a statistician or a biologist might apply to similar data sets to study biomarkers when the underlying time-dependence structure is not known. The focus is on the mean response, which is typical; no attempt was made to extract the biological variation component σ or the analytical variation component σε. The nonconstant variance of the residuals was also ignored. If there were no measurement error and the biology were known to follow an OU model, model 2 or 7 would be the "best" for estimating α, μ, and σ, although transformations would be needed to get the correct estimates. For example, α = −log γ/(t − s) only if t − s is constant, as was the case below. When γ is not in the model, a value of "na" is shown in the results; when the estimate of γ is negative, "*" is shown. The mean would have to be transformed similarly, but this was not done below in the calculation of the bias, since the analyst would be unlikely to know the proper transformation. If the underlying process is known to be OU and the baseline of both groups has the same distribution as the control group over time, covariance adjustment for the baseline loses information about α and increases the model bias, possibly masking the signal. Modeling the change from baseline underestimates the variance because it assumes that the variance of the baseline distribution is zero. This approach might give less biased results but is likely to produce p-values that are too small, meaning that false signals may be detected.
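For example, the back-transformation of the fitted autoregression coefficient might look like this in R, where m7 and the coefficient name "ys" are hypothetical placeholders for a fit of model 7:

    dt <- 1                            # constant spacing t - s, as assumed above
    gamma_hat <- coef(m7)[["ys"]]      # hypothetical name of the y_s coefficient
    # the "*" case: alpha is undefined when the estimate of gamma is negative
    alpha_hat <- if (gamma_hat > 0) -log(gamma_hat) / dt else NA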

Statistical tests for signal detection in the experiments below are based on the differences in parameters between the two groups: linear effect, quadratic effect (quad), cubic effect, and quartic effect. Hierarchical statistical testing was performed to learn about the mathematical form of the biological response. First, an overall test for deviation from baseline was done to see whether any signal was present; this is the test that counts, since it controls the type I error, and the subsequent tests were secondary and not corrected for multiple testing. Second, a test for a fourth-degree polynomial (poly) was done; if significant, it means that at least one coefficient of the polynomial is nonzero. Then each degree of the polynomial was tested to determine the functional shape of the curve. In the simulated data, only the true quadratic term is nonzero; any other significant finding is spurious. In all cases presented below, the estimates of σ are too low, resulting in p-values that are too small.
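One standard way to carry out the overall test in R is as a comparison of nested models with anova(); the sketch below uses a group-by-time interaction parameterization, one common alternative to the chapter's device of zeroing t for the control group (dat and the factor group are assumed names):

    # Overall test: do the experimental-group parameters deviate at all?
    full    <- lm(I(y - y0) ~ group * (t + I(t^2) + I(t^3) + I(t^4)), data = dat)
    reduced <- lm(I(y - y0) ~ t + I(t^2) + I(t^3) + I(t^4), data = dat)
    anova(reduced, full)   # joint F-test of all group-related terms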

To evaluate the Fisher information for each model, the MSE was calculated as the estimated variance (Var) of the time function integrated over time from 0 to 30 plus the squared bias relative to the true polynomial, integrated over the same interval. This represents the mean squared area between the true and estimated time functions. In most experimental cases the bias can never be known, but it can be reduced using techniques such as bootstrapping [21]. When using OLS, the estimated parameters are assumed to be unbiased. In a real statistical experiment, the behavior of the particular estimation procedure would be simulated hundreds or thousands of times and the distributions of the results studied. Here only a single simulation was done for illustration, and no scientific conclusion should be drawn from this presentation.
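Numerically, the integrated MSE can be assembled with R's integrate(), assuming vectorized functions est_mean, true_mean, and var_fun for the fitted mean, true mean, and pointwise variance of the fit (all hypothetical names used only for illustration):

    # Integrated MSE over [0, 30] = integrated squared bias + integrated variance
    bias2  <- integrate(function(t) (est_mean(t) - true_mean(t))^2,
                        lower = 0, upper = 30)$value
    varint <- integrate(var_fun, lower = 0, upper = 30)$value
    mse    <- bias2 + varint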
