Transl Clin Pharmacol. 2015 Jun;23(1):1-7. English.
Published online Jun 30, 2015.
Copyright © 2015 Translational and Clinical Pharmacology
Review

R-based reproduction of the estimation process hidden behind NONMEM® Part 1: first-order approximation method

Min-Gul Kim,1 Dong-Seok Yim,2 and Kyun-Seop Bae3,4
    • 1Clinical Pharmacology Unit, Biomedical Research Institute, Chonbuk National University Hospital, Jeonju 561-712, Republic of Korea.
    • 2PIPET (Pharmacometrics Institute for Practical Education and Training), The Catholic University of Korea, Seoul 137-701, Republic of Korea.
    • 3Department of Clinical Pharmacology and Therapeutics, Asan Medical Center, Seoul 138-736, Republic of Korea.
    • 4University of Ulsan College of Medicine, Seoul 138-736, Republic of Korea.
Received May 28, 2015; Revised June 12, 2015; Accepted June 15, 2015.

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/).

Abstract

NONMEM® is the most widely used nonlinear mixed effects modelling tool for population PK/PD analysis. Although thousands of pharmaceutical scientists use NONMEM® routinely for their data analyses, the various estimation methods implemented in NONMEM® remain a mystery to most users because of the complex statistical and mathematical derivations underlying its algorithms. In this tutorial, we demonstrate how to obtain the objective function value and the post hoc η of the first-order approximation method directly in R. We hope that this tutorial helps pharmacometricians understand the estimation process underlying nonlinear mixed effects modelling.

Keywords
NONMEM; modelling; First order approximation; R

Introduction

The nonlinear mixed effects method that was introduced to the biomedical research field approximately 30 years ago is now an essential platform in population pharmacokinetic and pharmacodynamic (PK/PD) data analysis.[1, 2, 3]

NONMEM® has been the most widely used nonlinear mixed effects modelling tool for population PK/PD analysis since its prototype was developed by Lewis B. Sheiner and Stuart L. Beal of the University of California, San Francisco.[4] It is written in FORTRAN and has evolved to its current version, 7.3. NONMEM® is regarded as the standard pharmacometrics tool. Although thousands of pharmaceutical scientists use NONMEM® routinely for their data analyses, the various estimation methods implemented in NONMEM® remain a mystery to most users because of the complex statistical and mathematical derivations underlying its algorithms. Wang systematically derived the objective functions for the major estimation methods used in NONMEM® and clearly demonstrated their relationships through the derivation.[5] Wang derived the first-order (FO) method from an approximation of the Laplacian method, whereas we derive the FO method directly from maximum likelihood estimation.

R is a computer language and suite of libraries for statistical and mathematical computation. It has a strong but relatively small base system compared with other commercial software like SAS® and SPSS®. However, R has robust functions for scientific computation and many add-in packages for particular problems and situations. Many of R's standard functions are coded in the R language itself, making it easy to follow their algorithmic flow.

The objective of this tutorial is to illustrate how to directly obtain the objective function value and the post hoc η using R. This tutorial gives a detailed derivation of the FO approximation method in NONMEM®. To be consistent with the terminology, the notations are defined in Table 1.

Basic principles of the first-order approximation method used in NONMEM®

The following equations are used for PK/PD modelling with NONMEM®:

F = f (θ, η, x)
Y = f (F, ε)
η~MVN(0, Ω)
ε~MVN(0, ∑)

F is the model prediction before the residual variability is applied, and the observed value (Y) is expressed as a function of F and the residual variability. As shown above, f(θ, η, x) is a structural model describing the relationship between the PK/PD observations and the model parameter θ, while η and ε are the stochastic model components describing the randomness unexplained by the structural model. η represents inter-individual random variability with E(η) = 0 and V(η) = ω², and ε represents the residual variability with E(ε) = 0 and V(ε) = σ². (Therefore, θ, Ω and ∑ are constants, while η and ε are random variables. Before examining the mathematical details, readers less accustomed to mathematics and statistics may wish to look at the R scripts first to gain an intuitive understanding.)

The typical PK/PD of the population is summarized by θ, and individual PK/PD parameters are expressed as a combination of θ and η. For example, systemic clearance is expressed as,

CL = θCL · exp(ηCL)
where θCL is the typical value and ηCL is the corresponding inter-subject variability, which follows a normal distribution with mean zero and variance ω²CL. In other words, the typical value of a PK/PD parameter is the value of the PK/PD parameter when η = 0. F is called the typical prediction when η = 0, and the individual prediction when η = η̂, the empirical Bayes estimate (EBE) of the individual η.[6]
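As a minimal illustration of this parameter model (our addition, not part of the original article; the typical value and variance below are arbitrary assumptions), individual clearances can be simulated in R and are log-normally distributed around the typical value:

# R sketch (illustration only; values are arbitrary assumptions)
set.seed(1)
theta.CL  <- 5                         # typical clearance, assumed
omega2.CL <- 0.09                      # variance of eta.CL, assumed
eta.CL <- rnorm(1000, mean = 0, sd = sqrt(omega2.CL))
CL.i   <- theta.CL * exp(eta.CL)       # individual clearances, log-normally distributed
summary(CL.i)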

Maximum likelihood estimation (MLE) is the approach used in NONMEM® to estimate the fixed-effect and random-effect parameters.[7, 8] MLE begins with writing a mathematical expression known as the likelihood function of the sample data. If Y is the measured observation, E(Y) is the expectation of that observation under the model, and V(Y) is the variance of that observation under the model, then the likelihood of the observation given the model is[9]:

L = \frac{1}{\sqrt{2\pi V(Y)}}\, e^{-\frac{(Y - E(Y))^2}{2V(Y)}}

The probability (or likelihood) of a series of observations is the product of the probabilities (or likelihoods) of the individual observations. Thus, if the i-th subject has ni observations, the likelihood for that subject becomes:

L_i = \prod_{j=1}^{n_i} \frac{1}{\sqrt{2\pi V(Y_{ij})}}\, e^{-\frac{(Y_{ij} - E(Y_{ij}))^2}{2V(Y_{ij})}}

This can be transformed into the -2 log likelihood (-2LL):

-2\log L_i = n_i\log(2\pi) + \sum_{j=1}^{n_i}\left[\log V(Y_{ij}) + \frac{(Y_{ij} - E(Y_{ij}))^2}{V(Y_{ij})}\right]

NONMEM® minimizes this -2 log likelihood. Because the first term on the right-hand side, ni·log(2π), is a constant when the number of observations is fixed, minimization is applied only to the remaining terms, which NONMEM® reports as the objective function value.
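The relationship between the normal density and the terms above can be checked numerically; the short sketch below is our addition and uses arbitrary example values:

# R sketch: -2 log-likelihood of one observation (arbitrary example values)
Y <- 8.2; EY <- 7.5; VY <- 1.3
minus2LL.direct <- -2 * dnorm(Y, mean = EY, sd = sqrt(VY), log = TRUE)
minus2LL.terms  <- log(2 * pi) + log(VY) + (Y - EY)^2 / VY
all.equal(minus2LL.direct, minus2LL.terms)   # TRUE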

Therefore, if V(Yij) is assumed constant within a subject and written as V(Yi), the objective function for the i-th subject can be denoted as:

OBJ_i = n_i\log V(Y_i) + \sum_{j=1}^{n_i}\frac{(Y_{ij} - E(Y_{ij}))^2}{V(Y_i)}

This function shows that the squared error, (Yij − E(Yij))², is weighted by the inverse of the variance, V(Yi). However, Sheiner and Beal did not assume a normal distribution in this derivation; they therefore called this the 'extended least squares' objective function in NONMEM® [10, 11, 12] rather than maximum likelihood estimation.

Using a first-order Taylor series approximation of f about the true value of η (ηt, unknown to the investigator) and about ε = 0, E(Y) and V(Y) are derived as follows. For an individual with ηt,

Y = f(\theta,\eta,\varepsilon) \approx f(\theta,\eta_t,0) + \left.\frac{\partial f}{\partial\eta}\right|_{\eta=\eta_t,\,\varepsilon=0}(\eta-\eta_t) + \left.\frac{\partial f}{\partial\varepsilon}\right|_{\eta=\eta_t,\,\varepsilon=0}(\varepsilon-0)
\frac{\partial f(\theta,\eta,0)}{\partial\eta} = g(\theta,\eta,0) = \frac{\partial F}{\partial\eta}
\frac{\partial f(\theta,\eta,\varepsilon)}{\partial\varepsilon} = h(\theta,\eta,\varepsilon) = \frac{\partial Y}{\partial\varepsilon}
E(Y) \approx f(\theta,\eta_t,0) - g(\theta,\eta_t,0)\,\eta_t
V(Y) \approx [g(\theta,\eta_t,0)]^2\omega^2 + [h(\theta,\eta_t,0)]^2\sigma^2 + 2\,g(\theta,\eta_t,0)\,h(\theta,\eta_t,0)\,\mathrm{COV}(\eta,\varepsilon)

Because the covariance between η and ε is assumed to be zero, the last term of approximated V(Y), 2 · g (θ, ηt, 0) · h (θ, ηt, 0) · COV (η, ε), is eliminated.

V(Y) \approx [g(\theta,\eta_t,0)]^2\omega^2 + [h(\theta,\eta_t,0)]^2\sigma^2

In NONMEM®, the FO method assumes that ηt is zero; therefore, the estimate of E(Y) becomes

\hat{E}(Y) \approx f(\hat\theta,0,0) - g(\hat\theta,0,0)\cdot 0 = f(\hat\theta,0,0)

Meanwhile, V(Y) is obtained differently depending on the INTERACTION estimation option in NONMEM®. This option accounts for the interaction between η and ε and uses η̂ (the estimate of the true η, ηt) instead of 0 when calculating the variance of Y. This is somewhat contradictory for the FO method, which assumes that the EBE is zero. Therefore, the authors discourage use of the INTERACTION option with the FO method, whereas with other estimation methods such as FOCE and LAPLACIAN its use is generally recommended: the INTERACTION option does not improve the results with an additive residual error model, but it gives more accurate estimates with other residual error models (proportional error, combined additive and proportional error, power function error, etc.). Because this article deals only with the FO method, V(Y) was estimated as follows:

\hat{V}(Y) \approx [g(\hat\theta,0,0)]^2\hat\omega^2 + [h(\hat\theta,0,0)]^2\hat\sigma^2
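As a small numeric illustration of this approximation (our addition; the values of g, h and the variances below are arbitrary assumptions):

# R sketch (illustration only; g, h and the variances are arbitrary assumptions)
g      <- 1.8    # dF/d(eta) evaluated at eta = 0
h      <- 7.5    # dY/d(eps) evaluated at eta = 0, eps = 0
omega2 <- 0.09   # estimated variance of eta
sigma2 <- 0.02   # estimated variance of eps
VY <- g^2 * omega2 + h^2 * sigma2   # FO approximation of V(Y)
VY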

If the POSTHOC option is used, NONMEM® will estimate η based on the final estimates of θ, Ω and ∑ for each individual after the end of the estimation step. This estimate of each subject's η(η̂) has several names - post hoc η, realized η, EBE of η, or maximum a posteriori (MAP) estimate of η. NONMEM® constructs an objective function in order to estimate post hoc η in addition to the objective function for θ, Ω and ∑. The objective function for post hoc η in NONMEM® is as follows[6]:

OBJ_{\eta} = \log\sigma^2 + \sum_{j=1}^{n_i}\frac{\left(y_{ij} - f(\theta,\eta,0)\right)^2}{\sigma^2} + \frac{(\eta-0)^2}{\omega^2}
\hat\eta = \arg\min\left(OBJ_{\eta}\right)

Vector notations

To reproduce the estimation algorithm in the above section using R, equations were converted into vector notation because R is optimized to calculate using vectors.

For an individual,

\vec{Y}_i = \vec{f}(\vec\theta,\vec\eta,\vec\varepsilon) \approx \vec{f}(\vec\theta,\vec\eta_t,\vec{0}) + G_i(\vec\theta,\vec\eta_t,\vec{0})(\vec\eta-\vec\eta_t) + H_i(\vec\theta,\vec\eta_t,\vec{0})(\vec\varepsilon-\vec{0})
E(\vec{Y}_i) \approx \vec{f}(\vec\theta,\vec\eta_t,\vec{0}) - G_i(\vec\theta,\vec\eta_t,\vec{0})\,\vec\eta_t \quad (\vec\eta_t = \vec{0}\ \text{in the FO method})
V(\vec{Y}_i) \approx G_i(\vec\theta,\vec\eta_t,\vec{0})\,\Omega\,G_i(\vec\theta,\vec\eta_t,\vec{0})^T + \mathrm{diag}\!\left(H_i(\vec\theta,\vec\eta_t,\vec{0})\,\Sigma\,H_i(\vec\theta,\vec\eta_t,\vec{0})^T\right) \quad \text{if}\ \mathrm{COV}(\vec\eta,\vec\varepsilon)=\vec{0}

Because Y⃗i was approximated by a first-order Taylor series, only the first-order terms in both η⃗ and ε⃗ appear. Owing to the assumed multivariate normality of η⃗ and ε⃗, Y⃗i also follows a multivariate normal distribution with mean E(Y⃗i) and variance-covariance matrix V(Y⃗i). Thus, -2LLi was derived by substituting estimates for the unknown true values as follows:

-2LL_i = n_i\log(2\pi) + \log\left|V(\vec{Y}_i)\right| + \left(\vec{Y}_i - E(\vec{Y}_i)\right)^T V(\vec{Y}_i)^{-1}\left(\vec{Y}_i - E(\vec{Y}_i)\right)
\approx n_i\log(2\pi) + \log\left|V(\vec{Y}_i)\right| + \left(\vec{Y}_i - \vec{f}(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0}) + G_i(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0})\,\hat{\vec\eta}_{t,i}\right)^T V(\vec{Y}_i)^{-1}\left(\vec{Y}_i - \vec{f}(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0}) + G_i(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0})\,\hat{\vec\eta}_{t,i}\right)

Thus, the objective function of NONMEM® was denoted as

OBJ_i = \log\left|V(\vec{Y}_i)\right| + \left(\vec{Y}_i - E(\vec{Y}_i)\right)^T V(\vec{Y}_i)^{-1}\left(\vec{Y}_i - E(\vec{Y}_i)\right)
\approx \log\left|V(\vec{Y}_i)\right| + \left(\vec{Y}_i - \vec{f}(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0}) + G_i(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0})\,\hat{\vec\eta}_{t,i}\right)^T V(\vec{Y}_i)^{-1}\left(\vec{Y}_i - \vec{f}(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0}) + G_i(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0})\,\hat{\vec\eta}_{t,i}\right)

The overall objective function value is simply the sum of the individual objective function values. Thus,

OBJ = \sum_i OBJ_i = \sum_i\log\left|V(\vec{Y}_i)\right| + \sum_i\left(\vec{Y}_i - E(\vec{Y}_i)\right)^T V(\vec{Y}_i)^{-1}\left(\vec{Y}_i - E(\vec{Y}_i)\right)
\approx \sum_i\log\left|V(\vec{Y}_i)\right| + \sum_i\left(\vec{Y}_i - \vec{f}(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0}) + G_i(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0})\,\hat{\vec\eta}_{t,i}\right)^T V(\vec{Y}_i)^{-1}\left(\vec{Y}_i - \vec{f}(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0}) + G_i(\vec\theta,\hat{\vec\eta}_{t,i},\vec{0})\,\hat{\vec\eta}_{t,i}\right)

Because the FO method assumes that η̂t,i is zero, this equation can be simplified as follows:

OBJ \approx \sum_i\log\left|V(\vec{Y}_i)\right| + \sum_i\left(\vec{Y}_i - \vec{f}(\vec\theta,\vec{0},\vec{0})\right)^T V(\vec{Y}_i)^{-1}\left(\vec{Y}_i - \vec{f}(\vec\theta,\vec{0},\vec{0})\right)
where
V(\vec{Y}_i) \approx G_i(\vec\theta,\vec{0},\vec{0})\,\Omega\,G_i(\vec\theta,\vec{0},\vec{0})^T + \mathrm{diag}\!\left(H_i(\vec\theta,\vec{0},\vec{0})\,\Sigma\,H_i(\vec\theta,\vec{0},\vec{0})^T\right)

We can derive equations in the NONMEM® user's guide from the above equation.

OBJ_i = \log\left|V(\vec{Y}_i)\right| + \left(\vec{Y}_i - E(\vec{Y}_i)\right)^T V(\vec{Y}_i)^{-1}\left(\vec{Y}_i - E(\vec{Y}_i)\right)
= \log\left|C_i\right| + \left(\vec{Y}_i - \vec{F}_i\right)^T C_i^{-1}\left(\vec{Y}_i - \vec{F}_i\right)
= \log\left|C_i\right| + \left(\vec{Y}_i - \vec{F}_i\right)^T C_i^{-\frac{1}{2}}\, C_i^{-\frac{1}{2}}\left(\vec{Y}_i - \vec{F}_i\right)
= \log\left|C_i\right| + \left(\vec{Y}_i - \vec{F}_i\right)^T \left[C_i^{-\frac{1}{2}}\right]^T C_i^{-\frac{1}{2}}\left(\vec{Y}_i - \vec{F}_i\right) \quad (\because C_i \text{ is symmetric})
= \log\left|C_i\right| + \left[C_i^{-\frac{1}{2}}\left(\vec{Y}_i - \vec{F}_i\right)\right]^T C_i^{-\frac{1}{2}}\left(\vec{Y}_i - \vec{F}_i\right) \quad (\because (AB)^T = B^T A^T)
= \log\left|C_i\right| + \vec{R}_i^T \vec{R}_i
where
\vec{R}_i = C_i^{-\frac{1}{2}}\left(\vec{Y}_i - \vec{F}_i\right)
R⃗i is called the 'Weighted Residual (WRES)' in NONMEM®, and this equation is the same as the one in NONMEM® Users Guide I, p. 34-41. Because WRES is defined in this way, its sign can differ from that of the 'Residual (RES)'. Ci^(-1/2) can be calculated using spectral decomposition, as shown in R code 2.
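A brief numeric check (our addition, using the mat.sqrt.inv function from R code 2 below and an arbitrary positive-definite matrix) confirms that the quadratic form with the inverse covariance matrix equals the sum of squared weighted residuals:

# R sketch: t(R) %*% R equals the quadratic form with the inverse of C (arbitrary values)
C  <- matrix(c(2.0, 0.3, 0.3, 1.5), nrow = 2)   # small symmetric positive-definite matrix
YF <- c(0.8, -0.4)                              # arbitrary residual vector Y - F
R  <- mat.sqrt.inv(C) %*% YF                    # weighted residual
all.equal(as.numeric(t(R) %*% R),
          as.numeric(t(YF) %*% solve(C) %*% YF))   # TRUE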

To estimate η̂t,i, the estimate of the true ηt,i for the i-th subject, the final estimates of θ, Ω and ∑ obtained at the end of the estimation step were used.

OBJ_{\eta_{t,i}} = \log\left|V(\vec{Y}_i)\right| + \left(\vec{Y}_i - \vec{f}(\vec\theta,\vec\eta_{t,i},\vec{0})\right)^T V(\vec{Y}_i)^{-1}\left(\vec{Y}_i - \vec{f}(\vec\theta,\vec\eta_{t,i},\vec{0})\right) + \vec\eta_{t,i}^T\,\Omega^{-1}\,\vec\eta_{t,i}
\hat{\vec\eta}_{t,i} = \arg\min\left(OBJ_{\eta_{t,i}}\right)

Implementation

Computing environment

NONMEM® 7.3 (ICON Development Solutions, Ellicott City, MD), compiled with GNU GFortran 4.6.3 under MS-Windows 7 (64-bit), was used for the computation of the objective function value. For the R computations, R 3.1.2 for MS-Windows was used.

Dataset and population pharmacokinetic model

The THEO dataset in the NONMEM® distribution media was used as the example for computing the objective function value. This dataset is readily available, and its use makes it easy to compare the results with those of other estimation methods in the future. The THEO dataset contains 132 observations from 12 subjects (11 observations per subject following oral administration of 320 mg theophylline). A one-compartment PK model with first-order absorption, shown in the model equations below, was used for the calculation of the objective function value.
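If the distributed THEO.CSV is not at hand, an approximate stand-in can be assembled from R's built-in Theoph data set; the column mapping below is our assumption, and the resulting file may differ in minor details (e.g., rounding of the amounts) from the NONMEM® distribution copy.

# R sketch: approximate stand-in for THEO.CSV built from the Theoph data set (assumed mapping)
data(Theoph)
THEO <- data.frame(ID   = as.integer(as.character(Theoph$Subject)),
                   AMT  = ifelse(Theoph$Time == 0, Theoph$Dose * Theoph$Wt, 0),  # mg/kg x kg
                   TIME = Theoph$Time,
                   DV   = Theoph$conc,
                   WT   = Theoph$Wt)
THEO <- THEO[order(THEO$ID, THEO$TIME), ]
# write.csv(THEO, "THEO.csv", row.names = FALSE)   # optional, to mimic the distributed file
length(unique(THEO$ID)); nrow(THEO)                # 12 subjects, 132 rows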

F = \frac{Dose}{V}\cdot\frac{k_a}{k_a - k}\left(e^{-k\cdot t} - e^{-k_a\cdot t}\right)
k_a = \theta_1\cdot\exp(\eta_1)
V = \theta_2\cdot\exp(\eta_2)
k = \theta_3\cdot\exp(\eta_3)
Y = F + F\cdot\varepsilon_1 + \varepsilon_2
\vec\eta = \begin{pmatrix}\eta_1\\ \eta_2\\ \eta_3\end{pmatrix} \sim MVN\left(\vec{0},\ \Omega\right),\quad \Omega = \begin{pmatrix}\omega_{11}^2 & \omega_{12}^2 & \omega_{13}^2\\ \omega_{21}^2 & \omega_{22}^2 & \omega_{23}^2\\ \omega_{31}^2 & \omega_{32}^2 & \omega_{33}^2\end{pmatrix} = \Omega^T
\vec\varepsilon = \begin{pmatrix}\varepsilon_1\\ \varepsilon_2\end{pmatrix} \sim MVN\left(\vec{0},\ \Sigma\right),\quad \Sigma = \begin{pmatrix}\sigma_{11}^2 & 0\\ 0 & \sigma_{22}^2\end{pmatrix}
where ka, V, and k are the absorption rate constant, the volume of distribution, and the elimination rate constant, respectively.

Computation of objective function value using user-written code in R

The NONMEM® control stream used to obtain the objective function value, the final estimates of θ, Ω and ∑, the individual objective function values, and the G and H matrices is shown below. The trailing 1s in G11, G21, G31, H11, and H21 are part of NONMEM®'s naming convention.

; NONMEM control stream to obtain the G and H matrices
; and the individual objective function values

$PROBLEM THEOPHYLLINE ORAL
$INPUT ID AMT TIME DV WT
$DATA THEO.CSV
$PRED
 DOSE = 320
 KA = THETA(1) * EXP(ETA(1))
  V = THETA(2) * EXP(ETA(2))
  K = THETA(3) * EXP(ETA(3))
  F = DOSE / V * KA / (KA - K) * (EXP(-K*TIME) - EXP(-KA*TIME))
  Y = F + F * EPS(1) + EPS(2)

$THETA
  (0, 2)
  (0, 50)
  (0, 0.1)

$OMEGA BLOCK(3)
  0.2
  0.1 0.2
  0.1 0.1 0.2

$SIGMA
  0.1
  0.1
$EST MAX=9999 METHOD=ZERO
$TABLE ID OBJI FIRSTONLY NOAPPEND ONEHEADER
  FORMAT=s1F14.7 FILE=OBJI.tab
$TABLE ID TIME DV PRED G11 G21 G31 H11 H21
  NOAPPEND ONEHEADER FILE=GH.tab

Prior to computing the objective function value in R, two matrix-related functions, one for converting a vector of the lower triangular part to a full matrix (R code 1) and the other for calculating the square-root-inverse of a matrix using spectral decomposition (R code 2), were developed.

# R code 1
# Convert a lower-triangular vector (as reported by NONMEM, row by row) into a full symmetric matrix
ltv2mat = function(vec) {
  LENGTH = length(vec)
  DIM = as.integer(round((sqrt(8*LENGTH + 1) - 1)/2, 0))
  if (DIM*(DIM + 1)/2 != LENGTH) return(NULL)    # not a valid triangular length
  mat = matrix(nrow=DIM, ncol=DIM)
  mat[upper.tri(mat, diag=TRUE)] = vec
  mat[lower.tri(mat)] = t(mat)[lower.tri(mat)]   # mirror to the lower triangle
  return(mat)
}

# R code 2
# Square-root-inverse of a symmetric matrix, C^(-1/2), by spectral decomposition
mat.sqrt.inv = function(mat) {
  ei = eigen(mat)
  d  = abs(ei$values)
  d2 = 1 / sqrt(d)
  ans = ei$vectors %*% diag(d2) %*% t(ei$vectors)
  return(ans)
}
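A quick usage check of the two helpers (our addition; the test values happen to match the initial $OMEGA block of the control stream) can be run as follows. Because the result of mat.sqrt.inv is a symmetric square-root-inverse, multiplying it by itself should reproduce the ordinary matrix inverse.

# R sketch: quick check of ltv2mat and mat.sqrt.inv (arbitrary test values)
m <- ltv2mat(c(0.2, 0.1, 0.2, 0.1, 0.1, 0.2))   # 3 x 3 symmetric matrix from a lower-triangular vector
s <- mat.sqrt.inv(m)                            # m^(-1/2)
all.equal(s %*% s, solve(m))                    # TRUE: (m^(-1/2))^2 equals m^(-1)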

The PRED function needs to be coded in R to calculate the F, G, and H vectors or matrices for the objective function value (R code 3). F is the typical prediction in the FO method. G is the partial derivative of F with respect to η evaluated at η = 0, which is necessary to calculate the total objective function value for θ, Ω and ∑. H is the partial derivative of Y with respect to ε.

G_1 = \frac{\partial F}{\partial\eta_1} = \frac{\partial F}{\partial k_a}\cdot\frac{\partial k_a}{\partial\eta_1} = \left[-\frac{Dose\cdot k}{V(k_a-k)^2}\left(e^{-k\cdot t} - e^{-k_a\cdot t}\right) + \frac{Dose\cdot k_a}{V(k_a-k)}\,t\,e^{-k_a\cdot t}\right]\cdot k_a
G_2 = \frac{\partial F}{\partial\eta_2} = \frac{\partial F}{\partial V}\cdot\frac{\partial V}{\partial\eta_2} = -\frac{Dose\cdot k_a}{V^2(k_a-k)}\left(e^{-k\cdot t} - e^{-k_a\cdot t}\right)\cdot V
G_3 = \frac{\partial F}{\partial\eta_3} = \frac{\partial F}{\partial k}\cdot\frac{\partial k}{\partial\eta_3} = \left[\frac{Dose\cdot k_a}{V(k_a-k)^2}\left(e^{-k\cdot t} - e^{-k_a\cdot t}\right) - \frac{Dose\cdot k_a}{V(k_a-k)}\,t\,e^{-k\cdot t}\right]\cdot k
H_1 = \frac{\partial Y}{\partial\varepsilon_1} = F
H_2 = \frac{\partial Y}{\partial\varepsilon_2} = 1

where G1, G2, and G3 are the partial derivatives of F with respect to η1, η2, and η3, respectively, and H1 and H2 are the partial derivatives of Y with respect to ε1 and ε2, respectively.

# R code 3
PRED = function(THETA, ETA, DATA) {
  DOSE = 320
  TIME = DATA$TIME

  KA = THETA[1] * exp(ETA[1])
  V  = THETA[2] * exp(ETA[2])
  K  = THETA[3] * exp(ETA[3])
  F  = DOSE/V * KA/(KA - K) * (exp(-K*TIME) - exp(-KA*TIME))

  # Analytic partial derivatives of F with respect to ETA(1)-ETA(3)
  G1 = ((DOSE/V/(KA-K) - DOSE/V*KA/(KA-K)^2) * (exp(-K*TIME) - exp(-KA*TIME)) +
        DOSE/V*KA/(KA-K) * (exp(-KA*TIME)*TIME)) * KA
  G2 = -(DOSE/V^2*KA/(KA-K) * (exp(-K*TIME) - exp(-KA*TIME))) * V
  G3 = (DOSE/V*KA/(KA-K)^2 * (exp(-K*TIME) - exp(-KA*TIME)) -
        DOSE/V*KA/(KA-K) * (exp(-K*TIME)*TIME)) * K

  # Partial derivatives of Y with respect to EPS(1) and EPS(2)
  H1 = F
  H2 = 1
  return(cbind(F, G1, G2, G3, H1, H2))
}
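As a simple sanity check (our addition, not part of the original workflow), the analytic derivative G1 returned by PRED can be compared against a central finite difference with respect to η1. The THETA values and time points below are placeholders and are overwritten by the final estimates in R code 4.

# R sketch: finite-difference check of the analytic derivative G1 (placeholder values)
THETA <- c(2, 50, 0.1)                          # placeholder typical values
DATA  <- data.frame(TIME = c(0.5, 1, 2, 4, 8))  # a few arbitrary sampling times
d     <- 1e-5
F.up  <- PRED(THETA, c( d, 0, 0), DATA)[, "F"]
F.dn  <- PRED(THETA, c(-d, 0, 0), DATA)[, "F"]
G1.numeric  <- (F.up - F.dn) / (2 * d)          # central difference with respect to eta1
G1.analytic <- PRED(THETA, c(0, 0, 0), DATA)[, "G1"]
round(cbind(G1.analytic, G1.numeric), 4)        # the two columns should agree closely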

In order to mimic NONMEM® in R, final estimates of θ, Ω and ∑ from NONMEM® results were used to calculate the objective function value. For obtaining the final estimates, we used the .xml file from the NONMEM® output files. The objective function value is the sum of individual objective function values (R code 4).

# R code 4
DATASET = read.csv("THEO.csv", header=TRUE)
names(DATASET) = c("ID", "AMT", "TIME", "DV", "WT")
ID = unique(DATASET$ID)
OBJ = 0
OBJI.TAB = data.frame(); FGH.TAB = data.frame(); FIT.TAB = data.frame()
NETA = 3; NEPS = 2

# Final estimates of theta, omega and sigma obtained from a prior NONMEM run
THETA = c(3.1693436270752584, 38.251728321164123, 0.10501216077638235)
OMEGA = c(1.198144159941804, 0.13748126980932032, 0.0313481792427366263,
          0.37001420434222171, 0.0433917711628219599, 0.25055031320292215)
SIGMA = c(0.0120775735988250220, 0.0000000000000000, 0.0542660303512642050)

# Calculate the objective function value
for (i in seq_along(ID)) {
  DATA = DATASET[DATASET$ID == ID[i], ]
  ETA  = rep(0, NETA)          # ETA is a vector of zeros for the FO method
  FGH  = PRED(THETA, ETA, DATA)
  Y = DATA$DV
  F = FGH[, 1]
  G = FGH[, 2:4]
  H = FGH[, 5:6]

  OM = ltv2mat(OMEGA)          # lower-triangular vectors -> full matrices
  SG = ltv2mat(SIGMA)

  RES  = Y - F
  COV  = G %*% OM %*% t(G) + diag(diag(H %*% SG %*% t(H)))   # V(Y_i)
  WRES = mat.sqrt.inv(COV) %*% RES                           # weighted residuals
  OBJI = log(det(COV)) + t(WRES) %*% WRES                    # individual objective function value

  OBJ = OBJ + OBJI
  OBJI.TAB = rbind(OBJI.TAB, cbind(ID=ID[i], OBJI))
  FGH.TAB  = rbind(FGH.TAB, cbind(ID=ID[i], FGH))
  FIT.TAB  = rbind(FIT.TAB, cbind(DATA, F, RES, WRES))
}
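After the loop finishes, OBJ holds the FO objective function value, which can be compared with the value reported by NONMEM® (not reproduced here); adding back the constant term dropped by NONMEM® recovers the full -2 log-likelihood. This short addendum is ours.

# R sketch: total objective function value and the corresponding -2 log-likelihood
OBJ                                   # compare with the NONMEM output
NOBS <- nrow(DATASET)                 # 132 observations (11 per subject, 12 subjects)
minus2LL <- OBJ + NOBS * log(2 * pi)  # add back the constant n*log(2*pi)
minus2LL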

For the POSTHOC step, the OBETA function is needed to calculate the objective function for the post hoc η (R code 5). To minimize this objective function, the 'L-BFGS-B' method of the 'optim' function was used with zero initial values. Additionally, the standard error of the post hoc η was calculated (R code 6).[6] After the POSTHOC step, the G and H matrices were recalculated using the final estimates of θ, Ω, ∑ and the post hoc η of each individual (R code 7).

# R code 5
# Objective function for the post hoc (empirical Bayes) eta of one subject;
# THETA, DATA, OM, SG, NETA and NEPS are taken from the enclosing environment.
OBETA = function(ETA) {
  FGH = PRED(THETA, ETA, DATA)
  Y = DATA$DV
  F = FGH[, 1]
  H = FGH[, (NETA+2):(NETA+NEPS+1)]
  COV  = diag(diag(H %*% SG %*% t(H)))    # residual variance only
  isum = log(det(COV)) + t(Y-F) %*% solve(COV) %*% (Y-F) + t(ETA) %*% solve(OM) %*% ETA
  return(isum)
}

# R code 6
ETA = cbind(ID, matrix(nrow=length(ID), ncol=2*NETA))
colnames(ETA) = c("ID", "ETA1", "ETA2", "ETA3", "seETA1", "seETA2", "seETA3")
for (i in seq_along(ID)) {
  DATA = DATASET[DATASET$ID == ID[i], ]
  # optim: general-purpose optimization with several pre-defined algorithms.
  # The L-BFGS-B method is a quasi-Newton method that allows box constraints.
  RES = optim(rep(0, NETA), OBETA, method="L-BFGS-B", hessian=TRUE)
  ETA[i, 2:(NETA+1)] = RES$par
  ETA[i, (NETA+2):(2*NETA+1)] = sqrt(diag(2*abs(solve(RES$hessian))))
}

# R code 7
EBEGH.TAB = data.frame()
for (i in seq_along(ID)) {
  DATA = DATASET[DATASET$ID == ID[i], ]
  FGH  = PRED(THETA, ETA[i, 2:(NETA+1)], DATA)   # G and H evaluated at the post hoc etas
  EBEGH.TAB = rbind(EBEGH.TAB,
                    cbind(ID=ID[i], FGH[, 2:(NETA+1)], FGH[, (NETA+2):(NETA+NEPS+1)]))
}
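With the post hoc η in hand, individual predictions (often tabulated as IPRED) can also be generated; this short sketch is our addition and simply re-evaluates F at each subject's η̂.

# R sketch: individual predictions evaluated at the post hoc etas (our addition)
IPRED.TAB <- data.frame()
for (i in seq_along(ID)) {
  DATA  <- DATASET[DATASET$ID == ID[i], ]
  IPRED <- PRED(THETA, ETA[i, 2:(NETA + 1)], DATA)[, "F"]   # F at eta-hat of subject i
  IPRED.TAB <- rbind(IPRED.TAB, cbind(DATA, IPRED))
}
head(IPRED.TAB)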

There was no difference between the estimates in the NONMEM® output files and the results from our R code: the F, G, and H vectors or matrices, the individual objective function values, and the total objective function value were all identical.

Summary

The nonlinear mixed effects model aims to describe the data observed in a population of subjects. NONMEM® has become the gold standard, both in the pharmaceutical industry and in academia, but the estimation methods implemented in NONMEM® are difficult to understand. The classical estimation method, FO approximation, is relatively comprehensible compared with the other estimation methods. This method was reproduced here using the R language, which is more user-friendly and easier to learn than programming languages such as FORTRAN or C++.

In this tutorial, we demonstrated the calculation of the F, G and H vectors or matrices and the objective function value using the final estimates of θ, Ω and ∑. In practice, NONMEM® updates θ, Ω and ∑ through the ZXMIN1 subroutine to minimize the objective function value. ZXMIN1 uses the derivative-free Davidon-Fletcher-Powell algorithm, a type of quasi-Newton algorithm. This optimization process for θ, Ω and ∑ is more complex than that for η, involving bound constraints, scaling, reparameterization, and so on. Therefore, we chose not to reproduce it here.
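For readers who nevertheless want to see the idea of that outer minimization, the sketch below (ours, and deliberately naive) minimizes the FO objective over θ only, keeping Ω and ∑ fixed at their final estimates; it does not reproduce ZXMIN1's constraint handling, scaling, or reparameterization, and the initial values and bounds are arbitrary assumptions.

# R sketch: naive minimization of the FO objective over theta only (not NONMEM's ZXMIN1)
OBJFN <- function(theta) {
  OM <- ltv2mat(OMEGA); SG <- ltv2mat(SIGMA)   # Omega and Sigma fixed at their final estimates
  obj <- 0
  for (i in seq_along(ID)) {
    DATA <- DATASET[DATASET$ID == ID[i], ]
    FGH  <- PRED(theta, rep(0, NETA), DATA)
    Y <- DATA$DV; F <- FGH[, 1]; G <- FGH[, 2:4]; H <- FGH[, 5:6]
    COV <- G %*% OM %*% t(G) + diag(diag(H %*% SG %*% t(H)))
    obj <- obj + log(det(COV)) + t(Y - F) %*% solve(COV) %*% (Y - F)
  }
  as.numeric(obj)
}
FIT <- optim(c(2, 50, 0.1), OBJFN, method = "L-BFGS-B",
             lower = c(0.01, 1, 0.001))        # arbitrary initial values and lower bounds
FIT$par                                        # compare with the final theta estimates above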

We hope that this tutorial helps pharmacometricians understand the underlying estimation process of nonlinear mixed effects modelling.

Notes

Conflict of interest: Nothing to declare.

References

    1. Sheiner LB, Beal SL. Evaluation of methods for estimating population pharmacokinetics parameters. I. Michaelis-Menten model: routine clinical pharmacokinetic data. J Pharmacokinet Biopharm 1980;8:553–571.
    2. Sheiner LB, Beal SL. Evaluation of methods for estimating population pharmacokinetic parameters. II. Biexponential model and experimental pharmacokinetic data. J Pharmacokinet Biopharm 1981;9:635–651.
    3. Sheiner LB, Beal SL. Evaluation of methods for estimating population pharmacokinetic parameters. III. Monoexponential model: routine clinical pharmacokinetic data. J Pharmacokinet Biopharm 1983;11:303–319.
    4. Beal S, Sheiner L, Boeckmann A, Bauer R. NONMEM user's guides. Ellicott City, MD, USA; 1989-2009.
    5. Wang Y. Derivation of various NONMEM estimation methods. J Pharmacokinet Pharmacodyn 2007;34:575–593.
    6. Kang D, Bae KS, Houk BE, Savic RM, Karlsson MO. Standard Error of Empirical Bayes Estimate in NONMEM® VI. Korean J Physiol Pharmacol 2012;16:97–106.
    7. Beal S, Sheiner L. The NONMEM System. Am Stat 1980;34:118–119.
    8. Plan EL, Maloney A, Mentre F, Karlsson MO, Bertrand J. Performance comparison of various maximum likelihood nonlinear mixed-effects estimation methods for dose-response models. AAPS J 2012;14:420–432.
    9. Fisher D, Shafer S. Fisher/Shafer NONMEM Workshop: Pharmacokinetic and Pharmacodynamic Analysis with NONMEM. Ghent, Belgium: Het Pand; 2007.
    10. Peck CC, Beal SL, Sheiner LB, Nichols AI. Extended least squares nonlinear regression: a possible solution to the choice of weights problem in analysis of individual pharmacokinetic data. J Pharmacokinet Biopharm 1984;12:545–558.
    11. Sheiner LB, Beal SL. Pharmacokinetic parameter estimates from several least squares procedures: superiority of extended least squares. J Pharmacokinet Biopharm 1985;13:185–201.
    12. Peck CC, Sheiner LB, Nichols AI. The problem of choosing weights in nonlinear regression analysis of pharmacokinetic data. Drug Metab Rev 1984;15:133–148.
