Alfisol, Item 003:

LMMpro, version 2.0
The Langmuir Optimization Program
plus
The Michaelis-Menten Optimization Program



vNLLS Optimization
The vNLLS regression method used by LMMpro is based on an adaptation of the optimization method discussed by
Persoff & Thomas in 1988 (Soil Sci. Soc. Am. J. 52:886-889).
vNLLS stands for vertical nonlinear least squares. This regression method optimizes the parameters of the equation
without converting the equation into another form or shape. The best fit is the set of parameter values that yields the
smallest sum of squared errors. The error of each data point is defined as the vertical difference between the predicted
value and the actual value. The predicted value is the value calculated from the equation for a given x_{i}, and the
actual value is the value measured for that same x_{i}.
For the Langmuir Equation, the optimization is as follows:
• Let
Σ ε_{i}^{2} = Σ error^{2} = Σ (measured − predicted)^{2}    [1]
• Substitute the Langmuir Equation for the predicted values:
Σ ε_{i}^{2} = Σ [ Γ_{i} − (Γ_{max} K c_{i}) / (1 + K c_{i}) ]^{2}    [2]
• Expand the square term:
Σ ε_{i}^{2} = Σ [ (Γ_{max} K c_{i}) / (1 + K c_{i}) ]^{2} − Σ [ (2 Γ_{i} Γ_{max} K c_{i}) / (1 + K c_{i}) ] + Σ Γ_{i}^{2}    [3]
• To optimize Γ_{max}, set the first derivative equal to zero and solve:
d(Σ ε_{i}^{2}) / dΓ_{max} = 0 = 2 Γ_{max} Σ [ K c_{i} / (1 + K c_{i}) ]^{2} − Σ [ 2 Γ_{i} K c_{i} / (1 + K c_{i}) ] + 0    [4]
• Solving Equation [4] for Γ_{max} yields:
Γ_{max} = Σ [ Γ_{i} K c_{i} / (1 + K c_{i}) ] / Σ [ K c_{i} / (1 + K c_{i}) ]^{2}    [5]
• If K is known (or fixed by the user), then use Equation [5] above to get the optimized Γ_{max} value.
• If K is not known, guess K values, and then use Equation [5] to get the corresponding best Γ_{max} value.
Finally, use Equation [2] to determine the error that corresponds to the guesses made. Repeat the process by incrementally
increasing or decreasing the K value guesses until the minimum-error condition is found.
• If Γ_{max} is known (or fixed by the user), then Equation [5] is not needed. The best value of K, however, is still
determined with Equation [2] and an iteration loop to find the condition with the minimum error.
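The iteration described above can be sketched as follows. This is a minimal illustration, not LMMpro's actual code: the data arrays `c` and `gamma`, the starting guess `K0`, the step-halving search, and all function names are hypothetical.

```python
def gamma_max_for_K(c, gamma, K):
    """Equation [5]: closed-form best Gamma_max for a fixed K."""
    num = sum(g * K * ci / (1 + K * ci) for ci, g in zip(c, gamma))
    den = sum((K * ci / (1 + K * ci)) ** 2 for ci in c)
    return num / den

def sse(c, gamma, K, gamma_max):
    """Equation [2]: sum of squared vertical errors."""
    return sum((g - gamma_max * K * ci / (1 + K * ci)) ** 2
               for ci, g in zip(c, gamma))

def fit_langmuir(c, gamma, K0=1.0, step=0.5, tol=1e-9):
    """Incrementally increase or decrease the K guess, shrinking the
    increment, until the minimum-error condition is found."""
    K = K0
    best = sse(c, gamma, K, gamma_max_for_K(c, gamma, K))
    while step > tol:
        improved = False
        for cand in (K + step, K - step):
            if cand <= 0:          # K must stay positive
                continue
            err = sse(c, gamma, cand, gamma_max_for_K(c, gamma, cand))
            if err < best:
                K, best, improved = cand, err, True
        if not improved:
            step /= 2              # refine the increment near the minimum
    return K, gamma_max_for_K(c, gamma, K), best
```

Each trial K gets its best Γ_{max} for free from Equation [5], so the search only ever iterates over the single parameter K.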
For the Michaelis-Menten Equation, the optimization is as follows:
• Let
Σ ε_{i}^{2} = Σ error^{2} = Σ (measured − predicted)^{2}    [1]
• Substitute the Michaelis-Menten Equation for the predicted values:
Σ ε_{i}^{2} = Σ [ v_{i} − (V_{max} S_{i}) / (K_{M} + S_{i}) ]^{2}    [2]
• Expand the square term:
Σ ε_{i}^{2} = Σ [ (V_{max} S_{i}) / (K_{M} + S_{i}) ]^{2} − Σ [ (2 v_{i} V_{max} S_{i}) / (K_{M} + S_{i}) ] + Σ v_{i}^{2}    [3]
• To optimize V_{max}, set the first derivative equal to zero and solve:
d(Σ ε_{i}^{2}) / dV_{max} = 0 = 2 V_{max} Σ [ S_{i} / (K_{M} + S_{i}) ]^{2} − Σ [ 2 v_{i} S_{i} / (K_{M} + S_{i}) ] + 0    [4]
• Solving Equation [4] for V_{max} yields:
V_{max} = Σ [ v_{i} S_{i} / (K_{M} + S_{i}) ] / Σ [ S_{i} / (K_{M} + S_{i}) ]^{2}    [5]
• If K_{M} is known (or fixed by the user), then use Equation [5] above to get the optimized V_{max} value.
• If K_{M} is not known, guess K_{M} values, and then use Equation [5] to get the corresponding best V_{max} value.
Finally, use Equation [2] to determine the error that corresponds to the guesses made. Repeat the process by incrementally
increasing or decreasing the K_{M} value guesses until the minimum-error condition is found.
• If V_{max} is known (or fixed by the user), then Equation [5] is not needed. The best value of K_{M}, however, is still
determined with Equation [2] and an iteration loop to find the condition with the minimum error.
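The fixed-parameter case in the last bullet can be sketched in the same way. Here V_{max} is fixed by the user, so Equation [5] is never used and only K_{M} is iterated against Equation [2]; the data, starting guess, and function names are hypothetical illustrations, not LMMpro's actual code.

```python
def sse_mm(S, v, K_M, V_max):
    """Equation [2] for the Michaelis-Menten Equation:
    sum of squared vertical errors."""
    return sum((vi - V_max * Si / (K_M + Si)) ** 2 for Si, vi in zip(S, v))

def fit_KM_fixed_Vmax(S, v, V_max, KM0=1.0, step=0.5, tol=1e-9):
    """Iterate K_M only; V_max is held at the user-supplied value."""
    K_M = KM0
    best = sse_mm(S, v, K_M, V_max)
    while step > tol:
        improved = False
        for cand in (K_M + step, K_M - step):
            if cand <= 0:          # K_M must stay positive
                continue
            err = sse_mm(S, v, cand, V_max)
            if err < best:
                K_M, best, improved = cand, err, True
        if not improved:
            step /= 2              # refine the increment near the minimum
    return K_M, best
```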
Note that the vNLLS regression will optimize the parameters assuming that minimizing the vertical error yields the best results.
This method does have some bias in favor of any region of the curve where the vertical changes are most pronounced.
In other words, for a hyperbolic equation such as the Langmuir Equation or the
Michaelis-Menten Equation, the vNLLS regression has some bias toward the lower-left region of the graph.
Also note that the vNLLS regression does not result in an optimized curve with the data evenly distributed above and
below the curve. That is, the sum of the errors above the curve is not the same value as the sum of the errors below the curve.
Least-squares regressions will only balance the data around the curve if the function has a constant term.
For example, in y = f(x) + b, b is the constant term. The Langmuir Equation and the Michaelis-Menten Equation lack the
constant term (b).
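This can be checked numerically with toy data (the numbers below are hypothetical). A straight line fitted by least squares has an intercept, so its residuals sum to zero; a Langmuir-shaped fit (K fixed at 1 for simplicity, Γ_{max} from Equation [5]) has no constant term, and its residuals do not balance.

```python
# Straight line y = b0 + b1*x fitted by least squares: residuals balance.
x = [1.0, 2.0, 3.0]
y = [1.0, 2.0, 2.0]
mx, my = sum(x) / 3, sum(y) / 3
b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
      / sum((xi - mx) ** 2 for xi in x))
b0 = my - b1 * mx
line_residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

# Langmuir-shaped fit with no constant term: residuals do not balance.
c = [1.0, 3.0]
gamma = [1.0, 1.0]
f = [ci / (1 + ci) for ci in c]          # K c / (1 + K c) with K = 1
gamma_max = (sum(g * fi for fi, g in zip(f, gamma))
             / sum(fi ** 2 for fi in f))  # Equation [5]
langmuir_residuals = [g - gamma_max * fi for fi, g in zip(f, gamma)]

print(sum(line_residuals))      # ~0: errors above and below the line balance
print(sum(langmuir_residuals))  # nonzero: errors above and below do not balance
```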