Alfisol, Item 003: LMMpro, version 2.0 - The Langmuir Optimization Program

Langmuir Equation (1916):

    Γ = Γ_max K c / (1 + K c)

    Γ = amount adsorbed
    Γ_max = maximum adsorption quantity
    K = reaction equilibrium constant
    c = aqueous equilibrium concentration
Michaelis-Menten Equation (1913):

    v = V_max S / (K_M + S)

    v = overall rate of reaction
    V_max = maximum reaction rate
    K_M = Michaelis-Menten constant
    S = substrate concentration
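Both equations describe the same hyperbolic form. A minimal sketch in Python, with illustrative (assumed) parameter values that are not part of LMMpro:

```python
# Evaluate both hyperbolic curves for illustrative, assumed parameter values.
GAMMA_MAX, K = 10.0, 2.0   # Langmuir: maximum adsorption, equilibrium constant
V_MAX, K_M = 5.0, 0.5      # Michaelis-Menten: maximum rate, M-M constant

def langmuir(c):
    """Amount adsorbed Γ at aqueous equilibrium concentration c."""
    return GAMMA_MAX * K * c / (1.0 + K * c)

def michaelis_menten(s):
    """Reaction rate v at substrate concentration s."""
    return V_MAX * s / (K_M + s)

print(langmuir(0.5))          # 5.0: half of GAMMA_MAX when c = 1/K
print(michaelis_menten(0.5))  # 2.5: half of V_MAX when S = K_M
```

Note the symmetry: each curve reaches half its maximum when the concentration equals the reciprocal of K (Langmuir) or K_M itself (Michaelis-Menten).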
Comments:
Note: the term "original graph" in the comments below refers to the graph generated by either of the two hyperbolic equations given here, namely the Langmuir Equation or the Michaelis-Menten Equation, plus the (c, Γ) or the (S, v) original data.
Lineweaver-Burk (1934):

    Langmuir: plot (1/Γ) versus (1/c); slope = 1/(Γ_max K), intercept = 1/Γ_max
    Michaelis-Menten: plot (1/v) versus (1/S); slope = K_M/V_max, intercept = 1/V_max
This regression method is extremely sensitive to data error.
It has a very strong bias for closely tracking the data in the lower left corner of the original graph.  
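The transform above can be sketched in a few lines of Python. The data points, the "true" parameter values, and the `linfit` helper are all hypothetical illustrations, not part of LMMpro:

```python
# Recover Langmuir parameters via the Lineweaver-Burk transform (1/Γ vs 1/c),
# using noise-free synthetic data from assumed values GAMMA_MAX = 10, K = 2.

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

GAMMA_MAX, K = 10.0, 2.0
cs = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]
gammas = [GAMMA_MAX * K * c / (1 + K * c) for c in cs]

slope, intercept = linfit([1 / c for c in cs], [1 / g for g in gammas])
gmax_hat = 1.0 / intercept   # intercept = 1 / Γ_max
k_hat = intercept / slope    # slope = 1 / (Γ_max · K)
```

With noise-free data the parameters are recovered exactly; the method's error sensitivity shows up once the small-c points (which become the large 1/c points) carry measurement noise.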
Eadie-Hofstee (1942, 1952):

    Langmuir: plot (Γ) versus (Γ/c); slope = -1/K, intercept = Γ_max
    Michaelis-Menten: plot (v) versus (v/S); slope = -K_M, intercept = V_max
This regression method has some sensitivity to data error.
It has some bias for closely tracking the data in the lower left corner of the original graph.
Note that if you invert the x,y-axes, then this would
convert into the Scatchard regression.
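A sketch of the Eadie-Hofstee transform in Python; the data, "true" values, and `linfit` helper are hypothetical, not part of LMMpro:

```python
# Recover Langmuir parameters via the Eadie-Hofstee transform (Γ vs Γ/c),
# using noise-free synthetic data from assumed values GAMMA_MAX = 10, K = 2.

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

GAMMA_MAX, K = 10.0, 2.0
cs = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]
gammas = [GAMMA_MAX * K * c / (1 + K * c) for c in cs]

slope, intercept = linfit([g / c for g, c in zip(gammas, cs)], gammas)
k_hat = -1.0 / slope   # slope = -1/K
gmax_hat = intercept   # intercept = Γ_max
```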
Scatchard (1949):

    Langmuir: plot (Γ/c) versus (Γ); slope = -K, intercept = K Γ_max
    Michaelis-Menten: plot (v/S) versus (v); slope = -1/K_M, intercept = V_max/K_M
This regression method has some sensitivity to data error.
It has some bias for closely tracking the data in the upper right corner of the original graph.
Note that if you invert the x,y-axes, then this would
convert into the Eadie-Hofstee regression.
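The Scatchard transform, sketched the same way; data, "true" values, and the `linfit` helper are hypothetical illustrations:

```python
# Recover Langmuir parameters via the Scatchard transform (Γ/c vs Γ),
# using noise-free synthetic data from assumed values GAMMA_MAX = 10, K = 2.

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

GAMMA_MAX, K = 10.0, 2.0
cs = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]
gammas = [GAMMA_MAX * K * c / (1 + K * c) for c in cs]

slope, intercept = linfit(gammas, [g / c for g, c in zip(gammas, cs)])
k_hat = -slope                # slope = -K
gmax_hat = intercept / k_hat  # intercept = K · Γ_max
```

Swapping the x and y argument lists in the `linfit` call recovers the Eadie-Hofstee fit, which is the axis-inversion relationship noted above.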
Langmuir (1918) / Hanes-Woolf (1932, 1957):

    Langmuir: plot (c/Γ) versus (c); slope = 1/Γ_max, intercept = 1/(K Γ_max)
    Michaelis-Menten: plot (S/v) versus (S); slope = 1/V_max, intercept = K_M/V_max
This regression method has very little sensitivity to data error.
It has some bias for closely tracking the data in the middle portion of the graph plus the upper right corner of the original graph.

This linear regression technique was first presented by Langmuir in 1918. Although he received the Nobel Prize in 1932, the method he used to optimize the hyperbolic equation apparently went unnoticed by others. It was later presented by Hanes (1932) and referred to as the Hanes-Woolf regression by Haldane (1957), and this regression method often carries their names. This software (LMMpro) refers to this regression method as the "Langmuir Linear Regression Method" when used to solve the Langmuir adsorption isotherm.
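The Langmuir/Hanes-Woolf transform, sketched with the same hypothetical data, "true" values, and `linfit` helper as the earlier examples:

```python
# Recover Langmuir parameters via the Langmuir/Hanes-Woolf transform (c/Γ vs c),
# using noise-free synthetic data from assumed values GAMMA_MAX = 10, K = 2.

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

GAMMA_MAX, K = 10.0, 2.0
cs = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]
gammas = [GAMMA_MAX * K * c / (1 + K * c) for c in cs]

slope, intercept = linfit(cs, [c / g for c, g in zip(cs, gammas)])
gmax_hat = 1.0 / slope     # slope = 1 / Γ_max
k_hat = slope / intercept  # intercept = 1 / (K · Γ_max)
```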
log-log:

    Langmuir: plot log[θ/(1 - θ)] versus log c, where θ = Γ/Γ_max; slope = 1, intercept = log K
    Michaelis-Menten: plot log[θ/(1 - θ)] versus log S, where θ = v/V_max; slope = 1, intercept = -log K_M
This regression method has very little sensitivity to data error.
This is the only linear regression method listed here that cannot practically be solved by hand. It must be solved via an iterative loop that searches for the equation's best maximum value (Γ_max or V_max) and, hence, the best θ values. The best value is the one that yields the smallest linear regression error. Note that the slope is fixed and set equal to 1.0.
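The iterative loop can be sketched as a simple grid search over trial Γ_max values; the grid spacing, search range, data, and "true" values here are all illustrative assumptions, not LMMpro's actual algorithm:

```python
import math

# Log-log fit with slope fixed at 1: try Γ_max values just above the largest
# datum, transform θ = Γ/Γ_max, and keep the trial with the smallest residual.
GAMMA_MAX, K = 10.0, 2.0   # assumed "true" values used to make test data
cs = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]
gammas = [GAMMA_MAX * K * c / (1 + K * c) for c in cs]

g_hi = max(gammas)
best = None                # (error, trial Γ_max, intercept)
for i in range(1, 2001):
    trial = g_hi * (1.0 + i / 1000.0)   # search up to 3x the largest datum
    xs = [math.log10(c) for c in cs]
    ys = [math.log10((g / trial) / (1.0 - g / trial)) for g in gammas]
    # Slope is fixed at 1, so the best intercept is the mean residual y - x.
    intercept = sum(y - x for x, y in zip(xs, ys)) / len(xs)
    err = sum((y - x - intercept) ** 2 for x, y in zip(xs, ys))
    if best is None or err < best[0]:
        best = (err, trial, intercept)

err, gmax_hat, intercept = best
k_hat = 10.0 ** intercept   # intercept = log10(K) for the Langmuir form
```

With noise-free data the residual drops to essentially zero at the true Γ_max; a real implementation would refine the grid or bracket the minimum rather than use a fixed step.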
Note that all linear and nonlinear regression methods are also sensitive to theory error. That is, a small deviation of the data from the Langmuir theory predictions or Michaelis-Menten theory predictions is not necessarily an expression of an error in the data collected. It may instead be due to a slightly incomplete mathematical expression of the true nature of the process involved.