Lagrange multipliers

[Image: Lagrange multiplier.png]

In mathematical optimization problems, the method of Lagrange multipliers, named after Joseph Louis Lagrange, is a technique for finding the local extrema of a function of several variables subject to one or more constraints. The method reduces a constrained problem in n variables to an unconstrained problem in n + k variables, where k is the number of constraints, which can then be solved by the usual derivative tests. It introduces a new unknown scalar variable, the Lagrange multiplier, for each constraint, and forms a linear combination involving the multipliers as coefficients.

The justification for the method can be carried out in the standard way using partial differentiation, with either total differentials or the chain rule. The object is to find, for some implicit function, the conditions under which the derivative of a function with respect to the independent variables equals zero at some set of inputs.

Introduction

Consider a two-dimensional case. Suppose we have a function, f(x,y), to maximize subject to

<math>g(x,y) = c,</math>

where c is a constant. We can visualize contours of f given by

<math>f(x,y)=d_n</math>

for various values of dn, and the contour of g given by g(x,y) = c. Suppose we walk along the contour g = c. Since the contours of f and g are in general distinct, following the contour g = c will cross many contours of f, and by moving along it we can increase or decrease the value of f. Only where the contour g = c we are following touches a contour of f tangentially, without crossing it, do we neither increase nor decrease the value of f. This occurs at the constrained local extrema and the constrained inflection points of f.

A familiar example can be obtained from weather maps, with their contours for temperature and pressure: the constrained extrema will occur where the superposed maps show touching lines (isopleths).

Geometrically, we translate the tangency condition into the statement that the gradients of f and g are parallel vectors at the constrained extremum. Introducing an unknown scalar, λ, we solve

<math>\nabla \left( f(x,y)+\lambda (g(x,y)-c) \right) = 0</math>

for <math>\lambda \ne 0</math>; the new unknown scalar λ is the Lagrange multiplier.

Once values for λ are determined, we are back to the original number of variables and so can go on to find extrema of the new unconstrained function

<math>F(x,y)=f(x,y)+\lambda (g(x,y)-c)</math>

in traditional ways. That is, <math>F(x,y) = f(x,y)</math> for all (x,y) satisfying the constraint, because <math>g(x,y)-c</math> equals zero on the constraint, while the zeros of <math>\nabla F(x,y)</math> sought here all lie on <math>g(x,y)=c</math>.
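
For a simple illustration, take f(x,y) = x + y and the constraint x² + y² = 1 (so g(x,y) = x² + y² and c = 1); the condition ∇F = 0 reads

<math>\frac{\partial F}{\partial x} = 1 + 2\lambda x = 0, \qquad \frac{\partial F}{\partial y} = 1 + 2\lambda y = 0,</math>

so x = y = −1/(2λ). Substituting into the constraint gives λ = ±1/√2 and hence (x,y) = (∓1/√2, ∓1/√2): a constrained maximum f = √2 at (1/√2, 1/√2) and a constrained minimum f = −√2 at (−1/√2, −1/√2). At either point the gradients ∇f = (1,1) and ∇g = (2x, 2y) are indeed parallel.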

The method of Lagrange multipliers

Let f be a function defined on Rn, and let the constraints be given by gk(x) = 0 (perhaps by moving the constant to the left, as in gk(x) - c = 0). Now, define the Lagrangian, Λ, as

<math>\Lambda(\mathbf x, \boldsymbol \lambda) = f + \sum_k \lambda_k g_k.</math>

Observe that both the optimization criterion f and the constraints gk are compactly encoded as stationary points of the Lagrangian:

<math>\nabla_{\mathbf x} \Lambda = 0 \Leftrightarrow \nabla_{\mathbf x} f = - \sum_k \lambda_k \nabla_{\mathbf x} g_k,</math>

and

<math>\nabla_{\boldsymbol \lambda} \Lambda = 0 \Leftrightarrow g_k = 0.</math>
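
The algebra can also be checked by machine. The following sketch (using the SymPy library and, as an assumed example, the same small problem as in the introduction, f = x + y with g = x² + y² − 1 = 0) builds Λ and solves ∇Λ = 0 directly:

<pre>
# A minimal SymPy sketch: form the Lagrangian for f = x + y subject to
# g = x^2 + y^2 - 1 = 0 and solve the stationarity conditions.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x + y
g = x**2 + y**2 - 1                        # constraint written in the form g = 0

Lagrangian = f + lam * g
gradient = [sp.diff(Lagrangian, v) for v in (x, y, lam)]
solutions = sp.solve(gradient, [x, y, lam], dict=True)
print(solutions)
# Two stationary points: x = y = 1/sqrt(2) with lam = -1/sqrt(2) (the maximum)
# and x = y = -1/sqrt(2) with lam = 1/sqrt(2) (the minimum).
</pre>

The derivative with respect to λ simply reproduces the constraint, exactly as in the second condition above.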

Often the Lagrange multipliers have an interpretation as some salient quantity of interest. To see why this might be the case, observe that:

<math>\frac{\partial \Lambda}{\partial {g_k}} = \lambda_k.</math>

Thus, λk is the rate of change of the quantity being optimized with respect to the constraint variable. For example, in Lagrangian mechanics the equations of motion are derived by finding stationary points of the action, the time integral of the difference between kinetic and potential energy; the force on a particle due to a scalar potential, <math>\mathbf{F} = -\nabla V</math>, can then be interpreted as a Lagrange multiplier determining the change in action (transfer of potential to kinetic energy) following a variation in the particle's constrained trajectory. In economics, the optimal profit of a player is calculated subject to a constrained space of actions, and a Lagrange multiplier is the value of relaxing a given constraint (e.g. through bribery or other means).
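
For a quick check of this interpretation, return to the introductory example with a variable constraint level, g(x,y) = x² + y² = c. The constrained maximum value is

<math>f^*(c) = \sqrt{2c}, \qquad \frac{df^*}{dc} = \frac{1}{\sqrt{2c}},</math>

while the multiplier found at the maximum is λ = −1/√(2c): its magnitude is exactly the rate at which the optimal value responds to a change in the constraint level c, with the sign determined by whether the Lagrangian is written as f + λ(g − c) or f − λ(g − c).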

The method of Lagrange multipliers is generalized by the Karush-Kuhn-Tucker conditions.

Example

Suppose we wish to find the discrete probability distribution with maximal information entropy. The entropy, as a function of the probabilities, is

<math>f(p_1,p_2,\ldots,p_n) = -\sum_{k=1}^n p_k\log_2 p_k.</math>

Of course, the sum of these probabilities equals 1, so our constraint is g(p) = 1 with

<math>g(p_1,p_2,\ldots,p_n)=\sum_{k=1}^n p_k.</math>

We can use Lagrange multipliers to find the point of maximum entropy over all probability distributions p1, …, pn. For all k from 1 to n, we require that

<math>\frac{\partial}{\partial p_k}(f+\lambda g)=0,</math>

which gives

<math>\frac{\partial}{\partial p_k}\left(-\sum_{i=1}^n p_i \log_2 p_i + \lambda\sum_{i=1}^n p_i \right) = 0.</math>

Carrying out the differentiation of these n equations, we get

<math>-\left(\frac{1}{\ln 2}+\log_2 p_k \right) + \lambda = 0.</math>

This shows that all the pk are equal (because they depend on λ only). Using the constraint ∑k pk = 1, we find

<math>p_k = \frac{1}{n}.</math>

Hence, the uniform distribution is the distribution with the greatest entropy.
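
The result is easy to confirm numerically. The sketch below (an illustrative check only, using SciPy's SLSQP solver and an arbitrarily chosen n = 4) minimizes the negative entropy subject to the same constraint:

<pre>
# Numerical check of the entropy example: minimize -f(p) subject to sum(p) = 1;
# the optimum should be the uniform distribution p_k = 1/n.
import numpy as np
from scipy.optimize import minimize

n = 4

def neg_entropy(p):
    return float(np.sum(p * np.log2(p)))          # equals -f(p)

constraint = {'type': 'eq', 'fun': lambda p: np.sum(p) - 1.0}
bounds = [(1e-9, 1.0)] * n                        # keep each p_k positive for log2
p0 = np.array([0.7, 0.1, 0.1, 0.1])               # a deliberately non-uniform start

result = minimize(neg_entropy, p0, method='SLSQP',
                  bounds=bounds, constraints=[constraint])
print(result.x)                                   # each entry is close to 1/n = 0.25
</pre>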

For another example, see also derivation of the partition function.
