
    4 Handling Constraints

Engineering design optimization problems are very rarely unconstrained. The constraints in these problems are most often nonlinear, and therefore it is important that we learn about methods for the solution of nonlinearly constrained optimization problems.

    As we will see, such problems can be converted into a sequence of unconstrained problems and

    we can then use the methods of solution that we are already familiar with.

    Recall the statement of a general optimization problem,

minimize      f(x)
w.r.t.        x \in \mathbb{R}^n
subject to    \hat{c}_j(x) = 0, \quad j = 1, \dots, \hat{m}
              c_k(x) \ge 0, \quad k = 1, \dots, m


    Example 4.1: Graphical Solution of a Constrained Optimization Problem

minimize      f(x) = 4x_1^2 - x_1 - x_2 - 2.5
w.r.t.        x_1, x_2
subject to    c_1(x) = x_2^2 - 1.5 x_1^2 + 2 x_1 - 1 \ge 0,
              c_2(x) = x_2^2 + 2 x_1^2 - 2 x_1 - 4.25 \le 0
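This example can also be explored numerically. The sketch below (not part of the original notes; a minimal matplotlib script with arbitrarily chosen grid ranges) plots the objective contours and shades the region where both constraints are satisfied, so the constrained minimum can be located by inspection.

```python
import numpy as np
import matplotlib.pyplot as plt

# Objective and constraints of Example 4.1
f  = lambda x1, x2: 4 * x1**2 - x1 - x2 - 2.5
c1 = lambda x1, x2: x2**2 - 1.5 * x1**2 + 2 * x1 - 1      # feasible where c1 >= 0
c2 = lambda x1, x2: x2**2 + 2 * x1**2 - 2 * x1 - 4.25     # feasible where c2 <= 0

# Evaluate everything on a grid (ranges chosen to show the feasible region)
x1, x2 = np.meshgrid(np.linspace(-2, 3, 400), np.linspace(-3, 3, 400))

plt.contour(x1, x2, f(x1, x2), levels=30)                  # objective contours
plt.contour(x1, x2, c1(x1, x2), levels=[0], colors="red")  # boundary c1 = 0
plt.contour(x1, x2, c2(x1, x2), levels=[0], colors="blue") # boundary c2 = 0

# Shade the feasible region, i.e. c1 >= 0 and c2 <= 0
feasible = (c1(x1, x2) >= 0) & (c2(x1, x2) <= 0)
plt.contourf(x1, x2, feasible, levels=[0.5, 1], colors=["0.85"])

plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
```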


    4.1 Optimality Conditions for Constrained Problems

    The optimality conditions for nonlinearly constrained problems are important because they form

    the basis for algorithms for solving such problems.

    4.1.1 Nonlinear Equality Constraints

    Suppose we have the following optimization problem with equality constraints,

minimize      f(x)
w.r.t.        x \in \mathbb{R}^n
subject to    c_j(x) = 0, \quad j = 1, \dots, m

To solve this problem, we could solve for m components of x by using the equality constraints to express them in terms of the other components. The result would be an unconstrained problem with n - m variables. However, this procedure is only feasible for simple explicit functions.
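For a simple explicit constraint this elimination can even be done symbolically. The sketch below (not from the notes; it uses a made-up two-variable problem) solves the constraint for one variable, substitutes it into the objective, and applies the usual unconstrained first-order condition to the remaining variable.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

# Hypothetical example: minimize f = x1^2 + x2^2 subject to c = x1 + x2 - 1 = 0
f = x1**2 + x2**2
c = x1 + x2 - 1

# Eliminate x2 using the equality constraint: x2 = 1 - x1
x2_expr = sp.solve(c, x2)[0]
f_reduced = f.subs(x2, x2_expr)          # unconstrained problem in n - m = 1 variable

# Unconstrained first-order condition on the reduced problem
x1_star = sp.solve(sp.diff(f_reduced, x1), x1)[0]
x2_star = x2_expr.subs(x1, x1_star)

print(x1_star, x2_star)                  # -> 1/2, 1/2
```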

Lagrange devised a method to solve this problem...


At a stationary point, the total differential of the objective function has to be equal to zero, i.e.,

df = \frac{\partial f}{\partial x_1}\, dx_1 + \frac{\partial f}{\partial x_2}\, dx_2 + \cdots + \frac{\partial f}{\partial x_n}\, dx_n = \nabla f^T dx = 0.    (4.1)

For a feasible point, the total differential of the constraints (c_1, \dots, c_m) must also be zero, and so

dc_j = \frac{\partial c_j}{\partial x_1}\, dx_1 + \cdots + \frac{\partial c_j}{\partial x_n}\, dx_n = \nabla c_j^T dx = 0.    (4.2)

Lagrange suggested that one could multiply each constraint variation by a scalar \lambda_j and subtract it from the objective function,

df - \sum_{j=1}^{m} \lambda_j\, dc_j = 0 \quad \Rightarrow \quad \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} - \sum_{j=1}^{m} \lambda_j \frac{\partial c_j}{\partial x_i} \right) dx_i = 0    (4.3)

Note that the components of the variation vector dx are independent and arbitrary, since we have already accounted for the constraints. Thus, for this equation to be satisfied, we need a vector \lambda such that the expression inside the parentheses vanishes, i.e.,

\frac{\partial f}{\partial x_i} - \sum_{j=1}^{m} \lambda_j \frac{\partial c_j}{\partial x_i} = 0, \quad (i = 1, 2, \dots, n)    (4.4)
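Condition (4.4) can be made concrete with a small symbolic check (a sketch reusing the hypothetical elimination example above, not a result from the notes): pick \lambda_1 so that the condition holds for one coordinate, and verify that it then automatically holds for the other coordinate at the constrained optimum.

```python
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lambda")

# Hypothetical problem from the earlier sketch: f = x1^2 + x2^2, c = x1 + x2 - 1 = 0,
# whose constrained optimum is x* = (1/2, 1/2).
f = x1**2 + x2**2
c = x1 + x2 - 1
x_star = {x1: sp.Rational(1, 2), x2: sp.Rational(1, 2)}

# Choose lambda so that condition (4.4) holds for i = 1 ...
lam_star = sp.solve((sp.diff(f, x1) - lam * sp.diff(c, x1)).subs(x_star), lam)[0]

# ... and verify that it then also holds for i = 2 at the optimum.
residual = (sp.diff(f, x2) - lam_star * sp.diff(c, x2)).subs(x_star)
print(lam_star, residual)                # -> 1, 0
```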


We define the Lagrangian function as

\mathcal{L}(x, \lambda) = f(x) - \sum_{j=1}^{m} \lambda_j c_j(x) = f(x) - \lambda^T c(x)    (4.5)

If x is a stationary point of this function, then using the necessary conditions for unconstrained problems we obtain

\frac{\partial \mathcal{L}}{\partial x_i} = \frac{\partial f}{\partial x_i} - \sum_{j=1}^{m} \lambda_j \frac{\partial c_j}{\partial x_i} = 0, \quad (i = 1, \dots, n)

\frac{\partial \mathcal{L}}{\partial \lambda_j} = -c_j = 0, \quad (j = 1, \dots, m).

These first-order conditions are known as the Karush-Kuhn-Tucker (KKT) conditions and are necessary conditions for the optimum of a constrained problem.

Note that the Lagrangian function is defined such that, by seeking a stationary point with respect to both the design variables and the Lagrange multipliers, we recover the constraints of the original problem. We have transformed a constrained optimization problem of n variables and m constraints into an unconstrained problem of n + m variables.
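This transformation can be carried out numerically. The sketch below (not part of the notes; it uses scipy.optimize.root and an arbitrarily chosen equality-constrained problem) treats the n + m unknowns (x, \lambda) together and drives the gradient of the Lagrangian and the constraint to zero simultaneously.

```python
import numpy as np
from scipy.optimize import root

# Hypothetical equality-constrained problem, chosen only for illustration:
#   minimize f(x) = (x1 - 1)^2 + (x2 - 2)^2   subject to c(x) = x1^2 + x2^2 - 1 = 0

def kkt_system(z):
    x1, x2, lam = z
    # Gradient of L = f - lam * c with respect to x, plus the constraint itself
    dL_dx1 = 2 * (x1 - 1) - lam * 2 * x1
    dL_dx2 = 2 * (x2 - 2) - lam * 2 * x2
    c = x1**2 + x2**2 - 1
    return [dL_dx1, dL_dx2, c]

# Solve the (n + m)-dimensional system of equations for (x, lambda)
sol = root(kkt_system, x0=[0.5, 0.5, 0.0])
print(sol.x)   # approximately [1/sqrt(5), 2/sqrt(5), 1 - sqrt(5)]
```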


    Example 4.2: Problem with Single Equality Constraint

Consider the following equality-constrained problem:

minimize      f(x) = x_1 + x_2
subject to    c_1(x) = x_1^2 + x_2^2 - 2 = 0

By inspection we can see that the feasible region for this problem is a circle of radius \sqrt{2}. The solution x^* is obviously (-1, -1)^T. From any other point on the circle it is easy to find a way to move in the feasible region (the boundary of the circle) while decreasing f.

[Figure: the feasible circle x_1^2 + x_2^2 = 2 plotted in the (x_1, x_2) plane; both axes run from -2 to 2.]


    In this example, the Lagrangian is

\mathcal{L} = x_1 + x_2 - \lambda_1 (x_1^2 + x_2^2 - 2)    (4.6)

and the optimality conditions are

\nabla_x \mathcal{L} = \begin{bmatrix} 1 - 2\lambda_1 x_1 \\ 1 - 2\lambda_1 x_2 \end{bmatrix} = 0 \quad \Rightarrow \quad \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{2\lambda_1} \\[4pt] \frac{1}{2\lambda_1} \end{bmatrix}    (4.7)

\nabla_{\lambda_1} \mathcal{L} = -(x_1^2 + x_2^2 - 2) = 0 \quad \Rightarrow \quad \lambda_1 = \pm\frac{1}{2}    (4.8)

In this case \lambda_1 = -\tfrac{1}{2} corresponds to the minimum, and the positive value of the Lagrange multiplier corresponds to the maximum. We can distinguish these two situations by checking for positive definiteness of the Hessian of the Lagrangian.
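As a quick numerical illustration of this check (a sketch, not part of the notes), the Hessian of the Lagrangian for this example is \nabla^2_{xx}\mathcal{L} = -2\lambda_1 I, so its definiteness follows directly from the sign of \lambda_1; the snippet below confirms this for both candidate multipliers.

```python
import numpy as np

# Hessian of the Lagrangian for Example 4.2:
#   L = x1 + x2 - lam * (x1^2 + x2^2 - 2)  =>  Hessian w.r.t. x is -2 * lam * I
def lagrangian_hessian(lam):
    return -2.0 * lam * np.eye(2)

for lam in (-0.5, 0.5):
    eigvals = np.linalg.eigvalsh(lagrangian_hessian(lam))
    kind = "minimum" if np.all(eigvals > 0) else "maximum"
    print(f"lambda_1 = {lam:+.1f}: eigenvalues {eigvals} -> {kind}")

# lambda_1 = -0.5 gives a positive definite Hessian, i.e. the minimum at (-1, -1)
```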


Also note that at the solution the constraint normal \nabla c_1(x^*) is parallel to \nabla f(x^*), i.e., there is a scalar \lambda_1^* such that

\nabla f(x^*) = \lambda_1^* \nabla c_1(x^*).    (4.9)

We can derive this expression by examining the first-order Taylor series approximations to the objective and constraint functions. To retain feasibility with respect to c_1(x) = 0 we require that

c_1(x^* + d) = 0    (4.10)

c_1(x^* + d) = \underbrace{c_1(x^*)}_{=0} + \nabla c_1^T(x^*)\, d + \mathcal{O}(d^T d).    (4.11)

Linearizing this we get

\nabla c_1^T(x^*)\, d = 0.    (4.12)

We also know that a direction of improvement must result in a decrease in f, i.e.,

f(x^* + d) - f(x^*) < 0.    (4.13)

Thus to first order we require that

f(x^*) + \nabla f^T(x^*)\, d - f(x^*) < 0    (4.14)

\Rightarrow \quad \nabla f^T(x^*)\, d < 0.    (4.15)


A necessary condition for optimality is that there is no direction satisfying both of these conditions. The only way that such a direction cannot exist is if \nabla f(x^*) and \nabla c_1(x^*) are parallel, that is, if \nabla f(x^*) = \lambda_1 \nabla c_1(x^*) holds.

By defining the Lagrangian function

\mathcal{L}(x, \lambda_1) = f(x) - \lambda_1 c_1(x),    (4.16)

and noting that \nabla_x \mathcal{L}(x, \lambda_1) = \nabla f(x) - \lambda_1 \nabla c_1(x), we can state the necessary optimality condition as follows: at the solution x^* there is a scalar \lambda_1^* such that \nabla_x \mathcal{L}(x^*, \lambda_1^*) = 0.

Thus we can search for solutions of the equality-constrained problem by searching for a stationary point of the Lagrangian function. The scalar \lambda_1 is called the Lagrange multiplier for the constraint c_1(x) = 0.
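To close the loop on Example 4.2, the short check below (a sketch, not from the notes) evaluates the slope of f along the constraint boundary at the solution x^* = (-1, -1) and at another feasible point: at the solution the slope vanishes (\nabla f is parallel to \nabla c_1), while elsewhere a first-order feasible descent direction exists.

```python
import numpy as np

grad_f  = lambda x: np.array([1.0, 1.0])                 # f(x) = x1 + x2
grad_c1 = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])   # c1(x) = x1^2 + x2^2 - 2

def slope_along_constraint(x):
    gf, gc = grad_f(x), grad_c1(x)
    d = np.array([-gc[1], gc[0]])      # tangent to the constraint: grad_c1 . d = 0
    # If this slope is nonzero, either d or -d decreases f while staying feasible
    # to first order; at the optimum the slope is zero and no such direction exists.
    return gf @ d

print(slope_along_constraint(np.array([-1.0, -1.0])))       # solution: 0.0
print(slope_along_constraint(np.array([np.sqrt(2), 0.0])))  # feasible, not optimal: nonzero
```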