1
Searching a Linear Subspace
  • Lecture IV

2
Deriving Subspaces
  • There are several ways to derive the nullspace
    matrix (or kernel matrix).
  • The methodology developed in our last meeting
    is referred to as the Variable Reduction
    Technique.

3
  • The nullspace is then defined as
  • Let's start out with the matrix form

4
  • The nullspace then becomes
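
The transcript omits the equation images, so the following is a
minimal runnable sketch of the variable reduction construction,
assuming A is partitioned as ( B N ) with B square and nonsingular;
the matrix A and the function name are hypothetical.

    import numpy as np

    def variable_reduction_nullspace(A):
        # Partition A = [B | N] with B square and nonsingular; then
        # Z = [-B^{-1}N; I] satisfies A Z = 0, so the columns of Z
        # span the null space (kernel) of A.
        t, n = A.shape
        B, N = A[:, :t], A[:, t:]
        return np.vstack([-np.linalg.solve(B, N), np.eye(n - t)])

    A = np.array([[1.0, 2.0, 4.0]])       # hypothetical 1x3 example
    Z = variable_reduction_nullspace(A)
    print(np.allclose(A @ Z, 0))          # True: A Z = 0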

5
  • An alternative approach is to use the AQ
    factorization, which is related to the QR
    factorization.
  • This approach is based on transforming the
    original A matrix with a sequence of Householder
    transformations.

6
  • The Householder transformation is H = I -
    2ww'/(w'w), where w is a vector chosen to
    annihilate some terms of the original A matrix.
  • For any two distinct vectors a and b of equal
    norm, there exists a Householder matrix that can
    transform a into b.
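
A short sketch of this construction: householder() below (the name
is ours) builds H = I - 2ww'/(w'w) with w = a - b, which maps a onto
b whenever a and b have equal norm; the vectors are hypothetical.
Choosing b = (||a||, 0, 0)' is exactly how entries are annihilated on
the following slides.

    import numpy as np

    def householder(a, b):
        # H = I - 2ww'/(w'w) with w = a - b reflects a onto b;
        # this requires ||a|| = ||b|| (H is orthogonal) and a != b.
        a, b = np.asarray(a, float), np.asarray(b, float)
        w = a - b
        return np.eye(len(a)) - 2.0 * np.outer(w, w) / (w @ w)

    a = np.array([1.0, 2.0, 4.0])                # hypothetical row
    b = np.array([np.linalg.norm(a), 0.0, 0.0])  # same norm as a
    H = householder(a, b)
    print(np.round(H @ a, 10))                   # [4.5826..., 0, 0]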

7
  • The idea is that we can come up with a sequence
    of Householder transformations that will
    transform our original A matrix into a lower
    triangular L matrix and a zero matrix
  • that is, AQ = ( L 0 ), where Q is the product
    of the Householder transformations.

8
  • As a starting point, consider the first row of
    A.
  • Our objective is to annihilate the 2 (or to
    transform the matrix in such a way as to make
    the 2 a zero) and the 4.

9
  • Thus,

10
  • Now we create a Householder transformation to
    annihilate the 1.7203.
  • Multiplying,

11
  • Therefore,
  • The last column of this matrix is then the
    nullspace matrix.
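
The transcript does not reproduce the numerical matrix from slides
8-11, so this sketch uses a hypothetical 2x3 A. numpy's QR routine
applies Householder transformations internally; a complete QR
factorization of A' gives AQ = ( L 0 ), and the trailing columns of
Q form the nullspace matrix.

    import numpy as np

    A = np.array([[2.0, 4.0, 1.0],
                  [1.0, 3.0, 2.0]])      # hypothetical 2x3 matrix

    # Complete QR of A': A' = QR implies AQ = R' = ( L 0 ),
    # with L lower triangular, exactly as on slide 7.
    Q, R = np.linalg.qr(A.T, mode='complete')
    t = A.shape[0]

    Z = Q[:, t:]                     # trailing columns: nullspace matrix
    print(np.allclose(A @ Z, 0))             # True
    print(np.allclose(np.triu(A @ Q, 1), 0)) # AQ is lower triangular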

12
  • Linear Equality Constraints
  • The general optimization problem with linear
    equality constraints can be stated as minimizing
    f(x) subject to Ax = b.

13
  • This time, instead of searching over dimension
    n, we only have to search over dimension n - t,
    where t is the number of nonredundant equations
    in A.
  • In the vernacular of the problem, we want to
    decompose the vector x into a range-space
    portion, which is required to satisfy the
    constraints, and a null-space portion, which can
    be varied freely.

14
  • Specifically, x = Y xY + Z xZ, where Y xY
    denotes the range-space portion of x and Z xZ
    denotes the null-space portion of x.
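
A small sketch of this decomposition under the same hypothetical
data: Y and Z are the leading and trailing columns of Q from the
complete QR factorization of A', so AY is nonsingular and AZ = 0.

    import numpy as np

    A = np.array([[2.0, 4.0, 1.0],
                  [1.0, 3.0, 2.0]])     # hypothetical constraints Ax = b
    b = np.array([3.0, 1.0])

    Q, _ = np.linalg.qr(A.T, mode='complete')
    t = A.shape[0]
    Y, Z = Q[:, :t], Q[:, t:]           # range-space / null-space bases

    xY = np.linalg.solve(A @ Y, b)      # fixed by the constraints
    xZ = np.array([0.7])                # free: any value keeps Ax = b
    x = Y @ xY + Z @ xZ
    print(np.allclose(A @ x, b))        # True for every choice of xZ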

15
  • Algorithm LE (Model algorithm for solving LEP)
  • LE1. Test for convergence: If the conditions
    for convergence are satisfied, the algorithm
    terminates with xk.
  • LE2. Compute a feasible search direction:
    Compute a nonzero vector pz, the unrestricted
    direction of the search. The actual direction of
    the search is then pk = Z pz.

16
  • LE3. Compute a step length: Compute a positive
    αk for which f(xk + αk pk) < f(xk).
  • LE4. Update the estimate of the minimum: Set
    xk+1 = xk + αk pk and go back to LE1.
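
A minimal runnable sketch of Algorithm LE under stated assumptions:
the objective, constraints, and starting point are hypothetical; pz
is taken as the projected steepest-descent direction for concreteness
(the Newton choice is derived below); and LE3 uses simple
backtracking.

    import numpy as np

    def algorithm_LE(f, grad, A, b, x, tol=1e-8, max_iter=200):
        Q, _ = np.linalg.qr(A.T, mode='complete')
        Z = Q[:, A.shape[0]:]               # null-space basis, AZ = 0
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(Z.T @ g) < tol:  # LE1: convergence test
                break
            p = Z @ (-(Z.T @ g))      # LE2: feasible direction pk = Z pz
            a = 1.0                   # LE3: backtrack until descent
            while f(x + a * p) >= f(x):
                a *= 0.5
            x = x + a * p             # LE4: update and repeat
        return x

    f = lambda x: x @ x                     # hypothetical objective
    grad = lambda x: 2.0 * x
    A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])
    x0 = np.array([1.0, 0.0, 0.0])          # feasible: A x0 = b
    print(algorithm_LE(f, grad, A, b, x0))  # -> [1/3, 1/3, 1/3]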
  • Computation of the Search Direction
  • As is often the case in this course, the
    question of the search direction starts with the
    second-order Taylor series expansion. As in the
    unconstrained case, we derive the approximation
    of the objective function around some point xk
    as f(xk + p) ≈ f(xk) + g(xk)'p + (1/2)p'G(xk)p.

17
  • Substituting only feasible steps p = Z pz for
    all possible steps, we derive the same
    expression in terms of the null-space:
    f(xk + Z pz) ≈ f(xk) + g(xk)'Z pz +
    (1/2)pz'Z'G(xk)Z pz.
  • Solving for pz based on the Newton-Raphson
    concept, we derive much the same step as in the
    unconstrained optimization problem:
    (Z'G(xk)Z) pz = -Z'g(xk).
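
A sketch of the resulting computation (the function name is ours):
solve the projected Newton equations for pz, then map the step back
to the full space.

    import numpy as np

    def newton_search_direction(g, G, Z):
        # Solve the projected Newton equations (Z'GZ) pz = -Z'g,
        # then map back to the full space: pk = Z pz.
        pz = np.linalg.solve(Z.T @ G @ Z, -(Z.T @ g))
        return Z @ pz

    # Usage with the hypothetical data from the sketch above:
    A = np.array([[1.0, 1.0, 1.0]])
    Q, _ = np.linalg.qr(A.T, mode='complete'); Z = Q[:, 1:]
    x = np.array([1.0, 0.0, 0.0])
    p = newton_search_direction(2 * x, 2 * np.eye(3), Z)
    print(np.allclose(A @ p, 0))        # the step stays feasible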

18
  • As an example, assume that the maximization
    problem is

19
  • This problem has a relatively simple gradient
    vector and Hessian matrix

20
  • Let us start from the initial solution
  • To compute a feasible step

21
  • In this case
  • Hence, using the concept
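
Because the transcript omits the example's equations, here is a
hypothetical stand-in of the same shape: a concave quadratic
maximized subject to one linear equality constraint, which a single
null-space Newton step solves exactly.

    import numpy as np

    # Hypothetical example (not the lecture's actual numbers):
    # maximize f(x) = -x1^2 - x2^2 + x1 x2 + 3 x1  s.t.  x1 + x2 = 2.
    grad = lambda x: np.array([-2*x[0] + x[1] + 3.0, x[0] - 2*x[1]])
    G = np.array([[-2.0, 1.0],
                  [1.0, -2.0]])          # constant Hessian

    A = np.array([[1.0, 1.0]]); b = np.array([2.0])
    Q, _ = np.linalg.qr(A.T, mode='complete')
    Z = Q[:, 1:]                         # null-space basis of A

    x = np.array([2.0, 0.0])             # feasible initial solution
    pz = np.linalg.solve(Z.T @ G @ Z, -(Z.T @ grad(x)))
    x = x + Z @ pz                       # one feasible Newton step
    print(x, A @ x - b)                  # [1.5 0.5], still feasible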

22
  • Linear Inequality Constraints
  • The general optimization problem with linear
    inequality constraints can be written as
    minimizing f(x) subject to Ax ≥ b.
  • This problem differs from the equality-
    constrained problem in that some of the
    constraints may not be active at a given
    iteration, or may become active at the next
    iteration.

23
  • Algorithm LI
  • LI1. Test for convergence: If the conditions
    for convergence are met at xk, terminate.
  • LI2. Choose which logic to perform: Decide
    whether to continue minimizing in the current
    subspace or whether to delete a constraint from
    the working set. If a constraint is to be
    deleted, go to step LI6. If the same working
    set is to be retained, go on to step LI3.
  • LI3. Compute a feasible search direction:
    Compute a vector pk by applying the null-space
    method of the equality-constrained problem to
    the constraints in the working set.

24
  • LI4. Compute a step length: In this case, we
    must determine whether the optimum step length
    would violate a constraint. Specifically, αk
    equals the smaller of the traditional step
    length and min(γi), which is defined as the
    minimum distance to an inactive constraint. If
    the optimum step is less than the minimum
    distance to another constraint, go to LI7;
    otherwise go to LI5.
  • LI5. Add a constraint to the working set: If
    the optimum step is greater than the minimum
    distance to another constraint, you have to add,
    or make active, the constraint associated with
    min(γi). After adding this constraint, go to
    LI7.

25
  • LI6. Delete a constraint: If the marginal value
    of one of the Lagrange multipliers is negative,
    then the associated constraint is binding the
    objective function suboptimally and the
    constraint should be eliminated. Delete the
    constraint from the active set and return to
    LI1.
  • LI7. Update the estimate of the solution: Set
    xk+1 = xk + αk pk and go back to LI1.
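
A sketch of the LI6 test, assuming the Ax ≥ b convention: first-order
multiplier estimates solve A_w'λ = g at the subspace minimizer, and a
negative estimate flags a working-set constraint for deletion. The
function name and data are hypothetical.

    import numpy as np

    def constraint_to_delete(A_w, g, tol=1e-8):
        # Multiplier estimates from the working set: A_w' lam = g.
        lam, *_ = np.linalg.lstsq(A_w.T, g, rcond=None)
        i = int(np.argmin(lam))
        # A negative multiplier means the constraint binds the
        # objective suboptimally (LI6): report it for deletion.
        return i if lam[i] < -tol else None

    A_w = np.array([[1.0, 0.0]])    # working set: x1 >= 0 active
    g = np.array([-2.0, 0.0])       # gradient at the subspace minimizer
    print(constraint_to_delete(A_w, g))  # 0 -> delete x1 >= 0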

26
  • A significant portion of the discussion of the
    LI algorithm centered around the addition or
    elimination of an active constraint.
  • The concept is identical to the minimum ratio
    rule in linear programming. Specifically, the
    minimum ratio rule in linear programming
    identifies the equation (row) which must leave
    the solution in order to maintain feasibility.
    The rule is to select the row with the minimum
    positive ratio of the current right-hand side to
    the aij coefficient in the entering variable's
    column.

27
  • In the nonlinear problem, we define the
    analogous ratio γi = (bi - ai'xk)/(ai'pk) for
    each inactive constraint with ai'pk < 0; the
    smallest such γi gives the maximum feasible step
    along pk.
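
A sketch of this ratio test, again assuming the Ax ≥ b convention and
hypothetical data; it mirrors the LP minimum ratio rule row for row.

    import numpy as np

    def max_feasible_step(A, b, x, p, tol=1e-12):
        # gamma_i = (b_i - a_i'x) / (a_i'p) over rows with a_i'p < 0:
        # the step length at which constraint i becomes active.
        Ap = A @ p
        blocking = Ap < -tol          # rows the step moves toward
        if not blocking.any():
            return np.inf             # no constraint blocks the step
        return ((b - A @ x)[blocking] / Ap[blocking]).min()

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0]])        # hypothetical: x1 >= 0, x2 >= 0
    b = np.zeros(2)
    x = np.array([2.0, 1.0]); p = np.array([-1.0, -1.0])
    print(max_feasible_step(A, b, x, p))  # 1.0: x2 >= 0 is hit first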