Transcript and Presenter's Notes

Title: USING A PRIORI INFORMATION FOR CONSTRUCTING REGULARIZING ALGORITHMS


1
USING A PRIORI INFORMATION FOR CONSTRUCTING
REGULARIZING ALGORITHMS
  • Anatoly Yagola
  • Department of Mathematics, Faculty of Physics,
  • Moscow State University, Moscow 119899 Russia
  • E-mail: yagola@inverse.phys.msu.ru

2
Main publications
1. Tikhonov, A.N., Goncharsky, A.V., Stepanov, V.V. and Yagola, A.G. (1995). Numerical methods for the solution of ill-posed problems. Kluwer Academic Publishers, Dordrecht.
2. Tikhonov, A.N., Leonov, A.S. and Yagola, A.G. (1998). Nonlinear ill-posed problems. Chapman and Hall, London.
3. Kochikov, I.V., Kuramshina, G.M., Pentin, Yu.A. and Yagola, A.G. (1999). Inverse problems of vibrational spectroscopy. VSP, Utrecht, Tokyo.
3
Introduction
  • Az = u,  z ∈ Z, u ∈ U.   (1)
  • A : Z → U is a linear operator,
  • Z, U are linear normed spaces.
  • The problem (1) is called well-posed on the class of its admissible data if for any pair (A, u) from the set of admissible data the solution of (1)
  • exists,
  • is unique,
  • continuously depends on errors in A and u (is stable).

4
  • Stability means that if instead of (A, u) we are given admissible (A_h, u_δ) such that ‖A_h − A‖ ≤ h and ‖u_δ − u‖ ≤ δ, the approximate solution converges to the exact one as (h, δ) → 0. The numbers h and δ are error estimates for the approximate data (A_h, u_δ) of (1) with the exact data (A, u). Denote η = (h, δ). If at least one of the mentioned requirements is not met, then the problem (1) is called ill-posed.

5
  • As a generalized solution one often takes the so-called normal pseudosolution z̄ = A⁺u. It exists and is unique for any exact data of the problem (1) with u ∈ R(A) ⊕ R(A)^⊥. Here R(A) and R(A)^⊥ denote the range of the operator A and its orthogonal complement in U, and A⁺ stands for the operator pseudoinverse to A. Below we take z̄ to be the normal pseudosolution, i.e., z̄ = A⁺u.

6
What does it mean to solve an ill-posed problem?
  • Tikhonov answered: to solve an ill-posed problem means to produce a map (regularizing algorithm) R(u_δ, A_h, δ, h) such that it
  • brings an element z_η ∈ Z into correspondence with any data (u_δ, A_h, δ, h), ‖A_h − A‖ ≤ h, ‖u_δ − u‖ ≤ δ, of the problem (1);
  • has the convergence property z_η → z̄ as η = (δ, h) → 0.

7
  • All inverse problems may be divided into three groups:
  • well-posed problems,
  • ill-posed regularizable problems,
  • ill-posed nonregularizable problems.

8
Is it possible to construct a regularizing algorithm that does not depend on δ, h?
  • Theorem 1. Let R(u_δ, A_h) be a map of the set of admissible data (u_δ, A_h) into Z. If R is a regularizing algorithm (not depending explicitly on δ, h), then the map R is continuous on its domain.
  • Proof. The second condition in the definition of RA implies the convergence R(u_δ, A_h) → z̄ = A⁺u as (u_δ, A_h) → (u, A), valid for each pair of exact data (u, A). Hence the map R is continuous on its domain.

9
  • It is clear from Theorem 1 that a regularizing algorithm not using δ and h explicitly can only exist for problems (1) that are well-posed on the set of their data. The theorem generalizes the assertion proved by Bakushinskii. Tikhonov proved a similar theorem when he was studying ill-posed systems of linear algebraic equations (SLAE). As a result, the L-curve and GCV methods cannot be applied for the solution of ill-posed problems.

10
It is very curious that the most popular error-free methods cannot solve even well-posed problems! As the first example we consider the so-called L-curve method (P.C. Hansen). In this method the regularization parameter α in the Tikhonov functional is selected as the point of maximum curvature of the L-curve (ln‖A_h z_α − u_δ‖, ln‖z_α‖), α ≥ 0. But this method cannot be used for the solution of ill-posed problems because the L-curve doesn't depend on h and δ (see the theorem). Everybody can easily prove that this method is inapplicable to solving the simplest finite-dimensional well-posed problems.
11
Let us consider the equation z = 1. Here Z = U = R¹, A = I (the unit operator), u = 1. Let the approximate data be A_h = I and u_δ = 1 for any h and δ. Independently of h and δ, the regularization parameter selected by the L-curve method is α_L(A_h, u_δ) = 1. Therefore, the approximate solution is z_L = 0.5, and it doesn't converge to the exact solution z_e = 1 as h, δ → 0. Using the L-curve method we have received 0.5 instead of 1, independently of the errors! A numerical sketch of this example is given below.
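The following minimal sketch (not part of the original slides) reproduces this counterexample numerically. For the scalar problem the Tikhonov extremal is z_α = 1/(1 + α), so the L-curve can be tabulated on a grid of α values and its curvature maximum located; it lands at α_L ≈ 1 and z_L ≈ 0.5 no matter how small the data errors are.

```python
# Toy problem from the slide: Z = U = R^1, A = I, u = 1, exact data (h = delta = 0).
# The Tikhonov extremal is z_alpha = argmin (z - 1)^2 + alpha*z^2 = 1/(1 + alpha).
import numpy as np

alphas = np.logspace(-4, 4, 20001)
z = 1.0 / (1.0 + alphas)                  # Tikhonov extremals
x = np.log(np.abs(z - 1.0))               # ln ||A z_alpha - u||
y = np.log(np.abs(z))                     # ln ||z_alpha||

# curvature of the parametric curve (x(alpha), y(alpha))
dx, dy = np.gradient(x, alphas), np.gradient(y, alphas)
d2x, d2y = np.gradient(dx, alphas), np.gradient(dy, alphas)
kappa = np.abs(dx * d2y - dy * d2x) / (dx**2 + dy**2) ** 1.5

i = np.argmax(kappa)
print(f"alpha_L ~ {alphas[i]:.3f}, z_L ~ {z[i]:.3f}")   # ~1.000 and ~0.500
```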
12
For another popular form of the L-curve, (‖A_h z_α − u_δ‖², ‖z_α‖²), α ≥ 0, it is possible to prove that the method has a systematic error for all well-posed systems of linear algebraic equations (A. Leonov, A. Yagola). Another very popular error-free method is GCV, the generalized cross-validation method (G. Wahba), where α(A_h, u_δ) is found as the point of the global minimum of the function G(α) = ‖(A_h A_h* + αI)⁻¹ u_δ‖² [tr((A_h A_h* + αI)⁻¹)]⁻², α ≥ 0. This method is not applicable for the solution of ill-posed problems, including ill-posed systems of linear algebraic equations (see the theorem above). It is possible to construct well-posed systems of linear algebraic equations for which the GCV method fails. A sketch of G(α) for the toy problem above is given below.
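A small sketch (again, not from the slides) that evaluates G(α) in the matrix form quoted above. Applied to the same toy problem A = I, u = 1 in R¹ it returns the constant value 1 for every α, so the "global minimum" picks out no particular regularization parameter, and, in line with the theorem, G never reacts to the error levels h and δ.

```python
# G(alpha) = ||(A A^T + alpha I)^{-1} u||^2 / [tr (A A^T + alpha I)^{-1}]^2
import numpy as np

def gcv(A, u, alphas):
    AAt = A @ A.T
    n = AAt.shape[0]
    vals = []
    for a in alphas:
        M = np.linalg.inv(AAt + a * np.eye(n))
        vals.append(np.linalg.norm(M @ u) ** 2 / np.trace(M) ** 2)
    return np.array(vals)

A = np.array([[1.0]]); u = np.array([1.0])
alphas = np.logspace(-6, 6, 13)
print(gcv(A, u, alphas))   # identically 1: the minimizer is undefined and does
                           # not depend on the error levels h, delta
```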
13
Is it possible to estimate the error of an approximate solution of an ill-posed problem?
  • The answer is negative. The main and very important result was obtained by Bakushinskii.
  • Let R be a RA and denote by Δ(R, δ, z) the error of a solution of (1) at the point z using the algorithm R. If (1) is regularizable by a continuous map R and there is an error estimate which is uniform on a set D ⊂ Z, then the restriction of A⁻¹ to AD is continuous on AD.

14
  • The accuracy of the approximate solution z_η of the problem (1) could be estimated as ‖z_η − z̄‖ ≤ ε(η), where ε(η) does not depend on z̄ and the function ε(η) → 0 defines the convergence rate of z_η to z̄.
  • Pointwise and uniform error estimations should be distinguished.

15
  • Consider the results obtained by Vinokurov.
  • Let A be a linear continuous injective operator acting in a Banach space Z, and let the inverse operator A⁻¹ be unbounded on R(A). Suppose that ε(δ) is an arbitrary positive function such that ε(δ) → 0 as δ → 0, and R is an arbitrary method to solve the problem.
  • The following equality holds for all elements z except maybe for a first category set in Z:
    lim sup_{δ→0} Δ(R, δ, z) / ε(δ) = +∞.
  • A uniform error estimate can only exist on a first category subset in Z.

16
  • A compact set is a typical example of a first category set in a normed space Z. For such a set special regularizing algorithms may be used and a uniform error estimation may be constructed.
  • Clearly, a uniform error estimate exists only for well-posed problems.

17
A posteriori error estimation
  • For some ill-posed problems it is possible to find a so-called a posteriori error estimation.
  • Let A be an exact injective operator with a closed graph and let Z be a σ-compact space.
  • Introduce a function ε(z_η, u_δ, A_h, η) such that ‖z_η − z̄‖ ≤ ε(z_η, u_δ, A_h, η) for all sufficiently small η.
  • The function ε(z_η, u_δ, A_h, η) is an a posteriori error estimation for the problem (1) if ε(z_η, u_δ, A_h, η) → 0 as η → 0.

18
The generalized discrepancy method
  • Let Z, U be Hilbert spaces, D ⊂ Z be a closed convex set of a priori constraints such that z̄ ∈ D, and let A, A_h be bounded linear operators. On the set D introduce the Tikhonov functional
    M^α[z] = ‖A_h z − u_δ‖² + α‖z‖²,
    where α > 0 is a regularization parameter, and consider the extremal problem
    find min { M^α[z] : z ∈ D }.   (2)
  • For any u_δ ∈ U, α > 0 and bounded linear operator A_h the problem (2) is solvable and has a unique solution z_α^η. An unconstrained minimization sketch is given below.
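As an illustration only (not from the slides): in the unconstrained finite-dimensional case D = Z the minimizer of the Tikhonov functional is given by the normal equations. The kernel, the exact solution and the noise level below are hypothetical.

```python
# Minimal sketch of minimizing M^alpha[z] = ||A_h z - u_delta||^2 + alpha ||z||^2
# for D = Z, with A_h stored as a matrix.
import numpy as np

def tikhonov_extremal(A_h, u_delta, alpha):
    """Unique minimizer z_alpha of the Tikhonov functional via the normal equations."""
    n = A_h.shape[1]
    return np.linalg.solve(A_h.T @ A_h + alpha * np.eye(n), A_h.T @ u_delta)

# hypothetical ill-conditioned operator: a discretized integral operator with a smooth kernel
n = 100
s = np.linspace(0.0, 1.0, n)
A_h = np.exp(-(s[:, None] - s[None, :]) ** 2 / 0.02) / n
z_exact = np.sin(np.pi * s)
u_delta = A_h @ z_exact + 1e-4 * np.random.default_rng(0).standard_normal(n)

z_alpha = tikhonov_extremal(A_h, u_delta, alpha=1e-6)
print(np.linalg.norm(z_alpha - z_exact) / np.linalg.norm(z_exact))
```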

19
A priori choice of α(η)
  • A regularizing algorithm uses the extremal problem (2) with α = α(η) to construct z_η = z_α(η)^η such that z_η → z̄ as η → 0.
  • If A is an injective operator and α(η) → 0, (δ + h)²/α(η) → 0 as η → 0, then z_α(η)^η → z̄ as η → 0, i.e., there is an a priori choice of α(η); the rule is written out below.
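For reference, the classical a priori rule just described can be written compactly as follows (a standard statement; the slide's own formula did not survive extraction):

```latex
\[
  \alpha(\eta) \to 0, \qquad
  \frac{(\delta + h)^2}{\alpha(\eta)} \to 0
  \quad \text{as } \eta = (\delta, h) \to 0
  \quad \Longrightarrow \quad
  \bigl\| z^{\eta}_{\alpha(\eta)} - \bar{z} \bigr\|_Z \to 0 .
\]
```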

20
A posteriori choice of α
  • The incompatibility measure of (1) on D:  μ_η(u_δ, A_h) = inf { ‖A_h z − u_δ‖ : z ∈ D }.
  • Let it be computed with an error κ ≥ 0, i.e., instead of μ_η(u_δ, A_h) there is μ_η^κ(u_δ, A_h) such that μ_η(u_δ, A_h) ≤ μ_η^κ(u_δ, A_h) ≤ μ_η(u_δ, A_h) + κ.
  • The generalized discrepancy:  ρ_η^κ(α) = ‖A_h z_α^η − u_δ‖² − (δ + h‖z_α^η‖)² − (μ_η^κ(u_δ, A_h))².
  • The generalized discrepancy ρ_η^κ(α) is continuous and monotonically non-decreasing for α > 0.

21
  • The generalized discrepancy principle to choose the regularization parameter:
  • If the condition ‖u_δ‖² > δ² + (μ_η^κ(u_δ, A_h))² is not fulfilled, then z_η = 0 is an approximate solution of (1).
  • If the condition is fulfilled, then the generalized discrepancy has a positive zero α* and z_η = z_α*^η.
  • If A is an injective operator, then z_η → z̄ as η → 0. Otherwise z_η → z̄, where z̄ is the normal solution of (1), i.e., the solution of (1) with the minimal norm. A root-finding sketch for α* is given below.
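A minimal sketch (not from the slides) of this a posteriori choice in the unconstrained, compatible case, where the incompatibility measure vanishes: the positive zero of the monotone generalized discrepancy is located by bisection on a logarithmic grid. The helper tikhonov_extremal repeats the sketch given earlier; the bracket [lo, hi] is an assumed search interval.

```python
import numpy as np

def tikhonov_extremal(A_h, u_delta, alpha):
    n = A_h.shape[1]
    return np.linalg.solve(A_h.T @ A_h + alpha * np.eye(n), A_h.T @ u_delta)

def generalized_discrepancy(alpha, A_h, u_delta, delta, h):
    # rho(alpha) with incompatibility measure mu = 0 (compatible problem)
    z = tikhonov_extremal(A_h, u_delta, alpha)
    return (np.linalg.norm(A_h @ z - u_delta) ** 2
            - (delta + h * np.linalg.norm(z)) ** 2)

def choose_alpha(A_h, u_delta, delta, h, lo=1e-14, hi=1e2, iters=200):
    """Bisection (on a log scale) for the positive zero of the generalized discrepancy."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if generalized_discrepancy(mid, A_h, u_delta, delta, h) < 0.0:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```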

22
  • If A, A_h are bounded linear operators, D is a closed convex set, z̄ ∈ D, 0 ∈ D, then the generalized discrepancy principle is equivalent to the generalized discrepancy method:
  • find min { ‖z‖ : z ∈ D, ‖A_h z − u_δ‖² ≤ (δ + h‖z‖)² + (μ_η^κ(u_δ, A_h))² }.

23
Inverse problem for the heat conduction equation.
  • There is a function w(x) = v(x, T); we want to find the function φ(x) = v(x, 0) such that v(x, t) → φ(x) as t → 0.
  • We may write that

24
  • The problem may be written in the form of the integral equation
    ∫ G(x, ξ, T) φ(ξ) dξ = w(x),
    where G(x, ξ, t) is the Green's function of the heat conduction equation.
  • The problem is solved for fixed values of the parameters and a given function w(x). A discretization sketch under stated assumptions follows.
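A sketch of this integral-equation formulation under explicit assumptions: unit diffusivity, the interval [0, π], zero Dirichlet boundary conditions, and recovery of the initial profile φ(x) = v(x, 0) from the snapshot w(x) = v(x, T). The slide's actual domain, boundary conditions and parameter values are not reproduced here; the initial profile below is hypothetical.

```python
import numpy as np

def heat_kernel_matrix(n=100, T=0.1, n_terms=200):
    # Under the stated assumptions, G(x, xi, t) = (2/pi) * sum_k exp(-k^2 t) sin(kx) sin(k xi).
    x = np.linspace(0.0, np.pi, n)
    G = np.zeros((n, n))
    for k in range(1, n_terms + 1):
        G += np.exp(-k**2 * T) * np.outer(np.sin(k * x), np.sin(k * x))
    # multiply by the series normalization 2/pi and the quadrature step d(xi)
    return (2.0 / np.pi) * G * (np.pi / (n - 1)), x

A_h, x = heat_kernel_matrix()
phi_exact = np.sin(x) + 0.5 * np.sin(2 * x)     # hypothetical initial profile phi(x)
w = A_h @ phi_exact                             # simulated data w(x) = v(x, T)
```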

25
  • Figure: the exact solution and the approximate solution.

26
The Euler equation
  • The Tikhonov functional M^α[z] is a strongly convex functional in a Hilbert space.
  • The necessary and sufficient condition for z_α^η to be a minimum point of M^α[z] on the set D of a priori constraints is (grad M^α[z_α^η], z − z_α^η) ≥ 0 for all z ∈ D.
  • If z_α^η is an interior point of D, then grad M^α[z_α^η] = 0.
  • We obtain the Euler equation, written out below.
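Written out explicitly for the functional M^α[z] = ‖A_h z − u_δ‖² + α‖z‖² (a standard computation, restated here because the slide's formula did not survive extraction):

```latex
\[
  \operatorname{grad} M^{\alpha}[z^{\eta}_{\alpha}] = 0
  \quad\Longleftrightarrow\quad
  \bigl(A_h^{*} A_h + \alpha I\bigr)\, z^{\eta}_{\alpha} = A_h^{*} u_{\delta}.
\]
```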

27
Sourcewise represented sets
  • Az = u.   (1)
  • A : Z → U is a linear injective operator.
  • Assume the next a priori information: the exact solution z̄ is sourcewise represented with a linear compact operator B : V → Z,
    z̄ = Bv̄, v̄ ∈ V.   (3)
  • Here V is a reflexive Banach space.
  • Suppose B is injective and is known exactly, ‖A_h − A‖ ≤ h, ‖u_δ − u‖ ≤ δ.

28
  • Set n = 1 and define the set Z_n = { z = Bv : ‖v‖_V ≤ n }.
  • Minimize the discrepancy ‖A_h z − u_δ‖ on Z_n.
  • If min { ‖A_h z − u_δ‖ : z ∈ Z_n } ≤ δ + h‖B‖n, then the solution is found; denote this n by n̂. Otherwise, we change n to n + 1 and reiterate the process.
  • If n̂ is found, then we define the approximate solution of (1) as an arbitrary solution z_η ∈ Z_n̂ of the inequality ‖A_h z − u_δ‖ ≤ δ + h‖B‖n̂. A sketch of this iteration is given below.
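A finite-dimensional sketch of this extending-compacts iteration (not from the slides). The stopping threshold δ + h‖B‖n below follows the estimate ‖A_h z − u_δ‖ ≤ δ + h‖z‖ on Z_n and is an assumption about the exact constant used on the slide; the optimizer is a generic constrained solver rather than the authors' own routine.

```python
import numpy as np
from scipy.optimize import minimize

def extending_compacts(A_h, B, u_delta, delta, h, n_max=50):
    m = B.shape[1]
    for n in range(1, n_max + 1):
        # minimize the discrepancy ||A_h B v - u_delta|| over the ball ||v|| <= n
        res = minimize(lambda v: np.linalg.norm(A_h @ (B @ v) - u_delta),
                       x0=np.zeros(m),
                       constraints=[{"type": "ineq",
                                     "fun": lambda v, n=n: n - np.linalg.norm(v)}])
        # assumed stopping threshold: h*||B||*n bounds h*||z|| on Z_n
        if res.fun <= delta + h * np.linalg.norm(B, 2) * n:
            return B @ res.x, n      # approximate solution z_eta and the radius found
    raise RuntimeError("no admissible radius found up to n_max")
```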

29
  • Theorem 2. The process described above converges: n̂ is finite. There exists a number n* (generally speaking, depending on z̄) such that n̂ ≤ n* for all η. Approximate solutions z_η strongly converge to z̄ as η → 0.
  • Proof. The ball { v : ‖v‖_V ≤ n } is a bounded closed set in V. The set Z_n is a compact in Z for any n, since B is a compact operator. Due to the Weierstrass theorem the continuous functional ‖A_h z − u_δ‖ attains its exact lower bound on Z_n.
  • Clearly, n̂ ≤ n* = [‖v̄‖_V] + 1, where [·] is the integer part of a number.

30
  • Therefore n* is a finite number, and the inequality n̂ ≤ n* for any η is evident. Thus, for all η the approximate solutions z_η belong to the compact set Z_n*, and the method coincides with the quasisolutions method for all sufficiently small positive η. The convergence z_η → z̄ follows from the general theory of ill-posed problems.
  • Remark. The method is a variant of the method of extending compacts.

31
  • Theorem 3. For the method described above there exists an a posteriori error estimate. It means that a functional ε(u_δ, A_h, η) exists such that ε(u_δ, A_h, η) → 0 as η → 0 and ‖z_η − z̄‖ ≤ ε(u_δ, A_h, η) at least for all sufficiently small positive η.
  • Remark 2. The existence of the a posteriori error estimation follows from the following. If by Z_B we denote the space of solutions of (1) that are sourcewise represented with the operator B, then Z_B = ∪_n Z_n. Since each Z_n is a compact set, Z_B is a σ-compact space.

32
  • An a posteriori error estimate is not an error estimate in the general meaning, which is impossible in principle for ill-posed problems. But it becomes an upper error estimate of the approximate solution for small errors η ≤ η₀, where η₀ depends on the exact solution z̄.

33
  • The operators A and B are known with errors. Let there be linear operators A_h, B_β such that ‖A_h − A‖ ≤ h, ‖B_β − B‖ ≤ β. Denote the vector of errors by η = (δ, h, β). For any positive integer n define a compact set
    Z_n^β = { z = B_β v : ‖v‖_V ≤ n }.
  • Find a minimal positive integer number n̂ such that the inequality bounding the discrepancy on Z_n̂^β by the total data error has a nonempty set of solutions.
  • Then the a posteriori error estimation is computed over this set of solutions.

34
Inverse problem for the heat conduction equation
  • For any moment of time t the temperature v(x, t) admits an explicit representation in terms of the Green's function.
  • We solve the problem using the method of extending compacts for fixed values of the parameters.

35
  • Figure: the approximate solution and its a posteriori error estimation.

36
Compact sets
  • There is the additional a priori information: the exact solution z̄ of (1) belongs to a compact set M, and A is a linear continuous injective operator.
  • As a set of approximate solutions of (1) it is possible to accept
    Z_η = { z ∈ M : ‖A_h z − u_δ‖ ≤ δ + h‖z‖ }.
  • Then z_η → z̄ as η → 0 in Z for any z_η ∈ Z_η.

37
  • After a finite-dimensional approximation we obtain that Z_η = { z : ‖A_h z − u_δ‖ ≤ δ + h‖z‖ } ∩ M, where M is a convex polyhedron for convex or monotonic functions and can be written as M = { z : Lz ≤ b }; here L is a matrix, and z, b are vectors.
  • To find z_η it is possible to use the method of conditional gradient or the method of projected conjugate gradients. A conditional-gradient sketch is given below.
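A sketch (not from the slides) of the conditional-gradient (Frank-Wolfe) method for minimizing the discrepancy ‖A_h z − u_δ‖² over one concrete choice of the polyhedron M: monotone non-increasing vectors bounded by an assumed constant C, encoded as Lz ≤ b. The linear subproblems are solved with scipy's linprog rather than a hand-written routine.

```python
import numpy as np
from scipy.optimize import linprog

def conditional_gradient(A_h, u_delta, C=1.0, iters=200):
    n = A_h.shape[1]
    # inequalities L z <= b encoding C >= z_1 >= z_2 >= ... >= z_n >= 0
    L = np.zeros((2 * n, n)); b = np.zeros(2 * n)
    L[0, 0], b[0] = 1.0, C                       # z_1 <= C
    for i in range(1, n):
        L[i, i], L[i, i - 1] = 1.0, -1.0         # z_i <= z_{i-1}
    L[n:, :] = -np.eye(n)                        # z_i >= 0
    z = np.zeros(n)                              # feasible starting point
    for k in range(iters):
        grad = 2.0 * A_h.T @ (A_h @ z - u_delta)
        s = linprog(grad, A_ub=L, b_ub=b, bounds=(None, None)).x   # linear subproblem
        z += 2.0 / (k + 2.0) * (s - z)           # classical Frank-Wolfe step size
    return z
```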

38
Error estimation
  • Find the minimum and the maximum values of each coordinate of z over Z_η. Denote them by z_i^min and z_i^max, i = 1, …, n.
  • Secondly, using the found values we construct functions z^lower and z^upper close to z̄ such that z^lower(x_i) ≤ z̄(x_i) ≤ z^upper(x_i) for each node x_i.
  • Therefore, we should minimize a linear function on a convex set. We may approximate the set by a convex polyhedron and solve a linear programming problem. The simplex method or the method of cutting convex polyhedrons may be used. A linear-programming sketch is given below.
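A sketch (not from the slides) of the coordinate-wise bounds: for each node i, the smallest and largest admissible values of z_i over a polyhedral approximation of Z_η. To keep every constraint linear, the discrepancy is measured here in the max-norm with an assumed level eps; L and b describe the a priori polyhedron M as in the previous sketch, and scipy's linprog (HiGHS) stands in for the simplex or polyhedron-cutting routines named on the slide.

```python
import numpy as np
from scipy.optimize import linprog

def coordinate_bounds(A_h, u_delta, eps, L, b):
    """Per-coordinate min/max of z over { z : |A_h z - u_delta|_inf <= eps, L z <= b }."""
    n = A_h.shape[1]
    A_ub = np.vstack([A_h, -A_h, L])
    b_ub = np.concatenate([u_delta + eps, eps - u_delta, b])
    z_low, z_up = np.zeros(n), np.zeros(n)
    for i in range(n):
        c = np.zeros(n); c[i] = 1.0
        z_low[i] = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None)).fun
        z_up[i] = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None)).fun
    return z_low, z_up
```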

39
Inverse problem for the heat conduction equation.
  • Let M be a set of convex upward functions satisfying given a priori bounds. Assume fixed values of the error levels δ and h, with the number of nodes equal to 20.

40
  • Figure: the exact solution and the functions z^lower, z^upper.

41
We shall now formulate general conditions for constructing regularizing algorithms for the solution of nonlinear ill-posed problems in finite-dimensional spaces. These conditions can be easily checked for an inverse vibrational problem, which we consider as a problem of finding the normal pseudosolution of a nonlinear ill-posed problem on a given set of constraints. We shall discuss typical a priori constraints.
42
In this section the main problem for us is an operator equation
  Az = u, z ∈ D,   (1)
where D is a nonempty set of constraints, Z and U are finite-dimensional normed spaces, and A belongs to a class of operators acting from D into U. Let us give a general formulation of Tikhonov's scheme of constructing a regularizing algorithm for solving the main problem for the operator Eq. (1) on D: find an element z ∈ D for which
  ρ_U(Az, u) = inf { ρ_U(Az', u) : z' ∈ D }.   (2)
43
We assume that to some element u there corresponds the nonempty set Z* ⊂ D of quasisolutions of Eq. (1), and that Z* may consist of more than one element. Furthermore, we suppose that a functional Ω(z) is defined on D and bounded below: Ω(z) ≥ Ω* > −∞. The Ω-optimal quasisolution problem for Eq. (1) is formulated as follows: find a z̄ ∈ Z* such that
  Ω(z̄) = inf { Ω(z) : z ∈ Z* }.
44
We suppose that instead of the unknown exact data (A, u) we are given approximate data (A_h, u_δ) which satisfy the following conditions:
  ρ_U(u_δ, u) ≤ δ,   ρ_U(A_h z, Az) ≤ ψ(h, Ω(z)) for all z ∈ D.
Here the function ψ represents the known measure of approximation of the precise operator A by the approximate operator A_h. We are also given the numerical characterizations δ, h of the closeness of (A_h, u_δ) to (A, u). The main problem is to construct from the approximate data (A_h, u_δ, δ, h) an element z_η ∈ D which converges to the set of Ω-optimal quasisolutions as η = (δ, h) → 0.
45
Let us formulate our basic assumptions.
1) The class of operators consists of the operators A continuous from D to U.
2) The functional Ω(z) is lower semicontinuous on D.
3) If K is an arbitrary number such that K ≥ Ω*, then the set Ω_K = { z ∈ D : Ω(z) ≤ K } is compact in Z.
4) The measure of approximation ψ(h, Ω) is assumed to be defined for h ≥ 0 and Ω ≥ Ω*, to depend continuously on all its arguments, to be monotonically increasing with respect to Ω for any h ≥ 0, and to satisfy the equality ψ(0, Ω) = 0.
46
Conditions 1)-3) guarantee that Tikhonov's scheme for constructing regularizing algorithms is applicable. It is based on using the smoothing functional (one common form is displayed below) in the conditional extremal problem: for fixed α > 0, find an element z^α ∈ D minimizing the smoothing functional over D.
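One common form of this smoothing functional, written here as an assumption because the slide's own formula did not survive extraction (ρ_U is the metric in U, Ω the stabilizing functional, and f the auxiliary function mentioned on the next slide):

```latex
\[
  M^{\alpha}[z] \;=\; f\bigl(\rho_U(A_h z,\, u_{\delta})\bigr) \;+\; \alpha\,\Omega(z),
  \qquad z \in D,\ \alpha > 0 .
\]
```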
47
Here f(x) is an auxiliary function; a common choice is f(x) = x². We denote the set of extremals of (5) which correspond to a given α > 0 by Z^α. Conditions 1)-3) imply that Z^α is nonempty.
48
The scheme of constructing an approximation to the set of Ω-optimal quasisolutions includes (i) the choice of the regularization parameter α = α(η), (ii) the fixation of the set Z^α(η) corresponding to α(η), and a special selection of an element z_η in this set, such that z_η approaches the set of Ω-optimal quasisolutions as η → 0.
49
It is in this way that the generalized analogs of a posteriori parameter choice strategies are used. They were introduced by A.S. Leonov. For their formulation we define some auxiliary functions and functionals of the parameter α, built from the extremals z^α, together with a generalized measure of incompatibility for nonlinear problems having the properties established in [2].
50
All these functions are generally many-valued. They have the following properties.
Lemma. The functions are single-valued and continuous everywhere for α > 0 except perhaps for at most a countable set of their common points of discontinuity of the first kind, which are points of multiple-valuedness: at such a point there exist at least two elements in the corresponding set of extremals Z^α on which the functions take different values. Some of the functions are monotonically nondecreasing and the others are nonincreasing.
The generalized discrepancy principle (GDP) for nonlinear problems consists of the following steps.

51
(i) The choice of the regularization parameter as a generalized solution α > 0 of the generalized discrepancy equation. Here and in the sequel we say that α is a generalized solution of an equation with a monotone function if α is an ordinary solution or if α is a jump-point of this function over zero.
52
(ii) The selection of an approximate solution z_η from the set Z^α(η) by means of the following selection rule. Let q > 1 and C > 1 be fixed constants, let α and qα be auxiliary regularization parameters, and let z₁ and z₂ be extremals of (4) for α_i, i = 1, 2. If the inequality of the rule holds for z₁ and z₂, then any element subject to the corresponding condition can be taken as the approximate solution; for instance, we can take z_η = z₁. But if it does not hold, then we choose z_η so as to satisfy the alternative condition, for example z_η = z₂.
53
Theorem. Suppose that for any quasisolution the corresponding inequality of the GDP holds. Then (a) the GDP equation has a positive generalized solution; (b) for any sequence η_k = (δ_k, h_k) such that η_k → 0 as k → ∞, the corresponding sequence of approximate solutions z_η_k, which is found by the GDP, converges to the set of Ω-optimal quasisolutions.
54
In many practical cases it is very convenient to take the auxiliary function f in a power form (r is a constant, r > 1). If it is known in addition that the operator equation has a solution on D, then the incompatibility measure can be omitted. The GDP in the linear and nonlinear cases has some optimal properties.
55
References
  1. Hadamard, J. (1923). Lectures on Cauchy's problem
    in linear partial differential equations, Yale
    Univ. Press, New Haven.
  2. Tikhonov, A. N., Leonov, A. S. and Yagola, A. G.
    (1998). Nonlinear ill-posed problems, Chapman and
    Hall, London.
  3. Tikhonov, A. N. (1963). Solution of incorrectly
    formulated problems and the regularization
    method, Sov. Math., Dokl., 5, 1035-1038.
  4. Tikhonov, A. N. (1963). Regularization of
    incorrectly posed problems, Sov. Math., Dokl., 4,
    1624-1627.
  5. Leonov, A. S. and Yagola, A. G. (1995). Can an ill-posed problem be solved if the data error is unknown? Moscow Univ. Physics Bull., 50(4), 25-28.
  6. Bakushinskii, A. B. (1984). Remark about the choice of the regularization parameter by the quasioptimality criterion and the ratio criterion, Comput. Math. Math. Phys., 24(8), 1258-1259.
  7. Bakushinskii, A. B. and Goncharskii, A. V. (1994). Ill-posed problems: theory and applications, Kluwer Academic Publishers, Dordrecht.

56
  1. Tikhonov, A. N., Goncharsky, A. V., Stepanov, V.
    V. and Yagola, A. G. (1995). Numerical methods
    for the solution of ill-posed problems, Kluwer
    Academic Publishers, Dordrecht.
  2. Vinokurov, V. A. (1979). Regularizable functions
    in topological spaces and inverse problems, Sov.
    Math., Dokl., 20, 569-573.
  3. Vinokurov, V. A. and Gaponenko, Yu. L. (1982). A
    posteriori estimates of the solutions of
    ill-posed inverse problems, Sov. Math., Dokl.,
    25, 325-328.
  4. Yagola, A. G. and Dorofeev, K. Yu. (2000). Sourcewise representation and a posteriori error estimates for ill-posed problems, In Ramm, A. G. et al., eds., Fields Institute Communications: Operator Theory and Its Applications, 25, 543-550, AMS, Providence, RI.
  5. Ivanov, V. K., Vasin, V. V. and Tanana, V. P.
    (1978). The theory of linear ill-posed problems
    and its applications, Nauka, Moscow (in Russian).
  6. Dombrovskaya, I. N. and Ivanov, V. K. (1965). Some questions of the theory of linear equations in abstract spaces, Sibirskii Mat. Zhurnal, 16, 499-508 (in Russian).

57
  1. Riesz, F. and Sz.-Nagy, B. (1990). Functional
    analysis, Dover Publications Inc., New York.
  2. Vainikko, G. M. (1982). Methods for the Solution
    of Linear Incorrectly Formulated Problems in
    Hilbert Spaces. Textbook, Tartu University Press,
    Tartu.
  3. Leonov, A. S. and Yagola, A. G. (1998). Special
    regularizing methods for ill-posed problems with
    sourcewise represented solutions, Inverse
    Problems, 14(6), 1539-1550.
  4. Titarenko, V. N. and Yagola, A. G. (2000). A method to cut convex polyhedrons and its applications to ill-posed problems, Numerical Methods and Programming, 1, section 1, 8-13, http://num-meth.srcc.msu.su.
  5. Goncharsky, A. V., Cherepashchuk, A. M., and
    Yagola, A. G. (1985). Ill-posed problems of
    astrophysics, Nauka, Moscow (in Russian).
  6. Goncharsky, A. V., Cherepashchuk, A. M., and
    Yagola, A. G. (1978). Numerical methods for the
    solution of inverse problems in astrophysics,
    Nauka, Moscow (in Russian).

58
  1. Rusov, V. D., Babikova, Yu. F. and Yagola, A. G. (1991). Image restoration in electronic microscopy autoradiography of surfaces, Energoatomizdat, Moscow (in Russian).
  2. Yagola, A. G., Kochikov, I. V., Kuramshina, G. M., and Pentin, Yu. A. (1999). Inverse problems of vibrational spectroscopy, VSP, Zeist.