    A Survey on Solution Methods for Integral Equations

    Ilias S. Kotsireas

    June 2008

    1 Introduction

Integral Equations arise naturally in applications, in many areas of Mathematics, Science and Technology, and have been studied extensively both at the theoretical and practical level. It is noteworthy that a MathSciNet keyword search on Integral Equations returns more than eleven thousand items. In this survey we plan to describe several solution methods for Integral Equations, illustrated with a number of fully worked out examples. In addition, we provide a bibliography, for the reader who would be interested in learning more about various theoretical and computational aspects of Integral Equations.

Our interest in Integral Equations stems from the fact that understanding and implementing solution methods for Integral Equations is of vital importance in designing efficient parameterization algorithms for algebraic curves, surfaces and hypersurfaces. This is because the implicitization algorithm for algebraic curves, surfaces and hypersurfaces based on a nullvector computation can be inverted to yield a parameterization algorithm.

Integral Equations are inextricably related to other areas of Mathematics, such as Integral Transforms, Functional Analysis and so forth. In view of this fact, and because we made a conscious effort to limit the length to under 50 pages, it was inevitable that the present work is not self-contained.

We thank ENTER. This project is implemented in the framework of Measure 8.3 of the programme Competitiveness, 3rd European Union Support Framework, and is funded as follows: 75% of public funding from the European Union, Social Fund, 25% of public funding from the Greek State, Ministry of Development, General Secretariat of Research and Technology, and from the private sector.

We thank Professor Ioannis Z. Emiris, University of Athens, Athens, Greece, for the warm hospitality in his Laboratory of Geometric & Algebraic Algorithms, ERGA.


    2 Linear Integral Equations

    2.1 General Form

    The most general form of a linear integral equation is

h(x) u(x) = f(x) + λ ∫_a^{b(x)} K(x, t) u(t) dt    (1)

The type of an integral equation can be determined via the following conditions:

f(x) = 0 . . . homogeneous
f(x) ≠ 0 . . . nonhomogeneous
b(x) = x . . . Volterra integral equation
b(x) = b . . . Fredholm integral equation
h(x) = 0 . . . of the 1st kind
h(x) = 1 . . . of the 2nd kind

For example, Abel's problem,

√(2g) f(x) = ∫_0^x φ(t)/√(x − t) dt,    (2)

is a nonhomogeneous Volterra equation of the 1st kind.

    2.2 Linearity of Solutions

If u1(x) and u2(x) are both solutions to a homogeneous integral equation, then any linear combination c1 u1(x) + c2 u2(x) is also a solution.

    2.3 The Kernel

    K(x, t) is called the kernel of the integral equation. The equation is called singular if:

    the range of integration is infinite

    the kernel becomes infinite in the range of integration

    2.3.1 Difference Kernels

If K(x, t) = K(x − t) (i.e. the kernel depends only on x − t), then the kernel is called a difference kernel. Volterra equations with difference kernels may be solved using either Laplace or Fourier transforms, depending on the limits of integration.


    2.3.2 Resolvent Kernels

    2.4 Relations to Differential Equations

Differential equations and integral equations are equivalent and may be converted back and forth:

d²y/dx² = F(x)   ⟺   y(x) = ∫_a^x ∫_a^s F(t) dt ds + c1 x + c2

    2.4.1 Reduction to One Integration

In most cases, the resulting integral may be reduced to an integration in one variable:

∫_a^x ∫_a^s F(t) dt ds = ∫_a^x F(t) ∫_t^x ds dt = ∫_a^x (x − t) F(t) dt

Or, in general:

∫_a^x ∫_a^{x1} · · · ∫_a^{x_{n−1}} F(x_n) dx_n dx_{n−1} · · · dx_1 = (1/(n−1)!) ∫_a^x (x − x1)^{n−1} F(x1) dx1

2.4.2 Generalized Leibniz formula

The Leibniz formula may be useful in converting and solving some types of equations.

d/dx ∫_{α(x)}^{β(x)} F(x, y) dy = ∫_{α(x)}^{β(x)} ∂F/∂x (x, y) dy + F(x, β(x)) dβ/dx − F(x, α(x)) dα/dx

    3 Transforms

    3.1 Laplace Transform

The Laplace transform is an integral transform of a function f(x) defined on (0, ∞). The general formula is:

F(s) ≡ L{f} = ∫_0^∞ e^{−sx} f(x) dx    (3)


The Laplace transform happens to be a Fredholm integral equation of the 1st kind with kernel K(s, x) = e^{−sx}.

    3.1.1 Inverse

The inverse Laplace transform involves complex integration, so tables of transform pairs are normally used to find both the Laplace transform of a function and its inverse.

    3.1.2 Convolution Product

When F1(s) ≡ L{f1} and F2(s) ≡ L{f2}, the convolution product is L{f1 ∗ f2} ≡ F1(s) F2(s), where f1 ∗ f2 = ∫_0^x f1(x − t) f2(t) dt. Here f1(x − t) is a difference kernel and f2(t) is a solution to the integral equation. Volterra integral equations with difference kernels where the integration is performed on the interval (0, x) may be solved using this method.

    3.1.3 Commutativity

The convolution product of the Laplace transform is commutative. That is:

f1 ∗ f2 = ∫_0^x f1(x − t) f2(t) dt = ∫_0^x f2(x − t) f1(t) dt = f2 ∗ f1

    3.1.4 Example

Find the inverse Laplace transform of F(s) = 1/(s² + 9) + 1/(s(s + 1)) using a table of transform pairs:

f(x) = L⁻¹{ 1/(s² + 9) + 1/(s(s + 1)) }
     = L⁻¹{ 1/(s² + 9) } + L⁻¹{ 1/(s(s + 1)) }
     = (1/3) sin 3x + L⁻¹{ 1/s − 1/(s + 1) }
     = (1/3) sin 3x + L⁻¹{ 1/s } − L⁻¹{ 1/(s + 1) }
     = (1/3) sin 3x + 1 − e^{−x}
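Such table lookups can also be reproduced by machine. The short Python sketch below uses the SymPy library (our own choice, not part of the survey) to check the computation above; the symbol names are arbitrary.

    import sympy as sp

    s, x = sp.symbols('s x', positive=True)
    F = 1/(s**2 + 9) + 1/(s*(s + 1))

    # SymPy attaches a Heaviside(x) factor, since the inverse transform
    # is only defined for x > 0.
    f = sp.inverse_laplace_transform(F, s, x)
    print(sp.simplify(f))   # sin(3*x)/3 + 1 - exp(-x), up to Heaviside(x)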


    3.2 Fourier Transform

The Fourier exponential transform is an integral transform of a function f(x) defined on (−∞, ∞). The general formula is:

F(ω) ≡ F{f} = ∫_{−∞}^{∞} e^{iωx} f(x) dx    (4)

The Fourier transform happens to be a Fredholm equation of the 1st kind with kernel K(ω, x) = e^{iωx}.

    3.2.1 Inverse

    The inverse Fourier transform is given by:

f(x) ≡ F⁻¹{F} = (1/2π) ∫_{−∞}^{∞} e^{−iωx} F(ω) dω    (5)

It is sometimes difficult to determine the inverse, so tables of transform pairs are normally used to find both the Fourier transform of a function and its inverse.

    3.2.2 Convolution Product

When F1(ω) ≡ F{f1} and F2(ω) ≡ F{f2}, the convolution product is F{f1 ∗ f2} ≡ F1(ω) F2(ω), where f1 ∗ f2 = ∫_{−∞}^{∞} f1(x − t) f2(t) dt. Here f1(x − t) is a difference kernel and f2(t) is a solution to the integral equation. Volterra integral equations with difference kernels where the integration is performed on the interval (−∞, ∞) may be solved using this method.

    3.2.3 Commutativity

The convolution product of the Fourier transform is commutative. That is:

f1 ∗ f2 = ∫_{−∞}^{∞} f1(x − t) f2(t) dt = ∫_{−∞}^{∞} f2(x − t) f1(t) dt = f2 ∗ f1

    3.2.4 Example

Find the Fourier transform of g(x) = sin(ax)/x using a table of integral transforms. The inverse transform of (sin aω)/ω is

f(x) = 1/2,  |x| ≤ a
f(x) = 0,    |x| > a

but since the Fourier transform is symmetrical, we know that

F{F(x)} = 2π f(−ω).

Therefore, let

F(x) = sin(ax)/x

and

f(ω) = 1/2,  |ω| ≤ a
f(ω) = 0,    |ω| > a

so that

F{ sin(ax)/x } = 2π f(ω) = π,  |ω| ≤ a
                          = 0,  |ω| > a.

3.3 Mellin Transform

The Mellin transform is an integral transform of a function f(x) defined on (0, ∞). The general formula is:

F(s) ≡ M{f} = ∫_0^∞ x^{s−1} f(x) dx    (6)

If x = e^{−t} and f(x) ≡ 0 for x < 0, then the Mellin transform is reduced to a Laplace transform:

∫_0^∞ (e^{−t})^{s−1} f(e^{−t}) (e^{−t}) dt = ∫_0^∞ e^{−st} f(e^{−t}) dt.

    3.3.1 Inverse

Like the inverse Laplace transform, the inverse Mellin transform involves complex integration, so tables of transform pairs are normally used to find both the Mellin transform of a function and its inverse.

    3.3.2 Convolution Product

When F1(s) ≡ M{f1} and F2(s) ≡ M{f2}, the convolution product is M{f1 ∗ f2} ≡ F1(s) F2(s), where f1 ∗ f2 = ∫_0^∞ f1(t) f2(x/t) (1/t) dt. Volterra integral equations with kernels of the form K(x/t), where the integration is performed on the interval (0, ∞), may be solved using this method.


    3.3.3 Example

Find the Mellin transform of e^{−ax}, a > 0. By definition:

F(s) = ∫_0^∞ x^{s−1} e^{−ax} dx.

Let ax = z, so that:

F(s) = a^{−s} ∫_0^∞ e^{−z} z^{s−1} dz.

From the definition of the gamma function Γ(s),

Γ(s) = ∫_0^∞ x^{s−1} e^{−x} dx,

the result is

F(s) = Γ(s)/a^s.

    4 Numerical Integration

General equation:

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt

Let S_n(x) = Σ_{k=0}^n K(x, t_k) u(t_k) Δ_k t.

Let u(x_i) = f(x_i) + λ Σ_{k=0}^n K(x_i, t_k) u(t_k) Δ_k t,   i = 0, 1, 2, . . . , n.

If equal increments are used, Δx = (b − a)/n.

    4.1 Midpoint Rule

∫_a^b f(x) dx ≈ ((b − a)/n) Σ_{i=1}^n f( (x_{i−1} + x_i)/2 )

4.2 Trapezoid Rule

∫_a^b f(x) dx ≈ ((b − a)/n) [ (1/2) f(x_0) + Σ_{i=1}^{n−1} f(x_i) + (1/2) f(x_n) ]


4.3 Simpson's Rule

∫_a^b f(x) dx ≈ ((b − a)/(3n)) [ f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + · · · + 4f(x_{n−1}) + f(x_n) ]
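The three rules translate directly into code. The Python sketch below is a minimal illustration; the test integrand e^{−t} and the number of subintervals are our own illustrative choices, not taken from the text.

    import math

    def midpoint(f, a, b, n):
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    def trapezoid(f, a, b, n):
        h = (b - a) / n
        return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

    def simpson(f, a, b, n):          # n must be even
        h = (b - a) / n
        s = f(a) + f(b)
        s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
        return s * h / 3

    f = lambda t: math.exp(-t)        # exact value on (0, 1): 1 - e^(-1)
    for rule in (midpoint, trapezoid, simpson):
        print(rule.__name__, rule(f, 0.0, 1.0, 8))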

    5 Volterra Equations

Volterra integral equations are a type of linear integral equation (1) where b(x) = x:

h(x) u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt    (7)

The equation is said to be of the first kind when h(x) = 0,

f(x) = λ ∫_a^x K(x, t) u(t) dt    (8)

and of the second kind when h(x) = 1,

u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt    (9)

5.1 Relation to Initial Value Problems

A general initial value problem is given by:

d²y/dx² + A(x) dy/dx + B(x) y(x) = g(x)    (10)

y(a) = c1,   y′(a) = c2

It is equivalent to the Volterra integral equation

y(x) = f(x) + ∫_a^x K(x, t) y(t) dt

where

f(x) = ∫_a^x (x − t) g(t) dt + (x − a)[c1 A(a) + c2] + c1

and

K(x, t) = (t − x)[B(t) − A′(t)] − A(t)


    5.2 Solving 2nd-Kind Volterra Equations

    5.2.1 Neumann Series

    The resolvent kernel for this method is:

Γ(x, t; λ) = Σ_{n=0}^∞ λ^n K_{n+1}(x, t)

where

K_{n+1}(x, t) = ∫_t^x K(x, y) K_n(y, t) dy   and   K_1(x, t) = K(x, t).

The series form of u(x) is

u(x) = u_0(x) + λ u_1(x) + λ² u_2(x) + · · ·

and since u(x) is a Volterra equation,

u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt

so

u_0(x) + λ u_1(x) + λ² u_2(x) + · · · = f(x) + λ ∫_a^x K(x, t) u_0(t) dt + λ² ∫_a^x K(x, t) u_1(t) dt + · · ·

Now, by equating like coefficients:

u_0(x) = f(x)
u_1(x) = ∫_a^x K(x, t) u_0(t) dt
u_2(x) = ∫_a^x K(x, t) u_1(t) dt
  ⋮
u_n(x) = ∫_a^x K(x, t) u_{n−1}(t) dt


Substituting u_0(x) = f(x):

u_1(x) = ∫_a^x K(x, t) f(t) dt

Substituting this for u_1(x):

u_2(x) = ∫_a^x K(x, s) ∫_a^s K(s, t) f(t) dt ds
       = ∫_a^x f(t) [ ∫_t^x K(x, s) K(s, t) ds ] dt
       = ∫_a^x f(t) K_2(x, t) dt

Continue substituting recursively for u_3, u_4, . . . to determine the resolvent kernel. Sometimes the resolvent kernel becomes recognizable after a few iterations as a Maclaurin or Taylor series and can be replaced by the function represented by that series. Otherwise numerical methods must be used to solve the equation.

Example: Use the Neumann series method to solve the Volterra integral equation of the 2nd kind:

u(x) = f(x) + λ ∫_0^x e^{x−t} u(t) dt    (11)

In this example, K_1(x, t) ≡ K(x, t) = e^{x−t}, so K_2(x, t) is found by:

K_2(x, t) = ∫_t^x K(x, s) K_1(s, t) ds
          = ∫_t^x e^{x−s} e^{s−t} ds
          = ∫_t^x e^{x−t} ds
          = (x − t) e^{x−t}

and K_3(x, t) is found by:

K_3(x, t) = ∫_t^x K(x, s) K_2(s, t) ds
          = ∫_t^x e^{x−s} (s − t) e^{s−t} ds
          = ∫_t^x (s − t) e^{x−t} ds
          = e^{x−t} ∫_t^x (s − t) ds
          = e^{x−t} [ s²/2 − ts ]_t^x
          = e^{x−t} ( (x² − t²)/2 − t(x − t) )
          = ((x − t)²/2) e^{x−t}

so by repeating this process the general equation for K_{n+1}(x, t) is

K_{n+1}(x, t) = ((x − t)^n / n!) e^{x−t}.

Therefore, the resolvent kernel for eq. (11) is

Γ(x, t; λ) = K_1(x, t) + λ K_2(x, t) + λ² K_3(x, t) + · · ·
           = e^{x−t} + λ (x − t) e^{x−t} + λ² ((x − t)²/2) e^{x−t} + · · ·
           = e^{x−t} [ 1 + λ(x − t) + λ² (x − t)²/2 + · · · ].

The series in brackets, 1 + λ(x − t) + λ²(x − t)²/2 + · · ·, is the Maclaurin series of e^{λ(x−t)}, so the resolvent kernel is

Γ(x, t; λ) = e^{x−t} e^{λ(x−t)} = e^{(λ+1)(x−t)}

and the solution to eq. (11) is

u(x) = f(x) + λ ∫_0^x e^{(λ+1)(x−t)} f(t) dt.
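The resolvent solution can be spot-checked numerically for a concrete choice of f and λ. The Python sketch below uses f(x) = x and λ = 0.5, both arbitrary illustrative choices, together with a simple trapezoid quadrature; it only confirms that u(x) built from the resolvent kernel satisfies eq. (11).

    import math

    lam = 0.5
    f = lambda x: x

    def integrate(g, a, b, n=400):               # composite trapezoid rule
        h = (b - a) / n
        return h * (0.5 * g(a) + sum(g(a + i * h) for i in range(1, n)) + 0.5 * g(b))

    def u(x):                                    # solution via the resolvent kernel
        return f(x) + lam * integrate(lambda t: math.exp((lam + 1) * (x - t)) * f(t), 0.0, x)

    for x in (0.25, 0.5, 1.0):
        rhs = f(x) + lam * integrate(lambda t: math.exp(x - t) * u(t), 0.0, x)
        print(x, u(x), rhs)                      # the last two columns should agree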

    5.2.2 Successive Approximations

1. make a reasonable 0th approximation, u_0(x)

2. let i = 1

3. u_i(x) = f(x) + λ ∫_0^x K(x, t) u_{i−1}(t) dt

4. increment i and repeat the previous step until a desired level of accuracy is reached

If f(x) is continuous on 0 ≤ x ≤ a and K(x, t) is continuous on both 0 ≤ x ≤ a and 0 ≤ t ≤ x, then the sequence u_n(x) will converge to u(x). The sequence may occasionally be recognized as a Maclaurin or Taylor series, otherwise numerical methods must be used to solve the equation.

Example: Use the successive approximations method to solve the Volterra integral equation of the 2nd kind:

u(x) = x − ∫_0^x (x − t) u(t) dt    (12)

In this example, start with the 0th approximation u_0(t) = 0. This gives u_1(x) = x,

u_2(x) = x − ∫_0^x (x − t) u_1(t) dt
       = x − ∫_0^x (x − t) t dt
       = x − [ x t²/2 − t³/3 ]_0^x
       = x − x³/3!

and

u_3(x) = x − ∫_0^x (x − t) u_2(t) dt
       = x − ∫_0^x (x − t)(t − t³/6) dt
       = x − x [ t²/2 − t⁴/24 ]_0^x + [ t³/3 − t⁵/30 ]_0^x
       = x − x ( x²/2 − x⁴/24 ) + ( x³/3 − x⁵/30 )
       = x − x³/2 + x⁵/24 + x³/3 − x⁵/30
       = x − x³/3! + x⁵/5!.

Continuing this process, the nth approximation is

u_n(x) = x − x³/3! + x⁵/5! − · · · + (−1)^n x^{2n+1}/(2n+1)!

which is clearly the Maclaurin series of sin x. Therefore the solution to eq. (12) is

u(x) = sin x.
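When no closed form is recognized, the same iteration can be carried out numerically on a grid. The following Python sketch (the grid size and the number of iterations are illustrative choices) reproduces u(x) ≈ sin x for eq. (12).

    import math

    n = 200                                     # grid points on [0, 1]
    xs = [i / n for i in range(n + 1)]
    u = [0.0] * (n + 1)                         # 0th approximation u0(x) = 0

    def integral(i, u):
        # trapezoid approximation of the integral of (x - t) u(t) from 0 to x = xs[i]
        if i == 0:
            return 0.0
        x, h = xs[i], 1.0 / n
        vals = [(x - xs[j]) * u[j] for j in range(i + 1)]
        return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

    for _ in range(10):                         # u_i(x) = x - integral of (x - t) u_{i-1}(t)
        u = [xs[i] - integral(i, u) for i in range(n + 1)]

    print(u[n], math.sin(1.0))                  # the two values should be close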

    5.2.3 Laplace Transform

If the kernel depends only on x − t, then it is called a difference kernel and denoted by K(x − t). Now the equation becomes:

u(x) = f(x) + λ ∫_0^x K(x − t) u(t) dt,

which contains the convolution product

∫_0^x K(x − t) u(t) dt = K ∗ u.

Define the Laplace transform pairs U(s) ↔ u(x), F(s) ↔ f(x), and K(s) ↔ K(x). Now, since

L{K ∗ u} = K U,

performing the Laplace transform on the integral equation results in

U(s) = F(s) + λ K(s) U(s)

U(s) = F(s) / (1 − λ K(s)),   λ K(s) ≠ 1,

and performing the inverse Laplace transform:

u(x) = L⁻¹{ F(s) / (1 − λ K(s)) },   λ K(s) ≠ 1.

Example: Use the Laplace transform method to solve eq. (11):

u(x) = f(x) + λ ∫_0^x e^{x−t} u(t) dt.

Let K(s) = L{e^x} = 1/(s − 1) and

U(s) = F(s) / (1 − λ/(s − 1))
     = F(s) (s − 1) / (s − 1 − λ)
     = F(s) (s − 1 − λ + λ) / (s − 1 − λ)
     = F(s) + λ F(s) / (s − 1 − λ)
     = F(s) + λ F(s) / (s − (λ + 1)).

The solution u(x) is the inverse Laplace transform of U(s):

u(x) = L⁻¹{ F(s) + λ F(s) / (s − (λ + 1)) }
     = L⁻¹{F(s)} + λ L⁻¹{ F(s) / (s − (λ + 1)) }
     = f(x) + λ L⁻¹{ 1/(s − (λ + 1)) } ∗ f(x)
     = f(x) + λ e^{(λ+1)x} ∗ f(x)
     = f(x) + λ ∫_0^x e^{(λ+1)(x−t)} f(t) dt

which is the same result obtained using the Neumann series method.

Example: Use the Laplace transform method to solve eq. (12):

u(x) = x − ∫_0^x (x − t) u(t) dt.

Let K(s) = L{x} = 1/s² and

U(s) = 1/s² − (1/s²) U(s)

U(s) = (1/s²) / (1 + 1/s²) = 1/(s² + 1).

The solution u(x) is the inverse Laplace transform of U(s):

u(x) = L⁻¹{ 1/(s² + 1) } = sin x


    which is the same result obtained using the successive approximations method.

    5.2.4 Numerical Solutions

Given a 2nd kind Volterra equation

u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt,

divide the interval of integration (a, x) into n equal subintervals of width Δt = (x_n − a)/n, n ≥ 1, where x_n = x. Let t_0 = a, x_0 = t_0 = a, x_n = t_n = x, t_j = a + jΔt = t_0 + jΔt, and x_i = x_0 + iΔt = a + iΔt = t_i.

Using the trapezoid rule,

∫_a^x K(x, t) u(t) dt ≈ Δt [ (1/2) K(x, t_0) u(t_0) + K(x, t_1) u(t_1) + · · · + K(x, t_{n−1}) u(t_{n−1}) + (1/2) K(x, t_n) u(t_n) ]

where Δt = (t_j − a)/j = (x − a)/n, t_j ≤ x, j ≥ 1, x = x_n = t_n. Now the equation becomes

u(x) ≈ f(x) + λΔt [ (1/2) K(x, t_0) u(t_0) + K(x, t_1) u(t_1) + · · · + K(x, t_{n−1}) u(t_{n−1}) + (1/2) K(x, t_n) u(t_n) ],   t_j ≤ x, j ≥ 1, x = x_n = t_n.

Since K(x, t) ≡ 0 when t > x (the integration ends at t = x), K(x_i, t_j) = 0 for t_j > x_i. Numerically, the equation becomes

u(x_i) = f(x_i) + λΔt [ (1/2) K(x_i, t_0) u(t_0) + K(x_i, t_1) u(t_1) + · · · + K(x_i, t_{j−1}) u(t_{j−1}) + (1/2) K(x_i, t_j) u(t_j) ],   i = 1, 2, . . . , n,   t_j ≤ x_i,

where u(x_0) = f(x_0).


Denote u_i ≡ u(t_i), f_i ≡ f(x_i), K_ij ≡ K(x_i, t_j), so the numeric equation can be written in condensed form as

u_0 = f_0,   u_i = f_i + λΔt [ (1/2) K_{i0} u_0 + K_{i1} u_1 + · · · + K_{i(j−1)} u_{j−1} + (1/2) K_{ij} u_j ],   i = 1, 2, . . . , n,   j ≤ i.

There are n + 1 equations:

u_0 = f_0
u_1 = f_1 + λΔt [ (1/2) K_{10} u_0 + (1/2) K_{11} u_1 ]
u_2 = f_2 + λΔt [ (1/2) K_{20} u_0 + K_{21} u_1 + (1/2) K_{22} u_2 ]
  ⋮
u_n = f_n + λΔt [ (1/2) K_{n0} u_0 + K_{n1} u_1 + · · · + K_{n(n−1)} u_{n−1} + (1/2) K_{nn} u_n ]

where a general relation can be rewritten as

u_i = ( f_i + λΔt [ (1/2) K_{i0} u_0 + K_{i1} u_1 + · · · + K_{i(i−1)} u_{i−1} ] ) / ( 1 − (λΔt/2) K_{ii} )

and can be evaluated by substituting u_0, u_1, . . . , u_{i−1} recursively from previous calculations.
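The recursion above can be implemented directly. The Python sketch below follows the condensed formula for u_i; the test problem is eq. (12) (f(x) = x, K(x, t) = x − t, λ = −1, exact solution sin x), and the grid size is an illustrative choice.

    import math

    def volterra2(f, K, lam, a, x_end, n):
        # solve u(x) = f(x) + lam * integral_a^x K(x, t) u(t) dt by the trapezoid rule
        dt = (x_end - a) / n
        t = [a + i * dt for i in range(n + 1)]
        u = [f(t[0])]                                   # u0 = f0
        for i in range(1, n + 1):
            s = 0.5 * K(t[i], t[0]) * u[0]
            s += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
            rhs = f(t[i]) + lam * dt * s
            u.append(rhs / (1.0 - lam * dt * 0.5 * K(t[i], t[i])))
        return t, u

    t, u = volterra2(lambda x: x, lambda x, tt: x - tt, -1.0, 0.0, 1.0, 100)
    print(u[-1], math.sin(1.0))                         # approximate vs exact at x = 1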

    5.3 Solving 1st-Kind Volterra Equations

    5.3.1 Reduction to 2nd-Kind

f(x) = λ ∫_0^x K(x, t) u(t) dt

If K(x, x) ≠ 0, then differentiating with the Leibniz formula gives:

df/dx = λ ∫_0^x ∂K(x, t)/∂x u(t) dt + λ K(x, x) u(x)

u(x) = (1/(λ K(x, x))) df/dx − ∫_0^x (1/K(x, x)) ∂K(x, t)/∂x u(t) dt

Let

g(x) = (1/(λ K(x, x))) df/dx   and   H(x, t) = −(1/K(x, x)) ∂K(x, t)/∂x.

Then

u(x) = g(x) + ∫_0^x H(x, t) u(t) dt

This is a Volterra equation of the 2nd kind.

Example: Reduce the following Volterra equation of the 1st kind to an equation of the 2nd kind and solve it:

sin x = λ ∫_0^x e^{x−t} u(t) dt.    (13)

In this case, λ = 1, K(x, t) = e^{x−t}, and f(x) = sin x. Since K(x, x) = 1 ≠ 0, eq. (13) becomes:

u(x) = cos x − ∫_0^x e^{x−t} u(t) dt.

This is a special case of eq. (11) with f(x) = cos x and λ = −1:

u(x) = cos x − ∫_0^x cos t dt
     = cos x − [sin t]_0^x
     = cos x − sin x.

    5.3.2 Laplace Transform

If K(x, t) = K(x − t), then a Laplace transform may be used to solve the equation. Define the Laplace transform pairs U(s) ↔ u(x), F(s) ↔ f(x), and K(s) ↔ K(x). Now, since

L{K ∗ u} = K U,

performing the Laplace transform on the integral equation results in

F(s) = λ K(s) U(s)

U(s) = F(s) / (λ K(s)),   K(s) ≠ 0,

and performing the inverse Laplace transform:

u(x) = (1/λ) L⁻¹{ F(s)/K(s) },   K(s) ≠ 0.


Example: Use the Laplace transform method to solve eq. (13):

sin x = λ ∫_0^x e^{x−t} u(t) dt.

Let K(s) = L{e^x} = 1/(s − 1) and substitute this and L{sin x} = 1/(s² + 1) to get

1/(s² + 1) = λ (1/(s − 1)) U(s).

Solve for U(s):

U(s) = (1/λ) (s − 1)/(s² + 1)
     = (1/λ) [ s/(s² + 1) − 1/(s² + 1) ].

Use the inverse transform to find u(x):

u(x) = (1/λ) L⁻¹{ s/(s² + 1) − 1/(s² + 1) }
     = (1/λ) (cos x − sin x)

which is the same result found using the reduction to 2nd kind method when λ = 1.

Example: Abel's equation (2) formulated as an integral equation is

√(2g) f(x) = ∫_0^x φ(t)/√(x − t) dt.

This describes the shape φ(y) of a wire in a vertical plane along which a bead must descend under the influence of gravity a distance y_0 in a predetermined time f(y_0). This problem is described by Abel's equation (2).

To solve Abel's equation, let F(s) be the Laplace transform of f(x) and Φ(s) be the Laplace transform of φ(x). Also let K(s) = L{1/√x} = √(π/s) (since Γ(1/2) = √π). The Laplace transform of eq. (2) is

√(2g) F(s) = √(π/s) Φ(s)

or, solving for Φ(s):

Φ(s) = √(2g/π) √s F(s).

So φ(x) is given by:

φ(x) = √(2g/π) L⁻¹{ √s F(s) }.

Since L⁻¹{√s} does not exist, the convolution theorem cannot be used directly to solve for φ(x). However, by introducing H(s) ≡ F(s)/√s, the equation may be rewritten as:

Φ(s) = √(2g/π) s H(s).

Now

h(x) = L⁻¹{H(s)} = L⁻¹{ (1/√s) F(s) } = (1/√π) ∫_0^x f(t)/√(x − t) dt

and since h(0) = 0,

dh/dx = L⁻¹{ s H(s) − h(0) } = L⁻¹{ s H(s) }.

Finally, use dh/dx to find φ(x) using the inverse Laplace transform:

φ(x) = √(2g/π) d/dx [ (1/√π) ∫_0^x f(t)/√(x − t) dt ]
     = (√(2g)/π) d/dx ∫_0^x f(t)/√(x − t) dt

which is the solution to Abel's equation (2).


    6 Fredholm Equations

Fredholm integral equations are a type of linear integral equation (1) with b(x) = b:

h(x) u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt    (14)

The equation is said to be of the first kind when h(x) = 0,

f(x) = λ ∫_a^b K(x, t) u(t) dt    (15)

and of the second kind when h(x) = 1,

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt    (16)

    6.1 Relation to Boundary Value Problems

A general boundary value problem is given by:

d²y/dx² + A(x) dy/dx + B(x) y = g(x)    (17)

y(a) = c1,   y(b) = c2

It is equivalent to the Fredholm integral equation

y(x) = f(x) + ∫_a^b K(x, t) y(t) dt

where

f(x) = c1 + ∫_a^x (x − t) g(t) dt + ((x − a)/(b − a)) [ c2 − c1 − ∫_a^b (b − t) g(t) dt ]

and

K(x, t) = ((x − b)/(b − a)) [ A(t) − (a − t)(A′(t) − B(t)) ]   if x > t
K(x, t) = ((x − a)/(b − a)) [ A(t) − (b − t)(A′(t) − B(t)) ]   if x < t

6.2 Fredholm Equations of the 2nd Kind With Separable Kernels

A separable kernel, also called a degenerate kernel, is a kernel of the form

K(x, t) = Σ_{k=1}^n a_k(x) b_k(t)    (18)

which is a finite sum of products a_k(x) b_k(t), where a_k(x) is a function of x only and b_k(t) is a function of t only.


    6.2.1 Nonhomogeneous Equations

Given the kernel in eq. (18), the general nonhomogeneous second kind Fredholm equation,

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt    (19)

becomes

u(x) = f(x) + λ ∫_a^b Σ_{k=1}^n a_k(x) b_k(t) u(t) dt
     = f(x) + λ Σ_{k=1}^n a_k(x) ∫_a^b b_k(t) u(t) dt

Define c_k as

c_k = ∫_a^b b_k(t) u(t) dt

(i.e. the integral portion). Multiply both sides by b_m(x), m = 1, 2, . . . , n:

b_m(x) u(x) = b_m(x) f(x) + λ Σ_{k=1}^n c_k b_m(x) a_k(x)

then integrate both sides from a to b:

∫_a^b b_m(x) u(x) dx = ∫_a^b b_m(x) f(x) dx + λ Σ_{k=1}^n c_k ∫_a^b b_m(x) a_k(x) dx

Define f_m and a_mk as

f_m = ∫_a^b b_m(x) f(x) dx

a_mk = ∫_a^b b_m(x) a_k(x) dx

so finally the equation becomes

c_m = f_m + λ Σ_{k=1}^n a_mk c_k,   m = 1, 2, . . . , n.

That is, it is a nonhomogeneous system of n linear equations in c1, c2, . . . , cn. The quantities f_m and a_mk are known since b_m(x), f(x), and a_k(x) are all given.


This system can be solved by finding all c_m, m = 1, 2, . . . , n, and substituting into u(x) = f(x) + λ Σ_{k=1}^n c_k a_k(x) to find u(x). In matrix notation, let

C = [c1, c2, . . . , cn]^T,   F = [f1, f2, . . . , fn]^T,   A = (a_mk), the n × n matrix of the coefficients a_mk.

Then

C = F + λ A C
C − λ A C = F
(I − λA) C = F

which has a unique solution if the determinant |I − λA| ≠ 0, and either no solution or infinitely many solutions when |I − λA| = 0.

Example: Use the above method to solve the Fredholm integral equation

u(x) = x + λ ∫_0^1 (x t² + x² t) u(t) dt.    (20)

The kernel is K(x, t) = x t² + x² t = Σ_{k=1}^2 a_k(x) b_k(t), where a1(x) = x, a2(x) = x², b1(t) = t², and b2(t) = t. Given that f(x) = x, define f1 and f2:

f1 = ∫_0^1 b1(t) f(t) dt = ∫_0^1 t³ dt = 1/4
f2 = ∫_0^1 b2(t) f(t) dt = ∫_0^1 t² dt = 1/3

so that the matrix F is

F = [ 1/4 ]
    [ 1/3 ].

Now find the elements of the matrix A:

a11 = ∫_0^1 b1(t) a1(t) dt = ∫_0^1 t³ dt = 1/4
a12 = ∫_0^1 b1(t) a2(t) dt = ∫_0^1 t⁴ dt = 1/5
a21 = ∫_0^1 b2(t) a1(t) dt = ∫_0^1 t² dt = 1/3
a22 = ∫_0^1 b2(t) a2(t) dt = ∫_0^1 t³ dt = 1/4


so that A is

A = [ 1/4  1/5 ]
    [ 1/3  1/4 ].

Therefore, C = F + λAC becomes

[ c1 ]   [ 1/4 ]     [ 1/4  1/5 ] [ c1 ]
[ c2 ] = [ 1/3 ] + λ [ 1/3  1/4 ] [ c2 ],

or:

[ 1 − λ/4    −λ/5  ] [ c1 ]   [ 1/4 ]
[ −λ/3     1 − λ/4 ] [ c2 ] = [ 1/3 ].

If the determinant of the leftmost term is not equal to zero, then there is a unique solution for c1 and c2 that can be evaluated by finding the inverse of that term. That is, if

| 1 − λ/4    −λ/5  |
| −λ/3     1 − λ/4 | ≠ 0

(1 − λ/4)² − λ²/15 ≠ 0

(240 − 120λ − λ²)/240 ≠ 0

240 − 120λ − λ² ≠ 0.

If this is true, then c1 and c2 can be found as:

c1 = (60 + λ)/(240 − 120λ − λ²),   c2 = 80/(240 − 120λ − λ²),

then u(x) is found from

u(x) = f(x) + λ Σ_{k=1}^n c_k a_k(x)
     = x + λ Σ_{k=1}^2 c_k a_k(x)
     = x + λ [ c1 a1(x) + c2 a2(x) ]
     = x + λ [ (60 + λ)x/(240 − 120λ − λ²) + 80x²/(240 − 120λ − λ²) ]
     = ( (240 − 60λ)x + 80λx² ) / (240 − 120λ − λ²),   240 − 120λ − λ² ≠ 0.
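The whole procedure for separable kernels is easy to automate with a computer algebra system. The SymPy sketch below (SymPy and the symbol names are our own choices) rebuilds F, A and C for eq. (20) and should reproduce the closed form obtained above.

    import sympy as sp

    x, t, lam = sp.symbols('x t lam')
    a = [x, x**2]                        # a_k(x)
    b = [t**2, t]                        # b_k(t)
    f = x

    F = sp.Matrix([sp.integrate(bk * f.subs(x, t), (t, 0, 1)) for bk in b])
    A = sp.Matrix(2, 2, lambda m, k: sp.integrate(b[m] * a[k].subs(x, t), (t, 0, 1)))
    C = (sp.eye(2) - lam * A).solve(F)   # (I - lam*A) C = F

    u = sp.simplify(f + lam * (C[0] * a[0] + C[1] * a[1]))
    print(u)   # should agree with ((240 - 60*lam)*x + 80*lam*x**2)/(240 - 120*lam - lam**2)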

    6.2.2 Homogeneous Equations

Given the kernel defined in eq. (18), the general homogeneous second kind Fredholm equation,

u(x) = λ ∫_a^b K(x, t) u(t) dt    (21)

becomes

u(x) = λ ∫_a^b Σ_{k=1}^n a_k(x) b_k(t) u(t) dt
     = λ Σ_{k=1}^n a_k(x) ∫_a^b b_k(t) u(t) dt

Define c_k, multiply and integrate, then define a_mk as in the previous section, so finally the equation becomes

c_m = λ Σ_{k=1}^n a_mk c_k,   m = 1, 2, . . . , n.

That is, it is a homogeneous system of n linear equations in c1, c2, . . . , cn. Again, the system can be solved by finding all c_m, m = 1, 2, . . . , n, and substituting into u(x) = λ Σ_{k=1}^n c_k a_k(x) to find u(x). Using the same matrices defined earlier, the equation can be written as

C = λ A C
C − λ A C = 0
(I − λA) C = 0.

Because this is a homogeneous system of equations, when the determinant |I − λA| ≠ 0 the trivial solution C = 0 is the only solution, and the solution to the integral equation is u(x) = 0. Otherwise, if |I − λA| = 0 then there may be zero or infinitely many solutions whose eigenfunctions correspond to solutions u1(x), u2(x), . . . , un(x) of the Fredholm equation.

Example: Use the above method to solve the Fredholm integral equation

u(x) = λ ∫_0^π ( cos²x cos 2t + cos 3x cos³t ) u(t) dt.    (22)


The kernel is K(x, t) = cos²x cos 2t + cos 3x cos³t = Σ_{k=1}^2 a_k(x) b_k(t), where a1(x) = cos²x, a2(x) = cos 3x, b1(t) = cos 2t, and b2(t) = cos³t. Now find the elements of the matrix A:

a11 = ∫_0^π b1(t) a1(t) dt = ∫_0^π cos 2t cos²t dt = π/4
a12 = ∫_0^π b1(t) a2(t) dt = ∫_0^π cos 2t cos 3t dt = 0
a21 = ∫_0^π b2(t) a1(t) dt = ∫_0^π cos³t cos²t dt = 0
a22 = ∫_0^π b2(t) a2(t) dt = ∫_0^π cos³t cos 3t dt = π/8

and follow the method of the previous example to find c1 and c2:

(1 − λπ/4) c1 = 0
(1 − λπ/8) c2 = 0

For c1 and c2 to not be the trivial solution, the determinant must be zero. That is,

| 1 − λπ/4      0     |
|    0      1 − λπ/8  | = (1 − λπ/4)(1 − λπ/8) = 0,

which has solutions λ1 = 4/π and λ2 = 8/π, which are the eigenvalues of eq. (22). There are two corresponding eigenfunctions u1(x) and u2(x) that are solutions to eq. (22). For λ1 = 4/π, c1 = c1 (i.e. it is a parameter) and c2 = 0. Therefore u1(x) is

u1(x) = (4/π) c1 cos²x.

To form a specific solution (which can be used to find a general solution, since eq. (22) is linear), let c1 = π/4, so that

u1(x) = cos²x.

Similarly, for λ2 = 8/π, c1 = 0 and c2 = c2, so u2(x) is

u2(x) = (8/π) c2 cos 3x.

To form a specific solution, let c2 = π/8, so that

u2(x) = cos 3x.


    6.2.3 Fredholm Alternative

The homogeneous Fredholm equation (21),

u(x) = λ ∫_a^b K(x, t) u(t) dt,

has the corresponding nonhomogeneous equation (19),

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt.

If the homogeneous equation has only the trivial solution u(x) = 0, then the nonhomogeneous equation has exactly one solution. Otherwise, if the homogeneous equation has nontrivial solutions, then the nonhomogeneous equation has either no solutions or infinitely many solutions, depending on the term f(x). If f(x) is orthogonal to every solution u_i(x) of the homogeneous equation, then the associated nonhomogeneous equation will have a solution. The two functions f(x) and u_i(x) are orthogonal if

∫_a^b f(x) u_i(x) dx = 0.

    6.2.4 Approximate Kernels

Often a nonseparable kernel may have a Taylor or other series expansion that is separable. This approximate kernel is denoted by M(x, t) ≈ K(x, t), and the corresponding approximate solution for u(x) is v(x). So the approximate solution to eq. (19) is given by

v(x) = f(x) + λ ∫_a^b M(x, t) v(t) dt.

The error involved is ε = |u(x) − v(x)| and can be estimated in some cases.

Example: Use an approximating kernel to find an approximate solution to

u(x) = sin x + ∫_0^1 (1 − x cos xt) u(t) dt.    (23)

The kernel K(x, t) = 1 − x cos xt is not separable, but a finite number of terms from its Maclaurin series expansion

1 − x ( 1 − x²t²/2! + x⁴t⁴/4! − · · · ) = 1 − x + x³t²/2! − x⁵t⁴/4! + · · ·

is separable in x and t. Therefore consider the first three terms of this series as the approximate kernel

M(x, t) = 1 − x + x³t²/2!.


The associated approximate integral equation in v(x) is

v(x) = sin x + ∫_0^1 ( 1 − x + x³t²/2! ) v(t) dt.    (24)

Now use the previous method for solving Fredholm equations with separable kernels to solve eq. (24):

M(x, t) = Σ_{k=1}^2 a_k(x) b_k(t)

where a1(x) = 1 − x, a2(x) = x³, b1(t) = 1, b2(t) = t²/2. Given that f(x) = sin x, find f1 and f2:

f1 = ∫_0^1 b1(t) f(t) dt = ∫_0^1 sin t dt = 1 − cos 1
f2 = ∫_0^1 b2(t) f(t) dt = ∫_0^1 (t²/2) sin t dt = (1/2) cos 1 + sin 1 − 1.

Now find the elements of the matrix A:

a11 = ∫_0^1 b1(t) a1(t) dt = ∫_0^1 (1 − t) dt = 1/2
a12 = ∫_0^1 b1(t) a2(t) dt = ∫_0^1 t³ dt = 1/4
a21 = ∫_0^1 b2(t) a1(t) dt = ∫_0^1 (t²/2)(1 − t) dt = 1/24
a22 = ∫_0^1 b2(t) a2(t) dt = ∫_0^1 t⁵/2 dt = 1/12.

Therefore, C = F + AC becomes

[ c1 ]   [ 1 − cos 1               ]   [ 1/2   1/4  ] [ c1 ]
[ c2 ] = [ (1/2) cos 1 + sin 1 − 1 ] + [ 1/24  1/12 ] [ c2 ],

or

[ 1/2    −1/4  ] [ c1 ]   [ 1 − cos 1               ]
[ −1/24  11/12 ] [ c2 ] = [ (1/2) cos 1 + sin 1 − 1 ].


The determinant is

| 1/2    −1/4  |
| −1/24  11/12 | = (1/2)(11/12) − (1/4)(1/24) = 43/96

which is non-zero, so there is a unique solution for C:

c1 = −(76/43) cos 1 + 64/43 + (24/43) sin 1 ≈ 1.0031
c2 = (20/43) cos 1 − 44/43 + (48/43) sin 1 ≈ 0.1674.

Therefore, the solution to eq. (24) is

v(x) = sin x + c1 a1(x) + c2 a2(x)
     ≈ sin x + 1.0031 (1 − x) + 0.1674 x³

which is an approximate solution to eq. (23). The exact solution is known to be u(x) = 1, so comparing some values of v(x):

x      0       0.25    0.5     0.75    1
u(x)   1       1       1       1       1
v(x)   1.0031  1.0023  1.0019  1.0030  1.0088
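The entries of this table are easy to recompute. The short Python sketch below evaluates c1, c2 and v(x) from the formulas above; it is only a numerical restatement of the worked example.

    import math

    c1 = -76/43 * math.cos(1) + 64/43 + 24/43 * math.sin(1)
    c2 =  20/43 * math.cos(1) - 44/43 + 48/43 * math.sin(1)

    def v(x):
        return math.sin(x) + c1 * (1 - x) + c2 * x**3

    print(round(c1, 4), round(c2, 4))                        # 1.0031 0.1674
    print([round(v(x), 4) for x in (0, 0.25, 0.5, 0.75, 1)])
    # [1.0031, 1.0023, 1.0019, 1.003, 1.0088]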

    6.3 Fredholm Equations of the 2nd Kind With Symmetric Kernels

A symmetric kernel is a kernel that satisfies

K(x, t) = K(t, x).

If the kernel is a complex-valued function, then to be symmetric it must satisfy K(x, t) = K*(t, x), where K* denotes the complex conjugate of K.

These types of integral equations may be solved by using a resolvent kernel Γ(x, t; λ) that can be found from the orthonormal eigenfunctions of the homogeneous form of the equation with a symmetric kernel.


    6.3.1 Homogeneous Equations

The eigenfunctions of a homogeneous Fredholm equation with a symmetric kernel are functions u_n(x) that satisfy

u_n(x) = λ_n ∫_a^b K(x, t) u_n(t) dt.

It can be shown that the eigenvalues of a symmetric kernel are real, and that the eigenfunctions u_n(x) and u_m(x) corresponding to two of the eigenvalues λ_n and λ_m, where λ_n ≠ λ_m, are orthogonal (see the earlier section on the Fredholm Alternative). It can also be shown that there is a finite number of eigenfunctions corresponding to each eigenvalue of a kernel if the kernel is square integrable on {(x, t) | a ≤ x ≤ b, a ≤ t ≤ b}, that is:

∫_a^b ∫_a^b K²(x, t) dx dt = B² < ∞.

Once the orthonormal eigenfunctions φ1(x), φ2(x), . . . of the homogeneous form of the equation are found, the resolvent kernel can be expressed as an infinite series:

Γ(x, t; λ) = Σ_{k=1}^∞ φ_k(x) φ_k(t) / (λ_k − λ),   λ ≠ λ_k,

so the solution of the 2nd kind Fredholm equation is

u(x) = f(x) + λ Σ_{k=1}^∞ a_k φ_k(x) / (λ_k − λ)

where

a_k = ∫_a^b f(x) φ_k(x) dx.

    6.4 Fredholm Equations of the 2nd Kind With General Kernels

    6.4.1 Fredholm Resolvent Kernel

Evaluate the Fredholm resolvent kernel Γ(x, t; λ) in

u(x) = f(x) + λ ∫_a^b Γ(x, t; λ) f(t) dt    (25)

Γ(x, t; λ) = D(x, t; λ) / D(λ)

where D(x, t; λ) is called the Fredholm minor and D(λ) is called the Fredholm determinant. The minor is defined as

D(x, t; λ) = K(x, t) + Σ_{n=1}^∞ ((−λ)^n / n!) B_n(x, t)

where

B_n(x, t) = C_n K(x, t) − n ∫_a^b K(x, s) B_{n−1}(s, t) ds,   B_0(x, t) = K(x, t)

and

C_n = ∫_a^b B_{n−1}(t, t) dt,   n = 1, 2, . . . ,   C_0 = 1,

and the determinant is defined as

D(λ) = Σ_{n=0}^∞ ((−λ)^n / n!) C_n,   C_0 = 1.

B_n(x, t) and C_n may each be expressed as a repeated integral where the integrand is a determinant:

B_n(x, t) = ∫_a^b ∫_a^b · · · ∫_a^b det [ K(x, t)   K(x, t1)   · · ·  K(x, tn)
                                          K(t1, t)  K(t1, t1)  · · ·  K(t1, tn)
                                            ⋮          ⋮        ⋱        ⋮
                                          K(tn, t)  K(tn, t1)  · · ·  K(tn, tn) ] dt1 dt2 . . . dtn

C_n = ∫_a^b ∫_a^b · · · ∫_a^b det [ K(t1, t1)  K(t1, t2)  · · ·  K(t1, tn)
                                    K(t2, t1)  K(t2, t2)  · · ·  K(t2, tn)
                                       ⋮          ⋮        ⋱        ⋮
                                    K(tn, t1)  K(tn, t2)  · · ·  K(tn, tn) ] dt1 dt2 . . . dtn

    6.4.2 Iterated Kernels

Define the iterated kernel K_i(x, t) as

K_i(x, y) ≡ ∫_a^b K(x, t) K_{i−1}(t, y) dt,   K_1(x, t) ≡ K(x, t),

so

φ_i(x) ≡ ∫_a^b K_i(x, y) f(y) dy

and

u_n(x) = f(x) + λ φ_1(x) + λ² φ_2(x) + · · · + λ^n φ_n(x)
       = f(x) + Σ_{i=1}^n λ^i φ_i(x).

u_n(x) converges to u(x) when |λ| B < 1, where

B² = ∫_a^b ∫_a^b K²(x, t) dx dt.

    6.4.3 Neumann Series

The Neumann series

u(x) = f(x) + Σ_{i=1}^∞ λ^i φ_i(x)

is convergent, and substituting for φ_i(x) as in the previous section it can be rewritten as

u(x) = f(x) + Σ_{i=1}^∞ λ^i ∫_a^b K_i(x, t) f(t) dt
     = f(x) + λ ∫_a^b [ Σ_{i=1}^∞ λ^{i−1} K_i(x, t) ] f(t) dt
     = f(x) + λ ∫_a^b Γ(x, t; λ) f(t) dt.

Therefore the Neumann resolvent kernel for the general nonhomogeneous Fredholm integral equation of the second kind is

Γ(x, t; λ) = Σ_{i=1}^∞ λ^{i−1} K_i(x, t).

    6.5 Approximate Solutions To Fredholm Equations of the 2nd Kind

The solution u(x) of the general nonhomogeneous Fredholm integral equation of the second kind, eq. (19), can be approximated by the partial sum

S_N(x) = Σ_{k=1}^N c_k φ_k(x)    (26)

of N linearly independent functions φ1, φ2, . . . , φN on the interval (a, b). The associated error ε(x, c1, c2, . . . , cN) depends on x and on the choice of the coefficients c1, c2, . . . , cN. Therefore, when substituting the approximate solution for u(x), the equation becomes

S_N(x) = f(x) + λ ∫_a^b K(x, t) S_N(t) dt + ε(x, c1, c2, . . . , cN).    (27)

Now N conditions must be found to give the N equations that will determine the coefficients c1, c2, . . . , cN.

    6.5.1 Collocation Method

Assume that the error term vanishes at the N points x1, x2, . . . , xN. This reduces the equation to N equations:

S_N(x_i) ≈ f(x_i) + λ ∫_a^b K(x_i, t) S_N(t) dt,   i = 1, 2, . . . , N,

where the coefficients of S_N can be found by substituting the N linearly independent functions φ1, φ2, . . . , φN and the values x1, x2, . . . , xN where the error vanishes, then performing the integration and solving for the coefficients.

Example: Use the collocation method to solve the Fredholm integral equation of the second kind

u(x) = x + ∫_{−1}^{1} x t u(t) dt.    (28)

Set N = 3 and choose the linearly independent functions φ1(x) = 1, φ2(x) = x, and φ3(x) = x². Therefore the approximate solution is

S_3(x) = Σ_{k=1}^3 c_k φ_k(x) = c1 + c2 x + c3 x².

Substituting into eq. (27) obtains

S_3(x) = x + ∫_{−1}^{1} x t ( c1 + c2 t + c3 t² ) dt + ε(x, c1, c2, c3)
       = x + x ∫_{−1}^{1} ( c1 t + c2 t² + c3 t³ ) dt + ε(x, c1, c2, c3)

and performing the integration results in

∫_{−1}^{1} ( c1 t + c2 t² + c3 t³ ) dt = [ c1 t²/2 + c2 t³/3 + c3 t⁴/4 ]_{−1}^{1}
                                       = ( c1/2 + c2/3 + c3/4 ) − ( c1/2 − c2/3 + c3/4 )
                                       = (2/3) c2

so eq. (27) becomes

c1 + c2 x + c3 x² = x + x (2/3) c2 + ε(x, c1, c2, c3)
                  = x ( 1 + (2/3) c2 ) + ε(x, c1, c2, c3).

Now, three equations are needed to find c1, c2, and c3. Assert that the error term is zero at three points, arbitrarily chosen to be x1 = 1, x2 = 0, and x3 = −1. This gives

c1 + c2 + c3 = 1 + (2/3) c2   ⟹   c1 + (1/3) c2 + c3 = 1,
c1 + 0 + 0 = 0   ⟹   c1 = 0,
c1 − c2 + c3 = −1 − (2/3) c2   ⟹   c1 − (1/3) c2 + c3 = −1.

Solve for c1, c2, and c3, giving c1 = 0, c2 = 3, and c3 = 0. Therefore the approximate solution to eq. (28) is S_3(x) = 3x. In this case, the exact solution is known to be u(x) = 3x, so the approximate solution happens to be equal to the exact solution due to the selection of the linearly independent functions.
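The collocation computation can also be delegated to a computer algebra system. The SymPy sketch below (SymPy itself is our own choice; the basis and collocation points are those of the example) sets up the residual of eq. (28) and solves for the coefficients.

    import sympy as sp

    x, t = sp.symbols('x t')
    c = sp.symbols('c1 c2 c3')
    phi = [sp.Integer(1), x, x**2]                  # linearly independent basis
    S = sum(ck * pk for ck, pk in zip(c, phi))      # S3(x) = c1 + c2*x + c3*x**2

    # residual (error term) of eq. (28): S(x) - x - integral_{-1}^{1} x*t*S(t) dt
    eps = S - x - sp.integrate(x * t * S.subs(x, t), (t, -1, 1))

    eqs = [sp.Eq(eps.subs(x, xi), 0) for xi in (1, 0, -1)]   # collocation points
    print(sp.solve(eqs, c))                          # {c1: 0, c2: 3, c3: 0}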


    6.5.2 Galerkin Method

Assume that the error term is orthogonal to N given linearly independent functions ψ1, ψ2, . . . , ψN of x on the interval (a, b). The N conditions therefore become

∫_a^b ψ_j(x) ε(x, c1, c2, . . . , cN) dx = ∫_a^b ψ_j(x) [ S_N(x) − f(x) − λ ∫_a^b K(x, t) S_N(t) dt ] dx = 0,   j = 1, 2, . . . , N,

which can be rewritten as either

∫_a^b ψ_j(x) [ S_N(x) − λ ∫_a^b K(x, t) S_N(t) dt ] dx = ∫_a^b ψ_j(x) f(x) dx,   j = 1, 2, . . . , N,

or

∫_a^b ψ_j(x) [ Σ_{k=1}^N c_k φ_k(x) − λ ∫_a^b K(x, t) Σ_{k=1}^N c_k φ_k(t) dt ] dx = ∫_a^b ψ_j(x) f(x) dx,   j = 1, 2, . . . , N,

after substituting for S_N(x) as before. In general the first set of linearly independent functions φ_k(x) is different from the second set ψ_j(x), but the same functions may be used for convenience.

Example: Use the Galerkin method to solve the Fredholm integral equation in eq. (28),

u(x) = x + ∫_{−1}^{1} x t u(t) dt,

using the same linearly independent functions φ1(x) = 1, φ2(x) = x, and φ3(x) = x², giving the approximate solution

S_3(x) = c1 + c2 x + c3 x².

Substituting into eq. (27) results in the error term being

ε(x, c1, c2, c3) = c1 + c2 x + c3 x² − x − ∫_{−1}^{1} x t ( c1 + c2 t + c3 t² ) dt.

The error must be orthogonal to three linearly independent functions, chosen to be ψ1(x) = 1, ψ2(x) = x, and ψ3(x) = x². For ψ1:

∫_{−1}^{1} 1 · [ c1 + c2 x + c3 x² − ∫_{−1}^{1} x t ( c1 + c2 t + c3 t² ) dt ] dx = ∫_{−1}^{1} x dx

∫_{−1}^{1} [ c1 + c2 x + c3 x² − x ∫_{−1}^{1} ( c1 t + c2 t² + c3 t³ ) dt ] dx = [ x²/2 ]_{−1}^{1}

∫_{−1}^{1} [ c1 + (1/3) c2 x + c3 x² ] dx = 1/2 − 1/2

[ c1 x + c2 x²/6 + c3 x³/3 ]_{−1}^{1} = 0

2 c1 + (2/3) c3 = 0.

For ψ2:

∫_{−1}^{1} x [ c1 + c2 x + c3 x² − ∫_{−1}^{1} x t ( c1 + c2 t + c3 t² ) dt ] dx = ∫_{−1}^{1} x² dx

∫_{−1}^{1} x [ c1 + (1/3) c2 x + c3 x² ] dx = [ x³/3 ]_{−1}^{1}

[ c1 x²/2 + c2 x³/9 + c3 x⁴/4 ]_{−1}^{1} = 1/3 + 1/3

(2/9) c2 = 2/3   ⟹   c2 = 3.

For ψ3:

∫_{−1}^{1} x² [ c1 + c2 x + c3 x² − ∫_{−1}^{1} x t ( c1 + c2 t + c3 t² ) dt ] dx = ∫_{−1}^{1} x³ dx

∫_{−1}^{1} x² [ c1 + (1/3) c2 x + c3 x² ] dx = [ x⁴/4 ]_{−1}^{1}

[ c1 x³/3 + c2 x⁴/12 + c3 x⁵/5 ]_{−1}^{1} = 1/4 − 1/4

(2/3) c1 + (2/5) c3 = 0.

Solving for c1, c2, and c3 gives c1 = 0, c2 = 3, and c3 = 0, resulting in the approximate solution being S_3(x) = 3x. This is the same result obtained using the collocation method, and also happens to be the exact solution to eq. (28) due to the selection of the linearly independent functions.

    6.6 Numerical Solutions To Fredholm Equations

A numerical solution to a general Fredholm integral equation can be found by approximating the integral by a finite sum (using the trapezoid rule) and solving the resulting simultaneous equations.

    6.6.1 Nonhomogeneous Equations of the 2nd Kind

In the general Fredholm equation of the second kind in eq. (19),

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt,

the interval (a, b) can be subdivided into n equal intervals of width Δt = (b − a)/n. Let t_0 = a, t_j = a + jΔt = t_0 + jΔt, and, since the variable is either t or x, let x_0 = t_0 = a, x_n = t_n = b, and x_i = x_0 + iΔt (i.e. x_i = t_i). Also denote u(x_i) as u_i, f(x_i) as f_i, and K(x_i, t_j) as K_ij. Now if the trapezoid rule is used to approximate the given equation, then:

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt ≈ f(x) + λΔt [ (1/2) K(x, t_0) u(t_0) + K(x, t_1) u(t_1) + · · · + K(x, t_{n−1}) u(t_{n−1}) + (1/2) K(x, t_n) u(t_n) ]


or, more tersely:

u(x) ≈ f(x) + λΔt [ (1/2) K(x, t_0) u_0 + K(x, t_1) u_1 + · · · + (1/2) K(x, t_n) u_n ].

There are n + 1 values of u_i, as i = 0, 1, 2, . . . , n. Therefore the equation becomes a set of n + 1 equations in u_i:

u_i = f_i + λΔt [ (1/2) K_{i0} u_0 + K_{i1} u_1 + · · · + K_{i(n−1)} u_{n−1} + (1/2) K_{in} u_n ],   i = 0, 1, 2, . . . , n,

that give the approximate solution to u(x) at x = x_i. The terms involving u may be moved to the left side of the equations, resulting in n + 1 equations in u_0, u_1, u_2, . . . , u_n:

( 1 − (λΔt/2) K_00 ) u_0 − λΔt K_01 u_1 − λΔt K_02 u_2 − · · · − (λΔt/2) K_0n u_n = f_0
−(λΔt/2) K_10 u_0 + ( 1 − λΔt K_11 ) u_1 − λΔt K_12 u_2 − · · · − (λΔt/2) K_1n u_n = f_1
  ⋮
−(λΔt/2) K_n0 u_0 − λΔt K_n1 u_1 − λΔt K_n2 u_2 − · · · + ( 1 − (λΔt/2) K_nn ) u_n = f_n.

This may also be written in matrix form:

K U = F,

where K is the matrix of coefficients

K = [ 1 − (λΔt/2) K_00     −λΔt K_01      · · ·   −(λΔt/2) K_0n
      −(λΔt/2) K_10      1 − λΔt K_11     · · ·   −(λΔt/2) K_1n
         ⋮                    ⋮            ⋱           ⋮
      −(λΔt/2) K_n0       −λΔt K_n1       · · ·   1 − (λΔt/2) K_nn ],

U is the matrix of solutions

U = [u_0, u_1, . . . , u_n]^T,

and F is the matrix of the nonhomogeneous part

F = [f_0, f_1, . . . , f_n]^T.

Clearly there is a unique solution to this system of linear equations when |K| ≠ 0, and either infinite or zero solutions when |K| = 0.

Example: Use the trapezoid rule to find a numeric solution to the integral equation in eq. (23):

u(x) = sin x + ∫_0^1 (1 − x cos xt) u(t) dt

at x = 0, 1/2, and 1. Choose n = 2 and Δt = (1 − 0)/2 = 1/2, so t_i = iΔt = i/2 = x_i. Using the trapezoid rule results in

u_i = f_i + (1/2) [ (1/2) K_{i0} u_0 + K_{i1} u_1 + (1/2) K_{i2} u_2 ],   i = 0, 1, 2,

or in matrix form:

[ 1 − (1/4) K_00     −(1/2) K_01       −(1/4) K_02    ] [ u_0 ]   [ sin 0   ]
[ −(1/4) K_10      1 − (1/2) K_11      −(1/4) K_12    ] [ u_1 ] = [ sin 1/2 ]
[ −(1/4) K_20        −(1/2) K_21     1 − (1/4) K_22   ] [ u_2 ]   [ sin 1   ].

Substituting f_i = f(x_i) = sin(i/2) and K_ij = K(x_i, t_j) = 1 − (i/2) cos(ij/4) into the matrix obtains

[ 1 − (1/4)(1 − 0)          −(1/2)(1 − 0)                   −(1/4)(1 − 0)              ] [ u_0 ]   [ sin 0   ]
[ −(1/4)(1 − 1/2)      1 − (1/2)(1 − (1/2) cos 1/4)    −(1/4)(1 − (1/2) cos 1/2)       ] [ u_1 ] = [ sin 1/2 ]
[ −(1/4)(1 − 1)            −(1/2)(1 − cos 1/2)           1 − (1/4)(1 − cos 1)          ] [ u_2 ]   [ sin 1   ]

[ 3/4                 −1/2                    −1/4                ] [ u_0 ]   [ sin 0   ]
[ −1/8         1/2 + (1/4) cos 1/4     −1/4 + (1/8) cos 1/2       ] [ u_1 ] = [ sin 1/2 ]
[ 0              −1/2 + (1/2) cos 1/2      3/4 + (1/4) cos 1      ] [ u_2 ]   [ sin 1   ]

[ 0.75     −0.5       −0.25   ] [ u_0 ]   [ 0      ]
[ −0.125   0.7422     −0.1403 ] [ u_1 ] ≈ [ 0.4794 ]
[ 0        −0.0612    0.8851  ] [ u_2 ]   [ 0.8415 ],

which can be solved to give u_0 ≈ 1.0132, u_1 ≈ 1.0095, and u_2 ≈ 1.0205. The exact solution of eq. (23) is known to be u(x) = 1, which compares very well with these approximate values.
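The same trapezoid discretization is straightforward to program for general n. The Python sketch below assembles and solves the linear system for eq. (23) using plain lists and Gaussian elimination; with n = 2 it reproduces the hand computation above, and larger n gives better approximations to u(x) = 1. The function name and the choice of elimination are our own.

    import math

    def fredholm2_trapezoid(f, K, lam, a, b, n):
        # approximate u(x_i) for u(x) = f(x) + lam * integral_a^b K(x, t) u(t) dt
        h = (b - a) / n
        xs = [a + i * h for i in range(n + 1)]
        w = [h] * (n + 1)
        w[0] = w[n] = h / 2                               # trapezoid weights
        M = [[(1.0 if i == j else 0.0) - lam * w[j] * K(xs[i], xs[j])
              for j in range(n + 1)] for i in range(n + 1)]
        rhs = [f(xi) for xi in xs]
        for col in range(n + 1):                          # elimination with pivoting
            piv = max(range(col, n + 1), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            rhs[col], rhs[piv] = rhs[piv], rhs[col]
            for r in range(col + 1, n + 1):
                m = M[r][col] / M[col][col]
                for cc in range(col, n + 1):
                    M[r][cc] -= m * M[col][cc]
                rhs[r] -= m * rhs[col]
        u = [0.0] * (n + 1)
        for r in range(n, -1, -1):
            u[r] = (rhs[r] - sum(M[r][cc] * u[cc] for cc in range(r + 1, n + 1))) / M[r][r]
        return xs, u

    xs, u = fredholm2_trapezoid(math.sin, lambda x, t: 1 - x * math.cos(x * t),
                                1.0, 0.0, 1.0, 2)
    print(u)    # roughly [1.013, 1.010, 1.021]; the exact solution is u(x) = 1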


    6.6.2 Homogeneous Equations

The general homogeneous Fredholm equation in eq. (21),

u(x) = λ ∫_a^b K(x, t) u(t) dt,

is solved in a similar manner as nonhomogeneous Fredholm equations using the trapezoid rule. Using the same terminology as in the previous section, the integral is divided into n equal subintervals of width Δt, resulting in n + 1 linear homogeneous equations

u_i = λΔt [ (1/2) K_{i0} u_0 + K_{i1} u_1 + · · · + K_{i(n−1)} u_{n−1} + (1/2) K_{in} u_n ],   i = 0, 1, . . . , n.

Bringing all of the terms to the left side of the equations results in

( 1 − (λΔt/2) K_00 ) u_0 − λΔt K_01 u_1 − · · · − (λΔt/2) K_0n u_n = 0
−(λΔt/2) K_10 u_0 + ( 1 − λΔt K_11 ) u_1 − · · · − (λΔt/2) K_1n u_n = 0
  ⋮
−(λΔt/2) K_n0 u_0 − λΔt K_n1 u_1 − · · · + ( 1 − (λΔt/2) K_nn ) u_n = 0.

Letting μ = 1/λ simplifies this system by causing μ to appear in only one term of each equation:

( μ − (Δt/2) K_00 ) u_0 − Δt K_01 u_1 − Δt K_02 u_2 − · · · − (Δt/2) K_0n u_n = 0
−(Δt/2) K_10 u_0 + ( μ − Δt K_11 ) u_1 − Δt K_12 u_2 − · · · − (Δt/2) K_1n u_n = 0
  ⋮
−(Δt/2) K_n0 u_0 − Δt K_n1 u_1 − · · · + ( μ − (Δt/2) K_nn ) u_n = 0.

Again, the equations may be written in matrix notation as

K_H U = 0

where 0 = [0, 0, . . . , 0]^T,


U is the same as in the nonhomogeneous case:

U = [u_0, u_1, . . . , u_n]^T,

and K_H is the coefficient matrix for the homogeneous equations

K_H = [ μ − (Δt/2) K_00      −Δt K_01       · · ·   −(Δt/2) K_0n
        −(Δt/2) K_10       μ − Δt K_11      · · ·   −(Δt/2) K_1n
            ⋮                   ⋮            ⋱            ⋮
        −(Δt/2) K_n0        −Δt K_n1        · · ·   μ − (Δt/2) K_nn ].

An infinite number of nontrivial solutions exists iff |K_H| = 0. This condition can be used to find the eigenvalues λ = 1/μ of the system by finding the values of μ that are zeros of |K_H|.

Example: Use the trapezoid rule to find a numeric solution to the homogeneous Fredholm equation

u(x) = λ ∫_0^1 K(x, t) u(t) dt    (29)

with the symmetric kernel

K(x, t) = x(1 − t),  0 ≤ x ≤ t
K(x, t) = t(1 − x),  t ≤ x ≤ 1

at x = 0, x = 1/2, and x = 1. Therefore n = 2 and Δt = 1/2, and

K_ij = K(i/2, j/2),   i, j = 0, 1, 2.

The resulting matrix is

K_H = [ μ      0        0 ]
      [ 0    μ − 1/8    0 ]
      [ 0      0        μ ].

For the system of homogeneous equations obtained to have a nontrivial solution, the determinant must be equal to zero:

| μ      0        0 |
| 0    μ − 1/8    0 |
| 0      0        μ | = μ² ( μ − 1/8 ) = 0,


or μ = 0 or μ = 1/8.

For μ = 1/8, λ = 1/μ = 8, and substituting into the system of equations obtains u_0 = 0, u_2 = 0, and u_1 as an arbitrary constant. Therefore there are two zeros at x = 0 and x = 1, but an arbitrary value at x = 1/2. It can be found by approximating the function u(x) as a triangular function connecting the three points (0, 0), (1/2, u_1), and (1, 0):

u(x) = 2 u_1 x,           0 ≤ x ≤ 1/2
u(x) = −2 u_1 (x − 1),    1/2 ≤ x ≤ 1,

and making its norm be one:

∫_0^1 u²(x) dx = 1.

Substituting results in

4 u_1² ∫_0^{1/2} x² dx + 4 u_1² ∫_{1/2}^1 (x − 1)² dx = 1

u_1²/6 + u_1²/6 = 1

u_1²/3 = 1

u_1 = √3.

So the approximate numerical values are u(0) = 0, u(1/2) = √3 ≈ 1.7321, and u(1) = 0.

For comparison with an exact solution, the orthonormal eigenfunction is chosen to be u_1(x) = √2 sin πx, which corresponds to the eigenvalue λ_1 = π², which is close to the approximate eigenvalue λ = 8. The exact solution gives u_1(0) = √2 sin 0 = 0, u_1(1/2) = √2 sin(π/2) ≈ 1.4142, and u_1(1) = √2 sin π = 0.

    7 An example of a separable kernel equation

Solve the integral equation

φ(x) − λ ∫_0^π sin(x + t) φ(t) dt = 1    (30)

where λ ≠ ±2/π.


The kernel is K(x, t) = sin x cos t + sin t cos x, where

a1(x) = sin x
a2(x) = cos x
b1(t) = cos t
b2(t) = sin t

and where we have used the additive identity sin(x + t) = sin x cos t + sin t cos x.

Choosing f(x) = 1 we obtain

f1 = ∫_0^π b1(t) f(t) dt = ∫_0^π cos t dt = 0
f2 = ∫_0^π b2(t) f(t) dt = ∫_0^π sin t dt = 2

and so

F = [ 0 ]
    [ 2 ]    (31)

a11 = ∫_0^π b1(t) a1(t) dt = ∫_0^π sin t cos t dt = 0
a12 = ∫_0^π b1(t) a2(t) dt = ∫_0^π cos²t dt = π/2
a21 = ∫_0^π b2(t) a1(t) dt = ∫_0^π sin²t dt = π/2
a22 = ∫_0^π b2(t) a2(t) dt = ∫_0^π sin t cos t dt = 0

which yields

A = [ 0     π/2 ]
    [ π/2     0 ]    (32)

Now we can rewrite the original integral equation as C = F + λAC:

[ c1 ]   [ 0 ]     [ 0     π/2 ] [ c1 ]
[ c2 ] = [ 2 ] + λ [ π/2     0 ] [ c2 ]    (33)

or

[ 1        −λπ/2 ] [ c1 ]   [ 0 ]
[ −λπ/2       1  ] [ c2 ] = [ 2 ]    (34)

c1 − (λπ/2) c2 = 0
c2 − (λπ/2) c1 = 2

with solutions c1 = 4λπ/(4 − λ²π²), c2 = 8/(4 − λ²π²).

The general solution is φ(x) = 1 + λ ( c1 a1(x) + c2 a2(x) ) = 1 + λ c1 sin x + λ c2 cos x, and the unique solution of the integral equation is

φ(x) = 1 + ( 8λ/(4 − λ²π²) ) cos x + ( 4λ²π/(4 − λ²π²) ) sin x.
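A quick symbolic check of this closed form is given below; SymPy and the sample value λ = 1 are our own choices, used only to verify that φ satisfies eq. (30).

    import sympy as sp

    x, t = sp.symbols('x t')
    lam = sp.Integer(1)
    phi = 1 + 8*lam/(4 - lam**2*sp.pi**2) * sp.cos(x) \
            + 4*lam**2*sp.pi/(4 - lam**2*sp.pi**2) * sp.sin(x)

    residual = phi - lam * sp.integrate(sp.sin(x + t) * phi.subs(x, t), (t, 0, sp.pi)) - 1
    print(sp.simplify(residual))    # 0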

IVP1

φ′(x) = F(x, φ(x)),  0 ≤ x ≤ 1,   φ(0) = φ_0

φ(x) = ∫_0^x F(t, φ(t)) dt + φ_0

IVP2

φ″(x) = F(x, φ(x)),  0 ≤ x ≤ 1,   φ(0) = φ_0,  φ′(0) = φ′_0

φ′(x) = ∫_0^x F(t, φ(t)) dt + φ′_0

φ(x) = ∫_0^x ds ∫_0^s F(t, φ(t)) dt + φ′_0 x + φ_0

We know that:

∫_0^x ds ∫_0^s G(s, t) dt = ∫_0^x dt ∫_t^x G(s, t) ds

so

∫_0^x ds ∫_0^s F(t, φ(t)) dt = ∫_0^x (x − t) F(t, φ(t)) dt.

Therefore,

φ(x) = ∫_0^x (x − t) F(t, φ(t)) dt + φ′_0 x + φ_0.

Theorem 1 (Degenerate Kernels) If k(x, t) is a degenerate kernel on [a, b] × [a, b] which can be written in the form k(x, t) = Σ_{i=1}^n a_i(x) b_i(t), where a_1, . . . , a_n and b_1, . . . , b_n are continuous, then either

1. for every continuous function f on [a, b] the integral equation

φ(x) − ∫_a^b k(x, t) φ(t) dt = f(x)   (a ≤ x ≤ b)    (35)

possesses a unique continuous solution φ, or

2. the homogeneous equation

φ(x) − ∫_a^b k(x, t) φ(t) dt = 0   (a ≤ x ≤ b)    (36)

has a non-trivial solution φ, in which case eqn. (35) will have non-unique solutions if and only if ∫_a^b f(t) ψ(t) dt = 0 for every continuous solution ψ of the equation

ψ(x) − ∫_a^b k(t, x) ψ(t) dt = 0   (a ≤ x ≤ b).    (37)

Theorem 2 Let K be a bounded linear map from L²(a, b) to itself with the property that {Kφ : φ ∈ L²(a, b)} has finite dimension. Then either

1. the linear map I − K has an inverse and (I − K)⁻¹ is a bounded linear map, or

2. the equation φ − Kφ = 0 has a non-trivial solution φ ∈ L²(a, b).

In the specific case where the kernel k generates a compact operator K on L²(a, b), the integral equation

φ(x) − ∫_a^b k(x, t) φ(t) dt = f(x)   (a ≤ x ≤ b)

will have a unique solution, and the solution operator (I − K)⁻¹ will be bounded, provided that the corresponding homogeneous equation

φ(x) − ∫_a^b k(x, t) φ(t) dt = 0   (a ≤ x ≤ b)

has only the trivial solution φ = 0 in L²(a, b).


    8 Bibliographical Comments

In this last section we provide a list of bibliographical references that we found useful in writing this survey and that can also be used for further study of Integral Equations. The list contains mainly textbooks and research monographs, and we do not claim by any means that it is complete or exhaustive. It merely reflects our own personal interests in the vast subject of Integral Equations. The books are presented using the alphabetical order of the first author.

The book [Cor91] presents the theory of Volterra integral equations and their applications, convolution equations, a historical account of the theory of integral equations, fixed point theorems, operators on function spaces, Hammerstein equations and Volterra equations in abstract Banach spaces, applications in coagulation processes, optimal control of processes governed by Volterra equations, and stability of nuclear reactors.

The book [Hac95] starts with an introductory chapter gathering basic facts from analysis, functional analysis, and numerical mathematics. Then the book discusses Volterra integral equations, Fredholm integral equations of the second kind, discretizations, kernel approximation, the collocation method, the Galerkin method, the Nystrom method, multi-grid methods for solving systems arising from integral equations of the second kind, Abel's integral equation, Cauchy's integral equation, analytical properties of the boundary integral equation method, numerical integration and the panel clustering method, and the boundary element method.

The book [Hoc89] discusses the contraction mapping principle, the theory of compact operators, the Fredholm alternative, ordinary differential operators and their study via compact integral operators, as well as the necessary background in linear algebra and Hilbert space theory. Some applications to boundary value problems are also discussed. It also contains a complete treatment of numerous transform techniques such as Fourier, Laplace, Mellin and Hankel, a discussion of the projection method, the classical Fredholm techniques, integral operators with positive kernels and the Schauder fixed-point theorem.

The book [Jer99] contains a large number of worked out examples, which include the approximate numerical solutions of Volterra and Fredholm integral equations of the first and second kinds, population dynamics, mortality of equipment, the tautochrone problem, the reduction of initial value problems to Volterra equations, and the reduction of boundary value problems for ordinary differential equations and the Schrodinger equation in three dimensions to Fredholm equations. It also deals with the solution of Volterra equations by using the resolvent kernel, by iteration and by Laplace transforms, Green's function in one and two dimensions, Sturm-Liouville operators and their role in formulating integral equations, the Fredholm equation, linear integral equations with a degenerate kernel, equations with symmetric kernels and the Hilbert-Schmidt theorem. The book ends with the Banach fixed point theorem in a metric space and its application to the solution of linear and nonlinear integral equations, and the use of higher quadrature rules for the approximate solution of linear integral equations.

The book [KK74] discusses the numerical solution of Integral Equations via the method of invariant imbedding. The integral equation is converted into a system of differential equations. For general kernels, resolvents and numerical quadratures are employed.

The book [Kon91] presents a unified general theory of integral equations, attempting to unify the theories of Picard, Volterra, Fredholm, and Hilbert-Schmidt. It also features applications from a wide variety of different areas. It contains a classification of integral equations, a description of methods of solution of integral equations, theories of special integral equations, symmetric kernels, singular integral equations, singular kernels and nonlinear integral equations.

The book [Moi77] is an introductory text on integral equations. It describes the classification of integral equations, the connection with differential equations, integral equations of convolution type, the method of successive approximation, singular kernels, the resolvent, Fredholm theory, Hilbert-Schmidt theory and the necessary background from Hilbert spaces.

The handbook [PM08] contains over 2500 tabulated linear and nonlinear integral equations and their exact solutions, outlines exact, approximate analytical, and numerical methods for solving integral equations, and illustrates the application of the methods with numerous examples. It discusses equations that arise in elasticity, plasticity, heat and mass transfer, hydrodynamics, chemical engineering, and other areas. This is the second edition of the original handbook published in 1998.

The book [PS90] contains a treatment of the classification and examples of integral equations, second order ordinary differential equations and integral equations, integral equations of the second kind, compact operators, spectra of compact self-adjoint operators, positive operators, approximation methods for eigenvalues and eigenvectors of self-adjoint operators, approximation methods for inhomogeneous integral equations, and singular integral equations.

The book [Smi58] features a carefully chosen selection of topics on Integral Equations using complex-valued functions of a real variable and notations of operator theory. The usual treatment of Neumann series and the analogue of the Hilbert-Schmidt expansion theorem are generalized. Kernels of finite rank, or degenerate kernels, are discussed. Using the method of polynomial approximation for two variables, the author writes the general kernel as a sum of a kernel of finite rank and a kernel of small norm. The usual results of orthogonalization of the square integrable functions of one and two variables are given, with a treatment of the Riesz-Fischer theorem, as a preparation for the classical theory and the application to Hermitian kernels.

The book [Tri85] contains a study of Volterra equations, with applications to differential equations, an exposition of classical Fredholm and Hilbert-Schmidt theory, a discussion of orthonormal sequences and the Hilbert-Schmidt theory for symmetric kernels, a description of the Ritz method, a discussion of positive kernels and Mercer's theorem, the application of integral equations to Sturm-Liouville theory, and a presentation of singular integral equations and non-linear integral equations.

    References

[Cor91] C. Corduneanu. Integral equations and applications. Cambridge University Press, Cambridge, 1991.

[Hac95] W. Hackbusch. Integral equations, volume 120 of International Series of Numerical Mathematics. Birkhauser Verlag, Basel, 1995. Theory and numerical treatment, translated and revised by the author from the 1989 German original.

[Hoc89] H. Hochstadt. Integral equations. Wiley Classics Library. John Wiley & Sons Inc., New York, 1989. Reprint of the 1973 original, A Wiley-Interscience Publication.

[Jer99] A. J. Jerri. Introduction to integral equations with applications. Wiley-Interscience, New York, second edition, 1999.

[KK74] H. H. Kagiwada and R. Kalaba. Integral equations via imbedding methods. Addison-Wesley Publishing Co., Reading, Mass.-London-Amsterdam, 1974. Applied Mathematics and Computation, No. 6.

[Kon91] J. Kondo. Integral equations. Oxford Applied Mathematics and Computing Science Series. The Clarendon Press, Oxford University Press, New York, 1991.

[Moi77] B. L. Moiseiwitsch. Integral equations. Longman, London, 1977. Longman Mathematical Texts.

[PM08] A. D. Polyanin and A. V. Manzhirov. Handbook of integral equations. Chapman & Hall/CRC, Boca Raton, FL, second edition, 2008.

[PS90] D. Porter and D. S. G. Stirling. Integral equations. Cambridge Texts in Applied Mathematics. Cambridge University Press, Cambridge, 1990. A practical treatment, from spectral theory to applications.

[Smi58] F. Smithies. Integral equations. Cambridge Tracts in Mathematics and Mathematical Physics, no. 49. Cambridge University Press, New York, 1958.

[Tri85] F. G. Tricomi. Integral equations. Dover Publications Inc., New York, 1985. Reprint of the 1957 original.
