
A generalization of the Moore–Penrose inverse related to matrix subspaces of $\mathbb{C}^{n\times m}$

Antonio Suárez, Luis González *

    Department of Mathematics, University of Las Palmas de Gran Canaria, 35017 Las Palmas de Gran Canaria, Spain

Article info

Keywords:
Moore–Penrose inverse
Frobenius norm
S-Moore–Penrose inverse
Approximate inverse preconditioning
Constrained least squares problem

Abstract

A natural generalization of the classical Moore–Penrose inverse is presented. The so-called S-Moore–Penrose inverse of an $m\times n$ complex matrix $A$, denoted by $A^\dagger_S$, is defined for any linear subspace $S$ of the matrix vector space $\mathbb{C}^{n\times m}$. The S-Moore–Penrose inverse $A^\dagger_S$ is characterized using either the singular value decomposition or (for the full rank case) the orthogonal complements with respect to the Frobenius inner product. These results are applied to the preconditioning of linear systems based on Frobenius norm minimization and to the linearly constrained linear least squares problem.

© 2010 Elsevier Inc. All rights reserved.

    1. Introduction

Let $\mathbb{C}^{m\times n}$ ($\mathbb{R}^{m\times n}$) denote the set of all $m\times n$ complex (real) matrices. Throughout this paper, the notations $A^T$, $A^*$, $A^{-1}$, $r(A)$ and $\operatorname{tr}(A)$ stand for the transpose, conjugate transpose, inverse, rank and trace of matrix $A$, respectively. The general reciprocal was described by Moore in 1920 [1] and independently rediscovered by Penrose in 1955 [2], and it is nowadays called the Moore–Penrose inverse. The Moore–Penrose inverse of a matrix $A \in \mathbb{C}^{m\times n}$ (sometimes referred to as the pseudoinverse of $A$) is the unique matrix $A^\dagger \in \mathbb{C}^{n\times m}$ satisfying the four Penrose equations

$$AA^\dagger A = A, \quad A^\dagger A A^\dagger = A^\dagger, \quad \left(AA^\dagger\right)^* = AA^\dagger, \quad \left(A^\dagger A\right)^* = A^\dagger A. \tag{1.1}$$

For more details on the Moore–Penrose inverse, see [3]. The pseudoinverse has been widely studied during the last decades from both theoretical and computational points of view. Some of the most recent works can be found, e.g., in [4–10] and in the references contained therein.
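As a quick numerical illustration (a minimal sketch, assuming NumPy; the random test matrix is arbitrary), the following code verifies the four Penrose equations (1.1): numpy.linalg.pinv computes the Moore–Penrose inverse.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))          # a random 5 x 3 real matrix
    Ad = np.linalg.pinv(A)                   # its Moore-Penrose inverse

    assert np.allclose(A @ Ad @ A, A)                  # A A' A = A
    assert np.allclose(Ad @ A @ Ad, Ad)                # A' A A' = A'
    assert np.allclose((A @ Ad).conj().T, A @ Ad)      # (A A')* = A A'
    assert np.allclose((Ad @ A).conj().T, Ad @ A)      # (A' A)* = A' A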

As is well known, the Moore–Penrose inverse $A^\dagger$ can be alternatively defined as the unique matrix that gives the minimum Frobenius norm among all solutions to either of the matrix minimization problems

$$\min_{M\in\mathbb{C}^{n\times m}} \|MA - I_n\|_F, \tag{1.2}$$

$$\min_{M\in\mathbb{C}^{n\times m}} \|AM - I_m\|_F, \tag{1.3}$$

where $I_n$ denotes the identity matrix of order $n$ and $\|\cdot\|_F$ stands for the matrix Frobenius norm.

Another way to characterize the pseudoinverse is by using the singular value decomposition (SVD). If $A = U\Sigma V^*$ is an SVD of $A$, then

$$A^\dagger = V\Sigma^\dagger U^*. \tag{1.4}$$
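To make (1.4) concrete, here is a minimal NumPy sketch (assuming a full rank test matrix, so that $\Sigma^\dagger$ is obtained simply by transposing $\Sigma$ and inverting its nonzero singular values) comparing the SVD construction against numpy.linalg.pinv.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 3))                    # full column rank a.s.

    U, s, Vh = np.linalg.svd(A, full_matrices=True)    # A = U Sigma V*
    Sigma_pinv = np.zeros((3, 5))                      # Sigma' is n x m
    Sigma_pinv[:len(s), :len(s)] = np.diag(1.0 / s)    # invert the singular values

    assert np.allclose(Vh.conj().T @ Sigma_pinv @ U.conj().T, np.linalg.pinv(A))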


    * Corresponding author.

E-mail address: [email protected] (L. González).

Applied Mathematics and Computation 216 (2010) 514–522

Also, the pseudoinverse of $A$ can be defined via a limiting process as [11]

$$A^\dagger = \lim_{\delta\to 0} A^*\left(AA^* + \delta I_m\right)^{-1} = \lim_{\delta\to 0} \left(A^*A + \delta I_n\right)^{-1} A^*$$

and by the explicit algebraic expression [3]

$$A^\dagger = C^*\left(CC^*\right)^{-1}\left(B^*B\right)^{-1}B^*$$

based on the full rank factorization

$$A = BC, \quad B \in \mathbb{C}^{m\times r}, \quad C \in \mathbb{C}^{r\times n}, \quad r(A) = r(B) = r(C) = r.$$
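Both expressions are easy to check numerically. The sketch below (an illustration under random full column rank data; QR is used here only as one convenient full rank factorization $A = BC$) verifies the limit formula with a small $\delta$ and the algebraic formula.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 3))                    # full column rank a.s., r = 3

    delta = 1e-10                                      # limit formula, small delta
    approx = np.linalg.solve(A.T @ A + delta * np.eye(3), A.T)
    assert np.allclose(approx, np.linalg.pinv(A), atol=1e-6)

    B, C = np.linalg.qr(A)                             # one factorization A = BC
    Ad = C.T @ np.linalg.inv(C @ C.T) @ np.linalg.inv(B.T @ B) @ B.T
    assert np.allclose(Ad, np.linalg.pinv(A))          # A' = C*(CC*)^-1 (B*B)^-1 B*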

In particular, if $A \in \mathbb{C}^{m\times n}$ has full row rank (full column rank, respectively) then $A^\dagger$ is simply the unique solution to problem (1.2) (to problem (1.3), respectively), given by the respective explicit expressions [3]

$$\text{(i)}\ A^\dagger = A^*\left(AA^*\right)^{-1}\ \text{iff}\ r(A) = m; \qquad \text{(ii)}\ A^\dagger = \left(A^*A\right)^{-1}A^*\ \text{iff}\ r(A) = n, \tag{1.5}$$

so that $A^\dagger$ is a right inverse (left inverse, respectively) of $A$ if $r(A) = m$ ($r(A) = n$, respectively).
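For instance, formula (1.5)(ii) for the full column rank case can be confirmed directly (a minimal check, assuming NumPy):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((5, 3))          # r(A) = n = 3 almost surely

    Ad = np.linalg.solve(A.T @ A, A.T)       # (A*A)^-1 A*
    assert np.allclose(Ad, np.linalg.pinv(A))
    assert np.allclose(Ad @ A, np.eye(3))    # A' is a left inverse of A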

A natural generalization of the Moore–Penrose inverse, the so-called left or right S-Moore–Penrose inverse, arises simply by taking in Eq. (1.2) or (1.3), respectively, the minimum over an arbitrary matrix subspace $S$ of $\mathbb{C}^{n\times m}$, instead of over the whole matrix space $\mathbb{C}^{n\times m}$. This is the starting point of this paper, which is organized as follows.

In Section 2, we define the S-Moore–Penrose inverse and we provide an explicit expression based on the SVD of matrix $A$ (which generalizes Eq. (1.4) for $A^\dagger$), as well as an alternative expression for the full rank case, in terms of the orthogonal complements with respect to the Frobenius inner product (which generalizes Eq. (1.5) for $A^\dagger$). Next, in Section 3 we apply these results to the preconditioning of linear systems based on Frobenius norm minimization and to the linear least squares problem subject to linear constraints. Finally, in Section 4 we present our conclusions.

2. The S-Moore–Penrose inverse

Eqs. (1.2) and (1.3), regarding the least squares characterization of the Moore–Penrose inverse, can be generalized as follows.

Definition 2.1. Let $A \in \mathbb{C}^{m\times n}$ and let $S$ be a linear subspace of $\mathbb{C}^{n\times m}$. Then

(i) The left Moore–Penrose inverse of $A$ with respect to $S$ or, for short, the left S-Moore–Penrose inverse of $A$, denoted by $A^\dagger_{S,l}$, is the minimum Frobenius norm solution to the matrix minimization problem

$$\min_{M\in S} \|MA - I_n\|_F. \tag{2.1}$$

(ii) The right Moore–Penrose inverse of $A$ with respect to $S$ or, for short, the right S-Moore–Penrose inverse of $A$, denoted by $A^\dagger_{S,r}$, is the minimum Frobenius norm solution to the matrix minimization problem

$$\min_{M\in S} \|AM - I_m\|_F. \tag{2.2}$$

Remark 2.1. Note that for $S = \mathbb{C}^{n\times m}$, the left and right S-Moore–Penrose inverses become the Moore–Penrose inverse. That is, the left and right S-Moore–Penrose inverses generalize the definition, via the matrix minimization problems (1.2) and (1.3), respectively, of the classical pseudoinverse. Moreover, when $A \in \mathbb{C}^{n\times n}$ is nonsingular and $A^{-1} \in S$, then the left and right S-Moore–Penrose inverses are just the inverse. Briefly:

$$A^\dagger_{\mathbb{C}^{n\times m},l} = A^\dagger = A^\dagger_{\mathbb{C}^{n\times m},r} \quad\text{and}\quad A^\dagger_{S,l} = A^{-1} = A^\dagger_{S,r} \ \text{ for all } S \text{ s.t. } A^{-1} \in S.$$

Remark 2.2. In particular, if matrix $A$ has full row rank (full column rank, respectively) then it has right inverses (left inverses, respectively). Thus, due to the uniqueness of the orthogonal projection of the identity matrix $I_n$ ($I_m$, respectively) onto the matrix subspace $SA \subseteq \mathbb{C}^{n\times n}$ ($AS \subseteq \mathbb{C}^{m\times m}$, respectively), we conclude that, for these special cases, Definition 2.1 simply affirms that $A^\dagger_{S,l}$ ($A^\dagger_{S,r}$, respectively) is just the unique solution to problem (2.1) (to problem (2.2), respectively). The following simple example illustrates this fact.

Example 2.1. For $n = m = 2$ and

$$A = \begin{pmatrix} 1 & 1 \\ 2 & 0 \end{pmatrix}, \quad S = \operatorname{span}\left\{\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\right\} = \left\{\begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix} : a \in \mathbb{C}\right\},$$

on one hand, we have


$$\left\|A^\dagger_{S,l}A - I_2\right\|_F^2 = \min_{M\in S}\|MA - I_2\|_F^2 = \min_{a\in\mathbb{C}}\left(|a-1|^2 + |a|^2 + 1\right),$$

so that

$$A^\dagger_{S,l} = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & 0 \end{pmatrix}, \quad \left\|A^\dagger_{S,l}A - I_2\right\|_F = \sqrt{\tfrac{3}{2}},$$

and on the other hand, we have

$$\left\|AA^\dagger_{S,r} - I_2\right\|_F^2 = \min_{M\in S}\|AM - I_2\|_F^2 = \min_{a\in\mathbb{C}}\left(|a-1|^2 + |2a|^2 + 1\right),$$

so that

$$A^\dagger_{S,r} = \begin{pmatrix} \frac{1}{5} & 0 \\ 0 & 0 \end{pmatrix}, \quad \left\|AA^\dagger_{S,r} - I_2\right\|_F = \sqrt{\tfrac{9}{5}} = \tfrac{3}{\sqrt{5}}.$$
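These numbers can be reproduced with a short computational sketch (an illustration, not the paper's method; the helper name right_smp_inverse is ours): writing $M = \sum_i c_i E_i$ over a basis $\{E_i\}$ of $S$ and vectorizing, problem (2.2) becomes an ordinary least squares problem in the coefficients $c$. If the $E_i$ are orthonormal in the Frobenius inner product, as the single basis matrix below is, the minimum norm coefficient vector returned by lstsq yields the minimum Frobenius norm $M$.

    import numpy as np

    def right_smp_inverse(A, basis):
        # Solve min ||A M - I_m||_F over M in span(basis) via vectorization.
        m = A.shape[0]
        cols = np.column_stack([(A @ E).reshape(-1) for E in basis])  # vec(A E_i)
        c, *_ = np.linalg.lstsq(cols, np.eye(m).reshape(-1), rcond=None)
        return sum(ci * Ei for ci, Ei in zip(c, basis))

    A = np.array([[1.0, 1.0], [2.0, 0.0]])
    E = np.array([[1.0, 0.0], [0.0, 0.0]])
    M = right_smp_inverse(A, [E])
    print(M)                                          # [[0.2, 0], [0, 0]], i.e. a = 1/5
    print(np.linalg.norm(A @ M - np.eye(2), 'fro'))   # 3/sqrt(5) = 1.3416...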

Remark 2.3. Example 2.1 illustrates three basic differences between the Moore–Penrose and the S-Moore–Penrose inverses. First, in general $A^\dagger_{S,l} \neq A^\dagger_{S,r}$. Second, in general neither $A^\dagger_{S,l}$ nor $A^\dagger_{S,r}$ satisfies any of the Penrose equations given in (1.1). Finally, although matrix $A$ has full rank, $A^\dagger_{S,l}$ is not a left inverse, nor is $A^\dagger_{S,r}$ a right inverse, of matrix $A$.

Throughout this paper we address only the case of the right S-Moore–Penrose inverse $A^\dagger_{S,r}$, but analogous results can be obtained for the left S-Moore–Penrose inverse $A^\dagger_{S,l}$. The next theorem provides an explicit expression for $A^\dagger_{S,r}$ from an SVD of matrix $A$.

Theorem 2.1. Let $A \in \mathbb{C}^{m\times n}$ and let $S$ be a linear subspace of $\mathbb{C}^{n\times m}$. Let $A = U\Sigma V^*$ be an SVD of matrix $A$. Then

$$A^\dagger_{S,r} = V\,\Sigma^\dagger_{V^*SU,r}\,U^* \quad\text{and}\quad \left\|AA^\dagger_{S,r} - I_m\right\|_F = \left\|\Sigma\,\Sigma^\dagger_{V^*SU,r} - I_m\right\|_F. \tag{2.3}$$

Moreover,

$$\left\|A^\dagger - A^\dagger_{S,r}\right\|_F = \left\|\Sigma^\dagger - \Sigma^\dagger_{V^*SU,r}\right\|_F. \tag{2.4}$$

Proof. For all $M \in S$, we have $N = V^*MU \in V^*SU$ and $M = VNU^*$. Using the invariance of the Frobenius norm under multiplication by unitary matrices, we get

$$\left\|A\widetilde{M} - I_m\right\|_F = \min_{M\in S}\|AM - I_m\|_F = \min_{N\in V^*SU}\left\|U\Sigma V^*\,VNU^* - I_m\right\|_F = \min_{N\in V^*SU}\left\|U(\Sigma N - I_m)U^*\right\|_F = \min_{N\in V^*SU}\|\Sigma N - I_m\|_F = \left\|\Sigma\widetilde{N} - I_m\right\|_F, \tag{2.5}$$

that is, $\widetilde{N} \in V^*SU$ is a solution to the matrix minimization problem

$$\min_{N\in V^*SU}\|\Sigma N - I_m\|_F \tag{2.6}$$

if and only if $\widetilde{M} = V\widetilde{N}U^* \in S$ is a solution to problem (2.2).

Now, let $\widetilde{N}$ be any solution to problem (2.6). Since $\Sigma^\dagger_{V^*SU,r}$ is the minimum Frobenius norm solution to problem (2.6), we have

$$\left\|\Sigma^\dagger_{V^*SU,r}\right\|_F \le \left\|\widetilde{N}\right\|_F$$

and then, using again the fact that the Frobenius norm is unitarily invariant, we have

$$\left\|V\,\Sigma^\dagger_{V^*SU,r}\,U^*\right\|_F \le \left\|V\widetilde{N}U^*\right\|_F \ \text{ for any solution } \widetilde{N} \text{ to problem (2.6), i.e., } \left\|V\,\Sigma^\dagger_{V^*SU,r}\,U^*\right\|_F \le \left\|\widetilde{M}\right\|_F \ \text{ for any solution } \widetilde{M} \text{ to problem (2.2)}.$$

This proves that the minimum Frobenius norm solution $A^\dagger_{S,r}$ to problem (2.2) is $V\,\Sigma^\dagger_{V^*SU,r}\,U^*$, and the right-hand equality of Eq. (2.3) immediately follows from Eq. (2.5).

Finally, Eq. (2.4) immediately follows using the well-known expression $A^\dagger = V\Sigma^\dagger U^*$ and the left-hand equality of Eq. (2.3), that is,

$$\left\|A^\dagger - A^\dagger_{S,r}\right\|_F = \left\|V\Sigma^\dagger U^* - V\,\Sigma^\dagger_{V^*SU,r}\,U^*\right\|_F = \left\|\Sigma^\dagger - \Sigma^\dagger_{V^*SU,r}\right\|_F. \qquad\square$$

Remark 2.4. Formula (2.3) for the right S-Moore–Penrose inverse generalizes the well-known expression (1.4) for the Moore–Penrose inverse in terms of an SVD of $A$, since (see Remark 2.1)

$$S = \mathbb{C}^{n\times m} \ \Rightarrow\ V^*SU = \mathbb{C}^{n\times m} \ \Rightarrow\ A^\dagger = A^\dagger_{S,r} = V\,\Sigma^\dagger_{V^*SU,r}\,U^* = V\Sigma^\dagger U^*.$$

For full column rank matrices, the next theorem provides an expression for $A^\dagger_{S,r}$ involving the orthogonal complement of subspace $S$ (denoted, as usual, by $S^\perp$) with respect to the Frobenius inner product $\langle\cdot,\cdot\rangle_F$.

Theorem 2.2. Let $A \in \mathbb{C}^{m\times n}$ with $r(A) = n$ and let $S$ be a linear subspace of $\mathbb{C}^{n\times m}$. Then

$$A^\dagger_{S,r} = \left(A^*A\right)^{-1}\left(A^* + Q\right), \quad\text{for some } Q \in S^\perp. \tag{2.7}$$

Proof. Let

$$N = A^*AA^\dagger_{S,r} \in \mathbb{C}^{n\times m} \quad\text{and}\quad Q = N - A^* \in \mathbb{C}^{n\times m}.$$

Since $r(A^*A) = n$, we get

$$A^\dagger_{S,r} = \left(A^*A\right)^{-1}N = \left(A^*A\right)^{-1}\left(A^* + Q\right), \quad\text{for some } Q \in \mathbb{C}^{n\times m}, \tag{2.8}$$

and we only need to prove that $Q \in S^\perp$.

Using the fact that $AA^\dagger_{S,r}$ is the orthogonal projection of the identity matrix onto the subspace $AS$, we have

$$\left\langle AA^\dagger_{S,r} - I_m,\ AM\right\rangle_F = 0 \ \text{ for all } M\in S, \ \text{ i.e., }\ \operatorname{tr}\!\left(\left(AA^\dagger_{S,r} - I_m\right)^{\!*} AM\right) = 0 \ \text{ for all } M\in S, \ \text{ i.e., }\ \operatorname{tr}\!\left(\left(A^*AA^\dagger_{S,r} - A^*\right)^{\!*} M\right) = 0 \ \text{ for all } M\in S,$$

but using Eq. (2.8), we have

$$A^*AA^\dagger_{S,r} - A^* = A^*A\left(A^*A\right)^{-1}\left(A^* + Q\right) - A^* = Q$$

and then

$$\operatorname{tr}(Q^*M) = 0, \ \text{ i.e., }\ \langle Q, M\rangle_F = 0 \ \text{ for all } M \in S, \ \text{ i.e., }\ Q \in S^\perp. \qquad\square$$

Remark 2.5. Eq. (2.7) generalizes the well-known formula (1.5)(ii) for the Moore–Penrose inverse of full column rank matrices. Indeed (see Remark 2.1),

$$S = \mathbb{C}^{n\times m} \ \Rightarrow\ S^\perp = \{0\} \ \Rightarrow\ Q = 0 \ \Rightarrow\ A^\dagger_{\mathbb{C}^{n\times m},r} = \left(A^*A\right)^{-1}A^* = A^\dagger.$$

An application of Theorem 2.2 is given in the next corollary.

Corollary 2.1. Let $A \in \mathbb{C}^{m\times n}$ with $r(A) = n$. Let $s \in \mathbb{C}^m \setminus \{0\}$ and let $0/s$ be the annihilator subspace of $s$ in $\mathbb{C}^{n\times m}$, i.e.,

$$0/s = \left\{M \in \mathbb{C}^{n\times m} \mid Ms = 0\right\}.$$

Then

$$A^\dagger_{0/s,r} = \left(A^*A\right)^{-1}A^*\left(I_m - \frac{ss^*}{\|s\|_2^2}\right), \tag{2.9}$$

where $\|\cdot\|_2$ stands for the usual vector Euclidean norm.

Proof. Using Eq. (2.7) for $S = 0/s$, we have

$$A^\dagger_{0/s,r} = \left(A^*A\right)^{-1}\left(A^* + Q\right), \quad\text{for some } Q \in (0/s)^\perp,$$

and since the orthogonal complement of the annihilator subspace $0/s$ is given by [12]

$$(0/s)^\perp = \left\{vs^* \mid v \in \mathbb{C}^n\right\},$$

we get

$$A^\dagger_{0/s,r} = \left(A^*A\right)^{-1}\left(A^* + vs^*\right), \quad\text{for some } v \in \mathbb{C}^n. \tag{2.10}$$

Multiplying both sides of Eq. (2.10) by vector $s$, we have

$$A^\dagger_{0/s,r}\,s = \left(A^*A\right)^{-1}\left(A^*s + v\,s^*s\right)$$

and, since $A^\dagger_{0/s,r} \in 0/s$, we get

$$0 = A^*s + v\,s^*s \ \Rightarrow\ v = -\frac{1}{\|s\|_2^2}A^*s,$$

and substituting $v$ by its above expression in Eq. (2.10), we get

$$A^\dagger_{0/s,r} = \left(A^*A\right)^{-1}\left(A^* - \frac{1}{\|s\|_2^2}A^*ss^*\right) = \left(A^*A\right)^{-1}A^*\left(I_m - \frac{ss^*}{\|s\|_2^2}\right). \qquad\square$$

Remark 2.6. According to formula (1.5)(ii) for the full column rank case, Eq. (2.9) can be simply rewritten as

$$A^\dagger_{0/s,r} = A^\dagger\left(I_m - \frac{ss^*}{s^*s}\right),$$

a curious representation of the right S-Moore–Penrose inverse of $A$ ($S$ being the annihilator subspace $0/s$ of $s$) as the product of the Moore–Penrose inverse of $A$ and an element of $S$, namely the elementary annihilator $I_m - \frac{ss^*}{s^*s}$ of vector $s$.
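A numerical check of Eq. (2.9) is straightforward (an illustrative sketch under random data): the matrix $A^\dagger(I_m - ss^*/s^*s)$ annihilates $s$ and satisfies the optimality condition $\langle AM - I_m, AK\rangle_F = 0$ for all $K \in 0/s$ used in the proof of Theorem 2.2.

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((4, 2))          # full column rank a.s.
    s = rng.standard_normal(4)
    Ps = np.outer(s, s) / (s @ s)            # orthogonal projector s s*/||s||^2

    M = np.linalg.pinv(A) @ (np.eye(4) - Ps)            # Eq. (2.9)
    assert np.allclose(M @ s, 0)                        # M belongs to 0/s

    K = rng.standard_normal((2, 4)) @ (np.eye(4) - Ps)  # an element of 0/s
    assert np.isclose(np.trace((A @ M - np.eye(4)).T @ (A @ K)), 0)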

Regarding the full column rank case, let us mention that, in addition to formulas (2.3) and (2.7), other explicit expressions for $A^\dagger_{S,r}$, more appropriate for computational purposes, can be given using an arbitrary basis of subspace $S$. The basic idea simply consists of expressing the orthogonal projection $AA^\dagger_{S,r}$ of $I_m$ onto the subspace $AS$ by its expansion with respect to an orthonormal basis of $AS$ (after using the Gram–Schmidt orthonormalization procedure, if necessary). These expressions have been developed in [13] for the special case of real $n\times n$ nonsingular matrices, and they have been applied to the preconditioning of large linear systems and illustrated with numerical experiments corresponding to real-world cases.

The next two lemmas state some spectral properties of the matrix product $AA^\dagger_{S,r}$ that will be used in the next section.

Lemma 2.1. Let $A \in \mathbb{C}^{m\times n}$ and let $S$ be a linear subspace of $\mathbb{C}^{n\times m}$. Then

$$\left\|AA^\dagger_{S,r} - I_m\right\|_F^2 = m - \operatorname{tr}\!\left(AA^\dagger_{S,r}\right), \tag{2.11}$$

$$\left\|AA^\dagger_{S,r}\right\|_F^2 = \operatorname{tr}\!\left(AA^\dagger_{S,r}\right). \tag{2.12}$$

Lemma 2.2. Let $A \in \mathbb{C}^{m\times n}$ and let $S$ be a linear subspace of $\mathbb{C}^{n\times m}$. Let $\{\lambda_i\}_{i=1}^m$ and $\{\sigma_i\}_{i=1}^m$ be the sets of eigenvalues and singular values, respectively, of matrix $AA^\dagger_{S,r}$, arranged, as usual, in nonincreasing order of their moduli. Then

$$\sum_{i=1}^m \sigma_i^2 = \sum_{i=1}^m \lambda_i \le m. \tag{2.13}$$

We omit the proofs of Lemmas 2.1 and 2.2, since they are respectively similar to those of Lemmas 2.1 and 2.3 given in [18] in the context of the spectral analysis of the orthogonal projections of the identity onto arbitrary subspaces of $\mathbb{R}^{n\times n}$.
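On the data of Example 2.1 these identities are easy to observe (a small sketch using the values computed there): $AA^\dagger_{S,r}$ has trace $1/5$, so Eq. (2.11) gives $\|AA^\dagger_{S,r} - I_2\|_F^2 = 2 - 1/5 = 9/5$, matching the value found earlier.

    import numpy as np

    A = np.array([[1.0, 1.0], [2.0, 0.0]])
    M = np.array([[0.2, 0.0], [0.0, 0.0]])    # A'_{S,r} from Example 2.1
    P = A @ M                                 # the product A A'_{S,r}

    t = np.trace(P)
    assert np.isclose(np.linalg.norm(P - np.eye(2), 'fro')**2, 2 - t)   # (2.11)
    assert np.isclose(np.linalg.norm(P, 'fro')**2, t)                   # (2.12)

    sigma = np.linalg.svd(P, compute_uv=False)
    lam = np.linalg.eigvals(P)
    assert np.isclose(np.sum(sigma**2), np.sum(lam).real) and t <= 2    # (2.13)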

    3. Applications

    3.1. Applications to preconditioning

In numerical linear algebra, the convergence of the iterative Krylov methods [14–16] used for solving the linear system

$$Ax = b, \quad A \in \mathbb{R}^{n\times n} \text{ nonsingular}, \quad x, b \in \mathbb{R}^{n\times 1}, \tag{3.1}$$

can be improved by transforming it into the right preconditioned system

$$AMy = b, \quad x = My. \tag{3.2}$$

The approximate inverse preconditioning (3.2) of system (3.1) is performed in order to get a preconditioned matrix $AM$ as close as possible to the identity in some sense, taking into account the computational cost required for this purpose. The closeness of $AM$ to $I_n$ may be measured by using a suitable matrix norm like, for instance, the Frobenius norm. In this way, the optimal (in the sense of the Frobenius norm) right preconditioner of system (3.1), among all matrices $M$ belonging to a given subspace $S$ of $\mathbb{R}^{n\times n}$, is precisely the right S-Moore–Penrose inverse $A^\dagger_{S,r}$, i.e., the unique solution to the matrix minimization problem (Remark 2.2)

$$\min_{M\in S}\|AM - I_n\|_F, \quad A \in \mathbb{R}^{n\times n}, \quad r(A) = n, \quad S \subseteq \mathbb{R}^{n\times n}. \tag{3.3}$$

Alternatively, we can transform system (3.1) into the left preconditioned system $MAx = Mb$, which is now related to the left S-Moore–Penrose inverse $A^\dagger_{S,l}$.
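For one concrete subspace, problem (3.3) admits a closed form: over the subspace $S$ of diagonal matrices, $\min\|AM - I_n\|_F$ decouples column by column, giving $d_j = a_{jj}/\|a_j\|_2^2$ ($a_j$ the $j$-th column of $A$). The sketch below illustrates this special case only (it is not the paper's algorithm for general subspaces $S$) and confirms that the resulting $M$ improves on the trivial choice $M = I_n$, which also belongs to $S$.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 6
    A = 4.0 * np.eye(n) + rng.standard_normal((n, n))   # a nonsingular test matrix

    d = np.diag(A) / np.sum(A * A, axis=0)              # optimal diagonal entries
    M = np.diag(d)                                      # A'_{S,r} for S = diagonals

    r_opt = np.linalg.norm(A @ M - np.eye(n), 'fro')
    r_id = np.linalg.norm(A - np.eye(n), 'fro')         # M = I_n is also in S
    assert r_opt <= r_id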


In the next theorem, we apply the spectral property (2.13) to the special case of the optimal preconditioner $A^\dagger_{S,r}$, i.e., the solution to problem (3.3). We must assume that matrix $A^\dagger_{S,r}$ is nonsingular, in order to get a nonsingular coefficient matrix $AA^\dagger_{S,r}$ of the preconditioned system (3.2). Hence, all singular values of matrix $AA^\dagger_{S,r}$ will be greater than zero, and they will be denoted as

$$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0.$$

Theorem 3.1. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular and let $S$ be a vector subspace of $\mathbb{R}^{n\times n}$ such that the solution $A^\dagger_{S,r}$ to problem (3.3) is nonsingular. Then the smallest singular value $\sigma_n$ of the preconditioned matrix $AA^\dagger_{S,r}$ lies in the interval $(0,1]$. Moreover, we have the following lower and upper bounds on the distance $d(I_n, AS)$:

$$\left(1 - \sigma_n\right)^2 \le \left\|AA^\dagger_{S,r} - I_n\right\|_F^2 \le n\left(1 - \sigma_n^2\right). \tag{3.4}$$

Proof. Using Lemma 2.2 for $m = n$, we get

$$n\,\sigma_n^2 \le \sum_{i=1}^n \sigma_i^2 \le n \ \Rightarrow\ \sigma_n^2 \le 1 \ \Rightarrow\ \sigma_n \le 1.$$

To prove the left-hand inequality of Eq. (3.4), we use the well-known fact that singular values depend continuously on their arguments [17], i.e., for all $A, B \in \mathbb{R}^{n\times n}$ and for all $i = 1, 2, \ldots, n$,

$$\left|\sigma_i(A) - \sigma_i(B)\right| \le \|A - B\|_F \ \Rightarrow\ |\sigma_n - 1| \le \left\|AA^\dagger_{S,r} - I_n\right\|_F.$$

Finally, to prove the right-hand inequality of Eq. (3.4), we use Eqs. (2.11) and (2.12) for $m = n$:

$$\left\|AA^\dagger_{S,r} - I_n\right\|_F^2 = n - \operatorname{tr}\!\left(AA^\dagger_{S,r}\right) = n - \left\|AA^\dagger_{S,r}\right\|_F^2 = n - \sum_{i=1}^n \sigma_i^2 \le n\left(1 - \sigma_n^2\right). \qquad\square$$

Remark 3.1. Eq. (3.4) states that $\|AA^\dagger_{S,r} - I_n\|_F$ decreases to 0 as the smallest singular value $\sigma_n$ of the preconditioned matrix $AA^\dagger_{S,r}$ increases to 1. That is, the closeness of $AA^\dagger_{S,r}$ to the identity is determined by the closeness of $\sigma_n$ to unity. In the extremal case, $\sigma_n = 1$ iff $A^\dagger_{S,r} = A^{-1}$ iff $A^{-1} \in S$. A more complete version of Theorem 3.1, involving not only the smallest singular value but also the smallest eigenvalue modulus of the preconditioned matrix, can be found in [18]. The proof presented here is completely different, more direct and simpler than the one given in [18].
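The bounds (3.4) are easy to observe numerically (a sketch reusing the optimal diagonal preconditioner from the illustration above, which is nonsingular for the test matrix below, so Theorem 3.1 applies).

    import numpy as np

    rng = np.random.default_rng(6)
    n = 6
    A = 4.0 * np.eye(n) + rng.standard_normal((n, n))
    M = np.diag(np.diag(A) / np.sum(A * A, axis=0))   # solution of (3.3), S = diagonals

    sn = np.linalg.svd(A @ M, compute_uv=False)[-1]   # smallest singular value of A M
    dist2 = np.linalg.norm(A @ M - np.eye(n), 'fro')**2
    assert 0 < sn <= 1
    assert (1 - sn)**2 <= dist2 <= n * (1 - sn**2)    # Eq. (3.4)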

Remark 3.2. In numerical linear algebra, the theoretical effectiveness analysis of the preconditioners $M$ to be used is based on the spectral properties of the preconditioned matrix $AM$ (e.g., condition number, clustering of eigenvalues and singular values, departure from normality, etc.). For this reason, the spectral properties of matrix $AA^\dagger_{S,r}$, stated in Lemmas 2.1 and 2.2 and in Theorem 3.1, are especially useful when the right S-Moore–Penrose inverse $A^\dagger_{S,r}$ defined by problem (3.3) is the optimal preconditioner. We refer the reader to [18] for more details about the theoretical effectiveness analysis of these preconditioners.

3.2. Applications to the constrained least squares problem

The S-Moore–Penrose inverse can be applied to the linear least squares problem subject to a set of homogeneous linear constraints. This is in accordance with the well-known fact that the Moore–Penrose inverse can be applied to the linear least squares problem without restrictions [19], the major application of the Moore–Penrose inverse [20]. More precisely, let us recall that, regarding the linear system $Ax = b$, $A \in \mathbb{R}^{m\times n}$, the unique solution (if $A$ has full column rank) or the minimum norm solution (otherwise) of the least squares problem

$$\min_{x\in\mathbb{R}^n}\|Ax - b\|_2$$

is given by

$$x = A^\dagger b.$$

Here we consider as well the linear system $Ax = b$, $A \in \mathbb{R}^{m\times n}$, but now subject to the condition $x \in T$, where $T$ is a given vector subspace of $\mathbb{R}^n$. Denoting by $Bx = 0$ the system of Cartesian equations of subspace $T$, this problem is usually formulated as

$$\min_{x\in\mathbb{R}^n}\|Ax - b\|_2 \quad\text{subject to}\quad Bx = 0, \tag{3.5}$$

and we refer the reader to [11] and to the references contained therein for more details about the linearly constrained linear least squares problem.


The next theorem provides a new, different approach, in terms of the S-Moore–Penrose inverse, to problem (3.5). Our theorem covers all possible situations that may arise for both the full rank and the rank deficient cases, e.g., $Ax = b$ consistent but $Ax = b$, $x \in T$, inconsistent; $Ax = b$ consistent and underdetermined while $Ax = b$, $x \in T$, consistent and uniquely determined; both $Ax = b$ and $Ax = b$, $x \in T$, inconsistent, etc.

Theorem 3.2. Let $A \in \mathbb{R}^{m\times n}$ and $b \in \mathbb{R}^m$. Let $T$ be a linear subspace of $\mathbb{R}^n$ of dimension $p$. Then the minimum norm solution to the constrained least squares problem

$$\min_{x\in T}\|Ax - b\|_2 \tag{3.6}$$

is given by

$$x_T = A^\dagger_{S_P,r}\,b, \tag{3.7}$$

where $P$ is the $n\times p$ matrix whose columns are the vectors of an arbitrary basis of $T$ and $S_P = P\,\mathbb{R}^{p\times m} \subseteq \mathbb{R}^{n\times m}$. In particular, if $A$ has full column rank then $x_T$ is the unique least squares solution to problem (3.6).

Proof. Let $B = \{v_1, \ldots, v_p\}$ be a given orthonormal basis of $T$ and let $P$ be the $n\times p$ matrix whose columns are $v_1, \ldots, v_p$. For a clearer exposition, we subdivide the proof into the following three parts:

(i) Writing any vector $x \in T$ as $x = P\alpha$ ($\alpha \in \mathbb{R}^p$), we get

$$\|A\tilde{x} - b\|_2 = \min_{x\in T}\|Ax - b\|_2 = \min_{\alpha\in\mathbb{R}^p}\|AP\alpha - b\|_2 = \|AP\tilde{\alpha} - b\|_2,$$

that is, $\tilde{\alpha} \in \mathbb{R}^p$ is a solution to the unconstrained least squares problem

$$\min_{\alpha\in\mathbb{R}^p}\|AP\alpha - b\|_2 \tag{3.8}$$

if and only if $\tilde{x} = P\tilde{\alpha} \in T$ is a solution to the constrained least squares problem (3.6).

Now, let $\tilde{\alpha}$ be any solution to problem (3.8). Since $\alpha = (AP)^\dagger b$ is the minimum norm solution to problem (3.8), we have

$$\left\|(AP)^\dagger b\right\|_2 \le \|\tilde{\alpha}\|_2$$

and then, using the fact that $P$ has orthonormal columns ($P^TP = I_p$), we have

$$\|P\alpha\|_2^2 = (P\alpha)^T P\alpha = \alpha^T\alpha = \|\alpha\|_2^2 \ \text{ for all } \alpha \in \mathbb{R}^p$$

and then

$$\left\|P(AP)^\dagger b\right\|_2 \le \|P\tilde{\alpha}\|_2 \ \text{ for any solution } \tilde{\alpha} \text{ to problem (3.8), i.e., } \left\|P(AP)^\dagger b\right\|_2 \le \|\tilde{x}\|_2 \ \text{ for any solution } \tilde{x} \text{ to problem (3.6)}.$$

This proves that the minimum norm solution $x_T$ to problem (3.6) can be expressed as

$$x_T = P(AP)^\dagger b \ \text{ for any orthonormal basis } B \text{ of } T. \tag{3.9}$$

(ii) Writing any matrix $M \in S_P = P\,\mathbb{R}^{p\times m}$ as $M = PN$ ($N \in \mathbb{R}^{p\times m}$), we get

$$\left\|A\widetilde{M} - I_m\right\|_F = \min_{M\in S_P}\|AM - I_m\|_F = \min_{N\in\mathbb{R}^{p\times m}}\|APN - I_m\|_F = \left\|AP\widetilde{N} - I_m\right\|_F,$$

that is, $\widetilde{N} \in \mathbb{R}^{p\times m}$ is a solution to the unconstrained matrix minimization problem

$$\min_{N\in\mathbb{R}^{p\times m}}\|APN - I_m\|_F \tag{3.10}$$

if and only if $\widetilde{M} = P\widetilde{N} \in S_P$ is a solution to the constrained matrix minimization problem

$$\min_{M\in S_P = P\,\mathbb{R}^{p\times m}}\|AM - I_m\|_F. \tag{3.11}$$

Now, let $\widetilde{N}$ be any solution to problem (3.10). Since $(AP)^\dagger$ is the minimum Frobenius norm solution to problem (3.10), we have

$$\left\|(AP)^\dagger\right\|_F \le \left\|\widetilde{N}\right\|_F$$

and then, using again the fact that $P$ has orthonormal columns ($P^TP = I_p$), we have

$$\|PN\|_F^2 = \operatorname{tr}\!\left((PN)^T PN\right) = \operatorname{tr}\!\left(N^TN\right) = \|N\|_F^2 \ \text{ for all } N \in \mathbb{R}^{p\times m}$$

and then

$$\left\|P(AP)^\dagger\right\|_F \le \left\|P\widetilde{N}\right\|_F \ \text{ for any solution } \widetilde{N} \text{ to problem (3.10), i.e., } \left\|P(AP)^\dagger\right\|_F \le \left\|\widetilde{M}\right\|_F \ \text{ for any solution } \widetilde{M} \text{ to problem (3.11)}.$$


This proves that the minimum Frobenius norm solution $A^\dagger_{S_P,r}$ to problem (3.11) can be expressed as

$$A^\dagger_{S_P,r} = P(AP)^\dagger \ \text{ for any orthonormal basis } B \text{ of } T. \tag{3.12}$$

(iii) Using Eqs. (3.9) and (3.12), we get

$$x_T = A^\dagger_{S_P,r}\,b \ \text{ for any orthonormal basis } B \text{ of } T. \tag{3.13}$$

To conclude the derivation of formula (3.7), we must prove that, although the assumption that $B$ is an orthonormal basis of $T$ is (in general) essential for Eqs. (3.9) and (3.12) to hold, formula (3.13) for $x_T$ is valid not only for orthonormal bases, but in fact for any basis (not necessarily orthonormal) of subspace $T$. In other words, the vector $A^\dagger_{S_P,r}b$ is independent of the chosen basis. Indeed, let $B$, $B'$ be two arbitrary bases of $T$ and let $P$, $P'$ be the $n\times p$ matrices whose columns are the vectors of $B$ and $B'$, respectively. Writing matrix $P'$ as $P' = PC$, $C$ being a $p\times p$ nonsingular matrix, we have

$$S_{P'} = P'\,\mathbb{R}^{p\times m} = PC\,\mathbb{R}^{p\times m} = P\,\mathbb{R}^{p\times m} = S_P \ \Rightarrow\ A^\dagger_{S_{P'},r}\,b = A^\dagger_{S_P,r}\,b.$$

Finally, if in particular $A$ has full column rank then $A$ is left invertible and, due to the uniqueness of the orthogonal projection $Ax_T$ of vector $b$ onto the subspace $AT \subseteq \mathbb{R}^m$, we derive that the minimum norm solution $x_T = A^\dagger_{S_P,r}\,b$ to problem (3.6) is indeed the unique solution to this problem. $\square$

Remark 3.3. In particular, for the special full rank case $r(A) = n$, the expression (3.9) for the unique least squares solution $x_T$ of problem (3.6) is valid for any basis (not necessarily orthonormal) of $T$. Indeed, let $B$, $B'$ be two arbitrary bases of $T$ and let $P$, $P'$ be the $n\times p$ matrices whose columns are the vectors of $B$ and $B'$, respectively. Writing matrix $P'$ as $P' = PC$, $C$ being a $p\times p$ nonsingular matrix, we have

$$P'(AP')^\dagger = PC(APC)^\dagger = PC\,C^\dagger(AP)^\dagger = PC\,C^{-1}(AP)^\dagger = P(AP)^\dagger,$$

where the reverse order law for the product $(AP)C$ holds because $AP$ has full column rank, i.e.,

$$p = r(A) + r(P) - n \le r(AP) \le \min\{r(A), r(P)\} = p,$$

and $C$ has full row rank; see, e.g., [3]. In conclusion, the minimum norm solution to problem (3.6) is given by

$$x_T = P(AP)^\dagger b \quad \begin{cases} \text{for any orthonormal basis of } T & \text{if } r(A) < n, \\ \text{for any basis of } T & \text{if } r(A) = n, \end{cases}$$

and also, for both cases $r(A) \le n$, by $x_T = A^\dagger_{S_P,r}\,b$ for any basis of $T$.

The following simple example illustrates Theorem 3.2.

Example 3.1. Let $A = (1\ 1\ 1) \in \mathbb{R}^{1\times 3}$ and $b = (2) \in \mathbb{R}^{1\times 1}$. Consider the subspace $T \equiv \{x_3 = 0\}$ of $\mathbb{R}^3$. Then $m = 1$, $n = 3$, $p = 2$ and, taking the (nonorthonormal) basis of $T$, $B = \{(1, 0, 0), (1, 1, 0)\}$, we get

$$P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \\ 0 & 0 \end{pmatrix} \ \Rightarrow\ S_P = P\,\mathbb{R}^{2\times 1} = \left\{\begin{pmatrix} a \\ b \\ 0 \end{pmatrix} : a, b \in \mathbb{R}\right\} \subseteq \mathbb{R}^{3\times 1} \ \Rightarrow\ A^\dagger_{S_P,r} = \begin{pmatrix} 1/2 \\ 1/2 \\ 0 \end{pmatrix}$$

and then the minimum norm solution to problem (3.6) is given by

$$x_T = A^\dagger_{S_P,r}\,b = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}.$$
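The same answer is obtained from Eq. (3.9) with an orthonormal basis of $T$ (a small sketch; note that $x_T = P(AP)^\dagger b$ requires orthonormal columns here, since $r(A) = 1 < n$, whereas $A^\dagger_{S_P,r}\,b$ works for any basis, by Theorem 3.2).

    import numpy as np

    A = np.array([[1.0, 1.0, 1.0]])
    b = np.array([2.0])
    P = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])                 # orthonormal basis of T = {x_3 = 0}

    x_T = P @ (np.linalg.pinv(A @ P) @ b)      # Eq. (3.9)
    assert np.allclose(x_T, [1.0, 1.0, 0.0])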

4. Concluding remarks

The left and right S-Moore–Penrose inverses, $A^\dagger_{S,l}$ and $A^\dagger_{S,r}$, extend the idea of the standard inverse from nonsingular square matrices not only to rectangular and singular square matrices (as the Moore–Penrose inverse does), but also to invertible matrices themselves, by considering a subspace $S$ of $\mathbb{C}^{n\times m}$ not containing $A^{-1}$. Restricted to the right case, the two expressions given for $A^\dagger_{S,r}$ (namely, the first one based on an SVD of $A$, the second one involving the orthogonal complement of subspace $S$) respectively generalize the well-known corresponding formulae (1.4) and (1.5)(ii) for the Moore–Penrose inverse.

The S-Moore–Penrose inverse $A^\dagger_{S,r}$ can be seen as a useful alternative to $A^\dagger$: it provides a computationally feasible approximate inverse preconditioner for large linear systems (in contrast to the unfeasible computation of $A^\dagger = A^{-1}$), and it also provides the minimum norm solution $A^\dagger_{S_P,r}\,b$ to the least squares problem subject to homogeneous linear constraints (in contrast to the minimum norm solution $A^\dagger b$ to the unconstrained least squares problem). The behavior of the S-Moore–Penrose inverse with respect to both theoretical results and other applications already known for the Moore–Penrose inverse can be considered for future research.


Acknowledgments

This work was partially supported by Ministerio de Ciencia e Innovación (Spain) and FEDER through Grant contract CGL2008-06003-C03-01/CLI.

References

[1] E.H. Moore, On the reciprocal of the general algebraic matrix, Bull. Am. Math. Soc. 26 (1920) 394–395.
[2] R. Penrose, A generalized inverse for matrices, Proc. Cambridge Philos. Soc. 51 (1955) 406–413.
[3] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, second ed., Springer-Verlag, New York, 2003.
[4] J.K. Baksalary, O.M. Baksalary, Particular formulae for the Moore–Penrose inverse of a columnwise partitioned matrix, Linear Algebra Appl. 421 (2007) 16–23.
[5] T. Britz, D.D. Olesky, P. van den Driessche, The Moore–Penrose inverse of matrices with an acyclic bipartite graph, Linear Algebra Appl. 390 (2004) 47–60.
[6] M.A. Rakha, On the Moore–Penrose generalized inverse matrix, Appl. Math. Comput. 158 (2004) 185–200.
[7] Y. Tian, Using rank formulas to characterize equalities for Moore–Penrose inverses of matrix products, Appl. Math. Comput. 147 (2004) 581–600.
[8] Y. Tian, Approximation and continuity of Moore–Penrose inverses of orthogonal row block matrices, Appl. Math. Comput. (2008), doi:10.1016/j.amc.2008.10.027.
[9] F. Toutounian, A. Ataei, A new method for computing Moore–Penrose inverse matrices, J. Comput. Appl. Math. (2008), doi:10.1016/j.cam.2008.10.008.
[10] X. Zhang, J. Cai, Y. Wei, Interval iterative methods for computing Moore–Penrose inverse, Appl. Math. Comput. 183 (2006) 522–532.
[11] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., Johns Hopkins Press, Baltimore, USA, 1996.
[12] A. Griewank, A short proof of the Dennis–Schnabel theorem, BIT 22 (1982) 252–256.
[13] G. Montero, L. González, E. Flórez, M.D. García, A. Suárez, Approximate inverse computation using Frobenius inner product, Numer. Linear Algebra Appl. 9 (2002) 239–247.
[14] O. Axelsson, Iterative Solution Methods, Cambridge University Press, Cambridge, 1994.
[15] A. Greenbaum, Iterative Methods for Solving Linear Systems, Frontiers Appl. Math., vol. 17, SIAM, Philadelphia, PA, 1997.
[16] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Co., Boston, MA, 1996.
[17] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[18] L. González, Orthogonal projections of the identity: spectral analysis and applications to approximate inverse preconditioning, SIAM Rev. 48 (2006) 66–75.
[19] R. Penrose, On best approximate solutions of linear matrix equations, Proc. Cambridge Philos. Soc. 52 (1956) 17–19.
[20] A. Ben-Israel, The Moore of the Moore–Penrose inverse, Electron. J. Linear Algebra 9 (2002) 150–157.
