Sum of positive definite and symmetric matrix

Let $A$ be a real, symmetric, positive definite $n\times n$ matrix. Let $B$ be a real, symmetric $n\times n$ matrix.

Does there exist an $\epsilon>0$ such that $A+\epsilon B$ is positive definite?

My attempt: According to Wikipedia we can simultaneously diagonalize $A$ and $B$. Since $A$ is positive definite, its eigenvalues are all positive. Call them $\lambda_i$. Let $\rho_i$ be the eigenvalues of $B$. Choose a basis which simultaneously diagonalizes $A$ and $B$. Then $$u^T(A+\epsilon B)u=u^TAu+\epsilon u^TBu=\sum_i(\lambda_i+\epsilon \rho_i)u_i^2.$$ So if $$\epsilon<\frac{\lambda_{\min}}{\max_i|\rho_i|}$$ (assume $B\neq 0$), then $$\lambda_i+\epsilon\rho_i>\lambda_i-\lambda_{\min}\geq 0$$ for all $i$, which implies $A+\epsilon B$ is positive definite.

Is this valid? Is there a basis-independent way to show this?
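Not a proof, but the conjectured bound can be sanity-checked numerically. A minimal sketch in NumPy, assuming randomly generated test matrices (this only checks the conclusion, not the diagonalization step, and reads the bound $\epsilon<\lambda_{\min}(A)/\max_i|\rho_i(B)|$ with eigenvalues in the ordinary orthogonal sense):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random symmetric positive definite A: M M^T is PSD, adding I makes it PD.
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)

# Random symmetric (possibly indefinite) B.
N = rng.standard_normal((n, n))
B = (N + N.T) / 2

# The bound from the attempt: eps < lambda_min(A) / max_i |rho_i(B)|.
lam_min = np.linalg.eigvalsh(A).min()
rho_max = np.abs(np.linalg.eigvalsh(B)).max()
eps = 0.5 * lam_min / rho_max

# A + eps*B should then be positive definite (smallest eigenvalue > 0).
assert np.linalg.eigvalsh(A + eps * B).min() > 0
```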
  • Can you provide a link to where Wikipedia says A and B can be simultaneously diagonalized? – John Polcari, Jul 17 at 2:03
  • en.wikipedia.org/wiki/… – srp, Jul 17 at 2:07
  • If you carefully read the entry you point to, you will find that it discusses a form of joint diagonalization which does not lead to the traditional eigenvalues for the two matrices. At least one of the diagonal results is necessarily the identity matrix. – John Polcari, Jul 17 at 2:21
asked Jul 17 at 1:38 by srp (edited Jul 17 at 1:54)
2 Answers
accepted
The sought-for conclusion holds since the eigenvalues of a matrix depend continuously on its entries: $A + \epsilon B$ is continuous in $\epsilon$ and agrees with $A$ for $\epsilon = 0$, so all eigenvalues of the symmetric matrix $A + \epsilon B$ are positive for $\vert \epsilon \vert$ sufficiently small, and $A + \epsilon B$ is positive definite for such $\epsilon$.
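This continuity can be made quantitative with Weyl's perturbation inequality, which bounds how far each eigenvalue can move. A minimal numerical sketch, assuming random test matrices (not from the original post):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)      # symmetric positive definite, lambda_min(A) >= 1
N = rng.standard_normal((n, n))
B = (N + N.T) / 2            # symmetric, possibly indefinite

lam_min_A = np.linalg.eigvalsh(A).min()
norm_B = np.linalg.norm(B, 2)          # spectral norm of B

for eps in [1e-1, 1e-2, 1e-3]:
    lam_min = np.linalg.eigvalsh(A + eps * B).min()
    # Weyl: each eigenvalue moves by at most eps * ||B||_2 ...
    assert abs(lam_min - lam_min_A) <= eps * norm_B + 1e-12

# ... so for |eps| < lambda_min(A) / ||B||_2 the matrix stays positive definite.
assert np.linalg.eigvalsh(A + 1e-3 * B).min() > 0
```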
  • Well what do you know? en.m.wikipedia.org/wiki/… !!! – Robert Lewis, Jul 17 at 2:18
You can make this line of reasoning work, but you should be careful about what "diagonalized" means in this context. In particular, your wiki page is talking about diagonalization via a congruence rather than a similarity: the theorem is that there exists an invertible matrix $P$ (not necessarily orthogonal) such that both $PAP^T$ and $PBP^T$ are diagonal. Note that the diagonal entries will not generally be the eigenvalues of $A$ and $B$.

Note that we can't necessarily diagonalize $A$ and $B$ simultaneously in the sense of similarity. In fact, we can do so if and only if $AB = BA$.

It is generally true that if $A$ is symmetric and positive definite, then such an $\epsilon$ exists. One proof is as follows:

If $A$ is positive definite, then for some $r > 0$ (e.g. $r = \lambda_{\min}$) we can write
$$
A = rI + (A - rI)
$$
where $I$ denotes the identity matrix and $A - rI$ is positive semidefinite. With that, we have
$$
A + \epsilon B = [rI + \epsilon B] + (A - rI).
$$
It suffices to choose an $\epsilon$ such that $rI + \epsilon B$ is positive definite, e.g. any $\epsilon$ with $\epsilon\,\|B\| < r$.
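The decomposition above can be sketched numerically, assuming random test matrices; the explicit choice $\epsilon\,\|B\|_2 < r$ makes $rI + \epsilon B$ positive definite:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)            # symmetric positive definite
N = rng.standard_normal((n, n))
B = (N + N.T) / 2                  # symmetric

r = np.linalg.eigvalsh(A).min()    # r = lambda_min(A): A - rI is PSD
assert np.linalg.eigvalsh(A - r * np.eye(n)).min() >= -1e-10  # PSD up to roundoff

# rI + eps*B is positive definite whenever eps * ||B||_2 < r.
eps = 0.9 * r / np.linalg.norm(B, 2)
assert np.linalg.eigvalsh(r * np.eye(n) + eps * B).min() > 0

# Sum of a positive definite and a positive semidefinite matrix is positive definite.
assert np.linalg.eigvalsh(A + eps * B).min() > 0
```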




Another approach that I like: let $A^{-1/2}$ denote the positive definite square root of $A^{-1}$. Then $A + \epsilon B$ is positive definite if and only if the matrix
$$
A^{-1/2}(A + \epsilon B)A^{-1/2} = I + \epsilon A^{-1/2}B(A^{-1/2})^T
$$
is positive definite. It therefore suffices to consider the case with $A = I$.

If you choose to diagonalize $I + \epsilon A^{-1/2}B(A^{-1/2})^T$ by diagonalizing the symmetric matrix $A^{-1/2}B(A^{-1/2})^T$, then you are essentially deriving the "simultaneous diagonalizability" of quadratic forms in our specific case.
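The congruence in this second approach can likewise be sketched numerically. Assumed here: random test matrices, and $A^{-1/2}$ built from the eigendecomposition of $A$ (since $A^{-1/2}$ is symmetric, $(A^{-1/2})^T = A^{-1/2}$):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)            # symmetric positive definite
N = rng.standard_normal((n, n))
B = (N + N.T) / 2                  # symmetric

# Positive definite square root of A^{-1} via the eigendecomposition of A.
w, V = np.linalg.eigh(A)
A_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

eps = 0.1
# A^{-1/2}(A + eps B)A^{-1/2} = I + eps A^{-1/2} B A^{-1/2}
lhs = A_inv_sqrt @ (A + eps * B) @ A_inv_sqrt
rhs = np.eye(n) + eps * (A_inv_sqrt @ B @ A_inv_sqrt)
assert np.allclose(lhs, rhs)

# Congruent symmetric matrices have the same inertia, so one is positive
# definite exactly when the other is.
assert (np.linalg.eigvalsh(lhs).min() > 0) == (np.linalg.eigvalsh(A + eps * B).min() > 0)
```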
(accepted answer) answered Jul 17 at 2:13 by Robert Lewis, edited Jul 17 at 2:21
answered Jul 17 at 2:04 by Omnomnomnom, edited Jul 17 at 2:22