Numerically robust Cauchy-Schwarz?

It is well known, by the Cauchy-Schwarz inequality, that
$$
\langle x, x\rangle \, \langle y, y\rangle - \langle x, y\rangle^2 \geq 0
$$
for any $x, y \in H$, where $H$ is a Hilbert space with real scalars.



When computing the above expression numerically, it will sometimes come out negative (on the order of machine precision) due to round-off errors in the subtraction.



Is there an equivalent expression that is always numerically nonnegative? (E.g., something squared, a norm of a vector, etc.)
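As an illustration of the round-off issue described above, here is a minimal sketch (not from the original post; it assumes NumPy and the standard dot product on $\mathbb{R}^n$, with deliberately near-parallel vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Nearly parallel vectors: the true value of <x,x><y,y> - <x,y>^2 is tiny,
# so round-off in the ~1e6-sized products can push the computed result below zero.
x = rng.standard_normal(1000)
y = x + 1.0e-9 * rng.standard_normal(1000)

val = np.dot(x, x) * np.dot(y, y) - np.dot(x, y) ** 2
print(val)  # may come out slightly negative, on the order of 1e-10
```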







asked Jul 19 at 8:28 – Nico Schlömer











  • I take it that an absolute value will not suffice? You do know that the quantity is nonnegative, so taking an absolute value will not change its true value, but will result in a nonnegative number on the computer.
    – RideTheWavelet
    Jul 19 at 8:56

  • Or taking the maximum of the computed value and 0?
    – Mees de Vries
    Jul 19 at 9:45

  • Would taking the logarithm work? $\log(\langle x, x\rangle) + \log(\langle y, y\rangle) - 2\log(\langle x, y\rangle) \geq 0$. This helps when the numbers get really small and avoids floating-point errors, because it involves only addition and subtraction.
    – Piyush Divyanakar
    Jul 19 at 9:55

  • Why do you care? Is this computation an end in itself, or is it part of a larger computation?
    – awkward
    Jul 19 at 12:17

  • @RideTheWavelet: taking the absolute value is not the best idea. You know that you are tampering with the value, i.e. replacing $-\epsilon$ by $\epsilon$, which is twice the correction achieved by clamping to $0$.
    – Yves Daoust
    Jul 21 at 14:12
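A minimal sketch contrasting the two quick fixes proposed in the comments above, clamping at zero versus taking the absolute value (illustrative only; it assumes the possibly negative value has already been computed):

```python
def clamp(val: float) -> float:
    # Mees de Vries' suggestion: a spurious -eps becomes 0 (error at most eps).
    return max(val, 0.0)

def absval(val: float) -> float:
    # RideTheWavelet's suggestion: a spurious -eps becomes +eps
    # (twice the correction, as Yves Daoust points out).
    return abs(val)

print(clamp(-1.0e-16), absval(-1.0e-16))  # 0.0 1e-16
```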
















1 Answer
There is Lagrange's Identity:

$$\langle x, x\rangle \langle y, y\rangle - \langle x, y\rangle^2 = \sum_{1 \le i < j \le n} (x_i y_j - x_j y_i)^2$$

for $x, y \in \mathbb{R}^n$.
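A minimal sketch of evaluating the right-hand side directly (not part of the original answer; it assumes NumPy). Every term is a square, so the result cannot come out negative:

```python
import numpy as np

def gram_det_lagrange(x: np.ndarray, y: np.ndarray) -> float:
    """<x,x>*<y,y> - <x,y>**2 via Lagrange's identity (sum of squares)."""
    # Pairwise terms x_i*y_j - x_j*y_i; the full double sum counts each
    # unordered pair {i, j} twice and the diagonal is zero, hence the 0.5.
    d = np.outer(x, y) - np.outer(y, x)
    return 0.5 * float(np.sum(d * d))

x = np.random.default_rng(0).standard_normal(5)
y = 2.0 * x + 1.0e-8
print(gram_det_lagrange(x, y))  # tiny, but nonnegative by construction
```

Note that this costs $O(n^2)$ operations rather than the $O(n)$ of the naive formula, so for large $n$ the cheaper fix of clamping the naive value at zero may still be preferable.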






answered Jul 21 at 13:58 – awkward






















             
