Numerically robust Cauchy-Schwarz?
It is well known, by the Cauchy-Schwarz inequality, that
$$
\langle x, x\rangle \langle y, y\rangle - \langle x, y\rangle^2 \geq 0
$$
for any $x, y \in H$, where $H$ is a Hilbert space with real scalars.
When computing the above expression numerically, the result is sometimes negative (on the order of machine precision) because of round-off errors in the subtraction.
Is there an equivalent expression that is always numerically nonnegative? (E.g., something raised to the power of 2, the norm of a vector, etc.)
hilbert-spaces inner-product-space cauchy-schwarz-inequality
asked Jul 19 at 8:28
Nico Schlömer
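A minimal sketch of the failure mode described in the question, assuming NumPy; the dimension, perturbation size, and seed are arbitrary illustrative choices, not from the original post.

```python
# For nearly parallel x and y the exact value of <x,x><y,y> - <x,y>^2 is tiny,
# so cancellation in the final subtraction can leave a slightly negative result.
import numpy as np

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(1000):
    x = rng.standard_normal(10)
    y = x + 1e-9 * rng.standard_normal(10)   # almost parallel to x
    naive = np.dot(x, x) * np.dot(y, y) - np.dot(x, y) ** 2
    worst = min(worst, naive)

print(worst)   # typically a tiny negative number, despite the exact value being >= 0
```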
I take it that an absolute value will not suffice? You do know that the quantity is nonnegative, so taking an absolute value will not change its true value, but will result in a nonnegative number on the computer.
– RideTheWavelet
Jul 19 at 8:56
Or taking the maximum of the computed value and 0?
– Mees de Vries
Jul 19 at 9:45
Would taking the logarithm work? $\log(\langle x, x\rangle) + \log(\langle y, y\rangle) - 2\log(\langle x, y\rangle) \geq 0$. This helps when the numbers get really small and avoids floating-point errors because it uses only addition and subtraction.
– Piyush Divyanakar
Jul 19 at 9:55
Why do you care? Is this computation an end in itself, or is it part of a larger computation?
– awkward
Jul 19 at 12:17
@RideTheWavelet: taking the absolute value is not the best idea. You know that you are tampering with the value, i.e. replacing $-\epsilon$ by $\epsilon$, which is twice the correction achieved with $0$.
– Yves Daoust
Jul 21 at 14:12
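The clamping idea raised in the comments above can be sketched as follows; NumPy is assumed, and the helper name `clamped_gram_det` and the test vectors are mine, not from the discussion.

```python
# Since the exact value is known to be nonnegative, clip the computed value at
# zero before using it further.
import numpy as np

def clamped_gram_det(x, y):
    naive = np.dot(x, x) * np.dot(y, y) - np.dot(x, y) ** 2
    # max(..., 0.0) replaces a spurious -eps by 0 rather than by +eps,
    # which is the objection raised against using abs().
    return max(naive, 0.0)

x = np.array([1.0, 2.0, 3.0])
y = x + 1e-9 * np.array([1.0, -1.0, 0.5])   # nearly parallel to x
print(clamped_gram_det(x, y))
```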
1 Answer
There is Lagrange's Identity:
$$\langle x,x \rangle \langle y,y \rangle - \langle x,y \rangle^2 = \sum_{1 \le i < j \le n} (x_i y_j - x_j y_i)^2$$
for $x, y \in \mathbb{R}^n$.
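A minimal sketch of the identity in code, assuming NumPy; the function name `lagrange_gram_det` and the test vectors are illustrative choices, not part of the answer.

```python
# Lagrange's identity: sum over i < j of (x_i y_j - x_j y_i)^2.
# Every term is a square, so the computed value cannot drop below zero.
import numpy as np

def lagrange_gram_det(x, y):
    # The antisymmetric matrix x_i y_j - x_j y_i has a zero diagonal and
    # counts each pair (i, j) twice, hence the factor 1/2.
    cross = np.outer(x, y) - np.outer(y, x)
    return 0.5 * np.sum(cross ** 2)

x = np.array([1.0, 2.0, 3.0])
y = x + 1e-9 * np.array([1.0, -1.0, 0.5])   # nearly parallel to x

naive = np.dot(x, x) * np.dot(y, y) - np.dot(x, y) ** 2
print(naive)                    # may be slightly negative
print(lagrange_gram_det(x, y))  # always >= 0
```

The price of the guaranteed sign is forming all pairwise cross terms, which costs $O(n^2)$ work versus $O(n)$ for the naive formula.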
answered Jul 21 at 13:58
awkward