Generating samples of a Gaussian vector with independent components above a hyperplane
Let $(Z_1,Z_2,\dots,Z_N)$ be a Gaussian vector with iid components, zero mean, and unit variance. Consider a hyperplane in $\mathbb{R}^N$. I am interested in generating samples of this Gaussian vector that fall on one side of the hyperplane.
For simplicity, let $N = 2$ and let the hyperplane be $aZ_1 + bZ_2 = c$. Suppose that I am interested in generating samples of $(Z_1,Z_2)$ such that $aZ_1 + bZ_2 > c$. Here is the solution methodology that comes to mind:
- Find the line perpendicular to $aZ_1 + bZ_2 = c$.
- Rotate this perpendicular line so that it coincides with the y-axis. I therefore get a rotation matrix here.
- Now my rotated line is $d$ units above (assume for simplicity; it could be below) the origin where $d$ is the distance of $aZ_1 + bZ_2 = c$ from the origin.
- Samples of $(Z_1,Z_2)$ can then be easily generated by first generating samples that lie above the rotated line and subsequently applying the inverse of the rotation matrix to these samples. I'm taking advantage of the fact that the components are independent.
This looks like a tedious task even in two dimensions. I am wondering if there is a simpler approach that translates well to $N$ dimensions.
Suggestions appreciated!
probability statistics normal-distribution sampling monte-carlo
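For concreteness, the rotation-and-truncation recipe above can be sketched in two dimensions. This is a hedged illustration, not part of the question: the function name is mine, and scipy's `truncnorm` is one way to get the one-dimensional truncated draw.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_above_line_2d(a, b, c, n_samples, rng=None):
    """Sample (Z1, Z2) ~ N(0, I) conditioned on a*Z1 + b*Z2 > c,
    following the rotation approach sketched in the question."""
    rng = np.random.default_rng(rng)
    norm_n = np.hypot(a, b)
    d = c / norm_n  # signed distance of the line from the origin
    u = np.array([a, b]) / norm_n  # unit normal to the line
    # Rotation matrix R with R @ u = (0, 1): it sends the normal to the y-axis.
    R = np.array([[u[1], -u[0]],
                  [u[0],  u[1]]])
    # In rotated coordinates the constraint becomes y > d: draw y from the
    # truncated standard normal and x from the full standard normal.
    y = truncnorm.rvs(d, np.inf, size=n_samples, random_state=rng)
    x = rng.standard_normal(n_samples)
    samples = np.column_stack([x, y])
    # Rotate back with R^{-1} = R^T; for row vectors this is samples @ R.
    # Rotational invariance of N(0, I) keeps the distribution correct.
    return samples @ R
```

For example, every row `z` returned by `sample_above_line_2d(1.0, 2.0, 0.5, 10_000)` satisfies `z[0] + 2*z[1] > 0.5`.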
Maybe this is a bit naive, but could you not simply sample $(Z_1,Z_2)$, check whether $aZ_1 + bZ_2 > c$ holds, keep the sample if it does, and otherwise discard it and sample again?
– user190080
Jul 20 at 16:08
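The rejection scheme proposed in this comment is a one-liner to try. As a hedged sketch (names are illustrative), together with its acceptance probability: since $aZ_1 + bZ_2 \sim \mathcal{N}(0, a^2+b^2)$, a draw is kept with probability $1-\Phi\!\left(c/\sqrt{a^2+b^2}\right)$.

```python
import numpy as np
from scipy.stats import norm

def rejection_sample(a, b, c, n_samples, rng=None):
    """Naive rejection: draw (Z1, Z2) ~ N(0, I), keep it iff a*Z1 + b*Z2 > c.
    Inefficient when the acceptance probability is small."""
    rng = np.random.default_rng(rng)
    out = []
    while len(out) < n_samples:
        z = rng.standard_normal(2)
        if a * z[0] + b * z[1] > c:
            out.append(z)
    return np.array(out)

# Acceptance probability for a = 1, b = 2, c = 0.5:
# a*Z1 + b*Z2 ~ N(0, a^2 + b^2), so P(accept) = 1 - Phi(c / sqrt(a^2 + b^2)).
p_accept = norm.sf(0.5 / np.hypot(1.0, 2.0))
```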
@user190080 Rejection sampling can be highly inefficient (especially in $N$ dimensions: what if your target set has mass exponentially small in $N$ under the Gaussian?)
– Clement C.
Jul 20 at 17:32
@ClementC. this is a fair point... so rejection sampling is off the table then, though it would still be helpful to know the exact dependence.
– user190080
Jul 20 at 20:26
Not the most efficient here given the additional structure, but for what it's worth: sampling from high-dimensional log-concave distributions in polynomial time (in the dimension) is certainly doable (there are many works on this). And a Gaussian conditioned on a linear constraint is log-concave.
– Clement C.
Jul 20 at 20:28
asked Jul 20 at 15:52
Tomas Jorovic
1 Answer
You don't need a rotation; a reflection is good enough and less complicated. All you want is that some axis direction is transformed into the normal. You also don't really need to "find" the normal; it's the unit vector along $\pmatrix{a\\b}$. More generally, let the hyperplane be defined by $\vec n\cdot\vec z=c$; rescale this to
$$
\frac{\vec n}{\|\vec n\|}\cdot\vec z=\frac{c}{\|\vec n\|}
$$
and write this as $\vec n'\cdot\vec z=c'$, so that $\vec n'$ is a unit vector normal to the hyperplane. Let $\vec e$ denote the unit vector along, say, the $x_1$ axis. Then reflecting in the plane through the origin perpendicular to $\vec n'-\vec e$ flips $\vec e$ onto $\vec n'$. Denote the unit normal $\frac{\vec n'-\vec e}{\|\vec n'-\vec e\|}$ of this plane by $\vec r$. Draw $x_1$ from the standard normal distribution truncated to $x_1>c'$, and draw all other components from the full standard normal distribution. Denote the resulting vector by $\vec x$. Then the desired random vector is
$$
\vec z=\vec x-2\left(\vec x\cdot\vec r\right)\vec r\;.
$$
By the way, you're taking advantage of much more than just the independence of the components. You're making use of the rotational invariance of a product of Gaussians in Cartesian coordinates; you can't do this with general i.i.d. random variables.
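This construction transcribes directly to $N$ dimensions. The following is a hedged sketch (function and variable names are mine; scipy's `truncnorm` supplies the one-dimensional truncated draw):

```python
import numpy as np
from scipy.stats import truncnorm

def sample_halfspace(n_vec, c, n_samples, rng=None):
    """Sample z ~ N(0, I_N) conditioned on n_vec . z > c, via the
    reflection construction from the answer."""
    rng = np.random.default_rng(rng)
    n_vec = np.asarray(n_vec, dtype=float)
    N = n_vec.size
    n_prime = n_vec / np.linalg.norm(n_vec)   # unit normal n'
    c_prime = c / np.linalg.norm(n_vec)       # rescaled offset c'
    e = np.zeros(N)
    e[0] = 1.0                                # unit vector along the x_1 axis
    # x_1 from the standard normal truncated to (c', inf), the rest untruncated.
    x = rng.standard_normal((n_samples, N))
    x[:, 0] = truncnorm.rvs(c_prime, np.inf, size=n_samples, random_state=rng)
    # Householder reflection swapping e and n'.
    v = n_prime - e
    if np.linalg.norm(v) < 1e-12:             # n' already equals e: no reflection
        return x
    r = v / np.linalg.norm(v)
    return x - 2.0 * np.outer(x @ r, r)       # z = x - 2 (x . r) r, row-wise
```

Since the reflection swaps $\vec e$ and $\vec n'$, the returned rows satisfy $\vec n'\cdot\vec z = x_1 > c'$, i.e. $\vec n\cdot\vec z > c$, as required.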
Thanks for the reply! You lost me when you mentioned "Then reflecting in the plane through the origin perpendicular..." What are we reflecting here?
– Tomas Jorovic
Jul 23 at 1:14
@TomasJorovic: $\vec z=\vec x-2\left(\vec x\cdot\vec r\right)\vec r$ reflects the vector $\vec x$ in the plane through the origin that's perpendicular to $\vec r$. Since $\vec r$ is a unit vector, $\left(\vec x\cdot\vec r\right)\vec r$ is the component of $\vec x$ along $\vec r$, so subtracting it twice takes you to the opposite side of the plane perpendicular to $\vec r$.
– joriki
Jul 23 at 3:10
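This explanation can be checked numerically (a small sketch with illustrative vectors; `reflect` is my helper name, and `r` is built exactly as in the answer):

```python
import numpy as np

# Illustrative unit normal n' and the first axis vector e.
n_prime = np.array([1.0, 2.0, 2.0]) / 3.0
e = np.array([1.0, 0.0, 0.0])
r = (n_prime - e) / np.linalg.norm(n_prime - e)

def reflect(x, r):
    """Reflect x in the plane through the origin perpendicular to r."""
    return x - 2.0 * np.dot(x, r) * r

# The reflection swaps e and n', and preserves lengths (it is an isometry).
assert np.allclose(reflect(e, r), n_prime)
assert np.allclose(reflect(n_prime, r), e)
x = np.array([0.3, -1.2, 0.7])
assert np.isclose(np.linalg.norm(reflect(x, r)), np.linalg.norm(x))
```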
answered Jul 21 at 0:17
joriki