Generating samples of a Gaussian vector with independent components above a hyperplane

Let $(Z_1,Z_2,\dots,Z_N)$ be a Gaussian vector with i.i.d. components, zero mean, and unit variance. Consider a hyperplane in $\mathbb{R}^N$. I am interested in generating samples of this Gaussian vector that fall on one side of the hyperplane.



For simplicity, let $N = 2$ and let the hyperplane be $aZ_1 + bZ_2 = c$. Suppose that I am interested in generating samples of $(Z_1,Z_2)$ such that $aZ_1 + bZ_2 > c$. Here is the solution methodology that comes to mind:



  • Find the line perpendicular to $aZ_1 + bZ_2 = c$.

  • Rotate this perpendicular line so that it coincides with the y-axis; this gives me a rotation matrix.

  • Now my rotated line is $d$ units above the origin (assume this for simplicity; it could be below), where $d$ is the distance of $aZ_1 + bZ_2 = c$ from the origin.

  • Samples of $(Z_1,Z_2)$ can then be easily generated by first generating samples that lie above the rotated line and subsequently applying the inverse of the rotation matrix to these samples. I'm taking advantage of the fact that the components are independent.

This looks like a tedious task even in two dimensions. I am wondering if there is a simpler approach that translates well to $N$ dimensions.
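For concreteness, the four steps above can be sketched as follows for $N = 2$. This is a rough illustration with function and variable names of my own choosing, using the standard normal's inverse CDF for the truncated draw:

```python
# A rough sketch of the rotate-then-sample plan above for N = 2
# (names are mine, not from the post).
import math
import random
from statistics import NormalDist

def sample_above_line(a, b, c, rng=None):
    """Sample (Z1, Z2) ~ N(0, I) conditioned on a*Z1 + b*Z2 > c."""
    rng = rng or random.Random()
    norm = math.hypot(a, b)
    d = c / norm  # distance of the line from the origin
    # Angle of the rotation taking (0, 1) onto the unit normal (a, b)/norm.
    theta = math.atan2(b, a) - math.pi / 2
    nd = NormalDist()
    # y from N(0, 1) truncated to (d, inf), via the inverse CDF.
    u = rng.random()
    y = nd.inv_cdf(nd.cdf(d) + u * (1 - nd.cdf(d)))
    x = rng.gauss(0, 1)  # the unconstrained component
    # Rotate (x, y) back so that the y-direction becomes the normal.
    z1 = math.cos(theta) * x - math.sin(theta) * y
    z2 = math.sin(theta) * x + math.cos(theta) * y
    return z1, z2
```

Since the rotation maps the constraint $y > d$ onto $aZ_1 + bZ_2 > c$, every returned sample satisfies the half-plane condition by construction.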



Suggestions appreciated!

  • Maybe this is a bit naive, but could you not simply sample $(Z_1,Z_2)$ and check whether $aZ_1 + bZ_2 > c$ holds? If it does, keep the sample; if it doesn't, discard it and sample again.
    – user190080
    Jul 20 at 16:08
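A minimal sketch of the rejection idea in this comment (names are mine): draw from the unconditional Gaussian and keep only samples in the half-plane.

```python
# Rejection sampling for the half-plane constraint (illustrative only;
# fine when the target region has non-negligible Gaussian mass).
import random

def sample_halfplane_rejection(a, b, c, rng=None):
    rng = rng or random.Random()
    while True:
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        if a * z1 + b * z2 > c:  # keep only samples above the line
            return z1, z2
```

The expected number of trials per accepted sample is $1/\Pr(aZ_1+bZ_2>c)$, which blows up as $c/\sqrt{a^2+b^2}$ grows.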

    @user190080 Rejection sampling can be highly inefficient (especially in $N$ dimensions: what if your target set has mass exponentially small in $N$ under the Gaussian?)
    – Clement C.
    Jul 20 at 17:32










  • @ClementC. This is a fair point... so rejection sampling is off the table, though it might be helpful to know the exact dependence on $N$.
    – user190080
    Jul 20 at 20:26










  • Not the most efficient here given the additional structure, but for what it's worth: sampling from high-dimensional log-concave distributions in polynomial time (in the dimension) is certainly doable (there are many works on this). And a Gaussian conditioned on a linear constraint is log-concave.
    – Clement C.
    Jul 20 at 20:28

asked Jul 20 at 15:52 by Tomas Jorovic

1 Answer

You don't need a rotation; a reflection is good enough and less complicated. All you want is for some axis direction to be transformed into the normal. You also don't really need to "find" the normal; it's the unit vector along $\begin{pmatrix}a\\b\end{pmatrix}$. More generally, let the hyperplane be defined by $\vec n\cdot\vec z=c$; rescale this to

$$
\frac{\vec n}{\|\vec n\|}\cdot\vec z=\frac{c}{\|\vec n\|}
$$

and write this as $\vec n'\cdot\vec z=c'$, so that $\vec n'$ is a unit vector normal to the hyperplane. Let $\vec e$ denote the unit vector along, say, the $x_1$ axis. Then reflecting in the plane through the origin perpendicular to $\vec n'-\vec e$ flips $\vec e$ onto $\vec n'$. Denote the unit normal $\frac{\vec n'-\vec e}{\|\vec n'-\vec e\|}$ of this plane by $\vec r$. Draw $x_1$ from the distribution truncated at $x_1=c'$, and draw all other components from the full distribution. Denote the resulting vector by $\vec x$. Then the desired random vector is

$$
\vec z=\vec x-2\left(\vec x\cdot\vec r\right)\vec r\;.
$$

By the way, you're taking advantage of much more than just the independence of the components. You're making use of the rotational invariance of a product of Gaussians in Cartesian coordinates; you can't do this with general i.i.d. random variables.
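This reflection recipe translates directly to code. Below is a minimal sketch in $N$ dimensions; the function name and the inverse-CDF draw for the truncated coordinate are my own choices, not from the answer:

```python
# Sketch of the Householder-reflection recipe: draw x1 from N(0,1)
# truncated to (c', inf), the other components from the full N(0,1),
# then reflect so the x1-axis direction maps onto the unit normal n'.
import math
import random
from statistics import NormalDist

def sample_halfspace(n, c, rng=None):
    """Sample Z ~ N(0, I) conditioned on n . Z > c."""
    rng = rng or random.Random()
    norm = math.sqrt(sum(v * v for v in n))
    nprime = [v / norm for v in n]   # unit normal n'
    cprime = c / norm                # rescaled offset c'
    N = len(n)
    e = [1.0] + [0.0] * (N - 1)      # unit vector along the x1 axis
    nd = NormalDist()
    # x1 from N(0, 1) truncated to (c', inf), via the inverse CDF.
    u = rng.random()
    x = [nd.inv_cdf(nd.cdf(cprime) + u * (1 - nd.cdf(cprime)))]
    x += [rng.gauss(0, 1) for _ in range(N - 1)]
    diff = [a - b for a, b in zip(nprime, e)]
    dnorm = math.sqrt(sum(v * v for v in diff))
    if dnorm < 1e-12:                # n' already equals e: no reflection needed
        return x
    r = [v / dnorm for v in diff]    # unit normal of the reflecting plane
    xr = sum(a * b for a, b in zip(x, r))
    # z = x - 2 (x . r) r flips e onto n', so n'. z = x1 > c'.
    return [a - 2 * xr * b for a, b in zip(x, r)]
```

Because the reflection maps $\vec e$ onto $\vec n'$ and preserves inner products, $\vec n'\cdot\vec z = x_1 > c'$ holds for every returned sample, with no rejection step at all.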

  • Thanks for the reply! You lost me when you mentioned "Then reflecting in the plane through the origin perpendicular..." What are we reflecting here?
    – Tomas Jorovic
    Jul 23 at 1:14










  • @TomasJorovic: $\vec z=\vec x-2\left(\vec x\cdot\vec r\right)\vec r$ reflects the vector $\vec x$ in the plane through the origin that's perpendicular to $\vec r$. Since $\vec r$ is a unit vector, $\left(\vec x\cdot\vec r\right)\vec r$ is the component of $\vec x$ along $\vec r$, so subtracting it twice takes you to the opposite side of the plane perpendicular to $\vec r$.
    – joriki
    Jul 23 at 3:10

answered Jul 21 at 0:17 by joriki