Computing accurate mesh normals

Suppose we have a mesh $M$ with a vertex set $V$, a facet set $F$, and an edge set $E$. For simplicity we may assume all facets are triangular. $M$ is viewed as an approximation of a parametric surface $S$, so every point on $M$ is close to $S$ and every point on $S$ is close to $M$. Suppose $S$ has a well-defined normal at every point; then it seems natural to want a well-defined normal at every point on $M$ as well. It is also desirable for the normals on $M$ to vary continuously across the mesh, in particular when crossing interior edges. For this to occur, I am fairly sure an edge cannot be contained in more than two facets.



A natural way to define normals on the mesh is to define normals at every vertex in $V$, and then for any given point on the mesh we find a containing facet and then interpolate a normal from the three vertices of the facet using barycentric weights. It can be shown that normals are well defined along interior edges that are contained in two facets, and the choice of the facet used to interpolate the normal does not matter.
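As a minimal sketch of this interpolation step (my own NumPy illustration, not code from the post; the names `interpolate_normal`, `vn`, and `bary` are hypothetical):

```python
import numpy as np

def interpolate_normal(vertex_normals, bary):
    """Interpolate a unit normal inside a triangle from its three vertex
    normals using barycentric weights, then renormalize to unit length."""
    n = bary @ vertex_normals      # weighted average of the three normals
    return n / np.linalg.norm(n)

# Hypothetical example: three unit vertex normals, evaluated at the centroid.
vn = np.array([[0.0, 0.0, 1.0],
               [0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0]])
bary = np.array([1/3, 1/3, 1/3])
n = interpolate_normal(vn, bary)
print(n)  # the symmetric direction (1, 1, 1) / sqrt(3)
```

On a shared edge the barycentric weight of the opposite vertex is zero, which is why the two adjacent facets agree there.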



So now it remains to compute accurate normals at every vertex in $V$. To be clear, a vertex normal need not be normal to any facet containing the vertex. For example, if $M$ is shaped like a cone with a tip, one would want the tip vertex normal to point along the axis of the cone, and that direction is not normal to any facet.



The internet is full of people computing vertex normals by summing the normals of the facets containing the vertex (facet normals are well defined since facets are planar) and then normalizing the resulting vector. Sometimes the facet normals are first normalized to unit length, and sometimes they are weighted by facet area.
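For reference, that common scheme reads roughly as follows (a sketch under my own assumptions: `verts` is an array of vertex positions and `faces` a list of triangle index triples; summing raw edge cross products implicitly weights each facet by area, since the cross product's magnitude is twice the triangle area):

```python
import numpy as np

def naive_vertex_normals(verts, faces):
    """Sum facet normals at each vertex, then normalize.  The raw edge
    cross product has magnitude twice the facet area, so this is the
    area-weighted variant of the scheme."""
    normals = np.zeros_like(verts, dtype=float)
    for i, j, k in faces:
        fn = np.cross(verts[j] - verts[i], verts[k] - verts[i])
        normals[i] += fn
        normals[j] += fn
        normals[k] += fn
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)

# Hypothetical example: a flat unit square split into two triangles.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = [(0, 1, 2), (0, 2, 3)]
vn = naive_vertex_normals(verts, faces)
print(vn)  # every vertex normal is (0, 0, 1) for this planar patch
```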



I have found a different approach. Fix a vertex in $V$ and suppose $k$ facets contain it. Let $n_1, \dots, n_k$ be the facet normals (either unit normals or normals weighted by area). Define the $3 \times k$ matrix $A = [n_1 \dots n_k]$. Consider a $k$-dimensional vector $w$ of weights, and define $n = Aw$. Then $n$ is the vertex normal we are after; note that it is just a weighted average of the facet normals. It remains to choose a good $w$. The people on the internet I spoke of are choosing $w$ to be the all-ones vector for every mesh, which does not take the mesh geometry into account. But now we have a generalization and can improve on it.



What we want is for $n$ to be aligned as closely as possible with all the facet normals, which means we want $n_1 \cdot n, \dots, n_k \cdot n$ to all be as large and positive as possible. We can organize these dot products into the vector $A^T n = A^T A w$. Let $G = A^T A$, the $k \times k$ Gram matrix of the facet normals. So $A^T n = Gw$ needs to have large positive entries. It therefore makes sense to me to let $w$ be the dominant eigenvector of $G$ (to maximize the Euclidean norm of $Gw$). Now if $w$ is an eigenvector of $G$, then so is any scalar multiple of $w$, in particular $-w$. Fret not, for we choose $w$ so that the sum of the entries of $Gw$ is positive and so that $n = Aw$ is a unit vector.
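A compact sketch of this choice of $w$ (my own illustration; it uses a dense symmetric eigensolver rather than power iteration, and the function name is hypothetical):

```python
import numpy as np

def eigen_vertex_normal(A):
    """A: 3-by-k matrix whose columns are the facet normals around a vertex.
    Take w = dominant eigenvector of G = A^T A, flip its sign so the entries
    of G w sum to a positive value, and return n = A w normalized."""
    G = A.T @ A                            # k-by-k Gram matrix, symmetric PSD
    eigvals, eigvecs = np.linalg.eigh(G)   # eigh is stable for symmetric input
    w = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
    if np.sum(G @ w) < 0:                  # resolve the +/- w sign ambiguity
        w = -w
    n = A @ w
    return n / np.linalg.norm(n)

# Hypothetical example: four unit facet normals arranged symmetrically
# about the z-axis, as around the tip of a cone.
cols = np.array([[1, 0, 1], [-1, 0, 1], [0, 1, 1], [0, -1, 1]],
                dtype=float) / np.sqrt(2)
A = cols.T
n = eigen_vertex_normal(A)
print(n)  # the cone axis (0, 0, 1)
```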



It should be noted that power iteration is very efficient for Gram matrices, since each iteration can be done in $O(k)$ time instead of the usual $O(k^2)$: compute $A^T(Aw)$ as two products with the $3 \times k$ matrix $A$, without ever forming $G$. Furthermore, for a manifold triangle mesh, $k$ is about $6$ on average (a consequence of the Euler formula). So the total time for computing these vertex normals is linear in the number of vertices, i.e. amortized constant time per vertex.
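The $O(k)$-per-step claim can be sketched like so (my own illustration, with a hypothetical function name; the all-ones start is also the plain averaging scheme, so the first iterate already matches it):

```python
import numpy as np

def dominant_weights(A, iters=100):
    """Power iteration for the dominant eigenvector of G = A^T A.
    Each step evaluates A^T (A w): two products with the 3-by-k matrix A,
    costing O(k), instead of an O(k^2) product with the dense k-by-k G."""
    w = np.ones(A.shape[1])        # all-ones start = the naive averaging weights
    for _ in range(iters):
        w = A.T @ (A @ w)
        w /= np.linalg.norm(w)
    return w

# Hypothetical example: the same four symmetric cone-tip normals as above.
cols = np.array([[1, 0, 1], [-1, 0, 1], [0, 1, 1], [0, -1, 1]],
                dtype=float) / np.sqrt(2)
A = cols.T
w = dominant_weights(A)
print(w)  # converges to the uniform weights (1/2, 1/2, 1/2, 1/2)
```

One caveat of power iteration: it converges to the dominant eigenvector only when the start vector is not orthogonal to it, and the all-ones start makes that failure unlikely in practice.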



Now come the big questions:



Choosing $w$ to be the dominant eigenvector of $G$ ensures the Euclidean norm of $Gw$ is large, but ideally we need the entries of $Gw$ to all be positive. If all of the facet normals have positive dot products with each other, i.e. all the entries of $G$ are positive, then we would like all the entries of $Gw$ to be positive. Can this be proven? What about $w$ itself: when are all of its entries positive? Is there a differential-geometric interpretation of the matrix $G$? Any insights from that area of math (in which I am weak) are welcome.



I just wanted to add that in the special case of planar meshes, this method of picking vertex normals recovers the plane normal every time. Also, for cylindrical meshes, it always produces the radial direction. I have yet to prove that it always produces the radial direction for spherical meshes, but it certainly appears to be true.



Edit
I just realized that since $w$ is an eigenvector and $G$ has nonnegative eigenvalues, $Gw = \lambda w$ with $\lambda \ge 0$, so $Gw$ has nonnegative entries if and only if $w$ does. Also, my question about positive entries is answered exactly by the Perron–Frobenius theorem. If there are any additional insights into the problem I would like to hear them, but I think I can now justify my work to my boss.
      edited Jul 21 at 12:59
      asked Jul 21 at 4:28
      Randy