Power method for solving a system of equations

Given a set of $n$ points on which a triangulation is performed, it is possible to construct coefficients $\lambda_{ij}>0$ such that each point $x_i$ is a convex combination of the points connected to it. Suppose $N(i)$ is the set of points adjacent to $x_i$ in the triangulation; then, for each $x_i$,
$$\sum_{j\in N(i)} \lambda_{ij}\,x_j = x_i$$
and
$$\sum_{j\in N(i)} \lambda_{ij} = 1.$$
In matrix form,
$$\Lambda x = x,$$
where $\Lambda$ is a square, $n\times n$, row-stochastic matrix (non-negative entries, and no self-loops, i.e., zero diagonal).
Now suppose the matrix $\Lambda$ is given. Can one recover the configuration $x$ by the power method, i.e., by repeatedly multiplying a vector by $\Lambda$? I ask because the matrix form above resembles the eigenvalue/eigenvector definition. If not, what would be a good approach to computing such a configuration? Is the solution unique?
One more interesting question: can the coefficients $\lambda_{ij}$ be constructed in such a way that $\lambda_{ij} = \lambda_{ji}$, i.e., so that $\Lambda$ is symmetric and hence doubly stochastic? If the coefficients were constructed this way, would the power method then converge to $x$?
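For concreteness, here is a minimal sketch of the power iteration I have in mind (Python/NumPy). The matrix `L` is a hypothetical $4\times 4$ row-stochastic matrix with zero diagonal, chosen for illustration only, not derived from an actual triangulation:

```python
import numpy as np

# Hypothetical row-stochastic matrix with zero diagonal
# (each row: positive weights on the "neighbours", summing to 1).
L = np.array([
    [0.0, 0.5, 0.3, 0.2],
    [0.4, 0.0, 0.4, 0.2],
    [0.3, 0.3, 0.0, 0.4],
    [0.2, 0.5, 0.3, 0.0],
])

# Power iteration: repeatedly apply L to a starting vector and normalize.
v = np.random.default_rng(0).random(4)
for _ in range(1000):
    v = L @ v
    v = v / np.linalg.norm(v)

# Since L is row stochastic, L @ ones = ones, so the all-ones vector is
# an eigenvector for the dominant eigenvalue 1; the iterate v therefore
# approaches a constant vector rather than some non-trivial configuration.
print(v)
```

In the intended setting one would run the same loop on each coordinate vector of the configuration (e.g., the $x$- and $y$-coordinates separately), since $\Lambda x = x$ is a per-coordinate statement.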
linear-algebra matrices geometry eigenvalues-eigenvectors analytic-geometry
I'm not familiar with the context, but if the norm of the matrix $\boldsymbol\Lambda - \boldsymbol I$ is less than some fixed number $q < 1$, then the method would converge.
– xbh
Jul 30 at 16:22
@xbh The diagonal entries of $\Lambda$ are zeros, meaning that each row of $\Lambda - I$ sums to zero. Could you please give more details on the "norm" claim?
– kevin811
Jul 30 at 16:28
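To make this objection concrete: for any row-stochastic $\Lambda$ with zero diagonal, each row of $\Lambda - I$ has absolute sum $|-1| + 1 = 2$, so the infinity norm (maximum absolute row sum) is exactly $2$, and the condition $\|\Lambda - I\| < q < 1$ cannot hold in that norm. A quick check on a hypothetical row-stochastic matrix (illustrative, not from a real triangulation):

```python
import numpy as np

# Hypothetical row-stochastic matrix with zero diagonal.
L = np.array([
    [0.0, 0.5, 0.3, 0.2],
    [0.4, 0.0, 0.4, 0.2],
    [0.3, 0.3, 0.0, 0.4],
    [0.2, 0.5, 0.3, 0.0],
])

# Infinity norm = max absolute row sum: the diagonal contributes |-1| = 1
# and the off-diagonal entries contribute 1, giving 2 for every row.
norm_inf = np.linalg.norm(L - np.eye(4), ord=np.inf)
print(norm_inf)
```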
The power method is an eigenvalue algorithm that finds only the largest eigenvalue and its eigenvector; it isn't used for solving systems of equations. There are variants for different types of matrices. However, your question doesn't make much sense.
– RHowe
Jul 30 at 20:47
@Geronimo If it is known that the solution of the system is the dominant eigenvector of the matrix involved, why can't it be used?
– kevin811
Jul 31 at 8:40
asked Jul 30 at 16:05
kevin811