Eigendecomposition of a large symmetric block tridiagonal matrix.
I was having a go at implementing the algorithm in the paper Spectral Matting, which looks neat.
In it they construct a matting Laplacian. This is a sparse N×N symmetric matrix, where each row and column corresponds to a pixel in the original image (N is the number of pixels). The only non-zero entries are those coupling adjacent pixels, which gives the matrix a block tridiagonal structure, with each of the blocks itself tridiagonal.
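For concreteness, here is a minimal sketch of a matrix with this sparsity pattern, using a plain 4-connected grid-graph Laplacian as a stand-in (the real matting Laplacian's entries come from local color statistics in the paper, and its off-diagonal blocks are tridiagonal rather than diagonal, but the overall block structure is the same):

```python
import numpy as np
import scipy.sparse as sp

def path_laplacian(n):
    # Laplacian of a path graph on n vertices (tridiagonal)
    main = np.full(n, 2.0)
    main[0] = main[-1] = 1.0
    off = -np.ones(n - 1)
    return sp.diags([off, main, off], [-1, 0, 1])

def grid_laplacian(h, w):
    # Kronecker-sum construction of the h x w grid-graph Laplacian:
    # one block row per image row; diagonal blocks are tridiagonal,
    # off-diagonal blocks are -I
    return sp.kron(sp.identity(h), path_laplacian(w)) + \
           sp.kron(path_laplacian(h), sp.identity(w))

L = grid_laplacian(480, 640).tocsc()  # N = 307,200 pixels for a 640x480 image
```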
The algorithm depends on finding a few of the smallest eigenvalues (and eigenvectors) of this matrix. As you might guess, for realistic images the matrix becomes huge, and even using ARPACK (which is what the authors do) it becomes impossibly slow. I'd also like to avoid archaic Fortran software if possible!
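For reference, here is roughly the ARPACK call I mean, through SciPy's eigsh wrapper (a sketch with the stand-in L from above; names assumed). The standard trick is shift-invert mode: asking ARPACK for the smallest eigenvalues directly converges very slowly, while Lanczos on (L − σI)⁻¹ converges quickly but needs a sparse factorization:

```python
import scipy.sparse.linalg as spla

k = 20  # number of smallest eigenpairs wanted

# Naive: Lanczos on L itself, targeting smallest magnitude -- very slow
# vals, vecs = spla.eigsh(L, k=k, which='SM')

# Shift-invert: Lanczos on (L - sigma*I)^-1, so eigenvalues nearest
# sigma converge first. L is positive semi-definite (it's a Laplacian),
# so a tiny negative sigma keeps the factorization nonsingular.
vals, vecs = spla.eigsh(L, k=k, sigma=-1e-5, which='LM')
```

Even in shift-invert mode, the fill-in from factoring a 2-D grid matrix grows with image size, so this can still be slow at realistic resolutions.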
I'm wondering if there is a faster way to find the smallest eigenvalues/vectors.
So far I've found that you can apply the inverse quickly (i.e. solve linear systems with it, since the explicit inverse would be dense) using the block Thomas algorithm, sketched below. And MRRR can compute the eigendecomposition of a tridiagonal matrix. Is there a block MRRR method? How does it scale?
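Here is what I mean by the block Thomas algorithm, as a minimal sketch (dense blocks, no pivoting across block rows, so it assumes something like diagonal dominance or positive definiteness; a fast solve like this is also exactly what shift-invert Lanczos needs):

```python
import numpy as np

def block_thomas_solve(D_blocks, L_blocks, U_blocks, b):
    """Solve a block tridiagonal system A x = b.

    D_blocks[i] : (m, m) diagonal blocks,       i = 0..n-1
    L_blocks[i] : (m, m) sub-diagonal blocks,   i = 0..n-2
    U_blocks[i] : (m, m) super-diagonal blocks, i = 0..n-2
    b           : (n, m) right-hand side, one m-vector per block row

    Sketch only: dense blocks, no pivoting across block rows.
    """
    n = len(D_blocks)
    D = [np.asarray(Di, dtype=float).copy() for Di in D_blocks]
    y = np.asarray(b, dtype=float).copy()

    # Forward elimination: fold each sub-diagonal block into the next row
    for i in range(n - 1):
        # W = L_blocks[i] @ inv(D[i]), computed via a transposed solve
        W = np.linalg.solve(D[i].T, np.asarray(L_blocks[i]).T).T
        D[i + 1] -= W @ U_blocks[i]
        y[i + 1] -= W @ y[i]

    # Back substitution
    x = np.empty_like(y)
    x[-1] = np.linalg.solve(D[-1], y[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(D[i], y[i] - U_blocks[i] @ x[i + 1])
    return x.ravel()
```

This runs in O(n·m³) time for n block rows of size m, versus O((nm)³) for a dense factorization. For the scalar tridiagonal case, MRRR is exposed in SciPy as scipy.linalg.eigh_tridiagonal(d, e, lapack_driver='stemr') (wrapping LAPACK's ?stemr routines), but I haven't found a block analogue.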
I'd appreciate any help.
eigenvalues-eigenvectors
asked Jan 30 '13 at 19:03
Timmmm
1,496