Measures for homogeneity of a finite sequence - approximating an image with lines.
I am trying to approximate a grayscale image with lines, using a greedy algorithm. For our purposes, a line is a finite sequence of numbers, each between $0$ and $1$, where $0$ is pure white and $1$ is pure black. I evaluate the fitness of a line as follows:
Treat a line as a set of coordinates (pixels) $L \subset \mathbb{N}_0^2$. I use the Bresenham line algorithm to decide which pixels belong on the line.
The function $f(p_{i,j}) \in [0, 1]$ takes the color value of a pixel at position $i, j$ and outputs the color value that the pixel will take after the line goes over it. In the simplest case, where the lines are always black, it just outputs $1$, but it could also output $p_{i,j} + \frac{1 - p_{i,j}}{2}$ for 50% transparent lines, etc.
The function $g(i, j) \in [0, 1]$ gives the color value of the pixel at $i, j$ in the target image.
Then the fitness function is given as the mean absolute difference between the pixels on the line in the target image and in the current image after the line is drawn: $$F(L) = \frac{1}{|L|} \sum_{(i, j) \in L} |f(p_{i,j}) - g(i,j)|$$
The best line is the one with the lowest value of $F(L)$.
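The setup above can be sketched in code. This is a minimal illustration with hypothetical names, assuming NumPy arrays for the current and target images; it is not the poster's actual implementation:

```python
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer pixel coordinates on the segment (x0,y0)-(x1,y1), via Bresenham."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pixels

def fitness(line_pixels, current, target, f=lambda p: 1.0):
    """F(L): mean absolute difference between the drawn value f(p) and the
    target value g, over the pixels of the line. Default f draws pure black."""
    total = sum(abs(f(current[i, j]) - target[i, j]) for (i, j) in line_pixels)
    return total / len(line_pixels)
```

For a 50% transparent line, one would pass `f=lambda p: p + (1 - p) / 2` instead of the default.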
The problem that I've encountered is that an image like this:
is being approximated like that: because vertical lines 'fit' the image the best. However, this destroys the details that humans who look at the picture care about.
To counter that, I thought of factoring in the 'homogeneity' of a line by treating the line as a sequence instead of a set, and adding a penalty to the fitness score if a line goes over a section that contains a lot of changes.
Question: What suitable measures of 'homogeneity' are there in this context?
For reference, I would like the result to look something like that: here I changed the background color from white to gray to achieve the result. But this only works because the image is so simple; the same approach wouldn't work with more complex images.
statistics reference-request soft-question approximation image-processing
Just trying to understand the approximation concept: the input is a grayscale image (got that); the output is a set of line segments that set pixels to either black or white when digitised (e.g. via Bresenham line drawing)?
– Patrick Hew
Jul 21 at 11:15
Could you please add an example(s) of grayscale images that are processed in the way you want? Thanks.
– Patrick Hew
Jul 21 at 11:18
@PatrickHew Thank you for the suggestion. I added the description of the fitness function and the reference result. But I'm not asking for a better algorithm that would do this; I'm just asking how to measure how much the values on the line change. I thought of things like summing up the differences between successive pixels, or trying a Fourier transform and penalizing lines with greater frequencies, but maybe there is a more elegant statistical measure for this; I just don't know much statistics.
– ZyTelevan
Jul 21 at 12:29
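The 'summing up the differences between successive pixels' idea from this comment is the total variation of the sequence. A minimal sketch:

```python
def total_variation(values):
    """Sum of absolute differences between successive pixel values on the line.
    Zero for a perfectly homogeneous line; large when the line crosses many edges."""
    return sum(abs(b - a) for a, b in zip(values, values[1:]))
```

Used as a penalty, one might minimize `F(L) + lam * total_variation(values_along_L)` for some weight `lam` (a hypothetical parameter).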
edited Jul 21 at 12:14
asked Jul 19 at 8:03
ZyTelevan
2 Answers
Accepted answer (Patrick Hew, answered Jul 21 at 13:03)
Measure the 'Mean Number of Pixels Since Last Change'?
The model/intuition/analogy here is from reliability theory, specifically a system that alternates between Working and Broken states. Here Working = Draw As Black and Broken = Draw As White.
One of the common measures in such systems is the Mean Time To Failure. By extrapolation, it seems you want lines with the equivalent of a long Mean Time To Failure rather than a short one.
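As a hedged sketch of this suggestion (the binarization threshold is an illustrative assumption, not part of the answer): binarize the values along the line and compute the mean run length, the discrete analogue of the Mean Time To Failure.

```python
def mean_run_length(values, threshold=0.5):
    """Mean number of consecutive pixels between changes, after binarizing.
    A long mean run means a homogeneous line; a short one means many changes."""
    bits = [v >= threshold for v in values]
    runs, current = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            current += 1
        else:
            runs.append(current)
            current = 1
    runs.append(current)
    return sum(runs) / len(runs)
```

A line through the flat background scores its full length; a line repeatedly crossing detail edges scores close to 1, so the reciprocal could serve as a penalty term.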
Thank you! I'll accept this one because it answers the question more directly, but both approaches look very promising!
– ZyTelevan
Jul 21 at 13:08
Thanks! Hope it helps!
– Patrick Hew
Jul 21 at 13:26
Answer (Patrick Hew, answered Jul 21 at 12:51)
The desired answer looks a lot like a Hough transform (https://en.m.wikipedia.org/wiki/Hough_transform) followed by an inverse Hough transform. In rough overview: each pixel 'votes' for the existence of a line in the space of all possible lines (Hough transform); then take the lines that get the most votes (inverse Hough transform).
This indirectly measures the 'homogeneity' because the 'off' pixels are implicitly votes against the existence of lines.
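In rough code, the voting step might look like this (a pure-NumPy sketch using the standard rho-theta line parameterization; the bin counts are illustrative assumptions):

```python
import numpy as np

def hough_votes(binary_img, n_theta=180):
    """Accumulate votes in (rho, theta) space: each 'on' pixel votes for
    every discretized line rho = x*cos(theta) + y*sin(theta) through it."""
    h, w = binary_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(binary_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # one vote per theta bin
    return acc, thetas, diag
```

The lines with the most votes are the peaks of `acc`; for a vertical stroke at column `x = c`, the accumulator peaks at `theta = 0`, `rho = c`.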