What about linearity makes it so useful?
Among all areas of mathematics, linear algebra is incredibly well understood. I have heard it said that the only problems we can really solve in math are linear problems, and that much of the rest of math involves reducing problems to linear algebra.
But what is it about the way linearity lets a "multiplication" operation and an "addition" operation be interchanged that is so nice? Why is this interchange desirable, and why, among the many possible properties one could specify, is linearity so important?
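To be concrete, the interchange I mean is the defining property of a linear map:
$$f(x + y) = f(x) + f(y), \qquad f(cx) = c\,f(x),$$
so that sums and scalar multiples can be pushed through $f$: $f(ax + by) = a\,f(x) + b\,f(y)$.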
Specifically, I am looking for:
An idea of why the interchange that linearity allows is so powerful, whether by appeal to some categorical or other structural argument for why these particular rules work so well
An idea of why linear problems, and linearization, show up so frequently
linear-algebra abstract-algebra soft-question group-actions
asked 2 days ago by msm
Through linear algebra, you can take a vector and pass it through different dimensions. It is also the best bridge to geometry: we can all see geometry, but we can't see abstract algebra or real analysis, nor do they connect to geometry as directly. Herein lies the beauty of linear algebra.
– Anik Bhowmick
2 days ago
An old professor said to me, "The reason we do linear algebra is that it's the kind we know how to do!" Besides that, the whole point of differential calculus is that anything changing smoothly is locally linear, making it accessible to the only kind of algebra we're really very good at!
– G Tony Jacobs
2 days ago
Perhaps I was not as clear as I thought I was in my original question. When you say, "An old professor said to me, 'The reason we do linear algebra is that it's the kind we know how to do!'", my question comes down to: "What is it about the property of linearity that MAKES it so that we can do math with linear things and not with non-linear things?" We have all manner of reasonable conditions we put on functions in mathematics. Of all those conditions, why is THIS condition of linearity so powerful that we can completely do linear algebra?
– msm
yesterday
I knew my comment didn't answer your question, which is why I didn't post it as an answer. Now that I've posted an answer, would you say it addresses what you're asking about?
– G Tony Jacobs
9 hours ago
2 Answers
Linear problems are so useful because they describe small deviations, displacements, signals, etc. very well, and because they admit unique solutions. For sufficiently small $x$, $f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \ldots$ can be very well approximated by $a_0 + a_1 x$. Even the simplest nonlinear equation, $x^2 - 3 = 0$, has two real solutions, making analysis more difficult. Linear equations have a single solution (if one exists).
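As a quick numerical check of the approximation claim (a minimal Python sketch; the choice $f(x) = \sin x$, with $a_0 = 0$ and $a_1 = 1$, is an illustrative assumption, not part of the answer):

    # Compare sin(x) with its linear approximation x for shrinking x.
    # The error behaves like the first dropped term, x^3/6.
    import math

    for x in [0.5, 0.1, 0.01]:
        exact = math.sin(x)
        linear = x  # a_0 + a_1*x with a_0 = 0, a_1 = 1
        print(f"x = {x:<5}: sin(x) = {exact:.8f}, "
              f"error of linear approx = {abs(exact - linear):.2e}")

Each tenfold shrink in $x$ cuts the error by about a factor of $1000$, just as the cubic remainder predicts.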
answered 2 days ago, edited 2 days ago, by David G. Stork
There may also be infinitely many solutions, e.g. $Ax = 0$ if $A$ has a nontrivial nullspace.
– Bungo
2 days ago
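As a concrete instance of this comment (a standard example, added for illustration): with
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix},$$
every vector $x = t \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ satisfies $Ax = 0$, so the system has infinitely many solutions, one for each $t \in \Bbb R$.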
We work with fields of numbers, such as $\Bbb Q$, the field of rational numbers, $\Bbb R$, the field of real numbers, and $\Bbb C$, the field of complex numbers. What is a field? It's a set in which two invertible operations, addition and multiplication, interact. Elementary algebra is simply the study of that interaction.
What's a linear function defined on one of these fields? It's a function that is compatible with the two operations. If $f(x+y) = f(x) + f(y)$ and $f(cx) = c\,f(x)$, then the whole domain, before and after applying $f$, is structurally preserved. (That's as long as $f$ is invertible; I'm glossing over some details.) Essentially, such a function is simply taking the field and scaling it, possibly flipping it around as well. In the complex field, the picture is a little more... complex, but fundamentally the same.
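To see why "scaling" is all that can happen (a one-line derivation from the stated rules, added for concreteness): for any $c$ in the field, homogeneity gives
$$f(c) = f(c \cdot 1) = c\,f(1),$$
so $f$ is multiplication by the fixed constant $a = f(1)$: it stretches the whole line by $|a|$ and flips it when $a < 0$.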
The most intuitive vector spaces, finite-dimensional ones over our familiar fields, are basically just multiple copies of the base field, set at "right angles" to each other. Invertible linear functions now just scale, reflect, rotate, and shear this basic picture, but they preserve the algebraic structure of the space.
Now, we often work with transformations that do more complicated things than this, but if they are smooth transformations, then they "look like" linear transformations when you "zoom in" at any point. To analyze something complicated, you have to simplify it in some way, and a good way to simplify working with some weird non-linear transformation is to describe and study the linear transformations that it "looks like" up close.
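The "zoom in" claim can be checked numerically (a minimal numpy sketch; the map $f$ and the base point are illustrative assumptions, not part of the answer): near a point $p$, a smooth map is approximated by $f(p) + J_f(p)\,h$, its value plus a linear map applied to the displacement $h$.

    import numpy as np

    def f(v):
        # a nonlinear map from R^2 to R^2
        x, y = v
        return np.array([x**2 - y, np.sin(x * y)])

    def jacobian(v):
        # derivative of f at v, computed by hand for this example
        x, y = v
        return np.array([[2 * x, -1.0],
                         [y * np.cos(x * y), x * np.cos(x * y)]])

    p = np.array([1.0, 0.5])
    J = jacobian(p)

    for eps in [1e-1, 1e-2, 1e-3]:
        h = eps * np.array([0.6, 0.8])  # a small displacement
        err = np.linalg.norm(f(p + h) - (f(p) + J @ h))
        print(f"|h| ~ {eps:.0e}: linearization error = {err:.2e}")

The error shrinks quadratically in $|h|$, which is exactly the sense in which the nonlinear map "is" linear up close.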
This is why we see linear problems arise so frequently. Some situations are modeled by linear transformations, and that's great. However, even situations modeled by non-linear transformations are often approximated with appropriate linear maps. The first and roughest way to approximate a function is with a constant, but we don't get a lot of mileage out of that. The next, fancier approach is to approximate with a linear function at each point, and we do get a lot of mileage out of that. If you want to do better, you can use a quadratic approximation. These are great for describing, for instance, critical points of multivariable functions. Even the quadratic description, however, uses tools from linear algebra.
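To make the last point concrete (a minimal numpy sketch; the saddle $f(x,y) = x^2 - y^2$ is an illustrative assumption, not part of the answer): the quadratic approximation at a critical point is governed by the Hessian, and classifying the point reduces to linear algebra, namely the signs of the Hessian's eigenvalues.

    import numpy as np

    # f(x, y) = x^2 - y^2 has a critical point at the origin,
    # where grad f = (2x, -2y) vanishes. Its (constant) Hessian:
    H = np.array([[2.0, 0.0],
                  [0.0, -2.0]])

    eigenvalues = np.linalg.eigvalsh(H)  # symmetric -> real eigenvalues
    print("Hessian eigenvalues:", eigenvalues)

    if np.all(eigenvalues > 0):
        print("local minimum")
    elif np.all(eigenvalues < 0):
        print("local maximum")
    else:
        print("saddle point")  # mixed signs, as for x^2 - y^2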
answered yesterday by G Tony Jacobs