What about linearity makes it so useful?

Among all areas of mathematics, linear algebra is incredibly well understood. I have heard it said that the only problems we can really solve in math are linear problems, and that much of the rest of mathematics consists of reducing problems to linear algebra.



But what is it about the interchange of a "multiplication" operation and an "addition" operation that is so nice? Why is this interchange desirable, and why, among the many possible properties one could specify, is linearity so important?
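Concretely, by "linearity" I mean the usual pair of conditions on a map $T$ between vector spaces:

$$T(x+y) = T(x) + T(y), \qquad T(cx) = c\,T(x).$$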



Specifically, I am looking for:



  1. An idea of why the exchange that linearity allows is so powerful, whether by appeal to a categorical argument or some other structural reason that these particular rules work so well


  2. An idea of why linear problems, or linearization, show up so frequently







asked 2 days ago by msm
  • Through linear algebra, you can take a vector and move it between spaces of different dimensions. It is also the best way to connect with geometry: we can all see geometry, but we can't see abstract algebra or real analysis, and neither of them connects to geometry so directly. Here lies the beauty of linear algebra.
    – Anik Bhowmick
    2 days ago










  • An old professor said to me, “The reason we do linear algebra is that it’s the kind we know how to do!” Besides that, the whole point of differential calculus is that anything changing smoothly is locally linear, making it accessible to the only kind of algebra we’re really very good at!
    – G Tony Jacobs
    2 days ago











  • Perhaps I was not as clear as I thought I was in my original question. When you say, "An old professor said to me, 'The reason we do linear algebra is that it’s the kind we know how to do!'", my question comes down to: "What is it about the property of linearity that MAKES it so that we can do math with linear things and not with non-linear things?" We put all manner of reasonable conditions on functions in mathematics. Of all those conditions, why is THIS condition of linearity so powerful that we can completely do linear algebra?
    – msm
    yesterday











  • I knew my comment didn’t answer your question, which is why I didn’t post it as an answer. Now that I’ve posted an answer, would you say it addresses what you’re asking about?
    – G Tony Jacobs
    9 hours ago














2 Answers
Linear problems are so useful because they describe small deviations, displacements, signals, etc. well, and because they admit unique solutions. For sufficiently small $x$, $f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \ldots$ is very well approximated by $a_0 + a_1 x$. Even the simplest nonlinear equation, $x^2 - 3 = 0$, has two real solutions, which makes analysis more difficult. A linear equation has a single solution (if one exists).
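To see how fast the linear part takes over, here is a minimal numerical sketch (the cubic's coefficients are arbitrary illustrative values):

```python
# Arbitrary illustrative coefficients for f(x) = a0 + a1*x + a2*x^2 + a3*x^3.
a0, a1, a2, a3 = 1.0, 2.0, -3.0, 5.0

def f(x):
    return a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3

def linear_part(x):
    return a0 + a1 * x

for x in (1.0, 0.1, 0.01):
    print(f"x = {x:<5}: |f(x) - linear part| = {abs(f(x) - linear_part(x)):.6f}")
# The discarded terms scale like x^2, so the error collapses as x -> 0.
```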






answered 2 days ago by David G. Stork
  • There may also be infinitely many solutions, e.g. $Ax = 0$ when $A$ has a nontrivial nullspace.
    – Bungo
    2 days ago

















We work with fields of numbers, such as $\Bbb Q$, the field of rational numbers; $\Bbb R$, the field of real numbers; and $\Bbb C$, the field of complex numbers. What is a field? It's a set in which two invertible operations, addition and multiplication, interact. Elementary algebra is simply the study of that interaction.



What's a linear function defined on one of these fields? It's a function that is compatible with the two operations. If $f(x+y)=f(x)+f(y)$ and $f(cx)=cf(x)$, then the whole domain, before and after applying $f$, is structurally preserved. (That's as long as $f$ is invertible; I'm glossing over some details.) Essentially, such a function simply takes the field and scales it, possibly flipping it around as well. In the complex field, the picture is a little more... complex, but fundamentally the same.
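Indeed, on the base field itself, homogeneity alone already forces the map to be pure scaling:

$$f(x) = f(x \cdot 1) = x\,f(1) = ax, \qquad \text{where } a := f(1).$$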



The most intuitive vector spaces, the finite-dimensional ones over our familiar fields, are basically just multiple copies of the base field, set at "right angles" to each other. Invertible linear functions then just scale, reflect, rotate, and shear this basic picture, but they preserve the algebraic structure of the space.
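A quick sketch of that picture in $\Bbb R^2$ (the particular rotation and shear here are arbitrary illustrative choices):

```python
import numpy as np

# An invertible 2x2 map: a rotation composed with a shear (arbitrary values).
theta = 0.5
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
shear = np.array([[1.0, 0.8],
                  [0.0, 1.0]])
A = rotation @ shear

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])

# Linearity: the transformation commutes with addition and scaling,
# so it can only scale, reflect, rotate, or shear the grid uniformly;
# the algebraic structure of the space survives.
print(np.allclose(A @ (x + y), A @ x + A @ y))    # True
print(np.allclose(A @ (2.5 * x), 2.5 * (A @ x)))  # True
```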



Now, we often work with transformations that do more complicated things than this, but if they are smooth, then they "look like" linear transformations when you "zoom in" at any point. To analyze something complicated, you have to simplify it in some way, and a good way to simplify a weird nonlinear transformation is to describe and study the linear transformations it "looks like" up close.
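Here is a minimal sketch of that "zooming in", using an arbitrary smooth map and its Jacobian computed by hand:

```python
import numpy as np

# An arbitrary smooth nonlinear map F: R^2 -> R^2 (illustrative choice).
def F(p):
    x, y = p
    return np.array([x ** 2 - y, np.sin(x * y)])

# Its Jacobian at p: the linear map that F "looks like" near p.
def jacobian(p):
    x, y = p
    return np.array([[2 * x,               -1.0],
                     [y * np.cos(x * y),   x * np.cos(x * y)]])

p = np.array([1.0, 0.5])
J = jacobian(p)
for scale in (1e-1, 1e-2, 1e-3):
    h = scale * np.array([0.3, -0.7])
    err = np.linalg.norm(F(p + h) - (F(p) + J @ h))
    print(f"|h| ~ {scale:.0e}: linearization error = {err:.2e}")
# The error shrinks like |h|^2: zoomed in far enough, F is its Jacobian.
```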



This is why we see linear problems arise so frequently. Some situations are modeled by linear transformations, and that's great. But even situations modeled by nonlinear transformations are often approximated with appropriate linear maps. The first and roughest way to approximate a function is with a constant, but we don't get much mileage out of that. The next approach is to approximate with a linear function at each point, and we get a lot of mileage out of that. If you want to do better, you can use a quadratic approximation; these are great for describing, for instance, critical points of multivariable functions. Even the quadratic description, however, uses tools from linear algebra.
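In symbols, the second-order expansion near a point $a$ is

$$f(x) \approx f(a) + \nabla f(a)^\top (x-a) + \tfrac{1}{2}(x-a)^\top H_f(a)\,(x-a),$$

and at a critical point the gradient term vanishes, so classifying the point comes down to the eigenvalues of the Hessian $H_f(a)$: a pure linear-algebra question.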






answered yesterday by G Tony Jacobs