Linear Approximation of x/(1-x)
























I am trying to linearize the following function, but I am having difficulties.

Let

$x = \frac{l}{m},$

where $l,m \in \mathbb{R}^+$ and $l<m$.

Assume $l$ is a variable while $m$ is a constant (parameter), which makes $x$ a variable. I want to find a linear approximation of the following:

$f(x) = \frac{x}{2m(1-x)}$

In other words, as $2$ and $m$ are constants, I am interested in

$g(x)=\frac{x}{1-x}$

I plotted the graph, but it did not really help me derive anything useful. Any help is appreciated.







asked Jul 15 at 2:45 by user8028576



















  • Linear Approximation? You want to approximate the function using a straight line?
    – W. mu
    Jul 15 at 2:50










  • Yes. Does it sound impossible?
    – user8028576
    Jul 15 at 2:51










  • Of course, it's possible, but the error may be large. en.wikipedia.org/wiki/Approximation_theory
    – W. mu
    Jul 15 at 3:04










  • If you are looking for the usual calculus approximation, then it will be $h(x)=x$. This can be found by the formula $f(x)\approx f'(x_0)(x-x_0)+f(x_0)$ when $x\approx x_0$.
    – user496634
    Jul 15 at 9:53










  • @user496634 thanks for your comment. In my search for solutions, I came across the calculus approximation. Yet, I could not quite understand how it makes the function linear, because the first derivative of the function is still nonlinear, with squared $x$ terms. Additionally, I am lost with the term $x_0$. I am using this function in a MILP problem, and I solve for $x$ and thousands of other variables while there are several inequality constraints involving $x$. If you could write your solution approach down, I would be thrilled to read it.
    – user8028576
    Jul 16 at 1:53














3 Answers



























If, over a range $a\leq x \leq b$, you want the best linear approximation $A+Bx$ of
$$f(x) = \frac{x}{2m(1-x)}$$ the solution is to minimize the norm
$$F=\int_a^b \left(A+Bx-\frac{x}{2m(1-x)} \right)^2 dx$$ with respect to the parameters $A$ and $B$.

Integrating and then computing the partial derivatives $\frac{\partial F}{\partial A}$ and $\frac{\partial F}{\partial B}$ and setting them equal to $0$ leads to two linear equations in $(A,B)$:
$$2m(b-a) A + m (b^2-a^2)B+(b-a)+\log \left(\frac{1-b}{1-a}\right)=0$$
$$6m(b^2-a^2)A+4m(b^3-a^3)B+3 (b-a) (a+b+2)+6\log \left(\frac{1-b}{1-a}\right)=0$$
which you can simplify by factoring out $(b-a)$ in several places.
I leave to you the pleasure of finding $A,B$ (but this is simple).

Around $x=0$, for $a=-b$, using Taylor series, we have
$$A=\frac{b^2}{6 m}+\frac{b^4}{10 m}+O\left(b^6\right)\qquad B=\frac{1}{2 m}+\frac{3 b^2}{10 m}+\frac{3 b^4}{14 m}+O\left(b^6\right)\qquad F=\frac{2 b^5}{45 m^2}+O\left(b^7\right)$$

Taking an example using $a=-0.1$, $b=0.1$, $m=1$, this would lead to
$$A=\frac{1}{2} \left(-1+5 \log \left(\frac{11}{10}\right)+5 \log \left(\frac{10}{9}\right)\right)\approx 0.00167674$$
$$B=-150 \left(1+10 \log (3)-5 \log (11)\right)\approx 0.503022$$






– Claude Leibovici (accepted answer), answered Jul 15 at 4:22, edited Jul 17 at 7:47
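For anyone who wants to check the numbers, here is a minimal numerical sketch (my addition, not part of the original answer) that solves the two linear equations above for $A$ and $B$ over a chosen range $[a,b]$ and compares $A+Bx$ with $f(x)=\frac{x}{2m(1-x)}$. The helper name `best_linear_fit` is my own.

```python
# Minimal sketch of the least-squares linearization described in the answer:
# solve the two linear equations for A and B over [a, b] and compare
# A + B*x against f(x) = x / (2m(1-x)).
import numpy as np

def f(x, m):
    return x / (2 * m * (1 - x))

def best_linear_fit(a, b, m):
    """Solve the 2x2 linear system obtained from dF/dA = dF/dB = 0."""
    log_term = np.log((1 - b) / (1 - a))
    # Coefficient matrix and right-hand side of the two equations above.
    M = np.array([[2 * m * (b - a),        m * (b**2 - a**2)],
                  [6 * m * (b**2 - a**2),  4 * m * (b**3 - a**3)]])
    rhs = -np.array([(b - a) + log_term,
                     3 * (b - a) * (a + b + 2) + 6 * log_term])
    A, B = np.linalg.solve(M, rhs)
    return A, B

# Reproduce the example from the answer: a = -0.1, b = 0.1, m = 1.
A, B = best_linear_fit(-0.1, 0.1, 1.0)
print(A, B)  # expected: approximately 0.00167674 and 0.503022

# Check the approximation at a sample point inside the range.
x = 0.05
print(f(x, 1.0), A + B * x)
```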























  • Thanks a lot! This is what I was looking for. Yet, I need some verbal explanation to digest all this and make it useful for my case. First of all, I use the function $f(x)$ in a mixed integer (non)linear programming model, and $f(x)$ is the term that makes it nonlinear. Again, $0\leq x<1$. So, I guess, I need to set $a=0$ and possibly $b=0.999$, as close as possible to $1$. Then, instead of having my $x$ as a variable in my problem, I would have $A$ and $B$. But I still need $x$ as a variable because I have to use it in some other equality constraints. So, what would you recommend in this case?
    – user8028576
    Jul 16 at 0:55











  • Dr. Leibovici, if I understand your solution correctly with my limited math, I guess you mean I need to find $A$ and $B$ given that $a=0$ and (say) $b=0.999$. Then, plugging these into $f(x)=A+Bx$ will yield an approximation of my original function, right? If I get it right up to here, either this approximation does not really work or I am missing something. I found $A=36.9472, B=-68.0547$ given that $a=0, b=0.999, m=1$. When I test the approximation with $x=0.6$, I get $f(x)_{\text{approx}} = -3.88562$ while $f(x)=0.75$. Furthermore, $f(x)\geq 0$ must hold in any case.
    – user8028576
    Jul 16 at 1:46











  • @user8028576. Linearization of any function is local and valid over a small range.
    – Claude Leibovici
    Jul 16 at 3:05










  • Dr. Leibovici, so, do you mean $a=0$ and $b=0.999$ is a large range? If yes, how can I find the optimal range to make the approximation valid? I guess I may create multiple equations of the above form with, e.g., $a=0, b=0.1$; $a=0.1, b=0.2$; $\ldots$; $a=0.9, b=0.999$. Right? But is a $0.1$ step size okay?
    – user8028576
    Jul 16 at 3:38











  • @user8028576. For each ("small") range, you need to compute $A$ and $B$.
    – Claude Leibovici
    Jul 16 at 3:51
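The comments above sketch a piecewise strategy: split the admissible interval into sub-ranges and fit one $(A,B)$ pair per sub-range. The snippet below is a hedged illustration of that idea (my own construction, not from the thread); it repeats the small solver from the previous sketch so it runs on its own, and the breakpoint list is just an example.

```python
# Piecewise-linear approximation of f(x) = x / (2m(1-x)): one least-squares
# (A, B) pair per sub-range, using the formulas from the accepted answer.
import numpy as np

def segment_fit(a, b, m):
    """(A, B) minimizing the integrated squared error of A + B*x on [a, b]."""
    log_term = np.log((1 - b) / (1 - a))
    M = np.array([[2 * m * (b - a),       m * (b**2 - a**2)],
                  [6 * m * (b**2 - a**2), 4 * m * (b**3 - a**3)]])
    rhs = -np.array([(b - a) + log_term,
                     3 * (b - a) * (a + b + 2) + 6 * log_term])
    return np.linalg.solve(M, rhs)

def piecewise_approx(x, breakpoints, m):
    """Evaluate the piecewise-linear approximation of x/(2m(1-x)) at x."""
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        if a <= x <= b:
            A, B = segment_fit(a, b, m)
            return A + B * x
    raise ValueError("x outside the covered range")

# Example breakpoints as in the comment: 0, 0.1, ..., 0.9, 0.999.
bps = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.999]
for x in (0.05, 0.6, 0.95):
    print(x, piecewise_approx(x, bps, 1.0), x / (2 * 1.0 * (1 - x)))
```

In a MILP, the segment selection itself would be handled with binary or SOS2 variables rather than a Python loop; the sketch only shows how the per-segment coefficients are obtained and how good they are.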






























I will present an elementary calculus solution. I will first rename your $g(x)$ into $f(x)$ because I can and because it looks nicer to the eye. It is apparent that the best linear approximation to any function at $x=0$ must be the tangent line to the function at $x=0$. In our case, since $f(0)=0$, our tangent line must pass through $(0,0)$ too, and so it must be of the form $y=mx$. Since it is the tangent line, the gradient $m$ of this line must be the same "gradient" (i.e. rate of change) as that of $f$ at $0$. To find the gradient, thus, we set

$$ m = f'(0) $$

where the RHS is the derivative at $0$, i.e. the rate of change as previously mentioned. So now we just have to take the derivative of $f$ and evaluate it at $0$ to find $m$. I'm sure you know how to do this, but I'll put it here for completeness' sake:

$$\frac{d}{dx} \frac{x}{1-x} = \frac{(1-x)-(-1)x}{(1-x)^2}$$

Evaluated at $0$ this is $1$. So $m=1$ and the closest linear approximation would be $y=mx=x$.






– user496634, answered Jul 16 at 5:51
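As a quick check of this answer (my addition, not the answerer's code), the sketch below builds the tangent-line approximation of $g(x)=\frac{x}{1-x}$ at a chosen point $x_0$; with $x_0=0$ it reduces to the $y=x$ line derived above, and comparing it with a tangent taken at another point shows how purely local the approximation is.

```python
# Tangent-line (first-order Taylor) approximation of g(x) = x / (1 - x) at x0.
def g(x):
    return x / (1.0 - x)

def g_prime(x):
    # d/dx [x / (1 - x)] = 1 / (1 - x)^2
    return 1.0 / (1.0 - x) ** 2

def tangent_approx(x, x0):
    """g(x0) + g'(x0) * (x - x0), accurate only near x0."""
    return g(x0) + g_prime(x0) * (x - x0)

# Exact value vs. tangent at 0 vs. tangent at 0.5.
for x in (0.05, 0.1, 0.5, 0.9):
    print(x, g(x), tangent_approx(x, 0.0), tangent_approx(x, 0.5))
```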





















  • you basically propose $F = \frac{x}{2m}$ as an approximation to the original $f(x)$ given in the post. If I get it correctly, this approximation is not at all close to $f(x)$. Let $x=0.9, m=1$; then $f(x)=4.5$ while $F=0.45$. If I am interpreting it wrongly, please let me know.
    – user8028576
    Jul 16 at 15:40










  • @user8028576 You are absolutely right that it is not a very good approximation. However, it is the best linear one, locally around $x=0$. This is unless, of course, you want to approximate it over a range around $x=0$, not just best at that point and its "immediate vicinity". In that case, you can consider Leibovici's answer. You will see that in a small range immediately around $0$, my approximation is better, while over the range $[a,b]$ in his answer, his is generally better.
    – user496634
    Jul 16 at 22:34










  • @user8028576 A helpful way to think about this is that my answer is the limit of Leibovici's answer as $a$ and $b$ both approach $0$ (this is quite easy to prove). Around points extremely close to $0$, my approximation is extremely good, while his answer minimises the integral of the squared difference in values over a whole range $[a,b]$. One is an approximation at a point; the other is an approximation over a range.
    – user496634
    Jul 16 at 22:40










  • @user496634 I totally get the logic of both approximations. Your approximation draws a straight line starting from the origin and does not really worry about minimizing the difference between the approximation and the actual function. Your approximation is the easiest, with a single $F=\frac{x}{2m}$ function. On the other hand, Leibovici's approach draws a line for a "small" range of the whole plot that is as close as possible to the original nonlinear curve on that segment. Once I create "a lot of" those lines with the given formulas, I will be able to approximate much more precisely.
    – user8028576
    Jul 17 at 13:42






  • But when I get much closer to the tails of the approximation, his approximation yields an infeasible (negative) value. For example, with $a=0, b=0.1, m=1$, when I evaluate it at $x=0.0001$, it gives me $F\approx -0.00088$. Yet this situation is understandable. To deal with it, I will adjust the ranges according to my expectations of the results. For instance, if I know $x=0.0001$ is an expected result, then I will add shorter ranges, which amounts to a decimal adjustment. I think I got it! I am happy. Thank you very much to all, as you taught me something very useful!
    – user8028576
    Jul 17 at 13:47






























$$\frac{d}{dx} \frac{1}{1-x} = \frac{1}{(1-x)^2}$$
Additionally, since $\frac{1}{1-x} = \sum_{k=0}^{\infty}x^k$, clearly $$\frac{1}{(1-x)^2} = \sum_{k=1}^{\infty}kx^{k-1}$$
(Note that $\frac{x}{1-x} = \frac{1}{1-x}-1$, so it has the same derivative.) Unfortunately, it does not really make sense to approximate something like this linearly in any global sense. Locally, evaluating $m = \frac{1}{(1-x_0)^2}$ would give you the slope of a line tangent to $\frac{1}{1-x}$ at the point $x_0$, which locally approximates it linearly. I believe this is what you're asking.






– BelowAverageIntelligence, answered Jul 15 at 3:47
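As a small sanity check (again my addition, not part of the answer), the snippet below compares a truncated partial sum of $\sum_{k\geq 1} k x^{k-1}$ with $\frac{1}{(1-x)^2}$ for a few values of $|x|<1$; the truncation length is arbitrary.

```python
# Numerical check of the series identity used above:
# 1/(1-x)^2 equals the sum of k*x^(k-1) for |x| < 1, truncated here at N terms.
def partial_sum(x, n_terms=200):
    return sum(k * x ** (k - 1) for k in range(1, n_terms + 1))

for x in (0.1, 0.5, 0.9):
    print(x, partial_sum(x), 1.0 / (1.0 - x) ** 2)
```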




















