How to minimize a function following an asymptotic pattern?

When running a simulation of data following a linear function (with noise) in Python, I find that the linear model gives the best fit according to the standard error of the regression:
import numpy as np
from sklearn import linear_model
# setting the independent variable
lim_sup = 100
x = np.arange(1, lim_sup, 1)
x_reg = x.reshape(-1, 1)  # scikit-learn expects a 2-D feature array
# setting the dependent variable as a linear function
# of the independent variable with some added noise
error = np.random.normal(0, 5, lim_sup - 1)
y_test_lin = 3 * x + error
# finding the best fit for the data with
# a linear regression model
regr = linear_model.LinearRegression()
regr.fit(x_reg, y_test_lin)
# Predicting the data with the coefficient and intercept
# found with the linear regression
y_pred_lin = x * regr.coef_ + regr.intercept_
# Predicting the data with the linear model (the one just
# found) as well as a logarithmic and asymptotic model
y_pred_1 = y_pred_lin
y_pred_2 = 50 * np.log(x)
y_pred_3 = 100 * np.arctan(x)
# Compare the goodness of fit of those different models with
# the standard error of the regression
root_mean_squared_error_pred_1 = np.sqrt(np.sum((y_pred_1 - y_test_lin) ** 2) / lim_sup)
print("Root Mean Squared Error: ", root_mean_squared_error_pred_1)
# Root Mean Squared Error:  4.475865763351861
root_mean_squared_error_pred_2 = np.sqrt(np.sum((y_pred_2 - y_test_lin) ** 2) / lim_sup)
print("Root Mean Squared Error: ", root_mean_squared_error_pred_2)
# Root Mean Squared Error:  58.73409708967253
root_mean_squared_error_pred_3 = np.sqrt(np.sum((y_pred_3 - y_test_lin) ** 2) / lim_sup)
print("Root Mean Squared Error: ", root_mean_squared_error_pred_3)
# Root Mean Squared Error:  81.80226281199442
When running a simulation of data following an exponential function (with noise) in Python, after linearizing the data by taking its logarithm, I again find that the linear model gives the best fit according to the standard error of the regression:
# setting the dependent variable as an exponential function
# of the independent variable with some added noise
y_test_exp = np.exp(x) + np.exp(error)
# linearize the exponential data by computing its logarithm
y_test_lin = np.log(y_test_exp)
# finding the best fit for the data with
# a linear regression model
regr.fit(x_reg, y_test_lin)
# Predicting the data with the coefficient and intercept
# found with the linear regression
y_pred_lin = x * regr.coef_ + regr.intercept_
# Predicting the data with the linear model (the one just
# found) as well as an asymptotic (arctangent) model
y_pred_1 = y_pred_lin
y_pred_2 = 100 * np.arctan(x)
root_mean_squared_error_pred_1 = np.sqrt(np.sum((y_pred_1 - y_test_lin) ** 2) / lim_sup)
print("Root Mean Squared Error: ", root_mean_squared_error_pred_1)
# Root Mean Squared Error:  0.1331147098437333
root_mean_squared_error_pred_2 = np.sqrt(np.sum((y_pred_2 - y_test_lin) ** 2) / lim_sup)
print("Root Mean Squared Error: ", root_mean_squared_error_pred_2)
# Root Mean Squared Error:  104.5638974518184
Now I would like to fit data that follows an asymptotic function such as:
y_test_exp = 100 * np.arctan(x)
How could this be done? Is there a way to linearise the data? If not, is there an alternative?
Tags: asymptotics, nonlinear-optimization, regression, linear-regression
asked Jul 16 at 6:05 by ecjb
Are you expecting people on MSE to read your code??? – copper.hat, Jul 16 at 6:14
As written, this question would be better suited for StackOverflow. Even then, it has too many extraneous details. See stackoverflow.com/help/mcve – Hans Musgrave, Jul 16 at 6:27
1 Answer
The examples you have been working with have the form $$y=f(a,b,x),$$ where $a$ and $b$ are parameters to be estimated and $f$ is a particular functional (e.g. $e^{ax+b}$). The key to both the linear and the logarithmic model is that there is an invertible transformation $$T:\mathbb{R}\rightarrow\mathbb{R}$$ satisfying the equation $$T(y)=ax+b$$ for ANY choice of $a$ and $b$, so that $$y=T^{-1}(ax+b).$$ This transformation, or linearization as you're calling it, allows you to use linear mean squared error regression to estimate the parameters $a$ and $b$ given samples of $x$ and $y$. Especially desirable extra properties would include not introducing bias in the presence of certain kinds of noise, but you don't indicate that such a bias is much of a concern.
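As a concrete instance, the exponential model from your second example is linearized by $T(y)=\log(y)$; a minimal sketch (with illustrative coefficients, not the ones from your data) looks like:
import numpy as np
from sklearn.linear_model import LinearRegression
# illustrative exponential data: y = exp(a*x + b) with a = 0.05, b = 1
rng = np.random.default_rng(0)
x = np.arange(1, 100)
y = np.exp(0.05 * x + 1) * np.exp(rng.normal(0, 0.1, x.size))
# T(y) = log(y) turns the model into log(y) = a*x + b, which is linear in x
clf = LinearRegression().fit(x.reshape(-1, 1), np.log(y))
a_hat, b_hat = clf.coef_[0], clf.intercept_
# predict with y = T^{-1}(a*x + b) = exp(a*x + b)
y_pred = np.exp(a_hat * x + b_hat)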
The exact functional form you are trying to match is a bit ambiguous, but for the sake of argument suppose we are trying to fit the data to the functional $$y=a\cdot\arctan(x)+b.$$
It turns out that no such transformation $T$ exists for all choices of the parameters $a$ and $b$. In other words, any attempt to linearize the system in the same way as your code would be guaranteed to NOT reveal a linear dependence for some choices of $a$ and $b$.
However, if you want to expand on your technique, you could allow the introduction of transformations of $x$ and $y$. To make that explicit, if we can find choices of $T$ and $R$ with $T$ invertible which satisfy $$T(y)=aR(x)+b$$ for all choices of $a$ and $b$, then we can estimate $a$ and $b$ with a linear regression and then predict $y$ with the formula $$y=T^{-1}(aR(x)+b).$$
In your case, such a transformation is almost handed to you on a silver platter. In particular, choose $R(x)=\arctan(x)$ and $T(y)=y$. Run a linear regression on $y$ vs $\arctan(x)$ to estimate $a$ and $b$, and then you can predict $y$ by applying the regressor to $\arctan(x)$:
from sklearn.linear_model import LinearRegression
# scikit-learn expects a 2-D feature array, hence the reshape
clf = LinearRegression().fit(np.arctan(x).reshape(-1, 1), y)
y_pred = clf.predict(np.arctan(x).reshape(-1, 1))
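For example, on synthetic data generated like your third model (the $a=100$, $b=0$ choice below is only for illustration), this recovers the parameters well:
import numpy as np
from sklearn.linear_model import LinearRegression
rng = np.random.default_rng(0)
x = np.arange(1, 100)
y = 100 * np.arctan(x) + rng.normal(0, 5, x.size)  # noisy asymptotic data
X = np.arctan(x).reshape(-1, 1)                    # R(x) = arctan(x), T = identity
clf = LinearRegression().fit(X, y)
print(clf.coef_[0], clf.intercept_)                # roughly 100 and 0
y_pred = clf.predict(X)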
If you had the transformations $T$ and $R$ defined already, the fit above would generalize to something like the following:
# R, T and T_inv stand in for your chosen transformations and the inverse of T
clf = LinearRegression().fit(R(x).reshape(-1, 1), T(y))
y_pred = T_inv(clf.predict(R(x).reshape(-1, 1)))
Even without linearizations, the parameters $a$ and $b$ (and more if you have them) can be readily estimated with general minimization routines. Since you have scikit-learn installed, you should have scipy on your system as well. As an example, consider the following:
import scipy.optimize
popt, pcov = scipy.optimize.curve_fit(
    lambda x, a, b: a * np.arctan(x) + b,
    x,
    y,
    np.ones(2)
)
The output might have more information than you want, and you should examine the documentation for more details, but in general this performs a kind of least-squares regression to estimate your parameters. Working through each of the inputs:
- lambda x, a, b: a * np.arctan(x) + b is the function whose parameters you want. The input x must be the first argument, and the rest of the parameters follow.
- x and y should be obvious.
- np.ones(2) creates an array of length $2$, representing the parameters $a$ and $b$ in that order. This is your initial guess as to their values. If you have a reasonable guess, you could incorporate it, but for simple functions this should converge anyway without any additional effort.
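Putting the pieces together, a minimal end-to-end sketch (again with illustrative parameter values and noise) could look like:
import numpy as np
from scipy.optimize import curve_fit
rng = np.random.default_rng(0)
x = np.arange(1, 100)
y = 100 * np.arctan(x) + rng.normal(0, 5, x.size)
def model(x, a, b):
    return a * np.arctan(x) + b
# popt holds the fitted (a, b); pcov is their estimated covariance
popt, pcov = curve_fit(model, x, y, p0=np.ones(2))
a_hat, b_hat = popt
y_pred = model(x, a_hat, b_hat)
print(a_hat, b_hat)  # roughly 100 and 0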
answered Jul 16 at 7:09 by Hans Musgrave (accepted)
Many thanks @Hans Musgrave. Works perfectly! – ecjb, Jul 16 at 12:36
Dear @Hans Musgrave, let's say now that the $R$ function follows a similar sigmoid pattern $$R_1(x) = \frac{1}{1+e^{-(x-10)/2}},$$ but as a model I only know $$R_2(x) = \frac{1}{1+e^{-x}}.$$ How can I find $R_1$ from $R_2$? – ecjb, Jul 16 at 21:24
@ecjb This should be asked as a separate question, and on StackOverflow if programming is still the goal. Also, are you trying to find the relationship between $R_1$ and $R_2$, or are you trying to estimate some sorts of parameters again? I.e., do you want to know $$R_1=\frac{1}{1+e^{5}\sqrt{\frac{1}{R_2}-1}},$$ or do you want to estimate $a$ and $b$ given some data in a functional like $$R=\frac{1}{1+e^{-\left(\frac{x-a}{b}\right)}}?$$ – Hans Musgrave, Jul 17 at 0:09
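(For reference, the first identity follows by inverting $R_2$: since $e^{-x}=\frac{1}{R_2}-1$ and $e^{-(x-10)/2}=e^{5}\sqrt{e^{-x}}$, we get $$R_1=\frac{1}{1+e^{5}\sqrt{\frac{1}{R_2}-1}}.$$)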
Many thanks @Hans Musgrave. I posted a new question here: math.stackexchange.com/questions/2854235/… – ecjb, Jul 17 at 6:58