Convert code from MATLAB to Python, problems occurred [on hold]
I found this implementation of the backward Euler method written in MATLAB here.
The function that computes the steps is:
function [t,y]=beul(f,df,y0,t0,tf,n)
  h=(tf-t0)/n;
  t=linspace(t0,tf,n+1);
  y=zeros(n+1,length(y0));
  y(1,:)=y0;
  for i=1:n
    x0=y(i,:)';
    x1=x0-inv(eye(length(y0))-h*feval(df,t(i),x0))*(x0-h*feval(f,t(i),x0)'-y(i,:)');
    while (norm(x1-x0)>0.0001)
      x0=x1;
      x1=x0-inv(eye(length(y0))-h*feval(df,t(i),x0))*(x0-h*feval(f,t(i),x0)'-y(i,:)');
    end
    y(i+1,:)=x1';
  end
end
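In formula form, each pass through the while loop above is one Newton step for the backward-Euler residual (note that the code evaluates $f$ and its Jacobian at $t_i$):
$$x_{k+1} = x_k - \bigl(I - h\,J_f(t_i, x_k)\bigr)^{-1}\bigl(x_k - h\,f(t_i, x_k) - y_i\bigr),$$
and the loop stops once $\lVert x_{k+1}-x_k\rVert \le 10^{-4}$, at which point $y_{i+1}=x_{k+1}$.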
This function is then called with the system of equations and its Jacobian:
function yp=volt(t,y)
  a=4;
  c=1;
  yp(1)=a*(y(1)-y(1)*y(2));
  yp(2)=-c*(y(2)-y(1)*y(2));
end

function y=dvol(t,x)
  a=4;
  c=1;
  y(1,1)=a*(1-x(2));
  y(1,2)=-a*x(1);
  y(2,1)=c*x(2);
  y(2,2)=-c*(1-x(1));
end
and the backward Euler solver is called with:
[t,y]=beul('volt','dvol',[2,1],0,10,1000);
I have translated the code into Python:
class Backward(Euler):
    def solve(self):
        for i in range(len(self.time)-1):
            u0 = self.u[i].T
            u1 = u0 - np.linalg.inv(np.eye(len(self.dydt.u0)) - self.dt * self.dydt.df(self.time[i+1], u0)) * (u0 - self.dt * self.dydt.f(self.time[i+1], u0).T - self.u[i].T)
            error = np.array([1.0])
            iters = 0
            while True:
                try:
                    u0 = u1.T
                    u1 = u0 - np.linalg.inv(np.eye(len(self.dydt.u0)) - self.dt * self.dydt.df(self.time[i+1], u0)) * (u0 - self.dt * self.dydt.f(self.time[i+1], u0).T - self.u[i].T)
                    iters += 1
                    error = np.abs(u1 - u0)
                    if np.sum(np.abs(error)) <= toll:
                        break
                except ValueError as ex:
                    print('Error occurred in Implicit-NR Euler Solvers: %s' % ex.args)
                    return None, None
            self.u[i+1] = u1.T
Then I defined the right-hand side and the Jacobian matrix as follows:
    def f(self, ti, ui):
        return np.array([function(ti, ui) for function in self.func])

    def df(self, t, u, **params):
        eps = 1e-12
        J = np.zeros([len(u), len(u)], dtype=np.float)
        for i in range(len(u)):
            u1 = u.copy()
            u2 = u.copy()
            u1[i] += eps
            u2[i] -= eps
            f1 = self.f(t, u1, **params)
            f2 = self.f(t, u2, **params)
            J[:, i] = (f1 - f2) / (2 * eps)
        return J
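So df is a central-difference approximation of the Jacobian, with column i holding the partial derivatives with respect to u[i]. As a side note, eps = 1e-12 is very small for central differences in double precision; a step around 1e-6 to 1e-8 usually gives a more accurate Jacobian. Below is a minimal standalone sketch (outside the class; the names f, df_analytic and df_numeric are just for illustration) that checks the finite-difference Jacobian against the analytic one from dvol:

import numpy as np

a, c = 4.0, 1.0

def f(t, u):
    # Lotka-Volterra right-hand side, same as volt / eq1, eq2
    return np.array([a * (u[0] - u[0] * u[1]),
                     -c * (u[1] - u[0] * u[1])])

def df_analytic(t, u):
    # Jacobian written out by hand, matching dvol
    return np.array([[a * (1.0 - u[1]), -a * u[0]],
                     [c * u[1],         -c * (1.0 - u[0])]])

def df_numeric(t, u, eps=1e-6):
    # central-difference Jacobian; column i approximates d f / d u[i]
    n = len(u)
    J = np.zeros((n, n))
    for i in range(n):
        up, um = u.copy(), u.copy()
        up[i] += eps
        um[i] -= eps
        J[:, i] = (f(t, up) - f(t, um)) / (2.0 * eps)
    return J

u = np.array([2.0, 1.0])
print(np.allclose(df_numeric(0.0, u), df_analytic(0.0, u), atol=1e-5))  # True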
If I run single-equation problems the method works very well (I compared it against other solvers), but for systems the MATLAB matrix product behaves differently from what my Python code does, and I don't know how to reproduce it. When I run the code for a system (for example the same one solved by the MATLAB code above):
eq1 = lambda t, u: a*(u[0] - u[0]*u[1])
eq2 = lambda t, u: -c*(u[1] - u[0]*u[1])
func1 = np.array([eq1, eq2])
y0 = np.array([2., 1.])
I got this error:
Running Newton-Rapson Backward Euler ....
Error occurred in Implicit-NR Euler Solvers: could not broadcast input array from shape (2,2) into shape (2)
So how can I compute the same product that MATLAB does (its df is also a 2x2 matrix) in order to fix the Python method?
Edit: do I have to change every product in the code? I know there is also numpy.dot as a product ... is it different?
I solved it by using numpy.dot:
u1 = u0 - np.linalg.inv(np.eye(len(self.dydt.u0)) - self.dt * self.dydt.df(self.time[i+1],u0)).dot(u0 - self.dt * self.dydt.f(self.time[i+1], u0).T - self.u[i].T )
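For anyone hitting the same broadcast error: with NumPy arrays, * is elementwise multiplication with broadcasting, so a (2,2) matrix times a length-2 vector stays a (2,2) array, while dot (or the @ operator) gives the matrix-vector product of shape (2,). A minimal sketch of the difference:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

print(A * b)      # elementwise with broadcasting -> shape (2, 2)
print(A.dot(b))   # matrix-vector product -> shape (2,)
print(A @ b)      # same result as dot (Python 3.5+, NumPy 1.10+)

Once u1 becomes a (2,2) array instead of a length-2 vector, a later assignment into a length-2 slot raises exactly this broadcast error. As a side remark, instead of forming the inverse explicitly one would usually solve the linear system, e.g. u1 = u0 - np.linalg.solve(np.eye(len(u0)) - self.dt * J, residual), where J and residual stand for the Jacobian and residual terms already computed in the line above; this is numerically preferable but not required for the fix shown here.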
differential-equations matlab differential python
asked Aug 2 at 11:09 by Drudox lebowsky, edited Aug 2 at 11:51
put on hold as off-topic by mfl, m_t_, Math1000, Henrik, max_zorn Aug 3 at 7:10
This question appears to be off-topic. The users who voted to close gave this specific reason:
- "This question is not about mathematics, within the scope defined in the help center." – mfl, m_t_, Math1000, Henrik, max_zorn
Comments:
Not sure which product you need, but matrix multiplication can be done with numpy's matmul – spiralstotheleft, Aug 2 at 11:18
The at sign @ can also be used in more recent versions of NumPy to multiply matrices (e.g. b = A @ x). Equivalent to @spiralstotheleft's answer, but just a shorter syntax – David M., Aug 2 at 12:03
I used dot, not matmul ... :) – Drudox lebowsky, Aug 2 at 12:05