Value at $(0,0)$ of the solution to a 2D linear elliptic PDE
The setup is the following.
Let $D=\left(-\frac12 , \frac12\right)^2 \subset \mathbb{R}^2$ and let $\partial D$ be its boundary. Let $a, b \in \mathbb{R}$ and let $f$ be continuous on $\partial D$. Also, we assume that $f$ is periodic in the sense that $f\left(\frac12,x_2\right) = f\left(-\frac12,x_2\right)$ and $f\left(x_1, \frac12\right) = f\left(x_1, -\frac12\right)$. Consider the equations $$\Delta u + a x_1 \frac{\partial u}{\partial x_1} + b x_2 \frac{\partial u}{\partial x_2} = 0 \text{ on } D;$$ $$u|_{\partial D} = f.$$
I would like to know if there is an explicit or approximate solution to this PDE.
Actually, I am only interested in the value $u(0,0)$ or its approximation.
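For reference, the kind of brute-force approximation I have in mind is a plain finite-difference computation; a minimal Python sketch is below. The boundary function `f` there is just a made-up periodic example, and the Jacobi iteration is only meant to illustrate the idea, not to be efficient. I would still prefer something more explicit.

```python
import numpy as np

def f(x1, x2):
    # assumed sample boundary data (periodic on the square); replace with the real f
    return np.cos(2 * np.pi * x1) * np.sin(2 * np.pi * x2)

def u_at_origin(a, b, n=51, sweeps=20000):
    """Second-order centered finite differences + Jacobi sweeps; returns the value at (0, 0)."""
    xs = np.linspace(-0.5, 0.5, n)                  # n odd, so (0, 0) is a grid node
    h = xs[1] - xs[0]
    X1, X2 = np.meshgrid(xs, xs, indexing="ij")     # axis 0 ~ x1, axis 1 ~ x2
    u = np.zeros((n, n))
    # Dirichlet data u = f on the four sides
    u[0, :], u[-1, :] = f(X1[0, :], X2[0, :]), f(X1[-1, :], X2[-1, :])
    u[:, 0], u[:, -1] = f(X1[:, 0], X2[:, 0]), f(X1[:, -1], X2[:, -1])
    cx1 = a * X1[1:-1, 1:-1] / (2 * h)              # weights from the a*x1*u_x1 term
    cx2 = b * X2[1:-1, 1:-1] / (2 * h)              # weights from the b*x2*u_x2 term
    for _ in range(sweeps):
        # Jacobi update: solve the 5-point stencil equation for the center value
        u[1:-1, 1:-1] = (
            (1 / h**2 + cx1) * u[2:, 1:-1] + (1 / h**2 - cx1) * u[:-2, 1:-1]
            + (1 / h**2 + cx2) * u[1:-1, 2:] + (1 / h**2 - cx2) * u[1:-1, :-2]
        ) * h**2 / 4
    return u[n // 2, n // 2]

print(u_at_origin(a=1.0, b=2.0))
```

(Plain Jacobi converges slowly; a real computation would use a sparse direct solver or multigrid, but the value at the center node is all I am after.)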
Thank you!
pde
edited Jul 15 at 22:27
asked Jul 14 at 17:05
Johann Bruckner
You can switch to polar coordinates, assume a Fourier series solution, and obtain a system of ODEs for the radial part. But I'm not sure how to solve it due to the recurrence relations.
– Dylan
Jul 15 at 15:06
@Dylan, I changed the domain to a more "natural" one, making it more convenient for Fourier series. Could you please try again with this one?
– Johann Bruckner
Jul 15 at 22:18
I don't think a periodic solution is possible in the new setting, so the Fourier method may not work here. It separates more easily, though. But again, I'm not sure how to solve the separated equations.
– Dylan
Jul 16 at 16:12
1 Answer
It appears that this equation can be solved by separation of variables. Assume there exists a separable solution
$$u(x_1,x_2) = X_1(x_1)X_2(x_2)$$
Plugging this into the original PDE allows the two variables to be separated, with separation constant (eigenvalue) $\lambda_n$:
$$X_1''+a x_1 X_1'-\lambda_n X_1=0\quad\&\quad X_2''+b x_2 X_2'+\lambda_n X_2=0$$
These equations are not easily solvable by hand. They can be solved by assuming a power series solution, but Mathematica is clever enough to recognize the resulting series as previously studied functions:
$$X_1(x_1;\lambda_n)=e^{-\frac{a x_1^2}{2}}\left[g_1(x_2)\,H_{\frac{\lambda_n}{a}-1}\!\left(\frac{\sqrt{a}\,x_1}{\sqrt{2}}\right)+g_2(x_2)\,{}_1F_1\!\left(\frac{a-\lambda_n}{2a};\frac12;\frac{a x_1^2}{2}\right)\right]$$
$$X_2(x_2;\lambda_n) = e^{-\frac{b x_2^2}{2}}\left[h_1(x_1)\,H_{-\frac{\lambda_n}{b}-1}\!\left(\frac{\sqrt{b}\,x_2}{\sqrt{2}}\right)+h_2(x_1)\,{}_1F_1\!\left(\frac{b+\lambda_n}{2b};\frac12;\frac{b x_2^2}{2}\right)\right]$$
where $H_i$ are the Hermite polynomials (Hermite functions, since the order need not be an integer), ${}_1F_1$ is Kummer's confluent hypergeometric function of the first kind (I've never studied this one), and $g$ and $h$ are arbitrary functions of $x_2$ and $x_1$, respectively. $g$ and $h$ depend on $f$, so they can be found once $f$ is known.
The most general solution would be a sum over all the eigenfunctions
$$u = \sum_n X_1(x_1;\lambda_n)\,X_2(x_2;\lambda_n).$$
However, since you are interested in $u(0,0)$, we can say a little more without knowing $f$. Using
$${}_1F_1(\cdot;\cdot;0) = 1\quad\&\quad H_i(0) = \frac{\sqrt{\pi}\;2^i}{\Gamma\left(\frac{1-i}{2}\right)}$$
we can see that the solution at the origin would look like
$$u(0,0) = \sum_n\left[g_1(0)\frac{\sqrt{\pi}\;2^{-\frac{\lambda_n}{a}-1}}{\Gamma\left(\frac{\lambda_n}{2a}\right)}+g_2(0)\right]\left[h_1(0)\frac{\sqrt{\pi}\;2^{\frac{\lambda_n}{b}-1}}{\Gamma\left(1-\frac{\lambda_n}{2b}\right)}+h_2(0)\right]$$
Again, $g$ and $h$ depend on $f$.
I'm not sure if this is the approach you were looking for; there may be a more elegant one.
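For what it's worth, the special functions above are easy to evaluate numerically, e.g. with Python's mpmath; here is a minimal sketch. The coefficient, the separation constant $\lambda_n$ and the sample point are placeholders, since the admissible $\lambda_n$ are not determined here. The last two lines check the values ${}_1F_1(\cdot;\cdot;0)=1$ and $H_i(0)=\sqrt{\pi}\,2^i/\Gamma\!\left(\tfrac{1-i}{2}\right)$ used at the origin.

```python
import mpmath as mp

a_, lam = mp.mpf(1), mp.mpf("2.5")   # placeholder coefficient and separation constant
x = mp.mpf("0.25")                   # sample point

# the two terms appearing in X_1 above, evaluated with g_1 = g_2 = 1
X1_hermite = mp.exp(-a_ * x**2 / 2) * mp.hermite(lam / a_ - 1, mp.sqrt(a_) * x / mp.sqrt(2))
X1_kummer = mp.exp(-a_ * x**2 / 2) * mp.hyp1f1((a_ - lam) / (2 * a_), mp.mpf("0.5"), a_ * x**2 / 2)
print(X1_hermite, X1_kummer)

# sanity checks of the values used at the origin
nu = lam / a_ - 1
print(mp.hermite(nu, 0), mp.sqrt(mp.pi) * 2**nu / mp.gamma((1 - nu) / 2))  # should agree
print(mp.hyp1f1(mp.mpf("0.3"), mp.mpf("0.5"), 0))                          # 1F1(.;.;0) = 1
```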
answered Jul 16 at 2:47


MasterYoda
How can $X_1(x_1)$ be dependent on $x_2$ and vice versa? Shouldn't $g$ and $h$ just be constants?
– Dylan
Jul 16 at 9:32