An optimization problem over a bi-linear function
An optimization is to be carried out over the following function:
$$f(x_1,x_2;a,b)=\frac{1}{2}\left[x_1+x_2-x_1x_2+c(x_1;a,b)\,c(x_2;a,b)\right]$$
where
$$c(x;a,b)=\sum_{i=0}^{n}\left[\left(\sum_{j=0}^{i}\frac{b_{j-1}-b_j}{a_{j-1}}\right)x+b_i\right]\mathbb{1}_{\left[a_i,\,a_{i-1}\right]}(x)$$
with the initial conditions $a_{-1}=1$, $a_n=0$ and $b_{-1}=0$.
Here, $a$ and $b$ are the parameter vectors given as $$a=\{a_0,\ldots,a_n\},\quad b=\{b_0,\ldots,b_n\}$$ with $$0=a_n<a_{n-1}<\cdots<a_0<a_{-1}=1,$$ $$0=b_{-1}<b_0<\cdots<b_{n-1}<b_n=1.$$
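Since $c$ is piecewise linear, it is straightforward to evaluate numerically. Below is a small Python/NumPy sketch (Python stands in for the Matlab/Mathematica mentioned later; the helper name `c_eval` is my own, and the code assumes the reconstruction of the sum above). Note that by construction $c(0)=b_n=1$ and $c(1)=0$, matching the description of segments running from $(0,1)$ to $(1,0)$:

```python
import numpy as np

def c_eval(x, a, b):
    """Evaluate c(x; a, b) from the definition above.

    a = [a_0, ..., a_n] strictly decreasing with a_n = 0;
    b = [b_0, ..., b_n] strictly increasing with b_n = 1.
    The conventions a_{-1} = 1 and b_{-1} = 0 are prepended internally.
    """
    a_ext = np.concatenate(([1.0], np.asarray(a, dtype=float)))
    b_ext = np.concatenate(([0.0], np.asarray(b, dtype=float)))
    n = len(a) - 1
    for i in range(n + 1):
        # interval [a_i, a_{i-1}] is [a_ext[i+1], a_ext[i]] after the shift
        if a_ext[i + 1] <= x <= a_ext[i]:
            slope = sum((b_ext[j] - b_ext[j + 1]) / a_ext[j] for j in range(i + 1))
            return slope * x + b_ext[i + 1]   # intercept b_i
    raise ValueError("x must lie in [0, 1]")

# sanity check: the segments run from (0, 1) to (1, 0)
a, b = [0.6, 0.3, 0.0], [0.2, 0.7, 1.0]   # an arbitrary n = 2 example
print(c_eval(0.0, a, b), c_eval(1.0, a, b))   # 1.0 0.0
```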
The problem to be solved is given as:
$$\lim_{n\rightarrow\infty}\max_{a,b}\left[\min_{x_1=x_2\in[0,1]}f(x_1,x_2;a,b)-\min_{x_1,x_2\in[0,1]}f(x_1,x_2;a,b)\right]$$
My work: I decided to start with a finite $n$ and then try to generalize to some very large $n$, not necessarily $n\rightarrow\infty$, since the limit itself seems intractable to me.
Assume that $n=1$. Then,
$$c(x;a,b)=\begin{cases}\frac{-1+b_0(1-a_0)}{a_0}x+1, & 0\leq x\leq a_0\\ b_0(-x+1), & a_0\leq x\leq 1\end{cases}$$
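As a sanity check, the two-branch closed form above can be compared against the general definition specialized to $n=1$, i.e. $a=\{a_0,0\}$, $b=\{b_0,1\}$. A Python sketch (the helper names `c_general` and `c_cases` are my own, and the general formula is the reconstruction from the beginning of the question):

```python
import numpy as np

def c_general(x, a0, b0):
    # general definition specialised to n = 1: a = {a0, 0}, b = {b0, 1},
    # with the conventions a_{-1} = 1, b_{-1} = 0
    a_ext, b_ext = [1.0, a0, 0.0], [0.0, b0, 1.0]
    i = 0 if x >= a0 else 1                      # which interval [a_i, a_{i-1}]
    slope = sum((b_ext[j] - b_ext[j + 1]) / a_ext[j] for j in range(i + 1))
    return slope * x + b_ext[i + 1]

def c_cases(x, a0, b0):
    # the two-branch closed form above
    if x <= a0:
        return (-1 + b0 * (1 - a0)) / a0 * x + 1
    return b0 * (1 - x)

# the two agree on a grid, for arbitrary valid parameters
for x in np.linspace(0.0, 1.0, 101):
    assert abs(c_general(x, 0.4, 0.3) - c_cases(x, 0.4, 0.3)) < 1e-9
```

The agreement follows because $\frac{b_{-1}-b_0}{a_{-1}}+\frac{b_0-b_1}{a_0}=-b_0+\frac{b_0-1}{a_0}=\frac{-1+b_0(1-a_0)}{a_0}$, which is the slope of the first branch.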
Accordingly it is easy to obtain
$$f(x_1,x_2;a,b)=\begin{cases}
\frac{1}{2}\left[x_1+x_2-x_1x_2+\left(\frac{-1+b_0(1-a_0)}{a_0}x_1+1\right)\left(\frac{-1+b_0(1-a_0)}{a_0}x_2+1\right)\right], & 0\leq x_1\leq a_0,\ 0\leq x_2\leq a_0\\
\frac{1}{2}\left[x_1+x_2-x_1x_2+\left(\frac{-1+b_0(1-a_0)}{a_0}x_1+1\right)b_0(-x_2+1)\right], & 0\leq x_1\leq a_0,\ a_0\leq x_2\leq 1\\
\frac{1}{2}\left[x_1+x_2-x_1x_2+b_0(-x_1+1)\left(\frac{-1+b_0(1-a_0)}{a_0}x_2+1\right)\right], & a_0\leq x_1\leq 1,\ 0\leq x_2\leq a_0\\
\frac{1}{2}\left[x_1+x_2-x_1x_2+b_0(-x_1+1)b_0(-x_2+1)\right], & a_0\leq x_1\leq 1,\ a_0\leq x_2\leq 1
\end{cases}$$
With the constraint $x_1=x_2$, I have
$$f(x_1=x_2;a,b)=\begin{cases}\frac{1}{2}\left[2x_1-x_1^2+\left(1+\frac{x_1\left(-1+b_0(1-a_0)\right)}{a_0}\right)^2\right], & 0\leq x_1\leq a_0\\ \frac{1}{2}\left[2x_1+b_0^2(1-x_1)^2-x_1^2\right], & a_0\leq x_1\leq 1\end{cases}$$
From here, to minimize $f(x_1,x_2;a,b)$, I take the derivative of the function with respect to $x_1$ and $x_2$ and set it equal to zero; I do the same for $f(x_1=x_2;a,b)$. For $f(x_1,x_2;a,b)$, setting the derivatives to zero yields a minimum only in the case $0\leq x_1\leq a_0$, $0\leq x_2\leq a_0$; in the other three cases it yields a maximum. I then inspect the function in a 3D plot, see that it is increasing, and therefore select the minimum values of $x_1$ and $x_2$. Next I compare the first case with the other three by taking the differences and plotting them in 3D, and I see that the first case takes smaller values than all the other cases at every point $(x_1,x_2)$.
I do the same for the function $f(x_1=x_2;a,b)$ and again find that the first case is the minimizer.
After that I take the difference of the two minima from the first case: $$f(x_1=x_2;a,b)-f(x_1,x_2;a,b)$$
This difference depends only on the parameters $a_0,b_0$, because I set the derivatives to zero, solved, and substituted the solutions back into the original expressions.
In the next step I must find the maximum over all $(a_0,b_0)\in[0,1]^2$. This step gives me $0$.
For $n\geq 2$, the result is no longer $0$, but I don't know how to proceed. As $n$ increases there are too many cases to compare. Does setting the derivatives to zero give me a maximum or a minimum? Even assuming I can find the critical points, I can no longer check them by visual inspection. Which cases should I consider? How should I approach this problem? I could also use Matlab or Mathematica, for example.
I will be happy to hear your comments; I can do the work myself. Solutions for $n\in\{2,3\}$, either analytic or computational, would be enough.
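For $n\in\{2,3\}$ a purely numerical attack is feasible. The sketch below (Python/NumPy in place of Matlab/Mathematica; the names `c_eval` and `gap` are my own, and the random search is only a crude stand-in for a proper optimizer such as Mathematica's `NMaximize`) evaluates both inner minima on a grid and then searches over the parameters for $n=2$. Since the diagonal $x_1=x_2$ is a subset of the square $[0,1]^2$, the gap $\min_{x_1=x_2}f-\min_{x_1,x_2}f$ is nonnegative by construction, which sidesteps the min-vs-max ambiguity of the derivative tests:

```python
import numpy as np

def c_eval(x, a, b):
    # piecewise-linear c from the definition (a_{-1}=1, b_{-1}=0 prepended)
    a_ext = np.concatenate(([1.0], np.asarray(a, dtype=float)))
    b_ext = np.concatenate(([0.0], np.asarray(b, dtype=float)))
    for i in range(len(a)):
        if a_ext[i + 1] <= x <= a_ext[i]:
            slope = sum((b_ext[j] - b_ext[j + 1]) / a_ext[j] for j in range(i + 1))
            return slope * x + b_ext[i + 1]
    raise ValueError("x must lie in [0, 1]")

def gap(a, b, m=201):
    # grid estimate of  min_{x1=x2} f  -  min_{x1,x2} f  (nonnegative by construction)
    xs = np.linspace(0.0, 1.0, m)
    cs = np.array([c_eval(x, a, b) for x in xs])
    F = 0.5 * (xs[:, None] + xs[None, :] - np.outer(xs, xs) + np.outer(cs, cs))
    return float(np.min(np.diag(F)) - np.min(F))

# crude random search over (a_0, a_1, b_0, b_1) for n = 2
rng = np.random.default_rng(0)
best_gap, best_params = -np.inf, None
for _ in range(2000):
    a1, a0 = np.sort(rng.uniform(0.05, 0.95, 2))      # 0 < a_1 < a_0 < 1
    b0, b1 = np.sort(rng.uniform(0.05, 0.95, 2))      # 0 < b_0 < b_1 < 1
    g = gap([a0, a1, 0.0], [b0, b1, 1.0], m=101)
    if g > best_gap:
        best_gap, best_params = g, (a0, a1, b0, b1)
print(best_gap, best_params)
```

A candidate maximizer found this way can then be refined with a local optimizer, and the $n=1$ observation that the maximum gap is $0$ can be re-checked the same way with `gap([a0, 0.0], [b0, 1.0])` over a grid of $(a_0,b_0)$.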
convex-analysis convex-optimization mathematica dynamic-programming
Can I ask how you know this is convex? As soon as I see products between variables (or functions of variables) I have to be skeptical.
– Michael Grant
Jul 30 at 21:24
@MichaelGrant They are line segments added up from $(0,1)$ to $(1,0)$ in a clever way. I did this construction, so I know that it must be convex. I can prove it. So there is no problem about it, trust me.
– Seyhmus Güngören
Jul 30 at 21:28
Ha ha! Most people who tell me that something is convex, despite my intuition otherwise, I do not trust. You, I will. :-)
– Michael Grant
Jul 30 at 21:29
Still: $f$ is a bilinear function, right?
– Michael Grant
Jul 30 at 21:30
@MichaelGrant okay, I am talking about the function $c$ and probably you are talking about $f$. So I must clarify that, sorry.
– Seyhmus Güngören
Jul 30 at 21:37
edited Aug 2 at 20:09
asked Jul 30 at 21:03
Seyhmus Güngören