Chance constrained stochastic programming
A stochastic program minimizes the expectation of a cost function over the decision variables:
$$\begin{cases}
\boldsymbol x=\text{argmin}~E(f(\boldsymbol x))\\
\boldsymbol g(\boldsymbol x)<\boldsymbol 0
\end{cases}$$
where $E$ denotes expectation.
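In practice the expectation in such an objective is often handled by sample-average approximation (SAA): replace $E(f(\boldsymbol x))$ with the mean over sampled scenarios. A minimal sketch; the quadratic cost, Gaussian noise model, and constraint below are invented for illustration, not taken from the question:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.normal(loc=1.0, scale=0.5, size=2000)  # sampled scenarios of the random parameter

def f(x, xi):
    # illustrative random cost: (x - 2)^2 + xi * x
    return (x - 2.0) ** 2 + xi * x

def saa_objective(x):
    # sample-average approximation of E[f(x, xi)]
    return np.mean(f(x[0], xi))

# deterministic constraint g(x) = x - 3 <= 0, written as "ineq" (>= 0) for SLSQP
res = minimize(saa_objective, x0=[0.0],
               constraints=[{"type": "ineq", "fun": lambda x: 3.0 - x[0]}])
```

With these made-up numbers the SAA optimum is near $x=2-E[\xi]/2=1.5$, up to sampling error in the scenario mean.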
A chance-constrained program instead requires the constraints to hold with at least a given probability:
$$\begin{cases}
\boldsymbol x=\text{argmin}~f(\boldsymbol x)\\
P(\boldsymbol g(\boldsymbol x)<\boldsymbol 0)>\alpha
\end{cases}$$
where $P$ denotes probability and $\alpha$ is the confidence level.
But I am looking for a formulation with both an expectation objective and a chance constraint:
$$\begin{cases}
\boldsymbol x=\text{argmin}~E(f(\boldsymbol x))\\
P(\boldsymbol g(\boldsymbol x)<\boldsymbol 0)>\alpha
\end{cases}$$
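For a concrete feel, here is a numerical sketch of this combined formulation. It assumes a linear objective and a single linear chance constraint with Gaussian data, for which $P(a^\top x\le b)\ge\alpha$ is classically equivalent to $\bar a^\top x+\Phi^{-1}(\alpha)\,\|\Sigma^{1/2}x\|_2\le b$; all numbers are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# min E[c]^T x  s.t.  P(a^T x <= b) >= alpha,  with a ~ N(a_mean, a_cov)
c_mean = np.array([-1.0, -1.0])        # expectation objective reduces to c_mean @ x
a_mean = np.array([1.0, 1.0])
a_cov = 0.1 * np.eye(2)
b, alpha = 4.0, 0.95
z = norm.ppf(alpha)                    # Phi^{-1}(alpha)
L = np.linalg.cholesky(a_cov)          # Sigma^{1/2}

def chance_margin(x):
    # >= 0  iff  a_mean @ x + z * ||Sigma^{1/2} x|| <= b   (deterministic equivalent)
    return b - a_mean @ x - z * np.linalg.norm(L.T @ x)

res = minimize(lambda x: c_mean @ x, x0=np.array([1.0, 1.0]),
               constraints=[{"type": "ineq", "fun": chance_margin}],
               bounds=[(0.0, 5.0), (0.0, 5.0)])
```

The $\Phi^{-1}(\alpha)$ term shrinks the feasible set relative to the nominal constraint $\bar a^\top x\le b$; one can check the guarantee by sampling $a$ and verifying that the empirical frequency of $a^\top x\le b$ at the optimizer is at least about $\alpha$.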
Does such an optimization problem exist?
Is it common?
If so, what is it called?
Is there any academic publication about it?
probability probability-theory stochastic-processes stochastic-calculus stochastic-analysis
When you say "stochastic programming" in the first sentence, do you mean a two-stage problem with recourse? (Also, small notational things: (1) $\text{argmin}$ is a set, so it should read $x\in\text{argmin}$, and (2) the inequality constraints should read $g(x)\le 0$, since standard programming problems are virtually never defined on open sets)
– David M.
Jul 15 at 12:58
You may take a look at the KKT condition: en.wikipedia.org/wiki/…
– BGM
Jul 16 at 3:13
@DavidM. I do not know what a two-stage problem is.
– Adams
Jul 17 at 2:34
@BGM, how does KKT apply to the stochastic system?
– Adams
Jul 17 at 2:35
asked Jul 15 at 9:27 by Adams
1 Answer
I think your definition of a stochastic program is suspect (I have never seen it defined this way). It seems that you're saying that a stochastic program is an optimization problem of the form
\begin{equation}
\begin{array}{rl}
\min & \mathbb{E}_\xi[f(x,\xi)]\\
\text{s.t.} & g(x)\leqslant 0
\end{array}
\end{equation}
where $\xi$ is some random variable. In many cases, such a problem would be uninteresting. For example, suppose (as is often done in the literature) that $f$ has the form
\begin{equation}
f(x,\xi)=\xi_1x_1+\dots+\xi_nx_n,
\end{equation}
where the $\xi_i$ are independent random variables. Then the objective function is given by
\begin{equation}
\mathbb{E}_\xi[f(x,\xi)]=\mathbb{E}_\xi[\xi_1x_1+\dots+\xi_nx_n]=\mathbb{E}[\xi_1]x_1+\dots+\mathbb{E}[\xi_n]x_n.
\end{equation}
Since the values $\mathbb{E}[\xi_i]$ are just constants, we've reduced the objective function to a (deterministic) affine function of $x$. This isn't really a stochastic program at all--we just replaced some random variables with their expected values.
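This collapse is easy to verify numerically; a quick Monte Carlo check, with arbitrary illustrative means for the $\xi_i$:

```python
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([1.0, -2.0, 0.5])                        # E[xi_i], chosen arbitrarily
xi = rng.normal(loc=mu, scale=1.0, size=(200_000, 3))  # independent samples of xi
x = np.array([2.0, 1.0, 4.0])

mc_objective = np.mean(xi @ x)   # Monte Carlo estimate of E_xi[f(x, xi)]
deterministic = mu @ x           # E[xi_1]x_1 + ... + E[xi_n]x_n
# the two agree up to sampling error: the "stochastic" objective is affine in x
```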
This confusion makes it hard to address the rest of your question. In principle, chance constraints can be combined with lots of different models--it really depends on what you're modeling.
I would suggest two books to read up on this subject:
- Birge and Louveaux have a very good textbook (published by Springer) that introduces the fundamentals of stochastic programming. In particular, they introduce the concept of linear programming with recourse, which is at the heart of most stochastic programming as it is studied today.
- Prekopa wrote a seminal text (called simply "Stochastic Programming") which is much more technical than Birge and Louveaux, but treats chance constraints much more thoroughly. In particular, he explores different places where chance constraints can appear.
answered Jul 17 at 15:58 by David M.
I refer to stochastic model predictive control from this source: Mesbah, Ali. "Stochastic model predictive control: An overview and perspectives for future research." IEEE Control Systems 36.6 (2016): 30-44. If you do not have access to download it, please let me know.
– Adams
Jul 18 at 14:08