Optimization loss due to misperceived probability

Suppose $a$ is chosen to maximize the expected value of $u(a,x)$ under a probability measure over $x$. Imagine the true distribution is $P(x)$, but the optimization is conducted under a misperceived distribution $Q(x)$. We denote the optimal actions under $P$ and $Q$ by $a^*(P)$ and $a^*(Q)$ respectively,
\begin{align}
a^*(P)&=\operatorname{argmax}_a \int u(a,x)\,dP(x),\\
a^*(Q)&=\operatorname{argmax}_a \int u(a,x)\,dQ(x).
\end{align}



Now we investigate the loss from this misoptimization,
\begin{align}
\Delta(P,Q) = \int \left[u\left(a^*(P),x\right) - u\left(a^*(Q),x\right)\right]\,dP(x).
\end{align}



My question: Is there a good bound or approximation for $\Delta(P,Q)$ in general, in terms of some function (or value function) of $u$ and some metric measuring the distance between the two distributions $P$ and $Q$? It's fine to assume good properties for $u$ (say, smooth and bounded) and to assume $P$ and $Q$ are close in some intuitive sense.



It would be nice to also allow the possibility that $P$ and $Q$ are Dirac measures placed at $p$ and $q$, with $p$ and $q$ close to each other.
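In the Dirac case the loss can be written pointwise (assuming, as an illustration, that $a^*(p)$ denotes the maximizer of $u(\cdot,p)$):
\begin{align}
\Delta(\delta_p,\delta_q) = u\big(a^*(p),p\big) - u\big(a^*(q),p\big),
\end{align}
and if $u$ is smooth with an interior maximizer and $a^*(\cdot)$ is Lipschitz, a second-order Taylor expansion around $a^*(p)$ suggests $\Delta = O(\|p-q\|^2)$, since the first-order term in $a$ vanishes at the maximizer.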



This seems like a well-motivated question, but I have failed to find any literature on it. Thanks in advance for discussion or pointers to existing results.



For an ideal solution, it would be better not to use the KL-divergence, because it is infinite between a Dirac $P$ and a Dirac $Q$ at distinct points. The Wasserstein metric appears more likely to be relevant. I suspect some form of the envelope theorem might be useful, since an optimization is involved.
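To make the setup concrete, here is a small numerical sketch (my own illustration, using the quadratic utility $u(a,x)=-(a-x)^2$ as an assumed example). For this $u$, the optimal action under any distribution is its mean, so $\Delta(P,Q)$ reduces to $(\mathbb{E}_P[x]-\mathbb{E}_Q[x])^2$, which is bounded by $W_1(P,Q)^2$:

```python
import numpy as np

# Quadratic utility u(a, x) = -(a - x)^2: the optimal action under any
# distribution is its mean, so Delta(P, Q) = (E_P[x] - E_Q[x])^2.
def optimal_action(samples):
    return samples.mean()

def delta(p_samples, q_samples):
    """Expected loss under P from optimizing against Q instead of P."""
    a_p = optimal_action(p_samples)
    a_q = optimal_action(q_samples)
    u = lambda a, x: -(a - x) ** 2
    return np.mean(u(a_p, p_samples) - u(a_q, p_samples))

def wasserstein_1(p_samples, q_samples):
    """W_1 between two equal-size empirical distributions on the real line
    (for sorted samples it is the mean absolute coordinate difference)."""
    return np.abs(np.sort(p_samples) - np.sort(q_samples)).mean()

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, 10_000)
q = p + 0.1  # Q is P shifted by 0.1, so W_1(P, Q) = 0.1 exactly

d = delta(p, q)
w = wasserstein_1(p, q)
print(d, w ** 2)  # here Delta = (mean shift)^2 = 0.01 = W_1^2
```

For this toy case the bound $\Delta \le W_1(P,Q)^2$ is tight under a pure shift; whether something of this shape holds for general smooth bounded $u$ is exactly what I am asking.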







  • Comments pointing me to other formulations of this question, or anything else helpful, are welcome!
    – Sean
    7 hours ago














asked 7 hours ago

Sean (112)










