Optimisation with unit simplex
Minimise $$\sum_i x_i c_i$$ for a constant $c \in \mathbb{R}^n$, subject to $\sum_i x_i = 1$ and $x_i \geq 0$.
We are told to use Lagrange's method, so I have $$L(x,\lambda)=\sum_i x_i c_i - \lambda\left(\sum_i x_i - 1\right)$$
So $$\frac{\partial L}{\partial x_i} = c_i - \lambda = 0$$
But this gives no condition that determines the individual $x_i$?
What are the general approaches for solving convex optimisation problems over a probability distribution, i.e. subject to $\sum_i x_i = 1$ and $x_i \geq 0$?
convex-optimization lagrange-multiplier
The method of Lagrange multipliers only (directly) applies to optimization problems with equality constraints. As this problem has inequality constraints (namely, $x \geqslant 0$), you would need to use the generalization of Lagrange multipliers, called the Karush-Kuhn-Tucker (KKT) conditions. – David M., Jul 23 at 23:34
The result shows that the sum can only have a stationary point in the interior of the simplex if all $c_i$ are equal, with similar results for stationary points in the interiors of sub-simplices. – random, Jul 24 at 1:00
asked Jul 23 at 23:27 by Steve
1 Answer
First of all, "Lagrange's method" with inequality constraints is a little different: it goes by the name of the KKT conditions, and it involves more than just setting the derivative to zero. The Lagrangian is
$$
L(x, \mu, \lambda) = \sum_{i=1}^n c_i x_i + \mu\left(\sum_{i=1}^n x_i - 1\right) + \sum_{i=1}^n \lambda_i (-x_i),
$$
with $\lambda_i \geq 0$, and the conditions are
$$
\begin{aligned}
c_i + \mu - \lambda_i &= 0 & &\leftarrow \nabla L = 0 \\
\lambda_i (-x_i) &= 0 & &\leftarrow \text{complementarity} \\
\sum_{i=1}^n x_i = 1,\ x &\geq 0 & &\leftarrow \text{primal feasibility} \\
\lambda &\geq 0 & &\leftarrow \text{multiplier (dual) feasibility}
\end{aligned}
$$
Unfortunately, it is hard to find a vector which satisfies these conditions directly. Personally, I am not aware of an elegant way to do so.
However, another approach can be used which does not involve Lagrange's method at all. The unit simplex is a compact convex set, and the problem is equivalent to maximizing $\sum_{i=1}^n (-c_i x_i)$, which is a convex (indeed linear) function. It is known that the maximum of a convex function over a compact convex set is attained at an extreme point. In the case of the simplex, these are its vertices: the standard basis vectors $\mathrm{e}_i = (0, \dots, 1, \dots, 0)^T$.
Thus, casting back to the minimization form, the minimum can be computed as
$$
\min_{i=1, \dots, n} c^T \mathrm{e}_i = \min_{i=1, \dots, n} c_i,
$$
and the optimal solution is the corresponding vertex $x^* = \mathrm{e}_{i^*}$, where $i^*$ is the index of a minimal $c_i$.
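In fact, this vertex can be checked a posteriori against the KKT conditions stated above; here is a sketch, with one natural (assumed) choice of multipliers:
$$
\begin{aligned}
x^* &= \mathrm{e}_{i^*}, \qquad \mu = -c_{i^*}, \qquad \lambda_i = c_i - c_{i^*} \geq 0, \\
c_i + \mu - \lambda_i &= c_i - c_{i^*} - (c_i - c_{i^*}) = 0 \quad \text{(stationarity)}, \\
\lambda_i x_i^* &= 0, \quad \text{since } x_i^* = 0 \text{ for } i \neq i^* \text{ and } \lambda_{i^*} = 0 \quad \text{(complementarity)},
\end{aligned}
$$
and primal and dual feasibility hold by construction.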
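As a quick numerical sanity check, the problem is a linear program, so it can be solved directly; a minimal sketch, assuming SciPy is available (the cost vector below is an arbitrary illustrative example):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, -1.0, 2.0, 0.5])  # arbitrary example cost vector
n = len(c)

# minimise c^T x  subject to  sum(x) = 1,  x >= 0  (the unit simplex)
res = linprog(
    c,
    A_eq=np.ones((1, n)),      # single equality constraint: sum(x) = 1
    b_eq=np.array([1.0]),
    bounds=[(0, None)] * n,    # x_i >= 0
)

print(res.x)                           # a vertex, e.g. [0, 1, 0, 0] here
print(res.fun)                         # optimal value: min(c) = -1.0
print(np.isclose(res.fun, c.min()))    # matches the closed-form answer
```

The solver returns a vertex of the simplex with objective value $\min_i c_i$, in agreement with the argument above.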
answered Jul 29 at 8:20 by Alex Shtof