Optimisation with unit simplex












minimise $$\sum_i x_i c_i$$ for constant $c \in \mathbb{R}^n$, subject to $\sum_i x_i = 1$ and $x_i \geq 0$




we are told to use Lagrange's method, so I have $$L(x,\lambda)=\sum_i x_i c_i - \lambda\left(\sum_i x_i - 1\right)$$



So $$\frac{\partial L}{\partial x_i} = c_i - \lambda = 0$$



But this gives no condition on the individual $x_i$?



What are the general approaches for solving convex optimisation problems with a probability-distribution constraint such as $\sum_i x_i = 1$, $x_i \geq 0$?







  • The method of Lagrange multipliers only (directly) applies to optimization problems with equality constraints. As this problem has inequality constraints (namely, $x \geqslant 0$), you would need to use the generalization of Lagrange multipliers, called the Karush-Kuhn-Tucker (KKT) conditions.
    – David M.
    Jul 23 at 23:34










  • The result shows that the sum can only have a stationary point in the interior of the simplex if all $c_i$ are equal, with similar results for stationary points in the interiors of sub-simplices.
    – random
    Jul 24 at 1:00














asked Jul 23 at 23:27









Steve








1 Answer
First of all, "Lagrange's method" with inequality constraints is a little different: it is called the KKT conditions, and it involves more than setting the derivative to zero. The Lagrangian is
$$
L(x, \mu, \lambda) = \sum_{i=1}^n c_i x_i + \mu\left(\sum_{i=1}^n x_i - 1\right) + \sum_{i=1}^n \lambda_i (-x_i),
$$
with $\lambda_i \geq 0$, and the conditions are
$$
\begin{aligned}
c_i + \mu - \lambda_i &= 0 & &\leftarrow \nabla L = 0 \\
\lambda_i (-x_i) &= 0 & &\leftarrow \text{Complementarity} \\
\sum_{i=1}^n x_i = 1,~ x &\geq 0 & &\leftarrow \text{Primal feasibility} \\
\lambda &\geq 0 & &\leftarrow \text{Multiplier (dual) feasibility}
\end{aligned}
$$
Unfortunately, it is hard to find a vector which satisfies these conditions directly. Personally, I am not aware of an elegant way to do so.
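That said, one can at least verify a candidate numerically. A minimal sketch in Python (with made-up costs $c$, not from the original answer), guessing $x = \mathrm{e}_{i^*}$ for a minimal $c_{i^*}$ together with $\mu = -\min_i c_i$ and $\lambda_i = c_i + \mu$, and checking all four conditions:

```python
# Sketch: check a guessed KKT point numerically, with made-up costs c.
c = [3.0, 1.0, 4.0, 1.5]
n = len(c)

i_star = min(range(n), key=lambda i: c[i])     # index of a minimal c_i
x = [1.0 if i == i_star else 0.0 for i in range(n)]
mu = -min(c)                                   # multiplier for sum(x) = 1
lam = [ci + mu for ci in c]                    # lambda_i = c_i + mu >= 0

assert all(abs(c[i] + mu - lam[i]) < 1e-12 for i in range(n))   # stationarity
assert all(abs(lam[i] * (-x[i])) < 1e-12 for i in range(n))     # complementarity
assert abs(sum(x) - 1.0) < 1e-12 and all(xi >= 0 for xi in x)   # primal feasibility
assert all(li >= 0 for li in lam)                               # dual feasibility
```

All four assertions pass, so the vertex guess is in fact a KKT point for this data.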



However, another approach can be used which does not involve Lagrange's method at all. The unit simplex is a compact convex set, and the problem is equivalent to maximizing $\sum_{i=1}^n (-c_i) x_i$, which (being linear) is also convex. It is known that the maximum of a convex function on a compact convex set is attained at an extreme point. For the simplex these are its vertices: the standard basis vectors $\mathrm{e}_i = (0, \dots, 1, \dots, 0)^T$.
Thus, casting back to the minimization form, the minimum can be computed by
$$
\min_{i=1, \dots, n} c^T \mathrm{e}_i = \min_{i=1, \dots, n} c_i,
$$
and an optimal solution is the corresponding vertex $x^* = \mathrm{e}_{i^*}$, where $i^*$ is the index of a minimal $c_i$.
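As a sanity check on the vertex argument (a sketch with made-up costs, not part of the original answer), one can sample random points of the simplex and confirm that none of them beats $\min_i c_i$:

```python
import random

c = [3.0, 1.0, 4.0, 1.5]   # made-up costs
best = min(c)              # value at the optimal vertex, per the argument above

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in c]
    s = sum(w)
    x = [wi / s for wi in w]                       # random point: sum(x)=1, x>=0
    obj = sum(ci * xi for ci, xi in zip(c, x))     # objective c^T x
    assert obj >= best - 1e-12                     # never better than the vertex
```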



























answered Jul 29 at 8:20
Alex Shtof





















             
