Convex Minimization Over the Unit Simplex

I have a simple (few variables), continuous, twice differentiable convex function that I wish to minimize over the unit simplex. In other words, $\min f(\mathbf{x})$ subject to $\mathbf{0} \preceq \mathbf{x}$ and $\mathbf{1}^\top \mathbf{x} = 1$. This will have to be performed multiple times.



What is a good optimization method to use? Preferably one that is fairly simple to describe and implement.



There are some fairly sophisticated methods (such as "Mirror descent and nonlinear projected subgradient methods for convex optimization", Beck and Teboulle) designed specifically for minimizing over the unit simplex. But these methods use only the gradient, not the Hessian.
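For context, on the unit simplex the mirror descent update studied in that line of work reduces to an exponentiated gradient step followed by renormalization, so each iterate stays feasible by construction. A minimal Python sketch; the quadratic test objective, step size, and function names are illustrative, not taken from the paper:

```python
import numpy as np

def entropic_mirror_descent(grad, x0, step=0.1, iters=2000):
    """Entropic mirror descent on the unit simplex:
    x_i <- x_i * exp(-step * g_i), then renormalize."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        x = x * np.exp(-step * (g - g.max()))  # shift exponent for numerical stability
        x /= x.sum()                           # renormalize back onto the simplex
    return x

# Illustrative smooth convex objective: f(x) = ||x - c||^2 with c outside the simplex.
c = np.array([0.8, 0.5, -0.1])
x = entropic_mirror_descent(lambda x: 2 * (x - c), np.ones(3) / 3)
# x is close to the simplex projection of c, i.e. approximately [0.65, 0.35, 0]
```

Note the multiplicative update never needs an explicit projection, which is why these schemes get by with gradients alone.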







  • If the function is convex, why wouldn't a simple line search method work? Since the objective function is twice differentiable, I'd think Newton's method would work fine -- if only once differentiable, gradient descent should work. Both are extremely well documented online, with pseudocode and application-specific code in abundance.
    – jedwards
    Dec 6 '11 at 3:34











  • How about the "Karush–Kuhn–Tucker conditions"? en.wikipedia.org/wiki/…
    – matt
    Dec 6 '11 at 12:35










  • Thanks for the kind reply. I was wondering if there is a specific algorithm for minimization over the unit simplex that uses the Hessian (since it is easily calculated and the problem size is small). (I am still new to convex optimization.) In the end I implemented an interior point method and it works well.
    – dcm29
    Dec 12 '11 at 3:05







    This answer probably comes too late, but have you already looked into the paper "Projected Newton methods for optimization problems with simple constraints" by Bertsekas (SIAM J. Control and Optimization, Vol. 20, No. 2, March 1982)? As the title says, a projected Newton algorithm is introduced together with an Armijo-like line search. Also, superlinear convergence is proved for simple constraints. The above problem is listed as an example.
    – user103342
    Oct 26 '13 at 11:12











  • See the paper "Efficient Projections onto the l1-Ball for Learning in High Dimensions". It solves exactly this projection problem.
    – Liang
    Nov 22 '13 at 13:16
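The projection onto the simplex from that line of work reduces to a sort plus a single thresholding pass. A minimal Python sketch of the O(n log n) variant (not the authors' code; names are illustrative):

```python
import numpy as np

def project_simplex(v, radius=1.0):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = radius}
    via the standard sort-and-threshold scheme."""
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]                # sort entries in descending order
    css = np.cumsum(u)                  # cumulative sums of the sorted entries
    # Largest index whose entry stays positive after the shared shift:
    rho = np.nonzero(u + (radius - css) / np.arange(1, v.size + 1) > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)  # shared shift (Lagrange multiplier)
    return np.maximum(v - theta, 0.0)

project_simplex(np.array([0.8, 0.5, -0.1]))  # -> [0.65, 0.35, 0.]
```

For small problems like the one in the question, the sort cost is negligible, so this projection can be called at every iteration of any projected method.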














edited May 31 at 20:44 by Royi

asked Dec 6 '11 at 2:30 by dcm29










1 Answer
The simplest method, which is still pretty fast (in total running time, if not in per-iteration cost), would be Projected Gradient Descent; an accelerated (FISTA-style) variant speeds it up further at little extra cost.



All you need is to calculate the gradient of the function $f(x)$ and project each iterate onto the unit simplex.



A MATLAB sketch would be (stepSize and numIterations must be chosen by the user; for an $L$-smooth $f$, stepSize = 1 / L is a safe choice):



simplexRadius = 1;    % Unit simplex: radius 1
stopThr       = 1e-5; % Tolerance for the projection sub-problem
vX            = vXInit;

for ii = 1:numIterations

    vG = CalcFunGrad(vX);                            % Gradient of f at vX
    vX = vX - (stepSize * vG);                       % Gradient step
    vX = ProjectSimplex(vX, simplexRadius, stopThr); % Project back onto the simplex

end


You can take the simplex projection function from the link I pasted (or directly from my Ball Projection GitHub repository).
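For readers without MATLAB, the same loop can be sketched in Python/NumPy. The sort-based projection, the quadratic test objective, and the step size ($1/L$ for an $L$-smooth $f$) are illustrative assumptions, not part of the original answer:

```python
import numpy as np

def project_simplex(v, radius=1.0):
    """Euclidean projection onto the simplex (sort-and-threshold)."""
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (radius - css) / np.arange(1, v.size + 1) > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projected_gradient(grad, x0, step, iters=500):
    """Plain projected gradient descent: gradient step, then project."""
    x = project_simplex(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = project_simplex(x - step * grad(x))
    return x

# Illustrative smooth convex objective: f(x) = ||x - c||^2, so L = 2 and step = 1/L.
c = np.array([0.8, 0.5, -0.1])
x = projected_gradient(lambda x: 2 * (x - c), np.ones(3) / 3, step=0.5)
# x -> [0.65, 0.35, 0.]
```

Adding Nesterov-style momentum to the loop gives the accelerated variant mentioned above, with the same projection per iteration.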






        answered Aug 22 '17 at 12:40









        Royi
