How does one minimize/maximize the Lagrangian if its gradient is non-linear?

If one is trying to maximize (or minimize) the Lagrangian

$$\mathcal{L}(x, y, \lambda) = f(x, y) - \lambda \cdot g(x, y),$$

it's fairly straightforward that this is achieved by solving

$$\nabla_{x, y, \lambda}\, \mathcal{L}(x, y, \lambda) = 0.$$

In the examples I have seen, the Lagrangian never has degree higher than 2, so its gradient is always linear and the above equation can be solved as a system of linear equations. The examples I have seen where the above equation contains non-linear terms are solved by hand.

Is there a go-to method for optimizing with equality constraints when the gradients may not be linear? Is some iterative method (e.g., gradient descent/ascent) the standard approach?
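One common approach when $\nabla \mathcal{L} = 0$ is nonlinear is to apply Newton's method to that system of equations directly. A minimal sketch in Python/NumPy, on an illustrative toy problem (not from the question): maximize $f(x,y) = x + y$ subject to $g(x,y) = x^2 + y^2 - 1 = 0$, whose solution is $x = y = \lambda = 1/\sqrt{2}$. The starting guess is also an arbitrary choice.

```python
import numpy as np

# L(x, y, lam) = (x + y) - lam * (x^2 + y^2 - 1)

def grad_L(v):
    """Gradient of the Lagrangian; its root is the constrained optimum."""
    x, y, lam = v
    return np.array([
        1.0 - 2.0 * lam * x,       # dL/dx
        1.0 - 2.0 * lam * y,       # dL/dy
        -(x**2 + y**2 - 1.0),      # dL/dlambda (recovers the constraint)
    ])

def hess_L(v):
    """Hessian of the Lagrangian (Jacobian of grad_L), needed by Newton's method."""
    x, y, lam = v
    return np.array([
        [-2.0 * lam,  0.0,       -2.0 * x],
        [ 0.0,       -2.0 * lam, -2.0 * y],
        [-2.0 * x,   -2.0 * y,    0.0    ],
    ])

v = np.array([1.0, 0.5, 1.0])          # starting guess (illustrative)
for _ in range(50):
    # Newton step: solve the linearized system hess_L(v) @ step = -grad_L(v)
    step = np.linalg.solve(hess_L(v), -grad_L(v))
    v = v + step
    if np.linalg.norm(step) < 1e-12:
        break
```

Note that Newton's method only finds a stationary point near the starting guess; which stationary point (maximum, minimum, or saddle) you land on depends on where you start, so in practice it is wrapped in globalization strategies (line searches, trust regions).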







  • It's more of a field of study. Lots of options.
    – jnez71
    Jul 14 at 21:57










  • You might be interested in reading about the augmented Lagrangian method. Many iterative optimization algorithms can be interpreted as methods for solving the KKT conditions.
    – littleO
    Jul 14 at 22:01
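To make the augmented Lagrangian suggestion concrete: it replaces the constrained problem with a sequence of *unconstrained* minimizations of $\mathcal{L}_A(x, y; \lambda, \mu) = f + \lambda g + \frac{\mu}{2} g^2$ (sign convention for minimization), alternated with the multiplier update $\lambda \leftarrow \lambda + \mu\, g$. A hedged sketch on a toy problem (minimize $-(x+y)$ on the unit circle, so the optimum is $x = y = 1/\sqrt{2}$); the penalty weight $\mu$, the step size, and the iteration counts are illustrative choices, and the inner solver here is deliberately the simplest possible one:

```python
import numpy as np

# Illustrative problem: minimize f(x, y) = -(x + y)
# subject to g(x, y) = x^2 + y^2 - 1 = 0.

def grad_f(v):
    return np.array([-1.0, -1.0])      # gradient of f

def g(v):
    return v[0]**2 + v[1]**2 - 1.0     # equality constraint

def grad_g(v):
    return 2.0 * v                     # gradient of g

lam = 0.0                  # multiplier estimate
mu = 10.0                  # penalty weight (illustrative)
v = np.array([0.5, 0.5])   # starting point (illustrative)

for _ in range(30):        # outer augmented-Lagrangian iterations
    # Inner loop: fixed-step gradient descent on f + lam*g + (mu/2)*g^2
    for _ in range(2000):
        grad = grad_f(v) + (lam + mu * g(v)) * grad_g(v)
        v = v - 0.02 * grad
    lam = lam + mu * g(v)  # first-order multiplier update
```

At convergence $\lambda$ approaches the exact Lagrange multiplier (here $1/\sqrt{2}$) and the constraint residual $g$ goes to zero. In practice the inner minimization is done with a quasi-Newton method rather than fixed-step gradient descent; the point of the sketch is the outer structure, which turns a constrained problem into a loop of unconstrained ones.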














asked Jul 14 at 21:44

Duncan Frost










