Solve a constrained nonlinear optimization problem with MATLAB [closed]

I know that MATLAB has fmincon for this kind of problem, but I don't really know how to apply it to my problem.



Question 1: Find the optimal $u$ for



$u^T D x - \frac{\mu}{2} \|u\|^2 \rightarrow \max$



$|u_i| \le 1$



where $x$ is a given column vector, $\mu$ is a constant, $\|\cdot\|$ is the $L_2$ norm, $D$ is a (discrete) gradient operator, and $u$ is the column vector of unknowns.



Question 2: Do the problem above and the problem below have the same optimum?



$-\left(u^T D x - \frac{\mu}{2} \|u\|^2\right) \rightarrow \min$



$|u_i| \le 1$



Without the constraint this is clearly true, but does it still hold with the constraint? I don't want to formulate the dual problem.







asked Aug 6 at 4:04 by HTCom, edited Aug 6 at 15:38
closed as off-topic by Theoretical Economist, Taroccoesbrocco, Shailesh, José Carlos Santos, Key Flex Aug 6 at 16:54


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Theoretical Economist, Taroccoesbrocco, Shailesh, José Carlos Santos, Key Flex








  • Have you read the documentation on fmincon? It should tell you everything you need to know.
    – Theoretical Economist, Aug 6 at 5:10






  • This is a convex quadratic optimization problem for which the Optimization Toolbox (as well as similar toolboxes from external vendors) will be much more suitable in terms of performance and accuracy than fmincon. Search under quadprog.
    – Michal Adamaszek, Aug 6 at 5:28






  • Can I turn this into a minimization by putting a minus sign in front of the functional and leaving the constraint unchanged? I don't want to formulate the dual problem. Without a constraint, $f \rightarrow \max$ and $-f \rightarrow \min$ have the same optimum, but here there is a constraint.
    – HTCom, Aug 6 at 5:34






  • For Q2, the answer is yes. You can multiply your objective function by $-1$ and minimise it subject to the same constraints (see the short argument below). I agree with Michal's comment that you're better off using quadprog instead of fmincon.
    – Theoretical Economist, Aug 6 at 13:14
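
A one-line justification of the equivalence in the last comment (this is standard and not specific to this problem): for any feasible set $C$, here $C = \{u : |u_i| \le 1\}$, and any objective $f$,

$u^\star \in \arg\max_{u \in C} f(u) \iff f(u^\star) \ge f(u) \;\; \forall u \in C \iff -f(u^\star) \le -f(u) \;\; \forall u \in C \iff u^\star \in \arg\min_{u \in C} \bigl(-f(u)\bigr),$

so both problems have exactly the same set of optimizers, and their optimal values differ only in sign.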















1 Answer
Most optimization solvers require the objective function (by default) in minimization form. Multiplying a maximization objective by negative one is a reliable way to transform it into a minimization, even in the presence of constraints. I created an fmincon tutorial with source code (see method #2) for a problem with an objective function, an equality constraint, and an inequality constraint, or you can use the MathWorks fmincon or quadprog documentation.
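
As a minimal sketch of the fmincon route, with purely illustrative data (the signal x, the constant mu, and the forward-difference matrix D below are made up for the example; the question does not specify them):

    % Illustrative data only -- substitute your own x, mu and D.
    n  = 100;
    x  = randn(n, 1);               % given column vector
    mu = 0.5;                       % given constant
    D  = diff(speye(n));            % (n-1)-by-n forward-difference "gradient" operator
    Dx = D * x;
    m  = length(Dx);                % length of u

    % Maximizing u'*(D*x) - (mu/2)*||u||^2 equals minimizing its negative.
    obj = @(u) -(u.' * Dx - (mu/2) * (u.' * u));

    u0 = zeros(m, 1);               % starting point
    lb = -ones(m, 1);               % |u_i| <= 1 written as box bounds
    ub =  ones(m, 1);
    u_fmincon = fmincon(obj, u0, [], [], [], [], lb, ub);

The box bounds lb and ub encode the constraint $|u_i| \le 1$ directly, which is also what the next paragraph recommends.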



If you do solve this problem numerically, avoid using the absolute value $\left| u_i \right| \le 1$ as a general nonlinear inequality constraint; instead, impose lower and upper bounds on the variable ($u_i \le 1$ and $u_i \ge -1$). An absolute value in a constraint can cause problems for gradient-based solvers that require continuous first and second derivatives.
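
For the quadprog route suggested in the comments, the negated objective is already in the standard quadratic-program form $\frac{1}{2}u^T H u + f^T u$ with $H = \mu I$ and $f = -Dx$. A sketch with the same kind of illustrative data as above; since this particular problem is separable, a closed-form solution is available as a correctness check:

    % Illustrative data, as in the fmincon sketch above.
    n  = 100;  x = randn(n, 1);  mu = 0.5;
    D  = diff(speye(n));            % forward-difference operator
    Dx = D * x;  m = length(Dx);

    H  = mu * speye(m);             % quadratic term of the negated objective
    f  = -Dx;                       % linear term
    lb = -ones(m, 1);  ub = ones(m, 1);
    u_qp = quadprog(H, f, [], [], [], [], lb, ub);

    % The objective and bounds are separable, so the exact maximizer is
    % (D*x)/mu clipped to [-1, 1]; compare against the solver output.
    u_exact = min(max(Dx / mu, -1), 1);
    disp(norm(u_qp - u_exact))      % should be near zero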






answered Aug 6 at 12:52 by John Hedengren
  • How can I write the objective function in MATLAB when the number of components of $u=(u_1, u_2, \ldots, u_n)$ is not fixed in advance? For example, objective = @(u) u(1)+u(2)+...+u(n)+u(1)*u(1)+u(2)*u(2)+...+u(n)*u(n); (see the sketch below).
    – HTCom, Aug 6 at 15:32
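
Regarding the follow-up comment above: an anonymous function in MATLAB receives the whole vector, so sum and elementwise operations handle an arbitrary $n$ without writing out each term. A minimal sketch (the first line reproduces the pattern from the comment; the second is the negated objective of this question, assuming D, x and mu are already defined in the workspace):

    % The n-term pattern from the comment, for any length of u:
    objective1 = @(u) sum(u) + sum(u.^2);

    % The negated objective of this question, again for any length of u:
    objective2 = @(u) -(u.' * (D * x) - (mu/2) * sum(u.^2));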

















