Solving a constrained nonlinear optimization problem in MATLAB [closed]
I know that MATLAB provides fmincon for this kind of problem, but I don't know how to apply it to mine.
Question 1: Find the optimal $u$ for
$$u^T D x - \frac{\mu}{2}\|u\|^2 \rightarrow \max,$$
$$|u_i| \le 1,$$
where $x$ is a given 1D column vector, $\mu$ is a constant, $\|\cdot\|$ is the $L_2$ norm, $D$ is a gradient operator, and $u$ is a 1D column vector.
Question 2: Do the problem above and the problem below have the same optimum?
$$-(u^T D x - \frac{\mu}{2}\|u\|^2) \rightarrow \min,$$
$$|u_i| \le 1.$$
Without the constraint this is clearly true, but does the equivalence still hold with the constraint? I don't want to formulate the dual problem.
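For concreteness, the negated problem is a box-constrained convex quadratic program, so quadprog applies directly. A minimal sketch with placeholder data (the size n, the value of mu, the random x, and the simple difference matrix D are all assumptions made for illustration):

```matlab
% Negate the concave maximization to get a convex QP:
%   min over u of  (mu/2)*u'*u - u'*(D*x)   subject to  -1 <= u_i <= 1
n  = 5;                                         % example size (placeholder)
mu = 0.5;                                       % example constant (placeholder)
x  = randn(n, 1);                               % given data vector (placeholder)
D  = eye(n) - diag(ones(n-1, 1), -1);           % simple difference operator (assumption)

H  = mu * eye(n);                               % quadratic term of the minimization
f  = -(D * x);                                  % linear term (note the sign flip)
lb = -ones(n, 1);                               % |u_i| <= 1 written as box bounds
ub =  ones(n, 1);
u  = quadprog(H, f, [], [], [], [], lb, ub);
```

Since $H = \mu I$, the solution is also available in closed form as $u_i = \max(-1, \min(1, (Dx)_i/\mu))$, which gives a quick sanity check on the solver output.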
optimization matlab nonlinear-optimization numerical-optimization
closed as off-topic by Theoretical Economist, Taroccoesbrocco, Shailesh, José Carlos Santos, Key Flex Aug 6 at 16:54
This question appears to be off-topic. The users who voted to close gave this specific reason:
- "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Theoretical Economist, Taroccoesbrocco, Shailesh, José Carlos Santos, Key Flex
Have you read the documentation on fmincon? It should tell you everything you need to know.
– Theoretical Economist
Aug 6 at 5:10
This is a convex quadratic optimization problem for which the Optimization Toolbox (as well as similar toolboxes from external vendors) will be much more suitable in terms of performance and accuracy than fmincon. Search under quadprog.
– Michal Adamaszek
Aug 6 at 5:28
Can I transform this problem into a minimization by adding a minus sign in front of the functional and leaving the condition unchanged? I don't want to formulate the dual problem. Without a constraint, $f \rightarrow \max$ and $-f \rightarrow \min$ have the same optimum, but here there is a constraint.
– HTCom
Aug 6 at 5:34
For Q2, the answer is yes. You can multiply your objective function by $-1$ and minimise it subject to the same constraints. I agree with Michal's comment that you're better off using quadprog instead of fmincon.
– Theoretical Economist
Aug 6 at 13:14
asked Aug 6 at 4:04 by HTCom, edited Aug 6 at 15:38
1 Answer
Most optimization solvers require the objective function (by default) in minimization form. Multiplying a maximization objective by $-1$ is a reliable way to transform it into a minimization, even with constraints. I created an fmincon tutorial with source code (see method #2) for a problem with an objective function, an equality constraint, and an inequality constraint, or you can use MathWorks' fmincon or quadprog documentation.
If you do solve this problem numerically, avoid posing the absolute value $\left|u_i\right| \le 1$ as a nonlinear inequality constraint; instead use upper and lower bounds on the variable ($u_i \le 1$ and $u_i \ge -1$). An absolute value operator may cause problems for gradient-based solvers that require continuous first and second derivatives.
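Following this advice, the bound form of the constraint plugs straight into fmincon's lb/ub arguments, so no nonlinear constraint function is needed. A sketch assuming the same placeholder n, mu, x, and difference matrix D as above:

```matlab
% Minimize the negated objective; |u_i| <= 1 becomes box bounds.
n  = 5;  mu = 0.5;                              % placeholders (assumptions)
x  = randn(n, 1);                               % given data vector (placeholder)
D  = eye(n) - diag(ones(n-1, 1), -1);           % placeholder difference operator

objective = @(u) -(u' * (D * x)) + (mu/2) * (u' * u);
u0 = zeros(n, 1);                               % feasible starting point
lb = -ones(n, 1);
ub =  ones(n, 1);
u  = fmincon(objective, u0, [], [], [], [], lb, ub);
```

The empty [] arguments are the linear constraint matrices A, b, Aeq, beq, which this problem does not need.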
How can I formulate the objective function in MATLAB with an undefined number of arguments, $u=(u_1, u_2, \ldots, u_n)$? Just like this: objective = @(u) u(1)+u(2)+.......+u(n)+u(1)*u(1)+u(2)*u(2)+....+u(n)*u(n);
– HTCom
Aug 6 at 15:32
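The anonymous function in the comment above can be written for any $n$ by vectorizing over u, so no term-by-term expansion is needed. A sketch of the commenter's example $\sum_i u_i + \sum_i u_i^2$:

```matlab
% Works for a column vector u of any length; fmincon passes the whole
% vector to the handle, so the size is fixed only by the starting point.
objective = @(u) sum(u) + sum(u.^2);
```

The problem size is then chosen at call time through the starting vector, e.g. u0 = zeros(n, 1).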
answered Aug 6 at 12:52 by John Hedengren