Date: Tue, 25 Nov 1997 15:20:17 LOCAL
From: or-kakl@wiso.wiso.uni-dortmund.de (Katja Klingebiel)
Subject: QP testproblems?
Does anybody know of QP test problems which are not cited in the
NLP-FAQ?
(The Hessian of the objective function should be positive definite; linear constraints.)
Many thanks in advance.
Katja.
Date: Wed, 26 Nov 1997 14:57:35 -0700
From: "Hans D. Mittelmann" <mittelmann@asu.edu>
Subject: QP testproblems?
Hi,
in which format would you input the QP problems? I have recently done some benchmarks with the CUTE problems, not only in SIF format but also with randomly generated QP's in AMPL. For all this see the QP section of http://plato.la.asu.edu/bench.html
Random problems have, of course, random conditioning. In case you want to generate QP's with known properties, I could send you an f77 program. Also: did you look at Calamai's genqp code mentioned in the NLP-FAQ? (It is in f77 as well.)
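The idea behind such a generator can be sketched in a few lines. This is a hypothetical Python illustration, not the f77 program mentioned above: fix the eigenvalues, then rotate, H = R diag(d) R'. Shown here for the 2x2 case with a Givens rotation; `make_hessian_2x2` is an illustrative name.

```python
import math
import random

def make_hessian_2x2(eig_small, eig_large, theta=None):
    """Build H = R diag(d1, d2) R' with prescribed eigenvalues, so the
    conditioning eig_large/eig_small of the QP Hessian is known exactly."""
    if theta is None:
        theta = random.uniform(0.0, math.pi)  # random rotation angle
    c, s = math.cos(theta), math.sin(theta)
    d1, d2 = eig_small, eig_large
    h11 = c * c * d1 + s * s * d2
    h22 = s * s * d1 + c * c * d2
    h12 = c * s * (d1 - d2)                   # off-diagonal entry
    return [[h11, h12], [h12, h22]]
```

By construction the trace is d1 + d2 and the determinant is d1*d2, so the spectrum, and hence the condition number, is exactly the one prescribed.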
Date: Wed, 01 Oct 1997 11:31:32 -0400
From: Guangliang He <guangliang.he@gs.com>
Subject: Critical Line Algorithm for QP
In the book `Mean-Variance Analysis in Portfolio Choice and
Capital Markets' by Harry M. Markowitz, he presented the
`Critical line algorithm' for solving the QP problem. Has anyone
used this algorithm? Are there any major differences between it and
the active-set algorithm (they look similar)? Is this algorithm
suitable for large portfolio optimization?
Thanks for your help.
Date: 2 Oct 1997 20:57:03 GMT
From: roger@sina.hpc.uh.edu (Roger Z. Rios)
Subject: [Q] Lower bounds for indefinite QP
Dear fellow optimizers:
Is there an easy-to-compute convex underestimator of an indefinite linearly-constrained quadratic program (minimization form)? That is, can a lower bound for an indefinite QP be computed in polynomial time?
Any pointers on references, approaches, code availability, etc. will be highly appreciated.
Have a look at the paper
S. Boyd, L. Vandenberghe "Semidefinite programming relaxations of non-convex problems in control and combinatorial optimization"
that can be downloaded from Stephen Boyd's home page at Stanford University. The URL is
http://www-isl.stanford.edu/~boyd/relaxations.html
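As a much cruder, but trivially computable, alternative to the SDP relaxation in that paper, an eigenvalue shift already gives a valid polynomial-time lower bound. Here is a hypothetical 2x2 sketch over the unit simplex; `lower_bound_simplex_2x2` is an illustrative name.

```python
def lower_bound_simplex_2x2(Q, c):
    """Crude lower bound for min x'Qx + c'x over {x >= 0, x1 + x2 = 1}.
    On the simplex ||x||_2 <= 1, so x'Qx >= min(lambda_min(Q), 0),
    and c'x >= min(c); the sum of the two bounds is a valid underestimate."""
    (q11, q12), (_, q22) = Q
    tr = q11 + q22
    det = q11 * q22 - q12 * q12
    lam_min = tr / 2.0 - ((tr / 2.0) ** 2 - det) ** 0.5  # smaller eigenvalue
    return min(lam_min, 0.0) + min(c)
```

For Q = [[1, 2], [2, 1]] (eigenvalues -1 and 3) and c = 0 this returns -1, while the true minimum over the simplex is 1 at either vertex: a valid, if loose, bound.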
Robin Becker
Date: Mon, 06 Oct 1997 10:41:57 -0400
From: "J.T. Linderoth" <jeff@akula.isye.gatech.edu>
Subject: [Q] Lower bounds for indefinite QP
Hello Roger,
Another reference for this problem is in
Once upon a time, I wrote some code to do something of the sort mentioned in this paper. I don't know if it would be of use to you, but would gladly make it available. I would also be interested in hearing of the application area from which this problem arises.
Hope this helps. Let me know if you would like the code, or would like to discuss the problem further.
Cheers,
-Jeff
Date: 8 Aug 1997 23:16:20 GMT
From: roger@sina.hpc.uh.edu (Roger Z. Rios)
Subject: [Q] Indefinite QP
Dear fellow optimizers:
Is there any specialized algorithm/procedure to find the global optimum of an indefinite quadratic program (linear constraints)?
Any pointers on references, approaches, code availability, etc. will be highly appreciated.
rz
Date: 30 Jun 1997 11:51:17 -0400
From: jeckstei@rutcor.rutgers.edu (Jonathan Eckstein)
Subject: Advice needed for Augmented Lagrangian Method
>Now I need practical suggestion for starting point, the initial guess
>for c. Since the problem is large and a single iteration costs many
>hours. I have to do anything to reduce the number of iterations.
Another point Peter didn't explicitly mention -- you do not have to solve the subproblems to optimality, only to an accuracy that increases sufficiently fast from iteration to iteration. This should cut down the time invested in early iterations without blowing up the overall iteration count.
For example, you can halt optimizing the subproblem when
norm(gradient(augmented lagrangian)) <= threshold(k)
and then set
threshold(k+1) = ratio*threshold(k)
where threshold(0) > 0 is arbitrary and 0 <= ratio < 1.
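The schedule above can be sketched in a few lines of Python. This is a hypothetical toy instance, min x^2 subject to x - 1 = 0, whose solution is x* = 1 with multiplier lambda* = -2; all names are illustrative.

```python
def solve_inexact_al(threshold0=1.0, ratio=0.5, c=10.0, outer_iters=40):
    grad_f = lambda y: 2.0 * y        # gradient of f(x) = x^2
    h = lambda y: y - 1.0             # equality constraint h(x) = 0
    x, lam, threshold = 0.0, 0.0, threshold0
    for _ in range(outer_iters):
        # gradient of the augmented Lagrangian at the current multiplier
        grad_L = lambda y: grad_f(y) + lam + c * h(y)
        # inner loop: stop as soon as the gradient is below threshold(k)
        while abs(grad_L(x)) > threshold:
            x -= grad_L(x) / (2.0 + c)   # Newton step (exact for this quadratic)
        lam += c * h(x)                  # multiplier update
        threshold *= ratio               # threshold(k+1) = ratio*threshold(k)
    return x, lam
```

The inner loop works only as hard as threshold(k) demands, yet the iterates still converge to x* = 1, lambda* = -2.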
For more info consult the classic articles by Rockafellar or the texts by Bertsekas.
Assistant Professor Jonathan Eckstein
Date: Fri, 27 Jun 1997 13:05:05 -0700
From: Alexander Schwarm <alexander@tamu.edu>
Subject: Sensitivity in Convex Quadratic Programming
Does anyone know of any resources which discuss sensitivity issues
in convex quadratic programming? I found a paper (Boot, 1963) which
discusses some of the issues, and I have also looked at Fiacco's book
on sensitivity. The problem I wish to address is of the type:
min x'Hx + 2g'x
s.t. Ax <= b
where H is symmetric, positive definite and nxn, g is nx1, A is mxn, b is mx1 and x is nx1. A also has full row rank. For deviations in the matrix A, how does the solution of the problem vary depending on the characteristics of these data?
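For intuition, a hypothetical one-dimensional instance already shows how the active-set structure drives the sensitivity to A (here A is just the scalar a; `solve_1d` is an illustrative name):

```python
def solve_1d(g, a, b):
    """Solve min x^2 + 2*g*x  s.t.  a*x <= b  (scalar case, a > 0)."""
    x = -g                  # unconstrained minimizer of x^2 + 2*g*x
    if a * x > b:           # constraint violated => it is active at the solution
        x = b / a           # so dx/da = -b/a**2: sensitivity grows as a -> 0
    return x
```

With g = -2: for b = 3 the constraint is inactive and perturbing a does nothing, while for b = 1 it is active and the solution x = b/a reacts with derivative -b/a^2. Which regime applies, and how strongly the solution reacts, depends on the data, just as in the general problem.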
Thank you,
Alexander Schwarm
alexander@tamu.edu
Date: Fri, 27 Jun 1997 13:21:31 -0700
From: Alexander Schwarm <alexander@tamu.edu>
Subject: Sensitivity in Convex Quadratic Programming
What resources are available for sensitivity analysis of convex quadratic
programming problems? For a convex QP, how do variations in the linear
constraint matrix affect the solution vector, e.g.,
min x'Hx + 2g'x
s.t. Ax<=b
H is symmetric positive definite and nxn, x is nx1, g is nx1, A is mxn and b is mx1. For variations in the matrix A, how does the solution vary depending on the characteristics of H, g, A and b?
There is a paper by Boot (1963) which discusses some of the aspects, but does not give any bounds. Fiacco's book on sensitivity is too general in its discussion of nonlinear programming. Is this specific problem discussed anywhere?
Thank you,
Alexander Schwarm
alexander@tamu.edu
Date: 30 Jun 1997 12:51:48 GMT
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Sensitivity in Convex Quadratic Programming
First of all, there is a paper by J. W. Daniel in Math. Prog. 5 (1973), 41-53.
Then the papers of S. M. Robinson on stability in SINUM 12 (1975), 754-769, and
Math. Prog. Study 19 (1982), 200-221 have relevant results.
If Slater's condition is satisfied (and hence the Mangasarian-Fromovitz condition too), then
these results assert local Lipschitz continuity of the solution.
hope this helps
peter
Date: Fri, 27 Jun 1997 11:41:14 -0400
From: Donghui Wu <wud2@rpi.edu>
Subject: Advice needed for Augmented Lagrangian Method
I am working on a Quadratic Problem with up to 50,000 variables but with
very very few equality constraints.
min f(x) + lambda(k)*h(x) + 0.5*c(k)*||h(x)||**2
It will converge: either lambda(k) -> lambda(*) as k -> inf, or c(k) -> inf, where k counts the outer iterations.
I have found a good solver for this sub-problem.
Now I need practical suggestions for the starting point and the initial guess for c. Since the problem is large and a single iteration costs many hours, I have to do everything I can to reduce the number of iterations.
I use the usual rule to update lambda(k),
lambda(k+1) = lambda(k) + c(k)*h(x)
but how do I get a guess for the initial lambda(0)?
The common practice for updating c is
c(k+1) = c(k)*ratio,
but how big should the ratio be? How big should c(0), the initial c, be? If c(0) or the ratio is too big, I seem to end up at a stationary point of h(x), not the optimal solution to f(x).
Any suggestions? Thanks in advance.
Date: 30 Jun 1997 12:03:43 GMT
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Advice needed for Augmented Lagrangian Method
I hope your f is convex. Then you are a lucky man, having no trouble at
all, theoretically at least.
The augmented Lagrangian method is globally convergent in this case for any positive c; therefore no updating rule is needed for it. Simply take lambda_0 as the least-squares solution of grad_x L(x, lambda) = 0. This is simple if your number of equations is really small, as you write; otherwise, take it to be zero. Then take c = 10 (or 100, say), but check that c*norm(grad h)**2 is not much larger than norm(hess(f)). A few outer iterations will suffice, I would guess 10 or so. The trouble, if any, will come from the inner minimization. If the problem is ill-conditioned, a truncated Newton iteration based on the Lanczos method may be appropriate (but we have already had trouble with it when n is this large). If you have a good preconditioner, then you might do better to use CG with preconditioning.
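The least-squares choice of lambda_0 is cheap to write down when there are few constraints. Here is a hypothetical single-constraint sketch; `lambda0_least_squares` is an illustrative name.

```python
def lambda0_least_squares(grad_f, grad_h):
    """Least-squares solution of grad_x L(x0, lam) = grad f(x0) + lam*grad h(x0) = 0,
    i.e. the scalar lam minimizing ||grad_f + lam*grad_h||^2."""
    num = sum(gf * gh for gf, gh in zip(grad_f, grad_h))
    den = sum(gh * gh for gh in grad_h)
    return -num / den

# e.g. grad f(x0) = (2, 4), grad h(x0) = (1, 2)  ->  lam = -2
```

With m > 1 equality constraints this becomes a small m x m normal-equations solve, still negligible next to one inner minimization.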
good luck
peter
Date: Fri, 16 May 1997 15:51:12 +1000
From: Karen Lau <karen@maths.unsw.edu.au>
Subject: QP sensitivity
Hi all,
For the strictly convex quadratic programming problem
min x' Gx + g' x   s.t.   Ax <= b,  Dx = d
if (b,d) are perturbed arbitrarily in R^p, how does the optimal value function change as a function of (b,d)?
I know that it is a piecewise quadratic function of (b,d) if the perturbation is only along a single direction, and that it is locally quadratic if the active constraints have independent gradients and strict complementarity is satisfied.
Is anyone aware of a proof of the stronger result that the optimal value function is indeed piecewise quadratic for an arbitrary perturbation, without any strong constraint qualifications?
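For what it is worth, the one-dimensional picture of the piecewise quadratic behaviour looks like this (a hypothetical toy instance, min x^2 s.t. x >= b):

```python
def v(b):
    """Optimal value of  min x^2  s.t.  x >= b:
    v(b) = 0 for b <= 0 (constraint inactive), v(b) = b^2 for b > 0 (active),
    i.e. two quadratic pieces joined at b = 0."""
    x_opt = max(b, 0.0)   # project the unconstrained minimizer 0 onto {x >= b}
    return x_opt ** 2
```

The pieces correspond to the possible active sets, which is exactly what makes the general (b,d) case piecewise rather than globally quadratic.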
Thanks very much.
Karen Lau email: karen@maths.unsw.edu.au
Date: 25 Apr 1997 14:21:25 GMT
From: PFEIFFER@WEEEV3.UNI-WUPPERTAL.DE (Ralph Pfeiffer)
Subject: A question on SQP algorithms
My question on SQP algorithms:
- Is convexity of the objective function and of the equality and inequality constraints required to solve an optimization problem by the SQP method?
Thanks for your help.
Date: Thu, 24 Apr 1997 09:54:27 +1000
From: Karen Lau <karen@maths.unsw.edu.au>
Subject: sensitivity analysis
Hi,
I am looking at the problem of minimizing 1) a quadratic, 2) a once-differentiable function over a set of linear constraints. I am interested in the continuity and differentiability of the optimal objective value as the RHS of the constraints is perturbed.
Would anyone be able to point me to some relevant references? Any kind of survey or summary of the main results in sensitivity analysis would be most useful (especially one not assuming a strong background in analysis).
Thank you in advance.
Date: Fri, 25 Apr 1997 09:10:36 -0700
From: Hans Mittelmann <mittelmann@asu.edu>
Subject: A question on SQP algorithms
Date: Tue, 14 Jan 1997 23:45:5 GMT
From: vavasis@cosmo.mcs.anl.gov (Stephen A. Vavasis)
Subject: Indefinite quadratic minimization on a simplex.
The problem of minimizing an indefinite quadratic function over a
simplex is indeed NP-hard.
The main theorem needed to prove this is from:
T. Motzkin and G. Straus, Maxima for graphs and a new proof of a theorem of Turan, Canad. J. Math 17(1965) 533-540.
Of course, NP-hardness wasn't invented in 1965; the extension of the theorem in that paper to a proof of NP-completeness is due to
P. Pardalos, Y. Ye and C.-G. Han, Algorithms for the solution of quadratic knapsack problems, preprint, 1989.
The proof also appears in my book:
S. Vavasis, Nonlinear Optimization: Complexity Issues, Oxford University Press, 1991.
-- Steve Vavasis
Date: 15 Jan 1997 10:09:01 GMT
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Indefinite quadratic minimization on a simplex.
If A is not positive semidefinite, then you must check all constraining boundary manifolds of the simplex, from dimension n-1 down to 0 (the vertices), for the existence of feasible local minimizers and their function values, and hence the complexity is exponential.
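A hypothetical n = 2 illustration of that face-by-face check (minimize f(x) = 2*x1*x2, an indefinite form, over the unit simplex; `min_over_2simplex` is an illustrative name):

```python
def min_over_2simplex():
    """Check every face of {x >= 0, x1 + x2 = 1} for feasible stationary points.
    Along the edge x = (t, 1 - t):  f = 2*t*(1 - t)."""
    f = lambda t: 2.0 * t * (1.0 - t)
    candidates = [0.0, 1.0]        # the two vertices (0-dimensional faces)
    t_stat = 0.5                   # df/dt = 2 - 4*t = 0 on the open edge
    if 0.0 < t_stat < 1.0:
        candidates.append(t_stat)  # interior stationary point of the 1-dim face
    return min(f(t) for t in candidates)
```

Here the interior stationary point is a maximum (f = 1/2) and the global minimum 0 sits at the vertices; in n dimensions the number of faces to examine grows exponentially, which is the point above.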
peter
Date: Thu, 16 Jan 1997 18:10:01 +1300
From: Chris Stephens <math5cps@cantua.canterbury.ac.nz>
Subject: Indefinite quadratic minimization on a simplex.
I wrote:
I would expect the simplex constrained case to be NP-HARD when A is indefinite, but I have not seen a proof of this special case.
Thanks again,
Chris Stephens.
Date: Wed, 15 Jan 1997 18:35:7 GMT
From: vavasis@cosmo.mcs.anl.gov (Stephen A. Vavasis)
Subject: Indefinite quadratic minimization on a simplex.
P. Pardalos and S. Vavasis, "Quadratic programming with one negative eigenvalue is NP-hard", J. Global Optimization 1 (1991) 15-22.
The result in that paper is that minimizing a quadratic function that has one negative eigenvalue subject to linear inequality constraints is an NP-hard problem. In particular, the objective function has the form
min x_1 - x_2^2
The linear constraints used in that proof do not have a simple form (i.e., not box or simplex constraints).
-- Steve Vavasis (vavasis@mcs.anl.gov)
Date: Wed, 15 Jan 1997 12:07:52 GMT
From: hwolkowi@orion.math.uwaterloo.ca (Henry Wolkowicz)
Subject: Indefinite quadratic minimization on a simplex.
I think that the case when A is positive definite is NOT NP-hard for general linear constraints; in that case you have a nice convex quadratic over a polyhedral set, which can be solved in polynomial time -- see e.g. the book by Nesterov-Nemirovskii (though I am sure there are better references).
But even if there is only one negative eigenvalue, the problem is NP-hard. I am not sure of the exact reference, but the following may help: