Brian's Digest: Quadratic Programming 1997




SCI.OP-RESEARCH Digest Mon, 1 Dec 97 Volume 4: Issue 48

Date: Tue, 25 Nov 1997 15:20:17 LOCAL
From: or-kakl@wiso.wiso.uni-dortmund.de (Katja Klingebiel)
Subject: QP testproblems?
Does anybody know of QP test problems which are not cited in the NLP-FAQ?

(Hessian of objective function should be positive definite, linear constraints)

Many thanks in advance.

Katja.

Date: Wed, 26 Nov 1997 14:57:35 -0700
From: "Hans D. Mittelmann" <mittelmann@asu.edu>
Subject: QP testproblems?
Hi,

In which format would you input the QP problems? I have recently done some benchmarks with the CUTE problems, not only in SIF format but also with randomly generated QP's in AMPL. For all this see the QP section of http://plato.la.asu.edu/bench.html

Random problems have, of course, random conditioning. In case you want to generate QP's with known properties I could send you an f77 program. Also: did you look at Calamai's genqp code mentioned in the NLP-FAQ? (It is also in f77.)
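The recipe behind such generators can be sketched in a few lines of numpy (an illustrative sketch, not the f77 program mentioned above; the name `make_qp` is chosen here):

```python
import numpy as np

def make_qp(n, cond, seed=0):
    # Hessian H = Q diag(d) Q' with a random orthogonal Q and eigenvalues d
    # log-spaced between 1 and cond, so cond(H) is exactly the requested value.
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    d = np.logspace(0.0, np.log10(cond), n)
    H = Q @ np.diag(d) @ Q.T
    g = rng.standard_normal(n)
    return H, g                   # data for  min 0.5*x'Hx + g'x

H, g = make_qp(50, 1e4)
print(np.linalg.cond(H))          # about 1e4 by construction
```

Because the eigenvalues are prescribed directly, the conditioning of H is known exactly rather than random.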

Hans D. Mittelmann              http://plato.la.asu.edu/
Arizona State University        Phone: (602) 965-6595
Department of Mathematics       Fax:   (602) 965-0461
Tempe, AZ 85287-1804            email: mittelmann@asu.edu

SCI.OP-RESEARCH Digest Mon, 6 Oct 97 Volume 4: Issue 40

Date: Wed, 01 Oct 1997 11:31:32 -0400
From: Guangliang He <guangliang.he@gs.com>
Subject: Critical Line Algorithm for QP
In the book `Mean-Variance Analysis in Portfolio Choice and Capital Markets', Harry M. Markowitz presents the `critical line algorithm' for solving the QP problem. Has anyone used this algorithm? Is there any major difference between it and active-set algorithms (they look similar)? Is this algorithm suitable for large portfolio optimization?

Thanks for your help.

Guang-Liang He         | Goldman, Sachs & Co.
guangliang.he@gs.com   | Firmwide Risk
(212) 357-1944         | 85 Broad St., 16th Floor
                       | New York, NY 10004


Date: 2 Oct 1997 20:57:03 GMT
From: roger@sina.hpc.uh.edu (Roger Z. Rios)
Subject: [Q] Lower bounds for indefinite QP
Dear fellow optimizers:

Is there an easy-to-compute convex underestimator of an indefinite linearly-constrained quadratic program (minimization form)? That is, can a lower bound for an indefinite QP be computed in polynomial time?

Any pointers on references, approaches, code availability, etc. will be highly appreciated.

+----------------------------------------------------------+
| Roger Z. Rios                  The Dead Optimizer Society |
| roger@hpc.uh.edu            http://www.hpc.uh.edu/~roger/ |
+----------------------------------------------------------+

Date: Fri, 03 Oct 1997 09:01:11 +0200
From: Didier Henrion <henrion@laas.fr>
Subject: [Q] Lower bounds for indefinite QP
Hello,

have a look at the paper

S. Boyd, L. Vandenberghe "Semidefinite programming relaxations of non-convex problems in control and combinatorial optimization"

that can be downloaded from Stephen Boyd's home page at Stanford University. The URL is

http://www-isl.stanford.edu/~boyd/relaxations.html

---------------------------------------------------------------
Didier Henrion                Bureau E50 / Groupe CSC / LAAS-CNRS
7 Avenue du Colonel Roche     31077 Toulouse Cedex 4, France
Phone: (33 5) 61 33 69 49     Fax: (33 5) 61 33 69 69
E-mail: henrion@laas.fr       WWW: http://www.laas.fr/~henrion
---------------------------------------------------------------

Date: Fri, 3 Oct 1997 01:11:22 +0100
From: Robin Becker <robin@jessikat.demon.co.uk>
Subject: [Q] Lower bounds for indefinite QP
There are some papers at the interior point archive concerning approximating indefinite QP.

Robin Becker

Date: Mon, 06 Oct 1997 10:41:57 -0400
From: "J.T. Linderoth" <jeff@akula.isye.gatech.edu>
Subject: [Q] Lower bounds for indefinite QP
Hello Roger,

Another reference for this problem is in

@article{alkh:joint,
  AUTHOR  = "Al-Khayyal, F. A. and J. E. Falk",
  TITLE   = "Jointly constrained biconvex programming",
  JOURNAL = "Mathematics of Operations Research",
  YEAR    = 1983,
  VOLUME  = 8,
  PAGES   = "273-286"
}

The method here is somewhat nice in that you need only solve linear programs to get an underestimate.
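The LP-underestimation idea can be illustrated on a single bilinear term via its convex envelope (the McCormick inequalities); a minimal sketch assuming scipy is available, not the code mentioned in this post:

```python
from scipy.optimize import linprog

# Lower-bound  min x*y  s.t.  x + y >= 1,  0 <= x, y <= 1  by replacing the
# bilinear term x*y with a variable w constrained by its convex envelope
# over the box: here w >= 0 and w >= x + y - 1.
c = [0.0, 0.0, 1.0]                  # variables (x, y, w); minimize w
A_ub = [[0.0, 0.0, -1.0],            # -w <= 0          (w >= 0)
        [1.0, 1.0, -1.0],            # x + y - w <= 1   (w >= x + y - 1)
        [-1.0, -1.0, 0.0]]           # -(x + y) <= -1   (original constraint)
b_ub = [0.0, 1.0, -1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1), (0, 1), (None, None)])
print(res.fun)   # LP bound; for this instance it matches the true minimum, 0
```

Here the bound happens to be tight; in general the LP value only underestimates the true minimum, and branching on the box shrinks the gap.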

Once upon a time, I wrote some code to do something of the sort mentioned in this paper. I don't know if it would be of use to you, but would gladly make it available. I would also be interested in hearing of the application area from which this problem arises.

Hope this helps. Let me know if you would like the code, or would like to discuss the problem further.

Cheers,
-Jeff

-----------------------------------------------------------------------------
Jeffrey T. Linderoth                  jeff@akula.isye.gatech.edu
Logistics Engineering Center          http://akula.isye.gatech.edu/~jeff
Industrial and Systems Engineering    Phone: (404)-894-2366
Georgia Tech                          Fax:   (404)-894-0390
------------------------------

SCI.OP-RESEARCH Digest Mon, 11 Aug 97 Volume 4: Issue 32

Date: 8 Aug 1997 23:16:20 GMT
From: roger@sina.hpc.uh.edu (Roger Z. Rios)
Subject: [Q] Indefinite QP
Dear fellow optimizers:

Is there any specialized algorithm/procedure to find the global optimum of an indefinite quadratic program (linear constraints)?

Any pointers on references, approaches, code availability, etc. will be highly appreciated.

rz

+----------------------------------------------------------+
| Roger Z. Rios                  The Dead Optimizer Society |
| roger@hpc.uh.edu            http://www.hpc.uh.edu/~roger/ |
+----------------------------------------------------------+

SCI.OP-RESEARCH Digest Mon, 7 July 97 Volume 4: Issue 27

Date: 30 Jun 1997 11:51:17 -0400
From: jeckstei@rutcor.rutgers.edu (Jonathan Eckstein)
Subject: Advice needed for Augmented Lagrangian Method
> Now I need practical suggestion for starting point, the initial guess
> for c. Since the problem is large and a single iteration costs many
> hours. I have to do anything to reduce the number of iterations.

Another point Peter didn't explicitly mention -- you do not have to solve the subproblems to optimality, only to an accuracy that increases sufficiently fast. This should cut down the time invested in early iterations without blowing up the overall iteration count.

For example, you can halt optimizing the subproblem when

norm(gradient(augmented lagrangian)) <= threshold(k)

and then set

threshold(k+1) = ratio*threshold(k)

where threshold(0) >= 0 is arbitrary and 0 <= ratio < 1.
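For a toy equality-constrained problem the schedule looks like this (a minimal sketch with a gradient-descent inner loop; the step size and constants are arbitrary choices):

```python
import numpy as np

# Inexact augmented Lagrangian for  min 0.5*||x||^2  s.t.  x1 + x2 = 1,
# with each subproblem halted at a geometrically shrinking gradient threshold.
a = np.array([1.0, 1.0])
h = lambda x: a @ x - 1.0                     # constraint residual

def grad_L(x, lam, c):
    # gradient of  0.5*x'x + lam*h(x) + 0.5*c*h(x)**2
    return x + (lam + c * h(x)) * a

x, lam, c = np.zeros(2), 0.0, 10.0
threshold, ratio = 1.0, 0.5
for k in range(25):
    # inner solve: plain gradient descent, stopped early at threshold(k)
    while np.linalg.norm(grad_L(x, lam, c)) > threshold:
        x = x - 0.05 * grad_L(x, lam, c)
    lam += c * h(x)              # multiplier update
    threshold *= ratio           # threshold(k+1) = ratio * threshold(k)

print(x, lam)   # approaches the exact solution x = (0.5, 0.5), lam = -0.5
```

Early outer iterations do almost no inner work, yet the multiplier sequence still converges because the tolerance is driven to zero.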

For more info consult the classic articles by Rockafellar or the texts by Bertsekas.

Assistant Professor Jonathan Eckstein

TEACHING ADDRESS                           RESEARCH ADDRESS
+----------------------------------------+-----------------------------------+
| MSIS Department, Faculty of Management | RUTCOR                            |
| 255 J.H. Levin Building                | Rutgers University                |
| Rutgers University                     | P.O. Box 5062                     |
| P.O. Box 5062                          | New Brunswick NJ 08903-5062 USA   |
| New Brunswick, NJ 08903-5062 USA       | (732) 445-3596                    |
| (732) 445-0510                         | FAX (732) 445-5472                |
| FAX (732) 445-6329                     |                                   |
+----------------------------------------+-----------------------------------+
jeckstei@rutcor.rutgers.edu    http://rutcor.rutgers.edu:80/~jeckstei/

SCI.OP-RESEARCH Digest Mon, 30 Jun 97 Volume 4: Issue 26

Date: Fri, 27 Jun 1997 13:05:05 -0700
From: Alexander Schwarm <alexander@tamu.edu>
Subject: Sensitivity in Convex Quadratic Programming
Does anyone know of any resources which discuss sensitivity issues in convex quadratic programming? I found a paper (Boot, 1963) which discusses some of the issues, and I have also looked at Fiacco's book on sensitivity. The problem I wish to address is of the type:

min x'Hx + 2g'x
s.t. Ax <= b

where H is symmetric, positive definite and nxn, g is nx1, A is mxn, b is mx1 and x is nx1. A also has full row rank. For deviations in the matrix A, how does the solution of the problem vary, depending on the characteristics of these data?

Thank you,
Alexander Schwarm
alexander@tamu.edu

Date: Fri, 27 Jun 1997 13:21:31 -0700
From: Alexander Schwarm <alexander@tamu.edu>
Subject: Sensitivity in Convex Quadratic Programming
What resources are available for sensitivity analysis of convex quadratic programming problems? For a convex QP, how do variations in the linear constraint matrix affect the solution vector, e.g.,

min x'Hx + 2g'x
s.t. Ax<=b

H is symmetric positive definite and nxn, x is nx1, g is nx1, A is mxn and b is mx1. For variations in the matrix A, how does the solution vary, depending on the characteristics of H, g, A and b?

There is a paper by Boot (1963) which discusses some of the aspects, but does not give any bounds. Fiacco's book on sensitivity is too general in its discussion of nonlinear programming. Is this specific problem discussed anywhere?

Thank you,
Alexander Schwarm
alexander@tamu.edu

Date: 30 Jun 1997 12:51:48 GMT
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Sensitivity in Convex Quadratic Programming
First of all, there is a paper by J.W. Daniel in Math. Prog. 5 (1973), 41-53. Then the papers of S. Robinson on stability, SINUM 12 (1975), 754-769, and Math. Prog. Study 19 (1982), 200-221, have relevant results. If Slater's condition is satisfied (i.e., Mangasarian-Fromovitz too), then these results assert local Lipschitz continuity of the solution. Hope this helps.
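A small numerical illustration of that Lipschitz behaviour, with scipy's general NLP solver standing in for a proper QP code (the problem data are made up for the example):

```python
import numpy as np
from scipy.optimize import minimize

# min x'Hx + 2g'x  s.t.  Ax <= b, with H = I: the unconstrained minimum is
# (1, 1), and the constraint x1 + x2 <= 1 is active at the solution (0.5, 0.5).
H = np.eye(2)
g = np.array([-1.0, -1.0])

def solve_qp(A, b):
    obj = lambda x: x @ H @ x + 2 * g @ x
    cons = {'type': 'ineq', 'fun': lambda x: b - A @ x}   # b - Ax >= 0
    return minimize(obj, np.zeros(2), constraints=cons).x

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x0 = solve_qp(A, b)
for eps in (1e-1, 1e-2, 1e-3):
    x_eps = solve_qp(A + np.array([[eps, 0.0]]), b)
    # the change in the solution stays proportional to the change in A,
    # consistent with local Lipschitz continuity of the solution map
    print(eps, np.linalg.norm(x_eps - x0) / eps)
```

The printed ratios stay bounded as the perturbation shrinks, which is exactly what the Lipschitz results predict near a solution with strict complementarity.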

peter


SCI.OP-RESEARCH Digest Mon, 30 Jun 97 Volume 4: Issue 26

Date: Fri, 27 Jun 1997 11:41:14 -0400
From: Donghui Wu <wud2@rpi.edu>
Subject: Advice needed for Augmented Lagrangian Method
I am working on a quadratic problem with up to 50,000 variables but with very few equality constraints:

min f(x)   s.t.   h(x) = 0

It was suggested that I use the augmented Lagrangian method:

min f(x) + lambda(k)*h(x) + 0.5*c(k)*||h(x)||**2

It will converge: either lambda(k) -> lambda(*) as k -> inf, or c(k) -> inf (k is the iteration counter).

I have found a good solver for this sub-problem.

Now I need practical suggestions for the starting point and the initial guess for c. Since the problem is large, a single iteration costs many hours, so I have to do anything I can to reduce the number of iterations.

I use the following rule to update lambda(k), which is the usual one:

lambda(k+1) = lambda(k) + c(k)*h(x)

How do I get a guess for the initial lambda(0)?

The common practice for updating c is

c(k+1) = c(k)*ratio,

but how big should the ratio be? How big should c(0), the initial c, be? If c(0) or the ratio is too big, I seem to get to a stationary point of h(x), not the optimal solution of f(x).

Any suggestions? Thanks in advance.

Donghui Wu
Department of Mathematical Science
Rensselaer Polytechnic Institute
Troy, NY 12180-3590
E-MAIL: wud2@rpi.edu    FAX: (518) 276-4284

Date: 30 Jun 1997 12:03:43 GMT
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Advice needed for Augmented Lagrangian Method
I hope your f is convex. Then you are a lucky man, having no trouble at all, theoretically at least.

The augmented Lagrangian method is globally convergent in this case for any positive c, therefore no updating rule has to be used for it. Simply take lambda_0 as the least-squares solution of grad_x L(x, lambda) = 0 (you mean lambda, by the way). This is simple if your number of equations is really small, as you write. Otherwise, take it to be zero. Then take c = 10 (or 100, say), but be aware that c*norm(grad h)**2 should not be much larger than norm(hess(f)). Some outer iterations will suffice, I would guess 10 or so. The trouble, if any, will come from the inner minimization. If the problem is ill-conditioned, a truncated Newton iteration based on the Lanczos method may be appropriate (but we have already had trouble with it if n is so large). If you have a good preconditioner, then you might do better to use CG with preconditioning.
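The least-squares choice of lambda_0 is cheap when the number of constraints is small; a sketch with made-up data (the names and sizes here are illustrative):

```python
import numpy as np

# Toy data in the shape of the problem above: f(x) = 0.5*||x||^2 - q'x with a
# handful of linear equality constraints h(x) = A x - b, so Jh(x) = A.
n, m = 1000, 3                      # many variables, very few constraints
rng = np.random.default_rng(1)
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))

x0 = np.zeros(n)
grad_f = x0 - q                     # gradient of f at the starting point
# lam0 = argmin over lam of || grad_f + A' lam ||: an m-column least-squares
# problem, cheap precisely because m is small
lam0, *_ = np.linalg.lstsq(A.T, -grad_f, rcond=None)
print(lam0.shape)                   # (3,)
```

The cost is one skinny least-squares solve in m unknowns, negligible next to a single inner minimization in n = 50,000 variables.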

good luck

peter


SCI.OP-RESEARCH Digest Mon, 26 May 97 Volume 4: Issue 21

Date: Fri, 16 May 1997 15:51:12 +1000
From: Karen Lau <karen@maths.unsw.edu.au>
Subject: QP sensitivity
Hi all,

For the strictly convex quadratic programming problem

min x'Gx + g'x
s.t. Ax <= b
     Dx  = d

If (b,d) are perturbed arbitrarily in R^p, how does the optimal value function change as a function of (b,d)?

I know that it is a piecewise quadratic function of (b,d) if the perturbation is only along a single direction, and that it is locally quadratic if the active constraints have independent gradients and strict complementarity is satisfied.

Is anyone aware of a proof of a stronger result showing that the optimal value function is indeed piecewise quadratic for any perturbation in the plane, without any strong constraint qualifications?
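The single-direction result is easy to see on the smallest possible example (a one-variable sketch, not the general proof asked for):

```python
import numpy as np

# min x^2  s.t.  x >= t: the optimal x is the projection of 0 onto [t, inf),
# so v(t) = 0 for t <= 0 and v(t) = t^2 for t > 0 -- quadratic on each piece,
# with the pieces meeting where the active set changes (t = 0).
def v(t):
    x = max(t, 0.0)
    return x * x

print([v(t) for t in np.linspace(-1.0, 1.0, 9)])
```

The breakpoints of the value function are exactly the perturbation values at which the optimal active set changes, which is why the hard part of the general question is controlling how the pieces fit together without constraint qualifications.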

Thanks very much.

Karen Lau email: karen@maths.unsw.edu.au


SCI.OP-RESEARCH Digest Mon, 12 May 97 Volume 4: Issue 19

Date: 25 Apr 1997 14:21:25 GMT
From: PFEIFFER@WEEEV3.UNI-WUPPERTAL.DE (Ralph Pfeiffer)
Subject: A question on SQP algorithms
My question on SQP algorithms:

- Is convexity of the objective function and of the equality and inequality constraints required to solve an optimization problem with the SQP method?

Thanks for your help.


SCI.OP-RESEARCH Digest Mon, 28 April 97 Volume 4: Issue 17

Date: Thu, 24 Apr 1997 09:54:27 +1000
From: Karen Lau <karen@maths.unsw.edu.au>
Subject: sensitivity analysis

Hi,

I am looking at the problem of minimizing (1) a quadratic function, and (2) a once-differentiable function, over a set of linear constraints. I am interested in the continuity and differentiability of the optimal value as the RHS of the constraints is perturbed.

Would anyone be able to point me to some relevant references? Any kind of survey or summary of main results in sensitivity analysis would be most useful (especially one not assuming a strong background in analysis).

Thank you in advance.

Date: Fri, 25 Apr 1997 09:10:36 -0700
From: Hans Mittelmann <mittelmann@asu.edu>
Subject: A question on SQP algorithms

To: Ralph Pfeiffer <PFEIFFER@WEEEV3.UNI-WUPPERTAL.DE>

Ralph Pfeiffer wrote:
> My question on SQP Algoritms:
>
> - Is convexity of the objective function and
>   equality and inequality constraints required to
>   solve an optimization problem applying the SQP method?
>
> Thanks for your help.

No, but under these conditions there is a unique local/global minimum, so an SQP method would find the global minimum.

Hans D. Mittelmann              http://plato.la.asu.edu/
Arizona State University        Phone: (602) 965-6595
Department of Mathematics       Fax:   (602) 965-0461
Tempe, AZ 85287-1804            email: mittelmann@asu.edu

SCI.OP-RESEARCH Digest Mon, 20 Jan 97 Volume 4: Issue 3

Date: Tue, 14 Jan 1997 23:45:05 GMT
From: vavasis@cosmo.mcs.anl.gov (Stephen A. Vavasis)
Subject: Indefinite quadratic minimization on a simplex.
The problem of minimizing an indefinite quadratic function over a simplex is indeed NP-hard.

The main theorem needed to prove this is from:

T. Motzkin and E.G. Straus, Maxima for graphs and a new proof of a theorem of Turan, Canad. J. Math. 17 (1965), 533-540.

Of course, NP-hardness wasn't invented in 1965; the extension of the theorem in that paper to a proof of NP-completeness is due to:

P. Pardalos, Y. Ye and C.-G. Han, Algorithms for the solution of quadratic knapsack problems, preprint, 1989.

The proof also appears in my book:

S. Vavasis, Nonlinear Optimization: Complexity Issues, Oxford University Press, 1991.

-- Steve Vavasis
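The Motzkin-Straus identity behind that proof is easy to check numerically on a toy graph:

```python
import numpy as np
from itertools import combinations

# Motzkin-Straus: for a graph with adjacency matrix A and clique number w,
#   max { x'Ax : x >= 0, sum(x) = 1 } = 1 - 1/w,
# attained at x uniform on a maximum clique. Toy check: a triangle plus one
# isolated vertex, so w = 3.
A = np.zeros((4, 4))
for i, j in combinations(range(3), 2):   # edges of the triangle {0, 1, 2}
    A[i, j] = A[j, i] = 1.0

x = np.array([1/3, 1/3, 1/3, 0.0])       # uniform on the maximum clique
val = x @ A @ x
print(val)                               # 2/3, i.e. 1 - 1/3
```

Since computing the clique number w is NP-hard, so is maximizing this quadratic over the simplex, which is the reduction the cited papers formalize.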

------------------------------

Date: 15 Jan 1997 10:09:01 GMT
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: Indefinite quadratic minimization on a simplex.

In article <t20wwtgdhft.fsf@cantua.canterbury.ac.nz>, Chris Stephens <math5cps@cantua.canterbury.ac.nz> writes:
|>
|> Does anyone know if minimization of an indefinite quadratic on a simplex
|> is an NP-Hard problem. That is, what is known of the complexity of,
|>
|>     min  c'x + x'Ax/2
|>      x
|>
|> such that x_i >= 0, for all i=1..n, and
|>           x_1+x_2+...+x_n <= 1.
|>
|> Of course, it is known that this problem is NP-Hard for general linear
|> constraints or if x is constrained to a box, even if A is positive
|> definite. However, if x is constrained to a simplex, there are
|> polynomial time algorithms if A is positive or negative semi-definite
|> (since there are only n+1 vertices). What about the case where A is
|> indefinite?
|>
|> Any references or pointers would be much appreciated.
|>
|> --
|> Chris Stephens.

First of all: If A is positive semidefinite, then the problem can be solved in polynomial time using interior point methods. Consult Nesterov & Nemirovskii: Interior Point Polynomial Algorithms in Convex Programming, SIAM, 1993.

If A is not positive semidefinite, then you must check all constraining boundary manifolds of the simplex, from dimension n-1 down to 0 (the vertices), for the existence of feasible local minimizers resp. function values, and hence the complexity is exponential.

peter

Date: Thu, 16 Jan 1997 18:10:01 +1300
From: Chris Stephens <math5cps@cantua.canterbury.ac.nz>
Subject: Indefinite quadratic minimization on a simplex.

I wrote:

> Of course, it is known that [quadratic programming] is NP-Hard for
> general linear constraints or if x is constrained to a box, even if A
> is positive definite.

That should have been, "even if A is *negative* definite". The point being: if A is negative definite and x is constrained to a box then the problem is NP-hard, but if x is constrained to a simplex (and A is neg. def.), the problem is polynomial-time solvable.

I would expect the simplex constrained case to be NP-HARD when A is indefinite, but I have not seen a proof of this special case.
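The polynomial-time claim for the negative (semi)definite case rests on concavity: the minimum of a concave function over the simplex is attained at one of its n+1 vertices, so checking them all suffices. A direct sketch:

```python
import numpy as np

# With A negative semidefinite the objective c'x + x'Ax/2 is concave, so its
# minimum over the simplex {x >= 0, sum(x) <= 1} is attained at one of the
# n+1 vertices (the origin and the n unit vectors): check them all.
n = 5
rng = np.random.default_rng(0)
c = rng.standard_normal(n)
M = rng.standard_normal((n, n))
A = -(M @ M.T)                        # negative semidefinite by construction

f = lambda x: c @ x + 0.5 * (x @ A @ x)
vertices = [np.zeros(n)] + list(np.eye(n))
best = min(vertices, key=f)
print(f(best))
```

For indefinite A the minimum can lie in the interior of a face, which is why vertex enumeration no longer works and the complexity question above remains the interesting one.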

Thanks again,

Chris Stephens.

Date: Wed, 15 Jan 1997 18:35:07 GMT
From: vavasis@cosmo.mcs.anl.gov (Stephen A. Vavasis)
Subject: Indefinite quadratic minimization on a simplex.

In article <E41tp4.70u@undergrad.math.uwaterloo.ca>, Henry Wolkowicz <hwolkowi@orion.math.uwaterloo.ca> wrote:
> [...]
> But - even the if there is only one negative eigenvalue, the problem is
> NP-hard. I am not sure of the exact reference [...]

The reference is:

P. Pardalos and S. Vavasis, "Quadratic programming with one negative eigenvalue is NP-hard", J. Global Optimization 1 (1991) 15-22.

The result in that paper is that minimizing a quadratic function that has one negative eigenvalue subject to linear inequality constraints is an NP-hard problem. In particular, the objective function has the form

min x_1 - x_2^2

The linear constraints used in that proof do not have a simple form (i.e., not box or simplex constraints).

-- Steve Vavasis (vavasis@mcs.anl.gov)

Date: Wed, 15 Jan 1997 12:07:52 GMT
From: hwolkowi@orion.math.uwaterloo.ca (Henry Wolkowicz)
Subject: Indefinite quadratic minimization on a simplex.
In article ....

I think that the case when A is positive definite is NOT NP-hard for general linear constraints; in that case you have a nice convex quadratic over a polyhedral set. This can be solved in polynomial time; see e.g. the book by Nesterov-Nemirovskii (though I am sure there are better references).

But even if there is only one negative eigenvalue, the problem is NP-hard. I am not sure of the exact reference, but the following may help:

C.-G. Han, P.M. Pardalos and Y. Ye, "Computational aspects of an interior point algorithm for quadratic programming problems with box constraints", in: T.F. Coleman and Y. Li (eds.), Large Scale Numerical Optimization, SIAM, 1990.

C.-G. Han, P.M. Pardalos and Y. Ye, "On the solution of indefinite quadratic problems using an interior-point algorithm", Informatica 3(4), 474-496, 1992.

--
|| Henry Wolkowicz                 | Fax: (519) 725-5441
|| University of Waterloo          | Tel: (519) 888-4567, 1+ext. 5589
|| Dept of Comb and Opt            | email: henry@orion.math.uwaterloo.ca
|| Waterloo, Ont. CANADA N2L 3G1   | URL: http://orion.math.uwaterloo.ca/~hwolkowi
