Brian's Digest: Convex Programming


SCI.OP-RESEARCH Digest Mon, 25 Oct 97 Volume 4: Issue 43

Date: Wed, 22 Oct 1997 15:54:14 +0200
From: Arnold Neumaier
Subject: Determining convexity of a constraint/space

Alexander Schwarm wrote:
> How does one go about showing whether a space/constraint is convex? I
> know how to show a function is convex. I can show that x^2 is convex,
> but x^2 > 1 is not convex. The specific constraint I am looking into
> is:
>
>   a'*X <= b - sqrt(X'*P*X + g'*X + r)
>
> where P >= 0 is in R^(n x n), X, a, g are in R^n and r is in R^1.
> How can I go about finding out whether this space is convex?

If F(x) is convex, then the set F(x) <= 0 is convex. So to check convexity of a constraint F(x) <= G(x), you need to find the Hessian of F(x) - G(x): if it is positive semidefinite everywhere, the constraint is convex; if not, it is very likely that the constraint set is nonconvex; at least for algorithmic purposes, it is.

Arnold Neumaier
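Arnold's Hessian test can be complemented by a cheap numerical sanity check. The sketch below is my own illustration (not part of the original post; all names are made up): sample random point pairs and test midpoint convexity of the function, which is a necessary condition for convexity. A violation proves the constraint set is nonconvex; passing is only evidence, not proof.

```python
import random

def looks_convex(f, dim, trials=1000, lo=-5.0, hi=5.0, tol=1e-9):
    """Random midpoint-convexity test, a *necessary* condition for
    convexity of f.  Returns False as soon as a violation
    f((x+y)/2) > (f(x)+f(y))/2 is found, True otherwise."""
    for _ in range(trials):
        x = [random.uniform(lo, hi) for _ in range(dim)]
        y = [random.uniform(lo, hi) for _ in range(dim)]
        m = [(xi + yi) / 2.0 for xi, yi in zip(x, y)]
        if f(m) > (f(x) + f(y)) / 2.0 + tol:
            return False
    return True

random.seed(0)
# x^2 - 1 is convex, so {x : x^2 - 1 <= 0} is a convex set ...
print(looks_convex(lambda v: v[0] ** 2 - 1.0, 1))   # True
# ... but 1 - x^2 is concave, so {x : x^2 >= 1} fails the test.
print(looks_convex(lambda v: 1.0 - v[0] ** 2, 1))   # False
```

For Alexander's constraint one would test F(x) = a'x - b + sqrt(x'Px + g'x + r) the same way, with concrete values plugged in for a, b, P, g, r.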

Date: Wed, 22 Oct 1997 16:36:57 GMT
From: hwolkowi@orion.math.uwaterloo.ca (Henry Wolkowicz)
Subject: Determining convexity of a constraint/space
Convexity of the constraint set above also holds when the function is only quasiconvex. There are characterizations for quasiconvex functions as well. A good reference is the proceedings of the NATO Conference on Generalized Concavity, held in Vancouver B.C. at U.B.C. in the early 80's; the editors were Schaible, Ziemba, Diewert, Avriel (I think).

--
||Prof. Henry Wolkowicz        |Fax: (519) 725-5441
||University of Waterloo       |Tel: (519) 888-4567, 1+ext. 5589
||Dept of Comb and Opt         |email: henry@orion.math.uwaterloo.ca
||Waterloo, Ont. CANADA N2L 3G1|URL: http://orion.math.uwaterloo.ca/~hwolkowi

SCI.OP-RESEARCH Digest Mon, 6 Oct 97 Volume 4: Issue 40

Date: Tue, 30 Sep 1997 15:04:48 +0200
From: frangio@di.unipi.it (Antonio Frangioni)
Subject: convex programming
Hello.

> min f(x)
>
> subject to Ax = t1
>            a <= x <= b

It really depends on the further properties that f() has, if any. If the function is at least once differentiable, you can look at any of the many NLP (standing for NonLinear Programming) methods. Look at the "NLP FAQ" (posted in this newsgroup); WWW pointers to such material can be found by browsing http://mat.gsia.cmu.edu/

If the function is nondifferentiable, things are different. You must go through a NonDifferentiable Optimization method. There are several of them, some pointers being at the page above. I also have one (not listed there).

Hope this helps,
Antonio

Antonio Frangioni, Research Associate                     | I have gained ...
Dip. di Informatica - Corso Italia 40 - 56125 Pisa, Italy | the colour of the
Ph +039-50-887216   www.di.unipi.it/~frangio              | corn. [Exupery]

SCI.OP-RESEARCH Digest Mon, 29 Sep 97 Volume 4: Issue 39

Date: Mon, 29 Sep 1997 09:06:00 -0500
From: Alexander Schwarm <NOSPAMalexander@tamu.edu>
Subject: Determining convexity of a constraint/space
How does one go about showing whether a space/constraint is convex? I know how to show a function is convex. I can show that x^2 is convex, but x^2 > 1 is not convex. The specific constraint I am looking into is:

  a'*X <= b - sqrt(X'*P*X + g'*X + r)

where P >= 0 is in R^(n x n), X, a, g are in R^n and r is in R^1. How can I go about finding out whether this space is convex?

Thank you,

Alexander Schwarm

Date: Sat, 27 Sep 1997 20:10:19 -0400
From: Pierre Duchesne <duchesne@stat.umontreal.ca>
Subject: Convex programming
Hello,

I'm really new to the area of OR, but I have the following problem to solve. (In fact, it is maybe interesting to note that my initial problem comes from statistics... it is amazing how much statisticians could learn from OR methods...)

---------------------------------------------------------------------------
I want to minimize a convex function on a convex set. The function to minimize is f, and the constraints are linear, with in addition simple bounds on the variables:

  min f(x)
  subject to Ax = t1
             a <= x <= b

where x is a vector of dimension p. Suppose for example p = 100; a common situation is to have A of dimensions 2 x 100.
---------------------------------------------------------------------------

First, I want to investigate feasibility of my problem. The only thing I know is that, for the constraints Ax = t2, a <= x <= b, I have a feasible solution, say x2, but I'd like to investigate 'the proximity' of the two regions. Sometimes we expect that the initial region will be empty. Any references on that type of problem would be really appreciated...

Second, are there any good algorithms already written? In Gill, Murray and Wright, p. 193, that type of problem is discussed. Are there any Fortran algorithms?

Thank you very much for any help; e-mail me directly at duchesne@dms.umontreal.ca, I will summarize.

Pierre Duchesne
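On the feasibility question: a phase-I check is just a linear program with a zero objective. The sketch below is my own illustration using the modern scipy.optimize.linprog (anachronistic for 1997, but the standard tool today); the matrix A, the right-hand sides, and the bounds are made-up toy values.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (made up for illustration): 3 variables, 2 equality rows.
A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 0.0]])
bounds = [(0.0, 1.0)] * 3            # a <= x <= b componentwise

def region_is_feasible(t):
    """Phase-I check: minimize the zero objective over Ax = t,
    a <= x <= b.  status 0 means a feasible point was found,
    status 2 means the region is empty."""
    res = linprog(c=np.zeros(3), A_eq=A, b_eq=t, bounds=bounds,
                  method="highs")
    return res.status == 0

print(region_is_feasible(np.array([2.0, 0.0])))   # True, e.g. x=(.75,.75,.5)
print(region_is_feasible(np.array([5.0, 0.0])))   # False: sum(x) <= 3
```

For the 'proximity' question, one could instead minimize a norm of Ax - t1 over the bound constraints alone; the optimal value then measures how far t1 is from being attainable.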

SCI.OP-RESEARCH Digest Mon, 26 May 97 Volume 4: Issue 21

Date: 16 May 1997 09:59:12 -0700
From: penev@lipari.usc.edu (Kamen Penev)
Subject: How to check if a convex cone contains only 0?

In article <5l08td$njs$4@msunews.cl.msu.edu>, Paul A. Rubin <rubin@msu.edu> wrote:
>
> A more tedious solution, which should (I think) cope with degenerate cases
> such as this one, is to alternately maximize and minimize each component of
> x over the feasible region:
>
>   max (min) x_j
>   s.t. Ax <= 0
>
> running j from 1 to 6. If all 12 problems have optimal value 0, the cone
> contains only the origin. The first time you get an optimal value other
> than zero, quit with a non-zero point in the cone.

Yes. I am doing a variation of that right now:

  max x_j          (j = 1, ..., 6)
  min Sum x_j
  s.t. Ax <= 0

This only takes 7 problems instead of 12, but I had the feeling that it's ugly.

Regards!

Kamen
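Paul's max/min scheme from the quoted article can be sketched as follows. This is my own illustration with the modern scipy.optimize.linprog; the box -1 <= x <= 1 is added (my assumption, not in the thread) so that each LP stays bounded, which does not change the answer: a cone containing a nonzero point also contains a scaled copy of it inside the box.

```python
import numpy as np
from scipy.optimize import linprog

def cone_is_origin_only(A, tol=1e-9):
    """Rubin's test: max and min each x_j over {x : Ax <= 0}, with the
    box -1 <= x <= 1 added so every LP is bounded.  The cone is {0}
    iff all 2n optimal values are 0."""
    n = A.shape[1]
    bounds = [(-1.0, 1.0)] * n
    for j in range(n):
        for sign in (1.0, -1.0):
            c = np.zeros(n)
            c[j] = sign            # min +x_j and min -x_j (= max x_j)
            res = linprog(c, A_ub=A, b_ub=np.zeros(A.shape[0]),
                          bounds=bounds, method="highs")
            if abs(res.fun) > tol:
                return False       # found a nonzero point in the cone
    return True

# {x : x <= 0, -x <= 0} in R^1 is just {0} ...
print(cone_is_origin_only(np.array([[1.0], [-1.0]])))   # True
# ... while {x : -x <= 0} is the whole nonnegative ray.
print(cone_is_origin_only(np.array([[-1.0]])))          # False
```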


SCI.OP-RESEARCH Digest Mon, 12 May 97 Volume 4: Issue 19

Date: Fri, 9 May 1997 03:17:51 GMT
From: Mark Rosenschein <mrosen@cs.technion.ac.il>
Subject: Convex optimization
Did anybody ever try to implement the gradient method for convex optimization? Or any similar method. Or any practically efficient method. Could someone point me to any paper or source code? Thanks in advance.

Mark Rosenschein

Date: 9 May 1997 13:18:58 GMT
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
yes, about 50 years ago.....

no knowledgeable person nowadays will use it .... for modern algorithms, you should have a look at some nonlinear optimization books in your math library (e.g. Fletcher, Bertsekas, Nash & Sofer) and for software have a look at

http://plato.la.asu.edu/guide.html

(there are additional hints for getting information from other sources at the end of this guide)

hope this helps

peter

Date: Fri, 9 May 1997 15:36:47 GMT
From: Mark Rosenschein <mrosen@cs.technion.ac.il>
Subject: Convex optimization
To: Peter Spellucci <spellucci@mathematik.th-darmstadt.de>
Thanks for the help, but maybe I should formulate my problem precisely. I want to minimize a linear function over a convex set, where the set is given by an oracle. Given a point, the oracle returns whether it is inside or outside of the set and, if it is outside, the normal vector of a cutting plane. (That is, I've got the oracle procedure.) And I need to solve this problem in large dimensions. I can't really figure out from the short descriptions of software in web guides which is appropriate for my model.

Mark Rosenschein
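Mark's oracle model is exactly the setting of cutting-plane methods: maintain a polyhedral outer approximation of the convex set, minimize the linear objective over it, and add the oracle's cutting plane whenever the LP minimizer falls outside the set. The sketch below is my own illustration (not software from the thread), using the modern scipy.optimize.linprog for the LP subproblems; it assumes the oracle returns the full cut (a, beta) with a'y <= beta for all feasible y, and that the set lies inside a known box.

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane_min(c, oracle, box, iters=200):
    """Kelley-style cutting-plane sketch: minimize c'x over a convex
    set K given only through a separation oracle.  oracle(x) returns
    (True, None, None) if x is in K, else (False, a, beta) with
    a'y <= beta valid for all y in K but a'x > beta."""
    n = len(c)
    cuts_A, cuts_b = [], []
    bounds = [(-box, box)] * n        # assume K lies inside this box
    x = np.zeros(n)
    for _ in range(iters):
        res = linprog(c,
                      A_ub=np.array(cuts_A) if cuts_A else None,
                      b_ub=np.array(cuts_b) if cuts_b else None,
                      bounds=bounds, method="highs")
        x = res.x
        inside, a, beta = oracle(x)
        if inside:
            break                     # LP minimizer lies in K: done
        cuts_A.append(a)              # cut x off while keeping all of K
        cuts_b.append(beta)
    return x

# Toy oracle for the unit ball {x : ||x|| <= 1} in R^2:
def ball_oracle(x, tol=1e-6):
    r = np.linalg.norm(x)
    if r <= 1.0 + tol:
        return True, None, None
    return False, x / r, 1.0          # tangent plane a'y <= 1

# Minimizing x_1 over the unit ball: x approaches (-1, 0).
x = cutting_plane_min(np.array([1.0, 0.0]), ball_oracle, box=10.0)
```

For large dimensions one would use an ellipsoid or bundle variant rather than this naive scheme, since the LPs grow by one row per iteration.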

Date: Fri, 9 May 1997 15:19:37 GMT
From: hwolkowi@orion.math.uwaterloo.ca (Henry Wolkowicz)
Subject: Convex optimization
I have a followup question - more detailed.

What about convex programs where the objective and constraints are all quadratic? Has anyone seen some numerical tests specific for this problem? What algorithms would be best? Dual algorithms? Proximal algorithms? Interior point methods?

thanks

||Henry Wolkowicz             |Fax: (519) 725-5441
||University of Waterloo      |Tel: (519) 888-4567, 1+ext. 5589
||Dept of Comb and Opt        |email: henry@orion.math.uwaterloo.ca
||Waterloo, Ont. CANADA N2L 3G1|URL: http://orion.math.uwaterloo.ca/~hwolkowi

SCI.OP-RESEARCH Digest Mon, 28 April 97 Volume 4: Issue 17

Date: 14 Apr 1997 09:33:07 GMT
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: A question on positive definite matrices

In article <5il9nj$5pd$1@fbi-news.informatik.uni-dortmund.de>, hofmeist@thue.informatik.uni-dortmund.de (Thomas Hofmeister) writes:
|> I have the following question which is probably trivial to
|> answer for some of you:
|>
|> Is there some simple criterion that makes the following true:
|>
|>   Matrix A is positive definite
|>   if and only if
|>   it is positive semi-definite AND some criterion .....
|>
|> Could that "some criterion" be something like "A is non-singular" ?
|>
|> The reason I'm asking is that computing determinants suffices
|> to determine positive definiteness whereas for *semi*-definiteness,
|> there is the characterization via eigenvalues which are "harder to
|> compute".

Clearly "semidefinite" (i.e. all eigenvalues nonnegative) plus "regular" (determinant = product of all eigenvalues is nonzero, hence no eigenvalue is zero) implies positive definite.

hope this helps

peter

Date: 24 Apr 1997 21:41:24 GMT
From: rubin@msu.edu (Paul A. Rubin)
Subject: A question on positive definite matrices
If we're dealing with real matrices and if we assume that positive semidefinite and positive definite are only defined for symmetric matrices, then p.s.d. + nonsingular <==> p.d. When I learned linear algebra, p.d. implied symmetric, but I think (not sure) I've seen authors apply it to asymmetric matrices, in which case the equivalence is not valid.

-> The reason I'm asking is that computing determinants suffices
-> to determine positive definiteness whereas for *semi*-definiteness,
-> there is the characterization via eigenvalues which are "harder to
-> compute".

For symmetric (real) matrices, p.s.d. is characterized by all principal minors (not just the leading ones) being nonnegative.

-- Paul

**************************************************************************
*  Paul A. Rubin                                Phone: (517) 432-3509    *
*  Department of Management                     Fax:   (517) 432-1111    *
*  Eli Broad Graduate School of Management      Net:   RUBIN@MSU.EDU     *
*  Michigan State University                                             *
*  East Lansing, MI 48824-1122  (USA)                                    *
**************************************************************************
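Paul's equivalence (for real symmetric matrices: p.s.d. + nonsingular <=> p.d.) is easy to check numerically via the eigenvalues. A small sketch of my own, with numpy and made-up example matrices:

```python
import numpy as np

def is_positive_definite(A, tol=1e-10):
    """For real symmetric A: p.d.  <=>  p.s.d. and nonsingular,
    checked here through the (real) eigenvalues."""
    w = np.linalg.eigvalsh(A)               # eigenvalues of symmetric A
    psd = np.all(w >= -tol)                 # no significantly negative ones
    nonsingular = np.all(np.abs(w) > tol)   # no (numerically) zero ones
    return bool(psd and nonsingular)

A_pd  = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 1, 3
A_psd = np.array([[1.0, 1.0], [1.0, 1.0]])  # eigenvalues 0, 2: singular
print(is_positive_definite(A_pd))           # True
print(is_positive_definite(A_psd))          # False: p.s.d. but singular
```

In practice an attempted Cholesky factorization (np.linalg.cholesky succeeds precisely when A is positive definite, up to rounding) is the cheaper test.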

SCI.OP-RESEARCH Digest Mon, 14 April 97 Volume 4: Issue 15

Date: 11 Apr 1997 12:10:27 GMT
From: hofmeist@thue.informatik.uni-dortmund.de (Thomas Hofmeister)
Subject: A question on positive definite matrices
I have the following question which is probably trivial to answer for some of you:

Is there some simple criterion that makes the following true:

Matrix A is positive definite if and only if it is positive semi-definite AND some criterion .....

Could that "some criterion" be something like "A is non-singular" ?

The reason I'm asking is that computing determinants suffices to determine positive definiteness whereas for *semi*-definiteness, there is the characterization via eigenvalues which are "harder to compute".

Date: 7 Apr 1997 15:26:42 GMT
From: "Jos A. Horsmeier" <jos@and.nl>
Subject: Bland's rule applied to dual simplex
Greetings and salutations,

my question may seem less than trivial to most, if not all, of you, but it kept me busy for the entire weekend and I still haven't figured it out ...

I'm implementing a (revised) simplex algorithm (both primal and dual). I use Bland's rule to avoid cycling in the primal part of the simplex algorithm. Everything works sound and solid, and even more, I understand what I'm doing here (which is quite something in my case ;-)

Whenever the algorithm is started with a basic _infeasible_ solution (some elements of the right hand side b are less than zero), I want to apply the dual simplex to drag the (primal) basis back to a feasible basis (using any cost vector c, as long as c >= 0).

Of course cycling can rear its ugly head here. I pirated and caroused all bookshops again this weekend, but I can't find anything appropriate concerning the applicability of Bland's rule in this case. Not even 'Introduction to Linear Optimization' by Dimitris Bertsimas and John N. Tsitsiklis, which happens to be a terrific book, mentions it. All I can find is that horrible lexicographic anti-cycling rule; but I can't find an easy way to implement it when I have to deal with the dual part of the algorithm (I don't want to sort all the rows of the initial tableau, of course) ...

Can anyone help me out with this? I wouldn't be the only one who'd appreciate it; my wife would also love to see an answer to it, in order to get all the paper mess out of the living room and that stupid, absent-minded look out of my eyes ... ;-)

Thanks in advance for any pointer or answers; e-mail or replies to this group are both fine. I read (lurk) this group every day.

kind regards,

Jos aka jos@and.nl

ps. Please don't ask me why I don't use any available simplex implementation; I want to implement my own, maybe not just as an exercise ...
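For what it's worth, one standard way to carry Bland's smallest-index rule over to the dual simplex (the dual simplex is the primal simplex applied to the dual problem, so Bland's finiteness argument transfers) is: let the smallest-index basic variable with a negative value leave, then run the dual ratio test and break ties by smallest column index. A pure-Python sketch of just the two selection rules; the function names and the tiny example data are my own invention:

```python
def leaving_variable(basis, x_basic, tol=1e-9):
    """Dual-simplex Bland rule: among basic variables with negative
    value, pick the one with the smallest variable index (NOT the most
    negative value).  Returns its row, or None if primal feasible."""
    candidates = [(basis[i], i) for i in range(len(basis))
                  if x_basic[i] < -tol]
    if not candidates:
        return None
    return min(candidates)[1]

def entering_variable(cbar, pivot_row, tol=1e-9):
    """Dual ratio test with Bland-style tie-breaking: among columns j
    with pivot_row[j] < 0, minimize cbar[j] / -pivot_row[j]; on ties,
    keep the smallest j.  Returns None if no column is eligible
    (then the primal problem is infeasible)."""
    best_j, best_ratio = None, None
    for j, (c, a) in enumerate(zip(cbar, pivot_row)):
        if a < -tol:
            ratio = c / -a
            if best_ratio is None or ratio < best_ratio - tol:
                best_j, best_ratio = j, ratio
    return best_j

# Tiny made-up example: the basis holds variables 2 and 3 with values
# (-1, 2), so variable 2 (row 0) must leave.  In the pivot row,
# columns 0 and 1 tie at ratio 0.5, so column 0 enters.
print(leaving_variable([2, 3], [-1.0, 2.0]))                            # 0
print(entering_variable([0.5, 1.0, 0.0, 0.0], [-1.0, -2.0, 1.0, 0.0]))  # 0
```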


SCI.OP-RESEARCH Digest Mon, 7 April 97 Volume 4: Issue 14

Date: 6 Apr 1997 22:54:27 GMT
From: madhu@PROBLEM_WITH_YOUR_MAIL_GATEWAY_FILE.nyu.edu (Madhu Nayakkankuppam)
Subject: New code for SDP.

We would like to announce the availability of our semidefinite programming code: SDPpack Version 0.8 BETA. The code and documentation are available at the URL:

http://www.cs.nyu.edu/phd_students/madhu/sdppack/sdppack.html

Abstract (User Guide)

This report describes SDPpack, a package of Matlab files designed to solve semidefinite programs (SDP). SDP is a generalization of linear programming to the space of block diagonal, symmetric, positive semidefinite matrices. The main routine implements a primal-dual Mehrotra predictor-corrector scheme based on the XZ+ZX search direction. We also provide certain specialized routines, one to solve SDP's with only diagonal constraints, and one to compute the Lovasz theta function of a graph, using the XZ search direction. Routines are also provided to determine whether an SDP is primal or dual degenerate, and to compute the condition number of an SDP. The code optionally uses MEX files for improved performance; binaries are available for several platforms. Benchmarks show that the codes provide highly accurate solutions to a wide variety of problems.

F. Alizadeh,   J.-P. Haeberly,   M.V. Nayakkankuppam,   M.L. Overton
   Rutgers         Fordham               NYU                NYU


SCI.OP-RESEARCH Digest Mon, 3 Feb 97 Volume 4: Issue 5

Date: 28 Jan 1997 14:07:59 GMT
From: dreaume@engin.umich.edu (Daniel Reaume)
Subject: Complexity of Convex Programming
Dear Colleagues,

I am currently writing up a literature review on the subject of convex programming as part of my dissertation. On the subject of complexity, I have read many contradictory results. Usually, these contradictions seem to arise from imprecision in defining relative error. To be more certain, I ask the readers of this newsgroup:

Consider the mathematical program:

  min  f(x)
  s.t. g(x) <= 0

where the objective function and the constraint functions are all convex and the feasible region is assumed to be a well-rounded subset of R^n. By well-rounded, I mean that the feasible region contains the unit ball and is in turn contained in a ball of radius sqrt(n)(n+1).

I define the error of a solution to be f(x)-f*, where f* is the optimum value.

Does there exist an algorithm, deterministic or randomized, which, using a number of membership and/or separation oracle calls bounded from above by a polynomial in n and (1/error), will return a feasible solution x to the above convex program such that f(x)-f*<=error? Does there exist such an algorithm using only membership oracles?

To the best of my knowledge, the answer is "No", since ellipsoid methods seem to have difficulties guaranteeing feasibility while interior point methods usually require additional assumptions allowing the construction of self-concordant barrier functions.

I welcome any comments or assistance!

Many thanks in advance,

Daniel Reaume
dreaume@engin.umich.edu

Date: 28 Jan 1997 10:23:27 -0600
From: hennebry@plains.nodak.edu (Michael J. Hennebry)
Subject: Complexity of Convex Programming
The answer probably depends on precisely what one can get out of the oracles and what one has to put in them.

Mike hennebry@plains.NoDak.edu "Gradually a shot rang out." -- I. Forget (Snoopy?)
