Date: Wed, 22 Oct 1997 15:54:14 +0200
From: Arnold Neumaier
Subject: Determining convexity of a constraint/space
Date: Wed, 22 Oct 1997 16:36:57 GMT
From: (null)
Subject: Determining convexity of a constraint/space
The constraint-set convexity above also holds under quasiconvexity of
the function, and there are characterizations of quasiconvex functions
as well. A good reference is the proceedings of the NATO Conference on
Generalized Concavity, held at U.B.C. in Vancouver, B.C., in the early
1980s; the editors were Schaible, Ziemba, Diewert, and Avriel (I think).
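[No code appears in the thread; purely as an illustration, here is a minimal Python sketch of the sublevel-set characterization mentioned above: a function is quasiconvex exactly when every sublevel set {x : f(x) <= c} is convex, so midpoints of sampled sublevel-set members must stay in the set. The functions and bounds below are made up for the demo.]

```python
import random

def sublevel_midpoint_check(f, c, trials=10000, lo=-10.0, hi=10.0, seed=0):
    """Count sampled pairs x, y with f(x) <= c and f(y) <= c whose midpoint
    leaves the sublevel set; for a quasiconvex f this count must be 0."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
        if f(x) <= c and f(y) <= c and f(0.5 * (x + y)) > c + 1e-12:
            failures += 1
    return failures

# x^2 is convex (hence quasiconvex): its sublevel sets are intervals.
print(sublevel_midpoint_check(lambda x: x * x, c=1.0))       # 0
# -x^2 is not quasiconvex: {x : -x^2 <= -1} is the two rays |x| >= 1,
# i.e. exactly the non-convex "x^2 > 1" set discussed elsewhere in the thread.
print(sublevel_midpoint_check(lambda x: -x * x, c=-1.0))
```

A sampling test like this can only disprove quasiconvexity (any failure is a certificate); a zero count proves nothing by itself.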
Date: Tue, 30 Sep 1997 15:04:48 +0200
From: frangio@di.unipi.it (Antonio Frangioni)
Subject: convex programming
Hello.
If the function is nondifferentiable, things are different: you must use a nondifferentiable optimization (NDO) method. There are several of them, with some pointers at the page above. I also have one (not listed above).
Date: Mon, 29 Sep 1997 09:06:00 -0500
From: Alexander Schwarm <NOSPAMalexander@tamu.edu>
Subject: Determining convexity of a constraint/space
How does one go about showing whether a space/constraint is convex? I
know how to show a function is convex. I can show that x^2 is convex,
but x^2 > 1 is not convex. The specific constraint I am looking into
is:
a'*X <= b - sqrt(X'*P*X + g'*X + r)
where P >= 0 is an n x n real matrix, X, a, and g are vectors in R^n, and
r is a scalar. How can I go about finding out whether this space is convex?
Thank you,
Alexander Schwarm
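[Editorial illustration, not from the thread: one cheap numerical sanity check for a question like this is to sample feasible pairs and test their midpoints. A single violation disproves convexity; zero violations prove nothing. The instance below, with its particular P, a, g, b, r, is made up; for it one can check that X'PX + g'X + r is a sum of squares plus a positive constant, so the sqrt term is the norm of an affine map and the set is in fact convex.]

```python
import math
import random

# Hypothetical small instance (n = 2) of the posted constraint
#   a'X <= b - sqrt(X'PX + g'X + r),   P positive semidefinite.
P = [[2.0, 0.0], [0.0, 1.0]]
a = [1.0, -1.0]
g = [0.5, 0.0]
b, r = 5.0, 1.0

def feasible(x, tol=0.0):
    quad = sum(x[i] * P[i][j] * x[j] for i in range(2) for j in range(2))
    val = quad + g[0] * x[0] + g[1] * x[1] + r
    if val < 0.0:                      # sqrt undefined: treat as infeasible
        return False
    return a[0] * x[0] + a[1] * x[1] <= b - math.sqrt(val) + tol

def count_midpoint_violations(trials=20000, seed=1):
    """Sample feasible pairs and test their midpoints; a convex set can
    never produce a violation (up to rounding, hence the tiny tolerance)."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        x = [rng.uniform(-3.0, 3.0) for _ in range(2)]
        y = [rng.uniform(-3.0, 3.0) for _ in range(2)]
        if feasible(x) and feasible(y):
            m = [0.5 * (x[i] + y[i]) for i in range(2)]
            if not feasible(m, tol=1e-9):
                bad += 1
    return bad

print(count_midpoint_violations())     # 0 for this (convex) instance
```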
Date: Sat, 27 Sep 1997 20:10:19 -0400
From: Pierre Duchesne <duchesne@stat.umontreal.ca>
Subject: Convex programming
Hello,
I'm really new to the area of OR, but I have the following problem to solve. (In fact, it may be interesting to note that my initial problem comes from statistics... it is amazing how much statisticians could learn from OR methods...)
Date: 16 May 1997 09:59:12 -0700
From: penev@lipari.usc.edu (Kamen Penev)
Subject: How to check if a convex cone contains only 0?
Regards!
Kamen
Date: Fri, 9 May 1997 03:17:51 GMT
From: Mark Rosenschein <mrosen@cs.technion.ac.il>
Subject: Convex optimization
Has anybody ever tried to implement the gradient method for convex
optimization? Or any similar method, or any practically efficient method?
Could someone point me to a paper or to source code?
Thanks in advance.
Mark Rosenschein
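[For reference, the classical method asked about is easy to sketch; the following is illustrative textbook code, not tuned software, and the objective, step size, and iteration count are made-up examples. For an L-smooth convex f, a fixed step no larger than 1/L guarantees convergence to a minimizer.]

```python
def gradient_descent(grad, x0, step=0.1, iters=500):
    """Plain fixed-step gradient descent: x <- x - step * grad(x)."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# f(x) = (x0 - 1)^2 + 2*(x1 + 3)^2, minimum at (1, -3); here L = 4,
# so step = 0.1 < 1/L is safe.
grad = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 3.0)]
print(gradient_descent(grad, [0.0, 0.0]))   # approx [1.0, -3.0]
```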
Date: 9 May 1997 13:18:58 GMT
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Yes, about 50 years ago...
No knowledgeable person would use it nowadays. For modern algorithms, have a look at some nonlinear optimization books in your math library (e.g. Fletcher; Bertsekas; Nash & Sofer), and for software have a look at
http://plato.la.asu.edu/guide.html
(there are additional hints at the end of this guide on getting information from other sources).
hope this helps
peter
Date: Fri, 9 May 1997 15:36:47 GMT
From: Mark Rosenschein <mrosen@cs.technion.ac.il>
Subject: Convex optimization
To: Peter Spellucci <spellucci@mathematik.th-darmstadt.de>
Thanks for the help, but maybe I should formulate my problem more precisely.
I want to minimize a linear function over a convex set, where the set is
given by an oracle: given a point, the oracle tells me whether it lies
inside or outside the set and, if outside, returns the normal vector of a
cutting plane. (That is, I already have the oracle procedure.)
I need to solve this problem in large dimensions, and I can't really tell
from the short software descriptions in the web guides which code is
appropriate for my model.
Mark Rosenschein
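[Purely as an illustration of how a separation oracle of exactly this kind can be consumed, here is a naive toy scheme on a 2-D example: alternate a diminishing step along -c with projection onto the oracle's cut. This is not the ellipsoid, analytic-center, or bundle methods one would actually use in large dimensions, and all names below are made up. The example set is the unit disk, where the minimum of c'x with c = (1, 0) is -1 at (-1, 0).]

```python
import math

def oracle(x):
    """Separation oracle for the unit disk: None if x is inside,
    else a vector a with a.y <= 1 for the set but a.x > 1."""
    n = math.sqrt(x[0] ** 2 + x[1] ** 2)
    if n <= 1.0:
        return None
    return [x[0] / n, x[1] / n]

def toy_cutting_scheme(c, x0, steps=2000):
    x, best, best_val = list(x0), None, float("inf")
    for k in range(1, steps + 1):
        a = oracle(x)
        if a is None:                        # feasible: record, step along -c
            val = c[0] * x[0] + c[1] * x[1]
            if val < best_val:
                best, best_val = list(x), val
            t = 1.0 / k                      # diminishing step length
            x = [x[0] - t * c[0], x[1] - t * c[1]]
        else:                                # infeasible: project onto a.y <= 1
            viol = a[0] * x[0] + a[1] * x[1] - 1.0
            x = [x[0] - viol * a[0], x[1] - viol * a[1]]
    return best, best_val

best, val = toy_cutting_scheme([1.0, 0.0], [0.0, 0.0])
print(best, val)                             # approx [-1.0, 0.0] and -1.0
```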
Date: Fri, 9 May 1997 15:19:37 GMT
From: hwolkowi@orion.math.uwaterloo.ca (Henry Wolkowicz)
Subject: Convex optimization
I have a follow-up question, in more detail.
What about convex programs where the objective and constraints are all quadratic? Has anyone seen some numerical tests specific for this problem? What algorithms would be best? Dual algorithms? Proximal algorithms? Interior point methods?
thanks
Date: 14 Apr 1997 09:33:07 GMT
From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Subject: A question on positive definite matrices
hope this helps
peter
Date: 24 Apr 1997 21:41:24 GMT
From: rubin@msu.edu (Paul A. Rubin)
Subject: A question on positive definite matrices
If we're dealing with real matrices and if we assume that positive
semidefinite and positive definite are only defined for symmetric matrices,
then p.s.d. + nonsingular <==> p.d. When I learned linear algebra, p.d.
implied symmetric, but I think (not sure) I've seen authors apply it to
asymmetric matrices, in which case the equivalence is not valid.
-- Paul
Date: 11 Apr 1997 12:10:27 GMT
From: hofmeist@thue.informatik.uni-dortmund.de (Thomas Hofmeister)
Subject: A question on positive definite matrices
I have the following question, which is probably trivial for some of
you to answer:
Is there some simple criterion that makes the following true:
Matrix A is positive definite if and only if it is positive semi-definite AND some criterion .....
Could that "some criterion" be something like "A is non-singular" ?
The reason I'm asking: computing determinants suffices to determine positive definiteness, whereas for *semi*-definiteness there is the characterization via eigenvalues, which are "harder to compute".
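[An illustrative sketch of the determinant route, not from the thread: for symmetric A, Sylvester's criterion says A is positive definite iff all leading principal minors are positive, and since det A is the product of the eigenvalues, a singular p.s.d. matrix always fails it. Note the same trick does not settle *semi*-definiteness: nonnegative leading minors are not sufficient (e.g. diag(0, -1) has leading minors 0, 0 but is not p.s.d.); one needs all principal minors, or the eigenvalues.]

```python
def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0                      # (numerically) singular
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for j in range(i, n):
                M[r][j] -= f * M[i][j]
    return d

def leading_minors(A):
    """Determinants of the leading principal submatrices of A."""
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

def is_positive_definite(A):
    """Sylvester's criterion (symmetric A assumed)."""
    return all(m > 0 for m in leading_minors(A))

A = [[2.0, -1.0], [-1.0, 2.0]]    # p.d.: leading minors 2, 3
B = [[1.0, 1.0], [1.0, 1.0]]      # p.s.d. but singular: leading minors 1, 0
print(is_positive_definite(A), is_positive_definite(B))   # True False
```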
Date: 7 Apr 1997 15:26:42 GMT
From: "Jos A. Horsmeier" <jos@and.nl>
Subject: Bland's rule applied to dual simplex
Greetings and salutations,
my question may seem less than trivial to most, if not all, of you, but it kept me busy for the entire weekend and I still haven't figured it out...
I'm implementing a (revised) simplex algorithm (both primal and dual). I use Bland's rule to avoid cycling in the primal part of the simplex algorithm. Everything works sound and solid, and even more, I understand what I'm doing here (which is quite something in my case ;-)
Whenever the algorithm is started with a basic _infeasible_ solution (the right-hand side b has elements less than zero), I want to apply the dual simplex to drag the (primal) basis back to a feasible one (using any cost vector c, as long as c >= 0).
Of course cycling can rear its ugly head here. I scoured all the bookshops again this weekend, but I can't find anything appropriate concerning the applicability of Bland's rule in this case. Not even 'Introduction to Linear Optimization' by Dimitris Bertsimas and John N. Tsitsiklis, which happens to be a terrific book, mentions it. All I can find is that horrible lexicographic anti-cycling rule, but I can't see an easy way to implement it when I have to deal with the dual part of the algorithm (I don't want to sort all the rows of the initial tableau, of course)...
Can anyone help me out with this? I wouldn't be the only one to appreciate it: my wife would also love to see an answer, in order to get all the paper mess out of the living room and that stupid, absent-minded look out of my eyes... ;-)
Thanks in advance for any pointer or answers; e-mail or replies to this group are both fine. I read (lurk) this group every day.
kind regards,
Jos aka jos@and.nl
ps. Please don't ask me why I don't use any available simplex implementation; I want to implement my own, maybe not just as an exercise ...
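[For what it's worth, the smallest-index tie-breaking itself is easy to write down for a dual pivot. The sketch below assumes a particular (made-up) tableau layout and is only the pivot-selection step, not a solver; the classical finiteness proof of Bland's rule is stated for the primal simplex, with the symmetric smallest-index rule commonly used on the dual side.]

```python
def dual_bland_pivot(T, basis):
    """Pick a dual-simplex pivot with Bland-style smallest-index tie-breaks.

    Assumed layout: T[i] = [a_i1, ..., a_in, b_i] for each basic row i,
    and T[-1] = [cbar_1, ..., cbar_n, -z] holds the reduced costs, which
    are >= 0 (dual feasible). basis[i] is the variable index basic in
    row i. Returns (row, col), or None once b >= 0 (primal feasible).
    """
    m, n = len(T) - 1, len(T[0]) - 1
    # Leaving row: smallest basic-variable index among rows with b_i < 0.
    rows = [i for i in range(m) if T[i][-1] < 0]
    if not rows:
        return None
    r = min(rows, key=lambda i: basis[i])
    # Entering column: dual ratio test cbar_j / (-a_rj) over a_rj < 0,
    # ties broken by the smallest column index (the Bland-style part).
    cand = [j for j in range(n) if T[r][j] < 0]
    if not cand:
        raise ValueError("row proves primal infeasibility (dual unbounded)")
    ratio = lambda j: T[-1][j] / -T[r][j]
    best = min(ratio(j) for j in cand)
    return r, min(j for j in cand if abs(ratio(j) - best) < 1e-12)

# Tiny made-up example: one basic row with b = -4, reduced costs (3, 1).
print(dual_bland_pivot([[-1.0, -2.0, -4.0], [3.0, 1.0, 0.0]], [0]))  # (0, 1)
```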
Date: 6 Apr 1997 22:54:27 GMT
From: madhu@PROBLEM_WITH_YOUR_MAIL_GATEWAY_FILE.nyu.edu (Madhu Nayakkankuppam)
Subject: New code for SDP.
We would like to announce the availability of our semidefinite programming code: SDPpack Version 0.8 BETA. The code and documentation are available at the URL:
http://www.cs.nyu.edu/phd_students/madhu/sdppack/sdppack.html
Abstract (User Guide)
This report describes SDPpack, a package of Matlab files designed to solve semidefinite programs (SDPs). SDP is a generalization of linear programming to the space of block diagonal, symmetric, positive semidefinite matrices. The main routine implements a primal-dual Mehrotra predictor-corrector scheme based on the XZ+ZX search direction. We also provide certain specialized routines, one to solve SDPs with only diagonal constraints, and one to compute the Lovasz theta function of a graph, using the XZ search direction. Routines are also provided to determine whether an SDP is primal or dual degenerate, and to compute the condition number of an SDP. The code optionally uses MEX files for improved performance; binaries are available for several platforms. Benchmarks show that the codes provide highly accurate solutions to a wide variety of problems.
F. Alizadeh (Rutgers), J.-P. Haeberly (Fordham), M.V. Nayakkankuppam (NYU), M.L. Overton (NYU)
Date: 28 Jan 1997 14:07:59 GMT
From: dreaume@engin.umich.edu (Daniel Reaume)
Subject: Complexity of Convex Programming
Dear Colleagues,
I am currently writing up a literature review on the subject of convex programming as part of my dissertation. On the subject of complexity, I have read many contradictory results. Usually, these contradictions seem to arise from imprecision in defining relative error. To be more certain, I ask the readers of this newsgroup:
Consider the mathematical program: minimize a convex function f(x) over a convex subset of R^n, accessible only through oracles.
I define the error of a solution to be f(x)-f*, where f* is the optimum value.
Does there exist an algorithm, deterministic or randomized, which, using a number of membership and/or separation oracle calls bounded from above by a polynomial in n and (1/error), will return a feasible solution x to the above convex program such that f(x)-f*<=error? Does there exist such an algorithm using only membership oracles?
To the best of my knowledge, the answer is "No", since ellipsoid methods seem to have difficulties guaranteeing feasibility while interior point methods usually require additional assumptions allowing the construction of self-concordant barrier functions.
I welcome any comments or assistance!
Many thanks in advance,
Daniel Reaume
dreaume@engin.umich.edu
Date: 28 Jan 1997 10:23:27 -0600
From: hennebry@plains.nodak.edu (Michael J. Hennebry)
Subject: Complexity of Convex Programming
The answer probably depends on precisely what one can get out of the
oracles and what one has to put into them.