`UK Nonlinear News`,
`November 2003`

The fundamental topic of this book is the complexity analysis of primal-dual interior-point algorithms for linear programming problems.

Of course the classical algorithm for this problem is the famous Simplex method. This iterative method continually reduces the objective function by moving from vertex to vertex of the feasible set, and thus (theoretically) solves a linear programming problem exactly in a finite number of iterations. Although the Simplex method (almost) always converges quickly in practice, there are known examples where the number of iterations required grows exponentially with N, the size of the linear programming problem: e.g. the well-known Klee-Minty example. Thus an open problem was to construct an algorithm for solving linear programming problems whose number of iterations could be guaranteed to be only polynomial in N. The big disappointment, however, was that when such algorithms appeared, e.g. Khachiyan's ellipsoid algorithm in 1979, they performed much worse than the Simplex method in practice.

In 1984, however, the situation changed abruptly with the announcement of Karmarkar's algorithm. In contrast to the Simplex method, this new algorithm generated a sequence of iterates in the interior of the feasible set; hence the name ``interior-point method''. Most importantly, Karmarkar's algorithm was polynomial in N and he claimed that it could solve large linear programming problems much faster than the Simplex method. It was quickly recognised that there were strong links between this new idea and older rejected algorithms based on the idea of ``barrier functions''. For a number of years the area of linear optimisation was in a state of flux, but a consensus gradually developed that the new approach was best applied in a ``primal-dual framework'', i.e. both the primal and dual linear programming problems were solved simultaneously.

The present state of primal-dual interior-point algorithms is that they demand the solution of a highly-structured system of (slightly) nonlinear equations. This system is solved by introducing a non-negative homotopy parameter `\mu` and using Newton's method. The solution required is that for `\mu=0`, but it is easy to write down the solution for `\mu=1`. Hence `\mu` is gradually reduced from 1 to 0, with the solution for each value of `\mu` computed by Newton's method. One would like to state precisely how `\mu` should be decreased, and precisely when the Newton iteration at each homotopy step should be terminated, so that not only does the algorithm perform well in practice but the total amount of work required, in terms of N, can also be proved to satisfy a realistic bound.
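
The homotopy loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' algorithm: the tiny LP, the `\mu`-reduction factor of 0.1 and the fixed count of five Newton steps per homotopy level are our own arbitrary choices.

```python
import numpy as np

def pd_step(A, b, c, x, y, s, mu):
    """One Newton step toward the mu-centre of the perturbed KKT system
       A x = b,  A^T y + s = c,  x_i s_i = mu  (x, s > 0)."""
    m, n = A.shape
    X, S = np.diag(x), np.diag(s)
    # Full Jacobian of the perturbed KKT conditions.
    J = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [S,                np.zeros((n, m)), X],
    ])
    r = np.concatenate([b - A @ x,
                        c - A.T @ y - s,
                        mu * np.ones(n) - x * s])
    d = np.linalg.solve(J, r)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    # Damped step: stay strictly inside x > 0, s > 0.
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
    return x + alpha * dx, y + alpha * dy, s + alpha * ds

# Toy LP: minimise x1 + 2*x2 subject to x1 + x2 = 1, x >= 0 (optimum (1, 0)).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y, s = np.array([0.5, 0.5]), np.zeros(1), c.copy()  # strictly feasible start

mu = x @ s / len(x)          # current distance from complementarity
while mu > 1e-9:
    mu *= 0.1                # reduce the homotopy parameter
    for _ in range(5):       # a few Newton steps per homotopy level
        x, y, s = pd_step(A, b, c, x, y, s, mu)

print(np.round(x, 6))        # converges to the optimal vertex (1, 0)
```

The interior-point character is visible in the damping rule: every iterate keeps `x` and `s` strictly positive, so the sequence approaches the optimal vertex from inside the feasible set rather than along its edges.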

It is here that there has been a gap between theory and practice, whose elimination is one of the main purposes of this book. The homotopy methods that have performed best in practice are the so-called ``large-update neighbourhood'' methods, which reduce `\mu` to zero quickly and do not bother to solve the intermediate problems very accurately. The amount of work required for this type of method, however, has only been proved to be `O(N log N)`. On the other hand, the so-called ``small-update neighbourhood'' methods, which reduce `\mu` to zero conservatively and are more careful in their solution of the intermediate problems, do NOT perform so well in practice; yet the amount of work required for this type of method has been proved to be `O(N ^{1/2} log N)`.
Hence, on page 16, the authors ask the key question: ``Can we lower the complexity bounds of the large-update methods by improved analysis without solving any additional systems?'' Their answer is in the affirmative and uses the new idea of ``self-regular'' functions to control the accuracy of the Newton iterates. A self-regular function is a strictly convex, twice continuously differentiable function defined on the positive real line, with global minimum at 1, and also satisfying important growth conditions. Using this idea, the authors prove that the amount of work required for the large-update methods is close to the `O(N ^{1/2} log N)` bound of the small-update methods.
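
As a concrete illustration (our example, with notation that may differ slightly from the book's), the classical logarithmic-barrier kernel

```latex
\[
  \psi(t) \;=\; \frac{t^{2}-1}{2} \,-\, \log t , \qquad t > 0 ,
\]
```

is self-regular: it is strictly convex with `\psi(1)=\psi'(1)=0`, and its second derivative `\psi''(t)=1+t^{-2}` is sandwiched between constant multiples of `t^{p-1}+t^{-1-q}` (here with `p=q=1`), which is the kind of growth condition the authors impose.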

We have given the impression above that this book is mostly concerned with linear programming problems, but this is definitely not so. The larger part of the book applies self-regular functions to interior-point methods in three other areas of optimisation, namely

- complementarity problems
- semi-definite optimisation
- second-order conic optimisation.

In conclusion then, this is a theoretical book, proving important theoretical results for various interior-point algorithms. It is not suitable as an introduction to the area (although Chapter 1 is rather good in this respect), nor for those readers with mainly practical interests; such people are better directed towards S.J. Wright's 1997 SIAM monograph. In general the book is easy to read, but I have still not recovered from the astonishing claim at the top of page 2: ``There is no doubt that the major breakthroughs in the field of mathematical programming are always inaugurated in linear optimisation.''

Also, with three authors available to proof-read each other, there are more typos than one would expect: e.g. the indentation in the pseudo-code on page 9 and the proximity function at the top of page 12.

Gerald Moore (`g.moore@imperial.ac.uk`)

Department of Mathematics

Imperial College

London.

`UK Nonlinear News` thanks Princeton University Press for providing a review copy of this book.


Page Created: 1st December 2003.

Last Updated: 1st December 2003.