George Mason University

Departments of SEOR and Mathematical Sciences.

Fall 2004

Professor Roman A. Polyak

 

Math 689 / IT 884-001: Advanced Nonlinear Optimization

Tuesday 4:30-7:10 pm. ENT 276

 

 

Office: Room 127, ST-II Building; phone: 703-993-1685; fax: 703-993-1521

Office Hours: Thursday 3 pm-5 pm or by appointment. E-mail: rpolyak@gmu.edu

 Text: D. Bertsekas, "Nonlinear Programming," Second Edition, Athena Scientific, Belmont, Massachusetts.

          S. Nash, A. Sofer, "Linear and Nonlinear Programming," The McGraw-Hill Companies, Inc., 1996.

 

Course Summary: A number of real-life applications arising in statistical learning theory, structural optimization, antenna design, optimal power flow, radiation therapy planning, signal processing, economics, and finance, to mention just a few, lead to Nonlinear Programming (NLP). A general NLP problem consists of finding a minimum (maximum) of a nonlinear function under nonlinear constraints, both inequalities and equations. In this course, along with classical NLP chapters that go back to Newton, Cauchy, and Lagrange, we will cover recent advances and trends in NLP.

             In the first part of the course we will consider theory and methods for unconstrained optimization as well as NLP with equality constraints. We will also cover elements of convex analysis and convex optimization theory including optimality criteria and convex duality.

            In the second part of the course we will cover recent advances in NLP, including Interior Point Methods (IPMs) and Nonlinear Rescaling (NR) theory and methods in constrained optimization. Particular emphasis will be given to primal-dual approaches based on IPMs and NR.

           There will be homework assignments and projects.

  Grading:  15% homework; 30% midterm exam; 20% project; 35% final exam.

 

Course Schedule    

1.      Real-life applications and mathematical problems that lead to NLP formulations.

2.      Basics of unconstrained optimization: the gradient method, Newton's method, and their modifications.
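As a small illustration of the two methods in item 2 (a sketch for orientation only, not part of the course materials), one step of each can be written down for a quadratic objective, where both updates can be checked against the closed-form minimizer. The matrix A and vector b below are made-up example data:

```python
import numpy as np

# Illustrative sketch only: one step of the gradient method and of
# Newton's method, applied to the quadratic f(x) = 0.5*x'Ax - b'x,
# whose gradient is Ax - b and whose Hessian is A.
# A and b are made-up example data.

def gradient_step(grad, x, t):
    # Gradient method: x+ = x - t * grad f(x), with fixed step size t > 0.
    return x - t * grad(x)

def newton_step(grad, hess, x):
    # Newton's method: x+ = x - [Hess f(x)]^{-1} grad f(x).
    return x - np.linalg.solve(hess(x), grad(x))

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])
grad_f = lambda x: A @ x - b
hess_f = lambda x: A

# One Newton step solves a quadratic problem exactly.
x_newton = newton_step(grad_f, hess_f, np.zeros(2))

# The gradient method needs many steps to reach the same point.
x_grad = np.zeros(2)
for _ in range(200):
    x_grad = gradient_step(grad_f, x_grad, t=0.2)
```

On a strictly convex quadratic, both iterates converge to the unique solution of Ax = b; the contrast in iteration counts is exactly the motivation for the "modifications" (line search, quasi-Newton updates) mentioned in item 2.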

3.      Optimization problems with equality constraints. The Lagrange equations as a necessary optimality condition. Lagrangian duality: the dual function and the dual problem.

4.      R. Courant's penalty method for equality-constrained optimization and its dual equivalent, N. Tikhonov's regularization method for unconstrained optimization.
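The two constructions in item 4 can be sketched in their standard textbook forms (stated here for orientation; the precise forms used in lectures may differ):

```latex
% Courant's penalty function for \min f(x) \ \text{s.t.}\ c(x) = 0:
P_k(x) \;=\; f(x) \;+\; \frac{k}{2}\,\lVert c(x)\rVert^{2}, \qquad k > 0,
% minimized without constraints for an increasing sequence of k.

% Tikhonov regularization of an unconstrained problem \min_y d(y):
d_{\varepsilon}(y) \;=\; d(y) \;+\; \frac{\varepsilon}{2}\,\lVert y\rVert^{2},
\qquad \varepsilon > 0 .
```

The duality announced in the item is that applying the quadratic penalty to the primal problem corresponds to Tikhonov-type quadratic regularization applied to the dual.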

5.      Convex functions, convex sets and the convex optimization problem.

      The Karush-Kuhn-Tucker optimality conditions. Elements of duality theory in convex optimization.

6.      The principle of feasible directions and the first primal-dual method for convex optimization.

7.      Midterm

8.      Sequential unconstrained minimization technique (SUMT). Classical barrier and distance functions.
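The classical logarithmic barrier of item 8 is a standard SUMT construction (sketched here in one common form) for min f(x) subject to c_i(x) ≥ 0, i = 1, ..., m:

```latex
B(x,\mu) \;=\; f(x) \;-\; \mu \sum_{i=1}^{m} \ln c_i(x), \qquad \mu > 0 ,
```

whose unconstrained minimizers x(μ) form the central path followed as μ ↓ 0; item 9 builds on exactly this construction.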

9.      Interior Point Methods for NLP.

10.  Augmented Lagrangians. The Lagrange multipliers method for equality constraints and its dual equivalent, the quadratic prox method for unconstrained optimization.
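A standard form of the augmented Lagrangian and its multiplier update, sketched for min f(x) subject to c(x) = 0 (sign conventions vary between texts; this is the form in Bertsekas):

```latex
L_k(x,\lambda) \;=\; f(x) \;+\; \lambda^{\top} c(x)
\;+\; \frac{k}{2}\,\lVert c(x)\rVert^{2},
\qquad
\hat{x} \in \arg\min_x L_k(x,\lambda), \quad
\lambda^{+} \;=\; \lambda \;+\; k\, c(\hat{x}) .
```

The dual equivalence in item 10 is that this multiplier update coincides with one step of the quadratic proximal point method applied to the dual problem.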

11.  The Nonlinear Rescaling (NR) principle for inequality-constrained optimization. Modified barrier functions, modified distance functions, and corresponding methods.
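In one common form (sketched here; the lectures may use a more general rescaling), the modified logarithmic barrier of NR theory for min f(x) subject to c_i(x) ≥ 0 shifts the classical barrier so it is defined on the boundary and carries explicit multipliers:

```latex
F(x,\lambda,k) \;=\; f(x)
\;-\; \frac{1}{k}\sum_{i=1}^{m} \lambda_i \,\ln\!\bigl(1 + k\,c_i(x)\bigr),
\qquad
\hat{x} \in \arg\min_x F(x,\lambda,k), \quad
\lambda_i^{+} \;=\; \frac{\lambda_i}{1 + k\,c_i(\hat{x})} .
```

Unlike the classical barrier, k can stay fixed while the multiplier updates drive convergence, which is the point of the NR multiplier methods in item 12.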

12.  NR multipliers methods and their dual equivalents: interior prox methods with entropy-like distance functions.

13. Primal-dual Interior Point Methods.

14. Primal-dual NR methods in constrained optimization. Numerical realization and numerical results. 

 

Final Exam: December 14, 2004