Topics in Optimization for Data-Driven Applications

Module Number: EI71053

Duration: 1 Semester

Occurrence: Winter semester

Language: English

Number of ECTS: 5

Staff

Professor in charge: Holger Boche

Amount of work

Contact hours: 60

Self-studying hours: 90

Total: 150

Description of achievement and assessment methods

The grading is based on a written exam at the end of the semester (90 min). During this exam, the students have to solve problems by applying the techniques and methods learned in the lecture and exercises. These problems may include short calculations and sketches of proofs. No aids are allowed during the exam.

Recommended requirements

Basic knowledge of linear algebra, analysis, probability theory, convex optimization, and high-level programming languages such as MATLAB, Python, or R. Some knowledge of dynamical systems, machine learning, and statistics is helpful but not necessary.

Contents

The main objective of this course is to introduce and analyze iterative optimization methods that are suitable for data-driven applications in engineering practice (e.g., machine learning, network-based applications, and technical operations management). Such applications typically require methods that can process large amounts of data, satisfy low-delay requirements, and handle a large number of controllable influencing factors. For this reason, the emphasis of this course is on first-order (resp. zero-order) methods, which require only the first derivative (resp. the actual value) of the objective function. Specifically, the following topics are discussed in this course:

1. First-order methods in unconstrained nonlinear optimization: subgradient descent for convex functions, fundamental limits of first-order optimization methods, stochastic gradient descent for convex functions, Nesterov's accelerated gradient method, and first-order descent methods for non-convex functions.

2. First-order methods under constraints and uncertainty: projected gradient descent and mirror descent, primal-dual Lagrangian methods for constrained convex programs, adaptive subgradient methods (AdaGrad), descent methods for robust optimization, dynamic programming, Markov decision processes, Bellman's equation and the principle of optimality, the value iteration method, and approximation methods for large-scale implementation.

Moreover, macroscopic and distributed aspects of the introduced methods will be discussed in this course. Possible topics in this direction include, inter alia, descent methods in a non-cooperative game-theoretic setting and decentralized distributed first-order methods. Finally, selected applications from the following areas are covered: signal processing, supervised learning, online learning and reinforcement learning, resource management in network applications, and smart grids. Two of the listed methods are sketched in code below.
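
As a minimal sketch of the first of these, consider projected gradient descent in Python (one of the languages named in the requirements). The example problem (least squares over the Euclidean unit ball), the step size, and all names are illustrative assumptions, not material taken from the course.

import numpy as np

def projected_gradient_descent(grad, project, x0, step=0.01, iters=500):
    # Iterate x_{k+1} = P_C(x_k - step * grad(x_k)), i.e. a gradient step
    # followed by a projection back onto the feasible set C.
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Illustrative problem: minimize ||Ax - b||^2 subject to ||x|| <= 1.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
grad = lambda x: 2 * A.T @ (A @ x - b)               # gradient of the objective
project = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto the unit ball
x_hat = projected_gradient_descent(grad, project, np.zeros(5))

The value iteration method can be sketched in the same spirit; the array layout (P[a, s, s'] for transition probabilities, R[s, a] for expected rewards) and the discount factor are again assumptions made for this illustration.

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    # Bellman update: Q(s, a) = R(s, a) + gamma * E[V(s') | s, a],
    # iterated until the value function V(s) = max_a Q(s, a) stabilizes.
    V = np.zeros(R.shape[0])
    while True:
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal values and a greedy policy
        V = V_new

Both sketches use only NumPy and are meant solely to illustrate the iterative structure of the methods; convergence guarantees and complexity are analyzed in the course itself.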

Study Goals

Upon completion of this module, students are able, inter alia, to: understand the basic principles and mechanisms of different iterative optimization methods and both recognize and reflect on the advantages and disadvantages of the introduced methods with respect to their complexity; capture the theoretical frameworks and paradigms, based on iterative optimization methods, behind selected aspects of data-driven applications; and possibly extend the introduced methods and implement them in further (data-driven) applications.

Teaching and learning methods

Development of the theoretical concepts and arguments on the blackboard, discussion of practical applications on the blackboard or slides, numerical simulations, and exercise sheets.

Media formats

Blackboard, projector presentations, exercise sheets.

Literature

Y. Nesterov, "Lectures on Convex Optimization", Springer International Publishing, 2018

S. Bubeck, "Convex Optimization: Algorithms and Complexity", Now Publishers Inc., 2015

A. Ben-Tal, A. Nemirovski, "Lectures on Modern Convex Optimization", SIAM, 2001

S. Shalev-Shwartz, "Online Learning and Online Convex Optimization", Now Publishers Inc., 2011

M. L. Puterman, "Markov Decision Processes: Discrete Stochastic Dynamic Programming", John Wiley & Sons, 2014

F. Facchinei and J. Pang, "Finite-Dimensional Variational Inequalities and Complementarity Problems", Springer, 2007

Further literature will be announced in the lecture.