Topics in Optimization for Data-Driven Applications (Lecture)

Lecturer (assistant)
Number: 0000003514
Type: Lecture
Duration: 2 SWS
Term: Winter semester 2020/21
Language of instruction: German
Position within curricula: See TUMonline
Dates: See TUMonline

Objectives

Upon completion of this module, students are able to, inter alia: understand the basic principles and mechanisms of different iterative optimization methods, and both recognize and reflect on the advantages and disadvantages of the introduced methods with respect to their complexity; grasp the theoretical frameworks and paradigms, based on iterative optimization methods, behind the selected aspects of the data-driven applications; and, where appropriate, extend the introduced methods and implement them in further (data-driven) applications.

Description

The main objective of this course is to introduce and analyze iterative optimization methods that are suitable for data-driven applications in engineering practice (e.g. machine learning, network-based applications, and technical operations management). Such applications typically need methods that can process large amounts of data, fulfill low-delay requirements, and handle a large number of controllable influencing factors. For this reason, the emphasis of this course is on first-order (resp. zero-order) methods, which require only the first derivative (resp. only the actual value) of the objective function. Specifically, the following topics are discussed in this course:

1. First-order methods in unconstrained nonlinear optimization: subgradient descent for convex functions, fundamental limits of first-order optimization methods, stochastic gradient descent for convex functions, Nesterov's accelerated gradient method, and first-order descent methods for non-convex functions.

2. First-order methods under constraints and uncertainty: projected gradient descent and mirror descent, primal-dual Lagrangian methods for constrained convex programs, adaptive subgradient methods (AdaGrad), descent methods for robust optimization, dynamic programming, Markov decision processes, Bellman's equation and the principle of optimality, the value iteration method, and approximation methods for large-scale implementation.

Moreover, macroscopic and distributed aspects of the introduced methods will be discussed in this course. Possible topics in this direction include, inter alia, descent methods in a non-cooperative game-theoretic setting and decentralized distributed first-order methods. Finally, selected applications from the following areas are covered: signal processing, supervised learning, online learning and reinforcement learning, resource management in network applications, and smart grids. A minimal sketch of one representative method from each of the two topic areas is given below.
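As a flavor of the first topic area, the following is a minimal sketch of subgradient descent with a diminishing step size on a toy non-smooth convex problem, written in Python (one of the languages named under Prerequisites). The example problem, the step-size rule, and all identifiers are illustrative assumptions, not course material.

    import numpy as np

    def subgradient_descent(f, subgrad, x0, steps=500):
        # Subgradient steps with the classic diminishing step size 1/sqrt(t).
        # Such steps need not decrease f monotonically, so the best iterate
        # seen so far is tracked and returned.
        x = np.asarray(x0, dtype=float)
        best_x, best_f = x.copy(), f(x)
        for t in range(1, steps + 1):
            x = x - subgrad(x) / np.sqrt(t)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        return best_x

    # Toy problem: minimize the non-smooth convex l1 distance to a point c.
    c = np.array([2.0, -1.0])
    f = lambda x: np.abs(x - c).sum()
    subgrad = lambda x: np.sign(x - c)  # a valid subgradient of f at x
    print(subgradient_descent(f, subgrad, np.zeros(2)))  # approaches c

From the second topic area, here is an equally illustrative sketch of the value iteration method for a tiny Markov decision process; the transition and reward numbers are invented purely for illustration.

    import numpy as np

    def value_iteration(P, R, gamma=0.9, tol=1e-8):
        # Fixed-point iteration on Bellman's optimality equation:
        # V(s) = max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) V(s') ].
        # P[a] is the |S| x |S| transition matrix and R[a] the reward vector
        # for action a; returns the optimal values and a greedy policy.
        n_actions, n_states = len(P), P[0].shape[0]
        V = np.zeros(n_states)
        while True:
            Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=0)
            V = V_new

    # Tiny 2-state, 2-action example with invented numbers.
    P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # transitions under action 0
         np.array([[0.1, 0.9], [0.9, 0.1]])]   # transitions under action 1
    R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
    V, policy = value_iteration(P, R)
    print(V, policy)

Both snippets assume only NumPy; the step-size rule and the stopping tolerance are standard textbook choices rather than prescriptions from this course.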

Prerequisites

Basic knowledge of linear algebra, analysis, probability theory, convex optimization, and a high-level programming language, such as MATLAB, Python, or R. Some knowledge of dynamical systems, machine learning, and statistics is helpful but not necessary.

Teaching and learning methods

Development of the theoretical concepts and arguments on the blackboard, discussion of practical applications on the blackboard or slides, numerical simulations, and exercise sheets.

Examination

The grading is based on an oral or written examination (depending on the number of participants).

Recommended literature

Y. Nesterov, "Lectures on Convex Optimization", Springer International Publishing, 2018.
S. Bubeck, "Convex Optimization: Algorithms and Complexity", Now Publishers Inc., 2015.
A. Ben-Tal and A. Nemirovski, "Lectures on Modern Convex Optimization", SIAM, 2001.
S. Shalev-Shwartz, "Online Learning and Online Convex Optimization", Now Publishers Inc., 2011.
M. L. Puterman, "Markov Decision Processes: Discrete Stochastic Dynamic Programming", John Wiley & Sons, 2014.
F. Facchinei and J. Pang, "Finite-Dimensional Variational Inequalities and Complementarity Problems", Springer, 2007.
Further literature will be announced in the lecture.
