- Lothar Reichel – TBA
- Jim Nagy: “Regularization methods in image processing”
In these lectures we provide some basic background on regularization methods, including filtering based on the singular (and spectral) value decomposition (SVD). Since problems in image processing are large scale, the filtering methods are not always applicable, so we describe situations where they can be used. We also discuss state-of-the-art iterative methods for cases where SVD methods cannot be used. However, since other lecturers will discuss mathematical and algorithmic details, the focus of this particular set of lectures will be primarily on software, so that we can illustrate how various regularization methods perform on a variety of test problems. It would be beneficial for students to have a laptop with MATLAB so that they can get hands-on experience solving large-scale inverse problems in imaging applications. We will provide MATLAB software that can be used to generate and solve a variety of test problems.
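The SVD filtering idea mentioned above can be sketched in a few lines. The following is a minimal NumPy illustration (not the MATLAB software referred to in the abstract); the problem size, blur width `h`, noise level, and regularization parameter `lam` are all illustrative choices. It builds a small 1-D Gaussian blurring problem, then compares the naive SVD solution with a Tikhonov-filtered one, where each spectral component is damped by the filter factor s_i^2/(s_i^2 + lam^2):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
t = np.arange(n)

# Toy ill-posed problem: 1-D Gaussian blur matrix with row normalization
# (h, n, and the noise level are illustrative, not from the lectures)
h = 2.0
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * h ** 2))
A /= A.sum(axis=1, keepdims=True)

# True signal (a box) and noisy blurred data
x_true = np.zeros(n)
x_true[20:40] = 1.0
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# SVD filtering: x = sum_i phi_i * (u_i^T b / s_i) * v_i
U, s, Vt = np.linalg.svd(A)
coeffs = U.T @ b

lam = 1e-2
phi = s ** 2 / (s ** 2 + lam ** 2)     # Tikhonov filter factors
x_reg = Vt.T @ (phi * coeffs / s)      # filtered (regularized) solution
x_naive = Vt.T @ (coeffs / s)          # unfiltered: noise divided by tiny s_i
```

The small singular values of the blur amplify the noise enormously in `x_naive`, while the filter factors `phi` damp exactly those components, which is the basic mechanism behind all SVD-based filtering methods.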
- Raymond Chan: “Variational Methods for Missing Data Recovery in Imaging”
In many practical problems in image processing, the observed data sets are incomplete in the sense that features of interest in the images are partially missing or corrupted by noise. The recovery of missing data from such incomplete observations is an essential part of any image processing procedure, whether the final image is used for visual interpretation or for automatic analysis.
The first part of the lectures will cover various variational methods for recovering images from incomplete data and the ways to solve the resulting optimization problems. The second part will touch on various applications in image processing such as inpainting, impulse noise removal, segmentation, ground-based astronomy, hyperspectral imaging, and super-resolution image reconstruction.
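To give a concrete flavor of the variational viewpoint, here is a minimal NumPy sketch of one of the simplest such models (a quadratic, H^1-type regularizer rather than any specific model from the lectures; the mask, signal, and parameter `lam` are illustrative). It recovers a 1-D signal with a gap of missing samples by minimizing a data-fidelity term on the observed samples plus a smoothness penalty, which reduces to one linear system:

```python
import numpy as np

# Missing-data recovery: minimize
#   sum_i m_i (u_i - f_i)^2 + lam * sum_i (u_{i+1} - u_i)^2
# where m_i = 1 on observed samples and 0 on missing ones.
# (Illustrative toy model, not a specific method from the lectures.)

n = 50
x = np.linspace(0.0, 1.0, n)
f_true = np.sin(2 * np.pi * x)

mask = np.ones(n)
mask[20:30] = 0.0          # samples 20..29 are missing
f = f_true * mask          # observed data (zeros in the gap)

# Forward-difference matrix D, so (D u)_i = u_{i+1} - u_i
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

# Normal equations: (diag(mask) + lam * D^T D) u = diag(mask) f
lam = 1e-2
u = np.linalg.solve(np.diag(mask) + lam * (D.T @ D), mask * f)
```

In the gap the data term vanishes and the penalty forces the discrete Laplacian to zero, so the missing samples are filled in by interpolation; richer regularizers (total variation, etc.) follow the same pattern with nonquadratic penalties.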
- Jean-Christophe Pesquet: “Proximal splitting methods in image processing”
Proximal methods are grounded in the use of the proximity operator, which constitutes a natural extension of the projection onto a convex set. Although simple, this concept turns out to be fundamental: it provides a unifying view of existing algorithms in convex optimization and guides the design of new ones, some of which are applicable to the nonconvex case. The main advantage of proximal algorithms is their ability to solve large-scale optimization problems involving possibly nonsmooth functions, such as sparsity measures or indicator functions of constraint sets. This course aims at introducing the proximity operator and the principles of proximal algorithms. Various minimization algorithms that split an intricate objective function into a sum of easy-to-handle terms will be presented. Applications to image recovery and machine learning will be described.
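A classic concrete instance of these ideas is the forward-backward (proximal-gradient) iteration for the l1-regularized least-squares problem, where the proximity operator of the l1 norm is componentwise soft-thresholding. The NumPy sketch below is illustrative only (problem sizes, `lam`, and the iteration count are arbitrary choices, not from the course):

```python
import numpy as np

def prox_l1(x, t):
    """Proximity operator of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy sparse recovery problem: min_x 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(1)
m, n = 40, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 7, 12]] = [1.0, -2.0, 1.5]
b = A @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of the gradient

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)             # forward (gradient) step on the smooth term
    x = prox_l1(x - step * grad, step * lam)   # backward (proximal) step on the l1 term
```

The splitting is visible in each iteration: the smooth quadratic term is handled by its gradient, and the nonsmooth l1 term only through its (cheap, closed-form) proximity operator, which is the general pattern of the proximal splitting algorithms discussed in the course.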