NODAI
Numerical optimization for data analysis and imaging I

Invited Session
Time Slot: Wednesday Morning
Room: 003
Chair: Simone Rebegoldi

A new converging approach for learnt optimization with applications to imaging

Time: 9:00

Elena Morotti (Università di Bologna), Davide Evangelista, Elena Loli Piccolomini

Image processing and image reconstruction are active research fields that have recently been revolutionized by the advent of convolutional neural networks, as deep learning-based schemes often yield results superior to those of classical optimization approaches. However, their ability to actually compute the solution of the inverse problem is still debated in the literature. This talk presents a new scheme, called RISING, which embeds deep learning tools in an optimization approach. Numerical results and theoretical aspects will be discussed, showing that RISING preserves the convergence properties of iterative solvers while also exploiting the accuracy and flexibility of data-driven approaches and meeting the constraints imposed by large-scale applications.
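
As a purely illustrative sketch of this kind of hybrid scheme (not the actual RISING algorithm, whose details are given in the talk), one can picture a few early-stopped iterations of a classical solver followed by a learned correction; the forward operator A, the step size, and the trained network below are all placeholder assumptions.

    import numpy as np

    def hybrid_reconstruction(A, y, network, k_iters=10, step=1e-2):
        """Illustrative two-stage scheme: a few iterations of a classical
        solver produce a rough iterate, which a trained network refines.
        A is the forward operator (matrix), y the measured data, and
        network a trained corrector (hypothetical callable)."""
        x = np.zeros(A.shape[1])
        # Stage 1: truncated gradient descent on the least-squares fidelity
        for _ in range(k_iters):
            x = x - step * A.T @ (A @ x - y)
        # Stage 2: learned correction of the early-stopped iterate
        return network(x)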

A spectral PALM algorithm for Dictionary Learning

Time: 9:20

Margherita Porcelli (University of Bologna), Domitilla Brandoni, Valeria Simoncini

Dictionary Learning (DL) is one of the leading sparsity-promoting techniques in the context of image classification, where the “dictionary” matrix D of images and the sparse matrix X are determined so as to represent a redundant image dataset. The resulting constrained optimization problem is nonconvex and nonsmooth, posing several computational challenges for its solution. To preserve multidimensional data features, various tensor DL formulations have been introduced, adding to the problem complexity. We present a new alternating algorithm for the solution of the DL problem in both the matrix and tensor frameworks; in the latter case, a new formulation based on Tensor-Train decompositions is also introduced. The new method belongs to the Proximal Alternating Linearized Minimization (PALM) algorithmic family, with the inclusion of second-order information to enhance efficiency. The numerical performance of the new method will be discussed on the image classification of several benchmark datasets.
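
To fix ideas, a minimal PALM-type alternation for a matrix formulation such as min_{D,X} 0.5||Y - DX||_F^2 + lam ||X||_1, with dictionary columns constrained to the unit ball, might look as follows; the plain Lipschitz-based step sizes below deliberately omit the spectral second-order information that distinguishes the authors' method.

    import numpy as np

    def soft_threshold(Z, t):
        # Proximal operator of t * ||.||_1
        return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

    def palm_dl(Y, n_atoms, lam=0.1, n_iter=100, seed=0):
        """Basic PALM for min_{D,X} 0.5||Y - DX||_F^2 + lam||X||_1,
        with columns of D projected onto the unit ball (illustrative)."""
        rng = np.random.default_rng(seed)
        m, n = Y.shape
        D = rng.standard_normal((m, n_atoms))
        D /= np.linalg.norm(D, axis=0)
        X = np.zeros((n_atoms, n))
        for _ in range(n_iter):
            # Proximal linearized step in X (step = 1 / Lipschitz constant)
            Lx = np.linalg.norm(D, 2) ** 2 + 1e-12
            X = soft_threshold(X - (D.T @ (D @ X - Y)) / Lx, lam / Lx)
            # Proximal linearized step in D, then project columns to unit ball
            Ld = np.linalg.norm(X, 2) ** 2 + 1e-12
            D = D - ((D @ X - Y) @ X.T) / Ld
            D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)
        return D, X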

Automatic parameter selection for the TGV regularizer in image restoration under Poisson noise

Time: 9:40

Monica Pragliola (University of Naples Federico II), Daniela di Serafino

We address the image restoration problem under Poisson noise corruption. The Kullback-Leibler divergence, typically adopted as the data fidelity term in the variational framework in this case, is coupled with the second-order Total Generalized Variation (TGV). The TGV regularizer is known to preserve both smooth and piecewise constant features in the image; however, its behavior depends on a suitable setting of the parameters arising in its expression. We propose a hierarchical Bayesian formulation of the original problem, coupled with a Maximum A Posteriori estimation approach, according to which the unknown image and parameters can be jointly and automatically estimated by minimizing a given cost functional. The minimization problem is tackled via a scheme based on the Alternating Direction Method of Multipliers, which also incorporates a procedure for the automatic selection of the regularization parameter by means of a popular discrepancy principle. Computational results show the effectiveness of our proposal.
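
For reference, a standard discrete form of the underlying variational model reads as follows, where H is the forward operator, \mathcal{E} the symmetrized gradient, and \alpha_0, \alpha_1 the TGV weights whose automatic selection is the point of the work; the exact model in the talk may differ in details (e.g., a background emission term):

    \min_{u \geq 0} \; \mathrm{KL}(Hu;\, y) + \mathrm{TGV}_{\alpha}^{2}(u),
    \qquad
    \mathrm{KL}(z;\, y) = \sum_{i} \Big( z_i - y_i + y_i \log \frac{y_i}{z_i} \Big),
    \qquad
    \mathrm{TGV}_{\alpha}^{2}(u) = \min_{w} \; \alpha_1 \|\nabla u - w\|_1 + \alpha_0 \|\mathcal{E} w\|_1.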

Iterative regularization for convex regularizers

Time: 10:00

Cesare Molinari (UNIGE), Mathurin Massias, Lorenzo Rosasco

Iterative regularization exploits the implicit bias of an optimization algorithm to regularize ill-posed problems. Constructing algorithms with such built-in regularization mechanisms is a classic challenge in inverse problems, but also in modern machine learning, where it provides both a new perspective on the analysis of algorithms and significant speed-ups compared to explicit regularization. In this talk, we propose and study the first iterative regularization procedure able to handle biases described by nonsmooth and non-strongly convex functionals, prominent in low-complexity regularization. Our approach is based on a primal-dual algorithm, for which we analyze convergence and stability properties, even in the case where the original problem is infeasible. The general results are illustrated in the special case of sparse recovery with the ℓ1 penalty. Our theoretical results are complemented by experiments showing the computational benefits of our approach.
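
A textbook instance of such a procedure, in the sparse recovery special case, is a Chambolle-Pock-type primal-dual iteration for min ||x||_1 subject to Ax = y, where the iteration count plays the role of the regularization parameter; the sketch below is a standard variant under these assumptions, not the authors' exact algorithm.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of t * ||.||_1
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def primal_dual_l1(A, y, n_iter=500):
        """Primal-dual iterations for min ||x||_1 s.t. Ax = y.
        Stopping the loop early acts as the regularization."""
        L = np.linalg.norm(A, 2)
        tau = sigma = 0.99 / L          # step sizes with tau*sigma*L^2 < 1
        x = np.zeros(A.shape[1])
        x_bar = x.copy()
        p = np.zeros(A.shape[0])
        for _ in range(n_iter):
            p = p + sigma * (A @ x_bar - y)                 # dual ascent on the constraint
            x_new = soft_threshold(x - tau * A.T @ p, tau)  # prox of ||.||_1
            x_bar = 2 * x_new - x                           # extrapolation
            x = x_new
        return x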

Stochastic trust-region methods with subsampling for finite-sum minimization problems

Time: 10:20

Simone Rebegoldi (Università di Firenze), Stefania Bellavia, Natasa Krejic, Benedetta Morini

We propose some novel stochastic first-order trust-region methods with function and gradient approximations for solving finite-sum minimization problems. At each iteration, the proposed methods approximate the function and the gradient by sampling, employing adaptive sample sizes that may vary from one iteration to the next. The core idea of these methods lies in the definition of an appropriate merit function, which combines the subsampled objective function with a measure of the distance of the sample size from its maximum value. The trust-region step is then accepted or rejected according to whether a sufficient decrease condition on this merit function holds. We investigate the convergence in probability of these methods, showing that the gradient converges to zero with probability one under appropriate probabilistic requirements on the function and gradient estimates. Furthermore, we provide some worst-case complexity results for achieving a given accuracy in the gradient norm. We also report numerical results on nonconvex binary classification problems, which confirm that the proposed algorithms achieve an adequate accuracy well before the maximum sample size is reached, and without requiring a problem-dependent tuning of the involved parameters.
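
The acceptance test described above might be sketched as follows; the precise form of the merit function, the penalty weight mu, and the per-term evaluation interface fun_i are hypothetical simplifications, not the authors' definitions.

    import numpy as np

    def merit(fun_i, x, sample, N_max, mu=1.0):
        # Subsampled objective plus a penalty on the gap between the current
        # sample size and the full one (illustrative form of the merit function)
        f_S = np.mean([fun_i(x, i) for i in sample])
        return f_S + mu * (N_max - len(sample)) / N_max

    def accept_step(fun_i, x, s, sample, new_sample, N_max, pred_decrease, eta=1e-4):
        # Sufficient-decrease test: the trial point (evaluated on a possibly
        # enlarged sample) is accepted only if the actual merit reduction is
        # at least a fraction eta of the predicted model decrease
        ared = merit(fun_i, x, sample, N_max) - merit(fun_i, x + s, new_sample, N_max)
        return ared >= eta * pred_decrease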