NODAI
Numerical optimization for data analysis and imaging II

Invited Session
Time Slot: Wednesday Morning
Room: 003
Chair: Benedetta Morini

Bacterial Colonies Segmentation via Variational Approaches

Time: 11:30

Ambra Catozzi (Università degli Studi di Parma), Benfenati Alessandro

Digital image processing can be a crucial tool to automate and improve the analysis of plate cultures of microorganisms such as bacteria, molds and fungi. After a sample is taken from the environment under investigation, the main objective is to study its development into potential colonies within a specific growth medium. The number of plates to examine is usually large, hence the costs in terms of time and resources grow fast; we propose to combine image segmentation techniques with mathematical tools, such as the Mumford-Shah or Blake-Zisserman functionals (see "Numerical minimization of a second-order functional for image segmentation", M. Zanetti, V. Ruggiero, M. Miranda, 2016), to develop an automatic colony detector. A possible dataset consists of RGB images of plates; a pre-processing phase on the original data crops the uninteresting parts of each image and enhances the contrast without losing any significant information or properties, thereby reducing the computational time. A variational approach then highlights the edges in the image, identifying the colonies of microorganisms against the whole plate. Furthermore, the performance of the proposed approach is evaluated on a set of real images against state-of-the-art techniques (see "A comprehensive review of image analysis methods for microorganism counting: from classical image processing to deep learning approaches", J. Zhang et al., 2021).
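
For context, one standard first-order form of the Mumford-Shah segmentation functional, written here in our own notation (the talk may use a different weak formulation), is:

\mathrm{MS}(u, K) \;=\; \alpha \int_{\Omega} (u - g)^2 \,\mathrm{d}x \;+\; \beta \int_{\Omega \setminus K} |\nabla u|^2 \,\mathrm{d}x \;+\; \gamma \, \mathcal{H}^1(K),

where g is the observed plate image, u its piecewise-smooth approximation, K the edge set separating colonies from background, and \mathcal{H}^1 the one-dimensional Hausdorff measure. The Blake-Zisserman functional extends this model with a second-order term penalising the Hessian of u away from jump and crease sets; it is the second-order functional minimised in the cited Zanetti-Ruggiero-Miranda paper.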

Learning regularized Gauss-Newton methods

Time: 11:50

Francesco Colibazzi (University of Bologna), Lazzaro Damiana, Morigi Serena, Samorè Andrea

We consider variational networks for a class of nonlinear, ill-posed least squares inverse problems. These problems are addressed by regularized Gauss-Newton type optimization algorithms in which the regularization is learned by a neural network. Two different data-driven approaches are investigated. First, we present a learned regularizer integrated into an unrolled Gauss-Newton network. As an alternative approach, we derive a proximal regularized quasi-Newton (PRQN) method and unfold the PRQN into a deep network consisting of a cascade of multiple proximal mappings; the proximal operator is then learned directly through a variable metric denoiser network. As a practical application, we show how our methods have been successfully applied to the parameter identification problem in elliptic PDEs, such as the nonlinear Electrical Impedance Tomography inverse problem.
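
As a rough illustration of the first approach, one iteration of a regularized Gauss-Newton update with a learned regularization gradient might look as follows. This is a minimal NumPy sketch under our own assumptions; F, jac and the network reg_grad are hypothetical placeholders, not the authors' code, and the Hessian of the learned regularizer is crudely approximated by the identity.

import numpy as np

def regularized_gn_step(x, y, F, jac, reg_grad, alpha=1e-2):
    # One regularized Gauss-Newton update for
    #   min_x 0.5 * ||F(x) - y||^2 + alpha * R(x),
    # where grad R is supplied by a learned network (here any callable);
    # in an unrolled network each layer would carry its own reg_grad.
    J = jac(x)                                # Jacobian of F at x, shape (m, n)
    r = F(x) - y                              # current residual, shape (m,)
    H = J.T @ J + alpha * np.eye(x.size)      # GN Hessian approx. (Hess R ~ I)
    g = J.T @ r + alpha * reg_grad(x)         # gradient of the regularized objective
    return x - np.linalg.solve(H, g)          # Newton-type update

# Toy usage with an element-wise forward map and a quadratic stand-in regularizer:
F = lambda x: x**2
jac = lambda x: np.diag(2 * x)
reg_grad = lambda x: x                        # stand-in for the learned network
x_new = regularized_gn_step(np.ones(3), np.full(3, 4.0), F, jac, reg_grad)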

A line search based proximal stochastic gradient algorithm with dynamical variance reduction for finite-sum optimization problems

Time: 12:10

Giorgia Franchini (UNIMORE), Porta Federica, Ruggiero Valeria, Trombini Ilaria

Many optimization problems arising from machine learning applications can be cast as the minimization of a regularized finite sum. When dealing with large-scale machine learning problems, computing the full gradient of the finite-sum functional can be prohibitively expensive. For this reason, proximal stochastic gradient methods have been extensively studied in the optimization literature over the last decades. It is well known that a proper strategy to select the hyperparameters of the method (i.e., the parameters selected a priori), in particular the steplength and the mini-batch size, is needed to guarantee convergence properties and good practical performance. In this work we develop a proximal stochastic gradient algorithm based on two main ingredients: the steplength is automatically selected by means of a suitable line search procedure, and the variance of the stochastic gradients is dynamically reduced along the iterations through an adaptive subsampling strategy. No periodic computation of the full gradient is required. An extensive numerical experimentation in training a binary classifier shows that the proposed approach is robust with respect to the hyperparameter setting and competitive with state-of-the-art methods.
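
A schematic sketch of one such iteration, under our own simplifying assumptions: an Armijo-type backtracking line search on the sampled smooth term, and a variance test that enlarges the mini-batch when the sample variance of the stochastic gradients dominates. Names, thresholds and the specific tests are illustrative, not the authors' algorithm.

import numpy as np

def sampled_loss(w, X, y, loss_fi, idx):
    # Mini-batch estimate of the smooth finite-sum term.
    return np.mean([loss_fi(w, X[i], y[i]) for i in idx])

def prox_sg_step(w, X, y, loss_fi, grad_fi, prox_reg, batch,
                 step0=1.0, c=1e-4, shrink=0.5, theta=1.0):
    n = len(y)
    idx = np.random.choice(n, size=min(batch, n), replace=False)
    grads = np.array([grad_fi(w, X[i], y[i]) for i in idx])
    g = grads.mean(axis=0)                    # stochastic gradient estimate
    # Variance test (adaptive subsampling): grow the mini-batch for the
    # next iteration when the sample variance dominates the gradient norm.
    if grads.var(axis=0).sum() > theta * np.linalg.norm(g) ** 2:
        batch = min(2 * batch, n)
    # Armijo-type backtracking line search on the sampled smooth part.
    t, f0 = step0, sampled_loss(w, X, y, loss_fi, idx)
    while (t > 1e-10 and
           sampled_loss(w - t * g, X, y, loss_fi, idx)
           > f0 - c * t * np.linalg.norm(g) ** 2):
        t *= shrink
    # Proximal step on the (possibly non-smooth) regularizer.
    return prox_reg(w - t * g, t), batch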

Iterative regularisation and modular-proximal algorithms in L^{p(·)} spaces for imaging problems

Time: 12:30

Marta Lazzaretti (DIMA, Università di Genova – Lab I3S, UCA CNRS INRIA), Estatico Claudio, Calatroni Luca

Note: Recipient of the Verizon Connect Italy grant

Solving imaging inverse problems by means of a variational approach requires some crucial choices: the modelling of the data-fidelity term, whose structural form depends on the noise statistics; the use of a suitable regularisation (penalty) term, typically promoting sparsity of the solution in some sense; and, when the formulation is given in an infinite-dimensional framework, the solution space. The latter choice is often overlooked due to the technical challenges it poses with respect to the underlying metric and topological properties. However, it might have a critical effect on the quality of the computed solution due to its 'implicit' regularisation properties. We focus our attention on the non-standard choice of variable exponent Lebesgue spaces L^{p(·)} as solution spaces. Such Banach spaces are defined in terms of a point-wise variable exponent inducing a specific shift-variant norm and intrinsic space-variant properties. Choosing L^{p(·)} as a solution space thus corresponds to promoting a spatially adaptive solution smoothness. We first show how this modelling improves the accuracy of the reconstructions with respect to standard L^2 spaces and Lebesgue spaces with a constant exponent, considering at first the so-called dual method, i.e. an iterative regularisation (Landweber) algorithm suited to the Banach space scenario, minimising the data term.

We then consider more complex, structured problems, defined in terms of the sum of a smooth, convex fidelity and a proper, l.s.c., convex, typically non-smooth penalty. Optimisation problems of this form are frequently encountered in signal and image processing and are usually tackled with forward-backward splitting algorithms, where a gradient descent step with respect to the smooth part is alternated with a proximal step associated with the non-smooth part. The use of analogous strategies for minimising composite functionals in L^{p(·)} is more challenging due to the Banach setting, which requires new definitions of both the gradient and the proximal steps because of the lack of the Riesz identification between the solution space and its dual. To this end, we propose a proximal gradient algorithm where the gradient step is performed in the dual space, and the proximal step, rather than depending on the natural but non-separable norm, is defined in terms of its separable modular function, which allows for the efficient computation of the algorithmic iterates, and by means of a Bregman-like distance, which is better suited to the Banach space geometry than the usual squared-norm distance. We analyse the algorithm's convergence rate in function values, showing its dependence on the smoothness of both the functional and the space. Numerical tests on exemplar deconvolution and mixed noise removal problems highlight the flexibility of the L^{p(·)} setting, showing improved reconstruction accuracy, faster convergence and reduced computational cost of the proposed algorithm in comparison with analogous ones defined in standard Lebesgue spaces.
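
For the reader's convenience, the modular function of L^{p(·)} and the modular-based proximal step can be sketched as follows, in our notation and under one common normalisation of the modular (not taken verbatim from the talk):

\rho_{p(\cdot)}(u) \;=\; \int_{\Omega} \frac{1}{p(x)} \, |u(x)|^{p(x)} \,\mathrm{d}x,
\qquad
\operatorname{prox}^{\rho}_{\lambda \mathcal{R}}(z) \;\in\; \arg\min_{u} \; \lambda \, \mathcal{R}(u) + \rho_{p(\cdot)}(u - z).

Here the separable modular \rho_{p(\cdot)} replaces the non-separable Luxemburg norm in the proximal step, so the minimisation decouples point-wise, while the gradient step is carried out in the dual space via duality maps, compensating for the missing Riesz identification.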