From 30th June to 6th July 2019, the Department of Information Engineering (DINFO) of the University of Florence will organize a Summer School on Optimization, Big Data and Applications (OBA).
The School will be held in Veroli (a beautiful town in central Italy) and is intended as an advanced course for Master's students, PhD students, and postdocs. Participants should have a sound background in optimization and machine learning. The number of participants is limited to 50. The main objective of the School is to present machine learning models and novel optimization algorithms for big-data problems and applications. Lectures will be given by world-leading researchers in optimization and machine learning.
The School fee is 100€. Accommodation expenses for six nights are 150€.
The Summer School will be held in Veroli, a town in central Italy about 80 km from Rome. Accommodation for all participants has been reserved at Hotel Relais Filonardi. The price for six nights (arrival June 30, departure July 6) is 150€; booking is not necessary.
Veroli is served by Rome's Fiumicino and Ciampino airports. The best way to reach Veroli is via Roma Termini railway station.
A shuttle bus (running every 2-3 hours) will be organized from Roma Termini railway station to Hotel Relais Filonardi on the arrival day (Sunday, June 30th) and from Hotel Relais Filonardi to Roma Termini railway station on the departure day (Saturday, July 6th).
The shuttle bus between Roma Termini station and Hotel Relais Filonardi is free of charge.
Participants must communicate their arrival and departure times at the airport or at Roma Termini railway station by sending an e-mail to email@example.com.
If you need any help, simply contact us at firstname.lastname@example.org and we will advise you on the best way to reach Veroli.
Online learning is an abstract mathematical framework for the study of sequential decision problems. In this course, we consider online algorithms that construct predictive models (for classification or regression) by going through the data points sequentially, using each new data point to adjust the current model. Typically, this adjustment is "local", as it only involves the current model and the new data point. This has two main advantages: first, online algorithms typically scale well with the number of data points; second, they easily adapt to the properties of the data sequence on which they are run. As a consequence, popular stochastic optimization methods for large-scale learning applications, such as stochastic gradient descent, are often built on online learning algorithms. In this course, we introduce the framework of online convex optimization, the standard model for the design and analysis of online learning algorithms. After defining the notions of regret and regularization, we describe and analyze some of the most important online algorithms, including Mirror Descent, AdaGrad, and Online Newton Step. The last part of the course is concerned with models of online learning with partial feedback. In particular, we describe and analyze algorithms for the multiarmed bandit model.
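To give a flavour of this setting, here is a minimal sketch (illustrative only, not part of the course material) of online gradient descent for squared loss: the model predicts before seeing each label, then makes a local update using only the current model and the new data point. All constants and the toy data stream below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data stream: y = <w_true, x> + noise, revealed one point at a time.
d, T = 5, 2000
w_true = rng.normal(size=d)
w = np.zeros(d)              # current model
eta = 0.1                    # base learning rate

total_loss = 0.0
for t in range(1, T + 1):
    x = rng.normal(size=d)
    y = w_true @ x + 0.01 * rng.normal()
    pred = w @ x                      # predict before the label is revealed
    total_loss += 0.5 * (pred - y) ** 2
    grad = (pred - y) * x             # "local" update: only w and (x, y)
    w -= eta / np.sqrt(t) * grad      # decaying step size, standard for OGD

print(total_loss / T)
```

The average per-round loss shrinks as the model approaches the data-generating one, which is the regret-minimization behaviour the course formalizes.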
Continuous optimization has long been central to machine learning. In these lectures, we are interested in continuous problems with a particular "large-scale" structure that prevents us from using generic optimization toolboxes or plain first- or second-order gradient descent methods. In such a context, all of these tools suffer either from too high a cost per iteration, or from too slow convergence, or both, which has motivated the machine learning community to develop dedicated algorithms. We will introduce several such techniques, focusing in particular on stochastic optimization, which plays a crucial role in applying machine learning techniques to large datasets.
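The per-iteration cost argument can be sketched on a toy finite-sum least-squares problem (all sizes and step sizes below are illustrative assumptions, not the lecturers' material): stochastic gradient descent touches one data point per step, so each iteration costs O(d) rather than the O(nd) of a full-gradient step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite-sum problem: min_w (1/n) * sum_i 0.5 * (a_i @ w - b_i)^2
n, d = 10000, 20
A = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
b = A @ w_star               # noiseless targets, so w_star is the minimizer

w = np.zeros(d)
for _ in range(5000):
    i = rng.integers(n)              # one random sample per step: O(d) cost,
    g = (A[i] @ w - b[i]) * A[i]     # vs O(n*d) for the full gradient
    w -= 0.01 * g

print(np.linalg.norm(w - w_star))
```

Each of the 5000 steps reads a single row of A, yet the iterate still converges to the minimizer of the full sum.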
Learning a model from data is usually formulated as an optimisation problem, where a loss function (such as the classification error) is minimised over the parameters of the model on a training set. The optimisation is usually carried out using gradient-based algorithms. However, some models of interest involve objective functions or constraints that are non-differentiable or involve discrete parameters, such as autoencoders using binary codes, decision trees, or neural nets whose parameters are quantised. We will describe a generic approach to solving this type of problem, based on introducing auxiliary variables and constraints so that the optimisation takes place in an augmented space but becomes simpler. Applying a penalty method and alternating optimisation to this formulation results in steps that often exhibit high parallelism and take the form of known problems which can be solved by existing algorithms. We will describe three cases where this is applicable: 1) "nested systems" involving mathematically nested functions, such as deep nets; 2) compressing machine learning models, in particular deep nets, so that they optimise a given loss while satisfying constraints on memory, execution time or energy; and 3) learning decision trees and their combination with neural nets.
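The auxiliary-variable idea can be sketched on a toy quantisation problem (the binarisation constraint, sizes, and penalty schedule below are assumptions for illustration): least squares with weights constrained to {-1, +1}, split via an auxiliary copy z into a differentiable w-step with a closed form and a simple projection z-step, alternated under an increasing penalty.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: fit least squares with weights constrained to {-1, +1}^d.
n, d = 200, 8
A = rng.normal(size=(n, d))
w_true = rng.choice([-1.0, 1.0], size=d)
b = A @ w_true               # noiseless, so w_true is recoverable

w = np.zeros(d)
z = np.ones(d)               # auxiliary (quantised) copy of the weights
for mu in [0.01, 0.1, 1.0, 10.0, 100.0]:    # increasing penalty schedule
    for _ in range(20):
        # w-step: min ||A w - b||^2 + mu ||w - z||^2 (ridge-like, closed form)
        w = np.linalg.solve(A.T @ A + mu * np.eye(d), A.T @ b + mu * z)
        # z-step: project w onto the constraint set {-1, +1}^d
        z = np.sign(w)

print(z)
```

As mu grows, w is pulled onto its quantised copy z, and the final z satisfies the discrete constraint exactly while fitting the data.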
Convolutional Neural Networks (CNNs) are the model of choice for state-of-the-art visual recognition in images and video. In a series of four lectures we will look at how these models work, investigate some state-of-the-art architectures, and see first-hand the architectural, algorithmic, and theoretical tricks and techniques needed to fit their parameters to training data and apply them effectively to our own recognition problems. This will lead us to a discussion of the advantages and disadvantages of various supervision regimes, from fully-supervised learning, through few-shot learning, and finally to unsupervised learning using variational and adversarial models. Our tour of CNNs for visual recognition will conclude with a discussion of generalization and the problems associated with lifelong, continual learning in an open world. These lectures will have something for everyone: theoretical foundations and motivations, practical and hands-on examples of CNNs, and a panoramic snapshot of the current state-of-the-art.
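As a minimal, self-contained illustration of the core operation a CNN layer performs (a sketch for intuition, not course material; the edge-detector kernel and toy image are assumptions), here is a plain 2-D cross-correlation: the kernel slides over the image and responds where the local pattern matches.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector responds where intensity jumps left to right.
image = np.zeros((5, 5))
image[:, 3:] = 1.0                 # dark left half, bright right half
kernel = np.array([[-1.0, 1.0]])   # finite-difference filter
print(conv2d(image, kernel))
```

The output is nonzero only along the column where the intensity changes; a trained CNN learns many such kernels from data instead of hand-designing them.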
| | Monday, July 1st | Tuesday, July 2nd | Wednesday, July 3rd | Thursday, July 4th | Friday, July 5th |
| --- | --- | --- | --- | --- | --- |
| 9:00 - 10:30 | Lectures | Lectures | Lectures | Lectures | Lectures |
| 11:00 - 12:30 | | | | | |
| 15:00 - 16:30 | Lectures | Lectures | Excursion | Lectures | OBA Award |
| 17:00 - 18:30 | | | | | |
| 20:00 - 23:30 | Social dinner * | | | | |
* The additional price for the social dinner is 20€.