I was invited to Vienna by my friend and colleague Manuel Bomze for the OGDA 2018 workshop. It was an excellent workshop with top-level talks. My role was to open the panel discussion on the theme: “Short-lived hypes and failures in OR”.

This was a really difficult task: in our field, in my opinion, apart from outright wrong papers it is really difficult to label a theory or a computational approach as a failure, as we have seen many times that forgotten research can be revisited, re-considered, and give rise to unexpected improvements. One example for all: after Fiacco and McCormick's work on sequential unconstrained minimization techniques, much attention was given to penalty and barrier methods, which were later abandoned, mostly because of their numerical difficulties, only to be re-discovered much later as the basis for interior point methods.
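
For readers who never met SUMT: the idea of a barrier method can be sketched in a few lines. The toy scheme below (the problem, the schedule of barrier parameters, and the solver choice are purely illustrative, not anything from the original papers) minimizes a quadratic subject to a linear inequality by driving a logarithmic barrier parameter to zero:

```python
import numpy as np
from scipy.optimize import minimize

# toy SUMT-style log-barrier scheme:
# minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2   subject to   x1 + x2 <= 2
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: 2.0 - x[0] - x[1]              # feasible iff g(x) > 0

def barrier(x, mu):
    # the log term blows up at the boundary, keeping iterates strictly interior
    return f(x) - mu * np.log(g(x)) if g(x) > 0 else np.inf

x = np.array([0.0, 0.0])                     # strictly feasible start
for mu in [1.0, 0.1, 0.01, 0.001]:           # drive the barrier parameter to zero
    x = minimize(barrier, x, args=(mu,), method="Nelder-Mead").x

print(x)    # tends to the constrained optimum (1.5, 0.5) as mu shrinks
```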

So, looking for hypes and failures (mostly, failures), I was naturally led to consider my own research. And I found this picture:

This was taken, if I remember well, in Szeged, 1995 (or was it Sopron, 1990?). Many well-known friends and colleagues can be seen in the picture. Among them, Manuel Bomze on the right, close to Marco Locatelli, while Reiner Horst and Chris Floudas are on the left; many others can be spotted: Panos Pardalos, Eligius Hendrix, Gerardo Toraldo, Immaculada Garcia, Tibor Csendes, Donald Jones, Zelda Zabinsky, and so many others!

At that time, more or less, I stopped working on a (relatively small) hype which later became a relatively important failure: when to stop sampling for the global optimum? There had been, at that time, a quite strong scientific war between our group, which was then based in Milano, and the group in Rotterdam headed by Alexander Rinnooy Kan. Both groups were working on the problem of “optimally” deciding whether a heuristic method for global optimization had to be stopped, or whether it was beneficial to continue because the likelihood of finding an improved local minimum was high enough to counterbalance the computational effort required to continue. The methods both groups developed were rooted in important theoretical backgrounds, ranging from non-parametric Bayesian statistics to the theory of multi-armed bandits. Both groups eventually published papers on this subject in our flagship journal, Mathematical Programming (our group in 1987 and 1992, our colleagues from the Netherlands in 1987 and 1991).
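
Just to give the flavour of those rules, here is a minimal sketch of a multistart method with a non-parametric Bayesian stopping rule in the spirit of that literature: after n uniform random starts have produced w distinct local minima, a classical estimate of the total number of local minima is w(n-1)/(n-w-2), and one stops when this estimate suggests no new minima are left. The stopping threshold, the clustering tolerance, and the toy objective below are illustrative choices of mine, not the actual rules from those papers:

```python
import numpy as np
from scipy.optimize import minimize

def multistart_with_stopping(f, bounds, max_starts=200, tol=1e-3, seed=0):
    """Multistart local search with a Bayesian stopping rule.

    After n uniform random starts have produced w distinct local minima,
    a classical non-parametric Bayesian estimate of the total number of
    local minima is w*(n-1)/(n-w-2); stop as soon as this estimate says
    we have probably seen them all.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    minima = []                                   # distinct local minima found so far
    for n in range(1, max_starts + 1):
        x0 = rng.uniform(lo, hi)
        x_star = minimize(f, x0).x                # a plain local search
        if not any(np.linalg.norm(x_star - m) < tol for m in minima):
            minima.append(x_star)
        w = len(minima)
        if n > w + 2:                             # estimator defined only for n > w + 2
            estimated_total = w * (n - 1) / (n - w - 2)
            if estimated_total < w + 0.5:         # new minima deemed unlikely
                return minima, n
    return minima, max_starts

# toy usage on a 1-D function with a handful of local minima
minima, n_starts = multistart_with_stopping(
    lambda x: np.sin(3 * x[0]) + 0.1 * x[0] ** 2, bounds=[(-5.0, 5.0)])
print(f"found {len(minima)} local minima in {n_starts} starts")
```

With these illustrative choices, the rule keeps sampling until the count of distinct minima has stayed stable long enough for the estimate to settle close to the observed count.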

But, at the time the picture was taken, nobody really cared anymore about stopping… Computers were becoming faster and faster, algorithmic advances made local searches less CPU-intensive, branch-and-bound methods helped in devising deterministic stopping rules, … so the topic disappeared from the scientific literature. The theory on which those stopping rules were based, however, is still alive and productive (just think of the relationship between bandits and reinforcement learning).

So, even if our approach was clearly a failure, you can never say that something which is not practical in some contexts will not become relevant, in the future, in a slightly different context. Just as a personal reminder, I recall that back in those years (oh, actually well before that photo: I am speaking of the workshop on “Stochastics and Optimization” held in Gargnano (Lake Garda) in the late '80s…) I was studying the Robbins-Monro stochastic approximation scheme and the stochastic gradient method. And, although I appreciated the elegance of the results, I just could not understand why, when minimizing a sum of functions, one would be interested in using the gradient of just one of the terms in the sum… As the gradient of the sum is the sum of the gradients, why not take the gradient of the whole function? How wrong I was! Well, not only me: that hype almost disappeared for years, before becoming “the” method for training neural networks.
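
With hindsight, the point I was missing is simple: the gradient of a single randomly chosen term is an unbiased estimator of the gradient of the whole sum, at a fraction of the cost per step. A minimal sketch on a toy least-squares problem (the sizes and step sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy problem: minimize f(w) = (1/n) * sum_i (a_i . w - b_i)^2 over w
n, d = 1000, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.01 * rng.normal(size=n)

def full_gradient(w):
    # the gradient of the whole sum: touches all n terms at every step
    return (2.0 / n) * A.T @ (A @ w - b)

def stochastic_gradient(w):
    # Robbins-Monro style: the gradient of one randomly chosen term,
    # an unbiased estimate of full_gradient(w) at 1/n of its cost
    i = rng.integers(n)
    return 2.0 * A[i] * (A[i] @ w - b[i])

# full gradient descent: 100 steps, each one a pass over all n terms
w_gd = np.zeros(d)
for k in range(100):
    w_gd -= 0.1 * full_gradient(w_gd)

# stochastic gradient: 5000 steps, each one touching a single term,
# with diminishing step sizes as the Robbins-Monro conditions require
w_sgd = np.zeros(d)
for k in range(1, 5001):
    w_sgd -= (0.1 / k ** 0.6) * stochastic_gradient(w_sgd)

print("GD  distance to w_true:", np.linalg.norm(w_gd - w_true))
print("SGD distance to w_true:", np.linalg.norm(w_sgd - w_true))
```

Here the stochastic run uses 5,000 single-term gradients against the 100,000 term-gradient evaluations hidden inside the full-gradient run, which is exactly the trade-off that makes the method so attractive at large scale.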

So, my conclusion is: maybe we can identify hypes, but who knows whether they are really failures, or whether these hypes simply came out too early for the scientific community to suitably appreciate them?

A contribution to the OGDA 2018 Workshop on Optimization, Game Theory and Data Analysis