Programme

The conference starts on the evening of September 3rd with an informal meeting.

 


 

Plenary Speakers:

 

Jose C. Principe

Title: Quantifying Model Uncertainty for Semantic Segmentation using RKHS Operators
Abstract:

This talk presents our current goal of developing operators inspired by quantum theory to quantify
uncertainty in the outputs of machine learning models, specifically semantic segmentation. The basic
observation is that data projected into a Reproducing Kernel Hilbert Space (RKHS), with kernels built from
the expected value operator, are statistical embeddings of the input data. At the same time, the RKHS
functionals obey the properties of a potential field. Therefore, one can directly apply the Schrödinger
equation to the projected data and interpret its Hermite expansion in terms of modal decompositions of the
PDF over the space of samples that express multi-scale uncertainty. This methodology is quite general and
can be used in many different applications, as demonstrated in the talk.
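As a schematic illustration of the embedding referenced above (a standard RKHS construction; the talk's specific operators may differ), the empirical kernel mean embedding of samples x_1, ..., x_N is

    \hat{\mu}(x) = \frac{1}{N} \sum_{i=1}^{N} k(x_i, x),

and reading \hat{\mu} as a potential field V(x), one can pose the stationary Schrödinger equation

    \left( -\tfrac{\hbar^2}{2} \nabla^2 + V(x) \right) \psi(x) = E \, \psi(x),

whose eigenmodes provide the kind of modal decomposition of the sample PDF that the abstract refers to.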


Roman Belavkin

Title: Towards a Dynamic Value of Information Theory
Abstract:

The value of information (VoI) theory was developed in the 1960s by Ruslan Stratonovich and colleagues. Inspired by Shannon’s rate-distortion theory, it defines VoI as the maximum expected utility (or the minimum expected cost) that can be achieved subject to a given information constraint. Different value functions correspond to different types of information and different optimal Markov transition probabilities. In many natural systems, such as learning and evolving systems, the amount of information is itself dynamic, and here we discuss a dynamical extension of the value of information theory. We formulate the corresponding variational problems, defining certain geodesic curves on statistical manifolds, and discuss the resulting theory. Examples for Shannon’s information and certain types of utility functions will be used for illustration. The problem of optimal control of mutation rates in evolutionary systems will be considered as an application of the theory.


Panos Pardalos 

Title: Diffusion capacity of single and interconnected networks

Abstract:

This lecture addresses the significant challenge of comprehending diffusive processes in networks in the context of complexity. Networks possess a diffusive potential that depends on their topological configuration, but diffusion also relies on the process and initial conditions. The lecture introduces the concept of Diffusion Capacity, a measure of a node’s potential to diffuse information that incorporates a distance distribution considering both geodesic and weighted shortest paths and the dynamic features of the diffusion process. This concept provides a comprehensive depiction of individual nodes’ roles during the diffusion process and can identify structural modifications that may improve diffusion mechanisms. The lecture also defines Diffusion Capacity for interconnected networks and introduces Relative Gain, a tool that compares a node’s performance in a single structure versus an interconnected one. To demonstrate the concept’s utility, we apply the methodology to a global climate network formed from surface air temperature data, revealing a significant shift in diffusion capacity around the year 2000. This suggests a decline in the planet’s diffusion capacity, which may contribute to the emergence of more frequent climatic events. Our goal is to gain a deeper understanding of the complexities of diffusive processes in networks and the potential applications of the Diffusion Capacity concept.

References:
Schieber, T.A., Carpi, L.C., Pardalos, P.M. et al. Diffusion capacity of
single and interconnected networks. Nat Commun 14, 2217 (2023).
https://doi.org/10.1038/s41467-023-37323-0 (see also supplementary
information)
Schieber, T., Carpi, L., Díaz-Guilera, A. et al. Quantification of network
structural dissimilarities. Nat Commun 8, 13928 (2017).
https://doi.org/10.1038/ncomms13928
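
For readers who want to experiment, below is a minimal sketch of one ingredient of the measure, the per-node distribution of geodesic distances (using networkx; an illustration only, not the authors' Diffusion Capacity formula, for which see the first reference above).

    import networkx as nx
    from collections import Counter

    def distance_distribution(G, node):
        """Empirical distribution of geodesic (hop-count) distances
        from `node` to every other reachable node."""
        lengths = nx.single_source_shortest_path_length(G, node)
        lengths.pop(node, None)          # exclude the node itself
        counts = Counter(lengths.values())
        total = sum(counts.values())
        return {d: c / total for d, c in sorted(counts.items())}

    if __name__ == "__main__":
        G = nx.karate_club_graph()       # small example network
        print(distance_distribution(G, 0))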


Martin Schmid

Title: Search in Imperfect Information Games
Abstract:

From the very dawn of the field, search with value functions has been a fundamental concept of computer games research. Turing’s chess algorithm from 1950 was able to think two moves ahead, and Shannon’s work on chess from 1950 includes an extensive section on evaluation functions to be used within a search. Samuel’s checkers program from 1959 already combines search and value functions that are learned through self-play and bootstrapping. TD-Gammon improves upon those ideas and uses neural networks to learn those complex value functions, only for them to be again used within search. The combination of decision-time search and value functions has been present in the remarkable milestones where computers bested their human counterparts in long-standing challenging games: Deep Blue for Chess and AlphaGo for Go. Until recently, this powerful framework of search aided with (learned) value functions has been limited to perfect information games. We will talk about why search matters, and about generalizing search for imperfect information games.
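
To make the recurring pattern concrete, here is a generic skeleton of decision-time search with a value function for the perfect-information case (all names are hypothetical; the talk concerns how to generalize this beyond perfect information):

    def search(state, depth, value_fn, legal_moves, apply_move, maximizing):
        """Depth-limited minimax: expand the game tree to `depth` plies,
        then fall back on the (possibly learned) value function."""
        moves = legal_moves(state)
        if depth == 0 or not moves:      # horizon or terminal state
            return value_fn(state)
        values = (search(apply_move(state, m), depth - 1, value_fn,
                         legal_moves, apply_move, not maximizing)
                  for m in moves)
        return max(values) if maximizing else min(values)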