Program
Timetable
Lectures
Wojciech Chachólski (KTH)
Geometry, Homology and Data
One of the key steps towards a successful analysis is to provide suitable representations of data by objects amenable to statistical and ML methods.
Since geometrical properties are often not directly amenable to such tools, one of the most important contributions of topological data analysis has been to provide strategies and algorithms for transforming geometrical information into objects to which statistical and ML methods can be applied. During the last decade there has been an explosion of applications in which such representations of data played a significant role. In my talks I will present one such strategy, based on a hierarchical stabilisation process leading to invariants called stable ranks.
I will use the classical Wisconsin Breast Cancer data as one example of how homological invariants can give interesting information.
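To give a rough feel for this kind of pipeline, here is a minimal sketch (my own hedged illustration, not the lecturer's implementation) that computes persistence diagrams for the Wisconsin Breast Cancer data and a simple bar-counting invariant in the spirit of stable ranks; it assumes the scikit-learn and ripser packages are available.

```python
# Hedged sketch: persistence diagrams for the Wisconsin Breast Cancer data
# and a bar-count invariant resembling the stable rank (standard contour).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from ripser import ripser

X = StandardScaler().fit_transform(load_breast_cancer().data)
dgms = ripser(X, maxdim=1)['dgms']        # H0 and H1 persistence diagrams

def stable_rank(dgm, t):
    """Number of bars whose persistence (death - birth) exceeds t."""
    lengths = dgm[:, 1] - dgm[:, 0]
    return int(np.sum(lengths[np.isfinite(lengths)] > t))

for t in (0.5, 1.0, 2.0):
    print(t, stable_rank(dgms[1], t))     # H1 stable-rank values
```

The resulting function of t is the kind of stable, vectorisable summary that can then be fed to standard statistical and ML methods.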
Anne Estrade (Université Paris Cité)
The geometry of Gaussian fields
The short title of the lecture should be understood as "Some geometric properties of Gaussian random fields".
The first part will be dedicated to general definitions and properties of Gaussian fields indexed by the Euclidean space $\mathbb{R}^d$ (with $d \ge 2$), with a focus on the special case of stationary Gaussian fields. We will deal with a geometric feature that is specific to the multivariate context: anisotropy. We will present various models and try to understand which characteristics of the field are affected by anisotropy.
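To make the anisotropy property concrete, the following NumPy sketch (an illustration of the idea, not lecture material) synthesises a stationary Gaussian field with an anisotropic spectral density; the unequal scaling of the two frequency axes produces level sets with a preferred direction.

```python
# Hedged sketch: a stationary anisotropic Gaussian field via spectral synthesis.
import numpy as np

n = 256
kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing='ij')
ax, ay = 1.0, 4.0                            # unequal scales -> anisotropy
spectral_density = np.exp(-((ax * kx)**2 + (ay * ky)**2) * 200.0)

noise = np.fft.fft2(np.random.standard_normal((n, n)))
field = np.real(np.fft.ifft2(noise * np.sqrt(spectral_density)))
# Level sets of `field` at various thresholds stretch along one direction.
```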
The second part will be dedicated to Rice formulas and their consequences. These formulas express moments of geometric functionals that depend on the level sets of the Gaussian field. We will visit recent works on related topics that open new perspectives in connection with spatial statistics, image analysis and TDA.
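For orientation, the simplest instance is the classical one-dimensional Rice formula: for a centered stationary Gaussian process $X$ with spectral moments $\lambda_0 = \operatorname{Var} X(t)$ and $\lambda_2 = \operatorname{Var} X'(t)$, the expected number of crossings of the level $u$ on $[0,T]$ is

$$\mathbb{E}\big[N_u([0,T])\big] = \frac{T}{\pi}\sqrt{\frac{\lambda_2}{\lambda_0}}\,\exp\!\Big(-\frac{u^2}{2\lambda_0}\Big),$$

and the multivariate Rice formulas treated in the lecture generalise this to moments of geometric functionals of level sets in $\mathbb{R}^d$.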
Érika Roldán (MPI Leipzig)
Topology and Geometry of Random Cubical Complexes
In this mini-course, we will explore the topology and local geometry of different random cubical complex models. In the first part, we explore two models of random subcomplexes of the regular cubical grid: percolation clusters (joint work with David Aristoff and Sayan Mukherjee), and the Eden Cell Growth model (joint work with Fedor Manin and Benjamin Schweinhart). In the second part, we study the fundamental group of random 2-dimensional subcomplexes of an n-dimensional cube; this model is analogous to the Linial-Meshulam model for simplicial complexes (joint work with Matt Kahle and Elliot Paquette).
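As a concrete illustration of the second model mentioned above, the toy sketch below (my own hedged illustration, not the speakers' code) grows an Eden cluster on the 2-dimensional cubical grid by repeatedly attaching a uniformly random boundary cell.

```python
# Hedged sketch: Eden cell growth on the 2-dimensional cubical grid.
import random

def eden_growth(steps, rng=random.Random(0)):
    cluster = {(0, 0)}
    frontier = {(1, 0), (-1, 0), (0, 1), (0, -1)}
    for _ in range(steps):
        cell = rng.choice(sorted(frontier))   # uniform over boundary cells
        cluster.add(cell)
        frontier.discard(cell)
        x, y = cell
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in cluster:
                frontier.add(nb)
    return cluster

print(len(eden_growth(1000)))   # 1001 cells
```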
Rasmus Waagepetersen (Aalborg University)
Cox processes – mixed models for point processes
A Cox process is the point process counterpart of a latent variable model. It arises by adding latent random effects to a Poisson point process model; these random effects constitute a random field and are used to model unobserved sources of variation influencing the occurrence of points. We will review a range of Cox process models, including log Gaussian Cox processes and shot-noise Cox processes. We consider moment properties and methodology for statistical inference, including estimating functions. We also consider extensions to multivariate point patterns.
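As a hedged illustration of the log Gaussian case (a sketch assuming NumPy and SciPy; the smoothed-white-noise field is a crude stand-in for a properly specified covariance model), one can simulate a log Gaussian Cox process on the unit square as follows.

```python
# Hedged sketch: a log Gaussian Cox process on the unit square.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
n = 128                                   # grid resolution
# Crude latent Gaussian field: smoothed white noise (an assumption, not a
# fitted covariance model), rescaled to unit variance.
field = gaussian_filter(rng.standard_normal((n, n)), sigma=8)
field /= field.std()

base_rate = 2000.0                        # expected points if the field were 0
intensity = base_rate * np.exp(field)     # log-link keeps the intensity positive
counts = rng.poisson(intensity / n**2)    # Poisson counts per grid cell

# Scatter the points uniformly within their cells.
ii, jj = np.nonzero(counts)
points = [((i + u) / n, (j + v) / n)
          for i, j, c in zip(ii, jj, counts[ii, jj])
          for u, v in rng.random((int(c), 2))]
print(len(points), "points")
```

Conditionally on the field the points are Poisson; unconditionally they are clustered, which is exactly the mixed-model mechanism the course describes.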
Poster session
Karthik Viswanathan (University of Amsterdam)
Information Maximizing Persistent Homology for Inference
A way to summarize a complex data set is to represent it via a filtered simplicial complex and compute the corresponding persistent homology. Information about the dataset may be lost in constructing the persistence diagram. In this poster, I'll quantify the information content of the persistence diagram using Fisher information. This approach may be useful for integrating topological information into statistical inference. We illustrate the pipeline using examples such as Gaussian random fields with a given power spectrum. In certain cases, we show that persistence diagrams can provide optimal summaries for inference, in the sense that the Cramér-Rao bound is saturated. The poster also explores the possibility of using Fisher information as a loss function for unsupervised learning tasks to learn an "optimal" filtration.
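To convey the flavour of the efficiency comparison, here is a speculative sketch; the simulator sample_grf and the choice of total persistence as the summary are hypothetical stand-ins of mine, not the poster's pipeline. It estimates the information carried by a scalar summary by finite differences, which can then be compared with the model's Fisher information to check how close the Cramér-Rao bound comes to being attained.

```python
# Speculative sketch: information carried by a scalar persistence summary.
# `sample_grf(theta, rng)` is a hypothetical simulator returning a finite
# persistence diagram (an (m, 2) array) for parameter value theta.
import numpy as np

def total_persistence(dgm):
    """Sum of bar lengths of a finite persistence diagram."""
    return float(np.sum(dgm[:, 1] - dgm[:, 0]))

def summary_information(sample_grf, theta, eps=1e-2, n_sims=200, seed=0):
    """Information of the summary T: (dE[T]/dtheta)^2 / Var(T)."""
    rng = np.random.default_rng(seed)
    t_lo = [total_persistence(sample_grf(theta - eps, rng)) for _ in range(n_sims)]
    t_hi = [total_persistence(sample_grf(theta + eps, rng)) for _ in range(n_sims)]
    dmean = (np.mean(t_hi) - np.mean(t_lo)) / (2 * eps)
    return dmean**2 / np.var(t_lo + t_hi)
```

If this quantity approaches the full Fisher information of the model, the summary is (asymptotically) an optimal statistic in the Cramér-Rao sense.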