Bayesian inference and latent variable models in machine learning

Abstract

One possible approach to a machine learning problem is to build a joint probability distribution over the whole set of variables. This makes it possible to predict hidden variables by conditioning the joint distribution on the observed variables and marginalizing out the unknown variables whose particular values are not of interest to us. But the framework is more powerful still: in the talk I will show how to apply it even when the training set itself is only partly labeled, or entirely unlabeled. We will review the EM algorithm and its extensions, which allow a computer to recover surprising dependencies from incomplete data.
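As a minimal illustration of the EM idea mentioned in the abstract (this sketch is not taken from the talk materials; the model, function name, and parameters are illustrative assumptions), here is EM for a two-component 1D Gaussian mixture in plain NumPy, where the latent variable is each point's unobserved component label:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component 1D Gaussian mixture.
    The hidden variable is the component assignment of each point."""
    # Initialize mixing weight, means, and variances (illustrative choices).
    w = 0.5
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        p0 = (1 - w) * np.exp(-(x - mu[0]) ** 2 / (2 * var[0])) / np.sqrt(2 * np.pi * var[0])
        p1 = w * np.exp(-(x - mu[1]) ** 2 / (2 * var[1])) / np.sqrt(2 * np.pi * var[1])
        r = p1 / (p0 + p1)
        # M-step: re-estimate parameters from the soft assignments.
        w = r.mean()
        mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                       np.sum(r * x) / np.sum(r)])
        var = np.array([np.sum((1 - r) * (x - mu[0]) ** 2) / np.sum(1 - r),
                        np.sum(r * (x - mu[1]) ** 2) / np.sum(r)])
    return w, mu, var

# Synthetic incomplete data: observations only, component labels hidden.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gmm_1d(x))
```

Despite never seeing the labels, the alternation of soft assignment (E-step) and re-estimation (M-step) recovers the two underlying components, which is the sense in which EM restores structure from incomplete data.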

Slides

Video