In sequential prediction with log-loss as well as density estimation with risk measured by KL divergence, one is often interested in the expected instantaneous loss, or, equivalently...
We give a self-contained tutorial on the Minimum Description Length (MDL) approach to modeling, learning and prediction. We focus on the recent (post 1995) formulations of MDL, which...
We extend Bayesian MAP and Minimum Description Length (MDL) learning by testing whether the data can be substantially more compressed by a mixture of the MDL/MAP distribution with...
We show that forms of Bayesian and MDL learning that are often applied to classification problems can be *statistically inconsistent*. We present a large family of classifiers and...
Standard Bayesian model selection/averaging sometimes learns too slowly: there exist other learning methods that lead to better predictions based on less data. We give a novel analysis...
Part of this talk is based on results of A. Barron (1986) and recent joint work with J. Langford (2004). We introduce the information-theoretic concepts of universal coding and...
We give a tutorial introduction to the *modern* Minimum Description Length (MDL) Principle, taking into account the many refinements and developments that have taken place in the 1...
A remarkable variety of problems in machine learning and statistics can be recast as data compression under constraints: (1) sequential prediction with arbitrary loss functions can...
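The correspondence between sequential prediction and compression alluded to in the abstract above can be illustrated with a minimal sketch (not taken from the talk itself): the cumulative log-loss, in bits, that a sequential probability forecaster incurs on a binary sequence is exactly the code length an arithmetic coder built on that forecaster would assign to the sequence. The predictor below uses Laplace's rule of succession as an assumed, illustrative choice.

```python
import math

def laplace_predictor(history):
    """Laplace's rule of succession: P(next symbol = 1) given a binary history."""
    return (sum(history) + 1) / (len(history) + 2)

def cumulative_log_loss_bits(sequence):
    """Total sequential log-loss in bits.

    This equals the length of the arithmetic code that the predictor's
    probabilities assign to the whole sequence, so better sequential
    prediction is literally better compression.
    """
    total = 0.0
    for i, x in enumerate(sequence):
        p1 = laplace_predictor(sequence[:i])
        p = p1 if x == 1 else 1 - p1
        total += -math.log2(p)
    return total

seq = [1, 0, 1, 1, 1, 0, 1, 1]
bits = cumulative_log_loss_bits(seq)
```

Because the Laplace predictor is the Bayesian predictive distribution under a uniform prior on the Bernoulli parameter, the product of its sequential predictions depends only on the symbol counts, and the total code length equals the negative log of that marginal likelihood.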