
Workshop overview

A graphical model, or Bayesian network, encodes probabilistic relationships among variables. Techniques based on these models are becoming increasingly important in data analysis applications of many types. In areas such as foreign-language translation, microchip manufacturing, and drug discovery, the volume of data can slow progress because of the difficulty of finding causal connections or dependencies. The new Bayesian methods enable these tangled interconnections to be sorted out and produce useful tools for handling large data sets. Google is already using these techniques to find and take advantage of patterns of interconnections between Web pages, and Bill Gates has been quoted as saying that expertise in Bayesian networks is an essential part of Microsoft's competitive advantage, particularly in such areas as speech recognition. (Bayesian networks now pervade Microsoft Office.) Recently, the MIT Technology Review named Bayesian networks as one of the top ten emerging technologies.
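As a concrete companion to this description, here is a minimal, purely illustrative Python sketch (not part of the workshop material): a hypothetical three-node network with made-up probabilities, in which a posterior is computed by Bayes' rule and brute-force enumeration over the unobserved variable.

    # Illustrative only: Rain -> GrassWet <- Sprinkler, with invented numbers.
    # The joint factorizes as P(R) * P(S) * P(W | R, S); we compute
    # P(Rain = 1 | GrassWet = 1) by Bayes' rule and enumeration.
    P_rain = {1: 0.2, 0: 0.8}
    P_sprinkler = {1: 0.1, 0: 0.9}
    P_wet_given = {(1, 1): 0.99, (1, 0): 0.90,   # P(W = 1 | R, S)
                   (0, 1): 0.80, (0, 0): 0.01}

    def joint(r, s, w):
        pw = P_wet_given[(r, s)] if w == 1 else 1.0 - P_wet_given[(r, s)]
        return P_rain[r] * P_sprinkler[s] * pw

    numer = sum(joint(1, s, 1) for s in (0, 1))
    denom = sum(joint(r, s, 1) for r in (0, 1) for s in (0, 1))
    print("P(Rain = 1 | GrassWet = 1) =", numer / denom)

In a real application the same factorization idea is what keeps models with very many variables tractable; only the enumeration step is replaced by the message-passing algorithms discussed in the talks below.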

Remark: This workshop was held on February 19, 2004, as part of the Computational Sciences Lecture Series (CSLS) at the University of Wisconsin-Madison.

An introduction to probabilistic graphical models and their Lyapunov functions and algorithms for inference and learning

By Prof. Brendan J. Frey (Probabilistic and Statistical Inference Group, Electrical and Computer Engineering, University of Toronto, Canada)

Slides of talk in PDF | Video [WMV]

ABSTRACT: Many problems in science and engineering require that we take into account uncertainties in the observed data and uncertainties in the model that is used to analyze the data. Probability theory (in particular, Bayes' rule) provides a way to account for uncertainty, by combining the evidence provided by the data with prior knowledge about the problem. Recently, we have seen an increasing abundance of data and computational power, and this has motivated researchers to develop techniques for solving large-scale problems that require complex chains of reasoning applied to large datasets. For example, a typical problem that my group works on will have 100,000 to 1,000,000 or more unobserved random variables. In such large-scale systems, the structure of the probability model plays a crucial role, and this structure can be easily represented using a graph. In this talk, I will review the definitions and properties of the main types of graphical model, and the Lyapunov functions and optimization algorithms that can be used to perform inference and learning in these models. Throughout the talk, I will use a simple example taken from the application area of computer vision to demonstrate the concepts.
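To give a flavor of the kind of inference the abstract refers to, the sketch below (an illustration of standard sum-product message passing, not code from the talk or its computer-vision example) computes exact marginals on a tiny three-variable chain; the potentials phi and psi are invented for the example.

    import numpy as np

    # Chain x1 - x2 - x3 with binary states; invented unary potentials phi
    # and a shared pairwise potential psi on both edges.
    phi = np.array([[0.7, 0.3],
                    [0.5, 0.5],
                    [0.2, 0.8]])          # phi[i, value of x_{i+1}]
    psi = np.array([[0.9, 0.1],
                    [0.1, 0.9]])          # psi[value of x_i, value of x_j]

    # Forward and backward sum-product messages along the chain.
    fwd = [np.ones(2), None, None]
    for i in range(1, 3):
        fwd[i] = (fwd[i - 1] * phi[i - 1]) @ psi
    bwd = [None, None, np.ones(2)]
    for i in range(1, -1, -1):
        bwd[i] = psi @ (phi[i + 1] * bwd[i + 1])

    # Each node's marginal is proportional to its potential times the
    # incoming messages; on a tree/chain this is exact.
    for i in range(3):
        b = phi[i] * fwd[i] * bwd[i]
        print(f"P(x{i + 1}) =", b / b.sum())

The same message-passing scheme extends to graphs with loops, where it is no longer exact; the Lyapunov functions mentioned in the title are the tools used to analyze such iterative algorithms.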

Graphical models for linear systems, codes and networks

By Prof. Ralf Koetter (Coordinated Science Laboratory and Department of Electrical Engineering, University of Illinois, Urbana-Champaign, USA)

Slides of talk in PDF | Video [WMV]

ABSTRACT: The use of graphical models of systems is a well-established technique for characterizing a represented behavior. While these models are often given by nature, in some cases it is possible to choose the underlying graphical framework. If, in addition, the represented behavior satisfies certain linearity requirements, surprising structural properties of the underlying graphical models can be derived. We give an overview of a developing structure theory for linear systems in graphical models and point out numerous directions for further research. Examples of applications of this theory are given that cover areas as different as coding, state-space models and network information theory.
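As a small illustration of the link between linear structure and graphical models (my example, not drawn from the talk), the sketch below treats each row of a hypothetical binary parity-check matrix H as a check node of a Tanner/factor graph and enumerates the words that satisfy all checks, i.e., the codewords of the corresponding linear code.

    import numpy as np
    from itertools import product

    # Hypothetical 3 x 6 parity-check matrix; each row is one local
    # linear constraint (a check node connected to three bit nodes).
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])

    def in_code(x):
        # x is a codeword iff every parity check holds: H x = 0 (mod 2).
        return not np.any((H @ x) % 2)

    codewords = [x for x in product((0, 1), repeat=H.shape[1])
                 if in_code(np.array(x))]
    print(len(codewords), "codewords, e.g.", codewords[:4])

Because each constraint is linear, the global behavior (here, 2^3 = 8 codewords out of 2^6 words) is completely determined by sparse local checks; this is the kind of structural property the talk's theory makes precise.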

Graphical models, exponential families and variational inference

By Prof. Michael I. Jordan (Department of Computer Science, University of California, Berkeley, USA)

Slides of talk in PDF | Video [WMV]

ABSTRACT: The formalism of probabilistic graphical models provides a unifying framework for the development of large-scale multivariate statistical models. Graphical models have become a focus of research in many applied statistical and computational fields, including bioinformatics, information theory, signal and image processing, information retrieval and machine learning. Many problems that arise in specific instances---including the key problems of computing marginals and modes of probability distributions---are best studied in the general setting. Exploiting the conjugate duality between the cumulant generating function and the entropy for exponential families, we develop general variational representations of the problems of computing marginals and modes. We describe how a wide variety of known computational algorithms---including mean field, sum-product and cluster variational techniques---can be understood in terms of these variational representations. We also present novel convex relaxations based on the variational framework. We present applications to problems in bioinformatics and information retrieval. [Joint work with Martin Wainwright]
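For readers unfamiliar with the duality the abstract invokes, its standard statement for exponential families (paraphrased from the general literature, not quoted from the slides) is the variational representation of the cumulant generating (log-partition) function:

    \[
    A(\theta) \;=\; \sup_{\mu \in \mathcal{M}} \bigl\{ \langle \theta, \mu \rangle - A^{*}(\mu) \bigr\},
    \]

where A is the log-partition function of the exponential family, M is the set of realizable mean parameters, and A* is the conjugate dual of A (the negative entropy expressed as a function of the mean parameters). The supremum is attained at the mean parameters, i.e., the marginals; mean-field methods restrict M to a tractable subset, while Bethe and cluster variational methods replace A* with a tractable approximation, which is how the algorithms listed in the abstract fit into one framework.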





Source: OpenStax, Computational Sciences Lecture Series at UW-Madison. OpenStax CNX. May 01, 2005. Download for free at http://cnx.org/content/col10277/1.5
