Reducing structured learning to a series of (possibly nonlinear) logistic regression problems
Justin Domke (NICTA)
CECS SEMINAR SERIES
DATE: 2013-09-26
TIME: 11:15 - 12:15
LOCATION: NICTA - 7 London Circuit
ABSTRACT:
When using a conditional random field, it is common to first train a powerful nonlinear classifier (e.g. boosted decision trees) to predict each variable independently of the others. Then, linear weights are adjusted to maximize a measure of joint accuracy, with the nonlinear classifiers held fixed. This talk will describe a method to reduce structured learning to a sequence of logistic regression problems. As usual, one begins by fitting a classifier to predict each variable independently. However, rather than leaving this fixed, message-passing inference is used to create a sequence of new problems that update these univariate terms to be optimal for joint prediction. Edge potentials are trained in a similar manner, again not restricted to linear functions. I will present results on semantic segmentation problems, showing the benefits and perils of fitting nonlinear classifiers for both univariate and edge interactions.
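To make the iterative idea concrete, here is a minimal, illustrative sketch in plain numpy. It is *not* the talk's actual algorithm: instead of true message-passing inference, it uses a simple stacking-style approximation on a synthetic chain of binary variables, where each node's logistic regression is refit with its neighbors' current predicted probabilities appended as extra features, so the univariate classifiers are retrained with joint information. All names and the data-generating process are hypothetical.

```python
import numpy as np

def fit_logreg(X, y, iters=800, lr=0.5):
    """Binary logistic regression fit by plain gradient descent (bias column added)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

rng = np.random.default_rng(0)
n, T = 400, 5  # n training examples, T nodes in a chain (hypothetical sizes)
# Labels correlated along the chain: each node copies its left neighbor 80% of the time.
Y = np.zeros((n, T), dtype=float)
Y[:, 0] = rng.integers(0, 2, n)
for t in range(1, T):
    keep = rng.random(n) < 0.8
    Y[:, t] = np.where(keep, Y[:, t - 1], 1 - Y[:, t - 1])
# Noisy univariate evidence for each node.
X = [Y[:, t:t + 1] + rng.normal(0, 1.5, (n, 1)) for t in range(T)]

# Round 0: fit each node's classifier independently of the others.
W = [fit_logreg(X[t], Y[:, t]) for t in range(T)]
P = [predict_proba(X[t], W[t]) for t in range(T)]

# Later rounds: append neighbors' current beliefs as extra features and refit,
# so each univariate classifier is retrained to help *joint* prediction.
for _ in range(3):
    newW, newP = [], []
    for t in range(T):
        msgs = [P[s] for s in (t - 1, t + 1) if 0 <= s < T]
        Xt = np.hstack([X[t]] + [m[:, None] for m in msgs])
        w = fit_logreg(Xt, Y[:, t])
        newW.append(w)
        newP.append(predict_proba(Xt, w))
    W, P = newW, newP

acc = np.mean([(P[t] > 0.5) == Y[:, t] for t in range(T)])
```

Each refit is still an ordinary (possibly nonlinear) classification problem, which is the appeal of the reduction; the sketch just swaps the nonlinear base learner for logistic regression and the inference step for raw neighbor probabilities.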
