Revisiting Uncertainty in Graph Cut Solutions
Citation: Tarlow, Daniel, and Ryan P. Adams. Forthcoming. Revisiting uncertainty in graph cut solutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR): June 16-21, 2012, Providence, Rhode Island.
Abstract: Graph cuts is a popular algorithm for finding the MAP assignment of many large-scale graphical models that are common in computer vision. While graph cuts is powerful, it does not provide information about the marginal probabilities associated with the solution it finds. To assess uncertainty, we are forced to fall back on less efficient and inexact inference algorithms such as loopy belief propagation, or use less principled surrogate representations of uncertainty such as the min-marginal approach of Kohli & Torr. In this work, we give new justification for using min-marginals to compute the uncertainty in conditional random fields, framing the min-marginal outputs as exact marginals under a specially-chosen generative probabilistic model. We leverage this view to learn properly calibrated marginal probabilities as the result of straightforward maximization of the training likelihood, showing that the necessary subgradients can be computed efficiently using dynamic graph cut operations. We also show how this approach can be extended to compute multi-label marginal distributions, where again dynamic graph cuts enable efficient marginal inference and maximum likelihood learning. We demonstrate empirically that, after proper training, uncertainties based on min-marginals provide better-calibrated probabilities than baselines and that these distributions can be exploited in a decision-theoretic way for improved segmentation in low-level vision.
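The min-marginal idea in the abstract can be illustrated on a toy model. The sketch below is an assumption-laden illustration, not the paper's implementation: it brute-forces the min-marginals of a tiny 3-node binary MRF (where a real system would use graph cuts and dynamic max-flow), then converts them to normalized probabilities via exp(-M_i(k)/T), with the temperature T standing in for the calibration parameters the paper learns by maximizing training likelihood. The specific energies and T = 1.0 are made-up placeholders.

```python
import itertools
import math

# Hypothetical toy energies: unary[i][k] is the cost of node i taking
# binary label k; "pair" is a Potts penalty for disagreeing neighbours
# on a 3-node chain. (Illustrative values only.)
unary = [[0.0, 1.5], [0.8, 0.2], [1.0, 0.0]]
pair = 0.6

def energy(y):
    """Total energy of a labeling y = (y0, y1, y2)."""
    e = sum(unary[i][y[i]] for i in range(3))
    e += sum(pair for i in range(2) if y[i] != y[i + 1])
    return e

def min_marginal(i, k):
    """M_i(k): minimum energy over all labelings with y_i clamped to k.
    Brute force here; graph cuts would compute this efficiently for
    submodular energies, and dynamic cuts would share work across clamps."""
    return min(energy(y)
               for y in itertools.product((0, 1), repeat=3)
               if y[i] == k)

def marginal(i, T=1.0):
    """Min-marginal-based uncertainty: p(y_i = k) proportional to
    exp(-M_i(k) / T). T is a placeholder for learned calibration."""
    scores = [math.exp(-min_marginal(i, k) / T) for k in (0, 1)]
    z = sum(scores)
    return [s / z for s in scores]

for i in range(3):
    print(i, marginal(i))
```

Each node gets a distribution that reflects how much the global minimum energy rises when the node is flipped away from the MAP label, which is the surrogate-uncertainty reading the paper places on a proper probabilistic footing.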
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:8712188
Collections: FAS Scholarly Articles