Optimizing Intersection-Over-Union in Deep Neural Networks for Image Segmentation

Bibliographic Details
Published in: Advances in Visual Computing, Vol. 10072, pp. 234-244
Main Authors: Rahman, Md Atiqur; Wang, Yang
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2016
Series: Lecture Notes in Computer Science
ISBN: 9783319508344; 3319508342
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-319-50835-1_22

Summary: We consider the problem of learning deep neural networks (DNNs) for object category segmentation, where the goal is to label each pixel in an image as being part of a given object (foreground) or not (background). Deep neural networks are usually trained with simple loss functions (e.g., softmax loss). These loss functions are appropriate for standard classification problems where the performance is measured by the overall classification accuracy. For object category segmentation, the two classes (foreground and background) are very imbalanced. The intersection-over-union (IoU) is usually used to measure the performance of any object category segmentation method. In this paper, we propose an approach for directly optimizing this IoU measure in deep neural networks. Our experimental results on two object category segmentation datasets demonstrate that our approach outperforms DNNs trained with standard softmax loss.
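The idea described in the summary, optimizing the IoU measure directly instead of a per-pixel softmax loss, is commonly realized by replacing the discrete intersection and union counts with sums over predicted foreground probabilities, which makes the objective differentiable and trainable by gradient descent. The sketch below illustrates that general technique in PyTorch; it is a minimal illustration under that assumption, and the function name, tensor shapes, and the commented training snippet are hypothetical rather than taken from the chapter.

```python
import torch

def soft_iou_loss(probs, target, eps=1e-6):
    """Differentiable IoU surrogate for binary (foreground/background) segmentation.

    probs  -- predicted foreground probabilities, shape (N, H, W), values in [0, 1]
    target -- binary ground-truth masks of the same shape
    """
    # Soft intersection: sum of per-pixel products of prediction and ground truth.
    inter = (probs * target).sum(dim=(1, 2))
    # Soft union: |X| + |Y| - |X intersect Y|, computed on probabilities rather than hard labels.
    union = (probs + target - probs * target).sum(dim=(1, 2))
    # Loss is 1 - IoU, averaged over the batch; eps guards against empty masks.
    return (1.0 - inter / (union + eps)).mean()

# Hypothetical usage inside a training step:
# probs = torch.sigmoid(model(images))        # per-pixel foreground probabilities
# loss = soft_iou_loss(probs, masks.float())
# loss.backward()
```

Because such a loss is computed over whole masks rather than summed per pixel, it is not dominated by the much larger background class, which addresses the class-imbalance issue the summary points out.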