Bounding Boxes Are All We Need: Street View Image Classification via Context Encoding of Detected Buildings
| Published in | IEEE transactions on geoscience and remote sensing Vol. 60; pp. 1 - 17 |
|---|---|
| Main Authors | , , , , , |
| Format | Journal Article |
| Language | English |
| Published | New York: IEEE, 2022 |
| ISSN | 0196-2892 1558-0644 |
| DOI | 10.1109/TGRS.2021.3064316 |
| Summary: | Street view image classification aimed at urban land use analysis is difficult because the class labels (e.g., commercial area) are concepts at a higher level of abstraction than those of general visual tasks (e.g., persons and cars). Therefore, classification models using only visual features often fail to achieve satisfactory performance. In this article, a novel approach based on a "bottom-up and top-down" framework is proposed. Instead of directly using visual features of the whole image, as common image-level models based on convolutional neural networks (CNNs) do, the proposed framework first obtains low-level semantics, namely, the bounding boxes of buildings in street view images, through a bottom-up object discovery process. Their contextual information, such as the co-occurrence patterns of building classes and their layout, is then encoded into metadata by the proposed algorithm "Context encOding of Detected buildINGs" (CODING). Finally, these metadata (low-level semantics encoded with context information) are abstracted into high-level semantics, namely, the land use label of the street view image, through a top-down semantic aggregation process implemented by a recurrent neural network (RNN). In addition, in order to effectively discover low-level semantics as the bridge between visual features and more abstract concepts, we made a dual-labeled data set named "Building dEtection And Urban funcTional-zone portraYing" (BEAUTY), comprising 19070 street view images and 38857 buildings, based on the existing BIC_GSV. The data set can be used not only for street view image classification but also for multiclass building detection. Experiments on "BEAUTY" show that the proposed approach achieves a 12.65% performance improvement in macro-precision and 12% in macro-recall over image-level CNN-based models. Our code and data set are available at https://github.com/kyle-one/Context-Encoding-of-Detected-Buildings/ . |
|---|---|
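The core of the CODING step described above is turning detected building bounding boxes into an ordered metadata sequence that an RNN can aggregate. The following is a minimal illustrative sketch of that idea, not the authors' actual implementation: the feature layout (`[class_id, cx, cy, w, h]`), the left-to-right ordering, and all function names are assumptions for illustration.

```python
# Hypothetical sketch of context encoding of detected buildings:
# each detection becomes a per-building feature vector, and the
# sequence order reflects the spatial layout in the image.
from typing import List, Tuple

# (class_id, x_min, y_min, x_max, y_max) in pixel coordinates.
Detection = Tuple[int, float, float, float, float]


def encode_detections(dets: List[Detection],
                      img_w: float, img_h: float) -> List[List[float]]:
    """Sort buildings left-to-right by box center and emit per-building
    vectors [class_id, cx, cy, w, h], with coordinates normalized to
    [0, 1] so the layout encoding is independent of image size."""
    ordered = sorted(dets, key=lambda d: (d[1] + d[3]) / 2.0)
    seq = []
    for cls, x0, y0, x1, y1 in ordered:
        cx = (x0 + x1) / (2.0 * img_w)   # normalized center x
        cy = (y0 + y1) / (2.0 * img_h)   # normalized center y
        w = (x1 - x0) / img_w            # normalized width
        h = (y1 - y0) / img_h            # normalized height
        seq.append([float(cls), cx, cy, w, h])
    return seq


# Two hypothetical detections in a 640x320 street view crop.
dets = [(2, 400, 50, 620, 300), (0, 10, 40, 200, 310)]
seq = encode_detections(dets, img_w=640, img_h=320)
```

A sequence like `seq` would then be fed, timestep by timestep, to the RNN that performs the top-down aggregation into a land use label.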