GAGCN: Generative adversarial graph convolutional network for non‐homogeneous texture extension synthesis

Bibliographic Details
Published in: IET Image Processing, vol. 17, no. 5, pp. 1603-1614
Main Authors: Xie, Shasha; Qian, Wenhua; Nie, Rencan; Xu, Dan; Cao, Jinde
Format: Journal Article
Language: English
Published: Wiley, 01.04.2023
ISSN: 1751-9659, 1751-9667
DOI: 10.1049/ipr2.12741

Summary: In the non‐homogeneous texture synthesis task, the overall visual characteristics should remain consistent while the local patterns of the exemplar are extended. Existing methods focus mainly on the local visual features of patterns but ignore the relative position features that are important for non‐homogeneous texture synthesis. Although these methods have achieved success on homogeneous textures, they perform poorly on non‐homogeneous ones. It is therefore desirable to model the dependence between pixels to improve synthesis performance. To ensure synthesis quality in both local detail and overall structure, this paper proposes a non‐homogeneous texture extension synthesis model (GAGCN) that combines a generative adversarial network (GAN) with a graph convolutional network (GCN). The GAN learns the internal distribution of image patches, which gives the synthesized image rich local detail. The GCN learns the latent dependence between pixels from the statistical characteristics of the image. Based on this, a novel graph similarity loss is proposed; it describes the latent spatial differences between the sample image and the generated image, which helps the model better capture global features. Experiments show that the method outperforms existing methods on non‐homogeneous textures.

Graphical abstract: feature information is extracted from a texture image through a graph convolutional network (GCN); the correlation matrix, obtained through a clustering algorithm and co‐occurrence statistics, serves as the input to the GCN.
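The abstract only sketches how the correlation matrix and the graph similarity loss work, so a brief illustration may help. The following is a minimal, hypothetical sketch, not the authors' published code: per-pixel features are clustered, co-occurrence statistics of cluster labels between neighbouring pixels form the correlation matrix fed to the GCN, and a graph similarity loss compares the graphs of the exemplar and the generated image. All names (correlation_matrix, gcn_layer, graph_similarity_loss), the choice of k-means, 4-connected neighbourhoods, and the Frobenius distance are assumptions made for illustration only.

```python
# Illustrative sketch of the ideas in the abstract; NOT the authors'
# implementation. Assumes k-means clustering, 4-connected co-occurrence,
# and a Frobenius-norm graph comparison -- all hypothetical choices.
import torch
import torch.nn.functional as F


def correlation_matrix(features: torch.Tensor, num_clusters: int = 16,
                       iters: int = 10) -> torch.Tensor:
    """Cluster per-pixel features with plain k-means, then count how often
    each pair of clusters co-occurs among 4-connected neighbours.

    features: (H, W, C) per-pixel feature map.
    Returns a row-normalised (num_clusters, num_clusters) matrix.
    """
    h, w, c = features.shape
    x = features.reshape(-1, c)
    # Random initial centroids, then alternate assign / update steps.
    centroids = x[torch.randperm(x.shape[0])[:num_clusters]]
    for _ in range(iters):
        labels = torch.cdist(x, centroids).argmin(dim=1)
        for k in range(num_clusters):
            mask = labels == k
            if mask.any():
                centroids[k] = x[mask].mean(dim=0)
    labels = labels.reshape(h, w)

    # Co-occurrence of cluster labels between horizontal/vertical neighbours.
    adj = torch.zeros(num_clusters, num_clusters)
    pairs = torch.cat([
        torch.stack([labels[:, :-1].reshape(-1), labels[:, 1:].reshape(-1)], 1),
        torch.stack([labels[:-1, :].reshape(-1), labels[1:, :].reshape(-1)], 1),
    ])
    for a, b in pairs.tolist():
        adj[a, b] += 1
        adj[b, a] += 1
    return adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)


def gcn_layer(node_feats: torch.Tensor, adj: torch.Tensor,
              weight: torch.Tensor) -> torch.Tensor:
    """One GCN propagation step, H' = ReLU(A H W), over the cluster graph:
    adj (K, K) is the correlation matrix, node_feats (K, C) holds per-cluster
    features, weight (C, C') is a learnable projection."""
    return F.relu(adj @ node_feats @ weight)


def graph_similarity_loss(feat_real: torch.Tensor,
                          feat_fake: torch.Tensor) -> torch.Tensor:
    """Compare the latent spatial structure of exemplar and output via the
    Frobenius distance between their co-occurrence graphs -- one plausible
    reading of the 'graph similarity loss' in the abstract. Note the hard
    cluster assignments make this version non-differentiable; a trainable
    loss would need a soft formulation."""
    a_real = correlation_matrix(feat_real)
    a_fake = correlation_matrix(feat_fake)
    return torch.norm(a_real - a_fake, p='fro')
```

In training, such a loss would presumably be added to the usual adversarial objectives with a weighting coefficient; the paper's exact formulation may differ.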