2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS 2021
DOI: 10.1109/igarss47720.2021.9554645

Style Transformation-Based Change Detection Using Adversarial Learning with Object Boundary Constraints

Cited by 3 publications (2 citation statements) | References 3 publications
“…In order to address the task of CD from multi-temporal imagery with domain shift (such as seasonal differences and style differences), a transformer-driven image translation module is proposed to map data between two images with real-time efficiency. Unlike GAN-based image-to-image translation [28], [31], [36], transformer-driven image translation adopts the expressive multi-head attention strategy from the well-known transformer architecture [47] to globally model a new image, whose image content is coherent with the pre-change image, and whose style is optionally the same as the post-change image. As Fig.…”
Section: Methodology, A. Transformer-Driven Image Translation
confidence: 99%
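The excerpt above describes a cross-attention scheme in which the translated image draws its content from the pre-change image (queries) and its style from the post-change image (keys and values). The sketch below is an illustrative reconstruction of that idea in PyTorch, not the cited paper's implementation; the module name, token shapes, and hyperparameters are assumptions.

```python
# Minimal sketch of cross-attention image translation (illustrative only).
import torch
import torch.nn as nn

class CrossAttentionTranslator(nn.Module):
    """Fuse content tokens (pre-change image) with style tokens
    (post-change image) via multi-head attention."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, content_tokens, style_tokens):
        # Queries come from the pre-change (content) image; keys and
        # values come from the post-change (style) image, so each output
        # token keeps its spatial content but borrows the target style.
        out, _ = self.attn(content_tokens, style_tokens, style_tokens)
        return self.norm(content_tokens + self.proj(out))

# Toy usage: batch of 2, 16x16 patch grid flattened to 256 tokens of dim 64.
content = torch.randn(2, 256, 64)
style = torch.randn(2, 256, 64)
translated = CrossAttentionTranslator()(content, style)
print(translated.shape)  # torch.Size([2, 256, 64])
```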
“…1) Building effective representation of multi-temporal images at the pixel-level [28], [31], [32], feature-level [29], [33], [34], or object-level [35], [36]. Importantly, homogeneous data captured by the same types of sensors and heterogeneous images captured by different types of sensors use different methods to establish an effective representation of multitemporal images.…”
Section: Introduction
confidence: 99%
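To make the pixel-level versus feature-level distinction in the excerpt concrete, here is a minimal, hypothetical sketch: the pixel-level representation differences raw intensities directly, while the feature-level one differences the outputs of a stand-in convolutional encoder. The encoder and shapes are assumptions, not any cited model.

```python
# Illustrative contrast between pixel-level and feature-level change
# representations for a multi-temporal image pair.
import torch
import torch.nn as nn

encoder = nn.Sequential(  # toy feature extractor (assumption)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1),
)

pre = torch.rand(1, 3, 64, 64)   # pre-change image
post = torch.rand(1, 3, 64, 64)  # post-change image

pixel_diff = (pre - post).abs()                    # pixel-level
feat_diff = (encoder(pre) - encoder(post)).abs()   # feature-level
print(pixel_diff.shape, feat_diff.shape)
# torch.Size([1, 3, 64, 64]) torch.Size([1, 16, 64, 64])
```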