Change is Everywhere: Single-Temporal Supervised Object Change Detection in Remote Sensing Imagery (ICCV 2021)

Overview

Change is Everywhere: Single-Temporal Supervised Object Change Detection in Remote Sensing Imagery

by Zhuo Zheng, Ailong Ma, Liangpei Zhang and Yanfei Zhong

[Paper] [BibTeX]



This is an official implementation of STAR and ChangeStar in our ICCV 2021 paper Change is Everywhere: Single-Temporal Supervised Object Change Detection for High Spatial Resolution Remote Sensing Imagery.

We hope that STAR will serve as a solid baseline and help ease future research in weakly-supervised object change detection.
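
The core recipe behind STAR, in rough form (a minimal sketch following the paper's description, not the repo's actual API; all names below are illustrative): unpaired single-temporal images are shuffled into pseudo bitemporal pairs within a mini-batch, and the change label for each pair is derived by XOR-ing the two object masks.

import torch

def pseudo_bitemporal_batch(images: torch.Tensor, masks: torch.Tensor):
    # images: (N, C, H, W) single-temporal images; masks: (N, H, W) binary object masks.
    perm = torch.randperm(images.size(0))            # random pairing within the batch
    images_t1, images_t2 = images, images[perm]      # pseudo "pre" / "post" images
    # A pixel is labeled "changed" when exactly one of the two masks contains an object.
    change_label = torch.logical_xor(masks.bool(), masks[perm].bool()).float()
    return images_t1, images_t2, change_label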


News

  • 2021/08/28, The code is available.
  • 2021/07/23, The code will be released soon.
  • 2021/07/23, This paper is accepted by ICCV 2021.

Features

  • Learning a good change detector from single-temporal supervision.
  • Strong baselines for bitemporal and single-temporal supervised change detection.
  • A clean codebase for weakly-supervised change detection.
  • Support for both bitemporal and single-temporal supervised settings.

Citation

If you use STAR or ChangeStar (built on FarSeg) in your research, please cite the following papers:

@inproceedings{zheng2021change,
  title={Change is Everywhere: Single-Temporal Supervised Object Change Detection for High Spatial Resolution Remote Sensing Imagery},
  author={Zheng, Zhuo and Ma, Ailong and Zhang, Liangpei and Zhong, Yanfei},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={},
  year={2021}
}

@inproceedings{zheng2020foreground,
  title={Foreground-Aware Relation Network for Geospatial Object Segmentation in High Spatial Resolution Remote Sensing Imagery},
  author={Zheng, Zhuo and Zhong, Yanfei and Wang, Junjue and Ma, Ailong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4096--4105},
  year={2020}
}

Getting Started

Install EVer

pip install --upgrade git+https://github.com/Z-Zheng/ever.git
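
A quick way to confirm the install succeeded (assuming the package is importable as ever, which the import paths in the issue tracebacks below suggest):

python -c "import ever; print('ever imported from', ever.__file__)"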

Requirements:

  • pytorch >= 1.6.0
  • python >= 3.6

Prepare Dataset

  1. Download the xView2 dataset (training set and tier3 set) and the LEVIR-CD dataset.

  2. Create soft links

ln -s </path/to/xView2> ./xView2
ln -s </path/to/LEVIR-CD> ./LEVIR-CD
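
The soft links above place the datasets at ./xView2 and ./LEVIR-CD. A small sanity check is sketched below (under assumed directory layouts: LEVIR-CD is commonly distributed as train/val/test splits with A, B, and label subfolders; the xView2 subfolder names are assumptions, so adjust them to whatever the training configs actually read):

from pathlib import Path

# Assumed layouts; adjust to what the configs actually expect.
expected = [
    "LEVIR-CD/train/A", "LEVIR-CD/train/B", "LEVIR-CD/train/label",
    "LEVIR-CD/test/A", "LEVIR-CD/test/B", "LEVIR-CD/test/label",
    "xView2/train/images", "xView2/train/labels",   # assumption
    "xView2/tier3/images", "xView2/tier3/labels",   # assumption
]
for p in expected:
    print(p, "ok" if Path(p).is_dir() else "MISSING")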

Training and Evaluation under Single-Temporal Supervision

bash ./scripts/trainxView2/r50_farseg_changemixin_symmetry.sh

Training and Evaluation under Bitemporal Supervision

bash ./scripts/bisup_levircd/r50_farseg_changemixin.sh
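
Judging from the command captured in the crop-size traceback in the comments below, the bitemporal script wraps a distributed launch of train_sup_change.py with a config path and a log directory, roughly like this (arguments taken from that traceback; the process count is an assumption):

python -m torch.distributed.launch --nproc_per_node=1 ./train_sup_change.py \
    --config_path=levircd.r50_farseg_changestar_bisup \
    --model_dir=./log/bisup-LEVIRCD/r50_farseg_changestar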

License

ChangeStar is released under the Apache License 2.0.

Copyright (c) Zhuo Zheng. All rights reserved.

Comments
  • Can ChangeStar be used for general CD?

    Hi,

    Thanks for the great work. I wonder, can this work be used for general change detection, i.e., multi-class rather than just single-class?

    If yes, have you done such experiments? Thanks!

    opened by Richardych 3
  • hello, how to add changemixin when use bitemporal supervised

    Hello, I have some questions about your repo:

    1. How do I add ChangeMixin when using bitemporal supervision? I see it in Table 4 of your paper, but I can't find it in the code.
    2. Could ChangeStar be trained single-temporally on LEVIR-CD? (The other dataset is too big for me to train on; I can't download it.)
    3. Do your bitemporal supervised methods just use torch.cat in the final layer? Sorry for asking these questions.
    opened by csliuchang 3
  • ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)

    Traceback (most recent call last):
      File "./train_sup_change.py", line 48, in <module>
        blob = trainer.run(after_construct_launcher_callbacks=[register_evaluate_fn])
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 117, in run
        test_data_loader=kw_dataloader['testdata_loader'])
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/launcher.py", line 232, in train_by_config
        signal_loss_dict = self.train_iters(train_data_loader, test_data_loader=test_data_loader, **config)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/launcher.py", line 174, in train_iters
        is_master=self._master)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/iterator.py", line 30, in next
        data = next(self._iterator)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
        data = self._next_data()
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataset.py", line 218, in __getitem__
        return self.datasets[dataset_idx][sample_idx]
      File "/home/yujianzhi/tem/ChangeStar-master/data/levir_cd/dataset.py", line 30, in __getitem__
        blob = self.transforms(**dict(image=imgs, mask=gt))
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/composition.py", line 191, in __call__
        data = t(force_apply=force_apply, **data)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/transforms_interface.py", line 90, in __call__
        return self.apply_with_params(params, **kwargs)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/transforms_interface.py", line 103, in apply_with_params
        res[key] = target_function(arg, **dict(params, **target_dependencies))
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/augmentations/crops/transforms.py", line 48, in apply
        return F.random_crop(img, self.height, self.width, h_start, w_start)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/augmentations/crops/functional.py", line 28, in random_crop
        crop_height=crop_height, crop_width=crop_width, height=height, width=width
    ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)
    Traceback (most recent call last):
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
        main()
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/yujianzhi/anaconda3/envs/CStar/bin/python', '-u', './train_sup_change.py', '--local_rank=0', '--config_path=levircd.r50_farseg_changestar_bisup', '--model_dir=./log/bisup-LEVIRCD/r50_farseg_changestar']' returned non-zero exit status 1.

    It says "ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)", but my images are exactly 512×512.

    opened by themoongodyue 3
  • How to get the bitemporal images' labels if the model is trained on LEVIR-CD dataset?

    Hello, I'm very interested in your work, but I ran into a problem during my research. If the model is trained on the LEVIR-CD dataset, how are the change labels obtained when there are no segmentation maps for the individual bitemporal images in the dataset? I would appreciate it if you could resolve my problem.

    opened by SONGLEI-arch 2
  • Reproduction Problem

    Hello author.

    Your work is great!

    But I ran into a problem while running your code.

    The performance (IoU) I obtained, shown in the attached screenshot, is much higher than the number in Table 1 of your paper. Can you tell me the reason? [screenshot: Screen Shot 2022-01-01 at 7 44 17 PM]

    All hyperparameters and data are identical.

    opened by seominseok0429 1
  • AssertionError error

    Hello, this is really great work. I have one question for you. The LEVIR-CD dataset trains well, but the xView2 dataset gives an unknown error (see the attached screenshot).

    Do you have any idea how to fix it? All steps follow the recipe exactly. [screenshot: Screen Shot 2021-12-31 at 4 57 41 PM]

    opened by seominseok0429 1
  • RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8

    I'm going crazy, please help me.

    Traceback (most recent call last):
      File "./train_sup_change.py", line 48, in <module>
        blob = trainer.run(after_construct_launcher_callbacks=[register_evaluate_fn])
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 98, in run
        kwargs.update(dict(model=self.make_model()))
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 87, in make_model
        model = nn.parallel.DistributedDataParallel(
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__
        dist._verify_model_across_ranks(self.process_group, parameters)
    RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8
    ncclSystemError: System call (socket, malloc, munmap, etc) failed.
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 31335) of binary: /home/cy/miniconda3/envs/STAnet/bin/python
    ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed

    opened by themoongodyue 1
  • Evaluation

    Excuse me, I want to know how this module behaves at inference time after training the model. Also, if you could offer a link to usage documentation for the 'ever' library, that would be fantastic.

    opened by LIUZIJING-CHN 1
  • changestar_sisup results

    Hi, I have trained the model under single-temporal supervision, but the F1 score is only 0.73, which is worse than the result in your paper. Is there anything wrong with my experiment? Below is my training log:

    1666753326.225779.log

    After training, I only tested on the LEVIR-CD test set.

    opened by max2857 0
  • A question about PCC

    Hello, I have a question about PCC:

    PCC is mentioned in the paper. After obtaining the classification result from the segmentation model, how is the change detection result obtained from the classification result? Is it a direct subtraction?

    opened by Hyd1999618 0
  • [Feature] support [0~255] gt

    The original ground truth of LEVIR-CD consists of 0 and 255.

    However, the segmentation loss in this code works only when the ground truth consists of 0 and 1.

    Therefore, I added code that maps 255 in the ground truth to 1.
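
    For reference, a minimal sketch of such a remapping (illustrative only, not the exact patch in this PR; the function name is hypothetical):

    import numpy as np

    def binarize_gt(gt: np.ndarray) -> np.ndarray:
        # Map a {0, 255} LEVIR-CD mask to {0, 1} so it matches the segmentation loss.
        return (gt > 0).astype(np.int64)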

    opened by seominseok0429 1