Implementation of the Swin Transformer in PyTorch.

Swin Transformer - PyTorch

Implementation of the Swin Transformer architecture. This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (86.4 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones.

This is NOT the official repository of the Swin Transformer. At the time of writing, the authors' official code has not been released yet; once it is, it will be available at: https://github.com/microsoft/Swin-Transformer.

All credits go to the authors Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin and Baining Guo.

Install

$ pip install swin-transformer-pytorch

or (if you clone the repository)

$ pip install -r requirements.txt

Usage

import torch
from swin_transformer_pytorch import SwinTransformer

net = SwinTransformer(
    hidden_dim=96,
    layers=(2, 2, 6, 2),
    heads=(3, 6, 12, 24),
    channels=3,
    num_classes=3,
    head_dim=32,
    window_size=7,
    downscaling_factors=(4, 2, 2, 2),
    relative_pos_embedding=True
)
dummy_x = torch.randn(1, 3, 224, 224)
logits = net(dummy_x)  # (1,3)
print(net)
print(logits)

Parameters

  • hidden_dim: int.
    The hidden dimension to use for the architecture, denoted C in the original paper.
  • layers: 4-tuple of ints divisible by 2.
    How many layers to apply in each stage. Every int should be divisible by two, because a regular and a shifted SwinBlock are always applied together.
  • heads: 4-tuple of ints.
    How many attention heads to apply in each stage.
  • channels: int.
    Number of channels of the input.
  • num_classes: int.
    Number of classes the output should have.
  • head_dim: int.
    The dimension each head should have.
  • window_size: int.
    The window size to use. Make sure that after each downscaling the image dimensions are still divisible by the window size (see the sketch after this list).
  • downscaling_factors: 4-tuple of ints.
    The downscaling factor to use in each stage. Make sure the image dimension is large enough for the downscaling factors.
  • relative_pos_embedding: bool.
    Whether to use a learnable relative position embedding of size (2M-1)x(2M-1) or a full positional embedding of size M²xM².
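
For example, a small sanity check of the window-size constraint (a sketch, not part of the package; check_window_size is a hypothetical helper):

def check_window_size(input_size, window_size, downscaling_factors=(4, 2, 2, 2)):
    # Walk through the stages and assert that every feature map is divisible
    # by the window size.
    size = input_size
    for i, factor in enumerate(downscaling_factors):
        size //= factor
        assert size % window_size == 0, (
            f"stage {i + 1}: feature map {size}x{size} is not divisible by window_size {window_size}")
    print(f"window_size={window_size} is valid for input size {input_size}")

check_window_size(224, 7)  # stage resolutions 56, 28, 14, 7 -> all divisible by 7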

TODO

  • Adjust the code for, and validate it on, ImageNet-1K and COCO 2017

References

Part of the code is adapted from the vit-pytorch repository (https://github.com/lucidrains/vit-pytorch), which provides a very clean VisionTransformer implementation to start from.

Citations

@misc{liu2021swin,
      title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, 
      author={Ze Liu and Yutong Lin and Yue Cao and Han Hu and Yixuan Wei and Zheng Zhang and Stephen Lin and Baining Guo},
      year={2021},
      eprint={2103.14030},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • about window size

    about window size

    Dear Sir, thank you very much for your great work. I would like to ask if you have any suggestions on how to set the window size. For a 224x224 input, a window size of 7 is reasonable because the feature maps are divisible by 7, but for other sizes, such as 768x768 in Cityscapes, 7 will undoubtedly raise an error, since 768 / 32 = 24. So the window setting looks very subtle. The closest value is 8, but is the window size like a convolution kernel, where odd numbers work better? Also, is it possible to set different window sizes at different stages? That seems feasible for non-regular image sizes. Since the window size is a very critical hyperparameter that determines the receptive field and the amount of computation, I would like to request your opinion, thanks!
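
    For reference: with downscaling factors (4, 2, 2, 2), a 768x768 input gives stage resolutions 768 / 4 = 192, then 96, 48 and 24, all divisible by 8, so window_size=8 should work without error (check_window_size(768, 8) from the sketch in the Parameters section passes). The window is only a partition of the feature map for local self-attention, not a convolution kernel, so there is no centering argument that favours odd sizes.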

    opened by huixiancheng 9
  • relative pos embedding errs out with

    relative pos embedding errs out with "IndexError: tensors used as indices must be long, byte or bool tensors"

    Very big thanks for making this implementation! I just upgraded to the relative pos embedding update from an hour ago, and when trying to train I get this type error.

    ---> 32         y_pred = model(images)
         33         #print(f" y_pred = {y_pred}")
         34         #print(f" y_pred shape = {y_pred.shape}")
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, img)
        229 
        230     def forward(self, img):
    --> 231         x = self.stage1(img)
        232         x = self.stage2(x)
        233         x = self.stage3(x)
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, x)
        189         x = self.patch_partition(x)
        190         for regular_block, shifted_block in self.layers:
    --> 191             x = regular_block(x)
        192             x = shifted_block(x)
        193         return x.permute(0, 3, 1, 2)
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, x)
        148 
        149     def forward(self, x):
    --> 150         x = self.attention_block(x)
        151         x = self.mlp_block(x)
        152         return x
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, x, **kwargs)
         21 
         22     def forward(self, x, **kwargs):
    ---> 23         return self.fn(x, **kwargs) + x
         24 
         25 
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, x, **kwargs)
         31 
         32     def forward(self, x, **kwargs):
    ---> 33         return self.fn(self.norm(x), **kwargs)
         34 
         35 
    
    ~\anaconda3\envs\fastai2\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),
    
    ~\cdetr\cdetr_utils\transformer\swin_transformer.py in forward(self, x)
        116 
        117         if self.relative_pos_embedding:
    --> 118             dots += self.pos_embedding[self.relative_indices[:, :, 0], self.relative_indices[:, :, 1]]
        119         else:
        120             dots += self.pos_embedding
    
    IndexError: tensors used as indices must be long, byte or bool tensors
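
    A likely fix, judging from the error (a sketch, not the author's patch): the relative_indices buffer is evidently stored as a floating-point tensor, so cast it to long before indexing, e.g.

    dots += self.pos_embedding[self.relative_indices[:, :, 0].long(),
                               self.relative_indices[:, :, 1].long()]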
    
    opened by lessw2020 8
  • fail to run the code

    fail to run the code

    Hi, I'm interested in your code! But when I run the example,

    Traceback (most recent call last):
      File "D:/Code/Pytorch/swin-transformer-pytorch-0.4/example.py", line 16, in <module>
        logits = net(dummy_x)  # (1,3)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 219, in forward
        x = self.stage1(img)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 190, in forward
        x = regular_block(x)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 149, in forward
        x = self.attention_block(x)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 22, in forward
        return self.fn(x, **kwargs) + x
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 32, in forward
        return self.fn(self.norm(x), **kwargs)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 117, in forward
        dots += self.pos_embedding[self.relative_indices[:, :, 0], self.relative_indices[:, :, 1]]
    IndexError: tensors used as indices must be long, byte or bool tensors

    And when I change the type to long, the code raises another error.

    Traceback (most recent call last):
      File "D:/Code/Pytorch/swin-transformer-pytorch-0.4/example.py", line 16, in <module>
        logits = net(dummy_x)  # (1,3)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 219, in forward
        x = self.stage1(img)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 188, in forward
        x = self.patch_partition(x)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Code\Pytorch\swin-transformer-pytorch-0.4\swin_transformer_pytorch\swin_transformer.py", line 164, in forward
        x = self.patch_merge(x).view(b, -1, new_h, new_w).permute(0, 2, 3, 1)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\modules\fold.py", line 295, in forward
        self.padding, self.stride)
      File "D:\Softwares\Anaconda\envs\pytorch_18\lib\site-packages\torch\nn\functional.py", line 4313, in unfold
        return torch._C._nn.im2col(input, _pair(kernel_size), _pair(dilation), _pair(padding), _pair(stride))
    RuntimeError: "im2col_out_cpu" not implemented for 'Long'
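
    A note on the second error (my reading of the traceback, not a confirmed fix): the cast to long belongs on the relative_indices buffer only. Casting the input image to long makes nn.Unfold fail, since im2col has no integer implementation on CPU, which is exactly the RuntimeError above. Something like self.relative_indices = self.relative_indices.long() at construction time should avoid both errors.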

    opened by ShujinW 5
  • Cyclic shift with masking

    Cyclic shift with masking

    Hello sir, I'm trying to understand the "efficient batch computation" that the authors suggest. Probably because of my limited knowledge, it was hard to grasp how it works. Your implementation really helped me understand its mechanism, thanks a lot!

    Here's my question: it seems the masked area of q * k / sqrt(d) vanishes during the computation of self-attention. I'm not sure I understood the code correctly, but is this originally intended in the paper? I'm wondering whether each sub-window's self-attention should be computed before reversing.

    Apologies if I misunderstood something, and thanks again!
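
    For reference, the paper does intend this: after the cyclic shift, a window can contain sub-windows that were not adjacent in the original feature map, so their mutual attention scores are set to -inf before the softmax and vanish. A minimal sketch of the idea (sizes are illustrative):

    import torch
    scores = torch.randn(4, 4)              # attention scores inside one shifted window
    mask = torch.zeros(4, 4)
    mask[:2, 2:] = float('-inf')            # block attention across the two sub-windows
    mask[2:, :2] = float('-inf')
    attn = (scores + mask).softmax(dim=-1)  # masked entries become exactly 0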

    opened by Hayoung93 4
  • Shifting attention-calculating windows

    Shifting attention-calculating windows

    Hello, sir. A question popped up again, unfortunately.

    I've followed your shifting code, and it seems to differ from (my understanding of) the paper. I understood the original paper's window shifting as the black arrow in the image below (self-attention is calculated among elements inside the bold lines). The left red arrow points to the result of patch-wise rolling, and the right red arrow points to the result of rolling the entire feature map. In my opinion, self-attention should be computed according to the right-top figure; therefore, the boxes of the right-bottom figure should be used (the green dotted line separates sub-windows), which preserves each region of the right-top figure.

    Please let me know if I misunderstood your code or something in the paper. Thanks a lot!

    Additionally, this is how I mimicked your code:

    import torch
    from einops import rearrange
    A = torch.Tensor(list(range(1, 17))).view(1, 4, 4)
    A_patched = A.view(4, 2, 2).permute(1, 2, 0).view(1, 2, 2, 4)
    A_patched_rolled = torch.roll(A_patched, shifts=(-1, -1), dims=(1, 2))
    A_rearranged = rearrange(A, 'a (b c) (d e)->a (b d) (c e)', b=2, d=2)
    A_rearranged_rolled = torch.roll(A_rearranged, shifts=(-1, -1), dims=(1, 2))
    A_rearranged_rolled2 = torch.roll(A_rearranged, shifts=(1, 1), dims=(1, 2))
    

    where A can be considered as a 4x4 feature map (though element order is not matched with image above), A_patched is a divided version of A, and A_patched_rolled is patch-wise shifted version of A_patched, following torch.roll(x, shifts=(self.displacement, self.displacement), dims=(1, 2)) in your code. A_rearranged is rearranged to match the image above.


    >>> A
    tensor([[[ 1.,  2.,  3.,  4.],
             [ 5.,  6.,  7.,  8.],
             [ 9., 10., 11., 12.],
             [13., 14., 15., 16.]]])
    >>> A_patched
    tensor([[[[ 1.,  5.,  9., 13.],
              [ 2.,  6., 10., 14.]],
    
             [[ 3.,  7., 11., 15.],
              [ 4.,  8., 12., 16.]]]])
    >>> A_patched_rolled
    tensor([[[[ 4.,  8., 12., 16.],
              [ 3.,  7., 11., 15.]],
    
             [[ 2.,  6., 10., 14.],
              [ 1.,  5.,  9., 13.]]]])
    >>> A_rearranged
    tensor([[[ 1.,  2.,  5.,  6.],
             [ 3.,  4.,  7.,  8.],
             [ 9., 10., 13., 14.],
             [11., 12., 15., 16.]]])
    >>> A_rearranged_rolled
    tensor([[[ 4.,  7.,  8.,  3.],
             [10., 13., 14.,  9.],
             [12., 15., 16., 11.],
             [ 2.,  5.,  6.,  1.]]])
    >>> A_rearranged_rolled2
    tensor([[[16., 11., 12., 15.],
             [ 6.,  1.,  2.,  5.],
             [ 8.,  3.,  4.,  7.],
             [14.,  9., 10., 13.]]])
    
    opened by Hayoung93 2
  • How to use for generation work

    How to use for generation work

    Thanks for your great work. I work on the task of image generation. In my opinion, the current Swin Transformer is an encoder structure. Is there a corresponding Swin Transformer that can be used as a decoder?

    opened by yinyiyu 2
  • Runtime error

    Runtime error

    I'm running into an error in your code at line 117:

    dots += self.pos_embedding[self.relative_indices[:, :, 0], self.relative_indices[:, :, 1]]
    IndexError: tensors used as indices must be long, byte or bool tensors

    opened by QinchengZhang 1
  • A question about qk_scale

    A question about qk_scale

    Hello @berniwal , I have a question about this: https://github.com/berniwal/swin-transformer-pytorch/blob/c921ebf914c6ea9734bb260ada395e3746c85402/swin_transformer_pytorch/swin_transformer.py#L76

    What's the function of the scale? I can't understand why this is done.

    Best regards
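
    For context, that line is the standard scaled dot-product attention from "Attention Is All You Need": the dot products q @ k^T grow with the head dimension, so they are multiplied by head_dim ** -0.5 to keep their variance roughly constant and the softmax from saturating. A minimal sketch:

    import torch
    head_dim = 32
    scale = head_dim ** -0.5                   # 1 / sqrt(d)
    q = torch.randn(1, 49, head_dim)           # 49 tokens per 7x7 window
    k = torch.randn(1, 49, head_dim)
    dots = (q @ k.transpose(-2, -1)) * scale   # rescaled attention scores
    attn = dots.softmax(dim=-1)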

    opened by Sample-design-alt 0
  • Issues related to patch merging implementation

    Issues related to patch merging implementation

    In this repository, patch merging is implemented with nn.Unfold, but it appears to behave differently from the official implementation.

    https://github.com/microsoft/Swin-Transformer/blob/6ded2577413b68cbbd89f08391465788ed73030e/models/swin_transformer.py#L291

    Is there something I'm missing out on?
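
    For comparison, a sketch of the official patch merging (the official code operates on flattened (B, L, C) tensors; this version uses (B, H, W, C), but the idea is the same): each 2x2 neighbourhood is concatenated channel-wise and reduced from 4C to 2C with a linear layer. The nn.Unfold version in this repository gathers the same 2x2 neighbourhood, just in a different channel order, so the two should be equivalent up to a permutation of the linear layer's input weights.

    import torch
    import torch.nn as nn

    class PatchMerging(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.norm = nn.LayerNorm(4 * dim)
            self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)

        def forward(self, x):                        # x: (B, H, W, C)
            x0 = x[:, 0::2, 0::2, :]                 # top-left of each 2x2 block
            x1 = x[:, 1::2, 0::2, :]                 # bottom-left
            x2 = x[:, 0::2, 1::2, :]                 # top-right
            x3 = x[:, 1::2, 1::2, :]                 # bottom-right
            x = torch.cat([x0, x1, x2, x3], dim=-1)  # (B, H/2, W/2, 4C)
            return self.reduction(self.norm(x))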

    opened by lee-gwang 1
  • why is the create_mask function 49x49?

    why is the create_mask function 49x49?

    def create_mask(window_size, displacement, upper_lower, left_right):
        mask = torch.zeros(window_size ** 2, window_size ** 2)

    It is 49x49 throughout the Swin network. Why?
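
    Presumably because window_size is 7 throughout the network, and the mask blocks attention between pairs of tokens inside one window:

    window_size = 7
    tokens_per_window = window_size ** 2                  # 7**2 = 49
    mask_shape = (tokens_per_window, tokens_per_window)   # (49, 49)

    With window_size=8 the mask would be 64x64 instead.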

    opened by henbucuoshanghai 1
  • apply to other dataset

    apply to other dataset

    Hello, thanks very much for the work you have done. I have a question: how can I apply this code to train a ViT model on another dataset, and how should I adjust the parameters?
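
    A sketch of how the constructor arguments might be adjusted for another dataset (the values are illustrative): change num_classes and channels to match the data, and make sure the input resolution satisfies the window-size constraint from the Parameters section.

    # Illustrative: a 10-class RGB dataset resized to 224x224.
    net = SwinTransformer(
        hidden_dim=96,
        layers=(2, 2, 6, 2),
        heads=(3, 6, 12, 24),
        channels=3,           # 1 for grayscale input
        num_classes=10,       # match your dataset
        head_dim=32,
        window_size=7,        # 224 -> 56, 28, 14, 7; all divisible by 7
        downscaling_factors=(4, 2, 2, 2),
        relative_pos_embedding=True
    )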

    opened by jieweilai 0
  • deeplabv3 + swintransformer

    deeplabv3 + swintransformer

    I tried this SwinTransformer as the backbone for DeepLabV3 (https://github.com/VainF/DeepLabV3Plus-Pytorch), and ran into these errors:

    Exception has occurred: EinopsError Error while processing rearrange-reduction pattern "b (nw_h w_h) (nw_w w_w) (h d) -> b h (nw_h nw_w) (w_h w_w) d". Input tensor shape: torch.Size([1, 104, 104, 96]). Additional info: {'h': 3, 'w_h': 7, 'w_w': 7}. Shape mismatch, can't divide axis of length 104 in chunks of 7

    During handling of the above exception, another exception occurred:

    File "D:\TangYong\Src\VS\Python\PyTorch\deeplabv3-vainf\network\backbone\swintransformer.py", line 111, in lambda t: rearrange(t, 'b (nw_h w_h) (nw_w w_w) (h d) -> b h (nw_h nw_w) (w_h w_w) d', File "D:\TangYong\Src\VS\Python\PyTorch\deeplabv3-vainf\network\backbone\swintransformer.py", line 110, in forward q, k, v = map( File "D:\TangYong\Src\VS\Python\PyTorch\deeplabv3-vainf\network\backbone\swintransformer.py", line 32, in forward return self.fn(self.norm(x), **kwargs) File "D:\TangYong\Src\VS\Python\PyTorch\deeplabv3-vainf\network\backbone\swintransformer.py", line 22, in forward return self.fn(x, **kwargs) + x File "D:\TangYong\Src\VS\Python\PyTorch\deeplabv3-vainf\network\backbone\swintransformer.py", line 149, in forward

    thank you for your answer.
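
    For reference: the 104x104 feature map implies a 416x416 input (416 / 4 = 104), and the stage resolutions 104, 52, 26, 13 are not divisible by 7, which is exactly what the EinopsError reports. Of the candidate window sizes, only 13 divides all four (104/13 = 8, 52/13 = 4, 26/13 = 2, 13/13 = 1), so either set window_size=13 or resize/pad the input so that every stage is divisible by 7, e.g. 448x448 gives 112, 56, 28, 14.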

    opened by TangYong1975 1
Releases (0.4)