StyleGanJittor (Tsinghua University computer graphics course)

Overview

This project is a Jittor 64×64 reimplementation of StyleGAN, based on Python 3.8 + Jittor (计图) and the open-source StyleGAN-PyTorch project. I trained the model on the color_symbol_7k dataset for 40,000 iterations; the trained model can generate 64×64 symbolic images.

StyleGAN is a generative adversarial network for image generation proposed by NVIDIA in 2018. According to the paper, its generator improves the state of the art in terms of traditional distribution-quality metrics, leads to demonstrably better interpolation properties, and better disentangles the latent factors of variation. The main improvement over previous models lies in the generator's structure: an eight-layer mapping network, the AdaIN module, and noise inputs that introduce stochastic variation. These components let the generator decouple the global attributes of an image from its local details, producing better synthesis results, and also give the network better latent-space interpolation behavior.

(Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 4401-4410.)
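Since AdaIN is central to how style is injected, here is a minimal illustrative sketch of the operation in Jittor (a simplified stand-in, not the module from model.py; the names adain, y_scale and y_bias are placeholders for this example): each channel of a feature map is normalized to zero mean and unit variance per sample, then rescaled and shifted by a per-channel scale and bias derived from the mapping network's output.

import jittor as jt

def adain(x, y_scale, y_bias, eps=1e-8):
    # x       : feature maps of shape (N, C, H, W)
    # y_scale : per-channel style scale of shape (N, C), derived from the mapping network
    # y_bias  : per-channel style bias of shape (N, C)
    n, c = x.shape[0], x.shape[1]
    flat = x.reshape((n, c, -1))                        # (N, C, H*W)
    mu = flat.mean(2).reshape((n, c, 1))                # per-channel mean
    var = (flat - mu).sqr().mean(2).reshape((n, c, 1))  # per-channel (biased) variance
    normalized = ((flat - mu) / (var + eps).sqrt()).reshape(x.shape)
    # The style decides how each normalized channel is scaled and shifted.
    return y_scale.reshape((n, c, 1, 1)) * normalized + y_bias.reshape((n, c, 1, 1))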

The training process and the outputs of the trained model are shown in Video1trainingResult.avi, Video2GenerationResult1.avi, and Video3GenerationResul2t.avi.

The Checkpoint folder holds the trained StyleGAN models; because they take up a lot of storage space, the model files have been deleted. The data folder contains the color_symbol_7k dataset. The dataset is processed by the prepare_data script into an LMDB database for faster training, and the database is stored in the mdb folder. The sample folder contains images generated during training, which can be used to trace the training process. The generateSample folder contains sample images produced by calling the StyleGenerator after training is complete.

The MultiResolutionDataset class for reading the LMDB database is defined in dataset.py, the model reimplemented with Jittor is defined in model.py, train.py is the training script, and VideoWrite.py converts the generated images into a video.
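For reference, a minimal sketch of reading one image back from the LMDB database is shown below; the key layout used here is an assumption made for this example, and the authoritative format is whatever prepare_data writes and MultiResolutionDataset reads.

import io
import lmdb
from PIL import Image

# Open the database produced by prepare_data in read-only mode.
env = lmdb.open("./mdb/color_symbol_7k_mdb", readonly=True, lock=False)
with env.begin(write=False) as txn:
    # Assumed key format "<resolution>-<zero-padded index>"; adjust to match dataset.py.
    key = "64-{:05d}".format(0).encode("utf-8")
    img_bytes = txn.get(key)
    if img_bytes is not None:
        img = Image.open(io.BytesIO(img_bytes))
        print(img.size)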

Environment and execution instructions

Project dependencies include jittor, lmdb, PIL, argparse, tqdm, and some other common Python libraries.

First, unzip the dataset in the data folder. The model can then be trained by running the following in a terminal in the project environment: python train.py --mixing "./mdb/color_symbol_7k_mdb"

Images can be generated from the trained model, and their differences compared, with the script python generate.py --size 64 --n_row 3 --n_col 5 --path './checkpoint/040000.model'

You can adjust the model training parameters by referring to the code in the args section of train.py and generate.py.

Details

The first step is dataset preparation, using an LMDB database to accelerate training. The model construction follows the generator structure from the original paper shown in the first figure below, together with the approach of the open-source PyTorch implementation. Following the module breakdown shown in the second figure, the model is split into sub-modules such as EqualConv2d, EqualLinear, StyleConvBlock, and ConvBlock, which are then assembled into the complete StyleGenerator and Discriminator.

[Figure: StyleGAN generator architecture from the original paper]

[Figure: module breakdown used for this reimplementation]
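To illustrate what one of these sub-modules looks like in Jittor, below is a minimal sketch of an equalized-learning-rate linear layer in the spirit of EqualLinear (a simplified stand-in; the class in model.py may differ): the weight keeps its unit-normal initialization and is rescaled by sqrt(2 / fan_in) on every forward pass, so all layers see a comparable effective learning rate.

import math
import jittor as jt
from jittor import nn

class EqualLinearSketch(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # Jittor collects jt.Var attributes of a Module as trainable parameters.
        self.weight = jt.randn(in_dim, out_dim)
        self.bias = jt.zeros(out_dim)
        self.scale = math.sqrt(2.0 / in_dim)

    def execute(self, x):  # Jittor modules define execute() instead of forward()
        # Rescale the weight at run time rather than baking the scale into the init.
        return jt.matmul(x, self.weight * self.scale) + self.bias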

For model building and training, I followed the tutorial provided by the teaching assistant on the official website to convert torch methods to their Jittor equivalents, and worked out some other conversions myself. Jittor's documentation is relatively incomplete and some methods differ from PyTorch; in those cases I use lower-level operations to implement the same behavior.

For example, the discriminator uses torch.sqrt(out.var(0, unbiased=False) + 1e-8) in the PyTorch version to compute the standard deviation of the tensor along the batch dimension, but the Jittor framework has no corresponding var() method, so I use ((out - out.mean(0)).sqr().mean(0) + 1e-8).sqrt() to implement the same function.
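A quick way to check the equivalence (an illustrative snippet, using only methods that appear above): the biased variance along dimension 0 is simply the mean of the squared deviations from the batch mean, so taking the mean of the squared deviations reproduces var(0, unbiased=False).

import jittor as jt

out = jt.randn(8, 512)  # e.g. a batch of discriminator features
# mean of squared deviations == biased variance along dim 0
std = ((out - out.mean(0)).sqr().mean(0) + 1e-8).sqrt()
print(std.shape)  # [512]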

Results

Limited by hardware, training takes a long time, and I did not have enough time to fine-tune the optimizer and the various hyperparameters, so the results obtained with Jittor are not as good as those obtained by training the same model framework with PyTorch. Still, the progressive training process can be seen clearly in the videos: the generated symbols become sharper and the results gradually improve.

The figures below are sample results obtained by training with Jittor and with PyTorch, respectively. For details, please refer to the video files in the folder. The training results of the same model and code under PyTorch can be found in the sample_torch folder.

[Figure: samples generated with Jittor] [Figure: samples generated with PyTorch]

To be continued
