[SIGGRAPH Asia 2019] Artistic Glyph Image Synthesis via One-Stage Few-Shot Learning

Overview

AGIS-Net

Introduction

This is the official PyTorch implementation of the paper Artistic Glyph Image Synthesis via One-Stage Few-Shot Learning.

paper | supplementary material

Abstract

Automatic generation of artistic glyph images is a challenging task that has attracted a great deal of research interest. Previous methods are either specifically designed for shape synthesis or focus on texture transfer. In this paper, we propose a novel model, AGIS-Net, to transfer both shape and texture styles in one stage with only a few stylized samples. To achieve this goal, we first disentangle the representations for content and style by using two encoders, enabling multi-content and multi-style generation. Then we utilize two collaboratively working decoders to generate the glyph shape image and its texture image simultaneously. In addition, we introduce a local texture refinement loss to further improve the quality of the synthesized textures. In this manner, our one-stage model is much more efficient and effective than other multi-stage stacked methods. We also propose a large-scale dataset with Chinese glyph images in various shape and texture styles, rendered from 35 professionally designed artistic fonts with 7,326 characters and 2,460 synthetic artistic fonts with 639 characters, to validate the effectiveness and extendability of our method. Extensive experiments on both English and Chinese artistic glyph image datasets demonstrate the superiority of our model over other state-of-the-art methods in generating high-quality stylized glyph images.
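
To make the one-stage design concrete, the snippet below is a minimal, illustrative PyTorch sketch of the idea described above: a content encoder and a style encoder feed two collaborating decoders that output the glyph shape image and the textured image in a single forward pass. All module names, channel sizes, and the simple concatenation-based fusion are assumptions for illustration, not the actual AGIS-Net implementation (see the code in this repo for that).

# Illustrative sketch of a one-stage, two-encoder / two-decoder generator.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, down=True):
    # Stride-2 convolution when encoding; nearest-neighbor upsample + conv when decoding.
    layers = [] if down else [nn.Upsample(scale_factor=2)]
    layers += [nn.Conv2d(in_ch, out_ch, 3, stride=2 if down else 1, padding=1),
               nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)


class AGISNetSketch(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        self.content_enc = nn.Sequential(conv_block(3, feat), conv_block(feat, feat * 2))
        self.style_enc = nn.Sequential(conv_block(3, feat), conv_block(feat, feat * 2))
        # The shape decoder sees fused content + style features; the texture decoder
        # additionally reuses the predicted shape (the "collaborative" part).
        self.shape_dec = nn.Sequential(conv_block(feat * 4, feat, down=False),
                                       conv_block(feat, feat, down=False),
                                       nn.Conv2d(feat, 1, 3, padding=1), nn.Tanh())
        self.texture_dec = nn.Sequential(conv_block(feat * 4 + 1, feat, down=False),
                                         conv_block(feat, feat, down=False),
                                         nn.Conv2d(feat, 3, 3, padding=1), nn.Tanh())

    def forward(self, content_img, style_img):
        c = self.content_enc(content_img)   # content (glyph) features
        s = self.style_enc(style_img)       # style (shape + texture) features
        fused = torch.cat([c, s], dim=1)
        shape = self.shape_dec(fused)       # grayscale glyph shape image
        # Feed the predicted shape back in so the texture follows the glyph contours.
        shape_small = nn.functional.interpolate(shape, size=fused.shape[2:])
        texture = self.texture_dec(torch.cat([fused, shape_small], dim=1))
        return shape, texture


if __name__ == "__main__":
    net = AGISNetSketch()
    shape, texture = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
    print(shape.shape, texture.shape)  # (1, 1, 64, 64) and (1, 3, 64, 64)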

Model Architecture

(AGIS-Net architecture diagram)

Skip Connection Local Discriminator
(Skip-connection and local-discriminator diagram)
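
The local discriminator shown above is tied to the local texture refinement loss mentioned in the abstract: the synthesized texture is judged not only as a whole image but also on small patches. The snippet below is an illustrative sketch of that idea only; the patch size, sampling strategy, discriminator layout, and hinge loss are assumptions, not the authors' actual component.

# Illustrative sketch of a patch-level critic for local texture refinement.
import torch
import torch.nn as nn


class PatchDiscriminatorSketch(nn.Module):
    """A tiny PatchGAN-style critic that scores small texture patches."""

    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat * 2, 1, 3, padding=1),  # per-location real/fake score map
        )

    def forward(self, patch):
        return self.net(patch)


def random_patches(img, patch_size=32, n=4):
    # Sample n random square crops from a batch of images (B, C, H, W).
    _, _, h, w = img.shape
    crops = []
    for _ in range(n):
        top = torch.randint(0, h - patch_size + 1, (1,)).item()
        left = torch.randint(0, w - patch_size + 1, (1,)).item()
        crops.append(img[:, :, top:top + patch_size, left:left + patch_size])
    return torch.cat(crops, dim=0)  # (B * n, C, patch_size, patch_size)


if __name__ == "__main__":
    critic = PatchDiscriminatorSketch()
    fake_texture = torch.rand(2, 3, 64, 64)   # stand-in for a generated texture image
    real_texture = torch.rand(2, 3, 64, 64)   # stand-in for a few-shot style sample
    fake_score = critic(random_patches(fake_texture))
    real_score = critic(random_patches(real_texture))
    # One possible hinge-style local adversarial term for the discriminator.
    d_loss = torch.relu(1 - real_score).mean() + torch.relu(1 + fake_score).mean()
    print(d_loss.item())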

Some Results

(Comparison results)

(More comparison results)

(Results across languages)

Prerequisites

  • Linux
  • CPU or NVIDIA GPU + CUDA cuDNN
  • Python 3
  • PyTorch 0.4.0+
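
A quick way to verify these prerequisites once PyTorch is installed is the short check below (illustrative only):

# Prints the PyTorch version and whether a CUDA-capable GPU is visible.
import sys
import torch

assert sys.version_info[0] >= 3, "Python 3 is required"
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))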

Get Started

Installation

  1. Install PyTorch, torchvision, and dependencies from https://pytorch.org
  2. Install the Python libraries visdom and dominate:
    pip install visdom
    pip install dominate
  3. Clone this repo:
    git clone -b master --single-branch https://github.com/hologerry/AGIS-Net
    cd AGIS-Net
  4. Download the official pre-trained VGG-19 model vgg19-dcbb9e9d.pth and put it under the models/ folder.
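
Since vgg19-dcbb9e9d.pth is the standard torchvision VGG-19 weight file, a quick sanity check that it is in place and loadable could look like the snippet below (purely illustrative; the training scripts load it themselves):

# Optional check that the downloaded VGG-19 checkpoint loads correctly.
import torch
import torchvision.models as models

state_dict = torch.load("models/vgg19-dcbb9e9d.pth", map_location="cpu")
vgg = models.vgg19()             # torchvision's VGG-19 definition matches this checkpoint
vgg.load_state_dict(state_dict)
print("Loaded VGG-19 with", sum(p.numel() for p in vgg.parameters()), "parameters")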

Datasets

The original datasets server is down; you can download the datasets from PKU Disk, Dropbox, or MEGA instead. Download them with the script below. Four datasets and the raw average-font-style glyph images are available.

It may take a while, so please be patient.

bash ./datasets/download_dataset.sh DATASET_NAME
  • base_gray_color English synthesized gradient glyph image dataset, proposed by MC-GAN.
  • base_gray_texture English artistic glyph image dataset, proposed by MC-GAN.
  • skeleton_gray_color Chinese synthesized gradient glyph image dataset, proposed by us.
  • skeleton_gray_texture Chinese artistic glyph image dataset, proposed by us.
  • average_skeleton Raw Chinese average-font-style (skeleton) glyph image dataset, proposed by us.

Please refer to the data folder for more details about our datasets and how to prepare your own.

Model Training

  • To train a model, first download the training images (e.g., for English artistic glyph transfer):

    bash ./datasets/download_dataset.sh base_gray_color
    bash ./datasets/download_dataset.sh base_gray_texture
  • Train a model:

    1. Start the Visdom Visualizer

      python -m visdom.server -port PORT

      PORT is specified in train.sh

    2. Pretrain on synthesized gradient glyph image dataset

      bash ./scripts/train.sh base_gray_color GPU_ID

      GPU_ID indicates which GPU to use.

    3. Fine-tune on the artistic glyph image dataset

      bash ./scripts/train.sh base_gray_texture GPU_ID DATA_ID FEW_SIZE

      DATA_ID indicates which artistic font is fine-tuned.
      FEW_SIZE indicates the size of the few-shot set.

      The first run will raise an error like:

      FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/base_gray_texture/base_gray_texture_DATA_ID_TIME/latest_net_G.pth'

      Copy the pretrained model to the path above:

      cp checkpoints/base_gray_color/base_gray_color_TIME/latest_net_* checkpoints/base_gray_texture/base_gray_texture_DATA_ID_TIME/

      Then start the training again; it should run without errors.
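
      If a POSIX shell is not available, a small Python helper can do the same copy; the snippet below is an illustrative alternative to the cp command above (adjust the TIME and DATA_ID parts to your actual run folders):

      # Copies the pretrained network weights into the fine-tuning checkpoint
      # folder, mirroring the cp command above.
      import glob
      import os
      import shutil

      src_dir = "checkpoints/base_gray_color/base_gray_color_TIME"               # pretraining run
      dst_dir = "checkpoints/base_gray_texture/base_gray_texture_DATA_ID_TIME"   # fine-tuning run

      os.makedirs(dst_dir, exist_ok=True)
      for path in glob.glob(os.path.join(src_dir, "latest_net_*")):
          shutil.copy(path, dst_dir)
          print("copied", os.path.basename(path))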

Model Testing

  • To test a model, copy the trained model from the checkpoints folder to the pretrained_models folder (e.g., for English artistic glyph transfer):

    cp checkpoints/base_gray_texture/base_gray_texture_DATA_ID_TIME/latest_net_* pretrained_models/base_gray_texture_DATA_ID/
  • Test a model

    bash ./scripts/test_base_gray_texture.sh GPU_ID DATA_ID

Acknowledgements

This code is inspired by BicycleGAN.

Special thanks to the works referenced above for sharing their code and datasets.

Citation

If you find our work helpful, please cite our paper:

@article{Gao2019Artistic,
  author = {Gao, Yue and Guo, Yuan and Lian, Zhouhui and Tang, Yingmin and Xiao, Jianguo},
  title = {Artistic Glyph Image Synthesis via One-Stage Few-Shot Learning},
  journal = {ACM Trans. Graph.},
  issue_date = {November 2019},
  volume = {38},
  number = {6},
  year = {2019},
  articleno = {185},
  numpages = {12},
  url = {http://doi.acm.org/10.1145/3355089.3356574},
  publisher = {ACM}
} 

Copyright

The code and dataset are only allowed for PERSONAL and ACADEMIC usage.
