# Shufflenet-v2-Pytorch

A PyTorch implementation with shared pretrained models: x0.5 (Top-1: 60.646) and x1.0 (Top-1: 69.402).

## Introduction

This is a PyTorch implementation of Face++'s ShuffleNet-v2. For details, please read the following paper:

ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design

## Pretrained Models on ImageNet

We provide pretrained ShuffleNet-v2 models on ImageNet, which achieve slightly better accuracy than the results reported in the original paper. The top-1/top-5 accuracy rates below use a single center crop (crop size: 224x224, image size: 256xN):

| Network | Top-1 | Top-5 | Top-1 (reported in the paper) |
| --- | --- | --- | --- |
| ShuffleNet-v2-x0.5 | 60.646 | 81.696 | 60.300 |
| ShuffleNet-v2-x1 | 69.402 | 88.374 | 69.400 |

## Evaluate Models

    python eval.py -a shufflenetv2 --width_mult=0.5 --evaluate=./shufflenetv2_x0.5_60.646_81.696.pth.tar ./ILSVRC2012/
    python eval.py -a shufflenetv2 --width_mult=1.0 --evaluate=./shufflenetv2_x1_69.390_88.412.pth.tar ./ILSVRC2012/

## Version

- Python 2.7
- torch 0.3.1
- torchvision 0.2.1

## Dataset preparation

Refer to https://github.com/facebook/fb.resnet.torch/blob/master/INSTALL.md#download-the-imagenet-dataset
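As a reference for the evaluation protocol above, the sketch below builds the standard single-center-crop ImageNet validation pipeline (resize to 256, center-crop 224x224) and computes top-1/top-5 accuracy. It is written against a modern torch/torchvision API rather than the torch 0.3.1 listed above, and the normalization constants, loader settings, and helper names (`build_val_loader`, `top1_top5`) are illustrative assumptions, not code taken from this repository's eval.py.

```python
# Minimal sketch of the single-center-crop evaluation protocol described above
# (resize shorter side to 256, center-crop 224x224, report top-1/top-5).
# Written against a modern torch/torchvision API, not the torch 0.3.1 above;
# normalization constants and loader settings are assumed, not taken from eval.py.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


def build_val_loader(imagenet_val_dir, batch_size=64):
    """Standard ImageNet validation pipeline: 256 resize, 224 center crop."""
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        # Usual ImageNet channel statistics (assumption; check eval.py for the
        # exact values used with these checkpoints).
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    dataset = datasets.ImageFolder(imagenet_val_dir, preprocess)
    return DataLoader(dataset, batch_size=batch_size, shuffle=False, num_workers=4)


@torch.no_grad()
def top1_top5(model, loader, device=None):
    """Return (top-1, top-5) accuracy in percent over the given loader."""
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    model.eval().to(device)
    top1 = top5 = total = 0
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        # Indices of the 5 highest logits per image, shape (batch, 5).
        _, pred = model(images).topk(5, dim=1)
        correct = pred.eq(targets.unsqueeze(1))
        top1 += correct[:, 0].sum().item()
        top5 += correct.any(dim=1).sum().item()
        total += targets.size(0)
    return 100.0 * top1 / total, 100.0 * top5 / total
```

With a ShuffleNet-v2 model whose weights have been loaded from one of the .pth.tar checkpoints above (the exact state-dict layout depends on how eval.py saved them), `top1_top5(model, build_val_loader('./ILSVRC2012/val'))` should roughly reproduce the table.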
## Other projects
- Multi-View Consistent Generative Adversarial Networks for 3D-aware Image Synthesis (CVPR 2022).
- LSSY Quantitative Trading System: a system the author gradually developed over three years of quantitative trading research. It evolved from early work through incremental revisions and is intended for learning, research, and backtest analysis only; the live-trading component has not been released.
- Robust Contrastive Learning against Noisy Views: a PyTorch implementation of the Robust InfoNCE loss proposed in the paper.
- WIMP (What If Motion Predictor): reference PyTorch implementation of "What-If Motion Prediction for Autonomous Driving" (arXiv).
- ICLR 2022 Computational Geometry & Topology Challenge: the official challenge repository.
- YOLOv5 on TorchServe (GPU compatible): a Dockerfile that runs TorchServe with a YOLOv5 object detection model on GPU, using static batch inference for production-ready serving.
- ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction (NeurIPS 2021).
- text_recognition_toolbox: uniform PyTorch reimplementations of six classical scene text recognition papers (starting with CRNN); the project is continuously updated, and questions and code contributions are welcome.
- Neural Attention Distillation: a PyTorch implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks".
- Tokyo2020-Pictogram-using-MediaPipe: a demo that performs pose estimation with MediaPipe and displays Tokyo 2020 Olympics-style pictograms; requires mediapipe 0.8.6 or later.
- Pseudo-Visual Speech Denoising: code for the WACV 2021 paper "Visual Speech Enhancement Without A Real Visual Stream".
- Score refinement for confidence-based 3D multi-object tracking: official code for the paper, with a short video explaining the method.
- Towhee: a flexible machine learning framework currently focused on computing deep learning embeddings over unstructured data.
- WebUAV-3M: A Benchmark Unveiling the Power of Million-Scale Deep UAV Tracking.
- LightSeq: a high-performance training and inference library for sequence processing and generation, implemented in CUDA.
- Evaluating saliency methods on artificial data with different background types: code for the MedNeurIPS 2021 submission.
- PatrickStar: parallel training of large language models via chunk-based memory management, enabling larger, faster, greener pretrained models for NLP.
- Learning Graph Cellular Automata: code for the experiments in the NeurIPS 2021 paper "Learning Graph Cellular Automata".
- STRIDE (spectrahedral proximal gradient descent along vertices): a solver for large-scale rank-one semidefinite relaxations.
- Inferring Lexicographically-Ordered Rewards from Preferences: code by Alihan Hüyük.