Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

Overview

Real-ESRGAN

Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

Ported from https://github.com/xinntao/Real-ESRGAN

Dependencies

  • NumPy
  • PyTorch, preferably with CUDA. Note that torchvision and torchaudio are not required and can be omitted from the PyTorch installation command.
  • VapourSynth

Installation

pip install --upgrade vsrealesrgan
python -m vsrealesrgan

Usage

from vsrealesrgan import RealESRGAN

ret = RealESRGAN(clip)

See __init__.py for the description of the parameters.
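
The clip passed to RealESRGAN must be an RGB clip (the issue scripts below use RGBS or RGBH), so a typical script converts from YUV before the call and back afterwards. A minimal sketch, assuming an ffms2 source, BT.601 ("470bg") material and default parameters; the file name, matrix and output format are placeholders to adapt:

import vapoursynth as vs
from vsrealesrgan import RealESRGAN

core = vs.core

# Hypothetical source clip; any source filter that returns a YUV clip works.
clip = core.ffms2.Source(source="input.mkv")

# Convert to 32-bit float RGB before calling RealESRGAN.
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="470bg")

# Upscale; device_index selects the GPU (see __init__.py for all parameters).
clip = RealESRGAN(clip, device_index=0)

# Convert back to a YUV format for previewing or encoding.
clip = core.resize.Bicubic(clip, format=vs.YUV420P10, matrix_s="470bg")

clip.set_output()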

Comments
  • Installing on portable vapoursynth?

    Installing on portable vapoursynth?

    I'm getting this error:

    python -m pip install --upgrade vsrealesrgan
    Collecting vsrealesrgan
      Using cached vsrealesrgan-3.1.0-py3-none-any.whl (7.4 kB)
    Collecting tqdm
      Using cached tqdm-4.64.0-py2.py3-none-any.whl (78 kB)
    Requirement already satisfied: numpy in d:\vapoursynth\lib\site-packages (from vsrealesrgan) (1.22.3)
    Collecting VapourSynth>=55
      Using cached VapourSynth-58.zip (558 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [15 lines of output]
          Traceback (most recent call last):
            File "C:\Users\*\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 64, in <module>
              dll_path = query(winreg.HKEY_LOCAL_MACHINE, REGISTRY_PATH, REGISTRY_KEY)
            File "C:\Users\*\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 38, in query
              reg_key = winreg.OpenKey(hkey, path, 0, winreg.KEY_READ)
          FileNotFoundError: [WinError 2] The system cannot find the file specified
    
          During handling of the above exception, another exception occurred:
    
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "C:\Users\**\AppData\Local\Temp\pip-install-2415kpn4\vapoursynth_712c69d39f4a4718a3f6b523a85b39eb\setup.py", line 67, in <module>
              raise OSError("Couldn't detect vapoursynth installation path")
          OSError: Couldn't detect vapoursynth installation path
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.

    opened by manus693 8
  • 'vapoursynth.VideoFrame' object is not subscriptable

    'vapoursynth.VideoFrame' object is not subscriptable

    Error on frame 15 request: 'vapoursynth.VideoFrame' object is not subscriptable

    py3.6.4
    vs.core.version:
    VapourSynth Video Processing Library
    Copyright (c) 2012-2018 Fredrik Mellbin
    Core R44
    API R3.5
    Options: -
    torch.__version__: 1.10.0+cu111

    vpy:

    import vapoursynth as vs
    import sys
    sys.path.append("C:\C\Transcoding\VapourSynth\core64\plugins\Scripts")
    import mvsfunc as mvf
    sys.path.append(r"C:\Users\liujing\AppData\Local\Programs\Python\Python36\Lib\site-packages\vsrealesrgan")
    from vsrealesrgan import RealESRGAN

    core = vs.get_core(accept_lowercase=True)
    source = core.ffms2.Source(sourcename)
    source = mvf.ToRGB(source, depth=32)
    source = RealESRGAN(source)
    source = mvf.ToYUV(source, depth=16)
    source.set_output()

    opened by splinter21 4
  • TensorRT "Ran out of input"?

    TensorRT "Ran out of input"?

    Using:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    import site
    import os
    # Adding torch dependencies to PATH
    path = site.getsitepackages()[0]+'/torch_dependencies/'
    path = path.replace('\\', '/')
    os.environ["PATH"] = path + os.pathsep + os.environ["PATH"]
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
    # source: 'G:\TestClips&Co\files\test.avi'
    # current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
    # Loading G:\TestClips&Co\files\test.avi using LWLibavSource
    clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/files/test.avi", format="YUV420P8", stream_index=0, cache=0, prefer_hw=0)
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 25
    clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
    clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0)
    original = clip
    from vsrealesrgan import RealESRGAN
    # adjusting color space from YUV420P8 to RGBH for VsRealESRGAN
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="470bg", range_s="limited")
    # resizing using RealESRGAN
    clip = RealESRGAN(clip=clip, device_index=0, trt=True, trt_cache_path="G:/Temp", num_streams=4) # 2560x1408
    # resizing 2560x1408 to 640x352
    # adjusting resizing
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="limited")
    clip = core.fmtc.resample(clip=clip, w=640, h=352, kernel="lanczos", interlaced=False, interlacedd=False)
    original = core.resize.Bicubic(clip=original, width=640, height=352)
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
    original = core.text.Text(clip=original,text="Original",scale=1,alignment=7)
    clip = core.text.Text(clip=clip,text="Filtered",scale=1,alignment=7)
    stacked = core.std.StackHorizontal([original,clip])
    # Output
    stacked.set_output()
    

    I get

    Failed to evaluate the script: Python exception: Ran out of input

    Traceback (most recent call last):
    File "src\cython\vapoursynth.pyx", line 2866, in vapoursynth._vpy_evaluate
    File "src\cython\vapoursynth.pyx", line 2867, in vapoursynth._vpy_evaluate
    File "C:\Users\Selur\Desktop\test_2.vpy", line 32, in 
    clip = RealESRGAN(clip=clip, device_index=0, trt=True, trt_cache_path="G:/Temp", num_streams=4) # 2560x1408
    File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
    File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsrealesrgan\__init__.py", line 284, in RealESRGAN
    module = [torch.load(trt_engine_path) for _ in range(num_streams)]
    File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsrealesrgan\__init__.py", line 284, in 
    module = [torch.load(trt_engine_path) for _ in range(num_streams)]
    File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\serialization.py", line 795, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
    File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\serialization.py", line 1002, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
    EOFError: Ran out of input
    

    Works fine with trt=False.

    Any idea what is going wrong there? (See the cache-cleanup sketch after this comment list.)

    opened by Selur 3
  • [REQ] SwinIR port

    [REQ] SwinIR port

    opened by forart 1
  • Vapoursynth R58 support

    Vapoursynth R58 support

    When trying to install vs-realesrgan in Vapoursynth R58 I get:

    I:\Hybrid\64bit\Vapoursynth>python -m pip install --upgrade vsrealesrgan
    Collecting vsrealesrgan
      Using cached vsrealesrgan-2.0.0-py3-none-any.whl (12 kB)
    Collecting VapourSynth>=55
      Using cached VapourSynth-57.zip (567 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [15 lines of output]
          Traceback (most recent call last):
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-7_na63f8\vapoursynth_4864864388024a95a1e8b4adda80b293\setup.py", line 64, in <module>
              dll_path = query(winreg.HKEY_LOCAL_MACHINE, REGISTRY_PATH, REGISTRY_KEY)
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-7_na63f8\vapoursynth_4864864388024a95a1e8b4adda80b293\setup.py", line 38, in query
              reg_key = winreg.OpenKey(hkey, path, 0, winreg.KEY_READ)
          FileNotFoundError: [WinError 2] Das System kann die angegebene Datei nicht finden
    
          During handling of the above exception, another exception occurred:
    
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-7_na63f8\vapoursynth_4864864388024a95a1e8b4adda80b293\setup.py", line 67, in <module>
              raise OSError("Couldn't detect vapoursynth installation path")
          OSError: Couldn't detect vapoursynth installation path
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.
    

    any idea how to fix this?

    opened by Selur 0
  • 'vapoursynth.VideoFrame' object has no attribute 'get_read_array'

    'vapoursynth.VideoFrame' object has no attribute 'get_read_array'

    I have been trying to use this plugin; however, I get the error below when trying to preview the video in VapourSynth Editor r19-mod-2-x86_64.

    Error on frame 0 request: 'vapoursynth.VideoFrame' object has no attribute 'get_read_array'

    The code that produces this error is below:

    from vapoursynth import core
    from vsrealesrgan import RealESRGAN
    import havsfunc as haf
    import vapoursynth as vs
    video = core.ffms2.Source(source='EDIT.mkv')
    video = haf.QTGMC(video, Preset="slow", MatchPreset="slow", MatchPreset2="slow", SourceMatch=3, TFF=True)
    video = core.std.SelectEvery(clip=video, cycle=2, offsets=0)
    video = core.std.Crop(clip=video, left=8, right=8, top=0, bottom=0)
    video = core.resize.Spline36(clip=video, width=640, height=480)
    video = core.resize.Bicubic(clip=video, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    video = RealESRGAN(clip=video, device_index=0)
    video = core.resize.Bicubic(clip=video, format=vs.YUV420P10, matrix_s="470bg", range_s="limited")
    video = core.resize.Spline36(clip=video, width=1440, height=1080)
    video = core.std.AssumeFPS(clip=video, fpsnum=30000, fpsden=1001)
    video.set_output()
    
    opened by silentsudin 0
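
Regarding the TensorRT "Ran out of input" comment above: torch.load() raises EOFError: Ran out of input when the file it opens is empty, so the traceback points to a zero-byte engine file left in trt_cache_path (for example by an interrupted first run that never finished building the engine). A minimal, hypothetical cleanup sketch; the directory matches the trt_cache_path used in that script, and the assumption that the stale file is simply empty is not confirmed in the thread:

import os

cache_dir = "G:/Temp"  # trt_cache_path from the script in the comment above

# Remove zero-byte files so vsrealesrgan rebuilds the TensorRT engine on the next run.
for name in os.listdir(cache_dir):
    path = os.path.join(cache_dir, name)
    if os.path.isfile(path) and os.path.getsize(path) == 0:
        print("removing empty cache file:", path)
        os.remove(path)

After clearing the stale file, re-running with trt=True should trigger a fresh engine build, which can take a while on the first run.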
Releases (v4.0.1)

Owner

Holy Wu