PyMatting: A Python Library for Alpha Matting

Overview

We introduce the PyMatting package for Python which implements various methods to solve the alpha matting problem.

Given an input image and a hand-drawn trimap (top row), alpha matting estimates the alpha channel of a foreground object which can then be composed onto a different background (bottom row).
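The bottom-row composition follows the standard compositing equation I = alpha * F + (1 - alpha) * B. A minimal NumPy sketch with toy values (illustration only, not part of the PyMatting API):

```python
import numpy as np

# Compositing equation behind alpha matting: I = alpha * F + (1 - alpha) * B.
# Toy 1x2-pixel RGB example; in practice, alpha comes from the matting step
# and F from foreground estimation.
alpha = np.array([[1.0, 0.25]])[:, :, np.newaxis]            # per-pixel opacity
foreground = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])  # red object
background = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])  # blue backdrop

composite = alpha * foreground + (1 - alpha) * background
print(composite)
```

The fully opaque pixel keeps the foreground color, while the 25%-opaque pixel blends a quarter of the foreground with three quarters of the new background.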

PyMatting provides:

  • Alpha matting implementations for:
    • Closed Form Alpha Matting [1]
    • Large Kernel Matting [2]
    • KNN Matting [3]
    • Learning Based Digital Matting [4]
    • Random Walk Matting [5]
  • Foreground estimation implementations for:
    • Closed Form Foreground Estimation [1]
    • Fast Multi-Level Foreground Estimation (CPU, CUDA and OpenCL) [6]
  • Fast multithreaded KNN search
  • Preconditioners to accelerate the convergence rate of conjugate gradient descent:
    • The incomplete thresholded Cholesky decomposition (Incomplete is part of the name. The implementation is quite complete.)
    • The V-Cycle Geometric Multigrid preconditioner
  • Readable code leveraging NumPy, SciPy and Numba
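To illustrate the preconditioning point above, here is a conceptual sketch in plain SciPy (not the PyMatting API); a simple Jacobi preconditioner stands in for the incomplete Cholesky and multigrid preconditioners that PyMatting provides:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# A preconditioner M approximating A^-1 lowers the effective condition
# number, so conjugate gradients converges in fewer iterations.
n = 200
diag = np.linspace(2.0, 1000.0, n)  # widely varying diagonal
off = -np.ones(n - 1)
A = sp.diags([off, diag, off], [-1, 0, 1], format="csr")  # SPD test system
b = np.ones(n)

# Jacobi (diagonal) preconditioner: M x = D^-1 x
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

iterations = {}
def counter(key):
    iterations[key] = 0
    def callback(xk):
        iterations[key] += 1
    return callback

x_plain, info_plain = cg(A, b, callback=counter("plain"))
x_prec, info_prec = cg(A, b, M=M, callback=counter("preconditioned"))
print(iterations)
```

On this badly scaled system the preconditioned solve needs far fewer iterations than the plain one; PyMatting's incomplete Cholesky and multigrid preconditioners play the same role for the much larger matting systems.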

Getting Started

Requirements

Minimal requirements

  • numpy>=1.16.0
  • pillow>=5.2.0
  • numba>=0.47.0
  • scipy>=1.1.0

Additional requirements for GPU support

  • cupy-cuda90>=6.5.0 or similar
  • pyopencl>=2019.1.2

Requirements to run the tests

  • pytest>=5.3.4

Installation with PyPI

pip3 install pymatting

Installation from Source

git clone https://github.com/pymatting/pymatting
cd pymatting
pip3 install .

Example

from pymatting import cutout

cutout(
    # input image path
    "data/lemur/lemur.png",
    # input trimap path
    "data/lemur/lemur_trimap.png",
    # output cutout path
    "lemur_cutout.png")

More advanced examples

Trimap Construction

All implemented methods rely on trimaps, which roughly classify the image into foreground, background and unknown regions. Trimaps are expected to be numpy.ndarrays of type np.float64 with the same shape as the input image, but with only one color channel. Trimap values of 0.0 denote pixels which are 100% background. Similarly, trimap values of 1.0 denote pixels which are 100% foreground. All other values indicate unknown pixels, which will be estimated by the algorithm.
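For illustration, a trimap can also be constructed programmatically from a rough grayscale mask. The following sketch uses plain NumPy; trimap_from_grayscale is a hypothetical helper (not part of PyMatting) and the 0.1/0.9 thresholds are arbitrary:

```python
import numpy as np

# Hypothetical helper: build a trimap from a grayscale mask (values in [0, 1])
# by thresholding. Pixels near 0 become background, pixels near 1 become
# foreground, everything else is marked unknown.
def trimap_from_grayscale(gray):
    trimap = np.full(gray.shape, 0.5, dtype=np.float64)  # default: unknown
    trimap[gray < 0.1] = 0.0  # confident background
    trimap[gray > 0.9] = 1.0  # confident foreground
    return trimap

gray = np.array([[0.0, 0.5, 1.0],
                 [0.05, 0.3, 0.95]])
trimap = trimap_from_grayscale(gray)
print(trimap)
```

In practice, hand-drawn trimaps loaded from image files serve the same purpose; the important part is the np.float64 dtype and the 0.0/1.0/other-value convention described above.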

Testing

Run the tests from the main directory:

 python3 tests/download_images.py
 pip3 install -r requirements_tests.txt
 pytest

Currently 89% of the code is covered by tests.

Upgrade

pip3 install --upgrade pymatting
python3 -c "import pymatting"

The last line is necessary to rebuild the ahead-of-time compiled module. Without it, the module will be rebuilt on first import, but the old module will already be loaded at that point, which might cause compatibility issues. Simply re-running the code should usually fix it.

Bug Reports, Questions and Pull-Requests

Please see our community guidelines.

Authors

  • Thomas Germer
  • Tobias Uelwer
  • Stefan Conrad
  • Stefan Harmeling

See also the list of contributors who participated in this project.

License

This project is licensed under the MIT License. See the LICENSE.md file for details.

Citing

If you found PyMatting to be useful for your work, please consider citing our paper:

@article{Germer2020,
  doi = {10.21105/joss.02481},
  url = {https://doi.org/10.21105/joss.02481},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {54},
  pages = {2481},
  author = {Thomas Germer and Tobias Uelwer and Stefan Conrad and Stefan Harmeling},
  title = {PyMatting: A Python Library for Alpha Matting},
  journal = {Journal of Open Source Software}
}

References

[1] Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. IEEE transactions on pattern analysis and machine intelligence, 30(2):228–242, 2007.

[2] Kaiming He, Jian Sun, and Xiaoou Tang. Fast matting using large kernel matting laplacian matrices. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2165–2172. IEEE, 2010.

[3] Qifeng Chen, Dingzeyu Li, and Chi-Keung Tang. Knn matting. IEEE transactions on pattern analysis and machine intelligence, 35(9):2175–2188, 2013.

[4] Yuanjie Zheng and Chandra Kambhamettu. Learning based digital matting. In 2009 IEEE 12th international conference on computer vision, 889–896. IEEE, 2009.

[5] Leo Grady, Thomas Schiwietz, Shmuel Aharon, and Rüdiger Westermann. Random walks for interactive alpha-matting. In Proceedings of VIIP, volume 2005, 423–429. 2005.

[6] Thomas Germer, Tobias Uelwer, Stefan Conrad, and Stefan Harmeling. Fast multi-level foreground estimation. arXiv preprint arXiv:2006.14970, 2020.

Lemur image by Mathias Appel from https://www.flickr.com/photos/mathiasappel/25419442300/ licensed under CC0 1.0 Universal (CC0 1.0) Public Domain License.

Comments
  • [Question❓] All unknown region input

    When I input a trimap that is all unknown region, i.e. no foreground and no background, it raises an error here: https://github.com/pymatting/pymatting/blob/master/pymatting/util/util.py#L491.

    opened by michaelowenliu 11
  • Got Segmentation Fault when calling estimate_alpha_knn

    Got this error both on macOS 10.14 and Ubuntu 16.04

    When installing the package I used the --ignore-installed llvmlite flag for pip, because I got: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. I am not sure if this is relevant.

    pytest
    ============================================================================== test session starts ===============================================================================
    platform darwin -- Python 3.7.4, pytest-5.2.1, py-1.8.0, pluggy-0.13.0
    rootdir: /Users/user/pymatting-master
    plugins: arraydiff-0.3, remotedata-0.3.2, doctestplus-0.4.0, openfiles-0.4.0
    collected 11 items                                                                                                                                                               
    
    tests/test_boxfilter.py .                                                                                                                                                  [  9%]
    tests/test_cg.py .                                                                                                                                                         [ 18%]
    tests/test_estimate_alpha.py F                                                                                                                                             [ 27%]
    tests/test_foreground.py .                                                                                                                                                 [ 36%]
    tests/test_ichol.py .                                                                                                                                                      [ 45%]
    tests/test_kdtree.py Fatal Python error: Segmentation fault
    
    Current thread 0x00000001086d7dc0 (most recent call first):
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pymatting/util/kdtree.py", line 280 in __init__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pymatting/util/kdtree.py", line 347 in knn
      File "/Users/user/pymatting-master/tests/test_kdtree.py", line 20 in run_kdtree
      File "/Users/user/pymatting-master/tests/test_kdtree.py", line 46 in test_kdtree
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/python.py", line 170 in pytest_pyfunc_call
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/python.py", line 1423 in runtest
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 125 in pytest_runtest_call
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 201 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 229 in from_call
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 201 in call_runtest_hook
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 176 in call_and_report
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 95 in runtestprotocol
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/runner.py", line 80 in pytest_runtest_protocol
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/main.py", line 256 in pytest_runtestloop
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/main.py", line 235 in _main
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/main.py", line 191 in wrap_session
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/main.py", line 228 in pytest_cmdline_main
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 86 in <lambda>
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/manager.py", line 92 in _hookexec
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
      File "/Users/user/opt/anaconda3/lib/python3.7/site-packages/_pytest/config/__init__.py", line 90 in main
      File "/Users/user/opt/anaconda3/bin/pytest", line 11 in <module>
    [1]    99661 segmentation fault  pytest
    
    opened by ntuLC 10
  • [Question❓]is there a way to speed it up?

    Hey! This tool gives me very good results, but I measured it, and it took nearly 10 minutes to process 1,000 images, not counting I/O time. 1,000 images generally correspond to a 30-second video. This efficiency is not ideal. Looking forward to a reply.

    opened by JSHZT 6
  • [BUG 🐛] No module named 'pymatting_aot.aot'

    Bug description

    $ python test2.py
    Failed to import ahead-of-time-compiled modules. This is expected on first import.
    Compiling modules and trying again (this might take a minute).
    Traceback (most recent call last):
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 21, in <module>
        import pymatting_aot.aot
    ModuleNotFoundError: No module named 'pymatting_aot.aot'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "test2.py", line 1, in <module>
        from pymatting import cutout
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting/__init__.py", line 2, in <module>
        import pymatting_aot.cc
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 28, in <module>
        compile_modules()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 6, in compile_modules
        cc = CC("aot")
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/cc.py", line 65, in __init__
        self._toolchain = Toolchain()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/platform.py", line 78, in __init__
        self._raise_external_compiler_error()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/platform.py", line 121, in _raise_external_compiler_error
        raise RuntimeError(msg)
    RuntimeError: Attempted to compile AOT function without the compiler used by `numpy.distutils` present. If using conda try:

    #> conda install gcc_linux-64 gxx_linux-64

    To Reproduce

    Installed pymatting with `pip install rembg` on Fedora 33, within a venv for Python 3.8. Created test2.py:

        from pymatting import cutout

        cutout(
            # input image path
            "data/lemur/lemur.png",
            # input trimap path
            "data/lemur/lemur_trimap.png",
            # output cutout path
            "lemur_cutout.png")

    Launched `python test2.py` within the venv.

    Expected behavior

    Runs without errors.

    Library versions:

    python --version --version
    Python 3.8.6 (default, Sep 25 2020, 00:00:00) 
    [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
    
    python -c "import numpy; numpy.show_config()"
    blas_mkl_info:
      NOT AVAILABLE
    blis_info:
      NOT AVAILABLE
    openblas_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    blas_opt_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    lapack_mkl_info:
      NOT AVAILABLE
    openblas_lapack_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    lapack_opt_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    
    python -c "import scipy;scipy.show_config()"
    lapack_mkl_info:
      NOT AVAILABLE
    openblas_lapack_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    lapack_opt_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    blas_mkl_info:
      NOT AVAILABLE
    blis_info:
      NOT AVAILABLE
    openblas_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    blas_opt_info:
        libraries = ['openblas', 'openblas']
        library_dirs = ['/usr/local/lib']
        language = c
        define_macros = [('HAVE_CBLAS', None)]
    
    python -c "import numba;print('Numba version:', numba.__version__)"
    Numba version: 0.51.2
    
    python -c "import PIL;print('PIL version:', PIL.__version__)"
    PIL version: 8.0.1
    
    python -c "from pymatting.__about__ import __version__;print('PyMatting version:', __version__)"
    Failed to import ahead-of-time-compiled modules. This is expected on first import.
    Compiling modules and trying again (this might take a minute).
    Traceback (most recent call last):
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 21, in <module>
        import pymatting_aot.aot
    ModuleNotFoundError: No module named 'pymatting_aot.aot'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting/__init__.py", line 2, in <module>
        import pymatting_aot.cc
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 28, in <module>
        compile_modules()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/pymatting_aot/cc.py", line 6, in compile_modules
        cc = CC("aot")
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/cc.py", line 65, in __init__
        self._toolchain = Toolchain()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/platform.py", line 78, in __init__
        self._raise_external_compiler_error()
      File "/home/sav/pytest/envrembg/lib64/python3.8/site-packages/numba/pycc/platform.py", line 121, in _raise_external_compiler_error
        raise RuntimeError(msg)
    RuntimeError: Attempted to compile AOT function without the compiler used by `numpy.distutils` present. If using conda try:
    
    #> conda install gcc_linux-64 gxx_linux-64
    
    
    opened by vlsav 6
  • [Question❓] Tests for GPU implementation skipped, because of missing packages

    Hi

    I have set up PyMatting in a container environment and executed the tests. Pytest was able to complete, however I got the following warning:

    tests/test_foreground.py::test_foreground
      /pymatting/tests/test_foreground.py:32: UserWarning: Tests for GPU implementation skipped, because of missing packages.
        "Tests for GPU implementation skipped, because of missing packages."

    -- Docs: https://docs.pytest.org/en/stable/warnings.html

    I noticed that a similar issue was reported earlier as well, but couldn't find a conclusion.

    I have got Nvidia GPUs, but somehow they are not being detected. I have individually installed CuPy, PyOpenCL and libcutensor. Some output on the installed CUDA packages:

    [email protected]:/pymatting# dpkg --list | grep cuda
    ii  cuda-command-line-tools-10-2  10.2.89-1                           amd64        CUDA command-line tools
    ii  cuda-compat-10-2              440.95.01-1                         amd64        CUDA Compatibility Platform
    ii  cuda-compiler-10-2            10.2.89-1                           amd64        CUDA compiler
    ii  cuda-cudart-10-2              10.2.89-1                           amd64        CUDA Runtime native Libraries
    ii  cuda-cudart-dev-10-2          10.2.89-1                           amd64        CUDA Runtime native dev links, headers
    ii  cuda-cufft-10-2               10.2.89-1                           amd64        CUFFT native runtime libraries
    ii  cuda-cufft-dev-10-2           10.2.89-1                           amd64        CUFFT native dev links, headers
    ii  cuda-cuobjdump-10-2           10.2.89-1                           amd64        CUDA cuobjdump
    ii  cuda-cupti-10-2               10.2.89-1                           amd64        CUDA profiling tools runtime libs.
    ii  cuda-cupti-dev-10-2           10.2.89-1                           amd64        CUDA profiling tools interface.
    ii  cuda-curand-10-2              10.2.89-1                           amd64        CURAND native runtime libraries
    ii  cuda-curand-dev-10-2          10.2.89-1                           amd64        CURAND native dev links, headers
    ii  cuda-cusolver-10-2            10.2.89-1                           amd64        CUDA solver native runtime libraries
    ii  cuda-cusolver-dev-10-2        10.2.89-1                           amd64        CUDA solver native dev links, headers
    ii  cuda-cusparse-10-2            10.2.89-1                           amd64        CUSPARSE native runtime libraries
    ii  cuda-cusparse-dev-10-2        10.2.89-1                           amd64        CUSPARSE native dev links, headers
    ii  cuda-driver-dev-10-2          10.2.89-1                           amd64        CUDA Driver native dev stub library
    ii  cuda-gdb-10-2                 10.2.89-1                           amd64        CUDA-GDB
    ii  cuda-libraries-10-2           10.2.89-1                           amd64        CUDA Libraries 10.2 meta-package
    ii  cuda-libraries-dev-10-2       10.2.89-1                           amd64        CUDA Libraries 10.2 development meta-package
    ii  cuda-license-10-2             10.2.89-1                           amd64        CUDA licenses
    ii  cuda-memcheck-10-2            10.2.89-1                           amd64        CUDA-MEMCHECK
    ii  cuda-minimal-build-10-2       10.2.89-1                           amd64        Minimal CUDA 10.2 toolkit build packages.
    ii  cuda-misc-headers-10-2        10.2.89-1                           amd64        CUDA miscellaneous headers
    ii  cuda-npp-10-2                 10.2.89-1                           amd64        NPP native runtime libraries
    ii  cuda-npp-dev-10-2             10.2.89-1                           amd64        NPP native dev links, headers
    ii  cuda-nvcc-10-2                10.2.89-1                           amd64        CUDA nvcc
    ii  cuda-nvdisasm-10-2            10.2.89-1                           amd64        CUDA disassembler
    ii  cuda-nvgraph-10-2             10.2.89-1                           amd64        NVGRAPH native runtime libraries
    ii  cuda-nvgraph-dev-10-2         10.2.89-1                           amd64        NVGRAPH native dev links, headers
    ii  cuda-nvjpeg-10-2              10.2.89-1                           amd64        NVJPEG native runtime libraries
    ii  cuda-nvjpeg-dev-10-2          10.2.89-1                           amd64        NVJPEG native dev links, headers
    ii  cuda-nvml-dev-10-2            10.2.89-1                           amd64        NVML native dev links, headers
    ii  cuda-nvprof-10-2              10.2.89-1                           amd64        CUDA Profiler tools
    ii  cuda-nvprune-10-2             10.2.89-1                           amd64        CUDA nvprune
    ii  cuda-nvrtc-10-2               10.2.89-1                           amd64        NVRTC native runtime libraries
    ii  cuda-nvrtc-dev-10-2           10.2.89-1                           amd64        NVRTC native dev links, headers
    ii  cuda-nvtx-10-2                10.2.89-1                           amd64        NVIDIA Tools Extension
    ii  cuda-sanitizer-api-10-2       10.2.89-1                           amd64        CUDA Sanitizer API
    hi  libcudnn7                     7.6.5.32-1+cuda10.2                 amd64        cuDNN runtime libraries
    ii  libcudnn7-dev                 7.6.5.32-1+cuda10.2                 amd64        cuDNN development libraries and headers
    hi  libnccl-dev                   2.7.8-1+cuda10.2                    amd64        NVIDIA Collectives Communication Library (NCCL) Development Files
    hi  libnccl2                      2.7.8-1+cuda10.2                    amd64        NVIDIA Collectives Communication Library (NCCL) Runtime
    

    Could you please advise on what package might be missing? Thank you.

    opened by ghazni123 6
  • pytest Error: tests/test_lkm.py:81: AssertionError

    === warnings summary ===

    tests/test_foreground.py::test_foreground
      /home/ferg/git/pymatting/tests/test_foreground.py:31: UserWarning: Tests for GPU implementation skipped, because of missing packages.

    I'm on Fedora 31, and here are my pip3 package versions, which are above the required dependencies:

    Requirement already satisfied: numpy in /usr/lib64/python3.7/site-packages (1.17.4)
    Requirement already satisfied: pillow in /usr/lib64/python3.7/site-packages (6.1.0)
    Requirement already satisfied: numba in /home/ferg/.local/lib/python3.7/site-packages (0.48.0)
    Requirement already satisfied: scipy in /home/ferg/.local/lib/python3.7/site-packages (1.4.1)

    opened by 3dsf 5
  • Include a MANIFEST.in file

    I'm attempting to get this package into a binary format on conda-forge; would it be possible to include a MANIFEST.in file? Currently some of the required files are not included in the sdist (e.g. requirements.txt).

    https://packaging.python.org/guides/using-manifest-in/

    opened by thewchan 4
  • [Question❓] What exactly is a trimap?

    I was reading about trimaps and found two different definitions:

    • An image consisting of only 3 colors: black, white and a single shade of grey
    • An image consisting of black, white and shades of grey (where all shades of grey correspond to unknown region)

    Which one is correct?

    opened by Nkap23 4
  • @vlsav, thanks for reporting this issue! Have you tried running `conda install gcc_linux-64 gxx_linux-64` (as suggested)?

    @vlsav, thanks for reporting this issue! Have you tried running conda install gcc_linux-64 gxx_linux-64 (as suggested)?

    Originally posted by @tuelwer in https://github.com/pymatting/pymatting/issues/37#issuecomment-731645867

    opened by dreamer121121 4
  • ValueError on import

    Hi, I installed the library, but there is a problem with importing your package.

    I am using Python 3.8.1 with:

    • numpy=1.18.1 (>=1.16.0)
    • pillow=6.2.1 (>=5.2.0)
    • numba=0.47.0 (>=0.44.0)
    • scipy=1.3.3 (>=1.1.0)

    *** ValueError: Failed in nopython mode pipeline (step: convert to parfors)
    Cannot add edge as dest node 26 not in nodes {130, 132, 262, 264, 528, 30, 418, 302, 564, 565, 566, 568, 322, 450, 196, 452, 324, 340, 212, 86, 214, 348, 94, 228, 356, 494, 118, 246, 248, 378, 380}

    (you can read all here: https://gyazo.com/b6b9756f0c8d75a30a63dada09c5f82e)

    Thank you for your work :+1:

    opened by Mathux 4
  • [BUG 🐛] PyMatting crashes when I use it in torch dataloader.

    Bug description

    I used PyMatting in Torch data preprocessing, but the new version of PyMatting does not seem to support multi-threading. However, version 1.0.4 works.

    To Reproduce

    PyMatting 1.1.4, Torch 1.10, 5900X with 3090, CUDA 11.4. torch Dataset/DataLoader with num_workers >= 1.

    opened by Windaway 3
  • Tests require missing images

    Several tests (for example, the one in test_estimate_alpha.py) fail because a required image is missing:

    FAILED tests/test_estimate_alpha.py::test_alpha - FileNotFoundError: [Errno 2] No such file or directory: 'data/input_training_lowres/GT01.png'
    FAILED tests/test_laplacians.py::test_laplacians - FileNotFoundError: [Errno 2] No such file or directory: 'data/input_training_lowres/GT01.png'
    FAILED tests/test_lkm.py::test_lkm - FileNotFoundError: [Errno 2] No such file or directory: 'data/input_training_lowres/GT01.png'
    FAILED tests/test_preconditioners.py::test_preconditioners - FileNotFoundError: [Errno 2] No such file or directory: 'data/input_training_lowres/GT01.png'
    

    Could/should GT01.png be included?

    opened by jangop 4
  • `setup.py` should define actual dependencies

    Currently, the packages specified in requirements.txt are copied into setup.py:

        install_requires=load_text("requirements.txt").strip().split("\n"),
    

    This is bad practice and can cause problems downstream.

    requirements.txt should be used to define a repeatable installation, such as a development environment or a production environment. As such, versions of dependencies contained therein should be as specific as possible.

    install_requires should be used to indicate dependencies necessary to run the package. As such, versions of dependencies contained therein should be as broad as possible.

    Please see “install_requires vs requirements files” on python.org or “requirements.txt vs setup.py” on stackoverflow for more information.

    I'd be happy to contribute a PR with loose dependency specifications in setup.py and concrete specifications in requirements.txt.
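    The proposed split could look roughly like this (a hypothetical sketch, not the project's actual files):

```python
# Hypothetical sketch of the proposed split (not the project's actual files):
# setup.py declares broad runtime requirements, while requirements.txt pins
# exact versions for a repeatable development environment.

# In setup.py -- lower bounds only, as broad as possible:
install_requires = [
    "numpy>=1.16.0",
    "pillow>=5.2.0",
    "numba>=0.47.0",
    "scipy>=1.1.0",
]

# In requirements.txt -- exact pins (versions here are illustrative):
requirements_txt = """\
numpy==1.17.4
pillow==6.1.0
numba==0.48.0
scipy==1.4.1
"""

# setup(..., install_requires=install_requires) would consume the broad
# list, while `pip install -r requirements.txt` uses the exact pins.
print(install_requires)
```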

    opened by jangop 5
  • Make PyMatting available on conda-forge

    opened by tuelwer 4
  • [Question❓] Foreground background estimation for TensorFlow version

    Hi,

    Thank you for your amazing repo. I am trying to convert estimate_fg_bg_numpy.py to TensorFlow, however the inference speed is not satisfactory. On a 1080Ti GPU, the CuPy version costs just 2 ms, while the TensorFlow version costs 20 ms at 144x256 resolution. Do you know how to correctly port the NumPy code to TensorFlow? Thank you very much.

    import numpy as np
    from PIL import Image
    import time
    import tensorflow as tf
    
    
    def inv2(mat):
        a = mat[..., 0, 0]
        b = mat[..., 0, 1]
        c = mat[..., 1, 0]
        d = mat[..., 1, 1]
    
        inv_det = 1 / (a * d - b * c)
    
        inv00 = inv_det * d
        inv01 = inv_det * -b
        inv10 = inv_det * -c
        inv11 = inv_det * a
        inv00 = inv00[:, tf.newaxis, tf.newaxis]
        inv01 = inv01[:, tf.newaxis, tf.newaxis]
        inv10 = inv10[:, tf.newaxis, tf.newaxis]
        inv11 = inv11[:, tf.newaxis, tf.newaxis]
        inv_temp1 = tf.concat([inv00, inv10], axis=1)
        inv_temp2 = tf.concat([inv01, inv11], axis=1)
        inv = tf.concat([inv_temp1, inv_temp2], axis=2)
    
        return inv
    
    
    def pixel_coordinates(w, h, flat=False):
        x, y = tf.meshgrid(np.arange(w), np.arange(h))
    
        if flat:
            x = tf.reshape(x, [-1])
            y = tf.reshape(y, [-1])
    
        return x, y
    
    
    def vec_vec_outer(a, b):
        return tf.einsum("...i,...j", a, b)
    
    def estimate_fb_ml(
            input_image,
            input_alpha,
            min_size=2,
            growth_factor=2,
            regularization=1e-5,
            n_iter_func=2,
            print_info=True,):
    
        h0, w0 = 144, 256
    
        # Find initial image size.
        w = int(np.ceil(min_size * w0 / h0))
        h = min_size
    
        # Generate initial foreground and background from input image
        F = tf.image.resize_nearest_neighbor(input_image[tf.newaxis], [h, w])[0]
        B = F * 1.0
        while True:
            if print_info:
                print("New level of size: %d-by-%d" % (w, h))
            # Resize image and alpha to size of current level
            image = tf.image.resize_nearest_neighbor(input_image[tf.newaxis], [h, w])[0]
            alpha = tf.image.resize_nearest_neighbor(input_alpha[tf.newaxis, :, :, tf.newaxis], [h, w])[0, :, :, 0]
            # Iterate a few times
            n_iter = n_iter_func
            for iteration in range(n_iter):
                x, y = pixel_coordinates(w, h, flat=True) # w: 4, h: 2
                # Make alpha into a vector
                a = tf.reshape(alpha, [-1])
                # Build system of linear equations
                U = tf.stack([a, 1 - a], axis=1)
                A = vec_vec_outer(U, U)  # shape: (w*h, 2, 2)
                b = vec_vec_outer(U, tf.reshape(image, [w*h, 3]))  # shape: (w*h, 2, 3)
                # For each neighbor
                for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    x2 = tf.clip_by_value(x + dx, 0, w - 1)
                    y2 = tf.clip_by_value(y + dy, 0, h - 1)
                    # Vectorized neighbor coordinates
                    j = x2 + y2 * w
                    # Gradient of alpha
                    a_j = tf.nn.embedding_lookup(a, j)
                    da = regularization + tf.abs(a - a_j)
                    # Update matrix of linear equation system
                    A00 = A[:, 0, 0] + da
                    A01 = A[:, 0, 1]
                    A10 = A[:, 1, 0]
                    A11 = A[:, 1, 1] + da
                    A00 = A00[:, tf.newaxis, tf.newaxis]
                    A01 = A01[:, tf.newaxis, tf.newaxis]
                    A10 = A10[:, tf.newaxis, tf.newaxis]
                    A11 = A11[:, tf.newaxis, tf.newaxis]
                    A_temp1 = tf.concat([A00, A10], axis=1)
                    A_temp2 = tf.concat([A01, A11], axis=1)
                    A = tf.concat([A_temp1, A_temp2], axis=2)
                    # Update rhs of linear equation system
                    F_resp = tf.reshape(F, [w * h, 3])
                    F_resp_j = tf.nn.embedding_lookup(F_resp, j)
                    B_resp = tf.reshape(B, [w * h, 3])
                    B_resp_j = tf.nn.embedding_lookup(B_resp, j)
                    da_resp = tf.reshape(da, [w * h, 1])
                    b0 = b[:, 0, :] + da_resp * F_resp_j
                    b1 = b[:, 1, :] + da_resp * B_resp_j
                    b = tf.concat([b0[:, tf.newaxis, :], b1[:, tf.newaxis, :]], axis=1)
                # Solve linear equation system for foreground and background
                fb = tf.clip_by_value(tf.matmul(inv2(A), b), 0, 1)
    
                F = tf.reshape(fb[:, 0, :], [h, w, 3])
                B = tf.reshape(fb[:, 1, :], [h, w, 3])
    
            # If original image size is reached, return result
            if w >= w0 and h >= h0:
                return F, B
    
            # Grow image size to next level
            w = min(w0, int(np.ceil(w * growth_factor)))
            h = min(h0, int(np.ceil(h * growth_factor)))
    
            F = tf.image.resize_nearest_neighbor(F[tf.newaxis], [h, w])[0]
            B = tf.image.resize_nearest_neighbor(B[tf.newaxis], [h, w])[0]
    
    
    
    ######################################################################
    def estimate_foreground_background_tf():
        image_np = np.array(Image.open("./image.png").resize([256, 144]))[:, :, :3] / 255
        alpha_np = np.array(Image.open("./alpha.png").resize([256, 144])) / 255
        image = tf.placeholder(tf.float32, [144, 256, 3])
        alpha = tf.placeholder(tf.float32, [144, 256])
        foreground, background = estimate_fb_ml(image, alpha, n_iter_func=2)
        sess = tf.Session()
        for i in range(10):
            s = time.time()
            sess.run(foreground, feed_dict={image: image_np, alpha: alpha_np})
            e = time.time()
            print("time: ", e - s)
    
    
    ######################################################################
    def main():
        estimate_foreground_background_tf()
    
    
    if __name__ == "__main__":
        main()
    
    
    opened by MingtaoGuo 1
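The `inv2` helper at the top of the snippet inverts a batch of 2×2 matrices using the closed-form determinant formula. The same math can be checked in plain NumPy; the name `inv2_np` below is just for illustration and is not part of PyMatting:

```python
import numpy as np

def inv2_np(mat):
    # Batched closed-form inverse of 2x2 matrices, input shape (n, 2, 2)
    a = mat[..., 0, 0]
    b = mat[..., 0, 1]
    c = mat[..., 1, 0]
    d = mat[..., 1, 1]
    inv_det = 1.0 / (a * d - b * c)
    inv = np.empty_like(mat)
    inv[..., 0, 0] = inv_det * d
    inv[..., 0, 1] = -inv_det * b
    inv[..., 1, 0] = -inv_det * c
    inv[..., 1, 1] = inv_det * a
    return inv

mats = np.array([[[4.0, 7.0], [2.0, 6.0]],
                 [[1.0, 2.0], [3.0, 4.0]]])
identity = np.matmul(inv2_np(mats), mats)  # each product is the 2x2 identity
```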
  • [BUG 🐛] division by zero error in estimate_foreground_ml

    I am getting division-by-zero errors in estimate_foreground_ml().

    What I tried:

    • pymatting 1.1.1 and 1.1.3
    • making sure both the image and the mask are not uniform (I've seen the error when both have min_val=0 and max_val=1)
    • default parameters and different variations

    The environment is Google Colab. Also, sometimes this (or something else in PyMatting) causes the Colab runtime itself to crash and disconnect.

    opened by eyaler 14
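The report above does not show where the division occurs. In a multi-level scheme like the one sketched earlier, one plausible failure mode is a (near-)singular 2×2 system whose determinant underflows to zero before the reciprocal is taken. A generic guard for that case could look like the following sketch; all names are hypothetical and this is not PyMatting's actual fix:

```python
import numpy as np

def safe_inv_det(a, b, c, d, eps=1e-12):
    # Reciprocal of a batched 2x2 determinant, clamped away from zero.
    det = a * d - b * c
    # copysign keeps the sign of det while enforcing a minimum magnitude
    det = np.where(np.abs(det) < eps, np.copysign(eps, det), det)
    return 1.0 / det

# A singular system (det == 0) no longer produces inf/nan:
vals = safe_inv_det(np.array([1.0]), np.array([1.0]),
                    np.array([1.0]), np.array([1.0]))
```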
Releases: v1.1.2