pytest-pytorch

pytest plugin for a better developer experience when working with the PyTorch test suite

What is it?

pytest-pytorch is a lightweight pytest plugin that enhances the developer experience when working with the PyTorch test suite, especially if you come from a pytest background.

Why do I need it?

Some test cases in the PyTorch test suite are generated automatically when a module is loaded, in order to parametrize them. This means that collecting them by the names they are written with, e.g. pytest test_foo.py::TestFoo or pytest test_foo.py::TestFoo::test_bar, is unfortunately not possible. If you are used to this syntax or your IDE relies on it (PyCharm, VSCode), you can install pytest-pytorch to make it work.

How do I install it?

You can install pytest-pytorch with pip:

$ pip install pytest-pytorch

or with conda:

$ conda install -c conda-forge pytest-pytorch

How do I use it?

With pytest-pytorch installed, you can select test cases and tests as if the instantiation for different devices was performed by @pytest.mark.parametrize:

Use case                               Command
Run a test case against all devices    pytest test_foo.py::TestBar
Run a test case against one device     pytest test_foo.py::TestBar -k "$DEVICE"
Run a test against all devices         pytest test_foo.py::TestBar::test_baz
Run a test against one device          pytest test_foo.py::TestBar::test_baz -k "$DEVICE"
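
For example, assuming "cpu" is among the instantiated devices, running a single test on the CPU only looks like this:

$ pytest test_foo.py::TestBar::test_baz -k "cpu"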

Can I have a little more background?

PyTorch uses its own method for generating tests, which is for the most part compatible with unittest and pytest. Its custom test generation allows test templates to be written once and instantiated for different device types, data types, and operators. Consider the following module test_foo.py:

from torch.testing._internal.common_utils import TestCase
from torch.testing._internal.common_device_type import instantiate_device_type_tests

# Test template: each test receives the device it runs on as a string, e.g. "cpu".
class TestFoo(TestCase):
    def test_bar(self, device):
        pass

    def test_baz(self, device):
        pass

# Generates one test case per available device type, e.g. TestFooCPU and TestFooCUDA,
# and removes the template itself from the module namespace.
instantiate_device_type_tests(TestFoo, globals())

Assuming we "cpu" and "cuda" are available as devices, we can collect four tests:

  1. test_foo.py::TestFooCPU::test_bar_cpu,
  2. test_foo.py::TestFooCPU::test_baz_cpu,
  3. test_foo.py::TestFooCUDA::test_bar_cuda, and
  4. test_foo.py::TestFooCUDA::test_baz_cuda.
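
You can check this without running anything by using pytest's collection preview:

$ pytest test_foo.py --collect-only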

From a pytest perspective, this is similar to decorating TestFoo with @pytest.mark.parametrize("device", ("cpu", "cuda")), which would result in

  1. test_foo.py::TestFoo::test_bar[cpu],
  2. test_foo.py::TestFoo::test_bar[cuda],
  3. test_foo.py::TestFoo::test_baz[cpu], and
  4. test_foo.py::TestFoo::test_baz[cuda].
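
A minimal sketch of what such an equivalent plain-pytest module could look like (hypothetical; this is not how PyTorch implements it):

import pytest

# Parametrizing the class applies the mark to every test method in it.
@pytest.mark.parametrize("device", ("cpu", "cuda"))
class TestFoo:
    def test_bar(self, device):
        pass

    def test_baz(self, device):
        pass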

Since the PyTorch test framework renames test cases and tests, naively running pytest test_foo.py::TestFoo or pytest test_foo.py::TestFoo::test_bar fails, because pytest can't find anything matching these names. Of course, you can work around this with the keyword matching (-k command line flag) that pytest offers.
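
For example, since the instantiated names test_bar_cpu and test_bar_cuda both contain the template name, the following selects both variants:

$ pytest test_foo.py -k "test_bar"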

pytest-pytorch performs this matching for you, so you can keep your familiar workflow and your IDE works out of the box.

How do I contribute?

First and foremost: thank you for your interest in pytest-pytorch's development! We appreciate all contributions, be it code or something else. Check out our contribution guidelines for details.

Comments
  • Fix broken link in readme

    The blog link provided in the readme is broken. Please fix it.

    https://deploy-preview-211--quansight-labs.netlify.app/blog/2021/06/pytest-pytorch/

    I would suggest replacing it with:

    https://labs.quansight.org/blog/2021/06/pytest-pytorch/

    Note: I am pushing a PR to fix this.

    opened by sugatoray 1
  • Change test workflows from PyTorch nightly to stable

    Previously, we used PyTorch nightly to have a second device available in CI. As of torch==1.9, the "meta" device is included in the stable binaries, so there is no need for the less safe nightly testing anymore.

    opened by pmeier 0
  • remove duplicate tests

    In some cases new_cmds == legacy_cmds. This made the configs more verbose to write and additionally resulted in duplicate tests.

    After this PR, if no legacy_cmds is passed to Config, the value of new_cmds is used. In addition, duplicate configs are filtered out.
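
    A minimal sketch of the described fallback, assuming Config is a small dataclass-like helper in the test suite (names and shape are illustrative):

    from dataclasses import dataclass
    from typing import Optional, Sequence

    @dataclass
    class Config:
        new_cmds: Sequence[str]
        legacy_cmds: Optional[Sequence[str]] = None

        def __post_init__(self):
            # Fall back to new_cmds when no legacy_cmds is given.
            if self.legacy_cmds is None:
                self.legacy_cmds = self.new_cmds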

    opened by pmeier 0
  • trim the test matrix

    We don't need to test this for every OS / Python combination. It should be sufficient to test every OS with the minimum Python requirement, as well as one OS (Linux) with every supported Python version.

    This should speed up the CI runs and save some resources.

    opened by pmeier 0
  • refactor test suite to test the actual collection

    Before, we tested the collection by giving each test a different outcome based on the respective parameter and checking the pytest result against that. This has two downsides:

    1. It takes more mental effort to parse not only which tests will run, but also what the outcome of all the tests that ran should be.
    2. Since we could only test against the aggregated result of multiple tests, we couldn't be sure the test was actually right.

    With this PR we are actually testing the collection by parsing the pytest output. Additionally, you can now add this code block

    # ======================================================================================
    # This block is necessary to autogenerate the parametrization for
    # tests/test_plugin.py::test_standard_collection.
    # It needs to be placed **after** the import of 'instantiate_device_type_tests' and
    # **before** its first usage.
    # ======================================================================================
    try:
        from _spy import Spy
    
        __spy__ = Spy()
        del Spy
        instantiate_device_type_tests = __spy__(instantiate_device_type_tests)
    except ModuleNotFoundError:
        pass
    # ======================================================================================
    

    to a test file to automatically test the selection of

    • everything in the file,
    • every test case, and
    • every test case function.

    Anything beyond that still needs to be configured manually.

    opened by pmeier 0
  • support selection of tests that use op infos

    Fixes #16.

    Currently we rely on the device identifier coming directly after the test case function name. That is no longer true when using OpInfos: the name of an instantiated test follows the scheme (template_name)_(op_name)_(device)_(dtype).

    This is a complete rewrite of the internal matching logic:

    • test case: Test cases are only parametrized by the device. Since every TestCase has a device_type attribute, we can simply strip the device identifier from the instantiated name.
    • test case function: Since both template_name and op_name in the pattern above might contain underscores, and they are also separated by a single underscore, it is impossible to extract the two parts without further knowledge. To overcome this, we can inspect the source of the function and extract the template_name (which is the function name) directly, as sketched below.
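
    A hypothetical sketch of the test case part of this idea (illustrative names, not the plugin's actual code):

    def template_case_name(instantiated_case):
        # Every instantiated TestCase carries a device_type attribute, e.g. "cpu",
        # so the device suffix can simply be stripped from the class name:
        # TestFooCPU -> TestFoo, TestFooCUDA -> TestFoo.
        name = type(instantiated_case).__name__
        suffix = instantiated_case.device_type.upper()
        return name[: -len(suffix)] if name.endswith(suffix) else name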
    opened by pmeier 0
  • Re-add support for OpInfo's

    Consider the following test setup:

    from torch.testing._internal.common_device_type import (
        instantiate_device_type_tests,
        ops,
    )
    from torch.testing._internal.common_utils import TestCase
    from torch.testing._internal.common_methods_invocations import OpInfo
    
    BazOpInfo = OpInfo("add")
    
    
    class TestFoo(TestCase):
        @ops([BazOpInfo])
        def test_bar(self, device, dtype, op):
            pass
    
    
    instantiate_device_type_tests(TestFoo, globals(), only_for="cpu")
    

    Running pytest test_foo.py --collect-only on that results in:

    <PyTorchTestCase TestFooCPU>
      <PyTorchTestCaseFunction test_bar_add_cpu_float32>
      <PyTorchTestCaseFunction test_bar_add_cpu_float64>
    <PyTorchTestCase TestFooMETA>
      <PyTorchTestCaseFunction test_bar_add_meta_float32>
      <PyTorchTestCaseFunction test_bar_add_meta_float64>
    

    Naming schemes:

    • test cases: (template_name)(device)
    • test case functions: (template_name)_(op_name)_(device)_(dtype)

    After #12, test case functions require the device to follow right after the template name. Thus, it is no longer possible to select an individual test by name, e.g. pytest test_foo.py::TestFoo::test_bar.

    opened by pmeier 0
  • make dtype testing more concise

    Instead of instantiating the tests for all devices and using the @onlyCPU decorator everywhere, we now use the only_for keyword when instantiating (see the sketch below). With that, the meta tests are not generated in the first place and do not need to be accounted for in the number of skipped tests.
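
    A minimal sketch of the new approach (illustrative, not the actual diff):

    from torch.testing._internal.common_device_type import instantiate_device_type_tests
    from torch.testing._internal.common_utils import TestCase

    class TestFoo(TestCase):
        def test_bar(self, device):
            pass

    # Only generate the CPU tests; no meta tests are created and later skipped.
    instantiate_device_type_tests(TestFoo, globals(), only_for="cpu")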

    opened by pmeier 0
  • Do not require torch at installation

    Right now we have torch as a dependency:

    https://github.com/Quansight/pytest-pytorch/blob/bd98f6b23214460605e4b8c0ee2bd4956e846291/setup.cfg#L32-L34

    If you have set up a development environment to work on PyTorch, you probably do not have a released torch binary installed. Thus, installing pytest-pytorch would also install torch, which is usually not desired.

    opened by pmeier 0
  • Support for nested test case names

    Currently we match the test case (function) names based on this:

    https://github.com/Quansight/pytest-pytorch/blob/ff8f2d86906486a2d437b2617ef9973394f5e216/pytest_pytorch/plugin.py#L9-L10

    This works well until you have a setup similar to this:

    class TestFoo(TestCase):
        pass
    
    class TestFooBar(TestCase):
        pass
    

    If you run pytest test_foo.py::TestFoo on this, both test cases are collected instead of just TestFoo, as the illustration below shows.
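
    A hypothetical illustration of why a plain prefix match is not enough (not the plugin's code):

    instantiated = ["TestFooCPU", "TestFooCUDA", "TestFooBarCPU", "TestFooBarCUDA"]
    selected = [name for name in instantiated if name.startswith("TestFoo")]
    # All four names match, although only TestFooCPU and TestFooCUDA were meant.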

    opened by pmeier 0
  • CI tests are failing

    The CI tests are failing because we need torch>=1.9, which is currently only available through the nightlies. Unfortunately, tox-ltt is not yet able to handle the nightly channel.

    opened by pmeier 0
Releases
  • v0.2.1 (May 25, 2021)

    This release adds the --disable-pytest-pytorch command line option (#25), which makes it easier to debug incompatibilities with the vanilla pytest collection.

  • v0.2.0 (Apr 21, 2021)

    This minor release adds support for OpInfos, which are used more and more throughout the PyTorch test suite (#17).

    Furthermore, @xmnlab helped us get pytest-pytorch into conda-forge. Installation instructions can be found in the README (#18).

  • v0.1.1 (Apr 20, 2021)

    This release includes two minor improvements:

    1. Support for selecting individual test cases if their names are nested, i.e. TestFoo and TestFooBar (#12)
    2. Removal of PyTorch as an installation requirement (#14)
  • v0.1.0 (Apr 14, 2021)
