Selects tests affected by changed files. Continuous test runner when used with pytest-watch.

Overview

This is a pytest plug-in which automatically selects and re-executes only the tests affected by recent changes. How is this possible in a dynamic language like Python, and how reliable is it? Read here: Determining affected tests
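
As a rough illustration of the idea (a simplified conceptual sketch, not testmon's actual data model): testmon records, via coverage, which source blocks each test executes and stores a checksum per block; a test is considered affected when any of its remembered checksums no longer matches the current code.

# Conceptual sketch only -- simplified, not testmon's real implementation.
import hashlib

def block_checksum(source: str) -> str:
    # fingerprint of one code block (e.g. a function body)
    return hashlib.blake2b(source.encode(), digest_size=8).hexdigest()

def affected_tests(dependencies, current_blocks):
    """dependencies: test_id -> {block_id: checksum} recorded on the last run.
    current_blocks: block_id -> checksum computed from the code as it is now."""
    return {
        test_id
        for test_id, blocks in dependencies.items()
        if any(current_blocks.get(block_id) != checksum
               for block_id, checksum in blocks.items())
    }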

Quickstart

pip install pytest-testmon

# build the dependency database and save it to .testmondata
pytest --testmon

# change some of your code (with test coverage)

# only run tests affected by recent changes
pytest --testmon

To learn more about specifying multiple project directories and troubleshooting, please head to testmon.org
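
pytest-testmon is commonly paired with pytest-watch for continuous re-running. A minimal sketch, assuming pytest-watch's ptw command (arguments after -- are forwarded to pytest):

pip install pytest-watch

# watch the project and re-run only the tests affected by each change
ptw -- --testmon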

Comments
  • pytest-xdist support

    When combining testmon and xdist, more tests are rerun than necessary. I suspect the xdist runners don't update .testmondata.

    Is testmon compatible with xdist or is this due to misconfiguration?

    enhancement 
    opened by timdiels 22
  • Rerun tests should not fail.

    There is a test (A) that fails when running pytest --testmon. After fixing test A, running pytest --testmon still shows the result from when test A was failing. If we then change some code that affects test A with a NOP and run pytest --testmon, test A passes once. But running pytest --testmon immediately after that brings back what I believe is a cached result, namely the failure.

    I believe the cached results in this case should be updated so the test doesn't keep failing.

    opened by henningphan 17
  • Refactor

    This addresses issues #53, #52, #51, #50, #32, and partially #42 (poor man's solution).

    With the limited test cases it worked and the performance was OK. If @blueyed and @boxed tell me it works for them, I'll merge and release.

    opened by tarpas 14
  • performance with large source base

    There is a lot of repetition and JSON arrays in the .testmondata SQLite database; let's make it more efficient. @boxed, you mentioned you have a 20 MB per-commit DB, so this might interest you.

    opened by tarpas 10
  • Override pytest's exit code 5 for "no tests were run"

    py.test will exit with code 5 when no tests were run (https://github.com/pytest-dev/pytest/issues/812, https://github.com/pytest-dev/pytest/issues/500#issuecomment-112204804).

    I can see that this is useful in general, but with pytest-testmon this should not be an error.

    Would it be possible and is it sensible to override pytest's exit code to be 0 in that case then?

    My use case is using pytest-watch and its --onfail feature, which should not be triggered when testmon makes py.test skip all tests.
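
    In the meantime, one possible local workaround (a hedged sketch of a conftest.py hook, not something testmon provides) is to rewrite exit code 5 at the end of the session:

    # conftest.py -- workaround sketch: treat "no tests were run" as success
    def pytest_sessionfinish(session, exitstatus):
        # pytest uses exit code 5 when no tests were collected/run;
        # with testmon deselecting everything, treat that as a pass
        if exitstatus == 5:
            session.exitstatus = 0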

    opened by blueyed 10
  • using a debugger corrupts the testmon database

    From @max-imlian (https://github.com/tarpas/pytest-testmon/issues/90#issuecomment-408984214): FYI, I'm still having real issues with testmon, where it doesn't run tests despite both changes in the code and existing failures, even when I pass --tlf.

    I love the goals of testmon, and it performs so well in 90% of cases that it has become an essential part of my workflow. As a result, it's frustrating when it ignores tests that have changed. I've found that --tlf often doesn't work, which is a shame, as it was often a 'last resort' when testmon mistakenly ignored too many tests.

    Is there any info I can supply that would help debug this? I'm happy to post anything.

    Would there be any use for a 'conservative' mode, where testmon would lean towards testing too much? A Type I error is far less costly than a Type II.

    opened by tarpas 9
  • testmon_data.fail_reports might contain both failed and skipped

    testmon_data.fail_reports might contain both failed and skipped:

    (Pdb++) nodeid
    'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output'
    (Pdb++) pp self.testmon_data.fail_reports[nodeid]
    [{'duration': 0.0020258426666259766,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'setup'},
     {'duration': 0.003198385238647461,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': 'tests/test_renderers.py:753: in test_schemajs_output\n'
                  '    output = renderer.render(\'data\', renderer_context={"request": request})\n'
                  'rest_framework/renderers.py:862: in render\n'
                  '    codec = coreapi.codecs.CoreJSONCodec()\n'
                  "E   AttributeError: 'NoneType' object has no attribute 'codecs'",
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'failed',
      'sections': [],
      'user_properties': [],
      'when': 'call'},
     {'duration': 0.008923768997192383,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'teardown'},
     {'duration': 0.0012934207916259766,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': ['tests/test_renderers.py', 743, 'Skipped: coreapi is not installed'],
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'skipped',
      'sections': [],
      'user_properties': [],
      'when': 'setup'},
     {'duration': 0.026836156845092773,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'teardown'}]
    (Pdb++) pp [unserialize_report('testreport', report) for report in self.testmon_data.fail_reports[nodeid]]
    [<TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='passed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='call' outcome='failed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='skipped'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>]
    

    I might have messed up some internals while debugging #101 / #102, but I think it should be ensured that this would never happen, e.g. on the DB level.
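
    One possible shape of such a guarantee (a hedged sketch only, assuming the serialized report dicts shown above; not how testmon actually resolves this) is to keep only the most recent report per test phase before unserializing:

    def latest_reports_per_phase(reports):
        # `reports` are serialized report dicts as in fail_reports above;
        # later entries win, so the stale failed 'call' report is dropped
        # once a newer setup/teardown cycle has been recorded.
        by_phase = {}
        for report in reports:
            by_phase[report["when"]] = report
        order = {"setup": 0, "call": 1, "teardown": 2}
        return sorted(by_phase.values(), key=lambda r: order.get(r["when"], 99))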

    opened by blueyed 9
  • combination of --testmon and --tlf will execute failing tests multiple times in some circumstances

    I have a situation where executing pytest --testmon --tlf module1/module2 after deleting .testmondata will yield 3 failures.

    The next run of pytest --testmon --tlf module1/module2 yields 6 failures (3 duplicates of the first 3) and each subsequent run yields 3 more duplicates.

    If I run pytest --testmon --tlf module1/module2/tests I get back to 3 tests; the duplication starts as soon as I remove the final tests component from the path (or execute with the root path of the repository).

    I've tried to find a simple way to reproduce this but failed so far. I've also tried looking at what changes in .testmondata but didn't spot anything.

    I'd be grateful if you could give me a hint how I might reproduce or analyze this, as I unfortunately can't share the code at the moment.

    opened by TauPan 9
  • deleting code causes an internal exception

    I see this with pytest-testmon > 0.8.3 (downgrading to this version fixes it for me).

    How to reproduce:

    • delete .testmondata
    • run py.test --testmon once
    • Now delete some code (in my case it was marked with # pragma: no cover, not sure if that's relevant)
    • call py.test --testmon again and look at a backtrace like the following:
    ===================================================================================== test session starts =====================================================================================
    platform linux2 -- Python 2.7.12, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
    Django settings: sis.settings_devel (from ini file)
    testmon=True, changed files: sis/lib/python/sis/rest.py, skipping collection of 529 items, run variant: default
    rootdir: /home/delgado/nobackup/git/sis/software, inifile: pytest.ini
    plugins: testmon-0.9.4, repeat-0.4.1, env-0.6.0, django-3.1.2, cov-2.4.0
    collected 122 items 
    INTERNALERROR> Traceback (most recent call last):
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 98, in wrap_session
    INTERNALERROR>     session.exitstatus = doit(config, session) or 0
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 132, in _main
    INTERNALERROR>     config.hook.pytest_collection(session=session)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 141, in pytest_collection
    INTERNALERROR>     return session.perform_collect()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 602, in perform_collect
    INTERNALERROR>     config=self.config, items=items)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/testmon/pytest_testmon.py", line 165, in pytest_collection_modifyitems
    INTERNALERROR>     assert item.nodeid not in self.collection_ignored
    INTERNALERROR> AssertionError: assert 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' not in set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...])
    INTERNALERROR>  +  where 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' = <TestCaseFunction 'test_should_fail_for_expired_events'>.nodeid
    INTERNALERROR>  +  and   set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...]) = <testmon.pytest_testmon.TestmonDeselect object at 0x7f85679b82d0>.collection_ignored
    
    ==================================================================================== 185 tests deselected =====================================================================================
    =============================================================================== 185 deselected in 0.34 seconds ================================================================================
    
    opened by TauPan 9
  • Performance issue on big projects

    We have a big project with a big test suite. When starting pytest with testmon enabled, it takes something like 8 minutes just to start, even when running (almost) no tests. A profile dump reveals this:

    Wed Dec  7 14:37:13 2016    testmon-startup-profile
    
             353228817 function calls (349177685 primitive calls) in 648.684 seconds
    
       Ordered by: cumulative time
       List reduced from 15183 to 100 due to restriction <100>
    
       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            1    0.001    0.001  648.707  648.707 env/bin/py.test:3(<module>)
     10796/51    0.006    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:335(_hookexec)
     10796/51    0.017    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:332(<lambda>)
     11637/51    0.063    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:586(execute)
            1    0.000    0.000  648.612  648.612 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/config.py:29(main)
      10596/2    0.016    0.000  648.612  324.306 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:722(__call__)
            1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:80(pytest_cmdline_main)
            1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:70(init_testmon_data)
            1    0.004    0.004  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:258(read_fs)
         4310    1.385    0.000  545.292    0.127 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:224(test_should_run)
         4310    3.995    0.001  542.647    0.126 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:229(<dictcomp>)
      4331550   54.292    0.000  538.652    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:104(checksums)
            1    0.039    0.039  537.138  537.138 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:273(compute_unaffected)
     73396811   67.475    0.000  484.571    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:14(checksum)
     73396871  360.852    0.000  360.852    0.000 {method 'encode' of 'str' objects}
            1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
            1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
    
    

    As you can see, the last line is about 83 seconds cumulative, but the two lines above are 360 and 484 seconds respectively.

    This hurts our use case a LOT, and since we use a reference .testmondata file that has been produced by a CI job, it seems excessive (and useless) to recalculate this on each machine when it could be calculated once up front.

    So, what do you guys think about caching this data in .testmondata?
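
    As a rough illustration of the caching idea (a sketch only; the hot spot in the profile is per-block checksumming, and this is not testmon's actual code), per-file fingerprints could be skipped whenever mtime and size are unchanged:

    import hashlib
    import os

    _checksum_cache = {}  # path -> ((mtime, size), digest)

    def cached_checksum(path):
        stat = os.stat(path)
        key = (stat.st_mtime, stat.st_size)
        cached = _checksum_cache.get(path)
        if cached and cached[0] == key:
            return cached[1]  # file unchanged -- reuse the stored digest
        with open(path, "rb") as f:
            digest = hashlib.blake2b(f.read(), digest_size=16).hexdigest()
        _checksum_cache[path] = (key, digest)
        return digest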

    opened by boxed 9
  • Failure still reported when whole module gets re-run

    I have just seen this, after marking TestSchemaJSRenderer (which only contains test_schemajs_output) with @pytest.mark.skipif(not coreapi, reason='coreapi is not installed'):

    % .tox/venvs/py36-django20/bin/pytest --testmon
    ==================================================================================== test session starts =====================================================================================
    platform linux -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
    testmon=True, changed files: tests/test_renderers.py, skipping collection of 118 files, run variant: default
    rootdir: /home/daniel/Vcs/django-rest-framework, inifile: setup.cfg
    plugins: testmon-0.9.11, django-3.2.1, cov-2.5.1
    collected 75 items / 1219 deselected                                                                                                                                                         
    
    tests/test_renderers.py F                                                                                                                                                              [  0%]
    tests/test_filters.py F                                                                                                                                                                [  0%]
    tests/test_renderers.py ..............................................Fs                                                                                                               [100%]
    
    ========================================================================================== FAILURES ==========================================================================================
    _________________________________________________________________________ TestSchemaJSRenderer.test_schemajs_output __________________________________________________________________________
    tests/test_renderers.py:753: in test_schemajs_output
        output = renderer.render('data', renderer_context={"request": request})
    rest_framework/renderers.py:862: in render
        codec = coreapi.codecs.CoreJSONCodec()
    E   AttributeError: 'NoneType' object has no attribute 'codecs'
    _________________________________________________________________ BaseFilterTests.test_get_schema_fields_checks_for_coreapi __________________________________________________________________
    tests/test_filters.py:36: in test_get_schema_fields_checks_for_coreapi
        assert self.filter_backend.get_schema_fields({}) == []
    rest_framework/filters.py:36: in get_schema_fields
        assert coreschema is not None, 'coreschema must be installed to use `get_schema_fields()`'
    E   AssertionError: coreschema must be installed to use `get_schema_fields()`
    ________________________________________________________________ TestDocumentationRenderer.test_document_with_link_named_data ________________________________________________________________
    tests/test_renderers.py:719: in test_document_with_link_named_data
        document = coreapi.Document(
    E   AttributeError: 'NoneType' object has no attribute 'Document'
    ====================================================================================== warnings summary ======================================================================================
    None
      [pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead.
    
    -- Docs: http://doc.pytest.org/en/latest/warnings.html
    ======================================================== 3 failed, 46 passed, 1 skipped, 1219 deselected, 1 warnings in 3.15 seconds =========================================================
    

    TestSchemaJSRenderer.test_schemajs_output should not show up in FAILURES.

    Note also that tests/test_renderers is listed twice, where the failure appears to come from the first entry.

    (running without --testmon shows the same number of tests (tests/test_renderers.py ..............................................Fs)).

    opened by blueyed 8
  • feat: merge DB

    This is a very rough draft PR to address this need.

    I had a hard time understanding how testmon works internally, and also how to test this.

    I'll happily get any feedback, so we can agree and merge this in testmon.

    It would be very helpful for us, as we have multiple workers in our CI that run tests, and we need to merge the testmon results so they can be reused later.
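
    For context, the general shape of merging one SQLite file into another with ATTACH looks roughly like the sketch below; the table name node_fingerprints is a hypothetical placeholder, not testmon's actual schema, which this PR would have to target:

    import sqlite3

    def merge_sqlite(target_path, source_path, table="node_fingerprints"):
        # hypothetical table name -- testmon's real schema differs
        con = sqlite3.connect(target_path)
        try:
            con.execute("ATTACH DATABASE ? AS src", (source_path,))
            con.execute(
                "INSERT OR REPLACE INTO {t} SELECT * FROM src.{t}".format(t=table)
            )
            con.commit()
        finally:
            con.close()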

    opened by ElPicador 0
  • multiprocessing does not seem to be supported

    Suppose I have the following code and tests

    # mymodule.py
    class MyClass:
        def foo(self):
            return "bar"
    
    # test_mymodule.py
    import multiprocessing
    
    def __run_foo():
        from mymodule import MyClass
        c = MyClass()
        assert c.foo() == "bar"
    
    def test_foo():
        process = multiprocessing.Process(target=__run_foo)
        process.start()
        process.join()
        assert process.exitcode == 0
    

    After a first successful run of pytest --testmon test_mymodule.py, one can change the implementation of foo() without testmon noticing and no new runs are triggered.

    When running pytest --cov manually, we do get a trace (see the attached screenshot).
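
    A hedged workaround sketch (not a testmon feature): exercise foo() in-process as well, so the parent process that testmon traces records the dependency on mymodule, while the subprocess test keeps verifying the isolated behaviour:

    # test_mymodule.py -- workaround sketch
    import multiprocessing

    from mymodule import MyClass

    def __run_foo():
        assert MyClass().foo() == "bar"

    def test_foo_in_process():
        # traced in the parent process, so editing MyClass.foo re-selects this test
        assert MyClass().foo() == "bar"

    def test_foo_in_subprocess():
        process = multiprocessing.Process(target=__run_foo)
        process.start()
        process.join()
        assert process.exitcode == 0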

    opened by janbernloehr 1
  • Changes to global / class variables are ignored (if no method of their module is executed)

    Suppose you have the following files

    # mymodule.py
    foo_bar = "value"
    
    class MyClass:
        FOO = "bar"
    
    # test_mymodule.py
    from mymodule import MyClass, foo_bar
    
    def test_module():
        assert foo_bar == "value"
        assert MyClass.FOO == "bar"
    

    Now running pytest --testmon test_mymodule.py does not rerun test_module() when the value of MyClass.FOO or foo_bar is changed. Even worse, FOO can be completely removed, e.g.

    class MyClass:
        pass
    

    without triggering a re-run.

    When running pytest --cov manually, there seems to be a trace (see the attached screenshot).
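
    A possible workaround sketch (an assumption, not something testmon documents): re-execute the module body inside the test with importlib.reload, so the module-level assignments are traced as part of the test and editing them should invalidate it. Reloading modules can have side effects, so treat this with care:

    # test_mymodule.py -- workaround sketch
    import importlib

    import mymodule

    def test_module():
        importlib.reload(mymodule)  # module-level lines now run under this test
        assert mymodule.foo_bar == "value"
        assert mymodule.MyClass.FOO == "bar"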

    opened by janbernloehr 2
  • Fix/improve linting of the code when using pytest-pylint

    This also includes changed source code files when pytest-pylint is enabled. By default, source code files are ignored, so pylint is not able to process those files.

    opened by msemanicky 1
  • Testmon very sensitive towards library changes

    Background

    I have found pytest-testmon to be very sensitive to the slightest changes in the packages of the environment it is installed in. This makes it very difficult to use pytest-testmon in practice when sharing a .testmondata file between our developers.

    Use Case:

    A developer is trying out a new library foo and has written a wrapper foo_bar.py for it. The developer only wants to run test_foo_bar.py for foo_bar.py; however, the entire test suite is run because foo was installed.

    Example Solution

    Flag for disregarding library changes

    • I am happy to contribute a solution, if it is considered feasible.
    • If the library needs funding for a solution, that is an option as well.
    opened by dgot 1
  • How does testmon handle data files?

    If I have an open('path/to/file.json').read() call in my code, would testmon be able to flag that file?

    I saw there is something called file tracers in coverage, but I'm not sure if it is meant for this use case.

    opened by uriva 1
Releases (v1.4.3b1)