Selects tests affected by changed files. Continuous test runner when used with pytest-watch.

Related tags

Testing, pytest-testmon
Overview

This is a pytest plugin which automatically selects and re-executes only the tests affected by recent changes. How is this possible in a dynamic language like Python, and how reliable is it? Read here: Determining affected tests

Quickstart

pip install pytest-testmon

# build the dependency database and save it to .testmondata
pytest --testmon

# change some of your code (with test coverage)

# only run tests affected by recent changes
pytest --testmon
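
# continuously watch for changes and re-run only the affected tests
# (one possible setup: assumes pytest-watch is installed and forwards
# the arguments after -- to pytest)
pip install pytest-watch
ptw -- --testmon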

To learn more about specifying multiple project directories and troubleshooting, please head to testmon.org

Comments
  • pytest-xdist support

    When combining testmon and xdist, more tests are rerun than necessary. I suspect the xdist runners don't update .testmondata.

    Is testmon compatible with xdist or is this due to misconfiguration?
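
    One way to check that suspicion is to peek at .testmondata between runs. The snippet below is only a debugging sketch; it assumes .testmondata is a plain SQLite file (as mentioned in the performance issue further down this page) and makes no assumption about the table names, which depend on the installed testmon version:

    import sqlite3

    # List every table in .testmondata with its row count, so two snapshots
    # (before and after an xdist run) can be compared for missing updates.
    con = sqlite3.connect(".testmondata")
    tables = [row[0] for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for name in tables:
        (count,) = con.execute(f"SELECT count(*) FROM {name}").fetchone()
        print(name, count)
    con.close()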

    enhancement 
    opened by timdiels 22
  • Rerun tests should not fail.

    There is a test (A) that fails when running pytest --testmon. After fixing test A, running pytest --testmon still shows the results from when test A was failing. If we then touch some code that affects test A with a no-op change and run pytest --testmon, test A will pass once. But running pytest --testmon immediately after that brings back what I believe is a cached result: the old failure.

    I believe the cached results in this case should be updated so the test doesn't fail.

    opened by henningphan 17
  • Refactor

    This addresses issues: #53 #52 #51 #50 #32, and partially #42 (poor man's solution).

    With the limited test cases it worked and the performance was OK. If @blueyed and @boxed tell me it works for them, I'll merge and release.

    opened by tarpas 14
  • performance with large source base

    There is a lot of repetition and JSON array data in the .testmondata SQLite database; let's make it more efficient. @boxed, you mentioned you have a 20 MB per-commit DB, so this might interest you.
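
    As a purely hypothetical illustration of the deduplication idea (this is not testmon's actual schema), each distinct block checksum could be stored once and referenced by id instead of repeating JSON arrays per test:

    import sqlite3

    # Hypothetical normalized layout (NOT testmon's real .testmondata schema):
    # one row per distinct checksum, plus a link table from tests to blocks.
    con = sqlite3.connect(":memory:")
    con.executescript(
        "CREATE TABLE block (id INTEGER PRIMARY KEY, checksum TEXT UNIQUE);"
        "CREATE TABLE test_block (test_name TEXT, block_id INTEGER,"
        " PRIMARY KEY (test_name, block_id));"
    )
    # Two tests sharing a block store its checksum only once.
    con.execute("INSERT INTO block (checksum) VALUES ('abc123')")
    con.executemany(
        "INSERT INTO test_block VALUES (?, 1)",
        [("test_a",), ("test_b",)],
    )
    con.commit()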

    opened by tarpas 10
  • Override pytest's exit code 5 for "no tests were run"

    py.test will exit with code 5 in case no tests were run (https://github.com/pytest-dev/pytest/issues/812, https://github.com/pytest-dev/pytest/issues/500#issuecomment-112204804).

    I can see that this is useful in general, but with pytest-testmon this should not be an error.

    Would it be possible and is it sensible to override pytest's exit code to be 0 in that case then?

    My use case is using pytest-watch and its --onfail feature, which should not be triggered just because testmon made py.test skip all tests.
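
    A generic pytest-side workaround (not something testmon provides) would be a small conftest.py hook that downgrades exit code 5 to 0; whether that is the right place to do it is exactly the question above:

    # conftest.py -- sketch of a workaround, assuming exit code 5 still means
    # "no tests were collected/run" in the pytest version in use
    def pytest_sessionfinish(session, exitstatus):
        if exitstatus == 5:
            # report success so tools like pytest-watch's --onfail stay quiet
            session.exitstatus = 0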

    opened by blueyed 10
  • Using a debugger corrupts the testmon database

    From @max-imlian (https://github.com/tarpas/pytest-testmon/issues/90#issuecomment-408984214): FYI, I'm still having real issues with testmon, where it doesn't run tests despite both changes in the code and existing failures, even when I pass --tlf.

    I love the goals of testmon, and it performs so well in 90% of cases that it's become an essential part of my workflow. As a result, it's frustrating when it ignores tests that have changed. I've found that --tlf often doesn't work, which is a shame, as it was often a 'last resort' when testmon made a mistake and ignored too many tests.

    Is there any info I can supply that would help debug this? I'm happy to post anything.

    Would there be any use for a 'conservative' mode, where testmon would lean towards testing too much? A Type I error is far less costly than a Type II.

    opened by tarpas 9
  • testmon_data.fail_reports might contain both failed and skipped

    testmon_data.fail_reports might contain both failed and skipped:

    (Pdb++) nodeid
    'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output'
    (Pdb++) pp self.testmon_data.fail_reports[nodeid]
    [{'duration': 0.0020258426666259766,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'setup'},
     {'duration': 0.003198385238647461,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': 'tests/test_renderers.py:753: in test_schemajs_output\n'
                  '    output = renderer.render(\'data\', renderer_context={"request": request})\n'
                  'rest_framework/renderers.py:862: in render\n'
                  '    codec = coreapi.codecs.CoreJSONCodec()\n'
                  "E   AttributeError: 'NoneType' object has no attribute 'codecs'",
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'failed',
      'sections': [],
      'user_properties': [],
      'when': 'call'},
     {'duration': 0.008923768997192383,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'teardown'},
     {'duration': 0.0012934207916259766,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': ['tests/test_renderers.py', 743, 'Skipped: coreapi is not installed'],
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'skipped',
      'sections': [],
      'user_properties': [],
      'when': 'setup'},
     {'duration': 0.026836156845092773,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'teardown'}]
    (Pdb++) pp [unserialize_report('testreport', report) for report in self.testmon_data.fail_reports[nodeid]]
    [<TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='passed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='call' outcome='failed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='skipped'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>]
    

    I might have messed up some internals while debugging #101 / #102, but I think it should be ensured that this can never happen, e.g. at the DB level.
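
    Until that is guaranteed at the DB level, a defensive application-side sketch (purely illustrative, not testmon's code) could keep only the reports of the most recent run, using the fact that each run starts with a 'setup' phase:

    def latest_run(reports):
        # Split serialized reports into runs at each 'setup' phase
        # and keep only the most recent run.
        runs, current = [], []
        for report in reports:
            if report["when"] == "setup" and current:
                runs.append(current)
                current = []
            current.append(report)
        if current:
            runs.append(current)
        return runs[-1] if runs else []

    Applied to the dump above, this would drop the earlier failing run (setup/call/teardown) and keep only the later skipped one.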

    opened by blueyed 9
  • combination of --testmon and --tlf will execute failing tests multiple times in some circumstances

    I have a situation where executing pytest --testmon --tlf module1/module2 after deleting .testmondata will yield 3 failures.

    The next run of pytest --testmon --tlf module1/module2 yields 6 failures (3 duplicates of the first 3) and each subsequent run yields 3 more duplicates.

    If I run pytest --testmon --tlf module1/module2/tests I get back to 3 tests, the duplication starts as soon as I remove the final tests in the path (or execute with the root path of the repository).

    I've tried to find a simple way to reproduce this but failed so far. I've also tried looking at what changes in .testmondata but didn't spot anything.

    I'd be grateful if you could give me a hint how I might reproduce or analyze this, as I unfortunately can't share the code at the moment.

    opened by TauPan 9
  • deleting code causes an internal exception

    I see this with pytest-testmon > 0.8.3 (downgrading to this version fixes it for me).

    How to reproduce:

    • delete .testmondata
    • run py.test --testmon once
    • Now delete some code (in my case, it was marked with # pragma: no cover, not sure if that's relevant)
    • call py.test --testmon again and look at a backtrace like the following:
    ===================================================================================== test session starts =====================================================================================
    platform linux2 -- Python 2.7.12, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
    Django settings: sis.settings_devel (from ini file)
    testmon=True, changed files: sis/lib/python/sis/rest.py, skipping collection of 529 items, run variant: default
    rootdir: /home/delgado/nobackup/git/sis/software, inifile: pytest.ini
    plugins: testmon-0.9.4, repeat-0.4.1, env-0.6.0, django-3.1.2, cov-2.4.0
    collected 122 items 
    INTERNALERROR> Traceback (most recent call last):
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 98, in wrap_session
    INTERNALERROR>     session.exitstatus = doit(config, session) or 0
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 132, in _main
    INTERNALERROR>     config.hook.pytest_collection(session=session)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 141, in pytest_collection
    INTERNALERROR>     return session.perform_collect()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 602, in perform_collect
    INTERNALERROR>     config=self.config, items=items)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/testmon/pytest_testmon.py", line 165, in pytest_collection_modifyitems
    INTERNALERROR>     assert item.nodeid not in self.collection_ignored
    INTERNALERROR> AssertionError: assert 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' not in set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...])
    INTERNALERROR>  +  where 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' = <TestCaseFunction 'test_should_fail_for_expired_events'>.nodeid
    INTERNALERROR>  +  and   set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...]) = <testmon.pytest_testmon.TestmonDeselect object at 0x7f85679b82d0>.collection_ignored
    
    ==================================================================================== 185 tests deselected =====================================================================================
    =============================================================================== 185 deselected in 0.34 seconds ================================================================================
    
    opened by TauPan 9
  • Performance issue on big projects

    We have a big project with a big test suite. When starting pytest with testmon enabled, it takes something like 8 minutes just to start, even when running (almost) no tests. A profile dump reveals this:

    Wed Dec  7 14:37:13 2016    testmon-startup-profile
    
             353228817 function calls (349177685 primitive calls) in 648.684 seconds
    
       Ordered by: cumulative time
       List reduced from 15183 to 100 due to restriction <100>
    
       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            1    0.001    0.001  648.707  648.707 env/bin/py.test:3(<module>)
     10796/51    0.006    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:335(_hookexec)
     10796/51    0.017    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:332(<lambda>)
     11637/51    0.063    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:586(execute)
            1    0.000    0.000  648.612  648.612 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/config.py:29(main)
      10596/2    0.016    0.000  648.612  324.306 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:722(__call__)
            1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:80(pytest_cmdline_main)
            1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:70(init_testmon_data)
            1    0.004    0.004  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:258(read_fs)
         4310    1.385    0.000  545.292    0.127 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:224(test_should_run)
         4310    3.995    0.001  542.647    0.126 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:229(<dictcomp>)
      4331550   54.292    0.000  538.652    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:104(checksums)
            1    0.039    0.039  537.138  537.138 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:273(compute_unaffected)
     73396811   67.475    0.000  484.571    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:14(checksum)
     73396871  360.852    0.000  360.852    0.000 {method 'encode' of 'str' objects}
            1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
            1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
    
    

    As you can see, the last line is about 83 seconds cumulative, but the two lines above it are 360 and 484 seconds respectively.

    This hurts our use case a LOT, and since we use a reference .testmondata file that has been produced by a CI job, it seems excessive (and useless) to recalculate this on each machine when it could be calculated once up front.

    So, what do you guys think about caching this data in .testmondata?
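
    The profile suggests that most of the time goes into millions of per-line checksum/encode calls rather than I/O, which is why caching (or hashing larger units) should help. A rough, self-contained illustration of that gap (not testmon's actual checksum code):

    import hashlib
    import time
    import zlib

    lines = ["x = %d" % i for i in range(200_000)]

    start = time.perf_counter()
    per_line = [zlib.adler32(line.encode()) for line in lines]  # one call per line
    mid = time.perf_counter()
    whole = hashlib.sha1("\n".join(lines).encode()).hexdigest()  # one call per file
    end = time.perf_counter()

    print("per-line checksums:", mid - start, "s")
    print("whole-file hash:   ", end - mid, "s")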

    opened by boxed 9
  • Failure still reported when whole module gets re-run

    I have just seen this, after marking TestSchemaJSRenderer (which only contains test_schemajs_output) with @pytest.mark.skipif(not coreapi, reason='coreapi is not installed'):

    % .tox/venvs/py36-django20/bin/pytest --testmon
    ==================================================================================== test session starts =====================================================================================
    platform linux -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
    testmon=True, changed files: tests/test_renderers.py, skipping collection of 118 files, run variant: default
    rootdir: /home/daniel/Vcs/django-rest-framework, inifile: setup.cfg
    plugins: testmon-0.9.11, django-3.2.1, cov-2.5.1
    collected 75 items / 1219 deselected                                                                                                                                                         
    
    tests/test_renderers.py F                                                                                                                                                              [  0%]
    tests/test_filters.py F                                                                                                                                                                [  0%]
    tests/test_renderers.py ..............................................Fs                                                                                                               [100%]
    
    ========================================================================================== FAILURES ==========================================================================================
    _________________________________________________________________________ TestSchemaJSRenderer.test_schemajs_output __________________________________________________________________________
    tests/test_renderers.py:753: in test_schemajs_output
        output = renderer.render('data', renderer_context={"request": request})
    rest_framework/renderers.py:862: in render
        codec = coreapi.codecs.CoreJSONCodec()
    E   AttributeError: 'NoneType' object has no attribute 'codecs'
    _________________________________________________________________ BaseFilterTests.test_get_schema_fields_checks_for_coreapi __________________________________________________________________
    tests/test_filters.py:36: in test_get_schema_fields_checks_for_coreapi
        assert self.filter_backend.get_schema_fields({}) == []
    rest_framework/filters.py:36: in get_schema_fields
        assert coreschema is not None, 'coreschema must be installed to use `get_schema_fields()`'
    E   AssertionError: coreschema must be installed to use `get_schema_fields()`
    ________________________________________________________________ TestDocumentationRenderer.test_document_with_link_named_data ________________________________________________________________
    tests/test_renderers.py:719: in test_document_with_link_named_data
        document = coreapi.Document(
    E   AttributeError: 'NoneType' object has no attribute 'Document'
    ====================================================================================== warnings summary ======================================================================================
    None
      [pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead.
    
    -- Docs: http://doc.pytest.org/en/latest/warnings.html
    ======================================================== 3 failed, 46 passed, 1 skipped, 1219 deselected, 1 warnings in 3.15 seconds =========================================================
    

    TestSchemaJSRenderer.test_schemajs_output should not show up in FAILURES.

    Note also that tests/test_renderers is listed twice, where the failure appears to come from the first entry.

    (running without --testmon shows the same number of tests (tests/test_renderers.py ..............................................Fs)).

    opened by blueyed 8
  • feat: merge DB

    This is a draft PR to address this need.

    I had a hard time understanding how testmon works internally, and also how to test this.

    I'll happily take any feedback, so we can agree on the approach and merge this into testmon.

    It would be very helpful for us, as we have multiple workers in our CI that run tests, and we would need to merge the testmon results so they can be reused later.
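
    For anyone needing a stop-gap before this PR lands, a generic SQLite merge can be sketched without knowing testmon's schema; it simply copies rows table by table and skips conflicts (assumption: both files were produced by the same testmon version, so the schemas match):

    import sqlite3

    def merge_sqlite(dest_path, src_path):
        # Copy every user table from src into dest, ignoring conflicting rows.
        con = sqlite3.connect(dest_path)
        con.execute("ATTACH DATABASE ? AS src", (src_path,))
        tables = [row[0] for row in con.execute(
            "SELECT name FROM src.sqlite_master "
            "WHERE type='table' AND name NOT LIKE 'sqlite_%'")]
        for table in tables:
            con.execute(f"INSERT OR IGNORE INTO main.{table} SELECT * FROM src.{table}")
        con.commit()
        con.execute("DETACH DATABASE src")
        con.close()

    merge_sqlite(".testmondata", ".testmondata.worker1")

    Whether a blind row copy preserves testmon's internal id references is exactly what the PR has to solve, so treat this only as a starting point.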

    opened by ElPicador 0
  • multiprocessing does not seem to be supported

    Suppose I have the following code and tests

    # mymodule.py
    class MyClass:
        def foo(self):
            return "bar"
    
    # test_mymodule.py
    import multiprocessing
    
    def __run_foo():
        from mymodule import MyClass
        c = MyClass()
        assert c.foo() == "bar"
    
    def test_foo():
        process = multiprocessing.Process(target=__run_foo)
        process.start()
        process.join()
        assert process.exitcode == 0
    

    After a first successful run of pytest --testmon test_mymodule.py, one can change the implementation of foo() without testmon noticing, and no new runs are triggered.

    When running pytest --cov manually, we do get a trace (screenshot omitted).
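
    A possible stop-gap (hypothetical, and it does not fix the subprocess tracing itself) is to exercise the same logic in the test process too, so testmon's coverage-based tracing records the dependency on mymodule:

    # test_mymodule.py (additional test, runs in the pytest process itself)
    from mymodule import MyClass

    def test_foo_in_process():
        # duplicates the subprocess assertion so a change to foo() is noticed
        assert MyClass().foo() == "bar"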

    opened by janbernloehr 1
  • Changes to global / class variables are ignored (if no method of their module is executed)

    Suppose you have the following files

    # mymodule.py
    foo_bar = "value"
    
    class MyClass:
        FOO = "bar"
    
    # test_mymodule.py
    from mymodule import MyClass, foo_bar
    
    def test_module():
        assert foo_bar == "value"
        assert MyClass.FOO == "bar"
    

    Now running pytest --testmon test_mymodule.py does not rerun test_module() when the value of MyClass.FOO or foo_bar is changed. Even worse, FOO can be completely removed, e.g.

    class MyClass:
        pass
    

    without triggering a re-run.

    When running pytest --cov manually, there does seem to be a trace (screenshot omitted).

    opened by janbernloehr 2
  • Fix/improve linting of the code when using pytest-pylint

    This also includes changed source code files when pytest-pylint is enabled. By default, source code files are ignored, so pylint is not able to process those files.

    opened by msemanicky 1
  • Testmon very sensitive towards library changes

    Background

    I have found pytest-testmon to be very sensitive to the slightest changes in the packages installed in its environment. This makes it very difficult to use pytest-testmon in practice when sharing a .testmondata file between our developers.

    Use Case:

    A developer is trying out a new library foo and has written a wrapper foo_bar.py for it. The developer only wants to run test_foo_bar.py for foo_bar.py; however, the entire test suite is run since foo was installed.

    Example Solution

    Flag for disregarding library changes

    • I am happy to contribute a solution, if it is considered feasible.
    • If the library needs funding for a solution, this is an option as well.
    opened by dgot 1
  • How does testmon handle data files?

    If I have an open('path/to/file.json').read() call in my code, would testmon be able to flag that file?

    I saw there is something called file tracers in coverage.py, but I'm not sure whether it is meant for this use case.
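
    For reference, a coverage.py file tracer is a plugin hook that lets coverage attribute execution to non-Python files; a minimal skeleton looks roughly like the following, but whether testmon could consume such data for open()-style file reads is an open question (this is not a testmon feature):

    # covplugin.py -- skeleton of a coverage.py plugin with a file tracer
    import coverage

    class DataFileTracerPlugin(coverage.CoveragePlugin):
        def file_tracer(self, filename):
            # Return a coverage.FileTracer for files this plugin should handle;
            # returning None leaves the file to coverage's normal Python tracing.
            return None

    def coverage_init(reg, options):
        reg.add_file_tracer(DataFileTracerPlugin())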

    opened by uriva 1
Releases (v1.4.3b1)