Declarative HTTP Testing for Python and anything else

Overview

Gabbi

Release Notes

Gabbi is a tool for running HTTP tests where requests and responses are represented in a declarative YAML-based form. The simplest test looks like this:

tests:
- name: A test
  GET: /api/resources/id

See the docs for more details on the many features and formats for setting request headers and bodies and evaluating responses.
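
As a slightly fuller sketch (the endpoint and fields here are invented for illustration and are not part of gabbi), a test can set a request header and body and make assertions about the response:

tests:
- name: post a resource with a JSON body
  POST: /api/resources
  request_headers:
    content-type: application/json
  data:
    name: example
  status: 201
  response_json_paths:
    $.name: example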

Gabbi is tested with Python 3.6, 3.7, 3.8, 3.9 and pypy3.

Tests can be run using unittest-style test runners, pytest, or from the command line with the gabbi-run script.
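
For example, a minimal command-line invocation (the host, port, and file name here are placeholders) reads a YAML test file on stdin:

gabbi-run localhost:8000 < example.yaml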

There is a gabbi-demo repository which provides a tutorial via its commit history. The demo builds a simple API using gabbi to facilitate test driven development.

Purpose

Gabbi works to bridge the gap between human readable YAML files that represent HTTP requests and expected responses and the obscured realm of Python-based, object-oriented unit tests in the style of the unittest module and its derivatives.

Each YAML file represents an ordered list of HTTP requests along with the expected responses. This allows a single file to represent a process in the API being tested. For example:

  • Create a resource.
  • Retrieve a resource.
  • Delete a resource.
  • Retrieve a resource again to confirm it is gone.
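
One way to express that sequence in a gabbi file is sketched below (the paths and the id field are hypothetical; $RESPONSE refers to the immediately preceding response and $HISTORY['test name'] to any earlier one):

tests:
- name: create a resource
  POST: /api/resources
  request_headers:
    content-type: application/json
  data:
    name: example
  status: 201

- name: retrieve the resource
  GET: /api/resources/$RESPONSE['$.id']
  status: 200

- name: delete the resource
  DELETE: /api/resources/$HISTORY['create a resource'].$RESPONSE['$.id']
  status: 204

- name: retrieve the resource again
  GET: /api/resources/$HISTORY['create a resource'].$RESPONSE['$.id']
  status: 404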

At the same time it is still possible to ask gabbi to run just one request. If it is in a sequence of tests, those tests prior to it in the YAML file will be run (in order). In any single process any test will only be run once. Concurrency is handled such that one file runs in one process.

These features mean that it is possible to create tests that are useful for both humans (as tools for improving and developing APIs) and automated CI systems.

Testing and Developing Gabbi

To get started, after cloning the repository, you should install the development dependencies:

$ pip install -r requirements-dev.txt

If you prefer to keep things isolated you can create a virtual environment:

$ virtualenv gabbi-venv
$ . gabbi-venv/bin/activate
$ pip install -r requirements-dev.txt

Gabbi is set up to be developed and tested using tox (installed via requirements-dev.txt). To run the built-in tests (the YAML files are in the directories gabbi/tests/gabbits_* and loaded by the file gabbi/test_*.py), you call tox:

tox -epep8,py37

If you have the dependencies installed (or a warmed up virtualenv) you can run the tests by hand and exit on the first failure:

python -m subunit.run discover -f gabbi | subunit2pyunit

Testing can be limited to individual modules by specifying them after the tox invocation:

tox -epep8,py37 -- test_driver test_handlers

If you wish to avoid running tests that connect to internet hosts, set GABBI_SKIP_NETWORK to True.
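
For example, to skip those tests for a single tox run (assuming a POSIX shell):

GABBI_SKIP_NETWORK=True tox -epy37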

Comments
  • Coerce JSON types into correct values for later $RESPONSE replacements

    Resolves #147: previously, $RESPONSE replacements that contained integer or decimal values would wind up as quoted strings after substitution, e.g. {"id": 825} would later become {"id": "825"}.

    This passes tox -epep8, but running tox -epy27 seems to have the first few tests fail, unfortunately. This patch works for my use case, however; I could not POST data to particular endpoints of an API as the number values were wrapped in quotes (and the API was doing type checks on the values).

    This may not be an ideal (or eloquent) solution, but I've also tried to keep performance in mind in lieu of large JSON response bodies, namely that I expect the exception cases to be more common than exceptional, so I've added some additional checking to see if it's really worth parsing particular values as strings (line no. 359) and or if it even looks like a number in the first place (line no. 376). That said, if this performance hit is not an issue, it certainly is a lot more readable without the checks.

    I also chose to do two try/excepts instead of simply using float. First we try parsing for int and then for float, as I prefer the resulting JSON to be correct, i.e. I would rather not have an id field that was initially an int cast into a float. For example, suppose we get a response of {"id": 825} and we simply had one try/except that used float. The value would parse, but the resulting JSON (from json.dumps) would be {"id": 825.0}. Pragmatically this doesn't matter, as I'm sure most endpoints will accept a decimal value with an appended .0 as a valid integer, but I felt the semantics would be a surprise to other users of the lib, and it's still possible that certain APIs might have an issue with it.

    And thanks for all the effort you've put into the lib!

    opened by justanotherdot 20
  • Unable to make a relative Content Handler import from the command-line

    On the command line, importing a custom Response Handler using a relative path requires manipulation of the PYTHONPATH environment variable to add . to the list of paths.

    Should Gabbi allow relative imports to work out-of-the-box?

    e.g.

    gabbi-run -r foo.bar:ExampleHandler < example.yaml
    

    ... fails with, ModuleNotFoundError: No module named 'foo'.

    Updating PYTHONPATH...

    PYTHONPATH=${PYTHONPATH}:. gabbi-run -r foo.bar:ExampleHandler < example.yaml
    

    ... works.

    opened by scottwallacesh 17
  • Allow to load python object from yaml

    It can be interesting to write custom objects to compare values.

    For example, I need to ensure an output is equal to .NAN

    Because .NAN == .NAN always returns false, we currently can't compare it with assert_equals().

    With the unsafe yaml loader we can register a custom method to check NAN, for example:

    import numpy
    import yaml

    class IsNAN(object):
        @classmethod
        def constructor(cls, loader, node):
            # Any node tagged !ISNAN becomes an IsNAN instance.
            return cls()

        def __eq__(self, other):
            return numpy.isnan(other)

    yaml.add_constructor(u'!ISNAN', IsNAN.constructor)
    
    opened by sileht 17
  • extra verbosity to include request/response bodies

    Currently it can be somewhat tricky to debug unexpected outcomes, as verbose: true only prints headers.

    In my case, I wanted to verify that a CSRF token was included in a form submission. The simplest way to check the request body was to start netcat and change my test's URL to http://localhost:9999.

    It would be useful if gabbi provided a way to inspect the entire data being sent over the wire.

    opened by FND 15
  • Ability to run gabbi test cases individually

    The ability to run an individual gabbi test without any of the tests preceding it in the yaml file could be useful. I created a project where I drive gabbi test cases using Robot Framework (https://github.com/dkt26111/robotframework-gabbilibrary). In order for that to work I explicitly set the prior field of the gabbi test case being run to None.

    opened by dkt26111 14
  • Verbose misses response body

    I have got the following test spec:

    tests:
    -   name: auth
        verbose: all
        url: /api/sessions
        method: POST
        data: "asdsad"
        status: 200
    

    where data is deliberately not valid JSON. The response results in a 400 instead of a 200. Verbose is set to all, but it still does not print the response body, although it detects non-empty content:

    ... #### auth ####
    > POST http://localhost:7000/api/sessions
    > user-agent: gabbi/1.40.0 (Python urllib3)
    
    < 400 Bad Request
    < content-length: 48
    < date: Wed, 19 Aug 2020 06:11:24 GMT
    
    ✗ gabbi-runner.input_auth
    
    FAIL: gabbi-runner.input_auth
            Traceback (most recent call last):
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 94, in wrapper
                func(self)
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 143, in test_request
                self._run_test()
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 550, in _run_test
                self._assert_response()
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 188, in _assert_response
                self._test_status(self.test_data['status'], self.response['status'])
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 591, in _test_status
                self.assert_in_or_print_output(observed_status, statii)
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 654, in assert_in_or_print_output
                self.assertIn(expected, iterable)
              File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 417, in assertIn
                self.assertThat(haystack, Contains(needle), message)
              File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in assertThat
                raise mismatch_error
            testtools.matchers._impl.MismatchError: '400' not in ['200']
    ----------------------------------------------------------------------
    

    The expected behavior:

    • the response body is printed to stdout alongside the response headers.
    opened by avkonst 11
  • Q: Persist (across > 1 tests) value to variable based on JSON response?

    Example algorithm to be expressed in YAML:

    • Create an object given an object name.
    • Store in "$FOO" (or similar) the UUID given to object per JSON response.
    • Do unrelated tests.
    • Perform a GET using "$FOO" (the UUID)

    Thanks as always!

    enhancement 
    opened by josdotso 11
  • Variable is not replaced with the previous result in a request body

    Seen from https://gabbi.readthedocs.io/en/latest/format.html#any-previous-test

    There are 2 requests:

    1. post a task, which will return a taskId
    2. query the task with the taskId
    • previous test returns: {"dataSet": {"header": {"serverIp": "xxx.xxx.xxx.xxx", "version": "1.0", "errorKeys": [{"error_key" : "2-0-0"}], "errorInfo": "", "returnCode": 0}, "data": {"taskId": "3008929"}}}

    • the yaml defines data: taskId: $HISTORY['start live migrate a vm'].$RESPONSE['$.dataSet.data.taskId']

    • actual result: "data": { "taskId": "$.dataSet.data.taskId" }

    opened by taget 9
  • jsonhandler: allow reading yaml data from disk

    This commit aims to change the jsonhandler to be able to read data from disk if it is a yaml file.

    Note:

    • Simply replacing the loads call with yaml.safe_load is not enough due to the nature of the NaN checker requiring an unsafe load[1].

    closes #253 [1] https://github.com/cdent/gabbi/commit/98adca65e05b7de4f1ab2bf90ab0521af5030f35
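
    For context, gabbi can already load request data from disk with the <@ prefix; a hedged sketch of what this change would allow with a YAML file (the file name resource.yaml is hypothetical) is:

    tests:
    - name: create from data on disk
      POST: /api/resources
      request_headers:
        content-type: application/json
      data: <@resource.yaml
      status: 201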

    opened by trevormccasland 9
  • pytest not working correctly!

    Hi, I have been trying gabbi to write some simple tests and had luck using gabbi-run, but I need a Jenkins report so I tried the py.test version, with the loader code looking like this:

    import os
    
    from gabbi import driver
    
    # By convention the YAML files are put in a directory named
    # "gabbits" that is in the same directory as the Python test file. 
    TESTS_DIR = 'gabbits'
    
    def test_gabbits():
        test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
        test_generator = driver.py_test_generator(
            test_dir, host="http://www.nfdsfdsfdsf.se", port=80)
    
        for test in test_generator:
            yield test
    

    The yaml-file looks very simple:

    tests:
      - name: Do get to a faulty site
        url: /sdsdsad
        method: GET
        status: 200
    

    The problem is that the test passes. The URL does not exist, so the test should fail with a connection refused; I have also tried with a site returning 404, but the test still passes. Am I doing something wrong here?

    opened by keyhan 9
  • Add yaml-based tests for host header and sni checking

    The addition of server_hostname to the httpclient PoolManager, without sufficient testing, has revealed some issues:

    • The minimum urllib3 required is too low. server_hostname was introduced in 1.24.x
    • However, there is a bug [1] in PoolManager when mixing schemes in the same pool manager. This is being fixed so the minimum urllib3 will need to be higher still.

    Tests are added here, and the minimum value for urllib3 will be set when a release is made.

    Some of the tests are "live" meaning they require network, and can be skipped via the live test fixture if the GABBI_SKIP_NETWORK env variable is set to "true".

    [1] https://github.com/urllib3/urllib3/issues/2534

    Fixes #307 Fixes #309

    opened by cdent 8
  • gabbi doesn't support client cert yet

    Gabbi doesn't support client cert yet

    It would help if gabbi could support something like: gabbi-run ... --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/client.crt --key /etc/kubernetes/pki/client.key ...

    opened by wu-wenxiang 4
  • Socket leak with large YAML test files

    I have a YAML file with nearly 2000 tests in it. When invoked from the command line, I run out of open file handles due to large amounts of sockets left open:

    ERROR: gabbi-runner.input_/foo/bar/__test_l__
    	[Errno 24] Too many open files
    

    By default a Linux user has 1024 file handles:

    $ ulimit -n
    1024
    

    Inspecting the open file handles:

    $ ls -l /proc/$(pgrep gabbi-run)/fd | awk '{print $NF}' | cut -f1 -d: | sort | uniq -c
          1 0
          2 /path/to/a/file.txt
          1 /path/to/another/file.yaml
       1021 socket
    
    opened by scottwallacesh 3
  • Consider per-suite pre & post executables

    Like fixtures, but a call to an external executable, for when gabbi-run is being used.

    This could be explicit, by putting something in the yaml file, or implicit off the name of the yaml file. That is:

    • if gabbit is foo.yaml
    • if foo-start and foo-end exist in the same dir and are executable

    either way, when the start executable is called gabbi should save, as a list, the line-separated stdout (if any) it produced

    and provide that as args (or stdin?) to foo-end

    this would allow passing things like pids of started stuff

    /cc @FND for sanity check

    enhancement 
    opened by cdent 7
  • some fixtures that "capture" no longer work with the removal of testtools

    In https://github.com/cdent/gabbi/pull/279 testtools was removed.

    Fixtures in the openstack community that do output capturing rely on some "end of test" handling in testtools to dump the accumulated data. You can see this by trying a LOG.critical("hi") somewhere in the (e.g.) placement code and causing a test to fail. Dropping to a gabbi <2 makes it work again.

    We're definitely not going to add testtools back in, but the test case subclass in gabbi itself may be able to handle the data gathering that's required. Some investigation required.

    /cc @lbragstad for awareness

    opened by cdent 0
  • Faster test development with gold files

    There is a cool method to speed up development of tests. It would be great if gabbi supported it too.

    Here is the idea:

    1. a test defines that a response should be compared with a gold file (reference to gold file can be custom configurable per every test)

    2. gabbi runs tests with a new flag 'generate-gold-files', which forces gabbi to capture response bodies and headers and (re-)write gold files containing the captured response data

    3. developer reviews the gold files (usually happens one by one as tests are added one by one during development)

    4. gabbi runs tests as usual

      a) if a test has a reference to a gold file, it captures the actual response output and compares it with the gold file
      b) if the content of the actual output matches the gold file content, verification is considered passed
      c) otherwise the test fails

    This would allow me to reduce size of my test files by half at least.

    opened by avkonst 3
  • test files with - in the name can lead to failing tests when looking for content-type

    Bear with me, this is hard to explain

    Python v 3.6.9

    gabbi: 1.49.0

    A test file named device-types.yaml with a test of:

    tests:                                                                          
    - name: get only 405                                                            
      POST: /device-types                                                           
      status: 405    
    

    errors with the following when run in a unittest-style harness:

        b'Traceback (most recent call last):'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/core.py", line 68, in action'
        b'    response_value = str(response[header])'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/urllib3/_collections.py", line 156, in __getitem__'
        b'    val = self._container[key.lower()]'
        b"KeyError: 'content-type'"
        b''
        b'During handling of the above exception, another exception occurred:'
        b''
        b'Traceback (most recent call last):'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/suitemaker.py", line 96, in do_test'
        b'    return test_method(*args, **kwargs)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 95, in wrapper'
        b'    func(self)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 149, in test_request'
        b'    self._run_test()'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 556, in _run_test'
        b'    self._assert_response()'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 196, in _assert_response'
        b'    handler(self)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/base.py", line 54, in __call__'
        b'    self.action(test, item, value=value)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/core.py", line 72, in action'
        b'    header, response.keys()))'
        b"AssertionError: 'content-type' header not present in response: KeysView(HTTPHeaderDict({'Vary': 'Origin', 'Date': 'Tue, 24 Mar 2020 14:17:33 GMT', 'Content-Length': '0', 'status': '405', 'reason': 'Method Not Allowed'}))"
        b''
    

    However, rename the file to foo.yaml and the test works, or run the device-types.yaml file with gabbi-run and the tests work. Presumably something about test naming.

    So the short term workaround is to rename the file, but this needs to be fixed because using - in filenames is idiomatic for gabbi.

    opened by cdent 1
Releases (2.3.0)
  • 2.3.0 (Sep 3, 2021)

    • For the $ENVIRON and $RESPONSE substitutions it is now possible to cast the value to a type of int, float, str, or bool.
    • The JSONHandler is now more strict about how it detects that a body content is JSON, avoiding some errors where the content-type header suggests JSON but the content cannot be decoded as such.
    • Better error message when content cannot be decoded.
    • Addition of the disable_response_handler test setting for those cases when the test author has no control over the content-type header and it is wrong.