Solve automatic numerical differentiation problems in one or more variables.

Overview

numdifftools


The numdifftools library is a suite of tools written in Python for solving automatic numerical differentiation problems in one or more variables. Finite differences are used in an adaptive manner, coupled with a Richardson extrapolation methodology, to provide a maximally accurate result. Many options can be configured, such as the order of the method or of the extrapolation, and whether complex-step, central, forward or backward differences are used.
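
For example, a minimal sketch combining some of these options (method and order are documented keywords of Derivative):

>>> import numpy as np
>>> import numdifftools as nd
>>> d_complex = nd.Derivative(np.sin, method='complex')  # complex-step differences
>>> d_central4 = nd.Derivative(np.sin, order=4)          # 4th-order central differences
>>> np.allclose([d_complex(0.0), d_central4(0.0)], 1.0)  # d/dx sin(x) at x=0 is cos(0) = 1
True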

The methods provided are:

  • Derivative: Compute the derivatives of order 1 through 10 on any scalar function.
  • directionaldiff: Compute the directional derivative of a function of n variables (see the sketch after this list).
  • Gradient: Compute the gradient vector of a scalar function of one or more variables.
  • Jacobian: Compute the Jacobian matrix of a vector valued function of one or more variables.
  • Hessian: Compute the Hessian matrix of all 2nd partial derivatives of a scalar function of one or more variables.
  • Hessdiag: Compute only the diagonal elements of the Hessian matrix.
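
A minimal sketch of directionaldiff, assuming the (fun, x0, vec) call signature from the documentation (the direction vector is normalized explicitly here):

>>> import numpy as np
>>> import numdifftools as nd
>>> fun = lambda x: np.sum(x**2)
>>> v = np.array([1.0, 1.0])
>>> v = v / np.linalg.norm(v)                             # unit direction
>>> dd = nd.directionaldiff(fun, np.array([1.0, 1.0]), v)
>>> np.allclose(dd, 2*np.sqrt(2))                         # gradient [2, 2] dotted with v
True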

All of these methods also produce error estimates on the result.
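
When full_output=True is passed, the error estimate is returned alongside the value; a small sketch:

>>> import numpy as np
>>> import numdifftools as nd
>>> df = nd.Derivative(np.exp, full_output=True)
>>> val, info = df(1.0)
>>> np.allclose(val, np.exp(1.0))
True
>>> float(info.error_estimate) < 1e-8   # estimated absolute error of the result
True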

Numdifftools also provides an easy-to-use interface to derivatives calculated with AlgoPy. AlgoPy stands for Algorithmic Differentiation in Python. The purpose of AlgoPy is the evaluation of higher-order derivatives in the forward and reverse modes of Algorithmic Differentiation (AD) of functions that are implemented as Python programs.

Getting Started

Visualize high-order derivatives of the tanh function:

>>> import numpy as np
>>> import numdifftools as nd
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-2, 2, 100)
>>> for i in range(10):
...    df = nd.Derivative(np.tanh, n=i)
...    y = df(x)
...    h = plt.plot(x, y/np.abs(y).max())

plt.show()

Figure: normalized derivatives of tanh, orders 0 through 9 (https://raw.githubusercontent.com/pbrod/numdifftools/master/examples/fun.png)

Compute the 1st and 2nd derivatives of exp(x) at x == 1:

>>> fd = nd.Derivative(np.exp)        # 1st derivative
>>> fdd = nd.Derivative(np.exp, n=2)  # 2nd derivative
>>> np.allclose(fd(1), 2.7182818284590424)
True
>>> np.allclose(fdd(1), 2.7182818284590424)
True

Nonlinear least squares:

>>> xdata = np.reshape(np.arange(0,1,0.1),(-1,1))
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> Jfun = nd.Jacobian(fun)
>>> np.allclose(np.abs(Jfun([1,2,0.75])), 0) # should be numerically zero
True

Compute gradient of sum(x**2):

>>> fun = lambda x: np.sum(x**2)
>>> dfun = nd.Gradient(fun)
>>> np.allclose(dfun([1,2,3]), [ 2.,  4.,  6.])
True
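
Continuing the session above, the Hessian from the method list works the same way; for sum(x**2) it is twice the identity matrix:

>>> Hfun = nd.Hessian(fun)
>>> np.allclose(Hfun([1, 2, 3]), 2*np.eye(3))
True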

Compute the same with the easy-to-use interface to AlgoPy:

>>> import numdifftools.nd_algopy as nda
>>> import numpy as np
>>> fd = nda.Derivative(np.exp)        # 1st derivative
>>> fdd = nda.Derivative(np.exp, n=2)  # 2nd derivative
>>> np.allclose(fd(1), 2.7182818284590424)
True
>>> np.allclose(fdd(1), 2.7182818284590424)
True

Nonlinear least squares:

>>> xdata = np.reshape(np.arange(0,1,0.1),(-1,1))
>>> ydata = 1+2*np.exp(0.75*xdata)
>>> fun = lambda c: (c[0]+c[1]*np.exp(c[2]*xdata) - ydata)**2
>>> Jfun = nda.Jacobian(fun, method='reverse')
>>> np.allclose(np.abs(Jfun([1,2,0.75])), 0) # should be numerically zero
True

Compute gradient of sum(x**2):

>>> fun = lambda x: np.sum(x**2)
>>> dfun = nda.Gradient(fun)
>>> np.allclose(dfun([1,2,3]), [ 2.,  4.,  6.])
True

See also

scipy.misc.derivative

Documentation and code

Numdifftools works on Python 2.7+ and Python 3.0+.

Official releases available at: http://pypi.python.org/pypi/numdifftools

Official documentation available at: http://numdifftools.readthedocs.io/en/latest/

Bleeding edge: https://github.com/pbrod/numdifftools.

Installation

If you have pip installed, then simply type:

$ pip install numdifftools

to get the latest stable version. Using pip also has the advantage that all requirements are installed automatically.

Unit tests

To test if the toolbox is working, paste the following into an interactive Python session:

import numdifftools as nd
nd.test('--doctest-modules', '--disable-warnings')

Acknowledgement

The numdifftools package for Python was written by Per A. Brodtkorb based on the adaptive numerical differentiation toolbox written in Matlab by John D'Errico [DErrico06].

Later the package was extended with some of the functionality found in the statsmodels.tools.numdiff module written by Josef Perktold [JPerktold14], which is based on [Rid09]. The implementation of bicomplex numbers is based on the Matlab implementation described in the project report of [Ver14], which in turn is based on [GLD12]. For completeness, the [For98] method for computing the weights and points in general finite difference formulas, as well as the [For81] method for computing the Taylor coefficients of a complex analytic function using FFT, were added.
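
As a small illustration of the [For98] machinery (a sketch, assuming the fd_weights helper in numdifftools.fornberg), the classical central-difference weights drop out of the general formula:

>>> import numpy as np
>>> from numdifftools.fornberg import fd_weights
>>> w = fd_weights(np.array([-1.0, 0.0, 1.0]), x0=0, n=1)  # 1st-derivative weights on a 3-point stencil
>>> np.allclose(w, [-0.5, 0.0, 0.5])
True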

References

[JPerktold14] Perktold, J (2014), numdiff package http://statsmodels.sourceforge.net/0.6.0/_modules/statsmodels/tools/numdiff.html
[Ver14] Adriaen Verheyleweghen, (2014) "Computation of higher-order derivatives using the multi-complex step method", Project report, NTNU
[GLD12] Gregory Lantoine, R.P. Russell, and T. Dargent (2012) "Using multicomplex variables for automatic computation of high-order derivatives", ACM Transactions on Mathematical Software, Vol. 38, No. 3, Article 16, April 2012, 21 pages, http://doi.acm.org/10.1145/2168773.2168774
[MELEV12] M.E. Luna-Elizarraras, M. Shapiro, D.C. Struppa, A. Vajiac (2012), "Bicomplex Numbers and Their Elementary Functions", CUBO A Mathematical Journal, Vol. 14, No 2, (61-80). June 2012.
[Lan10] Gregory Lantoine (2010), "A methodology for robust optimization of low-thrust trajectories in multi-body environments", Phd thesis, Georgia Institute of Technology
[Rid09] Ridout, M.S. (2009) "Statistical applications of the complex-step method of numerical differentiation", The American Statistician, 63, 66-74
[DErrico06] D'Errico, J. R. (2006), "Adaptive Robust Numerical Differentiation", http://www.mathworks.com/matlabcentral/fileexchange/13490-adaptive-robust-numerical-differentiation
[KLLK05] K.-L. Lai, J.L. Crassidis, Y. Cheng, J. Kim (2005), "New complex step derivative approximations with application to second-order kalman filtering", AIAA Guidance, Navigation and Control Conference, San Francisco, California, August 2005, AIAA-2005-5944.
[For98] B. Fornberg (1998) "Calculation of weights and points in finite difference formulas", SIAM Review 40, pp. 685-691.
[For81] Fornberg, B. (1981). "Numerical Differentiation of Analytic Functions", ACM Transactions on Mathematical Software (TOMS), 7(4), 512-526. http://doi.org/10.1145/355972.355979
[JML69] Lyness, J. M., Moler, C. B. (1969). "Generalized Romberg Methods for Integrals of Derivatives", Numerische Mathematik.
[JML66] Lyness, J. M., Moler, C. B. (1966). "Vandermonde Systems and Numerical Differentiation", Numerische Mathematik.
[NAG] NAG Library. NAG Fortran Library Document: D04AAF
Comments
  • Error on evaluating gradient

    Hi! I'm facing this problem. I have a function like this:

    def function(v, r, z, observation):
        return numpy.linalg.norm(observation - model(v, r, z))

    Here model is a function which takes the (vector) parameters v, r, z and returns a vector of the same size as observation. So far everything is fine, but when I try to compute the gradient of function, I get the following error:

    Traceback (most recent call last):
      File "Main.py", line 62, in <module>
        z = my_grad( v, r, z, observation )
      File "/Users/Angel/Documents/Projects/Ursula/26Feb/Optimization/Solvers.py", line 33, in grad
        return dfun( v )
      File "/Users/Angel/anaconda/lib/python2.7/site-packages/numdifftools/nd_algopy.py", line 147, in __call__
        df = fun(x0, *args, **kwds)
      File "/Users/Angel/anaconda/lib/python2.7/site-packages/numdifftools/nd_algopy.py", line 346, in _forward
        return np.atleast_2d(super(Jacobian, self)._forward(x, *args, **kwds))
      File "/Users/Angel/anaconda/lib/python2.7/site-packages/numdifftools/nd_algopy.py", line 266, in _forward
        y = self.fun(tmp, *args, **kwds)
      File "/Users/Angel/Documents/Projects/Solvers.py", line 29, in <lambda>
        fun = lambda x: function( v, r, z , observation)
    ValueError: setting an array element with a sequence.

    My function to compute the gradient is as follows:

    def my_grad(v, r, z, observation):
        fun = lambda x: function(v, r, z, observation)
        dfun = nda.Gradient(fun)
        return dfun(v)

    I believe that I'm not making any mistake, but if you have any suggestions, please let me know.

    opened by nkeito 8
  • ReadTheDocs permission denied

    Accessing http://numdifftools.readthedocs.io/ directs to http://numdifftools.readthedocs.io/en/v0.9.14/ and I see the message:

    Permission Denied

    You don't have the proper permissions to view this page. Please contact the owner of this project to request permission.

    opened by andrewfowlie 8
  • recipe for conda-forge

    I'm using numdifftools as a dependency for a package of mine. Because the package uses an extension I make it available on conda-forge (which takes care of all the different OS/numpy/python combinations).

    However, numdifftools is not currently available on conda-forge. Would you mind if numdifftools was added as a recipe on conda-forge?

    I don't mind doing the groundwork for this.

    opened by andyfaff 7
  • Python 3.6 compatibility

    I've opened this PR in case any of the changes I've made here are useful.

    The main change I've made is to make algopy an optional dependency, and to ignore it on Python 3.6. Given that algopy development has all but stopped I wouldn't want to have a hard dependency on it IMHO.

    I explicitly disabled the doctests as they don't work for me.

    I also removed the use of Tester in the __init__.py as that usage seems to be deprecated, broke pytest testing (for me) and IMHO doesn't add much value.

    I also updated the meta.yaml so the conda build worked for me.

    opened by dhirschfeld 7
  • Sphinx build error

    With Python 3.4 and Sphinx 1.3 I get the following error when I try to build the numdifftools docs:

    $ python setup.py build_sphinx
    /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/setuptools/dist.py:291: UserWarning: The version specified ('unknown') is an invalid version, this may not work as expected with newer versions of setuptools, pip, and PyPI. Please see PEP 440 for more details.
      "details." % self.metadata.version
    running build_sphinx
    Running Sphinx v1.3
    Creating file /Users/deil/code/numdifftools/docs/../docs/_rst/numdifftools.rst.
    Creating file /Users/deil/code/numdifftools/docs/../docs/_rst/numdifftools.tests.rst.
    Creating file /Users/deil/code/numdifftools/docs/../docs/_rst/modules.rst.
    Traceback (most recent call last):
      File "setup.py", line 229, in <module>
        setup_package()
      File "setup.py", line 226, in setup_package
        entry_points={'console_scripts': CONSOLE_SCRIPTS})
      File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/dist.py", line 955, in run_commands
        self.run_command(cmd)
      File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/sphinx/setup_command.py", line 161, in run
        freshenv=self.fresh_env)
      File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/sphinx/application.py", line 143, in __init__
        self.setup_extension(extension)
      File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/sphinx/application.py", line 440, in setup_extension
        ext_meta = mod.setup(self)
      File "/Users/deil/Library/Python/3.4/lib/python/site-packages/numpydoc/numpydoc.py", line 114, in setup
        app.connect('autodoc-process-docstring', mangle_docstrings)
      File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/sphinx/application.py", line 471, in connect
        self._validate_event(event)
      File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/sphinx/application.py", line 468, in _validate_event
        raise ExtensionError('Unknown event name: %s' % event)
    sphinx.errors.ExtensionError: Unknown event name: autodoc-process-docstring
    

    cc @maniteja123

    opened by cdeil 6
  • New release?

    Seems like the last release was some time ago and there are some deprecation warnings, like:

    /home/travis/miniconda2/envs/testenv/lib/python2.7/site-packages/numdifftools/core.py:330: DeprecationWarning: `factorial` is deprecated!
    Importing `factorial` from scipy.misc is deprecated in scipy 1.0.0. Use `scipy.special.factorial` instead.
      misc.factorial(np.arange(offset, step * nterms + offset, step))
    

    That seems to be fixed on master but not in the latest release. This would make PyMC3's life easier.

    opened by twiecki 4
  • Unable to install on Python 2.7

    I can't get numdifftools to install with Python 2.7. I resolved the error

    RuntimeError: Due to a bug in setuptools, PyScaffold currently needs at least Python 3.4! Install PyScaffold 2.5 for Python 2.7 support.
    

    by changing in setup.py

    setup(setup_requires=['pyscaffold>=3.0a0,<3.1a0'] + sphinx,
    

    to

    setup(setup_requires=['pyscaffold==2.5.10'] + sphinx,
    

    (2.5.10 is needed to work with setuptools version >= 39 due to blue-yonder/pyscaffold#148). But I then get the error

    error: 'egg_base' must be a directory name (got `src`)
    

    which is the setuptools bug that the PyScaffold docs say you need to use PyScaffold 2.5 to avoid. It might be that the project needs to be set up with PyScaffold 2.5 in the first place as well as installed with it. When I go back to the commit https://github.com/pbrod/numdifftools/commit/406a79877e0dd45aefe210b08e73cdd58ff4cb15 (just before numdifftools was updated to PyScaffold 3), then I can get it to install on Python 2.7 if I downgrade setuptools to < 39.

    opened by rparini 4
  • Jacobian broken since 0.9.15

    I wrote some code back in September that uses Jacobian to calculate all the partial derivatives for a matrix valued function of a vector argument. I started working with this code again this morning and discovered that it no longer works. I traced it down to a change in the behavior of Jacobian. This simple program demonstrates what I mean.

    from numpy import array
    from numdifftools import Jacobian
    x = array([1,2])
    G = lambda x: array([[x[0], x[1]], [x[0], x[1]]])
    dGdx = Jacobian(lambda x: G(x))
    D = dGdx(x)
    print G(x).shape
    print D.shape
    

    This is the output with numdifftools 0.9.14 installed.

    (2, 2)
    (2, 2, 2)
    

    This is the output with numdifftools 0.9.15 or later installed.

    (2, 2)
    (2, 2, 4)
    

    So numdifftools 0.9.15 and later return a tensor of the wrong size.

    bug 
    opened by baroobob 4
  • Support for multiple dimensions?

    Hello, is there a way to calculate directional derivatives of a function using numdifftools, as with directionaldiff or gradest from Matlab's derivest?

    Kind regards, Joe

    opened by solarjoe 4
  • TypeError: unsupported operand type(s) for /: 'float' and 'Bicomplex'

    The following gives an error:

    import numpy as np
    import numdifftools as nd
    
    g = lambda x: 1.0/(np.exp(x[0]) + np.cos(x[1]) + 10)
    
    print(nd.Gradient(g, method="multicomplex")([1.0, 2.0]))
    

    If I change the method to "complex", it works fine. Is there a way to use the multicomplex method in this situation and other similar ones?

    opened by bacalfa 3
  • Allow fd_derivative to take complex valued functions

    At the moment the array which stores the derivatives, du, is always real, so if the array of function evaluations, fx, is complex, the imaginary part gets discarded when the elements of du are assigned.

    /usr/local/lib/python3.5/site-packages/numdifftools/fornberg.py:173: ComplexWarning: Casting complex values to real discards the imaginary part
      du[i] = np.dot(fd_weights(x[:size], x0=x[i], n=n), fx[:size])

    The proposed change fixes this by matching the data type (as well as the shape) of du to fx.

    PS thank you for working on this very useful module!

    opened by rparini 3
  • Computing the Jacobian fails for functions of shape (1,)

    The call to Jacobian with a function whose output has shape (1,) fails. Although this case can be computed using Gradient, it is helpful when checking the derivatives of constraints in optimization problems.

    Example

    import numpy as np
    import numdifftools as nd
    x = np.array([1.0, 2.0])  # example point (x is not defined in the original report)
    fun = lambda x: np.array([np.sum(x**2) - 1,])
    J1 = nd.Jacobian(fun)(x)
    

    This fails with the message:

    Traceback (most recent call last):
      File "/Users/jeffreyh/SVN/ExtOpt/tests/test_numdifftools.py", line 20, in <module>
        J1 = nd.Jacobian(fun)(x)
      File "/opt/homebrew/lib/python3.10/site-packages/numdifftools/core.py", line 431, in __call__
        return super(Jacobian, self).__call__(np.atleast_1d(x), *args, **kwds)
      File "/opt/homebrew/lib/python3.10/site-packages/numdifftools/core.py", line 288, in __call__
        results, f_xi = self._derivative(x_i, args, kwds)
      File "/opt/homebrew/lib/python3.10/site-packages/numdifftools/core.py", line 428, in _derivative_nonzero_order
        return self.fd_rule.apply(results, steps2, step_ratio), fxi
      File "/opt/homebrew/lib/python3.10/site-packages/numdifftools/finite_difference.py", line 583, in apply
        f_del, h, original_shape = self._vstack(sequence, steps)
      File "/opt/homebrew/lib/python3.10/site-packages/numdifftools/finite_difference.py", line 684, in _vstack
        h = np.vstack([np.atleast_1d(r).transpose(axes).ravel() for r in steps])
      File "/opt/homebrew/lib/python3.10/site-packages/numdifftools/finite_difference.py", line 684, in <listcomp>
        h = np.vstack([np.atleast_1d(r).transpose(axes).ravel() for r in steps])
    ValueError: axes don't match array
    

    In comparison, with output of shape (2,), this works:

    import numpy as np
    import numdifftools as nd
    x = np.array([1.0, 2.0])  # example point (x is not defined in the original report)
    fun = lambda x: np.array([np.sum(x**2) - 1, 0])
    J1 = nd.Jacobian(fun)(x)
    
    opened by jeffrey-hokanson 0
  • Error when attempting to calculate the Hessian of a vector-valued function

    I'm not on the bleeding-edge version, but I'm getting this error when trying to calculate the Hessian of a function that returns a 1D array:

    TypeError: only size-1 arrays can be converted to Python scalars
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/home/dobos/project/ga_pfsspec_all/python/test/pfs/ga/pfsspec/stellar/rvfit/test_modelgridrvfit.py", line 155, in test_fit_rv
        rv, rv_err, params, params_err = rvfit.fit_rv({'mr': spec}, rv_0=100.0, rv_bounds=(0, 200), params_0=params_0, params_fixed=params_fixed, params_bounds=params_bounds)
      File "/home/dobos/project/ga_pfsspec_all/python/pfs/ga/pfsspec/stellar/rvfit/modelgridrvfit.py", line 155, in fit_rv
        self.calculate_F(spectra, rv, rv_bounds, None, params, params_bounds, None, params_free, params_fixed)
      File "/home/dobos/project/ga_pfsspec_all/python/pfs/ga/pfsspec/stellar/rvfit/modelgridrvfit.py", line 58, in calculate_F
        ddpc = ddphichi(self.params_to_array(rv_0, params_free, **params_0))
      File "/datascope/slurm/miniconda3/envs/astro-dnn/lib/python3.7/site-packages/numdifftools/core.py", line 849, in __call__
        return super(Hessdiag, self).__call__(np.atleast_1d(x), *args, **kwds)
      File "/datascope/slurm/miniconda3/envs/astro-dnn/lib/python3.7/site-packages/numdifftools/core.py", line 376, in __call__
        results = self._derivative(xi, args, kwds)
      File "/datascope/slurm/miniconda3/envs/astro-dnn/lib/python3.7/site-packages/numdifftools/core.py", line 260, in _derivative_nonzero_order
        results = [diff(f, fxi, xi, h) for h in steps]
      File "/datascope/slurm/miniconda3/envs/astro-dnn/lib/python3.7/site-packages/numdifftools/core.py", line 260, in <listcomp>
        results = [diff(f, fxi, xi, h) for h in steps]
      File "/datascope/slurm/miniconda3/envs/astro-dnn/lib/python3.7/site-packages/numdifftools/core.py", line 894, in _central_even
        hess[i, i] = (f(x + 2 * ee[i, :]) - 2 * fx + f(x - 2 * ee[i, :])) / (4. * hess[i, i])
    ValueError: setting an array element with a sequence.
    

    It works with the Gradient and the Jacobian. It would also be great if these accepted N-d arrays instead of just 1D arrays, with the resulting new dimension placed last (right now it is the second, regardless of the number of input dimensions).

    numdifftools 0.9.39 py_0 conda-forge

    opened by dobos 0
  • Parallelizing function evaluations for the Derivative class

    Similar to issue #20, I would be willing to implement parallelized function evaluations for the derivative using the multiprocessing module. I have it working in my own implementation and can add it if you are open to it.

    I am unsure whether I am breaking anything. Since I am only using it to calculate the Jacobian, I have not run any tests other than with that function.

    opened by jmeziere 0
  • discrepancies in values of the jacobian

    I compared the values obtained when computing the Jacobian of a complex-valued function using numdifftools against those computed using autograd, and found that they differ only for the parameter involving 1j. I would like to know what went wrong.

    from numdifftools import Jacobian, Hessian
    
    import autograd.numpy as np
    from autograd import jacobian, hessian
    
    frequencies = np.array([1.00000000e+04, 7.94328235e+03, 6.30957344e+03, 5.01187234e+03,3.98107171e+03, 3.16227766e+03, 2.51188643e+03, 1.99526231e+03,
           1.58489319e+03, 1.25892541e+03, 1.00000000e+03, 7.94328235e+02,6.30957344e+02, 5.01187234e+02, 3.98107171e+02, 3.16227766e+02,
           2.51188643e+02, 1.99526231e+02, 1.58489319e+02, 1.25892541e+02,1.00000000e+02, 7.94328235e+01, 6.30957344e+01, 5.01187234e+01,
           3.98107171e+01, 3.16227766e+01, 2.51188643e+01, 1.99526231e+01,1.58489319e+01, 1.25892541e+01, 1.00000000e+01, 7.94328235e+00,
           6.30957344e+00, 5.01187234e+00, 3.98107171e+00, 3.16227766e+00,2.51188643e+00, 1.99526231e+00, 1.58489319e+00, 1.25892541e+00,
           1.00000000e+00])
    
    Z = np.array([ 28.31206108+3.85713361e+00j,  30.65961255-6.15028952e-01j,34.24015216-1.53009927e+00j,  31.28832221+2.00862719e-01j,
            29.16894207-1.90759028e+00j,  31.35415498+8.12935902e+00j,33.29418445-8.44736332e-01j,  31.44975238-4.33909650e+00j,
            28.69757696-4.57807440e-01j,  32.60369585-7.49365253e+00j,38.67554243+1.94558309e+00j,  32.04682449+5.96513060e-02j,
            33.27666601+6.96674118e-02j,  34.03106231-1.63078686e+00j,31.61457921-3.37817364e+00j,  29.10184267-6.59263829e+00j,
            32.50730455-7.49033830e+00j,  36.43172672-3.28250914e+00j,38.06409634-6.87182171e+00j,  28.0217396 -7.79409862e+00j,
            32.56764449-1.28634629e+01j,  35.51511763-2.12395710e+01j,31.9317554 -2.85721873e+01j,  38.87220134-2.99072634e+01j,
            35.5291417 -3.74944224e+01j,  42.9529093 -4.04380053e+01j,40.94710166-5.09880048e+01j,  47.58460761-6.50587929e+01j,
            56.61773977-6.93842057e+01j,  70.84595799-8.97293571e+01j,91.28235232-9.63476214e+01j, 114.19032377-1.06793820e+02j,
           139.78246542-1.00266685e+02j, 161.07272489-1.00733766e+02j,181.230854  -9.02441714e+01j, 205.54084395-8.99491269e+01j,
           215.24421556-7.62099459e+01j, 223.62924894-6.40376670e+01j,229.93028184-4.76770260e+01j, 234.56144072-4.38847706e+01j,
           240.57421683-3.52116682e+01j])
    
    params_init = np.array([10, 1e-5, 20, 50])
    
    w = 2 * np.pi*frequencies # angular frequencies
    s = 1j*w
    def z_fun(p):
            return p[0] + 1/(s*p[1]+1/(p[2]+(p[3]/np.sqrt(s))))
    
    # define the objective function
    def obj_fun_1(p, z): 
        wtRe = 1/(z.real**2 + z.imag**2)
        return (np.sum(np.absolute((wtRe * (z_fun(p)- z)**2))))
    
    # computing the jacobian using numdifftools
    
    def num_jac_1(x, z):
        return Jacobian(lambda x: obj_fun_1(x, z), method='central')(x).real.squeeze()
    
    # check the values
    print(num_jac_1(params_init, Z))
    # [-6.42302368e-01  1.45242757e-05 -1.07751884e-01 -2.99510421e-02]
    
    # computing the jacobian using autograd
    auto_jac_1 = jacobian(obj_fun_1)
    # check the values
    print(auto_jac_1(params_init, Z))
    # [-6.42302368e-01  1.52096259e+05 -1.07751884e-01 -2.99510421e-02]
    
    

    Now the problem is that when I supply the Jacobian from numdifftools (the difference is in the second parameter, p[1]), the optimization fails, but when I supply the one obtained using autograd, the optimization succeeds. I would like to know what the problem is.

    opened by richinex 2
  • Part of docs are almost verbatim copy of original docs by John d'Errico

    The original docs of the matlab version can be found here: https://convexoptimization.com/TOOLS/DERIVEST.pdf

    The docs give credit to d'Errico, but without explicit permission from the original author, this is still a copyright violation, unless the original docs were released under a copyleft license.

    opened by HDembinski 0
Releases (v0.9.1)
  • v0.9.1 (Aug 20, 2015)

    This is a major facelift of Numdifftools. Highlights for this release are:

    • Complex-step method for derivative orders 1 through 14 is now available
    • Updated documentation
Owner
Per A. Brodtkorb