Apache (Py)Spark type annotations (stub files).

Overview

PySpark Stubs


A collection of Apache Spark stub files. These files were generated by stubgen and manually edited to include accurate type hints.

Tests and configuration files were originally contributed to the Typeshed project. Please refer to its contributors list and license for details.

Important

This project has been merged with the main Apache Spark repository (SPARK-32714). All further development for Spark 3.1 and onwards will be continued there.

For Spark 2.4 and 3.0, development of this package will continue until their official deprecation.

  • If your problem is specific to Spark 2.3 and 3.0, feel free to create an issue or open a pull request here.
  • Otherwise, please check the official Spark JIRA and contributing guidelines. If you create a JIRA ticket or Spark PR related to type hints, please ping me with [~zero323] or @zero323 respectively. Thanks in advance.

Motivation

  • Static error detection (see SPARK-20631), illustrated by the sketch after this list.

  • Improved autocompletion (syntax completion) in editors and notebooks.
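
As a minimal illustration of static error detection (a toy example, not the code from SPARK-20631), a type checker such as Mypy can use the stubs to reject an ill-typed call before anything runs:

    # toy_example.py -- illustrative only
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.range(10)

    # DataFrame.join expects another DataFrame as its first argument, so
    # with the stubs installed a type checker flags the call below:
    df.join("other")  # error: incompatible type "str"; expected "DataFrame"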

Installation and usage

Please note that the guidelines for distributing type information are still a work in progress (PEP 561 - Distributing and Packaging Type Information). Currently, the installation script overlays existing Spark installations (pyi stub files are copied next to their py counterparts in the PySpark installation directory). If this approach is not acceptable, you can add the stub files to the search path manually.

According to PEP 484:

Third-party stub packages can use any location for stub storage. Type checkers should search for them using PYTHONPATH.

Moreover:

A default fallback directory that is always checked is shared/typehints/python3.5/ (or 3.6, etc.)
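
For example, with a local checkout of this repository you can point Mypy at the stubs manually (the path below is illustrative and assumes this project's third_party/3 layout):

MYPYPATH=/path/to/pyspark-stubs/third_party/3 mypy my_script.py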

Please check usage before proceeding.

The package is available on PyPI:

pip install pyspark-stubs

and conda-forge:

conda install -c conda-forge pyspark-stubs

Depending on your environment you might also need a type checker, like Mypy or Pytype [1], and an autocompletion tool, like Jedi.

Editor                                   | Type checking | Autocompletion | Notes
Atom                                     | ✔ [2]         | ✔ [3]          | Through plugins.
IPython / Jupyter Notebook               | ✔ [4]         | ✔              |
PyCharm                                  | ✔             | ✔              |
PyDev                                    | ✔ [5]         | ?              |
VIM / Neovim                             | ✔ [6]         | ✔ [7]          | Through plugins.
Visual Studio Code                       | ✔ [8]         | ✔ [9]          | Completion with plugin.
Environment independent / other editors  | ✔ [10]        | ✔ [11]         | Through Mypy and Jedi.

This package is tested against the MyPy development branch and, in rare cases (primarily when it depends on important upstream bugfixes), is not compatible with the preceding MyPy release.

PySpark Version Compatibility

Package versions follow PySpark versions, with the exception of maintenance releases - i.e. pyspark-stubs==2.3.0 should be compatible with pyspark>=2.3.0,<2.4.0. Maintenance releases (post1, post2, ..., postN) are reserved for internal annotation updates.
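
For example, to match a PySpark 2.3.x installation you could pin:

pip install 'pyspark-stubs>=2.3.0,<2.4.0'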

API Coverage:

As of release 2.4.0, most of the public API is covered. For details please check the API coverage document.

See also

Disclaimer

Apache Spark, Spark, PySpark, Apache, and the Spark logo are trademarks of The Apache Software Foundation. This project is not owned, endorsed, or sponsored by The Apache Software Foundation.

Footnotes

[1] Not supported or tested.
[2] Requires atom-mypy or equivalent.
[3] Requires autocomplete-python-jedi or equivalent.
[4] It is possible to use magics to type check directly in the notebook. In general though, you'll have to export whole notebook to .py file and run type checker on the result.
[5] Requires PyDev 7.0.3 or later.
[6] Using vim-mypy, syntastic or Neomake.
[7] With jedi-vim.
[8] With Mypy linter.
[9] With Python extension for Visual Studio Code.
[10] Just use your favorite checker directly, optionally combined with tool like entr.
[11] See Jedi editor plugins list.
Comments
  • Fix 2-argument math functions

    Fixes the binary math functions (see the sketch after this list):

    • atan2 and hypot take two arguments, not one
    • pow supports taking a literal numeric value as its second argument in addition to a Column.
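
    A sketch of the corrected signatures (stub-style and illustrative; ColumnOrName stands for Union[Column, str], and the exact form in the .pyi files may differ):

    # Fragment of a functions.pyi stub -- illustrative only.
    from typing import Union
    from pyspark.sql.column import Column

    ColumnOrName = Union[Column, str]

    def atan2(col1: ColumnOrName, col2: ColumnOrName) -> Column: ...
    def hypot(col1: ColumnOrName, col2: ColumnOrName) -> Column: ...
    def pow(col1: ColumnOrName, col2: Union[ColumnOrName, float]) -> Column: ...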
    bug 3.0 2.3 2.4 
    opened by harpaj 10
  • Jedi doesn't work with MLReaders

    It seems like there is some problem with Jedi compatibility. Some components seem to work pretty well. For example, DataFrame without stubs:

    In [1]: import jedi                                                                                                                                                                                                
    
    In [2]: from pyspark.sql import SparkSession                                                                                                                                                                       
    
    In [3]: jedi.Interpreter("SparkSession.builder.getOrCreate().createDataFrame([]).", [globals()]).completions()                                                                                                     
    ---------------------------------------------------------------------------
    AttributeError   
    ...
    AttributeError: 'ModuleContext' object has no attribute 'py__path__'
    

    and with stubs:

    In [1]: from pyspark.sql import SparkSession                                                                                                                                                                       
    
    In [2]: import jedi                                                                                                                                                                                                
    
    In [3]: jedi.Interpreter("SparkSession.builder.getOrCreate().createDataFrame([]).", [globals()]).completions()                                                                                                     
    Out[3]: 
    [<Completion: agg>,
     <Completion: alias>,
     <Completion: approxQuantile>,
     <Completion: cache>,
     <Completion: checkpoint>,
     <Completion: coalesce>,
     <Completion: collect>,
     <Completion: colRegex>,
     <Completion: columns>,
     <Completion: corr>,
     <Completion: count>,
     <Completion: cov>,
    ...
     <Completion: __str__>]
    

    So far so good. However, if we take for example LinearRegressionModel.load, things don't work so well. Without stubs, Jedi provides no suggestions:

    In [1]: import jedi                                                                                                                                                                                                
    
    In [2]: from pyspark.ml.regression import LinearRegressionModel                                                                                                                                                    
    
    In [3]: jedi.Interpreter("LinearRegressionModel.load('foo').", [globals()]).completions()                                                                                                                          
    Out[3]: []
    

    but the completions provided with stubs

    In [1]: import jedi                                                                                                                                                                                                
    
    In [2]: from pyspark.ml.regression import LinearRegressionModel                                                                                                                                                    
    
    In [3]: jedi.Interpreter("LinearRegressionModel.load('foo').", [globals()]).completions()                                                                                                                          
    Out[3]: 
    [<Completion: load>,
     <Completion: read>,
     <Completion: __annotations__>,
     <Completion: __class__>,
     <Completion: __delattr__>,
     <Completion: __dict__>,
     <Completion: __dir__>,
     <Completion: __doc__>,
     <Completion: __eq__>,
     <Completion: __format__>,
     <Completion: __getattribute__>,
     <Completion: __hash__>,
     <Completion: __init__>,
     <Completion: __init_subclass__>,
     <Completion: __module__>,
     <Completion: __ne__>,
     <Completion: __new__>,
     <Completion: __reduce__>,
     <Completion: __reduce_ex__>,
     <Completion: __repr__>,
     <Completion: __setattr__>,
     <Completion: __sizeof__>,
     <Completion: __slots__>,
    

    don't make much sense. If the model is fitted:

    In [4]: from pyspark.ml.regression import LinearRegression                                                                                                                                                         
    
    In [5]: jedi.Interpreter("LinearRegression().fit(...).", [globals()]).completions()                                                                                                                                
    Out[5]: 
    [<Completion: aggregationDepth>,
     <Completion: append>,
     <Completion: clear>,
     <Completion: coefficients>,
     <Completion: copy>,
     <Completion: count>,
    ....
     <Completion: __str__>]
    

    A model which is explicitly annotated works fine, so it seems like there is something in MLReader or one of its subclasses that causes the failure.

    We already have data tests for this (as well as some test cases from apache/spark examples), and mypy seems to be fine with this.

    Since LinearRegression.fit works fine (and some toy tests confirm that), generics alone are not sufficient to reproduce the problem. So it seems like the type parameter is not processed correctly somewhere along the load path.
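
    For reference, the relevant pattern in the stubs looks roughly like this (a simplified sketch, not the exact stub source):

    # Simplified sketch of the generic ML reader pattern.
    from typing import Generic, TypeVar

    RL = TypeVar("RL", bound="MLReadable")

    class MLReader(Generic[RL]):
        def load(self, path: str) -> RL: ...

    class MLReadable(Generic[RL]):
        @classmethod
        def read(cls) -> "MLReader[RL]": ...
        @classmethod
        def load(cls, path: str) -> RL: ...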

    Tested with:

    • jedi==0.15.2 and jedi==0.16.0 (0c56aa4).
    • pyspark-stubs==3.0.0.dev5
    • pyspark==3.0.0.dev0 (afe70b3)
    opened by zero323 7
  • DataFrameReader.load parameters incorrectly expected all to be strings

    Using 2.4.0.post6

    spark.read.load(folders, inferSchema=True, header=False)
    

    mypy reports Expected type 'str', got 'bool' instead for both inferSchema and header.

    Looks like the issue is in third_party/3/pyspark/sql/readwriter.pyi, line 23, where the definition of load() has **options: str. For csv support this needs to be **options: Optional[Union[bool, str, int]], but to handle the general case it probably needs to be **options: Any.
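
    A sketch of the suggested general fix (the surrounding parameters are simplified; only the **options annotation is the point here):

    # Illustrative fragment of readwriter.pyi with the general fix.
    from typing import Any, List, Optional, Union
    from pyspark.sql.dataframe import DataFrame
    from pyspark.sql.types import StructType

    class DataFrameReader:
        def load(self, path: Optional[Union[str, List[str]]] = ...,
                 format: Optional[str] = ...,
                 schema: Optional[StructType] = ...,
                 **options: Any) -> DataFrame: ...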

    enhancement 
    opened by ghost 7
  • Added contains to Column

    The contains method is missing from the stubs, causing mypy to raise error: "Column" not callable.

    This PR adds the type hints to 2.4 specifically (the version we are using), but they should probably also be added to the other versions.
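
    A sketch of the missing annotation (the argument type is deliberately loose here; the actual stub may be stricter):

    # Illustrative fragment of column.pyi adding the missing method.
    from typing import Any

    class Column:
        def contains(self, item: Any) -> "Column": ...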

    opened by Braamling 6
  • #394: Use Union[List[Column], List[str]] for Select

    Passing a List[str] to select raises a mypy warning, and similarly for List[Column]. We change the type from List[Union[Column, str]] to Union[List[Column], List[str]].
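
    Because List is invariant in its element type, List[str] is not a subtype of List[Union[Column, str]], which is why the union of list types is needed. Sketched as overloads (illustrative):

    # Illustrative overloads for DataFrame.select.
    from __future__ import annotations
    from typing import List, Union, overload
    from pyspark.sql.column import Column

    class DataFrame:
        @overload
        def select(self, *cols: Union[Column, str]) -> DataFrame: ...
        @overload
        def select(self, cols: Union[List[Column], List[str]]) -> DataFrame: ...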

    Fixes #394 .

    opened by jhereth 5
  • Update distinct() and repartition() definitions

    Updates the repartition definitions to allow a Column in the numPartitions parameter.

    Reference

    numPartitions – can be an int to specify the target number of partitions or a Column.
        If it is a Column, it will be used as the first partitioning column.
        If not specified, the default number of partitions is used.
    

    Also adds a stub for DataFrame.distinct().
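
    A sketch of the updated definitions (illustrative overloads):

    # Illustrative -- numPartitions may be an int or a partitioning Column.
    from __future__ import annotations
    from typing import Union, overload
    from pyspark.sql.column import Column

    class DataFrame:
        @overload
        def repartition(self, numPartitions: int, *cols: Union[Column, str]) -> DataFrame: ...
        @overload
        def repartition(self, *cols: Union[Column, str]) -> DataFrame: ...
        def distinct(self) -> DataFrame: ...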

    opened by zpencerq 5
  • Allow `Column` type for timezone argument in pyspark.sql.functions

    In the functions here: https://github.com/zero323/pyspark-stubs/blob/3c4684a224c1be4eea4577e475f8bb4d045edddd/third_party/3/pyspark/sql/functions.pyi#L100-L101 we currently have tz: str, but this can also be specified as a Column.

    Example:

    >>> from pyspark.sql import functions
    >>> df = spark.sql("SELECT CAST(0 AS TIMESTAMP) AS timestamp, 'Asia/Tokyo' AS tz")
    >>> df.select(functions.from_utc_timestamp(df.timestamp, df.tz)).collect()
    [Row(from_utc_timestamp(timestamp, tz)=datetime.datetime(1970, 1, 1, 18, 0))]
    

    I think this could be expanded to tz: ColumnOrName?
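
    A sketch of the proposed change, assuming the stubs' ColumnOrName alias (and assuming to_utc_timestamp shares the same pattern):

    # Illustrative fragment of functions.pyi with a widened tz type.
    from typing import Union
    from pyspark.sql.column import Column

    ColumnOrName = Union[Column, str]

    def from_utc_timestamp(timestamp: ColumnOrName, tz: ColumnOrName) -> Column: ...
    def to_utc_timestamp(timestamp: ColumnOrName, tz: ColumnOrName) -> Column: ...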

    3.0 2.4 3.1 
    opened by charlietsai 4
  • Overload DataFrame.drop: sequences must be *str

    The method DataFrame.drop expects either one Column, one str, or an iterable of strings. This is only type checked inside the function, though.

    Currently the type hints (and the actual API) allow passing multiple Columns, but doing so results in a runtime error. Personally, I'd like to have that caught earlier. But as this might be getting too close to the internals of the function, I'd like to hear your opinion on whether or not the type hints should "look inside" to aid development.
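
    The title suggests overloads along these lines (illustrative):

    # Illustrative -- a single Column, or one or more column names.
    from __future__ import annotations
    from typing import overload
    from pyspark.sql.column import Column

    class DataFrame:
        @overload
        def drop(self, col: Column) -> DataFrame: ...
        @overload
        def drop(self, *cols: str) -> DataFrame: ...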

    opened by oliverw1 4
  • provide overloaded methods for sample

    The fraction is a required argument to the sample method. Any time someone calls df.sample(.01), mypy reports:

    Argument 1 to "sample" of "DataFrame" has incompatible type "float"; expected "Optional[bool]"

    In the PySpark API, the three arguments are in fact handled later as keyword-style arguments, to ensure that fraction is given. This is probably done to keep consistent with the Scala API.

    By overloading the method, the issue is resolved, as the sketch below shows.
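
    A sketch of such overloads (illustrative; it mirrors how PySpark dispatches on the type of the first positional argument):

    # Illustrative overloads for DataFrame.sample.
    from __future__ import annotations
    from typing import Optional, overload

    class DataFrame:
        @overload
        def sample(self, fraction: float, seed: Optional[int] = ...) -> DataFrame: ...
        @overload
        def sample(self, withReplacement: Optional[bool], fraction: float,
                   seed: Optional[int] = ...) -> DataFrame: ...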

    opened by oliverw1 4
  • Allow non-string load/save parameters

    Resolves #273

    Additional parameters to DataFrameReader.load() and DataFrameWriter.save()/.saveAsTable() are passed to the file-type-specific reader or writer types. These parameters can be of any type.

    opened by mark-oppenheim 4
  • Fix return type for DataFrame.groupBy / cube / rollup

    2.3 has these data types and I was erroneously getting errors for them.

    Note this is a port of e2d225f06ff36fcbf79e2123f1c18f380e862728

    I tried a cherry-pick, but it had some issues (not sure why).
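
    The fix presumably points these methods at GroupedData; roughly (illustrative):

    # Illustrative -- the grouping methods return GroupedData, not DataFrame.
    from typing import Union
    from pyspark.sql.column import Column
    from pyspark.sql.group import GroupedData

    class DataFrame:
        def groupBy(self, *cols: Union[Column, str]) -> GroupedData: ...
        def cube(self, *cols: Union[Column, str]) -> GroupedData: ...
        def rollup(self, *cols: Union[Column, str]) -> GroupedData: ...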

    opened by dangercrow 4