Delta Sharing: An Open Protocol for Secure Data Sharing

Overview

Delta Sharing is an open protocol for secure real-time exchange of large datasets, which enables organizations to share data in real time regardless of which computing platforms they use. It is a simple REST protocol that securely shares access to part of a cloud dataset and leverages modern cloud storage systems, such as S3, ADLS, or GCS, to reliably transfer data.

With Delta Sharing, a user accessing shared data can directly connect to it through pandas, Tableau, Apache Spark, Rust, or other systems that support the open protocol, without having to deploy a specific compute platform first. Data providers can share a dataset once to reach a broad range of consumers, while consumers can begin using the data in minutes.

This repo includes the following components:

  • Delta Sharing protocol specification.
  • Python Connector: A Python library that implements the Delta Sharing Protocol to read shared tables as pandas DataFrames or Apache Spark DataFrames.
  • Apache Spark Connector: An Apache Spark connector that implements the Delta Sharing Protocol to read shared tables from a Delta Sharing Server. The tables can then be accessed in SQL, Python, Java, Scala, or R.
  • Delta Sharing Server: A reference implementation server for the Delta Sharing Protocol for development purposes. Users can deploy this server to share existing tables in Delta Lake and Apache Parquet format on modern cloud storage systems.

Python Connector

The Delta Sharing Python Connector is a Python library that implements the Delta Sharing Protocol to read tables from a Delta Sharing Server. You can load shared tables as a pandas DataFrame, or as an Apache Spark DataFrame if running in PySpark with the Apache Spark Connector installed.

System Requirements

Python 3.6+

Installation

pip install delta-sharing

If you are using Databricks Runtime, you can follow the Databricks Libraries documentation to install the library on your clusters.

Accessing Shared Data

The connector accesses shared tables based on profile files, which are JSON files containing a user's credentials to access a Delta Sharing Server. There are several ways to get started (a sample profile file is sketched after this list):

  • Download the profile file to access an open, example Delta Sharing Server that we're hosting here. You can try the connectors with this sample data.
  • Start your own Delta Sharing Server and create your own profile file following the profile file format to connect to this server.
  • Download a profile file from your data provider.
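
For reference, a profile file is a small JSON document. A minimal sketch is shown below; the field names follow the profile file format, while the endpoint URL and token are placeholder values you must replace with the ones issued by your server or data provider:

{
  "shareCredentialsVersion": 1,
  "endpoint": "https://sharing.example.com/delta-sharing/",
  "bearerToken": "<token>"
}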

Quick Start

After you save the profile file, you can use it in the connector to access shared tables.

" # Create a SharingClient. client = delta_sharing.SharingClient(profile_file) # List all shared tables. client.list_all_tables() # Create a url to access a shared table. # A table path is the profile file path following with `#` and the fully qualified name of a table (` . . `). table_url = profile_file + "# . . " # Fetch 10 rows from a table and convert it to a Pandas DataFrame. This can be used to read sample data from a table that cannot fit in the memory. delta_sharing.load_as_pandas(table_url, limit=10) # Load a table as a Pandas DataFrame. This can be used to process tables that can fit in the memory. delta_sharing.load_as_pandas(table_url) # If the code is running with PySpark, you can use `load_as_spark` to load the table as a Spark DataFrame. delta_sharing.load_as_spark(table_url) ">
import delta_sharing

# Point to the profile file. It can be a file on the local file system or a file on remote storage.
profile_file = "<profile-file-path>"

# Create a SharingClient.
client = delta_sharing.SharingClient(profile_file)

# List all shared tables.
client.list_all_tables()

# Create a url to access a shared table.
# A table path is the profile file path followed by `#` and the fully qualified name of a table (`<share-name>.<schema-name>.<table-name>`).
table_url = profile_file + "#<share-name>.<schema-name>.<table-name>"

# Fetch 10 rows from a table and convert them to a pandas DataFrame. This can be used to read sample data from a table that cannot fit in memory.
delta_sharing.load_as_pandas(table_url, limit=10)

# Load a table as a pandas DataFrame. This can be used to process tables that can fit in memory.
delta_sharing.load_as_pandas(table_url)

# If the code is running with PySpark, you can use `load_as_spark` to load the table as a Spark DataFrame.
delta_sharing.load_as_spark(table_url)
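
Note that load_as_spark requires running in PySpark with the Apache Spark Connector available. For example, you can launch PySpark with the connector package listed in the Apache Spark Connector section below:

pyspark --packages io.delta:delta-sharing-spark_2.12:0.2.0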

You can try this by running our examples with the open, example Delta Sharing Server.

Details on Profile Paths

  • The profile file path for SharingClient and load_as_pandas can be any URL supported by FSSPEC (such as s3a://my_bucket/my/profile/file). If you are using Databricks File System, you can also preface the path with /dbfs/ to access the profile file as if it were a local file.
  • The profile file path for load_as_spark can be any URL supported by Hadoop FileSystem (such as s3a://my_bucket/my/profile/file).
  • A table path is the profile file path followed by # and the fully qualified name of a table (<share-name>.<schema-name>.<table-name>), as sketched below.
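
For illustration, here is a sketch of building a table URL from a profile file stored on S3; the bucket, share, schema, and table names are hypothetical placeholders:

import delta_sharing

# Profile file on S3; any fsspec-supported URL works for SharingClient and load_as_pandas.
profile_file = "s3a://my_bucket/my/profile/file"

# Append `#` and the fully qualified table name to form the table URL.
table_url = profile_file + "#my_share.my_schema.my_table"

# Read a small sample as a pandas DataFrame.
sample_df = delta_sharing.load_as_pandas(table_url, limit=10)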

Apache Spark Connector

The Apache Spark Connector implements the Delta Sharing Protocol to read shared tables from a Delta Sharing Server. It can be used in SQL, Python, Java, Scala and R.

System Requirements

Accessing Shared Data

The connector loads user credentials from profile files. Please see the Accessing Shared Data section above for how to download a profile file for our example server or for your own data sharing server.

Configuring Apache Spark

You can set up Apache Spark to load the Delta Sharing connector in the following two ways:

  • Run interactively: Start the Spark shell (Scala or Python) with the Delta Sharing connector and run the code snippets interactively in the shell.
  • Run as a project: Set up a Maven or SBT project (Scala or Java) with the Delta Sharing connector, copy the code snippets into a source file, and run the project.

If you are using Databricks Runtime, you can skip this section and follow the Databricks Libraries documentation to install the connector on your clusters.

Set up an interactive shell

To use the Delta Sharing connector interactively within Spark's Scala or Python shell, launch the shells as follows.

PySpark shell

pyspark --packages io.delta:delta-sharing-spark_2.12:0.2.0

Scala Shell

bin/spark-shell --packages io.delta:delta-sharing-spark_2.12:0.2.0

Set up a standalone project

If you want to build a Java or Scala project using the Delta Sharing connector from the Maven Central Repository, you can use the following Maven coordinates.

Maven

You can include the Delta Sharing connector in your Maven project by adding it as a dependency in your POM file. The connector is compiled against Scala 2.12.

<dependency>
  <groupId>io.delta</groupId>
  <artifactId>delta-sharing-spark_2.12</artifactId>
  <version>0.2.0</version>
</dependency>

SBT

You can include the Delta Sharing connector in your SBT project by adding the following line to your build.sbt file:

libraryDependencies += "io.delta" %% "delta-sharing-spark" % "0.2.0"

Quick Start

After you save the profile file and launch Spark with the connector library, you can access shared tables using any language.

SQL

-- A table path is the profile file path followed by `#` and the fully qualified name of a table (`<share-name>.<schema-name>.<table-name>`).
CREATE TABLE mytable USING deltaSharing LOCATION '<profile-file-path>#<share-name>.<schema-name>.<table-name>';
SELECT * FROM mytable;

Python

# A table path is the profile file path followed by `#` and the fully qualified name of a table (`<share-name>.<schema-name>.<table-name>`).
table_path = "<profile-file-path>#<share-name>.<schema-name>.<table-name>"
df = spark.read.format("deltaSharing").load(table_path)

Scala

// A table path is the profile file path followed by `#` and the fully qualified name of a table (`<share-name>.<schema-name>.<table-name>`).
val tablePath = "<profile-file-path>#<share-name>.<schema-name>.<table-name>"
val df = spark.read.format("deltaSharing").load(tablePath)

Java

// A table path is the profile file path followed by `#` and the fully qualified name of a table (`<share-name>.<schema-name>.<table-name>`).
String tablePath = "<profile-file-path>#<share-name>.<schema-name>.<table-name>";
Dataset<Row> df = spark.read.format("deltaSharing").load(tablePath);

R

# A table path is the profile file path followed by `#` and the fully qualified name of a table (`<share-name>.<schema-name>.<table-name>`).
table_path <- "<profile-file-path>#<share-name>.<schema-name>.<table-name>"
df <- read.df(table_path, "deltaSharing")

You can try this by running our examples with the open, example Delta Sharing Server.

Table paths

  • A profile file path can be any URL supported by Hadoop FileSystem (such as s3a://my_bucket/my/profile/file).
  • A table path is the profile file path followed by # and the fully qualified name of a table (<share-name>.<schema-name>.<table-name>).

Delta Sharing Reference Server

The Delta Sharing Reference Server is a reference implementation server for the Delta Sharing Protocol. It can be used to set up a small service to test your own connector that implements the Delta Sharing Protocol. Please note that this is not a complete implementation of a secure web server; we highly recommend putting it behind a secure proxy if you would like to expose it to the public.

Some vendors offer managed services for Delta Sharing too (for example, Databricks). Please refer to your vendor's website for how to set up sharing there. Vendors that are interested in being listed as a service provider should open an issue on GitHub to be added to this README and our project's website.

Here are the steps to set up the reference server to share your own data.

Get the pre-built package

Download the pre-built package delta-sharing-server-x.y.z.zip from GitHub Releases.

Server configuration and adding Shared Data

  • Unpack the pre-built package and copy the server config template file conf/delta-sharing-server.yaml.template to create your own server yaml file, such as conf/delta-sharing-server.yaml.
  • Make changes to your yaml file. You may also need to update some server configs for special requirements.
  • To add shared data, add references to the Delta Lake tables you would like to share from this server in this config file (a sample layout is sketched after this list).
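
For orientation only, a minimal config might look like the following sketch; the share, schema, and table names, the table location, and the port are placeholders to adapt, and conf/delta-sharing-server.yaml.template remains the authoritative reference:

# The format version of this config file
version: 1
# Config shares/schemas/tables to share
shares:
- name: "share1"
  schemas:
  - name: "schema1"
    tables:
    - name: "table1"
      location: "s3a://<your-bucket>/<path-to-delta-table>"
host: "localhost"
port: 8080
endpoint: "/delta-sharing"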

Configure the server to access tables on cloud storage

We support sharing Delta Lake tables on S3, Azure Blob Storage and Azure Data Lake Storage Gen2.

S3

There are multiple ways to configure the server to access S3.

EC2 IAM Metadata Authentication (Recommended)

Applications running in EC2 may associate an IAM role with the VM and query the EC2 Instance Metadata Service for credentials to access S3.

Authenticating via the AWS Environment Variables

We support configuration via the standard AWS environment variables. The core environment variables are for the access key and associated secret:

export AWS_ACCESS_KEY_ID=my.aws.key
export AWS_SECRET_ACCESS_KEY=my.secret.key

Other S3 authentication methods

The server uses hadoop-aws to read S3. You can find other approaches in the hadoop-aws doc.
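
As one possible alternative (this follows standard hadoop-aws configuration rather than anything specific to Delta Sharing), you could supply static credentials through a core-site.xml file placed in the server's conf directory, the same mechanism used for Azure below:

<?xml version="1.0"?>
<configuration>
  <!-- Hypothetical example: static S3A credentials via the standard hadoop-aws properties -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR-ACCESS-KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR-SECRET-KEY</value>
  </property>
</configuration>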

Azure Blob Storage

The server uses hadoop-azure to read Azure Blob Storage. Using Azure Blob Storage requires configuration of credentials. You can create a Hadoop configuration file named core-site.xml and add it to the server's conf directory. Then add the following content to the xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.azure.account.key.YOUR-ACCOUNT-NAME.blob.core.windows.net</name>
    <value>YOUR-ACCOUNT-KEY</value>
  </property>
</configuration>

YOUR-ACCOUNT-NAME is your Azure storage account and YOUR-ACCOUNT-KEY is your account key.

Azure Data Lake Storage Gen2

The server uses hadoop-azure to read Azure Data Lake Storage Gen2. We support the Shared Key authentication. You can create a Hadoop configuration file named core-site.xml and add it to the server's conf directory. Then add the following content to the xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.azure.account.auth.type.YOUR-ACCOUNT-NAME.dfs.core.windows.net</name>
    <value>SharedKey</value>
    <description>
    </description>
  </property>
  <property>
    <name>fs.azure.account.key.YOUR-ACCOUNT-NAME.dfs.core.windows.net</name>
    <value>YOUR-ACCOUNT-KEY</value>
    <description>
    The secret password. Never share these.
    </description>
  </property>
</configuration>

YOUR-ACCOUNT-NAME is your Azure storage account and YOUR-ACCOUNT-KEY is your account key.

Support for more cloud storage systems will be added in the future.

Authorization

The server supports basic authorization with a pre-configured bearer token. You can add the following config to your server yaml file:

authorization:
  bearerToken: <token>

Any request must then be sent with the above token; otherwise, the server will refuse the request.
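
For example, a request to the protocol's list-shares endpoint would carry the token in an Authorization header; the server URL below is a hypothetical placeholder:

curl -H "Authorization: Bearer <token>" https://your-server.example.com/delta-sharing/shares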

If you don't configure the bearer token in the server yaml file, all requests will be accepted without authorization.

To be more secure, we recommend putting the server behind a secure proxy such as NGINX to set up JWT authentication.

Start the server

Run the following shell command:

bin/delta-sharing-server -- --config <the-server-config-yaml-file>

<the-server-config-yaml-file> should be the path of the yaml file you created in the previous step. You can find JVM configuration options in the sbt-native-packager docs.
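
For example, using the config file created in the previous section:

bin/delta-sharing-server -- --config conf/delta-sharing-server.yaml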

Use the pre-built Docker image

You can use the pre-built Docker image from https://hub.docker.com/r/deltaio/delta-sharing-server by running the following command:

docker run -p <host-port>:<container-port> \
  --mount type=bind,source=<the-server-config-yaml-file>,target=/config/delta-sharing-server-config.yaml \
  deltaio/delta-sharing-server:0.2.0 -- --config /config/delta-sharing-server-config.yaml

Note that <container-port> should be the same as the port defined inside the config file.
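
For instance, assuming a config file at /opt/delta/delta-sharing-server.yaml that sets port: 8080 (both values are hypothetical):

docker run -p 8080:8080 \
  --mount type=bind,source=/opt/delta/delta-sharing-server.yaml,target=/config/delta-sharing-server-config.yaml \
  deltaio/delta-sharing-server:0.2.0 -- --config /config/delta-sharing-server-config.yaml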

API Compatibility

The REST APIs provided by the Delta Sharing Server are stable public APIs. They are defined by the Delta Sharing Protocol, and we will follow the protocol strictly.

The interfaces inside Delta Sharing Server are not public APIs. They are considered internal, and they are subject to change across minor/patch releases.

Delta Sharing Protocol

The Delta Sharing Protocol specification details the protocol.

Building this Project

Python Connector

To execute tests, run

python/dev/pytest

To install in develop mode, run

cd python/
pip install -e .

To install locally, run

cd python/
pip install .

To generate a wheel file, run

cd python/
python setup.py sdist bdist_wheel

It will generate python/dist/delta_sharing-x.y.z-py3-none-any.whl.

Apache Spark Connector and Delta Sharing Server

Apache Spark Connector and Delta Sharing Server are compiled using SBT.

To compile, run

build/sbt compile

To execute tests, run

build/sbt test

To generate the Apache Spark Connector, run

build/sbt spark/package

It will generate spark/target/scala-2.12/delta-sharing-spark_2.12-x.y.z.jar.

To generate the pre-built Delta Sharing Server package, run

build/sbt server/universal:packageBin

It will generate server/target/universal/delta-sharing-server-x.y.z.zip.

To build the Docker image for Delta Sharing Server, run

build/sbt server/docker:publishLocal

This will build a Docker image tagged delta-sharing-server:x.y.z, which you can run with:

docker run -p <host-port>:<container-port> \
  --mount type=bind,source=<the-server-config-yaml-file>,target=/config/delta-sharing-server-config.yaml \
  delta-sharing-server:x.y.z -- --config /config/delta-sharing-server-config.yaml

Note that <container-port> should be the same as the port defined inside the config file.

Refer to SBT docs for more commands.

Reporting Issues

We use GitHub Issues to track community-reported issues. You can also contact the community to get answers.

Contributing

We welcome contributions to Delta Sharing. See our CONTRIBUTING.md for more details.

We also adhere to the Delta Lake Code of Conduct.

License

Apache License 2.0.

Community

We use the same community resources as the Delta Lake project:

Comments
  • Errors trying the delta-sharing access with pandas and spark


    Hello team,

    I'm trying to use your delta-sharing library with python for both PANDAS and SPARK

    SPARK METHOD

    As you can see from the below screenshot I can access the columns or schema using the load_as_spark() method

    [screenshot omitted]

    However, when I try .show() I see the following Java certificate errors

    javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    I've tried to use different Java versions and also added the certificate of the delta-sharing server hosted on Kubernetes, but nothing seems to be working. If I try this from a linux machine or kubernetes I seem to have the same problem

    PANDAS METHOD

    When I use the load_as_pandas() method I get a FileNotFoundError. The strange thing is that when I click on that s3 url link that you can see from the screenshot, I download the parquet file (therefore the link does work). Is the PyArrow library trying to look for a local file or what do you think the issue may be ?

    [screenshot omitted]

    Any ideas about the above 2 errors using spark and pandas ?

    Thank you very much,

    Peter

    opened by pknowles-9 19
  • _delta_log and EC2 instance


    Hi,

    I am trying to run a delta sharing server locally using my Mac terminal; do I have to run an EC2 instance instead of running the commands on my terminal?

    On the other hand, I am trying to fetch the data from S3. I created a bucket and uploaded a file to it; do I have to create an empty _delta_log myself or will it be generated on its own?

    Thanks

    opened by skaplan81 15
  •  fs.s3.awsAccessKeyId and fs.s3.awsSecretAccessKey properties must be present


    [screenshot omitted]

    Even after having IAM role access to S3 and specifying AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY, I am still getting this error in the server logs. As it uses hadoop-aws to read files, how do we authenticate and pass credentials to the server?

    Before running the below command, where should I specify fs.s3.awsAccessKeyId and fs.s3.awsSecretAccessKey properties? bin/delta-sharing-server -- --config conf/delta-sharing-server.yaml

    opened by Aayushpatel007 15
  • Connection Issue


    Whenever i am trying to read the delta table from s3 using load_as_pandas function i am getting a connection issue in ec2 instance. Following is the issue: Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 170, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", line 96, in create_connection raise err File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", line 86, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 706, in urlopen chunked=chunked, File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 394, in _make_request conn.request(method, url, **httplib_request_kw) File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 234, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "/usr/lib64/python3.7/http/client.py", line 1277, in request self._send_request(method, url, body, headers, encode_chunked) File "/usr/lib64/python3.7/http/client.py", line 1323, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/usr/lib64/python3.7/http/client.py", line 1272, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib64/python3.7/http/client.py", line 1032, in _send_output self.send(msg) File "/usr/lib64/python3.7/http/client.py", line 972, in send self.connect() File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 200, in connect conn = self._new_conn() File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 182, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fb6514c1c10>: Failed to establish a new connection: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 756, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 574, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=5044): Max retries exceeded with url: /delta-sharing/test/shares/share1/schemas/schema1/tables/table1/query (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb6514c1c10>: Failed to establish a new connection: [Errno 111] Connection refused'))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "", line 1, in File "/home/ec2-user/.local/lib/python3.7/site-packages/delta_sharing/delta_sharing.py", line 61, in load_as_pandas rest_client=DataSharingRestClient(profile), File "/home/ec2-user/.local/lib/python3.7/site-packages/delta_sharing/reader.py", line 62, in to_pandas self._table, predicateHints=self._predicateHints, limitHint=self._limitHint File "/home/ec2-user/.local/lib/python3.7/site-packages/delta_sharing/rest_client.py", line 84, in func_with_retry raise e File "/home/ec2-user/.local/lib/python3.7/site-packages/delta_sharing/rest_client.py", line 77, in func_with_retry return func(self, *arg, **kwargs) File "/home/ec2-user/.local/lib/python3.7/site-packages/delta_sharing/rest_client.py", line 182, in list_files_in_table f"/shares/{table.share}/schemas/{table.schema}/tables/{table.name}/query", data=data, File "/usr/lib64/python3.7/contextlib.py", line 112, in enter return next(self.gen) File "/home/ec2-user/.local/lib/python3.7/site-packages/delta_sharing/rest_client.py", line 204, in _request_internal response = request(f"{self._profile.endpoint}{target}", json=data) File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 590, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=5044): Max retries exceeded with url: /delta-sharing/test/shares/share1/schemas/schema1/tables/table1/query (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb6514c1c10>: Failed to establish a new connection: [Errno 111] Connection refused'))

    opened by nish4528 10
  • Add possibility to set endpoint-url for s3


    Unfortunately, the standard AWS environment variables don't support an endpoint URL for S3-compatible storages; it is only configurable through the CLI option "--endpoint-url". Any chance this could be implemented?

    enhancement 
    opened by alarex 9
  • Support new parameter includeHistoricalMetadata for queryTableChange RPC


    A couple changes:

    • Support new parameter includeHistoricalMetadata for queryTableChanges.
    • Update the way SparkStructuredStreaming is put in user agent header.
    • Added two more tests on service side to verify additional metadata is only returned for queryTableChanges from spark streaming.
    • Update to real id in DeltaSharingRestClientSuite.scala
    opened by linzhou-db 7
  • limit feature returning an empty dataframe


    The delta_sharing limit feature doesn't seem to work properly. It returns an empty dataframe.

    table = delta_sharing.load_as_pandas(table_url, limit=10)

    Empty DataFrame
    Columns: [a, b, c, ...]
    Index: []
    
    opened by YannOrieult-EngieDigital 7
  • Delta sharing container on AWS ecs getting access denied error even with all s3 permissions there


    The Delta Sharing container on ECS is getting an access denied error even though all IAM S3 and KMS permissions for the bucket are attached to the ECS service.

    Error

    io.delta.sharing.server.DeltaInternalException: java.util.concurrent.ExecutionException: java.nio.file.AccessDeniedException:  s3a://foo-lake/foo/foo_fact/_delta_log: getFileStatus on s3a://foo-lake/foo/foo_fact/_delta_log: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: C0DZXXXYNWKQ9CWD; S3 
    Extended Request ID: eDvJMbR8UtIRDg8nXD7+0ix04VN8UPsVSEJDIBosFC5u/YJPsnAGpm/hvGdrXteQBpeQNu5DW9Q=), S3 Extended Request ID: eDvJMbR8UtIRDg8nXD7+0ix04VN8UPsVSEJDIBosFC5u/YJPsnAGpm/hvGdrXteQBpeQNu5DW9Q=
    --
    @timestamp | 1635392335291 
    

    Also

    
    (s3a://foo-lake/foo/foo_fact/_delta_log/_delta_log/_last_checkpoint is corrupted.
     Will search the checkpoint files directly,java.nio.file.AccessDeniedException: s3a://foo-lake/foo/foo_fact/_delta_log/_last_checkpoint: getFileStatus on 
    s3a://wfg1stg-datahub-lake/ovdm/tran_fact/_delta_log/_last_checkpoint: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: C0DM6WHHRX0RR7DG; S3 Extended Request ID: yEdeFmnz49jLTXu+LzqoNZqFy0sK8X3ge0p7Gmp5ia9FVFjWN7/HLLZ5sWatqfi6cDH0ZRGGf9s=), S3 Extended Request ID: yEdeFmnz49jLTXu+LzqoNZqFy0sK8X3ge0p7Gmp5ia9FVFjWN7/HLLZ5sWatqfi6cDH0ZRGGf9s=)
    --
     
    
    

    Shouldn't the delta-sharing server already use EC2ContainerCredentialsProviderWrapper, or is there a way to configure this?

    opened by gauravbrills 7
  • OSS Delta Sharing Server: Adds api to accept cdf query


    • @Get("/shares/{share}/schemas/{schema}/tables/{table}/changes")
    • Parse url parameters and construct the cdfoptions map
    • Add classes DeltaErrors and DeltaDataSource for some exceptions and constants
    • Add DeltaSharedTable.queryCDF and return not implemented exception
    opened by linzhou-db 6
  • Doc on how to create a "share"?

    If I'm self hosting the server, getting it to read S3, how do I create a share? I've checked https://github.com/delta-io/delta-sharing/blob/main/PROTOCOL.md - which focuses on list share, query table etc.

    opened by felixsafegraph 6
  • Delta Share Fails When Attempting to Read Delta Table


    import delta_sharing

    table_url = "/Users/user1/Applications/open-datasets.share#share1.default.test_facilities"

    pandas_df = delta_sharing.load_as_pandas(table_url)

    pandas_df.head(10)

    My config.yaml:

    # The format version of this config file
    version: 1

    # Config shares/schemas/tables to share
    shares:
    - name: "share1"
      schemas:
      - name: "default"
        tables:
        - name: "test_facilities"
          location: "/tmp/test_facilities"

    host: "localhost"
    port: 9999
    endpoint: "/delta-sharing"

    I keep getting the following error when I run the above program. The delta table is there

    HTTPError: 500 Server Error: Internal Server Error for url: http://localhost:9999/delta-sharing/shares/share1/schemas/default/tables/test_facilities/query Response from server: {'errorCode': 'INTERNAL_ERROR', 'message': ''}

    Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: File system class org.apache.hadoop.fs.LocalFileSystem is not supported at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2055) at com.google.common.cache.LocalCache.get(LocalCache.java:3966) at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4863) at io.delta.standalone.internal.DeltaSharedTableLoader.loadTable(DeltaSharedTableLoader.scala:54) at io.delta.sharing.server.DeltaSharingService.$anonfun$listFiles$1(DeltaSharingService.scala:282) at io.delta.sharing.server.DeltaSharingService.processRequest(DeltaSharingService.scala:169) ... 60 more Caused by: java.lang.IllegalStateException: File system class org.apache.hadoop.fs.LocalFileSystem is not supported at io.delta.standalone.internal.DeltaSharedTable.$anonfun$fileSigner$1(DeltaSharedTableLoader.scala:97) at io.delta.standalone.internal.DeltaSharedTable.withClassLoader(DeltaSharedTableLoader.scala:109) at io.delta.standalone.internal.DeltaSharedTable.(DeltaSharedTableLoader.scala:84) at io.delta.standalone.internal.DeltaSharedTableLoader.$anonfun$loadTable$1(DeltaSharedTableLoader.scala:58) at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4868) at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3533) at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2282) at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2159) at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049) ... 65 more

    opened by dtgdev 5
  • Rest client in Java - access data without spark session


    I am trying to create rest client (receiver) in Java. Is there a possibility to load data without using spark session and eventually be able to filter that data, similar to load as pandas in python? I tried delta standalone reader, however, it does not provide data filtering capabilities.

    opened by kaundinyaekta 0
  • Clarification regarding predicate pushdown


    Does this library take into account the stats field present in the transaction log of a particular table version to filter the parquet files that are supposed to be read while instantiating a DeltaTable?

    For example, I have a table with total 100 records across 2 table versions and each version has 5 parquet files associated with it covering 50 records of that version (assuming even 10 records per parquet file). The transaction log for both versions contains a stats field with information like minValues, maxValues, nullCount etc.

    I have already verified that if I try to read the first version (0), the DeltaTable object will read only 1 file, and for the second version (1) it will read both files. This means that the number of parquet files read is already affected by the version.

    Just needed a clarification whether the stats fields are also used anywhere for selection of files to be read?

    opened by chitralverma 0
  • Add load_as methods for pyarrow dataset and table


    Adds separate implementations for load_as_pyarrow_table and load_as_pyarrow_dataset that allows users to read delta sharing tables as pyarrow table and dataset respectively.

    • [x] Add basic implementation
    • [x] Fix lint
    • [x] Refactor common code
    • [x] Verify performance with and without limit
    • [x] Add tests - converter
    • [x] Add tests - reader
    • [ ] Add tests - delta_sharing
    • [x] Add examples

    closes https://github.com/delta-io/delta-sharing/issues/238

    opened by chitralverma 2
  • While accessing the data on recipient side using delta_sharing.load_table_changes_as_spark(), it shows data of all versions.


    When I tried to access a specific version's data and set the argument values to that specific version number, I got data from all versions.

    data1 = delta_sharing.load_table_changes_as_spark(table_url, starting_version=1, ending_version=1)

    data2 = delta_sharing.load_table_changes_as_spark(table_url, starting_version=2, ending_version=2)

    Here data1 and data2 gives the same data. When I check the same version data using load_table_changes_as_pandas(), it gives specific version data.

    data1 = delta_sharing.load_table_changes_as_pandas(table_url, starting_version=1, ending_version=1)

    data2 = delta_sharing.load_table_changes_as_pandas(table_url, starting_version=2, ending_version=2)

    In the pandas scenario, data1 has version 1 data and data2 has version 2 data. data1 and data2 contain different data, which was as expected.

    What do we have to do to get a specific version's data in a Spark DataFrame using the load_table_changes_as_spark function?

    opened by MaheshChahare123 0
  • Support for load_as_pyarrow_dataset or load_as_pyarrow_table


    This is a new feature request or rather a little refactoring in the code for reader to allow users to read datasets directly as pyarrow datasets and tables.

    As you can see here, we are anyways creating the pyarrow dataset and table, which is then used to convert to a pandas DF in the to_pandas method

    I would like to refactor this part and expose this as separate functionalities - to_pyarrow_dataset and to_pyarrow_table.

    The advantage of this refactoring is that users will be able to efficiently get the pyarrow objects directly, without an additional full copy/conversion to a pandas DataFrame when not required. This will allow extending delta-sharing to other processing systems like Datafusion, Polars, etc., since they all rely extensively on pyarrow datasets.

    Please let me know if this issue makes sense to you, I can raise a PR quick for this in a day or so.

    Note: the existing functionalities will remain unaffected by this refactoring.

    opened by chitralverma 0