Generate LookML view files from dbt models

Overview

dbt2looker

Use dbt2looker to generate Looker view files automatically from dbt models.

Features

  • Column descriptions synced to Looker
  • A dimension for each column in the dbt model
  • Dimension groups for datetime/timestamp/date columns
  • Measures defined through dbt column metadata (see below)
  • Warehouse column types mapped to Looker types
  • Warehouses: BigQuery, Snowflake, Redshift (Postgres support to come)

Quickstart

Run dbt2looker in the root of your dbt project after generating dbt docs (dbt2looker reads the manifest.json and catalog.json that dbt docs generate produces).

Generate Looker view files for all models:

dbt docs generate
dbt2looker

Generate Looker view files for all models tagged prod:

dbt2looker --tag prod

Install

Install from PyPI

Install from PyPI into a fresh virtual environment.

# Create virtual env
python3.7 -m venv dbt2looker-venv
source dbt2looker-venv/bin/activate

# Install
pip install dbt2looker

# Run
dbt2looker

Build from source

Requires Poetry and Python >= 3.7

# Install
poetry install

# Run
poetry run dbt2looker

Defining measures

You can define Looker measures in your dbt schema.yml files. For example:

models:
  - name: pages
    columns:
      - name: url
        description: "Page url"
      - name: event_id
        description: unique event id for page view
        meta:
          measures:
            page_views:
              type: count
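
For reference, here is a sketch of the view file this example would generate. The sql_table_name comes from the dbt relation name, so the exact value depends on your warehouse and project; treat this output as illustrative rather than verbatim:

view: pages {
  sql_table_name: analytics.pages ;;

  dimension: url {
    type: string
    sql: ${TABLE}.url ;;
    description: "Page url"
  }

  dimension: event_id {
    type: string
    sql: ${TABLE}.event_id ;;
    description: "unique event id for page view"
  }

  # generated from the measures entry in the column meta above
  measure: page_views {
    type: count
    sql: ${TABLE}.event_id ;;
    description: "Count of unique event id for page view"
  }
}
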
Comments
  • Column Type None Error - Fields Not Converting To Dimensions

    When running dbt2looker --tag marts on my mart models, I receive dozens of errors around none type conversions.

    20:54:28 WARNING Column type None not supported for conversion from snowflake to looker. No dimension will be created.

    Here is an example of the schema.yml file:

    (screenshot of the schema.yml omitted)

    The interesting thing is that it correctly recognizes the doc that corresponds to the model. The explore within the model file is correct and has the correct documentation.

    Not sure if I can be of any more help but let me know if there is anything!

    bug 
    opened by sisu-callum 19
  • ValueError: Failed to parse dbt manifest.json

    Hey! I'm trying to run this package and hitting errors right after installation. I pip installed dbt2looker and ran the following in the root of my dbt project:

    dbt docs generate
    dbt2looker
    

    This gives me the following error:

    Traceback (most recent call last):
      File "/Users/josh/.pyenv/versions/3.10.0/bin/dbt2looker", line 8, in <module>
        sys.exit(run())
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/cli.py", line 108, in run
        raw_manifest = get_manifest(prefix=args.target_dir)
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/cli.py", line 33, in get_manifest
        parser.validate_manifest(raw_manifest)
      File "/Users/josh/.pyenv/versions/3.10.0/lib/python3.10/site-packages/dbt2looker/parser.py", line 20, in validate_manifest
        raise ValueError("Failed to parse dbt manifest.json")
    ValueError: Failed to parse dbt manifest.json

    This is preceded by a whole mess of error messages like such:

    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.created_at: 1639274126.771925 is not of type 'integer'
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.resource_type: 'model' is not one of ['analysis']
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.created_at: 1639274126.771925 is not of type 'integer'
    21:01:05 ERROR Error in manifest at nodes.model.jaffle_shop.stg_customers.resource_type: 'model' is not one of ['test']

    Any idea what might be going wrong here? Happy to provide more detail. Thank you!

    opened by jdavid459 6
  • dbt version 1.0

    Hi,

    Does this library support dbt version 1.0 and onward? I can't get it to run at all. There are a lot of errors when checking the schema of the manifest.json file.

    / Andrea

    opened by AndreasTA-AW 3
  • Multiple manifest.json/catalog.json/dbt_project.yml files found in path ./

    When running

    dbt2looker --tag test
    

    I get

    $ dbt2looker --tag test
    19:31:20 WARNING Multiple manifest.json files found in path ./ this can lead to unexpected behaviour
    19:31:20 WARNING Multiple catalog.json files found in path ./ this can lead to unexpected behaviour
    19:31:20 WARNING Multiple dbt_project.yml files found in path ./ this can lead to unexpected behaviour
    19:31:20 INFO   Generated 0 lookml views in ./lookml/views
    19:31:20 INFO   Generated 1 lookml model in ./lookml
    19:31:20 INFO   Success
    

    and no lookml files are generated.

    I assume this is because I have multiple dbt packages installed? Is there a way to get around this? Otherwise, a feature request would be the ability to specify which files should be used - perhaps in a separate dbt2looker.yml settings file.
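
    A possible workaround with the existing CLI, assuming the duplicate files come from installed packages and the real artifacts live in ./target: point dbt2looker at explicit directories with the --target-dir and --project-dir flags (see the v0.8.0 release notes below). Paths here are illustrative:

    # restrict the search to the project's own artifacts
    dbt2looker --tag test --target-dir ./target --project-dir .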

    enhancement 
    opened by arniwesth 3
  • Support BigQuery BIGNUMERIC data type

    Previously, dbt2looker would not create a dimension for fields with the data type BIGNUMERIC, since Looker didn't support converting BIGNUMERIC. So when we ran dbt2looker in the CLI there was a warning: WARNING Column type BIGNUMERIC not supported for conversion from bigquery to looker. No dimension will be created. However, as of November 2021, Looker officially supports BigQuery BIGNUMERIC (link). Please help to add this. Thank you!

    opened by IL-Jerry 2
  • Adding Filters to Meta Looker Config in schema.yml

    Use Case: Given that programmatic creation of all LookML files is the goal, there are a couple of features that could be added to give people more flexibility in measure creation. The first one I could think of was filters. Individuals would use filters to calculate measures like Active Users (e.g. count_distinct of user ids where some sort of flag is true).

    The following code is my admitted techno-babble, as I don't fully understand pydantic and my Python is almost exclusively pandas-based.

    def lookml_dimensions_from_model(model: models.DbtModel, adapter_type: models.SupportedDbtAdapters):
        return [
            {
                'name': column.name,
                'type': map_adapter_type_to_looker(adapter_type, column.data_type),
                'sql': f'${{TABLE}}.{column.name}',
                'description': column.description,
                # (no filters here: LookML dimensions don't take filters)
            }
            for column in model.columns.values()
            if map_adapter_type_to_looker(adapter_type, column.data_type) in looker_scalar_types
        ]


    def lookml_measures_from_model(model: models.DbtModel):
        return [
            {
                'name': measure.name,
                'type': measure.type.value,
                'sql': f'${{TABLE}}.{column.name}',
                'description': f'{measure.type.value.capitalize()} of {column.description}',
                # proposed addition: render any filters defined in the column meta
                'filters': {f.name: f.value for f in column.meta.looker.filters},
            }
            for column in model.columns.values()
            for measure in column.meta.looker.measures
        ]
    

    It's pretty obvious, I would imagine, that my Python skills are lacking (and I have no idea if this would actually work), but this idea would add more functionality for those who want to create more dynamic measures. Here is a bare-bones idea of how it could be configured in dbt:

    (screenshot of the proposed schema.yml configuration omitted)
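
    As a rough sketch of what that configuration might have looked like (the meta layout below is hypothetical, not an existing dbt2looker feature):

    models:
      - name: users
        columns:
          - name: user_id
            meta:
              measures:
                active_users:
                  type: count_distinct
                  # hypothetical key: filter the measure on another column's value
                  filters:
                    is_active: 'true'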

    Then the output would look something like:

      measure: page_views {
        type: count
        sql: ${TABLE}.relevant_field ;;
        description: "Count of something."
        filters: [the_name_of_defined_column: "value_of_defined_column"]
      }
    
    enhancement 
    opened by sisu-callum 2
  • Incompatible packages when using Snowflake

    This error comes up when using dbt2looker with Snowflake: https://github.com/snowflakedb/snowflake-connector-python/issues/1206

    It is remedied by the simple line pip install 'typing-extensions>=4.3.0', but dbt2looker depends on < 4.0.0.

    dbt2looker 0.9.2 requires typing-extensions<4.0.0,>=3.10.0, but you have typing-extensions 4.3.0 which is incompatible.
    
    opened by owlas 1
  • Allow skipping dbt manifest validation

    Some users use the manifest heavily to enhance their work with dbt. IMHO, in such cases, the Looker library should not enforce any schema validations; it should be the users' responsibility to keep the Looker generation working.

    opened by cgrosman 1
  • Redshift type conversions missing

    Redshift has missing type conversions:

    10:07:17 WARNING Column type timestamp without time zone not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type boolean not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type double precision not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 WARNING Column type character varying(108) not supported for conversion from redshift to looker. No dimension will be created.
    10:07:17 DEBUG  Created view from model dim_appointment with 0 measures, 0 dimensions
    
    bug 
    opened by owlas 1
  • Join models in explores

    Expose config for defining explores with joined models.

    Ideally this would live in a dbt exposure but it's currently missing meta information.

    Add to models for now?
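
    A purely hypothetical model-level meta layout for this (no such option exists yet; all names are illustrative):

    models:
      - name: orders
        meta:
          explore:
            joins:
              - join: customers
                sql_on: ${orders.customer_id} = ${customers.customer_id}
                relationship: many_to_one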

    enhancement 
    opened by owlas 1
  • feat: remove strict manifest validation

    Closes #72 Closes #37

    We have some validation already with typing, and the dbt manifest keeps changing. I think json-schema is causing more problems than it is solving. If we get weird errors, we can introduce some more relaxed validation.

    opened by owlas 0
  • Support group_labels in yml for dimensions

    https://github.com/lightdash/dbt2looker/blob/bb8f5b485ec541e2b1be15363ac3c7f8f19d030d/dbt2looker/models.py#L99

    Measures seem to have this, but dimensions don't. Probably all/most properties available in https://docs.lightdash.com/references/dimensions/ should be represented here -- is this something lightdash is willing to maintain, or would you want a contribution? @TuringLovesDeathMetal / @owlas - I figure there should be full support for lightdash properties that can map to Looker, to maximize the value of this utility for enabling Looker customers to uncouple themselves from Looker.
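
    For context, a sketch of the kind of configuration being asked for (the dimension-side group_label is the proposed addition; the measure-side key appears to exist already, and the exact meta layout may differ by version):

    models:
      - name: pages
        columns:
          - name: url
            meta:
              dimension:
                group_label: "Page attributes"   # proposed
              measures:
                page_views:
                  type: count
                  group_label: "Engagement"      # seemingly supported today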

    opened by mike-weinberg 1
  • Issue when parsing dbt models

    Hey folks!

    I've just run 'dbt2looker' in my local dbt repo folder, and I receive the following error:

    ❯ dbt2looker
    12:11:54 ERROR  Cannot parse model with id: "model.smallpdf.brz_exchange_rates" - is the model file empty?
    Failed
    

    The model file itself (pictured below) is not empty, so I am not sure what issue dbt2looker has parsing this model. It is not materialised as a table or view; it is used by dbt as ephemeral - is that of importance when parsing files in the project? I've also tried running dbt2looker on a limited subset of dbt models via a tag, and the same error appears. Any help is greatly appreciated!

    (screenshot of the model file omitted)

    Other details:

    • on dbt version 1.0.0
    • using dbt-redshift adapter [email protected]
    • let me know if anything else is of importance!
    opened by lewisosborne 8
  • Support model level measures

    Motivation

    Technically, we can implement a measure that uses multiple columns under a single column's meta, but it would be more natural to implement such measures at the model level.

    models:
      - name: ubie_jp_lake__dm_medico__hourly_score_for_nps
        description: |
          {{ doc("ubie_jp_lake__dm_medico__hourly_score_for_nps") }}
        meta:
          measures:
            total_x_y_z:
              type: number
              description: 'Summation of total x, total y and total z'
              sql: '${total_x} + ${total_y} + ${total_z}'
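
    For clarity, the LookML measure this proposal might emit (illustrative):

      measure: total_x_y_z {
        type: number
        sql: ${total_x} + ${total_y} + ${total_z} ;;
        description: "Summation of total x, total y and total z"
      }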
    
    
    opened by yu-iskw 0
  • LookML files should merge with existing views

    If I already have a view file, I'd like to merge in any new columns I've added in dbt.

    For example, if I have a description in dbt but not in Looker, I'd like it to be added.

    If Looker already has a description, it should be left alone.

    Thread in dbt slack: https://getdbt.slack.com/archives/C01DPMVM2LU/p1650353949839609?thread_ts=1649968691.671229&cid=C01DPMVM2LU

    opened by owlas 0
  • Non-empty models cannot be parsed and are reported as empty

    As of version 0.9.2, dbt2looker will not run for us anymore. v0.7.0 does run successfully. The error returned by 0.9.2 is 'Cannot parse model with id: "%s" - is the model file empty?'. However, the model that this is returned for is not empty. Based on the code, it seems like the attribute 'name' is missing, but inspecting the manifest.json file shows that there is actually a name for this model. I have no idea why the system reports these models as empty. The manifest.json object for one of the offending models is pasted below.

    Reverting to v0.9.0 (which does not yet have this error message) just leads to dbt2looker crashing without any information. Reverting to v0.7.0 fixes the problem. This issue effectively locks us (and likely others) into using an old version of dbt2looker.

    "model.zivver_dwh.crm_account_became_customer_dates":
            {
                "raw_sql": "WITH sfdc_accounts AS (\r\n\r\n    SELECT * FROM {{ ref('stg_sfdc_accounts') }}\r\n\r\n), crm_opportunities AS (\r\n\r\n    SELECT * FROM {{ ref('crm_opportunities') }}\r\n\r\n), crm_account_lifecycle_stage_changes_into_customer_observed AS (\r\n\r\n    SELECT\r\n        *\r\n    FROM {{ ref('crm_account_lifecycle_stage_changes_observed') }}\r\n    WHERE\r\n        new_stage = 'CUSTOMER'\r\n\r\n), became_customer_dates_from_opportunities AS (\r\n\r\n    SELECT\r\n        crm_account_id AS sfdc_account_id,\r\n\r\n        -- An account might have multiple opportunities. The account became customer when the first one was closed won.\r\n        MIN(closed_at) AS became_customer_at\r\n    FROM crm_opportunities\r\n    WHERE\r\n        opportunity_stage = 'CLOSED_WON'\r\n    GROUP BY\r\n        1\r\n\r\n), became_customer_dates_observed AS (\r\n\r\n    -- Some accounts might not have closed won opportunities, but still be a customer. Examples would be Connect4Care\r\n    -- customers, which have a single opportunity which applies to multiple accounts. If an account is manually set\r\n    -- to customer, this should also count as a customer.\r\n    --\r\n    -- We try to get the date at which they became a customer from the property history. Since that wasn't on from\r\n    -- the beginning, we conservatively default to either the creation date of the account or the history tracking\r\n    -- start date, whichever was earlier. Please note that this case should be exceedingly rare.\r\n    SELECT\r\n        sfdc_accounts.sfdc_account_id,\r\n        CASE\r\n            WHEN {{ var('date:sfdc:account_history_tracking:start_date') }} <= sfdc_accounts.created_at\r\n                THEN sfdc_accounts.created_at\r\n            ELSE {{ var('date:sfdc:account_history_tracking:start_date') }}\r\n        END AS default_became_customer_date,\r\n\r\n        COALESCE(\r\n            MIN(crm_account_lifecycle_stage_changes_into_customer_observed.new_stage_entered_at),\r\n            default_became_customer_date\r\n        ) AS became_customer_at\r\n\r\n    FROM sfdc_accounts\r\n    LEFT JOIN crm_account_lifecycle_stage_changes_into_customer_observed\r\n        ON sfdc_accounts.sfdc_account_id = crm_account_lifecycle_stage_changes_into_customer_observed.sfdc_account_id\r\n    WHERE\r\n        sfdc_accounts.lifecycle_stage = 'CUSTOMER'\r\n    GROUP BY\r\n        1,\r\n        2\r\n\r\n)\r\nSELECT\r\n    COALESCE(became_customer_dates_from_opportunities.sfdc_account_id,\r\n        became_customer_dates_observed.sfdc_account_id) AS sfdc_account_id,\r\n    COALESCE(became_customer_dates_from_opportunities.became_customer_at,\r\n        became_customer_dates_observed.became_customer_at) AS became_customer_at\r\nFROM became_customer_dates_from_opportunities\r\nFULL OUTER JOIN became_customer_dates_observed\r\n    ON became_customer_dates_from_opportunities.sfdc_account_id = became_customer_dates_observed.sfdc_account_id",
                "resource_type": "model",
                "depends_on":
                {
                    "macros":
                    [
                        "macro.zivver_dwh.ref",
                        "macro.zivver_dwh.audit_model_deployment_started",
                        "macro.zivver_dwh.audit_model_deployment_completed",
                        "macro.zivver_dwh.grant_read_rights_to_role"
                    ],
                    "nodes":
                    [
                        "model.zivver_dwh.stg_sfdc_accounts",
                        "model.zivver_dwh.crm_opportunities",
                        "model.zivver_dwh.crm_account_lifecycle_stage_changes_observed"
                    ]
                },
                "config":
                {
                    "enabled": true,
                    "materialized": "ephemeral",
                    "persist_docs":
                    {},
                    "vars":
                    {},
                    "quoting":
                    {},
                    "column_types":
                    {},
                    "alias": null,
                    "schema": "bl",
                    "database": null,
                    "tags":
                    [
                        "business_layer",
                        "commercial"
                    ],
                    "full_refresh": null,
                    "crm_record_types": null,
                    "post-hook":
                    [
                        {
                            "sql": "{{ audit_model_deployment_completed() }}",
                            "transaction": true,
                            "index": null
                        },
                        {
                            "sql": "{{ grant_read_rights_to_role('data_engineer', ['all']) }}",
                            "transaction": true,
                            "index": null
                        },
                        {
                            "sql": "{{ grant_read_rights_to_role('analyst', ['all']) }}",
                            "transaction": true,
                            "index": null
                        }
                    ],
                    "pre-hook":
                    [
                        {
                            "sql": "{{ audit_model_deployment_started() }}",
                            "transaction": true,
                            "index": null
                        }
                    ]
                },
                "database": "analytics",
                "schema": "bl",
                "fqn":
                [
                    "zivver_dwh",
                    "business_layer",
                    "commercial",
                    "crm_account_lifecycle_stage_changes",
                    "intermediates",
                    "crm_account_became_customer_dates",
                    "crm_account_became_customer_dates"
                ],
                "unique_id": "model.zivver_dwh.crm_account_became_customer_dates",
                "package_name": "zivver_dwh",
                "root_path": "C:\\Users\\tjebbe.bodewes\\Documents\\zivver-dwh\\dwh\\transformations",
                "path": "business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.sql",
                "original_file_path": "models\\business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.sql",
                "name": "crm_account_became_customer_dates",
                "alias": "crm_account_became_customer_dates",
                "checksum":
                {
                    "name": "sha256",
                    "checksum": "a037b5681219d90f8bf8d81641d3587f899501358664b8ec77168901b3e1808b"
                },
                "tags":
                [
                    "business_layer",
                    "commercial"
                ],
                "refs":
                [
                    [
                        "stg_sfdc_accounts"
                    ],
                    [
                        "crm_opportunities"
                    ],
                    [
                        "crm_account_lifecycle_stage_changes_observed"
                    ]
                ],
                "sources":
                [],
                "description": "",
                "columns":
                {
                    "sfdc_account_id":
                    {
                        "name": "sfdc_account_id",
                        "description": "",
                        "meta":
                        {},
                        "data_type": null,
                        "quote": null,
                        "tags":
                        []
                    },
                    "became_customer_at":
                    {
                        "name": "became_customer_at",
                        "description": "",
                        "meta":
                        {},
                        "data_type": null,
                        "quote": null,
                        "tags":
                        []
                    }
                },
                "meta":
                {},
                "docs":
                {
                    "show": true
                },
                "patch_path": "zivver_dwh://models\\business_layer\\commercial\\crm_account_lifecycle_stage_changes\\intermediates\\crm_account_became_customer_dates\\crm_account_became_customer_dates.yml",
                "compiled_path": null,
                "build_path": null,
                "deferred": false,
                "unrendered_config":
                {
                    "pre-hook":
                    [
                        "{{ audit_model_deployment_started() }}"
                    ],
                    "post-hook":
                    [
                        "{{ grant_read_rights_to_role('analyst', ['all']) }}"
                    ],
                    "tags":
                    [
                        "commercial"
                    ],
                    "materialized": "ephemeral",
                    "schema": "bl",
                    "crm_record_types": null
                },
                "created_at": 1637233875
            }
    
    opened by Tbodewes 2
Releases (v0.11.0)
  • v0.11.0 (Dec 1, 2022)

    Added

    • Support label and hidden fields (#49)
    • Support non-aggregate measures (#41)
    • Support BYTES and BIGNUMERIC for BigQuery (#75)
    • Support for custom connection name on the CLI (#78)

    Changed

    • Updated dependencies (#74)

    Fixed

    • Type maps for Redshift (#76)

    Removed

    • Strict manifest validation (#77)
  • v0.9.2 (Oct 11, 2021)

  • v0.9.1 (Oct 7, 2021)

    Fixed

    • Fixed bug where dbt2looker would crash if a dbt project contained an empty model

    Changed

    • When filtering models by tag, models that have no tag property will be ignored
  • v0.9.0 (Oct 7, 2021)

    Added

    • Support for spark adapter (@chaimt)

    Changed

    • Updated with support for dbt2looker (@chaimt)
    • Lookml views now populate their "sql_table_name" using the dbt relation name
  • v0.8.2 (Sep 22, 2021)

    Changed

    • Measures with missing descriptions fall back to column descriptions. If there is no column description, it falls back to "{measure_type} of {column_name}".
  • v0.8.1 (Sep 22, 2021)

    Added

    • Dimensions have an enabled flag that can be used to switch off generated dimensions for certain columns with enabled: false
    • Measures have been aliased with the following: measures, measure, metrics, metric
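
    For illustration, a sketch of how these options might be used in a schema.yml (the meta layout is inferred from these notes and may differ between versions):

    models:
      - name: pages
        columns:
          - name: internal_id
            meta:
              dimension:
                enabled: false        # switch off the generated dimension
              metrics:                # accepted alias for `measures`
                distinct_internal_ids:
                  type: count_distinct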

    Changed

    • Updated dependencies
  • v0.8.0 (Sep 9, 2021)

    Changed

    • Command line interface changed argument from --target to --target-dir

    Added

    • Added the --project-dir flag to the command line interface to change the search directory for dbt_project.yml
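
    For example (paths illustrative):

    dbt2looker --target-dir /path/to/target --project-dir /path/to/project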
  • v0.7.3 (Sep 9, 2021)

  • v0.7.2 (Sep 9, 2021)

  • v0.7.1 (Aug 27, 2021)

    Added

    • Use dbt2looker --output-dir /path/to/dir to customise the output directory of the generated lookml files

    Fixed

    • Fixed error with reporting json validation errors
    • Fixed error in join syntax in example .yml file
    • Fixed development environment for python3.7 users
  • v0.7.0 (Apr 18, 2021)

  • v0.6.2 (Apr 18, 2021)

  • v0.6.1 (Apr 17, 2021)

  • v0.6.0 (Apr 17, 2021)
