DataOps framework for Machine Learning projects.

Overview

Noronha DataOps


Noronha is a Python framework designed to help you orchestrate and manage the life cycle of ML projects.

It hosts Machine Learning models inside a portable, ready-to-use DataOps architecture, helping you benefit from DataOps and MLOps practices without changing much of your usual workflow.

The architecture consists of three components:

  • File storage: Artifactory, Nexus, Apache Cassandra
    • A raw storage of your choice is used to version ML assets, such as notebooks, datasets and model binaries. Noronha currently supports Artifactory (default) and Nexus for this task, while Apache Cassandra can be used only as model binary storage.
  • Metadata storage: MongoDB
    • Noronha's metadata model is built on MongoDB documents to guide the framework while managing your ML project, so this component cannot be swapped for another technology.
  • Model router: NodeJS (optional)
    • A router can be set up to act as a single entrypoint for all your models. This is especially useful when deploying in Kubernetes, where service exposure comes into play.

These components are internally called isles. Each isle can run in native mode (managed by Noronha) or foreign mode (managed by the user).
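
The idea behind the model router can be illustrated with a minimal sketch. Note that the `/predict/<model>` endpoint layout below is a hypothetical example for illustration, not Noronha's actual route scheme:

```python
from urllib.parse import quote

def model_route(router_host: str, model_name: str, port: int = 80) -> str:
    """Compose the URL under which a single router would expose one model.

    With a router in front, clients only need to know one host; the path
    selects which model container the request is forwarded to. The
    '/predict/<model>' layout here is a made-up example.
    """
    return f"http://{router_host}:{port}/predict/{quote(model_name)}"

# One entrypoint, many models:
print(model_route("my-cluster.example.com", "iris-clf"))
# http://my-cluster.example.com:80/predict/iris-clf
```

This single-entrypoint pattern is what makes the router useful in Kubernetes, where exposing one service is much simpler than exposing one per model.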

Prerequisites

To use the framework in its most basic configuration, all you need is Python 3 and a Docker engine running on your machine.

Getting started

pip install noronha-dataops
nha --debug --pretty get-me-started

After installing the framework, the command-line interface noronha (or its alias nha) becomes available. Every command accepts the --help option; use it often.

The get-me-started option will set up Artifactory and MongoDB instances in native mode.

The --debug and --pretty flags make error messages easier to debug and read. You can use their short forms instead: nha -d -p

Basic usage

Once you have successfully installed Noronha, start with the simplest project structure:

project_home:
+-- Dockerfile
+-- requirements.txt
+-- notebooks/
    +-- training.ipynb
    +-- predict.ipynb
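
The two notebooks split the model's life cycle: training.ipynb produces a model binary, and predict.ipynb loads it to answer requests. As a rough, framework-agnostic sketch of that split (using plain pickle instead of Noronha's publishing tools, and a toy threshold "model"):

```python
import pickle
from statistics import mean

# training.ipynb (sketch): fit a toy model and persist the binary.
# Here the "model" is just a decision threshold learned from the data.
def train(samples: list) -> dict:
    return {"threshold": mean(samples)}

model = train([0.2, 0.4, 0.9, 1.1])
with open("clf.pkl", "wb") as f:
    pickle.dump(model, f)

# predict.ipynb (sketch): load the binary and serve predictions
with open("clf.pkl", "rb") as f:
    model = pickle.load(f)

def predict(x: float) -> int:
    return int(x > model["threshold"])

print(predict(1.0))  # 1
```

In a real Noronha project, the training notebook would publish the binary through the framework instead of writing to a local file, and the prediction notebook would fetch it from the file storage isle.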

This is what the Dockerfile may look like:

# default public base image for working inside Noronha
FROM noronhadataops/noronha:latest

# project dependencies installation
ADD requirements.txt .
RUN bash -c "source ${CONDA_HOME}/bin/activate ${CONDA_VENV}  && \
    conda install --file requirements.txt"

# deploying the project's code
ADD notebooks ./notebooks

From your project's home folder, record the project in Noronha and build an image for it:

nha -d -p proj new --name my-first-proj --desc "Testing project" --home-dir .
nha -d -p proj build --tag develop

Then, launch the Jupyter Notebook interface for editing and testing your code:

nha -d -p note --edit --tag develop --port 9090

--edit mounts your current directory into the container. This is useful if you want to edit code, test it and save it on the local machine (remember to be in the right directory when using this option).

--port sets the host port that will be routed to the notebook's UI.

Go to your browser and enter: http://localhost:9090/

Next steps

For a fully working project template and an end-to-end tutorial, see the iris example.

For more information about Noronha and advanced usage of the framework, check readthedocs.

If you want to know how to run in Kubernetes, check this guide.

Report issues and request features

If you run into any problem or feel like there is some functionality that should be added, please consider submitting an issue.

We also monitor Stack Overflow questions that use the tag: #noronha-dataops.

If you prefer mailing lists, here is our Google Group: [email protected].

Contributing

Please read our contributing guide.

Comments
  • Extend kubernetes service support, refactor resource profiles, compatibility when deploying with sidecar

    K8s services can be configured via resource profiles. Resource profiles can define any combination of request/limit. Noronha Pods are now compatible with the sidecar paradigm.

    opened by g-crocker 1
  • Unfreeze mongoengine version

    mongoengine was frozen a while ago because it had a blocking issue.

    That issue has since been resolved, and the Noronha install was failing due to a conflict between mongoengine and setuptools.

    Tests with the current mongoengine version run without problems.

    opened by g-crocker 0
  • Build is failing due to conflict between mongoengine and setuptools

    Describe the bug

    Noronha install and build fail in environments running setuptools >= 58.0.0 due to the old Mongo library version being pinned (mongoengine==0.18.2)

    To Reproduce

    Create an environment with setuptools >= 58.0.0, then run: pip install noronha-dataops

    Log messages + Traceback

    Collecting mongoengine==0.18.2
      Downloading mongoengine-0.18.2.tar.gz (151 kB)
        ERROR: Command errored out with exit status 1:
         command: /etc/miniconda/envs/py3_default/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-_i6czwns/mongoengine_59f2504400a24032844b35daec24a8bc/setup.py'"'"'; __file__='"'"'/tmp/pip-install-_i6czwns/mongoengine_59f2504400a24032844b35daec24a8bc/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-gahpmg7n
             cwd: /tmp/pip-install-_i6czwns/mongoengine_59f2504400a24032844b35daec24a8bc/
        Complete output (1 lines):
        error in mongoengine setup command: use_2to3 is invalid.
        ----------------------------------------
    WARNING: Discarding https://files.pythonhosted.org/packages/a7/1c/0749992c7a2b6a2f1879ad44ba5285f907d2024838459b4cd635c5e5effd/mongoengine-0.18.2.tar.gz#sha256=fa3e73c966fca2b814cc1103ac4f55bcca7aae05028b112ef0cc8b321ee4a2f7 (from https://pypi.org/simple/mongoengine/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    ERROR: Could not find a version that satisfies the requirement mongoengine==0.18.2 (from versions: 0.4, 0.5, 0.5.1, 0.5.2, 0.6.1, 0.6.3, 0.6.4, 0.6.6, 0.6.7, 0.6.8, 0.6.10, 0.6.12, 0.6.13, 0.6.15, 0.6.16, 0.6.17, 0.6.19, 0.6.20, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4, 0.7.5, 0.7.8, 0.7.9, 0.7.10, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.6, 0.8.7, 0.8.7.post2, 0.8.7.1, 0.8.8, 0.9.0, 0.10.0, 0.10.1, 0.10.4, 0.10.5, 0.10.6, 0.10.7, 0.10.9, 0.11.0, 0.12.0, 0.13.0, 0.14.3, 0.15.0, 0.15.3, 0.16.0, 0.16.1, 0.16.2, 0.16.3, 0.17.0, 0.18.0, 0.18.2, 0.19.0, 0.19.1, 0.20.0, 0.21.0, 0.22.0, 0.22.1, 0.23.0, 0.23.1)
    ERROR: No matching distribution found for mongoengine==0.18.2
    
    

    Expected behavior

    Build and install finish successfully

    Environment

    • OS version: Ubuntu 20.04
    • Docker version: 19.03
    • Noronha version: 1.6.2
    bug 
    opened by g-crocker 0
  • Support MongoDB customization of service type in K8s

    Current behavior

    Today Noronha relies on NodePort service exposure in order to reach MongoDB from outside the cluster (i.e. from the user's host).

    Enhancement

    Since there are security issues regarding NodePort usage, it would be useful to let users choose which type of service exposure is used (ClusterIP, NodePort, LoadBalancer).

    enhancement 
    opened by g-crocker 0
  • Support Artifactory usage through K8s Ingress and enable customization of service type

    Current behavior

    Today Noronha relies on NodePort service exposure in order to reach Artifactory from outside the cluster (i.e. from the user's host).

    Enhancement

    Since Artifactory requests are all HTTP-based, access via Ingress would be a much better approach.

    It would also be useful to let users choose which type of service exposure is used (ClusterIP, NodePort, LoadBalancer).

    enhancement 
    opened by g-crocker 0
  • Model publish fixes and post-training steps cleanup

    Model version publish behaves correctly when executing in IDE.

    Training mongo document is correctly updated after model publish.

    Fixed example for lazy prediction.

    Moving Noronha to 1.6.1

    opened by g-crocker 0
  • Publishing model version from notebook IDE fails

    Describe the bug

    When I train a new model inside the notebook IDE, publishing fails, although the log messages indicate that the model files were saved successfully.

    To Reproduce

    Start notebook IDE: nha -d -p note --edit --port 9090

    Train model, save output to tmp_path and call Publisher class:

    import joblib
    from noronha.tools.publish import Publisher
    from noronha.tools.shortcuts import data_path, tmp_path
    from sklearn import svm
    
    clf = svm.SVC(kernel=kernel, gamma=gamma)
    clf.fit(entries, classes)
    joblib.dump(clf, tmp_path('clf.pkl'))
    
    Publisher()(version_name='test', model_name='iris-clf')
    

    Log messages + Traceback

    Uploading file: clf.pkl
    
    ---------------------------------------------------------------------------
    DoesNotExist                              Traceback (most recent call last)
    <ipython-input-10-f0881f50ca51> in <module>
          1 joblib.dump(clf, tmp_path('clf.pkl'))
          2 
    ----> 3 Publisher()(version_name='test', model_name='iris-clf')
    
    /etc/miniconda/envs/py3_default/lib/python3.7/site-packages/noronha/tools/publish.py in __call__(self, src_path, details, version_name, model_name, uses_dataset, dataset_name, uses_pretrained, pretrained_with, lightweight)
        142         )
        143 
    --> 144         self.train.reload()
        145         self.train.update(mover=mv, ds=ds)
        146 
    
    /etc/miniconda/envs/py3_default/lib/python3.7/site-packages/mongoengine/document.py in reload(self, *fields, **kwargs)
        709 
        710         if self.pk is None:
    --> 711             raise self.DoesNotExist('Document does not exist')
        712 
        713         obj = self._qs.read_preference(ReadPreference.PRIMARY).filter(
    
    DoesNotExist: Document does not exist
    

    Expected behavior

    Successfully publish a new model version to Noronha.

    Environment

    • OS version: Ubuntu 16.04
    • Docker version: 19.03.13
    • K8s version: N/A, using Swarm
    • Noronha version: 1.6.0

    Additional context

    If I run a training using nha train new, the Publisher works fine.

    bug 
    opened by g-crocker 0
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559, a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file can perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 1
  • Allow secret injection as environment variable and file

    Use case

    Complex use cases require users to inject files or environment variables that contain sensitive information. Noronha does not provide any functionality to solve this.

    Feature description

    Allow users with previously created secrets to inject those into their Noronha-managed containers. Either as a file or an environment variable.

    feature 
    opened by g-crocker 0
  • Isle image is not pulled from Docker Hub

    Describe the bug

    When deploying an isle for the first time with nha get-me-started or nha isle <name> setup Noronha builds it locally instead of pulling from DockerHub.

    To Reproduce

    Make sure there is no isle image in your registry (e.g. docker images | grep nha-isle-artif), then run the command to set up Artifactory: nha -d -p isle artif setup

    Expected behavior

    Pull image from DockerHub and tag/push to private registry if configured, then start isle container. Only build locally if flag --just-build is used.

    Environment

    • Noronha version: 1.6.2
    bug 
    opened by g-crocker 0
  • Support a file store isle that easily integrates with cloud native object storage

    Use case

    While Noronha supports Artifactory, Nexus and Cassandra, in cloud-native solutions the use of cloud object storage is also very common. Being able to interact with tools such as Ceph/Rook and MinIO would extend Noronha's use cases.

    Feature description

    Allow users to use a file store isle that relies on cloud-native object storage.

    Personally, I believe adding support for MinIO first would be better, since it's easier to configure and interact with.

    CEPH seems a bit more challenging, although it might be more used by some projects.

    feature 
    opened by g-crocker 0
  • When an island setup deployment fails, the associated volumes are automatically deleted

    Describe the bug

    For example, when we run nha -d -p isle mongo setup and for any reason the deployment fails, Noronha will automatically revert the creation of associated volumes (in K8s, PVCs), even if those volumes were already there before the command was run.

    This can be a major issue for someone who already has isles running in their cluster and needs to reconfigure them (e.g. to update the resource profile): if the setup command fails for some reason, Noronha will remove the existing PVCs without even prompting the user, completely erasing its own database.

    Expected behavior

    1. Noronha should prompt the user before performing this action
    2. The default option should be to leave the volumes in place
    bug 
    opened by g-crocker 0
  • Include K8s NFS setup steps in host

    Missing Documentation

    Production guide

    Today the guide doesn't specify how to mount a K8s NFS share into a Linux host.

    This is a requirement in order to use Noronha-managed Jupyter notebooks in edit mode while running in K8s, so it makes sense to add it to the docs.

    documentation 
    opened by g-crocker 0
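
As a footnote to the CVE-2007-4559 report above: the safeguard that patch describes boils down to rejecting tar members whose resolved paths escape the extraction directory. A minimal sketch of that check (an illustration, not the exact patch submitted):

```python
import os
import tarfile

def safe_extract(tar: tarfile.TarFile, path: str = ".") -> None:
    """Refuse to extract members that would land outside `path`
    (the path-traversal guard described for CVE-2007-4559)."""
    base = os.path.realpath(path)
    for member in tar.getmembers():
        # resolve where this member would actually be written
        target = os.path.realpath(os.path.join(path, member.name))
        if os.path.commonpath([base, target]) != base:
            raise RuntimeError(f"blocked path traversal: {member.name}")
    tar.extractall(path)
```

A member named `../evil.txt` resolves outside the target directory, so `commonpath` no longer equals the base and the whole extraction is refused before any file is written.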
Releases (v1.6.2)