Keras documentation, hosted live at keras.io


Keras.io documentation generator

This repository hosts the code used to generate the keras.io website.

Generating a local copy of the website

pip install -r requirements.txt
cd scripts
python autogen.py make
python autogen.py serve

If you have Docker installed (the GPU version is not needed), you can instead run:

docker build -t keras-io . && docker run --rm -p 8000:8000 keras-io

The first run will take a while, since Docker has to build the image and download the dependencies, but subsequent runs will be much faster.

Another way of testing using Docker is via our Makefile:

make container-test

This command will build a Docker image with a documentation server and run it.

Call for examples

Are you interested in submitting new examples for publication on keras.io? We welcome your contributions! Please read the information below about adding new code examples.

We are currently interested in the following examples.

Adding a new code example

Keras code examples are implemented as tutobooks.

A tutobook is a script available simultaneously as a notebook, as a Python file, and as a nicely-rendered webpage.

Its source of truth (for manual editing and version control) is its Python script form, but you can also create one by starting from a notebook and converting it with the nb2py command.

Text cells are stored in Markdown-formatted comment blocks. The first line (starting with """) may optionally contain a special annotation, one of:

  • shell: execute this block while prefixing each line with !.
  • invisible: do not render this block.
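For example, based on the description above, a plain text cell and a shell cell in the script form might look like the following sketch (the section title and package name are placeholders, not part of any real example):

"""
## Setup

First, we install and import the libraries we need.
"""

"""shell
pip install -q some-package
"""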

The script form should start with a header with the following fields:

Title: (title)
Author: (could be `Authors`: as well, and may contain markdown links)
Date created: (date in yyyy/mm/dd format)
Last modified: (date in yyyy/mm/dd format)
Description: (one-line text description)
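Concretely, this means the script begins with a docstring along these lines (the title, author, dates, and description below are placeholders):

"""
Title: A tiny example of the tutobook format
Author: [Jane Doe](https://example.com/jane)
Date created: 2023/01/15
Last modified: 2023/01/20
Description: Minimal illustration of the tutobook header.
"""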

To see examples of tutobooks, you can check out any .py file in examples/ or guides/.

Creating a new example starting from an ipynb file

  1. Save the ipynb file to local disk.
  2. Convert the file to a tutobook by running (assuming you are in the scripts/ directory):
python tutobooks.py nb2py path_to_your_nb.ipynb ../examples/vision/script_name.py

This will create the file examples/vision/script_name.py.

  3. Open it, fill in the header fields, and generally edit it so that it looks nice.

NOTE THAT THE CONVERSION SCRIPT MAY MAKE MISTAKES IN ITS ATTEMPTS TO SHORTEN LINES. MAKE SURE TO PROOFREAD THE GENERATED .py IN FULL. Alternatively, keep your lines reasonably sized (under 90 characters) to start with, so that the script won't have to shorten them.

  4. Run python autogen.py add_example vision/script_name. This will generate an ipynb and markdown rendering of your example, creating files in examples/vision/ipynb, examples/vision/md, and examples/vision/img. Do not modify any of these files by hand; only the original Python script should ever be edited manually.
  5. Submit a PR adding examples/vision/script_name.py (only the .py, not the generated files). Get a review and approval.
  6. Once the PR is approved, add the files created by the add_example command to the PR. Then we will merge the PR. (A consolidated sketch of these commands follows the list.)
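Put together, and assuming a notebook named my_example.ipynb targeting the vision category (both placeholder names), the workflow is roughly:

cd scripts
python tutobooks.py nb2py ~/Downloads/my_example.ipynb ../examples/vision/my_example.py
# Fill in the header and proofread ../examples/vision/my_example.py, then:
python autogen.py add_example vision/my_example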

Creating a new example starting from a Python script

  1. Format the script with black: black script_name.py
  2. Add the tutobook header described above.
  3. Put the script in the relevant subfolder of examples/ (e.g. examples/vision/script_name.py).
  4. Run python autogen.py add_example vision/script_name. This will generate an ipynb and markdown rendering of your example, creating files in examples/vision/ipynb, examples/vision/md, and examples/vision/img. Do not modify any of these files by hand; only the original Python script should ever be edited manually.
  5. Submit a PR adding examples/vision/script_name.py (only the .py, not the generated files). Get a review and approval.
  6. Once the PR is approved, add the files created by the add_example command to the PR. Then we will merge the PR. (See the command sketch after this list.)
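As a rough sketch, assuming your script is named my_example.py, initially lives outside the repository checkout, and targets the vision category (all placeholder names and paths):

black my_example.py
# Add the tutobook header, then move the script into the repository:
mv my_example.py path/to/keras-io/examples/vision/my_example.py
cd path/to/keras-io/scripts
python autogen.py add_example vision/my_example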

Previewing a new example

You can locally preview what the example looks like by running:

cd scripts
python autogen.py add_example vision/script_name

(Assuming the tutobook file is examples/vision/script_name.py.)

NOTE THAT THIS COMMAND WILL ERROR OUT IF ANY CELL TAKES TOO LONG TO EXECUTE. In that case, make your code lighter/faster. Remember that examples are meant to demonstrate workflows, not train state-of-the-art models. They should stay very lightweight.

Then serve the website:

python autogen.py make
python autogen.py serve

And navigate to 0.0.0.0:8000/examples.

Read-only autogenerated files

The contents of the following folders should not be modified by hand:

  • site/*
  • sources/*
  • templates/examples/*
  • templates/guides/*
  • examples/*/md/*, examples/*/ipynb/*, examples/*/img/*
  • guides/md/*, guides/ipynb/*, guides/img/*

Modifiable files

These are the only files that should be edited by hand:

  • templates/*.md, with the exception of templates/examples/* and templates/guides/*
  • examples/*/*.py
  • guides/*.py
  • theme/*
  • scripts/*.py