Machine learning-powered app to decide whether a photo is food or not.

Overview

Food Not Food dot app ( 🍔 🚫 🍔 )

Code for building a machine learning-powered app to decide whether a photo is of food or not.

See it working live at: https://foodnotfood.app

Yes, that's all it does.

It's not perfect.

But think about it.

How do you decide what's food or not?

Inspiration

Remember hotdog not hotdog?

That's what this repo builds, except for food or not.

It's arguably harder to do food or not.

Because there are so many options for what a "food" is versus what "not food" is.

Whereas with hotdog not hotdog, you've only got one option: is it a hotdog or not?

Video and notes

I built this app during a 10-hour livestream to celebrate 100,000 YouTube Subscribers (thank you thank you thank you).

The full stream replay is available to watch on YouTube.

The code has changed since the stream.

I made it cleaner and more reproducible.

My notes are on Notion.

Steps to reproduce

Note: If this doesn't work, please leave an issue.

To reproduce, the following steps are best run in order.

You will require an installation of Conda; I'd recommend Miniconda.

Clone the repo

git clone https://github.com/mrdbourke/food-not-food
cd food-not-food

Environment creation

I use Conda for my environments. You could do similar with venv and pip but I prefer Conda.

This code works with Python 3.8.

conda create --prefix ./env python=3.8 -y
conda activate ./env
conda install pip

Installing requirements

Getting TensorFlow + GPU to work

Follow the install instructions for running TensorFlow on the GPU.

This will be required for model_building/train_model.py.
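
Once TensorFlow is installed, a quick sanity check (a standard TensorFlow call, not a script from this repo) is to confirm it can see the GPU:

# Check that TensorFlow imports and can see the GPU.
import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))  # should list at least one GPU device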

Note: Another option is to skip installing TensorFlow here and use your global TensorFlow installation, then just install the rest of the requirements from the requirements.txt file below.

Other requirements

If you're using your global installation of TensorFlow, you might be able to just run pip install -r requirements.txt in your environment.

Or if you're running in another dedicated environment, you should also be able to just run pip install -r requirements.txt.

pip install -r requirements.txt

Getting the data

  1. Download Food101 data (101,000 images of food).
python data_download/download_food101.py
  2. Download a subset of Open Images data. Use the -n flag to indicate how many images from each set (train/valid/test) to randomly download.

For example, running python data_download/download_open_images.py -n=100 downloads 100 images from the training, validation and test sets of Open Images (300 images in total).

Downloading the Open Images data is powered by FiftyOne.

python data_download/download_open_images.py -n=100
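
For reference, the download is built on FiftyOne's dataset zoo. A minimal sketch of the kind of call the script wraps (the exact splits, label handling and export settings in data_download/download_open_images.py may differ):

# Minimal FiftyOne sketch for grabbing a random subset of Open Images.
# Illustrative only; the repo's download script may use different options.
import fiftyone as fo
import fiftyone.zoo as foz

for split in ("train", "validation", "test"):
    dataset = foz.load_zoo_dataset(
        "open-images-v6",
        split=split,
        max_samples=100,  # mirrors the -n flag
        shuffle=True,
    )
    # Export just the images to disk for later processing
    dataset.export(
        export_dir=f"open_images/{split}",
        dataset_type=fo.types.ImageDirectory,
    )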

Processing the data

  1. Extract the Food101 data into a "food" directory. Use the -n flag to set how many images of food to extract; for example, -n=10000 extracts 10,000 random food images from Food101.
python data_processing/extract_food101.py -n=10000
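
At its core this step just samples N image paths from the downloaded Food101 images and copies them into a single "food" folder. A rough sketch of that idea (the source and target paths are assumptions; see data_processing/extract_food101.py for the real logic):

# Rough sketch: copy N random Food101 images into a flat "food" directory.
# Paths are assumptions, not the repo's actual defaults.
import random
import shutil
from pathlib import Path

def extract_food101(source_dir="data/food101/images", target_dir="data/food", n=10000):
    image_paths = list(Path(source_dir).rglob("*.jpg"))
    sampled = random.sample(image_paths, k=min(n, len(image_paths)))
    Path(target_dir).mkdir(parents=True, exist_ok=True)
    for path in sampled:
        # Prefix with the class folder name (e.g. "pizza_12345.jpg") to avoid name clashes
        shutil.copy2(path, Path(target_dir) / f"{path.parent.name}_{path.name}")

extract_food101(n=10000)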
  2. Extract the Open Images images into the open_images_extracted directory.

The data_processing/extract_open_images.py script uses the Open Images labels plus a list of foods and not foods (see data/food_list.txt and data/non_food_list.txt) to separate the downloaded Open Images.

This is necessary because some of the images from Open Images contain foods (we don't want these in our not_food class).

python data_processing/extract_open_images.py
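
Conceptually, the script looks up each downloaded image's labels and routes it using the two lists, so that anything labelled as food stays out of the not_food pool. A simplified sketch of that idea (the labels argument is a hypothetical stand-in; the real script reads the Open Images label files):

# Simplified sketch of routing Open Images downloads with the food / not food lists.
# The labels for each image are assumed to come from the Open Images label files.
import shutil
from pathlib import Path

food_classes = set(Path("data/food_list.txt").read_text().splitlines())
non_food_classes = set(Path("data/non_food_list.txt").read_text().splitlines())

def sort_image(image_path, labels, out_dir="data/open_images_extracted"):
    if any(label in food_classes for label in labels):
        target = Path(out_dir) / "food"  # contains food, keep it out of not_food
    elif all(label in non_food_classes for label in labels):
        target = Path(out_dir) / "not_food"
    else:
        return  # ambiguous labels, skip the image
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy2(image_path, target / Path(image_path).name)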
  3. Move the extracted images into "food" and "not_food" directories.

This is necessary because the model training file infers the class names from the directory names (food and not_food).

python data_processing/move_images.py 
  4. Split the data into training and test sets.

This creates a training and test split of food and not_food images.

This is so we can verify the performance of our model before deploying it.

It'll create the structure:

train/
    food/
        image1.jpeg
        image2.jpeg
        ...
    not_food/
        image100.jpeg
        image101.jpeg
        ...
test/
    food/
        image201.jpeg
        image202.jpeg
        ...
    not_food/
        image301.jpeg
        image302.jpeg
        ...

To do this, run:

python data_processing/data_splitting.py
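
Under the hood a split like this shuffles the image paths for each class and copies a fixed fraction into test/ with the rest going into train/. A condensed sketch (the 80/20 ratio, seed and paths are assumptions, not necessarily what data_splitting.py uses):

# Condensed sketch of an 80/20 train/test split over the food / not_food folders.
# Ratio, seed and paths are assumptions.
import random
import shutil
from pathlib import Path

def split_class(class_dir, test_ratio=0.2, seed=42):
    images = sorted(Path(class_dir).glob("*.jp*g"))  # matches .jpg and .jpeg
    random.Random(seed).shuffle(images)
    n_test = int(len(images) * test_ratio)
    for i, path in enumerate(images):
        split = "test" if i < n_test else "train"
        target = Path(split) / Path(class_dir).name  # e.g. train/food/
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, target / path.name)

for class_name in ("food", "not_food"):
    split_class(f"data/{class_name}")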

Modeling the data

Note: This will require a working install of TensorFlow.

Running the model training file will produce a TensorFlow Lite model (this is small enough to be deployed in a browser) saved to the models directory.

The script will look for the train and test directories and will create training and testing datasets on each respectively.

It'll print out the progress at each epoch and then evaluate and save the model.

python model_building/train_model.py
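
In outline, the training script boils down to: build datasets from the train/ and test/ folders (class names are inferred from the food and not_food directory names), fit a small image classifier, evaluate it, and convert it to TensorFlow Lite. A compressed sketch of that flow (the architecture and hyperparameters here are placeholders, not those used in train_model.py):

# Compressed sketch: folder datasets -> small transfer-learning classifier -> .tflite file.
# Model choice and hyperparameters are placeholders, not what train_model.py uses.
import pathlib
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory("train", image_size=IMG_SIZE, batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory("test", image_size=IMG_SIZE, batch_size=32)
print(train_ds.class_names)  # inferred from the directory names: ['food', 'not_food']

base = tf.keras.applications.MobileNetV2(input_shape=IMG_SIZE + (3,), include_top=False, pooling="avg")
base.trainable = False  # use the pretrained backbone as a frozen feature extractor

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1] for MobileNetV2
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output: food vs not_food
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=5)
model.evaluate(test_ds)

# Convert to TensorFlow Lite so the model is small enough to ship to the browser
converter = tf.lite.TFLiteConverter.from_keras_model(model)
pathlib.Path("models").mkdir(exist_ok=True)
pathlib.Path("models/food_not_food.tflite").write_bytes(converter.convert())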

What data is used?

The current deployed model uses about 40,000 images of food and 25,000 images of not food.

Owner
Daniel Bourke
Machine Learning Engineer live on YouTube.