Code for testing various M1 Chip benchmarks with TensorFlow.

Overview

M1, M1 Pro, M1 Max Machine Learning Speed Test Comparison

This repo contains some sample code to benchmark the new M1 MacBooks (M1 Pro and M1 Max) against various other pieces of hardware.

It also has steps below to set up your M1, M1 Pro or M1 Max Mac (the steps should also work for Intel Macs) to run the code.

Who is this repo for?

You: have a new M1, M1 Pro, M1 Max machine and would like to get started doing machine learning and data science on it.

This repo: teaches you how to install the most common machine learning and data science packages (software) on your machine and make sure they run using sample code.

Machine Learning Experiments Conducted

All experiments were run with the same code (a minimal sketch of the notebook 00 style of code follows the table). For Apple devices, TensorFlow environments were created with the steps below.

Notebook | Experiment
00 | TinyVGG model trained on the CIFAR10 dataset with TensorFlow code.
01 | EfficientNetB0 feature extractor on the Food101 dataset with TensorFlow code.
02 | RandomForestClassifier from Scikit-Learn trained with random search cross-validation on the California Housing dataset.
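
For a flavour of what notebook 00 does, here is a minimal sketch of training a TinyVGG-style convolutional network on CIFAR10 with tf.keras. It is an illustration only; the actual notebook's architecture, hyperparameters and timing code may differ.

import time

import tensorflow as tf

# Load CIFAR10 (10 classes of 32x32 colour images) and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# TinyVGG-style architecture: two convolutional blocks followed by a dense classifier
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(10, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

# Train for a single epoch and report the wall-clock time
start = time.time()
model.fit(x_train, y_train, epochs=1, batch_size=32, validation_data=(x_test, y_test))
print(f"Training time: {time.time() - start:.2f} seconds")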

Results

See the results directory.

Steps (how to test your M1 machine)

  1. Create an environment and install dependencies (see below)
  2. Clone this repo (an example clone command is shown after this list)
  3. Run various notebooks (results come at the end of the notebooks)
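
The clone command (step 2) will look something like the following; the URL here is a placeholder, so substitute the actual URL from this repo's GitHub page:

git clone https://github.com/<USERNAME>/<THIS_REPO>.git
cd <THIS_REPO>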

How to set up a TensorFlow environment on M1, M1 Pro, M1 Max using Miniforge (shorter version)

If you're experienced with making environments and using the command line, follow this version. If not, see the longer version below.

  1. Download and install Homebrew from https://brew.sh. Follow the steps it prompts you to go through after installation.
  2. Download Miniforge3 (Conda installer) for macOS arm64 chips (M1, M1 Pro, M1 Max).
  3. Install Miniforge3 into home directory.
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
  4. Restart terminal.
  5. Create a directory to set up the TensorFlow environment.
mkdir tensorflow-test
cd tensorflow-test
  6. Make and activate a Conda environment. Note: Python 3.8 is currently the most stable version for this setup.
conda create --prefix ./env python=3.8
conda activate ./env
  7. Install TensorFlow dependencies from the Apple Conda channel.
conda install -c apple tensorflow-deps
  8. Install base TensorFlow (Apple's fork of TensorFlow is called tensorflow-macos).
python -m pip install tensorflow-macos
  9. Install Apple's tensorflow-metal to leverage Apple Metal (Apple's GPU framework) for M1, M1 Pro, M1 Max GPU acceleration.
python -m pip install tensorflow-metal
  10. (Optional) Install TensorFlow Datasets to run the benchmarks included in this repo.
python -m pip install tensorflow-datasets
  11. Install common data science packages.
conda install jupyter pandas numpy matplotlib scikit-learn
  12. Start Jupyter Notebook.
jupyter notebook
  13. Import dependencies and check TensorFlow version/GPU access.
import numpy as np
import pandas as pd
import sklearn
import tensorflow as tf
import matplotlib.pyplot as plt

# Check for TensorFlow GPU access
print(f"TensorFlow has access to the following devices:\n{tf.config.list_physical_devices()}")

# See TensorFlow version
print(f"TensorFlow version: {tf.__version__}")

If it all worked, you should see something like:

TensorFlow has access to the following devices:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
TensorFlow version: 2.8.0
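
As an optional extra check, a short training run should exercise the GPU (you can watch usage in Activity Monitor's GPU History window while it runs). This is just a quick sanity check with random data, not one of the benchmarks:

import tensorflow as tf

# Tiny random-data training run purely to exercise the GPU
x = tf.random.normal((1024, 128))
y = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=64)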

How to set up a TensorFlow environment on M1, M1 Pro, M1 Max using Miniforge (longer version)

If you're new to creating environments, are using a new M1, M1 Pro or M1 Max machine, and would like to get started running TensorFlow and other data science libraries, follow the steps below.

Note: You're going to see the term "package manager" a lot below. Think of it like this: a package manager is a piece of software that helps you install other pieces (packages) of software.

Installing package managers (Homebrew and Miniforge)

  1. Download and install Homebrew from https://brew.sh. Homebrew is a package manager that sets up a lot of useful things on your machine, including the Command Line Tools for Xcode, which you'll need to run things like git. The command to install Homebrew will look something like:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

It will explain what it's doing and what you need to do as you go.

  2. Download the most compatible version of Miniforge (a minimal installer for Conda specific to conda-forge; Conda is another package manager and conda-forge is a Conda channel) from GitHub.

If you're using an M1-variant Mac, the file you want is "Miniforge3-MacOSX-arm64", available from the Miniforge releases page on GitHub.

Clicking the link above will download a shell file called Miniforge3-MacOSX-arm64.sh to your Downloads folder (unless otherwise specified).

  3. Open Terminal.

  4. We've now got a shell file capable of installing Miniforge, but to do so we'll have to modify its permissions to make it executable.

To do so, we'll run the command chmod +x FILE_NAME, which changes the mode of FILE_NAME to make it executable.

We'll then execute (run) the installer using sh.

chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
  5. This should install Miniforge3 into your home directory (~/ stands for "Home" on Mac).

To check this, we can try to activate the (base) environment using the source command.

source ~/miniforge3/bin/activate

If it worked, you should see something like the following in your terminal window.

(base) daniel@your-machine ~ %
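
As an extra check that Miniforge installed correctly (assuming the default install location of ~/miniforge3), you can ask Conda to report its version and list its environments:

conda --version
conda info --envs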
  6. We've just installed some new software, and for it to fully work, we'll need to restart terminal.

Creating a TensorFlow environment

Now we've got the package managers we need, it's time to install TensorFlow.

Let's set up a folder called tensorflow-test (you can call this anything you want) and install everything in there to make sure it's working.

Note: An environment is like a virtual room on your computer. For example, you use the kitchen in your house for cooking because it's got all the tools you need; it would be strange to have an oven in your bedroom. The same goes for your computer: if you're going to be working with specific software, you'll want it all in one place and not scattered everywhere else.

  1. Make a directory called tensorflow-test. This is the directory we're going to store our environment in, and inside the environment will be the software tools we need to run TensorFlow.

We can do so with the mkdir command which stands for "make directory".

mkdir tensorflow-test
  2. Change into tensorflow-test. We'll be running the rest of the commands inside the tensorflow-test directory, so we need to change into it.

We can do this with the cd command which stands for "change directory".

cd tensorflow-test
  3. Now we're inside the tensorflow-test directory, let's create a new Conda environment using the conda command (this command was installed when we installed Miniforge above).

We do so using conda create --prefix ./env, which stands for "conda, create an environment at the path ./env". The . refers to the current directory, so the environment lives inside the project folder rather than in Conda's default location. (As in the shorter version above, you can also pin Python by appending python=3.8 to the command.)

For example, running the command from inside /Users/daniel/tensorflow-test creates the environment at: /Users/daniel/tensorflow-test/env

conda create --prefix ./env
  4. Activate the environment. If conda created the environment correctly, you should be able to activate it using conda activate path/to/environment.

Short version:

conda activate ./env

Long version:

conda activate /Users/daniel/tensorflow-test/env

Note: It's important to activate your environment every time you'd like to work on projects that use the software you install into that environment. For example, you might have one environment for every different project you work on. And all of the different tools for that specific project are stored in its specific environment.

If activating your environment went correctly, your terminal window prompt should look something like:

(/Users/daniel/tensorflow-test/env) daniel@your-machine tensorflow-test %
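
You can also confirm which environment is active by printing the CONDA_PREFIX variable, which conda activate sets; it should show the full path to your environment folder (ending in tensorflow-test/env):

echo $CONDA_PREFIX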
  5. Now we've got a Conda environment set up, it's time to install the software we need.

Let's start by installing various TensorFlow dependencies (TensorFlow is a large piece of software and depends on many other pieces of software).

Rather than list these all out, Apple have set up a quick command so you can install almost everything TensorFlow needs in one line.

conda install -c apple tensorflow-deps

The above stands for "hey conda install all of the TensorFlow dependencies from the Apple Conda channel" (-c stands for channel).

If it worked, you should see a bunch of stuff being downloaded and installed for you.

  6. Now all of the TensorFlow dependencies have been installed, it's time to install base TensorFlow.

Apple have created a fork (copy) of TensorFlow specifically for Apple Macs. It has all the features of TensorFlow with some extra functionality to make it work on Apple hardware.

This Apple fork of TensorFlow is called tensorflow-macos and is the version we'll be installing:

python -m pip install tensorflow-macos

Depending on your internet connection, the above may take a few minutes since TensorFlow is quite a large piece of software.
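
A quick way to confirm the install worked (before adding the GPU plugin in the next step) is to import TensorFlow from the command line and print its version:

python -c "import tensorflow as tf; print(tf.__version__)"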

  7. Now we've got base TensorFlow installed, it's time to install tensorflow-metal.

Why?

Machine learning models often benefit from GPU acceleration. And the M1, M1 Pro and M1 Max chips have quite powerful GPUs.

TensorFlow allows for automatic GPU acceleration if the right software is installed.

And Metal is Apple's framework for GPU computing.

So Apple have created a plugin for TensorFlow (also referred to as a TensorFlow PluggableDevice) called tensorflow-metal to run TensorFlow on Mac GPUs.

We can install it using:

python -m pip install tensorflow-metal

If the above works, we should now be able to leverage our Mac's GPU cores to speed up model training with TensorFlow.
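
To confirm the Metal plugin registered a GPU device, you can list TensorFlow's physical GPU devices; on an M1, M1 Pro or M1 Max machine you should see one GPU entry:

python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"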

  8. (Optional) Install TensorFlow Datasets. Doing the above is enough to run TensorFlow on your machine. But if you'd like to run the benchmarks included in this repo, you'll need TensorFlow Datasets.

TensorFlow Datasets provides a collection of common machine learning datasets to test out various machine learning code.

python -m pip install tensorflow-datasets
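
As a quick check that TensorFlow Datasets works, the snippet below loads a small dataset and inspects its metadata. This is only an illustration; the benchmark notebooks handle their own data loading (and Food101 in particular is a large download):

import tensorflow_datasets as tfds

# Load CIFAR10 (small enough to download quickly) along with its metadata
(train_ds, test_ds), ds_info = tfds.load(
    "cifar10",
    split=["train", "test"],
    as_supervised=True,  # return (image, label) pairs
    with_info=True,
)

print(ds_info.features)
print(f"Number of training examples: {ds_info.splits['train'].num_examples}")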
  9. Install common data science packages. If you'd like to run the benchmarks above or work on other data science and machine learning projects, you're likely going to need Jupyter Notebooks, pandas for data manipulation, NumPy for numeric computing, matplotlib for plotting and Scikit-Learn for traditional machine learning algorithms and preprocessing functions.

To install those in the current environment run:

conda install jupyter pandas numpy matplotlib scikit-learn
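
If you'd like a quick Scikit-Learn smoke test loosely in the spirit of notebook 02 (the actual notebook uses RandomForestClassifier and its own search setup, so treat this only as a small illustrative sketch), the following runs a short random search over a random forest on the California Housing data:

from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Load the California Housing data (a regression problem) and split it
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Keep the search small so it finishes quickly
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=42),
    param_distributions=param_distributions,
    n_iter=3,
    cv=3,
    random_state=42,
)
search.fit(X_train, y_train)

print(f"Best parameters: {search.best_params_}")
print(f"Test R^2 score: {search.score(X_test, y_test):.3f}")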
  10. Test it out. To see if everything worked, try starting a Jupyter Notebook and importing the installed packages.
# Start a Jupyter notebook
jupyter notebook

Once the notebook is started, in the first cell:

import numpy as np
import pandas as pd
import sklearn
import tensorflow as tf
import matplotlib.pyplot as plt

# Check for TensorFlow GPU access
print(f"TensorFlow has access to the following devices:\n{tf.config.list_physical_devices()}")

# See TensorFlow version
print(f"TensorFlow version: {tf.__version__}")

If it all worked, you should see something like the following (your exact TensorFlow version may differ):

TensorFlow has access to the following devices:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
TensorFlow version: 2.5.0
  11. To see if it really worked, try running one of the notebooks above end to end!

And then compare your results to the benchmarks above.
