The Illinois repository for Climatehack (https://climatehack.ai/). We won 1st place!

Overview

Climatehack

This is the repository for Illinois's Climatehack Team. We earned first place on the leaderboard with a final score of 0.87992.

Final Leaderboard

An overview of our approach can be found here.

Example predictions:

Setup

conda env create -f environment.yaml
conda activate climatehack
python -m ipykernel install --user --name=climatehack

First, download the data by running data/download_data.ipynb. Alternatively, you can find preprocessed data files here. Save them into the data folder. We used train.npz and test.npz, which consist of data temporally cropped to 10am-4pm UK time across the entire dataset. You could also use data_good_sun_2020.npz and data_good_sun_2021.npz, which consist of all samples where the sun elevation is at least 10 degrees. Because these crops produced datasets that fit in memory, all our dataloaders work in memory.
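If you want a quick look at what the preprocessed files contain, a snippet like the one below works. The array names inside the archives are not documented in this README, so the sketch simply lists whatever is stored.

import numpy as np

# Hypothetical peek at one of the preprocessed archives; adjust the path as needed.
data = np.load("data/train.npz")
print(data.files)  # names of the stored arrays
for name in data.files:
    print(name, data[name].shape, data[name].dtype)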

Best Submission

Our best submission earned scores exceeding 0.85 on the Climatehack leaderboard. It is relatively simple and uses the fastai library to pick a base model, optimizer, and learning rate scheduler. After some experimentation, we chose xse_resnext50_deeper. We turned it into a UNET and trained it. More info is in the slides.
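For reference, here is a minimal, hypothetical sketch of what such a fastai setup could look like. The DataLoaders object dls, the channel counts, and the hyperparameters are placeholders rather than our actual configuration; see best-submission/train.sh and the slides for the real details.

from fastai.vision.all import *
from fastai.vision.models.xresnet import xse_resnext50_deeper

# dls is assumed to be a fastai DataLoaders yielding (input frames, target frames) pairs.
learn = unet_learner(
    dls,
    xse_resnext50_deeper,
    n_in=12,               # placeholder: number of input satellite frames
    n_out=24,              # placeholder: number of predicted frames
    pretrained=False,
    loss_func=MSELossFlat(),
)
learn.fit_one_cycle(5, lr_max=1e-3)   # fastai's one-cycle learning rate schedule
learn.save("xse_resnext50_deeper")    # writes models/xse_resnext50_deeper.pth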

To train:

cd best-submission
bash train.sh

To submit, first move the trained model xse_resnext50_deeper.pth into best-submission/submission.

cd best-submission
python doxa_cli.py user login
bash submit.sh

Also, check out best-submission/test_and_visualize.ipynb to test the model and visualize results in a nice animation. This is how we produced the animations found in figs/model_predictions.gif.

Experiments

We conducted several experiments that showed improvements on a strong baseline. The baseline was OpenClimateFix's skillful nowcasting repo, which is itself an implementation of DeepMind's precipitation forecasting GAN. This baseline is more-or-less copied to experiments/dgmr-original. One important difference is that instead of training the full GAN, we train only the generator: the generator alone was doing well for us, and training the GAN converged much more slowly. This baseline will actually train to a score greater than 0.8 on the Climatehack leaderboard. We didn't have time to properly test these experiments on top of our best model, but we suspect they would improve results. The experiments are summarized below:
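The generator-only training amounts to fitting the DGMR generator with an ordinary supervised loss instead of an adversarial one. The sketch below illustrates the idea; the generator, tensor shapes, and loss are placeholders, not an exact reproduction of experiments/dgmr-original.

import torch
import torch.nn.functional as F

def train_generator(generator, loader, epochs=1, lr=2e-4, device="cuda"):
    # generator: any model mapping past frames -> predicted future frames (placeholder).
    generator = generator.to(device)
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(epochs):
        for past_frames, future_frames in loader:
            past_frames = past_frames.to(device)
            future_frames = future_frames.to(device)
            pred = generator(past_frames)
            # Plain pixel loss; the competition metric (MS-SSIM) could be added as an extra term.
            loss = F.mse_loss(pred, future_frames)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return generator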

Experiment | Description | Results
DCT-Trick | Inspired by this, we use the DCT to turn 128x128 -> 64x16x16 and the IDCT to turn 64x16x16 -> 128x128 (a sketch follows this table). This leads to a shallower network that is autoregressive at fewer spatial resolutions. We believe this is the first time this has been done with UNETs. A fast implementation is in common/utils.py:create_conv_dct_filter and common/utils.py:get_idct_filter. | 1.8-2x speedup, small (<0.005) performance drop
Denoising | We noticed many blocky artifacts in predictions, reminiscent of JPEG/H.264 compression artifacts; the slides show a comparison. We used a pretrained neural network to fix them. This could certainly be done better, but it works as a proof of concept. | No performance drop, small visual improvement. The slides have an example.
CoordConv | Meteorological phenomena are correlated with geographic coordinates. We add 2 input channels for the geographic coordinates in OSGB form. | +0.0072 MS-SSIM improvement
Optical Flow | Optical flow does well for the first few timesteps. We add 2 input channels for the optical flow vectors. | +0.0034 MS-SSIM improvement
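The DCT-Trick row describes expressing an 8x8 block DCT as a strided convolution, so each 128x128 frame becomes a 64x16x16 tensor and back. Below is a minimal, hypothetical PyTorch sketch of that idea; it is not the repository's implementation (see common/utils.py:create_conv_dct_filter and get_idct_filter for the fast version), and the block size and shapes are assumptions.

import math
import torch
import torch.nn.functional as F

def conv_dct_filter(block: int = 8) -> torch.Tensor:
    """Return a (block*block, 1, block, block) conv weight whose output channels
    are the 2D DCT-II coefficients of each non-overlapping block."""
    k = torch.arange(block).float()
    n = torch.arange(block).float()
    # 1D orthonormal DCT-II basis, indexed [frequency, position]
    basis = torch.cos(math.pi * (2 * n[None, :] + 1) * k[:, None] / (2 * block))
    basis[0] *= 1 / math.sqrt(block)
    basis[1:] *= math.sqrt(2 / block)
    # Separable 2D basis -> one conv filter per (u, v) frequency pair
    filt = torch.einsum('ux,vy->uvxy', basis, basis)
    return filt.reshape(block * block, 1, block, block)

x = torch.randn(1, 1, 128, 128)                    # one satellite frame (assumed shape)
w = conv_dct_filter()
coeffs = F.conv2d(x, w, stride=8)                  # -> (1, 64, 16, 16), block DCT
recon = F.conv_transpose2d(coeffs, w, stride=8)    # inverse (IDCT), since the basis is orthonormal
print(torch.allclose(recon, x, atol=1e-4))         # True up to numerical error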

The folder experiments/climatehack-submission was used to submit these experiments.

cd experiments/climatehack-submission
python doxa_cli.py user login
bash submit.sh

Use experiments/test_and_visualize.ipynb to test the model and visualize results in a nice animation.

Owner
Jatin Mathur
Undergrad at UIUC. Currently working on satellites with LASSI (https://lassiaero.web.illinois.edu/). Previously @astranis, @robinhood, @fractal, @ncsa