Artifacts for paper "MMO: Meta Multi-Objectivization for Software Configuration Tuning"

Overview

MMO: Meta Multi-Objectivization for Software Configuration Tuning

This repository contains the data and code for the following paper, which is currently under submission for publication:

Tao Chen and Miqing Li. MMO: Meta Multi-Objectivization for Software Configuration Tuning.

Introduction

In software configuration tuning, different optimizers have been designed to optimize a single performance objective (e.g., minimizing latency), yet there is still little success in preventing (or mitigating) the search from being trapped in local optima — a hard nut to crack due to the complex configuration landscape and expensive measurement. To tackle this challenge, in this paper we take a different perspective. Instead of focusing on improving the optimizer, we work at the level of the optimization model and propose a meta multi-objectivization (MMO) model that considers an auxiliary performance objective (e.g., throughput in addition to latency). What makes this model unique is that we do not optimize the auxiliary performance objective, but rather use it to make similarly-performing yet different configurations less comparable (i.e., Pareto non-dominated to each other), thus preventing the search from being trapped in local optima. Importantly, we show how to effectively use the MMO model without worrying about its weight — the only, yet highly sensitive, parameter that can determine its effectiveness. This is achieved by designing a new normalization method that allows an optimizer to adaptively find the right objective bounds when guiding the tuning. Experiments on 22 cases from 11 real-world software systems/environments confirm that our MMO model with the new normalization performs better than its state-of-the-art single-objective counterparts on 18 out of 22 cases, while achieving up to 2.09x speedup. For 15 cases, the new normalization also enables the MMO model to outperform its counterpart that uses the normalization proposed in our prior FSE work under the pre-tuned best weights, saving a great amount of resources that would otherwise be necessary to find a good weight. We also demonstrate that the MMO model with the new normalization can consolidate FLASH, a recent model-based tuning tool, on 15 out of 22 cases with 1.22x speedup in general.
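The exact construction of the two meta-objectives and their weighting are defined in the paper; as a minimal illustration of the underlying idea only, the sketch below (in Java, matching the artifact language) shows a plain Pareto-dominance check over two minimized objectives, under which two similarly-performing but different configurations can be mutually non-dominated and therefore both survive the search. The objective values are made up for illustration.

    // Illustrative sketch only: a plain Pareto-dominance check for minimized objectives.
    // The actual MMO meta-objectives and their weighting are defined in the paper.
    final class ParetoDominanceSketch {

        // Returns true iff 'a' dominates 'b': no worse on every objective and
        // strictly better on at least one (all objectives are minimized here).
        static boolean dominates(double[] a, double[] b) {
            boolean strictlyBetter = false;
            for (int i = 0; i < a.length; i++) {
                if (a[i] > b[i]) return false;          // worse on some objective
                if (a[i] < b[i]) strictlyBetter = true; // strictly better on some objective
            }
            return strictlyBetter;
        }

        public static void main(String[] args) {
            // Hypothetical (target, auxiliary) values of two similarly-performing configurations.
            double[] configA = {10.2, 3.0};
            double[] configB = {10.4, 2.1};
            // Neither dominates the other, so they are mutually non-dominated and
            // the search is not forced to discard either of them.
            System.out.println(dominates(configA, configB)); // false
            System.out.println(dominates(configB, configA)); // false
        }
    }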

Data Results

The dataset of this work can be accessed via the Zenodo link here. In particular, the zip file contains all the raw data reported in the paper; most of the structure is self-explanatory, but we wish to highlight the following (a sketch of the implied folder layout is given after this list):

  • The data under the folders 1.0-0.0 and 0.0-1.0 are for the single-objective optimizers: the former uses O1 as the target performance objective, while the latter uses O2. The data under the other folders, named after the subject systems, are for MMO and PMO. Within those, the results under the weight folder 1.0 are for MMO, while all other folders represent different weight values and contain the data for MMO-FSE.

  • For the data of MMO, MMO-FSE, and PMO, the folders 0 and 1 denote using O1 and O2 as the target performance objective, respectively.

  • In the lowest-level folder where the data is stored (i.e., the sas folder), SolutionSet.rtf contains the results over all repeated runs; SolutionSetWithMeasurement.rtf records the results over different numbers of measurements.
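Putting the bullets together, the implied layout is roughly as follows. This is an assumption reconstructed from the descriptions above, not a listing of the zip file; the exact nesting may differ slightly:

    1.0-0.0/                        single-objective optimizer, O1 as the target
    0.0-1.0/                        single-objective optimizer, O2 as the target
    <subject-system>/               e.g., trimesh, x264, ...
        1.0/                        MMO
        <other-weight>/             MMO-FSE under that weight
            0/  or  1/              O1 or O2 as the target performance objective
                sas/
                    SolutionSet.rtf                  results over all repeated runs
                    SolutionSetWithMeasurement.rtf   results over different numbers of measurements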

Source Code

The code folder contains all the information about the source code, as well as an executable jar file under the executable folder.

Running the Experiments

To run the experiments, one can download mmo-experiments.jar from the aforementioned repository (under the executable folder). Since the artifacts were written in Java, we assume that a JDK/JRE has already been installed. Next, one can run the code using java -jar mmo-experiments.jar [subject] [runs], where [subject] and [runs] denote the subject software system and the number of repeated runs (an integer; 50 is the default if it is not specified), respectively. The keywords for the systems/environments used in the paper are:

  • trimesh
  • x264
  • storm-wc
  • storm-rs
  • dnn-sa
  • dnn-adiac
  • mariadb
  • vp9
  • mongodb
  • lrzip
  • llvm

For example, running java -jar mmo-experiments.jar trimesh would execute experiments on the trimesh software for 50 repeated runs.
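The number of repeated runs can also be given explicitly as the second argument; for instance, java -jar mmo-experiments.jar x264 30 runs the x264 experiments 30 times instead of the default 50.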

For each software system, the experiment consists of the runs for MMO, MMO-FSE with all weight values, PMO, and the four state-of-the-art single-objective optimizers, as well as FLASH and FLASH_MMO. All outputs will be stored in the results folder in the same directory as the executable jar file.

All the measurement data of the subject configurable systems have been placed inside the mmo-experiments.jar.
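For those who wish to inspect the bundled measurement data programmatically, a standard classpath lookup along the following lines should work. The resource path data/trimesh.csv is only a placeholder, since the layout inside the jar is not documented here; jar tf mmo-experiments.jar lists the real entries.

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    // Minimal sketch: preview a data file bundled inside mmo-experiments.jar.
    // Compile and run with the jar on the classpath, e.g.:
    //   javac ReadBundledData.java
    //   java -cp .:mmo-experiments.jar ReadBundledData data/trimesh.csv
    public class ReadBundledData {
        public static void main(String[] args) throws Exception {
            // Placeholder path; use 'jar tf mmo-experiments.jar' to find the real resource names.
            String resource = args.length > 0 ? args[0] : "data/trimesh.csv";
            try (InputStream in = ReadBundledData.class.getClassLoader().getResourceAsStream(resource)) {
                if (in == null) {
                    System.err.println("Resource not found on the classpath: " + resource);
                    return;
                }
                BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
                reader.lines().limit(5).forEach(System.out::println); // print the first few lines
            }
        }
    }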
