Bayesian Optimization using GPflow

Overview

Note: This package is for use with GPflow 1.

For Bayesian optimization using GPflow 2, please see Trieste, a joint effort with Secondmind.

GPflowOpt

GPflowOpt is a Python package for Bayesian optimization using GPflow, built on TensorFlow. It was initiated and is currently maintained by Joachim van der Herten and Ivo Couckuyt. The full list of contributors (in alphabetical order) is Ivo Couckuyt, Tom Dhaene, James Hensman, Nicolas Knudde, Alexander G. de G. Matthews and Joachim van der Herten. Special thanks also to all GPflow contributors, as this package could not exist without their effort.

Install

The easiest way to install GPflowOpt involves cloning this repository and running

pip install . --process-dependency-links

in the source directory. This also installs all required dependencies (including TensorFlow, if needed). For more detailed installation instructions, see the documentation.
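
A quick way to verify the installation afterwards is a short import check (a sketch; it only builds a trivial domain using the public API shown elsewhere on this page):

import gpflowopt
from gpflowopt.domain import ContinuousParameter

# A one-dimensional domain; printing it should succeed without errors.
domain = ContinuousParameter('x', 0, 1)
print(domain)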

Contributing

If you are interested in contributing to this open source project, contact us through an issue on this repository. For more information, see the notes for contributors.

Citing GPflowOpt

To cite GPflowOpt, please reference the preliminary arXiv paper. Sample BibTeX is given below:

@ARTICLE{GPflowOpt2017,
   author = {Knudde, Nicolas and {van der Herten}, Joachim and Dhaene, Tom and Couckuyt, Ivo},
    title = "{{GP}flow{O}pt: {A} {B}ayesian {O}ptimization {L}ibrary using Tensor{F}low}",
  journal = {arXiv preprint -- arXiv:1711.03845},
  year    = {2017},
  url     = {https://arxiv.org/abs/1711.03845}
}
Comments
  • GPflow 1.0

    Following up on #86, this is the development of a 1.0-compatible GPflowOpt version. It is far from done; the biggest difficulty (Acquisition) unfortunately still lies ahead.

    At the same time, lots of tests are affected: I'm reworking them to use some of the cool pytest features. When this work is over, I hope to do another PR to improve testing further (split out computationally demanding tests into system tests, and use mock to test things which currently trigger BO runs and model optimizations).
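
    As one illustration of the mocking idea, a hypothetical test that stubs out the expensive model fits (the fixture name and the patch target are assumptions, not the actual suite):

    import numpy as np
    from unittest import mock

    def test_bo_bookkeeping_without_model_fits(optimizer):
        # Patching the model-optimization step away keeps the test fast and
        # exercises only the BO loop itself (the patch target is a guess).
        with mock.patch('gpflowopt.acquisition.Acquisition._optimize_models'):
            result = optimizer.optimize(lambda X: np.sum(X**2, axis=1)[:, None],
                                        n_iter=2)
        assert result.success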

    do not merge yet Discussion 
    opened by javdrher 17
  • Cholesky failures due to inappropriate initial hyperparameters

    As mentioned in #4, tests often fail (mostly on Python 2.7) due to Cholesky decomposition errors. At first I thought this was mostly caused by updating the data and calling optimize() again, but resetting the hyperparameters wasn't working all the time. Increasing the likelihood variance sometimes helps slightly but isn't very robust either.

    Right now the tests specify lengthscales for the initial model and apply a hyperprior on the kernel variance. Each BO iteration, the hyperparameters supplied with the initial model are applied as a starting point. In addition, restarts are applied by randomizing the Params. This approach made things a lot more stable but isn't perfect yet. Especially in longer BO runs, reverting to the supplied lengthscales each time ultimately causes crashes.

    Some things we may consider:

    • Normalizing the input/output data. I tested this a bit; it didn't solve the issue. Additionally, the model hyperparameters lose some interpretability. Note that I think we will ultimately need this for PES anyway.
    • Add a callback for re-configuring hyperparameters. Instead of reverting to the initially supplied hypers each iteration, this function is called and configures the initial state. I think for more complex modeling approaches this is ultimately required, but for simple scenarios with GPR this has to work automatically.
    • Applying hyperpriors is going to be important.

    I'd love to hear thoughts on how to improve this.
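
    For reference, a minimal sketch of the hyperprior idea in the GPflow 0.x-style API used elsewhere on this page (the Gamma shapes and the data are illustrative stand-ins, not tuned values):

    import numpy as np
    import gpflow

    X, Y = np.random.rand(20, 2), np.random.rand(20, 1)  # stand-in data
    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))

    # Hyperpriors pull the optimizer away from degenerate hyperparameter
    # regions that tend to trigger Cholesky failures.
    model.kern.variance.prior = gpflow.priors.Gamma(2., 1.)
    model.kern.lengthscales.prior = gpflow.priors.Gamma(3., 1.)

    # A lower bound on the likelihood variance acts as jitter.
    model.likelihood.variance.transform = gpflow.transforms.Log1pe(1e-3)
    model.optimize()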

    help wanted 
    opened by javdrher 14
  • Question regarding getting started example from documentation

    Good morning guys,

    I am currently trying to get GPflowOpt up and running for an optimization problem. Naturally, I first tried the example provided in the documentation. Now, while I do get the same result as in the documentation, I am a bit puzzled about the function evaluations the optimizer is choosing. To be more specific, the optimizer always chooses to evaluate the function at the point [0.0, 0.5] in all 15 iterations. I am probably overlooking something, as this does not seem to be the desired behavior, right? The optimizer does not seem to be really optimizing. Can anyone point out the mistake I made while setting up the problem? I am pretty sure I followed the instructions of the example in the documentation to the letter.

    This is the code that I am running:

    import numpy as np
    from gpflowopt.domain import ContinuousParameter
    import gpflow
    from gpflowopt.bo import BayesianOptimizer
    from gpflowopt.design import LatinHyperCube
    from gpflowopt.acquisition import ExpectedImprovement
    #from gpflowopt.optim import SciPyOptimizer
    
    def fx(X):
        X = np.atleast_2d(X)
        result = np.sum(np.square(X), axis=1)[:, None]
        print("X: {}".format(X))
        print("fx: {}".format(result))
        return result
    
    
    domain = ContinuousParameter('x1', -2, 2) + ContinuousParameter('x2', -1, 2)
    
    # Use standard Gaussian process Regression
    lhd = LatinHyperCube(21, domain)
    X = lhd.generate()
    Y = fx(X)
    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
    model.kern.lengthscales.transform = gpflow.transforms.Log1pe(1e-3)
    
    # Now create the Bayesian Optimizer
    alpha = ExpectedImprovement(model)
    optimizer = BayesianOptimizer(domain, alpha)
    
    # Run the Bayesian optimization
    #with optimizer.silent():
    r = optimizer.optimize(fx, n_iter=15)
    print(r)
    

    And this is the output I am seeing:

    python try_out_gpflowopt.py 
    2017-12-11 09:28:09.790044: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
    X: [[ 0.2   0.65]
     [ 0.   -0.1 ]
     [-0.8   0.5 ]
     [ 0.4   1.4 ]
     [-0.6   1.25]
     [ 1.    0.05]
     [ 1.2   0.8 ]
     [-1.   -0.25]
     [ 0.8  -0.7 ]
     [ 1.4   1.55]
     [-1.6   1.1 ]
     [-1.8   0.35]
     [-0.2  -0.85]
     [ 1.8   0.2 ]
     [-0.4   1.85]
     [ 0.6   2.  ]
     [ 2.    0.95]
     [ 1.6  -0.55]
     [-1.4   1.7 ]
     [-1.2  -1.  ]
     [-2.   -0.4 ]]
    fx: [[ 0.4625]
     [ 0.01  ]
     [ 0.89  ]
     [ 2.12  ]
     [ 1.9225]
     [ 1.0025]
     [ 2.08  ]
     [ 1.0625]
     [ 1.13  ]
     [ 4.3625]
     [ 3.77  ]
     [ 3.3625]
     [ 0.7625]
     [ 3.28  ]
     [ 3.5825]
     [ 4.36  ]
     [ 4.9025]
     [ 2.8625]
     [ 4.85  ]
     [ 2.44  ]
     [ 4.16  ]]
    Warning: optimization restart 4/5 failed
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ -3.76012797e-10   4.99988509e-01]]
    fx: [[ 0.24998851]]
    Warning: optimization restart 1/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: optimization restart 4/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: optimization restart 3/5 failed
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: optimization restart 3/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: optimization restart 3/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
         fun: array([ 0.01])
     message: 'OK'
        nfev: 15
     success: True
           x: array([[ 0. , -0.1]])
    

    For the sake of completeness, these are the steps I took to set up GPflowOpt:

    1. Created new conda environment based on Python 3.5
    2. Cloned the gpflowopt repo
    3. Ran pip install . --process-dependency-links

    BTW, this is the first issue I've ever created on GitHub, so please forgive me if I am violating any conventions, and please let me know if I left out crucial information.

    opened by jbi35 9
  • Overflow warnings

    During calls to optimize(), sometimes UserWarnings pop up:

    /home/javdrher/.virtualenvs/gpflowopt/lib/python3.5/site-packages/GPflow/transforms.py:129: RuntimeWarning: overflow encountered in exp
      result = np.log(1. + np.exp(x)) + self._lower

    Typically this is quite harmless; if it's really causing trouble, it's usually followed by a Cholesky decomposition exception. However, these warnings mess up output, specifically in documentation notebooks. I was thinking of silencing the warnings; any reason not to?
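
    For context, this is roughly how such warnings could be silenced around a run with the standard warnings module (a sketch; optimizer and fx stand in for whatever the calling code defines):

    import warnings

    # Scope the filter narrowly so unrelated warnings still surface.
    with warnings.catch_warnings():
        warnings.filterwarnings('ignore', message='overflow encountered in exp',
                                category=RuntimeWarning)
        result = optimizer.optimize(fx, n_iter=15)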

    question 
    opened by javdrher 9
  • Bug fix in pareto.py, Pareto::divide_conquer_nd

    The algorithm in Pareto::divide_conquer_nd fails when two points in the Pareto set have the same value in a certain dimension. An example is included in the modified pareto.py: the Pareto set d21 contains three points, two of which have a value of 2.0 in the first dimension. Ordering the three points in different ways results in different values of the hypervolume (28 and 32), which are all wrong (it should be 29).

    The issue is in the dominance test associated with the _is_test_required method and pseudo_pf. The array pseudo_pf assigns different ranks to the same values. Therefore, by reordering the Pareto set in the test case, different pseudo Pareto sets are generated for the same Pareto set.

    I found two ways to fix this. One is to fix pseudo_pf by sorting the Pareto set such that equal values are assigned the same rank (e.g. using scipy.stats.rankdata), as sketched below. The other is to fix the dominance test by checking the actual Pareto set. The first one leads to more iterations in the algorithm, so I implemented the second approach in pareto.py with a minimal modification, although some simplifications of the code are possible.
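
    To illustrate the first approach, here is a hypothetical helper that builds a pseudo front in which tied coordinate values share a rank (a sketch, not the patch actually applied to pareto.py):

    import numpy as np
    from scipy.stats import rankdata

    def pseudo_pf_dense(pareto_set):
        # method='dense' maps ties to the same rank, so reordering points
        # with duplicate coordinates no longer changes the pseudo front.
        return np.vstack([rankdata(col, method='dense') - 1
                          for col in pareto_set.T]).T

    # Two points share the value 2.0 in the first dimension, as in d21.
    points = np.array([[2.0, 1.0, 3.0],
                       [2.0, 3.0, 1.0],
                       [1.0, 2.0, 4.0]])
    print(pseudo_pf_dense(points))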

    opened by smanist 5
  • when installing GPflowOpt it is downgrading GPflow from 1.2 to 0.4

    I installed the latest GPflow version (1.2) using pip. Later, when I tried to install GPflowOpt using

    pip install git+https://github.com/GPflow/GPflowOpt.git

    it downgraded GPflow from 1.2 to 0.4.

    opened by pullanagari 5
  • Is GPflowOpt compatible anymore with GPflow functions?

    I just downloaded GPflowOpt, yet nothing can run due to slight changes made in GPflow. For example, there is now a 'core' subfolder and the AutoFlow function seems to have changed. I have tried to apply some changes on my own (it is not hard to change 'from gpflow.param import DataHolder, AutoFlow' to two separate imports from their correct modules, params and core), but note that @AutoFlow calls now need to be @gpflow.autoflow. I spent several hours changing this for pretty much every function/class.

    Yet now it seems certain classes have also changed significantly: 'Parameterized' no longer has the attribute 'highest_parent', something SciPyOptimizer needs.

    At this point, not a single thing can be called from GPflowOpt without error.

    opened by grahamski2323 5
  • Max-Value Entropy Search

    I implemented the recent acquisition function Max-Value Entropy Search from:

    Wang, Z. & Jegelka, S.. (2017). Max-value Entropy Search for Efficient Bayesian Optimization. Proceedings of the 34th International Conference on Machine Learning, in PMLR 70:3627-3635

    We named it Min-Value Entropy Search because the GPflow framework seeks the minimum of the function. There is a notebook evaluating the method on the Shekel function, and it seems to perform well.
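
    Presumably usage mirrors the other acquisitions; a hedged sketch (assuming the class is exposed as gpflowopt.acquisition.MinValueEntropySearch and takes the model and the domain, which the PR text does not spell out):

    from gpflowopt.acquisition import MinValueEntropySearch
    from gpflowopt.bo import BayesianOptimizer

    # model and domain as in the standard ExpectedImprovement examples.
    acquisition = MinValueEntropySearch(model, domain)
    optimizer = BayesianOptimizer(domain, acquisition)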

    opened by nknudde 5
  • MGP

    In this pull request I implemented the approximately marginalized GP, as indicated in issue #39. It currently supports multi-output GPs. A notebook and some tests are included.

    opened by nknudde 5
  • GPR works, VGP doesn't

    To improve the speed of optimisation, I replaced GPR with VGP as follows:

    domain = np.sum([GPflowOpt.domain.ContinuousParameter(f'mux{i}', mm[i], mx[i]) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'muy{i}', mm[i+7], mx[i+7]) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'sigmax{i}', 1e-7, 1.) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'sigmay{i}', 1e-7, 1.) for i in range(7)])
    domain += GPflowOpt.domain.ContinuousParameter('offset', endo * 0.7, endo * 1.3)
    design = GPflowOpt.design.RandomDesign(500, domain)
    X = design.generate()
    Y = np.vstack([obj(x.reshape(1, -1)) for x in X])
    model = GPflow.vgp.VGP(X, Y, GPflow.kernels.RBF(29, lengthscales=X.std(axis=0)), likelihood=GPflow.likelihoods.Gaussian())
    acquisition = GPflowOpt.acquisition.ExpectedImprovement(model)
    opt = GPflowOpt.optim.StagedOptimizer([GPflowOpt.optim.MCOptimizer(domain, 500), GPflowOpt.optim.SciPyOptimizer(domain)])
    optimizer = GPflowOpt.BayesianOptimizer(domain, acquisition, optimizer=opt)
    optimizer.optimize(obj, n_iter=500)

    GPR works, but with VGP I receive the following error:

    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](.../sub_1_grad/Shape, .../sub_1_grad/Shape_1)]]
    2017-07-18 23:03:28.798171: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [501,1] vs. [500,1]
    Warning: optimization restart 1/5 failed
    2017-07-18 23:03:28.898935: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](.../add_1_grad/Shape, .../add_1_grad/Shape_1)]]
    (the same "Incompatible shapes: [500,1] vs. [501,1]" message and node dump repeat three more times with later timestamps)
    Warning: optimization restart 2/5 failed

    I'm using master GPflow and GPflowOpt on TensorFlow 1.2 and Python 3.6.

    Thanks.

    bug 
    opened by mccajm 5
  • Avoid (some) duplicate optimizes

    Following #52, here is some code which gets rid of one optimize call (in case no initial design is specified). The PR also includes a context manager which can be used to suspend all optimizes. This is mostly useful for the lower-level API.

    Note: in the following release the data mechanism will undergo some changes, and enabling the scaling should be moved to avoid scaling again.

    enhancement do not merge yet 
    opened by javdrher 4
  • Issue: Install package gpflowopt

    Hello, can you help me solve this issue?

    ERROR: Could not find a version that satisfies the requirement GPflow==0.5.0 (from gpflowopt) (from versions: 1.4.1.linux-x86_64, 1.5.0.linux-x86_64, 1.0.0, 1.1.0, 1.1.1, 1.2.0, 1.3.0, 1.4.1, 1.5.0, 1.5.1, 2.0.0rc1, 2.0.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.2.0, 2.2.1, 2.3.0, 2.3.1, 2.4.0, 2.5.1, 2.5.2)
    ERROR: No matching distribution found for GPflow==0.5.0

    I tried to install it in Google Colab and Spyder but it is not working.

    opened by SamirLamin 2
  • Coupled or decoupled constrained BO?

    I am very new to the BO domain, am working on constrained BO, and found this wonderful tool that supports constrained BO. As I understand from the code, the acquisition functions for the objective and the constraint are multiplied. I have a question about that: is this constrained BO then considered coupled?
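
    For reference, the multiplication referred to above looks roughly like this in GPflowOpt (a sketch; objective_model and constraint_model are assumed to be pre-built GPflow models):

    from gpflowopt.acquisition import ExpectedImprovement, ProbabilityOfFeasibility

    # Multiplying acquisitions yields a joint criterion: EI on the objective
    # weighted by the probability that the constraint is satisfied.
    joint = ExpectedImprovement(objective_model) * ProbabilityOfFeasibility(constraint_model)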

    opened by pallavimitra 0
  • Ask/tell interface

    Is there an ask/tell interface? If not, is there a workaround?

    I'd like to initialize the optimizer with already known function evaluations. Likewise, I'd like to query the next data point that should be evaluated, according to the acquisition function.
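
    Not an official API, but a rough workaround sketch using the lower-level pieces (it assumes Acquisition.data, Acquisition.set_data and evaluate_with_gradients behave as in the GPflowOpt source; treat the names as unverified):

    import numpy as np
    from gpflowopt.optim import SciPyOptimizer

    def ask(acquisition, domain):
        # Propose the next point by maximizing the acquisition, mirroring
        # what BayesianOptimizer does internally.
        def inverse_acquisition(x):
            return tuple(-v for v in
                         acquisition.evaluate_with_gradients(np.atleast_2d(x)))
        return SciPyOptimizer(domain).optimize(inverse_acquisition).x

    def tell(acquisition, X_new, Y_new):
        # Fold already-known evaluations into the underlying model.
        X = np.vstack((acquisition.data[0], np.atleast_2d(X_new)))
        Y = np.vstack((acquisition.data[1], np.atleast_2d(Y_new)))
        acquisition.set_data(X, Y)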

    opened by moi90 2
  • Discrete variables optimization

    Hey there, I found your project, which seems promising for a problem I am currently working on. However, as far as I can see, the option to include discrete variables in the optimization is not yet implemented? Is this correct, and if so, is there any development in this direction currently going on?

    opened by HolmKiilerich 2
  • Can I use PyTorch within the objective function?

    Hi, before I start writing my code with GPflowOpt, I need to check with you whether I can use a PyTorch model within the objective function. My objective function needs the decoder of a VAE defined using PyTorch. In addition to other essential parameters, I want to pass this model as one parameter to the objective function. I understand that GPflowOpt is based on TF, but I am not sure whether the objective can be any function independent of TF. Thanks in advance.
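
    Since the objective handed to BayesianOptimizer is a plain Python callable operating on NumPy arrays, wrapping a PyTorch module inside it should work; a hedged sketch (decoder and the score are hypothetical placeholders):

    import numpy as np
    import torch

    def make_objective(decoder):
        def objective(X):
            X = np.atleast_2d(X)
            with torch.no_grad():
                z = torch.from_numpy(X).float()
                out = decoder(z)               # PyTorch forward pass
                score = out.pow(2).sum(dim=1)  # hypothetical scalar score
            return score.numpy()[:, None]      # shape (n, 1) as GPflowOpt expects
        return objective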

    opened by yifeng-li 0
Releases (v0.1.0)
  • v0.1.0 (Sep 11, 2017)

    Initial version of the GPflowOpt framework, including some basic acquisition functions and support for standard Bayesian Optimization strategies.

Owner: GPflow