Magenta: Music and Art Generation with Machine Intelligence

Overview

Magenta is a research project exploring the role of machine learning in the process of creating art and music. Primarily this involves developing new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it's also an exploration in building smart tools and interfaces that allow artists and musicians to extend (not replace!) their processes using these models. Magenta was started by some researchers and engineers from the Google Brain team, but many others have contributed significantly to the project. We use TensorFlow and release our models and tools as open source in this GitHub repository. If you'd like to learn more about Magenta, check out our blog, where we post technical details. You can also join our discussion group.

This is the home for our Python TensorFlow library. To use our models in the browser with TensorFlow.js, head to the Magenta.js repository.

Getting Started

Take a look at our colab notebooks for various models, including one on getting started. Magenta.js is also a good resource for models and demos that run in the browser. This and more, including blog posts and Ableton Live plugins, can be found at https://magenta.tensorflow.org.

Magenta Repo

Installation

Magenta maintains a pip package for easy installation. We recommend using Anaconda to install it, but it can work in any standard Python environment. We support Python 3 (>= 3.5). These instructions will assume you are using Anaconda.
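
For example, a dedicated Conda environment might be created and activated before installing (the environment name and Python version below are only illustrative; on older Conda releases the activation command is source activate magenta, as used later in these instructions):

conda create -n magenta python=3.7
conda activate magenta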

Automated Install (w/ Anaconda)

If you are running Mac OS X or Ubuntu, you can try using our automated installation script. Just paste the following commands into your terminal.

curl https://raw.githubusercontent.com/tensorflow/magenta/main/magenta/tools/magenta-install.sh > /tmp/magenta-install.sh
bash /tmp/magenta-install.sh

After the script completes, open a new terminal window so the environment variable changes take effect.

The Magenta libraries are now available for use within Python programs and Jupyter notebooks, and the Magenta scripts are installed in your path!

Note that you will need to run source activate magenta to use Magenta every time you open a new terminal window.

Manual Install (w/o Anaconda)

If the automated script fails for any reason, or you'd prefer to install by hand, follow these steps.

Install the Magenta pip package:

pip install magenta

NOTE: In order to install the rtmidi package that we depend on, you may need to install headers for some sound libraries. On Ubuntu Linux, this command should install the necessary packages:

sudo apt-get install build-essential libasound2-dev libjack-dev portaudio19-dev

On Fedora Linux, use

sudo dnf group install "C Development Tools and Libraries"
sudo dnf install alsa-lib-devel jack-audio-connection-kit-devel portaudio-devel

The Magenta libraries are now available for use within Python programs and Jupyter notebooks, and the Magenta scripts are installed in your path!
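
A quick way to confirm the installation is to import the package and print its version (this assumes the installed release exposes magenta.__version__):

python -c "import magenta; print(magenta.__version__)"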

Using Magenta

You can now train our various models and use them to generate music, audio, and images. You can find instructions for each of the models by exploring the models directory.
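
For example, generating melodies with a pretrained Melody RNN bundle looks roughly like this (flag names follow the Melody RNN model's own documentation; the bundle file and output directory are placeholders you would substitute):

melody_rnn_generate \
--config=attention_rnn \
--bundle_file=/path/to/attention_rnn.mag \
--output_dir=/tmp/melody_rnn/generated \
--num_outputs=10 \
--num_steps=128 \
--primer_melody="[60]"

Each model's README documents its own flags, so treat this as a sketch rather than a canonical invocation.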

Development Environment

If you want to develop on Magenta, you'll need to set up the full Development Environment.

First, clone this repository:

git clone https://github.com/tensorflow/magenta.git

Next, install the dependencies by changing to the base directory and executing the setup command:

pip install -e .

You can now edit the files and run scripts by calling Python as usual. For example, this is how you would run the melody_rnn_generate script from the base directory:

python magenta/models/melody_rnn/melody_rnn_generate --config=...

You can also install the (potentially modified) package with:

pip install .

Before creating a pull request, please also test your changes with:

pip install pytest-pylint
pytest

PIP Release

To build a new version for pip, bump the version and then run:

python setup.py test
python setup.py bdist_wheel --universal
twine upload dist/magenta-N.N.N-py2.py3-none-any.whl
Comments
  • can't reproduce the result of GANsynth

    Hi, I have finished training with the recommended hyperparameters after 4 days, but the audio generated from the MIDI file is very poor and lacks diversity. I trained only on the Acoustic Subset and used TensorFlow 1.15.2. How can I reproduce the results from the GANSynth paper? @jesseengel

    opened by shansiliu95 30
  • MusicXML Parser for Magenta

    Lightweight MusicXML parser for Magenta with no dependencies. It handles uncompressed (.xml) and compressed (.mxl) files and translates a MusicXML file into NoteSequences, similar to how MIDI is imported (see the usage sketch after this entry).

    Because MusicXML differs from MIDI, different information is available to Magenta. For example, MusicXML does not contain MIDI CC data. However, MusicXML does contain other information (such as dynamic markings) which may be more useful in certain scenarios, such as using Magenta to generate sheet music (MusicXML files) in addition to MIDI files.

    This parser currently only supports input and does not support output to MusicXML files.

    Four public domain MusicXML files are included for unit testing.

    opened by jsawruk 27
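
    A minimal usage sketch for the parser described above, assuming it is exposed alongside the MIDI importer in magenta.music (the function name below is an assumption based on this description, not a confirmed API):

    import magenta.music as mm

    # Hypothetical usage: parse an uncompressed (.xml) or compressed (.mxl) MusicXML
    # file into a NoteSequence, mirroring mm.midi_file_to_sequence_proto for MIDI.
    sequence = mm.musicxml_file_to_sequence_proto('score.xml')
    print(len(sequence.notes))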
  • ValueError: Unable to get the Filesystem for path gs://magentadata/datasets/maestro/v1.0.0/maestro-v1.0.0_test.tfrecord

    I'm getting this error while trying to run the data generation / preprocessing step of the Score2Perf Music Transformer model on my computer. How can I fix it?

    opened by aletote 23
  • Tests failing with Bazel 0.2.3 and Tensorflow 0.9

    Hey all!

    I'm on Mac OS X attempting to build. I installed TensorFlow for Python 3. When I run the

    bazel test //magenta:all
    

    command, all 6 tests fail locally. The failure log for each says that Python can't import TensorFlow. I'm assuming the tests are automatically being run by my Mac's default Python (2.7.10, which doesn't have TensorFlow installed); is there a way to change that to Python 3?

    Sorry if this is a dumb question!

    opened by SJCaldwell 20
  •  Resource exhausted: OOM when allocating tensor with shape[256,1114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

    Hello,

    I'm running the latest version of MusicVAE from the repository on Ubuntu 18.04, CUDA 10.1, TensorFlow 2.2.0, with the hier-trio_16bar configuration, and I get the error below (I tried different batch sizes, even 1, and different learning rates, but the problem is the same). Do you know how to fix it?

    2020-06-09 21:01:27.365621: I tensorflow/core/common_runtime/bfc_allocator.cc:1010] Stats: Limit: 14684815360 InUse: 14684616704 MaxInUse: 14684815360 NumAllocs: 26588 MaxAllocSize: 181403648

    2020-06-09 21:01:27.365991: W tensorflow/core/common_runtime/bfc_allocator.cc:439] ****************************************************************************************************
    2020-06-09 21:01:27.366026: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at lstm_ops.cc:372 : Resource exhausted: OOM when allocating tensor with shape[256,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    2020-06-09 21:01:34.147376: W tensorflow/core/kernels/data/cache_dataset_ops.cc:794] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to dataset.cache().take(k).repeat(). You should use dataset.take(k).cache().repeat() instead.
    Traceback (most recent call last):
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1365, in _do_call
        return fn(*args)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1350, in _run_fn
        target_list, run_metadata)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1443, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
    (0) Resource exhausted: OOM when allocating tensor with shape[256,1114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
      [[{{node swap_in_core_decoder_1/core_decoder_0/decoder/while/BasicDecoderStep/decoder/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell_13_0}}]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[add/_2901]]
    

    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    (1) Resource exhausted: OOM when allocating tensor with shape[256,1114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node swap_in_core_decoder_1/core_decoder_0/decoder/while/BasicDecoderStep/decoder/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell_13_0}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    0 successful operations. 0 derived errors ignored.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "music_vae_train.py", line 340, in <module>
        console_entry_point()
      File "music_vae_train.py", line 336, in console_entry_point
        tf.app.run(main)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
        _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/absl/app.py", line 299, in run
        _run_main(main, args)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
        sys.exit(main(argv))
      File "music_vae_train.py", line 331, in main
        run(configs.CONFIG_MAP)
      File "music_vae_train.py", line 312, in run
        task=FLAGS.task)
      File "music_vae_train.py", line 211, in train
        is_chief=is_chief)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tf_slim/training/training.py", line 551, in train
        loss = session.run(train_op, run_metadata=run_metadata)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 778, in run
        run_metadata=run_metadata)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1283, in run
        run_metadata=run_metadata)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1384, in run
        raise six.reraise(*original_exc_info)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/six.py", line 703, in reraise
        raise value
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1369, in run
        return self._sess.run(*args, **kwargs)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1442, in run
        run_metadata=run_metadata)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1200, in run
        return self._sess.run(*args, **kwargs)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 958, in run
        run_metadata_ptr)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1181, in _run
        feed_dict_tensor, options, run_metadata)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1359, in _do_run
        run_metadata)
      File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1384, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
    (0) Resource exhausted: OOM when allocating tensor with shape[256,1114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
      [[{{node swap_in_core_decoder_1/core_decoder_0/decoder/while/BasicDecoderStep/decoder/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell_13_0}}]]
    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[add/_2901]]
    

    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    (1) Resource exhausted: OOM when allocating tensor with shape[256,1114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node swap_in_core_decoder_1/core_decoder_0/decoder/while/BasicDecoderStep/decoder/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell_13_0}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    0 successful operations. 0 derived errors ignored.

    opened by SashaBurashnikova 14
  • losing instruments information when converting midi to note sequence

    Hello, I'm doing some manipulations on MIDI files using Magenta. I convert a MIDI file to a NoteSequence using

    import magenta.music as mm
    mm.midi_file_to_sequence_proto(midi)
    

    The problem is that if I have unused MIDI channels (let's say 5 and 6) and used MIDI channels (let's say 7 and 8), the conversion shifts the used channels into the empty ones, so I lose the true instrument setup, and when I convert it back to MIDI and play it with a MIDI player, the instruments are mixed up. Any suggestion on how to avoid this incorrect conversion? (It only happens when there are unused MIDI channels in the MIDI file.) Thanks!

    opened by DavidPrimor 14
  • Generate melody with basic_rnn

    Hi,

    I'm trying to generate a basic melody using the code shown in the tutorial, and I've got 2 questions. Note I'm a beginner, hence my questions may be really simple.

    1. When I type in Python: BUNDLE_PATH=<E:\Storage\Documents\FEI_AI\magenta\BUNDLES>, it returns a syntax error pointing at '<'.

    So I used: BUNDLE_PATH='E:\Storage\Documents\FEI_AI\magenta\BUNDLES' and it proceeds without error. But is that correct? The same goes for the next line involving CONFIG.

    2. After typing melody_rnn_generate \ I typed --config=${CONFIG}, and it returns an invalid syntax error pointing at '$'.

    Need advice, thank you!

    opened by EvilMudkip 14
  • GANSynth KeyError: 'tfds_data_dir'

    Hi, for the past two days the GANSynth Colab demo has been giving me an error during Environment Setup:

    Load model from /content/gansynth/acoustic_only/stage_00012/./model.ckpt-11000000
    ---------------------------------------------------------------------------
    KeyError                                  Traceback (most recent call last)
    <ipython-input-1-bad1649e76e0> in <module>()
         65 tf.reset_default_graph()
         66 flags = lib_flags.Flags({'batch_size_schedule': [BATCH_SIZE]})
    ---> 67 model = lib_model.Model.load_from_path(CKPT_DIR, flags)
         68 
         69 # Helper functions
    
    4 frames
    /usr/local/lib/python3.6/dist-packages/magenta/models/gansynth/lib/model.py in load_from_path(cls, path, flags)
        175     batch_size = flags.get('eval_batch_size',
        176                            train_util.get_batch_size(stage_id, **flags))
    --> 177     model = cls(stage_id, batch_size, flags)
        178     model.saver.restore(model.sess, ckpt)
        179     return model
    
    /usr/local/lib/python3.6/dist-packages/magenta/models/gansynth/lib/model.py in __init__(self, stage_id, batch_size, config)
        191     """
        192     data_helper = data_helpers.registry[config['data_type']](config)
    --> 193     real_images, real_one_hot_labels = data_helper.provide_data(batch_size)
        194 
        195     # gen_one_hot_labels = real_one_hot_labels
    
    /usr/local/lib/python3.6/dist-packages/magenta/models/gansynth/lib/data_helpers.py in provide_data(self, batch_size)
         64     with tf.name_scope('inputs'):
         65       with tf.device('/cpu:0'):
    ---> 66         dataset = self.dataset.provide_dataset()
         67         dataset = dataset.shuffle(buffer_size=1000)
         68         dataset = dataset.map(self._map_fn, num_parallel_calls=4)
    
    /usr/local/lib/python3.6/dist-packages/magenta/models/gansynth/lib/datasets.py in provide_dataset(self)
        116       return wave, one_hot_label, label, example['instrument']['source']
        117 
    --> 118     dataset = self._get_dataset_from_tfds()
        119     dataset = dataset.map(
        120         _parse_nsynth, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    
    /usr/local/lib/python3.6/dist-packages/magenta/models/gansynth/lib/datasets.py in _get_dataset_from_tfds(self)
         65       dataset = tfds.load(
         66           'nsynth/gansynth_subset:2.3.*',
    ---> 67           data_dir=self._config['tfds_data_dir'],
         68           split=tfds.Split.TRAIN,
         69           download=False)
    
    KeyError: 'tfds_data_dir'
    

    How can I resolve this?

    opened by Junpyer 13
  • Where to get model.ckpt?

    I'm trying to follow the instructions for arbitrary style transfer here, and they say that I need to download a pretrained model from this link. However, what I get is a tar.gz file, and when I extract it, I only get the following files:

    $ tar xvzf arbitrary_style_transfer.tar.gz
    $ ls arbitrary_style_transfer
    model.ckpt-data-00000-of-000001 model.ckpt.index model.ckpt.meta
    

    Which one here should I use?

    opened by ljvmiranda921 13
  • Windows 10 Anaconda Install - Bazel fails

    Hi, I'm following the instructions on https://github.com/tensorflow/magenta for setting up a development environment.

    I've installed Bazel without any problems. I've installed Anaconda without any problems. I've created a tensorflow env and installed it.

    But when running bazel test //magenta/... I get

    ERROR: E:/music/magenta/magenta/magenta/models/arbitrary_image_stylization/BUILD:48:1: PythonZipper magenta/models/arbitrary_image_stylization/arbitrary_image_stylization_with_weights.zip failed (Exit -1)

    INFO: Elapsed time: 1.734s, Critical Path: 0.15s
    INFO: 0 processes.
    FAILED: Build did NOT complete successfully
    //magenta/common:beam_search_test NO STATUS
    //magenta/common:concurrency_test NO STATUS
    //magenta/common:nade_test NO STATUS
    etc etc

    Any clues?

    opened by sonicviz 13
  • Midi decoding error: Bad header in MIDI file

    Following the tutorial, when converting MIDI files to note sequences, I encounter the following error:

    ERROR:tensorflow:Midi decoding error <type 'exceptions.TypeError'>: Bad header in MIDI file

    when running

    bazel run //magenta/scripts:convert_midi_dir_to_note_sequences -- \
    --midi_dir=$MIDI_DIRECTORY \
    --output_file=$SEQUENCES_TFRECORD \
    --recursive
    

    No tfrecord file is generated.

    I've tried many different MIDI files from the midiworld website and the same error remains, so I guess the problem might not be with the MIDI files themselves.

    Environment: Bazel 0.30, TensorFlow 0.9, in a virtual environment.

    opened by zuoxingdong 13
  • What is the status of magenta as a python module?

    Greetings!

    I've recently started learning about Magenta; in particular, I'm following a book by Packt and I'm quite excited. I am, however, very confused about the status of Magenta.

    Let me elaborate on the confusion:

    The README in this repository says that it's inactive and that I should check the website for active projects. The website (Python section) says that Magenta is active and links to github.com/tensorflow/magenta, which redirects to this repository, which says that it's inactive. I'm getting vague Liar's Paradox vibes.

    This repository seems to be the upstream for whatever I get with pip install magenta, and it is still getting commits, which seems to imply that Magenta as a Python module is alive, as the website says. However, in issue 2003 you say that Magenta is being developed, just not in this repository.

    So what does this all mean?

    Is Magenta as a Python module alive or dead? It currently does not work on my main computer; is this something that is likely to change at some point? Does it make sense to create issues like I did?

    If the Python module is still being developed but this repository is inactive, which is the active one?

    If the Python module is not being developed but something is taking its place (please don't say JavaScript ❤️), what is the new project that I can use to train an AI on my music and have it make new material from it? And is there a book that I could buy to learn how to do it?

    Thanks for all your work!

    opened by xstasi 0
  • cannot convert midi file to note sequence.

    I'm trying to convert MIDI to a note sequence, but it is not working. I've installed Magenta 2.1.4 and TensorFlow 2.11.0.

    convert_dir_to_note_sequences \
    --input_dir= /Users/My_name/downloads/classic \
    --output_file=/Users/My_name/downloads/classic_out/noteseq/notesequences.tfrecord \
    --recursive
    

    I try to run the code and I get this error:

    
    File "/Users/My_name/magenta-env/lib/python3.8/site-packages/resampy/interpn.py", line 73, in <module>
        @guvectorize(
    TypeError: guvectorize() missing 1 required positional argument: 'signature'
    
    
    opened by kimurapyusers 0
  • Cannot run on Apple silicon (m1/m2) because of problematic dependencies

    Magenta has the following problematic dependencies:

    • numpy == 1.21.6

    This version of numpy gives runtime errors for reasons I'm not completely sure of, such as:

    RuntimeError: module compiled against API version 0xf but this version of numpy is 0xe
    

    This runtime problem disappears when numpy is upgraded, but naturally others arise.

    • numba == 0.49.1

    This is the worst offender as it depends on an old version of llvmlite, which in turn depends on an old version of llvm (8.x) that does not support the newer Apple targets. This must be upgraded to a much newer, possibly the newest release, as even llvm 9.x is problematic. llvm 11.x must be supported in order to run on this platform.

    • tensorflow == 2.9.1

    This is tensorflow-macos on Apple silicon, but I'm not sure whether you can add dependency alternatives to include it. A minor problem anyway, as it can be overridden by editing setup.py and installing locally.


    numba and numpy need upgrading, though; until then there will be no way to run Magenta on new Macs.

    Thanks!

    opened by xstasi 1
  • Errors when trying to install magenta

    I get error messages when building wheels for various modules (llvmlite, numba, python-rtmidi). I'm using Python 3.9 and have already tried reinstalling everything and installing other package versions. It just doesn't work. Can you help me, please?

    (I removed some less important output so as not to exceed the maximum character limit here.)

    Building wheels for collected packages: numba, python-rtmidi, llvmlite
      Building wheel for numba (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py bdist_wheel did not run successfully.
      │ exit code: 1
      ╰─> [826 lines of output]
          TBB not found
          OpenMP disabled
          running bdist_wheel
          running build
          got version from file C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\numba_9c8741e8713142ffb7ba6d62968ea471\numba/_version.py {'version': '0.49.1', 'full': 'd2cac8597ad2aa4074147d9c7595f5b5e9919901'}
          running build_py
          creating build
        
          running build_ext
          No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
          building 'numba._dynfunc' extension
          error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      ERROR: Failed building wheel for numba
      Running setup.py clean for numba
      Building wheel for python-rtmidi (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py bdist_wheel did not run successfully.
      │ exit code: 1
      ╰─> [13 lines of output]
          running bdist_wheel
          running build
          running build_py
          creating build
          creating build\lib.win-amd64-cpython-39
          creating build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\midiconstants.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\midiutil.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\release.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\__init__.py -> build\lib.win-amd64-cpython-39\rtmidi
          running build_ext
          building 'rtmidi._rtmidi' extension
          error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      ERROR: Failed building wheel for python-rtmidi
      Running setup.py clean for python-rtmidi
      Building wheel for llvmlite (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py bdist_wheel did not run successfully.
      │ exit code: 1
      ╰─> [40 lines of output]
          running bdist_wheel
          C:\Users\Dell\anaconda3\python.exe C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\llvmlite_4ce3f81337d447c18fde83ea427f20d5\ffi\build.py
          -- Selecting Windows SDK version  to target Windows 10.0.19044.
          CMake Error at CMakeLists.txt:3 (project):
            Failed to run MSBuild command:
    
              MSBuild.exe
    
            to get the value of VCTargetsPath:
    
              Das System kann die angegebene Datei nicht finden
    
    
    
          -- Configuring incomplete, errors occurred!
          See also "C:/Users/Dell/AppData/Local/Temp/tmp_9vh8mbn/CMakeFiles/CMakeOutput.log".
          CMake Error at CMakeLists.txt:3 (project):
            Generator
    
              Visual Studio 15 2017 Win64
    
            could not find any instance of Visual Studio.
    
    
    
          -- Configuring incomplete, errors occurred!
          See also "C:/Users/Dell/AppData/Local/Temp/tmp5dfq2vh1/CMakeFiles/CMakeOutput.log".
          Trying generator 'Visual Studio 14 2015 Win64'
          Trying generator 'Visual Studio 15 2017 Win64'
          Traceback (most recent call last):
            File "C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\llvmlite_4ce3f81337d447c18fde83ea427f20d5\ffi\build.py", line 192, in <module>
              main()
            File "C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\llvmlite_4ce3f81337d447c18fde83ea427f20d5\ffi\build.py", line 180, in main
              main_win32()
            File "C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\llvmlite_4ce3f81337d447c18fde83ea427f20d5\ffi\build.py", line 89, in main_win32
              generator = find_win32_generator()
            File "C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\llvmlite_4ce3f81337d447c18fde83ea427f20d5\ffi\build.py", line 85, in find_win32_generator
              raise RuntimeError("No compatible cmake generator installed on this machine")
          RuntimeError: No compatible cmake generator installed on this machine
          error: command 'C:\\Users\\Dell\\anaconda3\\python.exe' failed with exit code 1
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      ERROR: Failed building wheel for llvmlite
      Running setup.py clean for llvmlite
    Failed to build numba python-rtmidi llvmlite
    Installing collected packages: python-rtmidi, pygtrie, mido, llvmlite, keras, flatbuffers, tensorflow-estimator, sox, scipy, protobuf, promise, numba, keras-preprocessing, importlib_resources, imageio, etils, absl-py, tf-slim, tensorflow-probability, sk-video, resampy, mir-eval, matplotlib, googleapis-common-protos, dm-sonnet, tensorflow-metadata, librosa, tensorflow-datasets, tensorboard, note-seq, tensorflow, magenta
      Attempting uninstall: python-rtmidi
        Found existing installation: python-rtmidi 1.4.9
        Uninstalling python-rtmidi-1.4.9:
          Successfully uninstalled python-rtmidi-1.4.9
      Running setup.py install for python-rtmidi ... error
      error: subprocess-exited-with-error
    
      × Running setup.py install for python-rtmidi did not run successfully.
      │ exit code: 1
      ╰─> [15 lines of output]
          running install
          C:\Users\Dell\anaconda3\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
            warnings.warn(
          running build
          running build_py
          creating build
          creating build\lib.win-amd64-cpython-39
          creating build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\midiconstants.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\midiutil.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\release.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\__init__.py -> build\lib.win-amd64-cpython-39\rtmidi
          running build_ext
          building 'rtmidi._rtmidi' extension
          error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      Rolling back uninstall of python-rtmidi
      Moving to c:\users\dell\anaconda3\lib\site-packages\python_rtmidi-1.4.9.dist-info\
       from C:\Users\Dell\anaconda3\Lib\site-packages\~ython_rtmidi-1.4.9.dist-info
      Moving to c:\users\dell\anaconda3\lib\site-packages\rtmidi\
       from C:\Users\Dell\anaconda3\Lib\site-packages\~tmidi
    error: legacy-install-failure
    
    × Encountered error while trying to install package.
    ╰─> python-rtmidi
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for output from the failure.
    
    opened by Luka27 0
  • Installation error: PackagesNotFoundError & ResolutionImpossible

    I am using macOS Ventura.

    Using the conda install, I get this error during installation: [image]. After this I attempted the manual install, which gave me this error: [image].

    I don't believe downgrading to Python 3.6 would work, as the documentation says that versions >= 3.5 are supported (correct me if I am wrong!).

    Cheers

    opened by 50NNY1 1