Magenta: Music and Art Generation with Machine Intelligence

Overview

Magenta is a research project exploring the role of machine learning in the process of creating art and music. Primarily this involves developing new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it's also an exploration in building smart tools and interfaces that allow artists and musicians to extend (not replace!) their processes using these models. Magenta was started by some researchers and engineers from the Google Brain team, but many others have contributed significantly to the project. We use TensorFlow and release our models and tools in open source on this GitHub. If you’d like to learn more about Magenta, check out our blog, where we post technical details. You can also join our discussion group.

This is the home for our Python TensorFlow library. To use our models in the browser with TensorFlow.js, head to the Magenta.js repository.

Getting Started

Take a look at our colab notebooks for various models, including one on getting started. Magenta.js is also a good resource for models and demos that run in the browser. This and more, including blog posts and Ableton Live plugins, can be found at https://magenta.tensorflow.org.

Magenta Repo

Installation

Magenta maintains a pip package for easy installation. We recommend using Anaconda to install it, but it can work in any standard Python environment. We support Python 3 (>= 3.5). These instructions will assume you are using Anaconda.
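
If you prefer to manage the conda environment yourself, a typical setup looks something like the lines below; the environment name and Python version are illustrative choices, not requirements, and the automated script in the next section will create an environment for you:

conda create -n magenta python=3.7
source activate magenta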

Automated Install (w/ Anaconda)

If you are running Mac OS X or Ubuntu, you can try using our automated installation script. Just paste the following commands into your terminal.

curl https://raw.githubusercontent.com/tensorflow/magenta/master/magenta/tools/magenta-install.sh > /tmp/magenta-install.sh
bash /tmp/magenta-install.sh

After the script completes, open a new terminal window so the environment variable changes take effect.

The Magenta libraries are now available for use within Python programs and Jupyter notebooks, and the Magenta scripts are installed in your path!

Note that you will need to run source activate magenta to use Magenta every time you open a new terminal window.

Manual Install (w/o Anaconda)

If the automated script fails for any reason, or you'd prefer to install by hand, follow these steps.

Install the Magenta pip package:

pip install magenta

NOTE: In order to install the rtmidi package that we depend on, you may need to install headers for some sound libraries. On Ubuntu Linux, this command should install the necessary packages:

sudo apt-get install build-essential libasound2-dev libjack-dev portaudio19-dev

On Fedora Linux, use:

sudo dnf group install "C Development Tools and Libraries"
sudo dnf install alsa-lib-devel jack-audio-connection-kit-devel portaudio-devel

The Magenta libraries are now available for use within Python programs and Jupyter notebooks, and the Magenta scripts are installed in your path!
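
As a quick sanity check of the install, a short Python snippet along the following lines should run. It builds a tiny NoteSequence by hand and writes it out as a MIDI file. Treat this as an illustrative sketch: it assumes the note_seq package (a Magenta dependency) exposes NoteSequence and sequence_proto_to_midi_file at the top level, and the output path is an arbitrary example.

import note_seq

# Build a one-note sequence by hand.
sequence = note_seq.NoteSequence()
sequence.tempos.add(qpm=120)
sequence.notes.add(pitch=60, velocity=80, start_time=0.0, end_time=0.5)
sequence.total_time = 0.5

# Write the sequence out as a MIDI file.
note_seq.sequence_proto_to_midi_file(sequence, '/tmp/one_note.mid')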

Using Magenta

You can now train our various models and use them to generate music, audio, and images. You can find instructions for each of the models by exploring the models directory.
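
As one concrete example, generating melodies with a pretrained Melody RNN bundle typically looks something like the command below. The bundle path and flag values are placeholders for illustration; check the melody_rnn model README for the exact flags and for where to download the .mag bundle file.

melody_rnn_generate \
  --config=attention_rnn \
  --bundle_file=/path/to/attention_rnn.mag \
  --output_dir=/tmp/melody_rnn/generated \
  --num_outputs=10 \
  --num_steps=128 \
  --primer_melody="[60]"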

Development Environment

If you want to develop on Magenta, you'll need to set up the full Development Environment.

First, clone this repository:

git clone https://github.com/tensorflow/magenta.git

Next, install the dependencies by changing to the base directory and executing the setup command:

pip install -e .

You can now edit the files and run scripts by calling Python as usual. For example, this is how you would run the melody_rnn_generate script from the base directory:

python magenta/models/melody_rnn/melody_rnn_generate --config=...

You can also install the (potentially modified) package with:

pip install .

Before creating a pull request, please also test your changes with:

pip install pytest-pylint
pytest

PIP Release

To build a new version for pip, bump the version and then run:

python setup.py test
python setup.py bdist_wheel --universal
twine upload dist/magenta-N.N.N-py2.py3-none-any.whl
Comments
  • can't reproduce the result of GANsynth

    Hi, I finished training with the recommended hyperparameters after 4 days, but the sound generated from the MIDI file is very poor and lacks diversity. I trained only on the Acoustic Subset and used TensorFlow 1.15.2. How can I reproduce the results from the GANSynth paper? @jesseengel

    opened by shansiliu95 30
  • MusicXML Parser for Magenta

    Lightweight MusicXML parser for Magenta with no dependencies. Handles uncompressed (.xml) and compressed (.mxl) files. Translates MusicXML file into NoteSequences similar to how MIDI is imported.

    Because MusicXML differs from MIDI, different information is available to Magenta. For example, MusicXML does not contain MIDI CC data. However, MusicXML does contain other information (such as dynamic markings) which may be more useful in certain scenarios, such as using Magenta to generate sheet music (MusicXML files) in addition to MIDI files.

    This parser currently only supports input and does not support output to MusicXML files.

    Four public domain MusicXML files are included for unit testing.

    opened by jsawruk 27
  • ValueError: Unable to get the Filesystem for path gs://magentadata/datasets/maestro/v1.0.0/maestro-v1.0.0_test.tfrecord

    I'm getting this error while trying to do the data generation / preprocessing step of the Score2Perf Music Transformer model on my computer. How can I fix it?

    opened by aletote 23
  • Tests failing with Bazel 0.2.3 and Tensorflow 0.9

    Hey all!

    I'm on Mac OS X attempting to build. I installed tensorflow in python 3. When I run the

    bazel test //magenta:all
    

    command, all 6 tests fail locally. The fail log for each says that Python can't import tensorflow. I'm assuming the tests are automatically being run by my Mac's default Python (2.7.10, which doesn't have tensorflow installed); is there a way to change that to python3?

    Sorry if this is a dumb question!

    opened by SJCaldwell 20
  •  Resource exhausted: OOM when allocating tensor with shape[256,1114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

    Hello,

    I'm running the latest version of MusicVAE from the repository on Ubuntu 18.04, CUDA 10.1, TensorFlow 2.2.0, with the hier-trio_16bar configuration, and I get the error below (I tried different batch sizes, even 1, and different learning rates, but the problem is the same). Do you know how to fix it?

    2020-06-09 21:01:27.365621: I tensorflow/core/common_runtime/bfc_allocator.cc:1010] Stats: Limit: 14684815360 InUse: 14684616704 MaxInUse: 14684815360 NumAllocs: 26588 MaxAllocSize: 181403648

    2020-06-09 21:01:27.365991: W tensorflow/core/common_runtime/bfc_allocator.cc:439] **************************************************************************************************** 2020-06-09 21:01:27.366026: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at lstm_ops.cc:372 : Resource exhausted: OOM when allocating tensor with shape[256,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc 2020-06-09 21:01:34.147376: W tensorflow/core/kernels/data/cache_dataset_ops.cc:794] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to dataset.cache().take(k).repeat(). You should use dataset.take(k).cache().repeat() instead. Traceback (most recent call last): File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1365, in _do_call return fn(*args) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1350, in _run_fn target_list, run_metadata) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1443, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found. (0) Resource exhausted: OOM when allocating tensor with shape[256,1114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node swap_in_core_decoder_1/core_decoder_0/decoder/while/BasicDecoderStep/decoder/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell_13_0}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[add/_2901]]
    

    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    (1) Resource exhausted: OOM when allocating tensor with shape[256,1114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node swap_in_core_decoder_1/core_decoder_0/decoder/while/BasicDecoderStep/decoder/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell_13_0}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    0 successful operations. 0 derived errors ignored.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "music_vae_train.py", line 340, in console_entry_point() File "music_vae_train.py", line 336, in console_entry_point tf.app.run(main) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/absl/app.py", line 299, in run _run_main(main, args) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main sys.exit(main(argv)) File "music_vae_train.py", line 331, in main run(configs.CONFIG_MAP) File "music_vae_train.py", line 312, in run task=FLAGS.task) File "music_vae_train.py", line 211, in train is_chief=is_chief) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tf_slim/training/training.py", line 551, in train loss = session.run(train_op, run_metadata=run_metadata) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 778, in run run_metadata=run_metadata) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1283, in run run_metadata=run_metadata) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1384, in run raise six.reraise(*original_exc_info) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/six.py", line 703, in reraise raise value File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1369, in run return self._sess.run(*args, **kwargs) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1442, in run run_metadata=run_metadata) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1200, in run return self._sess.run(*args, **kwargs) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 958, in run run_metadata_ptr) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1181, in _run feed_dict_tensor, options, run_metadata) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1359, in _do_run run_metadata) File "/home/burashnikova/env-tf22/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1384, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found. (0) Resource exhausted: OOM when allocating tensor with shape[256,1114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node swap_in_core_decoder_1/core_decoder_0/decoder/while/BasicDecoderStep/decoder/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell_13_0}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[add/_2901]]
    

    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    (1) Resource exhausted: OOM when allocating tensor with shape[256,1114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node swap_in_core_decoder_1/core_decoder_0/decoder/while/BasicDecoderStep/decoder/multi_rnn_cell/cell_0/lstm_cell/LSTMBlockCell_13_0}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    0 successful operations. 0 derived errors ignored.

    opened by SashaBurashnikova 14
  • losing instruments information when converting midi to note sequence

    Hello, I'm doing manipulations on midi files using Magenta. I convert the midi file to NoteSequence using

    import magenta.music as mm
    mm.midi_file_to_sequence_proto(midi)
    

    The problem is that if I have unused MIDI channels (let's say 5 and 6) and used MIDI channels (let's say 7 and 8), the conversion puts the used channels in place of the empty ones, so I lose the true instrument setup, and when converting it back to MIDI and playing it with a MIDI player, the instruments are mixed up. Any suggestion on how to avoid this incorrect conversion? (It only happens when there are unused MIDI channels in the MIDI file.) Thanks!

    opened by DavidPrimor 14
  • Generate melody with basic_rnn

    Hi,

    I'm trying to generate a basic melody using the code shown in the tutorial, and I've got 2 questions. Note I'm a beginner, hence my questions may be really simple.

    1. When I type in Python: BUNDLE_PATH=<E:\Storage\Documents\FEI_AI\magenta\BUNDLES>, it returns a Syntax Error pointing at '<'.

    So I used: BUNDLE_PATH='E:\Storage\Documents\FEI_AI\magenta\BUNDLES'. It proceeds without error, but is it correct? The same goes for the next line involving CONFIG.

    2. After typing: melody_rnn_generate \ I typed: --config=${CONFIG}, and it returns Invalid Syntax, pointing at '$'.

    Need advice, thank you!

    opened by EvilMudkip 14
  • GANSynth KeyError: 'tfds_data_dir'

    Hi, for the past two days the GANSynth Colab demo has been giving me an error during Environment Setup:

    Load model from /content/gansynth/acoustic_only/stage_00012/./model.ckpt-11000000
    ---------------------------------------------------------------------------
    KeyError                                  Traceback (most recent call last)
    <ipython-input-1-bad1649e76e0> in <module>()
         65 tf.reset_default_graph()
         66 flags = lib_flags.Flags({'batch_size_schedule': [BATCH_SIZE]})
    ---> 67 model = lib_model.Model.load_from_path(CKPT_DIR, flags)
         68 
         69 # Helper functions
    
    4 frames
    /usr/local/lib/python3.6/dist-packages/magenta/models/gansynth/lib/model.py in load_from_path(cls, path, flags)
        175     batch_size = flags.get('eval_batch_size',
        176                            train_util.get_batch_size(stage_id, **flags))
    --> 177     model = cls(stage_id, batch_size, flags)
        178     model.saver.restore(model.sess, ckpt)
        179     return model
    
    /usr/local/lib/python3.6/dist-packages/magenta/models/gansynth/lib/model.py in __init__(self, stage_id, batch_size, config)
        191     """
        192     data_helper = data_helpers.registry[config['data_type']](config)
    --> 193     real_images, real_one_hot_labels = data_helper.provide_data(batch_size)
        194 
        195     # gen_one_hot_labels = real_one_hot_labels
    
    /usr/local/lib/python3.6/dist-packages/magenta/models/gansynth/lib/data_helpers.py in provide_data(self, batch_size)
         64     with tf.name_scope('inputs'):
         65       with tf.device('/cpu:0'):
    ---> 66         dataset = self.dataset.provide_dataset()
         67         dataset = dataset.shuffle(buffer_size=1000)
         68         dataset = dataset.map(self._map_fn, num_parallel_calls=4)
    
    /usr/local/lib/python3.6/dist-packages/magenta/models/gansynth/lib/datasets.py in provide_dataset(self)
        116       return wave, one_hot_label, label, example['instrument']['source']
        117 
    --> 118     dataset = self._get_dataset_from_tfds()
        119     dataset = dataset.map(
        120         _parse_nsynth, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    
    /usr/local/lib/python3.6/dist-packages/magenta/models/gansynth/lib/datasets.py in _get_dataset_from_tfds(self)
         65       dataset = tfds.load(
         66           'nsynth/gansynth_subset:2.3.*',
    ---> 67           data_dir=self._config['tfds_data_dir'],
         68           split=tfds.Split.TRAIN,
         69           download=False)
    
    KeyError: 'tfds_data_dir'
    

    How can I resolve this?

    opened by Junpyer 13
  • Where to get model.ckpt?

    I'm trying to follow the instructions for arbitrary style transfer here, and it says that I need to download a pretrained model from this link. However, what I get is a tar.gz file and whenever I extract it, I only get the following files:

    $ tar xvzf arbitrary_style_transfer.tar.gz
    $ ls arbitrary_style_transfer
    model.ckpt-data-00000-of-000001 model.ckpt.index model.ckpt.meta
    

    Which one here should I use?

    opened by ljvmiranda921 13
  • Windows 10 Anaconda Install - Bazel fails

    Hi, I'm following the instructions on https://github.com/tensorflow/magenta for setting up a development environment.

    I've installed Bazel OK. I've installed Anaconda OK. I've created a tensorflow env and installed it.

    But when running bazel test //magenta/... I get

    ERROR: E:/music/magenta/magenta/magenta/models/arbitrary_image_stylization/BUILD:48:1: PythonZipper magenta/models/arbitrary_image_stylization/arbitrary_image_stylization_with_weights.zip failed (Exit -1)

    INFO: Elapsed time: 1.734s, Critical Path: 0.15s
    INFO: 0 processes.
    FAILED: Build did NOT complete successfully
    //magenta/common:beam_search_test NO STATUS
    //magenta/common:concurrency_test NO STATUS
    //magenta/common:nade_test NO STATUS
    etc etc

    Any clues?

    opened by sonicviz 13
  • Midi decoding error: Bad header in MIDI file

    Following the tutorial, when converting MIDI files to note sequences, I encounter the following error

    ERROR:tensorflow:Midi decoding error <type 'exceptions.TypeError'>: Bad header in MIDI file

    when running

    bazel run //magenta/scripts:convert_midi_dir_to_note_sequences -- \
    --midi_dir=$MIDI_DIRECTORY \
    --output_file=$SEQUENCES_TFRECORD \
    --recursive
    

    No tfrecord file generated.

    I've tried many different MIDI files from the midiworld website, and the same error remains, so I guess the problem might not be with the MIDI files.

    Environment: Bazel 0.30, TensorFlow 0.9, under a virtual environment

    opened by zuoxingdong 13
  • What is the status of magenta as a python module?

    Greetings!

    I've recently started learning about Magenta; in particular, I'm following a book by Packt and I'm quite excited. I am, however, very confused about the status of Magenta.

    Let me elaborate on the confusion:

    The README on this repository says that it's inactive, and I should check the website for active stuff. The website (python section) says that magenta is active, and links to github.com/tensorflow/magenta which redirects to this repository, which says that it's inactive. Getting vague Liar's Paradox vibes.

    This repository seems to be the upstream for whatever I get with pip install magenta, and it is still getting commits, which seems to imply that magenta as a python module is alive, as the website says. However, in issue 2003 you say that Magenta is still being developed, just not in this repository.

    So what does this all mean?

    Is magenta as a python module alive or dead? It currently does not work on my main computer; is this something that is likely to change at some point? Does it make sense to create issues like I did?

    If the python module is still being developed but this repository is inactive, which is the active one?

    If the python module is not being developed but something is taking its place (please don't say javascript ❤️ ), what is the new project that I can use to train an AI on my music and have it make new stuff from it? And is there a book that I could buy to learn how to do it?

    Thanks for all your work!

    opened by xstasi 0
  • cannot convert midi file to note sequence.

    I'm trying to convert MIDI to a note sequence, but it is not working. I've installed Magenta 2.1.4 and TensorFlow 2.11.0.

    convert_dir_to_note_sequences \
    --input_dir= /Users/My_name/downloads/classic \
    --output_file=/Users/My_name/downloads/classic_out/noteseq/notesequences.tfrecord \
    --recursive
    

    I try to run the code and I get this error:

    
    File "/Users/My_name/magenta-env/lib/python3.8/site-packages/resampy/interpn.py", line 73, in <module>
        @guvectorize(
    TypeError: guvectorize() missing 1 required positional argument: 'signature'
    
    
    opened by kimurapyusers 0
  • Cannot run on Apple silicon (m1/m2) because of problematic dependencies

    Magenta has the following problematic dependencies:

    • numpy == 1.21.6

    This version of numpy gives runtime errors for reasons I'm not completely sure of, such as:

    RuntimeError: module compiled against API version 0xf but this version of numpy is 0xe
    

    This runtime problem disappears when numpy is upgraded, but naturally others arise.

    • numba == 0.49.1

    This is the worst offender as it depends on an old version of llvmlite, which in turn depends on an old version of llvm (8.x) that does not support the newer Apple targets. This must be upgraded to a much newer, possibly the newest release, as even llvm 9.x is problematic. llvm 11.x must be supported in order to run on this platform.

    • tensorflow == 2.9.1

    This is tensorflow-macos on Apple silicon, but I'm not sure whether you can add dependency alternatives to include it. A minor problem anyway, as it can be overridden by editing setup.py and installing locally.


    numba and numpy need upgrading, though; until then there will be no way to run Magenta on new Macs.

    Thanks!

    opened by xstasi 0
  • Errors when trying to install magenta

    I get error messages when building wheels for various modules (llvmlite, numba, python-rtmidi). I'm using Python 3.9; I've already tried reinstalling everything and installing other package versions. It just doesn't work. Can you help me, please?

    (I removed some of the less important output so as not to exceed the maximum character length here.)

    Building wheels for collected packages: numba, python-rtmidi, llvmlite
      Building wheel for numba (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py bdist_wheel did not run successfully.
      │ exit code: 1
      ╰─> [826 lines of output]
          TBB not found
          OpenMP disabled
          running bdist_wheel
          running build
          got version from file C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\numba_9c8741e8713142ffb7ba6d62968ea471\numba/_version.py {'version': '0.49.1', 'full': 'd2cac8597ad2aa4074147d9c7595f5b5e9919901'}
          running build_py
          creating build
        
          running build_ext
          No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
          building 'numba._dynfunc' extension
          error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      ERROR: Failed building wheel for numba
      Running setup.py clean for numba
      Building wheel for python-rtmidi (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py bdist_wheel did not run successfully.
      │ exit code: 1
      ╰─> [13 lines of output]
          running bdist_wheel
          running build
          running build_py
          creating build
          creating build\lib.win-amd64-cpython-39
          creating build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\midiconstants.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\midiutil.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\release.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\__init__.py -> build\lib.win-amd64-cpython-39\rtmidi
          running build_ext
          building 'rtmidi._rtmidi' extension
          error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      ERROR: Failed building wheel for python-rtmidi
      Running setup.py clean for python-rtmidi
      Building wheel for llvmlite (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py bdist_wheel did not run successfully.
      │ exit code: 1
      ╰─> [40 lines of output]
          running bdist_wheel
          C:\Users\Dell\anaconda3\python.exe C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\llvmlite_4ce3f81337d447c18fde83ea427f20d5\ffi\build.py
          -- Selecting Windows SDK version  to target Windows 10.0.19044.
          CMake Error at CMakeLists.txt:3 (project):
            Failed to run MSBuild command:
    
              MSBuild.exe
    
            to get the value of VCTargetsPath:
    
              Das System kann die angegebene Datei nicht finden
    
    
    
          -- Configuring incomplete, errors occurred!
          See also "C:/Users/Dell/AppData/Local/Temp/tmp_9vh8mbn/CMakeFiles/CMakeOutput.log".
          CMake Error at CMakeLists.txt:3 (project):
            Generator
    
              Visual Studio 15 2017 Win64
    
            could not find any instance of Visual Studio.
    
    
    
          -- Configuring incomplete, errors occurred!
          See also "C:/Users/Dell/AppData/Local/Temp/tmp5dfq2vh1/CMakeFiles/CMakeOutput.log".
          Trying generator 'Visual Studio 14 2015 Win64'
          Trying generator 'Visual Studio 15 2017 Win64'
          Traceback (most recent call last):
            File "C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\llvmlite_4ce3f81337d447c18fde83ea427f20d5\ffi\build.py", line 192, in <module>
              main()
            File "C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\llvmlite_4ce3f81337d447c18fde83ea427f20d5\ffi\build.py", line 180, in main
              main_win32()
            File "C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\llvmlite_4ce3f81337d447c18fde83ea427f20d5\ffi\build.py", line 89, in main_win32
              generator = find_win32_generator()
            File "C:\Users\Dell\AppData\Local\Temp\pip-install-s_n9pbef\llvmlite_4ce3f81337d447c18fde83ea427f20d5\ffi\build.py", line 85, in find_win32_generator
              raise RuntimeError("No compatible cmake generator installed on this machine")
          RuntimeError: No compatible cmake generator installed on this machine
          error: command 'C:\\Users\\Dell\\anaconda3\\python.exe' failed with exit code 1
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      ERROR: Failed building wheel for llvmlite
      Running setup.py clean for llvmlite
    Failed to build numba python-rtmidi llvmlite
    Installing collected packages: python-rtmidi, pygtrie, mido, llvmlite, keras, flatbuffers, tensorflow-estimator, sox, scipy, protobuf, promise, numba, keras-preprocessing, importlib_resources, imageio, etils, absl-py, tf-slim, tensorflow-probability, sk-video, resampy, mir-eval, matplotlib, googleapis-common-protos, dm-sonnet, tensorflow-metadata, librosa, tensorflow-datasets, tensorboard, note-seq, tensorflow, magenta
      Attempting uninstall: python-rtmidi
        Found existing installation: python-rtmidi 1.4.9
        Uninstalling python-rtmidi-1.4.9:
          Successfully uninstalled python-rtmidi-1.4.9
      Running setup.py install for python-rtmidi ... error
      error: subprocess-exited-with-error
    
      × Running setup.py install for python-rtmidi did not run successfully.
      │ exit code: 1
      ╰─> [15 lines of output]
          running install
          C:\Users\Dell\anaconda3\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
            warnings.warn(
          running build
          running build_py
          creating build
          creating build\lib.win-amd64-cpython-39
          creating build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\midiconstants.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\midiutil.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\release.py -> build\lib.win-amd64-cpython-39\rtmidi
          copying rtmidi\__init__.py -> build\lib.win-amd64-cpython-39\rtmidi
          running build_ext
          building 'rtmidi._rtmidi' extension
          error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      Rolling back uninstall of python-rtmidi
      Moving to c:\users\dell\anaconda3\lib\site-packages\python_rtmidi-1.4.9.dist-info\
       from C:\Users\Dell\anaconda3\Lib\site-packages\~ython_rtmidi-1.4.9.dist-info
      Moving to c:\users\dell\anaconda3\lib\site-packages\rtmidi\
       from C:\Users\Dell\anaconda3\Lib\site-packages\~tmidi
    error: legacy-install-failure
    
    × Encountered error while trying to install package.
    ╰─> python-rtmidi
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for output from the failure.
    
    opened by Luka27 0
  • Installation error: PackagesNotFoundError & ResolutionImpossible

    I am using macOS Ventura.

    Using the conda install, I get an error during installation (screenshot attached). After this I attempted the manual install, which gave me another error (screenshot attached).

    I don't believe downgrading to Python 3.6 would help, as the documentation says that versions >= 3.5 work (correct me if I am wrong!).

    Cheers

    opened by 50NNY1 1