• Shandong University Project Training 16: Controllable Music Transformer


    Controllable Music Transformer

    Official code for our paper Video Background Music Generation with Controllable Music Transformer (ACM MM 2021 Best Paper Award)

    [Paper] [Demos] [Bibtex]

    Introduction

    In this work, we address the task of video background music generation. Some previous works achieve effective music generation but cannot produce melodious music tailored to a given video, nor do they consider the rhythmic consistency between video and music. To generate background music that matches a given video, we first establish rhythmic relations between video and background music. In particular, we connect the timing, motion speed, and motion saliency of a video with the beat, simulated note density, and simulated note strength of music, respectively. We then propose CMT, a Controllable Music Transformer that enables local control over these rhythmic features as well as global control over the music genre and the instruments specified by the user. Objective and subjective evaluations show that the generated background music is satisfactorily compatible with the input video while the music quality remains impressive.

    We address the unexplored task of video background music generation. We first establish three rhythmic relations between video and background music. We then propose a Controllable Music Transformer (CMT) to achieve local and global control of the music generation process. Our proposed method does not require paired video and music data for training, while generating melodious music that is compatible with the given video.

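    To make the three rhythmic relations concrete, here is a toy sketch (not the paper's code: the per-beat features and the rank-based binning are hypothetical stand-ins) of how per-beat motion statistics could be quantized into discrete classes that condition note density and note strength:

    import numpy as np

    # Toy illustration only; the binning scheme below is an assumption,
    # not the repository's implementation.
    def to_classes(values, n_classes=4):
        # Rank-based binning so each class is roughly equally populated.
        ranks = np.argsort(np.argsort(values))
        return ranks * n_classes // len(values)

    motion_speed = np.random.rand(32)      # one value per beat (hypothetical)
    motion_saliency = np.random.rand(32)   # one value per beat (hypothetical)

    note_density_class = to_classes(motion_speed)      # drives note density
    note_strength_class = to_classes(motion_saliency)  # drives note strength
    print(note_density_class[:8], note_strength_class[:8])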

    Directory Structure

    • src/: code of the whole pipeline

      • train.py: training script; takes an npz file of music data as input to train the model

      • model.py: code of the model

      • gen_midi_conditional.py: inference script; takes an npz file (representing a video) as input and generates several songs

      • src/video2npz/: converts a video into an npz file by extracting motion saliency and motion speed

    • dataset/: processed dataset for training, in npz format

    • logs/: logs generated automatically during training; can be used to track the training process

    • exp/: checkpoints, named after val loss (e.g. loss_8_params.pt)

    • inference/: processed videos for inference (.npz) and generated music (.mid)

    Preparation

    • clone this repo

    • download the processed data lpd_5_prcem_mix_v8_10000.npz from HERE and put it under dataset/

    • download the pretrained model loss_8_params.pt from HERE and put it under exp/

    • install ffmpeg=3.2.4

    • prepare a Python3 conda environment

      • conda create -n mm21_py3 python=3.7
        conda activate mm21_py3
        pip install -r py3_requirements.txt
        
      • choose the correct version of torch and pytorch-fast-transformers based on your CUDA version (see the fast-transformers repo and this issue); a quick sanity check is sketched below
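        For example, a quick sanity check that the installed torch build matches your CUDA toolkit (the printed versions in the comments are illustrative):

        import torch

        # pytorch-fast-transformers compiles against the installed torch
        # build, so check torch's CUDA version before installing it.
        print(torch.__version__)          # e.g. 1.7.1
        print(torch.version.cuda)         # CUDA version of this torch build
        print(torch.cuda.is_available())  # True if the GPU driver is usable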

    • prepare a Python2 conda environment (for extracting visbeat)

      • conda create -n mm21_py2 python=2.7
        conda activate mm21_py2
        pip install -r py2_requirements.txt
        
      • open the visbeat package directory (e.g. anaconda3/envs/XXXX/lib/python2.7/site-packages/visbeat) and replace the original Video_CV.py with src/video2npz/Video_CV.py; the snippet below shows one way to locate this directory
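        One way to locate that directory without guessing the path (run inside the mm21_py2 environment):

        # Prints the installed visbeat package directory whose
        # Video_CV.py should be replaced.
        import os
        import visbeat
        print(os.path.dirname(visbeat.__file__))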

    Training

    Note: use the mm21_py3 environment: conda activate mm21_py3

    • A quick start using the processed data lpd_5_prcem_mix_v8_10000.npz (1~2 days on 8x 1080Ti GPUs):

      python train.py --name train_default -b 8 --gpus 0 1 2 3 4 5 6 7
      
    • If you want to reproduce the whole process:

      1. download the lpd-5-cleansed dataset from HERE and put the extracted files under dataset/lpd_5_cleansed/

      2. go to src/ and convert the pianoroll files (.npz) to midi files (~3 files / sec):

        python pianoroll2midi.py --in_dir ../dataset/lpd_5_cleansed/ --out_dir ../dataset/lpd_5_cleansed_midi/
        
      3. convert midi files to .npz files with our proposed representation (~5 files / sec):

        python midi2numpy_mix.py --midi_dir ../dataset/lpd_5_cleansed_midi/ --out_name data.npz 
        
      4. train the model (1~2 days on 8x 1080Ti GPUs):

        python train.py --name train_exp --train_data ../dataset/data.npz -b 8 --gpus 0 1 2 3 4 5 6 7
        

    Note: If you want to train on another MIDI dataset, please ensure that each track belongs to one of the five instruments (Drums, Piano, Guitar, Bass, or Strings) and is named exactly after its instrument. You can check this with Muspy:

    import muspy
    
    midi = muspy.read_midi('xxx.mid')
    print([track.name for track in midi.tracks]) # Should be like ['Drums', 'Guitar', 'Bass', 'Strings']
    
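    If your MIDI files use other track names, a sketch like the following could rename them from their MIDI programs (the General MIDI program ranges below are an assumption; adjust them to your dataset):

    import muspy

    # Hypothetical mapping from General MIDI programs to the five track
    # names the pipeline expects; the ranges are an assumption.
    def canonical_name(track):
        if track.is_drum:
            return 'Drums'
        if track.program <= 7:
            return 'Piano'
        if 24 <= track.program <= 31:
            return 'Guitar'
        if 32 <= track.program <= 39:
            return 'Bass'
        return 'Strings'

    midi = muspy.read_midi('xxx.mid')
    for track in midi.tracks:
        track.name = canonical_name(track)
    muspy.write_midi('xxx_renamed.mid', midi)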

    Inference

    • convert input video (MP4 format) into npz (use the mm21_py2 environment):

      conda activate mm21_py2
      cd src/video2npz
      # try resizing the video if this takes a long time
      sh video2npz.sh ../../videos/xxx.mp4
      
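      If video2npz.sh takes too long, a hedged example of downscaling the input first (the 360-pixel target height is an arbitrary choice; visbeat's processing cost grows with resolution):

        import subprocess

        # Downscale the video before running video2npz.sh; check_call
        # works under both Python 2 and 3. The 360 px height is arbitrary.
        subprocess.check_call([
            'ffmpeg', '-i', '../../videos/xxx.mp4',
            '-vf', 'scale=-2:360',   # keep aspect ratio, force even width
            '../../videos/xxx_360p.mp4',
        ])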
    • run the model to generate a .mid file (use the mm21_py3 environment):

      conda activate mm21_py3
      python gen_midi_conditional.py -f "../inference/xxx.npz" -c "../exp/loss_8_params.pt"
      
      # if using another training set, change `decoder_n_class` and `init_n_class` in `gen_midi_conditional.py` to the values in `train.py`
      
    • convert midi into audio: use GarageBand (recommended) or midi2audio

      • set tempo to the value of tempo in video2npz/metadata.json (generated when running video2npz.sh)
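      For the midi2audio route, a minimal sketch (assuming FluidSynth and a default soundfont are installed, and that metadata.json stores the tempo under a `tempo` key as described above):

        import json
        from midi2audio import FluidSynth

        # The tempo written by video2npz.sh; GarageBand users set it manually.
        with open('src/video2npz/metadata.json') as f:
            print('tempo:', json.load(f)['tempo'])

        # Render the generated MIDI to audio (needs FluidSynth + a soundfont).
        FluidSynth().midi_to_audio('inference/xxx.mid', 'yyy.wav')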
    • combine the original video and the generated audio into a video with BGM:

      ffmpeg -i 'xxx.mp4' -i 'yyy.mp3' -c:v copy -c:a aac -strict experimental -map 0:v:0 -map 1:a:0 'zzz.mp4'
      
      # xxx.mp4: input video
      # yyy.mp3: audio file generated in the previous step
      # zzz.mp4: output video
      

    Matching Method

    • The matching method finds the five music pieces in the music library that best match a given video (use the mm21_py3 environment).

      conda activate mm21_py3
      python src/match.py inference/xxx.npz dataset/lpd_5_prcem_mix_v8_10000.npz
      

    Citation

    @inproceedings{di2021video,
      title={Video Background Music Generation with Controllable Music Transformer},
      author={Di, Shangzhe and Jiang, Zeren and Liu, Si and Wang, Zhaokai and Zhu, Leyan and He, Zexin and Liu, Hongming and Yan, Shuicheng},
      booktitle={Proceedings of the 29th ACM International Conference on Multimedia},
      pages={2037--2045},
      year={2021}
    }
    

    Acknowledgements

    Our code is based on Compound Word Transformer.
