How I make my videos

Oliver Brown

This is a high-level overview of how I make my videos. I hope to produce further posts with more detail.

Not using Synthesia

One of the most popular programs used for doing music visualizations like the ones I produce is Synthesia. It’s an application designed for learning the piano, and it can render and play pretty much any MIDI file. For my purposes it has a major downside though: it only runs in real time. This means I have to let it run live and record it with screen recording software, then edit the recording to cut out unwanted content at the beginning and end and crop out the UI.

MIDI visualizer

Instead, I’m using the cunningly named MIDIVisualizer by kosua20 and ekuiter. It can be made to render offline, which eliminates the problems with Synthesia. One minor downside is that it doesn’t actually play audio, but that is easy to deal with.

MuseScore

MuseScore is software for writing and editing music scores, and has support for importing and exporting in various formats. I primarily use it for either creating the original music or cleaning up existing music. For the full score visualizations I also use it to generate the actual audio.

TiMidity

TiMidity is a command-line tool for synthesizing audio from a MIDI file. I use it to generate audio for the modes retuning videos.

FFmpeg

FFmpeg is the real workhorse of the process. It’s a command-line tool for manipulating audio and video. I use it for at least the following (a sketch of some typical invocations follows the list):

  • Generating the video from the MIDIVisualizer output (which is just a series of PNGs, one per frame).
  • Adding the text in the form of subtitles.
  • Combining the video and audio.
  • Panning and scanning the individual parts for better spacing.
  • Combining multiple parts together in a mosaic.
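
As a rough sketch (these are not my exact commands), the items above map onto FFmpeg invocations along these lines; the file names, frame pattern and frame rate are assumptions, and the subtitles filter needs an FFmpeg build with libass:

    import subprocess

    def run(args):
        # Stop immediately if FFmpeg reports an error.
        subprocess.run(args, check=True)

    # Turn the MIDIVisualizer PNG sequence into a video (frame name pattern assumed).
    run(["ffmpeg", "-framerate", "60", "-i", "frames/frame%06d.png",
         "-c:v", "libx264", "-pix_fmt", "yuv420p", "silent.mp4"])

    # Burn the text in as subtitles (requires an FFmpeg build with libass).
    run(["ffmpeg", "-i", "silent.mp4", "-vf", "subtitles=titles.srt",
         "-c:v", "libx264", "titled.mp4"])

    # Place two parts side by side as a simple mosaic.
    run(["ffmpeg", "-i", "part1.mp4", "-i", "part2.mp4",
         "-filter_complex", "[0:v][1:v]hstack=inputs=2",
         "-c:v", "libx264", "mosaic.mp4"])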

Wrapping it all up

All of the above is tied together by a custom tool that runs the individual steps as a pipeline.
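
Conceptually, that tool is nothing more exotic than the following Python sketch (a hypothetical illustration, not the real code): each stage is an external command, the stages run in order, and a failing stage aborts the whole build.

    import subprocess

    def run_pipeline(stages):
        # Each stage is a (name, command) pair; check=True aborts on the first failure.
        for name, command in stages:
            print(f"== {name} ==")
            subprocess.run(command, check=True)

    # Hypothetical usage with placeholder file names; the real commands appear in
    # the step-by-step posts.
    run_pipeline([
        ("score to MIDI", ["mscore", "-o", "piece.mid", "piece.mscz"]),
        ("MIDI to WAV audio", ["timidity", "piece.mid", "-Ow", "-o", "piece.wav"]),
    ])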

How I make 'Simple Pipeline' videos

Oliver Brown

I have four different pipelines that I use to generate video. This is the simplest one, which I actually call the ‘Simple Pipeline’.

Example

Oddly enough, there are no videos on the channel that use the simple pipeline. There is a single video I posted on my personal channel.


Step 1 - MIDI

If my source file is not already a MIDI file, I use MuseScore to convert the source from MuseScore’s native .mscz format to MIDI.
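
MuseScore can do this from the command line with its -o option, which picks the output format from the file extension. A minimal sketch (the binary name varies by version and platform, e.g. mscore, musescore3 or mscore4, and the file names are placeholders):

    import subprocess

    # Convert a MuseScore score to MIDI; the .mid extension tells MuseScore
    # which format to export.
    subprocess.run(["mscore", "-o", "piece.mid", "piece.mscz"], check=True)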

Step 2 - FLAC audio

MIDI files do not store audio, just instructions to play specific notes on specific instruments. I use TiMidity++ to convert the MIDI file into a FLAC file containing real audio.
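
A minimal sketch of that conversion, assuming a TiMidity++ build with FLAC output support (the -OF mode); file names are placeholders:

    import subprocess

    # Render the MIDI file to FLAC: -OF selects FLAC output (if the build
    # supports it) and -o names the output file.
    subprocess.run(["timidity", "piece.mid", "-OF", "-o", "piece.flac"], check=True)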

FLAC is the Free Lossless Audio Codec, somewhat akin to MP3 but lossless. For “normal” audio, FLAC files are really big, but since the audio generated from MIDI is unnaturally consistent, it tends to compress very well.

Step 3 - Visualization

I use the previously mentioned MIDIVisualizer to generate the visualization. It takes a settings file and a MIDI file as input and produces a series of PNG files as output.

The settings file is mostly the same every time, but I do tweak the color selection where I think it would be appropriate.

The output can be quite… extreme. The visualization runs at sixty frames per second, which means I get a lot of 4K images; a ten-minute video produces 36,000 frames, for instance.
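
Roughly, the offline export is a single MIDIVisualizer invocation along these lines; this is only a sketch, and the option names and file names here are assumptions that may not match your version, so check the project’s README:

    import subprocess

    # Offline render: one PNG per frame written to the frames directory.
    # The option names below are assumptions; consult MIDIVisualizer's docs.
    subprocess.run([
        "MIDIVisualizer",
        "--midi", "piece.mid",        # input MIDI file
        "--config", "settings.ini",   # the mostly-fixed settings file
        "--export", "frames",         # output directory for the PNG sequence
        "--format", "PNG",
        "--framerate", "60",
        "--size", "3840", "2160",     # 4K frames
    ], check=True)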

Step 4 - Final video

Finally, I bring it all together using FFmpeg. It takes the source images and the source audio and generates a video file. The format details are largely dictated by YouTube’s suggestions (a sketch of a typical command follows the list):

  • Video - H.264
  • Audio - AAC
  • Container - MP4
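
A minimal sketch of that final step, assuming the frames are named frames/frame%06d.png and the audio is piece.flac (both placeholders):

    import subprocess

    # Mux the image sequence and the audio into an MP4 matching the list above:
    # H.264 video, AAC audio, MP4 container.
    subprocess.run([
        "ffmpeg",
        "-framerate", "60", "-i", "frames/frame%06d.png",  # image sequence in
        "-i", "piece.flac",                                 # audio in
        "-c:v", "libx264", "-pix_fmt", "yuv420p",           # H.264, widely compatible pixel format
        "-c:a", "aac",                                       # AAC audio
        "-shortest",                                         # stop at the shorter stream
        "final.mp4",
    ], check=True)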