AWS DeepComposer and Yamaha Clavinova

The AWS DeepComposer has been around for about two years now. It is an AWS service that follows in the footsteps of AWS DeepLens and AWS DeepRacer, services intended to provide fun, educational ways to learn about deep learning. AWS DeepComposer lets you record a melody and then transform it using one of several deep learning methods. The result is a melody enriched by machine learning: more notes, more instruments, and maybe something a bit closer to a real song!

You can record the melody using the AWS DeepComposer by clumsily clicking keyboard keys, by using the AWS DeepComposer keyboard, or by recording your own MIDI file and uploading it using the AWS Console. I don’t think there is a lot of value in the AWS DeepComposer keyboard so I don’t recommend purchasing one. It’s very limited in what you can do and doesn’t offer much more than the on-screen virtual piano keyboard available in the AWS Console.

I occasionally play the piano and I have a Yamaha Clavinova. It can record MIDI files that you can retrieve either with a USB thumb drive or by connecting the piano directly to a computer running software such as Cakewalk. Cakewalk has a whole lot of awesome features that I don’t need, but it works great for capturing songs played on the Clavinova as MIDI files.

If you have a Clavinova, you need a USB Type B cable to do this. I picked up a USB Type B to USB Type C cable like this one to make for an easy connection from the piano to a Windows computer. With Cakewalk running and the USB cable connected, it’s now up to you to come up with a good melody!

Start recording in Cakewalk and play a melody on the piano. Stop recording and export your masterpiece to a MIDI file. Here is my embarrassingly simple melody. A cat could probably make a better melody by walking on the keyboard!
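If you want to sanity-check the exported file before uploading it, the MIDI header is easy to inspect with Python’s standard library alone. The sketch below is my own illustration (the helper names and the hand-built file bytes are not part of Cakewalk or AWS): it constructs a minimal one-note MIDI file as a stand-in for an exported melody, then reads back its header.

```python
import struct

def build_demo_midi() -> bytes:
    """Build a minimal one-note MIDI file (middle C) as a stand-in
    for a melody exported from Cakewalk."""
    # Header chunk: format 0, one track, 480 ticks per quarter note
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)
    # Track events: note-on C4, note-off a quarter note later, end-of-track
    events = (
        b"\x00\x90\x3c\x40"      # delta 0, note-on ch0, note 60, velocity 64
        b"\x83\x60\x80\x3c\x40"  # delta 480 (variable-length), note-off
        b"\x00\xff\x2f\x00"      # delta 0, end-of-track meta event
    )
    track = b"MTrk" + struct.pack(">I", len(events)) + events
    return header + track

def midi_summary(data: bytes) -> dict:
    """Parse the MThd header chunk and report format, track count,
    and time division (ticks per quarter note)."""
    if data[:4] != b"MThd":
        raise ValueError("not a Standard MIDI File")
    length, fmt, ntracks, division = struct.unpack(">IHHH", data[4:14])
    return {"format": fmt, "tracks": ntracks, "division": division}

midi_bytes = build_demo_midi()
print(midi_summary(midi_bytes))  # {'format': 0, 'tracks': 1, 'division': 480}
```

Running `midi_summary` on your real export (read the file with `open(path, "rb").read()`) is a quick way to confirm Cakewalk wrote a valid Standard MIDI File before you bring it into the AWS Console.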


Now, go to AWS DeepComposer to get started. Click the Start Composing button.

On the next screen, click the Choose input track dropdown and click Import a track. Click Choose File in the popup and select your MIDI file. You will now see the New Composition screen.

Click the orange Continue button on the left side to select your machine learning options. You will see a few machine learning options at the top of the page: AR-CNN, GANs, and Transformers. Each method manipulates your melody based on a different machine learning technique.

At the bottom of the page there are configuration options for each machine learning method. These give you some control over the parameters passed to the machine learning techniques. The best way to learn about each method is to simply play with them! Pick some different options, click Continue, and let the machine learning do its thing! In a minute or two, the algorithm will have transformed your melody into something that more closely resembles a song. Play it and listen to it, then come back and select some other options. Experiment with how the different parameters impact the generated song.

Here’s my “melody”, if you can call it that: melody. And here is a generated song. It’s not going to win any Grammy awards but it’s a pretty cool way to learn about machine learning methods and how the input parameters affect the output.

So try it out! You will be charged for the AWS computing resources used to transform your melody. While the charges will likely be small, it is something to keep in mind as you experiment and iterate.
