This month I attended my first MusicHackDay event in London. I went along with my work colleague @JGarciaMartin, and we managed to implement “FFTM”, which stands for “Fast Forward Time Machine”. The idea is that the user types in the name of a music artist, and FFTM plays a selection of music samples by that artist in chronological order, spanning their career. A basic speech system on top of the music narrates each song's name and release year.
FFTM is implemented in Python, and this is how it works:
Firstly, we query the MusicBrainz database for the artist.
Then we check 7Digital for available MP3 previews of the songs.
After that we cap the number of tracks per year at N.
The previews are downloaded.
The text-to-speech DJ introduces the artist's playlist with a variety of randomly generated phrases.
The previews are then played one after the other, with transition effects between them.
For each song, the DJ mentions the year of release and the name of the song.
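To give a flavour of the capping step, here is a small Python helper in the spirit of the pipeline above. It is an illustrative sketch, not the actual FFTM code: the function name, the (year, title) tuple format, and the assumption that tracks arrive already sorted chronologically are all mine.

```python
def cap_tracks_per_year(tracks, n):
    """Keep at most n tracks per release year, preserving order.

    `tracks` is a list of (year, title) tuples, assumed to be
    sorted chronologically already. Illustrative only; this is
    not FFTM's real data model.
    """
    kept = []
    per_year = {}  # year -> how many tracks we have kept so far
    for year, title in tracks:
        if per_year.get(year, 0) < n:
            kept.append((year, title))
            per_year[year] = per_year.get(year, 0) + 1
    return kept

tracks = [
    (1999, "Song A"), (1999, "Song B"), (1999, "Song C"),
    (2001, "Song D"), (2001, "Song E"),
]
print(cap_tracks_per_year(tracks, 2))
# [(1999, 'Song A'), (1999, 'Song B'), (2001, 'Song D'), (2001, 'Song E')]
```

With N = 2, the third 1999 track is dropped and everything else plays in order.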
People who use hrtflib often don't have a background in audio programming, so they ask me what to do with the output of that library. The answer is convolution. There is plenty of information about it on the net, but I wanted to put up a simple code sample in C to show how convolution works. Here it is: