Goodbye, Jukedeck!

This is just a quick post to let everyone know that I have decided to leave Jukedeck. It’s been a unique and fascinating journey over the past three or so years, with a flexible and forward-thinking company and a stimulating work environment. I couldn’t have asked for a more apt transition into employment after my PhD than the one that led me to Jukedeck, and I’m really grateful for all that I have learned here, the people I’ve had the opportunity to work with, and everything the company has done for me during this period. This also means that I’m no longer going to be living or working in the UK; my wife Nina and I have some new and exciting plans for the future that I’m really looking forward to.

There have also been some interesting developments with regard to where I’ll be going and what I’ll be doing now that my tenure at Jukedeck has come to an end. I’ll post updates here on my blog as and when things take shape in the coming months.

Oral Presentation at the 19th International Society for Music Information Retrieval Conference

A few months after our paper was accepted at ISMIR 2018, I attended the conference in Paris with several of my colleagues from Jukedeck. We had a fairly large presence there, dwarfed (as far as I could tell) only by a larger one from Spotify. The conference was organised very well and everything went off smoothly. It was great to be back in the beautiful city after my last visit nearly 8 years ago!

I was particularly pleased by the new format for presenting accepted papers at this ISMIR, wherein each paper was given both an oral and a poster presentation slot, removing the traditional distinction between oral and poster papers that exists at most conferences. In the case of our paper on StructureNet, I gave the oral presentation, and my colleagues and co-authors Gabriele and Marco presented the poster. Fortunately, this year ISMIR was streamed live and the videos were later uploaded to YouTube, so I’m able to share the video of my presentation with you. It’s only a 4-minute presentation, so do check it out! Each time I passed our poster, it seemed to be receiving a lot of attention, which was of course great! With help from members of my team, I also prepared a blog post on StructureNet, which was published recently on the Jukedeck R&D Team’s Medium page. I urge you to give it a read if you’re curious what the paper is all about. Here’s a picture of the Jukedeck team at ISMIR:

The Jukedeck Team at ISMIR 2018 – (from left-to-right) Ben, Reinier, Gabriele, Matt, me, Katerina and Marco.

I also signed up to play in this year’s ISMIR jam session, organised by Uri Nieto from Pandora! If I remember correctly, it’s something that started in 2014 and has been getting more popular by the year. As anticipated, the jam session was a success and a lot of fun, with music ranging from AI-composed folk tunes to Jazz, Blues, Rock and Heavy Metal. I played two songs with my fellow attendees: Blackest Eyes by Porcupine Tree and Plush by Stone Temple Pilots. My friend Juanjo shared a recording of the first song with me, in which I played bass.

As always, ISMIR this year provided a great opportunity to make new acquaintances and meet old friends and colleagues. As it turns out, quite a few of my friends from the Music Informatics Research Group (MIRG) at City, University of London showed up this time, and it was great to catch up with them.

The MIRG at ISMIR 2018: (from left-to-right, back-to-front) Shahar, me, Daniel, Tillman, Andreas, Radha and Reinier.

And to top it all off, my Master’s thesis supervisor Hendrik Purwins managed to make it to the conference on the last day, giving me the opportunity to get this selfie with him and Tillman (my PhD thesis supervisor).

Tillman, me and Hendrik at the conference venue.

Paper Accepted at ISMIR 2018

A paper I submitted with my colleagues from Jukedeck was accepted at the 19th International Society for Music Information Retrieval Conference. A big congratulations to my co-authors Gabriele, Katerina, Matt, Samer, Marco, Ed and Kevin. It’s been a pleasure working with you all on this project and it’s a well-deserved recognition of the work itself!

More details to come soon!

Edit (13-06-2018): ISMIR has officially announced the list of accepted papers, so I’m sharing the details of our accepted paper too!

Medeot, G., Cherla, S., Kosta, K., McVicar, M., Abdallah, S., Selvi, M., Newton-Rex, E., and Webster, K., StructureNet: Inducing Structure in Generated Melodies. In: Proc. International Society for Music Information Retrieval Conference (ISMIR 2018). Paris, France.

Tensorflow Tip: Pretrain and Retrain

I recently ran into a situation where I had to train a neural network first on one dataset, save it, and then load it up later to train it on a different dataset (or with a different training procedure). I implemented this in Tensorflow and thought I’d share a stripped-down version of the script here, as it could serve as an instructive example of the use of Tensorflow sessions. Note that this is not necessarily the best way of doing this; it might well be simpler to load the original graph and train it directly by making its parameters trainable, or something along those lines.

The script can be found here. In the first stage of this script (the pre-training stage), there is only a single graph, which contains the randomly initialised model that is then trained. One might as well avoid explicitly defining a graph, as Tensorflow’s default graph would be used for this purpose. This model (together with its parameters) is saved to a file and then loaded for the second, re-training stage. In this second stage there are two graphs. The first is loaded from the saved file and contains the pre-trained model, whose parameter values we wish to assign to the corresponding parameters of the second model before training the latter on a different dataset. The parameters of the second model are randomly initialised prior to this assignment step. For the assignment to work, I found it necessary to transfer parameters across graphs, which can be done by reading the parameters of the first model out as numpy arrays and assigning their values to the corresponding parameters of the second model.
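To make the two-stage flow concrete without reproducing the full Tensorflow script, here is a minimal framework-agnostic sketch in plain Python: pre-train a toy model, serialise its parameters to plain values on disk (the analogue of saving them as numpy arrays with `tf.train.Saver`), then load those values into a freshly initialised second model and re-train it on different data. The `train` function, the datasets and the file name are all made up for illustration.

```python
import json
import os
import random
import tempfile

def train(params, data, lr=0.05, epochs=500):
    """Fit y = w*x + b to (x, y) pairs by per-sample gradient descent."""
    w, b = params["w"], params["b"]
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return {"w": w, "b": b}

random.seed(0)

# Stage 1 (pre-training): train on the first dataset from a random
# initialisation and save the resulting parameters to disk.
data_a = [(x, 2.0 * x + 1.0) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
pretrained = train({"w": random.uniform(-1.0, 1.0), "b": 0.0}, data_a)

ckpt = os.path.join(tempfile.gettempdir(), "pretrained_params.json")
with open(ckpt, "w") as f:
    json.dump(pretrained, f)

# Stage 2 (re-training): read the saved parameters back as plain values
# (numpy arrays in the Tensorflow version), assign them to a fresh model,
# and continue training on a different dataset.
with open(ckpt) as f:
    restored = json.load(f)

data_b = [(x, -1.0 * x + 0.5) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
retrained = train(restored, data_b)
print(retrained)
```

The key point the sketch shares with the Tensorflow script is that the hand-off between the two stages happens through plain serialised values rather than through any live graph objects, which is what makes the cross-graph assignment straightforward.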

Completed Andrew Ng’s “Convolutional Neural Networks” course on Coursera

I successfully completed this course with a 100.0% mark. Unlike the other two courses I had done as part of this Deep Learning specialisation, there was much for me to learn in this one. I had only skimmed a couple of papers on convolutional networks in the past and hadn’t really implemented any aspect of this class of models, apart from helping colleagues fix bugs in their code. So I was stoked to do this course, and I was not disappointed. Andrew Ng designs and delivers his lectures very well, and this course was no exception. The programming assignments and quizzes were engaging and moderately challenging. The idea of 1D, 2D and 3D convolutions was explained clearly and in sufficient depth in the lectures. They also covered some state-of-the-art convolutional architectures such as VGG Net, Inception Net and Network-in-Network, as well as applications such as object and face recognition and neural style transfer, in all of which convolutional networks are a cornerstone. The reading list for the course was also very useful and interesting. All in all, a great resource in my opinion for someone interested in this topic! And as usual, here’s the certificate I received on completing this course.
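As a toy illustration of what a 2D convolution computes (my own sketch, not code from the course; note that what deep learning libraries call convolution is actually cross-correlation, i.e. the kernel is not flipped), here is a minimal pure-Python version with no padding and stride 1:

```python
def conv2d(image, kernel):
    """'Valid' 2D cross-correlation of a 2D list by a 2D kernel.

    With no padding and stride 1, the output shrinks to
    (H - kh + 1) x (W - kw + 1).
    """
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [
        [
            # Each output value is the sum of elementwise products of the
            # kernel with the image patch anchored at (i, j).
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(w - kw + 1)
        ]
        for i in range(h - kh + 1)
    ]

# A 3x3 input with a 2x2 kernel yields a 2x2 feature map.
out = conv2d([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 0], [0, 1]])
print(out)  # [[6, 8], [12, 14]]
```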

Completed the Course “Big Data Modeling and Management Systems” offered by UCSD on Coursera

I successfully completed this course with a 100.0% mark. It was quite broad and covered a range of topics somewhat superficially: relational databases, their relation to big data management systems, and the various alternatives that exist for processing different types of big data. As with the first course, there were a lot of new names to grasp and connections to be made between the things they represented. The assignments were straightforward and involved running a few specific command-line tools and spreadsheet commands to process data and carry out some basic analysis, just to get a feel for data tables and how one might go about extracting information from them. The final assignment involved completing an incomplete relational database design for a game. In my opinion, its goals could have been more precise, its connection to the course material clearer, and, it being a peer-graded assignment, its evaluation criteria better defined. Quite a few learners seem to have lost out because of this last shortcoming, when someone else was unable to evaluate their assignment properly. And as usual, here’s the certificate that I was awarded on completing the course.

It looks like the upcoming courses in this specialisation contain more practical and hands-on exercises, so looking forward to that in the coming weeks!

Completed the Course “Introduction to Big Data” offered by UCSD on Coursera

I successfully completed this course with a 98.9% mark. It was easy and covered mostly definitions, some history of big data, big data jargon and very basic principles. There was an emphasis on what constitutes big data (in terms of size, variety, complexity, etc.), what kinds of analyses one can carry out on big data, what sources it can come from, and what tools one could use to analyse it. On the last of these, the course offered a brief introduction to the Hadoop ecosystem, which I found particularly interesting as I hadn’t ever worked with any of the software that is a part of this ecosystem. There was also a simple assignment that gave one a taste of what working with Hadoop could be like. Here’s a link to the certificate I received from Coursera on completing this course.

Looking forward to the remaining courses in the Big Data specialisation!

Completed Andrew Ng’s “Improving Deep Neural Networks” course on Coursera

I successfully completed this course with a 100.0% mark. Once again, this course was easy given my experience so far in machine learning and deep learning. However, as with the previous course I completed in the same specialisation, there were a few things that made it worth attending. I found the sections on optimisation (exponential moving averages, the Momentum, RMSProp and Adam optimisers, etc.), Batch Normalisation and, to some extent, Dropout particularly useful. Here’s a link to the certificate from Coursera for this course.
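For the curious, here is a minimal sketch (my own, not course code) of the Adam update for a single scalar parameter, which ties together the ideas above: an exponential moving average of the gradient (as in Momentum), one of its square (as in RMSProp), and bias correction for the zero-initialised averages.

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter w at step t (1-indexed)."""
    m = beta1 * m + (1 - beta1) * grad        # EMA of gradients (Momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2   # EMA of squared gradients (RMSProp)
    m_hat = m / (1 - beta1 ** t)              # bias-correct the zero-initialised EMAs
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# Minimising f(w) = w**2 (gradient 2*w) drives w towards the minimum at 0.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_step(w, 2.0 * w, m, v, t, lr=0.01)
print(w)
```

A nice property visible in the sketch: after bias correction, the very first update has magnitude close to `lr` regardless of the gradient’s scale, since `m_hat / sqrt(v_hat)` is then just `grad / |grad|`.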

I’m looking forward to the course on Convolutional Neural Networks!

Invited Talks at the International Institute of Information Technology – Bangalore and Robert Bosch

I’m currently on a break from work at Jukedeck until the 22nd of September, and am visiting friends and old colleagues in Bangalore for a few days. On learning of my visit, my past mentors invited me to give talks at their respective organisations: the International Institute of Information Technology – Bangalore (IIIT-B) and Robert Bosch. Today I presented the work I did during my PhD on sequence modelling in music, RBMs and Recurrent RBMs to the staff and students at IIIT-B. Next Monday (the 18th of September, 2017) I will give more or less the same talk at Robert Bosch.

Here is a copy of the slides for those presentations.

Participating in CSMC 2017 Panel Discussion

At 11:30 on the 13th of September, 2017, I will be participating in a panel discussion on the subject of “Applying Musical Patterns in Generation”, together with Elaine Chew, Roger Dean, Steven Jan, David Meredith and Tillman Weyde. It is being organised by Iris Yuping Ren as part of the 2nd Conference on Computer Simulation of Musical Creativity, taking place from the 11th to the 13th of September, 2017 in Milton Keynes, UK.

Really excited and looking forward to it!