As some of you might already know, I have been volunteering with a few of my peers in India to promote awareness of Music Technology in the country through the Music Tech Community – India initiative. Over the past few months, at my suggestion, we planned a new blog post series of interviews with individuals engaged in Music Technology in India, or who are from India but based elsewhere. We hope that readers of this series will learn a great deal from the experiences of these individuals, gain valuable insights into the field, and be inspired to shape their own careers.
I’m very pleased to announce that today we published the first post in this series on the website! It is an interview with an active member of the community, and a researcher applying Information Retrieval techniques to Indian classical music – Ajay Srinivasamurthy. In the weeks preceding the publication of the post, we got in touch with Ajay, who kindly offered to take part in this initiative. You can read what Ajay had to say in the blog post.
I believe this is a great start, and I look forward to more such interesting chats in the future!
Now that I’m no longer working at Jukedeck, I happen to have plenty of free time on my hands! I’ve been spending this time travelling, catching up on my reading list, helping out with activities of the Music Tech Community – India and making music, among other things. In an effort to satisfy a long-standing curiosity, I signed up for the Recommender Systems specialisation offered on Coursera by the University of Minnesota, and recently completed it. It comprised four courses:
Introduction to Recommender Systems: Non-personalised and Content-based (certificate)
Recommender Systems: Evaluation and Metrics (certificate)
Matrix Factorisation and Advanced Techniques (certificate)
It took me about a month to complete all four courses at a fairly leisurely pace, given how much time I had at my disposal while not working. This was a very well-taught specialisation, with some of the best-designed courses I’ve done on Coursera so far. It covered a wide range of topics and offered a comprehensive overview of a vast area of research. Solving the assignments by hand was a new but very engaging experience that really allowed me to focus on what actually happens under the hood in such systems, at a very basic level. This was done by implementing the various formulae for content-based filtering, item-item collaborative filtering and user-user collaborative filtering (as well as matrix factorisation methods) in spreadsheets. Each course had an Honours Track focused on implementing the various types of recommender systems and related concepts, which I decided not to pursue, as all the programming was in Java. Instead, I decided to follow the courses up with my own implementation projects in Python, as that’s of greater interest to me. So now I’m looking for little projects to get me going.
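As a first such little project, here’s a minimal item-item collaborative filtering sketch in Python with numpy, mirroring the kind of computation the spreadsheet assignments involved. The toy ratings matrix is made up, and restricting the cosine similarity to co-rated entries is one simplification among several possible design choices:

```python
import numpy as np

# Toy user-item rating matrix (0 = unrated); rows are users, columns are items.
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [1.0, 0.0, 0.0, 4.0],
    [0.0, 1.0, 5.0, 4.0],
])

def cosine_similarity(a, b):
    """Cosine similarity between two item columns over co-rated entries."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    den = np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])
    return float(np.dot(a[mask], b[mask]) / den) if den else 0.0

def predict(user, item):
    """Predict a rating as a similarity-weighted average of the user's ratings."""
    scores, weights = 0.0, 0.0
    for other in range(R.shape[1]):
        if other == item or R[user, other] == 0:
            continue
        sim = cosine_similarity(R[:, item], R[:, other])
        if sim > 0:
            scores += sim * R[user, other]
            weights += sim
    return scores / weights if weights else 0.0

print(predict(0, 2))  # → 2.0
```

A real system would at least mean-centre the ratings and limit the weighted average to the top-k most similar items, but the basic mechanics are the same as in the spreadsheet exercises.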
I would definitely recommend this specialisation to anyone interested in Recommender Systems. It has left me with a very good understanding of the basics and a fair idea of the various directions in which I can pursue things in more detail. Not to mention a tonne of references to read up on, which I look forward to doing, along with implementing some of the algorithms, in the coming weeks.
This is just a quick post to let everyone know that I have decided to leave Jukedeck. It’s been a unique and fascinating journey these past three or so years, with a flexible and forward-thinking company and a stimulating work environment. I couldn’t have asked for a more apt transition into employment after my PhD than the one that led me to Jukedeck, and I’m really grateful for all that I have learned here, the people I’ve had the opportunity to work with, and everything the company has done for me during this period. This also means that I’m no longer going to be living or working in the UK; my wife Nina and I have some new and exciting plans for the future that I’m really looking forward to.
There have also been some interesting developments with regard to where I’ll be going and what I’ll be doing next now that my tenure at Jukedeck has come to an end. I’ll post updates here on my blog as and when things take shape in the coming months.
I happened to be on holiday in beautiful Mararikulam in Kerala around then, but I really didn’t want to miss this opportunity to speak, so we decided to make it a remote talk that I delivered via Skype. Thanks to the excellent organisers – Albin Correya, Manaswi Mishra and Siddharth Bharadwaj – the talk went off smoothly and was apparently well-received. Other speakers during the event were Harshit Agarwal, and two of the organisers themselves – Albin Correya and Manaswi Mishra.
A few months after the acceptance of our paper at ISMIR 2018, I attended the conference in Paris with several of my colleagues from Jukedeck. We had a fairly large presence there, dwarfed (as far as I could tell) only by a larger one from Spotify. The conference was very well organised and everything went off smoothly. It was great to be back in the beautiful city after my last visit nearly 8 years ago!
I was particularly pleased by the new presentation format at this ISMIR, wherein each paper was given both an oral and a poster presentation slot, removing the traditional distinction between oral and poster papers that exists at conferences. In the case of our paper on StructureNet, I gave the oral presentation and my colleagues and co-authors – Gabriele and Marco – presented the poster. Fortunately, ISMIR was streamed live this year and the videos were later uploaded to YouTube, so I’m able to share the video of my presentation with you. It’s only a 4-minute presentation, so do check it out! Each time I passed our poster, it appeared to be receiving a lot of attention, which was of course great! With help from members of my team, I also prepared a blog post on StructureNet, which was published recently on the Jukedeck R & D Team’s Medium page. I urge you to give it a read if you’re curious about what the paper is all about. Here’s a picture of the Jukedeck team at ISMIR:
I also signed up to play in this year’s ISMIR jam session organised by Uri Nieto from Pandora! If I remember correctly, it’s something that started in 2014 and has been getting more popular by the year. As anticipated, the jam session was a success and a lot of fun, with music ranging from AI-composed folk tunes to Jazz, Blues, Rock and Heavy Metal. I played two songs with my fellow attendees – Blackest Eyes by Porcupine Tree and Plush by Stone Temple Pilots. My friend Juanjo shared a recording of the first song with me in which I played bass.
As always, ISMIR this year provided a great opportunity to make new acquaintances, and meet old friends and colleagues. As it turns out quite a few of my friends from the Music Informatics Research Group (MIRG) at City, University of London showed up this time and it was great to catch up with them.
And to top it all off, my master’s thesis supervisor Hendrik Purwins managed to make it to the conference on the last day, giving me the opportunity to get this one selfie with him and Tillman (my PhD thesis supervisor).
Edit (13-06-2018): ISMIR has officially announced the list of accepted papers, so I’m sharing the details of our accepted paper too!
Medeot, G., Cherla, S., Kosta, K., McVicar, M., Abdallah, S., Selvi, M., Newton-Rex, E., and Webster, K., StructureNet: Inducing Structure in Generated Melodies. In: Proc. International Society for Music Information Retrieval Conference (ISMIR 2018). Paris, France.
I recently ran into a situation where I had to train a neural network on one dataset, save it, and later load it up to train it on a different dataset (or with a different training procedure). I implemented this in Tensorflow and thought I’d share a stripped-down version of the script here, as it could serve as an instructive example of the use of Tensorflow sessions. Note that this is not necessarily the best way of doing this; it might well be simpler to load the original graph and continue training it directly by making its parameters trainable, or something along those lines.
The script can be found here. In the first stage of the script (the pre-training stage) there is only a single graph, which contains the randomly initialised and trained model. One might as well avoid explicitly defining a graph, as Tensorflow’s default graph would be used for this purpose. This model (together with its parameters) is saved to a file and then loaded for the second, re-training stage. In this second stage there are two graphs. The first is loaded from the saved file and contains the pre-trained model, whose parameter values we wish to assign to the second model before training the latter on a different dataset. The parameters of the second model are randomly initialised prior to this assignment step. For the assignment to work, I found it necessary to assign parameters across graphs, which I did by extracting the parameters of the first model as numpy tensors and assigning their values to the corresponding parameters of the second model.
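The cross-graph assignment at the heart of the script can be sketched roughly as follows. The variable name and shape here are made-up stand-ins, and the checkpoint save/restore step is replaced by a plain initialisation so the sketch is self-contained:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style graphs and sessions
tf.disable_eager_execution()

# --- Stage 1: stands in for the pre-trained model ---
g1 = tf.Graph()
with g1.as_default():
    w1 = tf.get_variable("w", initializer=tf.ones((4, 2)))

pretrained = {}
with tf.Session(graph=g1) as sess:
    sess.run(tf.global_variables_initializer())
    # (In the actual script, a tf.train.Saver would restore the trained
    # parameters from file here instead.)
    for var in g1.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
        pretrained[var.op.name] = sess.run(var)  # parameter -> numpy tensor

# --- Stage 2: a fresh, randomly initialised model to be re-trained ---
g2 = tf.Graph()
with g2.as_default():
    w2 = tf.get_variable("w", shape=(4, 2))

with tf.Session(graph=g2) as sess:
    sess.run(tf.global_variables_initializer())  # random initialisation
    # Assign the numpy values from graph 1 to matching variables in graph 2.
    for var in g2.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
        if var.op.name in pretrained:
            sess.run(var.assign(pretrained[var.op.name]))
    restored = sess.run(w2)  # now holds the stage-1 values
    # ...training on the second dataset would continue from here...
```

Matching variables by name across the two graphs is what makes the numpy round-trip work; the two sessions never need to be open at the same time.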
I successfully completed this course with a 98.9% mark. This course was relatively more focused than the others so far. The machine learning theory it covered was very basic and good for beginners, so I skimmed through it fairly quickly. Nevertheless, it was a good refresher on models such as Naive Bayes, Decision Trees and k-Means Clustering. What I found particularly useful was the introduction to the KNIME and Spark ML frameworks, and the exercises where one had to apply these ML models to some example datasets.
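To give a flavour of what one of these models looks like when reproduced from scratch, here’s a minimal k-Means sketch in numpy. The toy two-blob dataset and the deterministic initialisation are my own choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D dataset: two well-separated blobs standing in for an exercise dataset.
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])

def kmeans(X, k, n_iter=20):
    """Plain k-Means: alternate nearest-centroid assignment and mean update."""
    centroids = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it.
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(X, k=2)
print(sorted(np.bincount(labels)))  # the two blobs recovered: [20, 20]
```

The frameworks of course add a lot on top (empty-cluster handling, smarter initialisation, convergence checks), but this is the core loop the lectures described.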
I think this course and the last one, with their greater focus on ML in the context of Big Data, were more hands-on and closer to what I was looking for when I first started this specialisation.
I successfully completed this course with a 97.7% mark. This course was once again broad, touching upon several big data technologies through a series of lectures, assignments and hands-on exercises. The focus was mainly on querying JSON data using MongoDB, analysing data using Pandas, and programming in Spark (Spark SQL, Spark Streaming, Spark MLlib and Spark GraphX). All of these were things I was curious about, and it was great that the course introduced them. There was also an exercise on analysing tweets using both MongoDB and Spark. There was one section on something called Splunk, which I thought was a waste of time, but I guess they have to keep their sponsors happy.
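The tweet-analysis exercise was representative of the Pandas portion of the course; a minimal sketch of that style of analysis, on a few made-up tweet records standing in for documents queried out of MongoDB, might look like this:

```python
import pandas as pd

# Hypothetical tweet documents, as one might fetch them from a MongoDB query.
tweets = [
    {"user": "alice", "text": "learning spark", "retweets": 3},
    {"user": "bob",   "text": "mongodb and pandas", "retweets": 1},
    {"user": "alice", "text": "spark streaming demo", "retweets": 5},
]

df = pd.DataFrame(tweets)

# Aggregate retweet counts per user, most-retweeted first.
per_user = df.groupby("user")["retweets"].sum().sort_values(ascending=False)
print(per_user.to_dict())  # {'alice': 8, 'bob': 1}
```

The course’s versions of such exercises ran the same kind of aggregation at scale in Spark, but the Pandas form is handy for prototyping on a sample.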
This specialisation has so far (I’m halfway through) been fairly introductory and lacking in depth. It’s been good to the extent that I now feel aware of all these different technologies and would know where to start if I were to use them for a specific application. As I expected, this course was more hands-on, which was great!
This is one of the first AiC songs I heard, and it got me into the band. While I had the broken finger, I also took some time off from fast licks and finger-intensive playing to learn some wah coordination, and this was a great song to begin with. I used a Vox wah pedal here that I bought years ago. A bit squeaky, but it worked alright. There are still a few rough edges in this final recording, a reflection of how much time I was willing to spend on perfecting it. Another Rush song coming up after this!