It’s just been confirmed that four of us from Moodagent – Reinier de Valk, Pierre Lafitte, Tomas Gajarsky and I – will be attending ISMIR 2019 in Delft (The Netherlands). This year, two of my colleagues from Moodagent will be presenting their work at ISMIR:
I happened to be on holiday in beautiful Mararikulam in Kerala at the time, but I really didn’t want to miss this opportunity to speak, so we decided to make it a remote talk that I delivered via Skype. Thanks to the excellent organisers – Albin Correya, Manaswi Mishra and Siddharth Bharadwaj – the talk went off smoothly and was apparently well received. The other speakers at the event were Harshit Agarwal and two of the organisers themselves – Albin Correya and Manaswi Mishra.
A few months after the acceptance of our paper at ISMIR 2018, I attended the conference in Paris with several of my colleagues from Jukedeck. We had a fairly large presence there, dwarfed (as far as I could tell) only by a larger one from Spotify. The conference was organised very well and everything went off smoothly. It was great to be back in the beautiful city after my last visit nearly 8 years ago!
I was particularly pleased by the new format for presenting accepted papers at this ISMIR, wherein each paper was given both an oral and a poster presentation slot, removing the traditional distinction between oral and poster papers at conferences. In the case of our paper on StructureNet, I gave the oral presentation while my colleagues and co-authors – Gabriele and Marco – presented the poster. Fortunately, this year ISMIR was streamed live and the videos were later uploaded to YouTube, so I’m able to share the video of my presentation with you. It’s only a 4-minute presentation, so do check it out! Each time I passed our poster, it appeared to be receiving a lot of attention, which was of course great! With help from members of my team, I also prepared a blog post on StructureNet, which was published recently on the Jukedeck R&D Team’s Medium page. I urge you to give it a read if you’re curious about what the paper is all about. Here’s a picture of the Jukedeck team at ISMIR:
I also signed up to play in this year’s ISMIR jam session, organised by Uri Nieto from Pandora! If I remember correctly, it’s something that started in 2014 and has been growing in popularity every year. As anticipated, the jam session was a success and a lot of fun, with music ranging from AI-composed folk tunes to jazz, blues, rock and heavy metal. I played two songs with my fellow attendees – Blackest Eyes by Porcupine Tree and Plush by Stone Temple Pilots. My friend Juanjo shared a recording of the first song with me, in which I played bass.
As always, ISMIR this year provided a great opportunity to make new acquaintances and meet old friends and colleagues. As it turned out, quite a few of my friends from the Music Informatics Research Group (MIRG) at City, University of London showed up this time, and it was great to catch up with them.
And to top it all off, my master’s thesis supervisor Hendrik Purwins managed to make it to the conference on the last day, giving me the opportunity to get this selfie with him and Tillman (my PhD thesis supervisor).
I’ve lately spent some time reading about Curriculum Learning and experimenting with the algorithms described in two of the papers in this domain:
Bengio, Y., Louradour, J., Collobert, R., & Weston, J. (2009, June). Curriculum learning. In Proceedings of the 26th annual international conference on machine learning (pp. 41-48). ACM.
Graves, A., Bellemare, M. G., Menick, J., Munos, R., & Kavukcuoglu, K. (2017). Automated Curriculum Learning for Neural Networks. arXiv preprint arXiv:1704.03003.
The first of these can be considered important because its empirical results supporting Curriculum Learning revived researchers’ interest in the technique. The second is a more recently proposed approach to Curriculum Learning that I thought would be interesting to understand in greater depth.
I’ve summarised my thoughts on these in a short presentation. I hope to share my code and results not too long from now as well.
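To make the idea concrete ahead of sharing my code, here is a minimal sketch of the kind of fixed easy-to-hard schedule proposed in Bengio et al. (2009): training examples are sorted by a difficulty score, and the training pool grows in stages from the easiest fraction of the data to the full dataset. The function name, the stage count and the toy difficulty scores are my own illustration, not anything from the papers.

```python
import numpy as np

def curriculum_batches(examples, difficulty, n_stages=3, seed=0):
    """Yield training pools of increasing difficulty (Bengio et al. 2009 style).

    At stage k (1-indexed), the pool contains the easiest k/n_stages fraction
    of the data, so the learner sees easy examples first and the full set last.
    """
    rng = np.random.default_rng(seed)
    order = np.argsort(difficulty)  # indices sorted easiest-first
    n = len(examples)
    for stage in range(1, n_stages + 1):
        pool = order[: max(1, int(n * stage / n_stages))]
        # shuffle within the pool so each stage is still trained stochastically
        yield [examples[i] for i in rng.permutation(pool)]

# toy usage: ten examples whose "difficulty" is just a handcrafted score
data = list(range(10))
scores = np.array([5, 1, 9, 3, 7, 0, 8, 2, 6, 4])
stages = list(curriculum_batches(data, scores))
```

In a real training loop each yielded pool would be iterated over for some number of epochs before moving to the next stage; the Graves et al. (2017) paper replaces this fixed schedule with one chosen adaptively by a bandit.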
I’m currently on a break from work at Jukedeck until the 22nd of September, visiting friends and old colleagues in Bangalore for a few days. On learning of my visit, my past mentors invited me to give talks at their respective organisations – the International Institute of Information Technology, Bangalore (IIIT-B), and Robert Bosch. Today I presented to the staff and students at IIIT-B the work I did during my PhD on sequence modelling in music, RBMs and Recurrent RBMs. And next Monday (the 18th of September, 2017) I’ll give more or less the same talk at Robert Bosch.
Here is a copy of the slides for those presentations.
Ever since I first worked with Recurrent Neural Networks (RNNs) for predicting musical sequences during my PhD, I have been fascinated by these models and have tried to keep up with exciting developments in the connectionist machine learning research surrounding them. One such development for me has been the emergence of RNNs augmented with a dedicated memory unit. The idea was notably illustrated as the Neural Turing Machine (NTM) in an arXiv submission by Alex Graves and colleagues from Google DeepMind. This early work, having gathered a fair deal of acclaim in the community, has since been followed up by a publication in the prestigious journal Nature that introduces a more evolved variant of the NTM known as the Differentiable Neural Computer (DNC). Over the past couple of weeks, I managed to spend some time learning about the NTM and the DNC, and prepared a little slide-show (with Google Slides) containing my observations to share with others.
So here is the link to the slides – I hope some of you who read them benefit from them! Please let me know if you find anything in them that needs to be corrected. I would appreciate that!
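For readers who prefer code to slides, here is a small sketch of one core NTM mechanism, content-based addressing: read weights are computed from the cosine similarity between a key vector emitted by the controller and each memory row, sharpened by a scalar β and normalised with a softmax. The function and variable names are mine, and the toy memory matrix is purely illustrative.

```python
import numpy as np

def content_addressing(memory, key, beta):
    """Content-based read weights as described in the NTM paper.

    memory: (N, W) matrix of N memory rows of width W
    key:    (W,) key vector emitted by the controller
    beta:   positive sharpening scalar; larger values focus the weights
    """
    eps = 1e-8  # guard against division by zero for all-zero rows
    sim = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    e = np.exp(beta * sim)      # sharpen the similarities
    return e / e.sum()          # softmax -> weights summing to 1

M = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.7, 0.7]])
w = content_addressing(M, np.array([1.0, 0.0]), beta=5.0)
# row 0 matches the key exactly, so it receives the largest weight
```

The full NTM combines these content-based weights with location-based shifting and gating, and the DNC replaces the shift mechanism with temporal-link and usage-based addressing.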
I had the opportunity to join my colleagues at Jukedeck – Patrick, Lydia, Eliza, Matt, Katerina and Gabriele – at the Science Museum Lates last night. For those of you unfamiliar with the concept, Lates are adults-only, after-hours theme nights that take place in the Science Museum (in London) on the last Wednesday of every month. They are attended by various organisations that would like to showcase their work relating to a chosen theme, as well as an audience keen on learning more about the science and technology underlying that theme. On the last day of August 2016, it was Jukedeck’s turn to show off its awesome technology at the museum, and some of us volunteered to tag along.
The museum was packed with visitors, and it was great to see so many people interested in our technology! I hardly had time to grab some dinner amidst the constant stream of people wanting to listen to our music and know more about the underlying algorithms. To me, as someone who does the research and writes the code that generates our music, it was incredibly rewarding to see first-hand the appreciation people had for our work. It was, in many ways, like giving a poster presentation at a conference, but to a non-technical audience. I enjoyed it very much. In the future, I’ll try my best not to let such opportunities pass, and I look forward to attending the event myself as a spectator! If you happen to be in London while this event is on, I highly recommend attending it if you’re interested in science and technology.
“We are interested in modelling musical pitch sequences in melodies in the symbolic form. The task here is to learn a model to predict the probability distribution over the various possible values of pitch of the next note in a melody, given those leading up to it. For this task, we propose the Recurrent Temporal Discriminative Restricted Boltzmann Machine (RTDRBM). It is obtained by carrying out discriminative learning and inference as put forward in the Discriminative RBM (DRBM), in a temporal setting by incorporating the recurrent structure of the Recurrent Temporal RBM (RTRBM). The model is evaluated on the cross entropy of its predictions using a corpus containing 8 datasets of folk and chorale melodies, and compared with n-grams and other standard connectionist models. Results show that the RTDRBM has a better predictive performance than the rest of the models, and that the improvement is statistically significant.”
I presented the paper in the session on Recurrent Neural Networks. The model we proposed in the paper – the RTDRBM – was the first original Machine Learning contribution of my PhD, and it was a pleasure to collaborate with my friend and colleague Son Tran on the work. He presented a second paper at the conference, titled “Efficient Representation Ranking for Transfer Learning”.
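For the curious, the cross-entropy measure on which the models above were compared can be sketched quite simply: it is the average negative log probability (here in bits) that a model assigns to the pitch that actually occurs next, so lower is better. The helper name and the toy distributions below are my own illustration, not the paper’s data.

```python
import numpy as np

def mean_cross_entropy(pred_dists, targets):
    """Mean cross entropy (in bits) of next-pitch predictions.

    pred_dists: list of probability distributions over the pitch alphabet,
                one per prediction step
    targets:    list of indices of the pitches that actually occurred
    """
    probs = np.array([dist[t] for dist, t in zip(pred_dists, targets)])
    return float(-np.mean(np.log2(probs)))

# toy example: three predictions over a 3-pitch alphabet
dists = [np.array([0.70, 0.20, 0.10]),
         np.array([0.25, 0.50, 0.25]),
         np.array([0.10, 0.10, 0.80])]
truth = [0, 1, 2]  # the model assigned 0.7, 0.5 and 0.8 to the true pitches
```

A model that always predicted the true pitch with probability 1 would score 0 bits; uniform guessing over an alphabet of size k scores log2(k) bits.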
Yet again a conference has taken me to a place in the world that I probably would never have visited otherwise! This doesn’t at all mean that the visit wasn’t worthwhile. The lush green Irish landscape, the charming town of Killarney and the abounding nature around it, and a friendly and welcoming hostel all made this a very memorable trip! Unfortunately, I had a sore throat and a fever during much of my stay, so I chose Irish coffee over a pint of Guinness (which I hear tastes much better in Ireland) when I had the chance. I regret this, but maybe that’s another reason to visit Ireland once again sometime!
“The multiple viewpoints representation is an event-based representation of symbolic music data which offers a means for the analysis and generation of notated music. Previous work using this representation has predominantly relied on n-gram and variable order Markov models for music sequence modelling. Recently the efficacy of a class of distributed models, namely restricted Boltzmann machines, was demonstrated for this purpose. In this paper, we demonstrate the use of two neural network models which use fixed-length sequences of various viewpoint types as input to predict the pitch of the next note in the sequence. The predictive performance of each of these models is comparable to that of models previously evaluated on the same task. We then combine the predictions of individual models using an entropy-weighted combination scheme to improve the overall prediction performance, and compare this with the predictions of a single equivalent model which takes as input all the viewpoint types of each of the individual models in the combination.”
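The abstract above doesn’t spell out the combination formula, so here is a sketch assuming the standard scheme from the multiple-viewpoints literature, where each model’s predictive distribution is weighted by the inverse of its entropy (raised to a tunable bias exponent), so that more confident models dominate the combination. The function names and the `bias` parameter are my own illustration.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability distribution, in bits."""
    return float(-np.sum(p * np.log2(p + eps)))

def entropy_weighted_combine(dists, bias=1.0):
    """Combine predictive distributions with entropy-based weights.

    Each distribution is weighted by entropy**(-bias): low-entropy
    (confident) models get larger weights. The result is renormalised
    so it is again a valid probability distribution.
    """
    weights = np.array([entropy(p) ** (-bias) for p in dists])
    combined = sum(w * p for w, p in zip(weights, dists))
    return combined / combined.sum()

# two models predicting over a 3-pitch alphabet
a = np.array([0.8, 0.1, 0.1])          # confident (low entropy)
b = np.array([1/3, 1/3, 1/3])          # maximally uncertain
c = entropy_weighted_combine([a, b])   # pulled towards the confident model
```

Raising `bias` above 1 concentrates the combination further on the most confident model, while `bias = 0` reduces it to a plain average.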
I have to note that this year’s ISMIR organisation was fantastic! Everything from the review process and the information on the website to the venue, the assistance at the venue, and the banquet was very well managed and executed by the organisers. The most interesting part of the conference for me was the keynote lecture, titled “Sound and Music Computing for Exercise and (Re-)habilitation”, by Prof. Ye Wang, in which he described the potential of music to serve as a means to rehabilitate and improve the quality of life of individuals with different ailments, and illustrated this with a few projects his group at the National University of Singapore has been working on. It was a very inspiring talk, and I really admire Prof. Wang’s statement regarding the often overlooked direct impact of research and published work on society, which has been the cornerstone of these projects. I have lately taken an interest in Music Therapy and have been going through some literature to see if my own work on music modelling can in some way be applied to achieve therapeutic goals. There were some interesting late-breaking sessions as well that I took part in, including the very successful one organised by my supervisor Tillman on Big Data and Music, where I took notes during the discussion.
And finally, as is always the case when I attend a conference, I did take some time off in Taipei and its surrounding areas. On one evening, I joined some friends and colleagues to go see the tallest building in the city – Taipei 101.
On another day, a couple of us planned a day-trip to a nearby village called Jiufen where we checked out some temples, the market and the old Japanese mining village on top of a hill.
And on another day, I joined my buddy Marius on a local sightseeing round to some local museums, Shilin Night Market, the Chiang Kai-shek Memorial Hall and other places, before eventually taking the long flight back to London.
Taipei was fantastic, and I’d be up for another visit anytime! Last but not least, the hospitality of Fun Taipei hostel made the whole trip a little better each day.
I was selected to attend the Machine Learning Summer School in Reykjavik between April 25 and May 4, 2014. I was also awarded a travel grant, which made it possible for me to attend. In addition, I presented a poster about my ongoing work on musical pitch prediction with neural networks.
Many of the topics were very new to me, but I found the tutorials on Machine Learning and HCI (Roderick Murray-Smith), Introduction to ML (Neil Lawrence), Deep Learning (Yoshua Bengio), Probabilistic Modelling (Iain Murray), and Reinforcement Learning (David Silver) particularly interesting. The last talk in particular seemed to contain much that could be adopted into my own work on music modelling, and I was very tempted to do so. Let’s see how that goes.
I was also a bit stressed carrying out experiments for a paper we’re submitting to the 15th International Society for Music Information Retrieval Conference (ISMIR 2014). So fingers crossed that it will all work out!
I managed to travel a little while I was in Reykjavik. This was something that had to be done given how novel a destination Iceland is. I joined the rest of the workshop attendees on the Golden Circle Tour that showed us some fascinating and very alien Icelandic landscapes.
And finally, I made a last-minute trip to the Blue Lagoon on the day before my return to London.
It was indeed very fortunate that I was able to attend the summer school in Reykjavik. It has been an incredible learning experience in one of the most unique destinations I have been to in my entire life!
Here is a copy of the poster (made using Beamer/LaTeX) that I presented.