Having been curious about Functional Programming for a while now, and having tried incorporating features of the paradigm into my own work in Python, I decided to take the first course (Part A) of the three-part Programming Languages module on Coursera. The module is meant to systematically introduce various theoretical concepts of programming languages, with a special focus on Functional Programming. This first course (Part A), which I recently completed with a score of 98%, illustrated said concepts with the help of Standard ML – a Functional-style language.
It was an excellently designed course, and also quite challenging. Apart from spending time early on introducing the very basics of SML, it covered some very interesting concepts such as Pattern Matching, Function Closures, Partials, Currying and Mutual Recursion. The programming assignments really made sure you understood what was covered in the course material, and the course handouts were thorough and clear. There was also a strong focus on programming style, with the instructor commenting on what he considered good or poor style while covering the various concepts. We were marked on the style of our submissions too.
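To give a flavour of two of those concepts – function closures and currying – here is a minimal sketch in Python rather than SML (the course itself teaches these in SML, where functions are curried by default; the names below are my own, for illustration only):

```python
def make_adder(n):
    # `add` closes over `n`: the inner function remembers the
    # environment it was created in. This is a function closure.
    def add(x):
        return x + n
    return add

def curry(f):
    # Turn a two-argument function into a chain of one-argument
    # functions, mimicking how SML treats multi-argument functions.
    return lambda a: lambda b: f(a, b)

add5 = make_adder(5)
plus = curry(lambda a, b: a + b)

print(add5(3))     # 8
print(plus(2)(4))  # 6
```

Partially applying `plus` to a single argument, as in `plus(2)`, yields a new function – the "Partials" idea from the course falls out of currying for free.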
Now that I’m no longer working at Jukedeck, I happen to have plenty of free time on my hands! I’ve been spending this time travelling, catching up on my reading list, helping out with activities of the Music Tech Community – India and making music, among other things. In an effort to satisfy a long-standing curiosity, I signed up for the Recommender Systems specialisation being offered on Coursera by the University of Minnesota, and recently completed it. It comprised four courses:
- Introduction to Recommender Systems: Non-personalised and Content-based (certificate)
- Nearest Neighbour Collaborative Filtering (certificate)
- Recommender Systems: Evaluation and Metrics (certificate)
- Matrix Factorisation and Advanced Techniques (certificate)
It took me about a month to complete all four courses at a fairly leisurely pace, given how much time I had at my disposal while not working. This was a very well-taught specialisation with some of the best-designed courses I’ve done on Coursera so far. It covered a wide range of topics that offered a comprehensive overview of a vast area of research. Solving the assignments by hand was a new but very engaging experience that really allowed me to focus on what actually happens under the hood in such systems at a very basic level. It was all done by implementing the various formulae for content-based filtering, item-item collaborative filtering and user-user collaborative filtering (including matrix factorisation methods) in spreadsheets. There was an Honours Track in each course, focused on implementing the various types of recommender systems and related concepts, that I decided not to pursue, as all the programming was in Java. I decided I would follow the courses up with my own implementation projects in Python, as that’s of greater interest to me. So now I’m looking for little projects to get me going.
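As a first such project, the spreadsheet exercises translate quite directly into code. Below is a minimal sketch of the user-user collaborative filtering formula we computed by hand: predict a user's rating of an item as a similarity-weighted average of other users' ratings. The toy ratings matrix is invented for illustration, and I've used cosine similarity over co-rated items (the course also covers Pearson-correlation variants):

```python
import numpy as np

# rows = users, columns = items; 0 marks a missing rating
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
], dtype=float)

def cosine_sim(u, v):
    # Similarity computed only over items both users have rated.
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict(R, user, item):
    # Similarity-weighted average of the ratings other users gave `item`.
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        s = cosine_sim(R[user], R[other])
        num += s * R[other, item]
        den += abs(s)
    return num / den if den else 0.0

print(predict(R, user=1, item=1))
```

Item-item collaborative filtering is the same weighted average with the roles of rows and columns swapped, which makes this a nice starting skeleton.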
I would definitely recommend this specialisation to anyone interested in Recommender Systems. It has left me with a very good understanding of the basics and a fair idea of the various directions in which I can pursue things in more detail. Not to mention, a tonne of references to read up on which I look forward to doing along with implementing some of the algorithms in the coming weeks.
I successfully completed this course with a 98.9% mark. This course was relatively more focused than the others so far. The machine learning theory that was covered in it was very basic and good for beginners so I skimmed through it fairly quickly. Nevertheless, it was a good refresher of models such as Naive Bayes, Decision Trees and k-Means Clustering. What I found particularly useful was the introduction to the KNIME and Spark ML frameworks and the exercises where one had to apply these ML models to some example datasets.
I think this course and the last one were more hands-on and, with their greater focus on ML in the context of Big Data, closer to what I was looking for when I first started this module.
And here’s the certificate that I was awarded on completing the course.
I successfully completed this course with a 97.7% mark. This course was once again broad and touched upon several big data technologies through a series of lectures, assignments and hands-on exercises. The focus was mainly on querying JSON data using MongoDB, analysing data using Pandas, and programming in Spark (Spark SQL, Spark Streaming, Spark MLlib and Spark GraphX). All these were things I was curious about, and it was great that they were introduced in the course. There was also an exercise on analysing tweets using both MongoDB and Spark. They had one section on something called Splunk which I thought was a waste of time, but I guess they have to keep their sponsors happy.
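The Pandas portions were the kind of exploratory analysis one might run on a tweet dump. Here's a small sketch of that style of work; the records and field names below are invented for illustration, not taken from the course dataset:

```python
import pandas as pd

# A toy stand-in for JSON tweet records loaded from a file or MongoDB.
tweets = pd.DataFrame([
    {"user": "alice", "lang": "en", "retweets": 3},
    {"user": "bob",   "lang": "en", "retweets": 5},
    {"user": "carol", "lang": "fr", "retweets": 1},
])

# Group by language and summarise retweet activity.
summary = tweets.groupby("lang")["retweets"].agg(["count", "sum"])
print(summary)
```

The same group-and-aggregate shape maps almost one-to-one onto a Spark SQL `GROUP BY` or a MongoDB aggregation pipeline, which is presumably why the course paired these tools together.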
This specialisation so far (I’m halfway through) has been fairly introductory and lacking depth. It’s been good to the extent that I feel like I’m aware of all these different technologies and would be able to know where to start if I was to use them for some specific application. As I expected, this course was more hands-on which was great!
And here’s the certificate that I was awarded on completing the course.
I successfully completed this course with a 100.0% mark. It was quite broad and covered a range of topics somewhat superficially, from Relational Databases and their relation to Big Data Management Systems, to the various alternatives that exist for processing different types of big data. As with the first course, there were a lot of new names to grasp and connections to be made between the things they represented. The assignments were straightforward and involved running a few specific command-line tools and spreadsheet commands to process data and carry out some basic analysis, just to get a feel for data tables and how one might go about extracting information from them. The final assignment involved completing an incomplete relational database design for a game. In my opinion, its goals could have been more precise, its connection to the course material clearer, and – it being a peer-graded assignment – the evaluation criteria better defined. Quite a few learners seem to have lost out because of that last shortcoming, with their assignments not being evaluated properly by their peers. And as usual, here’s the certificate that I was awarded on completing the course.
It looks like the upcoming courses in this specialisation contain more practical and hands-on exercises, so looking forward to that in the coming weeks!
I successfully completed this course with a 98.9% mark. It was easy and covered mostly definitions, some history of big data, big data jargon and very basic principles. There was an emphasis on what constitutes big data (in terms of size, variety, complexity, etc.), what kinds of analyses one can carry out on big data, what sources it can come from, and what tools one could use to analyse it. When it came to the latter, the course offered a brief introduction to the Hadoop ecosystem, which I found particularly interesting as I hadn’t ever worked with any of the software that is a part of it. And there was also a simple assignment that gave one a taste of what working with Hadoop could be like. Here’s a link to the certificate I received from Coursera on completing this course.
Looking forward to the remaining courses in the Big Data specialisation!
I successfully completed this course with a 100.0% mark. Once again, this course was easy given my experience so far in machine learning and deep learning. However, as with the previous course I completed in the same specialisation, there were a few things that made attending it worthwhile. I found the sections on Optimisation (exponential moving averages, Momentum, RMSProp and Adam optimisers, etc.), Batch Normalisation, and to some extent Dropout particularly useful. Here’s a link to the certificate from Coursera for this course.
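Those optimiser sections tie together nicely in the standard Adam update, which combines an exponential moving average of the gradient (Momentum) with one of its square (RMSProp), plus bias correction. A minimal sketch, on a toy quadratic loss of my own choosing:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad      # EMA of gradients (Momentum)
    v = beta2 * v + (1 - beta2) * grad**2   # EMA of squared gradients (RMSProp)
    m_hat = m / (1 - beta1**t)              # bias correction for early steps
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# A few steps on L(w) = w^2, whose gradient is 2w.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):
    w, m, v = adam_step(w, 2 * w, m, v, t)
print(w)
```

With a constant-sign gradient the bias-corrected ratio m_hat / sqrt(v_hat) is close to 1, so each step shrinks `w` by roughly the learning rate – a detail the lectures on adaptive optimisers made click for me.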
I’m looking forward to the course on Convolutional Neural Networks!
I successfully completed this course with a 96.7% mark. It was fairly easy given my experience so far in machine learning and deep learning, but there were a few new ideas that I learned here, and others that I investigated in greater depth out of my own curiosity while doing it. I felt that the Transfer Learning, Multitask Learning and End-to-End ML lectures were quite superficial and brief, and not of much immediate use unless one takes those topics up in greater depth afterwards. The practical advice, however, and the hands-on exercises that focused on real-world scenarios were useful, and I wish there were more of the latter (perhaps optional) in the course.
Here’s a link to the certificate I received from Coursera for this course.