Completed Generative AI with LLMs Course on Coursera

It’s been quite a while since I did a course (as I regularly used to) to top up my knowledge of the rapidly evolving field of AI. As a new parent, it can get tough to make time for such courses, let alone for regularly posting updates on a blog.

Since April 2023, I’ve been working on Muse at Unity. Muse is an LLM-driven AI assistant for Unity developers and enthusiasts. It began as a web-based chat interface and has over time moved closer and closer to the Unity Editor, to the point that it can now take stock of what a user is actively working on in the Editor, provide tailored advice and code, and even perform tasks within the Editor on their behalf.

I’ve been a part of this project pretty much since its inception, and have learned a lot simply by doing. There was little time to learn about LLMs more systematically given how hectic things have been at work (and outside work too), and I noticed quite a few gaps in my understanding of the wider LLM space. So, I finally took the time to do a Coursera foundation course on LLMs, which gave me the opportunity to fill some of these gaps. I really enjoyed this particular course, and I feel it was very well designed and executed by Andrew Ng and his team of instructors! I would highly recommend it to anyone interested in getting a general idea of the space, its key concepts, and some basic hands-on experience in prompt engineering and model fine-tuning. As always, I chose to do it with the option of getting a certificate of completion.

Of course, this was only the tip of the iceberg when it comes to LLMs! I’m looking forward to now placing anything new I come across into a more structured understanding of the LLM space that this course has given me!

Completed Practical Reinforcement Learning on Coursera

A few weeks into starting work at Unity, it didn’t surprise me that Reinforcement Learning would be a useful thing to know at least a little about. So I started studying the fundamentals from what seemed to be the most recommended reference on the subject – Reinforcement Learning: An Introduction by Sutton & Barto. I must acknowledge that this is a fantastic, thoroughly explained read. It did take me several re-reads of certain topics to pick up what is implied between the lines – which happens to be quite a lot of useful insight and information – but overall this textbook covers RL theory very well!

After spending a few weeks going through the chapters on Dynamic Programming, Monte Carlo and Temporal Difference methods, I felt I could use some hands-on practice to drive the message home and, as always, I looked on Coursera and found the course Practical Reinforcement Learning. It took me more than a month (nearly two) to get through it. This was partly because I made sure to review the same material in the reference textbook as well, which was very useful, and partly because the course material itself didn’t feel up to the mark. In wanting to cover a vast number of topics in the span of a single course, things got quite rushed. The assignments were also not well explained and offered very little feedback on what was wrong, which made them incredibly frustrating to get through. To be honest, about halfway into the course I was no longer enjoying it and was eager to just be done with it ASAP. And that’s exactly what happened. I can’t say I’m thorough with the material covered in weeks 5 and 6, which I would definitely like to revisit in the future.
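For a flavour of what these methods look like in practice, here’s a minimal sketch of tabular TD(0) value prediction on a toy random-walk chain (similar in spirit to the random-walk example in Sutton & Barto; the function name, hyperparameters and episode count are my own illustrative choices, not course code):

```python
import random

def td0_chain(episodes=5000, alpha=0.1, gamma=1.0, seed=0):
    """TD(0) value prediction on a 5-state random-walk chain.

    States 0..4; every episode starts at state 2 and moves left or
    right with equal probability. Entering state 4 yields reward 1,
    entering state 0 yields reward 0; both are terminal.
    """
    rng = random.Random(seed)
    V = [0.0] * 5  # value estimates; terminal states stay at 0
    for _ in range(episodes):
        s = 2
        while s not in (0, 4):
            s2 = s + rng.choice((-1, 1))
            r = 1.0 if s2 == 4 else 0.0
            # TD(0) update: move V(s) towards the bootstrapped target
            V[s] += alpha * (r + gamma * V[s2] - V[s])
            s = s2
    return V

V = td0_chain()
print([round(v, 2) for v in V[1:4]])  # true values are 0.25, 0.5, 0.75
```

With a constant step size the estimates keep fluctuating slightly around the true values rather than converging exactly – one of those between-the-lines details the textbook rewards you for noticing.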

That being said, Reinforcement Learning is actually one of the most interesting topics in Computer Science / Machine Learning that I have studied, and I really do hope I have the opportunity to do something interesting with it in the future. And, of course, here’s the certificate showing that I completed the Coursera course (phew!).

Completed the Course “Machine Learning with Big Data” offered by UCSD on Coursera

I successfully completed this course with a 98.9% mark. This course was relatively more focused than the others so far. The machine learning theory covered in it was very basic and good for beginners, so I skimmed through it fairly quickly. Nevertheless, it was a good refresher on models such as Naive Bayes, Decision Trees and k-Means Clustering. What I found particularly useful was the introduction to the KNIME and Spark ML frameworks and the exercises where one had to apply these ML models to some example datasets.
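As a tiny illustration of one of those models, here’s a minimal 1-D k-means (Lloyd’s algorithm) sketch in plain Python – just the core idea, not the KNIME or Spark ML implementations used in the course, and the toy data is made up:

```python
import random

def kmeans_1d(xs, k=2, iters=20, seed=0):
    """Minimal 1-D k-means (Lloyd's algorithm)."""
    rng = random.Random(seed)
    centroids = rng.sample(xs, k)  # k distinct points as initial centroids
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda c: abs(x - centroids[c]))
            clusters[nearest].append(x)
        # update step: each centroid moves to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5]))  # centroids near 1.0 and 9.0
```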

I think this course and the last one, with their greater focus on ML in the context of Big Data, were more hands-on and closer to what I was looking for when I first started this specialisation.

And here’s the certificate that I was awarded on completing the course.

Completed the Course “Big Data Integration and Processing” offered by UCSD on Coursera

I successfully completed this course with a 97.7% mark. This course was once again broad and touched upon several big data technologies through a series of lectures, assignments and hands-on exercises. The focus was mainly on querying JSON data using MongoDB, analysing data using Pandas, and programming in Spark (Spark SQL, Spark Streaming, Spark MLlib and Spark GraphX). All of these were things I was curious about, and it was great that the course introduced them. There was also an exercise on analysing tweets using both MongoDB and Spark. They had one section on something called Splunk, which I thought was a waste of time, but I guess they have to keep their sponsors happy.
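As a small taste of the Pandas side of things, here’s the kind of aggregation the tweet exercises involved – counting tweets per user on a toy, made-up dataset (the column names and data here are hypothetical, not the course’s actual dataset):

```python
import pandas as pd

# Toy stand-in for a tweet dataset (columns are my own invention).
tweets = pd.DataFrame({
    "user": ["alice", "bob", "alice", "carol", "bob", "alice"],
    "text": ["big data", "spark rocks", "try mongodb",
             "pandas!", "spark sql", "graphx"],
})

# Count tweets per user and sort descending – a simple groupby
# aggregation of the sort the course exercises walked through.
counts = tweets.groupby("user").size().sort_values(ascending=False)
print(counts.to_dict())
```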

This specialisation so far (I’m halfway through) has been fairly introductory and lacking in depth. It’s been good to the extent that I now feel aware of all these different technologies and would know where to start if I were to use them for some specific application. As I expected, this course was more hands-on, which was great!

And here’s the certificate that I was awarded on completing the course.

Completed Andrew Ng’s “Convolutional Neural Networks” course on Coursera

I successfully completed this course with a 100.0% mark. Unlike the other two courses I had done as part of this Deep Learning specialisation, there was much for me to learn in this one. I had only skimmed a couple of papers on convolutional nets in the past and hadn’t really implemented any aspects of this class of models, beyond helping colleagues fix bugs in their code. So I was stoked to do this course, and I was not disappointed. Andrew Ng designs and delivers his lectures very well, and this course was no exception. The programming assignments and quizzes were engaging and moderately challenging. The idea of 1D, 2D and 3D convolutions was explained clearly and in sufficient depth in the lectures. They also covered some state-of-the-art convolutional architectures such as VGG Net, Inception Net and Network-in-Network, as well as applications such as Object and Face Recognition and Neural Style Transfer, for all of which convolutional networks are a cornerstone. The reading list for the course was also very useful and interesting. All in all, a great resource in my opinion for someone interested in this topic! And as usual, here’s the certificate I received on completing this course.
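To make the core idea concrete, here’s a minimal sketch of the 2-D ‘valid’ cross-correlation that a convolutional layer computes (no padding, stride 1; the example image and kernel are my own, not from the course assignments):

```python
def conv2d_valid(image, kernel):
    """Minimal 2-D 'valid' cross-correlation, as used in conv layers."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # sum of elementwise products over the kernel-sized window
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A 3x3 image with a vertical edge, probed by a 2x2 horizontal-
# difference kernel: the response lights up only at the edge.
img = [[1, 1, 0],
       [1, 1, 0],
       [1, 1, 0]]
k = [[1, -1],
     [1, -1]]
print(conv2d_valid(img, k))  # [[0, 2], [0, 2]]
```

(Strictly speaking this is cross-correlation; deep learning frameworks call it convolution since the kernel is learned anyway, a point the lectures also make.)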

Completed Andrew Ng’s “Improving Deep Neural Networks” course on Coursera

I successfully completed this course with a 100.0% mark. Once again, this course was easy given my experience so far in machine learning and deep learning. However, as with the previous course I completed in the same specialisation, there were a few things that made it worth attending. I particularly found the sections on Optimisation (exponential moving averages, the Momentum, RMSProp and Adam optimisers, etc.), Batch Normalisation and, to some extent, Dropout useful. Here’s a link to the certificate from Coursera for this course.
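To sketch how those optimisation ideas fit together, here’s a minimal scalar Adam update in plain Python – it combines the two exponential moving averages (the Momentum and RMSProp terms) with bias correction. The function and toy objective are my own illustration, not course code:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter."""
    m = b1 * m + (1 - b1) * grad          # EMA of gradients (Momentum term)
    v = b2 * v + (1 - b2) * grad * grad   # EMA of squared grads (RMSProp term)
    m_hat = m / (1 - b1 ** t)             # bias correction for the zero init
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy use: minimise f(x) = x^2 (gradient 2x), starting from x = 1.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
print(round(x, 4))  # ends up close to the minimum at 0
```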

I’m looking forward to the course on Convolutional Neural Networks!

Completed Andrew Ng’s “Structuring Machine Learning Projects” course on Coursera

I successfully completed this course with a 96.7% mark. It was fairly easy given my experience so far in machine learning and deep learning, but there were a few new ideas I learned here, and others I investigated in greater depth out of my own curiosity while doing it. I felt that the Transfer Learning, Multitask Learning and End-to-End ML lectures were quite superficial and brief, and not of much immediate use unless one takes these topics up in greater depth after the course. The practical advice, however, and the hands-on exercises that focused on real-world scenarios were useful, and I wish there had been more of the latter (perhaps optional) in the course.

Here’s a link to the certificate I received from Coursera for this course.