I successfully completed this course with a 100.0% mark. Unlike the other two courses I had done as part of this Deep Learning specialisation, there was much for me to learn in this one. I had only skimmed a couple of papers on convolutional networks in the past and hadn’t really implemented any aspects of this class of models, beyond helping colleagues fix bugs in their code. So I was stoked to do this course, and I was not disappointed. Andrew Ng designs and delivers his lectures very well, and this course was no exception. The programming assignments and quizzes were engaging and moderately challenging. The idea of 1D, 2D and 3D convolutions was explained clearly and in sufficient depth in the lectures. The course also covered some state-of-the-art convolutional architectures such as VGG Net, Inception Net and Network-in-Network, as well as applications such as Object and Face Recognition and Neural Style Transfer, for all of which convolutional networks are a cornerstone. The reading list for the course was also very useful and interesting. All in all, a great resource in my opinion for someone interested in this topic! And as usual, here’s the certificate I received on completing this course.
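For my own notes, here is a minimal NumPy sketch of the 2D “convolution” (strictly speaking, cross-correlation) at the heart of these networks, for a single channel with no padding and stride 1. It is written from memory rather than taken from the course assignments, and the function and variable names are my own.

```python
import numpy as np

def conv2d_single_channel(image, kernel):
    """Slide `kernel` over `image` and sum the elementwise products at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1  # output size with 'valid' padding
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy example: a 3x3 vertical-edge detector applied to a 6x6 image
# with a bright left half and a dark right half.
image = np.zeros((6, 6))
image[:, :3] = 10.0
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)
print(conv2d_single_channel(image, kernel))  # strong responses along the edge
```

Real implementations vectorise this, handle multiple input/output channels, and add padding and strides, but the core sliding-window operation is the same.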
Completed Andrew Ng’s “Improving Deep Neural Networks” course on Coursera
I successfully completed this course with a 100.0% mark. Once again, this course was easy given my experience so far in machine learning and deep learning. However, as with the previous course I completed in the same specialisation, there were a few things that made it worth attending. I particularly found the sections on Optimisation (exponential moving averages, Momentum, RMSProp and Adam optimisers, etc.), Batch Normalisation, and to some extent Dropout useful. Here’s a link to the certificate from Coursera for this course.
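Since the optimisation lectures were the highlight for me, here is a rough sketch of the Adam update, which combines the exponential moving averages behind Momentum and RMSProp with bias correction. It is written from memory rather than copied from the assignment, so the names and defaults below are my own.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single parameter (or array of parameters)."""
    m = beta1 * m + (1 - beta1) * grad       # EMA of the gradient (momentum term)
    v = beta2 * v + (1 - beta2) * grad ** 2  # EMA of the squared gradient (RMSProp term)
    m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Toy usage: minimise f(w) = w^2, whose gradient is 2w.
w, m, v = np.array(5.0), 0.0, 0.0
for t in range(1, 501):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.1)
print(w)  # approaches 0
```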
I’m looking forward to the course on Convolutional Neural Networks!
Completed Andrew Ng’s “Structuring Machine Learning Projects” course on Coursera
I successfully completed this course with a 96.7% mark. It was fairly easy given my experience so far in machine learning and deep learning, but there were a few new ideas that I learned here, and others that I investigated in greater depth out of my own curiosity while doing it. I felt that the Transfer Learning, Multitask Learning and End-to-End ML lectures were quite superficial and brief, and won’t be of much immediate use unless one follows those topics up in greater depth afterwards. The practical advice, however, and the hands-on exercises that focused on real-world scenarios were useful, and I wish there were more of the latter (perhaps optional) in the course.
Here’s a link to the certificate I received from Coursera for this course.