FastAI Course, Part I, Lessons 1 and 2
I started the Deep Learning course by FastAI. I describe my initial impressions here.
First impressions
I recently started the FastAI Deep Learning course and have done a couple of small projects. Below I summarise what I did and share my thoughts.
- I am impressed by how the course is taught. It is clear a lot of thought has been put into how it is structured: which examples to use, what messages and advice to give to students, what documentation to provide, and so on.
- I quite like how they have a system for the live audience to submit questions, which the lecturer Jeremy Howard then answers during the lecture.
- I like how Jeremy showcases examples of what students have done. It is surprising how much state-of-the-art work can be produced with just the knowledge from the first couple of lectures. The FastAI team should be chuffed with themselves for creating such a course, tool, and community.
- I tried a couple of image classification tasks. The first was to distinguish between the Dragon Ball Z characters Goku and Vegeta; the second was to distinguish between squash and tennis rackets. I got OK results. At this stage, the main way to improve the model is to improve the image set, since I just downloaded the first batch of images returned by Google Image Search. A minimal sketch of the training code is shown after this list.
- This was slowed down a little because the first online GPU provider I used, Paperspace, had a bug that meant I could not re-open a notebook after using it once. I then tried Google Colab, which took a bit more time to learn, and got things working there. In the meantime, the Paperspace bug was fixed, so I will go back to using it.
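For reference, here is a minimal sketch of what such a classifier looks like with the fastai v2-style API. The `images/` folder layout, the ResNet-34 architecture, and the number of epochs are illustrative assumptions rather than the exact settings from my notebooks.

```python
from fastai.vision.all import *

# Assumed layout: images/goku/*.jpg and images/vegeta/*.jpg
# (one subfolder per class; labels are taken from the folder names)
path = Path('images')

# Hold out 20% of the images for validation and resize everything to 224x224
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42, item_tfms=Resize(224))

# Fine-tune a pretrained ResNet-34 for a few epochs
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(3)

# Show the images the model gets most wrong; a quick way to spot bad downloads
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9)
```

The top-losses plot ties back into the point above about the image set: the images the model is most confused about are often the ones that should be cleaned up or removed.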