Learning Fast and Learning Slow
Mix up your approach to keep learning fun
In a fast-changing discipline like software development, you need to be a continuous learner. In my time, I’ve gone deep into plenty of unfamiliar places. And I’ve found that when confronting a new field of knowledge, there are two basic strategies: go fast and go slow.
I like to start with a go-slow approach to most new topics. In the past, that’s meant grabbing a stack of books, talking to experts in the field, or watching a course from an established community figure. But there’s plenty of value in the go-fast approach too, which for me means jumping in head first, playing with things you don’t understand until they break, and filling in the knowledge gaps as you go.
Going slow
That’s where I’m at after my first month of AI exploration. For the go-slow approach, I’ve chosen a compact, well-respected Google training course called Machine Learning Crash Course. This course doesn’t pad the time with conversation, but it does start with theoretical concepts rather than practical applications, which I think is exactly right. If you’re considering it yourself, here are a few of my observations so far:
It's a great starting point if you've got a computer science background but you don’t have specialist data science exposure.
If you already know basic terms like "gradient descent" or you spend time on Kaggle, this course is going to seem easy—probably too easy.
The playground exercises are excellent, which makes the material more useful than the free AI lectures that so many schools post online (like Harvard CS50). If you've watched that kind of content before, this is your chance to shift into more hands-on work with very little effort.
The course is a few years old—Peter Norvig welcoming you to Google at the beginning is just one of many telltale signs. It doesn't feel outdated, but you're certainly not working on the cutting edge.
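If "gradient descent" is new to you, the core idea the course builds on can be sketched in a few lines of Python: repeatedly nudge a parameter in the direction that reduces a loss function. This is an illustrative sketch (the function and values are mine, not the course's):

```python
# Minimize f(x) = (x - 3)^2, whose derivative is f'(x) = 2 * (x - 3).
# Each step moves x a small amount against the gradient.

def gradient_descent(gradient, start, learning_rate=0.1, steps=100):
    """Repeatedly step opposite the gradient to approach a minimum."""
    x = start
    for _ in range(steps):
        x -= learning_rate * gradient(x)
    return x

minimum = gradient_descent(lambda x: 2 * (x - 3), start=0.0)
print(round(minimum, 4))  # converges near 3.0, the true minimum
```

Real machine learning applies the same loop to millions of parameters at once, but the mechanics are no more exotic than this.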
Going fast
I’m still working through the Google course, but recently I added a go fast approach alongside it—I decided to start experimenting with large language models installed locally on my own computer. It turns out that’s dramatically easier today than it was just months ago, thanks to tools like Ollama. If you want to try it yourself, here’s everything you need to know to get started.
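To give a sense of how low the barrier has become, a first session with Ollama looks roughly like this. Treat it as a sketch—the install command is the current Linux one (macOS and Windows use installers from ollama.com), and the model name is just one example from Ollama’s library:

```shell
# Install Ollama on Linux; macOS and Windows have installers at ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# Download a model (several gigabytes) and chat with it in the terminal.
# "llama3" is one example; browse the Ollama model library for others.
ollama run llama3
```

That single `ollama run` command handles downloading, loading, and serving the model, then drops you into an interactive prompt.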
Final thought: It’s almost mind-boggling how a field that makes such immense demands on mathematical sophistication, data, and compute, has become so accessible with simple tools. Thanks to open source models and free AI tools, every programmer has the ability to explore some of the world’s most advanced software constructs. Are you ready to dive in?
This article was originally published on Young Coder.


