Originally written in December 2018
Deep learning, machine learning, and artificial intelligence have been with us for decades—first as the stuff of cinema (think Blade Runner and A.I.) and now as everyday reality. With modern CPUs/GPUs and the convenience of the cloud, these ideas have jumped off the screen and into our lives: voice assistants in our cars, Siri in our pockets, face recognition that unlocks our devices, and the relentless march toward driverless cars. The sci-fi gloss has given way to shipping code.
Recently I dove into a stack of papers on machine and deep learning and found myself captivated enough to spend nights and weekends digging in. What began as a simple quest—“how do machines learn?”—quickly spiraled into bigger questions about us: how our brains take in information, make decisions, and translate intention into movement. The technology sent me back to first principles, and in the process I started to see learning—biological and artificial—as two perspectives on the same puzzle.
The surprise wasn’t how hard it all was, but how approachable it’s become. I expected rocket science that would be impossible to tinker with in spare hours. Instead, thanks to polished frameworks like PyTorch and Keras—and the runtime power of platforms like AWS SageMaker—the gap between curiosity and a working model feels refreshingly small. The heavy lifting of decades of math and research is still there, but modern tooling lets you stand on those shoulders and build.
I put this to the test by prototyping with PyTorch, Keras, and AWS SageMaker, using in-house CNN models. Along the way I gorged on articles and papers about neural networks. Each resource was excellent in its lane, but none stitched the full picture together. So as I learned, I started assembling my own narrative: a slide deck that pulls from many sources and threads them into a single, coherent story aimed at answering the questions that kept nudging me—what do filters really “see” in a CNN? How do human neurons relate to their artificial counterparts? Where do the analogies hold, and where do they break?
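To make the "what do filters really see" question concrete, here is a minimal sketch in plain Python (no frameworks, so it runs anywhere): a hand-crafted 3×3 vertical-edge kernel slid over a tiny synthetic image. The image, kernel, and helper function are illustrative assumptions, not taken from the deck; in a real CNN the kernel weights would be learned rather than hand-written.

```python
def conv2d(image, kernel):
    """Valid 2D cross-correlation of a 2D list `image` with a square `kernel`."""
    k = len(kernel)
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            # Multiply the window under the kernel element-wise and sum.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(k) for dj in range(k))
            row.append(s)
        out.append(row)
    return out

# A 6x6 toy image: dark left half (0), bright right half (1).
image = [[0, 0, 0, 1, 1, 1] for _ in range(6)]

# A Sobel-style vertical-edge kernel: it responds where brightness
# changes left-to-right and stays silent on flat regions.
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

feature_map = conv2d(image, kernel)
for row in feature_map:
    print(row)  # each row prints [0, 4, 4, 0]
```

The non-zero column in the feature map sits exactly where the dark-to-light boundary is: this filter "sees" vertical edges and nothing else. A trained CNN stacks many such filters and learns their weights from data, but the mechanics of each one are just this sliding multiply-and-sum.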
The slides don’t claim novelty; about 80% of the words and diagrams are quoted directly from the papers and sites that informed me. My contribution is the curation and the connective tissue—selecting, trimming, and arranging, and adding a few diagrams where the story needed clarity. Every source that helped shape the deck is credited at the end, because this is a conversation with the community, not a monologue.
Version 1.0 was my first pass; I’ve now released V1.3 with runnable PyTorch examples you can execute on your own machine. Next up, I’m expanding the SageMaker sections and packaging a few “weekend-build” prototypes to make getting started even easier. This has been a journey from sci-fi to shipping, from curiosity to code, and I’m only just getting warmed up.