Hi,

Today you're going to learn how to train your first neural network using the PyTorch library.

The big picture: We all need to start somewhere — welcome to your intro to PyTorch. This tutorial will teach you the fundamentals of training a neural network with PyTorch, including:

  1. How to define a basic neural network architecture with PyTorch (there's a short sketch of the first two steps right after this list)
  2. How to define your loss function and optimizer
  3. How to properly zero your gradients, perform backpropagation, and update your model parameters (the step where most deep learning practitioners new to PyTorch make a mistake)
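
To make the first two steps concrete, here's a minimal sketch of what defining a model, a loss function, and an optimizer looks like in PyTorch (the layer sizes and learning rate below are illustrative placeholders, not the tutorial's exact values):

    import torch.nn as nn
    from torch.optim import SGD

    # a basic feedforward architecture (placeholder layer sizes)
    class SimpleNet(nn.Module):
        def __init__(self, in_features=4, hidden=8, classes=3):
            super().__init__()
            self.fc1 = nn.Linear(in_features, hidden)
            self.relu = nn.ReLU()
            self.fc2 = nn.Linear(hidden, classes)

        def forward(self, x):
            # forward pass: input -> hidden layer -> ReLU -> output scores
            return self.fc2(self.relu(self.fc1(x)))

    model = SimpleNet()
    loss_fn = nn.CrossEntropyLoss()            # loss function
    opt = SGD(model.parameters(), lr=0.01)     # optimizer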

How it works: If you're already familiar with Keras/TensorFlow, then today's tutorial may surprise you with how much code PyTorch requires to train even a simple neural network, most of it in the training loop itself.

While Keras/TensorFlow encapsulates the training procedure in a single "model.fit" call, PyTorch gives you full control over (and full responsibility for) the training loop; the PyTorch developers intentionally designed the library that way.

Advanced TensorFlow users familiar with "GradientTape" know that it takes significantly more code to train a network than simply calling "model.fit", but the additional control you have is sometimes worth it, especially if you are doing state-of-the-art research.
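
If that "GradientTape" workflow is new to you, here's roughly what a custom TensorFlow training step looks like (a minimal sketch; the tiny model, optimizer, and dummy data below are illustrative placeholders):

    import tensorflow as tf

    # illustrative placeholders: tiny model, optimizer, loss, and a dummy batch
    model = tf.keras.Sequential([tf.keras.layers.Dense(3)])
    opt = tf.keras.optimizers.SGD(learning_rate=0.01)
    loss_fn = tf.keras.losses.MeanSquaredError()
    x = tf.random.normal((8, 4))
    y = tf.random.normal((8, 3))

    # one custom training step with GradientTape
    with tf.GradientTape() as tape:
        preds = model(x, training=True)      # forward pass
        loss = loss_fn(y, preds)             # compute the loss

    grads = tape.gradient(loss, model.trainable_variables)      # backprop
    opt.apply_gradients(zip(grads, model.trainable_variables))  # weight update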

Thus, you can think of PyTorch as being in "GradientTape" mode by default: the training loop must be implemented by hand, which requires additional code.

My thoughts: Yes, coding neural networks and training procedures in PyTorch takes more code, and often more effort, than the higher-level Keras API; however, Keras doesn't give you control over what happens inside "model.fit", while PyTorch does.

Yes, but: All that flexibility comes at a cost. I haven't met a single deep learning practitioner who hasn't, at least once, screwed up one of these three steps (sketched in the snippet after this list):

  1. Zeroing their gradients with "opt.zero_grad()"
  2. Performing backpropagation with "loss.backward()"
  3. Updating their model weights with "opt.step()"
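
Put together, those three calls are the heart of every hand-written PyTorch training loop. Here's a bare-bones sketch (again, the model, data, and hyperparameters are placeholders rather than the tutorial's exact code):

    import torch
    import torch.nn as nn

    # placeholder model, loss, optimizer, and dummy data for illustration
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    X = torch.randn(32, 4)              # dummy batch of inputs
    y = torch.randint(0, 3, (32,))      # dummy batch of class labels

    for epoch in range(10):
        preds = model(X)                # forward pass
        loss = loss_fn(preds, y)        # compute the loss

        opt.zero_grad()                 # 1. zero the gradients from the previous step
        loss.backward()                 # 2. backpropagation
        opt.step()                      # 3. update the model weights

Skip the "opt.zero_grad()" call and the gradients from every previous batch silently keep accumulating, which is exactly the kind of mistake the tutorial shows you how to avoid.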

Don't know what I'm talking about?

Well, to quote Master Yoda…

Stay smart: Take your time learning the basics of PyTorch. While the library is incredibly customizable, it tends to have a steeper initial learning curve than Keras/TensorFlow.

This series of PyTorch tutorials is meant to ramp you up as quickly as possible and get you to the point where you see neural networks the way Neo sees the Matrix.

Click here to learn how to train your first neural network with PyTorch.


Adrian Rosebrock
Chief PyImageSearcher

P.S. Interested in me doing some consulting work for you or your company?

I've decided to reopen myself to consulting work — but this time I'm doing it a bit differently.

Instead of going it alone, I've teamed up with my former PhD advisor, Dr. Tim Oates, and together, we're offering PyImageSearch Consulting for Computer Vision, Deep Learning, and Artificial Intelligence, through Tim's company, Synaptiq.

My primary day-to-day responsibility remains authoring new blog posts, books, and courses for the PyImageSearch blog, while Tim manages the consulting side of the business, with me stepping in when necessary.

If you're interested in our services, use this link to tell us about your project:

>> Click here to tell us about your project

Our services are super competitive, so if you're serious about us working together, please provide as much detail as possible when filling out the form.

I hope we can work with you on your next CV/DL project.