Hi,

When I first saw The Terminator, it fascinated me. As a young kid who had just gotten into science fiction, the film's portrayal of artificial intelligence (AI) made me go bonkers. You essentially had a cyborg from the future, running its own AI, getting as close to a real human as possible.

Amongst the ton of flashy futuristic stuff shown in the film series, some scenes showed the perspective of The Terminator himself. You would get a glimpse of the world through the advanced cyborg's eyes, and it was something like this:


[Image: the world as seen through The Terminator's eyes] (Source)

Imagine having a neural network fitted in your brain that could scan objects for you in real time and tell you everything you need to know about them! When I first started diving into machine learning, efficient, accurate, real-time object detection seemed like a distant dream, given all the constraints.

However, machine learning has long since crossed that point, and today we have multiple ways to run accurate, real-time object detection fast enough to help a self-driving car!

The big picture: Today, we'll look at two state-of-the-art models for Object Detection: YOLOv5 and SSD. We'll learn how to run inference on our custom datasets using these models.

How it works: As we have previously learned, we'll call these models using the torch.hub.load function. A brief recap: for this function to work, the target GitHub repository needs to have a hubconf.py script. Inside that script, every top-level function is treated as an entry point and can be called from other projects.
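To make the entry-point idea concrete, here's a minimal sketch of what a hubconf.py could look like. The function names and placeholder model below are hypothetical and only illustrate the mechanism.

# hubconf.py (hypothetical example)
# Any top-level function defined here becomes a Torch Hub entry point.
dependencies = ["torch"]  # packages Torch Hub verifies are installed before loading

import torch


def my_detector(pretrained=False, **kwargs):
    """Entry point that returns a (placeholder) detection model."""
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
        torch.nn.ReLU(),
    )
    if pretrained:
        # A real repository would download and load its trained weights here.
        pass
    return model


def preprocessing_utils():
    """Entry point exposing helper utilities instead of a model."""
    return {"input_size": (640, 640)}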

For our project, we'll call not only the models but also other utility functions exposed as entry points.
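As a rough sketch of what those calls look like (the repository names and entry points below are the ones documented on PyTorch Hub at the time of writing and may change; the image path is just a placeholder):

import torch

# Load a pretrained YOLOv5 model from the Ultralytics repository.
yolo = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Load NVIDIA's SSD model plus its companion processing utilities,
# which are exposed as separate entry points in the same repository.
ssd = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub", "nvidia_ssd")
ssd_utils = torch.hub.load("NVIDIA/DeepLearningExamples:torchhub",
                           "nvidia_ssd_processing_utils")

# Run YOLOv5 inference on an image and print the detections.
results = yolo("path/to/your/image.jpg")
results.print()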

My thoughts: For Object Detection practitioners (both new and experienced), this ease of access to state-of-the-art models is really helpful. These models have been trained to peak performance and open sourced for the community. The possibilities are endless, and if you want, you can also call the models without their trained weights.
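For example, passing pretrained=False (a hedged sketch based on the YOLOv5 repository's entry point; flag names can vary from repo to repo) gives you just the architecture with randomly initialized weights:

import torch

# Architecture only, no trained weights (random initialization).
yolo_scratch = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=False)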

Yes, but: You should have a basic understanding of these models before using them in your projects. Not only is the idea behind them ingenious, but that understanding will also keep possible roadblocks to a minimum.

Stay smart: Keep a close eye on the PyTorch community to stay updated on any changes to Torch Hub or these models. A simple change can render your previous scripts or notebooks obsolete, so it's best to catch it early!

Click here to read the full tutorial

PyImageSearch University

This lesson is part of PyImageSearch University, our flagship program to help you master computer vision, deep learning, and OpenCV.  PyImageSearch University is updated each week with new lessons.

Don't know Python?  No problem, we've got you covered with a short and sweet Python course to get you going.

Having problems with your local development environment or IDE? No problem. You don't want to be a sysadmin, and you shouldn't waste time messing with your development environment; our pre-configured Colab Notebooks let you run code the moment you join PyImageSearch University.

You can find the current lesson under Torch Hub 101 — Practical Applications of Torch Hub, or use the direct link here.

Want to Master Computer Vision and Deep Learning?

Do you think mastering computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or that it has to involve complex mathematics and equations? Or that it requires a degree in computer science?

That's not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that's exactly what we do. Our mission is to change education and how complex Artificial Intelligence topics are taught.

Inside PyImageSearch University, you'll find:

  • 30 courses on the hottest computer vision, deep learning, and OpenCV topics
  • 30 Certificates of Completion (one for each course)
  • 39+ hours of on-demand video
  • Pre-configured Jupyter Notebooks running in Google Colab
  • Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
  • Access to centralized code repos for all 500+ tutorials on the PyImageSearch blog
  • Easy one-click downloads for code, datasets, pre-trained models, etc.
  • Access on mobile, laptop, desktop, etc.
  • New courses released regularly and new tutorials weekly, ensuring you can keep up with state-of-the-art techniques

Click here to join PyImageSearch University


PyImageSearch Team