Hi,

This week you'll learn what's behind PyTorch 2.0: TorchDynamo and TorchInductor (primarily for developers).


PyTorch, one of the most popular deep learning frameworks, has just released version 2.0 with a new feature called torch.compile that promises to speed up eager mode execution by 30-200% for most models. But how does it all work behind the scenes?

The key ingredients of PyTorch 2.0 are four new technologies: TorchDynamo, TorchInductor, AOT Autograd, and PrimTorch. Together, they make PyTorch code run faster and use less memory, all while requiring minimal code changes.

We'll take a quick tour of these technologies and then dig deeper into the usage and behavior of TorchDynamo and TorchInductor.

The big picture: The creators of PyTorch recently announced the release of PyTorch 2.0, which can deliver 30-200% speedups in eager mode for most models we run daily.

How it works: While retaining its essence, PyTorch 2.0 brings in new technologies (e.g., TorchDynamo, TorchInductor, and AOT Autograd) that aim to significantly improve execution time for all kinds of operations. For example, TorchDynamo can safely and correctly capture computation graphs from arbitrary Python code without significant code changes. TorchInductor, in turn, takes the computation graph generated by TorchDynamo and converts it into optimized low-level kernels.

AOT Autograd is PyTorch's new automatic differentiation engine, which generates backward passes ahead of time. This accelerates both the forward and backward passes. In addition, PrimTorch canonicalizes 2,000+ operators into ~250 lower-level primitive operators.

Our thoughts: With these new technologies, one can now speed up real deep learning models by up to 13% on NVIDIA A6000s. Furthermore, TorchDynamo outperforms existing solutions like TorchScript and FX Tracing by handling data-dependent control flow and non-PyTorch code without any significant changes to the code.

Introducing these new technologies to make experimentation with deep learning architectures faster, while preserving the essence of PyTorch 1.x, is a welcome development for the growing PyTorch community.

Yes, but: PyTorch 2.0 is still in its early stages. As PyTorch has always had a strong community, community feedback will play a key role in shaping the direction PyTorch 2.0 takes. Furthermore, PyTorch 2.0 requires recent hardware to fully benefit from the performance improvements.

Stay smart: Don't miss out on the next big thing in deep learning! Closely follow the PyTorch 2.0 releases and the team's quest to tailor this powerful new tool to the community's needs. Then, try PyTorch 2.0 on a small project, or join the community forums to learn more about the new features and provide feedback.

Click here to read the full tutorial

Do You Have an OpenCV Project in Mind?

You can instantly access all the code for What's Behind PyTorch 2.0? TorchDynamo and TorchInductor (primarily for developers), along with courses on TensorFlow, PyTorch, Keras, and OpenCV by joining PyImageSearch University. 

Guaranteed Results: If you haven't accomplished your Computer Vision/Deep Learning goals, let us know within 30 days of purchase and receive a refund.




Your PyImageSearch Team

P.S. Be sure to subscribe to our YouTube channel so you will be notified of our next live stream!

Follow and Connect with us on LinkedIn