Hi,

This week you'll learn about A Deep Dive into Transformers with TensorFlow and Keras: Part 2.


Venturing into the vast lands of Transformers isn't an easy task, but if you've stuck with us so far, this is something you will wholeheartedly enjoy. 

Previously, we went through all the important individual parts that combine to make the Transformer the formidable entity it is widely known to be. But this begs the question: what are the "wires" that join these parts into a complete whole? 

Remember, back in the 1990s, we watched shows like Power Rangers, where team dynamics were a huge theme. Each Power Ranger had their own mechanical beast called a "Zord," but their true power would only show when these Zords combined into one complete "Megazord." 

Imagine the different powerful parts of the transformer as something similar. They need to be properly combined to bring out their full potential, and that's exactly what we will learn about in today's post. 

Today's tutorial takes us to the second part of our "A Deep Dive into Transformers" series, where we will learn about the parts that connect the encoder and decoder of the transformer. 

The big picture: The connecting parts of the transformer are Skip Connections, Layer Normalization, the Feed-Forward Network, and Positional Encodings. These not only connect the encoder and decoder but also amplify the architecture's capabilities.

How it works: There are a few problems with how Transformers process data. A notable one is that, because all tokens are processed in parallel, the order of the data is lost. Positional encodings keep that order information alive. Skip connections, on the other hand, tremendously help the model process information by keeping prior layer representations alive. Like these two, the other connecting parts each play their own role. 
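To make the two ideas above concrete, here is a minimal NumPy sketch (the full tutorial uses TensorFlow/Keras): the standard sinusoidal positional encoding from the original Transformer paper, and a skip connection followed by layer normalization. The function names are ours, chosen for illustration.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings: each position gets a unique
    pattern of sines and cosines, so order information survives
    parallel processing."""
    positions = np.arange(seq_len)[:, np.newaxis]      # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]     # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions
    pe[:, 1::2] = np.cos(angles)   # odd dimensions
    return pe

def residual_layer_norm(x, sublayer_out, eps=1e-6):
    """Skip connection + layer norm: add the sublayer input back to
    its output (keeping the prior representation alive), then
    normalize each position's features."""
    y = x + sublayer_out
    mean = y.mean(axis=-1, keepdims=True)
    std = y.std(axis=-1, keepdims=True)
    return (y - mean) / (std + eps)

# Adding the encoding to token embeddings injects order information.
embeddings = np.random.randn(10, 64)   # 10 tokens, model dim 64
encoded = embeddings + sinusoidal_positional_encoding(10, 64)
```

In the full architecture, `residual_layer_norm` wraps every attention and feed-forward sublayer, which is what lets gradients flow through the deep stack.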

Our thoughts: Since our Deep Dive series is broken into three parts, skipping even one will leave a gap in your understanding of what Transformers truly represent. 

Yes, but: Be sure to take the time to understand the importance of the connecting wires in an architecture this complex. Get as much help as you need if you find this difficult to grasp.

Stay smart: And stay tuned for the final part of our Deep Dive series. If you've enjoyed it so far, pat yourself on the back for making it this far. The finish line is just a few steps away, so see you then! 

Click here to read the full tutorial

Do You Have a Transformers Project in Mind?

You can instantly access all of the code for A Deep Dive into Transformers with TensorFlow and Keras: Part 2, along with courses on TensorFlow, PyTorch, Keras, and OpenCV by joining PyImageSearch University. 

Guaranteed Results: If you haven't accomplished your Computer Vision/Deep Learning goals, let us know within 30 days of purchase and get a full refund. 

Discover how to finish your project faster!



Your PyImageSearch Team

P.S. The recording of our most recent Live Stream on Transformers is now available online!