Hi there,

Last week, the PyImageSearch Team was happy to introduce you to our new course on HydraNets & Multi-Task Learning for Self-Driving Cars with PyTorch. This course is our latest addition to PyImageSearch University, and it will help you explore multi-task learning architectures for self-driving cars. It was created by Jeremy Cohen, founder of the online course platform Think Autonomous, which teaches engineers how to succeed and land a job in the cutting-edge self-driving car world.

Today, Jeremy will teach us about task correlation. But before that: today, and only until tomorrow night, you can save 15% on PyImageSearch University!

Enroll Today

Monstrous Techniques and Task Correlation

The HydraNets course is a set of what I call "Monstrous Techniques." One of these techniques is something you can use today to make your semantic segmentation algorithms much better. We found it in a paper by Trevor Standley and his team, which includes Jitendra Malik, a researcher who appeared on the Lex Fridman Podcast. In the paper, they talk about something called task correlation.

What is that? To understand it, let's talk about tennis. If you learned to play tennis, did you notice how it helped you become good at other racket sports like ping-pong? It might be because you learned to track the ball, or because you got used to running, or because you practiced your backhand. Whatever the reason, learning tennis makes you better at ping-pong.

Task correlation is exactly this: learning one task helps you learn another, related task. And you can apply it to Deep Learning.

How? Let's take the example of semantic segmentation. In our latest Torch Hub series, we discussed Image Segmentation using PyTorch. But did you know that you could make those results much better?

Here's the trick: when training an image segmentation model, train it together with another Computer Vision task! (There's a short code sketch right after the list below.)

👉🏼 The study showed that adding ANY TASK to semantic segmentation made it better in almost every case where the network and dataset were big enough. How much better?

- If you add Depth Estimation, it can be 1-5% better
- If you add Keypoints Detection, it can be 0-4% better
- If you add Edge Detection, it can be 0-3% better
- If you add Normals Estimation, it can be 8-12% better
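To make "training with another task" concrete, here is a minimal sketch of what joint training can look like in PyTorch: one shared encoder feeding two task heads (segmentation and depth), trained with a weighted sum of the two losses. The tiny encoder, the head shapes, the 0.5 loss weight, and the dummy tensors are all illustrative placeholders, not the architecture from the course or the paper.

import torch
import torch.nn as nn

class TinyHydraNet(nn.Module):
    """Toy multi-task model: a shared encoder with one head per task."""

    def __init__(self, num_classes=19):
        super().__init__()
        # Shared encoder: both tasks read the same features, which is
        # where the task-correlation benefit comes from.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)  # semantic segmentation logits
        self.depth_head = nn.Conv2d(64, 1, 1)          # per-pixel depth estimate

    def forward(self, x):
        features = self.encoder(x)
        return self.seg_head(features), self.depth_head(features)

model = TinyHydraNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
seg_criterion = nn.CrossEntropyLoss()
depth_criterion = nn.L1Loss()

# Dummy batch: 4 RGB images, per-pixel class labels, per-pixel depth maps.
images = torch.randn(4, 3, 64, 64)
seg_labels = torch.randint(0, 19, (4, 64, 64))
depth_labels = torch.rand(4, 1, 64, 64)

seg_out, depth_out = model(images)
# Weighted sum of the two task losses; the 0.5 weight is arbitrary here.
loss = seg_criterion(seg_out, seg_labels) + 0.5 * depth_criterion(depth_out, depth_labels)

optimizer.zero_grad()
loss.backward()
optimizer.step()

Because both heads backpropagate through the same encoder, gradients from the depth task shape the features the segmentation head sees. That shared representation is where the improvement in the numbers above comes from.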
Those numbers come from one of the experimental settings the researchers used.

And the study doesn't stop there. It contains TONS of other insights, and it doesn't stop at image segmentation either! For example:

- Edge Detection makes Depth Estimation 3% worse... but Segmentation makes it 3% better!
- Keypoints make Normals 10% worse... but Normals make Keypoints 88% better!

In our HydraNets course, we have completely dissected this research, and I'll show you many other ways you can improve your models, including a sick way to boost the results of ANY MODEL by up to 99%!

You can access it via PyImageSearch University, where you can learn a lot more about these techniques. And if you enroll before tomorrow night, you'll get a 15% discount.

Click here to join PyImageSearch University

Not interested in hearing more about HydraNets? No problem.

Opt-out of HydraNet updates

Jeremy & the PyImageSearch Team