Hi, I'd like to formally introduce you to Jeremy Cohen, the instructor who will be leading you on your journey to master visual sensor fusion for self-driving vehicles. Jeremy is not only an authority on self-driving cars and sensor fusion, he's also an expert educator!

Good technology is invisible.

I'm a strong believer that good technology is "invisible" — we shouldn't even have to think about it; our brains and the tech are so in sync that the user experience is fluid and imperceptible.

One day, self-driving cars will be that way. We'll simply get in our car, turn it on, set our destination, and have it take us there automatically (and in all likelihood, the vehicle will already know where we want to go, simply by analyzing our daily life patterns).

Like good technology, my role in Jeremy's Visual Sensor Fusion for Autonomous Cars course is to be "invisible." I'll be there when you need me, but by and large, I need to get out of his way and let him share his expertise with you.

With that said... I'm stepping out of the way and letting Jeremy take the reins.

A message from Jeremy:

My name is Jeremy and I'm the founder of Think Autonomous, a company that helps engineers work in cutting-edge applications of AI such as self-driving cars and computer vision!

A few years ago, I was a self-driving car engineer, and I got to work on many applications of Computer Vision such as object tracking and drivable area segmentation. This was AMAZING! But also very hard... In these jobs, you always fear you won't be good enough.

Just like you, I started to learn Computer Vision with Adrian on the PyImageSearch blog. But as a self-taught learner, I was missing a few details. I was good at dealing with images, but had a hard time plugging in the camera, improving the image, and most of all: interpreting the output.
For example, what is the correct way to turn bounding box coordinates into an instruction the vehicle can understand, such as "Stop the car in 10 meters!"? As an engineer, I needed a more concrete understanding of the sensors, something that was second nature to many of my colleagues.

Years later, after founding Think Autonomous and releasing many online courses on advanced technologies to thousands of engineers, I found that it is essential for engineers to not only master image processing, but also understand the sensors better. Understanding the sensors and knowing how to interpret their data is what helps you fill a gap in your skills and become more valuable to companies!

...and that is exactly what we'll be doing in this course on visual sensor fusion!

Inside the course, we're going to spend some time together learning about 3D Vision, LiDARs, cameras, and Sensor Fusion! These are super hot skills in self-driving cars, computer vision, and augmented reality!

At the end of our session together, you'll be able to project 3D laser point clouds onto 2D images! Just like this:

[Image: LiDAR point cloud projected onto a camera image]

I hope you're ready! Enrollment begins Tuesday.

Mark your calendars: Visual Sensor Fusion for Autonomous Cars releases in PyImageSearch University on Tuesday, July 13th.

To celebrate the release of the course we'll be offering both:

- A 7-day free trial to PyImageSearch University
- 25% OFF all membership prices
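If you're curious what projecting a LiDAR point cloud onto a camera image involves, here is a minimal sketch of the standard pinhole-camera projection. The calibration values (`K`, `R`, `t`) below are made-up placeholders for illustration only; in practice they come from the vehicle's calibration files, and the course covers how to obtain and use them properly.

```python
import numpy as np

# Hypothetical calibration values, for illustration only. A real setup
# loads these from the vehicle's calibration data.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])   # camera intrinsic matrix
R = np.eye(3)                           # LiDAR-to-camera rotation
t = np.array([0.0, 0.0, 0.0])           # LiDAR-to-camera translation (meters)

def project_lidar_to_image(points, K, R, t):
    """Project Nx3 LiDAR points (meters) into 2D pixel coordinates.

    Returns an Mx2 array of (u, v) pixels for the points that lie in
    front of the camera (positive depth).
    """
    cam = points @ R.T + t              # transform into the camera frame
    cam = cam[cam[:, 2] > 0]            # drop points behind the camera
    uv = cam @ K.T                      # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]       # perspective divide -> pixels

points = np.array([[0.0, 0.0, 10.0],    # straight ahead, 10 m away
                   [1.0, 0.0, 10.0],    # 1 m to the right
                   [0.0, 0.0, -5.0]])   # behind the camera: discarded
pixels = project_lidar_to_image(points, K, R, t)
print(pixels)  # first point lands at the principal point (320, 240)
```

With real calibration data you would do the same three steps (rigid transform, intrinsics, perspective divide) and then draw the resulting pixels on the image, colored by depth.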
Keep an eye on your inbox for more details...

Adrian Rosebrock
Chief PyImageSearcher