The IEEE Intelligent Vehicles Symposium is an annual technical forum sponsored by the IEEE Intelligent Transportation Systems Society (ITSS). It brings together researchers and practitioners from universities, industry, and government agencies worldwide to share and discuss the latest advances in theory and technology related to intelligent vehicles. During IV 2020, one of our very own Machine Learning Engineers, Emilio Oldenziel, wrote and submitted a paper together with two former colleagues on the topic of provident detection of vehicles at night.
We asked if he’d be willing to share a little bit about the paper and his findings, and he agreed.
Thanks for joining us, Emilio. Can you tell us a bit about yourself and how you ended up working on this paper?
Emilio: Sure! Hi everyone, my name is Emilio and I’ve been working at Eraneos for one year now as a Machine Learning Engineer. Before starting my career, I studied Computer Science at the University of Groningen.
At the end of my master’s degree, I wrote a thesis in the field of Machine Learning at Porsche AG, where I was doing a graduation internship. After finishing the project, my supervisor at Porsche asked if I’d like to write a research paper on the topic. I thought it would be an excellent idea to share my findings with the broader community, so we submitted it to the IEEE. It got accepted, and we were supposed to present it in Las Vegas in June, but due to the travel restrictions the event was held online, so I couldn’t fly to the US in person. We still presented it, and the reactions were very positive, so I’m quite happy with the results!
It’s unfortunate that you couldn’t see Las Vegas! But it’s still a great achievement to be accepted at the IEEE, their approval process is very rigorous, from what I’ve heard. So, why autonomous driving? Was there a reason you chose this specific field?
Emilio: Autonomous vehicles and self-driving cars offer many potential benefits for road mobility. Innovation in the field will allow us to improve traffic flows, reduce accidents, and of course decrease our environmental impact. But I also have to admit that there is some professional curiosity as well. As an engineer, I find working on these challenging problems quite fascinating.
While fully self-driving cars are still far away, new models are getting better and better at assisting drivers and improving driving comfort. These systems are generally called Advanced Driver Assistance Systems (ADAS) and include things such as adaptive cruise control, traffic sign recognition, parking cameras, adaptive driving beam, and many more. For my thesis, I worked on the topic of Adaptive Driving Beam (ADB). In layman’s terms, by using camera recognition technology, the ADB (and other similar systems) adjusts the vehicle’s light beam to optimize visibility and road safety.
Sounds like a very interesting topic. Can you explain what camera recognition technology is and why this topic is of such interest to the industry?
Emilio: Camera recognition based ADAS rely on the camera recognition system to recognize objects in the environment in order to perform a certain action. Objects which are not in the direct line of sight of the camera (obstructed objects) are not detectable. For example, when driving at night, an approaching car may still be hidden from view, say behind a curve or a hill crest. Drivers are required to have their headlights on to create visibility for themselves and other people on the road. In that situation, although the light beams are visible, the car itself is still not detected by the camera. Humans are very good at recognizing this, but current camera systems only recognize the approaching vehicle once they “see” it directly. In our study we measured the time it takes a human to providently recognize an oncoming vehicle using this phenomenon, and how long it takes a camera system to do the same thing using direct sight.
In the second part of the study we created a dataset and trained a model to perform the task of providently detecting vehicles using the light features. The experiments showed us that there is a significant time gap between human provident detection and camera system detection. More importantly, they showed how this gap can be reduced with the use of our Machine Learning model.
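To make the time-gap idea concrete, here is a minimal sketch of how such a gap could be quantified from annotated video sequences. All names, timestamps, and the data layout below are invented for illustration; the paper’s actual dataset and measurement protocol differ.

```python
# Hypothetical sketch: per recorded sequence, compare the moment a light
# artifact (glow, reflection) first becomes visible with the moment the
# vehicle itself enters the camera's direct line of sight.

def mean_detection_gap(sequences):
    """Average gap in seconds between provident detection (via light
    artifacts) and direct camera detection across sequences."""
    gaps = [s["direct_t"] - s["provident_t"] for s in sequences]
    return sum(gaps) / len(gaps)

# Invented example annotations (seconds from the start of each clip):
sequences = [
    {"provident_t": 2.1, "direct_t": 3.8},
    {"provident_t": 0.9, "direct_t": 2.6},
    {"provident_t": 1.5, "direct_t": 2.9},
]

print(mean_detection_gap(sequences))  # mean lead time a provident detector could gain
```

A model that detects the light artifacts would, on average, react this many seconds earlier than one relying on direct sight alone.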
Here is the model in action. As you can see, the camera has already detected an oncoming car and adjusted the vehicle’s light beam automatically.
That’s amazing! I guess the most burning question now is: when can we expect to see this on the market?
Emilio: It might take some time. As mentioned in the paper, this is only a proof of concept. Further studies are currently being performed to better evaluate this method and integrate it into vehicles. There are still some hardware and computational constraints which need to be solved first. But I think it will definitely be an available feature in a coming generation of ADAS cameras. It’s only a matter of time!
Thanks, Emilio, for your input and your work in the field. For everyone curious to read the paper, you can find it here.