When Storm Emma began forming over southern England in late February 2018, we weren’t expecting 22 inches of snow to cover the ground in just a week. As much of an inconvenience as it was for the general population of the UK, we considered it a unique and timely opportunity to test the quality of our world-beating Vision AI system for autonomous car perception.
We wanted to use the most basic sensing capability (an off-the-shelf, consumer-grade camera) to test our system in the most challenging driving conditions. Driving on snow-covered roads is a huge challenge even for human drivers, particularly where salting is infeasible, such as on residential neighbourhood roads and rural lanes. We set ourselves the goal of testing Vision AI’s ability to detect the ground surface and segment the drivable free space in exactly those conditions: snow-covered residential neighbourhood roads and rural lanes. This meant we would never see the full road surface clearly, most of the road and lane markings would be snowed over, there would be slush on the road criss-crossed with tyre tracks, we wouldn’t be able to see the kerbs and lane edges, and almost everything on the ground would look white.
This is probably one of the hardest sets of conditions one can throw at a perception engine tasked with detecting where the road surface is and where the autonomous system can drive. Our Vision AI has two key features that put it beyond the state of the art in autonomous perception. First, it is a generalisable perception system that works out of the box: you turn it on and it does what it is supposed to do without any data-driven training. Second, it is technically sophisticated enough to detect and segment the ground surface and drivable free space in conditions where humans must make inferences and guesses about where the ground might be. When we cannot see the road clearly because of snow cover, for example, we tend to follow the tracks left by road users who have driven before us, without needing to see the entire road surface. Replicating this behaviour in an autonomous perception system requires technically very advanced capabilities.
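The track-following intuition above can be illustrated with a toy sketch. This is purely illustrative and not the Vision AI method; the function name, the brightness threshold, and the synthetic frame are all invented for the example. It assumes an 8-bit grayscale image where fresh snow is bright and compacted tyre tracks are darker, and keeps only the track region connected to the vehicle’s own position at the bottom of the frame:

```python
import numpy as np

def segment_tracks(img: np.ndarray, snow_thresh: int = 200) -> np.ndarray:
    """Return a boolean mask of candidate drivable pixels.

    Pixels darker than `snow_thresh` are treated as tyre tracks or slush;
    of those, we keep only the 4-connected component touching the bottom
    row, i.e. the track the vehicle itself is sitting on.
    """
    track = img < snow_thresh                    # darker than fresh snow
    h, w = track.shape
    mask = np.zeros_like(track)
    # simple flood fill seeded from bottom-row track pixels
    stack = [(h - 1, x) for x in range(w) if track[h - 1, x]]
    while stack:
        y, x = stack.pop()
        if 0 <= y < h and 0 <= x < w and track[y, x] and not mask[y, x]:
            mask[y, x] = True
            stack.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return mask

# tiny synthetic frame: bright "snow" (255) with a dark track (80) up the middle
frame = np.full((6, 6), 255, dtype=np.uint8)
frame[:, 2:4] = 80
print(segment_tracks(frame).sum())  # 12 track pixels found
```

A real system has to cope with shadows, slush of varying brightness, and tracks that curve out of frame, which is why a fixed threshold like this would fail in practice; the sketch only conveys the idea of inferring drivable space from track evidence rather than from visible road markings.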
To our delight, Vision AI never once let us down. We drove over the entire period from Emma’s formation to its dissipation (nearly six days), clocking over 250 miles of driving and perception data collection, and Vision AI performed like an expert road-surface detector, cleanly segmenting roundabout junctions, lanes partly occluded by parked vehicles, slush, and dark tyre tracks on an otherwise uniformly white surface.
When you watch the video clips of Vision AI at work, you will notice how clean and accurate its output is. The surface conditions are feature-sparse, meaning there isn’t much to detect and make sense of, yet the system produced very high-fidelity output. We keep an eye on how the field of autonomous perception is advancing and keenly review the video footage our industry peers release publicly. We wouldn’t be off the mark in saying that this is a ‘world first’ in terms of the publicly available evidence of the state and technical sophistication of autonomous perception capabilities.
We have broken new ground in pushing these technical boundaries and have been constantly refining the capabilities of Vision AI throughout this year. We hope the UK gives us another opportunity this year to test the advances in Vision AI performance we have achieved over the last 8–10 months.