PathFinder – a mapping-free, go-anywhere, autonomous path planner

PathFinder in action on muddy roads without lane markings or curb edges

Imagine you have to go from the bedroom to the kitchen in a new house, completely blindfolded. You would first need to practise the route a couple of times without the blindfold, and then, under the blindfold, you would keep touching the walls along the way as you make your turns, so that you reach your destination without crashing into things. When you walk the path without the blindfold you are creating your ‘localisation map’, and when you touch the walls under the blindfold to get a sense of your position with respect to the room, you are localising using your map memory.

When we drive, we carry out several tasks without being conscious of the complexity of the instant decisions we make to ensure we drive safely. An Autonomous Driving System (ADS) needs to replicate this very complex performance, and it is often challenging to understand what’s really happening under the hood.

Path planning enables a highly automated vehicle to select viable path trajectories continuously, in real time, as the vehicle traverses from one position to another. In finding and following a path, the ADS must be able to detect where the drivable free space is, segment it accurately, know its own precise position with respect to its environment, and then calculate a viable path to follow within the total drivable free space while maintaining its position along that path.
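To make that cycle concrete, here is a minimal sketch of the per-frame loop described above. The function names and trivially simplified placeholder bodies are invented for illustration; they only show the order of the steps, not how PathFinder actually implements them.

```python
# A minimal sketch of the per-frame planning cycle, using hypothetical function
# names with trivially simplified placeholder bodies. It illustrates the order
# of the steps only, not PathFinder's actual implementation.
def detect_free_space(frame):
    """Segment the drivable free space from the current camera frame."""
    return frame["free_space"]                     # placeholder segmentation


def localise(free_space, previous_pose):
    """Estimate the vehicle's position with respect to the perceived scene."""
    return previous_pose                           # placeholder pose update


def plan_path(free_space, pose):
    """Compute a viable path that stays inside the free space."""
    return [pose, (pose[0] + 1.0, pose[1])]        # placeholder: straight ahead


def follow(path):
    """Hand the chosen path to the vehicle controller."""
    print("following", path)


def planning_cycle(frame, pose):
    free_space = detect_free_space(frame)          # 1. detect and segment free space
    pose = localise(free_space, pose)              # 2. know our position in the scene
    path = plan_path(free_space, pose)             # 3. find a viable path within it
    follow(path)                                   # 4. keep the vehicle on that path
    return pose


planning_cycle({"free_space": [(0, -2), (0, 2), (20, 2), (20, -2)]}, pose=(0.0, 0.0))
```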

The ability of the ADS to find its own precise position with respect to its environment is called ‘localisation’. The industry state of the art for real-time localisation relies on ‘localisation maps’. These maps are different from the navigation maps we use every day. A localisation map is a detailed feature memory based on high-precision data collected by manually driving the path beforehand. It contains a wealth of information on the scene’s features and structure, such as lane edges, lane markings, trees, buildings, road signs and traffic lights.
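As an illustration of the idea only (not of any particular map format), a localisation map can be thought of as a list of precisely surveyed features, with localisation amounting to matching what the vehicle currently observes against that stored memory:

```python
# An illustrative sketch of the idea only, not any specific map format:
# a localisation map as a list of precisely surveyed features, and
# localisation as matching currently observed features against that memory.
from dataclasses import dataclass
import math


@dataclass
class MapFeature:
    kind: str        # e.g. "lane_edge", "road_sign", "traffic_light", "tree"
    x: float         # surveyed position in a global frame, metres
    y: float


localisation_map = [
    MapFeature("lane_edge", 12.4, 3.1),
    MapFeature("road_sign", 15.0, 5.2),
    MapFeature("traffic_light", 40.2, 4.8),
]


def match(observed_kind, observed_x, observed_y):
    """Associate an observed feature with the nearest stored feature of the same kind."""
    same_kind = [f for f in localisation_map if f.kind == observed_kind]
    return min(same_kind, key=lambda f: math.hypot(f.x - observed_x, f.y - observed_y))


# The offset between where a feature is observed and where the map says it should
# be gives a correction to the vehicle's pose estimate.
anchor = match("road_sign", 14.6, 5.0)
print("pose correction:", anchor.x - 14.6, anchor.y - 5.0)
```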

Autonomous cars need to localise to within 10-15 centimetres of their true position, in real time, as they traverse a path to make sure they don’t drift. Localisation maps, being a high-precision memory, make this possible. The challenge of this approach is two-fold: first, autonomous cars today can drive only where they have been driven before manually for data collection (the practice runs without the blindfold), and second, it is nearly impossible to scale these maps worldwide, over nearly 200 million kilometres of road networks. Add to this the fact that the world keeps changing all the time – road works, changes in road layouts, new buildings and so on – so updated versions are constantly needed, and the maps must be built for driving in both directions. Imagine what would happen in our analogy if the room layout were changed and the furniture moved around: you would be unable to get to the kitchen blindfolded because you would struggle to work out your position. You would have to go back and practise the route again without the blindfold.

Interestingly enough, human drivers operate very differently from an ADS when it comes to localisation. Human drivers don’t need centimetre-precise prior information on where everything is around them – often a GPS satellite navigation system is more than enough for us to navigate busy urban streets. Human drivers can drive on roads they have never driven on before without detailed prior map data – autonomous cars today struggle with this challenge.

The reason human drivers are able to drive with such flexibility is down to our incredible environmental perception. In a fraction of a second, we perceive where we are in the road context, what is around us, what the road looks like, where the traffic lights are and how other cars are navigating through a junction, and we can drive safely on the basis of that perception.

Our technical inspiration comes from how the human mind processes visual data to perceive the world, and we have built a visual cognition engine that can match this performance for autonomous driving. This means our autonomous car doesn’t need prior high-definition localisation maps to drive; it drives by seeing and understanding its environment. We are proud to unveil another world-beating capability for the first time: our autonomous car can drive where there are no maps. Our VisionAI is a generalisable cognition and perception capability. VisionAI makes it possible for our ADS to perceive the scene as humans do, keep the vehicle localised with respect to its environment and safely follow the chosen path trajectory. To plan the path for our autonomous vehicle as it drives, our ADS calculates not one but several concurrent path trajectories based on highly accurate detection of drivable free space and of all stationary and moving obstacles. The most viable trajectory from amongst the several possible ones is selected in real time, and those that become infeasible are automatically dropped from the set of possibilities.
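As a toy illustration of that selection idea – maintaining several concurrent candidate trajectories, dropping the ones that become infeasible and picking the most viable one each cycle – here is a short sketch. The feasibility and scoring rules below are invented for the example and are not PathFinder’s own:

```python
# A toy illustration of concurrent trajectory selection: keep several candidate
# trajectories, drop those that become infeasible, and pick the most viable one.
# The feasibility and scoring rules here are invented for the example only.
from dataclasses import dataclass
import math


@dataclass
class Candidate:
    points: list            # (x, y) waypoints in the vehicle frame, metres


def clearance(candidate, obstacles):
    """Smallest distance from any waypoint to any detected obstacle."""
    return min(
        math.hypot(px - ox, py - oy)
        for px, py in candidate.points
        for ox, oy in obstacles
    )


def select(candidates, obstacles, min_clearance=0.5):
    # drop trajectories that have become infeasible (too close to an obstacle)
    feasible = [c for c in candidates if clearance(c, obstacles) >= min_clearance]
    # the most viable remaining trajectory: here, simply the one with most clearance
    return max(feasible, key=lambda c: clearance(c, obstacles))


# toy usage: three candidate corridors, one blocked by an oncoming vehicle
candidates = [
    Candidate([(5.0, -1.0), (10.0, -1.0)]),   # keep right
    Candidate([(5.0, 0.0), (10.0, 0.0)]),     # hold the centre
    Candidate([(5.0, 1.0), (10.0, 1.0)]),     # drift left
]
oncoming = [(10.0, 1.2)]                      # obstacle on the left
best = select(candidates, oncoming)
print(best.points)                            # the "keep right" candidate wins
```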

We have been testing our ‘PathFinder’ for nearly 15 months, in all sorts of varied and difficult scenarios, clocking nearly 8,000 miles of driving on completely off-road paths, highways, rural roads with no lane markings, residential neighbourhood roads and urban city-centre layouts. PathFinder and VisionAI work together to tell our autonomous car what’s around it and how it should drive through its environment to safely avoid obstacles and make its way to the end destination. PathFinder is able to pick out a safe and viable trajectory each time, no matter the scenario.

Here, we share a few visualisations of PathFinder’s outputs from a bird’s-eye view looking down. The little square at the bottom right always represents the autonomous vehicle, and the red dots represent the nearest obstacles and match up with the perception detections from VisionAI in the video image. The dynamically changing blue bars represent the detection of drivable free space in real time, matching the detections in the video. The green dotted lines are ‘localisation’ markers for the vehicle, and the orange group of lines represents all the possible paths our autonomous vehicle can traverse, with the differently coloured line being the one chosen as the most viable to follow.

We chose Park Street through Woburn Safari Park, connecting Woburn to the M1, as a test case. The road runs straight through the park but is just wide enough for two vehicles to pass in opposite directions; it has no lane markings, no clear road edges or curbs, and small wooden bollards dotted along both sides, and, most importantly, roaming deer can cross the road at any time. Notice how PathFinder automatically and dynamically changes the trajectory outputs when opposing traffic approaches our vehicle. We had no localisation maps or GPS data for the road, yet the output created a clean driving corridor within which the vehicle could localise itself. The Point-of-View output of the path planning system in the video shows that impressive capability.

We had to push the limits to test our system performance in the most challenging conditions – driving on a really narrow rural B road barely wide enough for two cars. It had rained earlier that day and the road edges were just wet mud, with of course no markings of any sort and no clear curbs. These roads are tricky even for experienced human drivers. It was a sheer moment of pride and joy for us to see PathFinder navigate in a totally unmapped environment.

We have been refining and enhancing the capabilities of PathFinder and VisionAI over the last six months and are now getting ready to demonstrate how they work for fully autonomous driving on public roads. We will be releasing some really awesome stuff over the coming months – keep watching this space for more soon.