“Mapping” is a term commonly associated with autonomous vehicles. We generally think of maps as bird’s-eye-view representations of roads and geographies which highlight important features such as the locations of buildings, roadside infrastructure and places of interest. People use maps for navigation – to answer the question, “how do I get there?” Autonomous vehicles, however, use maps in very different ways.
Autonomous vehicles are unable to navigate with simple high-level goals such as “take the next left” or “turn right at the end of the road”. While these instructions are easy for human drivers to follow, translating them into autonomous driving actions is a complicated task – that’s where maps come in. Autonomous cars utilise high-definition (HD), three-dimensional maps of the environment to know the centimetre-precise road layout in advance. These maps carry annotations marking the locations of lane markings, traffic lights, traffic signs and other important road features, as well as the exact path an autonomous vehicle should travel along to “take the next left”, for example. These paths are usually annotated by expert operators and define the preferred behaviour of an autonomous car.
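To make the idea of annotated map data concrete, here is a minimal, hypothetical sketch of how such annotations might be structured in code. The class and field names are illustrative assumptions; real HD map formats are far richer.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified HD map structures for illustration only;
# production HD maps store far more detail and precision metadata.

@dataclass
class MapFeature:
    kind: str        # e.g. "lane_marking", "traffic_light", "traffic_sign"
    position: tuple  # (x, y, z) in metres, in the map's coordinate frame

@dataclass
class AnnotatedPath:
    name: str                                       # e.g. "take_next_left"
    waypoints: list = field(default_factory=list)   # centimetre-precise (x, y) points

@dataclass
class HDMapTile:
    features: list = field(default_factory=list)    # annotated road features
    paths: list = field(default_factory=list)       # operator-defined driving paths

# A tiny example tile: one traffic light and one annotated manoeuvre.
tile = HDMapTile(
    features=[MapFeature("traffic_light", (12.3, 4.5, 5.1))],
    paths=[AnnotatedPath("take_next_left", [(0.0, 0.0), (5.0, 2.5), (8.0, 8.0)])],
)
```

In a scheme like this, “take the next left” becomes a lookup of a pre-annotated path in the current tile rather than an instruction the vehicle must interpret on the fly.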
In essence, an HD map tells a car exactly what the static scene looks like, where important road features are, and which driving manoeuvres are typically used to negotiate a specific part of the road network. Localisation is the important step by which an autonomous car determines its exact position within an HD map, by matching its live sensor data against the stored map data. It’s like recognising a place you’ve visited before. This allows the autonomous car to know where it is in the map and to utilise all of the prior information stored within it.
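The matching step can be sketched as an optimisation: find the offset that best aligns the live scan with the stored map points. The toy example below uses a brute-force grid search over 2D translations; real systems use far richer methods (scan matching such as ICP, particle filters, pose graphs), and all numbers here are illustrative assumptions.

```python
# Toy 2D localisation sketch: estimate the vehicle's position in a map by
# finding the translation that best aligns live "scan" points to stored
# map landmarks. Purely illustrative; not a production algorithm.

map_points = [(0.0, 0.0), (4.0, 1.0), (2.0, 5.0), (6.0, 3.0)]

def simulate_scan(points, dx, dy):
    """The live sensor sees map landmarks shifted into the vehicle's frame."""
    return [(x - dx, y - dy) for x, y in points]

def alignment_cost(scan, candidate):
    """Sum of squared nearest-neighbour distances after applying an offset."""
    cx, cy = candidate
    cost = 0.0
    for sx, sy in scan:
        px, py = sx + cx, sy + cy
        cost += min((px - mx) ** 2 + (py - my) ** 2 for mx, my in map_points)
    return cost

def localise(scan, search=range(-10, 11)):
    """Brute-force grid search over 0.5 m offsets (a stand-in for real optimisation)."""
    candidates = [(dx * 0.5, dy * 0.5) for dx in search for dy in search]
    return min(candidates, key=lambda c: alignment_cost(scan, c))

scan = simulate_scan(map_points, dx=2.0, dy=-1.5)
print(localise(scan))  # recovers the true offset (2.0, -1.5)
```

The key point is that localisation succeeds only when the live scan and the stored map still describe the same scene – which is exactly why stale maps are a problem.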
To create HD maps, autonomous cars need to pre-drive routes to collect the map data and build a kind of “digital scene memory”. This raw survey data is then enhanced with human input to highlight important road features for use in autonomous driving. This is a time-consuming process, as these annotations are performed on each LIDAR (laser scanner) and camera frame, and multiple sensors produce many frames during each second of driving.
Beyond the annotation effort, the other challenge is keeping HD maps fresh and up to date. Imagine driving back to a place you’ve visited before, except the road structure and environment have changed. Your memory of that place no longer matches what it now looks like, but as a human driver you can continue to drive in an exploratory mode. For autonomous cars, that’s not really possible: the car can no longer match its “digital scene memory” to its live perception of the environment and is unable to locate itself in the HD map. Even if it can still localise itself, the HD map is no longer representative of the scene and can’t be relied upon to guide autonomous driving. That’s why it is critically important for autonomous cars to perform frequent mapping runs, ensuring that their “digital scene memory” of places stays fresh and accounts for changes such as roadworks, diversions, or infrastructure upgrades.
So, autonomous cars need HD maps of every road they drive on, and they need those maps kept up to date. That’s a massive challenge given the size of global road networks and the required frequency of map updates. Some companies are taking on this challenge of mapping out the world.
We are taking a different approach…