Driving the safety revolution

We build software technologies for series production passenger vehicles

Active Safety

Active Lane Keeping with Lateral Control Guidance, Lane Departure Warning, and Closest In-Path Vehicle Detection. All-condition Automatic Emergency Braking with Forward Collision Warning, Vehicle Cut-In Detection, all-speed Automatic Cruise Control guidance, and Intelligent Speed Assist without a lead vehicle. Detection of vehicles, pedestrians, cyclists, children, and other generic objects.

Hands Free Driving

All Active Safety features, plus worldwide-coverage map-enabled speed guidance, road semantics and topology, high-curvature enhanced lateral control, lateral control guidance for faded lane markings, enhanced closest in-path vehicle detection, active speed adjustment for on-ramps and merges, curve warning, and headlight steering guidance.

Autonomous Pilot

All Active Safety features and worldwide-coverage map features, plus Advanced Path Planning and Control with all-weather, multi-sensor surround object detection and lane detection for exit-to-exit highway automation.

Technology In Action

See how our software enables active safety, hands-free driving, and autonomous pilot in a variety of weather, lighting, and road conditions

Video demonstrations cover temporary barriers, pitch black, rainy conditions, winding rural roads, glare and reflectors, traffic cones, connecting roads, English countryside, off-highway driving, complex roadworks, and autonomous pilot.

Our Unique Advantage

Our mapping technology dynamically creates a cm-accurate road model for any highway around the world, and our scene perception technology identifies every possible object.

Scene Perception Technology

Scene perception is a key technology which processes binary streams of raw sensor data and converts them into a semantic understanding of important objects and road elements. A key challenge in scene perception is identifying objects that the system has not encountered before and has no prior experience with. Many systems fail in such scenarios, leading to catastrophic collisions. This is referred to as the ‘Long Tail Problem’.

We have invented a scene perception technology which can detect and avoid all objects, including pedestrians, cyclists, and vehicles – as well as strange and unknown objects which our system has never come across before. This key breakthrough enables our technology to safely contend with the unexpected, such as an animal crossing the road, debris, or other unusual scenarios.

Our scene perception technology works across sensor types including low-cost cameras, LIDAR, and RADAR, in challenging and varied weather and lighting conditions.
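
To make the idea concrete, here is a minimal illustrative sketch, in Python, of the general pattern described above: class-agnostic obstacle extraction from 3D scene data, followed by optional labelling of the objects a classifier happens to recognise. It is not our production pipeline, and every function and parameter name in it is a hypothetical stand-in.

# Illustrative sketch only: class-agnostic obstacle extraction from a 3D point
# cloud, followed by optional labelling of known categories. All names are
# hypothetical and the logic is deliberately simplified.
import numpy as np

def remove_ground(points: np.ndarray, ground_z: float = 0.0, tol: float = 0.2) -> np.ndarray:
    """Drop points close to an assumed flat ground plane at height ground_z."""
    return points[points[:, 2] > ground_z + tol]

def cluster_obstacles(points: np.ndarray, cell: float = 0.5) -> list:
    """Bucket the remaining points by 2D grid cell.

    Each occupied cell is treated as part of a generic obstacle, whether or
    not any classifier has ever seen that object type before. A production
    system would use proper Euclidean clustering; this is a stand-in.
    """
    cells = np.floor(points[:, :2] / cell).astype(int)
    clusters = {}
    for idx, key in enumerate(map(tuple, cells)):
        clusters.setdefault(key, []).append(idx)
    return [points[idxs] for idxs in clusters.values()]

def label_known_classes(clusters, classifier=None) -> list:
    """Attach labels (vehicle, pedestrian, ...) where a classifier recognises
    a cluster; unrecognised clusters remain valid obstacles to avoid."""
    return [classifier(c) if classifier else "unknown_object" for c in clusters]

In this pattern, the long-tail cases are handled by the geometric steps: an object does not need a known class label in order to be detected and avoided.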


Globally Scaled HD Maps

High definition maps are an essential augmenting technology for active safety, hands-free driving and autonomous pilot. HD maps provide enhanced scene awareness on roads where lane markings are faded, and provide information beyond sensor range. However, building maps at a global scale has remained an open technological challenge, as it typically requires pre-driving each road, collecting massive amounts of data, and carrying out time-consuming hand annotation.

We have developed a mapping technology which creates a cm-accurate road model for any highway, around the world – delivering globally scaled HD maps. We call our mapping technology the ‘Self Generating Living Map’ (SGLM).

Our technology has no dependence on pre-driving, data collection, data crowd-sourcing, expensive mapping vehicles, or hand annotation of lane lines. A dynamically accurate road model is created in real-time with all relevant features for active safety and autonomous pilot, including lane topology, road shape, road curvature, ramps, merges, barriers, and static roadside structures.
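
As a purely illustrative sketch of the data flow this implies – not the SGLM algorithm itself – the Python fragment below shows how a local road model could be assembled on the fly from a coarse 2D navigation backbone plus live camera and GPS input. Every name, method, and parameter in it is an assumption made for the example.

# Illustrative sketch: fusing a coarse 2D navigation backbone with live,
# camera-based lane observations to build a local road model in real time.
# This is NOT the SGLM algorithm; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class RoadModel:
    """Local road model ahead of the vehicle, refreshed every frame."""
    lane_count: int = 0
    ego_lane_curvature: float = 0.0                        # 1/m, signed
    upcoming_features: list = field(default_factory=list)  # e.g. ramps, merges

def build_local_road_model(nav_backbone, gps_fix, camera_frame, lane_detector):
    """Combine coarse map topology with live lane geometry.

    nav_backbone  : coarse 2D navigation map (road centrelines, junctions)
    gps_fix       : (latitude, longitude) from a standard GPS receiver
    camera_frame  : latest image from a forward-facing monocular camera
    lane_detector : callable returning lane boundary estimates for the frame
    """
    # 1. Coarse localisation: which road segment are we on, and what lies ahead?
    segment = nav_backbone.match(gps_fix)
    lookahead = nav_backbone.features_ahead(segment, horizon_m=500.0)

    # 2. Fine geometry: lane count and shape come from the live camera view.
    lanes = lane_detector(camera_frame)

    # 3. Fuse: the backbone supplies topology beyond sensor range, while the
    #    camera supplies the centimetre-level local geometry.
    return RoadModel(
        lane_count=len(lanes),
        ego_lane_curvature=lanes[0].curvature if lanes else 0.0,
        upcoming_features=lookahead,
    )

The key point the sketch tries to capture is that nothing here is pre-driven or hand-annotated: the backbone already exists as a standard navigation map, and the HD detail is produced live by the vehicle itself.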

We work with Vehicle Manufacturers and Tiered automotive suppliers

We are a specialist software provider for the automotive industry. Our technologies are sensor agnostic, run on commodity compute hardware, and work across vehicle types and platforms at highly competitive price points – delivering superior safety and performance.

Our Safety Thinking

We have published our detailed safety framework – Comprehensive Architecture for Road Safety of Autonomous Vehicles (CARSAV). CARSAV takes a deep dive into assuring safety for automated driving technologies and advanced driving functions. CARSAV serves as an open framework to aid regulators, insurers, policy makers, and technology developers in following well thought-out guidelines for building, testing, and deploying advanced driving technologies safely on public roads.

Our Intellectual Property

We have a diverse Intellectual Property Portfolio built on scientific inventions in core technologies. Our growing IP portfolio consists of over 35 unique intellectual property assets including patents, inventions, trade secrets and know-how. This enables great flexibility in product specification and features for each customer’s unique requirements.

Frequently Asked Questions

What is required to initiate a proof of concept?

We can rapidly deploy a proof of concept on demand on any vehicle made available with standard compute, storage, and drive-by-wire enablement.

Our advanced safety technologies integrate with production-grade automotive sensors to provide a full set of software functionalities.

For Active Safety, we provide software for Camera-centric sensing augmented with Long Range RADAR.

To enable Hands Free Driving, our software additionally integrates short range surround view RADARs and ultrasound sensors. To add redundancy and enhanced scene awareness, we leverage our Globally Scaled HD maps with cm-accurate road information.

For Autonomous Pilot, we combine multiple surround-view sensing modalities with our Globally Scaled HD maps, to deliver the highest level of safety and performance for automated driving without human supervision. 
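
As a hypothetical illustration only – the exact sensor sets are programme-specific and agreed with each customer – the tiering described above could be expressed as a configuration like the following Python snippet.

# Hypothetical illustration of tiered sensor configurations. The actual
# production configurations are programme-specific and not listed here.
SENSOR_CONFIGS = {
    "active_safety": {
        "cameras": ["front_mono"],
        "radar": ["long_range_front"],
        "hd_map": False,
    },
    "hands_free_driving": {
        "cameras": ["front_mono"],
        "radar": ["long_range_front", "short_range_surround"],
        "ultrasound": True,
        "hd_map": True,   # Globally Scaled HD map adds redundancy and scene awareness
    },
    "autonomous_pilot": {
        "cameras": ["surround_view"],
        "radar": ["long_range_front", "short_range_surround"],
        "hd_map": True,   # combined with multi-modal surround sensing
    },
}

def sensors_for(tier: str) -> dict:
    """Look up the illustrative sensor set for a product tier."""
    return SENSOR_CONFIGS[tier]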

Can you provide a slice of your technology?

Our software is fully modular by design and we can provide component technologies to enable a bespoke functionality for a particular series production vehicle model. This allows us to tailor software solutions spanning Active Safety, Hands-Free Driving and Autonomous Pilot.

Our technology can be integrated across flexible sensing configurations to meet a desired Bill of Materials budget in accordance with system requirements.

Our technology includes state-of-the-art scene perception using Computer Vision at its core to provide 360 degree surround safety.

We can provide HD maps for the highway networks of any country worldwide to enable advanced safety technologies.

To request details of our technology portfolio, please contact us at info@propelmee.com 

How much storage does your HD map require?

Our HD maps are ultra-lightweight. Global maps of worldwide highway networks can be stored on-vehicle, and there is no requirement for persistent back-end data storage. Our HD maps leverage existing 2D navigation maps as a backbone, providing global coverage from the outset. Enriched HD features are enabled through on-board processing using our proprietary mathematical invention.

This means that the storage requirements for our HD map are identical to standard SatNav systems already deployed in series production vehicles. 

HD-enriched map features are cm-accurate and responsive even to changing lane and road layouts. This is a unique extension of the state of the art in HD mapping for advanced safety technologies.
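
To illustrate why the footprint stays at SatNav scale, the sketch below shows the kind of data that would actually be persisted on the vehicle – a coarse 2D road graph – with the HD attributes derived at runtime rather than stored. It is an assumption-laden illustration, not a description of our on-disk format, and no storage figures are implied.

# Illustrative only: the on-vehicle store holds a coarse 2D road graph, much
# like a standard navigation map, while HD attributes (lane geometry,
# curvature, ...) are computed on-board at runtime instead of being stored.
from dataclasses import dataclass

@dataclass(frozen=True)
class RoadSegment:
    """Coarse 2D backbone record, comparable to a SatNav map entry."""
    segment_id: int
    shape_points: tuple      # sparse (lat, lon) polyline
    speed_limit_kph: int
    connects_to: tuple       # downstream segment ids

def hd_attributes(segment: RoadSegment, camera_frame, gps_fix, enrich):
    """Compute HD features on demand instead of reading them from disk.

    `enrich` stands in for the on-board processing that turns the backbone
    plus live sensing into cm-accurate lane geometry. Because nothing HD is
    persisted, storage stays comparable to a standard navigation map.
    """
    return enrich(segment, camera_frame, gps_fix)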

How is your HD map different from other approaches?

Competing technologies attempt to build self-updating HD maps by using Deep Learning to automatically annotate lane and road-edge features in camera images through harvesting sensor data from fleets of vehicles.

These approaches are limited by the accuracy of the Deep Learning algorithm, which can incorrectly label or altogether miss important road features. To construct a complete highway map, such approaches also require each lane to be pre-driven over multiple passes – usually 8 to 10 or more – to harvest sensor data.

Our HD maps require no manual or algorithmic annotation of lane lines and require no harvesting of data through pre-driving roads at all. The entire road model is constructed by the vehicle in real-time, live as it drives. Our HD maps additionally provide the host vehicle with information beyond the line-of-sight and range of its on-board sensors to anticipate curves, merges, ramps, gantries, and the complete road topography.

What sensors are needed to use your HD maps?

A low-cost monocular camera and an off-the-shelf standard GPS receiver are all that is needed for a vehicle to use our HD maps – no additional sensors are required.

What type of compute is required to run your system?

Our perception technology and HD mapping technology run in real-time across a range of commodity compute hardware commonly used in automotive safety applications.

Can you support multi-sensor fusion?

Our perception technology supports the full range of sensing modalities including Mono-camera, Stereo Vision, Multi-focal camera systems, surround vision systems, Infra-red camera, LIDAR, RADAR, and ultrasound.

Our perception technology comes with a guaranteed base-safety layer which supports validation at the component level. 
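
As an illustrative sketch only – not our fusion architecture – the fragment below shows one common way detections from the modalities listed above can be combined downstream: a simple position-based late fusion in the vehicle frame. All names and thresholds are assumptions for the example.

# Illustrative late-fusion sketch: merge per-sensor object detections into a
# single fused object list by position. Not the production architecture.
def fuse_detections(per_sensor_detections: dict, match_radius_m: float = 1.5) -> list:
    """Greedy association of detections from different sensors.

    per_sensor_detections maps a sensor name (e.g. "front_camera", "radar")
    to a list of detections, each a dict with an (x, y) "position" in a
    common vehicle frame.
    """
    fused = []
    for sensor, detections in per_sensor_detections.items():
        for det in detections:
            x, y = det["position"]
            for obj in fused:
                ox, oy = obj["position"]
                if (x - ox) ** 2 + (y - oy) ** 2 <= match_radius_m ** 2:
                    obj["sources"].add(sensor)   # same physical object seen again
                    break
            else:
                fused.append({"position": (x, y), "sources": {sensor}})
    return fused

An object reported by several sensors ends up with multiple entries in its "sources" set, which is one simple way redundancy across modalities can be made explicit.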

How do you detect all objects?

To detect all generic objects, we have developed a new type of perception technology that is based on deterministic mathematical models and processes 3D scene data using advanced geometric understanding. This allows us to reliably and safely handle scenarios where AI faces ‘Long Tail’ corner case limitations.

Our technology detects, segments, and tracks all objects in 3D space, determining their speed and motion trajectory. Additionally, lane-level occupancy and proximal objects are detected in a 360-degree view, depending on the sensor configuration. Once all objects have been detected, specific objects are categorised as belonging to classes of interest.
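
A class-agnostic detection sketch already appears under Scene Perception Technology above; to complement it, the hypothetical Python fragment below sketches the tracking step described here – following detected objects across frames to estimate speed and motion direction. It is illustrative only and not the production tracker.

# Illustrative tracking sketch: estimate each object's velocity from its 3D
# centroid in successive frames using nearest-neighbour association.
import numpy as np

class SimpleTracker:
    """Greedy nearest-neighbour association with finite-difference velocity."""

    def __init__(self, match_radius_m: float = 2.0):
        self.match_radius_m = match_radius_m
        self.tracks = {}      # track_id -> last 3D centroid
        self._next_id = 0

    def update(self, centroids: list, dt_s: float) -> dict:
        """Associate current-frame centroids to tracks; return velocities (m/s)."""
        velocities, new_tracks = {}, {}
        for c in map(np.asarray, centroids):
            # Find the closest existing track within the match radius.
            best_id, best_dist = None, self.match_radius_m
            for tid, prev in self.tracks.items():
                dist = float(np.linalg.norm(c - prev))
                if dist < best_dist:
                    best_id, best_dist = tid, dist
            if best_id is None:
                best_id = self._next_id          # new object: no velocity yet
                self._next_id += 1
                velocities[best_id] = np.zeros(3)
            else:
                velocities[best_id] = (c - self.tracks[best_id]) / dt_s
            new_tracks[best_id] = c
        self.tracks = new_tracks
        return velocities

In this simplified form, any cluster produced by the class-agnostic detection step can be tracked, so speed and trajectory are available even for objects that no classifier recognises.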

Get in touch

info@Propelmee.com