At Humanising Autonomy, we’re building an Advanced Driver Assistance System (ADAS) designed to make driving safer and smarter. Our technology uses artificial intelligence and computer vision to detect potential hazards on the road, interpreting and predicting pedestrian and vehicle behaviour so it can alert drivers when necessary. But how do we know if our system is actually doing its job?
In this article, we’ll break down how we measure the success of the Forward Collision Warning (FCW) feature of our ADAS. FCW is a car safety system that alerts drivers when they are about to collide with another vehicle or pedestrian. We’ll explore the two main principles that guide us:
- Alert When It Matters: We want to make sure our system tells the driver when there’s a real danger, and that the alert comes with enough time to react.
- Don’t Annoy: We also want to avoid giving unnecessary alerts when there’s no real threat, keeping the driving experience smooth and stress-free.
Alert When It Matters: Ensuring Timely and Critical Interventions
One of the core objectives of our ADAS software is to provide alerts only when necessary. This means our system needs to detect events such as potential collisions and inform the driver in a timely and reliable manner. Here’s how we make sure that happens:
- Event Detection Accuracy: We measure how often our system detects real dangers on the road, like a car suddenly stopping ahead or a pedestrian crossing unexpectedly. What we need to know is how many events were detected correctly out of the total number of real events. In simple terms, the Recall metric, the ratio of True Positives (TP) to all real events (TP plus False Negatives, FN), measures how good our system is at catching a collision risk (see the sketch below).
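
To make this concrete, here is a minimal sketch of the recall calculation. The function name, counts, and matching step are illustrative assumptions for this article, not our production evaluation code.

```python
# Illustrative sketch only: names and counts are hypothetical, not real results.

def recall(num_true_positives: int, num_false_negatives: int) -> float:
    """Recall = correctly detected events / all real events (TP / (TP + FN))."""
    total_real_events = num_true_positives + num_false_negatives
    return num_true_positives / total_real_events if total_real_events else 0.0

# Example: 45 real collision risks in a test set, 42 of them triggered an alert.
print(recall(num_true_positives=42, num_false_negatives=3))  # ~0.933
```
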
Don’t Annoy: Minimising False Alarms
Equally important is ensuring that our system doesn’t overwhelm drivers with unnecessary alerts. Over-alerting can erode trust in the system, cause driver frustration, and distract from driving, which is why we focus on the following KPIs under the “Don’t Annoy” principle:
- False Positive Rate (FPR):
- How often are we giving an alert when no threat is present? The False Positive Rate tracks how frequently the system triggers an alert when no genuine threat exists; each such invalid alert counts as a false positive. Drivers rely on our system to help them in critical moments, not to add distractions, so minimising false positives is a core part of our KPIs.
- Precision:
- How accurate are our alerts? When the system does send an alert, we want it to be correct. We refer to this metric as Precision, which measures how many of the alerts issued were correct. It’s calculated as the ratio of True Positives (TP) to all alerts (both True Positives and False Positives), showing the accuracy of our alert system. A high precision score means the system is minimising incorrect alerts and focusing on real threats. A minimal sketch follows this list.
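
As a simple illustration of the precision calculation, the sketch below uses invented counts; it is not our real evaluation pipeline or data.

```python
# Illustrative sketch only: the counts are made up for the example.

def precision(num_true_positives: int, num_false_positives: int) -> float:
    """Precision = correct alerts / all alerts issued (TP / (TP + FP))."""
    total_alerts = num_true_positives + num_false_positives
    return num_true_positives / total_alerts if total_alerts else 0.0

# Example: 42 correct alerts and 4 spurious alerts over a set of test drives.
print(precision(num_true_positives=42, num_false_positives=4))  # ~0.913
```
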
Simulations and Real-World Datasets: Combining the Best of Both
To build a reliable system, we don’t just rely on theory: we test it in both real-world and virtual environments.
- Real-World Data: We use actual driving data, collected from real drivers on real roads across different geographies, to make sure our system works in everyday situations.
- Synthetic Data (Virtual Driving): We also create collision data using tools that build simulated environments. Because the exact moment of collision is known in a simulation, this enables us to evaluate the precise timing of our model predictions, unlike real-world data, where we leverage near-miss data. A sketch of this kind of timing check follows below.
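
To illustrate how a simulated collision makes timing evaluation possible, here is a minimal sketch. The 2-second lead-time threshold, field names, and numbers are assumptions for the example, not our actual acceptance criteria.

```python
# Illustrative sketch only: the lead-time threshold and values are hypothetical.

def alert_is_timely(alert_time_s: float, collision_time_s: float,
                    min_lead_time_s: float = 2.0) -> bool:
    """An alert counts as timely if it fires at least `min_lead_time_s`
    seconds before the (simulated) moment of collision."""
    return (collision_time_s - alert_time_s) >= min_lead_time_s

# In simulation the collision time is known exactly, so alert timing can be scored.
print(alert_is_timely(alert_time_s=10.3, collision_time_s=13.0))  # True
print(alert_is_timely(alert_time_s=12.5, collision_time_s=13.0))  # False
```
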