Navigating Success in ADAS: Essential Metrics for Computer Vision and AI
Smriti Garga
March 3, 2025

At Humanising Autonomy, we’re building an Advanced Driver Assistance System (ADAS) designed to make driving safer and smarter. Our technology uses artificial intelligence and computer vision to detect potential hazards on the road by interpreting and predicting pedestrian and vehicle behaviour to alert drivers when necessary. But how do we know if our system is actually doing its job?

In this article, we’ll break down how we measure the success of the Forward Collision Warning (FCW) feature of our ADAS system. FCW is a car safety system that alerts drivers when they are about to collide with another vehicle or pedestrian. We’ll explore the two main principles that guide us:

  • Alert When It Matters: We want to make sure our system tells the driver when there’s a real danger, and that the alert comes with enough time to react.
  • Don’t Annoy: We also want to avoid giving unnecessary alerts when there’s no real threat, keeping the driving experience smooth and stress-free.

Alert When It Matters: Ensuring Timely and Critical Interventions

One of the core objectives of our ADAS software is to provide alerts only when necessary. This means our system needs to detect events such as potential collisions and inform the driver in a timely and reliable manner. Here’s how we make sure that happens:

  • Event Detection Accuracy: We measure how often our system detects real dangers on the road, like a car suddenly stopping ahead or a pedestrian crossing unexpectedly. We need to understand how many events were detected correctly out of the total number of real events. In simple terms, the Recall metric measures how good our system is at catching a collision risk.
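As a quick sketch (the function name is illustrative, not from our codebase), Recall can be computed directly from the counts of correctly detected events (true positives) and missed events (false negatives):

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of real hazards that the system actually detected."""
    return true_positives / (true_positives + false_negatives)

# A system that caught 40 of 60 real events has a recall of about 0.667.
print(round(recall(40, 20), 3))  # 0.667
```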

Don’t Annoy: Minimising False Alarms

Equally important is ensuring that our system doesn’t overwhelm drivers with unnecessary alerts. Over-alerting can erode trust in the system, cause driver frustration, and distract from driving, which is why we focus on the following KPIs under the “Don’t Annoy” principle:

  • False Positive Rate (FPR):
    • How often are we giving an alert when no threat is present? The False Positive Rate measures the proportion of no-threat situations in which an alert was still triggered (each invalid alert is counted as a false positive). We know that drivers rely on our system to help them in critical moments—not to add distractions—so minimising false positives is part of our KPIs.
  • Precision:
    • How accurate are our alerts? When the system does send an alert, we want it to be correct. We refer to this metric as Precision, which measures how many of the total alerts provided were correct. It’s calculated as the ratio of True Positives (TP) to all alerts (both True Positives and False Positives), showing the accuracy of our alert system. A high precision score means the system is minimising incorrect alerts and focusing on real threats.
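Both “Don’t Annoy” metrics follow the same pattern and can be sketched from standard confusion-matrix counts (again, illustrative function names, assuming the usual definitions):

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Of all alerts raised, the fraction that were correct."""
    return true_positives / (true_positives + false_positives)

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """Of all no-threat situations, the fraction that still triggered an alert."""
    return false_positives / (false_positives + true_negatives)

# 40 correct alerts out of 50 total alerts gives a precision of 0.8.
print(precision(40, 10))  # 0.8
```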

Simulations and Real-World Datasets: Combining the Best of Both

To build a reliable system, we don’t just rely on theory—we test it in both real-world and virtual environments.

  • Real-World Data: We use actual driving data, collected from real drivers on real roads across different geographies, to make sure our system works in everyday situations.
  • Synthetic Data (Virtual Driving): We also create collision data using tools that build simulated environments. This enables us to evaluate the exact timing of our model predictions. This differs from real-world data, where we leverage near-miss events instead of actual collisions.

Precision and Recall: Striving for the Optimal Balance

At the heart of our ADAS system’s success is the ongoing challenge of balancing Precision and Recall.

  • Precision ensures that when the system triggers an alert, it is truly necessary—meaning the alert is correct and the threat is real.
  • Recall measures how well the system catches all potential threats. In other words, it tells us how often we detect an event when there actually is one.

However, simply optimising for one at the cost of the other isn’t enough. Focusing solely on Recall might ensure that we detect every possible event, but it could also lead to over-alerting (more false positives). On the other hand, optimising purely for Precision could minimise false positives but might miss some critical events.

Introducing the F1-score. The F1-score is a metric that combines both Precision and Recall into a single number, offering a balanced measure of a system’s overall effectiveness. It is calculated as the harmonic mean of Precision and Recall:

F1 = 2 × (Precision × Recall) / (Precision + Recall)

This means the F1-score helps us avoid focusing on one metric at the expense of the other. A perfect F1-score of 1 would indicate that our system is both highly precise (few false positives) and highly responsive (few missed events). 

As an example of this score, imagine an analysis of 100 situations with the following results:

  • True Positives (TP): 40 times, a system correctly alerted the driver to a real threat.
  • False Positives (FP): 10 times, a system incorrectly alerted the driver when there was no threat.
  • False Negatives (FN): 20 times, a system failed to alert the driver to a real threat.
  • True Negatives (TN): 30 times, a system correctly did not alert the driver when there was no threat.

We can represent this in the following confusion matrix:

                     Predicted Threat    Predicted No Threat
Actual Threat        40 (TP)             20 (FN)
Actual No Threat     10 (FP)             30 (TN)


Now we can calculate:

  • Precision: TP / (TP + FP) = 40 / (40 + 10) = 80%

(Of all alerts, 80% were correct)

  • Recall: TP / (TP + FN) = 40 / (40 + 20) = 66.7%

(Of all actual hazards, 66.7% were detected)

As discussed, optimising for one can hurt the other. This is where the F1-score comes in. It provides a balanced measure:

  • F1-score: 2 * (Precision * Recall) / (Precision + Recall) = 2 * (0.8 * 0.667) / (0.8 + 0.667) = 72.7%
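Putting the example together, a short sketch in plain Python (no libraries needed) reproduces the numbers above:

```python
# Counts from the example confusion matrix
tp, fp, fn, tn = 40, 10, 20, 30

precision = tp / (tp + fp)   # 40 / 50 = 0.8
recall = tp / (tp + fn)      # 40 / 60 ≈ 0.667
f1 = 2 * precision * recall / (precision + recall)

print(f"Precision: {precision:.1%}")  # 80.0%
print(f"Recall:    {recall:.1%}")     # 66.7%
print(f"F1-score:  {f1:.1%}")         # 72.7%
```

The harmonic mean pulls the F1-score toward the weaker of the two metrics, which is exactly why it punishes a system that trades one for the other.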

This example demonstrates how the F1-score balances the trade-off between Precision and Recall, giving a more holistic view of performance. This balance is critical for delivering an effective ADAS experience that gives drivers the right information at the right time—without unnecessary distractions.

Conclusion: Measuring Success through Metrics that Matter

At Humanising Autonomy, we believe the key to a successful ADAS system is a fine balance between alerting to genuine dangers and staying silent when no threat exists. Tracking key metrics like Precision and Recall, and minimising the False Positive Rate, ensures that drivers can trust our system to help when it’s truly needed, without interrupting them with unnecessary alerts. With a rigorous testing process that spans both synthetic simulations and real-world driving data, we’re working every day to pave the way for a future of safer and more enjoyable driving.

Stay tuned as we continue to push the boundaries of what’s possible in autonomous driving technology!

For more insights into our approach and progress, follow us for future updates and discussions around ADAS and AI-powered safety systems.
