In our first blog post, we explored the limitations of physics-based models for camera perception. Though the industry has undoubtedly made strides in advancing the capabilities of automated systems, the complex structure of end-to-end deep learning models poses challenges from a functional safety perspective. Physics-based models, on the other hand, are interpretable, but lack the complexity needed to accurately predict human behaviour in all scenarios. It’s clear that automated systems are missing something when it comes to crossing prediction.
In February, we examined how physics-based models can be advanced by modelling the mind of the pedestrian. We provided evidence that our Behaviour Model prevents the incorrect or delayed predictions that physics-based models often fall victim to. By incorporating psychology into probabilistic machine learning models, Humanising Autonomy is able to mitigate the limitations of physics-based models while keeping the positive attributes of a white-box AI approach: interpretability, transparency, small model size and a trustworthy estimate of prediction uncertainty.
This week, we’re looking at our Behaviour Model in context, with real-world examples that highlight where Behaviour AI makes a tangible difference.
Abrupt stops confuse physics-based systems
Have you ever rushed to the crosswalk, only to stop at the last second as cars start rolling through the intersection? This pedestrian “screech-to-a-halt” confuses physics-based systems, and therefore the majority of automated vehicles on the road today. A pedestrian moving quickly towards the crosswalk is often incorrectly predicted as “will cross in front of the vehicle”. This behaviour is most often observed when visibility is reduced, as is the case around narrow sidewalks and corners.
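To make that failure mode concrete, here is a minimal sketch of the kind of constant-velocity extrapolation a physics-based predictor typically relies on. The function name, distances and two-second horizon are illustrative assumptions of ours, not Humanising Autonomy's implementation.

```python
# Toy sketch of a constant-velocity, physics-based crossing predictor.
# All names, numbers and thresholds are illustrative assumptions.

def predicts_crossing(distance_to_kerb_m: float,
                      speed_towards_road_mps: float,
                      horizon_s: float = 2.0) -> bool:
    """Extrapolate the pedestrian's motion at constant velocity and flag a
    crossing if they would reach the road edge within the horizon."""
    distance_covered_m = speed_towards_road_mps * horizon_s
    return distance_covered_m >= distance_to_kerb_m

# A pedestrian rushing towards the crosswalk at 2 m/s from 3 m away is
# flagged as "will cross in front of the vehicle" ...
print(predicts_crossing(distance_to_kerb_m=3.0, speed_towards_road_mps=2.0))  # True

# ... even if, in reality, they screech to a halt at the kerb. Pure
# extrapolation of motion cannot represent that intent, which is the gap
# a behaviour model is meant to fill.
```

The point is not the arithmetic but the assumption behind it: extrapolating observed motion alone has no way to account for a pedestrian's decision to stop.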