Building ethical AI models for safer automated systems
Paddy La Torre
January 28, 2021

As Dr. David Leslie astutely points out in his guidance for Understanding Artificial Intelligence Ethics and Safety: “Progress in AI will help humanity to confront some of its most urgent challenges… [but] as with any new and rapidly evolving technology, a steep learning curve means that mistakes and miscalculations will be made.” As a behavioural AI company, it is our responsibility, and the industry’s, to ensure this technology is used responsibly and promotes the safety and well-being of society.

We firmly believe that explainability, trust, accountability, and responsibility are the guiding principles for how we develop our Behavioural AI Platform. Ethical AI is difficult to achieve, but it is essential that people’s privacy, safety, and the responsible use of their data are considered when developing AI products. Here’s a glimpse into how Humanising Autonomy grapples with these challenges and builds algorithms that support the ethical, safe use of AI.

Why is ethical AI so difficult to achieve?
Teaching morality to machines has never been easy, and complex multi-system robotics makes it even more difficult. The future of several industries – automated mobility, manufacturing and smart cities among them – will rely on ethical AI to make morally sound decisions.

Whilst current AI products often have simple goals, the future looks to widen the scope, creating products that use multiple AI features simultaneously. These products will combine data from a much wider set of sources, for a number of different aims. Autonomous vehicles and smart cities, for example, will need to grapple with new, more abstract kinds of data, with the goals these combined systems should pursue, and with how to deploy them safely.

When considering ethics, safety and privacy, this data-fusion complexity brings new questions and challenges. AI systems need to be able to co-exist not only with other systems, but also with humans. Therefore, the communication between these systems is as important as the systems themselves. How does the overall control system handle conflicts between the different AI subsystems? Where does the ethical responsibility of such a system lie? These questions are especially important in products that use software and hardware from multiple sources.

Building Ethical AI Models
With these complications in mind, prioritising privacy, treating risk understanding as a safety-critical function for autonomous systems, and using data responsibly are core to our mission. Our horizontal, modular solution fuses behavioural psychology, statistical AI and novel deep learning algorithms to enable human-centric decision making for automated systems. We extract key aspects of human actions from video footage in real time, and these features are then used as the basis for high-level behaviour models. This protects the privacy of our subjects: the system does not divulge any personal information to the higher-level systems and communicates only the necessary attributes.
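To make that privacy boundary concrete, here is a minimal sketch of what such a module interface could look like. Everything in it is hypothetical: the PedestrianFeatures attributes, the stubbed extract_features function, and the values it returns are illustrative stand-ins, not our production models. The point is the interface: raw frames stay inside the extraction module, and only abstract behavioural attributes cross into higher-level decision making.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class PedestrianFeatures:
    """Only abstract behavioural attributes leave the extraction module.
    No raw pixels, faces, or other identifying data are included."""
    track_id: int             # anonymous per-session identifier
    position: tuple           # (x, y) in image coordinates
    velocity: tuple           # (dx, dy) per frame
    looking_at_vehicle: bool  # head-pose-derived attention cue
    intent_to_cross: float    # crossing probability in [0, 1]

def extract_features(frame: np.ndarray) -> List[PedestrianFeatures]:
    """Reduce each person detected in a video frame to the attributes
    above. The detection and behaviour models are stubbed out here; a
    real system would run them on-device so the frame itself is never
    transmitted upstream."""
    # Placeholder output standing in for real model inference.
    return [PedestrianFeatures(track_id=0, position=(412, 230),
                               velocity=(-3.1, 0.4),
                               looking_at_vehicle=False,
                               intent_to_cross=0.72)]

def downstream_decision(features: List[PedestrianFeatures]) -> str:
    """Higher-level planning sees only the feature structs, never pixels."""
    if any(f.intent_to_cross > 0.5 and not f.looking_at_vehicle
           for f in features):
        return "slow_down"
    return "proceed"

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in camera frame
print(downstream_decision(extract_features(frame)))  # -> "slow_down"
```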

By embedding this modularity within our systems, outputs can be broken down into their components, building explainability in from the start. This is part of a white-box, rather than black-box, approach: it gives individuals the ability to investigate the performance and inference of the system beyond its inputs and outputs. A white-box approach is also preferred when explaining the system to end users and interfacing systems. Our solution helps inform decision-making functions for autonomous vehicles, Advanced Driver Assistance Systems, and urban mobility systems.
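As a rough illustration of what a white-box output can look like, the sketch below (hypothetical component names and weights throughout, not our production logic) attaches every intermediate contribution to the final result, so a decision can be decomposed rather than treated as an opaque score.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Decision:
    risk_score: float
    components: Dict[str, float] = field(default_factory=dict)

    def explain(self) -> str:
        """Report the final score alongside every contribution to it."""
        parts = ", ".join(f"{name}={value:.2f}"
                          for name, value in self.components.items())
        return f"risk={self.risk_score:.2f} from [{parts}]"

# Illustrative weights: in practice each component is itself a module
# output that can be inspected and validated independently.
WEIGHTS = {"intent_to_cross": 0.5, "distraction": 0.3, "proximity": 0.2}

def assess_risk(module_outputs: Dict[str, float]) -> Decision:
    """Combine per-module behaviour scores into one risk value, keeping
    every intermediate contribution attached to the result."""
    components = {name: WEIGHTS[name] * score
                  for name, score in module_outputs.items()}
    return Decision(risk_score=sum(components.values()),
                    components=components)

decision = assess_risk({"intent_to_cross": 0.9,
                        "distraction": 0.6,
                        "proximity": 0.4})
print(decision.explain())
# -> risk=0.71 from [intent_to_cross=0.45, distraction=0.18, proximity=0.08]
```

Because each contribution is preserved, an unexpected decision can be traced back to the module that produced it rather than written off as model behaviour.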

We still have a long way to go before we see robots acting with true moral understanding, but the work of shaping safety and ethics standards starts now. With the introduction of 5G, new edge cases, and the rise of simulation-based validation and verification, there are opportunities to emphasise the need for products that are not only safe, but also ethical.

The challenge will be not to over-regulate the domain, but to align efforts and ensure that legal requirements remain technically achievable and proportionate, so that AI can benefit society in the long run. This will require harmonised, data-driven regulation, and it will require companies to be open about how they use data.

As the world continues to become more automated, ethical AI will reinforce human equality in our cities and urban mobility systems, and the industry can move society towards a more sustainable, ethically conscientious, and inclusive world.
