The rise of artificial intelligence has led to improved road safety, expanded preventative healthcare options and a genuine decrease in levels of air pollution. All of this, however, is contingent on one important detail: the human factor. AI cannot be entrusted with the big things in life unless it has something as close as possible to a human understanding of the world and the people in it.
Ethical, human-centric behavioural AI – modelled on human behaviour, built on robust datasets, and able to inform decisions that affect real people in real time – is key to unlocking a better, tech-enabled future. But, common sense as it sounds, designing “people-first” algorithms and models and integrating human context are still not standard practice for many developers.
There’s a long history of failed human/AI interactions in which human nuance would have made all the difference. In workplace settings, we’ve seen recruitment AI tools reinforce sexist and racist bias by learning from historically biased human-resource records, down-ranking the very candidates the technology was designed to help. Machine-learning chatbots have had to be pulled from Twitter after picking up an offensive tone and posting abusive tweets. Facial-recognition software has repeatedly been called out for algorithmic bias against people with different skin tones and facial structures – in the recent case of Uber drivers, this facial-biometric technology has put livelihoods at risk. In industrial environments, technology failure can even cost lives: it is reported that, on average, one person a year is killed by an industrial robot in the United States.
The ubiquity of automation explains why it has been easy to slip into complacency about designing human-centric technology. Our supermarket shopping is weighed by self-checkout machines and our homes are serviced by robotic vacuum cleaners. Yet both are fallible: the checkout bot will warn you of unexpected items in the bagging area when there aren’t any, and the vacuum will get stuck in a corner of the room until its battery drains.
In all the above scenarios, the technology was launched with a key weakness: an inability to make human-like inferences in real-world situations. An AI that truly understands human behaviour requires relevant data on which to base its decision-making engines – data that is inclusive, balanced, concrete and grounded in physical evidence.
Meanwhile, the advent of machine learning, computer vision and natural language processing has seen passive robots go from communicating with each other to actively speaking to us, gaining ever more access to our homes, workplaces and healthcare settings. Self-driving vehicles are clocking up millions of miles around the world, and robot surgeons represent a market worth more than $4 billion and growing. More than ever, it is essential that these systems understand the context of our answers.
Technology companies must be willing to be open and prove their AI has the human understanding and data to operate safely. Professor Sir Nigel Shadbolt, Head of Human Centred AI at the Institute for Ethics in AI, has argued that openness, not secrecy, is what sustains trust, noting: “Curiously, the key to retaining trust and high levels of individual privacy is not secrecy, but transparency.”
The payoff for that openness is substantial – and it’s building momentum. Tech giants are addressing it in different ways: Microsoft has shared how it is adding safeguards for the responsible use of its facial-recognition technology, and Amazon recently launched a $1 billion Industrial Innovation Fund with a focus on safety and robotics – a signal of just how much good AI is worth. We hope to see more.
The intent, skill and morality of the people building and deploying technology are key to how much AI can achieve and how far its impact can reach. With the human experience of the world at stake, we must focus on the quality of that outcome – and not just the quantity it promises to deliver.