Human-like AI versus Human-centric AI Technologies
Kim Vigilia
April 3, 2023
Photo by Toa Heftiba / Unsplash

Artificial Intelligence (AI) is once again making headlines following the release of OpenAI's GPT-4 and its ilk. Its predecessor, ChatGPT, reached over 100 million users in its first two months, and GPT-4 promises more relevant results and even better mimicry of human-like responses. Keen explorers quickly set out to test the accuracy of the system, experimenting with a wide range of purposes such as writing articles, business strategies, travel itineraries, movie scripts, and conversations.

Although feedback on the quality of the resulting text is mixed – journalists have been quick to point out ChatGPT's lack of depth and substance, whilst programmers have noted its impressive ability to produce fairly accurate code – an overwhelming concern has arisen that the technology is going too far, too fast, too soon, and without clarity on the damage it could do. OpenAI responded swiftly with GPT-4, an improved and self-described "safer" version; but tech leaders including Elon Musk remained sceptical and have since called for a "pause on the development and testing of AI technologies more powerful than OpenAI's language model GPT-4 so that the risks it may pose can be properly studied."

Historical Context

Many technological advancements over the last century have straddled a fine line between empowering humankind for good and serving as a tool in the breakdown of the population's wellbeing and health. Cars, smartphones, and the internet have each gone through a phase of scepticism, scandal, and scrutiny before reaching general societal acceptance, with ongoing debate over their positive and negative impacts.

AI technology is in its early stages, and like the American Wild West, it still lacks the regulations, guidelines, and frameworks needed to keep its risks in check. Since AI can be so complex – especially when combined with machine learning and built from deep neural networks hidden within black boxes – it is especially difficult to know where to begin, and to establish who should have accountability and ownership of the consequences when things inevitably go wrong.

When we look at OpenAI's original mission, they state it is "to ensure that artificial general intelligence benefits all of humanity." Earlier references to ChatGPT described it as a conversational AI to be used for online customer care. On paper, this sounds straightforward and, for the most part, fairly wholesome – so what happened between this original intent and the whirlwind of panic it has seemingly set off?

One of the challenges could lie in its attempt to be human-like versus being human-centric.

Photo by Daniel K Cheung / Unsplash

Mimicry versus Experience

In human-like technology, the focus is on generating a superficial interaction that looks, sounds, and seems human, but is ultimately still a series of automations and algorithms that cannot truly understand human context and intent. It is like a humanoid robot that looks and moves as a person would, but is not sentient and has limited capacity to learn deeper emotions such as empathy. Human-like AI technology is particularly dangerous because its mimicry of humanity confuses people and hides the fact that it lacks empathy and the ability to self-reflect – key things that make humans human. Its pseudo-humanness creates a false sense of security, and can shake a person's trust all the more when they realise that the AI cannot comprehend the wide range of direct, indirect, and grey-area consequences. When developers seek to create human-like products, the priority is the technology itself.

Human-centric technology prioritises the person using, or impacted by, the technology, and focuses on that person's experience. The technology itself doesn't have to look human or emulate human responses; rather, its decision-making processes, awareness of its own limitations, and built-in layers of protection against undesired outcomes – including unfair bias and incorrect triggers – make it more human-friendly. Within this framework, the AI is invisible: a tool for empowerment within a larger product.

Typical product requirements include ensuring creators know who the product is for, what it will do or empower, how it will do that, and how much it will cost to develop. Human-centric AI technology is developed with further considerations on top of these – ethics, privacy protection, and direct and indirect consequences and risks – all from the beginning. When companies seek to produce human-centric AI technology, the priority is people, and by extension, society and the world.

Examples of AI with human-centric, positive outcomes include Apple's Siri and its role in supporting elderly citizens, and Amazon's Alexa and how it reduced loneliness among ageing adults during Covid-19. Other human-centric AI technologies include the AI found in smartphones, in advanced driver assistance systems (ADAS) and car safety technology, and in medical imaging.

If more businesses, AI developers and regulators demand human-centric technologies from the beginning, we may face fewer surprises and instead be able to enjoy the true potential and benefits of AI.

Photo by vackground.com / Unsplash

Human-centricity Will Change Our Relationship with AI

If we are to continue using AI technologies, then people must find a way to form positive relationships with them. Consider our relationship with modern smartphones – and how the first iPhone went from a disruptor of the telecoms industry to a staple in today’s society in less than twenty years.

Today, for many in the digitally active population, we have reached the "can't live without it" stage. The relationship is built on dependency, customisation, and empowerment: with access to emails, messages, work, social connections, maps, and diaries all configured to our exact needs, preferences, and aesthetics, people may be quite literally lost without their phones. The smartphone's rise in popularity pushed telecom providers to reshape connectivity through 5G, making it possible to connect people around the world regardless of physical distance or barriers. Continued evolution and ever more intuitive capabilities strengthen our relationship with our phones, and much as in a real relationship, the more information they gather on your behaviours, preferences and patterns, the less you have to explain explicitly what you want or need. The AI in smartphones enables them to anticipate and pre-empt your requests, and to respond in ways that are most beneficial to you.

How can we translate more human-centric AI into other parts of our lives for a positive outcome and a better, more trusting relationship with AI?

People will trust AI more as they begin to understand how it can enable a better quality of life for them – and how the risks are minimised.

For instance, an experiential, smart-home AI that understands you're exhausted just from your body language and speed of movement – and responds by playing relaxing music and dimming the lights – might sound convenient. But if it means losing your privacy, there is little incentive to adopt the technology. Human-centric design would mean the AI can do all that it promises without tracking your identity or recording your exact actions; it can be designed to respond to concrete, observable behaviours instead.
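As a rough illustration of what "responding to observable behaviours instead of identity" could look like, consider the hypothetical sketch below. The names (MotionSnapshot, infer_ambience), features, and thresholds are our own illustrative assumptions, not any actual product implementation; the point is that the decision depends only on ephemeral, anonymised motion features, never on who the person is or on stored footage.

```python
from dataclasses import dataclass

@dataclass
class MotionSnapshot:
    """Ephemeral, anonymised features derived from one moment of video.

    No identity and no raw footage are kept -- only aggregate,
    observable behaviour (hypothetical features, for illustration).
    """
    average_speed: float  # metres per second across the room
    posture_slump: float  # 0.0 (upright) to 1.0 (fully slumped)

def infer_ambience(snapshot: MotionSnapshot) -> dict:
    """Map observable behaviour to an ambience; the snapshot is then discarded."""
    exhausted = snapshot.average_speed < 0.3 and snapshot.posture_slump > 0.6
    if exhausted:
        return {"music": "relaxing", "lights": "dimmed"}
    return {"music": "unchanged", "lights": "unchanged"}

# A slow, slumped walk through the hallway triggers a calmer room,
# without the system ever knowing (or needing to know) who walked by.
print(infer_ambience(MotionSnapshot(average_speed=0.2, posture_slump=0.8)))
# {'music': 'relaxing', 'lights': 'dimmed'}
```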

People may also better support AI used in their cities and neighbourhoods if they saw its direct impact on traffic flow optimisation and safer travel experiences, without the repercussions of surveillance and unfair profiling. If traffic congestion decreased and more crashes were prevented thanks to AI, citizens' physical safety alone would improve; but if the AI performed all of those optimisation tasks while also guarding against unfair bias, the quality of life for everyone in the city would improve.
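In the same spirit, the toy sketch below shows one way such a guard can be structural rather than bolted on: if a traffic-signal controller's only inputs are aggregate vehicle counts, person-level attributes never enter the decision at all. The function name and the fixed 60-second cycle are illustrative assumptions, not a description of any real deployment.

```python
def green_seconds(vehicles_waiting: dict[str, int], cycle: int = 60) -> dict[str, int]:
    """Split a fixed signal cycle across approaches by vehicle count alone.

    Inputs are anonymous, aggregate counts, so the controller cannot
    profile individuals even in principle (illustrative design only).
    """
    total = sum(vehicles_waiting.values()) or 1  # avoid division by zero
    return {approach: round(cycle * count / total)
            for approach, count in vehicles_waiting.items()}

# Busier approaches simply get proportionally more green time.
print(green_seconds({"north": 12, "south": 4, "east": 8}))
# {'north': 30, 'south': 10, 'east': 20}
```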

There are endless ways to make AI technologies more human-centric. Whatever purpose developers intend for an AI, the most important thing is that they consider how to make the technology human-centric from the beginning of design, before production begins – and how to enhance that over time.

How human-centric an AI technology is can be the driving force behind how we build a long-lasting relationship with it.

 

Read the full Pulse Labs article here

About Humanising Autonomy: Humanising Autonomy helps make automation human-centric. Its computer-vision software goes beyond basic detection: it can analyse live or historical video footage to quickly classify, interpret and predict human behaviour. We're teaching machines to better understand people, and using this layer of human context to help companies develop next-generation products and services, create safer environments for people, and elevate the customer experience. Visit www.humanisingautonomy.com or email Kim at [email protected].

Pulse Labs (https://pulselabs.ai/) is a pioneering business insights company that specialises in turning human factors analyses and behavioural information into actionable product success metrics for our customers. Our proprietary data processes mean speed and accuracy. Our Power Portal™ ensures that decision-makers have quick and ongoing access to results to increase their competitiveness. Our customers include innovators such as leading technology platforms, top brands and government agencies. For more information, visit https://pulselabs.ai or contact us at [email protected].
