Artificial Intelligence (AI) is once again making headlines following the release of OpenAI’s GPT-4 and its ilk. Boasting improved relevance of results and more convincing mimicry of human responses than its predecessors, ChatGPT reached over 100 million users in its first two months. Keen explorers quickly set out to test the system’s accuracy, experimenting with a wide range of uses such as writing articles, business strategies, travel itineraries, movie scripts, and conversations.
Although feedback on the quality of the resulting text is mixed – journalists have been quick to point out ChatGPT’s lack of depth and substance, whilst programmers have noted its impressive ability to produce fairly accurate code – an overwhelming concern has arisen that the technology is going too far, too fast, too soon, and without clarity on the damage it could do. OpenAI responded swiftly with GPT-4, an improved and self-proclaimed “safer” version; but tech leaders including Elon Musk remained skeptical and consequently called for a
“pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4 so that the risks it may pose can be properly studied.”
Historical Context
Many technological advancements over the last century have straddled a fine line between empowering humankind for good and serving as a tool in the breakdown of the population’s wellbeing and health. Cars, smartphones, and the internet have each gone through a phase of skepticism, scandal, and scrutiny before reaching general societal acceptance, with ongoing debate over their positive and negative impacts.
AI technology is in its early stages, and like the American Wild West, it still lacks the regulations, guidelines, and frameworks needed to keep its risks in check. Since AI can be so complex, especially when combined with machine learning and built from deep neural networks hidden within black boxes, it is especially difficult to know where to begin – and to determine who should have accountability and ownership of the consequences when things inevitably go wrong.
When we look at OpenAI’s original mission, the company states it is “to ensure that artificial general intelligence benefits all of humanity.” Earlier references described ChatGPT as a conversational AI to be used for online customer care. On paper, this sounds straightforward and, for the most part, fairly wholesome – so what happened between this original intent and the whirlwind of panic it has seemingly set off?
One of the challenges could lie in its attempt to be human-like rather than human-centric.