The evolution of chatbots: from basic responses to advanced human-like conversations
Michael McTear
Ulster University
Conversational AI has been gaining a lot of attention in the public domain within the past
few months due largely to the release to the general public of OpenAI’s chatbot ChatGPT.
With ChatGPT people can engage in advanced human-like conversations, asking questions
or issuing instructions and receiving a detailed response. More than one million users
enrolled to use ChatGPT within just five days of its launch, setting records for its rapid
adoption.
Chatbots have come a long way since ELIZA, the first chatbot, which was created in the 1960s
as an experiment to explore how conversations between humans and machines could be
simulated. Early chatbots were simple, rule-based systems that responded to specific
keywords or phrases using pattern-matching. At the same time, dialogue systems were being
developed in academic and industrial research laboratories that explored the technologies
of spoken interactions between humans and machines, particularly how dialogues are
managed. These research systems drew extensively on theoretical work in linguistics and
artificial intelligence. By the early 1990s voice user interfaces were being deployed
commercially in application areas such as automated customer self-service. Here the focus
was on more practical issues such as design guidelines, usability, and standards.
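The rule-based, pattern-matching approach of these early chatbots can be sketched in a few lines of Python. The rules and responses below are purely illustrative, not ELIZA's actual script: each rule pairs a keyword pattern with a response template, and the first matching pattern determines the reply.

```python
# Minimal sketch of an ELIZA-style, rule-based chatbot.
# The patterns and templates are illustrative examples, not ELIZA's real rules.
import re

RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    # Try each pattern in turn; the first match fills its response template.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    # No keyword matched: fall back to a generic continuation prompt.
    return DEFAULT
```

For example, `respond("I am sad")` returns "How long have you been sad?", while an input matching no rule falls through to the default. The brittleness of this scheme, where anything outside the rule set gets a canned reply, is precisely what later machine-learning approaches set out to overcome.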
While much of this earlier work is still relevant for the new generation of chatbots, rule-
based approaches have by and large been replaced by more advanced methods based on
machine learning and neural networks. One of the most significant developments is the use
of large language models (LLMs). These models use deep learning techniques to analyse and
understand human language, allowing the chatbots to provide more accurate and more
relevant responses. LLMs are trained on vast amounts of data and can learn to generate text
that is almost indistinguishable from human writing. As a result, they can handle a wide
range of tasks, from answering basic queries to carrying out complex conversations.
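The core statistical idea behind generating text from training data can be illustrated at toy scale. The sketch below is only a hedged analogy: it counts which word follows which in a tiny corpus and continues a prompt with the most frequent next word, whereas real LLMs use deep neural networks trained on vast corpora.

```python
# Toy next-word prediction: a bigram counter as a miniature analogy for
# how language models learn to continue text from training data.
# This is an illustration, not how LLMs are actually implemented.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    # Count, for each word, which words follow it in the training text.
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model: dict, start: str, length: int = 5) -> str:
    # Greedily continue with the most frequent next word at each step.
    output = [start]
    for _ in range(length - 1):
        followers = model.get(output[-1])
        if not followers:
            break  # the word never appeared mid-corpus, so stop
        output.append(followers.most_common(1)[0][0])
    return " ".join(output)
```

Trained on "the cat sat on the mat the cat ran", the model learns that "cat" follows "the" most often, so `generate(model, "the", 4)` produces "the cat sat on". Scaling this idea up, from word counts to billions of neural-network parameters and from one sentence to much of the web, is what allows LLMs to produce text that is almost indistinguishable from human writing.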
Chatbots such as ChatGPT have the potential to impact technology, society, and business in
a significant way. However, they have also been seen as a threat to the business models of
some large tech companies. In response, some companies are integrating ChatGPT into their
own products and services, while others are investing heavily in artificial intelligence
research and development to meet the challenges posed by ChatGPT and similar systems.
There are, however, some downsides to how advanced chatbots using LLMs might be used. For
example, they could be used to generate fake news or plagiarised content. LLMs may also
produce content containing bias or prejudice against certain groups or individuals as a
result of the data on which they were trained. Hallucination is another problem, whereby
the models generate content that is factually inaccurate and not grounded in reality.
Addressing these problems is an active area of research, and efforts are being made to
develop techniques that minimise the impact of bias and hallucination in the content
generated by the new generation of chatbots.