Artificial intelligence is a hot topic, with barely a day passing without a mention of it in the news. Governments, business leaders, environmental groups and leading technology experts are all weighing in on the subject, and there’s a heightened awareness around the potential risks as well as the opportunities associated with AI.
A positive tool for businesses
For businesses, AI has represented something of a gamechanger over the past few years. The contact centre has been all but revolutionised, as businesses have turned to cloud-based solutions with integrated AI capabilities to facilitate omnichannel engagement, automate tasks and speed contact resolutions.
Machine learning, advanced analytics and deep data insights are leading to huge productivity gains, supporting businesses to innovate, speeding time to market with new products and services, and helping them deliver vastly improved customer and employee experiences. For these businesses, AI has been both revelatory and revolutionary.
ChatGPT and the democratisation of AI
The discussion around AI has opened up in large part due to the emergence and widespread uptake of ChatGPT, a generative AI chatbot. Following its release by OpenAI in November 2022, the platform has become a feature in almost every facet of life, from business and art to law and education.
It is this lightning-speed democratisation of AI that is making people – including the model’s developers – nervous. Governments are increasingly anxious about unfettered and unregulated access to such a powerful large language model. Environmental groups are concerned about the climate impact of the energy-intensive tech needed to run the platform. And there are legal questions around who can, or should, be held accountable when incorrect data is generated then circulated or enacted.
AI panel discussion at DTX 2023
The future of AI, and in particular of ChatGPT, formed the basis of a panel discussion that took place at this year’s DTX as part of Manchester Tech Week 2023.
The discussion, entitled ‘What is the next evolution of AI and ML (and why it probably won’t be ChatGPT-5),’ was a wide-ranging look at the uses, impact and dangers of AI, and involved experts from a range of industries.
Panellists Tom Liptrot (Data Science Consultant at Ortom), Leanne Fitzpatrick (Director of Data Science at The Financial Times), Andy Griffiths (Senior Director of Data and Analytics at General Electric) and Robin Lester (Senior Cloud Solutions Architect for Microsoft) all spoke passionately about the business benefits of AI, and all agreed that regulation was a natural – and critical – next step.
AI as an efficiency-driving co-pilot
“AI is going to be an efficiency that helps people carry out their roles,” Leanne Fitzpatrick said, challenging the idea that AI will replace human beings in the workforce. “It’ll be like everybody’s got a personal assistant in their pocket and is saving loads of time on mundane things like organising calendars.”
“It’s not going to be taking people’s jobs,” Andy Griffiths added. “But it can definitely help us do better work, faster.”
Microsoft’s Robin Lester said AI was best used as a co-pilot, to support people in their roles rather than perform those roles for them. “It’s perfect for things like finding information, summarising really long documents, or comparing two documents,” he said. “Or in call centres, using AI for voice-to-text can make it easier to find relevant information, and ultimately to support better customer conversations.”
As Leanne went on to point out, the trick for businesses will be working out how to use AI to enhance what they already do, deploying it as a support tool that genuinely benefits the business.
AI needs human input and human oversight
For many businesses, leveraging AI to streamline resources, boost efficiency and improve productivity has become almost mainstream with the advent of cloud solutions like CCaaS. Chatbots are being used to answer customer queries, NLP models are being used to optimise agent workflows, and agent assist tools are co-piloting conversations to help drive first-contact resolution, every time.
When it comes to generative large language models like ChatGPT, however, a little more thought needs to be given to the output.
“In regulated industries like finance, you need to be able to explain your output,” said Microsoft’s Robin Lester. “You couldn’t just turn around to a customer and say, ‘We can’t approve you for a bank account because ChatGPT said we can’t.’ You need to be able to justify and track back through the decision-making, and that needs to be done by a human.”
AI and accountability
The ramifications of trusting generative AI’s output without question have already begun hitting the headlines. In May of this year, one lawyer in America was caught out when he relied on ChatGPT to prepare a court filing for him, asking the AI chatbot to locate cases that demonstrated a legal precedent. The AI returned some relevant cases, and the lawyer included these in his court brief. The problem was, these cases didn’t exist. The lawyer now faces a hearing of his own.
“There’s definitely a question around accountability,” says GE’s Andy Griffiths.
“How do you deal with it when AI makes a mistake? Who do you sue? Who has the accountability here? Is it the people who built the tech? The people who inputted the prompt? Or is it the people who publish or implement what the AI returns?”
Responsible businesses are already self-regulating
Leanne Fitzpatrick says that, while generative AI can be a useful tool for all businesses, responsible organisations should be taking proactive steps to prevent these sorts of incidents from occurring.
“At the Financial Times, we’re actually working on guidelines for the use of generative AI. We’re a news organisation, so being trustworthy is essential – human accountability will always be necessary for us to maintain that trust. We all know that AI can return false, incorrect, out of date and misleading information, and critical thinking is vital.”
Microsoft is also taking steps to protect users and the wider public, building parameters into its AI-supported platforms. “Microsoft has always been determined to be an ethical tech company, and we’re already asking our partners to sign agreements that commit them to only using AI ethically,” explains Robin Lester. “We have built flags into Azure that let us know if AI is being misused in any way.”
Regulation needs to be contextual
One of the primary concerns around AI generally is the lack of insight into the data being fed into the models. Currently, tools are open to being built with biases, including around cultural, racial, ableist and political matters.
“We definitely need regulation,” says Andy. “But regulation won’t be about banning the tech, it’ll be around how it gets used. We need to see regulation around the training of models, because they need to be trustworthy.”
Regulation also needs to be contextual, not only across industries, but across countries.
Regulation will be different around the world. What works in the UK will differ from what’s wanted in America or Europe, so governments will likely need to form their own regulations.
That said, there’s likely to be some form of international agreement around the use and regulation of AI, too. UK Prime Minister Rishi Sunak is already planning for this, and has spearheaded a ‘global summit’ on the regulation of AI, set to take place in the autumn.
Preparing to host the talks with other G7 leaders, Sunak is reported in The FT as saying: “Historically the UK has got it right when we are trying to balance innovation with making sure the new technology is safe for society.”
The future looks bright for businesses
For businesses looking to use AI simply as a productivity-enhancing instrument, the future looks bright. New tools from private providers are emerging all the time to give businesses the edge when it comes to speeding up processes, freeing up resources and redirecting budgets and human ingenuity to more strategic pursuits.
Contextual regulation that preserves the creativity of the technology will only serve to strengthen the use-cases for AI, leading to more ethical practices, more reliable outputs, and better oversight of the models being used.
To discuss how you can harness the best of AI in your business, get in touch with our team today. We’d be happy to help you find a contact centre solution that works with your team to drive efficiencies, and support better omnichannel customer experience, 24/7.