At a time when public trust in institutions is waning, attendees at a recent roundtable in Australia have heard that organisations and governments that deploy artificial intelligence responsibly, including by ensuring the ethical governance of data, will gain an advantage.
Hosted by data and AI solutions company SAS, in partnership with TechnologyAdvice, the roundtable brought together senior executives from Australia’s telecommunications, public and financial services sectors to grapple with the ethical challenges and opportunities of AI.
Featured guest speaker Toby Walsh, Professor of AI at UNSW, said enterprises would soon be key beneficiaries of some of the most accurate and useful AI tools. He argued that taking an ethical approach to AI was likely to become a ‘competitive differentiator’ for organisations.
“It’s a powerful technology, and we have to think carefully about how we use it. It has the potential to be used in ways that won’t be positive; I suspect it’s going to become a corporate distinguisher whether you deploy these technologies in ways that align with public expectations,” Walsh said.
Below are four of the most critical takeaways for enterprises interested in implementing AI.
1. Enterprises are exploring a variety of use cases for AI
Roundtable participants said their organisations are all pursuing AI projects, ranging from pilots to more mature tools already in use. Target outcomes include improving productivity, customer value and customer experience, and boosting revenue.
Examples range from chatbots that help customers diagnose phone and internet connectivity problems to AI models trained on up-to-date internal data, which can generate faster customer responses or draw on customer behaviour to recommend and upsell additional solutions.
Others are seeking to use AI to accelerate the training of customer service agents, who need to understand a range of complex products very quickly, or are using AI models to support and augment the provision of advice to customers in the financial sector.
Ethical considerations at the centre of AI experimentation
Attendees were aware of the need to introduce AI responsibly. For instance, some are working to improve data governance in complex organisations, while others are weighing the wisdom of using AI to replace a human touch in certain interactions or decisions.
Cyber security was singled out as a typical use case for AI. Even there, a tension exists between opening up more attack surfaces by exploring AI use cases on the one hand, and using AI to strengthen cyber defences against threats on the other.
2. Australian AI regulation likely to follow risk-based approach
Australia is likely to take a risk-based approach to regulating AI, roundtable attendees heard. If so, this would follow a similar approach to that of Europe, which earlier in 2024 passed the EU Artificial Intelligence Act to introduce new safeguards for the use of AI.
Roundtable attendees heard that the introduction of AI was in most respects similar to that of earlier technologies like the internet or smartphones, which required businesses to adapt and raised questions around equity, fairness and transparency.
The main caveat is AI’s unique ability to make autonomous decisions. It was suggested that this new power needs to be balanced with adequate human oversight, and organisations implementing AI were urged to consult existing ethics resources like Australia’s AI Ethics Principles.
3. Fine-tuned AI models will bring the most benefit to enterprises
Enterprises will be in a prime position to capitalise on AI as it becomes better understood and refined. Roundtable attendees heard that AI models trained on more specific data sets will be less error-prone and more useful in particular domains than general-purpose chatbots like ChatGPT.
This is because the utility of a large language model (LLM) depends on the data it is trained on. LLMs trained on masses of information to answer any question are, almost mathematically, going to provide an average answer rather than the best answer in every case, attendees heard.
One global example discussed was BloombergGPT, a custom finance-focused large language model developed by Bloomberg. Similar tools, fine-tuned with quality data from niche domains, could become more reliable specialists in areas like telecoms, finance or legal.
Data quality the key to successful, fine-tuned AI models
Data quality will be the key to quality AI. Attendees heard that, while data quality in data science has historically been about basic quantitative checks, such as catching missing or erroneous values, a future challenge will be testing the quality of free-form text data.
While AI tools are getting smarter, SAS suggested that enterprise users of AI will need to come up with new ways to curate the data that goes into AI models, particularly unstructured data, to ensure that the model and its outputs are aligned with the intentions of the organisation.
4. Trust to become a competitive differentiator for organisations
AI will bring significant benefits to society and organisations. Attendees heard about AI’s power to trawl through vast amounts of data to establish patterns in ways humans cannot, which could help quantify a person’s risk of certain types of cancer from birth via their genetic code.
However, the roundtable discussed a growing lack of trust in public institutions and government, and the way truth was becoming an arbitrary idea. There was concern that AI could supercharge this trend through the spread of misinformation during election cycles.
The widespread uptake of AI will depend on responsible deployment
Trustworthiness, including through the deployment of AI, could emerge as a significant business differentiator into the future, the roundtable concluded. According to SAS, building trust could be one of the greatest opportunities available to private and public sector institutions.
Attendees heard that, if society reaches a point where customers are very familiar with and trusting of AI because it has been implemented in a responsible, transparent, and ethical way, then enterprises will be able to take advantage of its benefits almost everywhere.
However, this could be held back if AI’s trustworthiness is in question. In this scenario, the practical use of AI may not reach its potential, because there will be more administrative and governance burdens placed on the technology to ensure that it is trustworthy.