Whether you are creating or customizing an AI policy or reassessing how your company approaches trust, keeping customers’ confidence can be increasingly difficult with generative AI’s unpredictability in the picture. We spoke to Deloitte’s Michael Bondar, principal and enterprise trust leader, and Shardul Vikram, chief technology officer and head of data and AI at SAP Industries and CX, about how enterprises can maintain trust in the age of AI.
Organizations benefit from trust
First, Bondar said each organization needs to define trust as it applies to their specific needs and customers. Deloitte offers tools to do this, such as the “trust domain” system found in some of Deloitte’s downloadable frameworks.
Organizations want to be trusted by their customers, but people involved in discussions of trust often hesitate when asked exactly what trust means, he said. Companies that are trusted show stronger financial results, better stock performance and increased customer loyalty, Deloitte found.
“And we’ve seen that nearly 80% of employees feel motivated to work for a trusted employer,” Bondar said.
Vikram defined trust as believing the organization will act in the customers’ best interests.
When thinking about trust, customers will ask themselves, “What is the uptime of those services?” Vikram said. “Are those services secure? Can I trust that particular partner with keeping my data secure, ensuring that it’s compliant with local and global regulations?”
Deloitte found that trust “begins with a combination of competence and intent, which is the organization is capable and reliable to deliver upon its promises,” Bondar said. “But also the rationale, the motivation, the why behind those actions is aligned with the values (and) expectations of the various stakeholders, and the humanity and transparency are embedded in those actions.”
Why might organizations struggle to improve on trust? Bondar attributed it to “geopolitical unrest,” “socio-economic pressures” and “apprehension” around new technologies.
Generative AI can erode trust if customers aren’t informed about its use
Generative AI is top of mind when it comes to new technologies. If you’re going to use generative AI, it must be robust and reliable so it doesn’t erode trust, Bondar pointed out.
“Privacy is key,” he said. “Consumer privacy must be respected, and customer data must be used within, and only within, its intended purpose.”
That applies to every step of AI use, from the initial data gathering for training large language models to letting consumers opt out of their data being used by AI at all.
In fact, training generative AI and seeing where it messes up could be a good time to remove outdated or irrelevant data, Vikram said.
He suggested the following methods for maintaining trust with customers while adopting AI:
- Provide training for employees on how to use AI safely. Focus on war-gaming exercises and media literacy. Keep in mind your own organization’s notions of data trustworthiness.
- Seek data consent and/or IP compliance when developing or working with a generative AI model.
- Watermark AI content and train employees to recognize AI metadata when possible.
- Provide a full view of your AI models and capabilities, being transparent about the ways you use AI.
- Create a trust center. A trust center is a “digital-visual connective layer between an organization and its customers where you’re teaching, (and) you’re sharing the latest threats, latest practices (and) latest use cases that are coming about that we have seen work wonders when done the right way,” Bondar said.
CRM companies are likely already following regulations — such as the California Privacy Rights Act, the European Union’s General Data Protection Regulation and the SEC’s cyber disclosure rules — that may also have an impact on how they use customer data and AI.
How SAP builds trust in generative AI products
“At SAP, we have our DevOps team, the infrastructure teams, the security team, the compliance team embedded deep within each and every product team,” Vikram said. “This ensures that every time we make a product decision, every time we make an architectural decision, we think of trust as something from day one and not an afterthought.”
SAP operationalizes trust by creating these connections between teams, as well as by creating and following the company’s ethics policy.
“We have a policy that we cannot actually ship anything unless it’s approved by the ethics committee,” Vikram said. “It’s approved by the quality gates… It’s approved by the security counterparts. So this actually then adds a layer of process on top of operational things, and both of them coming together actually helps us operationalize trust or enforce trust.”
When SAP rolls out its own generative AI products, those same policies apply.
SAP has rolled out several generative AI products, including CX AI Toolkit for CRM, which can write and rewrite content, automate some tasks and analyze enterprise data. CX AI Toolkit will always show its sources when you ask it for information, Vikram said; this is one of the ways SAP is trying to gain trust with its customers who use AI products.
How to build generative AI into your organization in a trustworthy way
Broadly, companies need to build trustworthiness into the KPIs they use to measure generative AI.
“With AI in the picture, and especially with generative AI, there are additional KPIs or metrics that customers are looking for, which is like: How do we build trust and transparency and auditability into the results that we get back from the generative AI system?” Vikram said. “The systems, by default or by definition, are non-deterministic to a high fidelity.
“And now, in order to use those particular capabilities in my enterprise applications, in my revenue centers, I need to have the basic level of trust. At least, what are we doing to minimize hallucinations or to bring the right insights?”
C-suite decision-makers are eager to try out AI, Vikram said, but they want to start with only a few specific use cases at a time. The speed at which new AI products are coming out may clash with this desire for a measured approach. Concerns about hallucinations or poor-quality content are common; generative AI tools used for legal tasks, for example, make “pervasive” mistakes.
But organizations want to try AI, Vikram said. “I have been building AI applications for the past 15 years, and it was never this. There was never this increasing appetite, and not just an appetite to know more but to do more with it.”