What consultants need to know about navigating data security in the age of AI

06 October 2024 Consultancy.com.au

In a conversation with Mike Mayer, Chief Revenue Officer at Appian, we delved into the most important considerations surrounding AI implementation and data security. Mike shared his insights on what consultants should prioritise as businesses increasingly adopt AI technologies.

What are the keys to a successful AI implementation?

When we think about AI implementation, there are really three key elements to consider. First and foremost, consultants need to know what the customer outcomes need to be – getting clarity on what we're trying to achieve is crucial.

Secondly, it's important to select the right technology. AI is well suited to a lot of problems, but other automation and orchestration technologies are a better fit in some situations. There are still plenty of cases where you want a really clear answer – a deterministic pathway – and it's not always AI that's going to give you that.

Lastly, we need to give great consideration to the data that we use to train the AI model. Data quality heavily influences the results that are obtained from AI. Without a sound foundation of accurate data, AI models are prone to hallucinations, unfounded predictions, and misleading outputs.

What is public AI and why are Australian organisations using it?

Public AI refers to any kind of publicly available AI algorithm. These are usually trained on a wide set of data that's pulled from users of the system or customers, or even just data on the internet. ChatGPT is a perfect example of public AI – it was trained on publicly available data across different websites, text, articles, and images.

We are seeing Australian organisations choose public AI because there's a lot of pressure right now to do more with fewer resources, and to do it urgently. There's a great deal of hype around AI, and organisations feel that if they don't apply it rapidly they risk falling behind, so they are turning to the most accessible option: public AI models.

What should consultants know about the data security challenges with public AI?

There’s one crucial point here – all public AI providers share one glaring weakness, and it’s data privacy. Providing proprietary or sensitive data to these providers is a risky proposition. It's not uncommon for AI platforms to use customer data to train their own algorithms.

Even for businesses that have strong use cases for AI and are looking to deploy quickly, I don't think any right-minded business leader wants their company to upload its own data to train a public AI algorithm. Large public AI providers build their business models on gaining access to data – that's how they continue to train their algorithms, and that's how they continue to improve them.

Making matters worse, their algorithms are shared by all of their customers, so an organisation's own data could be helping its competition.

Which types of organisations or industries should avoid public AI?

For organisations that are protecting customer, citizen, patient, or worker data, the lack of privacy makes public AI providers a non-starter. Organisations in industries subject to strict data privacy compliance laws – healthcare, financial services, insurance, the public sector – should be particularly wary of it. There's also a real potential for a data leak, and a lot of uncertainty comes with sharing data when there aren't clear parameters for how it will be used.

What are the alternatives to public AI?

Given the data security concerns facing Australian organisations, businesses might look towards building models with an in-house team. We're starting to see organisations hiring AI experts and building AI models in-house, which can deliver some of the highest levels of privacy and security. But there are long-term costs to maintaining the model and the underlying infrastructure that comes with it.

Instead, I think organisations should explore working with vendors who can provide those private AI solutions.

What do consultants need to understand about private AI? Why should they be advising organisations to adopt this?

I would argue that consultants should absolutely be advising organisations to adopt private AI models. At the moment, we don't know where AI regulation is going to go, so doing everything you can to protect your data – and customer data in particular – will become increasingly important. We have an obligation to be good stewards and protect the data we hold, and I think private AI puts us in the best position to do that.

Beyond privacy, private AI models are simply more practical. Each organisation has a unique set of customers, products, services, and needs, so with private AI they can tailor the algorithms to their own business rather than relying on a more generic, publicly available model.

Can you talk through any Appian AI customer use case examples?

A great example is our recent work with Netwealth, one of Australia's fastest-growing wealth management businesses. They've implemented a system called ‘NetFlow’ using Appian’s AI Process Platform to modernise their client and advisor case management.

NetFlow leverages our data fabric, automation technologies, and private AI architecture to streamline operations. A key improvement was in handling case creation failure messages. Previously, employees manually processed each email, identifying issues and redirecting them to appropriate teams. Now, our email classification AI Skill automatically routes these messages, achieving 98% accuracy almost immediately after implementation.
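
For consultants picturing how this kind of classify-and-route automation hangs together, here is a minimal, purely illustrative sketch. It is not Appian's or Netwealth's actual implementation: the failure categories, team queues, and the keyword-based classifier (standing in for the AI classification step) are all hypothetical.

```python
# Illustrative sketch of routing case creation failure emails to team queues.
# The classifier below is a keyword-based stand-in for an AI classification
# step; categories and team names are hypothetical.

from dataclasses import dataclass


@dataclass
class Email:
    subject: str
    body: str


# Hypothetical mapping from failure category to the team queue that handles it.
ROUTING_TABLE = {
    "missing_document": "client-onboarding-team",
    "data_mismatch": "data-quality-team",
    "system_error": "platform-support-team",
}


def classify_failure(email: Email) -> str:
    """Return a failure category (stand-in for an AI classification step)."""
    text = f"{email.subject} {email.body}".lower()
    if "document" in text or "attachment" in text:
        return "missing_document"
    if "mismatch" in text or "invalid" in text:
        return "data_mismatch"
    return "system_error"


def route(email: Email) -> str:
    """Classify a failure email and return the queue it should be sent to."""
    return ROUTING_TABLE[classify_failure(email)]


if __name__ == "__main__":
    msg = Email(
        subject="Case creation failed",
        body="Required identification document was not attached.",
    )
    print(route(msg))  # -> client-onboarding-team
```

The point of the pattern is simply that a classification step replaces the manual triage an employee used to do, and a routing table hands the message straight to the right team.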

This automation has significantly accelerated work processes and improved customer issue resolution times. The system has expanded from handling a single workflow to managing over 30 tailored workflows for different administration teams.

What opportunities exist for consultants and partners in the AI space?

The growing interest in AI presents significant opportunities for consultants and partners in the Australian market. Organisations are increasingly seeking to streamline operations, reduce manual work, and enhance decision-making through AI implementation. Consultants play a pivotal role in identifying high-impact areas for AI application, designing and deploying AI-powered workflows, and integrating these solutions with existing systems.

A key area of opportunity lies in data strategy and AI governance. As AI adoption accelerates, businesses face challenges in managing and governing the vast amounts of data that feed AI models. Consultants can provide expertise in establishing robust data management practices, ensuring data quality, and creating governance frameworks that align with regulatory requirements. This is particularly important in heavily regulated sectors such as finance, healthcare, and government.