Generative AI has found its path to corporate acceptance
New research from ExtraHop surveyed 1,200 IT and security leaders from around the world to understand their plans for securing and governing the use of generative AI tools inside their organizations. Their responses are concerning, writes Simon Howe, Area Vice President for Australia and New Zealand at ExtraHop.
The data is in: artificial intelligence (AI) bans do not work. The only thing left to do is allow the technology’s use and set up guardrails for protection.
The findings of the ExtraHop study highlight the cognitive dissonance still present in the attitudes and policy positions some organisations take towards generative AI use in the workplace.
The survey shows that one in three organisations ban staff from using generative AI and large language model (LLM) tools, yet only 5% say their employees never use them.
Beneath the covers, there may also be some nuance here. While some organisations, notably Federal Government departments and agencies in Australia, have initiated blanket bans, others have sought only to prevent the tools from being used for client-facing work, being fed production or other sensitive data, or being used to replace humans in critical decision-making contexts. That is, some uses are allowable.
In the best case, the 27% of employees who use generative AI in the workplace despite a ban may be doing so benignly: to make the language in their emails appear more professional, to overcome ‘writer's block’ when drafting materials, or to experiment with dummy or synthetic data to prove out a potential use case that can then be formally proposed for executive sponsorship and funding.
The problem for organisations is two-fold.
First, a bend of the rules is still a bend of the rules. Few organisations would willingly turn a blind eye to even these ‘best case’ uses, knowing that doing so could invite or encourage progressively riskier behaviour and interactions with the tools.
Second, even benign uses of the technology are hard to track when the generative AI tools are internet-facing, so it’s difficult for organisations to know whether ‘shadow’ uses of generative AI really are benign or constitute a serious breach of internal policy. This is particularly true of bans on work-related experimentation via an employee’s personal generative AI account or personal device: such bans are difficult to enforce and monitor, and the organisation’s ability to perform due diligence on any self-reported usage is limited.
While organisations naturally want to de-risk their exposure to generative AI, it should be clear that bans of any kind are an ineffective way of doing so. If bans worked, acknowledged internal usage would mirror them: organisations that prohibit the tools would also report little or no use of them. The fact that 27% of employees still use generative AI despite a ban being in place is telling.
Still, there are some positive signs, with only 2% of global respondents to the ExtraHop research indicating that their organisation does nothing to govern employee use of generative AI tools.
It’s apparent that most organisations are taking the technology seriously and trying to find ways to maximise its benefits while mitigating its risks – even if the effectiveness of some of their actions to date may be questionable.
The guardrails for acceptance
We’ve reached a tipping point with generative AI where acceptance is really the only option available.
The number of new LLMs and tools emerging weekly makes it hard to stay aware of developments, let alone maintain an effective ban list. In addition, with nearly three-quarters of respondents to the survey reporting frequent or occasional use of generative AI tools, and that number continuing to rise, it should be apparent by now that generative AI in the workplace is here to stay.
To facilitate acceptance, organisations will need to set up some appropriate guardrails that cover people, processes and technology.
Setting up good policies around generative AI is a critical first step to governing its use in the workplace. Applying general data security best practices when developing those policies is a good starting point.
All company AI policies should also include strong security training, which requires cooperation between non-technical leadership and the more technical leaders who oversee security. It’s equally important to involve the employees who actually use these tools, so leaders can understand the benefits of specific use cases and how restrictions might limit workflows, while also ensuring those employees understand the risks of leaking sensitive corporate data.
The ultimate goal of training should be to teach users to assess risk themselves in an intuitive way – and if they don’t know, to ask. A good rule of thumb: anything you would not send in an email to external organisations should never be shared with or uploaded to a generative AI service.
Finally, the right security tools are a necessary backstop to security training. The most valuable way to protect against data leakage with generative AI is to audit the use of the tools. Visibility measures that can help with monitoring data transfers are key. While most organisations already have the basic tools for monitoring who is accessing certain sites and how often (firewalls and proxies), certain security appliances can offer even more insight by measuring bytes in and bytes out.
If IT leaders have broad network visibility and see users are sending more bytes out than they should – in the form of data they’re uploading to LLMs – they can assess risk quickly and address the problem easily.
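To make the ‘bytes out’ idea concrete, here is a minimal sketch of that kind of audit. It assumes flow records have already been exported (for example from a firewall, proxy or network visibility appliance) to a CSV with hypothetical user, dest_domain and bytes_out columns; the domain list and threshold are illustrative assumptions, not a specific product feature.

```python
# Minimal sketch: total outbound bytes per user to known generative AI
# domains and flag anyone over an arbitrary review threshold.
# Column names, domain list and threshold are assumptions for illustration.
import csv
from collections import defaultdict

GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "gemini.google.com", "claude.ai"}
UPLOAD_THRESHOLD_BYTES = 5 * 1024 * 1024  # flag users sending more than ~5 MB

def flag_heavy_uploaders(flow_csv_path: str) -> dict[str, int]:
    """Return users whose outbound bytes to generative AI domains exceed the threshold."""
    totals: dict[str, int] = defaultdict(int)
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dest_domain"] in GENAI_DOMAINS:
                totals[row["user"]] += int(row["bytes_out"])
    return {user: sent for user, sent in totals.items() if sent > UPLOAD_THRESHOLD_BYTES}

if __name__ == "__main__":
    for user, sent in flag_heavy_uploaders("flows.csv").items():
        print(f"Review: {user} sent {sent} bytes to generative AI services")
```

A report like this doesn’t prove a policy breach on its own, but it gives security teams a short, prioritised list of uploads worth reviewing rather than an outright ban they cannot enforce.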