Ensuring safe and responsible AI in government portfolios
Artificial Intelligence (AI) is evolving at breakneck speed, and with its many benefits come numerous risks. Declan Norrie and Kyle Wood from Proximity outline how governments can ensure AI’s risks are appropriately managed and its opportunities maximised in a safe and responsible manner.
While AI businesses pledge their commitment to the ethical design, development and deployment of artificial intelligence through internal controls and industry-developed non-binding standards, nations across the world are grappling with how best to ensure AI is used in a safe and responsible manner.
When it comes to effectively regulating AI, however, there is much to unpack and much to consider.
The questions range from how AI is defined and what is meant by “safe and responsible”, to who in the AI value chain should be influenced through regulation, to how the tangible (and ideally quantifiable) risks and challenges presented by this emerging technology can be identified. It’s a balancing act between supporting businesses and individuals to leverage the incredible potential of AI, and containing the risks to an “acceptable” level.
The impetus behind calls for regulation of AI
As ASIC Chair Joe Longo highlighted in a speech on AI regulation earlier this year, the development and deployment of AI in Australia is hardly a lawless “Wild West”. To varying extents, AI developers and deployers are subject to Australia’s existing suite of (generally) technology-neutral laws and associated regulatory frameworks.
Despite this, evidence indicates that a majority of Australians have low trust in AI, and are either unsure or do not believe that existing protections are sufficient to guard against AI-related harms.
They are not isolated in their concerns: the Bletchley Declaration, signed by Australia amongst a group of 28 countries and the EU on 1 November 2023, welcomed “recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed”.
With trust already low, AI-related safety incidents risk hampering the sector’s development and undermining our ability to reap the significant public and private benefits of this emerging technology. Effective regulation is critical to mitigating the risk of individual and social harms, and ultimately to providing the public and businesses with certainty and confidence. Longo’s closing question on existing regulation remains salient: “is this enough?”
How governments are responding
The Australian Government has committed to investigating options for a risk-based approach to regulating AI to ensure it is safe and responsible. As confirmed in the 2024 Budget, this will include consultation on potential mandatory, risk-based guardrails applying generally to AI systems, and consideration of options to strengthen and clarify existing laws that already regulate (or should regulate) AI in particular domains.
At the time of writing, approaches to AI regulation differ markedly amongst comparable developed nations, despite agreement that alignment will be crucial. These approaches vary both in how mandatory their key regulatory instruments are and in the breadth of their application.
The complex nature of AI regulation
The characteristics of AI technologies pose specific challenges to designing and implementing effective regulation, and these challenges need to be closely considered in any regulatory approach.
Defining AI
Any bespoke regulatory approach faces the challenge of defining AI in a way that provides sufficient legal certainty about what the regulation applies to, while remaining flexible enough to accommodate paradigm shifts in AI’s nature and capabilities.
Setting the requirements for safe and responsible AI
Agencies must determine what “safe and responsible” means in their particular context, and what obligations and associated regulatory tools are required to achieve that.
Identifying and quantifying critical risks
Identifying and quantifying tangible risks and challenges is critical to operating a risk-based regulatory system, which can then direct limited resources to monitoring, investigating, and enforcing against regulatory non-compliance where they will be most effective.
Addressing the complex AI value chain
Regulation must be targeted to achieve outcomes efficiently and effectively: it needs to influence the right actors at the right time to minimise burden and maximise outcomes. The complex nature of the AI value chain, which may span a range of organisations across multiple jurisdictions, makes this challenging.
Initial actions for policymakers and regulators
All areas of government will need a baseline understanding of AI issues to ensure effective coordination of an approach to safe and responsible AI. As a starting point, public sector personnel at all levels can engage meaningfully with safe and responsible AI in their domain, by taking the following actions:
1) Read up
Develop a baseline understanding of AI’s applications and its technical and ethical challenges. Acknowledging the complexity of the field and the rapidity of change, utilise accessible resources, including those published by DISR, the National AI Centre and academic institutions. Engage with experts and stay informed about emerging trends, both in your domain and more broadly.
2) Build capability
Invest in AI literacy. Recruit and train policymakers, regulators, and legal professionals at all levels to understand AI and navigate AI-related issues effectively. Review your policy, regulatory, and legislative tools to identify any gaps, challenges or risks to mitigating AI-related harms.
3) Collaborate with critical stakeholders
The government should work with critical stakeholders, including other government agencies, industry, academia, and civil society. Share insights, concerns and positions so that critical risks are visible across agencies and don’t fall through the cracks. Engage with both central agencies and line agencies to address key pain points, especially areas of intersection and duplication.
4) Horizon scan: Anticipate future AI developments
Consider the impact of quantum computing, autonomous systems, and AI-driven decision-making on key activities and stakeholders in your domain. While you might not be able to predict everything in a fast-moving and complex field of technology, practising preparedness will give you the tools to adapt more quickly to change.
Proximity’s offerings
Proximity’s multi-disciplinary experts are experienced in the challenges of designing, developing and reviewing complex and innovative regulatory frameworks. From assurance reviews to seconded lawyers, Proximity’s offerings can help ensure governments are well equipped to seize the opportunities and manage the risks of artificial intelligence.