AI’s ethical imperative: Why C-Suite leaders must act now

05 November 2024 Consultancy.com.au

The race to adopt AI is on, but without ethical safeguards, businesses risk more than just reputational damage, writes Shaun Wadsworth, Director of AI and IoT at Fujitsu and Chair of the firm’s AI Ethics Committee in Asia Pacific.

The rapid adoption of AI, particularly generative AI, is outpacing businesses' ability to prepare for its potential to revolutionise the way people work.

Three out of four knowledge workers now use AI at work, and 78% bring their own AI tools to work. The Tech Council of Australia estimates that Gen AI will contribute $45 billion to $115 billion annually to the Australian economy by 2030. While 79% of leaders agree that AI adoption is critical to remaining competitive, 60% admit their company lacks a vision and plan to implement it.

This lack of preparedness is fraught with risk. The integration of AI into core business functions brings many ethical considerations that demand careful attention. Bias, discrimination, and opacity are just some of the risks associated with unethical AI.

The Australian Government has recently introduced the Voluntary AI Safety Standard. But a more rigorous regulatory environment is on the horizon. Businesses must take a proactive approach to ethical AI, or risk facing significant consequences.

Bias, discrimination, and a lack of transparency are not just ethical concerns; they are business risks.

The United Nations Educational, Scientific, and Cultural Organisation’s (UNESCO) International Research Centre on Artificial Intelligence has found that Gen AI’s outputs reflect considerable gender-based bias. UNESCO’s research identifies three major sources of bias:

  • Data bias: If Gen AI isn’t exposed to data from underrepresented groups, it will perpetuate societal inequalities.
  • Algorithm bias: Algorithm selection bias can also entrench existing prejudices, turning AI into an unwitting accomplice in discrimination.
  • Deployment bias: AI systems applied in contexts different from those they were created for can produce dangerous associations that stigmatise entire groups.

These biases risk cementing unfair practices into seemingly objective technological systems by amplifying historical injustices.
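Bias of this kind can be measured before a system ships. As a minimal illustrative sketch (the metric, groups, and figures below are hypothetical, not drawn from UNESCO's study), a demographic-parity check compares favourable-outcome rates across groups and flags large gaps for review:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest difference in favourable-outcome rates across groups.

    decisions: list of (group, outcome) pairs, where outcome is
    1 (favourable) or 0 (unfavourable).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions: (applicant group, approved?)
sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(sample)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 -- a gap this large should trigger a review
```

A gap near zero does not prove a system is fair, but a large gap is a concrete, auditable signal that data or algorithm bias may be at work.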

A study from the UN has found that AI’s outputs reflect considerable gender-based bias

Another challenge is the lack of transparency and explainability in many AI systems.

As AI algorithms grow more complex, their decision-making processes often become opaque, even to their creators. This ‘black box’ nature of AI can be particularly problematic. Imagine a scenario where an AI system recommends a specific medical treatment or denies a loan application without providing a clear rationale. This lack of explainability undermines trust and makes it difficult to identify and correct errors or biases in the system.
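One antidote to the black box is to prefer models whose decisions can be explained term by term. The sketch below is purely illustrative (the feature names, weights, and threshold are hypothetical): a transparent linear scoring model that returns "reason codes" with every loan decision, so a denial always comes with a rationale.

```python
# Hypothetical linear credit-scoring model. Each decision ships with
# per-feature contributions, so a denial can always be explained.
WEIGHTS = {"income": 0.4, "credit_history": 0.5, "debt_ratio": -0.6}
THRESHOLD = 0.5

def score_with_reasons(applicant):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "deny"
    # Sort reasons by how strongly each feature pulled the score down
    reasons = sorted(contributions.items(), key=lambda kv: kv[1])
    return decision, total, reasons

decision, total, reasons = score_with_reasons(
    {"income": 0.9, "credit_history": 0.3, "debt_ratio": 0.8}
)
print(decision)    # 'deny'
print(reasons[0])  # ('debt_ratio', -0.48) -- the main factor in the denial
```

Real credit models are far more complex, but the design principle holds: if a system cannot surface the factors behind its output, its errors and biases are much harder to find and correct.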

The consequences of unethical AI go far beyond reputational damage. Businesses risk legal action, loss of customer trust, and damage to their brand.

The roadmap for ethical AI adoption

As a global leader in AI, Fujitsu has been promoting the research and development of innovative AI and machine learning technologies for over 30 years. We are also at the forefront of advocating for ethical AI, contributing to the Australian Government’s Supporting Responsible AI discussion paper.

Our recommended approach to harness the full potential of AI while mitigating its risk is a three-phase process: design, implement, and monitor.

The design phase: Setting a clear vision for ethical AI
Ethical AI is not just an IT concern; it is a strategic imperative that touches every aspect of the business.

The design phase is the foundation of ethical AI practices within an organisation. It begins by securing buy-in from top leadership, recognising that ethical AI is not only an IT concern but a strategic imperative that touches every aspect of the business. Business leaders must articulate a clear vision for ethical AI and define principles that align with the company’s values and societal expectations.

These principles should then be translated into concrete policies that guide AI development and deployment. This phase involves planning for governance structures that will oversee the implementation of these policies. These governance bodies should be diverse and bring together perspectives from various departments such as legal, risk management, business operations, and human resources. The inclusion of external AI ethics experts can provide valuable independent insights and enhance the credibility of the governance process.

The implementation phase: Embedding clear processes at every stage
Ethical AI implementation is an ongoing process that begins at the project proposal stage and continues through design, development, testing, and deployment.

The implementation phase brings the ethical AI framework to life. Governance groups are established with clear mandates and terms of reference. Processes are implemented to manage every stage of AI development and deployment ethically. This is not a one-time effort but an ongoing process that begins at the project proposal stage and continues through design, development, testing, and deployment.

Ethical AI implementation is an ongoing process

It is important to recognise that ethical AI implementation often involves navigating complex trade-offs. There may be instances of ethical considerations conflicting with short-term business objectives. Organisations must be prepared to make difficult decisions and prioritise long-term sustainability and societal impact over immediate gains.

The monitor phase: Staying on top of ethical AI practices
Continuous evaluation and adaptation are essential for ensuring the ongoing effectiveness of ethical AI practices.

The final step, the monitor phase, ensures the ongoing effectiveness of ethical AI practices. This phase involves continuously evaluating governance processes and staying abreast of technological advancements. It also requires adapting to changing legal and regulatory landscapes, which continue to lag behind AI deployment. Regular audits of AI systems can help identify potential biases or unintended consequences that may have emerged over time.
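A regular audit can be partly automated. As an illustrative sketch (the baseline rate, tolerance, and outcome data are hypothetical), a simple monitoring check flags a system for human review when its behaviour drifts from the rate recorded at the last audit:

```python
# Hypothetical monitoring check: flag an AI system for review when its
# approval rate drifts beyond a tolerance from the audited baseline.
BASELINE_APPROVAL_RATE = 0.62   # rate recorded at the last audit
TOLERANCE = 0.05

def audit_drift(recent_outcomes):
    """recent_outcomes: list of 1 (approved) / 0 (denied) decisions."""
    current = sum(recent_outcomes) / len(recent_outcomes)
    drift = abs(current - BASELINE_APPROVAL_RATE)
    return {"current_rate": current, "drift": drift,
            "needs_review": drift > TOLERANCE}

report = audit_drift([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])
print(report["current_rate"])  # 0.7
print(report["needs_review"])  # True -- drift of ~0.08 exceeds tolerance
```

Checks like this do not replace a full audit, but they turn "regular audits" from a calendar entry into a continuous signal that something has changed and warrants a closer look.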

Striking the balance

AI technologies will continue to advance, and the ethical implications of their use will only grow in complexity and importance. Organisations that address these challenges proactively will be better positioned to build trust with customers, employees, and stakeholders. They will also be more resilient in the face of regulatory scrutiny and better equipped to deal with the ethical dilemmas that will inevitably arise in the AI-driven business landscape.

Ethical AI is not a destination but a journey. It requires ongoing commitment, resources, and a willingness to engage with difficult questions. By embracing this challenge, organisations can unlock the transformative potential of AI while upholding their responsibilities to society.
