Bridging the disconnect between artificial intelligence and ethics

15 October 2020 Consultancy.com.au

With the artificial intelligence market growing at 44%, attention is shifting to the ethical implications of the technology. A growing number of stakeholders realise that a balance needs to be found between making money and keeping AI free from bias and partiality, writes Catriona Wallace, the founder of Ethical AI Advisory and board member of Boab AI.

The challenge stems from the increasing capabilities of the technology itself. Every day, AI is taking over more tasks that would previously have required human intervention. Indeed, Gartner notes that by the end of 2020, 40% of organisations will have AI deployments.

As AI begins to make more and more complex decisions, questions arise about what controls should be put in place to ensure outcomes are in line with societal norms and expectations. With a number of high-profile cases of unethical AI recently surfacing, including the Apple Card credit algorithm and Optum’s racially biased medical algorithm, pressure is mounting. Even Australia’s Human Rights Commissioner, Ed Santow, is leading the HRC’s push to create laws around AI-informed decision-making.

Catriona Wallace, CEO of Ethical AI Advisory

The areas in which controls will be required are many and varied. AI’s use in medicine to spot signs of disease or in police work to predict where crimes are more likely to occur can deliver big benefits, but also raises big questions. If an AI makes a decision that is ultimately found to be wrong, who should take responsibility? If that decision results in significant loss or even death, who should be held accountable?

Another area in which AI usage is increasing is banking and financial services. But should a bank rely on an AI algorithm to determine who poses a credit risk and who doesn’t? And how can we be sure people are not discriminated against because of the way the assessment process works?

If the algorithms are trained on historical data that already contains inherent bias, then we are teaching the machines that will soon be running our lives and businesses to replicate, and then scale, the problems that have plagued society in the past. How do we avoid hard-coding and scaling existing societal problems?
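To make this concrete, here is a minimal sketch in Python of how that replication happens. Everything in it is hypothetical: the data is synthetic and the biased approval thresholds are invented for illustration, not drawn from any real lender.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)               # hypothetical protected attribute (0 or 1)
income = rng.normal(50 + 2 * group, 10, n)  # modest real difference between groups

# Historical decisions were biased: group 0 needed a much higher
# income than group 1 to be approved for credit.
approved = (income > np.where(group == 0, 60, 45)).astype(int)

# Train a model on those historical decisions; it dutifully learns
# the biased pattern baked into its training labels.
X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# A large gap between these two rates is a simple demographic-parity red flag:
# the model has hard-coded the historical bias and will now apply it at scale.
```

The point of the sketch is that nothing went "wrong" in the training: the model faithfully learned exactly what the historical data taught it, which is precisely the problem.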

Autonomous vehicles are another area that requires close attention. In the event of a looming accident, should the car make choices that protect its occupants or minimise the impact on others?

White Box AI

These challenges grow as AI systems become more complex. Some can learn from past decisions and take on increasingly sophisticated tasks. In these instances, understanding how a particular decision is made, and on what basis, is challenging. Ensuring it is ethical is even more so.

The majority of AI is termed ‘black box’, which means it is enormously difficult even for its creators to explain why the machine made a certain decision or behaved in a certain way. ‘White box’ AI is a newer concept in which the machine’s workings are transparent and explainable. Contestability, transparency, explainability and accountability are key principles in ethical AI.
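As a simple illustration of the white box idea, the sketch below trains a small decision tree, a classic interpretable model, on a toy synthetic task (standing in for a real decision problem, not any particular vendor’s product) and prints its complete rule set for a human to audit.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# A toy, synthetic classification task standing in for a real decision problem.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# A shallow decision tree is a classic 'white box': every decision
# path can be read and contested by a human.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the complete set of rules the model follows, supporting the
# transparency and explainability principles named above.
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(4)]))
```

A deep neural network trained on the same data may be more accurate on complex tasks, but it offers no equivalent printout of its reasoning, which is the trade-off at the heart of the black box versus white box debate.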

It’s important to remember that AI tools don’t always operate as they should or generate accurate results. A recent example occurred in the United Kingdom, where Covid-19 restrictions prevented some students from sitting their final exams. An algorithm was put to work to assess a range of factors, including previous test results and teacher assessments. Unfortunately, it generated grades that differed widely from what anyone expected, and the project was deemed a failure.

Government guidance

Last year, the Australian Government released its AI Ethics Framework, designed to stimulate thinking and debate about the issue. It incorporated a set of voluntary ethical principles intended to provide some guidance for organisations deploying the technology.

The overarching theme is that AI systems should benefit individuals, society, and the environment at all times. They should also be transparent and accountable for the decisions they make. Companies involved in the development of AI should carefully consider these principles before deploying their new tools. Getting the ethics right from the outset can avoid costly and harmful mistakes or misjudgements in the future.

Additionally, investors in AI companies should ensure that, from an early stage, the company is aware of and considering the implementation of ethical AI. Going forward, this is likely to become either a procurement or an investment requirement for companies.

The government guidance is useful for any organisation considering the deployment of AI. It’s important to carefully consider not only how AI tools will operate and arrive at their decisions but also, importantly, the data on which they are trained.

It’s also important to take the time to examine and understand how the AI operates and the rules it follows when making choices and taking actions. You need to ensure these adhere to the same ethical guidelines as would be expected of a human undertaking the same task.

As AI becomes more widely used, having ethical guidelines in place will become increasingly important. However, it needs to be remembered that final responsibility for decisions and actions taken by AI remains with its human controllers, at least until AI no longer needs those controllers, a point estimated to be around the year 2040. Between now and then, we have the opportunity to shape the human+machine era so that it serves us, humanity.