From automating business processes to enabling detailed data analysis or improving manufacturing outputs and logistics planning, artificial intelligence (AI) offers a wealth of growth opportunities for businesses of all sizes and within all sectors. 

But, as AI becomes increasingly commonplace across the business landscape, it brings with it a unique set of challenges. In particular, a lack of emotional, empathetic and ethical perspectives – the very things that humans bring to decision making – can deliver significant legal and reputational risk for organisations deploying AI technologies.

Machine learning can replicate human bias

One world-leading e-commerce company found this out the hard way when it used AI to improve its hiring process. The system's algorithms faithfully mimicked the patterns in thousands of human hiring decisions from the previous decade – and, in doing so, the AI recruiter learned to privilege men's resumes over women's.
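To see how this can happen, here is a deliberately simplified toy sketch (entirely hypothetical data and scoring, not the company's actual system): a model trained on biased historical hiring outcomes will reward whatever words correlate with past "hire" decisions, including words that act as a proxy for gender.

```python
# Toy illustration of bias replication: train a naive word-scoring
# "recruiter" on hypothetical historical hiring decisions in which a
# proxy word (e.g. "womens", as in "women's chess club captain")
# correlates with rejection.
from collections import Counter

# Historical decisions: (words appearing in resume, was the person hired?)
history = [
    ({"engineering", "captain"}, True),
    ({"engineering", "womens"}, False),
    ({"sales", "captain"}, True),
    ({"sales", "womens"}, False),
] * 100  # repeated many times, standing in for years of hiring data

# "Training": score each word by how often it appears in hired
# vs rejected resumes.
word_scores = Counter()
for words, hired in history:
    for w in words:
        word_scores[w] += 1 if hired else -1

def score(resume_words):
    """Sum the learned word scores; higher means 'more hireable'."""
    return sum(word_scores[w] for w in resume_words)

# Two otherwise identical resumes differ only in the proxy word,
# yet the learned model ranks them very differently:
print(score({"engineering", "captain"}))  # → 200
print(score({"engineering", "womens"}))   # → -200
```

The model never sees a "gender" field; it simply learns that a proxy word predicts rejection in the historical data and penalises it, which is the pattern-replication problem described above.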

“The machine learning system clearly noticed the historical sexism well enough to faithfully replicate it,” says Dr Nicole Vincent, senior lecturer at the UTS TD School.

“This system lacked even a semblance of the nuance that all good business leaders possess – namely, the ability to discern which patterns to learn from and follow, which ones to correct, which ones to create, and sometimes, which ones to report to the authorities.”

This scenario represents the crucial challenge that many businesses face when establishing AI systems in the workplace. Too often, leaders pass the responsibility of implementing AI down the chain of command, but an absence of leadership engagement in AI projects can lead to unintended business consequences.

“Businesses that are developing or using AI need to be aware that things can go wrong, which may involve legal liability,” says Professor David Lindsay from the UTS Faculty of Law.

“Positive initiatives for promoting ethical AI [are] increasingly essential for building a good business.”

The global challenge of ethical AI

Embedding ethical perspectives into AI technologies – that is, establishing the conventions for ethical, safe and transparent technological processes and policies to guide future workplace practice – is a critical global challenge.

“[When humans learn from patterns, we’re] actively choosing what has value and what matters. Our decisions shape the world. This isn't something that AI can or should [do on our behalf],” says Dr Vincent.

In Australia, the Federal Government’s AI Ethics Framework describes eight principles designed to support the development of ethical AI standards, while a recently released issues paper from the Department of the Prime Minister and Cabinet seeks to position Australia as a world leader in digital economy regulation.

Universities have a role to play, too: at UTS, a new microcredential called Ethical AI for Good Business is set to help businesses understand the role of ethical AI in minimising risks around legal liability, consumer trust and human rights.

Delivered by the TD School and the Faculty of Law, the course will prepare non-technical professionals to embrace automation with confidence and stay competitive – vital skills for modern business leaders, both within their own organisations and in the increasingly networked and complex international business environment.

As AI becomes ubiquitous across almost every sector, it’s time for businesses to act in order to protect their people, their profile and their profits.

Read more about the Ethical AI for Good Business short course on UTS Open.