With rapid advances in artificial intelligence (AI)-enabled computing systems, learning algorithms, big data and predictive analytics, services are now being automated using “robo” or algorithmic decision-making systems.

While these technological advances hold tremendous promise for mankind, they also pose difficult questions in areas such as ethics, privacy, human rights, intellectual property and economics.

In a world that is increasingly connected by the Internet of Things and where machine-based algorithms now use available data to make many of the decisions that affect our lives, how do we ensure these automated decisions are appropriate and transparent? And what recourse do we have when these decisions intrude on our rights, freedoms and legitimate interests?

Companies such as Google and Facebook have invested millions of dollars in developing algorithms to analyse our search histories, product interests and online interactions, using this data to predict and even influence our future behaviour.

A University of Michigan study into the algorithm used by Facebook to populate our newsfeeds found that it hid posts it thought were uninteresting based on unspecified criteria. Study participants were outraged when they realised the hidden posts often included items from close friends and family members.

Analysts often refer to algorithms as a black box, partly because corporations carefully guard these unique pieces of code for the commercial value and competitive advantage they represent, and partly because of their inherent complexity.

A natural outcome of machine learning and the application of big data is that algorithms often become even more complex over time, to the point where even their developers don’t fully understand what is going on.

Biased algorithms

So what happens when an algorithmic decision leads to someone being disadvantaged or discriminated against? There have been numerous instances where this has happened, not necessarily because of the algorithm itself, but because the underlying data reflects an inherent bias or pattern that the algorithm then reproduces. As algorithmic complexity and autonomy increase, it becomes even more important to build in checks and balances to protect the legitimate interests of individuals.
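
To make this concrete, here is a minimal, hypothetical sketch in Python of how a bias baked into historical data can resurface in an automated decision rule. The loan outcomes, group labels and 50 per cent threshold are all invented for illustration; they are not drawn from any real system.

```python
# Hypothetical illustration: a naive decision rule fitted to biased
# historical data simply replays the bias. All figures are invented.

# Synthetic "historical" loan outcomes: group B was approved far less
# often in the past.
history = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +   # group A: 80% approved
    [("B", 1)] * 40 + [("B", 0)] * 60     # group B: 40% approved
)

def learned_rule(group):
    """Approve if the group's historical approval rate exceeds 50%.

    No one programmed this rule to discriminate; it inherits the
    pattern directly from the training data.
    """
    outcomes = [approved for g, approved in history if g == group]
    return sum(outcomes) / len(outcomes) > 0.5

for group in ("A", "B"):
    print(group, "approved" if learned_rule(group) else "declined")
# Prints: A approved / B declined
```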

In the EU, new data protection laws have been passed that provide an individual with the right to “contest the decision” made by automated processing in cases where the individual’s legitimate interests and freedoms have been significantly affected. Europe has a strong tradition of respecting personal privacy, and the General Data Protection Regulation, which takes effect in May 2018, also includes a right for the affected individual to “express his or her point of view”.

Here in Australia, financial services companies, including fintech start-ups, are providing digital or robo advice to customers using highly sophisticated algorithms. ASIC last month released its regulatory guide on the provision of automated financial product advice using algorithms and technology, without the direct involvement of a human adviser. The guide also addresses issues such as the application of the organisational competence obligation to digital advice licensees and the monitoring of their algorithms.

Auditing automated systems

Is there merit in the idea of professional algorithmists, whose role would be to assess and review algorithms for potential issues when concerns are raised, providing a level of accountability and transparency that is currently missing?
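
As one illustration of what such a review might involve, the sketch below applies the “four-fifths rule” used in US employment law as a rough screen for disparate impact: if one group’s approval rate is less than 80 per cent of another’s, the system warrants closer scrutiny. The decision log, group labels and figures here are hypothetical.

```python
# Hypothetical audit check: compute the disparate impact ratio from a
# log of automated decisions. Data and group labels are invented.

def disparate_impact_ratio(decisions):
    """decisions: iterable of (group, approved) pairs.

    Returns min approval rate / max approval rate across groups;
    values below 0.8 fail the 'four-fifths rule' screen.
    """
    counts = {}
    for group, approved in decisions:
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + int(approved))
    rates = [positives / total for total, positives in counts.values()]
    return min(rates) / max(rates)

log = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 40 + [("B", False)] * 60
)
ratio = disparate_impact_ratio(log)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 < 0.8: flag for review
```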

Recognising that technological advances in automated systems and AI will have wide-ranging impacts, key players in the ICT sector have taken steps to better understand how things are developing and to establish ethical standards.

Leaders from five of the world’s biggest ICT companies — Amazon, Facebook, IBM, Microsoft and Google’s parent company, Alphabet — have been meeting since February to work on ethical standards relating to AI. At the same time, a major initiative called the 100 Year Study on Artificial Intelligence has released the first of 20 reports to be produced every five years over the next century.

Anthony Wong is president of the ACS and chief executive of AGW Consulting, a multidisciplinary ICT, intellectual property legal and consulting practice.
