General Ethical Guidelines

This section provides a foundation, both for students’ understanding and the framework itself, as to the importance of embedding an ethical approach in AI development.

Progress in the field of AI has the potential to improve many facets of the way our society operates. At the same time, not all impacts upon individuals and the broader community will be anticipated or positive. The harms a project causes can be just as varied as the benefits it brings (see: Figure 2). Incorporating ethical considerations, from the beginning to the end of a project, can lead to safer and fairer outcomes.

Taking steps to produce these outcomes, in turn, builds public trust in AI, fosters consumer loyalty and helps ensure that all Australians benefit from transformative technologies. For team members, this is a relevant consideration because it relates not only to the social impact of individual MDN projects, but also to the work they may apply their skills to in the course of future employment.

When it comes to pursuing MDN projects in a way that keeps ethical consideration at the forefront, this Framework emphasises the importance of asking questions. The consequences of failing to do so can be serious and detrimental, as seen in the Australian Government’s high-profile attempt at automating debt collection.

CASE STUDY: ‘ROBODEBT’

Algorithmic discrimination is a common problem in the field of AI. The so-called ‘Robodebt scandal’ is a classic and particularly devastating case of this.

Between 2015 and 2019, an online compliance intervention system was used by Services Australia to monitor and investigate Centrelink recipient wages automatically. It was developed by the Commonwealth Department of Social Services, in consultation with other Federal government departments, and its purpose was to ensure the accuracy of recipient income-reporting. This automated system, which became known as the Robodebt Scheme, could produce in one week the same number of discrepancy notices that were produced in a year when done manually.

Yet its use ultimately proved detrimental, giving rise to both a class action against the Federal Government and a Royal Commission investigation after one in five people received incorrect debt collection notices, requiring a total of 470,000 collected payments to be refunded. By overcalculating individuals’ employment earnings during the periods in which social security payments were received, the Robodebt system indicated that recipients had been overpaid in the relevant periods and thus owed a debt to the Commonwealth government.

The Robodebt Scheme is no longer in use. However, the harm it caused was significant and the effects have outlasted the automated system itself. Those eligible for government support commonly come from low socio-economic backgrounds and are often already marginalised and disadvantaged. A 2019 Senate Committee Inquiry found that the scheme ‘indiscriminately targeted’ vulnerable demographics in the community, causing significant psychological and financial distress. Among the recipients were 2,000 ‘vulnerable’ Australians who passed away after receiving debt notices. Sufficient safeguards at every stage are all the more important when automation is intended to serve the interests of vulnerable populations.

As the Royal Commission progresses, accounts from those involved in both development and implementation are beginning to demonstrate a recurring theme: doubts went unraised, questions unasked, and risks ignored. What the Robodebt debacle highlights is that, regardless of seniority or the type of work undertaken by an individual within an organisation, upholding an ethical approach to a project is everyone’s responsibility.

In order to mitigate issues like algorithmic discrimination and allow AI models to flourish to the benefit of their users and communities, the Australian Government has identified eight key principles that should underpin the design, development and implementation of AI systems (see: Figure 3).

This is a form of guidance only. It does not create any legal obligations, and businesses and other organisations, such as student research teams, are not bound to follow it. However, the principles it sets out are a useful resource when discussing how best to safeguard against the ethical risks associated with AI, and as such they form the ethical touchpoints considered throughout the MDN project life cycle under this Framework.