Identifying Risks

When considering how a project might pose an ethical risk, it is important to account for the fact that AI and computer systems are 'socio-technical' in nature: components that appear objective and purely technical are nonetheless shaped by social factors during both their creation and their implementation.

Below are eight questions that can act as a guide in identifying and assessing ethical risks relating to a project.

Human, Societal and Environmental Wellbeing

Is the platform/product actually providing a social good?
AI and computer systems should benefit individuals, society, and the environment.

  • Teams should outline the AI system's objectives, including the merits of achieving its outcomes.
  • Teams are encouraged to consider how the AI system affects people, and how it will benefit rather than harm them.
    Ask yourself: "What social cause does the product further?"
  • For example, this can be as simple as an AI system that improves internal business processes (e.g. efficiency).


Human-Centred Values

Does the system respect the human rights of individuals?
AI and computer systems should respect human rights, diversity, and the autonomy of individuals.

Fairness

Is the AI inclusive and accessible, and does it avoid discriminatory impacts on any individual or group?
AI and computer systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities, or groups.
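
One way to make this question measurable is to compare the system's outcomes across groups. The following is a minimal sketch, assuming a hypothetical decision log with a group attribute and a binary approval decision; the 0.8 threshold is a common rule of thumb for flagging disparate impact, not a legal test.

```python
import pandas as pd

# Hypothetical decision log: one row per person, recording their group
# and the system's binary decision (1 = approved).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group: the share of positive decisions.
rates = decisions.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest selection rate over the highest.
ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: one group is approved much less often.")
```
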
Privacy Protection and Security

Does the system protect privacy and individual security?
AI and computer systems should respect and uphold privacy rights and data protection, and ensure the security of data.
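
As a small illustration of one privacy-protective practice, the sketch below pseudonymises a direct identifier with a keyed hash before it enters an analytics pipeline. The SECRET_KEY is a hypothetical stand-in for a properly managed secret, and this technique is no substitute for a full privacy impact assessment.

```python
import hashlib
import hmac

# Hypothetical secret; in practice this would live in a secrets
# manager, not in source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records can still
    be linked for analysis without exposing the raw identifier.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "score": 0.92}
record["email"] = pseudonymise(record["email"])
print(record)
```
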
Reliability and Safety

Is the system operating as intended?
AI and computer systems should reliably operate in accordance with their intended purpose.
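
"Operating as intended" can be checked continuously with behavioural tests: reference inputs whose correct handling has been agreed in advance, re-run against the live system. A minimal sketch follows, assuming a hypothetical model exposing a predict method; the StubModel exists only so the example runs as-is.

```python
def check_known_cases(model, cases):
    """Return the reference cases where the system's output no longer
    matches the agreed expected output."""
    failures = []
    for inputs, expected in cases:
        actual = model.predict(inputs)
        if actual != expected:
            failures.append((inputs, expected, actual))
    return failures

class StubModel:
    """Stand-in for the real system so the sketch is runnable."""
    def predict(self, inputs):
        return "approve" if inputs["income"] >= 50_000 else "refer"

reference_cases = [
    ({"income": 80_000}, "approve"),
    ({"income": 20_000}, "refer"),
]

failures = check_known_cases(StubModel(), reference_cases)
print("All reference cases passed." if not failures else failures)
```
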
Transparency and Explainability

Can you explain the decision that the system has arrived at? Will you inform people when and how their data is used?
There should be transparency and responsible disclosure so people can know when they are being significantly impacted by an AI system, and can find out when such a system is engaging with them.
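
Feature-level importance is one common building block for explaining model behaviour. The sketch below uses scikit-learn's permutation importance on a public dataset purely as an illustration; global importances like these support responsible disclosure but do not, on their own, explain an individual decision.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative model on a public dataset; your own system and data
# would take its place.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling a feature hurt the
# model's score? Larger drops mean the feature matters more.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```
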
Contestability

Can someone contest a decision made by the system?
When an AI or computer system significantly impacts a person, community, group, or environment, there should be a timely process that allows people to challenge its use or outcomes.

Accountability

Who will take responsibility for the impacts that the system will have on people?
Those responsible for the different phases of the system's lifecycle should be identifiable and accountable for its outcomes, and human oversight of the system should be enabled.