The accelerating pace of progress in AI development (driven particularly by the subfield of machine learning) is currently generating a frenzied mix of anxiety and excitement. Public debates between figures such as Elon Musk and Mark Zuckerberg over the threats of ‘superintelligent’ forms of AI have received extensive coverage, while optimists have argued that AI might be directed towards solving pressing global challenges. But these narratives can easily distract from the fact that various AI-related technologies are already in widespread use. Some of these, as Professor Alston’s report highlights, can have distinct implications for human rights today.
Analysis of the intersections between human rights and AI-related technologies has been growing across a range of areas. Perhaps the most prominent have been predictions of significantly decreased employment in various sectors due to automation. The development of lethal autonomous weapons systems (LAWS) has prompted a backlash from campaigners seeking a pre-emptive ban on so-called ‘killer robots’. Researchers have also identified concerns over the privacy impacts of facial recognition software, the risks of discrimination through the replication or exacerbation of bias in AI systems, and the effects of some ‘predictive policing’ methods.
The rights implications of AI technologies have recently begun to feature more directly at the UN Human Rights Council (HRC). During 2017, two formal reports submitted to the HRC discussed these issues. Report A/HRC/36/48 from the Independent Expert on the rights of older persons addressed the opportunities and challenges of robotics, artificial intelligence and automation in the care of older persons. Earlier in the year, report A/HRC/35/9 from the Office of the High Commissioner for Human Rights (OHCHR), on the topic of the gender digital divide, made reference to algorithmic discrimination and bias, and the potential for AI to drive improvements in women’s health.
The emerging relevance of AI issues can also be seen in the work of advocates at the HRC. At the 36th HRC session in September, a group of NGOs and states co-hosted a side event on the topic of ‘Artificial intelligence, justice and human rights’. In an OpenGlobalRights article at the conclusion of the session, Peter Splinter of the Geneva Centre for Security Policy called for HRC member and observer states to become more forward-looking on thematic issues, including sophisticated algorithmic systems and future forms of AI, so that the body can help shape emerging regulation.