
Artificial intelligence and the rights of persons with disabilities.


Artificial Intelligence is at the core of new technologies that are resetting human relations. These technologies are key to making our societies more inclusive, but left without regulation, they could make discrimination more systematic and harder to detect. In this report, the Special Rapporteur analyzes the risks and opportunities of these new technologies and suggests how to avoid their most harmful effects.

Automated decision-making and machine-learning technologies are driving our search for inclusive equality across a broad range of fields such as employment, education and independent living.

However, there is growing evidence of their discriminatory impact and of the broad human rights challenges that they pose. A number of studies from UN agencies, the European Union and other independent experts provide ample detail of the severe risks posed by new technologies in different aspects of human life, particularly in our interactions with the State and our access to services and social protection. This is why Artificial Intelligence has been described as humanity’s biggest challenge.

And while many of the risks of artificial intelligence systems affect all people, some of them are unique to persons with disabilities.  

How AI works and why it reproduces human bias

Artificial intelligence is made “smart” through a process of machine learning, which is based on the information and data provided to the machine by humans. This process is guided by an algorithm, the basic set of parameters and instructions that indicate how the machine should interpret the information it receives.

AI is commonly used by companies, organizations and institutions around the world in job recruitment. It promises to make the process of identifying the right candidate for a job more efficient and less vulnerable to human bias. But evidence indicates that the data sets from which the machine learns are often already “infected” with bias: historically underrepresented communities, such as persons with disabilities, are likely missing from the history of the company’s successful recruitments, and are therefore missing from the data sets. This means that the machine will not recognize them as suitable candidates for the positions, and will replicate the same pattern of bias that existed previously in the human-managed process.
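To make this dynamic concrete, the sketch below shows a toy screening model trained only on a company’s past hiring decisions. It is an illustrative simplification, not an example drawn from the report: the feature names and records are invented, and real recruitment systems use far more complex models, but the underlying mechanism is the same.

```python
# Hypothetical sketch (not from the report): a toy "screening model" trained
# only on a company's past hiring decisions. Feature names and records are
# invented for illustration.

from collections import defaultdict

# Historical records: (candidate features, hired?). Candidates whose records
# reflect a disability are almost absent among past successful recruitments.
past_recruitments = [
    ({"gap_in_cv": False, "uses_assistive_tech": False}, True),
    ({"gap_in_cv": False, "uses_assistive_tech": False}, True),
    ({"gap_in_cv": True,  "uses_assistive_tech": True},  False),
    ({"gap_in_cv": False, "uses_assistive_tech": False}, True),
]

# "Training": estimate the historical hiring rate for each feature value.
hire_stats = defaultdict(lambda: [0, 0])  # (feature, value) -> [hired, seen]
for features, hired in past_recruitments:
    for key, value in features.items():
        hire_stats[(key, value)][1] += 1
        if hired:
            hire_stats[(key, value)][0] += 1

def score(candidate):
    """Average historical hiring rate over the candidate's feature values."""
    rates = []
    for key, value in candidate.items():
        hired, seen = hire_stats[(key, value)]
        rates.append(hired / seen if seen else 0.0)
    return sum(rates) / len(rates)

# A candidate whose record resembles past non-hires is scored near zero:
# the model reproduces the historical pattern rather than assessing ability.
print(score({"gap_in_cv": True, "uses_assistive_tech": True}))    # 0.0
print(score({"gap_in_cv": False, "uses_assistive_tech": False}))  # 1.0
```

Whatever pattern the historical data encodes becomes the rule the system applies to every new applicant, at scale.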

This example illustrates well-documented problems of AI-powered applications affecting not only job recruitment, but also eligibility for social services, the determination of health insurance costs and many other areas where machines play a role in decisions about critical aspects of our lives.

Further aggravating these challenges is what is known as AI’s “black box” or transparency problem: the system’s inner workings and training data are often concealed and protected by privacy and intellectual property laws, which makes it virtually impossible to determine whether or where bias is occurring.
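By contrast, the kind of check that this opacity prevents is itself simple. The hypothetical sketch below assumes an auditor could access the system’s decisions together with applicants’ self-identified group, and compares selection rates against the “four-fifths” heuristic familiar from equal-employment practice. The figures are invented; the point of the black-box problem is precisely that the inputs for such a check are usually out of reach.

```python
# Hypothetical audit sketch: assumes access to decisions and group membership,
# which the "black box" problem normally denies. All figures are invented.

def selection_rate(decisions):
    """Share of applicants in a group receiving a positive decision."""
    return sum(decisions) / len(decisions) if decisions else 0.0

# 1 = shortlisted by the automated system, 0 = rejected.
outcomes = {
    "disclosed_disability":    [0, 0, 1, 0, 0, 0, 0, 0],
    "no_disclosed_disability": [1, 0, 1, 1, 0, 1, 1, 1],
}

rates = {group: selection_rate(d) for group, d in outcomes.items()}
ratio = rates["disclosed_disability"] / rates["no_disclosed_disability"]

print(rates)            # {'disclosed_disability': 0.125, 'no_disclosed_disability': 0.75}
print(round(ratio, 2))  # 0.17, well below the 0.8 ("four-fifths") heuristic threshold
```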

Impact on people with disabilities

Artificial intelligence-enabled systems offer enormous opportunities for disability inclusion. They are revolutionizing assistive technologies, enabling persons with disabilities to identify accessible routes, enhancing personal mobility, and allowing communication through eye-tracking and voice-recognition software, among many other benefits. Their adaptive nature allows these systems to address specific individual needs, greatly expanding the possibilities for reasonable accommodation.

But these opportunities are accompanied by important risks. As these systems become the primary gatekeepers in processes such as employment, access to services and social benefits, they are also transforming our relationship with the State. Biased data sets and discriminatory algorithms can exclude persons with disabilities from employment or benefits, making them even more vulnerable to poverty and marginalization, and in ways that are more systematic and harder to detect.

In the report, the Special Rapporteur provides examples of how AI systems are enhancing assistive technologies and expanding the possibilities for persons with disabilities to live independently, as well as cases illustrating the risks entailed in their current unregulated use. The report cautions about the gravity of these risks, and warns that they could outweigh the benefits if urgent action is not taken.

To mitigate those risks, the report delves into the key legal obligations that must be applied to the development and use of artificial intelligence. These include the rights to equality and non-discrimination, autonomy, privacy, work, education, health, social protection and participation, among other fundamental rights enshrined in the UNCRPD.

Key recommendations

The study offers a series of recommendations for States, national human rights institutions, businesses, and the UN system, to mitigate the risks of AI’s negative impact on the rights of persons with disabilities.

States must ensure that provisions against discrimination and for the protection of human rights are part of national regulation of AI development and implementation, and should not allow the use and sale of AI systems that pose high risks. This requires the participation of persons with disabilities and their organizations in monitoring the impact of AI systems, and the development of public procurement standards that are disability-inclusive and human rights compliant.

National human rights institutions should increase their capacity for and engagement in AI policy debates, and include the Convention on the Rights of Persons with Disabilities in their positions.

Businesses and the private sector should incorporate human rights and transparency standards into their operations pertaining to AI systems, conduct impact assessments and rectify practices where necessary. They are encouraged to hire artificial intelligence developers who have lived experience of disability, and to consult with organizations of persons with disabilities to gain the necessary perspective.

The UN system is asked to include artificial intelligence in the United Nations Disability Inclusion Strategy, and to ensure that disability is part of its work on AI across all areas. In particular, the Committee on the Rights of Persons with Disabilities is encouraged to develop a general comment clarifying the obligations of States in respect of AI, and all agencies should continue conducting assessments of the discriminatory impact of AI on human rights across their areas of work.

https://www.youtube.com/watch?v=YSnPyJf9wbg