It's not just about privacy! Ethical and legal questions about Artificial Intelligence - Part I

Artificial intelligence (AI) can be defined as "a variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task" (Defense Innovation Board, 2019, p. 5)[1][2], and it is currently one of the technologies whose development is considered strategic by countries such as China and the United States. Being a multifaceted technology, it can be applied to estimate risks, identify patterns (such as your viewing preferences on Netflix), predict criminal behavior, and even serve as an instrument of mass surveillance through the recognition of faces, voices, or digital trails.

In this context, the question that has dominated the discussion on the ethical and legal ramifications of AI has been its effect on people's privacy[3]. However, there is a wide spectrum of harms to people's well-being that is not limited to privacy and can also violate other human rights. For example, in the face of acts of police violence, and seeking to avoid racial profiling during protests in the United States, companies such as IBM, Amazon, and Microsoft suspended the offering of their facial recognition services to police in that country[4].

To illustrate this point, below are cases where human rights may be at risk:

1. Discrimination and bias against minorities

According to Buolamwini and Gebru (2018)[5], most commercial facial recognition software has a higher error rate when detecting female and dark-skinned faces (error rates fluctuating between 20.8% and 34.7%) than when detecting male and light-skinned faces (error rates fluctuating between 0.0% and 0.3%), which could be because these algorithms are trained on databases of predominantly Caucasian faces. These data alone raise a legitimate question as to whether such AI-based services comply with fair and non-discriminatory treatment.
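
To make this kind of finding concrete, here is a minimal sketch in Python of a disaggregated error-rate audit in the spirit of the study: instead of reporting one aggregate accuracy figure, errors are broken down by demographic subgroup. The subgroup labels and outcomes below are hypothetical illustrations, not the study's data.

```python
# Minimal sketch of a disaggregated error-rate audit.
# All subgroup labels and outcomes are hypothetical.
from collections import defaultdict

# (subgroup, prediction_was_correct) pairs from a hypothetical test set
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("darker-skinned female", False), ("darker-skinned female", True),
    ("darker-skinned female", False), ("darker-skinned female", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for subgroup, correct in results:
    totals[subgroup] += 1
    if not correct:
        errors[subgroup] += 1

# Report the error rate per subgroup rather than a single aggregate
for subgroup, n in totals.items():
    print(f"{subgroup}: error rate = {errors[subgroup] / n:.1%} (n={n})")
```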

[Image. Source: Twitter / @Chicken3gg]

The situation is exacerbated when the police use this type of algorithm to make arrests. In a test of recognition software, the American Civil Liberties Union (2018)[6] found that it incorrectly matched 28 members of the United States Congress with people who had been arrested for crimes; about 40% of those false matches corresponded to people of color, even though they represent only 20% of the members of Congress.
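
The disproportionality is easy to verify with the figures just cited. A back-of-the-envelope calculation, using only the percentages reported in the ACLU test (the counts are derived from those percentages, not from raw ACLU data), shows people of color were over-represented among the false matches by roughly a factor of two:

```python
# Back-of-the-envelope check using the figures cited above.
false_matches = 28          # members of Congress falsely matched
share_of_matches = 0.40     # share of false matches who were people of color
share_of_congress = 0.20    # share of Congress who are people of color

affected = false_matches * share_of_matches                 # about 11 members
overrepresentation = share_of_matches / share_of_congress   # 2.0x

print(f"False matches involving people of color: ~{affected:.0f} of {false_matches}")
print(f"Over-representation vs. their share of Congress: {overrepresentation:.1f}x")
```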

This kind of error could also undermine due process, bordering on arbitrary arrest, as happened a few days ago when Robert Julian-Borchak Williams, a 42-year-old African American, was arrested for crimes he did not commit due to a mistake by a facial recognition algorithm[7].

2. AI and crimes

AI is a technology that, in the hands of malicious actors, could enhance a wide variety of criminal activities, including the theft of personal information through phishing, whose effectiveness can increase with advances in voice falsification and call monitoring.

3. Secret algorithms

What if the challenges to facial recognition algorithms were also applicable to algorithms that seek to predict the future behavior of inmates? That problem exists in the case of the COMPAS algorithm, which, by order of the Wisconsin Supreme Court, must carry a warning about its potential biases prior to its use[8].

Despite the above, this algorithm is used to estimate the risk of recidivism, and it is controversial whether its results are discriminatory in predicting a higher risk for African-American defendants compared to white defendants[9], although there are also refutations of that claim[10].
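
Part of why both sides can cite real numbers is that they measure different fairness criteria, and when base rates differ between groups these criteria cannot all be satisfied at once. The sketch below, with hypothetical confusion-matrix counts rather than the actual COMPAS data, shows two groups with the same positive predictive value (the metric emphasized in the rejoinder) but very different false positive rates (the metric emphasized by ProPublica):

```python
# Hypothetical illustration of two fairness metrics diverging.
def rates(tp, fp, tn, fn):
    fpr = fp / (fp + tn)   # share of non-recidivists flagged high risk
    ppv = tp / (tp + fp)   # share of high-risk flags who actually reoffend
    return fpr, ppv

# Hypothetical confusion-matrix counts; group_a has a higher base rate
groups = {
    "group_a": dict(tp=60, fp=40, tn=60, fn=40),
    "group_b": dict(tp=30, fp=20, tn=130, fn=20),
}

for name, counts in groups.items():
    fpr, ppv = rates(**counts)
    print(f"{name}: false positive rate = {fpr:.0%}, "
          f"positive predictive value = {ppv:.0%}")
# Both groups show 60% predictive value, yet group_a's false positive
# rate (40%) is roughly triple group_b's (13%).
```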

However, in this case there is another relevant problem, which lies in the lack of transparency about how the algorithm works: it behaves like a black box for interested third parties, since the developer company refuses to detail its operation, invoking intellectual property. This suggests that the right to effective judicial protection may be violated by an opaque justice system that uses algorithms in a barely transparent way[11].

The cases presented illustrate that AI raises multiple ethical and legal questions and, as already noted, these are not only about privacy. If this is so, then for the development and implementation of AI in Peru, it is not enough to have a personal data protection law as the regulatory framework.

On this point, it could be argued that a regulatory gap around AI is natural because it is a new technology; however, this reasoning is quite weak, as several countries in the region are preparing national plans, policies, or strategies expressly aimed at AI (Argentina[12], Brazil[13], and Chile[14]) or have already approved them (Colombia[15] and Uruguay[16]).

In this situation, Peru should take a first step, under the stewardship of the Digital Government Secretariat of the Presidency of the Council of Ministers (PCM), with the support of citizens, NGOs, and academia, by developing a framework consistent with the challenges of AI. To this end, the following actions are proposed:

  1. Determine the ethical principles that will guide the development and application of AI initiatives, so that they respect human rights and align with our values as a society.
  2. Design, choose, and implement regulatory tools, technical standards, or incentive systems that ensure the effective application of those principles.

These actions will help us catch up in developing an AI governance framework; the second part of this article will delve into these points.


[1] Defense Innovation Board (2019). AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense. Retrieved on 20.06.20 from: https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF

[2] Translator's note: quoted from the original English: "A variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task".

[3] An example of the above is focusing on the privacy concerns derived from the use of Amazon's Rekognition software without including other relevant effects, such as discrimination and misidentification by the algorithm. In this regard, see: Williams, J. and Lynch, J. (2018) Amazon, Stop Powering Government Surveillance. Electronic Frontier Foundation. Retrieved on 21.06.20 from: https://www.eff.org/deeplinks/2018/05/amazon-stop-powering-government-surveillance

[4] Greene, J. (2020) Microsoft won't sell police its facial-recognition technology, following similar moves by Amazon and IBM. The Washington Post. Retrieved on 21.06.20 from: https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/

[5] Buolamwini, J. and Gebru, T. (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research. 81: 1-15. Retrieved on 21.06.20 from: https://dam-prod.media.mit.edu/x/2018/02/06/Gender%20Shades%20Intersectional%20Accuracy%20Disparities.pdf

[6] Snow, J. (2018) Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots. ACLU. Retrieved on 21.06.20 from: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28

[7] Hill, K. (2020) Wrongfully Accused by an Algorithm. The New York Times. Retrieved on 24.06.20 from: https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

[8] See: Harvard Law Review (2017) State v. Loomis. Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing. Retrieved on 22.06.20 from: https://harvardlawreview.org/2017/03/state-v-loomis/

[9] Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016) Machine Bias. There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica. Retrieved on 20.06.20 from: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[10] Flores, A., Lowenkamp, C. and Bechtel, K. (2017) False Positives, False Negatives, and False Analyses: A Rejoinder to "Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And it's Biased Against Blacks." Retrieved on 21.06.20 from: http://www.crj.org/assets/2017/07/9_Machine_bias_rejoinder.pdf

[11] Pasquale, F. (2017) Secret Algorithms Threaten the Rule of Law. MIT Technology Review. Retrieved on 24.06.20 from: https://www.technologyreview.com/2017/06/01/151447/secret-algorithms-threaten-the-rule-of-law/

[12] Presidency of the Nation (2018) National Plan for Artificial Intelligence. Retrieved on 20.06.20 from: https://www.uai.edu.ar/ciiti/2019/buenos-aires/downloads/B1/JA-Plan-Nacional-IA.pdf

[13] Brasil País Digital (2019) Brazilian Artificial Intelligence Strategy. Retrieved on 20.06.20 from: https://brasilpaisdigital.com.br/estrategia-brasileira-de-inteligencia-artificial/

[14] Ministry of Science, Technology, Knowledge and Innovation (2019) Participation process to contribute to the National Artificial Intelligence Policy. Retrieved on 20.06.20 from: http://www.minciencia.gob.cl/Pol%C3%ADticaIA

[15] National Council for Economic and Social Policy of the Republic of Colombia (2019) National Policy for Digital Transformation and Artificial Intelligence. Retrieved on 24.06.20 from: https://colaboracion.dnp.gov.co/CDT/Conpes/Econ%C3%B3micos/3975.pdf

[16] Electronic Government and Information and Knowledge Society Agency (2019) Artificial Intelligence Strategy for Digital Government. Retrieved on 20.06.20 from: https://www.gub.uy/agencia-gobierno-electronico-sociedad-informacion-conocimiento/sites/agencia-gobierno-electronico-sociedad-informacion-conocimiento/files/documentos/publicaciones/Estrategia%20IA%20-%20versi%C3%B3n%20espa%C3%B1ol.pdf

Rommel Infante
Lawyer from the Universidad Nacional Mayor de San Marcos (UNMSM), with a specialization in Behavioral Economics from the Universidad del Pacífico. Interested in decision architecture, digital transformation, startups, legal tech, regulation, and the ethical use of new technologies. He currently works as an associate at the law firm Caro & Asociados in its Compliance and New Technologies area. Email: rommelinfanteasto@gmail.com
