
Moar links

master
David Larlet 1 year ago
parent
commit
64485ccab3

+ 83
- 0
cache/2020/384b330b3de6f4f2bac8c81f0f04c404/index.html

@@ -0,0 +1,83 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the <title>
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1,shrink-to-fit=no">
<!-- Required to make a valid HTML5 document. -->
<title>Atlanta Asks Google Whether It Targeted Black Homeless People (archive) — David Larlet</title>
<!-- Lightest blank gif, avoids an extra query to the server. -->
<link rel="icon" href="data:;base64,iVBORw0KGgo=">
<!-- Thank you Florens! -->
<link rel="stylesheet" href="/static/david/css/style_2020-01-09.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" crossorigin>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://www.nytimes.com/2019/10/04/technology/google-facial-recognition-atlanta-homeless.html">

<body class="remarkdown h1-underline h2-underline hr-center ul-star pre-tick">

<article>
<h1>Atlanta Asks Google Whether It Targeted Black Homeless People</h1>
<h2><a href="https://www.nytimes.com/2019/10/04/technology/google-facial-recognition-atlanta-homeless.html">Source originale du contenu</a></h2>
<p class="css-exrw3m evys1bk0">Atlanta officials are seeking answers from Google after a news report said that company contractors had sought out black homeless people in the city to scan their faces to improve Google’s facial-recognition technology.</p>

<p class="css-exrw3m evys1bk0">The New York Daily News <a class="css-1g7m0tk" href="https://www.nydailynews.com/news/national/ny-google-darker-skin-tones-facial-recognition-pixel-20191002-5vxpgowknffnvbmy5eg7epsf34-story.html" title="" rel="noopener noreferrer" target="_blank">reported on Wednesday</a> that a staffing agency hired by Google had sent its contractors to numerous American cities to target black people for facial scans. One unnamed former worker told the newspaper that in Atlanta, the effort included finding those who were homeless because they were less likely to speak to the media.</p>

<p class="css-exrw3m evys1bk0">On Friday, Nina Hickson, Atlanta’s city attorney, sent a letter to Google asking for an explanation.</p>

<p class="css-exrw3m evys1bk0">“The possibility that members of our most vulnerable populations are being exploited to advance your company’s commercial interest is profoundly alarming for numerous reasons,” she said in a letter to Kent Walker, Google’s legal and policy chief. “If some or all of the reporting was accurate, we would welcome your response as to what corrective action has been and will be taken.”</p>

<p class="css-exrw3m evys1bk0">Google said it had hired contractors to scan the faces of volunteers to improve software that would enable users to unlock Google’s new phone simply by looking at it. The company immediately suspended the research and began investigating after learning of the details in The Daily News article, a Google spokesman said.</p>

<p class="css-exrw3m evys1bk0">“We’re taking these claims seriously,” he said in a statement.</p>

<p class="css-exrw3m evys1bk0">The dust-up is the latest scrutiny of tech companies’ development of facial-recognition technology. Critics say that such technology can be <a class="css-1g7m0tk" href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html" title="">abused by governments</a> or bad actors and that it has already shown <a class="css-1g7m0tk" href="https://www.nytimes.com/2018/07/26/technology/amazon-aclu-facial-recognition-congress.html" title="">signs of bias</a>. Some facial-recognition software has <a class="css-1g7m0tk" href="https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html" title="">struggled with dark-skinned people</a>.</p>

<p class="css-exrw3m evys1bk0">But even companies’ efforts to improve the software and prevent such bias are proving controversial, as it requires <a class="css-1g7m0tk" href="https://www.nytimes.com/2019/07/13/technology/databases-faces-facial-recognition-technology.html" title="">the large-scale collection</a> of scans and images of real people’s faces.</p>

<p class="css-exrw3m evys1bk0">Google said it hired contractors from a staffing agency named Randstad for the research. Google wanted the contractors to collect a diverse sample of faces to ensure that its software would work for people of all skin tones, two Google executives said in an email to colleagues on Thursday. A company spokesman provided the email to The New York Times.</p>

<p class="css-exrw3m evys1bk0">“Our goal in this case has been to ensure we have a fair and secure feature that works across different skin tones and face shapes,” the Google executives said in the email.</p>

<p class="css-exrw3m evys1bk0">The unnamed person who told The Daily News that Randstad sent the contractors to Atlanta to focus on black homeless people also told the newspaper that a Google manager was not present when that order was made. A second unnamed contractor told The Daily News that employees were instructed to locate homeless people and university students in California because they would probably be attracted to the $5 gift cards volunteers received in exchange for their facial scans.</p>

<p class="css-exrw3m evys1bk0">Randstad manages a work force of more than 100,000 contractors in the United States and Canada each week. The company, which is based in the Netherlands and has operations in 38 countries, did not respond to requests for comment. Google relies heavily on contract and temporary workers; they now <a class="css-1g7m0tk" href="https://www.nytimes.com/2019/05/28/technology/google-temp-workers.html" title="">outnumber its full-time employees</a>.</p>

<p class="css-exrw3m evys1bk0">Several unnamed people who worked on the facial recognition project told The Daily News that Randstad managers urged the contractors to mislead participants in the study, including by rushing them through consent forms and telling them that the phone scanning their faces was not recording.</p>

<p class="css-exrw3m evys1bk0">The Google executives did not confirm those details in their email. They said that the tactics described in the article were “very disturbing.” Google instructed its contractors to be “truthful and transparent” with volunteers in the study by obtaining their consent and ensuring they knew why Google was collecting the data, the executives said in the email.</p>

<p class="css-exrw3m evys1bk0">“Transparency is obviously important, and it is absolutely not okay to be misleading with participants,” they said.</p>

<p class="css-exrw3m evys1bk0">A Google spokesman said that the volunteers’ facial scans were encrypted, used only for the research and would be deleted once the research was completed.</p>

<p class="css-exrw3m evys1bk0">In 2017, <a class="css-1g7m0tk" href="https://gizmodo.com/how-apple-says-it-prevented-face-id-from-being-racist-1819557448" title="" rel="noopener noreferrer" target="_blank">an Apple executive told Congress</a> that the company developed its facial-recognition software using more than a billion images, including facial scans collected in its own research studies.</p>

<p class="css-exrw3m evys1bk0">“We worked with participants from around the world to include a representative group of people accounting for gender, age, ethnicity and other factors,” the executive said.</p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil">🏠</a> •
<a href="/david/log/" title="Accès au flux RSS">🤖</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant>🇨🇦</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel">📮</a> •
<abbr title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340">🧚</abbr>
</p>
</footer>
<script src="/static/david/js/instantpage-3.0.0.min.js" type="module" defer></script>
</body>
</html>

+ 7
- 0
cache/2020/384b330b3de6f4f2bac8c81f0f04c404/index.md

@@ -0,0 +1,7 @@
title: Atlanta Asks Google Whether It Targeted Black Homeless People
url: https://www.nytimes.com/2019/10/04/technology/google-facial-recognition-atlanta-homeless.html
hash_url: 384b330b3de6f4f2bac8c81f0f04c404


+ 84
- 0
cache/2020/cf5eab15b9590499ccb6d989f50fe5e3/index.html

@@ -0,0 +1,84 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the <title>
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1,shrink-to-fit=no">
<!-- Required to make a valid HTML5 document. -->
<title>Against Black Inclusion in Facial Recognition (archive) — David Larlet</title>
<!-- Lightest blank gif, avoids an extra query to the server. -->
<link rel="icon" href="data:;base64,iVBORw0KGgo=">
<!-- Thank you Florens! -->
<link rel="stylesheet" href="/static/david/css/style_2020-01-09.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" crossorigin>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://digitaltalkingdrum.com/2017/08/15/against-black-inclusion-in-facial-recognition/">

<body class="remarkdown h1-underline h2-underline hr-center ul-star pre-tick">

<article>
<h1>Against Black Inclusion in Facial Recognition</h1>
<h2><a href="https://digitaltalkingdrum.com/2017/08/15/against-black-inclusion-in-facial-recognition/">Source originale du contenu</a></h2>
<p>By Nabil Hassein</p>

<p><span>Researchers have documented the frequent inability of facial recognition software to detect Black people’s faces due to programmers’ use of unrepresentative data to train machine learning models.</span><span>1</span><span> This issue is not unique, but systemic; in a related example, automated passport photo validation has registered Asian people’s open eyes as being closed.</span><span>2</span><span> Such technological biases have precedents in mediums older than software. For example, color photography was initially optimized for lighter skin tones at the expense of people with darker skin, a bias corrected mainly due to the efforts and funding of furniture manufacturers and chocolate sellers to render darker tones more easily visible in photographs — the better to sell their products.</span><span>3</span><span> Groups such as the Algorithmic Justice League have made it their mission to “highlight algorithmic bias” and “develop practices for accountability during the design, development, and deployment of coded systems”.</span><span>4</span><span> I support all of those goals abstractly, but at a concrete level, I question whose interests would truly be served by the deployment of automated systems capable of reliably identifying Black people.</span></p>
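<p>The mechanism behind that first claim can be shown with a deliberately toy sketch (pure Python, all numbers invented for illustration): a naive “detector” fit to a training set that is 95% one group will miss the underrepresented group far more often, even though the model itself contains no explicit rule about group membership.</p>

```python
import random

random.seed(0)

def sample(center, n):
    """Draw n toy one-dimensional 'face' feature values around a group's center."""
    return [random.gauss(center, 1.0) for _ in range(n)]

# Unrepresentative training set: 95% group A (around 0.0), 5% group B (around 5.0).
train = sample(0.0, 95) + sample(5.0, 5)

# Naive "detector": a face counts as detected if it lies close to the
# average face seen during training.
mean_face = sum(train) / len(train)
TOLERANCE = 3.0

def detected(x):
    return abs(x - mean_face) < TOLERANCE

test_a = sample(0.0, 1000)
test_b = sample(5.0, 1000)

miss_a = sum(not detected(x) for x in test_a) / len(test_a)
miss_b = sum(not detected(x) for x in test_b) / len(test_b)
print(f"miss rate, group A: {miss_a:.0%}")
print(f"miss rate, group B: {miss_b:.0%}")
```

<p>With these made-up numbers the learned “average face” sits near group A, so group A is almost always detected while group B is almost always missed; the disparity comes entirely from the composition of the training data.</p>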

<p><span>As a longtime programmer, I know first-hand that software can be an unwelcoming medium for Black folks, not only because of racism among programmers, but also because of biases built into code, which programmers can hardly avoid as no other foundations exist to build on. It’s easy for me to understand a desire to rid software of these biases. Just last month, I wrote up a sketch of a proposal to decolonize the Pronouncing software library</span><span>5</span><span> I used in a simple art project to generate rhymes modeled on those of my favorite rapper.</span><span>6</span><span> So I empathized when I heard Joy Buolamwini of the Algorithmic Justice League speak on wearing a white mask to get her own highly imaginative “Aspire Mirror” project involving facial recognition to perceive her existence.</span><span>7</span><span> Modern technology has rendered literal Frantz Fanon’s metaphor of “Black Skin, White Masks”.</span><span>8</span></p>

<p><span>Facial recognition has diverse applications, but as a police and prison abolitionist, the enhancement of state-controlled surveillance cameras (including police body cameras) to automatically identify people looms much larger in my mind than any other use.</span><span>9</span><span> Researchers at Georgetown University found that fully half of American adults, or over 100 million people, are registered in one or another law enforcement facial recognition database, drawing from sources such as driver’s license photos.</span><span>10</span><span> Baltimore Police used the technology to identify participants in the uprising following the murder of Freddie Gray.</span><span>11</span><span> The US government plans to use facial recognition to identify every airline passenger exiting the United States.</span><span>12</span><span> Machine learning researchers have even reinvented the racist pseudoscience of physiognomy, in a study claiming to identify criminals with approximately 90% accuracy based on their faces alone — using data provided by police.</span><span>13</span></p>

<p><span>I consider it obvious that most if not all data collected by police to serve their inherently racist mission will be severely biased. It is equally clear to me that no technology under police control will be used to hold police accountable or to benefit Black folks or other oppressed people. Even restricting our attention to machine learning in the so-called “justice” system, examples abound of technology used to harm us, such as racist predictive models used by the courts to determine bail and sentencing decisions — matters of freedom and captivity, life and death.</span><span>14</span><span> Accordingly, I have no reason to support the development or deployment of technology which makes it easier for the state to recognize and surveil members of my community. Just the opposite: by refusing to don white masks, we may be able to gain some temporary advantages by partially obscuring ourselves from the eyes of the white supremacist state. The reality for the foreseeable future is that the people who control and deploy facial recognition technology at any consequential scale will predominantly be our oppressors. Why should we desire our faces to be legible for efficient automated processing by systems of their design? We could demand instead that police be forbidden to use such unreliable surveillance technologies. Anti-racist technologists could engage in high-tech direct action by using the limited resources at our disposal to further develop extant techniques for tricking machine learning models into misclassifications,</span><span>15</span><span> or distributing anti-surveillance hardware such as glasses designed to obscure the wearer’s face from cameras.</span><span>16</span></p>
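<p>The “tricking machine learning models into misclassifications” cited above (footnote 15) refers to adversarial examples. A minimal sketch of the idea, against a made-up linear scorer (the weights, input, and step size are all illustrative assumptions, not taken from any cited system): nudging each input feature a small step against the sign of its weight is enough to flip the model’s decision.</p>

```python
# Toy linear "recognizer": a positive score means the face is a match.
# The weights and bias are invented for illustration only.
w = [0.8, -0.5, 1.2]
b = -0.2

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

x = [1.0, 0.0, 1.0]  # score(x) = 0.8 + 0.0 + 1.2 - 0.2 = 1.8, a "match"

# Adversarial perturbation in the spirit of the fast gradient sign method:
# step each feature by eps against the sign of its weight, pushing the
# score down while changing the input only slightly.
eps = 0.8
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]

print("original score:", score(x))
print("perturbed score:", score(x_adv))
```

<p>The perturbed input differs from the original by less than one unit per feature, yet the score crosses from positive to negative, which is the effect anti-surveillance techniques of this kind exploit.</p>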

<p><span>This analysis clearly contradicts advocacy of “diversity and inclusion” as the universal or even typical response to bias. Among the political class, “Black faces in high places” have utterly failed to produce gains for the Black masses.</span><span>17</span><span> Similarly, Black cops have shown themselves just as likely as white cops to engage in racist brutality and murder.</span><span>18</span><span> Why should the inclusion of Black folks in facial recognition, or for that matter, the racist technology industry be different? Systemic oppression cannot be addressed by a change in the complexion of the oppressor, as though a rainbow 1% and more white people crowding the prisons would mean justice. That’s not the world I want to live in. We must imagine and build a future of real freedom.</span></p>

<p><span>All of the arguments I’ve presented could be (and have been) applied to many domains beyond facial recognition. I continue to grapple with what that means for my own work as a technologist and a political organizer, but I am firm already in at least two conclusions. The first is that despite every disadvantage, we must reappropriate oppressive technology for emancipatory purposes. The second is that the liberation of Black folks and all oppressed peoples will never be achieved by inclusion in systems controlled by a capitalist elite which benefits from the perpetuation of racism and related oppressions. It can only be achieved by the destruction of those systems, and the construction of new technologies designed, developed, and deployed by our own communities for our own benefit. The struggle for liberation is not a struggle for diversity and inclusion — it is a struggle for decolonization, reparations, and self-determination. We can realize those aspirations only in a socialist world.</span></p>

<p><i><span>Nabil Hassein is a software developer and organizer based in Brooklyn, NY.</span></i></p>

<ol>
<li><a href="https://www.digitaltrends.com/photography/google-apologizes-for-misidentifying-a-black-couple-as-gorillas-in-photos-app/"><span>https://www.digitaltrends.com/photography/google-apologizes-for-misidentifying-a-black-couple-as-gorillas-in-photos-app/</span></a><span>;</span><a href="https://www.theguardian.com/technology/2017/may/28/joy-buolamwini-when-algorithms-are-racist-facial-recognition-bias"> <span>https://www.theguardian.com/technology/2017/may/28/joy-buolamwini-when-algorithms-are-racist-facial-recognition-bias</span></a><span>↩</span></li>
<li><a href="https://www.dailydot.com/irl/richard-lee-eyes-closed-facial-recognition/"><span>https://www.dailydot.com/irl/richard-lee-eyes-closed-facial-recognition/</span></a><span>↩</span></li>
<li><a href="https://petapixel.com/2015/09/19/heres-a-look-at-how-color-film-was-originally-biased-toward-white-people/"><span>https://petapixel.com/2015/09/19/heres-a-look-at-how-color-film-was-originally-biased-toward-white-people/</span></a><span>↩</span></li>
<li><a href="https://www.ajlunited.org/"><span>https://www.ajlunited.org</span></a><span>↩</span></li>
<li><a href="https://pronouncing.readthedocs.io/en/latest/"><span>https://pronouncing.readthedocs.io/en/latest/</span></a><span>↩</span></li>
<li><a href="https://nabilhassein.github.io/blog/generative-doom/"><span>https://nabilhassein.github.io/blog/generative-doom/</span></a><span>↩</span></li>
<li><a href="https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms/"><span>https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms/</span></a><span>↩</span></li>
<li><span>Frantz Fanon: “Black Skin, White Masks”.↩</span></li>
<li><a href="https://theintercept.com/2017/03/22/real-time-face-recognition-threatens-to-turn-cops-body-cameras-into-surveillance-machines/"><span>https://theintercept.com/2017/03/22/real-time-face-recognition-threatens-to-turn-cops-body-cameras-into-surveillance-machines/</span></a><span>↩</span></li>
<li><a href="https://www.law.georgetown.edu/news/press-releases/half-of-all-american-adults-are-in-a-police-face-recognition-database-new-report-finds.cfm"><span>https://www.law.georgetown.edu/news/press-releases/half-of-all-american-adults-are-in-a-police-face-recognition-database-new-report-finds.cfm</span></a><span>↩</span></li>
<li><a href="http://www.aclunc.org/docs/20161011_geofeedia_baltimore_case_study.pdf"><span>http://www.aclunc.org/docs/20161011_geofeedia_baltimore_case_study.pdf</span></a><span>↩</span></li>
<li><a href="https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp030-tvs-may2017.pdf"><span>https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp030-tvs-may2017.pdf</span></a><span>↩</span></li>
<li><a href="https://www.rt.com/news/368307-facial-recognition-criminal-china/"><span>https://www.rt.com/news/368307-facial-recognition-criminal-china/</span></a><span>↩</span></li>
<li><a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing"><span>https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing</span></a><span>↩</span></li>
<li><a href="https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture"><span>https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture</span></a><span>;</span><a href="https://cvdazzle.com/"> <span>https://cvdazzle.com/</span></a><span>↩</span></li>
<li><a href="https://blogs.wsj.com/japanrealtime/2015/08/07/eyeglasses-with-face-un-recognition-function-to-debut-in-japan/"><span>https://blogs.wsj.com/japanrealtime/2015/08/07/eyeglasses-with-face-un-recognition-function-to-debut-in-japan/</span></a><span>↩</span></li>
<li><span>Keeanga-Yamahtta Taylor: “From #BlackLivesMatter to Black Liberation”, Chapter 3, “Black Faces in High Places”.↩</span></li>
<li><a href="https://mic.com/articles/118290/it-s-time-to-talk-about-the-black-police-officers-who-killed-freddie-gray"><span>https://mic.com/articles/118290/it-s-time-to-talk-about-the-black-police-officers-who-killed-freddie-gray</span></a><span>↩</span></li>
</ol>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil">🏠</a> •
<a href="/david/log/" title="Accès au flux RSS">🤖</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant>🇨🇦</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel">📮</a> •
<abbr title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340">🧚</abbr>
</p>
</footer>
<script src="/static/david/js/instantpage-3.0.0.min.js" type="module" defer></script>
</body>
</html>

+ 32
- 0
cache/2020/cf5eab15b9590499ccb6d989f50fe5e3/index.md

@@ -0,0 +1,32 @@
title: Against Black Inclusion in Facial Recognition
url: https://digitaltalkingdrum.com/2017/08/15/against-black-inclusion-in-facial-recognition/
hash_url: cf5eab15b9590499ccb6d989f50fe5e3

<p>By Nabil Hassein</p>
<p><span>Researchers have documented the frequent inability of facial recognition software to detect Black people’s faces due to programmers’ use of unrepresentative data to train machine learning models.</span><span>1</span><span> This issue is not unique, but systemic; in a related example, automated passport photo validation has registered Asian people’s open eyes as being closed.</span><span>2</span><span> Such technological biases have precedents in mediums older than software. For example, color photography was initially optimized for lighter skin tones at the expense of people with darker skin, a bias corrected mainly due to the efforts and funding of furniture manufacturers and chocolate sellers to render darker tones more easily visible in photographs — the better to sell their products.</span><span>3</span><span> Groups such as the Algorithmic Justice League have made it their mission to “highlight algorithmic bias” and “develop practices for accountability during the design, development, and deployment of coded systems”.</span><span>4</span><span> I support all of those goals abstractly, but at a concrete level, I question whose interests would truly be served by the deployment of automated systems capable of reliably identifying Black people.</span></p>
<p><span>As a longtime programmer, I know first-hand that software can be an unwelcoming medium for Black folks, not only because of racism among programmers, but also because of biases built into code, which programmers can hardly avoid as no other foundations exist to build on. It’s easy for me to understand a desire to rid software of these biases. Just last month, I wrote up a sketch of a proposal to decolonize the Pronouncing software library</span><span>5</span><span> I used in a simple art project to generate rhymes modeled on those of my favorite rapper.</span><span>6</span><span> So I empathized when I heard Joy Buolamwini of the Algorithmic Justice League speak on wearing a white mask to get her own highly imaginative “Aspire Mirror” project involving facial recognition to perceive her existence.</span><span>7</span><span> Modern technology has rendered literal Frantz Fanon’s metaphor of “Black Skin, White Masks”.</span><span>8</span></p>
<p><span>Facial recognition has diverse applications, but as a police and prison abolitionist, the enhancement of state-controlled surveillance cameras (including police body cameras) to automatically identify people looms much larger in my mind than any other use.</span><span>9</span><span> Researchers at Georgetown University found that fully half of American adults, or over 100 million people, are registered in one or another law enforcement facial recognition database, drawing from sources such as driver’s license photos.</span><span>10</span><span> Baltimore Police used the technology to identify participants in the uprising following the murder of Freddie Gray.</span><span>11</span><span> The US government plans to use facial recognition to identify every airline passenger exiting the United States.</span><span>12</span><span> Machine learning researchers have even reinvented the racist pseudoscience of physiognomy, in a study claiming to identify criminals with approximately 90% accuracy based on their faces alone — using data provided by police.</span><span>13</span></p>
<p><span>I consider it obvious that most if not all data collected by police to serve their inherently racist mission will be severely biased. It is equally clear to me that no technology under police control will be used to hold police accountable or to benefit Black folks or other oppressed people. Even restricting our attention to machine learning in the so-called “justice” system, examples abound of technology used to harm us, such as racist predictive models used by the courts to determine bail and sentencing decisions — matters of freedom and captivity, life and death.</span><span>14</span><span> Accordingly, I have no reason to support the development or deployment of technology which makes it easier for the state to recognize and surveil members of my community. Just the opposite: by refusing to don white masks, we may be able to gain some temporary advantages by partially obscuring ourselves from the eyes of the white supremacist state. The reality for the foreseeable future is that the people who control and deploy facial recognition technology at any consequential scale will predominantly be our oppressors. Why should we desire our faces to be legible for efficient automated processing by systems of their design? We could demand instead that police be forbidden to use such unreliable surveillance technologies. Anti-racist technologists could engage in high-tech direct action by using the limited resources at our disposal to further develop extant techniques for tricking machine learning models into misclassifications,</span><span>15</span><span> or distributing anti-surveillance hardware such as glasses designed to obscure the wearer’s face from cameras.</span><span>16</span></p>
<p><span>This analysis clearly contradicts advocacy of “diversity and inclusion” as the universal or even typical response to bias. Among the political class, “Black faces in high places” have utterly failed to produce gains for the Black masses.</span><span>17</span><span> Similarly, Black cops have shown themselves just as likely as white cops to engage in racist brutality and murder.</span><span>18</span><span> Why should the inclusion of Black folks in facial recognition, or for that matter in the racist technology industry, be different? Systemic oppression cannot be addressed by a change in the complexion of the oppressor, as though a rainbow 1% and more white people crowding the prisons would mean justice. That’s not the world I want to live in. We must imagine and build a future of real freedom.</span></p>
<p><span>All of the arguments I’ve presented could be (and have been) applied to many domains beyond facial recognition. I continue to grapple with what that means for my own work as a technologist and a political organizer, but I am firm already in at least two conclusions. The first is that despite every disadvantage, we must reappropriate oppressive technology for emancipatory purposes. The second is that the liberation of Black folks and all oppressed peoples will never be achieved by inclusion in systems controlled by a capitalist elite which benefits from the perpetuation of racism and related oppressions. It can only be achieved by the destruction of those systems, and the construction of new technologies designed, developed, and deployed by our own communities for our own benefit. The struggle for liberation is not a struggle for diversity and inclusion — it is a struggle for decolonization, reparations, and self-determination. We can realize those aspirations only in a socialist world.</span></p>
<p><i><span>Nabil Hassein is a software developer and organizer based in Brooklyn, NY.</span></i></p>
<ol>
<li><a href="https://www.digitaltrends.com/photography/google-apologizes-for-misidentifying-a-black-couple-as-gorillas-in-photos-app/"><span>https://www.digitaltrends.com/photography/google-apologizes-for-misidentifying-a-black-couple-as-gorillas-in-photos-app/</span></a><span>;</span><a href="https://www.theguardian.com/technology/2017/may/28/joy-buolamwini-when-algorithms-are-racist-facial-recognition-bias"> <span>https://www.theguardian.com/technology/2017/may/28/joy-buolamwini-when-algorithms-are-racist-facial-recognition-bias</span></a><span>↩</span></li>
<li><a href="https://www.dailydot.com/irl/richard-lee-eyes-closed-facial-recognition/"><span>https://www.dailydot.com/irl/richard-lee-eyes-closed-facial-recognition/</span></a><span>↩</span></li>
<li><a href="https://petapixel.com/2015/09/19/heres-a-look-at-how-color-film-was-originally-biased-toward-white-people/"><span>https://petapixel.com/2015/09/19/heres-a-look-at-how-color-film-was-originally-biased-toward-white-people/</span></a><span>↩</span></li>
<li><a href="https://www.ajlunited.org/"><span>https://www.ajlunited.org</span></a><span>↩</span></li>
<li><a href="https://pronouncing.readthedocs.io/en/latest/"><span>https://pronouncing.readthedocs.io/en/latest/</span></a><span>↩</span></li>
<li><a href="https://nabilhassein.github.io/blog/generative-doom/"><span>https://nabilhassein.github.io/blog/generative-doom/</span></a><span>↩</span></li>
<li><a href="https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms/"><span>https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms/</span></a><span>↩</span></li>
<li><span>Frantz Fanon: “Black Skin, White Masks”.↩</span></li>
<li><a href="https://theintercept.com/2017/03/22/real-time-face-recognition-threatens-to-turn-cops-body-cameras-into-surveillance-machines/"><span>https://theintercept.com/2017/03/22/real-time-face-recognition-threatens-to-turn-cops-body-cameras-into-surveillance-machines/</span></a><span>↩</span></li>
<li><a href="https://www.law.georgetown.edu/news/press-releases/half-of-all-american-adults-are-in-a-police-face-recognition-database-new-report-finds.cfm"><span>https://www.law.georgetown.edu/news/press-releases/half-of-all-american-adults-are-in-a-police-face-recognition-database-new-report-finds.cfm</span></a><span>↩</span></li>
<li><a href="http://www.aclunc.org/docs/20161011_geofeedia_baltimore_case_study.pdf"><span>http://www.aclunc.org/docs/20161011_geofeedia_baltimore_case_study.pdf</span></a><span>↩</span></li>
<li><a href="https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp030-tvs-may2017.pdf"><span>https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp030-tvs-may2017.pdf</span></a><span>↩</span></li>
<li><a href="https://www.rt.com/news/368307-facial-recognition-criminal-china/"><span>https://www.rt.com/news/368307-facial-recognition-criminal-china/</span></a><span>↩</span></li>
<li><a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing"><span>https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing</span></a><span>↩</span></li>
<li><a href="https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture"><span>https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture</span></a><span>;</span><a href="https://cvdazzle.com/"> <span>https://cvdazzle.com/</span></a><span>↩</span></li>
<li><a href="https://blogs.wsj.com/japanrealtime/2015/08/07/eyeglasses-with-face-un-recognition-function-to-debut-in-japan/"><span>https://blogs.wsj.com/japanrealtime/2015/08/07/eyeglasses-with-face-un-recognition-function-to-debut-in-japan/</span></a><span>↩</span></li>
<li><span>Keeanga-Yamahtta Taylor: “From #BlackLivesMatter to Black Liberation”, Chapter 3, “Black Faces in High Places”.↩</span></li>
<li><a href="https://mic.com/articles/118290/it-s-time-to-talk-about-the-black-police-officers-who-killed-freddie-gray"><span>https://mic.com/articles/118290/it-s-time-to-talk-about-the-black-police-officers-who-killed-freddie-gray</span></a><span>↩</span></li>
</ol>

<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the <title>
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1,shrink-to-fit=no">
<!-- Required to make a valid HTML5 document. -->
<title>Technology Can't Fix Algorithmic Injustice (archive) — David Larlet</title>
<!-- Lightest blank gif, avoids an extra query to the server. -->
<link rel="icon" href="data:;base64,iVBORw0KGgo=">
<!-- Thank you Florens! -->
<link rel="stylesheet" href="/static/david/css/style_2020-01-09.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" crossorigin>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="http://bostonreview.net/science-nature-politics/annette-zimmermann-elena-di-rosa-hochan-kim-technology-cant-fix-algorithmic">

<body class="remarkdown h1-underline h2-underline hr-center ul-star pre-tick">

<article>
<h1>Technology Can't Fix Algorithmic Injustice</h1>
<h2><a href="http://bostonreview.net/science-nature-politics/annette-zimmermann-elena-di-rosa-hochan-kim-technology-cant-fix-algorithmic">Source originale du contenu</a></h2>
<p>We need greater democratic oversight of AI not just from developers and designers, but from all members of society.</p>

<p>A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly <a href="https://www.bbc.com/news/technology-30290540" target="_blank">expressed</a> their concerns about the advent of this kind of “strong” (or “general”) AI—and the associated existential risk that it may pose for humanity. In Hawking’s words, the development of strong AI “could spell the end of the human race.”</p>

<p><pullquote>
<p>Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.</p>
</pullquote></p>
<p>These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.</p>

<p>What responsibilities and obligations do we bear for AI’s social consequences <em>in the present</em>—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society.</p>

<p align="center">• • •</p>

<p>The first thing we must do is carefully scrutinize the arguments underpinning the present use of AI.</p>

<p>Some are optimistic that weak AI systems can contribute positively to social justice. Unlike humans, the argument goes, algorithms can avoid biased decision making, thereby achieving a level of neutrality and objectivity that is not humanly possible. A great deal of recent work has critiqued this presumption, including Safiya Noble’s <em>Algorithms of Oppression </em>(2018), Ruha Benjamin’s <em>Race After Technology</em> (2019)<em>,</em> Meredith Broussard’s <em>Artificial Unintelligence </em>(2018), Hannah Fry’s <em>Hello World </em>(2018), Virginia Eubanks’s <em>Automating Inequality </em>(2018), Sara Wachter-Boettcher’s <em>Technically Wrong</em> (2017), and Cathy O’Neil’s <em>Weapons of Math Destruction </em>(2016). As these authors emphasize, there is a wealth of empirical evidence showing that the use of AI systems can often replicate historical and contemporary conditions of injustice, rather than alleviate them.</p>

<p>In response to these criticisms, practitioners have focused on optimizing the <em>accuracy</em> of AI systems in order to achieve ostensibly objective, neutral decision outcomes. Such optimists concede that algorithmic systems are not neutral <em>at present</em>, but argue that they can be made neutral in the future, ultimately rendering their deployment morally and politically unobjectionable. As IBM Research, one of the many corporate research hubs focused on AI technologies, <a href="https://www.research.ibm.com/5-in-5/ai-and-bias/" target="_blank">proclaims</a>:</p>

<blockquote>
<p>AI bias will explode. But only the unbiased AI will survive. Within five years, the number of biased AI systems and algorithms will increase. But we will deal with them accordingly—coming up with new solutions to control bias in AI and champion AI systems free of it.</p>
</blockquote>

<p>The emerging computer science subfield of FAT ML (“fairness, accountability and transparency in machine learning”) includes a number of important contributions in this direction—described in an accessible way in two new books: Michael Kearns and Aaron Roth’s <em>The Ethical Algorithm</em> (2019) and Gary Marcus and Ernest Davis’s <em>Rebooting AI: Building Artificial Intelligence We Can Trust </em>(2019). Kearns and Roth, for example, write:</p>

<blockquote>
<p>We . . . believe that curtailing algorithmic misbehavior will itself require more and better algorithms—algorithms that can assist regulators, watchdog groups, and other human organizations to monitor and measure the undesirable and unintended effects of machine learning.</p>
</blockquote>

<p>There are serious limitations, however, to what we might call this <em>quality control</em> approach to algorithmic bias. Algorithmic fairness, as the term is currently used in computer science, often describes a rather limited value or goal, which political philosophers might call “procedural fairness”: the application of the same impartial decision rules, and the use of the same kind of data, for each individual subject to algorithmic assessment. A more “substantive” approach to fairness, by contrast, would intervene in decision <em>outcomes</em> and their impact on society (rather than in decision <em>processes</em> only) in order to render the former more just.</p>

<p>Even if code is modified with the aim of securing procedural fairness, however, we are left with the deeper philosophical and political issue of whether neutrality constitutes fairness in background conditions of pervasive inequality and structural injustice. Purportedly neutral solutions in the context of widespread injustice risk further entrenching existing injustices. As many critics have pointed out, even if algorithms achieve some sort of neutrality in themselves, the data that these algorithms learn from is still riddled with prejudice. In short, the data we have—and thus the data that gets fed into the algorithm—is neither the data we need nor the data we deserve. Thus, the cure for algorithmic bias may not be more, or better, algorithms. There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.</p>

<p><pullquote>
<p>There is a wealth of empirical evidence showing that the use of AI systems can often replicate historical and contemporary conditions of injustice, rather than alleviate them.</p>
</pullquote></p>
<p>For a concrete example, consider the machine learning systems used in predictive policing, whereby historical crime rate data is fed into algorithms in order to predict future geographic distributions of crime. The algorithms flag certain neighborhoods as prone to violent crime. On that basis, police departments make decisions about where to send their officers and how to allocate resources. While the concept of predictive policing is worrisome for a number of reasons, one common defense of the practice is that AI systems are uniquely “neutral” and “objective,” compared to their human counterparts. On the face of it, it might seem preferable to take decision making power out of the hands of biased police departments and police officers. But what if the data itself is biased, so that even the “best” algorithm would yield biased results?</p>

<p>This is not a hypothetical scenario: predictive policing algorithms are fed historical crime rate data that we know is biased. We know that marginalized communities—in particular black, indigenous, and Latinx communities—have been overpoliced. Given that more crimes are discovered and more arrests are made under conditions of disproportionately high police presence, the associated data is skewed. The problem is one of overrepresentation: particular communities feature disproportionately highly in crime activity data in part because of how (unfairly) closely they have been surveilled, and how inequitably laws have been enforced.</p>

<p>It should come as no surprise, then, that these algorithms make predictions that mirror past patterns. This new data is then fed back into the technological model, creating a pernicious feedback loop in which social injustice is not only replicated, but in fact further entrenched. It is also worth noting that the same communities that have been overpoliced have been severely neglected, both intentionally and unintentionally, in many other areas of social and political life. While they are overrepresented in crime rate data sets, they are underrepresented in many other data sets (e.g. those concerning educational achievement).</p>
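<p>The perpetuation mechanism described above can be sketched in a few lines of Python. This is a hypothetical toy model with invented numbers, not a description of any deployed system: two districts have the same true crime rate, but one starts out more heavily patrolled, and recorded incidents scale with patrol presence rather than with underlying crime.</p>

```python
# Toy model (all numbers invented): two districts with the SAME underlying
# crime rate, but an initially skewed patrol allocation.
TRUE_RATE = [0.1, 0.1]     # identical true rates in districts 0 and 1
patrols = [80.0, 20.0]     # district 0 starts out over-policed

for step in range(20):
    # Recorded incidents scale with patrol presence, not with true crime:
    # you only find crime where you look for it.
    recorded = [patrols[d] * TRUE_RATE[d] for d in (0, 1)]
    total = sum(recorded)
    # "Predictive" reallocation: send patrols where past records show crime.
    patrols = [100.0 * recorded[d] / total for d in (0, 1)]

print(patrols)  # still [80.0, 20.0]: the initial skew is a fixed point
```

<p>Under these assumptions the biased allocation is a fixed point: the model never “discovers” that the true rates are equal, no matter how many rounds it runs. Stochastic variants of the same loop can behave even worse, with one district eventually absorbing all patrols.</p>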

<p>Structural injustice thus yields biased data through a variety of mechanisms—prominently including under- and overrepresentation—and worrisome feedback loops result. Even if the quality control problems associated with an algorithm’s decision rules were resolved, we would be left with a more fundamental problem: these systems would still be learning from and relying on data born out of conditions of pervasive and long-standing injustice.</p>

<p>Conceding that these issues pose genuine problems for the possibility of a truly neutral algorithm, some might advocate for implementing countermeasures to correct for the bias in the data—a purported equalizer at the algorithmic level. While this may well be an important step in the right direction, it does not amount to a satisfactory solution on its own. Countermeasures might be able to help account for the over- and underrepresentation issues in the data, but they cannot correct for the problem of what <em>kind</em> of data has been collected in the first place.</p>

<p><pullquote>
<p>The data we have—and thus the data that gets fed into the algorithm—is often neither the data we need nor the data we deserve.</p>
</pullquote></p>
<p>Consider, for instance, another controversial application of weak AI: algorithmic risk scoring in the criminal justice process, which has been shown to lead to racially biased outcomes. As a well-known <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing" target="_blank">study</a> by ProPublica showed in 2016, one such algorithm classified black defendants as having a “high recidivism risk” at disproportionately higher rates in comparison to white defendants even after controlling for variables such as the type and severity of the crime committed. As ProPublica put it, “prediction fails differently for black defendants”—in other words, algorithmic predictions did a much worse job of accurately predicting recidivism rates for black defendants, compared to white defendants, given the crimes that individual defendants had previously committed; black defendants who were in fact low risk were much more likely to receive a high-risk score than similar white defendants. Often, such algorithmic systems rely on socio-demographic data, such as age, gender, educational background, residential stability, and familial arrest record. Even though the algorithm in this case does not explicitly rely on race as a variable, these other socio-demographic features can function as proxies for race. The result is a digital form of redlining, or, as computer scientists call it, “redundant encoding.”</p>
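<p>The finding that “prediction fails differently” for different groups is a claim about error rates computed per group. With invented confusion counts (these are illustrative numbers, not ProPublica’s data), the calculation looks like this: the same decision rule can produce very different false positive rates for two groups.</p>

```python
# Illustrative per-group confusion counts for a "high risk" label.
# All numbers are invented for this sketch.
counts = {
    # group: (true_pos, false_pos, true_neg, false_neg)
    "A": (300, 200, 500, 100),
    "B": (300, 80, 620, 100),
}

def false_positive_rate(tp, fp, tn, fn):
    # Share of people who did NOT reoffend but were labeled high risk.
    return fp / (fp + tn)

for group, c in counts.items():
    print(group, round(false_positive_rate(*c), 3))
# Non-reoffenders in group A are labeled "high risk" far more often,
# even though the same rule and the same kind of data were used for both.
```

<p>Procedural fairness, in other words, is compatible with grossly unequal error burdens across groups; seeing the disparity requires looking at outcomes, not just at the decision process.</p>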

<p>In response to this problem, some states—including New Jersey—recently implemented more minimalist algorithmic risk scoring systems, relying purely on behavioral data, such as arrest records. The goal of such systems is to prevent redundant encoding by reducing the amount of socio-demographic information fed into the algorithm. However, given that communities of color are policed disproportionately heavily, and, in turn, arrested at disproportionately high rates, “purely behavioral” data about arrest history, for example, is still heavily raced (and classed, and gendered). Thus, the redundant encoding problem is not in fact solved. New Jersey’s example, among many others, shows that abstracting away from the social circumstances of defendants does not lead to true impartiality.</p>
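<p>The failure of “purely behavioral” data can be illustrated with a small simulation (all numbers invented): a decision rule that sees only recorded arrests, never group membership, still splits outcomes by group, because the arrest record itself encodes policing intensity.</p>

```python
# Hedged sketch of "redundant encoding": the rule below never sees group
# membership, only arrest counts -- yet outcomes still split by group,
# because arrests reflect how closely each group is policed.
import random

random.seed(0)

def recorded_arrests(policing_intensity, true_offenses=2):
    # Both groups commit the same (small) number of offenses; an offense
    # enters the record only if police happen to observe it.
    return sum(1 for _ in range(true_offenses)
               if random.random() < policing_intensity)

people = [("A", recorded_arrests(0.9)) for _ in range(1000)] + \
         [("B", recorded_arrests(0.3)) for _ in range(1000)]

# "Purely behavioral" rule: flag anyone with at least one recorded arrest.
flag_rate = {}
for g in ("A", "B"):
    flagged = sum(1 for grp, a in people if grp == g and a >= 1)
    flag_rate[g] = flagged / 1000

print(flag_rate)  # group A is flagged far more often despite equal behavior
```

<p>Dropping the group column from the data changes nothing here: the disparity travels through the proxy variable, which is the point of the “redundant encoding” critique.</p>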

<p align="center">• • •</p>

<p>In light of these issues, any approach focused on optimizing for procedural fairness—without attention to the social context in which these systems operate—is going to be insufficient. Algorithmic design cannot be fixed in isolation. Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?” We must carefully examine the relationship and contribution of AI systems to existing configurations of political and social injustice, lest these systems continue to perpetuate those very conditions under the guise of neutrality. As many critical race theorists and feminist philosophers have argued, neutral solutions might well secure just outcomes in a <em>just</em> society, but only serve to preserve the status quo in an unjust one.</p>

<p>What, then, <em>does</em> algorithmic and AI fairness require, when we attend to the place of this technology in society at large?</p>

<p>The first—but far from only—step is transparency about the choices that go into AI development and the responsibilities that such choices present.</p>

<p><pullquote>
<p>There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.</p>
</pullquote></p>
<p>Some might be inclined to absolve AI developers and researchers of moral responsibility despite their expertise on the potential risks of deploying these technologies. After all, the thought goes, if they followed existing regulations and protocols and made use of the best available information and data sets, how can they be held responsible for any errors and accidents that they did not foresee? On this view, such are the inevitable, necessary costs of technological advancement—growing pains that we will soon forget as the technology improves over time.</p>

<p>One significant problem with this quietism is the assumption that existing regulations and research are adequate for ethical deployment of AI—an assumption that even industry leaders themselves, <a href="https://www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html" target="_blank">including</a> Microsoft’s president and chief legal officer Brad Smith, have conceded is unrealistic.</p>

<p>The biggest problem with this picture, though, is its inaccurate portrayal of how AI systems are developed and deployed. Developing algorithmic systems entails making many deliberate choices. For example, machine learning algorithms are often “trained” to navigate massive data sets by making use of certain pre-defined key concepts or variables, such as “creditworthiness” or “high-risk individual.” The algorithm does not define these concepts itself; human beings—developers and data scientists—choose which concepts to appeal to, at least as an initial starting point. It is implausible to think that these choices are not informed by cultural and social context—a context deeply shaped by a history of inequality and injustice. The variables that tech practitioners choose to include, in turn, significantly influence how the algorithm processes the data and the recommendations it ultimately makes.</p>

<p>Making choices about the concepts that underpin algorithms is not a purely technological problem. For instance, a developer of a predictive policing algorithm inevitably makes choices that determine which members of the community will be affected and how. Making the <em>right</em> choices in this context is as much a moral enterprise as it is a technical one. This is no less true when the exact consequences are difficult even for developers to foresee. New pharmaceutical products often have unexpected side effects, but that is precisely why they undergo extensive rounds of controlled testing and trials before they are approved for use—not to mention the possibility of recall in cases of serious, unforeseen defect.</p>

<p>Unpredictability is thus not an excuse for moral quiescence when the stakes are so high. If AI technology really is unpredictable, this only presents <em>more</em> reason for caution and moderation in deploying these technologies. Such caution is particularly called for when the AI is used to perform such consequential tasks as allocating the resources of a police department or evaluating the creditworthiness of first-time homebuyers.</p>

<p><pullquote>
<p>Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?”</p>
</pullquote></p>
<p>To say that these choices are deliberate is not to suggest that their negative consequences are always, or even often, intentional. There may well be some clearly identifiable “bad” AI developers who must be stopped, but our larger point is that all developers, in general, know enough about these technologies to be regarded as complicit in their outcomes—a point that is obscured when we act as if AI technology is already escaping human control. We must resist the common tendency to think that an AI-driven world means that we are freed not only from making choices, but also from having to scrutinize and evaluate these automated choices in the way that we typically do with human decisions. (This psychological tendency to trust the outputs of an automated decision making system is what researchers call “automation bias.”)</p>

<p>Complicity here means that the responsibility for AI is shared by individuals involved in its development and deployment, regardless of their particular intentions, simply because they know <em>enough </em>about the potential harms. As computer scientist Joshua Kroll has <a href="https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0084?url_ver=Z39.88-2003&amp;rfr_id=ori%3Arid%3Acrossref.org&amp;rfr_dat=cr_pub%3Dpubmed&amp;" target="_blank">argued</a>, “While structural inscrutability frustrates users and oversight entities, system creators and operators always determine that the technologies they deploy are fit for certain uses, making no system wholly inscrutable.”</p>

<p>The apocalypse-saturated discourse on AI, by contrast, encourages a mentality of learned helplessness. The popular perception that strong AI will eventually grow out of our control risks becoming a self-fulfilling prophecy, despite the present reality that weak AI is very much the product of human deliberation and decision making. Avoiding learned helplessness and automation bias will require adopting a model of responsibility that recognizes that a variety of (even well-intentioned) agents must share the responsibility for AI given their role in its development and deployment.</p>

<p align="center">• • •</p>

<p>In the end, the responsible development and deployment of weak AI will involve not just developers and designers, but the public at large. This means that we need, among other things, to scrutinize current narratives about AI’s potential costs and benefits. As we have argued, AI’s alleged neutrality and inevitability are harmful, yet pervasive, myths. Debunking them will require an ongoing process of public, democratic contestation about the social, political, and moral dimensions of algorithmic decision making.</p>

<p><pullquote>
<p>We must resist the apocalypse-saturated discourse on AI that encourages a mentality of learned helplessness.</p>
</pullquote></p>
<p>This is not an unprecedented proposal: similar suggestions have been made by philosophers and activists seeking to address other complex, collective moral problems, such as climate change and sweatshop labor. Just as their efforts have helped raise public awareness and spark political debate about those issues, it is high time for us as a public to take seriously our responsibilities for the present and looming social consequences of AI. Algorithmic bias is not a purely technical problem for researchers and tech practitioners; we must recognize it as a moral and political problem in which all of us—as democratic citizens—have a stake. Responsibility cannot simply be offloaded and outsourced to tech developers and private corporations.</p>

<p>This also means that we need, in part, to think critically about government decisions to procure machine learning tools from private corporations—especially because these tools are subsequently used to partially automate decisions that were previously made by democratically authorized, if not directly elected, public officials. But we will also have to ask uncomfortable questions about our own role as a public in authorizing and contesting the use of AI technologies by corporations and the state. Citizens must come to view issues surrounding AI as a collective problem <em>for all of us </em>rather than a technical problem <em>just for them. </em>Our proposal is aligned with an emerging “second wave” of thinking about algorithmic accountability, as legal scholar Frank Pasquale <a href="https://lpeblog.org/2019/11/25/the-second-wave-of-algorithmic-accountability/" target="_blank">puts</a> it: a perspective which critically questions “whether [certain algorithmic systems] should be used at all—and, if so, who gets to govern them,” rather than asking how such systems might be improved in order to make them more fair. This “second” perspective has immediate implications for how we ought to think about the relationship between democratic power and AI. As Julia Powles and Helen Nissenbaum <a href="https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53" target="_blank">emphasize</a>, “Any AI system that is integrated into people’s lives must be capable of contest, account, and redress to citizens and representatives of the public interest.”</p>

<p>If using algorithmic decision making means making deliberate choices for which we are all on the hook, how exactly should we as a democratic society respond? What is our role, as citizens, in shaping technology for a more just society?</p>

<p>One might be tempted to think that the unprecedented novelty of machine learning can be regulated effectively only if we manage to create entirely new democratic procedures and institutions. A range of such measures has been proposed: creating new governmental departments (such as the new Department of Technology proposed by Democratic presidential candidate Andrew Yang), new laws (such as <a href="http://www.europarl.europa.eu/RegData/etudes/STUD/2019/624262/EPRS_STU(2019)624262_EN.pdf" target="_blank">consumer protection laws</a>, enforced by a kind of “FDA for algorithms”), as well as (rather weak) new measures for voluntary individual self-regulation, such as a Hippocratic oath for developers.</p>

<p>One problem with these proposals is that they reinforce learned helplessness about algorithmic bias; they suggest that until we implement large-scale, complex institutional and social change, we will be unable to address algorithmic bias in any meaningful way. But there is another way to think about algorithmic accountability. Rather than advocating for entirely new democratic institutions and procedures, why not first try to shift existing democratic <em>agendas</em>? Democratic agenda-setting means enabling citizens to contest and control the concepts that underpin algorithmic decision rules and to deliberate about whether algorithmic decision-making ought to be used in a particular domain in the first place.</p>

<pullquote>
<p>Citizens must come to view issues surrounding AI as a collective problem <em>for all of us </em>rather than a technical problem <em>just for them.</em></p>
</pullquote>
<p>San Francisco, for example, recently banned the use of facial recognition tools in policing due to the growing body of empirical evidence of algorithmic bias in law enforcement technology; the city’s board of supervisors passed the “Stop Secret Surveillance” ordinance in an 8-1 vote. The ordinance, which states that “the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring,” also establishes more wide-ranging accountability mechanisms beyond the use of facial recognition technology in policing, such as a requirement that city agencies obtain approval before purchasing and implementing other types of surveillance technology.</p>

<p>Broaching questions of algorithmic justice via the democratic process would give members of communities most impacted by algorithmic bias more direct democratic power over crucial decisions concerning weak AI—not merely after its deployment, but also at the design stage. San Francisco’s new ordinance exemplifies the importance of providing meaningful opportunities for bottom-up democratic deliberation about, and democratic contestation of, algorithmic tools ideally <em>before</em> they are deployed: “Decisions regarding if and how surveillance technologies should be funded, acquired, or used, and whether data from such technologies should be shared, should be made only after meaningful public input has been solicited and given significant weight.”</p>

<p>Local, bottom-up procedures akin to San Francisco’s model can and should be implemented in other communities, rather than simply waiting for the creation of more comprehensive, top-down regulatory institutions. Significant public oversight over AI development and deployment is already possible. Rather than allowing tech practitioners to navigate the ethics of AI by themselves, we the public should be included in decisions about whether and how AI will be deployed and to what ends. Furthermore, even once we do create new, larger-scale democratic institutions empowered to legally regulate AI in a meaningful way, bottom-up democratic procedures will still be essential: they play a crucial role in identifying which agendas such institutions ought to pursue, and they can shine a light on whose interests are most affected by emerging technologies.</p>

<p>Moving to this agenda-setting approach means incorporating an <em>ex ante</em> perspective when we think about algorithmic accountability, rather than resigning ourselves to a callous, <em>ex post</em>, “wait and see” attitude. When we wait and see, we let corporations and practitioners take the lead and set the terms. In the meantime, we expect those who are already experiencing significant social injustice to continue bearing its burden.</p>

<pullquote>
<p>To take full responsibility for how technology shapes our lives, we will have to make the deployment of AI democratically contestable by putting it on our democratic agendas.</p>
</pullquote>
<p>Of course, shifting democratic agendas toward decisions about algorithmic <em>tools</em> will not entirely resolve the problem of algorithmic <em>bias</em>. To take full responsibility for how technology shapes our lives going forward, we will have to make the deployment of weak AI democratically contestable by putting it on our democratic agendas. That being said, we will eventually have to <em>combine</em> that strategy with an effort to establish institutions that can enforce the just and equitable use of technology after it has been deployed.</p>

<p>In other words, a democratic critique of algorithmic injustice requires both an <em>ex ante</em> and an <em>ex post</em> perspective. In order for us to start thinking about <em>ex post</em> accountability in a meaningful way—that is, in a way that actually reflects the concerns and lived experiences of those most affected by algorithmic tools—we need to first make it possible for society as a whole, not just tech industry employees, to ask the deeper <em>ex ante</em> questions (e.g. “Should we even use weak AI in this domain at all?”). Changing the democratic agenda is a prerequisite to tackling algorithmic injustice, not just one policy goal among many.</p>

<p align="center">• • •</p>

<p>Democratic agenda-setting can be a powerful mechanism for exercising popular control over state and corporate use of technology, and for contesting technology’s threats to our rights. Effective agenda-setting, of course, will mean coupling the public’s agenda-setting power with tangible bottom-up decision-making power, rather than merely exercising our rights of deliberation and consultation.</p>

<p>This is where we can learn from other recent democratic innovations, such as participatory budgeting, in which local and municipal decisions about how to allocate resources for infrastructure, energy, healthcare, and environmental policy are being made directly by residents themselves after several rounds of collective deliberation. Enabling more robust democratic participation from the outset helps us identify the kinds of concerns and problems that we ought to prioritize. Rather than rushing to quick, top-down solutions aimed at quality control, optimization, and neutrality, we must first clarify what particular kind of problem we are trying to solve in the first place. Until we do so, algorithmic decision making will continue to entrench social injustice, even as tech optimists herald it as the cure for the very ills it exacerbates.</p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil">🏠</a> •
<a href="/david/log/" title="Accès au flux RSS">🤖</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant>🇨🇦</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel">📮</a> •
<abbr title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340">🧚</abbr>
</p>
</footer>
<script src="/static/david/js/instantpage-3.0.0.min.js" type="module" defer></script>
</body>
</html>

cache/2020/d562b547dc4833f0eb84a67ec2a8465d/index.md
title: Technology Can't Fix Algorithmic Injustice
url: http://bostonreview.net/science-nature-politics/annette-zimmermann-elena-di-rosa-hochan-kim-technology-cant-fix-algorithmic
hash_url: d562b547dc4833f0eb84a67ec2a8465d

<p>We need greater democratic oversight of AI not just from developers and designers, but from all members of society.</p>

<p>A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly <a href="https://www.bbc.com/news/technology-30290540" target="_blank">expressed</a> their concerns about the advent of this kind of “strong” (or “general”) AI—and the associated existential risk that it may pose for humanity. In Hawking’s words, the development of strong AI “could spell the end of the human race.”</p>
<pullquote>
<p>Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.</p>
</pullquote>

<p>These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.</p>

<p>What responsibilities and obligations do we bear for AI’s social consequences <em>in the present</em>—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society.</p>

<p align="center">• • •</p>

<p>The first thing we must do is carefully scrutinize the arguments underpinning the present use of AI.</p>

<p>Some are optimistic that weak AI systems can contribute positively to social justice. Unlike humans, the argument goes, algorithms can avoid biased decision making, thereby achieving a level of neutrality and objectivity that is not humanly possible. A great deal of recent work has critiqued this presumption, including Safiya Noble’s <em>Algorithms of Oppression </em>(2018), Ruha Benjamin’s <em>Race After Technology</em> (2019), Meredith Broussard’s <em>Artificial Unintelligence </em>(2018), Hannah Fry’s <em>Hello World </em>(2018), Virginia Eubanks’s <em>Automating Inequality </em>(2018), Sara Wachter-Boettcher’s <em>Technically Wrong</em> (2017), and Cathy O’Neil’s <em>Weapons of Math Destruction </em>(2016). As these authors emphasize, there is a wealth of empirical evidence showing that the use of AI systems can often replicate historical and contemporary conditions of injustice, rather than alleviate them.</p>

<p>In response to these criticisms, practitioners have focused on optimizing the <em>accuracy</em> of AI systems in order to achieve ostensibly objective, neutral decision outcomes. Such optimists concede that algorithmic systems are not neutral <em>at present</em>, but argue that they can be made neutral in the future, ultimately rendering their deployment morally and politically unobjectionable. As IBM Research, one of the many corporate research hubs focused on AI technologies, <a href="https://www.research.ibm.com/5-in-5/ai-and-bias/" target="_blank">proclaims</a>:</p>

<blockquote>
<p>AI bias will explode. But only the unbiased AI will survive. Within five years, the number of biased AI systems and algorithms will increase. But we will deal with them accordingly—coming up with new solutions to control bias in AI and champion AI systems free of it.</p>
</blockquote>

<p>The emerging computer science subfield of FAT ML (“fairness, accountability and transparency in machine learning”) includes a number of important contributions in this direction—described in an accessible way in two new books: Michael Kearns and Aaron Roth’s <em>The Ethical Algorithm</em> (2019) and Gary Marcus and Ernest Davis’s <em>Rebooting AI: Building Artificial Intelligence We Can Trust </em>(2019). Kearns and Roth, for example, write:</p>

<blockquote>
<p>We . . . . believe that curtailing algorithmic misbehavior will itself require more and better algorithms—algorithms that can assist regulators, watchdog groups, and other human organizations to monitor and measure the undesirable and unintended effects of machine learning.</p>
</blockquote>

<p>There are serious limitations, however, to what we might call this <em>quality control</em> approach to algorithmic bias. Algorithmic fairness, as the term is currently used in computer science, often describes a rather limited value or goal, which political philosophers might call “procedural fairness”—that is, the application of the same impartial decision rules and the use of the same kind of data for each individual subject to algorithmic assessments. This contrasts with a more “substantive” approach to fairness, which would involve interventions into decision <em>outcomes</em> and their impact on society (rather than decision <em>processes</em> only) in order to render those outcomes more just.</p>

<p>Even if code is modified with the aim of securing procedural fairness, however, we are left with the deeper philosophical and political issue of whether neutrality constitutes fairness in background conditions of pervasive inequality and structural injustice. Purportedly neutral solutions in the context of widespread injustice risk further entrenching existing injustices. As many critics have pointed out, even if algorithms achieve some sort of neutrality in themselves, the data that these algorithms learn from is still riddled with prejudice. In short, the data we have—and thus the data that gets fed into the algorithm—is neither the data we need nor the data we deserve. Thus, the cure for algorithmic bias may not be more, or better, algorithms. There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.</p>
<pullquote>
<p>There is a wealth of empirical evidence showing that the use of AI systems can often replicate historical and contemporary conditions of injustice, rather than alleviate them.</p>
</pullquote>

<p>For a concrete example, consider the machine learning systems used in predictive policing, whereby historical crime rate data is fed into algorithms in order to predict future geographic distributions of crime. The algorithms flag certain neighborhoods as prone to violent crime. On that basis, police departments make decisions about where to send their officers and how to allocate resources. While the concept of predictive policing is worrisome for a number of reasons, one common defense of the practice is that AI systems are uniquely “neutral” and “objective,” compared to their human counterparts. On the face of it, it might seem preferable to take decision making power out of the hands of biased police departments and police officers. But what if the data itself is biased, so that even the “best” algorithm would yield biased results?</p>

<p>This is not a hypothetical scenario: predictive policing algorithms are fed historical crime rate data that we know is biased. We know that marginalized communities—in particular black, indigenous, and Latinx communities—have been overpoliced. Given that more crimes are discovered and more arrests are made under conditions of disproportionately high police presence, the associated data is skewed. The problem is one of overrepresentation: particular communities feature disproportionately highly in crime activity data in part because of how (unfairly) closely they have been surveilled, and how inequitably laws have been enforced.</p>

<p>It should come as no surprise, then, that these algorithms make predictions that mirror past patterns. This new data is then fed back into the technological model, creating a pernicious feedback loop in which social injustice is not only replicated, but in fact further entrenched. It is also worth noting that the same communities that have been overpoliced have been severely neglected<em>, </em>both intentionally and unintentionally, in many other areas of social and political life. While they are overrepresented in crime rate data sets, they are underrepresented in many other data sets (e.g. those concerning educational achievement).</p>
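The feedback loop just described can be sketched in a toy simulation (Python; the two-neighborhood setup and every number in it are hypothetical, chosen only to illustrate the mechanism, not drawn from any real deployment):

```python
# Toy model of a predictive-policing feedback loop (all numbers hypothetical).
# Two neighborhoods with the SAME true crime rate; one starts out more
# heavily policed, so more of its crime is *recorded*.

true_rate = 0.10                 # identical underlying crime rate in both places
patrols = {"A": 0.8, "B": 0.2}   # initial share of patrol resources
observed = {"A": 0.0, "B": 0.0}  # cumulative recorded incidents

for year in range(10):
    for hood in patrols:
        # Recorded crime grows with police presence, not with true crime.
        observed[hood] += true_rate * patrols[hood]
    # "Predictive" step: next year's patrols follow past recorded crime.
    total = observed["A"] + observed["B"]
    patrols = {h: observed[h] / total for h in observed}

print(patrols)
```

Even with identical underlying crime rates, the neighborhood that starts out more heavily policed generates more recorded incidents, and a patrol allocation that follows recorded incidents simply locks the initial disparity in place year after year.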

<p>Structural injustice thus yields biased data through a variety of mechanisms—prominently including under- and overrepresentation—and worrisome feedback loops result. Even if the quality control problems associated with an algorithm’s decision rules were resolved, we would be left with a more fundamental problem: these systems would still be learning from and relying on data born out of conditions of pervasive and long-standing injustice.</p>

<p>Conceding that these issues pose genuine problems for the possibility of a truly neutral algorithm, some might advocate for implementing countermeasures to correct for the bias in the data—a purported equalizer at the algorithmic level. While this may well be an important step in the right direction, it does not amount to a satisfactory solution on its own. Countermeasures might be able to help account for the over- and underrepresentation issues in the data, but they cannot correct for the problem of what <em>kind </em>of data has been collected in the first place.</p>
<pullquote>
<p>The data we have—and thus the data that gets fed into the algorithm—is often neither the data we need nor the data we deserve.</p>
</pullquote>

<p>Consider, for instance, another controversial application of weak AI: algorithmic risk scoring in the criminal justice process, which has been shown to lead to racially biased outcomes. As a well-known <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing" target="_blank">study</a> by ProPublica showed in 2016, one such algorithm classified black defendants as having a “high recidivism risk” at disproportionately higher rates in comparison to white defendants even after controlling for variables such as the type and severity of the crime committed. As ProPublica put it, “prediction fails differently for black defendants”—in other words, algorithmic predictions did a much worse job of accurately predicting recidivism rates for black defendants, compared to white defendants, given the crimes that individual defendants had previously committed; black defendants who were in fact low risk were much more likely to receive a high-risk score than similar white defendants. Often, such algorithmic systems rely on socio-demographic data, such as age, gender, educational background, residential stability, and familial arrest record. Even though the algorithm in this case does not explicitly rely on race as a variable, these other socio-demographic features can function as proxies for race. The result is a digital form of redlining, or, as computer scientists call it, “redundant encoding.”</p>
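The proxy problem the authors call “redundant encoding” can likewise be sketched with a toy example (Python; the groups, ZIP codes, and scoring rule are all hypothetical illustrations, not any real risk-assessment instrument):

```python
# Sketch of "redundant encoding" (all data hypothetical).
# Race is excluded from the features, but in a segregated city ZIP code
# correlates with it, so a rule keyed to ZIP reproduces the disparity.

# (group, zip_code, prior_arrests) for ten hypothetical defendants:
# ZIP 1 is mostly group X, ZIP 2 mostly group Y.
defendants = [
    ("X", 1, 2), ("X", 1, 1), ("X", 1, 3), ("X", 1, 2), ("X", 2, 0),
    ("Y", 2, 1), ("Y", 2, 0), ("Y", 2, 2), ("Y", 1, 3), ("Y", 2, 1),
]

def risk_score(zip_code, prior_arrests):
    # A "race-blind" rule using only ZIP code and arrest history.
    return prior_arrests + (2 if zip_code == 1 else 0)

high_risk = [(group, risk_score(z, a) >= 3) for group, z, a in defendants]
rate = lambda g: sum(flag for r, flag in high_risk if r == g) / 5  # 5 per group

print(rate("X"), rate("Y"))
```

The rule never sees the protected attribute, yet because ZIP code tracks it, the share flagged as high risk diverges sharply between the two groups.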

<p>In response to this problem, some states—including New Jersey—recently implemented more minimalist algorithmic risk scoring systems, relying purely on behavioral data, such as arrest records. The goal of such systems is to prevent redundant encoding by reducing the amount of socio-demographic information fed into the algorithm. However, given that communities of color are policed disproportionately heavily, and, in turn, arrested at disproportionately high rates, “purely behavioral” data about arrest history, for example, is still heavily raced (and classed, and gendered). Thus, the redundant encoding problem is not in fact solved. New Jersey’s example, among many others, shows that abstracting away from the social circumstances of defendants does not lead to true impartiality.</p>

<p align="center">• • •</p>

<p>In light of these issues, any approach focused on optimizing for procedural fairness—without attention to the social context in which these systems operate—is going to be insufficient. Algorithmic design cannot be fixed in isolation. Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?” We must carefully examine the relationship and contribution of AI systems to existing configurations of political and social injustice, lest these systems continue to perpetuate those very conditions under the guise of neutrality. As many critical race theorists and feminist philosophers have argued, neutral solutions might well secure just outcomes in a <em>just</em> society, but only serve to preserve the status quo in an unjust one.</p>

<p>What, then, <em>does</em> algorithmic and AI fairness require, when we attend to the place of this technology in society at large?</p>

<p>The first—but far from only—step is transparency about the choices that go into AI development and the responsibilities that such choices present.</p>
<pullquote>
<p>There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.</p>
</pullquote>

<p>Some might be inclined to absolve AI developers and researchers of moral responsibility despite their expertise on the potential risks of deploying these technologies. After all, the thought goes, if they followed existing regulations and protocols and made use of the best available information and data sets, how can they be held responsible for any errors and accidents that they did not foresee? On this view, such are the inevitable, necessary costs of technological advancement—growing pains that we will soon forget as the technology improves over time.</p>

<p>One significant problem with this quietism is the assumption that existing regulations and research are adequate for ethical deployment of AI—an assumption that even industry leaders themselves, <a href="https://www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html" target="_blank">including</a> Microsoft’s president and chief legal officer Brad Smith, have conceded is unrealistic.</p>

<p>The biggest problem with this picture, though, is its inaccurate portrayal of how AI systems are developed and deployed. Developing algorithmic systems entails making many deliberate choices<em>. </em>For example, machine learning algorithms are often “trained” to navigate massive data sets by making use of certain pre-defined key concepts or variables, such as “creditworthiness” or “high-risk individual.” The algorithm does not define these concepts itself; human beings—developers and data scientists—choose which concepts to appeal to, at least as an initial starting point. It is implausible to think that these choices are not informed by cultural and social context—a context deeply shaped by a history of inequality and injustice. The variables that tech practitioners choose to include, in turn, significantly influence how the algorithm processes the data and the recommendations it ultimately makes.</p>

<p>Making choices about the concepts that underpin algorithms is not a purely technological problem. For instance, a developer of a predictive policing algorithm inevitably makes choices that determine which members of the community will be affected and how. Making the <em>right</em> choices in this context is as much a moral enterprise as it is a technical one. This is no less true when the exact consequences are difficult even for developers to foresee. New pharmaceutical products often have unexpected side effects, but that is precisely why they undergo extensive rounds of controlled testing and trials before they are approved for use—not to mention the possibility of recall in cases of serious, unforeseen defect.</p>

<p>Unpredictability is thus not an excuse for moral quiescence when the stakes are so high. If AI technology really is unpredictable, this only presents <em>more</em> reason for caution and moderation in deploying these technologies. Such caution is particularly called for when the AI is used to perform such consequential tasks as allocating the resources of a police department or evaluating the creditworthiness of first-time homebuyers.</p>
<pullquote>
<p>Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?”</p>
</pullquote>

<p>To say that these choices are deliberate is not to suggest that their negative consequences are always, or even often, intentional. There may well be some clearly identifiable “bad” AI developers who must be stopped, but our larger point is that all developers, in general, know enough about these technologies to be regarded as complicit in their outcomes—a point that is obscured when we act as if AI technology is already escaping human control. We must resist the common tendency to think that an AI-driven world means that we are freed not only from making choices, but also from having to scrutinize and evaluate these automated choices in the way that we typically do with human decisions. (This psychological tendency to trust the outputs of an automated decision making system is what researchers call “automation bias.”)</p>

<p>Complicity here means that the responsibility for AI is shared by individuals involved in its development and deployment, regardless of their particular intentions, simply because they know <em>enough </em>about the potential harms. As computer scientist Joshua Kroll has <a href="https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0084?url_ver=Z39.88-2003&amp;rfr_id=ori%3Arid%3Acrossref.org&amp;rfr_dat=cr_pub%3Dpubmed&amp;" target="_blank">argued</a>, “While structural inscrutability frustrates users and oversight entities, system creators and operators always determine that the technologies they deploy are fit for certain uses, making no system wholly inscrutable.”</p>

<p>The apocalypse-saturated discourse on AI, by contrast, encourages a mentality of learned helplessness. The popular perception that strong AI will eventually grow out of our control risks becoming a self-fulfilling prophecy, despite the present reality that weak AI is very much the product of human deliberation and decision making. Avoiding learned helplessness and automation bias will require adopting a model of responsibility that recognizes that a variety of (even well-intentioned) agents must share the responsibility for AI given their role in its development and deployment.</p>

<p align="center">• • •</p>

<p>In the end, the responsible development and deployment of weak AI will involve not just developers and designers, but the public at large. This means that we need, among other things, to scrutinize current narratives about AI’s potential costs and benefits. As we have argued, AI’s alleged neutrality and inevitability are harmful, yet pervasive, myths. Debunking them will require an ongoing process of public, democratic contestation about the social, political, and moral dimensions of algorithmic decision making.</p>
<pullquote>
<p>We must resist the apocalypse-saturated discourse on AI that encourages a mentality of learned helplessness.</p>
</pullquote>

<p>This is not an unprecedented proposal: similar suggestions have been made by philosophers and activists seeking to address other complex, collective moral problems, such as climate change and sweatshop labor. Just as their efforts have helped raise public awareness and spark political debate about those issues, it is high time for us as a public to take seriously our responsibilities for the present and looming social consequences of AI. Algorithmic bias is not a purely technical problem for researchers and tech practitioners; we must recognize it as a moral and political problem in which all of us—as democratic citizens—have a stake. Responsibility cannot simply be offloaded and outsourced to tech developers and private corporations.</p>

<p>This also means that we need, in part, to think critically about government decisions to procure machine learning tools from private corporations—especially because these tools are subsequently used to partially automate decisions that were previously made by democratically authorized, if not directly elected, public officials. But we will also have to ask uncomfortable questions about our own role as a public in authorizing and contesting the use of AI technologies by corporations and the state. Citizens must come to view issues surrounding AI as a collective problem <em>for all of us </em>rather than a technical problem <em>just for them. </em>Our proposal is aligned with an emerging “second wave” of thinking about algorithmic accountability, as legal scholar Frank Pasquale <a href="https://lpeblog.org/2019/11/25/the-second-wave-of-algorithmic-accountability/" target="_blank">puts</a> it: a perspective which critically questions “whether [certain algorithmic systems] should be used at all—and, if so, who gets to govern them,” rather than asking how such systems might be improved in order to make them more fair. This “second” perspective has immediate implications for how we ought to think about the relationship between democratic power and AI. As Julia Powles and Helen Nissenbaum <a href="https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53" target="_blank">emphasize</a>, “Any AI system that is integrated into people’s lives must be capable of contest, account, and redress to citizens and representatives of the public interest.”</p>

<p>If using algorithmic decision making means making deliberate choices for which we are all on the hook, how exactly should we as a democratic society respond? What is our role, as citizens, in shaping technology for a more just society?</p>

<p>One might be tempted to think that the unprecedented novelty of machine learning can be regulated effectively only if we manage to create entirely new democratic procedures and institutions. A range of such measures have been proposed: creating new governmental departments (such as the new Department of Technology proposed by Democratic presidential candidate Andrew Yang), new laws (such as <a href="http://www.europarl.europa.eu/RegData/etudes/STUD/2019/624262/EPRS_STU(2019)624262_EN.pdf." target="_blank">consumer protection laws</a>, enforced by a kind of “FDA for algorithms”), as well as (rather weak) new measures for voluntary individual self-regulation, such as a Hippocratic oath for developers.</p>

<p>One problem with these proposals is that they reinforce learned helplessness about algorithmic bias; they suggest that until we implement large-scale, complex institutional and social change, we will be unable to address algorithmic bias in any meaningful way. But there is another way to think about algorithmic accountability. Rather than advocating for entirely new democratic institutions and procedures, why not first try to shift existing democratic <em>agendas</em>? Democratic agenda-setting means enabling citizens to contest and control the concepts that underpin algorithmic decision rules and to deliberate about whether algorithmic decision-making ought to be used in a particular domain in the first place.</p>
<pullquote>
<p>Citizens must come to view issues surrounding AI as a collective problem <em>for all of us </em>rather than a technical problem <em>just for them.</em></p>
</pullquote>

<p>San Francisco, for example, recently banned the use of facial recognition tools in policing, in light of the growing body of empirical evidence of algorithmic bias in law enforcement technology; the city’s board of supervisors passed the “Stop Secret Surveillance” ordinance in an 8-1 vote. The ordinance, which states that “the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring,” also establishes more wide-ranging accountability mechanisms beyond the use of facial recognition technology in policing, such as a requirement that city agencies obtain approval before purchasing and implementing other types of surveillance technology.</p>

<p>Broaching questions of algorithmic justice via the democratic process would give members of communities most impacted by algorithmic bias more direct democratic power over crucial decisions concerning weak AI—not merely after its deployment, but also at the design stage. San Francisco’s new ordinance exemplifies the importance of providing meaningful opportunities for bottom-up democratic deliberation about, and democratic contestation of, algorithmic tools ideally <em>before</em> they are deployed: “Decisions regarding if and how surveillance technologies should be funded, acquired, or used, and whether data from such technologies should be shared, should be made only after meaningful public input has been solicited and given significant weight.”</p>

<p>Similar local, bottom-up procedures akin to San Francisco’s model can and should be implemented in other communities, rather than simply waiting for the creation of more comprehensive, top-down regulatory institutions. Significant public oversight over AI development and deployment is already possible. Rather than allowing tech practitioners to navigate the ethics of AI by themselves, we the public should be included in decisions about whether and how AI will be deployed and to what ends. Furthermore, even once we do create new, larger-scale democratic institutions empowered to legally regulate AI in a meaningful way, bottom-up democratic procedures will still be essential: they play a crucial role in identifying which agendas such institutions ought to pursue, and they can shine a light on whose interests are most affected by emerging technologies.</p>

<p>Moving to this agenda-setting approach means incorporating an <em>ex ante</em> perspective when we think about algorithmic accountability, rather than resigning ourselves to a callous, <em>ex post</em>, “wait and see” attitude. When we wait and see, we let corporations and practitioners take the lead and set the terms. In the meantime, we expect those who are already experiencing significant social injustice to continue bearing its burden.</p>
<pullquote>
<p>To take full responsibility for how technology shapes our lives, we will have to make the deployment of AI democratically contestable by putting it on our democratic agendas.</p>
</pullquote>

<p>Of course, shifting democratic agendas toward decisions about algorithmic <em>tools</em> will not entirely resolve the problem of algorithmic <em>bias</em>. To take full responsibility for how technology shapes our lives going forward, we will have to make the deployment of weak AI democratically contestable by putting it on our democratic agendas. That being said, we will eventually have to <em>combine</em> that strategy with an effort to establish institutions that can enforce the just and equitable use of technology after it has been deployed.</p>

<p>In other words, a democratic critique of algorithmic injustice requires both an <em>ex ante</em> and an <em>ex post</em> perspective. In order for us to start thinking about <em>ex post</em> accountability in a meaningful way—that is, in a way that actually reflects the concerns and lived experiences of those most affected by algorithmic tools—we need to first make it possible for society as a whole, not just tech industry employees, to ask the deeper <em>ex ante</em> questions (e.g. “Should we even use weak AI in this domain at all?”). Changing the democratic agenda is a prerequisite to tackling algorithmic injustice, not just one policy goal among many.</p>

<p align="center">• • •</p>

<p>Democratic agenda-setting can be a powerful mechanism for exercising popular control over state and corporate use of technology, and for contesting technology’s threats to our rights. Effective agenda-setting, of course, will mean coupling the public’s agenda-setting power with tangible bottom-up decision-making power, rather than merely exercising our rights of deliberation and consultation.</p>

<p>This is where we can learn from other recent democratic innovations, such as participatory budgeting, in which local and municipal decisions about how to allocate resources for infrastructure, energy, healthcare, and environmental policy are being made directly by residents themselves after several rounds of collective deliberation. Enabling more robust democratic participation from the outset helps us identify the kinds of concerns and problems that we ought to prioritize. Rather than rushing to quick, top-down solutions aimed at quality control, optimization, and neutrality, we must first clarify what particular kind of problem we are trying to solve in the first place. Until we do so, algorithmic decision making will continue to entrench social injustice, even as tech optimists herald it as the cure for the very ills it exacerbates.</p>
