A place to cache linked articles (think custom and personal wayback machine)
index.html
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the <title>
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>Technology Can't Fix Algorithmic Injustice (archive) — David Larlet</title>
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f0f0ea">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f0f0ea">
<!-- Thank you Florens! -->
<link rel="stylesheet" href="/static/david/css/style_2020-01-24.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" crossorigin>
<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="http://bostonreview.net/science-nature-politics/annette-zimmermann-elena-di-rosa-hochan-kim-technology-cant-fix-algorithmic">
<body class="remarkdown h1-underline h2-underline hr-center ul-star pre-tick">
<article>
<h1>Technology Can't Fix Algorithmic Injustice</h1>
<h2><a href="http://bostonreview.net/science-nature-politics/annette-zimmermann-elena-di-rosa-hochan-kim-technology-cant-fix-algorithmic">Original source of the content</a></h2>
<p>We need greater democratic oversight of AI not just from developers and designers, but from all members of society.</p>
<p>A great deal of recent public debate about artificial intelligence has been driven by apocalyptic visions of the future. Humanity, we are told, is engaged in an existential struggle against its own creation. Such worries are fueled in large part by tech industry leaders and futurists, who anticipate systems so sophisticated that they can perform general tasks and operate autonomously, without human control. Stephen Hawking, Elon Musk, and Bill Gates have all publicly <a href="https://www.bbc.com/news/technology-30290540" target="_blank">expressed</a> their concerns about the advent of this kind of “strong” (or “general”) AI—and the associated existential risk that it may pose for humanity. In Hawking’s words, the development of strong AI “could spell the end of the human race.”</p>
<p><pullquote>
<p>Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.</p>
</pullquote></p>
<p>These are legitimate long-term worries. But they are not all we have to worry about, and placing them center stage distracts from ethical questions that AI is raising here and now. Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.</p>
<p>What responsibilities and obligations do we bear for AI’s social consequences <em>in the present</em>—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society.</p>
<p align="center">• • •</p>
<p>The first thing we must do is carefully scrutinize the arguments underpinning the present use of AI.</p>
<p>Some are optimistic that weak AI systems can contribute positively to social justice. Unlike humans, the argument goes, algorithms can avoid biased decision making, thereby achieving a level of neutrality and objectivity that is not humanly possible. A great deal of recent work has critiqued this presumption, including Safiya Noble’s <em>Algorithms of Oppression </em>(2018), Ruha Benjamin’s <em>Race After Technology</em> (2019), Meredith Broussard’s <em>Artificial Unintelligence </em>(2018), Hannah Fry’s <em>Hello World </em>(2018), Virginia Eubanks’s <em>Automating Inequality </em>(2018), Sara Wachter-Boettcher’s <em>Technically Wrong</em> (2017), and Cathy O’Neil’s <em>Weapons of Math Destruction </em>(2016). As these authors emphasize, there is a wealth of empirical evidence showing that the use of AI systems can often replicate historical and contemporary conditions of injustice, rather than alleviate them.</p>
<p>In response to these criticisms, practitioners have focused on optimizing the <em>accuracy</em> of AI systems in order to achieve ostensibly objective, neutral decision outcomes. Such optimists concede that algorithmic systems are not neutral <em>at present</em>, but argue that they can be made neutral in the future, ultimately rendering their deployment morally and politically unobjectionable. As IBM Research, one of the many corporate research hubs focused on AI technologies, <a href="https://www.research.ibm.com/5-in-5/ai-and-bias/" target="_blank">proclaims</a>:</p>
<blockquote>
<p>AI bias will explode. But only the unbiased AI will survive. Within five years, the number of biased AI systems and algorithms will increase. But we will deal with them accordingly—coming up with new solutions to control bias in AI and champion AI systems free of it.</p>
</blockquote>
<p>The emerging computer science subfield of FAT ML (“fairness, accountability and transparency in machine learning”) includes a number of important contributions in this direction—described in an accessible way in two new books: Michael Kearns and Aaron Roth’s <em>The Ethical Algorithm</em> (2019) and Gary Marcus and Ernest Davis’s <em>Rebooting AI: Building Artificial Intelligence We Can Trust </em>(2019). Kearns and Roth, for example, write:</p>
<blockquote>
<p>We . . . believe that curtailing algorithmic misbehavior will itself require more and better algorithms—algorithms that can assist regulators, watchdog groups, and other human organizations to monitor and measure the undesirable and unintended effects of machine learning.</p>
</blockquote>
<p>There are serious limitations, however, to what we might call this <em>quality control</em> approach to algorithmic bias. Algorithmic fairness, as the term is currently used in computer science, often describes a rather limited value or goal, which political philosophers might call “procedural fairness”: the application of the same impartial decision rules and the use of the same kind of data for each individual subject to algorithmic assessments. This contrasts with a more “substantive” approach to fairness, which would involve interventions into decision <em>outcomes</em> and their impact on society (rather than decision <em>processes</em> only) in order to render the former more just.</p>
<p>Even if code is modified with the aim of securing procedural fairness, however, we are left with the deeper philosophical and political issue of whether neutrality constitutes fairness in background conditions of pervasive inequality and structural injustice. Purportedly neutral solutions in the context of widespread injustice risk further entrenching existing injustices. As many critics have pointed out, even if algorithms achieve some sort of neutrality in themselves, the data that these algorithms learn from is still riddled with prejudice. In short, the data we have—and thus the data that gets fed into the algorithm—is neither the data we need nor the data we deserve. Thus, the cure for algorithmic bias may not be more, or better, algorithms. There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.</p>
<p><pullquote>
<p>There is a wealth of empirical evidence showing that the use of AI systems can often replicate historical and contemporary conditions of injustice, rather than alleviate them.</p>
</pullquote></p>
<p>For a concrete example, consider the machine learning systems used in predictive policing, whereby historical crime rate data is fed into algorithms in order to predict future geographic distributions of crime. The algorithms flag certain neighborhoods as prone to violent crime. On that basis, police departments make decisions about where to send their officers and how to allocate resources. While the concept of predictive policing is worrisome for a number of reasons, one common defense of the practice is that AI systems are uniquely “neutral” and “objective,” compared to their human counterparts. On the face of it, it might seem preferable to take decision making power out of the hands of biased police departments and police officers. But what if the data itself is biased, so that even the “best” algorithm would yield biased results?</p>
<p>This is not a hypothetical scenario: predictive policing algorithms are fed historical crime rate data that we know is biased. We know that marginalized communities—in particular black, indigenous, and Latinx communities—have been overpoliced. Given that more crimes are discovered and more arrests are made under conditions of disproportionately high police presence, the associated data is skewed. The problem is one of overrepresentation: particular communities feature disproportionately highly in crime activity data in part because of how (unfairly) closely they have been surveilled, and how inequitably laws have been enforced.</p>
<p>It should come as no surprise, then, that these algorithms make predictions that mirror past patterns. This new data is then fed back into the technological model, creating a pernicious feedback loop in which social injustice is not only replicated, but in fact further entrenched. It is also worth noting that the same communities that have been overpoliced have been severely neglected, both intentionally and unintentionally, in many other areas of social and political life. While they are overrepresented in crime rate data sets, they are underrepresented in many other data sets (e.g. those concerning educational achievement).</p>
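<p>This feedback loop is easy to make concrete. The short simulation below is purely illustrative, with hypothetical numbers that are not drawn from the article or any real data set: two neighborhoods have identical true offense rates, but one starts with a higher recorded arrest count because it was patrolled more heavily, and because patrols follow the recorded data while offenses are only recorded where patrols are present, the gap in the data widens year after year even though behavior never differs.</p>
<pre><code># A minimal, illustrative sketch of the predictive-policing feedback loop
# described above (hypothetical numbers; not from any real data set).
import random

random.seed(42)
TRUE_OFFENSE_RATE = 0.05          # identical in both neighborhoods
RESIDENTS = 10_000
recorded = {"A": 600, "B": 400}   # skewed history from past over-policing of A

for year in range(2020, 2030):
    flagged = max(recorded, key=recorded.get)   # the algorithm's "hot spot"
    # Send most patrols wherever the recorded data says crime is concentrated.
    patrol_share = {hood: 0.9 if hood == flagged else 0.1 for hood in recorded}
    for hood in recorded:
        offenses = sum(TRUE_OFFENSE_RATE > random.random() for _ in range(RESIDENTS))
        # An offense only enters the data set if police are present to observe it.
        recorded[hood] += sum(patrol_share[hood] > random.random() for _ in range(offenses))
    print(year, flagged, recorded)
# Every year the model flags neighborhood A, records more incidents there as a
# result, and grows more confident in that prediction, although both
# neighborhoods behave identically throughout.</code></pre>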
<p>Structural injustice thus yields biased data through a variety of mechanisms—prominently including under- and overrepresentation—and worrisome feedback loops result. Even if the quality control problems associated with an algorithm’s decision rules were resolved, we would be left with a more fundamental problem: these systems would still be learning from and relying on data born out of conditions of pervasive and long-standing injustice.</p>
<p>Conceding that these issues pose genuine problems for the possibility of a truly neutral algorithm, some might advocate for implementing countermeasures to correct for the bias in the data—a purported equalizer at the algorithmic level. While this may well be an important step in the right direction, it does not amount to a satisfactory solution on its own. Countermeasures might be able to help account for the over- and underrepresentation issues in the data, but they cannot correct for the problem of what <em>kind </em>of data has been collected in the first place.</p>
<p><pullquote>
<p>The data we have—and thus the data that gets fed into the algorithm—is often neither the data we need nor the data we deserve.</p>
</pullquote></p>
<p>Consider, for instance, another controversial application of weak AI: algorithmic risk scoring in the criminal justice process, which has been shown to lead to racially biased outcomes. As a well-known <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing" target="_blank">study</a> by ProPublica showed in 2016, one such algorithm classified black defendants as having a “high recidivism risk” at disproportionately higher rates in comparison to white defendants even after controlling for variables such as the type and severity of the crime committed. As ProPublica put it, “prediction fails differently for black defendants”—in other words, algorithmic predictions did a much worse job of accurately predicting recidivism rates for black defendants, compared to white defendants, given the crimes that individual defendants had previously committed; black defendants who were in fact low risk were much more likely to receive a high-risk score than similar white defendants. Often, such algorithmic systems rely on socio-demographic data, such as age, gender, educational background, residential stability, and familial arrest record. Even though the algorithm in this case does not explicitly rely on race as a variable, these other socio-demographic features can function as proxies for race. The result is a digital form of redlining, or, as computer scientists call it, “redundant encoding.”</p>
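<p>The mechanics of redundant encoding can be sketched in a few lines of code. The example below is purely illustrative, with made-up defendants, ZIP codes, and weights, and is not the system ProPublica studied: a risk score that never reads the race field still produces racially skewed averages, because arrest counts and neighborhood already carry that information.</p>
<pre><code># Purely illustrative sketch of "redundant encoding" (hypothetical data and
# weights; not the algorithm ProPublica analyzed).
from statistics import mean

defendants = [
    # Race is recorded here only so we can audit the outcome; the score never reads it.
    {"race": "black", "zip": "19121", "prior_arrests": 3},
    {"race": "black", "zip": "19121", "prior_arrests": 2},
    {"race": "black", "zip": "19121", "prior_arrests": 4},
    {"race": "white", "zip": "19103", "prior_arrests": 1},
    {"race": "white", "zip": "19103", "prior_arrests": 0},
    {"race": "white", "zip": "19103", "prior_arrests": 2},
]

HEAVILY_PATROLLED_ZIPS = {"19121"}   # hypothetical over-policed area

def risk_score(d):
    """A 'race-blind' score: arrest history plus a neighborhood term."""
    return d["prior_arrests"] + (2 if d["zip"] in HEAVILY_PATROLLED_ZIPS else 0)

for group in ("black", "white"):
    scores = [risk_score(d) for d in defendants if d["race"] == group]
    print(group, "mean score:", round(mean(scores), 2))
# Even with race excluded, the group means differ, because ZIP code and arrest
# counts already encode the history of where policing was concentrated.</code></pre>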
<p>In response to this problem, some states—including New Jersey—recently implemented more minimalist algorithmic risk scoring systems, relying purely on behavioral data, such as arrest records. The goal of such systems is to prevent redundant encoding by reducing the amount of socio-demographic information fed into the algorithm. However, given that communities of color are policed disproportionately heavily, and, in turn, arrested at disproportionately high rates, “purely behavioral” data about arrest history, for example, is still heavily raced (and classed, and gendered). Thus, the redundant encoding problem is not in fact solved. New Jersey’s example, among many others, shows that abstracting away from the social circumstances of defendants does not lead to true impartiality.</p>
<p align="center">• • •</p>
<p>In light of these issues, any approach focused on optimizing for procedural fairness—without attention to the social context in which these systems operate—is going to be insufficient. Algorithmic design cannot be fixed in isolation. Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?” We must carefully examine the relationship and contribution of AI systems to existing configurations of political and social injustice, lest these systems continue to perpetuate those very conditions under the guise of neutrality. As many critical race theorists and feminist philosophers have argued, neutral solutions might well secure just outcomes in a <em>just</em> society, but only serve to preserve the status quo in an unjust one.</p>
<p>What, then, <em>does</em> algorithmic and AI fairness require, when we attend to the place of this technology in society at large?</p>
<p>The first—but far from only—step is transparency about the choices that go into AI development and the responsibilities that such choices present.</p>
<p><pullquote>
<p>There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.</p>
</pullquote></p>
<p>Some might be inclined to absolve AI developers and researchers of moral responsibility despite their expertise on the potential risks of deploying these technologies. After all, the thought goes, if they followed existing regulations and protocols and made use of the best available information and data sets, how can they be held responsible for any errors and accidents that they did not foresee? On this view, such are the inevitable, necessary costs of technological advancement—growing pains that we will soon forget as the technology improves over time.</p>
<p>One significant problem with this quietism is the assumption that existing regulations and research are adequate for ethical deployment of AI—an assumption that even industry leaders themselves, <a href="https://www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html" target="_blank">including</a> Microsoft’s president and chief legal officer Brad Smith, have conceded is unrealistic.</p>
<p>The biggest problem with this picture, though, is its inaccurate portrayal of how AI systems are developed and deployed. Developing algorithmic systems entails making many deliberate choices. For example, machine learning algorithms are often “trained” to navigate massive data sets by making use of certain pre-defined key concepts or variables, such as “creditworthiness” or “high-risk individual.” The algorithm does not define these concepts itself; human beings—developers and data scientists—choose which concepts to appeal to, at least as an initial starting point. It is implausible to think that these choices are not informed by cultural and social context—a context deeply shaped by a history of inequality and injustice. The variables that tech practitioners choose to include, in turn, significantly influence how the algorithm processes the data and the recommendations it ultimately makes.</p>
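<p>A deliberately simplified sketch, with hypothetical field names and thresholds, shows how ordinary these choices are: before any model is trained, someone has to write down what a concept like “creditworthy” means, and two defensible definitions can label the same applicant differently, yielding different training data and different downstream decisions.</p>
<pre><code># Purely illustrative (hypothetical fields and thresholds): the target variable
# an algorithm learns to predict is itself a human choice made before training.
applicant = {"on_time_payments": 0.93, "months_employed": 7, "savings_buffer_months": 2}

def creditworthy_v1(a):
    """One team's definition: payment history is what matters."""
    return a["on_time_payments"] >= 0.9

def creditworthy_v2(a):
    """Another team's definition: job stability and a savings cushion matter too."""
    return a["months_employed"] >= 12 and a["savings_buffer_months"] >= 3

print(creditworthy_v1(applicant))  # True:  a "good" training example
print(creditworthy_v2(applicant))  # False: the same person becomes a "bad" one</code></pre>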
<p>Making choices about the concepts that underpin algorithms is not a purely technological problem. For instance, a developer of a predictive policing algorithm inevitably makes choices that determine which members of the community will be affected and how. Making the <em>right</em> choices in this context is as much a moral enterprise as it is a technical one. This is no less true when the exact consequences are difficult even for developers to foresee. New pharmaceutical products often have unexpected side effects, but that is precisely why they undergo extensive rounds of controlled testing and trials before they are approved for use—not to mention the possibility of recall in cases of serious, unforeseen defect.</p>
<p>Unpredictability is thus not an excuse for moral quiescence when the stakes are so high. If AI technology really is unpredictable, this only presents <em>more</em> reason for caution and moderation in deploying these technologies. Such caution is particularly called for when the AI is used to perform such consequential tasks as allocating the resources of a police department or evaluating the creditworthiness of first-time homebuyers.</p>
<p><pullquote>
<p>Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?”</p>
</pullquote></p>
<p>To say that these choices are deliberate is not to suggest that their negative consequences are always, or even often, intentional. There may well be some clearly identifiable “bad” AI developers who must be stopped, but our larger point is that all developers, in general, know enough about these technologies to be regarded as complicit in their outcomes—a point that is obscured when we act as if AI technology is already escaping human control. We must resist the common tendency to think that an AI-driven world means that we are freed not only from making choices, but also from having to scrutinize and evaluate these automated choices in the way that we typically do with human decisions. (This psychological tendency to trust the outputs of an automated decision making system is what researchers call “automation bias.”)</p>
<p>Complicity here means that the responsibility for AI is shared by individuals involved in its development and deployment, regardless of their particular intentions, simply because they know <em>enough </em>about the potential harms. As computer scientist Joshua Kroll has <a href="https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0084?url_ver=Z39.88-2003&amp;rfr_id=ori%3Arid%3Acrossref.org&amp;rfr_dat=cr_pub%3Dpubmed&amp;" target="_blank">argued</a>, “While structural inscrutability frustrates users and oversight entities, system creators and operators always determine that the technologies they deploy are fit for certain uses, making no system wholly inscrutable.”</p>
<p>The apocalypse-saturated discourse on AI, by contrast, encourages a mentality of learned helplessness. The popular perception that strong AI will eventually grow out of our control risks becoming a self-fulfilling prophecy, despite the present reality that weak AI is very much the product of human deliberation and decision making. Avoiding learned helplessness and automation bias will require adopting a model of responsibility that recognizes that a variety of (even well-intentioned) agents must share the responsibility for AI given their role in its development and deployment.</p>
<p align="center">• • •</p>
<p>In the end, the responsible development and deployment of weak AI will involve not just developers and designers, but the public at large. This means that we need, among other things, to scrutinize current narratives about AI’s potential costs and benefits. As we have argued, AI’s alleged neutrality and inevitability are harmful, yet pervasive, myths. Debunking them will require an ongoing process of public, democratic contestation about the social, political, and moral dimensions of algorithmic decision making.</p>
<p><pullquote>
<p>We must resist the apocalypse-saturated discourse on AI that encourages a mentality of learned helplessness.</p>
</pullquote></p>
<p>This is not an unprecedented proposal: similar suggestions have been made by philosophers and activists seeking to address other complex, collective moral problems, such as climate change and sweatshop labor. Just as their efforts have helped raise public awareness and spark political debate about those issues, it is high time for us as a public to take seriously our responsibilities for the present and looming social consequences of AI. Algorithmic bias is not a purely technical problem for researchers and tech practitioners; we must recognize it as a moral and political problem in which all of us—as democratic citizens—have a stake. Responsibility cannot simply be offloaded and outsourced to tech developers and private corporations.</p>
<p>This also means that we need, in part, to think critically about government decisions to procure machine learning tools from private corporations—especially because these tools are subsequently used to partially automate decisions that were previously made by democratically authorized, if not directly elected, public officials. But we will also have to ask uncomfortable questions about our own role as a public in authorizing and contesting the use of AI technologies by corporations and the state. Citizens must come to view issues surrounding AI as a collective problem <em>for all of us </em>rather than a technical problem <em>just for them. </em>Our proposal is aligned with an emerging “second wave” of thinking about algorithmic accountability, as legal scholar Frank Pasquale <a href="https://lpeblog.org/2019/11/25/the-second-wave-of-algorithmic-accountability/" target="_blank">puts</a> it: a perspective which critically questions “whether [certain algorithmic systems] should be used at all—and, if so, who gets to govern them,” rather than asking how such systems might be improved in order to make them more fair. This “second” perspective has immediate implications for how we ought to think about the relationship between democratic power and AI. As Julia Powles and Helen Nissenbaum <a href="https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53" target="_blank">emphasize</a>, “Any AI system that is integrated into people’s lives must be capable of contest, account, and redress to citizens and representatives of the public interest.”</p>
<p>If using algorithmic decision making means making deliberate choices for which we are all on the hook, how exactly should we as a democratic society respond? What is our role, as citizens, in shaping technology for a more just society?</p>
<p>One might be tempted to think that the unprecedented novelty of machine learning can be regulated effectively only if we manage to create entirely new democratic procedures and institutions. A range of such measures have been proposed: creating new governmental departments (such as the new Department of Technology proposed by Democratic presidential candidate Andrew Yang), new laws (such as <a href="http://www.europarl.europa.eu/RegData/etudes/STUD/2019/624262/EPRS_STU(2019)624262_EN.pdf" target="_blank">consumer protection laws</a>, enforced by a kind of “FDA for algorithms”), as well as (rather weak) new measures for voluntary individual self-regulation, such as a Hippocratic oath for developers.</p>
<p>One problem with these proposals is that they reinforce learned helplessness about algorithmic bias; they suggest that until we implement large-scale, complex institutional and social change, we will be unable to address algorithmic bias in any meaningful way. But there is another way to think about algorithmic accountability. Rather than advocating for entirely new democratic institutions and procedures, why not first try to shift existing democratic <em>agendas</em>? Democratic agenda-setting means enabling citizens to contest and control the concepts that underpin algorithmic decision rules and to deliberate about whether algorithmic decision-making ought to be used in a particular domain in the first place.</p>
<p><pullquote>
<p>Citizens must come to view issues surrounding AI as a collective problem <em>for all of us </em>rather than a technical problem <em>just for them.</em></p>
</pullquote></p>
<p>San Francisco, for example, recently banned the use of facial recognition tools in policing due to the increasing amount of empirical evidence of algorithmic bias in law enforcement technology; the city’s board of supervisors passed the “Stop Secret Surveillance” ordinance in an 8-1 vote. The ordinance, which states that “the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring,” also establishes more wide-ranging accountability mechanisms beyond the use of facial recognition technology in policing, such as a requirement that city agencies obtain approval before purchasing and implementing other types of surveillance technology.</p>
<p>Broaching questions of algorithmic justice via the democratic process would give members of communities most impacted by algorithmic bias more direct democratic power over crucial decisions concerning weak AI—not merely after its deployment, but also at the design stage. San Francisco’s new ordinance exemplifies the importance of providing meaningful opportunities for bottom-up democratic deliberation about, and democratic contestation of, algorithmic tools ideally <em>before</em> they are deployed: “Decisions regarding if and how surveillance technologies should be funded, acquired, or used, and whether data from such technologies should be shared, should be made only after meaningful public input has been solicited and given significant weight.”</p>
<p>Local, bottom-up procedures akin to San Francisco’s model can and should be implemented in other communities, rather than simply waiting for the creation of more comprehensive, top-down regulatory institutions. Significant public oversight over AI development and deployment is already possible. Rather than allowing tech practitioners to navigate the ethics of AI by themselves, we the public should be included in decisions about whether and how AI will be deployed and to what ends. Furthermore, even once we do create new, larger-scale democratic institutions empowered to legally regulate AI in a meaningful way, bottom-up democratic procedures will still be essential: they play a crucial role in identifying which agendas such institutions ought to pursue, and they can shine a light on whose interests are most affected by emerging technologies.</p>
<p>Moving to this agenda-setting approach means incorporating an <em>ex ante</em> perspective when we think about algorithmic accountability, rather than resigning ourselves to a callous, <em>ex post</em>, “wait and see” attitude. When we wait and see, we let corporations and practitioners take the lead and set the terms. In the meantime, we expect those who are already experiencing significant social injustice to continue bearing its burden.</p>
<p><pullquote>
<p>To take full responsibility for how technology shapes our lives, we will have to make the deployment of AI democratically contestable by putting it on our democratic agendas.</p>
</pullquote></p>
<p>Of course, shifting democratic agendas toward decisions about algorithmic <em>tools</em> will not entirely resolve the problem of algorithmic <em>bias</em>. To take full responsibility for how technology shapes our lives going forward, we will have to make the deployment of weak AI democratically contestable by putting it on our democratic agendas. That being said, we will eventually have to <em>combine</em> that strategy with an effort to establish institutions that can enforce the just and equitable use of technology after it has been deployed.</p>
<p>In other words, a democratic critique of algorithmic injustice requires both an <em>ex ante</em> and an <em>ex post</em> perspective. In order for us to start thinking about <em>ex post</em> accountability in a meaningful way—that is, in a way that actually reflects the concerns and lived experiences of those most affected by algorithmic tools—we need to first make it possible for society as a whole, not just tech industry employees, to ask the deeper <em>ex ante</em> questions (e.g. “Should we even use weak AI in this domain at all?”). Changing the democratic agenda is a prerequisite to tackling algorithmic injustice, not just one policy goal among many.</p>
<p align="center">• • •</p>
<p>Democratic agenda setting can be a powerful mechanism for exercising popular control over state and corporate use of technology, and for contesting technology’s threats to our rights. Effective agenda-setting, of course, will mean coupling the public’s agenda-setting power with tangible bottom-up decision making power, rather than merely exercising our rights of deliberation and consultation.</p>
<p>This is where we can learn from other recent democratic innovations, such as participatory budgeting, in which local and municipal decisions about how to allocate resources for infrastructure, energy, healthcare, and environmental policy are being made directly by residents themselves after several rounds of collective deliberation. Enabling more robust democratic participation from the outset helps us identify the kinds of concerns and problems that we ought to prioritize. Rather than rushing to quick, top-down solutions aimed at quality control, optimization, and neutrality, we must first clarify what particular kind of problem we are trying to solve in the first place. Until we do so, algorithmic decision making will continue to entrench social injustice, even as tech optimists herald it as the cure for the very ills it exacerbates.</p>
</article>
<hr>
<footer>
<p>
<a href="/david/" title="Go to the home page">🏠</a> •
<a href="/david/log/" title="Access the RSS feed">🤖</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant>🇨🇦</a> •
<a href="mailto:david%40larlet.fr" title="Send an email">📮</a> •
<abbr title="Host: Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340">🧚</abbr>
</p>
</footer>
<script src="/static/david/js/instantpage-3.0.0.min.js" type="module" defer></script>
</body>
</html>