Part of the problem with the ideal of individualized informed consent is that it assumes companies have the ability to inform us about the risks we are consenting to. They don’t. Strava surely did not intend to reveal the GPS coordinates of a possible Central Intelligence Agency annex in Mogadishu, Somalia — but it may have done just that. Even if all technology companies meant well and acted in good faith, they would not be in a position to let you know what exactly you were signing up for.
Another part of the problem is the increasingly powerful set of computational methods called machine learning, which can combine seemingly inconsequential data about you with other data to discover facts about you that you never intended to reveal. For example, research shows that data as minor as your Facebook “likes” can be used to infer your sexual orientation, whether you use addictive substances, your race and your views on many political issues. This kind of computational statistical inference is not 100 percent accurate, but it can be fairly close — certainly close enough to be used to profile you for a variety of purposes.
A challenging feature of machine learning is that exactly how a given system works is opaque. Nobody — not even those who have access to the code and data — can tell what piece of data came together with what other piece of data to result in the finding the program made. This further undermines the notion of informed consent, as we cannot know which data will lead to which privacy consequences. What we do know is that these algorithms work better the more data they have. This creates an incentive for companies to collect and store as much data as possible, and to bury the privacy ramifications, either in legalese or by playing dumb and being vague.
What can be done? There must be strict controls and regulations concerning how all the data about us — not just the obviously sensitive bits — is collected, stored and sold. With the implications of our current data practices unknown, and with future uses of our data unknowable, data storage must move from being the default procedure to a step that is taken only when it is of demonstrable benefit to the user, with explicit consent and with clear warnings about what the company does and does not know. And there should also be significant penalties for data breaches, especially ones that result from underinvestment in secure data practices, as many now do.
Companies often argue that privacy is what we sacrifice for the supercomputers in our pockets and their highly personalized services. This is not true. While a perfect system with no trade-offs may not exist, there are technological avenues that remain underexplored, or even actively resisted by big companies, that could allow many of the advantages of the digital world without this kind of senseless assault on our privacy.
With luck, stricter regulations and a true consumer backlash will force our technological overlords to take this issue seriously and let us take back what should be ours: true and meaningful informed consent, and the right to be let alone.