title: Human value(s)
lang: en
> The greatest threat that humanity faces from artificial intelligence is not killer robots, but rather, our lack of willingness to analyze, name, and live to the values we want society to have today. Reductionism denies not only the specific values that individuals hold, but erodes humanity's ability to identify and build upon them in aggregate for our collective future.
>
> Our primary challenge today is determining what a human is worth. If we continue to prioritize shareholder-maximized growth, we need to acknowledge the reality that there is no business imperative to keep humans in jobs once their skills and attributes can be replaced by machines – and like Turner, once people can't work and consume, they're of no value to society at all.
>
> Genuine prosperity means prioritizing people and planet at the same level as financial profit. […] So, before algorithms make all our decisions – while we remain – the question we have to ask ourselves is:
>
> *How will machines know what we value if we don't know ourselves?*
>
> <cite>*[While We Remain](https://wilsonquarterly.com/quarterly/living-with-artificial-intelligence/while-we-remain/)* ([cache](/david/cache/492166778077261a5a7ab531b43eb887/))</cite>
Maybe the question to consider today is rather: “How will *developers* know what we value if we don’t know ourselves?” Because, as Oliver Reichenstein once wrote:
> Let’s imagine that it will be scientifically and morally obvious that machines make better political decisions than humans. Who runs those machines that sit in parliament? Who monitors them? And aren’t we ultimately subjecting ourselves to those who build, manage, run and own the machines rather than the machines themselves? Who decides that machines make better decisions? The people that voted the machines into power? The smarter machines? The market? The Lobbyists? A group of programmers on Slack? The machines autonomously? Whom would you like to take such decisions?
>
> As crazy as this may sound, all of this is not Science Fiction. It is happening right now. Machines already filter, sort and choose the information we base our decisions upon. They count our votes. They sort the tasks we spend our time on, they choose the people we talk to and meet. More and more key aspects of our lives are decided by information technology. And things go wrong. Machines are made by humans. As long as we make mistakes, our machines make mistakes.
>
> <cite>*[Who serves whom?](https://ia.net/topics/who-serves-whom)* ([cache](/david/cache/4761494c490b2b1873e89abcd4984b99/))</cite>
This is here. This is shaping our current thoughts. This has literally been manipulating us for a whole decade now. And we are still so passive about it, not as developers but as humans.
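To make that concrete, here is a deliberately naive, hypothetical sketch (not taken from either article quoted above) of how a feed-ranking function embeds value choices. The signals and weights are invented for illustration, but someone always has to pick them, and that pick is a value judgement nobody voted on.

```python
# A toy, hypothetical feed ranker: the coefficients *are* the values.
# Attention is rewarded, accuracy barely counts, and that choice was
# made by whoever shipped the code, not by the people reading the feed.

from dataclasses import dataclass

@dataclass
class Post:
    clicks: int        # engagement signal
    time_spent: float  # seconds of attention captured
    accuracy: float    # hypothetical 0..1 trust score, rarely measured at all

def score(post: Post) -> float:
    # These weights are arbitrary numbers chosen for this example.
    return 0.7 * post.clicks + 0.3 * post.time_spent + 0.0 * post.accuracy

feed = sorted(
    [Post(120, 45.0, 0.2), Post(10, 300.0, 0.9)],
    key=score,
    reverse=True,
)
print([(p.clicks, p.accuracy) for p in feed])  # the clickbait post comes first
```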
*Where are the whistleblowers from GAFAM+?*

Or maybe we [don’t even need them anymore](http://www.wired.co.uk/article/silicon-valley-whistleblowers-james-bridle-book-new-dark-age) ([cache](/david/cache/a184b6847fe33b94ee831f5b356a26d1/)).