title: Human value(s)
lang: en

> The greatest threat that humanity faces from artificial intelligence is not killer robots, but rather our lack of willingness to analyze, name, and live up to the values we want society to have today. Reductionism denies not only the specific values that individuals hold, but erodes humanity’s ability to identify and build upon them in aggregate for our collective future.
>
> Our primary challenge today is determining what a human is worth. If we continue to prioritize shareholder-maximized growth, we need to acknowledge the reality that there is no business imperative to keep humans in jobs once their skills and attributes can be replaced by machines – and like Turner, once people can’t work and consume, they’re of no value to society at all.
>
> Genuine prosperity means prioritizing people and planet at the same level as financial profit. […] So, before algorithms make all our decisions – while we remain – the question we have to ask ourselves is:
>
> How will machines know what we value if we don’t know ourselves?

*While We Remain* (cache)

Maybe one thing to consider today is rather: “How will developers know what we value if we don’t know ourselves?” Because, as Oliver Reichenstein once wrote:

> Let’s imagine that it will be scientifically and morally obvious that machines make better political decisions than humans. Who runs those machines that sit in parliament? Who monitors them? And aren’t we ultimately subjecting ourselves to those who build, manage, run and own the machines rather than the machines themselves? Who decides that machines make better decisions? The people that voted the machines into power? The smarter machines? The market? The Lobbyists? A group of programmers on Slack? The machines autonomously? Whom would you like to take such decisions?
>
> As crazy as this may sound, all of this is not Science Fiction. It is happening right now. Machines already filter, sort and choose the information we base our decisions upon. They count our votes. They sort the tasks we spend our time on, they choose the people we talk to and meet. More and more key aspects of our lives are decided by information technology. And things go wrong. Machines are made by humans. As long as we make mistakes, our machines make mistakes.

*Who serves whom?* (cache)

This is here. This is shaping our current thoughts. It has literally been manipulating us for a whole decade now. And we are still so passive about it. Not as developers, but as humans.

Where are the whistleblowers from GAFAM+?

Or maybe we don’t even need them anymore (cache).