David Larlet 3 months ago
parent
commit
82870f6ba2
Signed by: David Larlet <david@larlet.fr> GPG Key ID: 3E2953A359E7E7BD

+ 248
- 0
cache/2024/3ea27fca4fabb81676fc1b98264f3bd8/index.html

@@ -0,0 +1,248 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>It’s OK to call it Artificial Intelligence (archive) — David Larlet</title>
<meta name="description" content="Publication mise en cache pour en conserver une trace.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Is that even respected? Retrospectively? What a shAItshow…
https://neil-clarke.com/block-the-bots-that-feed-ai-models-by-scraping-your-website/ -->
<meta name="robots" content="noai, noimageai">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://simonwillison.net/2024/Jan/7/call-it-ai/">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>It’s OK to call it Artificial Intelligence</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="https://simonwillison.net/2024/Jan/7/call-it-ai/" title="Lien vers le contenu original">Source originale</a>
<br>
Mis en cache le 2024-01-13
</p>
</nav>
<hr>
<p><em><strong>Update 9th January 2024</strong>: This post was clumsily written and failed to make the point I wanted it to make. I’ve published a follow-up, <a href="https://simonwillison.net/2024/Jan/9/what-i-should-have-said-about-ai/">What I should have said about the term Artificial Intelligence</a> which you should read instead.</em></p>
<p><em>My original post follows.</em></p>

<hr>
<p>We need to be having high quality conversations about AI: what it can and can’t do, its many risks and pitfalls and how to integrate it into society in the most beneficial ways possible.</p>
<p>Any time I write anything that <a href="https://simonwillison.net/tags/ai/">mentions AI</a> it’s inevitable that someone will object to the very usage of the term.</p>
<p>Strawman: “Don’t call it AI! It’s not actually intelligent—it’s just spicy autocomplete.”</p>
<p>That strawman is right: it’s not “intelligent” in the same way that humans are. And “spicy autocomplete” is actually a pretty good analogy for how a lot of these things work. But I still don’t think this argument is a helpful contribution to the discussion.</p>
<p>We need an agreed term for this class of technology, in order to have conversations about it. I think it’s time to accept that “AI” is good enough, and is already widely understood.</p>
<p>I’ve fallen for this trap myself. Every time I write a headline about AI I find myself reaching for terms like “LLMs” or “Generative AI”, because I worry that the term “Artificial Intelligence” over-promises and implies a false mental model of a sci-fi system like Data from Star Trek, not the predict-next-token technology we are building with today.</p>
<p>I’ve decided to cut that out. I’m going to embrace the term Artificial Intelligence and trust my readers to understand what I mean without assuming I’m talking about Skynet.</p>
<p>The term “Artificial Intelligence” has been in use by academia since 1956, with the <a href="https://en.m.wikipedia.org/wiki/Dartmouth_workshop">Dartmouth Summer Research Project on Artificial Intelligence</a>—this field of research is nearly 70 years old now!</p>
<p>John McCarthy’s 2nd September 1955 proposal for that workshop included this:</p>
<blockquote>
<p>The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find <strong>how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves</strong>.</p>
</blockquote>
<p>I think this is a very strong definition, which fits well with the AI models and use-cases we are talking about today. Let’s use it.</p>

<h4 id="why-not-llms">Why not LLMs?</h4>

<p>I’ve spent the past year mainly talking about <a href="https://simonwillison.net/tags/llms/">LLMs</a>—Large Language Models—often as an alternative to the wider term “AI”.</p>

<p>While this term is accurate, it comes with a very significant downside: most people still don’t know what it means.</p>
<p>I find myself starting every article with “Large Language Models (LLMs), the technology behind ChatGPT and Google Bard...”</p>
<p>I don’t think this is helpful. Why use a relatively obscure term I have to define every single time, just because the word “intelligence” in the common acronym AI might be rejected by some readers?</p>
<p>The term LLM is already starting to splinter as language models go multi-modal. I’ve seen the term “LMMs” for Large Multimodal Models start to circulate, which risks introducing yet another piece of jargon that people need to understand in order to comprehend my writing!</p>

<h4 id="argument-against">The argument against using the term AI</h4>

<p>My link to this post <a href="https://fedi.simonwillison.net/@simon/111711738419044973">on Mastodon</a> has attracted thoughtful commentary that goes well beyond the straw man argument I posed above.</p>

<p>The thing that’s different with the current wave of AI tools, most notably LLMs, is that at first glance they really do look like the AI from science fiction. You can have conversations with them, they exhibit knowledge of a vast array of things, and it’s easy to form an initial impression of them that puts them at the same level of “intelligence” as <a href="https://en.wikipedia.org/wiki/J.A.R.V.I.S.">Jarvis</a>, or <a href="https://en.wikipedia.org/wiki/Data_(Star_Trek)">Data</a>, or a <a href="https://en.wikipedia.org/wiki/Terminator_(character)">T-800</a>.</p>

<p>The more time you spend with them, and the more you understand about how they work, the more this illusion falls apart as their many flaws start to become apparent.</p>

<p>Where this gets <em>actively harmful</em> is when people start to deploy systems under the assumption that these tools really are trustworthy, intelligent systems—capable of making decisions that have a real impact on people’s lives.</p>

<p>This is the thing we have to fight back against: we need to help people overcome their science fiction priors, understand exactly what modern AI systems are capable of, how they can be used responsibly and what their limitations are.</p>

<p>I don’t think refusing to use the term AI is an effective tool to help us do that.</p>

<h4 id="not-agi-instead">Let’s tell people it’s “not AGI” instead</h4>
<p>If we’re going to use “Artificial Intelligence” to describe the entire field of machine learning, generative models, deep learning, computer vision and so on... what should we do about the science fiction definition of AI that’s already lodged in people’s heads?</p>
<p>Our goal here is clear: we want people to understand that the LLM-powered tools they are interacting with today aren’t actually anything like the omniscient AIs they’ve seen in science fiction for the <a href="https://en.wikipedia.org/wiki/Erewhon">past ~150 years</a>.</p>
<p>Thankfully there’s a term that’s a good fit for this goal already: AGI, for “Artificial General Intelligence”. This is generally understood to mean AI that matches or exceeds human intelligence.</p>
<p>AGI itself is vague and infuriatingly hard to define, but in this case I think that’s a feature. “ChatGPT isn’t AGI” is an easy statement to make, and I don’t think its accuracy is even up for debate.</p>
<p>The term is right there for the taking. “You’re thinking about science fiction there: ChatGPT isn’t AGI, like in the movies. It’s just an AI language model that can predict next tokens to generate text.”</p>
<h4 id="misc-thoughts">Miscellaneous additional thoughts</h4>
<p>There’s so much good stuff in <a href="https://fedi.simonwillison.net/@simon/111711738419044973">the conversation</a> about this post. I already added the new sections <a href="https://simonwillison.net/2024/Jan/7/call-it-ai/#why-not-llms">Why not LLMs?</a>, <a href="https://simonwillison.net/2024/Jan/7/call-it-ai/#argument-against">The argument against using the term AI</a> and <a href="https://simonwillison.net/2024/Jan/7/call-it-ai/#not-agi-instead">Let’s tell people it’s “not AGI” instead</a> based on those comments.</p>
<p>I’ll collect a few more miscellaneous thoughts in this section, which I may continue to grow in the future.</p>
<ul>
<li>I’ve seen a few reactions to this post that appear to interpret me as saying “Everyone should be calling it AI. Stop calling it something else.” That really wasn’t my intention. If you want to use more accurate language in your conversations, go ahead! What I’m asking for here is that people try to resist the temptation to jump into every AI discussion with “well actually, AI is a bad name for it because...”, in place of more productive conversations.</li>
<li>Academia really did go all-in on Artificial Intelligence, across many decades. The Stanford Artificial Intelligence Lab (SAIL) was founded by John McCarthy (who coined the term for the Dartmouth workshop) in 1963. The University of Texas Laboratory for Artificial Intelligence opened in 1983. Bernard Meltzer at the University of Edinburgh founded the Artificial Intelligence Journal in 1970.</li>
<li>Industry has behaved slightly differently. Just 5-10 years ago my experience was that people building with these technologies tended to avoid the term “AI”, instead talking about “machine learning” or “deep learning”. That’s changed in the past few years, as the rise of LLMs and generative AI have produced systems that feel a little bit closer to the sci-fi version.</li>
<li>The term <a href="https://en.wikipedia.org/wiki/AI_winter">AI winter</a> was coined in 1984 to describe a period of reduced funding for AI research. There have been two major winters so far, and people are already predicting a third.</li>
<li>The most influential organizations building Large Language Models today are <a href="https://openai.com/">OpenAI</a>, <a href="https://mistral.ai/">Mistral AI</a>, <a href="https://ai.meta.com/">Meta AI</a>, <a href="https://ai.google/">Google AI</a> and <a href="https://www.anthropic.com/">Anthropic</a>. All but Anthropic have AI in the title; Anthropic call themselves “an AI safety and research company”. Could rejecting the term “AI” be synonymous with a disbelief in the value or integrity of this whole space?</li>
<li>One of the problems with saying “it’s not actually intelligent” is that it raises the question of what intelligence truly is, and what capabilities a system would need in order to match that definition. This is a rabbit hole which I think can only act as a distraction from discussing the concrete problems with the systems we have today.</li>
<li><p>Juan Luis <a href="https://social.juanlu.space/@astrojuanlu/111714012496518004">pointed me</a> to this fascinating little snippet tucked away in the references section on Wikipedia’s <a href="https://en.m.wikipedia.org/wiki/History_of_artificial_intelligence">History of Artificial Intelligence</a>, quoting John McCarthy in 1988:</p>
<blockquote>
<p>[O]ne of the reasons for inventing the term “artificial intelligence” was to escape association with “cybernetics”. Its concentration on analog feedback seemed misguided, and I wished to avoid having either to accept Norbert (not Robert) Wiener as a guru or having to argue with him.</p>
</blockquote>
<p>So part of the reason he coined the term AI was kind of petty! This is also a neat excuse for me to link to one of my favourite podcast episodes, 99% Invisible on <a href="https://99percentinvisible.org/episode/project-cybersyn/">Project Cybersyn</a> (only tangentially related but it’s so good!)</p>
</li>
<li>Jim Gardner <a href="https://fosstodon.org/@jimgar/111714306458200053">pointed out</a> that the term AI “is <a href="https://www.merriam-webster.com/dictionary/polysemous">polysemic</a>. It means X to researchers, but Y to laypeople who only know of ChatGPT”. I think this observation may be crucial to understanding why this topic is so hotly debated!</li>
<li>Aside from confusion with science-fiction, one of the strongest reasons for people to reject the term AI is due to its association with marketing and hype. Slapping the label “AI” on something is seen as a cheap trick that any company can use to attract attention and raise money, to the point that some people have a visceral aversion to the term.</li>
</ul>
<p>Here’s what Glyph Lefkowitz <a href="https://mastodon.social/@glyph/111712677063589054">had to say</a> about this last point:</p>
<blockquote>
<p>A lot of insiders—not practitioners as such, but marketers &amp; executives—use “AI” as the label not in spite of its confusion with the layperson’s definition, but because of it. Investors who vaguely associate it with machine-god hegemony assume that it will be very profitable. Users assume it will solve their problems. It’s a term whose primary purpose has become deceptive.</p>
<p>At the same time, a lot of the deception is unintentional. When you exist in a sector of the industry that the public knows as “AI”, that the media calls “AI”, that industry publications refer to as “AI”, that other products identify as “AI”, going out on a limb and trying to build a brand identity around pedantic hairsplitting around “LLMs” and “machine learning” is a massive uphill battle which you are disincentivized at every possible turn to avoid.</p>
</blockquote>
<p>Glyph’s closing thought here reflects my own experience: I tried to avoid leaning too hard into the term “AI” myself, but eventually it felt like an uphill battle that was resulting in little to no positive impact.</p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Thème</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
if (cssRule.cssText.startsWith(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a rule into a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

+ 79
- 0
cache/2024/3ea27fca4fabb81676fc1b98264f3bd8/index.md

@@ -0,0 +1,79 @@
title: It’s OK to call it Artificial Intelligence
url: https://simonwillison.net/2024/Jan/7/call-it-ai/
hash_url: 3ea27fca4fabb81676fc1b98264f3bd8
archive_date: 2024-01-13


+ 184
- 0
cache/2024/668d0f82ae65b0e94ea76145057759a7/index.html

@@ -0,0 +1,184 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>‘One in a million’ iPhone bridal photo explanation: blame panorama mode (archive) — David Larlet</title>
<meta name="description" content="Publication mise en cache pour en conserver une trace.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Is that even respected? Retrospectively? What a shAItshow…
https://neil-clarke.com/block-the-bots-that-feed-ai-models-by-scraping-your-website/ -->
<meta name="robots" content="noai, noimageai">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://www.theverge.com/2023/12/2/23985299/iphone-bridal-photo-three-poses-explanation-panorama-photoshop-generative-ai">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>‘One in a million’ iPhone bridal photo explanation: blame panorama mode</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="https://www.theverge.com/2023/12/2/23985299/iphone-bridal-photo-three-poses-explanation-panorama-photoshop-generative-ai" title="Lien vers le contenu original">Source originale</a>
<br>
Mis en cache le 2024-01-13
</p>
</nav>
<hr>
<div class="duet--article--article-body-component"><p class="duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white">Depending on how “online” you are, you may have seen a picture floating around socials with a strange quirk: a woman — comedian Tessa Coates — is standing in front of two mirrors in a bridal gown and, somehow, <a href="https://www.instagram.com/p/CzPGNmJIebC/">holding three poses at once</a>. Coates insisted in her Instagram post that the picture wasn’t altered; it just came out that way.</p></div>
<div class="duet--article--article-body-component"><p class="duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white">So what happened? Was it a glitched iOS Live Photo (the iOS feature that takes short videos and picks out the best one)? A faked image manipulated with Photoshop? A brief glimpse into three different, parallel realities? </p></div>
<div class="duet--article--article-body-component"><p class="duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white">Nope, it’s simpler than all of that. Faruk from the iPhonedo YouTube channel <a href="https://www.threads.net/@ayfondo/post/C0VzJWCuwnU">posted a short video</a> to Threads explaining exactly what happened, and it’s much more straightforward than you might expect.</p></div>
<div class="duet--article--article-body-component"><p class="duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white">It’s multiple images, stitched together using Coates’ iPhone 12’s “pano” feature. Faruk figured this out by peeking at the shot’s metadata and seeing its resolution is cropped from the main camera’s normal resolution down to 3028 x 3948, which happens when a picture is taken in panoramic mode. </p></div>
<div class="duet--article--article-body-component"><p class="duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white">The reason has to do with how panoramic shots on the iPhone work. When you take a picture in “pano” mode, the camera takes many pictures and stitches them together into one wider photo. To keep the final image from being all wiggly, the phone has to crop them before the stitch, panning up, down, and across the original images to match them at the edges. The same principle is at play in digital video stabilization, which produces smooth video from shaky footage.</p></div>
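The crop-and-stitch idea can be sketched in a few lines. This is not Apple’s actual pipeline (which is proprietary and far more sophisticated), just a toy illustration: two overlapping grayscale strips, represented as 2D arrays of pixel values, are aligned by trying every candidate overlap width and keeping the one with the smallest pixel difference, then merged into one wider image. The function names (`overlapError`, `bestOverlap`, `stitch`) are invented for this sketch.

```javascript
// Toy panorama stitching sketch (hypothetical, greatly simplified).
// Images are 2D arrays of grayscale pixel values with equal row counts.

// Sum of squared differences between the last `overlap` columns of `a`
// and the first `overlap` columns of `b`.
function overlapError(a, b, overlap) {
  let err = 0;
  for (let row = 0; row < a.length; row++) {
    for (let k = 0; k < overlap; k++) {
      const d = a[row][a[row].length - overlap + k] - b[row][k];
      err += d * d;
    }
  }
  return err;
}

// Try every overlap width and keep the best (lowest-error) match.
function bestOverlap(a, b) {
  const maxOverlap = Math.min(a[0].length, b[0].length);
  let best = { overlap: 1, err: Infinity };
  for (let o = 1; o <= maxOverlap; o++) {
    const err = overlapError(a, b, o);
    if (err < best.err) best = { overlap: o, err };
  }
  return best.overlap;
}

// Stitch: keep all of `a`, then append the non-overlapping part of `b`.
function stitch(a, b) {
  const o = bestOverlap(a, b);
  return a.map((row, i) => row.concat(b[i].slice(o)));
}

// Example: the strips overlap on columns [3, 4] / [7, 8].
// stitch(...) → [[1, 2, 3, 4, 9], [5, 6, 7, 8, 10]]
const pano = stitch([[1, 2, 3, 4], [5, 6, 7, 8]], [[3, 4, 9], [7, 8, 10]]);
```

Real stitchers also search vertical offsets and blend the seam, which is exactly why a subject who moves between frames, as Coates did in the mirrors, can end up appearing several times in one shot.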
<div class="duet--article--article-body-component"><p class="duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white">That’s somewhat similar to the iPhone’s Deep Fusion computational photography feature, which <a href="/2019/10/1/20893516/apple-deep-fusion-camera-mode-iphone-11-pro-max-ios-13-beta">compensates for dim lighting</a> by taking several pictures at once within a fraction of a second and blends them together after processing them at the pixel level to pull out lighting, color, and tone detail. Then there’s Google’s bevy of <a href="/23902248/google-photos-pixel-8-ai-magic-editor-best-take-audio-magic-eraser">AI photo-editing tools in the Pixel 8</a>, which let you take several photos and <a href="/23914615/pixel-8-pro-camera-parents-best-take-face-unblur">swap out faces</a>, tweak the background, or <a href="/2023/5/10/23716165/google-photos-ai-magic-editor-transform-pixel-io">move whole people or things around</a> to create the image you wanted <a href="/23902248/google-photos-pixel-8-ai-magic-editor-best-take-audio-magic-eraser">rather than the one you took</a>.</p></div>
<div class="duet--article--article-body-component"><p class="duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white">Stitching panoramic shots together isn’t as fancy as all that, and it isn’t perfect. Anyone who’s taken their share of panoramic iPhone shots can attest that panoramas often produce wacky artifacts like missing arms and distorted faces. In Coates’ case, her phone’s camera took several pictures, and since it couldn’t know that the women in the mirrors were also Coates, it didn’t try to synchronize the poses. Faruk even manages to reproduce the phenomenon himself in his video. Shame, though. I had hoped we were actually seeing evidence of the multiverse.</p></div>
<div class="duet--article--article-body-component"><p class="duet--article--dangerously-set-cms-markup duet--article--standard-paragraph mb-20 font-fkroman text-18 leading-160 -tracking-1 selection:bg-franklin-20 dark:text-white dark:selection:bg-blurple [&amp;_a:hover]:shadow-highlight-franklin dark:[&amp;_a:hover]:shadow-highlight-blurple [&amp;_a]:shadow-underline-black dark:[&amp;_a]:shadow-underline-white"><em><strong>Update December 2nd, 2023, 3:31PM ET: </strong>Added contextual information about computational photography and generative AI photo editing features from Apple and Google.</em></p></div>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Thème</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
if (cssRule.cssText.startsWith(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a Rule to a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

+ 6
- 0
cache/2024/668d0f82ae65b0e94ea76145057759a7/index.md
File diff suppressed because it is too large


+ 177
- 0
cache/2024/e990536ed88823f047296ea25a6b7933/index.html
File diff suppressed because it is too large


+ 6
- 0
cache/2024/e990536ed88823f047296ea25a6b7933/index.md
File diff suppressed because it is too large


+ 6
- 0
cache/2024/index.html

@@ -104,10 +104,14 @@
<li><a href="/david/cache/2024/87c468a4eddabe5d2c28e902d7f17504/" title="Accès à l’article dans le cache local : je ne sais pas pourquoi">je ne sais pas pourquoi</a> (<a href="https://www.la-grange.net/2024/01/11/pourquoi" title="Accès à l’article original distant : je ne sais pas pourquoi">original</a>)</li>
<li><a href="/david/cache/2024/3ea27fca4fabb81676fc1b98264f3bd8/" title="Accès à l’article dans le cache local : It’s OK to call it Artificial Intelligence">It’s OK to call it Artificial Intelligence</a> (<a href="https://simonwillison.net/2024/Jan/7/call-it-ai/" title="Accès à l’article original distant : It’s OK to call it Artificial Intelligence">original</a>)</li>
<li><a href="/david/cache/2024/62bf3ce6ef66e39b7f250a6123d92e66/" title="Accès à l’article dans le cache local : Tomorrow & Tomorrow & Tomorrow">Tomorrow & Tomorrow & Tomorrow</a> (<a href="https://erinkissane.com/tomorrow-and-tomorrow-and-tomorrow" title="Accès à l’article original distant : Tomorrow & Tomorrow & Tomorrow">original</a>)</li>
<li><a href="/david/cache/2024/4a56aa5497e68df0c5bb1d5331203219/" title="Accès à l’article dans le cache local : When “Everything” Becomes Too Much: The npm Package Chaos of 2024">When “Everything” Becomes Too Much: The npm Package Chaos of 2024</a> (<a href="https://socket.dev/blog/when-everything-becomes-too-much" title="Accès à l’article original distant : When “Everything” Becomes Too Much: The npm Package Chaos of 2024">original</a>)</li>
<li><a href="/david/cache/2024/668d0f82ae65b0e94ea76145057759a7/" title="Accès à l’article dans le cache local : ‘One in a million’ iPhone bridal photo explanation: blame panorama mode">‘One in a million’ iPhone bridal photo explanation: blame panorama mode</a> (<a href="https://www.theverge.com/2023/12/2/23985299/iphone-bridal-photo-three-poses-explanation-panorama-photoshop-generative-ai" title="Accès à l’article original distant : ‘One in a million’ iPhone bridal photo explanation: blame panorama mode">original</a>)</li>
<li><a href="/david/cache/2024/b31ba18e3de1fc479b79f1885043026a/" title="Accès à l’article dans le cache local : When to use CSS text-wrap: balance; vs text-wrap: pretty;">When to use CSS text-wrap: balance; vs text-wrap: pretty;</a> (<a href="https://blog.stephaniestimac.com/posts/2023/10/css-text-wrap/" title="Accès à l’article original distant : When to use CSS text-wrap: balance; vs text-wrap: pretty;">original</a>)</li>
<li><a href="/david/cache/2024/55477786fc56b6fc37bb97231b634d90/" title="Accès à l’article dans le cache local : Fabrique : concept">Fabrique : concept</a> (<a href="https://www.quaternum.net/2023/06/02/fabrique-concept/" title="Accès à l’article original distant : Fabrique : concept">original</a>)</li>
@@ -122,6 +126,8 @@
<li><a href="/david/cache/2024/b1da1249f2db388d7e84d6ad23c2fc5d/" title="Accès à l’article dans le cache local : Data Luddism">Data Luddism</a> (<a href="https://www.danmcquillan.org/dataluddism.html" title="Accès à l’article original distant : Data Luddism">original</a>)</li>
<li><a href="/david/cache/2024/e990536ed88823f047296ea25a6b7933/" title="Accès à l’article dans le cache local : Samsung caught faking zoom photos of the Moon">Samsung caught faking zoom photos of the Moon</a> (<a href="https://www.theverge.com/2023/3/13/23637401/samsung-fake-moon-photos-ai-galaxy-s21-s23-ultra" title="Accès à l’article original distant : Samsung caught faking zoom photos of the Moon">original</a>)</li>
<li><a href="/david/cache/2024/cd2fda3dae5d89990f73fbdaa1c3b491/" title="Accès à l’article dans le cache local : build a world, not an audience">build a world, not an audience</a> (<a href="https://keningzhu.com/journal/build-a-world-not-an-audience" title="Accès à l’article original distant : build a world, not an audience">original</a>)</li>
</ul>
