Commit 64a07fe9be by David Larlet, 1 year ago (branch: master)

cache/2023/08f83e8893cad4d5a2eb6a560f73dd65/index.html (+242, -0)

@@ -0,0 +1,242 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>GPT experiments: voice assistant (archive) — David Larlet</title>
<meta name="description" content="Publication cached in order to keep a record of it.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
// `getItem` returns null when the key is absent; also guard against a
// stray 'undefined' string before forcing a theme.
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="http://dataholic.ca/2023/04/05/gpt-assistant-vocal/">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>GPT experiments: voice assistant</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="http://dataholic.ca/2023/04/05/gpt-assistant-vocal/" title="Lien vers le contenu original">Source originale</a>
</p>
</nav>
<hr>
<p>Latest GPT exploration: is it possible to interface a <a href="https://fr.wikipedia.org/wiki/Mod%C3%A8le_de_langage">language model (LLM)</a> with existing software tools, for example to send emails? And why, anyway?</p>

<p>Demonstrating <em>ad nauseam</em> that GPT's general knowledge is not that good, or that it is easy to make it say one thing and then its opposite, only distracts from a real understanding of this kind of tool, and therefore of its possible impact. That GPT displays a certain "general culture" tinged with a tendency to confabulate is a side benefit.</p>

<p>The primary function of these models is to interpret "natural language". This language-interpretation ability is what computing tools have lacked for ages; it is a barrier which, once removed, would free us from the symbolism that is currently necessary, embodied in constraining user interfaces.</p>

<p>Except that to truly remove this barrier, LLMs must be able to bridge both sides: understand human language on one side and, on the other, produce "machine" language following some formalism, so as to turn words into (computational) action.</p>

<p>GPT already demonstrates this ability: Copilot, the variant that generates code, is one example. The Bing integration that produces an assisted search engine is another. Still, I wanted to test for myself how this could work. My previous test on the highway safety code (posts <a href="/2023/02/19/apprendre-a-gpt3/">1</a> and <a href="/2023/03/11/addendum-gpt/">2</a>) was aimed at GPT's ability to process and interpret volumes of information larger than its context window; here, I want to evaluate the language model's ability to play the role of a human-machine interpretation interface.</p>

<h2 id="commande-vocale-pour-courriel">Commande vocale pour courriel</h2>

<p>My challenge: could a voice command instruct Gmail to send an email?</p>

<p>The Lego blocks used for the occasion:</p>
<ul>
<li>An interface letting me send voice messages, retrieve those messages from a script of my own (via an <a href="https://fr.wikipedia.org/wiki/Interface_de_programmation">API</a>) and send written replies back to the user. I had started with Discord, but it didn't work to my liking. When I gave my constraints to ChatGPT, it recommended <a href="https://telegram.org/">Telegram</a>, which indeed turned out to be a very good choice.</li>
<li>A speech-to-text tool, also callable from a script/API, in this case OpenAI's <a href="https://platform.openai.com/docs/guides/speech-to-text">Whisper API</a>;</li>
<li>And of course GPT and Gmail, both of which also offer APIs so they can be driven from a script.</li>
</ul>

<p>I had set myself an additional goal: a modular mechanism able to accept other commands flexibly, for example creating calendar events, managing tasks, and so on. So I put in place a recipe mechanism: a configuration file defines the set of steps and functions to call in order to carry out a particular task, as sketched below.</p>
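<p>The post doesn't show the recipe file itself, so here is a minimal, hypothetical sketch of what one entry could look like, written as the Python structure the script would load. Only the <code>nom_recette</code> key is taken from the post (it appears in GPT's answer format below); every other field name is an assumption.</p>
<pre><code># Hypothetical recipe entry mirroring the JSON configuration file the post
# describes; field names other than "nom_recette" are guesses, not the
# author's actual schema.
RECIPES = [
    {
        "nom_recette": "send_mail",
        "description": "Compose and send an email through Gmail",
        "steps": [
            {"type": "gpt",      "instruction": "Return the recipient name(s) as JSON"},
            {"type": "function", "name": "gmail_lookup_addresses"},
            {"type": "gpt",      "instruction": "Generate an email subject and body as JSON"},
            {"type": "function", "name": "telegram_confirm_draft"},
            {"type": "function", "name": "gmail_send"},
        ],
    },
]
</code></pre>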

<p>Net result: a success, with a few caveats. Below is a screenshot showing the exchange in Telegram's web interface.</p>

<p>The sequence is triggered by a voice message that goes as follows (this is the exact character string produced by Whisper): « Est-ce que tu peux écrire un courriel à Stéphane Guidoin pour lui dire que demain je ne rentrerai pas au travail, car il fait trop beau pour travailler. Je rentrerai après demain. Signé Robert. » (Roughly: "Can you write an email to Stéphane Guidoin to tell him that I won't come in to work tomorrow, because the weather is too nice to work. I'll be back the day after tomorrow. Signed, Robert.")</p>

<p><img src="/images/2023-04-05_echange_telegram.png" alt="Échange via Telegram"></p>
<p class="photoattrib">Échange avec le bot Telegram</p>

<p>For the curious, a methodology section at the end goes into more detail (and lays out a few limitations).
Everything starts from a configuration file containing the recipes. The file describes what each recipe can do and the steps to carry it out. Then I created a Telegram <a href="https://core.telegram.org/bots/">bot</a>, driven by my Python script.</p>

<p>When the user sends a voice message to the bot, my script receives the audio file and sends it to the Whisper API, which produces a text transcription. The transcription is sent to GPT together with a list of the recipes' names and descriptions, plus an instruction: return the name of the recipe matching the user's request. To make this easy to consume from my Python script (and this is the key of the whole approach), I ask GPT to answer using the descriptive JSON format. It takes the form <code class="highlighter-rouge">{"nom_recette": "send_mail"}</code>.</p>
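<p>A minimal sketch of that selection step, using the 2023-era <code>openai</code> Python library named at the end of the post (the function names and prompt wording are mine; the post doesn't show its code):</p>
<pre><code>import json
import openai  # official library, 0.x-era API (the post predates the 1.x client)

def transcribe(voice_path):
    # The Whisper API turns the Telegram voice file into a text transcription.
    with open(voice_path, "rb") as f:
        return openai.Audio.transcribe("whisper-1", f)["text"]

def pick_recipe(transcript, recipes):
    # Send the transcription together with the recipe names/descriptions and
    # ask GPT to answer in the JSON format described above.
    catalogue = "\n".join(f'- {r["nom_recette"]}: {r["description"]}' for r in recipes)
    prompt = (
        "Available recipes:\n" + catalogue + "\n"
        'Return only the matching recipe as {"nom_recette": "..."}.\n'
        "Request: " + transcript
    )
    answer = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )["choices"][0]["message"]["content"]
    return json.loads(answer)["nom_recette"]  # see the regex caveat further down
</code></pre>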

<p>Once the recipe is selected, a confirmation is sent to the user via Telegram, and the script then sticks to following the recipe's steps, namely an alternation of GPT requests and function calls to other services, Gmail in this case. The GPT requests are fully described in the configuration file; the Gmail functions are named in the configuration file but obviously have to be coded. The email-sending recipe looks like this:</p>

<ol>
<li>The user's request is sent to GPT with the instruction to return the name(s) of the recipient(s), again returning the results as JSON;</li>
<li>The recipients' names are sent to Gmail to retrieve their email addresses;</li>
<li>The user's request is sent to GPT again, this time with the instruction to generate an email subject and body;</li>
<li>My script produces a draft email, which is sent to the user via Telegram for confirmation;</li>
<li>Upon the user's approval, via a yes/no button, the email is sent (a sketch of this step follows the list).</li>
</ol>
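<p>Step 5 relies on a yes/no button, which Telegram implements as an inline keyboard. A sketch of what that step could look like with the <a href="https://pypi.org/project/pyTelegramBotAPI/">Telebot</a> library mentioned at the end of the post (<code>send_email</code> is a hypothetical helper wrapping the Gmail call, and the token is a placeholder):</p>
<pre><code>from telebot import TeleBot, types

bot = TeleBot("TELEGRAM_BOT_TOKEN")  # placeholder token

def ask_confirmation(chat_id, draft):
    # Steps 4-5: show the draft with yes/no buttons under it.
    keyboard = types.InlineKeyboardMarkup()
    keyboard.add(
        types.InlineKeyboardButton("Oui", callback_data="send"),
        types.InlineKeyboardButton("Non", callback_data="cancel"),
    )
    bot.send_message(chat_id, draft, reply_markup=keyboard)

@bot.callback_query_handler(func=lambda call: True)
def on_button(call):
    # These are the callbacks the author mentions wrestling with at the end.
    if call.data == "send":
        send_email()  # hypothetical helper wrapping the Gmail API call
    bot.answer_callback_query(call.id)
</code></pre>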

<h2 id="est-ce-que-ça-marche">Est-ce que ça marche?</h2>

<p>It works surprisingly well, considering that my code would surely make a real developer scream. Overall, GPT interprets the requests reliably. When you give it an answer template (here, a JSON structure with blanks to fill in), it always understands what to do. Over dozens of tries, it always proceeded correctly. As explained in the methodology, I only had to rein in GPT's verbal excesses.</p>

<p>I must say the Whisper API also impressed me on transcription: almost no errors, it strips out the assorted onomatopoeia and other hesitations, and it even manages to spell most family names correctly.</p>

<p>My product is far from « production ready », but the few hours I spent on it confirmed what I suspected: GPT's ability to interpret requests makes LLMs a really serious candidate for a flexible interface. You'll tell me that Siri, Alexa and the like already do this. That's partly true: Siri and Alexa make more mistakes (to my eyes) and, above all, they are systems that are harder to integrate with. Here, multiple integrations are possible and, up to a point, you control those integrations. Many platforms already offer "AI-improved" features, and this will surely explode in the coming months.</p>

<p>Of course, the question of actual reliability remains. Only through high-volume integrations will it be possible to really assess whether reliability is around 99% or 90%: the difference between a gadget perceived as dependable and one that isn't.</p>

<p>One last substantive comment: up to a point, if you explain the rules of the game to GPT, it can generate recipes itself. Giving it my recipe as an example, I asked it to do the same for creating an Asana task; it returned an answer that held together. Likewise, here I limit myself to sending an email from scratch, but replying to an email would be possible. More generally, the same approach could be used to summarize a day's worth of email, surface the messages that seem to require urgent action and answer them, and so on.</p>

<p>As mentioned, the main point where GPT lacked the consistency and predictability needed to serve as a human-machine bridge is its tendency to be needlessly verbose and to give a reply like the one below, which prepends "Here is the JSON structure answering your request:" to the payload:</p>

<p><code class="highlighter-rouge">Voici la structure JSON répondant à votre requête:
{"recette": "send_mail"}</code></p>

<p>When all we want is the JSON structure itself. I worked around the problem with a regular expression, but that's… meh. The Copilot example shows, however, that an LLM trained for this purpose can stick to structured formats.</p>
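<p>That workaround could look like this minimal sketch: grab the first <code>{…}</code> block in the reply, parse it, and ignore the chatter around it.</p>
<pre><code>import json
import re

def extract_json(answer):
    # Keep only the first {...} block of a verbose GPT reply.
    match = re.search(r"\{.*\}", answer, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object in GPT's reply: " + answer)
    return json.loads(match.group(0))
</code></pre>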

<p>The other issue in this use case is how family names are spelled. To my surprise, Whisper got most family names right. But when it missed one, I found no reliable way to make GPT understand that a series of letters given after a family name indicated how to spell it. Moreover, the Gmail API is not very tolerant of spelling mistakes when searching for a name, so retrieving an email address with an error in the name doesn't work. That is the main limitation, unsolved at this stage, of my approach.</p>

<p>The Whisper API only supports messages of up to one minute. There are of course approaches for segmenting an audio file and transcribing it in several chunks, but I didn't implement that function (a possible approach is sketched below). My tests were therefore limited to voice messages under one minute. In any case, in most of my tests GPT followed the instructions, whether I asked for a short or a longer email, formal or informal, "tu" or "vous", and the other permutations I tried. The generated subject line sometimes left something to be desired, but it is better than many of the subject lines we send each other daily (when there is one…). One slightly disappointing limitation: GPT did not infer that when I said the message was for my spouse, it could automatically pick an informal phrasing and "tu".</p>
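<p>For what it's worth, a sketch of the segmentation approach alluded to but not implemented, assuming the <code>pydub</code> library (the post names no tool for this):</p>
<pre><code>from pydub import AudioSegment  # assumed library; not named in the post

def transcribe_long(path, chunk_ms=60_000):
    # Cut the audio into sub-minute chunks, transcribe each one with the
    # transcribe() helper sketched earlier, and join the partial texts.
    audio = AudioSegment.from_file(path)
    parts = []
    for start in range(0, len(audio), chunk_ms):
        chunk_path = f"chunk_{start}.ogg"
        audio[start:start + chunk_ms].export(chunk_path, format="ogg")
        parts.append(transcribe(chunk_path))
    return " ".join(parts)
</code></pre>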

<p>I didn't implement many alternative paths: what happens if the email address isn't found, if the user wants to adjust the draft, and so on. It would be perfectly doable; it just required time I no longer had.</p>

<p>All of this is done in about 300 lines of Python plus a JSON configuration file of about a hundred lines. I remain impressed by how easy it was to put together. The two tasks that took me the longest: fixing my Homebrew installation, which had not appreciated the move to an M1 chip, and handling the Telegram API <em>callbacks</em>. Telegram is driven with the <a href="https://pypi.org/project/pyTelegramBotAPI/">Telebot</a> library, while for Whisper, GPT and Gmail I use the official libraries. The GPT model used is <code class="highlighter-rouge">gpt-3.5-turbo</code>; I don't have API access to GPT-4 yet.</p>
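<p>To give an idea of how little glue the Telebot side needs, here is a hedged sketch of the entry point: the handler that receives the voice note and kicks off the whole sequence. It reuses <code>bot</code>, <code>transcribe</code>, <code>pick_recipe</code> and <code>RECIPES</code> from the earlier sketches; <code>run_recipe</code> is a hypothetical dispatcher over the recipe steps.</p>
<pre><code>@bot.message_handler(content_types=["voice"])
def on_voice(message):
    # Download the OGG voice note from Telegram's servers...
    file_info = bot.get_file(message.voice.file_id)
    with open("voice.ogg", "wb") as f:
        f.write(bot.download_file(file_info.file_path))
    # ...then run the pipeline: Whisper -> recipe selection -> recipe steps.
    transcript = transcribe("voice.ogg")
    recipe = pick_recipe(transcript, RECIPES)
    bot.reply_to(message, "Recette choisie: " + recipe)
    run_recipe(recipe, transcript, message.chat.id)  # hypothetical dispatcher

bot.infinity_polling()  # blocks and polls Telegram for updates
</code></pre>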
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Theme</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
if (cssRule.cssText.startsWith(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a Rule into a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

cache/2023/08f83e8893cad4d5a2eb6a560f73dd65/index.md (+79, -0)

@@ -0,0 +1,79 @@
title: GPT experiments: voice assistant
url: http://dataholic.ca/2023/04/05/gpt-assistant-vocal/
hash_url: 08f83e8893cad4d5a2eb6a560f73dd65


cache/2023/096a44a83d8d3f2bdfd21e3d378e4719/index.html (+215, -0)

@@ -0,0 +1,215 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>Going to see the northern lights by train (archive) — David Larlet</title>
<meta name="description" content="Publication cached in order to keep a record of it.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
// `getItem` returns null when the key is absent; also guard against a
// stray 'undefined' string before forcing a theme.
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://blog.professeurjoachim.com/billet/2023-03-31-aller-voir-les-aurores-boreales-en-train">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>Going to see the northern lights by train</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="https://blog.professeurjoachim.com/billet/2023-03-31-aller-voir-les-aurores-boreales-en-train" title="Lien vers le contenu original">Source originale</a>
</p>
</nav>
<hr>
<p>Since the beginning of the year, I had needed a change of atmosphere. So I took the train to go see the northern lights.</p>
<p>I felt the impulse after a friend asked around: "my daughter would like to go see the northern lights, but my family no longer flies, do you think it's doable by train?". It must be possible, I told myself, but complicated to organize. Then I looked at the maps, the aurora visibility zones, the best times of year to see them, Scandinavian weather, solar activity forecasts… it's actually far more accessible than I thought. What if I went?</p>
<h2>What?</h2>
<p>An <a href="https://fr.wikipedia.org/wiki/Aurore_polaire">aurora borealis</a> appears when a solar wind interacts with the Earth's upper atmosphere, around the Earth's magnetic poles. Outside of violent solar storms, auroras generally occur between the 65th and 75th degrees of latitude, roughly straddling the Arctic Circle.<br>
So to see them you go as far north as you can, then a bit further north still. You stop when it's too cold or the sky is green.</p>
<h2>When?</h2>
<p>Can they be forecast? Yes, somewhat, but without precision. To get an aurora, the Earth must be in the path of a solar wind, which can be roughly predicted a month ahead by monitoring the <a href="https://jemma.mobi/noaa27d?e">Kp index</a> (which measures the interaction between solar activity and the Earth's magnetic field) and extrapolating it over the coming period (the Sun rotates on itself in 27 days). Forecasts a few hours out are far more accurate.</p>
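<p>A toy illustration of that 27-day extrapolation (the dates are hypothetical, not from the post): if a disturbance peaked on a given day, the same region of the Sun faces the Earth again about 27 days later.</p>
<pre><code>from datetime import date, timedelta

SOLAR_ROTATION = timedelta(days=27)  # the Sun's rotation period driving Kp recurrence

last_peak = date(2023, 2, 6)       # hypothetical observed Kp peak
print(last_peak + SOLAR_ROTATION)  # 2023-03-05: the next likely window
</code></pre>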
<p>Since this depends on the Sun, for which we don't have excellent activity-forecasting tools, scientists and observers can still be caught off guard by solar storms, like <a href="https://www.livescience.com/most-powerful-solar-storm-in-6-years-caused-auroras-all-over-the-us-and-nobody-saw-it-coming">the one of March 24–25</a>, the most powerful in six years, which took everyone by surprise.</p>
<p>Of course, there is also the weather; a cloudy sky will keep you from seeing auroras. But you know how weather forecasting goes. Just know that the boreal zone has fast-changing weather, with no real certainty the day before about the next day's cloud cover.</p>
<h2>Where?</h2>
<p>At the end of the 19th century, an iron deposit was discovered 145 km north of the Arctic Circle. To exploit it, the ore had to be moved to where it could be processed, so a railway line heading for the Atlantic was needed, the Baltic being farther away and frozen a good part of the year. The line starts from the Norwegian port of Narvik and reaches the deposit, where a town was built in turn, Kiruna. The line then continues to the port of Luleå on the Baltic, where it joins the line to Stockholm. It carries passenger trains in addition to the iron-ore trains.</p>
<p>In the mountains between Narvik and Kiruna, a railway workers' camp became a village, Abisko, and a tourist station was created to host visitors.</p>
<h2>And so?</h2>
<p>Armed with all this information, I could answer my friend's question positively.</p>
<ul>
<li>✔︎ it is possible to take trains from Paris to Stockholm (I did it in 2008 by train + ferry from Berlin, and in 2017 by night train from Hamburg)</li>
<li>✔︎ from Stockholm, there is a night train that stops at Luleå, Kiruna and Abisko, and ends in Narvik</li>
<li>✔︎ you can know roughly which periods will be best for aurora watching</li>
</ul>
<p>It remains to be seen whether:</p>
<ul>
<li>there is a minimum lead time for booking a round trip to the Arctic Circle by train</li>
<li>the conditions will line up to see auroras</li>
<li>the cost of the trip will be affordable</li>
<li>there are activities to fill the days</li>
</ul>
<p>I tend to improvise my trips: once I've decided I'm leaving one of these days, I read up a bit on the route and the destination and spot the options that give me the most flexibility. If needed, I send an email or two to gauge how far in advance a booking has to be placed.</p>
<h2>How?</h2>
<p>For this trip, I made the decision to leave one week before the departure date. There was <a href="https://jemma.mobi/mittaushistoria.php?p=2023-03-05">a Kp activity peak forecast for March 5–6</a>, which I had spotted two weeks ahead. The forecast updates kept confirming the activity, so I started warning my boss that I would most likely be taking several days off very soon.</p>
<p>The train journey from Paris to the Arctic Circle hinges on a night train from Hamburg to Stockholm, then another night train from Stockholm to Narvik. To reach Hamburg I decided to go through Cologne, which is served by the Thalys and from which you can take an ICE (the German equivalent of the TGV) to Hamburg. That's nearly eight hours of express trains.</p>
<p>To be in Narvik on the evening of March 5, I therefore had to arrive there on the morning of the 5th, which meant taking the night train in Stockholm on the evening of the 4th, and thus the train from Hamburg on the evening of the 3rd, which meant leaving Paris on the morning of Friday the 3rd.</p>
<p>What to do after that? My sister lives in Stockholm; it's the perfect occasion to visit her for a few days on the way back. Then why not invite myself to friends' places along the return route? I know people in Roskilde in Denmark (well, I didn't know them before going, but they are very friendly), in Berlin, and in Breda in the Netherlands… might as well spend a few days there and see a bit of Europe! And on the way out, good timing: my stop in Cologne lets me have lunch with a friend from Montreuil who moved there. The plan is set, I warn my friends and ask how flexible they can be about hosting me, and off we go! I can order my first tickets. It's Sunday, February 26; I leave Friday, March 3.</p>
<h2>How much?</h2>
<p>My trump card when I travel by train in Europe is the <a href="https://www.interrail.eu/fr">Interrail pass</a>. I discovered it in 2008 to travel from Venice to Stockholm, and for this Arctic trip I discovered that it now works through an app, instead of a booklet in which you must log each of your journeys. This pass lets me travel at will on 7 travel days within one month. Apart from express trains (TGV, Thalys, ICE…) and night trains, journeys are free. For the trains that must be paid, the price covers only the seat reservation. For example, a Thalys Paris–Cologne seat costs 37 euros instead of 111, and a Stockholm–Abisko couchette costs 240 Swedish kronor instead of 1200 (about 21 euros instead of 105). Since I could afford it, I opted for the first-class Interrail pass. First class applies on all trains except night trains, so I got to enjoy free coffee on the Scandinavian trains, wider seats, and so on. For night trains, it's the classic choice: a seat (with a bit of luck it's in a couchette compartment, so you can still sleep lying down), a couchette (with pillow, sheets and duvet) in a six-berth compartment, or a single or double private compartment.</p>
<p>In the end, I paid 160 euros in reservations on top of the 440-euro Interrail pass, for a total of 606 euros of train travel. Without the pass, for the same seats, I would have paid 984 euros. A 40% saving really changes the picture.</p>
<p>For comparison, I just looked at flights from Paris to Narvik: a round trip two weeks from now is 555 euros. And that only covers Paris–Narvik. It doesn't include the rest of the journey: a lunch in Cologne, an afternoon in Stockholm between two night trains, a stop in Abisko on the way back from Narvik, two days in Stockholm, then Roskilde, Berlin, Breda… I can't be bothered to work out what all that would have cost me by plane. Speed isn't everything, and it doesn't offset the carbon emitted into the atmosphere. You know me, <a href="https://blog.professeurjoachim.com/billet/2019-04-10-je-ne-prends-plus-l-avion">I no longer travel by plane</a>, and if you don't ask yourself that question, we won't have much (polite) to say to each other on the subject.</p>
<h2>And how was it?</h2>
<p>A few days before arriving in Narvik, I booked a tour (with <a href="https://www.dayoutnarvik.com/the-northern-lights-package">Day Out Narvik</a>). The program: hop in a minibus and chase the auroras in the fjord, in the archipelago (the Lofoten islands) or in the mountains towards the border. The destination depends on the weather. On site, you admire the auroras and take photos with the guide's advice, drink something hot and eat a waffle cooked over a wood fire. The tour gave no guarantee of seeing auroras. By luck, that evening had the ideal conditions: clear sky from 7 p.m., strong aurora activity from 7:30 to 9 p.m., cold but not too cold (−6 to −10 °C). We only had to drive 16 km from Narvik to find a beach at the head of the fjord, with the full moon lighting the mountain and the sea reflecting the lights.</p>
<p>And the auroras. At first it's faint, you mistake them for clouds, then you see their colour, pale green, then the shape, which moves. Then there are faster movements, the colours intensify, you can see purple and luminous white stretching from one end of the sky to the other. With luck you see the "dancing auroras", with very intense colours and movement. How long auroras stay visible depends mostly on the solar winds, so it varies. Sometimes it's a few minutes, sometimes several hours. At some point they don't come back after fading, or the clouds roll in and hide the sky.</p>
<p>Photos taken with my digital camera, a Fuji X100T, with its 23mm f/2, at ISO 3200.</p>

<p>Photos taken with my film camera, a Contax G1, with its Contax G 35mm f/2, on Kodak Portra 800. Exposure time: 16 seconds.</p>

<p>Almost everything in this business is a matter of luck, once you're in the right place. Of the three evenings I spent above the Arctic Circle (two in Narvik, one in Abisko), I only got the ideal conditions once, the first night. The second evening was cloudy, and the third showed auroras that were neither vivid nor lasting. It was still a nice surprise to glimpse some from the train back to Stockholm; the Czechs I shared the compartment with, who had been in the region for a week, were about to leave without having seen any… but we spent several minutes pressed against the frozen window, admiring the natural show unfolding above our heads as we crossed the snowy forests and frozen rivers of the far Scandinavian north at 80 kilometres per hour.</p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Theme</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
if (cssRule.cssText.startsWith(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a Rule into a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

cache/2023/096a44a83d8d3f2bdfd21e3d378e4719/index.md (+48, -0)

@@ -0,0 +1,48 @@
title: Going to see the northern lights by train
url: https://blog.professeurjoachim.com/billet/2023-03-31-aller-voir-les-aurores-boreales-en-train
hash_url: 096a44a83d8d3f2bdfd21e3d378e4719

<p>Depuis le début de l’année, j’avais besoin d’un changement d’atmosphère. Donc j’ai pris le train pour aller voir des aurores boréales.</p>
<p>J’ai ressenti l’impulsion après qu’une amie a demandé à la cantonade “ma fille voudrait aller voir les aurores boréales, mais ma famille ne prend plus l’avion, vous pensez que c’est possible en train ?”. Ça doit être possible, je me suis dit, mais compliqué à organiser. Et puis j’ai regardé les cartes, les zones de visibilité des aurores, les meilleures périodes de l’année pour les voir, la météo scandinave, les prédictions d’activité solaire… en fait, c’est bien plus accessible que je ne le pensais. Et si j’y allais ?</p>
<h2>Quoi ?</h2>
<p>Une <a href="https://fr.wikipedia.org/wiki/Aurore_polaire">aurore boréale</a> apparait quand un vent solaire interagit avec la haute atmosphère terrestre, au niveau des pôles magnétiques terrestres. Hors orages solaires violents, les aurores se produisent généralement entre le 65e et le 75e degrés de latitude, grosso modo à cheval sur le cercle polaire.<br>
Donc pour en voir il faut aller au nord tant qu’on peut, puis encore un peu plus au nord. On s’arrête quand il fait trop froid ou que le ciel est vert.</p>
<h2>Quand ?</h2>
<p>Est-ce qu’on peut les prévoir ? Oui, un peu, mais sans précision. Pour avoir une aurore, il faut que la Terre soit sur le chemin d’un vent solaire, ce qu’on peut prédire grossièrement avec un mois d’avance en surveillant <a href="https://jemma.mobi/noaa27d?e">l’index Kp</a> (qui mesure l’interaction entre l’activité solaire et le champ magnétique terrestre), et en l’extrapolant sur la période à venir (le soleil fait un tour sur lui-même en 27 jours). La prédiction à quelques heures est bien plus exacte.</p>
<p>Étant donné que ça dépend du soleil, pour lequel on n’a pas d’excellents outils de prédiction d’activité, les scientifiques et observateurs peuvent se retrouver surpris par des orages solaires—comme <a href="https://www.livescience.com/most-powerful-solar-storm-in-6-years-caused-auroras-all-over-the-us-and-nobody-saw-it-coming">celui du 24–25 mars</a>, d’une puissance record depuis six ans, qui a pris tout le monde de court.</p>
<p>Évidemment, il y a aussi la question de la météo ; un ciel nuageux empêchera de voir les aurores. Mais prédire la météo, vous connaissez. Sachez juste que la zone boréale a un temps qui change rapidement, sans vraie assurance la veille de la couverture nuageuse du lendemain.</p>
<h2>Où ?</h2>
<p>À la fin du 19e siècle, un gisement de fer est découvert à 145 km au nord du cercle boréal arctique. Pour l’exploiter il faut pouvoir déplacer le minerais là où il pourra être traité, et donc il faut une ligne de train qui partira vers l’Atlantique, étant donné que la Baltique plus éloignée, et est gelée une grosse partie de l’année. La ligne part du port norvégien de Narvik, et arrive jusqu’au gisement, où une ville est construite à son tour, Kiruna. La ligne continue ensuite jusqu’au port de Luleå sur la Baltique, où elle rejoint la ligne vers Stockholm. La ligne transporte des trains de passagers en plus des trains de minerais de fer.</p>
<p>Dans les montagnes entre Narvik et Kiruna, un camp de travailleurs du chemin de fer est devenu un village, Abisko, et une base touristique a été créée pour accueillir les visiteurs.</p>
<h2>Et donc ?</h2>
<p>Fort de toutes ces informations, j’ai pu répondre positivement à la question de mon amie.</p>
<ul>
<li>✔︎ il est possible de prendre des trains de Paris à Stockholm (je l’ai fait en 2008 en train + ferry depuis Berlin et en 2017 en train de nuit depuis Hambourg)</li>
<li>✔︎ depuis Stockholm, il y a un train de nuit qui s’arrête à Luleå, Kiruna, Abisko et termine à Narvik</li>
<li>✔︎ on peut savoir grossièrement quelles périodes seront les meilleures pour l’observation des aurores</li>
</ul>
<p>Reste à savoir si :</p>
<ul>
<li>il y a un délai minimal pour réserver un aller-retour vers le cercle polaire en train</li>
<li>les conditions seront réunies pour voir des aurores</li>
<li>le coût du voyage sera abordable</li>
<li>il existe des activités pour s’occuper en journée</li>
</ul>
<p>J’ai tendance à improviser mes voyages : une fois que j’ai décidé que je pars un de ces jours, je me documente un peu sur le trajet et la destination, je repère les options qui me permettent le plus de flexibilité. Au besoin, j’envoie un ou deux emails pour jauger du besoin de placer une réservation très à l’avance.</p>
<h2>Comment ?</h2>
<p>Pour ce voyage, j’ai pris ma décision de partir à une semaine de la date du départ. Il y avait <a href="https://jemma.mobi/mittaushistoria.php?p=2023-03-05">un pic d’activité Kp prévu pour le 5–6 mars</a>, je l’avais repéré quinze jours avant. Les mises à jour des estimations confirmaient l’activité, donc j’ai commencé à prévenir mon chef que je poserais sans doutes plusieurs jours très prochainement.</p>
<p>Le trajet en train de Paris au cercle polaire dépend d’un train de nuit de Hambourg à Stockholm, puis d’un autre train de nuit de Stockholm à Narvik. Pour rejoindre Hambourg j’ai décidé de passer par Cologne, qui est desservie par le Thalys, et d’où on peut prendre un ICE (équivalent allemand du TGV) pour Hambourg. C’est quasiment huit heures de trains express.</p>
<p>Pour être à Narvik le 5 mars au soir, il fallait donc que j’y arrive le 5 au matin, ce qui impliquait de prendre le train de nuit à Stockholm le 4 au soir, et donc prendre le train de Hambourg le 3 au soir, ce qui voulait dire partir de Paris le vendredi 3 au matin.</p>
<p>Quoi faire après ça ? Ma sœur vit à Stockholm, c’est l’occasion parfaite pour passer la voir quelques jours au retour. Puis, pourquoi pas m’inviter chez des amis sur le chemin du retour ? Je connais du monde à Roskilde au Danemark (enfin je ne les connaissais pas avant d’y aller mais ils sont très sympa), à Berlin ou à Breda en Hollande… autant aller passer quelques jours et voir un peu d’Europe ! Et à l’aller, ça tombe bien : mon arrêt à Cologne me permet d’y déjeuner avec un ami montreuillois qui y a déménagé. Le programme est prévu, je préviens les amis et leur demande à quel point ils sont flexibles pour m’héberger, et zou ! Je peux commander mes premiers billets. On est dimanche 26 février, je partirai vendredi 3 mars.</p>
<h2>How much?</h2>
<p>My trump card when I travel by train in Europe is the <a href="https://www.interrail.eu/fr">Interrail pass</a>. I discovered it in 2008, traveling from Venice to Stockholm, and for this Arctic trip I found out that it now works through an app, instead of a booklet in which you have to log every leg. The pass lets me travel at will on 7 days within one month. Apart from the express trains (TGV, Thalys, ICE…) and the night trains, journeys are free; for the trains that do charge, the price only covers the seat reservation. For example, a Thalys seat from Paris to Cologne costs 37 euros instead of 111, and a couchette from Stockholm to Abisko costs 240 Swedish kronor instead of 1,200 (about 21 euros instead of 105). Since I could afford it, I opted for the first-class Interrail pass. First class applies to every train except the night trains, so I got to enjoy the free coffee on the Scandinavian trains, wider seats, and so on. On the night trains, it is the classic choice: a seat (with a bit of luck it is in a couchette compartment, so you can still sleep lying down), a couchette (with pillow, sheets and duvet) in a six-berth compartment, or a private single or double compartment.</p>
<p>In the end, I paid 160 euros in reservations on top of the 440 euros for the Interrail pass, about 600 euros of train travel in all. Without the pass, for the same seats, I would have paid 984 euros. Saving roughly 40% really changes the game.</p>
<p>For comparison, I just looked up flights from Paris to Narvik: a round trip two weeks from now comes to 555 euros. And that only covers Paris–Narvik. It leaves out the rest of the journey: a lunch in Cologne, an afternoon in Stockholm between two night trains, a stop in Abisko on the way back from Narvik, two days in Stockholm, then Roskilde, Berlin, Breda… I can’t be bothered to work out what all of that would have cost by plane. Speed isn’t everything, and it doesn’t offset the carbon emitted into the atmosphere. You know me, <a href="https://blog.professeurjoachim.com/billet/2019-04-10-je-ne-prends-plus-l-avion">I no longer travel by plane</a>, and if you aren’t asking yourself that question, we won’t have much (polite) to say to each other on the subject.</p>
<h2>And how was it?</h2>
<p>A few days before arriving in Narvik, I booked a tour (with <a href="https://www.dayoutnarvik.com/the-northern-lights-package">Day Out Narvik</a>). The program: hop into a minibus and go chasing auroras in the fjord, in the archipelago (the Lofoten Islands) or in the mountains toward the border, the destination depending on the weather. Once there, you admire the auroras and take photos with the guide’s advice, drink something hot and eat a waffle cooked over a wood fire. The tour offered no guarantee of seeing auroras. As luck would have it, that evening brought ideal conditions: clear skies from 7 pm, strong northern-lights activity from 7:30 to 9 pm, cold but not too cold (-6 to -10 °C). We only had to drive 16 km from Narvik to find a beach at the head of the fjord, with the full moon lighting the mountain and the sea reflecting the lights.</p>
<p>And the auroras. At first they are faint, easy to mistake for clouds; then you see their color, pale green, then their shape, which moves. Then come faster movements; the colors intensify; you can see purple, and luminous white, stretching from one end of the sky to the other. With luck you see the “dancing auroras”, with very intense colors and movements. How long the auroras stay visible depends mostly on the solar wind, so it varies: sometimes a few minutes, sometimes several hours. At some point they stop reappearing after fading, or the clouds roll in and hide the sky.</p>
<p>Photos taken with my digital camera, a Fuji X100T with its 23mm f/2 lens, at ISO 3200. Click for a larger photo.</p>

<p>Photos taken with my film camera, a Contax G1 with its Contax G 35mm f/2 lens, on Kodak Portra 800. Exposure time: 16 seconds. Click for a larger photo.</p>

<p>Almost everything in this business is a matter of luck, once you are in the right place. Of the three evenings I spent above the Arctic Circle (two in Narvik, one in Abisko), I got ideal conditions only once, on the first night. The second evening was cloudy, and the third showed auroras that were neither bright nor long-lived. It was still a nice surprise to glimpse some from the train back to Stockholm; the Czechs I shared the compartment with, who had been in the region for a week, were about to head home without having seen any… but we spent several minutes pressed against the frozen window, admiring the natural spectacle unfolding above our heads as we crossed the snowy forests and frozen rivers of the far north of Scandinavia at 80 kilometers per hour.</p>

+ 215
- 0
cache/2023/230f8f7224199132de4ce030458536de/index.html View File

@@ -0,0 +1,215 @@
<article>
<header>
<h1>The mounting human and environmental costs of generative AI</h1>
</header>
<nav>
<p class="center">
<a href="https://arstechnica.com/gadgets/2023/04/generative-ai-is-cool-but-lets-not-forget-its-human-and-environmental-costs/" title="Link to the original content">Original source</a>
</p>
</nav>
<hr>
<p>Over the past few months, the field of artificial intelligence has seen rapid growth, with wave after wave of new models like Dall-E and GPT-4 emerging one after another. Every week brings the promise of new and exciting models, products, and tools. It’s easy to get swept up in the waves of hype, but these shiny capabilities come at a real cost to society and the planet.</p>
<p>Downsides include the environmental toll of mining rare minerals, the human costs of the labor-intensive process of data annotation, and the escalating financial investment required to train AI models as they incorporate more parameters.</p>
<p>Let’s look at the innovations that have fueled recent generations of these models—and raised their associated costs.</p>
<h2>Bigger models</h2>
<p>In recent years, AI models have been getting bigger, with researchers now measuring their size in the hundreds of billions of parameters. “Parameters” are the internal connections used within the models to learn patterns based on the training data.</p>
<p>For large language models (LLMs) like ChatGPT, we’ve gone from around 100 million parameters in 2018 to 500 billion in 2023 with Google’s PaLM model. The theory behind this growth is that models with more parameters should have better performance, even on tasks they were not initially trained on, although this hypothesis remains unproven.</p>
<p><em>[Figure: model size growth over the years.]</em></p>
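<p>To make “parameters” concrete: every weight in every layer counts toward the total. As a rough illustration (not from the article), a few lines of PyTorch show how quickly parameters add up even in a toy network:</p>
<pre><code>import torch.nn as nn

# A toy two-layer network: each Linear layer stores a weight matrix plus biases.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # 8,393,728 -- about 8.4 million, vs. hundreds of billions for LLMs
</code></pre>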
<p>Bigger models typically take longer to train, which means they also need more GPUs, which cost more money, so only a select few organizations are able to train them. Estimates put the training cost of GPT-3, which has 175 billion parameters, at $4.6 million—out of reach for the majority of companies and organizations. (It's worth noting that the cost of training models is dropping in some cases, such as in the case of LLaMA, the recent model trained by Meta.)</p>
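<p>As a sketch of where such cost estimates come from, here is a back-of-envelope calculation using the common rule of thumb that training takes about 6 × parameters × tokens floating-point operations. The token count, GPU throughput, utilization and hourly price below are illustrative assumptions, not figures from the article:</p>
<pre><code># Back-of-envelope training cost estimate (all inputs are assumptions).
params = 175e9                  # GPT-3 parameter count
tokens = 300e9                  # assumed number of training tokens
flops = 6 * params * tokens     # ~6*N*D rule of thumb for training FLOPs

peak_flops_per_gpu = 312e12     # assumed A100 peak throughput, FLOP/s
utilization = 0.3               # assumed fraction of peak actually sustained
gpu_hours = flops / (peak_flops_per_gpu * utilization) / 3600

price_per_gpu_hour = 1.5        # assumed cloud price, USD
print(f"{gpu_hours:,.0f} GPU-hours, ~${gpu_hours * price_per_gpu_hour:,.0f}")
# With these assumptions: roughly a million GPU-hours and a cost in the
# millions of dollars, the same order of magnitude as the $4.6M estimate above.
</code></pre>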
<p>This creates a digital divide in the AI community between those who can train the most cutting-edge LLMs (mostly Big Tech companies and rich institutions in the Global North) and those who can’t (nonprofit organizations, startups, and anyone without access to a supercomputer or millions in cloud credits). Building and deploying these behemoths requires a lot of planetary resources: rare metals for manufacturing GPUs, water to cool huge data centers, energy to keep those data centers running 24/7 on a planetary scale… all of these are often overlooked in favor of focusing on the future potential of the resulting models.</p>
<h2>Planetary impacts</h2>
<p>A study from Carnegie Mellon University professor Emma Strubell about the carbon footprint of training LLMs estimated that training a 2019 model called BERT, which has only 213 million parameters, emitted 280 metric tons of carbon emissions, roughly equivalent to the emissions from five cars over their lifetimes. Since then, models have grown and hardware has become more efficient, so where are we now?</p>
<p>In a recent academic article I wrote to study the carbon emissions incurred by training BLOOM, a 176-billion parameter language model, we compared the power consumption and ensuing carbon emissions of several LLMs, all of which came out in the last few years. The goal of the comparison was to get an idea of the scale of emissions of different sizes of LLMs and what impacts them.</p>
<p><em>[Figure: power consumption and carbon emissions of several LLMs. Sasha Luccioni, et al.]</em></p>
<p>Depending on the energy source used for training and its carbon intensity, training a 2022-era LLM emits at least 25 metric tons of carbon equivalents if you use renewable energy, as we did for the BLOOM model. If you use carbon-intensive energy sources like coal and natural gas, which was the case for GPT-3, this number quickly goes up to 500 metric tons of carbon emissions, roughly equivalent to over a million miles driven by an average gasoline-powered car.</p>
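<p>The arithmetic behind these figures is straightforward: the energy consumed during training multiplied by the carbon intensity of the grid that supplied it. A minimal sketch, with illustrative values (the energy figure and grid intensities are assumptions chosen to land near the numbers quoted above):</p>
<pre><code># tCO2eq = energy (kWh) * grid carbon intensity (kgCO2eq/kWh) / 1000
def training_emissions_tons(energy_kwh, intensity_kg_per_kwh):
    return energy_kwh * intensity_kg_per_kwh / 1000

energy_kwh = 433_000       # assumed training energy for a BLOOM-scale model
low_carbon_grid = 0.057    # assumed intensity of a mostly nuclear/hydro grid
fossil_heavy_grid = 0.7    # assumed intensity of a coal/gas-heavy grid

print(training_emissions_tons(energy_kwh, low_carbon_grid))    # ~25 tCO2eq
print(training_emissions_tons(energy_kwh, fossil_heavy_grid))  # ~300 tCO2eq
</code></pre>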
<p>And this calculation doesn’t consider the manufacturing of the hardware used for training the models, nor the emissions incurred when LLMs are deployed in the real world. For instance, with ChatGPT, which was queried by tens of millions of users at its peak a month ago, thousands of copies of the model are running in parallel, responding to user queries in real time, all while using megawatt hours of electricity and generating metric tons of carbon emissions. It’s hard to estimate the exact quantity of emissions this results in, given the secrecy and lack of transparency around these big LLMs.</p>
<h2>Closed, proprietary models</h2>
<p>Let’s go back to the LLM size plot above. You may notice that neither ChatGPT nor GPT-4 are on it. Why? Because we have no idea how big they are. Although there are several reports published about them, we know almost nothing about their size and how they work. Access is provided via APIs, which means they are essentially black boxes that can be queried by users.</p>
<p>These boxes may contain either a single model (with a trillion parameters?) or multiple models, or, as I told Bloomberg, “It could be three raccoons in a trench coat.” We really don’t know.</p>
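<p>Concretely, “access via API” means all a user ever sees is a request and a response. A sketch with the 2023-era <code>openai</code> Python client (the API key is a placeholder); note that nothing about the model’s size or internals is exposed:</p>
<pre><code>import openai  # pip install openai (pre-1.0 client, as used in 2023)

openai.api_key = "sk-..."  # placeholder key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "How many parameters do you have?"}],
)
# The response is just text; the weights stay behind the API.
print(response["choices"][0]["message"]["content"])
</code></pre>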
<p>The plot below presents a timeline of recent releases of LLMs and the type of access that each model creator provided. As you can see, the biggest models (Megatron, PaLM, Gopher, etc.) are all closed source. And if you buy into the theory that the bigger the model, the more powerful it is (I don’t), this means the most powerful AI tech is only accessible to a select few organizations, who monopolize access to it.</p>
<p><em>[Figure: a timeline of recent LLM releases and the type of access each model creator provided. Irene Solaiman]</em></p>
<p>Why is this problematic? It means it’s difficult to carry out external evaluations and audits of these models since you can’t even be sure that the underlying model is the same every time you query it. It also means that you can’t do scientific research on them, given that studies must be reproducible.</p>
<p>The only people who can keep improving these models are the organizations that trained them in the first place, which is something they keep doing to improve their models and provide new features over time.</p>
<h2>Human costs</h2>
<p>How many humans does it take to train an AI model? You may think the answer is zero, but the amount of human labor needed to make recent generations of LLMs is steadily rising.</p>
<p>When Transformer models came out a few years ago, researchers heralded them as a new era in AI because they could be trained on “raw data.” In this case, raw data means “unlabeled data”—books, encyclopedia articles, and websites that have been scraped and collected in massive quantities.</p>
<p>That was the case for models like BERT and GPT-2, which required relatively little human intervention in terms of data gathering and filtering. While this was convenient for the model creators, it also meant that all sorts of undesirable content, like hate speech and pornography, were sucked up during the model training process, then often parroted back by the models themselves.</p>
<p>This data collection approach changed with the advent of RLHF (reinforcement learning with human feedback), the technique used by newer generations of LLMs like ChatGPT. As its name indicates, RLHF adds additional steps to the LLM training process, and these steps require much more human intervention.</p>
<p>Essentially, once a model has been trained on large quantities of unlabeled data (from the web, books, etc.), humans are then asked to interact with the model, coming up with prompts (e.g., “Write me a recipe for chocolate cake”) and provide their own answers or evaluate answers provided by the model. This data is used to continue training the model, which is then again tested by humans, ad nauseam, until the model is deemed good enough to be released into the world.</p>
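<p>In pseudocode, the loop described above looks roughly like this. It is a deliberately simplified sketch of the process, not any lab’s actual pipeline, and every helper function name is hypothetical:</p>
<pre><code># Simplified RLHF-style training loop (all helper functions are hypothetical).
model = pretrain_on_unlabeled_text(corpus)      # books, web pages, etc.

while not deemed_good_enough(model):
    prompts = humans_write_prompts()            # e.g. "Write me a recipe..."
    answers = [model.generate(p) for p in prompts]
    feedback = humans_rate_or_rewrite(prompts, answers)
    model = continue_training(model, feedback)  # reward modeling + RL step

release(model)
</code></pre>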
<p>This kind of RLHF training is what made ChatGPT feasible for wide release since it could decline to answer many classes of potentially harmful questions.</p>
<p><em>[Figure: an illustration of RLHF training.]</em></p>
<p>But that success has a dirty secret behind it: To keep the costs of AI low, the people providing this “human feedback” are underpaid, overexploited workers. In January, Time wrote a report about Kenyan laborers paid less than $2 an hour to examine thousands of messages for OpenAI. This kind of work can have long-lasting psychological impacts, as we've seen in content-moderation workers.</p>
<p>To make it worse, the efforts of these nameless workers aren’t recognized in the reports accompanying AI models. Their labor remains invisible.</p>
<h2>What should we do about it?</h2>
<p>For the creators of these models, instead of focusing on scale and size and optimizing solely for performance, it’s possible to train smaller, more efficient models and make models accessible so that they can be reused and fine-tuned (read: adapted) by members of the AI community, who won’t need to train models from scratch. Dedicating more efforts toward improving the safety and security of these models—developing features like watermarks for machine-generated content, more reliable safety filters, and the ability to cite sources when generating answers to questions—can also contribute toward making LLMs more accessible and robust.</p>
<p>As users of these models (sometimes despite ourselves), it's within our power to demand transparency and push back against the deployment of AI models in high-risk scenarios, such as services that provide mental help therapy or generate forensic sketches. These models are still too new, poorly documented, and unpredictable to be deployed in circumstances that can have such major repercussions.</p>
<p>And the next time someone tells you that the latest AI model will benefit humanity at large or that it displays evidence of artificial general intelligence, I hope you'll think about its hidden costs to people and the planet, some of which I’ve addressed in the sections above. And these are only a fraction of the broader societal impacts and costs of these systems (some of which you can see on the image below, crowdsourced via Twitter)—things like job impacts, the spread of disinformation and propaganda, and copyright infringement concerns.</p>
<p><em>[Figure: the many hidden costs of generative AI, crowdsourced via Twitter.]</em></p>
<p>The current trend is toward creating bigger and more closed and opaque models. But there’s still time to push back, demand transparency, and get a better understanding of the costs and impacts of LLMs while limiting how they are deployed in society at large. Legislation like the Algorithmic Accountability Act in the US and legal frameworks on AI governance in the European Union and Canada are defining our AI future and putting safeguards in place to ensure safety and accountability in future generations of AI systems deployed in society. As members of that society and users of these systems, we should have our voices heard by their creators.</p>
</article>



+ 80
- 0
cache/2023/230f8f7224199132de4ce030458536de/index.md View File

@@ -0,0 +1,80 @@
title: The mounting human and environmental costs of generative AI
url: https://arstechnica.com/gadgets/2023/04/generative-ai-is-cool-but-lets-not-forget-its-human-and-environmental-costs/
hash_url: 230f8f7224199132de4ce030458536de


+ 218
- 0
cache/2023/452be27c5cc8a4b9824d1d7e005546c6/index.html View File

@@ -0,0 +1,218 @@
<article>
<header>
<h1>We need to tell people ChatGPT will lie to them, not debate linguistics</h1>
</header>
<nav>
<p class="center">
<a href="https://simonwillison.net/2023/Apr/7/chatgpt-lies/" title="Link to the original content">Original source</a>
</p>
</nav>
<hr>
<p><strong>ChatGPT lies to people</strong>. This is a serious bug that has so far resisted all attempts at a fix. We need to prioritize helping people understand this, not debating the most precise terminology to use to describe it.</p>
<h4>We accidentally invented computers that can lie to us</h4>
<p>I <a href="https://twitter.com/simonw/status/1643469011127259136">tweeted</a> (and <a href="https://fedi.simonwillison.net/@simon/110144293948444462">tooted</a>) this:</p>

<p>Mainly I was trying to be pithy and amusing, but this thought was inspired by reading Sam Bowman’s excellent review of the field, <a href="https://cims.nyu.edu/~sbowman/eightthings.pdf">Eight Things to Know about Large Language Models</a>. In particular this:</p>
<blockquote>
<p>More capable models can better recognize the specific circumstances under which they are trained. Because of this, they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated.</p>
</blockquote>
<p>Sycophancy and sandbagging are my two favourite new pieces of AI terminology!</p>
<p>What I find fascinating about this is that these extremely problematic behaviours are not the system working as intended: they are bugs! And we haven’t yet found a reliable way to fix them.</p>
<p>(Here’s the paper that snippet references: <a href="https://arxiv.org/abs/2212.09251">Discovering Language Model Behaviors with Model-Written Evaluations</a> from December 2022.)</p>
<h4>“But a machine can’t deliberately tell a lie”</h4>
<p>I got quite a few replies complaining that it’s inappropriate to refer to LLMs as “lying”, because to do so anthropomorphizes them and implies a level of intent which isn’t possible.</p>
<p>I completely agree that anthropomorphism is bad: these models are fancy matrix arithmetic, not entities with intent and opinions.</p>
<p>But in this case, I think the visceral clarity of being able to say “ChatGPT will lie to you” is a worthwhile trade.</p>
<p>Science fiction has been presenting us with a model of “artificial intelligence” for decades. It’s firmly baked into our culture that an “AI” is an all-knowing computer, incapable of lying and able to answer any question with pin-point accuracy.</p>
<p>Large language models like ChatGPT, on first encounter, seem to fit that bill. They appear astonishingly capable, and their command of human language can make them seem like a genuine intelligence, at least at first glance.</p>
<p>But the more time you spend with them, the more that illusion starts to fall apart.</p>
<p>They fail spectacularly when prompted with logic puzzles, or basic arithmetic, or when asked to produce citations or link to sources for the information they present.</p>
<p>Most concerningly, they hallucinate or confabulate: they make things up! My favourite example of this remains <a href="https://simonwillison.net/2023/Mar/10/chatgpt-internet-access/#i-dont-believe-it">their ability to entirely imagine the content of a URL</a>. I still see this catching people out every day. It’s remarkably convincing.</p>
<p><a href="https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/">Why ChatGPT and Bing Chat are so good at making things up</a> is an excellent in-depth exploration of this issue from Benj Edwards at Ars Technica.</p>
<h4>We need to explain this in straightforward terms</h4>
<p>We’re trying to solve two problems here:</p>
<ol>
<li>ChatGPT cannot be trusted to provide factual information. It has a very real risk of making things up, and if people don’t understand it they are guaranteed to be misled.</li>
<li>Systems like ChatGPT are not sentient, or even intelligent systems. They do not have opinions, or feelings, or a sense of self. We must resist the temptation to anthropomorphize them.</li>
</ol>
<p>I believe that <strong>the most direct form of harm caused by LLMs today is the way they mislead their users</strong>. The first problem needs to take precedence.</p>
<p>It is vitally important that new users understand that these tools cannot be trusted to provide factual answers. We need to help people get there as quickly as possible.</p>
<p>Which of these two messages do you think is more effective?</p>
<p><strong>ChatGPT will lie to you</strong></p>
<p>Or</p>
<p><strong>ChatGPT doesn’t lie, lying is too human and implies intent. It hallucinates. Actually no, hallucination still implies human-like thought. It confabulates. That’s a term used in psychiatry to describe when someone replaces a gap in one’s memory by a falsification that one believes to be true—though of course these things don’t have human minds so even confabulation is unnecessarily anthropomorphic. I hope you’ve enjoyed this linguistic detour!</strong></p>
<p>Let’s go with the first one. We should be shouting this message from the rooftops: <strong>ChatGPT will lie to you</strong>.</p>
<p>That doesn’t mean it’s not useful—it can be astonishingly useful, for all kinds of purposes... but seeking truthful, factual answers is very much not one of them. And everyone needs to understand that.</p>
<p>Convincing people that these aren’t a sentient AI out of a science fiction story can come later. Once people understand their flaws this should be an easier argument to make!</p>
<h4 id="warn-off-or-help-on">Should we warn people off or help them on?</h4>
<p>This situation raises an ethical conundrum: if these tools can’t be trusted, and people are demonstrably falling for their traps, should we encourage people not to use them at all, or even campaign to have them banned?</p>
<p>Every day I personally find new problems that I can solve more effectively with the help of large language models. Some recent examples from just the last few weeks:</p>

<p>Each of these represents a problem I could have solved without ChatGPT... but at a time cost that would have been prohibitively expensive, to the point that I wouldn’t have bothered.</p>
<p>I wrote more about this in <a href="https://simonwillison.net/2023/Mar/27/ai-enhanced-development/">AI-enhanced development makes me more ambitious with my projects</a>.</p>
<p>Honestly, at this point using ChatGPT in the way that I do feels like a massively unfair competitive advantage. I’m not worried about AI taking people’s jobs: I’m worried about the impact of AI-enhanced developers like myself.</p>
<p>It genuinely feels unethical for me <em>not</em> to help other people learn to use these tools as effectively as possible. I want everyone to be able to do what I can do with them, as safely and responsibly as possible.</p>
<p>I think the message we should be emphasizing is this:</p>
<p><strong>These are incredibly powerful tools. They are far harder to use effectively than they first appear. Invest the effort, but approach with caution: we accidentally invented computers that can lie to us and we can’t figure out how to make them stop.</strong></p>
<p>There’s a time for linguistics, and there’s a time for grabbing the general public by the shoulders and shouting “It lies! The computer lies to you! Don’t trust anything it says!”</p>
</article>



+ 51
- 0
cache/2023/452be27c5cc8a4b9824d1d7e005546c6/index.md View File

@@ -0,0 +1,51 @@
title: We need to tell people ChatGPT will lie to them, not debate linguistics
url: https://simonwillison.net/2023/Apr/7/chatgpt-lies/
hash_url: 452be27c5cc8a4b9824d1d7e005546c6


+ 256
- 0
cache/2023/4a485034e94dc6123a624e8a589e8dac/index.html View File

@@ -0,0 +1,256 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>Poking around OpenAI. (archive) — David Larlet</title>
<meta name="description" content="Publication mise en cache pour en conserver une trace.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://lethain.com/openai-exploration/">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>Poking around OpenAI.</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="https://lethain.com/openai-exploration/" title="Lien vers le contenu original">Source originale</a>
</p>
</nav>
<hr>
<p>I haven’t spent much time playing around with the latest LLMs,
and decided to spend some time doing so. I was particularly curious
about the use case of using embeddings to supplement user prompts
with additional, relevant data (e.g. supplying the current status of their
recent tickets into the prompt where they might inquire about progress on
said tickets). This use case is interesting because it’s very attainable
for existing companies and products to take advantage of, and I imagine it’s
roughly how e.g. Stripe’s GPT4 integration with their documentation works.</p>
<p>To play around with that, I created a script that converts all of my writing
into embeddings, tokenizes the user-supplied prompt to identify relevant sections
of my content to inject into an expanded prompt, and sent that expanded prompt
to OpenAI’s API.</p>
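<p>To make that concrete, here’s a minimal sketch of the embedding half of that pipeline. To be clear, this is not the actual corpus.py: it assumes the 2023-era pre-1.0 <code>openai</code> Python package and the <code>text-embedding-ada-002</code> model, and the corpus below is hard-coded placeholder text rather than real blog sections.</p>
<pre><code>import os
import openai

# Pre-1.0 openai package: module-level API key, dict-style responses.
openai.api_key = os.environ["OPENAI_API_KEY"]

def embed(texts):
    """Embed a list of strings with OpenAI's embeddings endpoint."""
    resp = openai.Embedding.create(
        model="text-embedding-ada-002",
        input=texts,
    )
    return [item["embedding"] for item in resp["data"]]

# Placeholder corpus; the real script splits Hugo Markdown posts
# into per-section chunks before embedding them.
sections = [
    "Notes on running infrastructure migrations...",
    "Notes on operating as a staff engineer...",
]
section_vectors = embed(sections)  # computed once, reused for every query
</code></pre>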
<p>You can <a href="https://github.com/lethain/openai-experiments/blob/main/corpus.py">see the code on Github</a>,
and read my notes on this project below.</p>
<h2 id="references">References</h2>
<p>This exploration is inspired by the recent work
by <a href="https://eugeneyan.com/writing/llm-experiments/#llm-tools-to-summarize-query-and-advise">Eugene Yan</a>
and <a href="https://simonwillison.net/2023/Apr/4/llm/">Simon Willison</a>.
I owe particular thanks to <a href="https://twitter.com/eugeneyan/status/1646336530695467010">Eugene Yan</a>
for his suggestions to improve the quality of the responses.</p>
<p>The code I’m sharing below is scraped together from a number of sources:</p>
<p>I found none of the examples quite worked as documented, but ultimately I was able to get them working
with some poking around, relearning Pandas, and so on.</p>
<h2 id="project">Project</h2>
<p>My project was to make the OpenAI API answer questions with awareness of all of my personal writing from this blog,
<a href="https://staffeng.com">StaffEng</a> and <a href="https://infraeng.dev/">Infrastructure Engineering</a>.
Specifically this means creating embeddings from Hugo blog posts in Markdown to use with OpenAI.</p>
<p>You can <a href="https://github.com/lethain/openai-experiments/blob/main/corpus.py">read the code on Github</a>.
I’ve done absolutely nothing to make it easy to read, but it is a complete example, and you could use
it with your own writing by changing <a href="https://github.com/lethain/openai-experiments/blob/main/corpus.py#L112">Line 112</a>
to point at your blog’s content directories. (Oh, and changing the prompts on <a href="https://github.com/lethain/openai-experiments/blob/main/corpus.py#L260">Line 260</a>.)</p>
<p>You can see a screenshot of what this looks like below.</p>
<p><img src="/static/blog/2023/openai-experiment.png" alt="Screenshot of terminal program running Github lethain/openai-experiment"></p>
<p>This project is pretty neat, in the sense that it works. It did take me a bit longer than expected, probably about three hours
to get it working given some interruptions, mostly because the documentation’s examples were all subtly broken or didn’t actually connect
together into working code. After it was working, I inevitably spent a few more hours fiddling around as well.
My repo is terrible code, but it is a full working example if anyone
else has had similar issues getting question answering with embeddings working!</p>
<p>The other comment on this project is that I don’t really view this as a particularly effective solution to the problem I wanted to solve,
as it’s performing a fairly basic k-means algorithm to match tokenized versions of my blog posts against the query,
and then injecting the best matches into the GPT query as context. Going into this, I expected, I dunno, something more
sophisticated than this. It’s a very reasonable solution, and a cost-efficient one because it avoids any model (re)training,
but feels a bit more basic than I imagined.</p>
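<p>The matching-and-injection half looks roughly like the sketch below, again hedged rather than the real code: the match here is a plain cosine-similarity nearest-neighbor lookup over the embeddings above (reusing <code>embed()</code> from the earlier sketch; function names and prompt wording are illustrative).</p>
<pre><code>import numpy as np
import openai

def top_matches(query_vector, section_vectors, k=3):
    """Return indices of the k sections most similar to the query."""
    m = np.array(section_vectors)
    q = np.array(query_vector)
    scores = (m @ q) / (np.linalg.norm(m, axis=1) * np.linalg.norm(q))
    return np.argsort(-scores)[:k]

def answer(question, sections, section_vectors):
    """Inject the best-matching sections into the prompt as context."""
    query_vector = embed([question])[0]  # embed() from the earlier sketch
    context = "\n\n".join(
        sections[i] for i in top_matches(query_vector, section_vectors)
    )
    prompt = f"Using the following context:\n{context}\n\nAnswer: {question}"
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]
</code></pre>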
<p>Also worth noting, the total cost of developing this app and running it a few dozen times: $0.50.</p>
<h2 id="thoughts">Thoughts</h2>
<p>This was a fun project, in part because it was a detour away from what I’ve spent most of my time on the last few months,
which is writing my next book. Writing and editing a book is very valuable work, but it lacks the freeform joy of
hacking around a small project with zero users. Without overthinking or overstructuring things too much,
here are some bullet-point thoughts about this project and the expansion of AI in the industry at large:</p>
<ul><li>As someone who’s been working in the industry for a while now, it’s easy to get jaded about new things.
My first reaction to the recent AI hype is very similar to my first reaction to the crypto hype:
we’ve seen hype before, and initial hype is rarely correlated with long-term impact on the industry
or on society. In other words, I wasn’t convinced.</li><li>Conversely, I think part of long-term engineering leadership is remaining open to new things.
The industry has radically changed from twenty years ago, with mobile development as the most obvious proof point.
Most things won’t change the industry much, but some things will completely transform it,
and we owe cautious interest to these potentially transformational projects.</li><li>My personal bet is that the new AI wave is moderately transformative but not massively so.
Expanding on my thinking a bit, LLMs are showing significant promise at mediocre solutions to very general problems.
A very common, often unstated, Silicon Valley model is to hire engineers, pretend the engineers are
solving a problem, hire a huge number of non-engineers to actually solve the problem “until the technology automates it”,
grow the business rapidly, and hope automation solves the margins in some later year.
LLM adoption should be a valuable tool in improving margins in this kind of business,
which in theory should enable new businesses to be created by improving the potential margin.
However, we’ve been in a decade of <a href="https://www.readmargins.com/p/zirp-explains-the-world">zero-interest-rate policy</a>
which has meant that current-year margins haven’t mattered much to folks,
which implies that most of these ideas that should be enabled by improved margins should
have already been attempted in the preceding margin-agnostic decade.
This means that LLMs will make those businesses better, but the businesses themselves should
have already been tried, and many of them ultimately failed because market size prevented the
required returns, more so than the cost of operating large internal teams to mask over missing margin-enhancing technology.</li><li>If you ignore the margin-enhancement opportunities represented by LLMs,
which I’ve argued shouldn’t generate new business ideas but improve existing business ideas already
tried over the last decade, then it’s interesting to ponder what the sweet spot is for these tools.
My take is that they’re very good at supporting domain experts, where the potential damage caused by
inaccuracies is constrained, e.g. Github Copilot is a very plausible way to empower a proficient programmer,
and a very risky way to train a novice in a setting where the code has access to sensitive resources or data.
However, to the extent that we’re pushing experts from authors to editors, I’m not sure that’s an actual speed
improvement for our current generation of experts, who already have mastery in authorship and (often) a lesser
skill in editing. Maybe there is a new generation of experts who are exceptional editors first, and authors second,
which these tools will foster. If that’s true, then likely the current generation of leaders is unable to
assess these tools appropriately, but… I think that most folks make this argument about most new technologies,
and it’s only true sometimes. (Again, crypto is a clear example of something that has not overtaken existing
technologies in the real world with significant regulatory overhead.)</li></ul>
<p>Anyway, it was a fun project, and I have a much better intuitive sense of what’s possible
in this space after spending some time here, which was my goal. I’ll remain very curious to
see what comes together here as the timeline progresses.</p>
<p class="mt6 instapaper_ignoref"></p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Thème</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
if (cssRule.cssText.startsWith(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a rule into a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

+ 70
- 0
cache/2023/4a485034e94dc6123a624e8a589e8dac/index.md View File

@@ -0,0 +1,70 @@
title: Poking around OpenAI.
url: https://lethain.com/openai-exploration/
hash_url: 4a485034e94dc6123a624e8a589e8dac

<p>I haven’t spent much time playing around with the latest LLMs,
and decided to spend some time doing so. I was particularly curious
about the use case of using embeddings to supplement user prompts
with additional, relevant data (e.g. supplying the current status of their
recent tickets into the prompt where they might inquire about progress on
said tickets). This use case is interesting because it’s very attainable
for existing companies and products to take advantage of, and I imagine it’s
roughly how e.g. Stripe’s GPT4 integration with their documentation works.</p><p>To play around with that, I created a script that converts all of my writing
into embeddings, tokenizes the user-supplied prompt to identify relevant sections
of my content to inject into an expanded prompt, and sent that expanded prompt
to OpenAI’s API.</p><p>You can <a href="https://github.com/lethain/openai-experiments/blob/main/corpus.py">see the code on Github</a>,
and read my notes on this project below.</p><h2 id="references">References</h2><p>This exploration is inspired by the recent work
by <a href="https://eugeneyan.com/writing/llm-experiments/#llm-tools-to-summarize-query-and-advise">Eugene Yan</a>
and <a href="https://simonwillison.net/2023/Apr/4/llm/">Simon Willison</a>.
I owe particular thanks to <a href="https://twitter.com/eugeneyan/status/1646336530695467010">Eugene Yan</a>
for his suggestions to improve the quality of the responses.</p><p>The code I’m sharing below is scraped together from a number of sources:</p><p>I found none of the examples quite worked as documented, but ultimately I was able to get them working
with some poking around, relearning Pandas, and so on.</p><h2 id="project">Project</h2><p>My project was to make the OpenAI API answer questions with awareness of all of my personal writing from this blog,
<a href="https://staffeng.com">StaffEng</a> and <a href="https://infraeng.dev/">Infrastructure Engineering</a>.
Specifically this means creating embeddings from Hugo blog posts in Markdown to use with OpenAI.</p><p>You can <a href="https://github.com/lethain/openai-experiments/blob/main/corpus.py">read the code on Github</a>.
I’ve done absolutely nothing to make it easy to read, but it is a complete example, and you could use
it with your own writing by changing <a href="https://github.com/lethain/openai-experiments/blob/main/corpus.py#L112">Line 112</a>
to point at your blog’s content directories. (Oh, and changing the prompts on <a href="https://github.com/lethain/openai-experiments/blob/main/corpus.py#L260">Line 260</a>.)</p><p>You can see a screenshot of what this looks like below.</p><p><img src="/static/blog/2023/openai-experiment.png" alt="Screenshot of terminal program running Github lethain/openai-experiment"></p><p>This project is pretty neat, in the sense that it works. It did take me a bit longer than expected, probably about three hours
to get it working given some interruptions, mostly because the documentation’s examples were all subtly broken or didn’t actually connect
together into working code. After it was working, I inevitably spent a few more hours fiddling around as well.
My repo is terrible code, but it is a full working example if anyone
else has had similar issues getting question answering with embeddings working!</p><p>The other comment on this project is that I don’t really view this as a particularly effective solution to the problem I wanted to solve,
as it’s performing a fairly basic k-means algorithm to match tokenized versions of my blog posts against the query,
and then injecting the best matches into the GPT query as context. Going into this, I expected, I dunno, something more
sophisticated than this. It’s a very reasonable solution, and a cost-efficient one because it avoids any model (re)training,
but feels a bit more basic than I imagined.</p><p>Also worth noting, the total cost of developing this app and running it a few dozen times: $0.50.</p><h2 id="thoughts">Thoughts</h2><p>This was a fun project, in part because it was a detour away from what I’ve spent most of my time on the last few months,
which is writing my next book. Writing and editing a book is very valuable work, but it lacks the freeform joy of
hacking around a small project with zero users. Without overthinking or overstructuring things too much,
here are some bullet-point thoughts about this project and the expansion of AI in the industry at large:</p><ul><li>As someone who’s been working in the industry for a while now, it’s easy to get jaded about new things.
My first reaction to the recent AI hype is very similar to my first reaction to the crypto hype:
we’ve seen hype before, and initial hype is rarely correlated with long-term impact on the industry
or on society. In other words, I wasn’t convinced.</li><li>Conversely, I think part of long-term engineering leadership is remaining open to new things.
The industry has radically changed from twenty years ago, with mobile development as the most obvious proof point.
Most things won’t change the industry much, but some things will completely transform it,
and we owe cautious interest to these potentially transformational projects.</li><li>My personal bet is that the new AI wave is moderately transformative but not massively so.
Expanding on my thinking a bit, LLMs are showing significant promise at mediocre solutions to very general problems.
A very common, often unstated, Silicon Valley model is to hire engineers, pretend the engineers are
solving a problem, hire a huge number of non-engineers to actually solve the problem “until the technology automates it”,
grow the business rapidly, and hope automation solves the margins in some later year.
LLM adoption should be a valuable tool in improving margins in this kind of business,
which in theory should enable new businesses to be created by improving the potential margin.
However, we’ve been in a decade of <a href="https://www.readmargins.com/p/zirp-explains-the-world">zero-interest-rate policy</a>
which has meant that current-year margins haven’t mattered much to folks,
which implies that most of these ideas that should be enabled by improved margins should
have already been attempted in the preceding margin-agnostic decade.
This means that LLMs will make those businesses better, but the businesses themselves should
have already been tried, and many of them ultimately failed because market size prevented the
required returns, more so than the cost of operating large internal teams to mask over missing margin-enhancing technology.</li><li>If you ignore the margin-enhancement opportunities represented by LLMs,
which I’ve argued shouldn’t generate new business ideas but improve existing business ideas already
tried over the last decade, then it’s interesting to ponder what the sweet spot is for these tools.
My take is that they’re very good at supporting domain experts, where the potential damage caused by
inaccuracies is constrained, e.g. Github Copilot is a very plausible way to empower a proficient programmer,
and a very risky way to train a novice in a setting where the code has access to sensitive resources or data.
However, to the extent that we’re pushing experts from authors to editors, I’m not sure that’s an actual speed
improvement for our current generation of experts, who already have mastery in authorship and (often) a lesser
skill in editing. Maybe there is a new generation of experts who are exceptional editors first, and authors second,
which these tools will foster. If that’s true, then likely the current generation of leaders is unable to
assess these tools appropriately, but… I think that most folks make this argument about most new technologies,
and it’s only true sometimes. (Again, crypto is a clear example of something that has not overtaken existing
technologies in the real world with significant regulatory overhead.)</li></ul><p>Anyway, it was a fun project, and I have a much better intuitive sense of what’s possible
in this space after spending some time here, which was my goal. I’ll remain very curious to
see what comes together here as the timeline progresses.</p><p class="mt6 instapaper_ignoref"></p>

+ 173
- 0
cache/2023/6eef954bc8dd84322cf19ab38caf2ee3/index.html View File

@@ -0,0 +1,173 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>GitHub Copilot AI pair programmer: Asset or Liability? (archive) — David Larlet</title>
<meta name="description" content="Publication mise en cache pour en conserver une trace.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://www.sciencedirect.com/science/article/abs/pii/S0164121223001292">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>GitHub Copilot AI pair programmer: Asset or Liability?</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="https://www.sciencedirect.com/science/article/abs/pii/S0164121223001292" title="Lien vers le contenu original">Source originale</a>
</p>
</nav>
<hr>
<h2>Abstract</h2>
<p>Automatic program synthesis is a long-standing dream in software engineering. Recently, a promising Deep Learning (DL) based solution, called Copilot, has been proposed by OpenAI and Microsoft as an industrial product. Although some studies evaluate the correctness of Copilot solutions and report its issues, more empirical evaluations are necessary to understand how developers can benefit from it effectively. In this paper, we study the capabilities of Copilot in two different programming tasks: (i) generating (and reproducing) correct and efficient solutions for fundamental algorithmic problems, and (ii) comparing Copilot’s proposed solutions with those of human programmers on a set of programming tasks. For the former, we assess the performance and functionality of Copilot in solving selected fundamental problems in computer science, like sorting and implementing data structures. For the latter, a dataset of programming problems with human-provided solutions is used. The results show that Copilot is capable of providing solutions for almost all fundamental algorithmic problems; however, some solutions are buggy and non-reproducible. Moreover, Copilot has some difficulties in combining multiple methods to generate a solution. Comparing Copilot to humans, our results show that the correct ratio of humans’ solutions is greater than that of Copilot’s suggestions, while the buggy solutions generated by Copilot require less effort to be repaired. Based on our findings, if Copilot is used by expert developers in software projects, it can become an asset, since its suggestions could be comparable to humans’ contributions in terms of quality. However, Copilot can become a liability if it is used by novice developers who may fail to filter its buggy or non-optimal solutions due to a lack of expertise.</p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Thème</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
if (cssRule.cssText.startsWith(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a rule into a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

+ 7
- 0
cache/2023/6eef954bc8dd84322cf19ab38caf2ee3/index.md View File

@@ -0,0 +1,7 @@
title: GitHub Copilot AI pair programmer: Asset or Liability?
url: https://www.sciencedirect.com/science/article/abs/pii/S0164121223001292
hash_url: 6eef954bc8dd84322cf19ab38caf2ee3

## Abstract

Automatic program synthesis is a long-standing dream in software engineering. Recently, a promising Deep Learning (DL) based solution, called Copilot, has been proposed by OpenAI and Microsoft as an industrial product. Although some studies evaluate the correctness of Copilot solutions and report its issues, more empirical evaluations are necessary to understand how developers can benefit from it effectively. In this paper, we study the capabilities of Copilot in two different programming tasks: (i) generating (and reproducing) correct and efficient solutions for fundamental algorithmic problems, and (ii) comparing Copilot’s proposed solutions with those of human programmers on a set of programming tasks. For the former, we assess the performance and functionality of Copilot in solving selected fundamental problems in computer science, like sorting and implementing data structures. For the latter, a dataset of programming problems with human-provided solutions is used. The results show that Copilot is capable of providing solutions for almost all fundamental algorithmic problems; however, some solutions are buggy and non-reproducible. Moreover, Copilot has some difficulties in combining multiple methods to generate a solution. Comparing Copilot to humans, our results show that the correct ratio of humans’ solutions is greater than that of Copilot’s suggestions, while the buggy solutions generated by Copilot require less effort to be repaired. Based on our findings, if Copilot is used by expert developers in software projects, it can become an asset, since its suggestions could be comparable to humans’ contributions in terms of quality. However, Copilot can become a liability if it is used by novice developers who may fail to filter its buggy or non-optimal solutions due to a lack of expertise.

+ 191
- 0
cache/2023/89aa5bbfeaa7c8f2411980f99801359c/index.html View File

@@ -0,0 +1,191 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>AIs can write for us but will we actually want them to? (archive) — David Larlet</title>
<meta name="description" content="Publication mise en cache pour en conserver une trace.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://www.bryanbraun.com/2023/04/14/ais-can-write-for-us-but-will-we-want-them-to/">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>AIs can write for us but will we actually want them to?</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="https://www.bryanbraun.com/2023/04/14/ais-can-write-for-us-but-will-we-want-them-to/" title="Lien vers le contenu original">Source originale</a>
</p>
</nav>
<hr>
<p>From <a href="https://openai.com/blog/chatgpt">ChatGPT</a> to <a href="https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/">Microsoft 365 Copilot</a>, we’re seeing a wave of AIs that can write and write well.</p>
<p>In a recent post, Jim Nielsen described how having AIs write for us is a trade-off:</p>
<blockquote>
<p>“Writing is a moment for self-reflection, for providing the space and time necessary for the conception of thoughts or feelings that can change your heart or mind. Offloading that task to AI is not necessarily a net-gain, it is a trade-off. One to make consciously.”</p>
<p>Jim Nielsen - <a href="https://blog.jim-nielsen.com/2023/more-everything-with-ai">More Everything With AI</a></p>
</blockquote>
<p>That made me think about my own writing. If I had to break down my current writing activity (not counting code), it would look something like this:</p>
<ul>
<li>10% - Journaling</li>
<li>10% - <a href="https://www.bryanbraun.com/blog/">Blog posts</a></li>
<li>20% - Texting and Personal Emails</li>
<li>10% - Meeting notes / todos</li>
<li>35% - Programming notes (usually to help me work through tricky coding issues)</li>
<li>15% - <a href="https://www.bryanbraun.com/books/">Book notes</a></li>
</ul>
<p>Could I hand any of these over to AI?</p>
<p>Definitely no on the journaling and blog posts, since those are basically pure self-reflection. It’s me figuring out what I believe. I could augment that a bit with spelling and grammar check tools, but it’s hard to imagine offloading more without compromising <a href="http://www.paulgraham.com/words.html">the process</a>.</p>
<p>For texting and emails I already use autocomplete and <a href="https://support.google.com/mail/answer/9116836?hl=en&amp;co=GENIE.Platform%3DDesktop">Smart Compose</a>. I also use Gmail templates for frequent responses, so I can’t see how I could automate this much further.</p>
<p>Personal notes (for meetings, books, and coding) seem the most promising, but I don’t think AI can do this for me either. When I take notes, I’m only interested in writing out the stuff that matters to me. Every book I read has a hundred summaries on the internet, each more detailed and comprehensive than mine, but I still take <a href="https://www.bryanbraun.com/books/">book notes</a> because I want to remember <a href="https://sive.rs/bfaq">what impacted me</a>. Even if an AI knew what those things were, delegating that work would defeat the purpose.</p>
<p>So maybe I don’t want AIs to take over my writing but that doesn’t mean it’s useless. Autocomplete, grammar check, and Smart Compose… these tools are already AI powered. As AI tech progresses, I expect these tools to improve and become more pervasive, impacting my writing in little ways, mostly from the margins.</p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Thème</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
if (cssRule.cssText.startsWith(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a rule into a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

+ 24
- 0
cache/2023/89aa5bbfeaa7c8f2411980f99801359c/index.md View File

@@ -0,0 +1,24 @@
title: AIs can write for us but will we actually want them to?
url: https://www.bryanbraun.com/2023/04/14/ais-can-write-for-us-but-will-we-want-them-to/
hash_url: 89aa5bbfeaa7c8f2411980f99801359c

<p>From <a href="https://openai.com/blog/chatgpt">ChatGPT</a> to <a href="https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/">Microsoft 365 Copilot</a>, we’re seeing a wave of AIs that can write and write well.</p>
<p>In a recent post, Jim Nielsen described how having AIs write for us is a trade-off:</p>
<blockquote>
<p>“Writing is a moment for self-reflection, for providing the space and time necessary for the conception of thoughts or feelings that can change your heart or mind. Offloading that task to AI is not necessarily a net-gain, it is a trade-off. One to make consciously.”</p>
<p>Jim Nielsen - <a href="https://blog.jim-nielsen.com/2023/more-everything-with-ai">More Everything With AI</a></p>
</blockquote>
<p>That made me think about my own writing. If I had to break down my current writing activity (not counting code), it would look something like this:</p>
<ul>
<li>10% - Journaling</li>
<li>10% - <a href="https://www.bryanbraun.com/blog/">Blog posts</a></li>
<li>20% - Texting and Personal Emails</li>
<li>10% - Meeting notes / todos</li>
<li>35% - Programming notes (usually to help me work through tricky coding issues)</li>
<li>15% - <a href="https://www.bryanbraun.com/books/">Book notes</a></li>
</ul>
<p>Could I hand any of these over to AI?</p>
<p>Definitely no on the journaling and blog posts, since those are basically pure self-reflection. It’s me figuring out what I believe. I could augment that a bit with spelling and grammar check tools, but it’s hard to imagine offloading more without compromising <a href="http://www.paulgraham.com/words.html">the process</a>.</p>
<p>For texting and emails I already use autocomplete and <a href="https://support.google.com/mail/answer/9116836?hl=en&amp;co=GENIE.Platform%3DDesktop">Smart Compose</a>. I also use Gmail templates for frequent responses, so I can’t see how I could automate this much further.</p>
<p>Personal notes (for meetings, books, and coding) seem the most promising, but I don’t think AI can do this for me either. When I take notes, I’m only interested in writing out the stuff that matters to me. Every book I read has a hundred summaries on the internet, each more detailed and comprehensive than mine, but I still take <a href="https://www.bryanbraun.com/books/">book notes</a> because I want to remember <a href="https://sive.rs/bfaq">what impacted me</a>. Even if an AI knew what those things were, delegating that work would defeat the purpose.</p>
<p>So maybe I don’t want AIs to take over my writing but that doesn’t mean it’s useless. Autocomplete, grammar check, and Smart Compose… these tools are already AI powered. As AI tech progresses, I expect these tools to improve and become more pervasive, impacting my writing in little ways, mostly from the margins.</p>

+ 209
- 0
cache/2023/ccb1821caf1a27ed2a2e9a92a26d0b65/index.html View File

@@ -0,0 +1,209 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>The one about AI (archive) — David Larlet</title>
<meta name="description" content="Publication mise en cache pour en conserver une trace.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://macwright.com/2023/04/15/ai.html">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>The one about AI</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="https://macwright.com/2023/04/15/ai.html" title="Lien vers le contenu original">Source originale</a>
</p>
</nav>
<hr>
<p>Like everyone, I’ve been thinking about AI. It’s already useful, in a way that the previous big thing, crypto, wasn’t. I don’t think it’ll become generalized AI - I think the <a href="https://en.wikipedia.org/wiki/AI_winter">AI winter cycle</a> is the base case and human-like intelligence is qualitatively different from LLMs, no matter how many terabytes of training data you throw at them. But that isn’t what this article is about.</p>
<p>No, it’s about other stuff, particularly technological change, happiness, and craft.</p>
<p>Optimists and pessimists agree that AI will change the world.</p>
<p>If it goes wrong, AI will continue to do a <a href="https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/">bad job suggesting sentences for criminals</a> and <a href="https://www.statnews.com/2017/09/05/watson-ibm-cancer/">promise, but fail, to diagnose cancer</a>, and find its way into a lot of other jobs that it’s not qualified for – much like an overconfident young man, which is also its preferred writing style. Maybe it’ll gain sentience and destroy us all.</p>
<p>Or it goes well, and it gives us superpowers: it’s <a href="https://every.to/chain-of-thought/the-end-of-organizing">the future of notetaking</a>, <a href="https://every.to/chain-of-thought/gpt-3-is-the-best-journal-you-ve-ever-used">a better journal</a>, it helps us <a href="https://every.to/chain-of-thought/linus-lee-is-living-with-ai">think better and get more done</a>. Maybe it’ll gain sentience and be our best friend.</p>
<p>But there’ll be winners and losers – everyone agrees. If it’s good, then the productivity gains will be unevenly distributed and those with only basic abilities - in programming, writing, music - will be replaced by the machines, or by someone using a machine to produce a lot more of the product. If it’s bad, the people using the AI will benefit but those at the other end of the algorithm, those subjected to AI-powered policing, healthcare, or hiring are subject to the inaccuracy, bias, or malice built into the system.</p>
<p>I suspect we’ll get part of both futures. AI will be integrated into a lot of things and become like the <a href="https://en.wikipedia.org/wiki/History_of_statistics#Bayesian_statistics">bayesian spam filters</a> that now seem obvious and simple. It’ll be implemented in places it doesn’t belong, and cause havoc. Jobs will shift, some becoming more in demand and others less.</p>
<p>Enough context, let’s talk about history and vibes and happiness.</p>
<p>AI feels like a reshuffling.</p>
<p><picture><source srcset="/images/2023-04-15-ai-execution-in-the-french-revolution.webp" type="image/webp"><img alt="Execution in the French Revolution" src="/images/2023-04-15-ai-execution-in-the-french-revolution.jpg"></source></picture></p>
<figcaption><a href="https://en.wikipedia.org/wiki/French_Revolution#/media/File:Execution_of_Louis_XVI.jpg">Execution of Louis XVI in the Place de la Concorde, facing the empty pedestal where the statue of his grandfather, Louis XV previously stood</a></figcaption>
<p>Consider the French Revolution, which the history books say benefited the third estate, roughly the working class, and demoted the first and second, the priests and nobility, sometimes, ahem, dramatically so. Or the <a href="https://en.wikipedia.org/wiki/Eastern_Bloc">evolution of the Eastern Bloc</a>, when socialist and communist countries <a href="https://www.csmonitor.com/1983/0324/032433.html">introduced elements of capitalism</a>, creating new winners &amp; losers from those who were in the right place &amp; time.</p>
<p>Fortunes are made and lost in a reshuffling, for those situated in the right place, class, and job – or those who rush to realign themselves with the new wave. We saw a brash, stupid version of this ideology with crypto’s motto, <a href="https://www.coindesk.com/markets/2021/03/03/the-decoder-have-fun-staying-poor/">Have Fun Staying Poor</a> - that everyone who didn’t own Bitcoin would be left behind in the new economy. But I see a variation of it every day from people writing about AI. AI is going to change everything: <em>here’s how to adapt</em>, <em>here’s who will win out</em>, write lots of people on LinkedIn and Twitter.</p>
<p>We in the tech industry are used to the ground shifting under our feet: when there’s some paradigm that lets us think less or do more, most of us jump to it. We might choose parts of the industry based on our tolerance for change – embedded programming in 2023 is much more similar to embedded programming in 2000 than web programming is to the same. But every part of the industry has churn.</p>
<p>AI feels different though, in both the micro and macro.</p>
<p>In the micro sense, more than anything that came before it, AI is a black box. It’s not even like a C++ compiler or React.js’s internals, something that’s complex and huge, but ultimately understandable. AI is not understood deeply by its creators. Fine tuning it is more an art than a science. Bugs are not fixed directly, but indirectly, by adding more to the input, or cleaning up the inputs. And the AI models come to us from familiar deities - Microsoft, Google, Facebook. The costs right now are so enormous, like <a href="https://www.semafor.com/article/04/07/2023/stability-ai-is-on-shaky-ground-as-it-burns-through-cash">Stability AI’s 75 million dollar server bill</a>, that no small startup is going to compete on the same ground. So the vast majority of “AI startups” are building on someone else’s model, and tinkering with LLMs for fun means using an existing model, probably the one written by Facebook or Microsoft-funded researchers.</p>
<p>But in the macro sense, it’s also different: I keep hearing, and thinking, that it’s going to replace all the junior developers. It’s going to empower the seniors, their managers, the idea people, the CEOs - there’ll be fewer salaries to pay, and the least skilled are the ones to be eliminated. This, you hear from venture capitalists, CEOs, and senior developers: they might be right, but they also need to be right. Basically, just cranking out code won’t matter as much - Copilot can do that. No longer will people write shortform content for travel blogs and paid promotion columns - ChatGPT will write it.</p>
<p>I have a few thoughts about this.</p>
<p><picture><source srcset="/images/2023-04-15-ai-new-jersey-gas-station.webp" type="image/webp"><img alt="New Jersey Gas Station" src="/images/2023-04-15-ai-new-jersey-gas-station.jpg"></source></picture></p>
<figcaption><a href="https://www.flickr.com/photos/dacosta1/32279343448/">New Jersey Gas Station (cc) Victor Reynolds Flickr</a></figcaption>
<p>I grew up in New Jersey. It’s one of the two states where you can’t pump your own gas. I first had to fill up a car with gas midway through college, and needed a friend to teach me how. Despite it being obviously possible to pump one’s own gas, New Jersey will probably <a href="https://www.cnn.com/2022/06/18/energy/new-jersey-oregon-pump-your-own-gas/index.html">keep that policy</a>.<sup id="fnref:1" role="doc-noteref"></sup></p>
<p>The point is, those jobs were created because of a bizarre law, and they could be lost by removing that law. And all jobs are on that scale: they’re all kind of made-up. You can take an industry and increase salaries by unionizing or restricting the labor supply by requiring more qualifications, or you can decrease salaries by dismantling workers’ rights. You can create a job out of thin air, like a gas station pump attendant or a marijuana dispensary salesman, or remove a class of jobs, like elephant hunting or TV repair.</p>
<p>To a large extent, we get the labor market we aim for with policy, and there is no natural state to it: there are entire categories of jobs that could have been automated away a decade ago but won’t be. Employment and compensation are the output of a lot of different factors: <a href="https://macwright.com/2022/07/28/youre-paid-what-youre-worth.html">You’re Paid What You’re Worth</a> is a great guide to those.</p>
<p>So I’m not necessarily excited for entry-level jobs to become automated away. I’m not convinced that they <em>have to</em> be automated away. Treating automation as a technological eventuality feels hollow: we don’t have automated kiosks at McDonald’s because they were just invented, we have them because it helps the company’s margins. If McDonald’s wanted a better customer experience, it could do the opposite. And then if activist investors get angry, it’ll go back to the touchpads again. And until we have UBI, which might never happen, it seems much better for there to be a variety of jobs for a variety of people than to make the job market even more selective. Average people need jobs, to live.</p>
<p>I also just don’t especially want to stop thinking about code. I don’t want to stop writing sentences in my own voice. I get a lot of joy from craft. It’s not a universal attitude toward work, but I’ve always been thankful that programming is a <em>craft</em> that pays a good living wage. I’d be a luthier, photographer, or, who knows, if those jobs were as viable and available. But programming lets me write and think all day, and reliably pay my rent. Writing, both code and prose, for me, is both an end product and an end in itself. I don’t want to automate away the things that give me joy.</p>
<p>And that is something that I’m more and more aware of as I get older – sources of joy. It’s good to diversify them, to keep track of them, because it’s way too easy to run out. Or to end up with just one, and then lose it.</p>
<p>The thing about Luddites is that they make good punchlines, but they were all people.</p>
<p><picture><source srcset="/images/2023-04-15-ai-manuscript-with-illumination.webp" type="image/webp"><img alt="Manuscript with illumination" src="/images/2023-04-15-ai-manuscript-with-illumination.jpg"></source></picture></p>
<figcaption><a href="https://en.wikipedia.org/wiki/Illuminated_manuscript#/media/File:Old_Armenian_Manuscript.jpg">Definitions of Philosophy of David the Invincible; 1280; vellum; Matenadaran (Yerevan, Armenia)</a></figcaption>
<p>Someone was there making <a href="https://en.wikipedia.org/wiki/Illuminated_manuscript">illuminated manuscripts</a> when movable type was invented, and they said - correctly - that it sucks and is much less fun. Of course movable type and the printing press would win out and those laborers were the last of their kind, but if we hopped into a time machine and watched them work, would we make fun of them for not getting with the times? Doesn’t that kind of seem wrong? They weren’t wrong to enjoy their craft and mourn its loss.</p>
<p>And this is not to say that work is free of tedium. To some extent, we all benefit from spellcheck and pre-mixed paints and code completion and all kinds of assistance. And the new writer putting out five stories a day as she tries to earn the right to write front-page headlines probably isn’t savoring every trend piece about bottled water or ashwagandha. But a newspaper with <em>only</em> headline writers, only abstract thinkers at the top of their game commanding ChatGPT to write the unimportant stuff - is that a future that we want, for anyone? How does one learn to write, learn what’s good or bad, learn how to have a journalistic voice? And what about the people who have the writing skills to reliably write a story a day but don’t aspire to or don’t have the ability to be a star – are they cut out of the industry entirely?</p>
<p>Universal Basic Income, maybe. Appealing across the political spectrum, for troubling reasons.<sup id="fnref:2" role="doc-noteref"></sup> Sam Altman, the OpenAI one, <a href="https://futurism.com/the-byte/basic-income-y-combinator">tried</a> and <a href="https://www.wired.com/story/y-combinator-learns-basic-income-is-not-so-basic-after-all/">delayed</a> and never re-started a plan to research UBI. I don’t know. To me, it feels like a talking point unless someone has a real plan to actually do it, to get the private money, or government policy in place, now and before it’s too late. Tech has been terrific at stalling legislation but unsuccessful in creating it: the most likely outcome seems like we put forth the idea of UBI to blame the government for not doing it.</p>
<p>So, it’s all about adapting, or in another word, opportunism. You go where the future is and stay open minded about what that is. Even if it’s a bubble, I think that Matt Levine’s words are gospel:</p>
<blockquote><p>My basic view of bubbles is that if you can identify a bubble, and you have some free time, the right move is to sell into the bubble. Not sell short, mind you, which is risky; you don’t know when the bubble will pop. Sell long. Get into the business that is bubbly, because that’s where the money is. There is demand; become the supply. -<a href="https://www.bloomberg.com/opinion/articles/2022-10-11/anti-esg-can-be-good-business">Anti-ESG Can Be Good Business</a></p></blockquote>
<p>Where does this all land? I’m moderately optimistic about AI.</p>
<p>But I think the thing that excites a lot of people about it is the reorganization, the shift, the reward for opportunism. Navigating that change in market opportunity and being <em>there</em> is its own reward to a lot of people. And it should be: this is the essence of progress in an industrialized society. The relationships, the strategy, matter much more to many people than craft or art: what goes into the production of a thing is just a variable to be minimized.</p>
<p>How people feel about AI has a lot to do with how they think society should be structured, what makes work valuable, and what they truly enjoy doing.</p>
<p>I feel in the middle, as someone who writes prose and code on a regular basis but also helps guide companies, people, and do other sorts of <em>founder</em> stuff. All I’m saying is, whichever way it turns out, spare me in the revolution.</p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Thème</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
if (cssRule.cssText.startsWith(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a rule into a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

+ 5
- 0
cache/2023/ccb1821caf1a27ed2a2e9a92a26d0b65/index.md
File diff suppressed because it is too large
View File


+ 179
- 0
cache/2023/d1545c8cf9387ad9b0c98020c7ccfe61/index.html View File

@@ -0,0 +1,179 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>Scattered ChatGPT thoughts (archive) — David Larlet</title>
<meta name="description" content="Publication mise en cache pour en conserver une trace.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://notebook.wesleyac.com/gpt-ugh/">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>Scattered ChatGPT thoughts</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="https://notebook.wesleyac.com/gpt-ugh/" title="Lien vers le contenu original">Source originale</a>
</p>
</nav>
<hr>
<p class="subtitle" id="dateline">Sunday March 26, 2023 — Brooklyn, New York</p>
<p>I started trying to write something about ChatGPT and “AI” for my <a href="https://blog.wesleyac.com">other blog</a>, but it made me too sad and annoyed to be able to write something good, so I’m just going to dump some stuff here.</p>
<p>It’s legitimately terrifying to me how many people seem to be comfortable conflating language ability with intelligence or sentience. It’s terrifying to me that so many people don’t seem to notice that they’re doing that, and don’t seem to be able to think critically about it when it’s pointed out to them.</p>
<p>It’s astonishing to me how little people have learned about trusting centralized entities with huge amounts of power in their lives. LLMs are a fundamentally centralized phenomenon — they take a huge amount of human and computer time to make, and are thus only accessible to enormous institutions. I don’t understand if people are simply blind to these power relations, or if they don’t care.</p>
<p>I put a large amount of effort in my life into avoiding letting ML models put things into my brain where at all possible (for instance, <a href="https://wesleyac.com/youtube/">using adblock to remove all recommended YouTube videos</a>). It’s very strange to see people run to fill their brains with ML output, while I regard it as a sort of contamination that should be avoided as much as possible.</p>
<p>I try to have the things in my life — particularly books, media, and software — be as high quality as possible, because I enjoy having a nice life where things work and I am surrounded by beauty. Seeing people transfixed by a machine that can generate massive amounts of mediocrity is baffling to me.</p>
<hr><blockquote><p>There is no inevitability as long as there is a willingness to contemplate what is happening.</p></blockquote>
<p>— John M. Culkin, <em>A Schoolman’s Guide to Marshall McLuhan</em> (often misattributed to McLuhan himself)</p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Thème</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
if (cssRule.cssText.startsWith(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a rule into a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

+ 5
- 0
cache/2023/d1545c8cf9387ad9b0c98020c7ccfe61/index.md View File

@@ -0,0 +1,5 @@
title: Scattered ChatGPT thoughts
url: https://notebook.wesleyac.com/gpt-ugh/
hash_url: d1545c8cf9387ad9b0c98020c7ccfe61

<p class="subtitle" id="dateline">Sunday March 26, 2023 — Brooklyn, New York</p><p>I started trying to write something about ChatGPT and “AI” for my <a href="https://blog.wesleyac.com">other blog</a>, but it made me to sad and annoyed to be able to write something good, so I’m just going to dump some stuff here.</p><p>It’s legitimately terrifying to me how many people seem to be comfortable conflating language ability with intelligence or sentience. It’s terrifying to me that so many people don’t seem to notice that they’re doing that, and don’t seem to be able to think critically about it when it’s pointed out to them.</p><p>It’s astonishing to me how little people have learned about trusting centralized entities with huge amounts of power in their lives. LLMs are a fundamentally centralized phenomenon — they take a huge amount of human and computer time to make, and are thus only accessible to enormous institutions. I don’t understand if people are simply blind to these power relations, or if they don’t care.</p><p>I put a large amount of effort in my life to avoiding allowing ML models to put things into my brain where at all possible (for instance, <a href="https://wesleyac.com/youtube/">using adblock to remove all recommended YouTube videos</a>). It’s very strange to see people run to fill their brains with ML output, while I regard it as a sort of contamination that should be avoided as much as possible.</p><p>I try to have the things in my life — particularly books, media, and software — be as high quality as possible, because I enjoy having a nice life where things work and I am surrounded by beauty. Seeing people transfixed by a machine that can generate massive amounts of mediocrity is baffling to me.</p><hr><blockquote><p>There is no inevitability as long as there is a willingness to contemplate what is happening.</p></blockquote><p>— John M. Culkin, <em>A Schoolman’s Guide to Marshall McLuhan</em> (often misattributed to McLuhan himself)</p>

+ 243
- 0
cache/2023/dc43f3c837d95313ac7317e10349511e/index.html View File

@@ -0,0 +1,243 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>Ask LukeW: New Ways into Web Content (archive) — David Larlet</title>
<meta name="description" content="Publication mise en cache pour en conserver une trace.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://www.lukew.com/ff/entry.asp?2008">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>Ask LukeW: New Ways into Web Content</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="https://www.lukew.com/ff/entry.asp?2008" title="Lien vers le contenu original">Source originale</a>
</p>
</nav>
<hr>
<p class="feature">Large language (AI) models allow us to rethink how to build software and design user interfaces. To that end, we made use of these new capabilities to create a different way of interacting with this site at: <a href="https://ask.lukew.com">ask.lukew.com</a></p>

<p>Though quiet recently, this site has built up a <a href="https://www.lukew.com/ff/">decent amount</a> of content over the past 27 years. Specifically, there are nearly 2,000 text articles, 375 presentations, 60 videos, and 3 books worth of explorations and explanations about all forms of digital product design from early Web sites to Mobile apps to Augmented Reality experiences.</p>

<p><a href="//static.lukew.com/ask_lukew_data_2x.png"><img src="//static.lukew.com/ask_lukew_data.png" srcset="//static.lukew.com/ask_lukew_data.png, //static.lukew.com/ask_lukew_data_2x.png 2x" alt="2,000 articles, 375 presentations, 57 videos on LukeW's site"></a>
</p>

<p>Anyone interested in these materials has essentially two options: search or browse. Searching (primarily through Google) gets people to a specific article, presentation, or video when they have a sense of what they're looking for. Browsing (on this site or other sites with links to this one) helps people discover things they might not have been explicitly looking for.</p>

<p>But with over half a million written words, three and a half thousand minutes of video, and thousands of images, it's hard to know what's available, to connect related content, and ultimately get the most value out of this site.</p>

<p><a href="//static.lukew.com/ask_lukew_data2_2x.png"><img src="//static.lukew.com/ask_lukew_data2.png" srcset="//static.lukew.com/ask_lukew_data2.png, //static.lukew.com/ask_lukew_data2_2x.png 2x" alt="Search or browse the content on LukeW Ideation and Design"></a>
</p>

<p>Enter <a href="https://www.computer.org/csdl/magazine/co/2022/05/09771130/1DeEYd2FXZm">large-scale AI models for language</a> (LLMs). By making use of these models to perform a variety of language operations, we can re-index the content on this site by concepts using embeddings, and generate new ways to interact with it.</p>
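
<p>To make that embedding step concrete: the general technique is to compute an embedding for each article once, then rank articles against an embedded question by cosine similarity. The sketch below assumes the OpenAI embeddings endpoint and a hypothetical <code>articles</code> array; the actual Ask LukeW pipeline isn't public, so treat this as an illustration of the approach rather than the site's code.</p>

<pre><code>// Sketch: index articles by embedding, then retrieve by cosine similarity.
// Assumes Node 18+ (global fetch) and OPENAI_API_KEY; `articles` is hypothetical.
async function embed(text) {
  const res = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({ model: 'text-embedding-ada-002', input: text })
  })
  const json = await res.json()
  return json.data[0].embedding // array of floats
}

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0
  a.forEach((v, i) => {
    dot += v * b[i]
    na += v * v
    nb += b[i] * b[i]
  })
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

// Build once, offline: one entry per article, embedding alongside metadata.
async function buildIndex(articles) {
  return Promise.all(articles.map(async (a) => ({
    url: a.url,
    title: a.title,
    text: a.text,
    embedding: await embed(a.text)
  })))
}

// At question time: embed the question and take the k nearest articles.
async function topSources(index, question, k = 3) {
  const q = await embed(question)
  return index
    .map((entry) => ({ ...entry, score: cosine(q, entry.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
}</code></pre>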

<p>We make use of large-language models to:</p>
<ul>
<li>summarize articles</li>
<li>extract key concepts from articles</li>
<li>create follow-on questions to ask with specific articles</li>
<li>make exploratory questions to expose people to new content</li>
<li>generate answers in response to what people ask</li>
</ul>
<p>This combination of language operations adds up to a very different new way to experience the content on <a href="https://ask.lukew.com">lukew.com</a></p>

<p><a href="//static.lukew.com/ask_lukew_questions_2x.png"><img src="//static.lukew.com/ask_lukew_questions.png" srcset="//static.lukew.com/ask_lukew_questions.png, //static.lukew.com/ask_lukew_questions_2x.png 2x" alt="Suggested questions in the Ask LukeW interface"></a>
</p>

<p><a href="https://ask.lukew.com">Ask LukeW</a> starts off with a series of suggested questions that change each time someone loads the page. This not only helps with the "what should I ask?" problem of empty text fields but is also a compelling way to explore what the site has to offer. Of course, someone can start with their own specific question. But in testing, many folks gravitate to the suggestions first, which helps expose people to more of the breadth and depth of available content.</p>

<p><a href="//static.lukew.com/ask_lukew_answer2_2x.png"><img src="//static.lukew.com/ask_lukew_answer2.png" srcset="//static.lukew.com/ask_lukew_answer2.png, //static.lukew.com/ask_lukew_answer2_2x.png 2x" alt="Generated answers and visual sources in the Ask LukeW interface"></a>
</p>

<p>After someone selects a question or types their own question, we generate an answer using the corpus of information on lukew.com. These results tend to be more opinionated than what a large language model operating solely on a much bigger set of content (like the Web) provides, even with prompt engineering to direct it toward specific kinds of answers (i.e. UI design-focused).</p>
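
<p>The usual pattern behind this kind of corpus-grounded answering is to paste the top-ranked passages into the prompt and instruct the model to answer only from them. A rough sketch, under the same assumptions as the retrieval sketch above (an OpenAI chat endpoint, sources shaped like the <code>topSources</code> results); the model choice and prompt wording here are illustrative, not what the site actually sends:</p>

<pre><code>// Sketch: answer a question from retrieved passages only, citing sources.
// `sources` comes from a retrieval step like topSources() above; the model
// choice and prompt wording are illustrative assumptions.
async function answerFromCorpus(question, sources) {
  const context = sources
    .map((s) => `Source: ${s.title} (${s.url})\n${s.text}`)
    .join('\n\n')
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [
        {
          role: 'system',
          content: 'Answer using only the sources provided, and list the URLs you relied on.'
        },
        { role: 'user', content: `${context}\n\nQuestion: ${question}` }
      ]
    })
  })
  const json = await res.json()
  return json.choices[0].message.content
}</code></pre>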

<p><a href="//static.lukew.com/ask_lukew_corpus_2x.png"><img src="//static.lukew.com/ask_lukew_corpus.png" srcset="//static.lukew.com/ask_lukew_corpus.png, //static.lukew.com/ask_lukew_corpus_2x.png 2x" alt="Comparing answers from a fixed corpus to ChatGPT in the Ask LukeW interface"></a>
</p>

<p>The content we use to answer someone's question can come from one or more articles, so we provide visual sources to make this clear. In the current build, we're citing Web pages but PDFs and videos are next. It's also worth noting that we follow up each answer with additional suggested questions to once again give people a better sense of what they can ask next. No dead ends.</p>

<p><a href="//static.lukew.com/ask_lukew_objects_2x.png"><img src="//static.lukew.com/ask_lukew_objects.png" srcset="//static.lukew.com/ask_lukew_objects.png, //static.lukew.com/ask_lukew_objects_2x.png 2x" alt="Sources object cards from the Ask LukeW interface"></a>
</p>

<p>If someone wants to go deeper into any of the sourced materials, they can select the card and get an article-specific experience. Here we make use of LLM language operations to create a summary, extract related topics and provide suggested questions that the article can answer. People can ask questions of just this document (as indicated by the green article "chip" in the question bar) or go back to site-wide questions by tapping the close (x) icon.</p>

<p><a href="//static.lukew.com/ask_lukew_article_2x.png"><img src="//static.lukew.com/ask_lukew_article.png" srcset="//static.lukew.com/ask_lukew_article.png, //static.lukew.com/ask_lukew_article_2x.png 2x" alt="Article specific features in the Ask LukeW interface"></a>
</p>

<p>As the number of answers builds up, we collapse each one automatically, so people can focus on the current question they've asked. This also makes it easier to scroll through a long conversation and pick out answers from short summaries consisting of the question and the first two lines of its answer.</p>

<p>People can also pin individual question and answer pairs to save them for later and come back to previous conversations in addition to making new ones using the menu bar on the left.</p>

<p><a href="//static.lukew.com/ask_lukew_conversation_2x.png"><img src="//static.lukew.com/ask_lukew_conversation.png" srcset="//static.lukew.com/ask_lukew_conversation.png, //static.lukew.com/ask_lukew_conversation_2x.png 2x" alt="Conversation listing in the Ask LukeW interface"></a>
</p>

<p>While there are a number of features in the <a href="https://ask.lukew.com">Ask LukeW</a> interface, it's mostly a beta. We don't save state from question to question, so the kind of ongoing dialog people may expect from <a href="https://openai.com/blog/chatgpt">ChatGPT</a> isn't there yet; pinned answers and saved conversations are only stored locally (cookie-based); and, as mentioned before, PDFs and videos aren't yet part of the index.</p>

<p>Despite that, it's been interesting to explore how an existing body of content can gain new life using large-language model technology. I've been regularly surprised and interested by questions like:</p>

<ul>
<li>How can progressive enhancement be used in software development?</li>
<li>What are the central mental traits that people unconsciously display through the products they buy?</li>
<li>What are the design considerations for touch-based apps for kids?</li>
<li>What is small multiples and how can it help people make sense of large amounts of information quickly and easily?</li>
<li>What is the debate around the utility of usability testing in design?</li>
</ul>

<p>And I wrote all this content! Since that happened across a quarter century, maybe it's not that surprising that I don't remember it all. Anyhow... hope you also enjoy trying out <a href="https://ask.lukew.com">ask.lukew.com</a> and feel free to <a href="https://www.lukew.com/about/">send</a> any ideas or comments over.</p>

<h2>Acknowledgments</h2>
<p>Big thanks to <a href="https://liyangguang.com/">Yangguang Li</a> (front end), <a href="https://twitter.com/swissgrid">Thanh Tran</a> (design), and <a href="https://twitter.com/sampullara">Sam Pullara</a> (back end) for helping pull together this <a href="https://ask.lukew.com">exploration</a>.</p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Thème</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
if (cssRule.cssText.startsWith(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a rule into a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

+ 76
- 0
cache/2023/dc43f3c837d95313ac7317e10349511e/index.md View File

@@ -0,0 +1,76 @@
title: Ask LukeW: New Ways into Web Content
url: https://www.lukew.com/ff/entry.asp?2008
hash_url: dc43f3c837d95313ac7317e10349511e
<p class="feature">Large language (AI) models allow us to rethink how to build software and design user interfaces. To that end, we made use of these new capabilities to create a different way of interacting with this site at: <a href="https://ask.lukew.com">ask.lukew.com</a></p>
<p>Though quiet recently, this site has built up a <a href="https://www.lukew.com/ff/">decent amount</a> of content over the past 27 years. Specifically, there are nearly 2,000 text articles, 375 presentations, 60 videos, and 3 books worth of explorations and explanations about all forms of digital product design from early Web sites to Mobile apps to Augmented Reality experiences.</p>
<p><a href="//static.lukew.com/ask_lukew_data_2x.png"><img src="//static.lukew.com/ask_lukew_data.png" srcset="//static.lukew.com/ask_lukew_data.png, //static.lukew.com/ask_lukew_data_2x.png 2x" alt="2,000 articles, 375 presentations, 57 videos on LukeW's site"></a>
</p>
<p>Anyone interested in these materials has essentially two options: search or browse. Searching (primarily through Google) gets people to a specific article, presentation, or video when they have a sense of what they're looking for. Browsing (on this site or other sites with links to this one) helps people discover things they might not have been explicitly looking for.</p>
<p>But with over half a million written words, three and a half thousand minutes of video, and thousands of images, it's hard to know what's available, to connect related content, and ultimately get the most value out of this site.</p>
<p><a href="//static.lukew.com/ask_lukew_data2_2x.png"><img src="//static.lukew.com/ask_lukew_data2.png" srcset="//static.lukew.com/ask_lukew_data2.png, //static.lukew.com/ask_lukew_data2_2x.png 2x" alt="Search or browse the content on LukeW Ideation and Design"></a>
</p>
<p>Enter <a href="https://www.computer.org/csdl/magazine/co/2022/05/09771130/1DeEYd2FXZm">large-scale AI models for language</a> (LLMs). By making use of these models to perform a variety of language operations, we can re-index the content on this site by concepts using embeddings, and generate new ways to interact with it.</p>
<p>We make use of large-language models to:</p>
<ul>
<li>summarize articles</li>
<li>extract key concepts from articles</li>
<li>create follow-on questions to ask with specific articles</li>
<li>make exploratory questions to expose people to new content</li>
<li>generate answers in response to what people ask</li>
</ul>
<p>This combination of language operations adds up to a very different new way to experience the content on <a href="https://ask.lukew.com">lukew.com</a></p>
<p><a href="//static.lukew.com/ask_lukew_questions_2x.png"><img src="//static.lukew.com/ask_lukew_questions.png" srcset="//static.lukew.com/ask_lukew_questions.png, //static.lukew.com/ask_lukew_questions_2x.png 2x" alt="Suggested questions in the Ask LukeW interface"></a>
</p>
<p><a href="https://ask.lukew.com">Ask LukeW</a> starts off with a series of suggested questions that change each time someone loads the page. This not only helps with the "what should I ask?" problem of empty text fields but is also a compelling way to explore what the site has to offer. Of course, someone can start with their own specific question. But in testing, many folks gravitate to the suggestions first, which helps expose people to more of the breadth and depth of available content.</p>
<p><a href="//static.lukew.com/ask_lukew_answer2_2x.png"><img src="//static.lukew.com/ask_lukew_answer2.png" srcset="//static.lukew.com/ask_lukew_answer2.png, //static.lukew.com/ask_lukew_answer2_2x.png 2x" alt="Generated answers and visual sources in the Ask LukeW interface"></a>
</p>
<p>After someone selects a question or types their own question, we generate an answer using the corpus of information on lukew.com. These results tend to be more opinionated than what a large language model operating solely on a much bigger set of content (like the Web) provides, even with prompt engineering to direct it toward specific kinds of answers (i.e. UI design-focused).</p>
<p><a href="//static.lukew.com/ask_lukew_corpus_2x.png"><img src="//static.lukew.com/ask_lukew_corpus.png" srcset="//static.lukew.com/ask_lukew_corpus.png, //static.lukew.com/ask_lukew_corpus_2x.png 2x" alt="Comparing answers from a fixed corpus to ChatGPT in the Ask LukeW interface"></a>
</p>
<p>The content we use to answer someone's question can come from one or more articles, so we provide visual sources to make this clear. In the current build, we're citing Web pages but PDFs and videos are next. It's also worth noting that we follow up each answer with additional suggested questions to once again give people a better sense of what they can ask next. No dead ends.</p>
<p><a href="//static.lukew.com/ask_lukew_objects_2x.png"><img src="//static.lukew.com/ask_lukew_objects.png" srcset="//static.lukew.com/ask_lukew_objects.png, //static.lukew.com/ask_lukew_objects_2x.png 2x" alt="Sources object cards from the Ask LukeW interface"></a>
</p>
<p>If someone wants to go deeper into any of the sourced materials, they can select the card and get an article-specific experience. Here we make use of LLM language operations to create a summary, extract related topics and provide suggested questions that the article can answer. People can ask questions of just this document (as indicated by the green article "chip" in the question bar) or go back to site-wide questions by tapping the close (x) icon.</p>
<p><a href="//static.lukew.com/ask_lukew_article_2x.png"><img src="//static.lukew.com/ask_lukew_article.png" srcset="//static.lukew.com/ask_lukew_article.png, //static.lukew.com/ask_lukew_article_2x.png 2x" alt="Article specific features in the Ask LukeW interface"></a>
</p>
<p>As the number of answers builds up, we collapse each one automatically, so people can focus on the current question they've asked. This also makes it easier to scroll through a long conversation and pick out answers from short summaries consisting of the question and the first two lines of its answer.</p>
<p>People can also pin individual question and answer pairs to save them for later and come back to previous conversations in addition to making new ones using the menu bar on the left.</p>
<p><a href="//static.lukew.com/ask_lukew_conversation_2x.png"><img src="//static.lukew.com/ask_lukew_conversation.png" srcset="//static.lukew.com/ask_lukew_conversation.png, //static.lukew.com/ask_lukew_conversation_2x.png 2x" alt="Conversation listing in the Ask LukeW interface"></a>
</p>
<p>While there are a number of features in the <a href="https://ask.lukew.com">Ask LukeW</a> interface, it's mostly a beta. We don't save state from question to question, so the kind of ongoing dialog people may expect from <a href="https://openai.com/blog/chatgpt">ChatGPT</a> isn't there yet; pinned answers and saved conversations are only stored locally (cookie-based); and, as mentioned before, PDFs and videos aren't yet part of the index.</p>
<p>Despite that, it's been interesting to explore how an existing body of content can gain new life using large-language model technology. I've been regularly surprised and interested by questions like:</p>
<ul>
<li>How can progressive enhancement be used in software development?</li>
<li>What are the central mental traits that people unconsciously display through the products they buy?</li>
<li>What are the design considerations for touch-based apps for kids?</li>
<li>What is small multiples and how can it help people make sense of large amounts of information quickly and easily?</li>
<li>What is the debate around the utility of usability testing in design?</li>
</ul>
<p>And I wrote all this content! Since that happened across a quarter century, maybe it's not that surprising that I don't remember it all. Anyhow... hope you also enjoy trying out <a href="https://ask.lukew.com">ask.lukew.com</a> and feel free to <a href="https://www.lukew.com/about/">send</a> any ideas or comments over.</p>
<h2>Acknowledgments</h2>
<p>Big thanks to <a href="https://liyangguang.com/">Yangguang Li</a> (front end), <a href="https://twitter.com/swissgrid">Thanh Tran</a> (design), and <a href="https://twitter.com/sampullara">Sam Pullara</a> (back end) for helping pull together this <a href="https://ask.lukew.com">exploration</a>.</p>

+ 530
- 0
cache/2023/f23d043d8e99f2af5fcf1b970f98744a/index.html View File

@@ -0,0 +1,530 @@
<!doctype html><!-- This is a valid HTML5 document. -->
<!-- Screen readers, SEO, extensions and so on. -->
<html lang="fr">
<!-- Has to be within the first 1024 bytes, hence before the `title` element
See: https://www.w3.org/TR/2012/CR-html5-20121217/document-metadata.html#charset -->
<meta charset="utf-8">
<!-- Why no `X-UA-Compatible` meta: https://stackoverflow.com/a/6771584 -->
<!-- The viewport meta is quite crowded and we are responsible for that.
See: https://codepen.io/tigt/post/meta-viewport-for-2015 -->
<meta name="viewport" content="width=device-width,initial-scale=1">
<!-- Required to make a valid HTML5 document. -->
<title>Artificial General Intelligence and the bird brains of Silicon Valley (archive) — David Larlet</title>
<meta name="description" content="Publication mise en cache pour en conserver une trace.">
<!-- That good ol' feed, subscribe :). -->
<link rel="alternate" type="application/atom+xml" title="Feed" href="/david/log/">
<!-- Generated from https://realfavicongenerator.net/ such a mess. -->
<link rel="apple-touch-icon" sizes="180x180" href="/static/david/icons2/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/david/icons2/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/david/icons2/favicon-16x16.png">
<link rel="manifest" href="/static/david/icons2/site.webmanifest">
<link rel="mask-icon" href="/static/david/icons2/safari-pinned-tab.svg" color="#07486c">
<link rel="shortcut icon" href="/static/david/icons2/favicon.ico">
<meta name="msapplication-TileColor" content="#f7f7f7">
<meta name="msapplication-config" content="/static/david/icons2/browserconfig.xml">
<meta name="theme-color" content="#f7f7f7" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#272727" media="(prefers-color-scheme: dark)">
<!-- Documented, feel free to shoot an email. -->
<link rel="stylesheet" href="/static/david/css/style_2021-01-20.css">
<!-- See https://www.zachleat.com/web/comprehensive-webfonts/ for the trade-off. -->
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t4_poly_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_regular.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_bold.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<link rel="preload" href="/static/david/css/fonts/triplicate_t3_italic.woff2" as="font" type="font/woff2" media="(prefers-color-scheme: dark)" crossorigin>
<script>
function toggleTheme(themeName) {
document.documentElement.classList.toggle(
'forced-dark',
themeName === 'dark'
)
document.documentElement.classList.toggle(
'forced-light',
themeName === 'light'
)
}
const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
toggleTheme(selectedTheme)
}
</script>

<meta name="robots" content="noindex, nofollow">
<meta content="origin-when-cross-origin" name="referrer">
<!-- Canonical URL for SEO purposes -->
<link rel="canonical" href="https://softwarecrisis.dev/letters/ai-bird-brains-silicon-valley/">

<body class="remarkdown h1-underline h2-underline h3-underline em-underscore hr-center ul-star pre-tick" data-instant-intensity="viewport-all">


<article>
<header>
<h1>Artificial General Intelligence and the bird brains of Silicon Valley</h1>
</header>
<nav>
<p class="center">
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="https://softwarecrisis.dev/letters/ai-bird-brains-silicon-valley/" title="Lien vers le contenu original">Source originale</a>
</p>
</nav>
<hr>
<figure>
<blockquote>
<p>
The problem is, if one side of the communication does not have meaning,
then the comprehension of the implicit meaning is an illusion arising
from our singular human understanding of language (independent of the
model). Contrary to how it may seem when we observe its output, an LM is
a system for haphazardly stitching together sequences of linguistic
forms it has observed in its vast training data, according to
probabilistic information about how they combine, but without any
reference to meaning: a stochastic parrot.
</p>
</blockquote>
<figcaption>
<p>Emily M. Bender, Timnit Gebru, et al., <em>On the Dangers of Stochastic
Parrots: Can Language Models Be Too Big?</em>.</p>
</figcaption>
</figure>
<p>Bird brains have a bad reputation. The diminutive size of your average
bird and its brain has led people to assume that they are, well,
dumb.</p>
<p>But bird brains are amazing. Birds commonly outperform mammals with
larger brains at a variety of general reasoning and problem-solving
tasks. Some by a large margin. Their small brains manage this by
packing numerous neurons in a small space using structures that are
unlike those you find in mammals.</p>
<p>Even though birds have extremely capable minds, those minds are built in
ways that are different from our own or other mammals. Similar
capabilities; different structure.</p>
<p>The ambition of the Silicon Valley AI industry is to create something
analogous to a bird brain: a new kind of mind that is functionally
similar to the human mind, possibly outperforming it, while being built
using very different mechanisms. Similar capabilities; different
structure.</p>
<p>This effort goes back decades, to the dawn of computing, and has had
limited success.</p>
<p>Until recently, it seems.</p>
<p>If you’re reading this, you’ve almost certainly interacted with a
Generative AI, however indirectly. Maybe you’ve tried Bing Chat. Maybe
you’ve subscribed to the paid tier for ChatGPT. Or, maybe you’ve used
Midjourney to generate images. At the very least you’ve been forced to
see the images or text posted by the overenthusiastic on social media.</p>
<p>These AI models are created by pushing an enormous amount of training
data through various algorithms:</p>
<ul>
<li>Language models like ChatGPT are trained on a good chunk of the textual material available in digital form in the world.</li>
<li>Image models like Midjourney and Stable Diffusion are trained on a huge collection of images found on the internet.</li>
</ul>
<p>What comes out the other end is a mathematical model of the media domain
in question: text or images.</p>
<p>You know what Generative AI is in terms of how it presents to you as
software: clever chatbots that do or say things in response to what you
say, <em>your prompt</em>. Some of those responses are useful, and they give
you an impression of sophisticated comprehension. The models that
generate text are fluent and often quite engaging.</p>
<p>This fluency is misleading. What Bender and Gebru meant when they coined
the term <em>stochastic parrot</em> wasn’t to imply that these are, indeed, the
new bird brains of Silicon Valley, but that they are unthinking text
synthesis engines that just repeat phrases. They are the proverbial
parrot who echoes without thinking, not the actual parrot who is capable
of complex reasoning and problem-solving.</p>
<p>A <em>zombie parrot</em>, if you will, that screams for <em>brains</em> because it has
none.</p>
<p>The fluency of the zombie parrot—the unerring confidence and a style of
writing that some find endearing—creates a strong illusion of
intelligence.</p>
<p>Every other time we read text, we are engaging with the product of
another mind. We are so used to the idea of text as a representation of
another person’s thoughts that we have come to mistake their writing
<em>for</em> their thoughts. But they aren’t. Text and media are tools that
authors and artists create to let people change their own state of
mind—hopefully in specific ways to form the image or effect the author
was after.</p>
<p>Reading is an indirect collaboration with the author, mediated through
the writing. Text has no inherent reasoning or intelligence. Agatha
Christie’s ghost does not inhabit the words of <em>Murder on the Orient Express</em>.
Stephen King isn’t hovering over you when you read <em>Carrie</em>. The ghost
you feel while reading is an illusion you’ve made out of your own
experience, knowledge, and imagination. Every word you read causes your
mind to reconstruct its meaning using your memories and creativity. The
idea that there is intelligence somehow inherent in writing is an
illusion. The intelligence is <em>all</em> yours, all the time: thoughts you
make yourself in order to make sense of another person’s words. This can
prompt us to greatness, broaden our minds, inspire new thoughts, and
introduce us to new concepts. A book can contain worlds, but we’re the
ones that bring them into being as we read. What we see is uniquely our
own. The thoughts are not transported from the author’s mind and
injected into ours.</p>
<p>The words themselves are just line forms on a background with no
inherent meaning or intelligence. The word “horse” doesn’t come with the
Platonic ideal of a horse attached to it. The word “anger” isn’t full of
seething emotion or the restrained urge towards violence. Even words
that are arguably onomatopoeic, like the word “brabra” we use in
Icelandic for the sound a duck makes, are still incredibly specific to
the cultures and context they come from. We are the ones doing the heavy
lifting in terms of reconstructing a picture of an intelligence behind
the text. When there is no actual intelligence, such as with ChatGPT, we
are the ones who end up filling in the gaps with our memories,
experience and imagination.</p>
<p>When ChatGPT demonstrates intelligence, that comes from us. Some of
it we construct ourselves. Some of it comes from our inherent
biases.</p>
<p>There is no ‘there’ there. We are alone in the room, reconstructing an
abstract representation of a mind. The reasoning you see is only in your
head. You are hallucinating intelligence where there is none. You are
doing the textual equivalent of seeing a face in a power outlet.</p>
<p>This drive—<em>anthropomorphism</em>—seems to be innate. Our first instinct
when faced with anything unfamiliar—whose drives, motivations, and
mechanisms we don’t understand—is to assume that it thinks much like a
human would. When that unfamiliar agent uses language like a human
would, the urge to see it as near or fully human is impossible to
resist—a recurring issue in the history of AI research that dates all
the way back to 1966.</p>
<p>These tools solve problems and return fluent, if untruthful, answers,
which is what creates such a convincing illusion of intelligence.</p>
<p>Text synthesis engines like ChatGPT and GPT-4 do not have any
self-awareness. They are mathematical models of the various patterns to
be found in the collected body of human text. How granular the model is
depends on its design and the languages in question. Some of the
tokens—the smallest unit of language the model works with—will be
characters or punctuation marks, some of them will be words, syllables,
or even phrases. Many language models use a mixture of these.</p>
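<p>As a rough illustration of that difference in granularity (a toy sketch
only; production tokenisers are trained statistical models, not
hand-written splitters):</p>
<pre><code>// Two toy tokenisers over the same string.
const text = "The parrot repeats.";

// Character-level: every character, spaces included, is a token.
const charTokens = Array.from(text);
// ["T", "h", "e", " ", "p", …]

// Word-level: split on whitespace, punctuation stays attached.
const wordTokens = text.split(/\s+/);
// ["The", "parrot", "repeats."]
</code></pre>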
<p>With enough detail—a big enough collection of text—these tools will
model enough of the probabilistic distribution of various words or
characters to be able to perform what looks like magic:</p>
<ul>
<li>They generate fluent answers by calculating the most probable sequence
of words, at that time, which would serve as the continuation of or
response to the prompt.</li>
<li>They can perform limited reasoning tasks that correlate with textual
descriptions of prior reasoning tasks in the training data.</li>
</ul>
<p>With enough of these correlative shortcuts, the model can perform
something that looks like common sense reasoning: its output is text
that replicates prior representations of reasoning. This works for
as long as you don’t accidentally use the wrong phrasing in your prompt
and break the correlation.</p>
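<p>A caricature of that mechanism, shrunk down to a toy bigram model built
from a single sentence (a sketch for illustration only; real language
models operate on learned statistics over billions of tokens rather than
a literal lookup table, but the purely correlative nature is the same):</p>
<pre><code>// Count, for each word, which words followed it in the "training data".
const corpus = "the parrot repeats the phrase the parrot heard".split(" ");
const table = {};
corpus.forEach((word, i) => {
  const next = corpus[i + 1];
  if (next === undefined) return; // the last word has no follower
  table[word] = table[word] || {};
  table[word][next] = (table[word][next] || 0) + 1;
});

// "Generate" by repeatedly emitting the most frequent continuation.
function continueFrom(word, steps) {
  const out = [word];
  Array.from({ length: steps }).forEach(() => {
    const followers = table[out[out.length - 1]];
    if (!followers) return; // unseen word: the model has nothing to say
    const ranked = Object.keys(followers).sort((a, b) => followers[b] - followers[a]);
    out.push(ranked[0]);
  });
  return out.join(" ");
}

console.log(continueFrom("the", 4)); // "the parrot repeats the parrot"
</code></pre>
<p>Start it from a word it has never seen and it produces nothing at all;
reword the prompt and the output changes completely. The fragility is
built into the mechanism.</p>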
<p>The mechanism behind these systems is entirely correlative from the
ground up. What looks like reasoning is incredibly fragile and
breaks as soon as you rephrase or reword your prompt. It exists
only as a probabilistic model of text. A Generative AI chatbot is a
language engine incapable of genuine thought.</p>
<p>These language models are interactive but static snapshots of the
probability distributions of a written language.</p>
<p>It’s obviously interactive; that’s the whole point of a chatbot. It’s
static in that it does not change when it’s used or activated. In fact,
changing it requires an enormous amount of computing power over a long
period of time. What the system models are the distributions and
correlations of the tokens it records for the texts in its training data
set—how the various words, syllables, and punctuation relate to each
other over as much of the written history of a language as the company
can find.</p>
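<p>In terms of the toy bigram sketch above: answering a prompt only ever
<em>reads</em> from the table; nothing you type writes back to it.
Updating the model means rebuilding the table over the whole corpus,
which at real-world scale is the enormously expensive retraining just
described.</p>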
<p>That’s what distinguishes biological minds from these algorithmic
hindsight factories: a biological mind does not reason using the
probability distributions of all the prior cultural records of its
ancestors. Biological minds learn primarily through trial and error.
Try, fail, try again. They build their neural network, which is
functionally very different from what you see in a software model,
through constant feedback, experimentation, and repeated failure—driven
by a chemical network that often manifests as instinct, emotion,
motivation, and drive. The neural network—bounded, defined, and driven
by the chemical network—is constantly changing and responding to outside
stimuli. Every time an animal’s nervous system is “used”, it changes. It
is always changing, until it dies.</p>
<p>Biological minds <em>experience</em>. Synthesis engines parse imperfect
<em>records</em> of experiences. The former are forward-looking and operate
primarily in the present, sometimes to their own detriment. The latter
exist exclusively as probabilistic manifestations of imperfect
representations of thoughts past. They are snapshots. Generative AI are
themselves cultural records.</p>
<p>These models aren’t new bird brains—new alien minds that are peers to
our own. They aren’t even insect brains. Insects have autonomy. They are
capable of general problem-solving—some of them dealing with tasks of
surprising complexity—and their abilities tolerate the kind of
minor alterations in the problem environment that would break the
correlative pseudo-reasoning of a language model. Large Language
Models are something lesser. They are water running down pathways etched
into the ground over centuries by the rivers of human culture. Their
originality comes entirely from random combinations of historical
thought. They do not know the ‘meaning’ of anything—they only know the
records humans find meaningful enough to store. Their unreliability
comes from their unpredictable behaviour in novel circumstances. When
there is no riverbed to follow, they drown the surrounding landscape.</p>
<p>The entirety of their documented features, capabilities, and recorded
behaviour—emergent or not—is explained by this conceptual model of
generative AI. There are no unexplained corner cases that don’t fit or
actively disprove this theory.</p>
<p>Yet people keep assuming that what ChatGPT does can only be explained as
the first glimmer of genuine Artificial General Intelligence. The bird
brain of Silicon Valley is born at last!</p>
<p>Because text and language are the primary ways we experience other
people’s reasoning, it’ll be next to impossible to dislodge the notion
that these are genuine intelligences. No amount of examples, scientific
research, or analysis will convince those who want to maintain a
pseudo-religious belief in alien peer intelligences. After all, if you
want to believe in aliens, an artificial one made out of supercomputers
and wishful thinking <em>feels</em> much more plausible than little grey men
from outer space. But that’s what it is: <em>a belief in aliens.</em></p>
<p>It doesn’t help that so many working in AI seem to <em>want</em> this to be
true. They seem to be true believers who are convinced that the spark of
Artificial General Intelligence has been struck.</p>
<p>They are inspired by the science fictional notion that if you make
something complex enough, it will spontaneously become intelligent. This
isn’t an uncommon belief. You see it in movies and novels—the notion
that any network of sufficient complexity will spontaneously become
sentient has embedded itself in our popular psyche. James Cameron’s
skull-crushing metal skeletons have a lot to answer for.</p>
<p>That notion doesn’t seem to have any basis in science. The idea that
general intelligence is an emergent property of neural networks that
appears once the network reaches sufficient complexity, is a view based
on archaic notions of animal intelligence—that animals are soulless
automata incapable of feeling or reasoning. That view was
formed during a period when we didn’t realise just how common
self-awareness (as measured by, for example, the mirror test) and general reasoning are in the
animal kingdom. Animals are smarter than we assumed and the
difference between our reasoning and theirs seems to be a matter of
degree, not of presence or absence.</p>
<p>General reasoning seems to be an <em>inherent</em>, not emergent, property of
pretty much any biological lifeform with a notable nervous system.</p>
<p>The bumblebee, despite having only a tiny fraction of the neurons of a
human brain, is capable of not only solving puzzles but also of
<em>teaching other bees to solve those puzzles.</em> They reason and have a
culture. They have more genuine and robust general reasoning
skills—that don’t collapse into incoherence at minor adjustments to the
problem space—than GPT-4 or any large language model on the market.
That’s with only around half a million neurons to work with.</p>
<p>Conversely, GPT-3 is made up of 175 <em>billion</em> parameters—what passes for
a “neuron” in a digital neural network. GPT-4 is even larger, with
some estimates coming in at a <em>trillion</em> parameters. Then you have
fine-tuned systems such as ChatGPT, which are built from multiple
interacting models layered on top of GPT-3.5 or GPT-4, which make for an
even more complex interactive system.</p>
<p>ChatGPT, running on GPT-4, is easily a <em>million</em> times more complex than
the “neural network” of a bumblebee and yet, out of the two, it’s the
striped invertebrate that demonstrates robust and adaptive
general-purpose reasoning skills. Very simple minds, those belonging to
small organisms that barely have a brain, are capable of reasoning about
themselves, the world around them, and the behaviour of other
animals.</p>
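<p>The arithmetic behind that comparison, using the estimates quoted above
(the GPT-4 figure is unconfirmed):</p>
<pre><code>// ~1 trillion GPT-4 parameters vs ~500,000 bumblebee neurons:
console.log(1e12 / 5e5); // 2000000, roughly two million to one

// Even with GPT-3's published 175 billion parameters:
console.log(175e9 / 5e5); // 350000, still hundreds of thousands to one
</code></pre>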
<p>Unlike the evidence for ‘sparks’ of AGI in language models, the evidence
for animal reasoning—even consciousness—is broad, compelling, and
encompasses decades of work by numerous scientists.</p>
<p>AI models are flawed attempts at digitally synthesising neurologies.
They are built on the assumption that all the rest—metabolisms,
hormones, chemicals, and senses—aren’t necessary for developing
intelligence.</p>
<p>Reasoning in biological minds does not seem to be a property that
emerges from complexity. The capacity to reason looks more likely to be
a <em>built-in</em> property of most animal minds. A reasoning mind
appears to be a direct consequence of how animals are structured as a
whole—chemicals, hormones, and physical body included. The animal
capacity for problem-solving, social reasoning, and self-awareness seems
to increase, unevenly and fitfully, with the number of neurons until it
reaches the level we see in humans. Reasoning does not ‘emerge’ or
appear. Some creatures are better at it than others, but it’s there in
some form even in very small, very simple beings like the bumblebee. It
doesn’t happen magically when you hook up a bunch of disparate objects
together in a complex enough network. A reasoning mind is the <em>starting
point</em> of biological thinking, not the endpoint that only “emerges” with
sufficient complexity.</p>
<p>The internet—a random interconnected collection of marketing offal,
holiday snaps, insufferable meetings, and porn—isn’t going to become
self-aware and suddenly acquire the capacity for general reasoning once
it reaches a certain size, and neither will Large Language Models. The
notion that we are making autonomous beings capable of Artificial
General Intelligence just by loading a neural network up with an
ever-bigger collection of garbage from the internet is not one
that has any basis in anything we understand about biology or animal
reasoning.</p>
<p>But, AI companies insist that they are on the verge of AGI. Their
rhetoric around it verges on the religious as the idea of an AGI is
idealised and almost worshipped. They claim to be close to making a
new form of thinking life, but they refuse to release the data required
to prove it. They’ve built software that performs well on the
arbitrary benchmarks they’ve chosen and claim are evidence of general
intelligence, but those tests prove no such thing and have no such
validity. The benchmarks are theatrics with no bearing on genuine
general intelligence.</p>
<p>AI researchers love to resurrect outdated pseudoscience such as
phrenology—shipping AI software that promises to be able to tell you if
somebody is likely to be a criminal based on the shape of their
skull. It’s a field where researchers and vendors routinely claim
that their AIs can detect whether you’re a potential criminal, gay, a
good employee, liberal or conservative, or even a psychopath, based on
“your face, body, gait, and tone of voice.”</p>
<p><em>It’s pseudoscience</em>.</p>
<p>This is the field and the industry that claims to have accomplished the
first ‘spark’ of Artificial General Intelligence?</p>
<p>Last time we saw a claim this grand, with this little scientific
evidence, the men in the white coats were promising us room-temperature
fusion, giving us free energy for life, and ending the world’s
dependence on fossil fuels.</p>
<p>Why give the tech industry the benefit of the doubt when they are all
but claiming godhood—that they’ve created a new form of life never seen
before?</p>
<p>As <a href="https://en.wikipedia.org/wiki/Sagan_standard">Carl Sagan said</a>:
<em>“extraordinary claims require extraordinary evidence.”</em></p>
<p>He didn’t say “extraordinary claims require only vague insinuations and
pinky-swear promises.”</p>
<p>To claim you’ve created a completely new kind of mind that’s on par with
any animal mind—or even superior—and provides general intelligence
using mechanisms that don’t resemble anything anybody has ever seen in
nature, is by definition the most extraordinary of claims.</p>
<p>The AI industry is backing their claims of Artificial General
Intelligence with hot air, hand-waving, and cryptic references to data
and software nobody outside their organisations is allowed to review or
analyse.</p>
<p>They are pouring an ever-increasing amount of energy and work into
ever-larger models all in the hope of triggering the
‘<a href="https://en.wikipedia.org/wiki/Technological_singularity">singularity</a>’
and creating a digital superbeing. Like a cult of monks boiling the
oceans in order to hear whispers of the name of God.</p>
<p>It’s a farce. All theatre; no evidence. Whether they realise it or not,
they are taking us for a ride. The sooner we see that they aren’t
backing their claims with science, the sooner we can focus on finding
safe and productive uses—limiting its harm, at least—for the technology
as it exists today.</p>
<p>After everything the tech industry has done over the past decade, the
financial bubbles, the gig economy, legless virtual reality avatars,
crypto, the endless software failures—just think about it—do you think
we should believe them when they make grand, unsubstantiated claims
about miraculous discoveries? Have they earned our trust? Have they
shown that their word is worth more than that of independent scientists?</p>
<p>Do you think that they, with this little evidence, have really done
what they claim and discovered a literal new form of life, but are
conveniently unable to prove it because of ‘safety’?</p>
<p>Me neither.</p>
<p>The notion that large language models are on the path towards Artificial
General Intelligence is a dangerous one. It’s a myth that directly
undermines any effort to think clearly or strategise about generative AI
because it strongly reinforces <em>anthropomorphism</em>.</p>
<p>That’s when you reason about an object or animal <em>as if it were a
person</em>. It prevents you from forming an accurate mental model of the non-human thing’s behaviour. AI is especially prone to creating this reaction. Software such as chatbots trigger all three major factors that promote
anthropomorphism in people:</p>
<ol>
<li><em>Understanding.</em> If we lack an understanding of how an object works,
our minds will resort to thinking of it in terms of something that’s
familiar to us: people. We understand the world as people because
that’s what we are. This becomes stronger the more similar we
perceive the object to be to ourselves.</li>
<li><em>Motivation.</em> We are motivated to both seek out human interaction
and to interact effectively with our environment. This reinforces
the first factor. The more uncertain we are of how that thing works,
the stronger the anthropomorphism. The less control we have over it,
the stronger the anthropomorphism.</li>
<li><em>Sociality</em>. We have a need for human contact and our tendency
towards anthropomorphising objects in our environment increases with
our isolation.</li>
</ol>
<p>Because we lack cohesive cognitive models for what makes these language
models so fluent, because we feel a strong motivation to understand and
use them as they are integrated into our work, and because our
socialisation in the office increasingly takes on the very same text
conversation form as a chatbot does, we inevitably feel a strong drive
to see these software systems as people. The myth of AGI reinforces
this—supercharges the anthropomorphism—because it implies that “people”
is indeed an appropriate cognitive model for how these systems behave.</p>
<p>It isn’t. <strong><em>AI are not people.</em></strong> Treating them as such is a major
strategic error as it will prevent you from thinking clearly about their
capabilities and limitations.</p>
<p>Believing the myth of Artificial General Intelligence makes you incapable of understanding what language models today are and how they work.</p>
</article>


<hr>

<footer>
<p>
<a href="/david/" title="Aller à l’accueil"><svg class="icon icon-home">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-home"></use>
</svg> Accueil</a> •
<a href="/david/log/" title="Accès au flux RSS"><svg class="icon icon-rss2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-rss2"></use>
</svg> Suivre</a> •
<a href="http://larlet.com" title="Go to my English profile" data-instant><svg class="icon icon-user-tie">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-user-tie"></use>
</svg> Pro</a> •
<a href="mailto:david%40larlet.fr" title="Envoyer un courriel"><svg class="icon icon-mail">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-mail"></use>
</svg> Email</a> •
<abbr class="nowrap" title="Hébergeur : Alwaysdata, 62 rue Tiquetonne 75002 Paris, +33184162340"><svg class="icon icon-hammer2">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-hammer2"></use>
</svg> Légal</abbr>
</p>
<template id="theme-selector">
<form>
<fieldset>
<legend><svg class="icon icon-brightness-contrast">
<use xlink:href="/static/david/icons2/symbol-defs-2021-12.svg#icon-brightness-contrast"></use>
</svg> Thème</legend>
<label>
<input type="radio" value="auto" name="chosen-color-scheme" checked> Auto
</label>
<label>
<input type="radio" value="dark" name="chosen-color-scheme"> Foncé
</label>
<label>
<input type="radio" value="light" name="chosen-color-scheme"> Clair
</label>
</fieldset>
</form>
</template>
</footer>
<script src="/static/david/js/instantpage-5.1.0.min.js" type="module"></script>
<script>
function loadThemeForm(templateName) {
const themeSelectorTemplate = document.querySelector(templateName)
const form = themeSelectorTemplate.content.firstElementChild
themeSelectorTemplate.replaceWith(form)

form.addEventListener('change', (e) => {
const chosenColorScheme = e.target.value
localStorage.setItem('theme', chosenColorScheme)
toggleTheme(chosenColorScheme)
})

const selectedTheme = localStorage.getItem('theme')
if (selectedTheme && selectedTheme !== 'undefined') {
form.querySelector(`[value="${selectedTheme}"]`).checked = true
}
}

const prefersColorSchemeDark = '(prefers-color-scheme: dark)'
window.addEventListener('load', () => {
let hasDarkRules = false
for (const styleSheet of Array.from(document.styleSheets)) {
let mediaRules = []
for (const cssRule of styleSheet.cssRules) {
if (cssRule.type !== CSSRule.MEDIA_RULE) {
continue
}
// WARNING: Safari does not support `conditionText`.
if (cssRule.conditionText) {
if (cssRule.conditionText !== prefersColorSchemeDark) {
continue
}
} else {
// Fallback: `cssText` serialises as `@media (prefers-color-scheme: dark) { … }`,
// so skip any media rule whose text does not contain the dark-scheme condition.
if (!cssRule.cssText.includes(prefersColorSchemeDark)) {
continue
}
}
mediaRules = mediaRules.concat(Array.from(cssRule.cssRules))
}

// WARNING: do not try to insert a rule into a styleSheet you are
// currently iterating on, otherwise the browser will be stuck
// in an infinite loop…
for (const mediaRule of mediaRules) {
styleSheet.insertRule(mediaRule.cssText)
hasDarkRules = true
}
}
if (hasDarkRules) {
loadThemeForm('#theme-selector')
}
})
</script>
</body>
</html>

+ 363
- 0
cache/2023/f23d043d8e99f2af5fcf1b970f98744a/index.md View File

@@ -0,0 +1,363 @@
title: Artificial General Intelligence and the bird brains of Silicon Valley
url: https://softwarecrisis.dev/letters/ai-bird-brains-silicon-valley/
hash_url: f23d043d8e99f2af5fcf1b970f98744a


+ 22
- 0
cache/2023/index.html View File

@@ -75,6 +75,8 @@
<li><a href="/david/cache/2022/5b35e3f3639ceb7d9f684aa81979f304/" title="Accès à l’article dans le cache local : The Market for Lemons">The Market for Lemons</a> (<a href="https://infrequently.org/2023/02/the-market-for-lemons/" title="Accès à l’article original distant : The Market for Lemons">original</a>)</li>
<li><a href="/david/cache/2022/452be27c5cc8a4b9824d1d7e005546c6/" title="Accès à l’article dans le cache local : We need to tell people ChatGPT will lie to them, not debate linguistics">We need to tell people ChatGPT will lie to them, not debate linguistics</a> (<a href="https://simonwillison.net/2023/Apr/7/chatgpt-lies/" title="Accès à l’article original distant : We need to tell people ChatGPT will lie to them, not debate linguistics">original</a>)</li>
<li><a href="/david/cache/2022/c9925184359c01c5c077be55b7cd6505/" title="Accès à l’article dans le cache local : Carrying a camera">Carrying a camera</a> (<a href="https://macwright.com/2017/11/03/carrying-a-camera.html" title="Accès à l’article original distant : Carrying a camera">original</a>)</li>
<li><a href="/david/cache/2022/300b9aa899d44f7606a8448991e2acfd/" title="Accès à l’article dans le cache local : Time to Write? Go Outside">Time to Write? Go Outside</a> (<a href="https://archive.nytimes.com/opinionator.blogs.nytimes.com/2013/09/16/time-to-write-go-outside/" title="Accès à l’article original distant : Time to Write? Go Outside">original</a>)</li>
@@ -117,6 +119,8 @@
<li><a href="/david/cache/2022/4d3fa4020fd0504dbced1a408a2d394e/" title="Accès à l’article dans le cache local : #132: The contagious visual blandness of Netflix">#132: The contagious visual blandness of Netflix</a> (<a href="https://haleynahman.substack.com/p/132-the-contagious-visual-blandness" title="Accès à l’article original distant : #132: The contagious visual blandness of Netflix">original</a>)</li>
<li><a href="/david/cache/2022/dc43f3c837d95313ac7317e10349511e/" title="Accès à l’article dans le cache local : Ask LukeW: New Ways into Web Content">Ask LukeW: New Ways into Web Content</a> (<a href="https://www.lukew.com/ff/entry.asp?2008" title="Accès à l’article original distant : Ask LukeW: New Ways into Web Content">original</a>)</li>
<li><a href="/david/cache/2022/b5acd8bbf209345ff300ea8c10c44181/" title="Accès à l’article dans le cache local : Retiring Pinafore">Retiring Pinafore</a> (<a href="https://nolanlawson.com/2023/01/09/retiring-pinafore/" title="Accès à l’article original distant : Retiring Pinafore">original</a>)</li>
<li><a href="/david/cache/2022/ca3e313992d7ac7e4aeaece85e7f4b6a/" title="Accès à l’article dans le cache local : William Shatner: My Trip to Space Filled Me With Sadness - Variety">William Shatner: My Trip to Space Filled Me With Sadness - Variety</a> (<a href="https://variety.com/2022/tv/news/william-shatner-space-boldly-go-excerpt-1235395113/" title="Accès à l’article original distant : William Shatner: My Trip to Space Filled Me With Sadness - Variety">original</a>)</li>
@@ -151,8 +155,12 @@
<li><a href="/david/cache/2022/dddffbc175fe6802b5e33a92ebc440ec/" title="Accès à l’article dans le cache local : Année 2022 en revue">Année 2022 en revue</a> (<a href="https://blog.hello-bokeh.fr/2022/12/30/annee-2022-en-revue/" title="Accès à l’article original distant : Année 2022 en revue">original</a>)</li>
<li><a href="/david/cache/2022/6eef954bc8dd84322cf19ab38caf2ee3/" title="Accès à l’article dans le cache local : GitHub Copilot AI pair programmer: Asset or Liability?">GitHub Copilot AI pair programmer: Asset or Liability?</a> (<a href="https://www.sciencedirect.com/science/article/abs/pii/S0164121223001292" title="Accès à l’article original distant : GitHub Copilot AI pair programmer: Asset or Liability?">original</a>)</li>
<li><a href="/david/cache/2022/4c5b3193ced812222ef1a6d53e3470aa/" title="Accès à l’article dans le cache local : Fast Path to a Great UX - Increased Exposure Hours">Fast Path to a Great UX - Increased Exposure Hours</a> (<a href="https://articles.uie.com/user_exposure_hours/" title="Accès à l’article original distant : Fast Path to a Great UX - Increased Exposure Hours">original</a>)</li>
<li><a href="/david/cache/2022/89aa5bbfeaa7c8f2411980f99801359c/" title="Accès à l’article dans le cache local : AIs can write for us but will we actually want them to?">AIs can write for us but will we actually want them to?</a> (<a href="https://www.bryanbraun.com/2023/04/14/ais-can-write-for-us-but-will-we-want-them-to/" title="Accès à l’article original distant : AIs can write for us but will we actually want them to?">original</a>)</li>
<li><a href="/david/cache/2022/1fb96c68665818ad66939956b9c4188c/" title="Accès à l’article dans le cache local : TJM - le Taux Journalier Militant">TJM - le Taux Journalier Militant</a> (<a href="https://www.24joursdeweb.fr/2022/tjm-tarif-journalier-militant/" title="Accès à l’article original distant : TJM - le Taux Journalier Militant">original</a>)</li>
<li><a href="/david/cache/2022/745057669a6d4c8fd3c5ce1c5dd81b8c/" title="Accès à l’article dans le cache local : Network effect">Network effect</a> (<a href="https://bastianallgeier.com/notes/network-effect" title="Accès à l’article original distant : Network effect">original</a>)</li>
@@ -167,6 +175,8 @@
<li><a href="/david/cache/2022/25d41d569f637f8342c495139ccce8a8/" title="Accès à l’article dans le cache local : Stupeur et tremblements : comment faire fuir les développeuses expérimentées.">Stupeur et tremblements : comment faire fuir les développeuses expérimentées.</a> (<a href="https://www.duchess-france.fr/coup%20de%20gueule/sexisme/2023/03/06/stupeur-et-trembements.html" title="Accès à l’article original distant : Stupeur et tremblements : comment faire fuir les développeuses expérimentées.">original</a>)</li>
<li><a href="/david/cache/2022/230f8f7224199132de4ce030458536de/" title="Accès à l’article dans le cache local : The mounting human and environmental costs of generative AI">The mounting human and environmental costs of generative AI</a> (<a href="https://arstechnica.com/gadgets/2023/04/generative-ai-is-cool-but-lets-not-forget-its-human-and-environmental-costs/" title="Accès à l’article original distant : The mounting human and environmental costs of generative AI">original</a>)</li>
<li><a href="/david/cache/2022/9718ae2062146285e1c4f406240e04af/" title="Accès à l’article dans le cache local : An update on Robust Client-Side JavaScript">An update on Robust Client-Side JavaScript</a> (<a href="https://molily.de/update-on-robust-javascript/" title="Accès à l’article original distant : An update on Robust Client-Side JavaScript">original</a>)</li>
<li><a href="/david/cache/2022/63654b08ad9eda03b6bea8d1f82e2843/" title="Accès à l’article dans le cache local : Yearnotes #3 • détour.studio">Yearnotes #3 • détour.studio</a> (<a href="https://détour.studio/yearnotes/3/" title="Accès à l’article original distant : Yearnotes #3 • détour.studio">original</a>)</li>
@@ -183,6 +193,10 @@
<li><a href="/david/cache/2022/d7f9460e62402a298210736cdf64b88c/" title="Accès à l’article dans le cache local : 7 Reasons why I don’t write">7 Reasons why I don’t write</a> (<a href="https://mxb.dev/blog/seven-reasons-why-i-dont-write/" title="Accès à l’article original distant : 7 Reasons why I don’t write">original</a>)</li>
<li><a href="/david/cache/2022/4a485034e94dc6123a624e8a589e8dac/" title="Accès à l’article dans le cache local : Poking around OpenAI.">Poking around OpenAI.</a> (<a href="https://lethain.com/openai-exploration/" title="Accès à l’article original distant : Poking around OpenAI.">original</a>)</li>
<li><a href="/david/cache/2022/ccb1821caf1a27ed2a2e9a92a26d0b65/" title="Accès à l’article dans le cache local : The one about AI">The one about AI</a> (<a href="https://macwright.com/2023/04/15/ai.html" title="Accès à l’article original distant : The one about AI">original</a>)</li>
<li><a href="/david/cache/2022/392138accbdaee722a669834da5f1a8d/" title="Accès à l’article dans le cache local : Farandole de projets">Farandole de projets</a> (<a href="https://marienfressinaud.fr/farandole-de-projets.html" title="Accès à l’article original distant : Farandole de projets">original</a>)</li>
<li><a href="/david/cache/2022/fb08217a583922fd319fabb55f34a4f3/" title="Accès à l’article dans le cache local : A community isn’t a garden, it’s a bar.">A community isn’t a garden, it’s a bar.</a> (<a href="https://powazek.com/posts/3571" title="Accès à l’article original distant : A community isn’t a garden, it’s a bar.">original</a>)</li>
@@ -195,6 +209,8 @@
<li><a href="/david/cache/2022/e29bd9361e89e31ac21ee21180ec1dfb/" title="Accès à l’article dans le cache local : Un coup d’œil sous le capot">Un coup d’œil sous le capot</a> (<a href="https://blog.gandi.net/fr/posts/un-coup-d-oeil-sous-le-capot/" title="Accès à l’article original distant : Un coup d’œil sous le capot">original</a>)</li>
<li><a href="/david/cache/2022/096a44a83d8d3f2bdfd21e3d378e4719/" title="Accès à l’article dans le cache local : Aller voir les aurores boréales en train">Aller voir les aurores boréales en train</a> (<a href="https://blog.professeurjoachim.com/billet/2023-03-31-aller-voir-les-aurores-boreales-en-train" title="Accès à l’article original distant : Aller voir les aurores boréales en train">original</a>)</li>
<li><a href="/david/cache/2022/acb867f0c6a744d9a06cd82cd9da002e/" title="Accès à l’article dans le cache local : Which emoji scissors close">Which emoji scissors close</a> (<a href="https://wh0.github.io/2020/01/02/scissors.html" title="Accès à l’article original distant : Which emoji scissors close">original</a>)</li>
<li><a href="/david/cache/2022/4b5bae499ad13fe0f5413d8c7b77c09a/" title="Accès à l’article dans le cache local : Understanding A Protocol">Understanding A Protocol</a> (<a href="https://aeracode.org/2022/12/05/understanding-a-protocol/" title="Accès à l’article original distant : Understanding A Protocol">original</a>)</li>
@@ -205,6 +221,8 @@
<li><a href="/david/cache/2022/7ff62009f21336b8eb54ea18261bcfb7/" title="Accès à l’article dans le cache local : JavaScript, Community">JavaScript, Community</a> (<a href="https://www.zachleat.com/web/javascript-community/" title="Accès à l’article original distant : JavaScript, Community">original</a>)</li>
<li><a href="/david/cache/2022/d1545c8cf9387ad9b0c98020c7ccfe61/" title="Accès à l’article dans le cache local : Scattered ChatGPT thoughts">Scattered ChatGPT thoughts</a> (<a href="https://notebook.wesleyac.com/gpt-ugh/" title="Accès à l’article original distant : Scattered ChatGPT thoughts">original</a>)</li>
<li><a href="/david/cache/2022/576a604fce44b337a38425c021b3b0b3/" title="Accès à l’article dans le cache local : The Best Time to Own a Domain Was 20 Years Ago; The Second Best Time Is Today">The Best Time to Own a Domain Was 20 Years Ago; The Second Best Time Is Today</a> (<a href="https://blog.jim-nielsen.com/2023/best-time-to-own-a-domain/" title="Accès à l’article original distant : The Best Time to Own a Domain Was 20 Years Ago; The Second Best Time Is Today">original</a>)</li>
<li><a href="/david/cache/2022/d6877059a2203cab6c811c5ee3148c17/" title="Accès à l’article dans le cache local : Les Drôles Nouvelles de l'Energie">Les Drôles Nouvelles de l'Energie</a> (<a href="https://www.2000watts.org/index.php/home/reflexion/1317-les-droles-nouvelles-de-l-energie.html" title="Accès à l’article original distant : Les Drôles Nouvelles de l'Energie">original</a>)</li>
@@ -213,8 +231,12 @@
<li><a href="/david/cache/2022/09c0739036ea4a8b6c985e127fe7e3c8/" title="Accès à l’article dans le cache local : ☕️ Journal : Carnets">☕️ Journal : Carnets</a> (<a href="https://thom4.net/2023/02/01/carnets/" title="Accès à l’article original distant : ☕️ Journal : Carnets">original</a>)</li>
<li><a href="/david/cache/2022/f23d043d8e99f2af5fcf1b970f98744a/" title="Accès à l’article dans le cache local : Artificial General Intelligence and the bird brains of Silicon Valley">Artificial General Intelligence and the bird brains of Silicon Valley</a> (<a href="https://softwarecrisis.dev/letters/ai-bird-brains-silicon-valley/" title="Accès à l’article original distant : Artificial General Intelligence and the bird brains of Silicon Valley">original</a>)</li>
<li><a href="/david/cache/2022/98a93dedbf2eb7665680ec6b1bb31e8c/" title="Accès à l’article dans le cache local : 10 Films By Indigenous Filmmakers To Watch Instead Of Avatar: The Way Of Water">10 Films By Indigenous Filmmakers To Watch Instead Of Avatar: The Way Of Water</a> (<a href="https://www.cbr.com/better-movies-than-camerons-avatar-2-inigenous-creators/" title="Accès à l’article original distant : 10 Films By Indigenous Filmmakers To Watch Instead Of Avatar: The Way Of Water">original</a>)</li>
<li><a href="/david/cache/2022/08f83e8893cad4d5a2eb6a560f73dd65/" title="Accès à l’article dans le cache local : Expérimentations GPTiennes: assistant vocal">Expérimentations GPTiennes: assistant vocal</a> (<a href="http://dataholic.ca/2023/04/05/gpt-assistant-vocal/" title="Accès à l’article original distant : Expérimentations GPTiennes: assistant vocal">original</a>)</li>
<li><a href="/david/cache/2022/614fe609b04719e7835fc0717b99c1c6/" title="Accès à l’article dans le cache local : Retraite : la fin du “bonheur différé”, par Denis Maillard">Retraite : la fin du “bonheur différé”, par Denis Maillard</a> (<a href="https://www.philomag.com/articles/retraite-la-fin-du-bonheur-differe-par-denis-maillard" title="Accès à l’article original distant : Retraite : la fin du “bonheur différé”, par Denis Maillard">original</a>)</li>
<li><a href="/david/cache/2022/42b4db67c4daf075941dc387d6be4aaf/" title="Accès à l’article dans le cache local : ETC-ISTE : Bonne année">ETC-ISTE : Bonne année</a> (<a href="http://etc-iste.blogspot.com/2022/12/bonne-annee.html" title="Accès à l’article original distant : ETC-ISTE : Bonne année">original</a>)</li>
