The one about AI
Like everyone, I’ve been thinking about AI. It’s already useful, in a way that the previous big thing, crypto, wasn’t. I don’t think it’ll become generalized AI – I think the AI winter cycle is the base case, and human-like intelligence is qualitatively different from LLMs, no matter how many terabytes of training data you throw at them. But that isn’t what this article is about.
No, it’s about other stuff, particularly technological change, happiness, and craft.
Optimists and pessimists agree that AI will change the world.
If it goes wrong, AI will continue to do a bad job of suggesting sentences for criminals, will promise but fail to diagnose cancer, and will find its way into a lot of other jobs that it’s not qualified for – much like an overconfident young man, which is also its preferred writing style. Maybe it’ll gain sentience and destroy us all.
Or it goes well, and it gives us superpowers: it’s the future of notetaking, a better journal, it helps us think better and get more done. Maybe it’ll gain sentience and be our best friend.
But there’ll be winners and losers – everyone agrees. If it’s good, then the productivity gains will be unevenly distributed, and those with only basic abilities – in programming, writing, music – will be replaced by the machines, or by someone using a machine to produce a lot more of the product. If it’s bad, the people using the AI will benefit, but those at the other end of the algorithm – those subjected to AI-powered policing, healthcare, or hiring – will suffer the inaccuracy, bias, or malice built into the system.
I suspect we’ll get part of both futures. AI will be integrated into a lot of things and become like the Bayesian spam filters that now seem obvious and simple. It’ll be implemented in places it doesn’t belong, and cause havoc. Jobs will shift, some becoming more in demand and others less.
Enough context, let’s talk about history and vibes and happiness.
AI feels like a reshuffling.
Consider the French Revolution, which the history books say benefited the third estate, roughly the commoners, and demoted the first and second, the priests and nobility – sometimes, ahem, dramatically so. Or the evolution of the Eastern Bloc, when socialist and communist countries introduced elements of capitalism, creating new winners and losers from those who were in the right place at the right time.
Fortunes are made and lost in a reshuffling, for those situated in the right place, class, and job – or those who rush to realign themselves with the new wave. We saw a brash, stupid version of this ideology in crypto’s motto, Have Fun Staying Poor – the idea that everyone who didn’t own Bitcoin would be left behind in the new economy. But I see a variation of it every day from people writing about AI. AI is going to change everything: here’s how to adapt, here’s who will win out, write lots of people on LinkedIn and Twitter.
We in the tech industry are used to the ground shifting under our feet: when there’s some paradigm that lets us think less or do more, most of us jump to it. We might choose parts of the industry based on our tolerance for change – embedded programming in 2023 is much more similar to embedded programming in 2000 than web programming is to its 2000 counterpart. But every part of the industry has churn.
AI feels different though, in both the micro and macro.
In the micro sense, more than anything that came before it, AI is a black box. It’s not even like a C++ compiler or React.js’s internals – something that’s complex and huge, but ultimately understandable. AI is not understood deeply even by its creators. Fine-tuning it is more an art than a science. Bugs are not fixed directly, but indirectly, by adding more to the input, or cleaning up the inputs. And the AI models come to us from familiar deities – Microsoft, Google, Facebook. The costs right now are so enormous, like Stability AI’s 75 million dollar server bill, that no small startup is going to compete on the same ground. So the vast majority of “AI startups” are building on someone else’s model, and tinkering with LLMs for fun means using an existing model, probably one written by Facebook or Microsoft-funded researchers.
But in the macro sense, it’s also different: I keep hearing, and thinking, that it’s going to replace all the junior developers. It’s going to empower the seniors, their managers, the idea people, the CEOs – there’ll be fewer salaries to pay, and the least skilled are the ones to be eliminated. This you hear from venture capitalists, CEOs, and senior developers: they might be right, but they also need to be right. Basically, just cranking out code won’t matter as much – Copilot can do that. No longer will people write shortform content for travel blogs and paid promotion columns – ChatGPT will write it.
I have a few thoughts about this.
I grew up in New Jersey. It’s one of the two states where you can’t pump your own gas. I first had to fill up a car with gas midway through college, and needed a friend to teach me how. Despite it being obviously possible to pump one’s own gas, New Jersey will probably keep that policy.
The point is, those jobs were created because of a bizarre law, and they could be lost by removing that law. And all jobs are on that scale: they’re all kind of made-up. You can take an industry and increase salaries by unionizing or restricting the labor supply by requiring more qualifications, or you can decrease salaries by dismantling workers’ rights. You can create a job out of thin air, like a gas station pump attendant or a marijuana dispensary salesman, or remove a class of jobs, like elephant hunting or TV repair.
To a large extent, we get the labor market we aim for with policy, and there is no natural state to it: there are entire categories of jobs that could have been automated away a decade ago but won’t be. Employment and compensation are the output of a lot of different factors: You’re Paid What You’re Worth is a great guide to those.
So I’m not necessarily excited for entry-level jobs to be automated away. I’m not convinced that they have to be. Treating automation as a technological eventuality feels hollow: we don’t have automated kiosks at McDonald’s because they were just invented; we have them because they help the company’s margins. If McDonald’s wanted a better customer experience, it could do the opposite. And then, if activist investors got angry, it would go back to the touchpads again. And until we have UBI, which might never happen, it seems much better for there to be a variety of jobs for a variety of people than to make the job market even more selective. Average people need jobs, to live.
I also just don’t especially want to stop thinking about code. I don’t want to stop writing sentences in my own voice. I get a lot of joy from craft. It’s not a universal attitude toward work, but I’ve always been thankful that programming is a craft that pays a good living wage. I’d be a luthier, or a photographer, or who knows what, if those jobs were as viable and available. But programming lets me write and think all day, and reliably pays my rent. Writing, both code and prose, is for me both an end product and an end in itself. I don’t want to automate away the things that give me joy.
And that is something that I’m more and more aware of as I get older – sources of joy. It’s good to diversify them, to keep track of them, because it’s way too easy to run out. Or to end up with just one, and then lose it.
The thing about Luddites is that they make good punchlines, but they were all people.
Someone was there making illuminated manuscripts when movable type was invented, and they said – correctly – that it sucks and is much less fun. Of course movable type and the printing press would win out, and those laborers were the last of their kind, but if we hopped into a time machine and watched them work, would we make fun of them for not getting with the times? Doesn’t that kind of seem wrong? They weren’t wrong to enjoy their craft and mourn its loss.
And this is not to say that work is free of tedium. To some extent, we all benefit from spellcheck and pre-mixed paints and code completion and all kinds of assistance. And the new writer putting out five stories a day as she tries to earn the right to write front-page headlines probably isn’t savoring every trend piece about bottled water or ashwagandha. But a newspaper with only headline writers, only abstract thinkers at the top of their game commanding ChatGPT to write the unimportant stuff - is that a future that we want, for anyone? How does one learn to write, learn what’s good or bad, learn how to have a journalistic voice? And what about the people who have the writing skills to reliably write a story a day but don’t aspire to or don’t have the ability to be a star – are they cut out of the industry entirely?
Universal Basic Income, maybe. It appeals across the political spectrum, for troubling reasons. Sam Altman, the OpenAI one, started, delayed, and never restarted a plan to research UBI. I don’t know. To me, it feels like a talking point unless someone has a real plan to actually do it – to get the private money or government policy in place, now, before it’s too late. Tech has been terrific at stalling legislation but unsuccessful at creating it: the most likely outcome seems to be that we put forth the idea of UBI, then blame the government for not doing it.
So, it’s all about adapting – or, in another word, opportunism. You go where the future is and stay open-minded about what that is. Even if it’s a bubble, I think that Matt Levine’s words are gospel:
My basic view of bubbles is that if you can identify a bubble, and you have some free time, the right move is to sell into the bubble. Not sell short, mind you, which is risky; you don’t know when the bubble will pop. Sell long. Get into the business that is bubbly, because that’s where the money is. There is demand; become the supply. – Anti-ESG Can Be Good Business
Where does this all land? I’m moderately optimistic about AI.
But I think the thing that excites a lot of people about it is the reorganization, the shift, the reward for opportunism. Navigating that change in market opportunity and being there is its own reward to a lot of people. And it should be: this is the essence of progress in an industrialized society. The relationships, the strategy, matters much more to many people than craft or art: what goes into the production of a thing is just a variable to be minimized.
How people feel about AI has a lot to do with how they think society should be structured, what makes work valuable, and what they truly enjoy doing.
I feel in the middle, as someone who writes prose and code on a regular basis but also helps guide companies and people, and does other sorts of founder stuff. All I’m saying is, whichever way it turns out, spare me in the revolution.