title: What Does The Amazon Echo Look Mean For Personal Style?
url: https://www.racked.com/2018/4/17/17219166/fashion-style-algorithm-amazon-echo-look
The message of many things in America is “Like this or die.”
— George W.S. Trow, Within the Context of No Context, 1980
The camera is a small, white, curvilinear monolith on a pedestal. Inside its smooth casing are a microphone, a speaker, and an eye-like lens. After I set it up on a shelf, it tells me to look straight at it and to be sure to smile! The light blinks and then the camera flashes. A head-to-toe picture appears on my phone of a view I’m only used to seeing in large mirrors: me, standing awkwardly in my apartment, wearing a very average weekday outfit. The background is blurred like evidence from a crime scene. It is not a flattering image.
Amazon’s Echo Look, currently available by invitation only but also on eBay, allows you to take hands-free selfies and evaluate your fashion choices. “Now Alexa helps you look your best,” the product description promises. Stand in front of the camera, take photos of two different outfits with the Echo Look, and then select the best ones on your phone’s Echo Look app. Within about a minute, Alexa will tell you which set of clothes looks better, processed by style-analyzing algorithms and some assistance from humans. So I try to find my most stylish outfit, swapping out shirts and pants and then posing stiffly for the camera. I shout, “Alexa, judge me!” but apparently that’s unnecessary.
What I discover from the Style Check™ function is as follows: All-black is better than all-gray. Rolled-up sleeves are better than buttoned at the wrist. Blue jeans are best. Popping your collar is actually good. Each outfit in the comparison receives a percentage out of 100: black clothes score 73 percent against gray clothes at 27 percent, for example. But the explanations given for the scores are indecipherable. “The way you styled those pieces looks better,” the app tells me. “Sizing is better.” How did I style them? Should they be bigger or smaller?
The Echo Look won’t tell you why it’s making its decisions. And yet it purports to show us our ideal style, just as algorithms like Netflix recommendations, Spotify Discover, and Facebook and YouTube feeds promise us an ideal version of cultural consumption tailored to our personal desires. In fact, this promise is inherent in the technology itself: Algorithms, as I’ll loosely define them, are sets of equations that work through machine learning to customize the delivery of content to individuals, prioritizing what they think we want, and evolving over time based on what we engage with.
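That loose definition fits in a few lines of code. Here is a deliberately crude sketch, purely illustrative (the item names, tags, and scoring scheme are mine, not any real platform’s): a feed that boosts whatever resembles what you last engaged with, evolving its idea of you one click at a time.

```python
from collections import defaultdict

class ToyFeed:
    """A toy engagement-driven feed: its preferences evolve as the user clicks."""

    def __init__(self, items):
        # Each item is a dict with a "name" and a list of "tags".
        self.items = list(items)
        self.weights = defaultdict(float)  # learned preference score per tag

    def record_engagement(self, tags, strength=1.0):
        # Every like or click nudges the engaged item's tags upward,
        # so the ranking "evolves over time based on what we engage with."
        for tag in tags:
            self.weights[tag] += strength

    def rank(self):
        # Prioritize what the model thinks we want: items whose tags
        # match past engagement float to the top of the feed.
        return sorted(
            self.items,
            key=lambda item: sum(self.weights[t] for t in item["tags"]),
            reverse=True,
        )
```

Click on one minimalist item and everything minimalist rises; the feed never explains itself, it just reweights.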
Confronting the Echo Look’s opaque statements on my fashion sense, I realize that all of these algorithmic experiences are matters of taste: the question of what we like and why we like it, and what it means that taste is increasingly dictated by black-box robots like the camera on my shelf.
In his 2017 book Taste, the Italian philosopher Giorgio Agamben digs up the roots of the word. Historically, it is defined as a form of knowledge through pleasure, from perceiving the flavor of food to judging the quality of an object. Taste is an essentially human capacity, to the point that it is almost subconscious: We know whether we like something or not before we understand why. “Taste enjoys beauty, without being able to explain it,” Agamben writes. He quotes Montesquieu: “This effect is principally founded on surprise.” Algorithms are meant to provide surprise, showing us what we didn’t realize we’d always wanted, and yet we are never quite surprised because we know to expect it.
Philosophers in the 18th century defined taste as a moral capacity, an ability to recognize truth and beauty. “Natural taste is not a theoretical knowledge; it’s a quick and exquisite application of rules which we do not even know,” wrote Montesquieu in 1759. This unknowingness is important. We don’t calculate or measure if something is tasteful to us; we simply feel it. Displacing the judgment of taste partly to algorithms, as in the Amazon Echo Look, robs us of some of that humanity.
Every cultural object we aestheticize and consume — “the most everyday choices of everyday life, e.g., in cooking, clothing or decoration,” Pierre Bourdieu writes in his 1984 book Distinction: A Social Critique of the Judgement of Taste — is a significant part of our identities and reflects who we are. “Taste classifies, and it classifies the classifier,” Bourdieu adds. If our taste is dictated by data-fed algorithms controlled by massive tech corporations, then we must be content to classify ourselves as slavish followers of robots.
We might say that “taste” is the abstract, moralized knowledge, while “style” is its visual expression. Fashion makes taste easily visible as style, in part because its distinctions between color or cut in clothing are so specific and yet so random (“rules which we don’t even know”). In the past, a whimsical consensus among elites dictated fashion culture; a royal court or an echelon of magazine editors imposed a certain taste from the top of society, down.
Roland Barthes noticed this arbitrariness in his 1960 essay Blue Is in Fashion This Year. Barthes scrutinizes a fragment of text from a fashion magazine — “blue is in fashion this year” — to see where its thesis, that a particular color is particularly tasteful right now, comes from. His conclusion is that it doesn’t come from anywhere: “We are not talking about a rigorous production of meaning: the link is neither obligatory nor sufficiently motivated.” Blue is not in fashion because it is particularly functional, nor is it symbolically linked to some wider economic or political reality; the statement has no semantic logic. Style, Barthes argues, is an inexplicable equation (a faulty algorithm).
Further evidence of the artificial and hierarchical nature of style in the past can be found in that scene from the 2006 film The Devil Wears Prada, in which Meryl Streep (as magazine editor and Anna Wintour facsimile Miranda Priestly) tells her assistant played by Anne Hathaway that the chunky blue sweater she is wearing was, in essence, chosen for her. “That blue represents millions of dollars and countless jobs, and it’s sort of comical how you think you made a choice that exempts you from the fashion industry when, in fact, you’re wearing a sweater that was selected for you by the people in this room from a pile of stuff,” Streep says.
In other words, blue is in fashion this year because some people decided it was. You, the non-tastemaker, have no choice in the matter.
Is it possible that instead of this artificial fashion language, algorithms like those powering Alexa could create a more systematic, logical construction of fashion aesthetics built on data? Blue is in fashion this year because 83.7 percent of users purchased (or clicked like on) blue shirts, the Amazon Echo Look algorithm says, therefore it is in fashion, therefore businesses should manufacture more blue shirts, and you, the customer, will buy and wear them. No human editors needed.
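The feedback loop implied here is easy to simulate. In this toy model (the starting numbers and the amplification exponent are invented for illustration), a color’s exposure each round grows superlinearly with its current purchase share, so a slight early lead snowballs into a trend:

```python
def simulate_feedback(shares, rounds=5, amplification=1.5):
    """Toy model of a popularity feedback loop.

    shares: dict mapping a color to its current purchase share (sums to 1).
    Each round, the platform recommends colors in proportion to
    share ** amplification; with amplification > 1, whatever leads
    slightly gets shown more, bought more, and pulls further ahead.
    """
    for _ in range(rounds):
        exposure = {c: s ** amplification for c, s in shares.items()}
        total = sum(exposure.values())
        shares = {c: e / total for c, e in exposure.items()}
    return shares
```

Starting from a 55/45 split between blue and gray, five rounds push blue past 80 percent of purchases. The statistic becomes the trend, with no editor anywhere in the loop.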
I’m not sure if this technology-derived algorithmic facticity of taste is better or worse than Meryl Streep-Anna Wintour deciding what I wear, which might be the core concern of this essay.
When modes of taste change, there is a certain fear: Am I in or out? Do I understand the new or am I stuck in the old? In 1980, the New Yorker published George W.S. Trow’s essay describing this feeling under the title of “Within the Context of No Context,” from which I took the epigraph and structure for this piece. Trow’s essay came out as a book in 1981 and again in 1997. In the appended introduction to the 1997 edition, he uses the phrase “collapsing dominant” to describe a situation in which an older, established mode of cultural authority, or a taste regime, is fading and being replaced by a newer one. These regimes have two parts: the subjects of taste and the way taste is communicated.
Today we are seeing the collapse of the dominant regime that Trow originally observed emerging, mass-media television, which had previously replaced the moralistic mid-century novels of New England WASPs. Now, we have Instagram likes, Twitter hashtags, and Google-distributed display advertising spreading taste values. Instead of the maximalist, celebrity-driven, intoxicant culture of ‘70s television — Nixon, Star Wars, shag rugs, cocaine, nuclear bombs — we now have the flattened, participatory, somehow salutary aesthetic of avocado toast, Outdoor Voices leggings, reclaimed wood, Sky Ting yoga classes, and succulents in ceramic planters.
That we are in the midst of this shift in taste might help explain our larger mood of instability and paranoia (or is it just me?). We can’t figure out what might be sustainable to identify with, to orient our taste on. The algorithm suggests that we trust it, but we don’t entirely want to. We crave a more “authentic,” lasting form of meaning.
In 2009, a designer named Ben Pieratt, now living in Massachusetts, launched Svpply. It was a kind of online social network based on shopping, where invitation-only members could curate selections of products from elsewhere on the internet and users could follow their favorite tastemakers. Eventually, any user could become a curator. I remember it from the time as a calm, limpid pool in the midst of so much internet noise. The site presented only cool clothes, bags, and accessories, all chosen by individual humans, since algorithmic feeds weren’t widely deployed at the time. On Svpply you could find the melange of signifiers of a certain class of early-adopter design-bro: minimalist sneakers, fancy T-shirts, Leica cameras, and drop-crotch sweatpants.
In 2012, eBay acquired the company and quickly shut it down. In 2014, Pieratt launched a Kickstarter for Very Goods, a Svpply replacement that’s still active. Today he sees Svpply as a cautionary tale about the limits of human curation on the internet. Over the phone, we talk about how taste doesn’t really scale. The bigger a platform gets, the harder it is to maintain a particular sense of style. By opening the platform, Pieratt had tried to “convert from a human-driven community into a machine,” he explains. “When we lost the exclusivity, people didn’t really care anymore.” Svpply’s innate sense of uniqueness didn’t survive: “If everyone’s editing Vogue, it wouldn’t be Vogue.”
Another question: How good of a tastemaker can a machine ultimately be?
I worry that we are moving from a time of human curation (early Svpply) to a time in which algorithms drive an increasingly large portion of what we consume (the Facebook feed). This impacts not only the artifacts we experience but also how we experience them. Think of the difference between a friend recommending a clothing brand and something showing up in targeted banner ads, chasing you around the internet. It’s more likely that your friend understands what you want and need, and you’re more likely to trust the recommendation, even if it seems challenging to you.
Maybe it’s a particularly shapeless garment or a noisy punk track. If you know the source of the suggestion, then you might give it a chance and see if it meshes with your tastes. In contrast, we know the machine doesn’t care about us, nor does it have a cultivated taste of its own; it only wants us to engage with something it calculates we might like. This is boring. “I wonder if, at the core of fashion, the reason we find it fascinating is that we know there’s a human at the end of it,” Pieratt says. “We’re learning about people. If you remove that layer of humanity from underneath, does the soul of the interest leave with it?”
Pieratt makes a further distinction between style and taste. Style is a superficial aesthetic code that is relatively simple to replicate, whereas taste is a kind of wider aesthetic intelligence, able to connect and integrate disparate experiences. Algorithms can approximate the former — telling me I should wear a blue shirt — but can’t approximate the latter because the machine can’t tell me why it thinks I should wear a blue shirt or what the blue shirt might mean to me. When a machine has taken over the exploration of taste, the possibility of suddenly feeling something from a surprising object is narrowed to only what the machine decides to expose. “I don’t think there’s such a thing as machine taste yet,” says Pieratt.
Of course, he and I might just be part of the fading regime, our “collapsing dominant.” The dystopian babies of 2018 raised on algorithmic Spiderman-slash-Frozen YouTube videos may have different appetites in the future.
The threat of banality (or the lack of surprise) implicit in full machine curation reminds me of the seemingly random vocabulary meant to improve SEO on Craigslist posts. As one chair listing I encountered put it: “Goes with herman miller eames vintage mid century modern knoll Saarinen dwr design within reach danish denmark abc carpet and home arm chair desk dining slipper bedroom living room office.”
Imagine the optimized average of all of these ideas. The linguistic melange forms a taste vernacular built not on an individual brand identity or a human curator but a freeform mass of associations meant to draw the viewer in by any means necessary. If you like this, you’ll probably like that. Or, as a T-shirt I bought in Cambodia a decade ago reads, “Same same but different.” The slogan pops into my mind constantly as I scroll past so many content modules, each unique and yet unoriginal.
Algorithms promise: If you like this, you will get more of it, forever. This experience is leaking from the internet of Google ads for the bag you just bought into the physical world. Look to the artist Jenny Odell’s investigation of “free watch” offers on Instagram for an example. The watches appear, at a minimum, stylish, with small variations on minimalist faces and metal bands. But they are not the result of an enlightened sense of taste, per Pieratt’s definition. The brands that sell them are thin fictions whipped up in Squarespace and the actual products are the result of Alibaba manufacturing and Amazon drop-shipping, in which a product moves directly from manufacturer to consumer having never entered a store. The phantom watches are empty fashion language, objects without content.
Other ways in which our experiences are warped by algorithmic platforms include Spotify possibly commissioning original music from “fake” artists to match the latent content desires of its audience, as Noisey noticed; delivery restaurants that are only virtual, conjuring a digital brand out of a shadowy group kitchen and serving food via Uber Eats; the surreal kids’ YouTube videos, which exist because they are rewarded with views by the feed algorithm and thus earn their creators advertising profit; and the globalized visual vernacular of Airbnb interior decorating, which approximates a certain style emerging from the platform itself. Having analyzed the data from some platform or another, these are things the machine thinks you want, and it can serve them up immediately and infinitely.
We find ourselves in a cultural uncanny valley, unable to differentiate between things created by humans and those generated by a human-trained equation run amok. In other words, we can no longer tell what is the product of genuine taste and what is not. (This lack of discernibility also contributes to the problems of fake news, which algorithmic feeds promote like any other content, however inaccurate.)
Spotify’s fake artists aren’t fake, per se; they’re a kind of muzak created by a Swedish production company that just so happens to have the same investors as Spotify. That the simple possibility of non-genuine music fed to us by an algorithmic platform without our knowledge created a media frenzy speaks to our fundamental fear — a possibly irrational or at least abstruse 21st-century anxiety — of an algorithmic culture.
In 1935, Walter Benjamin observed that the work of art in the 20th century was undergoing a change during the advent of photography and film. The newfound reproducibility of the individual work of art through these technologies meant that art was deprived of its “aura”: “the here and now of the original” or “the abstract idea of its genuineness,” as Benjamin writes.
Photography, as Benjamin observed, could reproduce a singular work of art. Algorithmic machine learning, however, can mimic an entire stylistic mode, generating new examples at will or overlaying a pre-existing object with a new style unrelated to its origins. In 2015, researchers released a paper in which they turned a photograph of Tübingen, Germany, into a van Gogh painting, then overlaid the style of Munch and Kandinsky in turn. The system “achieves a separation of image content from style,” the researchers write (a disconnect that contributes to our anxiety).
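That separation hinges on a surprisingly small piece of math. In the style-transfer paper (Gatys et al.), “style” is represented by the Gram matrix of a network layer’s activations: a table of which feature channels fire together, with all information about where they fire thrown away. A minimal NumPy version of just that computation, with a stand-in array in place of a real network layer:

```python
import numpy as np

def gram_matrix(features):
    """The style representation from neural style transfer, stripped to its core.

    features: array of shape (channels, positions), the flattened
    activations of one convolutional layer. Entry G[i, j] correlates
    channel i with channel j across all spatial positions, capturing
    texture ("style") while discarding layout ("content").
    """
    channels, positions = features.shape
    return (features @ features.T) / positions  # average over positions
```

Matching these matrices between a generated image and a van Gogh, layer by layer, while separately matching raw activations to the photograph, is what repaints Tübingen in his brushwork.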
So it’s not just an individual work which can be reproduced, but rather an artist’s entire aesthetic. The resulting lack of aura devalues unique style, or changes our experience of it, just as photography once challenged painting. “The reproduced work of art is to an ever-increasing extent the reproduction of a work of art designed for reproducibility,” Benjamin writes. Another cultural crisis is looming as we realize that “new” or popular styles will be increasingly optimized for their algorithmic reproducibility (in other words, designed to spread meme-like over digital platforms) instead of their originality.
Want another Picasso, Gucci, Gehry, Glossier, Beyoncé? Just push the button. It’ll be close enough. There’s already an Instagram influencer with over 700,000 followers, Miquela, who appears to be a 19-year-old model dressing up in clothes from Chanel, Proenza Schouler, and Supreme. Her vibe is Kylie Jenner, with her malevolent-cherub face and embrace of streetwear. Except Miquela is actually a virtual character her designers rendered by computer, as if produced by a Kardashian-fed AI. Unlike Jenner, Miquela is a style that can be reproduced cheaply and infinitely.
Every platform, canvassed by an algorithm that prioritizes some content over other content based on predicted engagement, develops a Generic Style that is optimized for the platform’s specific structure. This Generic Style evolves over time based on updates in the platform and in the incentives of the algorithm for users.
When we encounter the Generic Style in the world, we feel a shiver of fear: We have entered the realm of the not-quite-human, the not-quite-genuine. Did we make an independent decision or do the machines know us better than we know ourselves? (This anxiety might just be an iteration of the debate between free will and fate.)
Addendum I: Algorithmic Intimacy
One day, a friend of mine in New York City is on OKCupid, Bumble, or Hinge. He encounters the profile of a young woman and matches with her. He introduces himself with a joke based on the cultural signifiers in her profile, as is the habit of our time. She doesn’t respond.
Months later, I am sitting with him in a restaurant at the only two open seats left at the bar. At the end of our corner, there is a young woman sitting alone. My friend and the young woman strike up a conversation that seems to have a certain spark to it. Eventually, the realization occurs to her, or maybe she’d known all along: “Did we… match online?” She apologizes for not replying to his message and they keep chatting with increasing animation.
Would this flash of intimacy have occurred without the intervention of the algorithm that introduced them? Not so quickly, I think, if at all. The algorithm added a certain missing context through which they identified each other; it can be comforting, even helpful to feel recognized by the machine. He gets her phone number.
Addendum II: Cities
Then again, aren’t cities (and their bars, restaurants, and boutiques) really just highly attuned machines for sorting people according to their interests and desires? By being here, we have already communicated certain things about ourselves, much like checking preferences on an OKCupid account and surrendering to the equation.
Our experiences have always been algorithmic, if not previously driven by an actual algorithm. Sometimes it seems wrong to speak of some kind of lost originality or authenticity, as if life before Facebook were wholly innocent, non-formulaic, pure — tasteful. Taste has always been and always will be derivative, hierarchical, and shallow, but also vital.
What do we do, then, about this shift from human to digital taste? It’s possible to consciously resist the algorithm, like someone might buck the current fashion trend — wearing bell-bottoms and tie-dye, say, instead of trim, blank basics. I might only read books I stumble across in used bookstores, only watch TV shows on local channels, only buy vinyl, only write letters, forsake social media for print newspapers, wear only found vintage. (Etsy is already algorithmic, with its own faux-folksy Generic Style.) I could abstain from algorithmic culture like the Luddites who resisted the automation of textile factories in the 19th century by destroying machines. It would be so organic. Cool! Obscure! Authentic!
But as soon as something Cool, Obscure, and Authentic gets put back on the internet, it is factored into the equation, maybe it goes viral, and soon enough it’s as omnipresent as Millennial Pink circa 2017. In this way, algorithmic culture is not encouraging of diversity or the coexistence of multiple valid viewpoints and identities. If a stylistic quirk is effective, it is integrated into the Generic Style as quickly as possible; if it is ineffective, it is choked of public exposure. So you’d also have to keep your discoveries analog. Put an air gap between your brain and the internet.
Addendum III: One Example of Non-Algorithmic Taste
My friend is sitting across from me in a wine bar. She’s wearing a black turtleneck cashmere sweater with long ridges down the sleeves. It looks perfect and yet unplaceable; no brand logo, material texture, or discernible quirk identifies it with one source or another. “Where is that sweater from?” I ask.
“Oh, I got it from my grandma’s closet when she moved out of Manhattan,” she says.
I grew up in the early 2000s during the beginning of the social internet, when there were no smart feeds or adaptive algorithms to sort content. The primary ways I discovered new things were through forums, where members suggested which shoes to buy or bands to listen to, and through digital piracy, which gave me a relatively unfiltered list of possible cultural artifacts to consume on Kazaa or BitTorrent, which did not come with “You May Also Like This” recommendations. (I did not live in a city and the local comprehensive bookstore was a Borders 45 minutes away.) These services were the digital equivalent of used vinyl shops: You take what you find, either you like it or not, and then you try again, constantly refining an image of what you want and (thus) who you are.
Since those were formative teenage years, I derived a good part of my identity as a cultural consumer from DIY piracy. Still, the results were neither exceptional nor original. I downloaded a lot of Dave Matthews Band concert bootlegs and sought out American Apparel in the mall after seeing it online. But at least these things felt like mine? Or at least the assemblage aggregated into something I might have called personal taste.
Now YouTube tells me which videos to watch, Netflix serves me TV shows, Amazon suggests clothes to wear, and Spotify delivers music to listen to. If content doesn’t exist to match my desires, the companies work to cultivate it. The problem is that I don’t identify as much with these choices as what I once pirated, discovered, or dug up. When I look at my Spotify Discover playlists, I wonder how many other people got the exact same lists or which artists paid for their placement. I feel nostalgic for the days of undifferentiated .rar files loading slowly in green progress bars. There was friction. It all meant something.
To be fair, this content consumption was also extremely unethical. And it’s not like I don’t like Netflix shows or Spotify playlists. Like cigarettes or McDonald’s, they were designed for me to like them, so of course I like them. It’s just that I don’t always like that I like them.
Yet there are an increasing number of legal alternatives to these mainstream platforms. We’re seeing a profusion of smaller platforms with different brand images, the equivalent of a Reformation instead of a J.Crew or Glossier instead of Clinique. If Gap is a mainstream platform for fashion basics, then Everlane, with its transparent manufacturing and minimalist branding, and now Scott Sternberg’s Entireworld, which purports to offer a utopian clothing system, are its more niche, though no less generic, hipster equivalents.
FilmStruck, for example, streams “critically acclaimed classic movies, hard-to-find gems, and cult favorites” like those in the Criterion Collection, while MUBI selects “cult, classic, independent and award-winning films from around the world.” The full-bleed, black-and-white stills on their websites differentiate them as far hipper than Netflix or cable — you might feel safer about identifying your taste with them (“I don’t watch TV; I only watch FilmStruck,” a platform hipster says). Instead of Spotify, there’s The Overflow, with vetted Christian worship music, or Primephonic, with high-definition classical recordings. Quincy Jones launched the “Netflix of jazz.”
Digital platforms exist for non-digital products, too. The start-up Feather will rent you a “hip bedroom” bundle of faux-midcentury side tables and bed frame for $109 a month in a kind of minimally stylish pre-packaged taste kit, a thinly reproduced aesthetic lacking any aura. Similarly, fashion companies like Gustin and Taylor Stitch crowdfund their new products, counting pre-orders before manufacturing anything. These are different from traditional brands in that they are driven from the bottom-up by the actions of users rather than the diktats of auteur creative directors. And, like the drop-shipped generic watches, they are extremely boring, releasing wave after wave of artisanal fabrics turned into rustic, vaguely outdoorsy gear.
What these businesses suggest is that you can have the benefits of a digital platform and an algorithmic feed while still feeling self-satisfied, pretentious, and exclusive in the knowledge that your content has been carefully curated by humans. Or, you could hire a tastemaker of your own. As The Verge reported, a musician named Deb Oh freelances as a Spotify curator through her service Debop, making custom playlists for $125. She culls from “the symphony of algorithms,” as she beautifully puts it, and comes back with something more manageable, more human.
Oh’s services present original curation as a luxury good. It costs money to step off the consumption rails so conveniently laid out for us by tech companies and their advertisers. In the future, taste will be built on allegiances to platforms as much as individual creators or brands. Are you more of an Amazon, Apple, WeWork, Airbnb, or Facebook person? Unless you go off-platform, there are no other choices. Not just for your technology, but for your culture: fashion, furniture, music, art, film, media.
Platformization is something the fashion industry is already familiar with, of course: Each major brand is its own platform, expanding in a profusion of seasonal lines and accessories meant to cater to your every need within a single taste-system. LOT2046 is a smaller, independent algorithmic platform for fashion that I subscribed to last year and I haven’t looked back. Its thesis is simple: Your clothing desires can be reduced to a series of signifiers that the service automates and adapts to you. Shipments of all-black clothing and accessories arrive every month; the only customizations are a few stylistic choices — short socks or long, crew-neck or V-neck — and that the items come with your name emblazoned on them, like a black duffle bag I recently received that says KYLE CHAYKA in raised black thread.
LOT is pro-algorithm. “Any technology should know what you need and want more than you know,” its founder Vadik Marmeladov, a Russian designer who prefers to stay behind the scenes, told me. “Platforms will be telling you what you want before you want it.” He feels that machines should not just suggest things, but make decisions for us, from planning a weekend trip to a morning coffee order. In other words, they should supplant our taste entirely.
Surrendering to LOT is a kind of freedom to stop thinking about fashion, freeing the mind for loftier things — like contemplating mortality, Marmeladov suggests. Its promise is that by drastically narrowing the variables, perhaps an algorithm can actually help you achieve individuality, not just through clothing but induced existentialism. I don’t wear LOT’s clothes all the time, but I find its ethos seeping into how I think about my consumption in the algorithm age more generally. If our decisions about what we consume don’t seem to communicate much about ourselves anymore, why not just choose to not make them?
Say it with me: I enjoy what I enjoy regardless of its potential for receiving likes, going viral, or being found acceptable by an algorithm.
Say it with me: I also do not deny that I am implicated, inexorably, in the Generic Style of my time.
The promise of algorithms is that they will show you yourself, refining an image of your tastes that should be identical to what you would have chosen on your own. The current reality is that these feeds silo you in homogenizing platforms, calculating the best-fitting average identity. That these average identities come in increasingly minute shades does not mean that they are unique.
A better mode of resistance might be to use the algorithms’ homogenizing averageness against them, adapting their data for productive disruption. We can take advantage of the clash between multiple algorithmic ideals, or between an algorithm’s vision of the world and reality, creating a glitch-based aesthetic. What would be error could be art.
As culture has changed to accommodate every other technological innovation, so our ideas about algorithms will change. “Eventually we may opt to shift our definition of art in order to make accommodation for the creativity of artificial intelligence,” says Marian Mazzone, an art history professor at the College of Charleston who worked on a project in which AI created original styles of painting (they mostly look like mash-ups of Impressionism, Fauvism, and Cubism).
Oscar Sharp is the director of Sunspring, a short sci-fi film with a script generated by a machine-learning algorithm trained on episodes of The X-Files, Star Trek, and Futurama. The result is something spiky, mostly non-narrative — it doesn’t make much sense, but it is compelling and unique. The film doesn’t try to fool the viewer into thinking it’s 100 percent human-made. Rather, the actors strain to adapt to the aesthetics of the machine and discover something new in the process.
“It’s like you’re working on a big TV show with a very powerful showrunner who has written the episode, and the showrunner got drunk last night, passed out, and you couldn’t not make the episode,” Sharp says. “You have to do everything within your power to make the episode as it was written.” The challenge was generative: “Augmented creativity is much more interesting than a replacement of creativity,” he says.
The automated clothing service Stitch Fix, kind of a preppy version of LOT, uses algorithmic help to optimize its new original designs, a process it calls “Hybrid Design,” to increase sales and address gaps in the market: customers like ruffles and plaid, so why not plaid ruffles? But we could instead go in the opposite direction, making clothes no one wants — yet. Algo-clash clothing would be more like the artist Philip David Stearns’s glitch textiles, unique fabrics generated from software gone intentionally awry, the discordant pattern of pixels made into a Baroque style.
Fashion is always one step ahead, though. The triple-waistband jeans recently released by ASOS already look like a glitched algorithm designed them.
It is not just that artists can collaborate with algorithms; there is always a person at the end of the machine — like the man behind the curtain in Oz — regulating what it does. The majority of these are currently Silicon Valley engineers. And we human consumers are still on the other side of the algorithm, with our freedom to decide what we consume or to opt out. Our decisions shape what is popular in the present as well as what is preserved into the future. “Let’s not forget the audience has a major role to play in determining what will matter and what will not, what is liked and what is not,” Mazzone says. In the long term, this is slightly comforting.
I leave the cyclopic Amazon Echo Look on a shelf in my living room, where it glares at me every time I walk past without stopping to let it evaluate my outfit. It yearns to assign inexplicable percentages, and yet I am more comfortable judging for myself. It takes fine pictures, but like a mirror, it mostly shows me what I already know. And the device is trying to match me to some universalized average, not my individual style, whatever that might be. It doesn’t know me at all — it can’t tell what kind of clothes I’m comfortable in nor how the clothes I wear will function as symbols outside, in the place I live, in the contexts of class or gender. All-black doesn’t play the same in Kansas City as it does in New York, after all. This is the kind of social, aesthetic intelligence, the sense of taste, that our algorithms are missing, for now at least.
Amazon says the Look is for achieving your best style, but its ulterior motives aren’t hard to spot. When I asked the machine about my plaid shirt, an ad popped up on the app’s feed showing me a few other, similarly colored plaid shirts — none particularly stylish or different enough from the one I own, bereft of brand name — that I could buy on Amazon. In fact, Amazon is already using the data it collects to manufacture its own clothing lines, and the results are about what you’d expect from a robot: wan imitations of whatever is currently popular, from the “globally inspired” Ella Moon to the cool-French-girl knockoff Paris Sunday. Training on millions of users’ worth of data and images from the Look showing what we actually wear could make the in-house brands slightly less uncanny. Then again, imagine a potential leak, not of credit card data but an extensive cache of your outfits.
It’s up to us whether or not we care about the shades of distinction between human and machine choice, or indeed if we care about fashion at all. Maybe taste is the last thing separating us from the Singularity; maybe it’s the first thing we should get rid of. “I don’t think the consumer cares, as long as it works,” one Stitch Fix executive said of its algorithmically designed clothes.
But if we do want to avoid displacing or reassigning our desires and creativity to machines, we can decide to become a little more analog. I imagine a future in which our clothes, music, film, art, books come with stickers like organic farmstand produce: Algorithm Free.
“Echo” is a good name for Amazon’s device because it creates an algorithmic feedback loop in which nothing original emerges.
Alexa, how do I look?
You look derivative, Kyle.