Sometimes, a conversation reminds me that my household spends much more time than average discussing the tech dystopia. Perhaps it’s inevitable: Two media studies nerds, reared on the (particularly millennial) wild and unsupervised internet, rendered nostalgic by working in community radio, “radicalized” by the failure of many promised futures, now gone to seed and semi-Luddite in the country, learning endangered handicrafts and studying Shaker furniture. With such rich shared special interests and backgrounds, of course we talk about this shit all the time.
I forget that even though almost everyone I know (and everyone that I like) feels a vague dis-ease in these times, not everyone wants to immerse themselves in the gory details.
I share this to position myself within the journey I plan to undertake in this series. I’m not a tech journalist, really, and I don’t work in ~*`ThE teCh SPacE`*~, but I am pretty ok at thinking and I read and think about this a lot. And since many readers here did not subscribe expecting a ton of writing about tech, perhaps I can provide useful introductions to corners of this overwhelming and often scary—but just as often silly—world. Or, if you’re deep into it, too, we can have a discussion and you can let me know what I’m missing.
I was listening to one of my little tech dystopia podcasts recently—this one about Clearview AI and its sinister, nearly unimaginable reality of “Shazam for faces”—when one anecdote nailed something I’ve been struggling to articulate about my deep distrust of, and disdain for, the cult of techno-optimist oligarchs and their ardent believers. In the anecdote, journalist Kashmir Hill recounts a meeting with Clearview co-founder Hoan Ton-That, in which she asked him to consider the possible repercussions of this powerful tool in the wrong hands, or simply in more hands. What would he do about copycat programs? Or to address the casual misuse of such an invasive search? She recalls that he told her, “That’s a really good question, I’ll have to think about it.”
I’ll have to think about it? Seems like SOMEthing SOMEONE should have already THOUGHT about? Wild how the entire field of tech ethics, not to mention adjacent fields of study, representing many entire lifetimes of dedication to these very questions, can be so easily bypassed by one guy. Ton-That, like many shady opportunists in “tech,” can skip the ethical pondering and get straight to business: by his own paraphrase of his process, essentially Googling how to train facial recognition AI, scraping the public internet for billions of photos, then selling the output to police departments for a massive profit. The thinking can come later, right?
This is just one example plucked from the contemporary era of a movement I’ve decided to call “the Big Dumb Gold Rush.” It’s a tale as old as time: gushes of capital investment loft short-sighted new industries and their Captains to wealth and power. This time, it’s AI.
It’s Big. For the introduction, it’s enough to say that private investment in generative AI has octupled since 2022, reaching roughly $25.2 billion this year. Dizzying speculations (growth over 40% for each of the next ten years) abound. Even people not in the know about the markets (me, lol) can look around and see nearly every industry pivoting to incorporate AI tools and generative AI into their existing products, likely hoping to cash in on this growth spurt. “Harnessing the power of AI…” feels like an updated version of 2020’s “In these unprecedented times…” for advertising teams everywhere.
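To feel just how dizzying that speculation is, here’s a quick back-of-the-envelope sketch (a hypothetical illustration using only the numbers cited above, not a forecast of mine) of what 40% annual growth compounds to over a decade:

```python
# Back-of-the-envelope compounding: what "over 40% growth for each of the
# next ten years" would do to the ~$25.2 billion figure cited above.
# Illustrative only; the starting number and rate are just the ones quoted.
investment = 25.2  # billions of dollars, roughly this year's private investment in generative AI
rate = 0.40        # the speculated annual growth rate

for year in range(1, 11):
    investment *= 1 + rate  # compound one year of growth

print(f"After 10 years at 40%/year: ~${investment:,.0f} billion (about 29x today's figure)")
```

That pencils out to somewhere around $730 billion a year, which is the kind of number the word “dizzying” was made for.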
It’s Dumb. I will say at the outset that yes, I am absolutely sure there are tasks for which high-powered computing, large language models, and other tools in the “AI” category will be transformative and life-changing. Climate and other types of threat modeling? Medical diagnostics? Automation in unsafe and undesirable fields of work? Sure, let’s talk about it. But for now, the vast majority of what’s being hastily pushed to market is…not it.
Many of the use cases being peddled by generative AI enthusiasts are just kind of wack. I can generate my own animated Marvel-style action movie populated with images of my own cat fighting squirrels in space? Did anyone ask for that? Anyone can create art, though most people lack the circumstances to instantaneously create what they wish they could. Merely typing demands into a magic stuff-maker lacks the fundamental inquiry, struggle, and mystery of creativity. Much of AI-generated “art” feels more like an extension of the worst of humanity’s cultural output, blended into a soup that tastes of shiny, quippy nothingness.
Last year, when ChatGPT first launched to the public and people began (understandably) speculating and expressing concern about the potential of such technology to impact humanity’s future, its chief innovators were the ones telling us it was dangerously powerful. A tone of “Ooopsie, it might kill everyone but probably not, lol” prevailed in tech media, while the public, and to some extent governments and other institutions, were left too poorly informed to assess the actual level of danger. But now, from the massive holes in Google’s initial AI search rollout to the potential for large language models to perpetuate every human-caused error on and off the internet, these models don’t seem powerful enough to End Us. They do seem likely to keep us in our own way as a species by amplifying the faulty premises and power imbalances of everything we’ve built so far.
Also, it takes so much energy to run this shit we didn’t even ask for (see Google search, Meta AI toolbar, etc.) that we will probably beat AI to the punch, indirectly, by burning the last of the fossil fuels to power fruitless and frivolous searches brought to the Oracle, or to make our own animated bullshit to watch within the cozy innards of the empire while the world burns.
It Reminds Me Of The Gold Rush. In the California Gold Rush, the influx of (mostly) European settlers hoping to strike it rich permanently altered a massive landscape through deforestation, pollution, and the intensification of colonial violence. The urgency of the desire for gold, for a personal stake in a new era of wealth creation, superseded thoughts of ecology, future land use, and the well-being of “the other.” When we look back at surges like this one, the historical actors often get a pass. In dominant American culture, we tend to take a “they didn’t know any better” attitude toward history.
The problem is that this attitude isn’t confined to “history”: get-rich-quick, harder-better-faster-stronger ideologies still drive the American psyche. Scammy garbage proliferated on the internet well before anyone had access to generative AI, and now its production is accelerating. (See: junk books, drop-shipping useless plastic bullshit, NFTs.)
On the information side, the oceanic internet, once teeming with life and danger and weird shit and useful information and scams and surprises and stupid art and beauty, is being fracked by AI models, homogenized into an approximated “best version” of itself, and shit back out of the large pipes of a few powerful platforms. Search is worse, browsing is worse, social media is worse. Maybe it’s growing pains and I’ll be proven wrong, but the status quo doesn’t give me much hope for the future of this internet.
This whole series won’t be such a downer; it will focus more on dreams for different intelligences, links to thinkers far deeper into these topics than I am, and useful tools and solutions. For now, I’ll let Amiri Baraka close us out, from his 1970 essay Technology and Ethos:
“The new technology must be spiritually oriented because it must aspire to raise man’s spirituality and expand man’s consciousness. It must begin by being ‘humanistic,’ though the white boy has yet to achieve this. Witness a technology that kills both plants & animals, poisons the air & degenerates or enslaves man.”
See you soon,
TRW