Does value generated for shareholders translate into value for the rest of us? Is technological progress a desirable goal in and of itself? Is it all worth it in the end when the line goes up? Can the line go up forever? What is the end game of a growth economy?
Welcome back to the Big Dumb Gold Rush. In the previous post of this series, I tried to think through the questions of function pertaining to AI (specifically large language models and generative AI). Function—what will/do these technologies do?—leads to questions of value—to what end will/do they function? What is their value, both in market terms and more importantly, on a societal level?
I’m not an economist. Here goes.
Speculation
People who are economists have noticed the same troubling patterns I have in the AI sphere. It’s a problem well-worn in the “tech space”—we probably have at least six more over-bloated streaming docuseries in the pipeline about the bubbles, scams, and hubristic slip-ups of charismatic founders and the willingness of venture capital to pour money on dreams of future value. Would nerdy guys in hoodies (or women in turtlenecks) lie to us?!
So while existing companies re-brand and re-tool products to incorporate the promises of machine learning and “the power of AI” and scores of startups throw their ideas into the mix, it’s worth examining which products and companies deliver and which simply—often in dramatic, compelling terms—communicate dreams of future technology that aren’t quite there yet. I have no doubt that there are useful, practical, humane applications of these technologies, but the past few years’ fervor brings concerning gold rush thoughts to the fore. With projections of market share in the trillions within less than a decade and an actual octupling of private investment in generative AI between 2022 and 2023, those who genuinely seek visions of “human progress” (I mean, cue my skepticism there, too, but we’ll give them the benefit of the doubt for a minute) have to elbow in alongside opportunists dazzled by promises of such miraculous growth.
The rising stock market reflects, in large part, the speculative future value of AI-enhanced products and their component parts, driven by tech-optimism (will return to that in earnest in the final part of this series, on dreams) and the sheer saturation of AI as both buzzword and real framework. Growth is highly concentrated among the highest performing tech companies benefitting from this boom, and I had to learn about stock market math to confirm my gut feeling about that concentration of power and what it means. >_<
Smarter people than me (much smarter about finance wizardry, at least) seem to agree that the companies benefiting from a speculative boom right now are very unlikely to match their valuation in revenue when the time comes to make the dream of artificial general intelligence (the AI we think of in a science fiction sense, that matches or exceeds general human cognition) a reality. Growth is being driven in part by massive investment in the infrastructure required to power the new wave of AI, but once the metaphorical railroad is built, large questions loom about the value of what will be shipped on its tracks.
When that reality sets in, investors will win and lose, companies will survive or fail, but it’s unlikely that the tech industry as a whole will be shaken from its high tower in the overall economy. Players may shift, but I can’t personally imagine a world in which we un-ring the tech bell. A bunch of investor money will be lost, because that’s what happens when a bubble bursts. (I am now an economist.) The rest of us can hope that the ripple effects aren’t too far-reaching.
David Cahn, a partner at Sequoia Capital (God help me, writing this sent me to the Sequoia Capital blog) writes,
“Speculative frenzies are part of technology, and so they are not something to be afraid of. Those who remain level-headed through this moment have the chance to build extremely important companies. But we need to make sure not to believe in the delusion that has now spread from Silicon Valley to the rest of the country, and indeed the world. That delusion says that we’re all going to get rich quick, because [artificial general intelligence] is coming tomorrow…”
It seems, since he goes on to conclude that the whole affair will ultimately be “worthwhile,” that the operative word in his warning against rampant speculation is all. We’re not all going to get rich quick, but some (the smartest and fastest, surely) are going to cash out at the top and ride to the next big thing. And that’s my problem with all of this, the sheer waste built into the bizarre tools we use to store value. Billions of dollars, funneled into creating more value, yet so little of it makes its way down to the rest of us—the bottom 50% of Americans who collectively own 1% of the stock market, and the 38% of us who own no share in it at all.
I’m sorry to be so not an economist, but it’s dizzying to imagine all of that money, gambled on hypothetical futures, just lit on fire when there are children in this wonderful country, with its super economy, having bake sales to pay off their school lunch debt.
Surplus Value and Labor
Many of the AI dreams currently peddled promise convenience, efficiency, and time-saving. Aside from a personal disinterest in “optimizing” my life, it’s hard to believe that the “saved” time—the surplus labor/leisure hours freed up by technology—will translate into gains for average people in the short term, or for those with the least resources globally. For centuries, industrialists have promised an increase in leisure that, for many, never seems to come. While we have dishwashers and laundry machines and DoorDash, everyone I know (including people living in generational poverty, working class, and professional/owning class Americans alike) still seems to struggle to balance work, domestic labor, and leisure. Add a commute and extra-curricular activities for children and I don’t understand how it’s possible at all.
When we add in a reactionary right-wing movement that sells late-capitalist malaise back to women as regressive trad-wife tropes, there’s a lot at stake in understanding our relationship to our own time and its value. But should we trust the current tech elite to shape how our time might be “freed” from drudgery? Who has the most to gain from an online “virtual agent” who can book vacations for you?
My late, great, favorite theory uncle, David Graeber, had a lot to say about the mystery of where our time went. Despite incredible labor-saving advances in technology, workers in his Britain and my America continue to work eight-plus hour days, plus commuting, all while externalizing the extra work required of a household (cooking, cleaning, child care, maintenance) to a burgeoning service industry. In his essay (and later, book) on “bullshit jobs,” he proposes that productivity gains have been displaced across the economy into a bloated managerial class which seems to be capable of creating infinite “work” to sustain and expand itself.
The university may be the perfect example, but everywhere—from insurance to human resources, communications, finance, consulting, and far beyond—are workers deeply alienated from labor of true value, who do indirect “work” for unseen purposes, in fluorescent lighting. And it makes them miserable. The rest of us end up preparing their food, caring for their families, and driving their Ubers for sub-living-wages. Is that really the best quality of life that technology can buy us?
Here’s how Graeber described the thwarted potential of the surplus labor we could have captured from automation and improved efficiency in a 2018 interview:
“There is this rise-of-the-robots logic, this fear that gradually technology is going to throw more and more people out of work. People say, ‘Look, it hasn’t happened.’ I think it did happen, but they made up these imaginary jobs to keep us working anyway, because we have an irrational economy that makes people work eight hours whether or not there’s anything to do. Can you have a surer sign of a stupid economic system than one in which the prospect of getting rid of onerous labor is considered a problem? Any rational economic system would redistribute the necessary work in a reasonable way and everybody would work less.”
Good thing the powerful people making choices about how we’ll shape life-altering technology are so rational and pro-social!
The pursuit of optimization under a financialized profit-driven system has been a disaster for labor protections, safety, and quality of craftsmanship.
And, through tech accelerations of the past half-century or so, “digital technologies became the graveyard of shared prosperity.” So say Daron Acemoğlu and Simon Johnson. They concede that “…tech has sometimes powered the economy in ways that benefited the masses. Workers made real gains during the early days of the increasingly unionized automotive industry, for instance. But around the mid–twentieth century came trends toward deregulation and the decline of organized labor.”
In another interview, Acemoğlu makes an interesting case that the tax structure in the U.S. and other industrialized nations provides significant, invisible subsidies for automation because workers and worker income are taxed so much more heavily than capital or investment income. In a system like that, there’s only one “rational” answer for the soulless optimizers at firms like McKinsey—to cut the fat of labor and reap the profit.
Without significant change to the way we do business and conceptualize value, it feels almost inevitable that these powerful tools will be put to use for the ends of capital. Ted Chiang wrote a long piece for The New Yorker considering this particular problem.
He writes,
“The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey?…If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as ‘capital’s willing executioners’? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse?”
Tech optimists have a troubling tendency to communicate in broad strokes about topics of safety and oversight. Someday, we will, of course, need to have, well, some type of regulation, but not in a way that will hinder the market, of course, and we’re sure that the hypothetical future action will be pleasing to all parties.
Even some of the tech leaders who originally signed on to an open letter to pause further AI development in early 2023 have not themselves paused due to fear of missing out on the profit and progress promised in the current cycle of development. Chiang points out the issues baked into even the most responsible use of AI. Even with checks on the safety of AI systems, sensible government regulation and so forth, “…it will always be possible to build A.I. that pursues shareholder value above all else, and more companies will prefer to use that A.I. instead of one constrained by your principles.”
A Making Machine
Since AI-talk has mainstreamed beyond sci-fi nerds and computer programmers, I assume that many still with me have some familiarity with philosopher Nick Bostrom’s Paperclip Maximizer thought experiment.
Just in case, I’ll summarize since it’s a helpful way to consider some of the theoretical risks of general or purpose-built artificial intelligence: A system is built to maximize the number of paper clips produced. The system logically intuits that, for maximum efficiency, all non-paper-clip activities must be eliminated. When human programmers get in the way of paper clip production, the system decides we are most useful for the iron and other trace metals contained in our bodies. All life on earth is reorganized around automated mining and recycling of materials suitable for paper clip production.
A terrifying and absurd future, to be sure, but how far off is this vision from some of the ways the global economy already operates? How many people toil and suffer in conditions that rob them of quality of life and provide for only the barest survival so that others may accumulate wealth to absurd, un-spendable levels? Those between, in the consumer classes of the world, enjoy proof of progress in endless little pleasures and distracting gadgets.
While neoliberalism wants us to believe that ever-growing global markets are, in fact, lifting everyone up, it’s hard to square that with images of children mining cobalt in Congo. With the profit motive at the wheel of our (still largely human) choices, we may as well have turned much of the earth into paper clips. We’ve turned it instead into an engine for growth itself, for more, forever. And much of this growth magic is done within markets by opaque financial tools.
Is there a more ideal application for AI systems than finance? If granted three wishes from a genie, wouldn’t most people include security and comfort on their list? What about people whose job it is to turn money into more money? Is the promise of reward for doing that task well worth the risk of further entrusting our economy to automation? Have the firms and individuals working in finance ever steered us wrong before?
Now, imagine adding an inscrutable layer of machine logic to this system. Think about how long it took average people to understand what happened in the lead-up to the 2008 financial crisis. Even Margot Robbie explaining mortgage-backed securities in a bubble bath in The Big Short wasn’t enough to make most of us really understand what went wrong. Now imagine “AI tools” that can interpret and model financial data in ways that humans haven’t even considered. If the tools “work,” can you imagine a Wall Street that turns them off out of caution? It’s a lot of trust to put in the hands of machines in the hands of a few unaccountable human institutions. I’m still not an economist, but enhancing the power of organized capital feels like a bad move to me.
In the penultimate installment of this series, I hope to strike further into the heart of the matter opened up here and talk about POWER. For now, I’ll let Acemoğlu and Johnson have the last word. They say that we won’t succeed in shifting the balance of power as it relates to technology without “…the rise of counterarguments and organizations that can stand up to the conventional wisdom. Confronting the prevailing vision and wresting the direction of technology away from the control of a narrow elite may even be more difficult today than it was in nineteenth-century Britain and America. But it is no less essential.”
Until next time,
TRW