What is Artificial Intelligence For, Exactly?
Take the new wave of AI hype with a big grain of salt
Remember self-driving cars?
Back in 2019, Tesla CEO and serial over-promiser Elon Musk asserted that his electric vehicle company would put a million self-driving cars on the road within a year. That prediction obviously failed to pan out, and federal automotive safety regulators have even issued warnings that Tesla’s self-driving software could cause its cars to crash. But Musk is in good company with this particular failure: ride-share app Uber gave up on developing its own driverless cars at the end of 2020, while Waymo—the autonomous vehicle company owned by Google’s parent corporation Alphabet—laid off eight percent of its workforce at the start of 2023. Traditional automakers have left the self-driving game as well, with Ford and Volkswagen ending their own joint venture last October.
Work continues on self-driving cars, of course, but fully automated vehicles appear no closer to reality than they did a decade ago. Splashy promises about completely driverless cars have given way to hard reality; less sexy driver-assist technologies, on the other hand, have been successfully incorporated into new models made by Ford and other long-time auto manufacturers. As usually seems to be the case when it comes to artificial intelligence crazes, fantastical claims made by enthusiasts and media boosters have yielded to more prosaic and functional applications in the real world.
Take the predicted extinction of human radiologists in the face of the awesome diagnostic powers of artificial intelligence systems powered by so-called deep learning: back in 2016, one prominent AI expert even went so far as to claim that medical schools should stop training radiologists because their jobs would be performed—and performed better—by artificial intelligence within a decade at most. Seven years on, that has yet to occur, much to the relief and benefit of my radiologist friends, who remain gainfully employed today.
These cautionary tales ought to serve as background for yet another renewed wave of artificial intelligence hype, this time focused on the “generative AI” that powers interactive chatbots like ChatGPT and image creators like DALL-E. As fun as these programs are to play around with (see this thread with images of every U.S. president with a mullet), it’s unclear at best what purpose or function they’re supposed to serve. Microsoft hoped that its ChatGPT-based bot Sydney would make its Bing search engine more responsive to human queries, but the bot comes across as “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine” more than anything else—entertaining, perhaps, but not terribly useful beyond that.
As things stand, these programs and platforms amount to mere curios, ways for programmers, technology enthusiasts, and journalists to amuse and titillate themselves. They serve no apparent purpose and fulfill no distinct or meaningful role as yet. We may be able to use ChatGPT or similar AI systems to write press releases and do other necessary drudgery for us, but it’s hard to believe that automating such boring tasks will prove worth all the time and effort—not to mention the large sums of money—poured into this particular enterprise.
In other words, it’s hard to know what exactly these artificial intelligence platforms are supposed to be for—and it’s hard to figure that out when a sort of technological mysticism pervades most discussions of the subject. Some rhetoric about artificial intelligence veers into the theological, complete with its own prophesied apocalypse and rapture; by the same token, comparatively tame fantasies that artificial intelligence will “do everything” obfuscate just as much. By focusing our attention on unlikely scenarios and quasi-religious pronouncements, these flights of fancy prevent us from thinking about various artificial intelligence platforms as tools crafted for specific purposes.
Indeed, ChatGPT and other chatbots come with their own set of specific problems that endless pondering about the alleged “existential risk” posed by general-purpose artificial intelligence won’t solve. As computer scientists Arvind Narayanan and Sayash Kapoor put it, “ChatGPT is the greatest bullshitter ever”—it produces convincing responses to questions without any reference to the truth. That leads psychologist and long-time artificial intelligence researcher Gary Marcus to warn that chatbots like ChatGPT could produce prodigious quantities of seemingly credible bullshit like fake medical and scientific studies. Marcus also raises the disturbing possibility that mindless chatbots will emotionally and psychologically manipulate the humans they interact with, even to the point of encouraging murder and suicide. The fact that ChatGPT’s function and purpose remain murky at best only makes these dangers more troubling; there’s no sense of what anyone gains from the platform given the potential costs involved.
So what can we take away from this most recent instance of AI hype?
First off, it’s important to keep artificial intelligence in proper perspective. Contrary to the claims of its most ardent proponents and the fears of others, artificial intelligence isn’t magic. That in turn means concerning ourselves more with what we want our artificial intelligence platforms to do and how we expect them to do it—not worrying whether they will somehow do everything, much less kill us all. Instead of indulging in apocalyptic fantasies about paperclip maximizers or relentless Terminators, we should more realistically consider the potential practical downsides of specific artificial intelligence platforms like ChatGPT, for instance, or the pernicious effects of AI-fueled TikTok filters and other social media algorithms.
Moreover, we ought to retain a healthy skepticism toward claims of inevitable and imminent progress on various forms and incarnations of artificial intelligence. Self-driving cars, for instance, have proven far more difficult to engineer than many believed and promised a decade ago, and machine learning hasn’t replaced professions like radiology that were once marked for extinction. That has implications for defense policy as well, where the quest for autonomous weapons remains alive and well: “AI-driven autonomy agents” recently flew dogfighting maneuvers in a modified F-16 out of Edwards Air Force Base in California, for instance. If any one institution in America stands poised to make incremental, evolutionary progress toward autonomous vehicles, it’s DARPA—the Pentagon’s advanced research and development shop. Still, it’s probably best not to count on fleets of autonomous fighter jets taking to the skies any time soon.
Above all else, we need to ask what a particular artificial intelligence platform is for—what purpose it’s intended to serve, what role it’ll fulfill, what function it’s supposed to assume. With autonomous weapons, the answer to these questions might seem obvious, but it should not be taken for granted. For civilian AI projects like ChatGPT, however, there needs to be more rigorous thinking about the ultimate purpose of the intended platforms and the research needed to build them. It may be fun and possibly even lucrative for programmers and companies to create various artificial intelligence platforms, but absent some practical reason for their existence—helping programmers write code more efficiently, for instance—they should probably remain confined to lab settings. Even then, it’s hard to say that such a goal demands the sort of time and energy it’s receiving today.
Ultimately, though, we really ought to question the scale of resources we’ve devoted as a society to the pursuit of artificial intelligence over the past decade or so—especially at a time when national commissions urge the federal government to spend billions of dollars more on artificial intelligence research. Even articles that aim to push back against AI hype tend to inadvertently reinforce it by characterizing the technology as “transformative” and “disruptive,” all while leaving largely unanswered just what exactly is so earth-shattering about AI’s potential real-world applications. In the private sector, this multibillion-dollar investment in artificial intelligence failed to pay off in any meaningful way before the tech sector suffered its recent sudden downturn.
That leaves us with the question of what artificial intelligence is for in a more general sense: as a society, what is it exactly we think we’re going to get from these public and private investments? It may well be the case that we don’t see any real benefits from “artificial intelligence,” at least as it’s currently conceived and marketed; the field’s promoters simply promise more than they can actually deliver. We should probably expect incremental improvements in specific applications of artificial intelligence to existing technology instead—think assisted driving systems in an otherwise regular car rather than fully autonomous vehicles.
To put it another way, we’re probably better off thinking about artificial intelligence as a means to an end rather than an end in itself—the opposite of how we seem to talk about and handle AI today. It can be a powerful tool, but right now we seem more interested in admiring and refining the tool itself than using it effectively. We really ought to stop asking what we can do for artificial intelligence and start asking what artificial intelligence can do for us.