How AI Revives the Humanities
Artificial intelligence seems to be on the hook for solving nearly all the world’s problems: from curing cancer, to fixing climate change, to eliminating famine and locusts. These are big expectations to place on one relatively new discipline, but I think I can up the ante. The idea came to me as I was reading Verity Harding’s new book AI Needs You. I thought, what if AI is the thing that brings back the humanities? It would be the plot twist nobody saw coming.
For those who don’t know, educators and proponents of a liberal arts education have been worried about the decline in students earning degrees in the humanities for decades. The hand-wringing started as far back as the ’90s and has continued ever since. According to the National Center for Education Statistics, the number of graduates in the humanities declined by 29.6% from 2012 to 2020, and the trend is only getting worse. While some might argue that the humanities have declined in relevance on their own merits, their decline has been greatly hastened by the STEM vs. humanities debate in education, a debate which STEM won soundly years ago.
Throw into the mix the broad disruptions anticipated from AI, and generative AI in particular, and one might guess the humanities’ tenuous place in higher education is as good as done. How could it be otherwise? I think I know; read on.
While admittedly biased as a humanities person myself, I took Harding’s arguments as building to this crescendo: creating a bright future with AI is going to require participation from all of us, and in particular, from the book nerds, the history geeks, the readers of Emily Dickinson poems. (I’m kidding about that last one, but you get the drift.)
Harding, who was recently named one of Time’s 100 Most Influential People in AI, has worked as a technologist for Google and as an advisor to Britain's deputy prime minister. In AI Needs You, she delivers a humanist manifesto for the age of AI, arguing that as AI’s power grows, so does the need to figure out what—and who—this technology is really for. Fulfilling the promise of AI’s future, she asserts, will require bringing a broad coalition of stakeholders beyond the technology industry echo chamber into the conversation.
Harding begins by dismantling the dominant narrative likening the development of AI to that of the atomic bomb, a destructive force built in secrecy by a small group of chosen men. She instead draws on lessons from three twentieth-century tech revolutions—the space race, in vitro fertilization, and the internet—as roadmaps to a future in which we might navigate daunting levels of complexity and arrive at democratically determined values that guide AI to its highest purposes.
Harding’s arguments both show and tell how the ability to mine history for its lessons is one way those with a background in the humanities can offer frameworks for creating AI’s future, guiding us away from past mistakes and toward using AI to solve problems that serve people, not only corporations.
Harding writes, “The AI hype of recent years has contributed to a god complex that positions technology leaders as voices of authority on the societal problems their creations have often caused. Those who are profiting from the AI hype should not be the same people who are judging their own creations. Neither do distinguished computer scientists, no matter how gifted in their fields, automatically understand the complex systems of power, money, and politics that will govern the use of their products in the future.”
Speaking at Davos last year, Stuart Russell, a professor of computer science at the University of California, Berkeley, would seem to agree, stating, “Some of the biggest challenges in the field of AI need to be solved by collaboration and by asking the right questions,” adding, “We write a few million lines of software and stick it on the world, whether the world likes it or not.”
This would seem to support an idea that is central to Harding’s premise: “STEM skills are extremely important to our future, health and prosperity but so are the disciplines such as art, literature, philosophy, and history. These are just as critical to the future of humanity.”
And here you might be thinking: Art, literature, history? Isn’t there an AI for that?
While it’s true that ChatGPT and other generative AI tools are becoming almost alarmingly sophisticated in what they can produce, the experts say AI will never be capable of true creativity. Unlike Pinocchio, AI never turns into a real boy. (Though I do think we should worry if AI starts writing unprompted versions of Pinocchio with a chatbot as the lead.) But I digress.
According to an article on the World Economic Forum website: “The key characteristic of AI’s creative processes is that the current computational creativity is systematic, not impulsive, as its human counterpart can often be. It is programmed to process information in a certain way to achieve particular results predictably, albeit in often unexpected ways. In fact, this is perhaps the most significant difference between artists and AI: while artists are self- and product-driven, AI is very much consumer-centric and market-driven – we only get the art we ask for, which is not, perhaps, what we need.”
Another reason AI will never be truly creative (and this is my own inexpert theory, so take it with a grain of salt): AI lacks the sensory inputs that are a dynamic part of how humans gather certain types of information, the kind of information that puts the “human” in human intelligence. Without true human intelligence, AI will never have the means for true creativity, which relies on empathy, moral judgment, and the ability to understand complex social and cultural dynamics.
And yet even as I throw around terms like “true creativity,” “empathy,” and “moral judgment” to describe humans, casual observation reveals these abilities aren’t evenly distributed in the human population. That’s in part because they require innate capacity combined with intentional cultivation through advanced study, which can be yours for the price of admission plus about $80k a year, but that’s a discussion for another newsletter.
The formal study of the liberal arts disciplines goes back to the ancient Greeks, with the term “liberal arts” attributed to Cicero (106–43 BC). These studies were meant to be practical, not theoretical: instruction in how to be a citizen and participate in a democratic republic. Given the degree to which modern humanities programs have lost sight of this, it seems inevitable and reasonable that the study of these disciplines has come to be considered superfluous and irrelevant by so many.
Now, with the emergence of AI, a radically transformative technology bringing equal potential for good and harm, there is a renewed imperative to view humanities education through a practical lens, restoring its role in shaping a future that aligns with humanitarian values.
In her author’s note, Verity Harding states, “If I have one hope, it is that the arguments presented here can convince those with a much wider set of skills and perspectives to involve themselves in how this technology develops. The future belongs to all of us, and when it comes to AI, the issues at hand are too complex and integral to be left to traditional ‘AI experts’ alone.”
I hope that this space, The AI-Curious Newsletter, will be a place for non-AI experts to explore ideas posed by AI technology. I am a non-expert myself: a writer and recruiter specializing in innovation and emerging technologies. Like most people delving into the world of AI for the first time, I started educating myself on AI for work, bringing curiosity, hope, and some trepidation. I will use this space to explore AI issues, big and small, and to build a glossary of terms along the way as a reference tool. I hope the content here will address the topics you care about most. The suggestion box is open for business.