A couple of years ago, I had a professor who started each class by introducing an “AI tool” and having us fiddle around with it. Write a nonsense poem satirizing the musical Hamilton in Lewis Carroll’s voice, I’d type in, and the program would barf out some metrically challenged slop. Make me a Marc Chagall-style painting of the Iron Dome, and the machine would produce a hilariously oversaturated image of a Star of David floating in the sky, with angels draped in Israeli flags flying underneath. The purpose of these exercises, according to the professor’s syllabus, was simple: “AI tools are here,” he giddily announced. “There’s no going back, and they’re pretty cool. Let’s not be Luddites… This is your future.”
Strange, you might be thinking, to hear that sort of triumphalist rhetoric from a professor and not someone who works in Silicon Valley and has a financial interest in creating viral hype. These days, though, that’s the direction Middlebury is heading. Just this past J-Term, MiddCore partnered with OpenAI to explore “ethical and productive uses of ChatGPT” (next year, I believe Little Red Riding Hood and the Big Bad Wolf are partnering up to discuss ethical eating). In the beginning, AI usage was confined to shameless or sheepish students who were too lazy to do their work. Now, it is more common than not that I take a class which allows or encourages its use. If you can’t beat ‘em, join ‘em, isn’t that what they say?
When asked to justify the partnership, the director of MiddCore reasoned that “AI tools have become ubiquitous across industries… avoiding AI entirely is going to increasingly limit how effectively people can participate in the workforce.” My professor made a very similar argument: “Generative AI, like other tools used appropriately, can increase the efficiency and productivity of your labor… People who know how to use these tools well will be at a significant competitive advantage over those who do not.” (It’s cute that they keep calling them tools, like we’re working on a garage project or something.)
What these explanations reveal is that for the pro-AI crowd, Middlebury’s primary purpose is to serve as a waystation on students’ journey into the professional world. The goal of the college is to equip students with the skills, credentials, and connections necessary to launch a successful career after graduation. According to this view, education is purely instrumental, a means to an end. Output matters more than input. For instance, at the end of the semester, my professor excitedly reported that all of our class’s projects were better than the best projects he’d received before we could automate (excuse me, augment) the assignment with AI. Never mind that they all looked exactly the same, never mind that we almost certainly didn’t retain the material; they really did look pretty.
This view of a college’s purpose is an example of what the philosophers Gilles Deleuze and Félix Guattari describe as capitalism’s “deterritorializing” instinct, by which they mean its tendency to dissolve and commodify every non-market structure it comes into contact with. Where once we were students, now we are consumers; where once the college was an educational institution with a mission, now it is a supplier who must offer an enticing product. This process helps explain recent outgrowths like the CCI, with its tiresome propagandizing on behalf of well-heeled employers, as well as ResLife and all the other parts of the sprawling internal bureaucracy performing “bullshit jobs” with their clear corporate parallels. Rampant grade inflation is another internalization of market logic — remember, the customer is always right! Even student activism, ostensibly the most radical element of campus life, is conditioned by the college’s neoliberalization. Have you noticed how often student protestors sound like petulant customers angry about how their money is being spent? Wasn’t it not so long ago that the aim of student protests was to make institutions ungovernable by the exchange of services for money? AI is simply the culmination of the college’s commodification, the automation required to cut labor costs and boost productivity.
“Why do men fight for their servitude as stubbornly as though it were their salvation?” Deleuze and Guattari ask in Capitalism and Schizophrenia. Are they right — have our desires been truncated and deformed to the point where we willingly accept the bastardized version of education that the AI boosters offer? If all we can imagine college to be is a human capital factory churning out capable and docile workers, we should just automate the whole thing. Make it a nice closed loop, machines talking to machines. It’s already happening: I took a class a couple of semesters ago where the professor generated their AI usage policy using Claude. Perfect meaninglessness; the mask starting to eat the face.
Once, we had the idea that the college was a place that worked differently from the rest of the economy. Here we could build a community dedicated to knowledge and learning. Students and professors thinking side by side, doing what makes us most human and doing it together. At its best, the college is the most radical utopia we have, both a model and an investigation of a better world. Its only end should be to provide a true education, to allow students to live as their fullest and most equal selves. A community in which AI-augmented essays and LinkedIn and all things instrumental, external, and acquisitive have no place. We could have that again; maybe AI will even help. Because no matter what OpenAI or MiddCore or anybody says, you know when you’re using AI that you’re demeaning yourself, that it’s wrong to outsource your ability to think to a next-token prediction machine. You read those rote, lifeless sentences that mean nothing and that you’ll forget instantly, and you cringe. Good. That’s a start.