Lately, Middlebury College has been acting confused. With the state of the world right now, who can blame it? The cost to provide a liberal arts education continues to rise, which is exacerbated by the Trump administration’s war on higher education. We’re in the process of breaking up with our long-distance partner in Monterey, which has been awkward but for the best. Meanwhile, in Vermont, Ian Baucom is crisscrossing our campus with the energy of a philosophical super-senior. What are we here for? he is asking. Our college? This whole thing called a liberal arts education?
Baucom’s questions are necessary because something is changing. Many of the assignments that Midd’s professors have given for decades can now be completed by artificial intelligence (AI). Sure, the writing it produces is often strange, it has an alarming tendency to make things up, and it’s a tech product that comes with complicated ethical and environmental trade-offs. The fact remains, however, that it is pretty good at debugging R code, summarizing an academic article or writing a first-year essay about the Peloponnesian War.
In the US, the proliferation of AI is a fait accompli, at least for now. It need not be one at Middlebury, however. I am disappointed with our milquetoast institutional response to AI, including our strategic plan, which calls for cautious integration of AI in education. I am very disappointed with professors who encourage students to purchase premium AI subscriptions for class, making things difficult for students who would rather opt out because of their personal beliefs about the technology and inequitable for those who can ill afford the expense. I am disappointed that MiddCORE, a program known for partnering students with socially impactful businesses, partnered with a company so unscrupulous that it has decided to allow erotica for its emotionally attached users (though it has since put those plans on hold). Most of all, I am disappointed in those who deny their own agency by arguing that Middlebury must accommodate an inevitable future of AI in education rather than doing the harder work of creating the best academic community with the incredible wealth of resources we have.
Defenders of AI in education will argue that AI can act as a thought partner to augment our thinking. They say it can help us generate ideas, automate the drudgework of formatting code and enable a shift to project-based learning, such as presentations. However, integrating AI into a Middlebury education implicitly means surrendering some of our capabilities to a machine. In-class essays are fine, but they don’t provide the same level of depth and exploration as a research paper. Using AI to debug your code means that you will have a weaker understanding of what’s going on when the AI can’t help you. Use a chatbot enough to generate ideas, and you will find you’re less creative on your own. Giving up these capabilities is not in any of our interests: not those of the students and families who make sacrifices for an education, not those of the professors who got into this for the love of learning, and not even the long-term interests of an institution focused on its own prestige.
Even though AI will surely be common in our jobs and lives, the purpose of an education lies in doing the work. To fully understand what an integral is, you need to do it manually and mechanically before you use a calculator. To get strong, you need to use your muscles to lift weights, not a forklift. To truly understand an argument, a worldview, a mathematical concept or scientific literature, you need to do the reading, writing and problem sets. Doing so will hopefully have additional benefits. You may gain a deeper understanding of what you believe in, along with the emotional maturity, diligence and humility needed to be successful after college. Ironically, these skills are only becoming more important as technology proliferates.
Through the honor code, our school maintains a norm against academic dishonesty. It is a weakened norm, but an important one. Students promise to follow the honor code, while the college, in turn, is expected to provide us with the best educational resources it can. Professor Alison Stanger says that the college should embrace AI because the alternative is “[teaching] students to lie to us.” In doing so, she attempts to derive a normative statement about how Middlebury should operate from a descriptive one, a fallacy that Hume warned us against centuries ago. Would Stanger also propose getting rid of Middlebury’s honor code or parking regulations simply because most students fail to fully comply with them?
Just as we have the honor code, we must have a similar norm against the use of generative AI in our classes. Creating such a norm will take time, trust and cooperation. It may require dramatically reducing our use of technology at the institutional level. These are drastic measures that surely will not have universal approval; however, the threat to our students’ capacity to learn is too great to do anything less. I hope that Middlebury’s faculty, students and administration can take a stand against the use of AI in education. Doing so won’t exactly answer the question of what Middlebury is for, but it will allow us to renew the commitment to academic excellence that’s part of what makes this little ridge in Vermont so special.


