On Sept. 18–19, students, faculty, scholars and community members gathered for “AI Unboxed: Moving Beyond Hype and Fear” to explore the impact of artificial intelligence (AI). The symposium offered 11 events over two days, spanning topics including climate change, higher education, computer science, art, international security and the future of work in light of rapid developments in AI. Organizers aimed to move past utopian promises and dystopian dread, instead carving out a middle space for nuance, dialogue and critical inquiry.
AI and Climate
The symposium began with a session focused on AI’s energy demands. Climate Action Fellow Vee Syengo ’25.5 explained how every ChatGPT query triggers billions of calculations in server warehouses — called data centers — that run like “non-stop marathons,” consuming massive amounts of electricity for cooling.
Visiting professor from Duke University's Nicholas Institute, Tim Profeta, warned that demand for U.S. data centers could grow at a compound annual growth rate (CAGR) of 33% through 2030, doubling their share of national power consumption from 4% to 8%.
Yet Profeta also suggested that AI’s high demand and hyperscaling might accelerate clean power adoption, as tech giants promise to invest billions in geothermal, nuclear and storage technologies. Whether AI becomes a climate liability or a catalyst for clean energy will depend on how its resource demands are managed and how quickly sustainable systems can be brought online.
President Baucom and Meghan O’Rourke on AI
For the symposium’s official opening session, college President Ian Baucom sat down with acclaimed writer Meghan O’Rourke, his former student at Yale, to discuss what it means to read, write and teach in an age when machines can generate fluent text at will.
Baucom asked O’Rourke a series of questions that drew from her popular New York Times essay on AI. O’Rourke spoke about her “radicalizing” first encounter with ChatGPT, describing it as “a seductive cocktail of affirmation, perceptiveness, solicitousness and duplicity.” Yet she also recognized AI’s limitations: its tendency toward “pastiche-like writing” and the false sense of mastery it provided, briefly convincing her she had accomplished what the machine had actually produced. Both she and Baucom voiced concern about the offloading of cognitive and emotional effort and the erosion of originality. Baucom explored these themes further by stressing the enduring values of slowness, uncertainty and struggle in authorship.
Baucom, drawing on Longinus’ “On the Sublime,” suggested that AI's constraints might highlight uniquely human ways of expression and thought.
He spoke of a “frontier intelligence” emerging in the relationship and boundaries between the human and the machine. The liberal arts education, he argued, becomes ever more vital in an AI world because it cultivates a distinctly human intelligence rooted in reflection and empathy that technology can’t replicate.
For many, the conversation that unfolded became the Symposium's defining moment.
“I think it showed everyone that the president embodies liberal arts values,” Brian Harris '26 said. “It was just impressive. And it really set up the stage of what this entire [symposium] is supposed to be.”
Faculty Panel on AI
The faculty panel that followed revealed educators grappling with practical questions: how AI reshapes teaching and learning, and what it means to be educated in an AI world.
Associate Dean of Curriculum at the Language Schools Thor Sawin described a “sandwich approach,” allowing AI use before and after class, but not during.
Professor of American Studies Roberto Lint Sagarena shared how his J-Term course on Worldbuilding incorporated prompt engineering. He emphasized that professors must first learn and understand these tools in order to teach them, or even to reject them meaningfully. Some faculty found AI useful for revealing blind spots in their teaching. Others spoke of creating “AI-free” spaces that prioritize learning over assessments.
Middlebury Distinguished Endowed Professor Allison Stanger took the opposite tack, calling for full integration of AI with ample footnotes and citations, pointing out that institutions like Harvard already provide paid AI tools to all students.
“I think we should just stop this theater of prohibition and instead embrace it, because we do not want to teach students to lie to us,” Stanger said.
The conversation then turned to Assistant Professors of Economics Zara Contractor and German Reyes’ new paper on generative AI in higher education. Their survey from August found that while 82% of Middlebury students use AI academically, most were not taking shortcuts. Instead, they described AI as a personal tutor, using large language models (LLMs) for “augmentation,” not “automation.” The findings echoed this year’s Zeitgeist 7.0, in which roughly 85% of the 1,026 student survey respondents said they use AI for their classes.
The panel’s central tension lay in designing pedagogy that neither bans nor blindly embraces AI, but instead teaches students to use it thoughtfully. The panel reaffirmed Middlebury’s core teaching values: keeping students engaged, explaining why assignments matter and preserving the importance of play in meaningful education.
“If we can get students to fall in love with the end we’re presenting for them, and teach them why we made the choices in our syllabus — and how AI can either help them reach those goals or steal from them — that helps naturally regulate how they use it,” Sawin said.
Cycles of Hype and Fear
The symposium’s first night closed with a keynote from Melanie Mitchell, a leading AI researcher at the Santa Fe Institute. She traced AI’s “tumultuous past,” from dreams of conscious machines in the 1950s through cycles of hype and “AI winters.” Each wave of innovation, from the early perceptron to the machine learning revolutions of the 2010s, carried both breakthroughs and limits. Today’s moment, she argued, is both “astounding” and “terrifying” as systems capable of producing complex mathematical proofs can also mistake billboards for stop signs. Her charge to the audience was not only to ask what AI can achieve, but what it should achieve.
The second day opened with Wilson Hall reconfigured into roundtables for a packed day of discussion. Sessions ranged from international security to the future of the workforce. Salesforce co-founder Parker Harris ’89 told students that AI’s disruptive power surpasses even that of the iPhone, predicting major upheavals in entry-level jobs.
Harris’ advice to students facing an AI-transformed job market was to “learn to learn.”
“Learning to learn will help you retrain yourself as jobs change… Because getting the knowledge and getting the skills is not the problem — it’s your skills to acquire those skills and to acquire that knowledge. And that’s what we’re teaching at Middlebury.”
Lunch was accompanied by an alumni panel in which graduates discussed their experiences using AI in their careers and personal lives, followed by artist Laura Splan’s exploration of human-AI coexistence and a gentle primer on language models by Assistant Professor of Computer Science Laura Biester.
The symposium concluded with a presentation by lawyer and professor Kay Firth-Butterfield, an AI expert and a 2024 TIME 100 Impact awardee. She spoke of AI’s potential to cure disease, predict wildfires and aid conservation, while also warning of its risks: bias, hallucinations, surveillance and a steep environmental cost.
“This intelligence and our own intelligence can be combined to really shape every stage of our lives,” Firth-Butterfield told The Campus. “To co-author what future we want to hand to our children and beyond — and how that dovetails with stewardship of the only planet that we’ve got to live on.”
The symposium ended with a structured dinner discussion where students and faculty reflected on what it means to coexist with AI at Middlebury. What began with climate anxieties closed with a call to define, more clearly than ever, what it means to be human.
Ting Cui '25.5 (she/her) is the Business Director.
Ting previously worked as Senior Sports Editor and Staff Writer and continues to contribute as a Sports Editor. A political science major with a history minor, she interned at the National Press Club in Washington, D.C. as a policy analyst and op-ed writer. She also competed as a figure skater for Team USA and enjoys hot pilates, thrifting, and consuming copious amounts of coffee.