A Vermont online news publication appears to use artificial intelligence extensively to produce stories while declining to disclose this practice to readers, highlighting growing debates about transparency in AI-assisted journalism.

Compass Vermont has published stories on topics ranging from tariff effects on Vermont businesses to Vermont Air National Guard deployments, according to reporting by Seven Days. The outlet has attracted subscribers on Substack, with some readers praising its unique coverage.

“It’s one of my favorites right now online,” said Art Spellman, a 74-year-old South Burlington resident who reads the publication, according to Seven Days. Spellman said he appreciates stories he doesn’t find elsewhere in local media, including coverage of the National Guard’s F-35 fighter wing.

However, Spellman was unaware that evidence suggests Compass produces its stories at least partially with AI assistance, according to the report.

The publication represents part of a broader trend of AI integration in journalism. Major outlets like the New York Times use in-house AI tools for data analysis and for tracking online commentary, while smaller operations like the Herald in Randolph use ChatGPT to produce stories from selectboard meeting minutes, according to Seven Days.

Compass appears to employ AI more extensively, sweeping the internet for data, government reports and articles from other media outlets, then relying on AI to analyze the results and help write stories, the report indicates.

The approach raises questions about transparency standards in AI-assisted journalism. Alex Mahadevan, director of the media literacy program and AI Innovation Lab at Poynter, told Seven Days that transparency matters most when news organizations use AI.

“Audiences want news organizations to disclose when they’ve used AI substantially,” Mahadevan said, citing a Poynter study conducted with the University of Minnesota.

Most traditional news organizations label their AI use. The Cleveland Plain Dealer in Ohio discloses when AI writes stories so that journalists are freed up for in-person reporting, according to a recent column by its editor cited by Seven Days. Many outlets have developed policies that set guardrails around AI use and require disclosure to readers.

Compass takes a different approach. The publication does not disclose AI use, stating only that it relies on “modern research and analysis tools” to produce stories emphasizing facts and fairness, according to Seven Days.

“At the risk of sounding lofty, asking us the specifics of how we do it is like asking Coke for their cola recipe,” Compass stated on its previous About page before changing it recently, according to the report. “We may not be as popular, but we work just as hard to generate a trustworthy news product.”

Tom Davis, who identifies himself as Compass founder and “veteran media editor and publisher,” started the outlet in 2020, according to the About page. Davis also works full-time as Northfield’s economic development director, putting in at least 35 hours weekly in that role, Seven Days reported.

Davis declined to speak with Seven Days but responded to written questions, according to the report.

The Compass model highlights questions traditional journalists face regarding AI reliability, ethics in news production, and potential reader reactions. The technology itself isn’t problematic, according to Mahadevan, who told Seven Days that media companies deploy AI for various purposes from web scraping for investigations to data analysis.

The debate reflects broader industry discussions about maintaining journalistic integrity while adopting new technologies. As AI tools become more prevalent in newsrooms, questions about disclosure, accuracy, and reader trust continue to evolve across the media landscape.

Written by Avery Chen, contributing writer at The Dartmouth Independent.