Tech companies don’t care that students use their AI agents to cheat


AI companies know that children are the future — of their business model. The industry doesn’t hide its attempts to hook the youth on its products through well-timed promotional offers, discounts, and referral programs. “Here to help you through finals,” OpenAI said during a giveaway of ChatGPT Plus to college students. Students get free yearlong access to Google’s and Perplexity’s pricey AI products. Perplexity even pays referrers $20 for every US student they persuade to download its AI browser, Comet.

The popularity of AI tools among teens is astronomical. And once these products make their way into the education system, it’s the teachers and students who are stuck with the repercussions: teachers struggle to keep up with new ways their students are gaming the system, and those students are at risk of not learning how to learn at all, educators warn.

Cheating has gotten even more automated with the newest AI technology: AI agents, which can complete online tasks for you (albeit slowly, as The Verge has seen in tests of several agents on the market). These tools make cheating easier than ever, while the tech companies behind them play hot potato with responsibility for how they’re used, often simply blaming the students they’ve empowered with a seemingly unstoppable cheating machine.

Perplexity actually appears to lean into its reputation as a cheating tool. It released a Facebook ad in early October that showed a “student” discussing how his “peers” use Comet’s AI agent to do their multiple-choice homework. In another ad posted the same day to the company’s Instagram page, an actor tells students that the browser can take quizzes on their behalf. “But I’m not the one telling you this,” she says. When a video of Perplexity’s agent completing someone’s online homework — the exact use case in the company’s ads — appeared on X, Perplexity CEO Aravind Srinivas reposted the video, quipping, “Absolutely don’t do this.”

When The Verge asked for a response to concerns that Perplexity’s AI agents were used to cheat, spokesperson Beejoli Shah said that “every learning tool since the abacus has been used for cheating. What generations of wise people have known since then is cheaters in school ultimately only cheat themselves.”

This fall, shortly after the AI industry’s agentic summer, educators began posting videos of these AI agents seamlessly filing assignments in their online classrooms: OpenAI’s ChatGPT agent generating and submitting an essay on Canvas, one of the popular learning management dashboards; Perplexity’s AI assistant successfully completing a quiz and generating a short essay.

In another video, ChatGPT’s agent pretends to be a student on an assignment meant to help classmates get to know each other. “It actually introduced itself as me … so that kind of blew my mind,” the video’s creator, college instructional designer Yun Moh, told The Verge.

Canvas is the flagship product of parent company Instructure, which claims to have tens of millions of users, including those at “every Ivy League school” and “40% of U.S. K–12 districts.” Moh wanted the company to block AI agents from pretending to be students. He asked Instructure in its community ideas forum and sent an email to a company sales rep, citing concerns of “potential abuse by students.” He included the video of the agent doing Moh’s fake homework for him.

It took nearly a month for Moh to hear from Instructure’s executive team. On the topic of blocking AI agents from their platform, they seemed to suggest that this was not a problem with a technical solution, but a philosophical one, and in any case, it should not stand in the way of progress:

“We believe that instead of simply blocking AI altogether, we want to create new pedagogically-sound ways to use the technology that actually prevent cheating and create greater transparency in how students are using it.

“So, while we will always support work to prevent cheating and protect academic integrity, like that of our partners in browser lockdown, proctoring, and cheating-detection, we will not shy away from building powerful, transformative tools that can unlock new ways of teaching and learning. The future of education is too important to be stalled by the fear of misuse.”

Instructure was more direct with The Verge: though the company has some guardrails verifying certain third-party access, it says it can’t block external AI agents or prevent their unauthorized use. Instructure “will never be able to completely disallow AI agents,” and it cannot control “tools running locally on a student’s device,” spokesperson Brian Watkins said, conceding that the issue of students cheating is, at least in part, technological.

Moh’s team struggled as well. IT professionals tried to find ways to detect and block agentic behaviors like submitting multiple assignments and quizzes very quickly, but AI agents can change their behavioral patterns, making them “extremely elusive to identify,” Moh told The Verge.

In September, two months after Instructure inked a deal with OpenAI and one month after Moh’s request, the company sided against a different AI tool that educators said helped students cheat, as The Washington Post reported. Google’s “homework help” button in Chrome made it easier to run any part of a page in the browser, such as a quiz question on Canvas, through a Google Lens image search, as one math teacher demonstrated. Educators raised the alarm on Instructure’s community forum, and Google listened, according to a response on the forum from Instructure’s community team. The episode was an example of the two companies’ “long-standing partnership,” which includes “regular discussions” about education technology, Watkins told The Verge.

When asked, Google maintained that the “homework help” button was just a test of a shortcut to Lens, a preexisting feature. “Students have told us they value tools that help them learn and understand things visually, so we have been running tests offering an easier way to access Lens while browsing,” Google spokesperson Craig Ewer told The Verge. The company paused the shortcut test to incorporate early user feedback.

Google leaves open the possibility of future Lens/Chrome shortcuts, and it’s hard to imagine they won’t be marketed to students, given a recent company blog post, written by an intern, that declares: “Google Lens in Chrome is a lifesaver for school.”

Some educators found that agents would occasionally, but inconsistently, refuse to complete academic assignments. That guardrail was easy to overcome, though, as college English instructor Anna Mills showed by instructing OpenAI’s Atlas browser to submit assignments without asking for permission. “It’s the wild west,” Mills told The Verge of AI use in higher education.

This is why educators like Moh and Mills want AI companies to take responsibility for their products, not blame students for using them. The Modern Language Association’s AI task force, which Mills sits on, released a statement in October calling on companies to give educators control over how AI agents and other tools are used in their classrooms.

OpenAI appears to want to distance itself from cheating while still championing a future of AI-powered education. In July, the company added a study mode to ChatGPT that does not provide direct answers, and OpenAI’s vice president of education, Leah Belsky, told Business Insider that AI should not be used as an “answer machine.” Belsky told The Verge:

“Education’s role has always been to prepare young people to thrive in the world they’ll inherit. That world now includes powerful AI that will shape how work gets done, what skills matter, and what opportunities are available. Our shared responsibility as an education ecosystem is to help students use these tools well—to enhance learning, not subvert it—and to reimagine how teaching, learning, and assessment work in a world with AI.”

Meanwhile, Instructure leans away from trying to “police the tools,” Watkins emphasized. Instead, the company claims to be working toward a mission to “redefine the learning experience itself.” Presumably, that vision does not include constant cheating, but its proposed solution rings similar to OpenAI’s: “a collaborative effort” between the companies creating the AI tools and the institutions using them, as well as teachers and students, to “define what responsible AI use looks like.” That is a work in progress.

Ultimately, the enforcement of whatever guidelines for ethical AI use they eventually come up with on panels, in think tanks, and in corporate boardrooms will fall on the teachers in their classrooms. Products have been released and deals have been signed before those guidelines have even been established. Apparently, there’s no going back.

