A new tool aims to help AI chatbots generate more human-sounding text with the help of Wikipedia’s guide for spotting AI-generated writing, as reported by Ars Technica. Developer Siqi Chen says he created the tool, called Humanizer, by feeding Anthropic’s Claude the list of tells that Wikipedia’s volunteer editors put together as part of an initiative to combat “poorly written AI-generated content.”
Wikipedia’s guide contains a list of signs that text may be AI-generated, including vague attributions, promotional language like describing something as “breathtaking,” and collaborative phrases, such as “I hope this helps!” Humanizer, which is a custom skill for Claude Code, is supposed to help the AI assistant avoid detection by removing these “signs of AI-generated writing from text, making it sound more natural and human,” according to its GitHub page.
The GitHub page offers a few examples of how Humanizer might help Claude scrub these tells, including rewriting a sentence that described a location as “nestled within the breathtaking region” as simply “a town in the Gonder region,” and swapping a vague attribution like “Experts believe it plays a crucial role” for something specific, such as “according to a 2019 survey by…” Chen says the tool will “automatically push updates” whenever Wikipedia’s AI-detection guide is updated.
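For context, Claude Code skills are generally packaged as a folder containing a SKILL.md file, with YAML frontmatter that tells Claude when to apply the instructions that follow. The sketch below illustrates that general format rather than the actual contents of Chen’s Humanizer repo; the folder name, description text, and rewrite rules shown here are assumptions for illustration only.

```markdown
<!-- Hypothetical layout: .claude/skills/humanizer/SKILL.md (illustrative, not the real repo) -->
---
name: humanizer
description: Remove common signs of AI-generated writing from drafted text so it reads more naturally.
---

When drafting or editing prose, avoid the tells listed in Wikipedia's guide to
AI-generated content. For example:

- Replace vague attributions ("Experts believe...") with a specific, checkable source.
- Cut promotional adjectives such as "breathtaking" or "nestled" in favor of plain description.
- Drop collaborative filler like "I hope this helps!"
```

In practice, a skill folder like this would be dropped into a project’s or user’s Claude Code skills directory, after which the assistant can apply its instructions when a task matches the description.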
It’s likely only a matter of time before the AI companies themselves start tuning their chatbots to avoid some of these tells, too. OpenAI, for instance, has already addressed ChatGPT’s overuse of em dashes, which had become a telltale sign of AI-generated text.