ISLAMABAD: Some online AI detection tools are being criticised for falsely labelling genuine content as machine-generated and then offering paid services to “humanise” the text—raising concerns among experts that such practices resemble scams.
With misinformation already spreading rapidly online, unreliable detectors risk worsening the problem by misclassifying authentic material. Researchers warn that a growing number of fraudulent tools can be exploited to undermine credibility and damage reputations.
An investigation found several such tools inaccurately identifying human-written content—including news reports and literary works—as AI-generated. After producing these false results, the platforms prompted users to pay fees to “fix” or rewrite the text. In some cases, content was labelled as largely AI-generated regardless of language or input quality.
Developers of these tools defended their systems, acknowledging that no detector is fully accurate and that free versions may provide less reliable results. However, experts argue that some of these platforms appear to produce automated or scripted outputs rather than genuine analysis.
Critics say the business model relies on misleading users into paying for unnecessary services. Some tools even claimed affiliations with prestigious institutions, claims those institutions denied.
Beyond financial concerns, the issue also has broader implications. False AI detection claims can be used to discredit legitimate content, contributing to what researchers call the “liar’s dividend”—a tactic where authentic information is dismissed as fabricated.
While even legitimate detection tools can make mistakes, experts emphasise the need for verification through additional evidence, warning that overreliance on flawed systems could further erode trust in digital information.