SAN FRANCISCO: A new artificial intelligence model dubbed “Mythos” has triggered fresh debate in the tech world, with its developers warning that the system could pose serious risks if misused — even as experts question the scale of those claims.
The model, designed to identify cybersecurity vulnerabilities, is said to outperform human experts in scanning large codebases and detecting high-risk flaws.
Its creators have cautioned that such capabilities, if accessed by malicious actors, could have far-reaching consequences for economies, infrastructure and national security.
Supporters argue that tools like Mythos highlight the growing power of AI systems and the need for stronger safeguards. By rapidly identifying weaknesses in software, the technology could help organisations patch vulnerabilities before they are exploited, potentially strengthening digital security worldwide.
False positive rates
However, critics are urging caution, noting that many of the claims about Mythos remain difficult to verify independently. Some experts have raised concerns about the lack of transparency around how the system was tested, particularly the absence of widely accepted performance metrics such as the false positive rate, which measures how often a tool wrongly flags harmless code as a threat.
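For readers unfamiliar with the metric, a minimal sketch of how a false positive rate would be computed for a vulnerability scanner is shown below; the function name and the six-file example are hypothetical, not drawn from any published Mythos evaluation.

```python
def false_positive_rate(predictions, ground_truth):
    """FPR = FP / (FP + TN): the share of genuinely safe items wrongly flagged.

    `predictions` and `ground_truth` are parallel lists of booleans
    (True = flagged as vulnerable / actually vulnerable).
    """
    fp = sum(1 for p, t in zip(predictions, ground_truth) if p and not t)
    tn = sum(1 for p, t in zip(predictions, ground_truth) if not p and not t)
    return fp / (fp + tn) if (fp + tn) else 0.0

# Hypothetical scan of six files: the tool flags four, but only two are real flaws.
preds = [True, True, True, True, False, False]
truth = [True, True, False, False, False, False]
print(false_positive_rate(preds, truth))  # 2 false alarms out of 4 safe files -> 0.5
```

A tool with a high false positive rate buries real threats in noise, which is why critics treat the absence of this figure as a significant gap in the public claims.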
Analysts also note that similar warnings about AI systems have been made in the past. Earlier models once described as “too dangerous” were eventually released to the public, leading some observers to question whether such messaging reflects genuine concern or a broader industry pattern.
“There are a lot of cracks in this narrative that Mythos is all-powerful,” one expert said, highlighting the need for careful evaluation before drawing conclusions about its capabilities.
AI-driven cybersecurity tools
Beyond technical debates, the discussion around Mythos reflects a wider issue in the AI sector: how to balance innovation with responsibility.
As companies compete to build more advanced systems, the pressure to lead in the field is increasing — alongside concerns about safety, regulation and long-term impact.
While the risks associated with AI-driven cybersecurity tools are real, experts emphasise that they must be assessed alongside existing technologies already used to detect and prevent cyber threats.
In that context, Mythos may represent an evolution rather than a sudden leap into uncharted territory.
For now, the model remains at the centre of a growing conversation about the future of artificial intelligence — one that raises important questions about trust, accountability and how much control developers truly have over the tools they create.