KEY POINTS
- AI chatbots risk research integrity by generating convincing but false citations and facts, a phenomenon experts call “hallucinating.”
- Heavy AI reliance threatens to erode analytical thinking and original synthesis.
- Experts warn AI is becoming an “ultimate cheat code” for students.
- A UK survey shows 92% of students now regularly use generative AI tools for academic work.
ISLAMABAD: Experts from academia, tech, and policy have warned that the reflexive use of AI chatbots is quietly undermining the integrity of research, cautioning that these tools, despite their utility, are propagating serious inaccuracies, encouraging intellectual complacency, and obscuring the path to trustworthy scholarship.
A recent UK survey underscores a seismic shift in higher education, revealing that a staggering 92% of students now regularly use generative AI tools, a dramatic surge from 66% just a year prior.
This integration into daily academic life presents a pressing question for researchers and institutions alike: is AI a transformative boon for scholarship or a fundamental threat to intellectual rigour?
In conversations with WE News English, several professionals, from educators and media figures to researchers, students, think tank officials, IT experts, and policymakers, shared a common fear.
They worry that heavy reliance on AI-generated content could foster a generation skilled at compiling information but lacking the ability to analyse critically, synthesise concepts, or generate truly original ideas.
Catalyst for Growth and Efficiency

Many experts view AI’s arrival with optimism, framing it as a potent stimulus for progress. Former Higher Education Commission (HEC) Chairman and climate change expert Dr. Tariq Banuri describes AI as a “stimulus technology” capable of boosting economic competitiveness, productivity, and growth.
For nations like Pakistan, he advocates for policies that foster, not hinder, its adoption, such as ending disruptive internet shutdowns and stimulating demand in sectors like data services.
Within research, the advantages are tangible. AI acts as an “indispensable research assistant” or a “tireless, instant co-pilot,” as noted by Islamabad-based IT expert Usman Farooq. It dramatically accelerates literature reviews, data synthesis, coding, and methodological testing.
Dr. Faheem Siddiqui, a postdoctoral researcher at Vrije Universiteit Brussel in Belgium, highlights AI chatbots’ role in automating repetitive tasks, freeing researchers for higher-order analysis and innovation.
In fields like statistical analysis and medical diagnostics, AI handles complex computations and data projections with ease. This efficiency boost, as Dr. Saeed Minhas, a media practitioner and expert on governance, AI, and security, acknowledges, enhances overall research speed and scalability.
Eroding the Foundations of Scholarship

However, this efficiency comes with significant threats to the very core of academic integrity and intellectual development. The most urgent concern is AI’s role as an “ultimate cheat code,” a term used by Dr. Ilhan Niaz.
For students, over-reliance on AI to generate assignments risks creating a generation that loses the fundamental skill of writing, a critical process for developing brainpower and analytical thinking. This initiates a vicious cycle: diminished effort leads to greater dependence, further atrophying human cognitive ability.
Dr. Munawar Hussain, an expert in international relations and social media, observes a growing “laziness,” where the deep engagement with raw data, verification of facts, and discernment of patterns is eroded. The result, warns Usman Farooq, can be work that is “superficially competent but intellectually hollow.”
Furthermore, AI’s tendency to “hallucinate” or fabricate plausible citations and information poses a direct threat to research integrity. Compounding this is the difficulty of detection; as Humza Farooq, a faculty member in the IT department at Jazan University, Saudi Arabia, notes, AI-generated text is now hard to spot, and detection software is notoriously unreliable.
For the academy itself, Dr. Niaz paints a dystopian picture: a hollowed-out institution where professors use AI to generate lectures and students use AI to complete assignments, rendering the educational experience meaningless. Unchecked, the internet risks being flooded with indistinguishable, AI-generated content, degrading the information ecosystem.
Adaptation, Oversight, and Human-centred Values

Confronting these challenges requires proactive adaptation, not rejection. Experts propose a multi-faceted strategy centred on human oversight.
Evolving Assessment and Methodology: Since AI can produce polished text, evaluation methods must change. Humza Farooq points to the revival of oral exams and live defences, which assess real-time understanding and critical thinking.
Academically, Dr. Fouzia Farooq, TORCH Global Visiting Professor at the University of Oxford, stresses that researchers can no longer rest on claims of originality; they must demonstrate a distinct intellectual contribution beyond the AI’s output.
Implementing Smart Regulation and Policy: Dr. Ilhan Niaz calls for strict, sector-specific regulation, limiting AI use in fields like the humanities to preserve critical thought while encouraging it in technical domains.
Dr. Saeed Minhas argues that regulatory bodies like Pakistan’s HEC must establish clear policies mandating disclosure of AI use, limiting its role to technical assistance, and forbidding it from core scholarly tasks like constructing arguments or peer review. Ethical guidelines, including formal “AI co-authorship” protocols, are necessary.
Cultivating Critical Engagement and Skill: The goal must be to “make AI our slave, not our master,” advises Dr. Fouzia Farooq. This requires cultivating new skills.
Dr. Faheem Siddiqui emphasises mastering “prompt engineering” to mitigate bias and improve outputs. Ultimately, as Dr. Minhas advocates, a hybrid model must prevail: human researchers must lead theoretical development, critical interpretation, and ethical judgment, using AI as a tool for augmentation, not replacement.
Experts agreed AI is a transformative but double-edged tool for research. Its benefit depends entirely on human oversight.
Moving forward, researchers must master these new skills, approaching AI with both excitement and sharp scrutiny and balancing its power with indispensable human judgment.