GENEVA: Pakistan on Tuesday highlighted the destabilising impact of India’s use of dual-capable delivery systems like the BrahMos missile during the recent conflict.
Ambassador Bilal Ahmad, Pakistan’s Permanent Representative to the United Nations, other international organisations, and the Conference on Disarmament in Geneva, outlined the strategic, operational, legal, and technical risks posed by unregulated military applications of Artificial Intelligence (AI) in his statement to the Conference.
He drew attention to India’s AI-enabled disinformation campaigns, including false media narratives aimed at distorting reality during crises.
The representative called for a UN-led governance framework on military AI—one that preserves global and regional stability, ensures guardrails against irresponsible use, and promotes equitable access to peaceful AI applications.
In the statement, the representative said that AI is a general-purpose technology vital for achieving the Sustainable Development Goals (SDGs), while its military applications are cross-cutting and pose systemic risks, requiring a coherent, inclusive, and globally coordinated response.
He acknowledged useful initiatives by different States to promote responsible AI in the military domain, such as the REAIM summit process.
Ambassador Bilal Ahmad said the UN disarmament machinery should play a central role in developing an international governance framework for military AI and in preventing the fragmentation of the normative landscape.
“We are not persuaded by the argument that the AI in the military domain should only be dealt with through one narrow lens in one single venue,” the representative added.
He stressed the need for a structured, complementary, and multi-forum strategy in this regard. “No single venue or instrument can comprehensively address the AI challenge,” he added.
Ambassador Bilal proposed maintaining human control, rather than replacing human judgment, in decisions on the employment of nuclear weapons, while prohibiting the use of AI capabilities to manipulate data or target systems.
He called for developing restraint measures on the deployment and use of certain AI capabilities that could enable pre-emptive strikes and contribute to escalatory nuclear risks.
He said the UN Disarmament Commission (UNDC), through its Working Group II, is ideally positioned to develop practical guidelines and recommendations on the responsible military use of AI to address operational and technical risks. Historically, he added, the UNDC has effectively developed similar guidelines, such as the confidence-building measures of 1988 and the regional approaches to disarmament of 1993.
Pakistan’s UN envoy said the UNGA First Committee should institutionalise periodic Secretary-General reports on military AI trends and host dedicated debates. These reports could support informed agenda-setting across the CD, UNDC, and CCW.
“The Convention on Certain Conventional Weapons (CCW) GGE must continue to focus on LAWS [lethal autonomous weapons systems] and negotiate a new Protocol under the CCW to address the humanitarian challenges associated with LAWS,” he said in the statement.
“We must begin building a global governance architecture that includes both binding and voluntary measures. Constructing such a governance framework will take time and will not be a result of our efforts in one go,” Ambassador Bilal Ahmad proposed.