ISLAMABAD: The Center for Artificial Intelligence and Digital Policy has filed a complaint with the US Federal Trade Commission against OpenAI’s GPT-4, calling it “deceptive, biased, and a risk to privacy and public safety.”
Microsoft-backed OpenAI recently launched the fourth iteration of its GPT AI program, which has garnered attention for its ability to engage in human-like conversation, compose songs, and summarize lengthy documents.
However, some experts have expressed concern about the potential risks posed by such powerful AI systems.
The complaint from the Center for Artificial Intelligence and Digital Policy follows an open letter signed by Elon Musk, AI experts, and industry executives, calling for a six-month pause in the development of systems more powerful than GPT-4.
OpenAI’s GPT-4 fails to meet FTC’s standard
According to the complaint, OpenAI’s GPT-4 fails to meet the FTC’s standard that AI should be “transparent, explainable, fair and empirically sound while fostering accountability.”
The group cited an incident in which OpenAI exposed users’ private chat histories to other users, as well as a researcher’s finding that it was possible to take over someone’s account, view their chat history, and access their billing information without the account holder ever realizing it.
Marc Rotenberg, president of the Center for Artificial Intelligence and Digital Policy, expressed concern that commercial pressures had pushed OpenAI to release a product that was not ready.
He urged the FTC to “open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”
The complaint highlights the growing concern about the potential risks and biases of advanced AI systems and calls for greater oversight and regulation to protect privacy and public safety.