US Regulators Investigate OpenAI's ChatGPT Over AI Risks

– The Federal Trade Commission (FTC) is conducting an investigation into OpenAI, the maker of ChatGPT, an AI-powered chatbot.

– The probe focuses on potential harm caused by the chatbot fabricating information about individuals, as well as on OpenAI's privacy and data security practices.

– Regulators worldwide are increasingly concerned about generative AI products, citing the vast amounts of personal data these systems ingest and their potential to produce harmful outputs such as misinformation and sexist or racist content.

– In a letter to OpenAI, the FTC requested information on user data handling, measures taken to address false or misleading statements produced by the model, and consumer complaints.

– OpenAI has declined to comment on the investigation, and the FTC has not provided further details.

– Lina Khan, FTC chair, expressed concern that chatbots are trained on vast amounts of data without adequate checks, leading to issues such as the disclosure of sensitive information and the spread of defamatory statements.

– Users have reported instances of ChatGPT generating fabricated names, dates, facts, and fake links to news articles and academic papers.

– The FTC's investigation delves into the technical aspects of ChatGPT's design, including efforts to address hallucinations and the oversight of human reviewers whose work shapes what consumers see.

– Italy's privacy watchdog previously imposed a temporary ban on ChatGPT over concerns about data collection and privacy; the ban was lifted after OpenAI improved its policies and introduced age verification measures.

– OpenAI's CEO, Sam Altman, has acknowledged ChatGPT's limitations and the need for further work on robustness and truthfulness.

– This investigation reflects growing regulatory scrutiny and the need to address the potential risks associated with AI-powered chatbots.