A task force of the European Union's privacy watchdogs has criticized OpenAI, finding that the company's measures to prevent its ChatGPT chatbot from generating factually wrong information do not go far enough to bring the service into compliance with European data rules.
The task force, established by the body that coordinates the national privacy regulators of EU member states, published its report on its website on Friday. It reads: "While the measures taken to comply with the transparency principle help to avoid misinterpretation of the output of ChatGPT, this in itself is not enough to ensure compliance with the principle of data accuracy."
The findings build on inquiries by national regulators, led by Italy's authority, which raised concerns about the popular artificial intelligence service last year and prompted the creation of the task force.
The task force stressed this point throughout its report.
The report underscores that OpenAI and similar companies still face significant hurdles in the EU, where regulatory and legal compliance remains a central challenge for the AI business.
While the EU is a hotbed for the deployment of AI-powered chatbots, its data protection authorities are scrutinizing such technologies' compliance with the region's General Data Protection Regulation (GDPR), signaling that further layers of protection will be needed to build user confidence and safeguard data accuracy.