Reporters Without Borders (RSF) warned on Wednesday that European regulation of artificial intelligence (AI) based on codes of conduct is not enough to protect the right to information, and called for the regulation of algorithms.
“RSF believes that this proposal, if adopted, would be dangerous to the right to information and undermine the ethical use of AI in the media,” the press freedom organization said in a statement.
The reaction comes ahead of a new meeting to agree on a future European AI law, and after the G7 countries signed a voluntary code of conduct and guidelines on October 30 as part of the so-called Hiroshima AI process.
For RSF, a model based on codes of conduct “is not sufficient as it is not mandatory and depends only on the goodwill of sector participants.”
The organization therefore asked European negotiators to regulate the “core algorithms” that are the cornerstone of the artificial intelligence industry and on which specific applications, such as OpenAI’s ChatGPT, are built.
To guarantee the right to information, these algorithms must be verifiable, and therefore future European regulation should “impose on the underlying models standards of openness, explainability of functioning and transparency of systems, as well as measures to protect the right to information,” emphasized Vincent Berthier, head of technology at RSF.
The Paris-based press freedom organization also recalled that in November it issued a charter setting out principles for the appropriate use of AI in journalism.
The Paris Charter on Artificial Intelligence and Journalism, prepared by a commission created by Reporters Without Borders (RSF) and chaired by journalist and Nobel Peace Prize laureate Maria Ressa, sets out 10 principles, including that “human judgment must remain central” in editorial decisions.
“The social function of journalism and the media, which serve as a trusted third party for society and individuals, is a cornerstone of democracy and strengthens the right to information for everyone. Artificial intelligence systems can significantly help the media fulfill this role, but only if they are used in a transparent, fair and responsible manner, in an editorial environment that strongly defends the ethics of journalism,” the document says.
According to the letter, media outlets are “responsible for the content they publish,” “journalistic ethics guides media outlets and journalists in their use of technology,” and media outlets are transparent “in their use of artificial intelligence systems.”
In addition, the media “should engage in global AI governance and defend the viability of journalism in negotiations with technology companies,” as well as help the public “confidently differentiate between genuine and synthetic content.”
Author: Lusa
Source: CM Jornal
