As Europe’s trilogue approaches, Italy’s publishers and sister organizations demand stronger, not weaker, regulations on AI.
A potentially pivotal moment occurs this week in the closely watched development of the European Union’s “AI Act.” On Wednesday (December 6), the AI Act is to have its fifth “trilogue”—the term for a negotiating session among the European Parliament, the European Commission, and the Council of the European Union. Previous trilogue meetings on the Artificial Intelligence Act were held in June, July, September, and October. Originally, the idea was that this December trilogue would finalize the bill for the bloc this year, but there’s increasing concern that such progress will take longer. This, on legislation that saw its first draft in 2021 and was first proposed in 2019.
What has happened in the interim—you won’t be surprised to read—is the rise of “foundation models.” Sometimes called “general purpose” systems, these are large language models built for “deep learning” that can be adapted to a wide array of scenarios. This contrasts, of course, with a traditional program designed to handle a specific and narrow task set, maybe speeding up a bit of office drudge work. Such less ambitious programs require nothing like some foundation models’ contentious free-range feeding on information—often copyrighted content—to build their algorithmic-response structures.
A foundation model is a form of what’s called “generative artificial intelligence,” meaning that it can generate output from a broad base of ingested data. At the highest level, the overarching aim of this legislation has been, to quote the EU’s material, to address “concerns especially with regard to safety, security, and fundamental rights protection.” But if the devil is usually in the details, a construct of digital details presents such an opportunity for devilry that many observers now are worried about this important legislation’s progress.
Needless to say, the upheaval around OpenAI last month, when its board fired and then rehired Sam Altman, seemed to confirm fears that a major corporate player in the AI space could be thrown into turmoil by inscrutable internal governance issues. As Kevin Chan at the Associated Press writes today, once the Altman fiasco had played out, European Commissioner Thierry Breton said at an AI conference, “‘At least things are now clear’ that companies like OpenAI defend their businesses and not the public interest.”
And yet, much discussed in coverage in the run-up to Wednesday’s trilogue is an unexpected resistance that’s been mounted by France, Germany, and Italy, which presented a pitch for self-regulation among AI players. At The Guardian, John Naughton wrote up this “Franco-German-Italian volte face,” as he calls it, as the result of everyone’s worst fears: “the power of the corporate lobbying that has been brought to bear on everyone in Brussels and European capitals generally.” More broadly, the assumption is that each member-state’s about-face toward self-regulation reflects promises made by industry advocates on behalf of local national AI companies—a divide-and-conquer effort by lobbyists.
Today, however (December 4), there’s a strong countervailing statement coming from Italy:
This morning, the Association of Italian Publishers (Associazione Italiana Editori, AIE) issued a statement on behalf of itself and 34 other business associations representing authors and artists “from the entire cultural spectrum,” demanding that Italy “change its position on the European Artificial Intelligence regulation.”
The signatories say they are aligning themselves with French and German associations, effectively forming a trifecta of the three nations’ cultural communities, all saying in one voice that self-regulation in AI won’t be adequate and that “more stringent regulation” is the way to go. In the best light, these are the cultural sectors of three European member-states trying to lead their governments back to Europe’s earlier plan for tighter regulation, rejecting the self-regulatory concepts that have surprised many and could stall the tough approach many have hoped would be the result of the AI Act’s development.
The top-line element of today’s statement from the Italian signatories (emphasis ours): “We strongly ask the Italian government to support balanced regulation which, by guaranteeing the transparency of sources, favors the development of artificial intelligence technologies, while protecting and promoting original human creativity and all the cultural contents of our country.”
Concerned, they write, for the “delicate negotiation” coming up on Wednesday, these 35 trade organizations—representing hundreds of thousands of Italian citizens—are telling Rome that stronger regulation, not a lighter touch, is the way to go. Their statement may well warm the hearts of many who already have turned to litigation and other forms of objection over how some AI development is handling copyrighted content.
We’ll quote at length here from the statement issued today in Rome by these organizations:
“There is a darker side to this technology. In particular, generative AI is being trained on large datasets and massive amounts of copyrighted content that is often collected and copied from the Internet. It is programmed to produce results that have the ability to compete with human creation. This technology poses several risks to our creative communities.
“Protected works, voices, and images are being used without the consent of the rights holders to generate new content. Some of these uses may infringe not only on copyrights but also on the moral rights and personalities of the authors and harm their personal and professional reputations. In addition, there is a risk that the original work of authors, artists, and cultural and creative enterprises will be replaced, forcing them to compete with their digital replicas, which would gain obvious advantages in several respects with serious economic consequences as well.
“There is also a broader risk to society, since people might be led to believe that the content they encounter—textual, audio, or audiovisual—is authentic and truthful human creation, when it’s simply the result of the generation or manipulation of AI. This deception can have far-reaching implications for the spread of misinformation and the erosion of trust in the authenticity of digital content, and also presents serious problems from an ethical standpoint.
“AI cannot develop while neglecting fundamental rights, such as the rights of authors and performers, the image and personality rights and the rights of the multiple creative and cultural industries that invest to make possible the creation of works over which it is legitimate to expect to be able to exercise control. AI should never be employed in ways that could mislead the public. The AI Act must ensure that absolute priority is given to maximum transparency of the sources used to train algorithms, for the benefit of the creative workers and industries we represent and society more generally in Europe.
“The obligations envisaged should be imposed on developers and operators of generative AI systems and models, upstream and downstream, with particular reference to the obligation to keep and make publicly available sufficiently detailed information on the sources, content, and works used for training, in order to enable parties with a legitimate interest to determine whether and how their rights have been infringed and to take action.
“At a minimum, these obligations should be extended to all systems made available in the EU or generating outputs used in the EU, commercial or noncommercial, and lead to a presumption of use in case of non-compliance by allowing rights holders to exercise their prerogatives including for the granting of licensing.
“It is crucial to recognize that none of the legal protections currently in European legislation has the slightest chance of working unless strict and specific transparency rules are placed on generative AI developers. We welcome the European Parliament’s proposals to include specific transparency requirements for foundation AI models, and we appreciate the effort of the Spanish presidency [Spain currently holds the presidency of the Council of the European Union] in finding a solution that is balanced, but it is of paramount importance to further strengthen these safeguards.
“The collection of data and text to train AI was initially allowed for research purposes and trend analysis; today, this has become an integral part of content creation. Legislation must reflect this change in regulating and protecting the use of protected works and personal data. These goals absolutely cannot be achieved by softening the proposal voted on by the European Parliament or by following self-regulatory assumptions.”
And so the fundamental point of debate is set for Wednesday’s trilogue, Italy’s cultural sector having delivered an unequivocal rebuttal to the softer regulatory approach its government signaled earlier.
For additional reading: publishingperspectives.com/2023/
Photo: Getty iStockphoto: HT Ganzo