Copyright and AI – a new AI Intellectual Property Right for composers, authors and artists

Background

The new technology landscape emerging from the extremely rapid progress in AI, Generative AI (GenAI) and, prospectively, Artificial General Intelligence (AGI) has been dominated by hyperscale, heavily funded and largely US-based players. Despite the potential opportunities created by “distillation” from left-field entrants such as DeepSeek, the major players are still likely to dominate.

Some commentators have thrown up their hands in despair and see this as the end of any chance of achieving a more open, widely distributed new technology ecosystem. They argue that the power of big compute and Large Language Models (LLMs) is such that no other companies could compete. They fear we are headed for an oligarchic technocracy that will show scant regard for legacy intellectual property (IP) regulation or for any other interest that does not benefit the giant technology companies themselves.

Their fears are misplaced. The new global capability offered by Google, OpenAI, Meta and Anthropic is, on the one hand, extraordinary in its ability to capture and draw on knowledge from a massive range of sources and is, on the other hand, still likely to be generic. The landscape remains open to new entrants as the development cycle continues to spiral upwards and specialist application areas open up. LLMs are less likely to be trained to become expert in the niche, specialist areas of individual industrial sectors, such as aerospace regulatory requirements or IP rights in the creative industries. This is not because they could not be trained in those areas, but because the costs of training and of sector systems integration are unlikely to yield sufficient commercial revenue for the large players. As a consequence of this, and of the emergence of open source models, venture capitalists are beginning to identify specialist application areas as potential targets for investment. Any changes to IP policy should be cognisant of that shifting focus and not get in its way. Changes should neither hamper the ability of existing growth sectors to contribute to the economy nor inhibit cultural development by disincentivising creators and acts of creativity.

IP Rights Reform

IP rights-owning businesses in the creative industries are rightly concerned to regain the ground lost to GenAI companies who have exploited the world’s music, literature, games, photos and films to train GenAI platforms without regard for the rights of creators or rights owners. That wrong is obvious and needs fixing.

Historically, remedies have come either through the courts, by creating precedents, or through governments changing the law. Commercial, market-level solutions are also often achieved through negotiated settlements agreed out of court, where the terms remain opaque and set no precedent. Some constituencies in this debate might prefer that outcome. However, if we are to achieve a favourable shift in IP policy and legislation over time to reflect the new uses of content by AI, we must make sure that those kinds of private deals do not prejudice the fair and open development of the market. It is therefore very welcome that the UK Government sees the need to work diligently, and at pace, towards a new kind of legislative IP environment designed for the longer term: one that will continue to incentivise authors, recording artists, photographers and fine artists in particular, as well as stimulate the emergent AI economy.

As AI platforms continue to be developed, their level of opacity needs to be radically reduced. The data sources that serve as constituent learning components in the content generated by GenAI platforms should be required to be made visible. Scientists continue to argue that the goal of AGI is a degree of autonomy or agency that goes beyond what individual prompt designers could achieve. That does not change the fact that an autonomous agent using its intelligence to make a new piece of content will do so under a greater or lesser degree of human direction, and will base it on the content on which it was trained. The GenAI platforms must be required to invest in their systems so that those sources can be rendered visible and explainable. This would also make a significant contribution to the global battle against misinformation.
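To make the idea of "rendering sources visible" concrete, the sketch below shows one hypothetical shape such a per-output attribution record could take. It is an illustration only: the field names, the weighting scheme and the assumption that influence can be estimated per work are not drawn from any existing standard or platform.

```python
# Illustrative only: a hypothetical per-output attribution record showing the kind of
# provenance information a GenAI platform could be required to expose.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SourceContribution:
    work_id: str          # identifier of the training work (e.g. an ISWC, ISBN or catalogue number)
    rights_owner: str     # party entitled to payment for this work
    creator: str          # original author, composer or artist
    weight: float         # estimated share of influence on the generated output (0..1)


@dataclass
class AttributionRecord:
    output_id: str                                   # identifier of the generated item
    model_version: str                               # which model produced it
    sources: List[SourceContribution] = field(default_factory=list)

    def total_weight(self) -> float:
        # If the attribution is complete, the weights should sum to (at most) 1.0.
        return sum(s.weight for s in self.sources)
```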

Reasonable parties agree that licensing content for AI training purposes should become a friction-free commercial service. Use of that content in newly generated material must be traceable and paid for. If such a solution were designed in from the outset, GenAI could become a valuable revenue stream for content creators and rights owners alike.
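As a minimal sketch of what "traceable and paid for" could mean in practice, the fragment below pairs a machine-readable licence offer for a work with a reported usage event and prices the use against it. The structures, names and per-use pricing model are assumptions made for illustration, not an existing licensing scheme.

```python
# A minimal sketch, assuming a hypothetical per-use licensing service.
from dataclasses import dataclass


@dataclass
class TrainingLicence:
    work_id: str
    rights_owner: str
    rate_per_generated_use: float   # fee each time the work contributes to an output


@dataclass
class UsageEvent:
    output_id: str
    work_id: str
    weight: float                   # contribution weight reported by the platform


def fee_due(event: UsageEvent, licence: TrainingLicence) -> float:
    """Price a single traceable use of a licensed work in a generated output."""
    if event.work_id != licence.work_id:
        raise ValueError("usage event does not match this licence")
    return event.weight * licence.rate_per_generated_use
```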

Regulators and legislators are right to reflect on the nuanced differences between the positions of rights-owning companies and individual creators. It might well serve both constituencies to have the GenAI platforms render their sources transparently and link them to their economic models. If the platforms are not forced to do so by law, a potentially unholy alliance between them and the rights holders might lead to out-of-court agreements on, for example, a standard royalty pay-out based not on the works used in music, literature or photography, but on more generic indicators such as the market share of the rights-owning company. This would be cruder and cheaper to implement. It could form the basis of a deal between, for example, a major record label or a photography rights owner and a GenAI platform, under which the rights owner receives a major new revenue stream and the GenAI platform can claim that it is complying with a form of the law. The individual originator of the content, the artist, might receive nothing at all, since the record label could justifiably claim that it did not know the precise origin of the works for which it was receiving payment. This kind of “black box revenue” is a common feature of income streams in some existing collecting societies and needs to be avoided in any new framework for licensing rights for AI training. Such a scenario might suit the more powerful commercial players and further prejudice the interests of individual artists, who have historically struggled to litigate for their own benefit in such a context.
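A simple worked example may help to show why the two payout models diverge. The figures, labels and artists below are invented purely for illustration: under usage-based reporting the money follows the works that actually contributed to outputs, whereas under a market-share "black box" split the payment stops at the rights-owner level and no individual artist has a defined claim on it.

```python
# Illustrative arithmetic only: usage-based payout versus a market-share "black box" payout.
# All figures and names are invented for the example.

pool = 1_000_000.0  # total royalties the platform pays out in a period

# Usage-based: the platform reports which works contributed to outputs, and by how much.
usage_shares = {
    ("Label A", "Artist X"): 0.10,   # Artist X's catalogue drove 10% of reported uses
    ("Label A", "Artist Y"): 0.02,
    ("Label B", "Artist Z"): 0.88,
}
usage_payouts = {k: pool * share for k, share in usage_shares.items()}

# Market-share "black box": the pool is split by each label's overall market share,
# with no per-work reporting, so nothing is earmarked for any individual artist.
market_shares = {"Label A": 0.60, "Label B": 0.40}
black_box_payouts = {label: pool * share for label, share in market_shares.items()}

print(usage_payouts)      # Artist Z's works earn 880,000 under usage-based reporting
print(black_box_payouts)  # Label B receives 400,000; Artist Z has no defined claim on it
```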

Read the full article at the source: The Creative PEC