WELCOME TO THE FEBRUARY EDITION OF TECHNOL-AG, ADDLESHAW GODDARD'S MONTHLY TECHNOLOGY UPDATE.


Communications and Digital Committee calls for change to Online Safety Bill

The Communications and Digital Committee has urged ministers to "change course" on the Online Safety Bill, warning that it risks removing freedom of speech protections.

Clause 39(1)(a) of the Online Safety Bill (which the Committee recommends should be removed) enables the Culture Secretary to direct Ofcom in relation to the development of its codes of practice. The Committee suggests this could undermine Ofcom's independence by allowing interference with the implementation of the online safety regime.

As social media becomes further integrated into our daily lives, greater emphasis is being placed on regulating it as a safe space for its users. Equally, however, there are questions about who should set the regulatory agenda. It has been suggested that the Communications and Digital Committee's recommendations for the Online Safety Bill showcase the tension between the Government's attempts to influence the legislation and the protection of Ofcom's independence.

The Committee has called for the reinstatement in the Bill of a requirement for social media platforms to carry out risk assessments in relation to adult services. It has been suggested this would place an onus on platforms to be more transparent about how they balance free speech against limitations on content. The Committee also restates its recommendation that the House of Commons and the House of Lords establish a joint committee to scrutinise digital regulation. Although the Bill is still in progress, the emphasis on greater scrutiny of social media seems only likely to increase.

Current legal issues with AI are just the beginning

The development of sophisticated generative artificial intelligence (AI) has brought about a new wave of legal concerns in the tech industry. Unlike traditional software, which is written by humans and follows a set of predetermined rules, generative AI produces outputs and behaviours that its developers have not explicitly specified. This makes it difficult to determine who is responsible for the actions of a generative AI system.

The recent release of DALL-E 2, a generative AI model capable of creating high-quality images from textual descriptions, has raised concerns about copyright infringement, as has the lawsuit against Microsoft, GitHub and OpenAI over the GitHub Copilot coding tool. Both raise important questions about the use of copyrighted material to train generative AI, since copyright law is currently enforced against human infringers, not against machines.

Another notable example is ChatGPT, a generative chatbot built on OpenAI's GPT-3.5 models, which has recently caused controversy through its ability to generate realistic, coherent and human-like responses. The legal system is designed to hold individuals and organisations accountable for their actions, but it is not clear how that accountability applies to non-human entities such as generative AI.

So what does this mean for the future?

One possible solution is to develop a system of AI copyright ownership, whereby the creator of a generative AI system is given ownership of it and is then responsible for any copyright infringement the AI causes.

Another approach is to hold the users of generative AI responsible for any copyright infringement it causes, for example by requiring developers and users to obtain permission from copyright holders before using their works to train AI models. This would allow copyright holders to control how their works are used; however, it may not be practical or enforceable, particularly where the user is unaware of the infringement.

Ultimately, the development of legal frameworks for addressing copyright infringement by generative AI requires a collaborative effort between legal experts, AI developers, and copyright holders. As the technology evolves, it will be important to ensure that the rights of copyright holders are protected.

Creating ethical guidelines for the use of generative AI could provide a set of principles for developers and users of generative AI to follow, with the goal of minimising the risk of harm caused by the AI. This approach has already been adopted in some industries, such as healthcare, where ethical guidelines have been developed for the use of AI in medical diagnosis and treatment.

Regardless of the approach taken, it is clear that the legal and ethical challenges are becoming increasingly complex as the technology becomes more sophisticated and capable of creating more intricate and original works. Developers, users, and policymakers must work together to address these challenges and ensure that generative AI is used in a responsible and ethical manner.

EESC welcomes Commission's proposal for an AI Liability Directive

Continuing the theme of the article above, the European Economic and Social Committee (EESC) has welcomed the European Commission's proposal for an Artificial Intelligence Liability Directive (Directive), which aims to improve the rights of individuals affected by the wrongful use of artificial intelligence (AI).

The EESC believes minimal harmonisation is the best approach for the Directive.

The Directive should help create a more balanced legal system for the development and use of AI. However, as the EESC recognises, there is a risk of divergent interpretations by the stakeholders involved in its development and by judges. The EESC has therefore insisted on clear legal definitions and on enhancing the expertise of those who will apply the new legislation across the EU.

The EESC also recommends that:

  • a network of alternative dispute resolution bodies be set up;
  • the Commission closely monitor the development of financial guarantees or insurance covering AI liability; and
  • the Directive be reviewed three years after it comes into force.

The UK will not need to transpose the Directive into national law, but any UK providers placing AI systems on the EU market will be subject to it.

Susan Garrett

Partner, Co-Head of Tech Group
Manchester, UK
