WELCOME TO THE APRIL EDITION OF TECHNOL-AG, ADDLESHAW GODDARD'S MONTHLY TECHNOLOGY UPDATE.


AI is on the increase across all sectors – is the law keeping up?

Recent advancements in artificial intelligence, such as ChatGPT and the introduction of Google's Bard chatbot, bring with them a range of possibilities across all sectors.

By 2030, it is predicted that the number of cashiers in the retail sector could halve, as self-checkouts and self-scanning become more prominent. Retailers are currently experimenting with AI-powered systems to spot gaps on shelves, pick and pack products, and automate price changes on shelves.

In the energy industry, AI has the potential to predict and identify faults at power plants, use weather forecasting to identify suitable locations for offshore wind farms, and track companies' carbon emissions for sustainability planning.

Experts envisage that the financial services sector will be the most susceptible to disruption from AI. AI systems could run customer background checks as part of the onboarding process for new clients, and could be updated with new regulatory guidelines to flag potential breaches or shortfalls in a company's systems.

So how does AI impact legal risk?

As the use of AI increases, it is important to understand whether legislation is keeping pace. Some of the risks to consider before introducing AI into a business include:

  • 1. Liability. Whilst AI looks to streamline processes, remove human error and predict issues, it is safe to say that AI will not be error-proof. Since AI has no legal personality, liability cannot be attributed to the AI itself, so it is important to consider which elements of liability should be attributed to which parties in the supply chain, e.g. is it the AI developer or the AI user who is liable?
  • 2. Intellectual Property. Using AI-produced output requires careful analysis of the datasets used to train the system, to ensure that companies do not open themselves up to third-party infringement claims. For example, where system owners seek to rely on copyright exemptions to acquire datasets free of charge, the application of such exemptions is not uniform across the world, and copyright owners have been vocal in challenging the legality of relying on those exemptions in the first place. On the other hand, it is not clear how and to what extent AI-generated output can be protected, which means that companies may not be able to prevent others from using that output.
  • 3. Competition/Antitrust Issues. Whilst it is widely recognised that AI can increase competition through the use of algorithms, for example by helping consumers find the lowest prices, there are concerns that such algorithms can also make price-fixing more effective – such parallel behaviour may not be intentional, but may arise purely as a result of using AI in the first place.
  • 4. Data Privacy. AI tools involve processing vast amounts of data, some of which may be personal data. This raises concerns about the identification of individuals from such data – whether through analysis or through the use of AI itself to de-anonymise anonymised data.

So where does the law currently stand on these risks?

On the one hand, the UK government published a white paper on 29 March 2023 "to guide the use of artificial intelligence in the UK, to drive responsible innovation and maintain public trust in this revolutionary technology", and has said it wants to "avoid heavy-handed legislation which could stifle innovation". The white paper outlines five principles that regulators should consider to best facilitate the safe and innovative use of AI in specific industries:

  • Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed;
  • Transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI;
  • Fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes;
  • Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes;
  • Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI [1].

On the other hand, the European Commission has proposed legislation in the form of the Artificial Intelligence Act (the "Act") to govern the development, marketing and use of AI in the EU. The Act aims to manage AI applications using risk categories while working alongside the existing GDPR to "boost research and industrial capacity while ensuring safety and fundamental rights" [2]. Unlike the UK's lighter-touch, innovation-focused approach, the Act will impose different legislative obligations at all stages of the development and use of an AI system.

With such accelerated developments in AI, it is difficult to predict whether the differing approaches taken by the UK and EU will have their desired impact, or how innovation in the space will cope with such a divergent legislative landscape.

DSIT publishes Spectrum Statement and Wireless Infrastructure Strategy

The Department for Science, Innovation & Technology (DSIT) has published a Spectrum Statement and a Wireless Infrastructure Strategy to drive the adoption of 5G and 6G technologies and ensure the UK remains a pioneer in innovative spectrum management.

Demand for spectrum is growing across the public and private sectors. To maximise its use, the right policy framework needs to be in place, and the Spectrum Statement sets out how the government intends to achieve this, with a view to the UK becoming a science and technology "superpower" by 2030.

The Wireless Infrastructure Strategy sets out how the government plans to harness advanced wireless connectivity to benefit the economy and society and provides steps to aid commercial investment in 5G.

So what does this mean?

The Government expects spectrum's economic influence to increase further as wireless applications in various sectors come to the fore. Within the Wireless Infrastructure Strategy, the Government suggests that the deployment of advanced digital networks could lead to improved local public services.

Looking ahead, a key focus for the Government is using spectrum more efficiently. As spectrum is reusable, the Government has suggested targeting underutilised spectrum and reusing it more intensively. Meanwhile, Ofcom has emphasised the importance of enabling organisations of all types to access the spectrum they require.

However, these innovations present a different type of challenge for more vulnerable internet users. In order to free up frequencies for new and innovative technologies, some providers, such as Vodafone, are now working towards switching off their 3G networks. For customers who use older, more basic hardware, there is a risk that they will fall into "digital poverty". Vodafone is one of the first networks to do this, beginning its phase-out in June 2023, but the UK Government and mobile phone operators have said that all 2G and 3G services will be switched off by 2033 at the latest, with 3G networks being shut down first [3].

Recent case law reminds us of the importance of good drafting in online terms

In the recent case of Parker-Grennan v Camelot UK Lotteries Ltd [2023] EWHC 800 (KB), the High Court dismissed an application for summary judgment on a £1 million claim against the online gaming operator Camelot. The claimant, Joan Parker-Grennan, argued that she was entitled to the £1 million prize accidentally displayed on an interim, optional animated screen as a result of a coding issue in the software. Camelot refused to pay out the £1 million, accepting only that the claimant had won the £10 prize displayed on the final screen. The gaming operator was protected by the website's terms and conditions: on their proper interpretation, the interim animated display was irrelevant to whether a player had won a prize.

Jay J dismissed the summary judgment application on the following grounds:

  • 1. Incorporation of website terms. Camelot's website T&Cs were properly incorporated into its online contract with each of its customers, including the claimant. The terms were not unusual or onerous, and were clearly drafted and easy to understand.
  • 2. Fairness of the website terms. None of the terms relied upon by Camelot were unfair under the Unfair Terms in Consumer Contracts Regulations 1999 (SI 1999/2083) (applicable because the events giving rise to the claim occurred before the Consumer Rights Act 2015 came into force).
  • 3. Interpretation of the website terms. On a proper interpretation of the website terms, it was clear that only the amount shown on the final screen and on Camelot's official list of winning numbers was conclusive as to the amount won by anyone playing the game.

This case is a reminder of how important it is to consider a wide range of risks when drafting website terms and conditions, ensuring that they are properly incorporated under the common law as well as satisfying statutory requirements and fairness tests.

Key Contacts