
AI policies and governance: A guide for in-house lawyers – Part 1

AI is a term that means different things to different people, depending on the context.

When we talk about AI in a legal or business context, we generally mean any machine that processes data or carries out tasks with some level of autonomy – i.e., it makes decisions or finds answers to queries based on its own ‘machine learning’. This ‘machine learning’ is possible because of the algorithms that form part of the AI system and the data the system is trained on.

It’s important to understand that AI technology isn’t new and has been in use for years – for example, in the technology many streaming services use to predict the kind of content a viewer might like. Interest has surged over the last 12 months or so because of a new breed of AI – Generative AI (GenAI) – which can create videos, text and audio at the touch of a button, and in a way that (at least visually) looks polished. Examples include ChatGPT and Google Bard.

Many GenAI systems are built on what are known as ‘foundation models’. These are attracting a lot of attention because they can process vast amounts of unstructured data and perform multiple tasks at once. Foundation models can be used as a base technology to build an AI system for a specific purpose – and providers often allow organisations to build on top of their capabilities via application programming interface (API) access as part of their business model.
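By way of illustration only, the sketch below shows roughly what building on top of a foundation model via API access can look like in practice. The endpoint URL, model name and response format are invented placeholders, not any particular provider’s interface.

```python
# Minimal sketch: calling a hosted foundation model over an API to build a
# task-specific tool on top of it. Endpoint, model name and response shape
# are hypothetical placeholders.
import os
import requests

API_URL = "https://api.example-provider.com/v1/generate"  # placeholder endpoint
API_KEY = os.environ.get("FOUNDATION_MODEL_API_KEY", "")

def summarise_clause(clause_text: str) -> str:
    """Send a prompt to the hosted model and return the generated text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-foundation-model",  # placeholder model name
            "prompt": f"Summarise this contract clause in plain English:\n{clause_text}",
            "max_tokens": 200,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]  # field name varies by provider
```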

GenAI in particular can have a transformational impact on businesses. Tasks that take a human hours to complete can be done in seconds. In-house lawyers need to be ahead of the game when it comes to AI because, in many sectors, their business teams will be very keen to implement these systems. There are associated risks and legal considerations, and in-house lawyers will be called on to prepare their businesses for the impact.

There’s a lot of talk about the risks of AI. What are the key legal and compliance issues that in-house legal teams need to be aware of?

It depends on the type of AI system being used, but let’s take the foundation model as an example. There are various risks for a business using AI systems built with this kind of model. These include:

Risk one: IP and confidentiality

The data the system was trained on could include a third party’s IP or confidential information. If the correct consents and/or licences haven’t been obtained, this could lead to action being taken against the user by the owner of the IP or confidential information. The extent to which training an AI system on third-party IP is an infringement is the subject of cases in many jurisdictions. In early 2023, Getty initiated legal proceedings against Stability AI for the alleged use of Getty’s images to train Stable Diffusion, a model owned by Stability. The legal position will remain unclear until we have judgments in these cases.

It’s also unclear who owns the IP in the content generated by an AI system (the output). For example, when a business uses an AI system to write an article, the human writer (or their employer) would ordinarily own the IP in the article. But where the article has been wholly written by a machine, with minimal prompts from a human, there are questions about whether there’s sufficient creativity in the output for it to satisfy the criteria for IP protection. We’ve already seen a number of cases on whether an AI system can be named as the inventor on a patent. The current trend is that it can’t.

The law around trade secrets is likely to become highly relevant. That’s because both the data a user inputs into an AI system as a query and the ensuing output may be best protected as trade secrets – treated more like confidential information than like formal IP rights such as a patent or a work of copyright.

Crucially, if a business inputs confidential information into a public AI system like ChatGPT, the model could, in theory, use that information to train itself. The information is arguably disclosed at that stage. There’s then a risk that the information becomes public, which could put the business in breach of any confidentiality obligations it owes to its staff or business partners.

Risk two: Data protection

Businesses using AI systems still need to comply with their data protection law obligations. With a number of high-profile IP cases dominating the headlines, less attention is being paid to the complexities of using AI systems that process personal data.

Obligations include being fair and transparent, building data protection into the design of systems, and honouring certain rights of individuals – for example, the right to have personal data erased and rights related to automated decision-making.

Principles that must be adhered to include purpose limitation, data minimisation and accuracy. These are clearly very challenging principles to adhere to in systems that require huge swathes of data, and where human understanding of how they work can be limited. Personal data should only be processed in ways that an individual reasonably expects, and how that processing is carried out and the effect of it must be explained. With a complex AI system that uses a sophisticated algorithm (or algorithms) it’s unlikely this’ll be an easy task.

The UK’s data protection regulator, the Information Commissioner’s Office (ICO), has AI firmly on its radar. It recently published a round-up of all of its AI guidance and resources, including a detailed overview of how to apply data protection law principles to the use of information in AI systems. The guidance was updated last year after requests from UK industry to clarify requirements for fairness in AI. Fairness interacts with bias and discrimination, two key societal concerns associated with the use of AI. These are ethical considerations that all businesses will need to anticipate from the start, in particular if they themselves are involved in the development of AI systems. The ICO and The Alan Turing Institute have also jointly produced a guide on explaining decisions made with AI, which has some useful practical advice and tips.

It’s really important to take data protection issues into account at a very early stage of deployment. If the AI system is being brought in from a third party, discuss these issues with them from the start.

Risk three: Accuracy

Because AI-generated output tends to be professionally presented and seemingly confident, with few typographical or spelling errors, people can be tempted to trust that it’s error-free. But reported stories have shown the risk of ‘AI hallucinations’, where content presented articulately as fact has simply been made up – including entire legal cases.

It’s important to understand that most GenAI systems, despite being trained on vast amounts of data, are essentially prediction machines – they’re designed to forecast the most probable answer to a prompt.

Unfortunately, this means that if the model doesn’t have the data needed to generate a correct response, it may confidently provide an incorrect one. This is more likely in highly technical fields, where general models might not have enough specific training to give accurate answers.
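A toy sketch, using invented probabilities, illustrates the point: the system simply returns the most likely continuation of a prompt, whether or not that continuation is actually true.

```python
# Toy illustration of GenAI as a 'prediction machine'. The probabilities are
# invented; a real model works over tokens and billions of parameters, but the
# principle is the same: return the most probable continuation, not a checked fact.
toy_next_word_probs = {
    "The case was decided in": {"2019": 0.41, "2021": 0.35, "1998": 0.24},
}

def predict_next_word(prompt: str) -> str:
    """Return the single most probable continuation for the prompt."""
    candidates = toy_next_word_probs[prompt]
    return max(candidates, key=candidates.get)

print(predict_next_word("The case was decided in"))
# Prints "2019" with apparent confidence, even if the real answer is different.
```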

Even the smartest AI can get things wrong if it’s not armed with the right information. If staff within organisations believe that the tech they’re using produces high-quality, accurate results, they might be tempted to rely on it for certain aspects of their job, which could – depending on the nature of their role – be a risk for the business.

So, what action do the board and senior management need to take now?

A suitable internal structure should be put in place to lead on AI strategy and ensure consistent messaging and awareness throughout the business. As this is a rapidly evolving area, it’s more important than ever to keep up to date with technological and legal/regulatory requirements and guidance. Policies should be regularly reviewed and updated, and staff training will be key.

Carrying out a review and risk assessment of how AI is currently used in the business is an important first step.

All businesses need to understand how they might use AI systems. Ultimately, some may not have a use for AI operationally in their day-to-day business activities, but they must all consider that their employees may still be using freely available AI systems like ChatGPT. Employees need direction on how AI systems can and can’t be used. Otherwise, businesses may find that IP and confidential information (either theirs or that of their business partners) is being put into these systems, resulting in loss of control and possible breach of licences, confidentiality agreements and other obligations. There’s also the risk of reliance on outputs that aren’t accurate and/or may be tainted by inherent bias or discrimination. This is where a staff policy comes into play.

A new ISO AI standard (ISO 42001) was recently published, which has a similar scope and purpose to the well-known ISO 27001 standard for best practice in information security management. Businesses may want to contemplate adopting this standard when implementing their policies and governance frameworks.

Consider whether a log should be kept of the use of the AI systems for specific purposes, so you have a record for governance purposes. More generally, documenting the steps that have been taken to identify and mitigate risks, and the reasoning behind decisions taken, may be a legal and/or regulatory requirement (and is sound business practice, in any event).
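As a minimal sketch only, such a log could be as simple as appending structured records to a CSV file; the field names and file location below are illustrative assumptions, not a prescribed format.

```python
# Illustrative AI-use log for governance purposes: who used which system,
# for what, with what data, and whether a human reviewed the output.
import csv
from datetime import datetime, timezone

LOG_FILE = "ai_usage_log.csv"  # hypothetical location
FIELDS = ["timestamp", "user", "ai_system", "purpose", "data_categories", "human_review"]

def log_ai_use(user, ai_system, purpose, data_categories, human_review):
    """Append one record of AI system use to the governance log."""
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "ai_system": ai_system,
            "purpose": purpose,
            "data_categories": data_categories,
            "human_review": human_review,
        })

log_ai_use("j.smith", "ChatGPT", "first draft of marketing copy",
           "no personal or confidential data", "reviewed by line manager")
```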

This article was sourced from Inner Temple Library.
