What do in-house legal teams need to consider when contracting for AI and what protections should be included?
Contracting for a new AI system is much like contracting for any new IT solution. In most cases the usual considerations apply: the scope of your licence to use the system, appropriate indemnities and the allocation of liabilities. But there are some additional points to consider:
- Data protection – if the system will process any personal data, it is necessary to map out how the business will comply with its data protection obligations. A Data Protection Impact Assessment (DPIA) will most likely be needed. Privacy notices may also need updating if personal data will be collected for new purposes or processed in different ways. In-house counsel will also need to consider whether the third-party AI provider is a data controller or a data processor for data protection law purposes. With most IT solutions this is a standard controller-to-processor relationship, but the complexities of how AI systems are trained and kept up to date mean a more thorough analysis may be needed.
- IP and confidentiality – it is also important to confirm that the input data on which the AI system has been trained does not misuse another person's confidential information or a business's trade secrets, or infringe their IP rights, and that the processes used to develop the system are legally and ethically sound. However, given the uncertainty around IP infringement in relation to training data, many AI system suppliers may be reluctant to give this protection at this stage.
- Preventing internal data being used for model training – for obvious reasons, it is crucial to ensure that the AI system is not trained on any data provided by your business or its suppliers. Keeping your data in your hands and out of the AI's learning process protects confidentiality for both your business and those you work with.
- Who owns the data – it is essential that the contract between your organisation and the supplier is clear about who holds the rights over the output and what each party can do with it.
- Model maintenance – AI systems need to be maintained and retrained on current data. The contract should be clear about how this will be achieved and who will take the lead on this important issue. In some contracts this is likely to be an additional support service.
What are the key points an AI staff policy should cover?
Different policies and instructions will be needed for a publicly available AI system such as ChatGPT than for an enterprise version commissioned by the business. Not all businesses allow the use of publicly available AI systems, but those that do need to be clear about what can and cannot be done with them. As already discussed, confidential information and IP should not be entered into a publicly available AI system, and staff need to understand that these systems are not always accurate. Care needs to be taken both with the prompts used to generate an output and with the degree of reliance placed on that output.
A business implementing an enterprise AI system should develop specific instructions based on its capabilities and how the business intends it to be used. Typically, when a business implements a new software system, its use is confined by its functionality. AI systems can be different: their uses can be wide and varied, some low-risk and others high-risk. Depending on the kind of system implemented, the policy will need to be clear about which tasks it can be used for.
Other areas to cover include an outline of the benefits and risks, the consequences of non-compliance with the policy, details of available training and other support, use of AI systems on personal devices, and monitoring.
This article was sourced from Inner Temple Library.