As recently revealed in the press, Samsung employees – who were allowed to use ChatGPT to help fix source code for new software – accidentally disclosed confidential information on new technology developed by the company.
This is particularly problematic since “ChatGPT retains the data it is fed to train itself further. Hence, these trade secrets from Samsung are now in the hands of the AI chatbot maker OpenAI.” (https://www.businesstoday.in/technology/news/story/samsung-employees-accidentally-leaked-company-secrets-via-chatgpt-heres-what-happened-376375-2023-04-06)
Even though AI software may be very appealing for companies seeking to increase efficiency and shorten delivery times, the use of these new tools should be regulated to (i) make employees aware of the associated risks and (ii) draw the line between what is authorized and what is forbidden.
Under French employment law, the most appropriate way to regulate the use of AI software (especially ChatGPT) is to implement an IT policy or adapt the one already in place. In our view, such a policy should include:
- Explanations of the objectives of AI use within the company, as well as the risks arising from negligent or malicious use;
- Restrictions on (i) personnel authorized to use AI tools, (ii) the tasks for which AI tools can be used, (iii) the type of information that can be submitted to AI and (iv) the AI platforms that can be used in a professional framework;
- Obligations to declare and document work performed with the help of AI, to ensure transparency and to assess liability should any problem arise;
- Sanctions in case of violation, especially where the violation leads to the leak of the company’s confidential information or causes damage to the company’s reputation.
In order to be fully enforceable, the IT policy should be incorporated into, or appended to, the internal regulations and individually communicated to employees.
The company should also inform its employees of the risks of AI use in general and offer training on how to use AI platforms and how to pre-process information before submitting it, for instance through data anonymization.
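By way of illustration, the pre-processing step mentioned above could take the form of a simple redaction routine run before any text is sent to an external AI platform. The sketch below is purely hypothetical – the `anonymize` helper and the patterns it covers are illustrative, not a reference implementation, and a production tool would need far broader coverage (names, client identifiers, technical secrets) and, ideally, human review:

```python
import re

# Hypothetical sketch: redact common identifiers from a prompt before it is
# submitted to an external AI platform. Coverage here is deliberately minimal
# (e-mail addresses, phone numbers, IBAN-like strings) for illustration only.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\+?\b\d[\d .-]{7,}\d\b"),
    "[IBAN]":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def anonymize(text: str) -> str:
    """Replace detected identifiers with neutral placeholder tags."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Contact jean.dupont@example.com or +33 6 12 34 56 78 about the fix."
print(anonymize(prompt))
```

Such a routine only reduces, and does not eliminate, the risk of disclosure; the policy and training measures described above remain necessary.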
It is important to note that under French law, even in the absence of any written policy, employees owe their employer a general obligation of confidentiality. Employees can therefore be held liable for any leak of confidential information, even where it results from negligence, e.g. from the use of publicly accessible AI platforms.
REMINDER: French employment law requires that any document imposing obligations on employees be drafted in French. A document drafted in another language would be unenforceable against the employee.