According to Goldman Sachs, generative AI such as ChatGPT could expose as many as 300 million jobs to automation. Companies cannot ignore this new technology, which is expected to significantly increase productivity across a wide range of areas.
AI has recently drawn the attention of governments and officials seeking to regulate this new tool: the AI Safety Summit at Bletchley Park in November 2023, followed by the political agreement in December 2023 on the provisional rules that will form the European Union's Artificial Intelligence Act.
Employees are now accustomed to using artificial intelligence (AI). Employers, however, must remain extremely cautious and anticipate the risks associated with its use.
In addition to the risks of inaccuracy or bias in the answers that ChatGPT can give to technical or complex questions (which have been widely reported over the past few months), employers need to pay attention to risks in terms of:
- Data protection: AI tools are likely to re-use the data submitted to them, with no filter and no way to control who has access to it;
- Plagiarism and counterfeiting: AI does not create original content by itself; the information it generates is often copied from multiple other sources;
- Liability for incorrect information: AI does not always provide sources, and if not carefully checked, the information it supplies may prove incorrect;
- Damage to reputation, particularly if ChatGPT-generated content (which may be freely available elsewhere) is used in advertising.
Before considering the introduction of openly available AI tools, employers should ask themselves various questions, according to their needs and means. We will address these questions next week in a series of articles covering:
- AI in the workplace
- Redundancies and AI
- Procedures for implementing new AI-related projects within companies
- Confidentiality breach
- Risks of using ChatGPT in the recruitment process