AI in the Workplace? Employers Should Be Alert.
The following article was first posted by Carmody Torrance Sandak & Hennessey as part of the firm’s Carmody@Work labor and employment series. It is reposted here with permission.
Seemingly overnight, artificial intelligence has gone from the pages of science fiction to the world of science fact, permeating nearly every aspect of our lives from healthcare to online shopping.
Its most recent and pervasive form is ChatGPT, a generative AI program created by OpenAI and launched in November 2022.
Since then, ChatGPT and other AI tools have exploded in popularity.
Whether you know it or not, your employees are likely already using ChatGPT in their personal lives, and perhaps even at work.
Gaining Popularity
Why are ChatGPT and generative AI so popular?
In short, ChatGPT is free and accessible on the internet. It interacts using natural language and is therefore extremely easy to use.
It can process an enormous amount of text data and instantly produce written responses in a human-like manner.
For example, given some basic information, ChatGPT can solve coding problems; write essays, speeches, or articles (this article was written by a human!); draft a cover letter or resume; and summarize an article.
Employees could, among other things, use ChatGPT to write job postings, job descriptions, interview questions, offer letters, employment policies, and emails, and to conduct research.
Generative AI applications such as ChatGPT are not only an entertaining distraction; they have the potential to be powerful tools both in and out of the workplace.
These tools are not, however, without very real risks.
Workplace Risk Analysis
It is important for employers to understand that ChatGPT is not perfect and has some significant limitations.
For example, ChatGPT can produce plausible sounding but completely incorrect answers.
In addition, the current version of ChatGPT only uses data through September 2021, and is not capturing more recent, and perhaps more reliable, information.
Also, ChatGPT cannot determine the reliability of, or make any qualitative assessment about, the information it accesses.
Therefore, ChatGPT’s results could be biased, offensive or discriminatory.
Other limitations include the AI’s lack of common sense and emotional intelligence, the user’s inability to determine the source(s) ChatGPT relied on, and the AI’s tendency to produce inconsistent responses depending on how a question is framed.
Employers should consider whether ChatGPT and/or other AI tools can and should be used by employees in performing their job duties.
In doing so, employers must take into consideration the capabilities and limitations of the AI tool.
Employers who allow employees to use AI should develop a policy that addresses potential legal issues.
Some key policy pointers include:
- Permitted and Prohibited Uses: the policy should identify the prohibited and permitted uses of AI.
- Quality Control: as noted above, ChatGPT can produce inaccurate answers. Employees must therefore be instructed to carefully proofread and edit all AI-generated work product. Employees should understand that AI can be a good starting point, but it is not a substitute for doing their own work and fact-checking.
- Bias and Discrimination: users should carefully review ChatGPT’s results to guard against any bias or discrimination.
- Intellectual Property: there may be questions about who owns the intellectual property rights to the work produced using ChatGPT.
- Transparency: the circumstances in which AI is used should be transparent and disclosed to the employer.
- Non-disclosure/Other Agreements: contracts and terms and conditions should be updated to include provisions protecting information you share with suppliers, vendors, or customers who may input that information into AI tools.
- Employee Manuals/Policies: updates may be required to address the use of generative AI in and out of the workplace.
Lessons Learned
Among the costliest dangers of using these tools in the workplace is, as Samsung recently learned, the potential loss of confidential or proprietary information.
Samsung engineers, stymied by a difficult-to-fix problem with source code they were developing, turned to ChatGPT for help with the blessing of the company.
In doing so, the engineers provided ChatGPT with confidential data, including the source code for a new program and internal meeting notes relating to Samsung hardware.
The engineers were unaware that ChatGPT’s terms and conditions of use allow it to retain the data users input in order to help ChatGPT learn, and that this information can be used by ChatGPT in future interactions.
In short, when users input data into ChatGPT, that data becomes the property of OpenAI under the terms of service.
Employees turning to ChatGPT for help could easily and unwittingly turn over trade secrets, personnel information, or other sensitive data.
Samsung’s experience is a cautionary tale that generative AI is here and now is the time for employers to put policies and training in place governing employees’ use of ChatGPT and other generative AI applications.
Employers are well-advised to remain vigilant on the benefits and risks of using AI in the workplace.
About the authors: Jason Gagnon is a partner at Carmody Torrance Sandak & Hennessey. His practice focuses on commercial litigation with a particular emphasis on product liability, utility law, and employment law. Nick Zaino is a partner at Carmody Torrance Sandak & Hennessey. He is co-leader of the firm’s business services group, and primarily practices labor and employment law.