DOL Issues Guidance on AI, Worker Well-Being Best Practices
The following article first appeared in the News & Analysis section of Littler Mendelson’s website. It is reposted here with permission.
On Oct. 16, 2024, the U.S. Department of Labor published Artificial Intelligence and Worker Well-Being: Principles and Best Practices for Developers and Employers.
This document expands upon guidance released in May 2024 that focused on eight AI principles.
The new guidance now includes best practices that are intended to be a roadmap for developers and employers to implement these eight principles.
DOL envisions that the principles and best practices, in combination, will enable developers and employers “to harness AI technologies for their business while ensuring workers benefit from the new opportunities and are shielded from potential harms.”
Bold disclaimers accompany the DOL's latest release of best practices, reinforcing that they are descriptive only: neither binding nor intended to modify or supplant existing law, regulation, or policy.
However, DOL envisions them to apply across all sectors and workplaces, and it encourages developers and employers to customize them to fit their needs based on worker input.
Principles and Best Practices
1. Centering Worker Empowerment. DOL envisions that AI should have the dual aim of benefiting employers while advancing their employees’ well-being, particularly workers from underserved communities.
Employers should seek “early and regular” input from their workers about the adoption and use of AI technologies, and bargain in good faith on the adoption of such technologies in union workplaces.
2. Ethically Developing AI. DOL urges developers, among other things, to establish standards so that AI systems brought to market will protect workers’ civil rights, mitigate risks to workers’ safety, and meet performance requirements.
DOL identifies additional best practices for ethical AI development. These include creating jobs for workers who will “review and refine data inputs used to train AI systems” and ensuring those jobs meet basic human rights and domestic and international labor standards.
The AI systems should also be developed to allow for ongoing human oversight and retrospective review of data that inform decisions.
In addition, AI system operations and uses should be described in a manner that non-technical users can understand.
3. Establishing AI Governance and Human Oversight. DOL guides employers to “establish governance structures, accountable to leadership, to produce guidance and provide coordination to ensure consistency across organizational components when adopting and implementing worker-impacting AI systems.”
These governance structures should incorporate worker input into their decision-making processes. Employers should further provide appropriate AI training to their workforce.
The DOL document states that employers should not rely on AI systems, or information collected through electronic monitoring, to make "significant employment decisions" without "meaningful human oversight."
The individuals who oversee such decisions should have training in the use and interpretation of AI outputs.
Employers should also identify and document the types of significant employment decisions that are informed by AI.
Employers should inform job applicants how AI is used in employment decisions. DOL further recommends that employees be permitted to appeal such decisions, that employers conduct independent audits of these systems, and that employers publicly report information on related worker rights and safety concerns.
4. Ensuring Transparency in AI Use. Employers should provide workers and their representatives advance notice and appropriate disclosure if they intend to use worker-impacting AI.
If employers are going to use any electronic monitoring systems, they should provide conspicuous notice to the employees who will be monitored.
When feasible, employers should permit workers and their representatives to submit corrections to individually identifiable data used to make significant employment decisions.
5. Protecting Labor and Employment Rights. Employers should not use AI systems that undermine, interfere with, or chill labor organizing and other protected activities.
AI should not be used to limit or detect organizing efforts and protected activity, and electronic monitoring should not be used in non-work (break) areas.
Employers should monitor all AI systems to ensure they do not negatively impact worker safety and well-being, that they comply with current laws, and that they do not undermine legally protected rights of workers (leaves of absence, accommodations, wages, break times).
Employers and developers should also ensure AI systems maintain compliance with anti-discrimination laws and should conduct routine monitoring for these effects.
The DOL document also states that developers and employers should consider how AI systems will impact job seekers with disabilities and how those systems can assist them in the workplace.
Employers should always encourage workers to raise concerns they have about the implementation and use of AI systems and refrain from retaliating against workers who do.
6. Using AI to Enable Workers. Employers should consider how AI systems can improve job quality and assist rather than supplant workers.
For example, AI systems could be implemented to reduce the time employees spend on rote tasks, freeing them to develop other needed and valuable skills.
The document also encourages employers to pilot any AI systems and programs before broad deployment, to include worker input in that process, and to train workers to use, learn from, and improve the systems.
Employers should minimize electronic monitoring to the least invasive measures necessary.
7. Supporting Workers Impacted by AI. Employers should train employees to use AI systems that complement their work, which can help prevent displacement.
Where displacement does occur, employers should retrain those workers so they can fill other roles within the organization when feasible.
8. Ensuring Responsible Use of Worker Data. Employers should avoid collecting, retaining, or otherwise handling worker data that is not necessary for a legitimate and defined business purpose; comply with relevant laws governing the collection and security of worker data; and refrain from sharing worker data outside the employer's business without freely given, informed consent.
Key Takeaways for Employers
The eight DOL AI principles remain a representative guiding framework for businesses, not an exhaustive list.
While DOL’s latest guidance re-emphasizes that it is not binding on employers, employers should already be cognizant of the practices suggested in the document.
As employers consider whether, and to what extent, they want to incorporate AI into their business practices, they should also consider implementing guardrails that incorporate useable and practical elements of DOL’s Principles and Best Practices, with particular emphasis on engaging with workers on the use of AI, auditing AI systems used in employment decisions, considering how AI systems will enhance worker well-being, and reducing negative impacts on their workforce.
As with previous AI guidance issued by DOL, this guidance does not reflect input from employers on how organizations are using AI in the real world.
Indeed, employers did not have an opportunity to provide direct feedback on the guidance.
The DOL document is largely a list of general principles and best practices that each employer will need to adapt to its own needs; none of it is legally binding.
Instead, employers will need to continually monitor how legislative bodies might regulate the use of AI in the workplace.
However, implementing some of these practices now may help employers down the line when legally binding statutes are enacted in varying jurisdictions.
About the authors: Alice Wang and Bradford Kelley are shareholders with Littler. Shane Young is an associate with the firm.