AI Deepfakes Fuel New Wave of Workplace Harassment

The rise of generative artificial intelligence is creating a troubling new category of workplace risk: employees using AI-generated “deepfakes” to harass, humiliate or retaliate against co-workers.

While harassment claims are nothing new, employers should be aware that this emerging form of misconduct is already appearing in lawsuits and is expected to grow as AI tools become cheaper, easier to use and more realistic. These incidents can involve sexually explicit fake videos, manipulated recordings depicting an employee violating company policy or altered audio suggesting someone made offensive or abusive remarks.

It’s important that employers understand this emerging form of workplace harassment.


Recent cases

In one recent case, a law enforcement officer alleged that colleagues created and circulated an AI-generated video depicting him in a sexualized scenario meant to mock his sexual orientation. In another, a television meteorologist sued her employer after deepfake sexual images using her likeness were circulated and, she claimed, her employer failed to address the issue adequately.

Appellate courts have also upheld significant verdicts where employers failed to act after deepfake content spread within organizations.

Compounding the risk, the volume of deepfake content is exploding. Reports have found millions of deepfake files circulating online, with sexually explicit content making up the majority. As these tools become more accessible, misuse in the workplace is expected to increase.


Existing laws still apply

Harassment involving deepfakes is generally evaluated under the same standards as traditional workplace harassment claims. If the content targets an employee based on protected characteristics such as gender, race or sexual orientation — and contributes to a hostile work environment — employers may face liability under federal and state anti-discrimination laws if complaints are not handled appropriately.

Employers may also be exposed to claims involving:

  • Defamation
  • Invasion of privacy
  • Intentional infliction of emotional distress
  • Violations of emerging state laws targeting nonconsensual deepfake content


Why it’s an issue

Most employee handbooks and anti-harassment policies were drafted before generative AI became widely available, so they do not explicitly address synthetic media or AI misuse.

As a result, employees may not clearly understand that this conduct is prohibited, and employers may have a harder time defending their policies if litigation arises.


What employers can do

  • Update anti-harassment policies. Explicitly prohibit creating, sharing or possessing AI-generated content that is sexually explicit or defamatory, or that targets protected characteristics.
  • Address off-duty conduct. Make it clear that behavior outside of work that affects the workplace can be subject to disciplinary action.
  • Enhance investigation protocols. Treat digital content as potentially manipulated evidence. Verify its authenticity and document findings carefully.
  • Train managers and employees. They should know how to recognize deepfake harassment and respond appropriately.
  • Act promptly and consistently. When issues arise, apply discipline regardless of the employee’s role or tenure.
  • Monitor legal developments. States continue to pass laws targeting deepfake misuse and Congress is considering broader regulation.
  • Review insurance coverage. Call us to see whether your employment practices liability or cyber policies address claims involving synthetic media. Employment practices liability insurance can cover litigation costs in harassment cases, including legal fees, discovery, settlements and judgments.