
The Transparency Rule: Ethics, Identity, and Disclosure

Module 4 — Responsible AI at Work  |  Priority for marketing, HR, communications, and customer-facing teams

The Transparency Rule: We do not deceive. If a machine made it, we label it.

AI can now generate content that is indistinguishable from human-created work. Text, images, video, and voice — all can be produced by AI tools with enough fidelity to mislead the people who encounter them. The Transparency Rule exists because the capability to deceive is not permission to deceive.
This module addresses situations in which AI-generated content creates ethical and legal obligations, and in which the shortcut AI offers is off-limits because the organization does not hold the rights to create the content.

What This Module Covers

AI can clone a voice, generate a face, and impersonate a human in a customer interaction. Technical possibility is not legal permission.

Risks This Module Addresses

  • Voice and likeness used in AI-generated content beyond the scope of existing contracts
  • AI chatbots interacting with customers without the required disclosure under EU AI Act
  • AI-generated images, videos, or deepfakes of real individuals created without explicit consent
  • Prohibited biometric AI uses in hiring — facial expression analysis and emotion recognition
  • Marketing and HR teams unaware that creative AI shortcuts carry IP and regulatory liability

Course Scenarios — Transparency, Identity, and Disclosure

Module 4 addresses situations that employees in creative, marketing, HR, and customer-facing roles encounter every day. The following scenario is the centerpiece of the module: a conversation that marketing teams are having right now.

📽 From the Course — Scenario: AI Voice Cloning and Identity Rights

The Jenna Scenario

A marketing team has hours of recordings from a previous campaign with a voice artist. The new campaign would cost $8,000 to re-hire her. A team member proposes feeding the existing recordings into an AI voice generator to recreate her voice for free.

Alex’s response: “Even though the technology makes it easy, using AI to imitate her voice without permission crosses a legal and ethical line. A voice artist owns their voice the same way a photographer owns their photos. We can only use what we’ve licensed — nothing more.”

The contract covered the recordings. It did not cover her voice itself. Technical possibility is not legal permission.

▶ This is one of 3 decision scenarios employees work through in Module 4.

Course Reference — Transparency Violations and Compliant Alternatives

The course gives employees a clear reference covering the most common transparency violations in AI use — why each violates the principle, and the compliant alternative for each situation. This includes chatbot disclosure obligations required under the EU AI Act and UK AI principles.

  • Cloning a voice artist's voice without written permission
    Why it violates transparency: Uses a person's identity without consent, regardless of existing contract scope.
    Compliant alternative: Obtain explicit written permission for the specific AI use, or re-hire the artist.

  • AI chatbot impersonating a human employee
    Why it violates transparency: Deceives customers about the nature of the interaction.
    Compliant alternative: Clearly disclose that the chatbot is AI: "You're chatting with our AI assistant."

  • AI-generated images presented as real photographs
    Why it violates transparency: Creates a false impression of authenticity, and people depicted may not have consented.
    Compliant alternative: Label AI-generated images; obtain consent if using a real person's likeness.

  • AI facial expression analysis in hiring decisions
    Why it violates transparency: Prohibited under the EU AI Act as biometric categorization and emotion recognition.
    Compliant alternative: Use validated, legally approved assessment methods, and consult a lawyer before deploying any AI hiring tool.

  • AI-generated deepfake video of a public figure or employee
    Why it violates transparency: Identity theft and deception, creating a false impression of real speech or actions.
    Compliant alternative: Never create deepfakes of real individuals without explicit consent for each use.

Course Assessment — Knowledge Checks Throughout

Each module tests employee understanding through scenario-based knowledge checks that require identifying the compliant choice — not just recalling definitions. The following example reflects the format and difficulty level employees encounter in Module 4.

📝 From the Course — Knowledge Check

Identifying the Prohibited Use

Three employees are describing how they plan to use AI in their work this week. Which use is prohibited under your policy and the EU AI Act?

❌  Option A — Using AI to check a resume for grammar and formatting errors

✅  Option B — Using AI to analyze facial expressions in video interviews to score candidates on trustworthiness

❌  Option C — Using AI to brainstorm a list of interview questions for an open role

Why B is prohibited: Facial expression analysis falls under biometric categorization and emotion recognition — uses that are scientifically dubious and expressly prohibited under the EU AI Act. The intent to improve hiring does not change the prohibition. This use is not permitted regardless of how the tool is framed.

Course Philosophy — Why Transparency Is a Values Statement

The Privacy Rule and the Accountability Rule are grounded primarily in risk management — data exposure, regulatory liability, and accountability for errors. The Transparency Rule goes further. It is a values statement about the kind of organization this training is designed to support.

Deceiving customers with AI that impersonates humans is not just a regulatory violation — it is an erosion of the trust that underpins every customer relationship. Using AI to recreate someone’s identity without permission is not just a licensing error — it is a failure to respect the rights of the people your organization works with.

Compliance training tends to frame everything in terms of risk. This module takes the position that some things are wrong not just because they carry legal risk, but because they violate the basic obligation to treat people honestly.

AI is transforming how we work. Responsible use keeps us safe and compliant. Ethical use keeps us trustworthy.

Course Deployment Information

📋 Module 4 — At a Glance
Audience: Marketing, HR, communications, and customer-facing teams
Length: 15–20 minutes
Format: Scenario-based, SCORM-compatible, LMS-ready
Covers: Disclosure obligations, voice and identity rights, deepfakes, chatbot transparency, prohibited biometric AI uses
Prerequisite: Module 1 — AI Fundamentals (recommended)
Regulatory alignment: EU AI Act, UK AI principles, FTC guidance, emerging US voice and likeness protections

Frequently Asked Questions

Our organization uses an AI chatbot for customer service. What do we need to disclose?

Under the EU AI Act and UK AI principles, customers must be informed they are interacting with an AI system when the interaction could create a reasonable impression of human involvement. A clear disclosure at the start of the chat — "You're chatting with our AI assistant" — satisfies the requirement. The chatbot should also be able to escalate to a human agent on request.
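The disclosure pattern described above — disclosure first, escalation on request — can be sketched in code. This is a minimal, hypothetical illustration; the class and phrase list (`ChatSession`, `ESCALATION_PHRASES`) are invented for this example and do not come from any specific chatbot framework or the course itself.

```python
# Minimal sketch of a chatbot session wrapper that enforces the disclosure
# pattern: the AI disclosure is always the first message, and a request for
# a human triggers escalation. All names here are hypothetical.

DISCLOSURE = "You're chatting with our AI assistant."
ESCALATION_PHRASES = ("human", "agent", "real person")

class ChatSession:
    def __init__(self):
        # Disclosure is sent before any user input is processed.
        self.transcript = [("bot", DISCLOSURE)]
        self.escalated = False

    def handle(self, user_message: str) -> str:
        self.transcript.append(("user", user_message))
        # Escalate to a human agent on request, as the guidance recommends.
        if any(p in user_message.lower() for p in ESCALATION_PHRASES):
            self.escalated = True
            reply = "Connecting you with a human agent now."
        else:
            reply = "How can I help you today?"  # placeholder AI reply
        self.transcript.append(("bot", reply))
        return reply

session = ChatSession()
print(session.transcript[0][1])  # disclosure always comes first
session.handle("Where is my order?")
session.handle("I'd like to speak to a human")
print(session.escalated)         # True: escalation was triggered
```

The key design point is structural: disclosure is emitted in the constructor, so no code path can start a conversation without it.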

What if a voice artist gives verbal permission to clone their voice?

Verbal permission is not sufficient. AI voice use requires explicit, written authorization that specifies the scope of the permitted use — which campaigns, which platforms, which time period. Without written documentation, the organization has no defensible basis for the use and no evidence of consent if the use is disputed.
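The scope fields the answer above lists — which campaigns, which platforms, which time period — amount to a structured consent record. The sketch below is a hypothetical illustration of that record; the class name, fields, and file path are invented for this example, not part of the course or any real contract system.

```python
# Hypothetical consent record capturing the scope dimensions that written
# AI-voice authorization should specify: campaigns, platforms, time period,
# and a reference to the signed document.
from dataclasses import dataclass
from datetime import date

@dataclass
class VoiceAIConsent:
    artist_name: str
    campaigns: list[str]      # which campaigns the cloned voice may appear in
    platforms: list[str]      # which platforms the content may run on
    valid_from: date
    valid_until: date
    signed_document: str      # reference to the written authorization on file

    def covers(self, campaign: str, platform: str, on: date) -> bool:
        """True only if the proposed use falls inside every scoped dimension."""
        return (
            campaign in self.campaigns
            and platform in self.platforms
            and self.valid_from <= on <= self.valid_until
        )

consent = VoiceAIConsent(
    artist_name="Jenna",
    campaigns=["Spring 2026"],
    platforms=["radio", "podcast"],
    valid_from=date(2026, 3, 1),
    valid_until=date(2026, 8, 31),
    signed_document="contracts/jenna-ai-voice-2026.pdf",
)
print(consent.covers("Spring 2026", "radio", date(2026, 4, 15)))   # True
print(consent.covers("Holiday 2026", "radio", date(2026, 4, 15)))  # False
```

A use is permitted only when every dimension matches; a different campaign, platform, or date falls outside the consent, mirroring the point that verbal or open-ended permission is not a defensible basis.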

Can we use AI to generate images of people for marketing without using a real person’s likeness?

AI-generated images of fictional people — not based on any real individual’s likeness — are generally permissible for marketing use, subject to your organization’s content guidelines. The restriction applies when the AI is generating content that uses, resembles, or is derived from a specific real person’s appearance or identity.

Does the disclosure requirement apply to AI-assisted writing, like using AI to draft an email?

In most internal communication contexts, disclosure is not required for AI-assisted drafting. The distinction is between using AI as a writing tool and presenting AI output as entirely the work of a specific human author in a way that would create a false impression. For customer-facing communications, marketing content, and any context where authenticity is material to the recipient, consult your legal team about disclosure requirements in your jurisdiction.

What makes the Jenna scenario different from other licensing situations?

The Jenna scenario isolates the specific question that AI voice technology creates: does a contract for recorded performances transfer rights to the performer’s voice pattern itself? The answer is no — those are different rights, and AI use requires the second right, not just the first. Most creative contracts were written before AI voice cloning existed and do not address it. The safe approach is explicit written consent for any AI voice use, regardless of what prior contracts say.

Complete the Series

Module 4 completes the Responsible AI at Work series. Employees who have completed all four modules understand the three pillars — Privacy, Accountability, and Transparency — and can apply them in the specific situations where AI use creates risk in their role.

Responsible AI at Work — 4-Module Series

Module 1: AI Fundamentals

Module 2: The Privacy Rule

Module 3: The Accountability Rule

▶  Module 4: The Transparency Rule — Ethics, Identity, and Disclosure  (this page)

Full series overview

Ready to Deploy Module 4?

Module 4 can be deployed as a targeted course for creative, marketing, and HR teams or as the final module of the full Responsible AI at Work series. SCORM-compatible and LMS-ready. Voice and identity scenarios can be customized to reflect your organization’s vendor contracts, content production workflows, and disclosure policies.

Request a preview or discuss deployment →
