AI as Muse, Sparring Partner, or Ghostwriter in L&D and HR

L&D and HR teams must weigh AI’s speed and convenience against instructional soundness and professional accountability.

In the Learning and Development (L&D) and Human Resources (HR) arenas, artificial intelligence (AI) engines now reside in diverse content-generation platforms. Some instructional design (ID) practitioners are already experimenting with AI tool sets to enhance their workflows, while others are just beginning to explore the technology’s potential.

Early adopters are discovering that when used purposefully, AI can shorten design time, personalize learning paths, and handle routine tasks so practitioners can focus on higher-level strategizing, review, and refinement activities. And for certain creative applications, such as image, text-to-speech, video, or avatar generation, AI is quickly becoming the agent of choice.

The Allure of Over-Simplifying AI Use

In general, though, marketing narratives around AI in the workplace promise effortless content creation. A variety of tools offer appealing shortcuts to generating job aids, assessments, case studies, lessons, or entire courses with just a prompt, or by uploading articles, transcripts, white papers, or slide decks.

But using AI well means more than chasing the latest tool. It requires deciding why the technology belongs in the process before turning it loose, and determining the degree of automation that is appropriate for that process.

A simplified, “one-and-done” approach to content creation is tempting both to time-pressured practitioners who would greatly prefer a single-pass solution and to those who are intimidated by the technology. After all, AI’s outputs are typically grammatically flawless, often cogent, and surprisingly complete.

In many cases, the results surpass what a non-expert (or a less-proficient writer) might produce without AI. So if the content appears credible and is likely to be well-received by the audience—such as clients, colleagues, learners, or stakeholders—why not just run with the first pass?

The notion that content must be further honed by iteratively collaborating with AI may seem inefficient or unnecessary. Yet relying on the initial output without further review or refinement could lead to superficial outcomes at best—and misinformation at worst.

Herein lies the tension: Should professionals be expected to spend more time refining something that already seems “good enough”? Are we setting unrealistic expectations by suggesting that more extensive interactions with AI are always necessary?

In L&D and HR, the answer depends on the stakes. When quality, tone, and legal or instructional integrity matter—such as in regulated industries, compliance training, or performance-critical job roles—AI output should be treated as a draft, not a final deliverable ready for publication or use. While the speed of AI is a boon, the coherence of the result still rests on human judgment. That makes collaborative iteration more essential, not less.

AI tools can produce outputs that look polished, sound authoritative, and appear “done.” But without an understanding of learning science, content sequencing, and assessment strategies, the results may be instructionally hollow. Worse, they may be dangerously misleading in regulated or technical fields.

Such problems raise no obvious red flags unless someone with relevant expertise, such as a subject matter expert (SME), intervenes. The following recommendations can help gauge when deeper review is warranted:

Example | Risk of Misuse | Review of AI Output Needed?
Drafting a trivia quiz | Low | Probably not. One pass may be fine.
Writing compliance training on sensitive workplace issues | Medium-high | Yes. Plan for an SME review and at least one more refinement pass.
Generating learner feedback or personalized guidance | High, if not aligned to the target audience | Yes. Nuance and voice matter, so an SME review is called for.
Drafting the phrasing of performance evaluations | High ethical stakes for sensitive interactions | Yes. HR or legal oversight is critical, so plan for additional refinement.
Creating online courseware by uploading text files and slide decks to an AI content engine | High, if the output does not reflect best practices for instructional design | Yes. Requires ID proficiency (or SME review) to verify instructional integrity and content accuracy.


4 Ways to Adopt AI Using a Thoughtful, Balanced Approach

  1. Automate the busywork, not the thinking.

Streamlining repetitive work—such as drafting job descriptions, screening résumés, answering policy questions, or outlining lesson plans—can result in valuable time savings. However, efficiency shouldn’t become the only goal.

Therefore, avoid relying solely on automation. Without the collaboration of human experts, AI risks flattening the creative and problem-solving opportunities that give workplace learning depth, relevance, and accuracy.

  2. Keep the purpose in the process.

Start with the learning objective, then choose tools that help meet it. AI performs best when it serves a clear instructional intent. For instance, an onboarding designer might use generative AI to brainstorm scenario prompts but still craft the final dialogue to fit company culture. A sales training team might have AI analyze learner data to reveal skill gaps, then decide which need human coaching. The technology amplifies design when guided by a clear purpose.

  3. Balance speed with substance.

Slow down to review, fact-check, and refine to ensure the result meets professional standards. A polished output isn’t necessarily a finished product. AI promises painless productivity, yet quality, accuracy, and compliance still matter. Think of AI as an assistant that accelerates production of the first draft, but never as the final editor.

  4. Aim for smarter collaboration.

Encourage iterative prompting, peer review, and expert oversight. The best results come when AI becomes a teammate rather than a shortcut. Before publishing or launching a course, ask:

  • Did the tool solve the right problem?
  • Were key decisions verified by a subject matter expert?
  • Does the final version sound like our organization, rather than an algorithm?

This discipline keeps the “human” in human-AI collaboration.
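For teams that script parts of their content pipeline, the iterate-then-review discipline above can be sketched in code. This is a minimal illustration, not a real AI integration: `generate_draft`, `refine`, and `sme_approves` are hypothetical stand-ins for a call to whatever AI tool is in use and for a human reviewer’s sign-off.

```python
def iterate_with_review(prompt, generate_draft, refine, sme_approves, max_passes=3):
    """Produce content iteratively, gating publication on SME approval.

    generate_draft, refine, and sme_approves are placeholders supplied by
    the caller; no real AI service or API is assumed here.
    """
    draft = generate_draft(prompt)
    for _ in range(max_passes):
        if sme_approves(draft):
            return draft, True   # approved for publication
        draft = refine(draft)    # another collaborative refinement pass
    return draft, False          # still unapproved: escalate to human rework

# Example run with stand-in functions (no AI involved):
drafts = iter(["rough draft", "better draft", "final draft"])
result, approved = iterate_with_review(
    prompt="Outline a compliance lesson",
    generate_draft=lambda p: next(drafts),
    refine=lambda d: next(drafts),
    sme_approves=lambda d: d == "final draft",
)
```

The point of the sketch is the gate, not the generation: nothing is returned as publishable until a human reviewer approves it, and the loop caps the number of automated passes before escalating.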

The Bottom Line

In many instances, the effective use of AI in L&D and HR involves:

  • Framing the problem clearly
  • Collaborating with AI iteratively
  • Vetting and refining the AI output rigorously
  • Co-authoring with human SMEs when appropriate

Ultimately, L&D and HR teams must weigh AI’s speed and convenience against instructional soundness and professional accountability. It’s a balancing act that shapes how AI becomes a muse, sparring partner, or ghostwriter.

NOTE: Portions of this article were adapted from “AI as Muse, Sparring Partner, or Ghostwriter? Comparing AI Use in Public Education vs. Professional Development,” in the Journal of Applied Instructional Design (JAID).

Adele Sommers
Adele Sommers, Ph.D., is an instructional designer, usability strategist, and president of Business Performance Inc. She specializes in performance-centered training design, technical communication, and organizational improvement. With more than 25 years of industry experience, she helps organizations simplify complex systems, reduce user errors, and align employees’ job workflows with the company’s core business objectives.