Context Matters

Context engineering—which shapes the world the AI understands before being asked anything—is quietly becoming the backbone of effective AI adoption. And L&D has the opportunity to lead it.

Conversations at the TED-AI gathering in October 2025 circled around a surprisingly shared sentiment: Enterprise artificial intelligence (AI) feels stuck in the “trough of despair.” Anyone who has shepherded a new technology knows this stage. The fireworks fade, the reality check arrives, and everyone suddenly remembers they have day jobs.

Gartner’s latest “Hype Cycle for Artificial Intelligence” confirms the moment, showing generative AI sliding into what the firm calls the Trough of Disillusionment.

Fresh data explains why. A study linked to the Stanford Social Media Lab shows that 91 percent of companies report at least one failed AI initiative.

Before anyone starts deleting apps, a bit of perspective helps. A National Bureau of Economic Research paper shows generative AI reaching 40 percent adoption among U.S. adults within two years.

Amid that velocity, one area is emerging as the real differentiator for organizations: context engineering. Until now, many people have promoted prompt engineering as the core skill for getting good results from AI. Prompts sharpen the question you ask an AI. Context engineering shapes the world the AI understands before you ask anything. The difference is similar to telling a guest exactly what to cook versus keeping a well-stocked pantry and a set of family recipes pinned to the fridge. One is tactical. The other sets up the conditions for success.

Context engineering builds the scaffolding around a model so it can infer intent, understand your environment, and produce work that fits your business. It involves clean and structured data, curated knowledge bases, workflow signals, organizational memory, and human evaluation checkpoints. When those elements are in place, AI becomes a more reliable colleague instead of an unpredictable improviser.
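To make the idea concrete, here is a minimal sketch of what assembling that scaffolding can look like in code. Everything in it (the glossary terms, knowledge snippets, and workflow signal) is a hypothetical stand-in for real organizational assets, and the plain-text format is only one of many ways to package context for a model.

```python
def build_context(user_question: str,
                  glossary: dict[str, str],
                  knowledge_snippets: list[str],
                  workflow_signal: str) -> str:
    """Assemble the 'pantry' the model sees before the question is asked."""
    glossary_lines = "\n".join(f"- {term}: {meaning}"
                               for term, meaning in glossary.items())
    knowledge = "\n".join(f"- {s}" for s in knowledge_snippets)
    return (
        "Company vocabulary:\n" + glossary_lines + "\n\n"
        "Relevant internal knowledge:\n" + knowledge + "\n\n"
        f"Where the user is in the workflow: {workflow_signal}\n\n"
        f"User question: {user_question}"
    )

# Hypothetical example: a shop-floor quality question.
prompt = build_context(
    "How do I log a quality deviation?",
    glossary={"deviation": "any reading outside the control limits"},
    knowledge_snippets=["Deviations are logged in the QMS within 24 hours."],
    workflow_signal="operator on the shop-floor tablet, end of shift",
)
print(prompt)
```

The point of the sketch is the ordering: the organization's vocabulary, knowledge, and workflow state are in place before the question arrives, so the model interprets the question in the right world.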

Most people have experienced the difference between an AI that “sounds right” and an AI that is actually right. The gulf between the two is frequently context. Here, it turns out, humans are not so different from machines.

Research Findings

Long before generative AI arrived, researchers were studying why some forms of training change behavior while others never escape the classroom. One of the clearest examples came from Larry Hirschhorn’s case studies of factory workers for the U.S. Office of Technology Assessment. A key finding was that training had limited impact when workers didn’t understand the broader context of their work.

At a factory anonymized as Cookie-Foods, operators were trained in statistical process control (SPC) and problem solving, but:

  • The SPC program’s impact on productivity was limited because “workers who collected data did not know how the data were used.”
  • The problem-solving course “had only a limited impact on worker behavior because it was not connected to organizational change.”

Hirschhorn then generalized the point: training cannot be separated from the economic and strategic context of the organization and must be linked to its structure and processes.

Just as people struggle to apply training when they don’t understand where their work fits, AI models struggle when they lack the context that gives meaning to their output. A model can write, calculate, summarize, and suggest, but without the right organizational knowledge, workflow cues, and domain-specific grounding, its answers drift. It sounds right but is not right.

Context engineering is the modern parallel to Hirschhorn’s conclusion. Workers performed better when they understood how their data contributed to quality and performance. AI performs better when it understands the organization it is trying to support. That means giving the model curated knowledge bases, business language, examples of what good looks like, and signals from the workflow so it can interpret what the user really needs.

Why Context Engineering Matters to L&D

Learning and development (L&D) has a unique opportunity to lead context engineering rather than wait for IT to define it. L&D already owns many of the assets AI needs in order to behave well: curricular frameworks, competency models, knowledge repositories, skills taxonomies, assessment data, historical learning patterns, and subject matter expert interpretations of what good looks like. Here are several ways context engineering intersects directly with L&D work:

  1. Creating shared language and meaning. An AI model must understand the organization’s vocabulary before it can teach, coach, or assist. L&D’s role in defining skills, designing curricula, and establishing performance standards gives the function a clear advantage in shaping that vocabulary.
  2. Designing context-aware learning agents. AI learning coaches are only as good as their context. When L&D supplies domain-specific examples, curated content, and relevant scenarios, the AI’s guidance becomes far more helpful.
  3. Supporting personalized and adaptive learning. Context engineering allows systems to understand a learner’s role, prior experience, skill gaps, and preferred formats, which creates meaningful personalization.
  4. Embedding learning inside work. Learning at the moment of need requires context. If an AI agent understands the workflow, the tool in use, and common stumbling blocks, it can offer guidance that feels like a seasoned coworker.
  5. Raising the standard of human + AI teaming. L&D also plays a role in teaching employees how to evaluate and refine AI output. Spotting and preventing workslop—AI output that looks polished but lacks the substance to advance the task—is now a teachable skill.
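Point 3 above, personalization from context, can be sketched in a few lines. The learner profile and content catalog below are invented stand-ins for the L&D assets the article names (skills taxonomies, competency models, assessment data); a production system would draw on those instead of hard-coded lists.

```python
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    role: str
    skill_gaps: list[str]
    preferred_format: str  # e.g. "video", "article", "exercise"

# Hypothetical catalog entries tagged with the skill they address.
CATALOG = [
    {"skill": "data literacy", "format": "video",
     "title": "Reading an SPC chart"},
    {"skill": "data literacy", "format": "article",
     "title": "Why control limits matter"},
    {"skill": "ai fluency", "format": "exercise",
     "title": "Reviewing AI drafts for workslop"},
]

def recommend(profile: LearnerProfile) -> list[str]:
    """Pick items that close a skill gap, favoring the learner's format."""
    matches = [c for c in CATALOG if c["skill"] in profile.skill_gaps]
    # Stable sort: preferred-format items float to the front.
    matches.sort(key=lambda c: c["format"] != profile.preferred_format)
    return [c["title"] for c in matches]

learner = LearnerProfile(role="line operator",
                         skill_gaps=["data literacy"],
                         preferred_format="article")
print(recommend(learner))
```

Even this toy version shows the principle: the recommendation changes not because the model got smarter, but because it was handed context about who the learner is and what good looks like for that role.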

Radical Redesign: The 2026 Skill Frontier

Context engineering also supports a more ambitious shift: radical redesign. Many organizations are still using AI to run their old processes faster. The real opportunity lies in starting from the outcome and working backward. That means rethinking workflows around what people and AI each do best. People handle nuance, judgment, creativity, connection, ethics, and design. AI handles volume, pattern recognition, retrieval, and low-stakes first drafts. When L&D teaches teams how to redesign work at this level, AI adoption stops being a technology project and becomes a transformation project.

Organizations that take these lessons seriously, invest in context, build fluency, and give teams permission to experiment will be well positioned for the adoption turn coming in 2026. The trough of despair never lasts forever. The climb out is usually faster, more surprising, and far more rewarding than the early turbulence suggests.

Karie Willyerd
Karie Willyerd, six-time Chief Learning/Talent Officer at companies such as Visa and Sun Microsystems, now advises leaders on the intersection of people development and the future of work. An award-winning author and speaker, she brings strategic insight and practical playbooks to help organizations unlearn old habits, harness new technology, and lead with confidence in a world where standing still is the biggest risk.