Opal and the Rise of Agentic AI in Digital Experience

Adam Davey
Director of Technology
What it means for Optimizely CMS teams today

Last month, a prospective client – a retail brand using an Enterprise DXP – shared that their development team spent 14 hours per week on routine content-structure requests.

  • New landing page templates.
  • Another product category variant.
  • Minor updates to checkout flow content.

None of it was complex, all of it was essential, and all of it absorbed senior developer time that should have been dedicated to their composable commerce migration. Their CTO asked a very reasonable question: "Why does every content request require engineering involvement?"

Why CMS engineering is stalling

This scenario isn't unusual. After years of delivering Optimizely implementations at Candyspace, we've watched content teams drown in the content volumes that modern digital experiences demand. The platforms became more powerful – A/B testing, personalisation, omnichannel orchestration – but the cognitive load on humans increased proportionally. We automated publishing workflows and integrated headless architectures, but the fundamental model remained: humans configure, platforms execute.

Optimizely Opal is a true step change. Not because it's AI – enterprise software has been adding AI-powered recommendations for years. But it introduces agentic capabilities that transform how people interact with platforms. Instead of configuring a personalisation rule through endless screens of dropdowns, a content strategist can brief an agent: "Create A/B tests for our spring campaign landing page, targeting high-value segments, optimising for conversion." The agent understands the intent, executes multi-step workflows, and returns the results for validation.

The distinction matters. Traditional AI assistants answer questions or generate content snippets. Agentic AI systems like Opal make autonomous decisions across multiple steps, use tools (APIs, content repositories, analytics platforms), and complete complex workflows without constant human intervention. They don't just suggest; they act.

The mechanics of agentic AI

From a technical perspective, Opal's agentic capabilities rest on maintaining context across multi-turn conversations and revising approaches based on intermediate results. This is critical when dealing with the interconnected nature of CMS operations, where changes to one element cascade through personalisation rules, A/B tests, and content relationships.

Equally important, it has direct API access to Optimizely's platform capabilities: content creation, variant management, personalisation rule configuration, and analytics interpretation. It employs reasoning models that can plan multi-step workflows – "I need to create this content, then set up the A/B test, then configure the personalisation rule, then set success metrics" – and execute them autonomously.
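To make the plan-then-execute pattern concrete, here is a minimal sketch of that kind of agentic loop: derive steps from a brief, execute each, and stop (or revise) if an intermediate result fails. Every class, function, and step name below is illustrative – this is not Opal's actual API, just the general shape of the workflow described above.

```python
# Minimal sketch of an agentic planning loop. In a real system the plan
# comes from a reasoning model and each step calls a platform API; here
# both are stubbed so the control flow is visible.

from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    done: bool = False

@dataclass
class Agent:
    log: list = field(default_factory=list)

    def plan(self, brief: str) -> list[Step]:
        # A real agent would derive this from the brief; here it is fixed.
        return [
            Step("create_content"),
            Step("set_up_ab_test"),
            Step("configure_personalisation_rule"),
            Step("set_success_metrics"),
        ]

    def execute(self, step: Step) -> bool:
        # Placeholder for a platform API call (content creation, variants, ...).
        self.log.append(step.name)
        step.done = True
        return True

    def run(self, brief: str) -> list[str]:
        for step in self.plan(brief):
            if not self.execute(step):
                break  # a real agent would revise its plan here
        return self.log

agent = Agent()
print(agent.run("A/B tests for our spring campaign landing page"))
```

The point of the sketch is the structure, not the stubs: the agent owns the ordering and error handling, while the human supplies only the brief and validates the result.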

The practical implications are significant. A content manager with Opal's 200 monthly credits (included free for Optimizely customers) can brief complex personalisation strategies in natural language rather than navigating nested configuration screens. Marketeers can ask Opal to analyse campaign performance and automatically generate variants for underperforming segments. A developer can delegate routine CMS tasks – "Set up the blog content model for our new product launch" – and focus on integration architecture.

Clean foundations or scaled chaos?

But here’s what we’re learning from early implementations: organisations that extract value from agentic AI aren’t necessarily the ones with the most sophisticated technical infrastructure. They’re the ones with clean content foundations. Opal’s effectiveness is directly tied to content model quality, taxonomic consistency, and structural clarity. If your content types are poorly defined, your metadata is inconsistent, and your URL structures are chaotic, an AI agent will amplify that chaos at machine speed.

We recently audited an Optimizely instance where different teams had created 47 content types, many of which were functionally identical but had slightly different field names and structures. Asking Opal to "create a new case study" becomes meaningless when "case study" exists as four different content types with conflicting schemas. Does the agent use the 2019 version with "customer_name" or the 2022 version with "client_title"? The "CaseStudy" type that marketing created or the "Case_Study_Template" that development built? Without clear content-domain models, autonomous agents make inconsistent decisions – and do so at impressive speed.
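The kind of audit described above can be partly automated. The sketch below flags content types whose field sets overlap heavily, suggesting duplicate schemas – the type and field names are illustrative samples, not pulled from a real instance, and in practice you would feed in your actual content model export.

```python
# Sketch of a content-model audit: flag pairs of content types whose
# field sets overlap enough to suggest they are duplicates of each other.

content_types = {
    "CaseStudy": {"title", "customer_name", "body", "hero_image"},
    "Case_Study_Template": {"title", "client_title", "body", "hero_image"},
    "BlogPost": {"title", "author", "body", "publish_date"},
}

def jaccard(a: set, b: set) -> float:
    """Similarity of two field sets: shared fields / all fields."""
    return len(a & b) / len(a | b)

def find_duplicates(types: dict, threshold: float = 0.6) -> list[tuple]:
    names = sorted(types)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if jaccard(types[a], types[b]) >= threshold
    ]

print(find_duplicates(content_types))
```

Running this on the sample data flags the two case-study types but not the blog post – exactly the class of conflict that makes "create a new case study" ambiguous for an agent.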

This reveals an uncomfortable truth: adopting agentic AI requires organisational readiness, not just technical integration. Teams used to working around inconsistent content structures – where institutional knowledge lives in people’s heads, not documented models – will struggle. Shifting from "configure and publish" to "brief and validate" means trusting the system to make good decisions, which starts with giving it solid foundations on which to reason.

Governance in the age of autonomy

The governance implications deserve equal attention. When an agent can autonomously create content variants and configure personalisation rules, who reviews its work? How do you audit AI-generated changes? What approval workflows make sense when the bottleneck you’re solving is human review capacity?

We're developing "confidence thresholds" with clients – low-risk changes (blog post variants, minor copy adjustments) execute automatically, medium-risk changes (personalisation rule modifications) trigger asynchronous review, high-risk changes (customer-facing policy content) require explicit approval before publishing.
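A confidence-threshold policy like this can live in a small routing table. The sketch below is one way to express it – the tiers mirror the three levels above, but the change kinds and handler names are illustrative, not a product feature.

```python
# Sketch of "confidence threshold" routing: each kind of agent-proposed
# change maps to a risk tier, which determines how it is handled.

from enum import Enum

class Risk(Enum):
    LOW = "auto_execute"          # blog post variants, minor copy adjustments
    MEDIUM = "async_review"       # personalisation rule modifications
    HIGH = "explicit_approval"    # customer-facing policy content

RISK_BY_KIND = {
    "blog_variant": Risk.LOW,
    "copy_adjustment": Risk.LOW,
    "personalisation_rule": Risk.MEDIUM,
    "policy_content": Risk.HIGH,
}

def route(change_kind: str) -> str:
    # Unknown change kinds default to the strictest path.
    return RISK_BY_KIND.get(change_kind, Risk.HIGH).value

print(route("blog_variant"))
print(route("personalisation_rule"))
print(route("something_new"))
```

The useful property is the default: anything the policy has not explicitly classified falls through to explicit approval, so expanding agent autonomy becomes a deliberate act of editing the table rather than an accident.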

The architectural prerequisite is clear: Opal’s capabilities shine in API-first, headless implementations where content lives independently of presentation. Tightly coupled CMS architectures that embed logic in templates limit what an agentic system can safely modify. This aligns perfectly with where Optimizely’s platform is heading: SaaS-first, composable, and headless by default.

Starting small with low-risk tasks

For technical teams evaluating Opal, start with low-consequence, high-volume tasks. Let it generate variant content for A/B tests. Use it to set up routine personalisation rules. Brief it to analyse campaign performance and suggest optimisations. Build institutional muscle around validating agent outputs rather than configuring everything manually. Then progressively expand its autonomy as confidence and governance frameworks mature.

The shift to agentic AI in content management feels inevitable, but the timeline depends far more on organisational readiness than on technology maturity. The 200 free monthly Opal credits that Optimizely provides are more than enough to explore its capabilities – provided your content foundations, team workflows, and governance models can support a fundamentally different way of working.

Is your business ready for the shift? 

The platforms are ready. The real question is whether your organisation is. If you’re running Optimizely CMS today, begin by auditing your content models for the consistency an autonomous agent requires. Review which routine tasks consume disproportionate developer time. Experiment with Opal’s 200 free monthly credits on low-risk, high-volume scenarios. But resist the urge to deploy agentic capabilities before your content foundations and governance frameworks are truly ready.

The organisations that will realise real value from agentic AI won’t necessarily be the early adopters. They’ll be the ones who understand that competitive advantage comes from operational foundations, not feature access – and act accordingly.

Tags: Optimizely, Agentic AI, Opal, CMS