London, AI Expo 2026 – Day one of AI Expo 2026 set a decisive tone for the future of enterprise automation, with industry leaders converging on a clear message: the age of “agentic AI” is arriving, but only organisations prepared with strong governance, trusted data, and resilient infrastructure will be able to deploy it safely and at scale.
Co-located with the Intelligent Automation Conference, the event brought together CIOs, AI architects, policymakers, and technology providers to explore how artificial intelligence is evolving from task-based automation into autonomous digital co-workers capable of reasoning, planning, and executing complex workflows.
While the headline vision focused on AI agents operating alongside humans, the deeper technical sessions revealed a more grounded reality. Before enterprises can embrace autonomous systems, they must address foundational challenges around data quality, governance, observability, safety, and cultural readiness.
From Scripted Automation to Autonomous Agents
One of the most prominent themes across the exhibition floor was the transition from traditional robotic process automation (RPA) to intelligent, agentic systems. Unlike legacy automation tools that execute predefined rules, agentic AI systems can adapt dynamically, make contextual decisions, and act across multiple enterprise functions.
Amal Makwana of Citi explained that these systems represent a fundamental architectural shift. Instead of automating isolated tasks, agentic AI can orchestrate entire workflows across departments, responding to real-time signals and changing conditions.
This distinction is critical. Earlier generations of automation required heavy configuration and constant maintenance. Agentic AI, by contrast, reduces friction between human intent and machine execution, allowing outcomes to be achieved with fewer manual interventions.
Scott Ivell and Ire Adewolu from DeepL described this evolution as the closing of the “automation gap” — the space between what employees want to accomplish and what systems are capable of delivering. In their view, agentic AI moves beyond productivity tools and begins to function as a true digital colleague.
However, this capability does not emerge overnight. Brian Halpin of SS&C Blue Prism cautioned that organisations must still master standard automation before introducing agentic systems. Enterprises lacking process maturity often struggle when AI is introduced too early, leading to inconsistent outcomes and operational risk.
Why Governance Is the Cornerstone of Agentic AI
As autonomy increases, so does uncertainty. One of the most pressing challenges discussed on day one was governance — specifically, how organisations can manage systems that produce non-deterministic outputs.
Steve Holyer of Informatica, joined by experts from MuleSoft and Salesforce, stressed that agentic AI cannot operate without a robust governance layer. Unlike traditional software, AI agents do not always behave predictably, which means controls must be embedded at the data, access, and decision-making levels.
Governance frameworks must answer critical questions:
- What data can AI agents access?
- How is that data validated and monitored?
- Who is accountable for AI-driven decisions?
- How are errors detected and corrected?
Without these safeguards, autonomous systems risk causing cascading failures across enterprise operations. Speakers emphasised that governance is not a compliance afterthought but a design requirement that must be implemented from the start.
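The controls the speakers described can be made concrete with a small sketch. The example below is illustrative only: the `AgentPolicy` class, its source names, and the audit log are hypothetical, not any vendor's API, but they show how access decisions can be enforced and logged at the data layer so accountability questions can be answered after the fact.

```python
# Minimal sketch of a data-access governance check for an AI agent.
# All names (AgentPolicy, can_access, the source identifiers) are
# illustrative, not a specific product's interface.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_sources: set[str]              # data the agent may read
    audit_log: list[dict] = field(default_factory=list)

    def can_access(self, source: str) -> bool:
        allowed = source in self.allowed_sources
        # Every decision is recorded, so "who accessed what, and when?"
        # can be answered during error analysis or an audit.
        self.audit_log.append({"agent": self.agent_id,
                               "source": source,
                               "granted": allowed})
        return allowed

policy = AgentPolicy("invoice-agent", {"erp.invoices", "crm.contacts"})
print(policy.can_access("erp.invoices"))   # → True
print(policy.can_access("hr.salaries"))    # → False
```

In a real deployment the allow-list would come from a central policy service rather than being hard-coded, but the principle is the same: the check and the audit trail sit in front of the data, not inside the model.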
Data Readiness: The Biggest Barrier to Deployment
Despite advances in AI models, speakers repeatedly returned to one core truth: artificial intelligence is only as good as the data it consumes.
Andreas Krause from SAP delivered a stark assessment: most AI initiatives fail not because of weak algorithms, but because enterprise data is fragmented, untrusted, or poorly contextualised. For generative AI to function reliably in business environments, it must draw from connected, accurate, and governed data sources.
This challenge becomes even more acute in agentic systems, where AI agents are expected to make decisions and take actions independently. In such cases, flawed data can lead to flawed actions at machine speed.
Meni Meller of Gigaspaces addressed one of the most visible symptoms of poor data integration: hallucinations in large language models (LLMs). He argued that enterprises must move beyond static knowledge bases and adopt real-time data retrieval strategies.
His proposed solution combined retrieval-augmented generation (RAG) with enterprise-grade semantic layers — often referred to as eRAG. This approach allows AI models to pull factual, up-to-date information directly from enterprise systems at the moment of execution, dramatically reducing inaccuracies.
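The core of that idea, retrieving governed, current facts at the moment a question is asked and grounding the model's prompt in them, can be sketched in a few lines. Everything below is a simplification for illustration: the in-memory "semantic layer" stands in for real enterprise systems, and the final model call is stubbed out.

```python
# Simplified retrieval-augmented generation (RAG) flow: fetch live
# enterprise records at query time and build a grounded prompt from them.
# The in-memory data source and the prompt format are illustrative.

LIVE_INVENTORY = {"SKU-42": {"stock": 17, "updated": "2026-02-03T09:14Z"}}

def retrieve(query: str) -> list[str]:
    # A real semantic layer would resolve business terms to governed
    # sources; here we simply match a known SKU in the query text.
    return [f"{sku}: {rec['stock']} units (as of {rec['updated']})"
            for sku, rec in LIVE_INVENTORY.items() if sku in query]

def answer(query: str) -> str:
    context = retrieve(query)
    if not context:
        return "No grounded data available."   # refuse rather than guess
    # The model sees only retrieved facts, which is what reduces the
    # chance of hallucinated figures in the response.
    prompt = "Answer using ONLY this context:\n" + "\n".join(context)
    return prompt   # in practice: call the LLM with prompt + the query

print(answer("How many units of SKU-42 are in stock?"))
```

The key design choice is the refusal branch: when retrieval returns nothing, the system declines to answer instead of letting the model improvise, which is where hallucinations typically enter.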
Real-Time Analytics as a Competitive Advantage
Data quality is only part of the equation. Enterprises must also process and analyse information fast enough to support autonomous decision-making.
A panel featuring leaders from Equifax, British Gas, and Centrica highlighted the growing importance of cloud-native, real-time analytics platforms. In highly competitive markets, the ability to act on live data — rather than historical reports — is becoming a key differentiator.
These organisations shared how legacy batch-processing architectures struggle to keep pace with modern AI workloads. Agentic systems require continuous data streams, low-latency access, and scalable infrastructure capable of handling unpredictable demand.
As AI agents increasingly influence customer interactions, pricing decisions, and operational responses, delays of even a few seconds can translate into lost revenue or reputational damage.
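The difference between batch reports and live analytics comes down to maintaining statistics over a moving window of recent events rather than recomputing over history. A minimal sketch, with illustrative numbers and window size:

```python
# Sketch of low-latency stream analytics: a sliding 60-second window
# over live events, instead of an overnight batch job. The window
# length and the price events are illustrative.
from collections import deque

class SlidingWindow:
    def __init__(self, seconds: float = 60.0):
        self.seconds = seconds
        self.events = deque()              # (timestamp, value) pairs

    def add(self, value: float, now: float):
        self.events.append((now, value))
        # Evict anything older than the window so stats stay "live".
        while self.events and now - self.events[0][0] > self.seconds:
            self.events.popleft()

    def mean(self) -> float:
        return sum(v for _, v in self.events) / len(self.events)

w = SlidingWindow(seconds=60)
for t, price in [(0, 100.0), (10, 102.0), (70, 110.0)]:
    w.add(price, now=t)

# Only the events inside the last 60 seconds contribute.
print(w.mean())   # → 106.0
```

Production systems do this at scale with stream processors rather than an in-process deque, but the eviction-on-arrival pattern is what keeps query latency flat regardless of history size.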
Embodied AI Raises New Safety Questions
While much of the discussion focused on software, day one also explored the expansion of AI into physical environments — from factories and offices to public spaces.
A panel including Edith-Clare Hall from ARIA and Matthew Howard from IEEE Robotics and Automation Society examined the risks associated with embodied AI systems such as robots and autonomous machines.
Unlike software errors, physical AI failures can result in real-world harm. Speakers stressed that safety frameworks must be established before robots are deployed alongside humans, particularly in shared environments.
Perla Maiolino from the Oxford Robotics Institute offered insight into how advanced sensing technologies are addressing these challenges. Her research focuses on Time-of-Flight (ToF) sensors and electronic skin, which enable robots to perceive both their surroundings and their own physical state.
These integrated perception systems allow machines to detect obstacles, adjust movements, and respond safely to human presence — capabilities that are essential for sectors such as manufacturing, logistics, and healthcare.
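The kind of behaviour such sensing enables can be illustrated with a simple speed policy: scale a robot's velocity down as a measured distance shrinks, stopping entirely inside a hard limit. The thresholds and the linear ramp below are hypothetical choices for illustration, not a published safety standard.

```python
# Illustrative safety logic for a robot with a distance sensor (e.g. a
# ToF sensor): slow as a human approaches, hard-stop inside a radius.
# All thresholds and the ramp shape are hypothetical.

STOP_DIST = 0.5    # metres: full stop inside this radius
SLOW_DIST = 2.0    # metres: begin slowing inside this radius
MAX_SPEED = 1.0    # metres per second

def safe_speed(distance_m: float) -> float:
    if distance_m <= STOP_DIST:
        return 0.0                      # too close: hard stop
    if distance_m >= SLOW_DIST:
        return MAX_SPEED                # clear: full speed
    # Linear ramp between the stop and slow radii.
    return MAX_SPEED * (distance_m - STOP_DIST) / (SLOW_DIST - STOP_DIST)

print(safe_speed(3.0))    # → 1.0
print(safe_speed(1.25))   # → 0.5
print(safe_speed(0.4))    # → 0.0
```

Real systems layer this with certified hardware interlocks; the point is that the perception data feeds a deterministic, inspectable control rule rather than an opaque model.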
Observability Becomes Critical in Autonomous Systems
As AI systems become more independent, understanding their behaviour becomes increasingly difficult — and increasingly important.
Yulia Samoylova from Datadog discussed how observability practices must evolve alongside AI-driven software development. Traditional monitoring tools, designed for deterministic systems, often fail to capture the internal reasoning processes of AI models.
For engineering teams, this creates new challenges:
- How do you debug an autonomous agent?
- How do you trace the cause of an unexpected decision?
- How do you ensure reliability without full predictability?
Samoylova argued that observability must extend beyond performance metrics to include decision paths, model inputs, and contextual signals. Without this visibility, enterprises risk deploying systems they cannot fully understand or control.
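What that extended visibility looks like in practice can be sketched as a trace that captures each agent step's inputs and output alongside its latency. The trace schema and helper below are illustrative, not any vendor's format:

```python
# Sketch of agent-level observability: record not just latency but the
# inputs and the chosen action for each step, so an unexpected decision
# can be traced afterwards. The schema and step names are illustrative.
import json
import time

TRACE: list[dict] = []

def traced_step(name: str, fn, **inputs):
    start = time.monotonic()
    output = fn(**inputs)
    TRACE.append({
        "step": name,
        "inputs": inputs,      # what the agent saw
        "output": output,      # what it decided
        "latency_ms": round((time.monotonic() - start) * 1000, 2),
    })
    return output

def choose_action(stock: int) -> str:
    # A toy decision rule standing in for a model call.
    return "reorder" if stock < 20 else "hold"

action = traced_step("decide_reorder", choose_action, stock=17)
print(action)                          # → reorder
print(json.dumps(TRACE, indent=2))     # full decision path for debugging
```

With every step recorded this way, "why did the agent reorder?" becomes a query over the trace rather than guesswork, which is the shift from performance monitoring to decision observability.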
Infrastructure Must Be Built for AI, Not Retrofitted
Behind every successful AI deployment lies an often-overlooked component: network infrastructure.
Julian Skeels from Expereo emphasised that traditional enterprise networks were not designed for AI workloads. Agentic systems demand high bandwidth, low latency, and constant availability — especially when operating across distributed environments.
According to Skeels, organisations must invest in sovereign, secure, and “always-on” network fabrics capable of supporting real-time AI interactions. Retrofitting outdated networks often leads to performance bottlenecks that undermine AI initiatives.
As AI agents increasingly rely on cloud platforms, edge computing, and cross-border data flows, network reliability becomes a strategic concern rather than an IT detail.
The Human Factor: The Illusion of AI Readiness
Despite technological progress, several speakers warned that the biggest obstacles to AI adoption are often cultural rather than technical.
Paul Fermor of IBM Automation introduced the concept of the “illusion of AI readiness.” Many organisations assume that adopting AI tools is simply an extension of existing automation programmes, underestimating the scale of organisational change required.
Jena Miller reinforced this perspective, arguing that AI strategies must be human-centred. Employees need to understand, trust, and feel empowered by AI systems — otherwise adoption stalls and return on investment disappears.
Training, transparency, and clear communication were repeatedly cited as essential components of successful AI transformation. Without workforce buy-in, even the most advanced systems fail to deliver value.
Strategic Choices: Build or Buy?
Ravi Jay from Sanofi highlighted another critical decision facing enterprise leaders: determining where to build proprietary AI capabilities and where to rely on external platforms.
This choice involves not only cost considerations but also ethical, operational, and regulatory factors. Leaders must decide which systems represent core competitive advantages and which are better sourced from established vendors.
As Jay noted, these decisions should be made early. Retrofitting governance, ethics, and compliance into mature systems is significantly more difficult than designing them in from the outset.
What CIOs Should Take Away from Day One
The sessions and discussions from the first day of AI Expo 2026 made one thing clear: agentic AI is no longer theoretical, but its successful deployment depends on preparation.
For CIOs and technology leaders, the priorities are increasingly well-defined:
- Establish strong data governance frameworks that support retrieval-augmented generation and real-time access to trusted information
- Invest in scalable, low-latency infrastructure designed specifically for AI workloads
- Implement observability practices that provide insight into autonomous system behaviour
- Address safety and ethics early, particularly for embodied AI
- Develop human-centred adoption strategies alongside technical implementation
While AI technology continues to advance rapidly, day one of AI Expo 2026 underscored a more measured reality: autonomy without readiness is risk, not innovation. Enterprises that build strong foundations today will be the ones able to unlock the full potential of agentic AI tomorrow.