Franny Hsiao of Salesforce on What It Really Takes to Scale Enterprise AI

As enterprises race to deploy artificial intelligence at scale, a familiar pattern continues to emerge: promising pilots stall before reaching production. While generative AI demos are easy to create, turning them into dependable, enterprise-grade systems remains one of the most complex challenges facing organisations today.

According to Franny Hsiao, EMEA Leader of AI Architects at Salesforce, the problem rarely lies with the AI models themselves. Instead, failures are rooted in architectural blind spots—particularly around data infrastructure, governance, and operational readiness.

Speaking ahead of a major AI industry gathering in London in 2026, Hsiao shared insights into why so many enterprise AI initiatives collapse under real-world pressure and what organisations must do differently to build systems that last.


Why Enterprise AI Pilots Fail to Scale

The majority of enterprise AI failures do not occur during experimentation. They happen during the transition from pilot to production.

“In most cases, the root cause is architectural,” Hsiao explains. “Teams build AI pilots without designing for production-grade data systems or governance from day one.”

Early-stage AI projects are often developed in controlled environments using limited, clean datasets. These conditions create an illusion of success that rarely survives contact with enterprise reality.

Hsiao refers to this as the “pristine island” problem—a scenario where AI systems are developed in isolation from the complexity of real organisational data.

“These pilots operate on carefully curated data, simplified workflows, and minimal integration,” she says. “But enterprise data is messy. It’s fragmented across systems, inconsistent in quality, and constantly changing.”

When organisations attempt to scale these pilots without re-architecting the underlying data infrastructure, the systems quickly degrade. Latency increases, outputs become inconsistent, and trust erodes.

“The moment these models encounter real-world volume, variability, and integration challenges, they break,” Hsiao warns. “And once users stop trusting the system, adoption collapses.”


Building AI Systems That Can Survive the Real World

According to Hsiao, the organisations that succeed in scaling AI take a fundamentally different approach. Rather than treating governance as an afterthought, they embed it across the entire lifecycle.

“Successful teams bake in end-to-end observability, monitoring, and guardrails from the start,” she says. “That’s what allows leaders to understand not just whether the AI works, but how it’s being used, how it’s performing, and where it’s failing.”

This visibility is essential for identifying data gaps, performance bottlenecks, and user adoption issues before they become critical.

Without it, enterprises risk deploying AI systems that technically function but fail to deliver reliable business value.


The Latency Challenge of Large Reasoning Models

As enterprises adopt increasingly powerful reasoning models, a new challenge emerges: latency.

Advanced models can deliver deeper insights, but they often require heavy computation, slowing response times and frustrating users. In enterprise environments, even small delays can derail adoption.

Salesforce has addressed this challenge by focusing on what Hsiao calls “perceived responsiveness.”

Rather than forcing users to wait for a complete response, Salesforce’s Agentforce platform delivers output progressively through streaming.

“This approach allows users to see responses unfold in real time, even while complex reasoning continues in the background,” Hsiao explains. “It dramatically reduces perceived latency, which is one of the biggest barriers to production AI.”
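
As a rough illustration of the pattern (and not Agentforce's actual implementation), the sketch below streams partial output to the user as soon as each chunk is produced, so the interface feels responsive even while reasoning continues; the `model_chunks` generator is a hypothetical stand-in for whatever streaming API the underlying model exposes.

```python
import time
from typing import Iterator

def model_chunks(prompt: str) -> Iterator[str]:
    """Stand-in for a model's streaming API: yields partial output as it is generated."""
    for piece in ["Checking the order history... ", "The delay is caused by ", "a failed payment retry."]:
        time.sleep(0.3)  # simulate ongoing reasoning/computation
        yield piece

def answer_with_streaming(prompt: str) -> str:
    """Render each chunk immediately instead of waiting for the full response."""
    full_response = []
    for chunk in model_chunks(prompt):
        print(chunk, end="", flush=True)  # user sees progress in real time
        full_response.append(chunk)
    print()
    return "".join(full_response)

if __name__ == "__main__":
    answer_with_streaming("Why is this order delayed?")
```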


Design as a Trust Mechanism

Performance alone is not enough. Transparency plays a critical role in how users perceive and trust AI systems.

Hsiao emphasises that thoughtful design can actively reinforce confidence in AI outputs.

“Showing users what the system is doing—whether that’s reasoning steps, tools being used, or progress indicators—helps set expectations,” she says. “It keeps users engaged and reassures them that the system is working deliberately.”

Elements such as progress bars, spinners, and visual indicators are not cosmetic. They communicate intent, reduce frustration, and strengthen trust.
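
A minimal sketch of that idea, using hypothetical event names rather than any real Salesforce API: the agent emits typed status events (planning step, tool call, progress) alongside its answer, and the front end renders them as the indicators Hsiao describes.

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class StatusEvent:
    kind: str     # e.g. "plan", "tool_call", "progress", "answer"
    detail: str

def run_agent(question: str) -> Iterator[StatusEvent]:
    """Hypothetical agent loop that surfaces what it is doing, not just the final answer."""
    yield StatusEvent("plan", "Break the question into lookup + summarisation")
    yield StatusEvent("tool_call", "Searching the knowledge base")
    yield StatusEvent("progress", "2 of 3 steps complete")
    yield StatusEvent("answer", "The warranty covers parts for 24 months.")

for event in run_agent("What does the warranty cover?"):
    # A real UI would map these to spinners, step lists, or progress bars.
    print(f"[{event.kind}] {event.detail}")
```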

Salesforce also strategically selects smaller models where appropriate and applies explicit response-length constraints to maintain speed and consistency.

“The goal is not to use the biggest model everywhere,” Hsiao explains. “It’s to use the right model for the job.”
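
That routing logic can be sketched roughly as follows, with illustrative thresholds and model names rather than Salesforce's actual policy: simple requests go to a small, fast model with a tight response-length cap, and only genuinely complex tasks are escalated to a larger reasoning model.

```python
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str
    max_output_tokens: int  # explicit response-length constraint

SMALL_FAST = ModelChoice("small-chat-model", max_output_tokens=256)
LARGE_REASONING = ModelChoice("large-reasoning-model", max_output_tokens=1024)

def pick_model(task: str, needs_multistep_reasoning: bool) -> ModelChoice:
    """Use the right model for the job rather than the biggest model everywhere."""
    if needs_multistep_reasoning or len(task) > 2000:
        return LARGE_REASONING
    return SMALL_FAST

print(pick_model("Summarise this case in two sentences.", needs_multistep_reasoning=False).name)
```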


Why Offline AI Matters More Than Ever

For many industries, continuous cloud connectivity cannot be assumed. Field service, utility, logistics, and manufacturing teams often work in environments where network access is unreliable or unavailable.

“For a large portion of our enterprise customers, offline capability is not optional,” says Hsiao. “It’s a requirement.”

This has driven Salesforce’s focus on edge AI—bringing intelligence directly onto devices.

In field service scenarios, technicians can use on-device AI to diagnose problems without a live connection.

“A technician might photograph a damaged component, scan a serial number, or capture an error code while offline,” Hsiao explains. “An on-device language model can immediately identify the issue and provide troubleshooting guidance from cached knowledge.”

Once connectivity is restored, the system automatically synchronises data back to the cloud, ensuring consistency across enterprise systems.

“This approach ensures productivity never stops,” Hsiao says. “Work continues even in the most disconnected environments.”
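
The offline-first pattern she describes might look roughly like the sketch below, a simplified, hypothetical outline rather than the Salesforce field service implementation: the device answers from cached knowledge while disconnected (the on-device model is reduced here to a simple lookup), queues its results locally, and synchronises them to the cloud once connectivity returns.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Diagnosis:
    serial_number: str
    issue: str
    guidance: str

@dataclass
class EdgeAssistant:
    cached_knowledge: dict                          # troubleshooting content stored on the device
    pending_sync: List[Diagnosis] = field(default_factory=list)

    def diagnose_offline(self, serial_number: str, error_code: str) -> Diagnosis:
        """Resolve the issue from cached knowledge; no network call is made."""
        guidance = self.cached_knowledge.get(error_code, "Escalate when back online.")
        result = Diagnosis(serial_number, error_code, guidance)
        self.pending_sync.append(result)            # queue locally until connectivity returns
        return result

    def sync(self, upload) -> None:
        """Push queued diagnoses to the cloud once a connection is available."""
        while self.pending_sync:
            upload(self.pending_sync.pop(0))

assistant = EdgeAssistant(cached_knowledge={"E42": "Replace the pressure valve seal."})
print(assistant.diagnose_offline("SN-1009", "E42").guidance)
assistant.sync(upload=lambda d: print(f"Synced {d.serial_number} to the cloud"))
```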


The Growing Importance of Edge Intelligence

Hsiao expects investment in edge AI to accelerate due to several advantages:

  • Ultra-low latency
  • Improved privacy and security
  • Reduced bandwidth costs
  • Greater energy efficiency

As models become more efficient, on-device intelligence will increasingly complement cloud-based systems rather than replace them.


Governance in an Age of Autonomous Agents

As AI agents gain autonomy, governance becomes more—not less—important.

“These systems are not set-and-forget,” Hsiao stresses. “Scaling enterprise AI requires clearly defining when humans must remain in control.”

Salesforce approaches this through human-in-the-loop governance, particularly at what Hsiao calls “high-stakes gateways.”

These include:

  • Any action that creates, updates, or deletes data
  • Verified customer or contact actions
  • High-impact decisions that could affect compliance, security, or trust

“For these scenarios, we require explicit human confirmation,” Hsiao explains. “This isn’t about slowing things down—it’s about accountability.”
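
In code, such a gateway might look something like the following sketch (hypothetical names, not the Agentforce API): any create, update, or delete action is intercepted and only executed after explicit human confirmation.

```python
HIGH_STAKES_ACTIONS = {"create", "update", "delete"}

def requires_confirmation(action: str) -> bool:
    """High-stakes gateway: data-changing actions always need a human in the loop."""
    return action in HIGH_STAKES_ACTIONS

def execute_with_gateway(action: str, payload: dict, confirm) -> str:
    if requires_confirmation(action) and not confirm(action, payload):
        return "blocked: awaiting human confirmation"
    # In a real system this would call the downstream CRM or API.
    return f"executed {action} on {payload.get('record_id', 'unknown record')}"

# Example: the agent proposes deleting a record; a human must approve it first.
result = execute_with_gateway(
    "delete",
    {"record_id": "contact-123"},
    confirm=lambda action, payload: input(f"Approve {action} of {payload['record_id']}? (y/n) ") == "y",
)
print(result)
```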


Turning Oversight into Continuous Learning

Rather than treating human oversight as a limitation, Salesforce views it as a learning mechanism.

“Human feedback allows agents to improve over time,” Hsiao says. “It creates a system of collaborative intelligence rather than unchecked automation.”

This feedback loop ensures that AI systems evolve responsibly while remaining aligned with business intent.


Seeing Inside the Black Box: Agent Observability

Trusting AI agents requires visibility into how decisions are made.

To address this, Salesforce developed a Session Tracing Data Model (STDM) that captures detailed, step-by-step logs of agent activity.

“This includes every user input, planning step, tool call, data retrieval, response, timing metric, and error,” Hsiao explains.

This level of transparency enables enterprises to:

  • Analyse agent adoption
  • Optimise performance
  • Monitor system health
  • Identify bottlenecks and failures

“Observability becomes mission control,” Hsiao says. “It’s how teams understand what’s happening across all their agents in one place.”
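
A simplified sketch of what such a trace record could capture, not the actual STDM schema: each agent step is logged with its type, detail, timing, and any error, which is what makes adoption analysis, performance tuning, and failure diagnosis possible.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json
import time

@dataclass
class TraceStep:
    session_id: str
    step_type: str           # "user_input", "plan", "tool_call", "retrieval", "response"
    detail: str
    started_at: float
    duration_ms: float
    error: Optional[str] = None

def log_step(step: TraceStep) -> None:
    """Emit one structured record per step; a real system would write to a data store."""
    print(json.dumps(asdict(step)))

start = time.time()
log_step(TraceStep("sess-001", "user_input", "Why was my refund declined?", start, 0.0))
log_step(TraceStep("sess-001", "tool_call", "lookup_refund_policy", start, 120.0))
log_step(TraceStep("sess-001", "response", "Refunds require a receipt within 30 days.", start, 850.0))
```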


Standardising How AI Agents Communicate

As enterprises deploy agents from multiple vendors, interoperability becomes essential.

“Agents cannot operate in isolation,” Hsiao argues. “They need a shared language.”

Salesforce supports open-source protocols such as:

  • MCP (Model Context Protocol)
  • A2A (Agent-to-Agent Protocol)

“These standards prevent vendor lock-in and allow agents to collaborate across ecosystems,” she says.
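
The value of a shared protocol is easiest to see with a toy message envelope; this illustrates the idea only and is not the actual MCP or A2A wire format. Every agent, whatever its vendor, sends and receives the same structured shape.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AgentMessage:
    sender: str       # which agent produced the message
    recipient: str    # which agent should act on it
    task: str         # what is being requested
    payload: dict     # structured data the task needs

def send(message: AgentMessage) -> str:
    """Serialise to a common format any compliant agent can parse."""
    return json.dumps(asdict(message))

wire = send(AgentMessage("billing-agent", "support-agent", "summarise_dispute", {"case_id": "C-77"}))
print(wire)
print(json.loads(wire)["task"])
```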

However, communication alone is not enough if agents interpret information differently.


Solving the Semantic Gap in Enterprise AI

To address fragmented meaning across systems, Salesforce co-founded Open Semantic Interchange (OSI).

The goal is to standardise semantics so that an agent in one platform can accurately understand the intent and context of another.

“Without shared semantics, agents may exchange data but still misunderstand each other,” Hsiao explains. “OSI helps ensure that meaning travels with the message.”
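
A toy example of that idea, not the OSI specification itself: each field in a payload carries a reference to a shared definition, so the receiving agent resolves "revenue" to exactly the meaning the sender intended.

```python
# Hypothetical shared glossary both agents have agreed on (a stand-in for a semantic standard).
SHARED_DEFINITIONS = {
    "revenue.net": "Recognised revenue after discounts and returns, in EUR",
    "customer.tier": "Contract tier as defined in the current pricing catalogue",
}

message = {
    "fields": {
        "revenue": {"value": 125000, "meaning": "revenue.net"},
        "tier": {"value": "gold", "meaning": "customer.tier"},
    }
}

# The receiving agent resolves each value against the shared definition before acting on it.
for name, field in message["fields"].items():
    print(f"{name} = {field['value']} ({SHARED_DEFINITIONS[field['meaning']]})")
```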


The Next Bottleneck: Agent-Ready Data

Looking ahead, Hsiao believes the biggest challenge in scaling enterprise AI will shift from models to data.

“Many organisations still struggle with legacy systems where data is difficult to search, reuse, or contextualise,” she says.

Traditional ETL pipelines are too rigid for agent-driven systems that require dynamic access to context.

The future, according to Hsiao, lies in agent-ready data architectures—systems designed to make enterprise knowledge searchable, contextual, and reusable in real time.

“This is what enables hyper-personalised experiences,” she says. “Agents need access to the right data, at the right moment, with the right context.”
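
One way to picture the contrast with a rigid, batch-oriented pipeline is the hedged sketch below, with made-up data and a naive keyword match standing in for semantic search; it is not a Salesforce architecture. The agent queries an index at the moment of need and gets back snippets that already carry the source and freshness context required to use them.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ContextSnippet:
    text: str
    source: str       # where the knowledge came from
    updated: str      # freshness matters for real-time personalisation

KNOWLEDGE = [
    ContextSnippet("Customer prefers email contact after 5pm.", "CRM notes", "2025-11-02"),
    ContextSnippet("Open case: delayed delivery on order 884.", "Service records", "2025-11-20"),
    ContextSnippet("Loyalty tier renewed at gold level.", "Billing", "2025-10-01"),
]

def retrieve(query: str, top_k: int = 2) -> List[ContextSnippet]:
    """Naive keyword scoring standing in for semantic search over agent-ready data."""
    scored = [(sum(w in s.text.lower() for w in query.lower().split()), s) for s in KNOWLEDGE]
    return [s for score, s in sorted(scored, key=lambda x: -x[0])[:top_k] if score > 0]

for snippet in retrieve("why is the delivery delayed"):
    print(f"{snippet.source} ({snippet.updated}): {snippet.text}")
```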


Scaling AI Is an Infrastructure Problem, Not a Model Race

Hsiao concludes that the coming year will not be defined by who builds the largest or newest model.

“The real differentiator will be orchestration, governance, and data infrastructure,” she says. “That’s what allows agentic systems to operate reliably at scale.”

Enterprises that invest in these foundations today will be best positioned to turn AI from an experiment into a durable competitive advantage.


Salesforce is a major sponsor of this year’s London event and will host multiple expert sessions, including insights from Franny Hsiao. Attendees can visit Salesforce at stand #163 to learn more about the company’s approach to enterprise AI architecture and agent-based systems.
