Enterprise artificial intelligence is entering a new phase. After several years dominated by experimental chatbots and proof-of-concept pilots, organisations are now embedding AI directly into their operational core. According to new data released by Databricks, this shift is being driven by the rapid adoption of agentic AI systems—architectures in which models plan, reason, and execute tasks autonomously rather than simply responding to prompts.
The findings suggest a decisive change in how businesses deploy AI at scale. Instead of isolated tools designed for limited interactions, enterprises are increasingly building intelligent workflows that span data, applications, and infrastructure. In this new model, AI is no longer an add-on. It is becoming part of the system itself.
From experimental chatbots to operational AI
The first wave of generative AI adoption created enormous expectations. Enterprises rushed to deploy chat interfaces powered by large language models (LLMs), hoping for productivity gains and customer engagement improvements. While these tools delivered value in specific areas, many organisations struggled to move beyond pilots.
Common challenges emerged: limited integration with enterprise systems, unclear governance, rising costs, and difficulty measuring business impact. For many technology leaders, generative AI remained something to demonstrate rather than something to depend on.
Databricks’ latest platform telemetry suggests that phase is ending.
The report draws on usage data from more than 20,000 organisations worldwide, including 60% of the Fortune 500. It shows enterprises investing in architectures where AI agents operate as autonomous actors inside workflows. These systems do more than retrieve information or generate text. They make decisions, coordinate tasks, and interact directly with data and tools.
Between June and October 2025 alone, multi-agent workflow usage on Databricks grew by 327%, signalling that agentic AI is moving rapidly from experimentation into production.
What agentic AI means for enterprises
Agentic AI refers to systems in which AI models can independently plan actions, invoke tools, and execute multi-step workflows with minimal human oversight. Rather than relying on a single model to handle every request, agentic architectures divide work across specialised agents, each responsible for a defined function.
This approach mirrors how human organisations operate. A manager does not perform every task personally. Instead, they interpret intent, allocate responsibilities, and ensure outcomes meet standards. Agentic AI applies the same logic to software systems.
In practice, this allows enterprises to automate complex processes that previously required human coordination—such as compliance checks, document processing, data validation, and system configuration.
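The pattern described above can be sketched in a few lines: a planner decomposes a request into subtasks, and each subtask is dispatched to a specialised handler. This is a minimal illustration of the idea, not any vendor's implementation; all function and handler names here are hypothetical.

```python
# Toy agentic loop: plan a request into subtasks, then dispatch each
# subtask to a specialised handler, threading shared context between steps.

def plan(request: str) -> list[str]:
    """Illustrative planner: map a request to an ordered list of subtasks."""
    if "invoice" in request:
        return ["extract_fields", "validate", "file"]
    return ["answer"]

# Each "agent" is just a function responsible for one defined task.
HANDLERS = {
    "extract_fields": lambda ctx: ctx | {"fields": {"total": 120.0}},
    "validate": lambda ctx: ctx | {"valid": ctx["fields"]["total"] > 0},
    "file": lambda ctx: ctx | {"status": "filed"},
    "answer": lambda ctx: ctx | {"status": "answered"},
}

def run_agent(request: str) -> dict:
    """Execute the planned subtasks in order and return the final context."""
    ctx: dict = {"request": request}
    for step in plan(request):
        ctx = HANDLERS[step](ctx)
    return ctx

result = run_agent("process this invoice")
```

In a production system each handler would be a model call or tool invocation rather than a lambda, but the shape is the same: the division of labour lives in the plan, not in a single monolithic model.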
The rise of the Supervisor Agent
At the centre of this shift is the Supervisor Agent, which Databricks identifies as the fastest-growing agentic use case on its platform.
Launched in July 2025, the Supervisor Agent acts as an orchestrator. It interprets incoming requests, breaks them down into subtasks, applies policy and compliance checks, and routes work to specialised sub-agents or tools. By October, it accounted for 37% of all agent usage on the platform.
Rather than replacing specialised models, the Supervisor Agent coordinates them. One agent may retrieve documents, another may verify regulatory requirements, while a third generates a response or executes a transaction. The supervisor ensures the workflow remains aligned with business rules and security constraints.
Technology companies have been the earliest adopters, building nearly four times as many multi-agent systems as any other industry. However, adoption is spreading quickly across financial services, healthcare, retail, and manufacturing.
In financial services, for example, a single agentic workflow can retrieve client documentation, validate it against regulatory requirements, generate disclosures, and deliver a compliant response—without manual intervention.
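A supervisor workflow like the one just described might be sketched as follows. This is a hedged illustration of the orchestration pattern only: the sub-agents, the policy rule, and the compliance check are invented stand-ins, not Databricks APIs.

```python
# Illustrative supervisor: apply a policy gate, then route work through
# specialised sub-agents (retrieval, compliance, generation) in sequence.

def retrieve_docs(client_id: str) -> list[str]:
    """Stand-in retrieval sub-agent: fetch client documentation."""
    return [f"kyc-{client_id}.pdf"]

def check_regulations(docs: list[str]) -> bool:
    """Stand-in compliance sub-agent: validate documents against a rule."""
    return all(doc.endswith(".pdf") for doc in docs)

def generate_disclosure(client_id: str) -> str:
    """Stand-in generation sub-agent: produce the compliant response."""
    return f"Disclosure for client {client_id}"

def supervisor(request: dict) -> dict:
    """Interpret the request, enforce policy, and coordinate sub-agents."""
    # Policy gate: refuse requests missing a client identifier.
    if "client_id" not in request:
        return {"status": "rejected", "reason": "missing client_id"}
    docs = retrieve_docs(request["client_id"])
    if not check_regulations(docs):
        return {"status": "rejected", "reason": "compliance check failed"}
    return {"status": "ok", "disclosure": generate_disclosure(request["client_id"])}
```

The key design point is that the supervisor never generates content itself: it only sequences sub-agents and enforces the business rules between steps, which is what keeps the workflow auditable.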
Infrastructure pressure: why agentic AI changes everything underneath
As AI agents shift from answering questions to executing tasks, they are placing unprecedented demands on enterprise infrastructure.
Traditional Online Transaction Processing (OLTP) systems were designed for predictable, human-initiated interactions. Agentic systems break those assumptions. AI agents generate continuous, high-frequency read and write operations, often spinning up and tearing down environments programmatically to test code, simulate outcomes, or deploy changes.
Databricks’ telemetry reveals how dramatic this shift has been:
- Two years ago, AI agents created just 0.1% of new databases
- Today, they create 80% of new databases
- 97% of testing and development environments are now provisioned by AI
This level of automation has transformed development workflows. Engineers and so-called “vibe coders” can now create ephemeral environments in seconds instead of hours, dramatically reducing friction in experimentation and deployment.
Since the public preview of Databricks Apps, more than 50,000 data and AI applications have been created on the platform, with growth accelerating at 250% over the past six months.
Why enterprises are rejecting single-model AI strategies
As agentic AI becomes more deeply embedded, enterprises are also becoming more cautious about vendor lock-in. Databricks’ data shows a clear move toward multi-model strategies, where organisations use several different LLM families simultaneously.
By October 2025:
- 78% of enterprises were using two or more LLM families
- 59% were using three or more, up from 36% just two months earlier
Model families such as OpenAI's GPT, Anthropic's Claude, Meta's Llama, and Google's Gemini are being deployed side by side, with workloads routed dynamically based on complexity, latency requirements, and cost.
This approach allows enterprises to use smaller, more cost-effective models for routine tasks while reserving frontier models for complex reasoning or high-stakes decisions. Retail companies are leading the way, with 83% using multiple model families to balance performance and margins.
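A routing layer of the kind described above can be very simple. The sketch below is an assumption-laden illustration: the tier assignments, cost figures, and routing rules are invented for the example, even though the model family names come from the article.

```python
# Illustrative model router: cheap models for routine tasks, frontier
# models for complex reasoning or high-stakes decisions. Tiers, costs,
# and rules here are assumptions for illustration only.

MODEL_TIERS = {
    "small": {"model": "Llama", "cost_per_1k_tokens": 0.0002},
    "frontier": {"model": "Claude", "cost_per_1k_tokens": 0.0150},
}

def route(task: dict) -> str:
    """Pick a model family based on task complexity and stakes."""
    if task.get("complexity", "low") == "high" or task.get("high_stakes"):
        return MODEL_TIERS["frontier"]["model"]
    return MODEL_TIERS["small"]["model"]
```

Real routers weigh latency budgets and per-request cost ceilings as well, but the principle is the same: the routing decision, not the model choice, becomes the unit of optimisation.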
As a result, unified platforms capable of integrating proprietary and open-source models are quickly becoming a baseline requirement for enterprise AI stacks.
Real-time AI becomes the default
Unlike the batch-oriented workflows that defined early big data platforms, agentic AI operates almost entirely in real time.
Databricks reports that 96% of inference requests are now processed in real time, reflecting the immediacy required for agent-driven workflows.
In sectors where latency directly impacts value, the shift is especially pronounced:
- Technology companies process 32 real-time requests for every batch request
- Healthcare and life sciences show a 13:1 real-time to batch ratio
These workloads include applications such as system monitoring, clinical decision support, fraud detection, and dynamic pricing—use cases where delayed responses reduce effectiveness or increase risk.
For IT leaders, this trend underscores the need for inference infrastructure that can scale elastically while maintaining consistent performance under load.
Governance: from perceived bottleneck to deployment accelerator
One of the most striking findings in Databricks’ report challenges a common assumption: that governance slows innovation.
In reality, organisations with strong AI governance frameworks are deploying more AI—not less.
Companies using governance tools put over 12 times more AI projects into production than those without formal controls. Similarly, organisations using systematic evaluation tools achieve nearly six times more production deployments.
The reason is confidence. Clear guardrails around data usage, access controls, rate limits, and evaluation criteria allow stakeholders to approve deployments without fear of unquantified risk. Without these safeguards, many AI initiatives stall indefinitely at the proof-of-concept stage.
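Guardrails of the kind listed above are often enforced programmatically before an agent action runs. The following is a minimal sketch under stated assumptions: the scope names, rate limit, and window size are all illustrative, not drawn from any real governance product.

```python
# Illustrative guardrail check: before an agent action executes, verify
# its access scope and a per-agent rate limit. All thresholds and scope
# names are assumptions for illustration.
import time
from collections import defaultdict

RATE_LIMIT = 5              # max actions per window (illustrative)
WINDOW_SECONDS = 60
ALLOWED_SCOPES = {"read:docs", "write:reports"}

_calls: dict[str, list[float]] = defaultdict(list)

def guard(agent_id: str, scope: str) -> bool:
    """Return True only if the action passes access and rate checks."""
    if scope not in ALLOWED_SCOPES:
        return False
    now = time.monotonic()
    # Keep only calls inside the current window, then check the limit.
    recent = [t for t in _calls[agent_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        return False
    recent.append(now)
    _calls[agent_id] = recent
    return True
```

Because every action passes through a check like this, stakeholders can reason about the worst case an agent is capable of, which is precisely the confidence the report describes.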
In highly regulated industries, governance is not optional. But Databricks’ data shows it is also a competitive advantage, enabling faster and broader adoption of agentic systems.
Where agentic AI delivers value today
Despite the futuristic perception of autonomous agents, most enterprise value today comes from automating routine but essential work.
Databricks’ analysis highlights sector-specific patterns:
- Manufacturing and automotive: 35% of AI use cases focus on predictive maintenance
- Healthcare and life sciences: 23% involve medical literature synthesis and analysis
- Retail and consumer goods: 14% focus on market intelligence and demand forecasting
Across industries, 40% of top AI use cases address customer-centric functions such as support, onboarding, and advocacy. These applications reduce response times, lower costs, and improve consistency—benefits that are measurable and scalable.
Crucially, these “boring” use cases build the operational foundation required for more advanced agentic systems in the future.
A shift in executive mindset
For enterprise leaders, the lesson is clear: competitive advantage no longer comes from simply adopting AI tools, but from how those tools are engineered, governed, and integrated.
Dael Williamson, EMEA CTO at Databricks, says the conversation has fundamentally changed.
“For businesses across EMEA, the focus has moved from AI experimentation to operational reality,” Williamson explains. “AI agents are already running critical parts of enterprise infrastructure, but the organisations seeing real value are those treating governance and evaluation as foundations, not afterthoughts.”
He adds that long-term differentiation depends on openness and interoperability.
“Open platforms allow organisations to apply AI to their own data and workflows, rather than relying on embedded features that deliver short-term productivity but little strategic advantage,” Williamson says.
In regulated markets especially, the combination of control, transparency, and flexibility is what separates successful deployments from stalled pilots.
Agentic AI as enterprise infrastructure
Databricks’ data paints a picture of AI’s next phase: less visible, more embedded, and far more consequential.
Agentic systems are not replacing human workers wholesale. Instead, they are absorbing the coordination, execution, and verification tasks that slow organisations down. As these systems mature, AI becomes less of a standalone product and more like electricity—essential, ubiquitous, and largely invisible.
For enterprises, the challenge is no longer whether to adopt AI, but whether their infrastructure, governance, and operating models are ready for a world where software systems think, plan, and act on their behalf.
In that context, agentic AI is not just another trend. It represents a structural shift in how modern organisations function.