Artificial Intelligence is rapidly transforming the modern enterprise landscape, but behind every successful AI implementation lies one essential ingredient — data. Businesses today generate massive volumes of information from customers, operations, devices, applications, and internal workflows. Yet despite having access to large amounts of first-party data, many organisations still struggle to turn that information into actionable intelligence.
The often-repeated phrase “data is the new oil” continues to dominate conversations in the technology sector. However, unlike oil, data cannot simply be extracted and used immediately. Enterprise data is frequently fragmented across departments, stored in incompatible systems, and governed by outdated infrastructure that limits interoperability. As a result, organisations face major challenges when preparing their environments for AI adoption at scale.
At the same time, enterprises must make increasingly difficult decisions around AI infrastructure. Should AI workloads run in the cloud or on local systems? How can organisations balance rising compute costs with operational efficiency? What role will enterprise IT teams play in a future increasingly dominated by autonomous AI systems?
Ahead of the AI & Big Data Expo at the San Jose McEnery Convention Center on May 18–19, Jerome Gabryszewski, AI & Data Science Business Development Manager at HP, shared insights into how enterprises can overcome these challenges. His perspective highlights the growing importance of AI governance, local compute infrastructure, and secure data management in the modern business environment.
The Real Challenge Behind Enterprise AI Adoption
While AI adoption continues to accelerate globally, many enterprises underestimate the complexity involved in preparing their data ecosystems for AI integration.
According to Jerome Gabryszewski, one of the biggest obstacles businesses face is not necessarily the AI technology itself but the organisational and architectural debt surrounding their data infrastructure.
Many companies operate with:
- Fragmented data ownership across departments
- Inconsistent data formats and schemas
- Legacy systems lacking interoperability
- Poor governance frameworks
- Siloed operational data
These issues create friction when organisations attempt to automate data ingestion for AI models.
Moving from manual to automated data ingestion may sound straightforward in theory, but in practice, businesses often discover that their underlying systems were never designed to support modern AI workflows.
Jerome explained that the technical challenges of automation are frequently smaller than the governance and integration work required beforehand.
Before AI can generate meaningful business outcomes, companies must first organise, standardise, and govern their data properly.
Without a strong data foundation, even the most advanced AI models will struggle to deliver accurate or reliable insights.
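To make the data-preparation problem concrete, the sketch below shows one simple building block: validating incoming records against an agreed schema before they are allowed into an AI pipeline. It is a minimal illustration only, with hypothetical field names and rules rather than a description of any particular HP or customer system.

```python
# Minimal sketch: validate incoming records against an agreed schema before
# they reach an AI training or retrieval pipeline.
# All field names and rules here are hypothetical examples.
from datetime import datetime

REQUIRED_FIELDS = {
    "customer_id": str,
    "event_type": str,
    "timestamp": str,   # ISO 8601 expected
    "amount": float,
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record may be ingested."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}: {type(record[field]).__name__}")
    if isinstance(record.get("timestamp"), str):
        try:
            datetime.fromisoformat(record["timestamp"])
        except ValueError:
            problems.append("timestamp is not ISO 8601")
    return problems

records = [
    {"customer_id": "C-102", "event_type": "purchase",
     "timestamp": "2025-05-01T09:30:00", "amount": 49.90},
    {"customer_id": "C-103", "event_type": "purchase", "amount": "n/a"},
]

clean, quarantined = [], []
for rec in records:
    if validate_record(rec):
        quarantined.append(rec)   # held back for human review
    else:
        clean.append(rec)

print(f"accepted: {len(clean)}, quarantined for review: {len(quarantined)}")
```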
Why Data Governance Is More Important Than Ever
As AI systems become increasingly autonomous, data governance is emerging as one of the most critical priorities for enterprises.
Modern AI models are capable of continuously learning and updating themselves over time. While this creates opportunities for improved performance and adaptability, it also introduces serious risks if not managed correctly.
Two of the biggest concerns include:
- Concept drift
- Data poisoning
Understanding Concept Drift
Concept drift occurs when the data patterns AI models were originally trained on begin to change over time. As business environments evolve, customer behaviour shifts, or operational conditions change, AI systems may gradually become less accurate.
Without proper monitoring, organisations may continue relying on outdated AI models that produce unreliable decisions.
Jerome emphasised that enterprises should treat AI model updates in the same way they manage software deployments.
Nothing should move into production without validation and oversight.
To address concept drift, businesses are increasingly implementing MLOps pipelines equipped with:
- Automated drift detection
- Monitoring systems
- Human-in-the-loop validation processes
- Controlled retraining workflows
These safeguards help ensure that AI systems remain accurate and trustworthy over time.
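As an illustration of what automated drift detection can look like at its simplest, the sketch below compares a model input feature in recent production traffic against its training-time baseline using a two-sample Kolmogorov-Smirnov test. The data and alerting threshold are hypothetical; production MLOps platforms wrap this kind of check in monitoring, human review, and controlled retraining.

```python
# Minimal drift-detection sketch: compare a production feature distribution
# against the training baseline with a two-sample Kolmogorov-Smirnov test.
# Data and thresholds are hypothetical; real pipelines add alerting and retraining.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_baseline = rng.normal(loc=100.0, scale=15.0, size=5_000)   # feature as seen at training time
production_window = rng.normal(loc=112.0, scale=15.0, size=2_000)   # recent live traffic (shifted mean)

statistic, p_value = ks_2samp(training_baseline, production_window)

DRIFT_P_VALUE = 0.01   # hypothetical alerting threshold
if p_value < DRIFT_P_VALUE:
    print(f"Drift suspected (KS={statistic:.3f}, p={p_value:.2e}): flag model for review and retraining")
else:
    print("No significant drift detected in this window")
```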
The Growing Threat of Data Poisoning
Data poisoning is another major concern in enterprise AI environments.
This occurs when malicious or corrupted data enters an AI training pipeline, potentially manipulating model behaviour or reducing accuracy.
Jerome highlighted that data poisoning is both a security issue and a data provenance issue.
Businesses must know:
- Where training data originates
- Who has access to it
- How it is modified
- Whether it has been validated
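A common building block for the provenance side of this problem is a trusted manifest of approved training files whose hashes are verified before every training run. The sketch below is a minimal illustration with hypothetical file names, not a complete defence against poisoning.

```python
# Minimal provenance sketch: verify training files against a trusted manifest
# of SHA-256 hashes before a training run starts. File names are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_set(manifest_path: Path, data_dir: Path) -> list[str]:
    """Return a list of files that are missing, unexpected, or modified."""
    manifest = json.loads(manifest_path.read_text())        # {"file.csv": "<sha256>", ...}
    issues = []
    for name, expected_hash in manifest.items():
        file_path = data_dir / name
        if not file_path.exists():
            issues.append(f"missing approved file: {name}")
        elif sha256_of(file_path) != expected_hash:
            issues.append(f"hash mismatch (possible tampering): {name}")
    for file_path in data_dir.glob("*"):
        if file_path.name not in manifest:
            issues.append(f"unapproved file present: {file_path.name}")
    return issues

issues = verify_training_set(Path("manifest.json"), Path("training_data"))
if issues:
    raise SystemExit("Training blocked:\n" + "\n".join(issues))
print("Provenance check passed: training may proceed")
```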
The enterprises achieving the greatest success with AI are not necessarily those with the most advanced technology. Instead, they are the organisations that embed AI governance directly into their risk management frameworks before scaling their AI operations.
Strong governance is becoming essential for maintaining trust, compliance, and operational stability in AI-driven enterprises.
The Evolution of AI Hardware for Enterprise Workloads
As AI workloads become more demanding, enterprise infrastructure requirements are evolving rapidly.
Traditional computing systems are often unable to support the intensive processing requirements associated with large AI models, fine-tuning, inference workloads, and continuous machine learning operations.
HP has positioned itself at the centre of this transformation through its professional-grade Z series infrastructure.
Jerome explained that HP’s long history in high-performance computing has allowed the company to develop hardware specifically designed for advanced AI workflows.
Rather than relying on a single machine type, HP now offers a spectrum of AI compute solutions tailored for different enterprise needs.
Local Compute vs Cloud AI Infrastructure
One of the biggest debates in enterprise AI today revolves around cloud versus local compute.
Cloud-based AI platforms provide scalability and flexibility, but they also introduce concerns related to:
- Data security
- Compliance
- Latency
- Operational costs
- Dependence on external infrastructure
Local compute, on the other hand, allows organisations to retain greater control over sensitive data and AI operations.
According to Jerome, enterprises increasingly require infrastructure capable of running sophisticated AI workloads locally without relying entirely on cloud services.
This shift is especially important for organisations handling sensitive or regulated information.
HP’s AI Workstation Ecosystem
HP’s Z series portfolio is designed to support the full AI development lifecycle, from experimentation to enterprise-scale deployment.
ZBook Ultra and Z2 Mini
For developers and smaller teams, HP offers systems such as the ZBook Ultra and Z2 Mini.
These compact workstations provide enough local compute power to:
- Run local Large Language Models (LLMs)
- Perform model experimentation
- Handle heavy AI workflows
- Reduce dependence on cloud infrastructure
These systems are particularly useful for iterative AI development where rapid experimentation is required.
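As a sketch of what that local experimentation can look like, the snippet below loads a small open-weight model with the Hugging Face transformers library and runs a prompt entirely on the workstation. The model name is a stand-in; teams would substitute whichever open model fits their hardware and licensing constraints.

```python
# Minimal local-LLM sketch using Hugging Face transformers.
# The model name is a hypothetical stand-in for any small open-weight model
# that fits the workstation's GPU memory; nothing leaves the machine.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-1.5B-Instruct",   # stand-in open model
    device_map="auto",                    # use the local GPU if one is available
)

prompt = "Summarise the main risks of concept drift for a fraud-detection model."
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```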
The ZGX Nano and AI Supercomputing at the Desk
One of the most innovative systems in HP’s portfolio is the ZGX Nano.
Despite fitting into a compact 15x15cm form factor, the device delivers substantial AI processing power using the NVIDIA GB10 Grace Blackwell Superchip.
Key specifications include:
- 128GB unified memory
- 1,000 TOPS of FP4 AI performance
- Support for models up to 200 billion parameters locally
For larger workloads, enterprises can connect two ZGX Nano systems together using high-speed interconnect technology.
This configuration enables support for AI models with up to 405 billion parameters without requiring cloud infrastructure or external data centres.
The system also comes preconfigured with:
- NVIDIA DGX software stack
- HP ZGX Toolkit
This significantly reduces deployment time and allows teams to begin AI workflows within minutes.
High-End AI Infrastructure with Z8 Fury
For enterprise power-user teams, HP offers the Z8 Fury workstation.
This system supports up to:
- Four NVIDIA RTX PRO 6000 Blackwell GPUs
- 384GB VRAM
The Z8 Fury is designed to support the entire AI model development lifecycle on-premises.
This includes:
- Model training
- Fine-tuning
- Testing
- Inference
- Continuous deployment
By enabling organisations to manage these workloads locally, enterprises gain greater control over data privacy and operational performance.
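To illustrate the fine-tuning stage of that lifecycle on a single on-premises machine, here is a minimal LoRA-style sketch using the Hugging Face transformers and peft libraries. The model name and training text are hypothetical stand-ins, and a real workflow would add evaluation plus the deployment controls described earlier.

```python
# Minimal on-premises fine-tuning sketch using LoRA adapters (transformers + peft).
# The model name and training text are hypothetical stand-ins.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "Qwen/Qwen2.5-0.5B"                      # stand-in small open-weight model
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters so only a small fraction of the weights are trained locally.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Tiny illustrative dataset; a real run would draw on governed internal data.
texts = ["Internal note: all model updates require validation before deployment."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("finetune-out/adapter")          # adapter weights stay on the workstation
```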
The ZGX Fury and Trillion-Parameter AI
At the highest end of HP’s AI infrastructure portfolio is the ZGX Fury.
Powered by the NVIDIA GB300 Grace Blackwell Ultra Superchip, the system delivers:
- 784GB coherent memory
- Trillion-parameter inference capabilities
According to Jerome, this level of performance changes the conversation around enterprise AI infrastructure entirely.
Instead of relying on remote cloud clusters, organisations can run advanced AI workloads directly at the deskside.
This is especially valuable for businesses working with:
- Sensitive proprietary data
- Continuous model fine-tuning
- High-volume inference workloads
Jerome noted that for many enterprises, systems like the ZGX Fury can pay for themselves within 8 to 12 months compared with equivalent cloud compute costs.
The Real Enterprise AI Problem: Governance and Latency
Jerome argued that the true challenge facing modern AI systems is no longer purely computational.
Instead, the autonomous AI lifecycle introduces:
- Governance challenges
- Latency issues
- Security concerns
- Data residency requirements
Constantly sending sensitive training data to cloud environments creates significant risks for organisations operating in regulated industries.
This is why many enterprises are moving toward hybrid and local-first AI architectures.
HP’s infrastructure portfolio is designed to scale alongside organisational AI maturity, enabling companies to transition from small developer environments to fully distributed on-premises compute ecosystems.
Why Enterprise AI Costs Are Rising Rapidly
Generative AI adoption is driving an enormous increase in enterprise compute spending.
According to Jerome, enterprise GenAI spending surged to $37 billion in 2025.
However, despite falling unit inference costs, overall spending continues to rise because AI usage is growing even faster.
In fact, 80% of companies reportedly exceeded their AI cost forecasts by more than 25%.
This highlights a major structural issue within enterprise AI economics.
Cloud API models were originally designed for:
- Experimental workloads
- Low-volume usage
- Early-stage AI experimentation
They were never intended to serve as the primary economic foundation for large-scale production AI systems.
A Practical Strategy for Controlling AI Costs
Jerome explained that the solution to rising AI costs is not simply about infrastructure — it is about discipline.
Organisations must clearly separate:
- Exploratory AI workloads
- Production AI workloads
Early-stage experimentation, fine-tuning, and model evaluation should ideally run on local infrastructure such as:
- ZGX Nano
- Z8 Fury
This allows companies to make a one-time capital investment instead of continuously paying operational cloud expenses without guaranteed return on investment.
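A back-of-the-envelope break-even calculation shows the shape of that trade-off. Every figure in the sketch below is a made-up placeholder rather than HP pricing or a benchmark; the point is simply to contrast a one-time capital purchase with an ongoing cloud bill.

```python
# Back-of-the-envelope break-even sketch: one-time workstation purchase vs.
# recurring cloud spend for a steady fine-tuning/inference workload.
# Every number is a hypothetical placeholder, not a quote or benchmark.
workstation_capex = 40_000.00        # one-time hardware cost (hypothetical)
workstation_monthly_opex = 600.00    # power, support, depreciation allowance (hypothetical)
cloud_monthly_spend = 5_500.00       # equivalent recurring cloud bill (hypothetical)

monthly_saving = cloud_monthly_spend - workstation_monthly_opex
breakeven_months = workstation_capex / monthly_saving

print(f"Monthly saving once on-premises: ${monthly_saving:,.0f}")
print(f"Break-even after roughly {breakeven_months:.1f} months")
# With these placeholder figures the hardware pays for itself in well under a year,
# the kind of arithmetic behind the payback claims quoted above.
```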
The Three-Tier AI Infrastructure Model
According to HP, the enterprises managing AI costs most effectively are implementing a three-tier compute strategy.
1. Cloud Infrastructure
Cloud environments are used for:
- Burst training
- Frontier model access
- Temporary large-scale workloads
2. On-Premises Infrastructure
HP Z infrastructure handles:
- Predictable high-volume inference
- Sensitive AI workloads
- Continuous enterprise operations
3. Edge Compute
Edge systems support:
- Low-latency applications
- Real-time AI processing
- Localised decision-making
Independent analysis suggests that on-premises infrastructure can deliver up to an 18x cost advantage per million tokens over a five-year lifecycle compared with cloud-only deployments.
Jerome summarised the strategy clearly:
“Cloud is for scale you’ve earned, not scale you’re hoping for.”
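One way to operationalise the three-tier split is a simple placement policy that routes each workload to cloud, on-premises, or edge infrastructure based on its characteristics. The rules and thresholds in the sketch below are hypothetical, intended only to show how such a policy might be expressed.

```python
# Illustrative placement policy for the three-tier model described above.
# The rules and thresholds are hypothetical, not a prescribed HP configuration.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    sensitive_data: bool       # regulated or proprietary data involved?
    latency_budget_ms: int     # how quickly a response is needed
    predictable_volume: bool   # steady, forecastable usage?
    burst_training: bool       # short-lived, very large training job?

def place(w: Workload) -> str:
    if w.latency_budget_ms <= 50:
        return "edge"            # real-time, localised decision-making
    if w.sensitive_data or w.predictable_volume:
        return "on-premises"     # governed, high-volume, steady workloads
    if w.burst_training:
        return "cloud"           # temporary scale you rent, not own
    return "cloud"               # default for exploratory, low-volume work

for w in [Workload("factory-vision", False, 20, True, False),
          Workload("claims-rag", True, 800, True, False),
          Workload("frontier-eval", False, 5_000, False, True)]:
    print(f"{w.name:>15} -> {place(w)}")
```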
Making Proprietary Data AI-Ready
Another major challenge for enterprises involves preparing proprietary data for AI use without exposing sensitive information.
Jerome believes many companies misunderstand the problem.
He explained that making data “AI-ready” is not simply a data engineering challenge — it is fundamentally a data sovereignty challenge.
Sending sensitive enterprise data to external cloud models introduces:
- Compliance risks
- Security concerns
- Regulatory complications
- Governance failures
This is particularly important in industries such as:
- Healthcare
- Finance
- Government
- Legal services
Retrieval-Augmented Generation (RAG) and Secure AI
One of the most effective solutions for secure enterprise AI is Retrieval-Augmented Generation (RAG).
RAG systems allow AI models to retrieve relevant information from internal databases during query processing without permanently training on that data.
This architecture offers several advantages:
- Proprietary data remains on-premises
- No external data exposure
- Reduced compliance risk
- Better governance control
Jerome explained that systems such as the ZGX Nano or Z8 Fury can run fully local RAG pipelines against sensitive internal documents without any data leaving the organisation.
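A minimal local RAG loop can be sketched in a few lines: embed internal documents, retrieve the closest matches for a query, and assemble a grounded prompt for a locally hosted model. The example below uses the sentence-transformers library for embeddings; the documents are hypothetical, and the resulting prompt would be passed to a local LLM such as the one loaded in the earlier snippet.

```python
# Minimal local RAG sketch: embed internal documents, retrieve the closest
# matches for a query, and assemble a grounded prompt for a locally hosted model.
# Document contents are hypothetical examples.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Refund policy: enterprise customers may request refunds within 30 days.",
    "Security policy: training data must never leave the on-premises network.",
    "Travel policy: economy class is required for flights under six hours.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")      # small local embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    query_vector = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector                  # cosine similarity (vectors are normalised)
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

question = "Can training data be copied to an external cloud bucket?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)   # this prompt would be passed to a locally hosted LLM, as in the earlier sketch
```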
The Importance of Access Control in AI Systems
AI systems must also maintain strong role-based access controls.
Well-designed RAG architectures ensure that employees only access information they are authorised to view.
This mirrors the permission structures already used in modern document management systems.
Enterprises can safely make proprietary data AI-ready without exposing critical business information by combining:
- Local compute
- Local models
- Local retrieval systems
- Governed access control
Jerome emphasised that the companies succeeding with enterprise AI are not sending sensitive assets to the cloud. Instead, they are bringing intelligence directly to the data.
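The permission layer can be sketched as a filter applied before retrieval, so that a user's query can only ever be grounded in documents their roles are allowed to see. The roles, labels, and documents below are hypothetical examples.

```python
# Minimal role-based retrieval filter: documents carry an access label, and
# retrieval only considers documents the requesting user's roles permit.
# Roles, labels, and documents are hypothetical examples.
DOCUMENTS = [
    {"text": "Q3 board financials and forecasts.",    "allowed_roles": {"finance", "executive"}},
    {"text": "Employee handbook and holiday policy.", "allowed_roles": {"all-staff"}},
    {"text": "Patient-level clinical trial results.", "allowed_roles": {"clinical-research"}},
]

def retrievable_for(user_roles: set[str]) -> list[str]:
    """Return only the documents this user is authorised to ground answers on."""
    return [d["text"] for d in DOCUMENTS if d["allowed_roles"] & user_roles]

analyst = {"all-staff", "finance"}
engineer = {"all-staff"}
print("analyst can see :", retrievable_for(analyst))
print("engineer can see:", retrievable_for(engineer))
# The RAG retriever would then run only over the permitted subset, mirroring the
# permission structures of existing document management systems.
```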
The Future Role of Enterprise IT Teams
As autonomous AI systems become more common, the role of enterprise IT teams is expected to change dramatically.
Jerome referenced NVIDIA CEO Jensen Huang’s perspective that the purpose of work extends far beyond repetitive operational tasks.
In IT environments, traditional responsibilities such as:
- Server provisioning
- Incident triage
- Manual infrastructure management
are increasingly being automated by AI agents.
According to Gartner, 40% of enterprise applications will contain embedded AI agents by the end of 2026, up from less than 5% only a year earlier.
This means the execution layer of IT is rapidly becoming automated.
From Task Execution to AI Governance
While routine IT tasks may decline, demand for governance and architecture expertise is expected to rise significantly.
The future IT team will focus less on manually operating systems and more on:
- Designing AI governance frameworks
- Managing intelligent agents
- Controlling infrastructure policies
- Ensuring compliance and observability
- Monitoring AI behaviour
Jerome noted that only one in five companies currently has a mature governance model capable of supporting this transition.
This creates both a challenge and an opportunity for enterprise IT departments.
Why Local Infrastructure Matters for AI Governance
Local-first infrastructure provides enterprises with greater visibility and control over AI systems.
When AI workloads run on infrastructure owned and managed internally, organisations gain:
- Full observability
- Improved governance
- Better security oversight
- Stronger compliance management
This level of control is often difficult to achieve when AI operations are heavily abstracted into external cloud environments.
Over the next several years, enterprise IT teams will increasingly become guardians of AI trust, governance, and operational accountability.
Conclusion
Artificial Intelligence is reshaping enterprise technology at an unprecedented pace, but successful AI adoption depends on far more than advanced models alone.
Data governance, infrastructure strategy, compute economics, security, and operational control are all becoming essential components of enterprise AI success.
Jerome Gabryszewski’s insights highlight a growing shift toward local-first AI architectures capable of delivering scalability without compromising governance or data sovereignty.
HP’s expanding AI infrastructure portfolio — from compact developer systems to trillion-parameter AI workstations — reflects the growing demand for enterprise-grade AI compute environments that support secure, cost-effective, and scalable AI operations.
As organisations continue navigating the complexities of AI adoption, the enterprises that succeed will not simply be those with the most powerful AI systems. Instead, they will be the companies that build strong governance foundations, control their data intelligently, and align infrastructure decisions with long-term operational goals.
The future of enterprise AI will not belong solely to the cloud or solely to local infrastructure. It will belong to businesses capable of combining intelligent governance, scalable compute, and secure data management into a unified AI strategy.