Axe Compute (NASDAQ: AGPU) has signed a $260 million, 36-month enterprise contract to deploy a dedicated cluster of 2,304 NVIDIA B300 GPUs and AI-focused high-speed storage in a U.S. Tier 3 data center. It is the largest enterprise engagement since the company entered the compute business.
Targeted deployment start is Q3 2026.
What This Deal Represents
A lot of enterprise GPU deals start with what’s available. This one started with what the customer actually needed.
The enterprise came with specific requirements — hardware spec, location, capacity, performance guarantees, and pricing structure. Axe Compute built to those requirements and delivered under a long-term contract with fixed economics. As CEO Christopher Miglino put it: “Enterprise AI customers are no longer willing to adapt their infrastructure roadmaps to the capacity constraints of legacy hyperscalers. A 2,304-GPU B300 deployment, contracted, dedicated, U.S.-based, and priced to compete, is what purpose-built AI infrastructure looks like.”
The contract is structured on a take-or-pay basis, secured with a deposit, prepayment, and monthly-in-advance payments. It includes options to renew for additional years and supports ancillary value-added services like dedicated local loops. Terms were architected by Axe Compute to align with the enterprise — not dictated by provider inventory.
It’s also a signal about where enterprise AI procurement is heading. J.P. Morgan projects AI infrastructure spending will reach $1.4 trillion annually by 2030, with GPUs accounting for 39% of data center costs. In that environment, enterprises that can secure guaranteed, dedicated capacity on their own terms have a structural advantage over those competing for shared resources on someone else’s timeline.
Built for the Most Demanding AI Workloads
The 2,304-GPU B300 cluster is purpose-built for the workloads that push infrastructure hardest:
Foundation model training. Pre-training large language models and multimodal foundation models requires sustained, high-throughput GPU compute across thousands of accelerators in tight coordination. The B300’s memory bandwidth and single-spine interconnect performance are specifically suited for training runs at this scale.
Fine-tuning and domain adaptation. Enterprises adapting foundation models to proprietary datasets — legal, financial, biomedical, customer-specific — need dedicated compute that eliminates multi-tenancy risks and unpredictable availability. Dedicated infrastructure keeps data within a controlled facility boundary and compute available on the enterprise’s schedule.
High-throughput inference. Production AI serving real-time inference at scale — recommendation engines, content generation, fraud detection, autonomous decision-making — requires low-latency, high-availability GPU infrastructure with predictable performance. Dedicated clusters eliminate the noisy-neighbor latency spikes that plague shared environments.
AI-intensive data processing. The integration of AI-focused high-speed storage with the GPU cluster handles workloads demanding rapid ingestion and processing of massive datasets at training time, including multimodal pipelines processing image, video, audio, and text at scale.
The cluster is backed by 4.8 megawatts of dedicated power with N+1 redundancy and 24/7 on-site support, all committed to this deployment alone.
Why Choice Is the Differentiator
Two structural capabilities made a deal of this size and structure possible.
First, geographic reach. Axe Compute’s platform allows customers to match compute capacity to the regions their workloads actually require — a flexibility that providers constrained to the facilities they’ve already built cannot always offer.
Second, delivery guarantees. Axe Compute backs dedicated clusters with committed delivery, so customers receive the GPU compute they need when they need it. Combined with fixed monthly pricing and no hidden fees, that predictability lets enterprises align infrastructure spend directly to their monetization model.
That’s what choice looks like in practice — hardware, geography, deployment speed, and economics all specified by the customer, not the provider.
What Comes Next
Axe Compute is among the first publicly traded companies delivering dedicated neocloud GPU infrastructure at this scale. This contract establishes a new benchmark for enterprise AI infrastructure engagements and provides meaningful long-dated revenue visibility.
To find out how Axe Compute can help your team meet its GPU infrastructure needs, go here.