
CoreWeave Among First Cloud Providers to Offer NVIDIA HGX H100 Supercomputers Set to Transform AI Landscape

  • CoreWeave customers will have access to a new GPU platform that improves AI performance and cuts training and inference times many times over
  • The company’s Kubernetes-native infrastructure yields industry-leading spin-up times and responsive auto-scaling capabilities for optimal compute usage and performance
  • Customers pay only for the compute capacity they use, making CoreWeave 50% to 80% less expensive than competitors

SPRINGFIELD, N.J.--(BUSINESS WIRE)--CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it is among the first to offer cloud instances with NVIDIA HGX H100 supercomputing. CoreWeave, Amazon, Google, Microsoft and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform. CoreWeave was the first Elite Cloud Service Provider for Compute in the NVIDIA Partner Network (NPN) and is also among the NPN’s Elite Cloud Service Providers for Visualization.

“This validates what we’re building and where we’re heading,” said Michael Intrator, CoreWeave co-founder and CEO. “CoreWeave’s success will continue to be driven by our commitment to making GPU-accelerated compute available to startup and enterprise clients alike. Investing in the NVIDIA HGX H100 platform allows us to expand that commitment, and our pricing model makes us the ideal partner for any companies looking to run large-scale, GPU-accelerated AI workloads.”

NVIDIA’s ecosystem and platform are the industry standard for AI. The NVIDIA HGX H100 platform enables a leap forward in the breadth and scope of AI work businesses can tackle. The NVIDIA HGX H100 delivers up to seven times better efficiency in high-performance computing (HPC) applications, up to nine times faster AI training on the largest models and up to 30 times faster AI inference than the NVIDIA HGX A100. That speed, combined with the lowest NVIDIA GPUDirect network latency on the market via the NVIDIA Quantum-2 InfiniBand platform, reduces the training time of AI models to days or hours instead of months. Such technology is critical now that AI has permeated every industry.

“AI and HPC workloads require a powerful infrastructure that delivers cost-effective performance and scale to meet the needs of today’s most demanding workloads and applications,” said Dave Salvator, director of product marketing at NVIDIA. “CoreWeave’s new offering of instances featuring NVIDIA HGX H100 supercomputers will give customers the flexibility and performance needed to power large-scale HPC applications.”

In the same way that drivers of fuel-efficient cars save money on gas, CoreWeave clients spend 50% to 80% less on compute resources. The company’s performance-adjusted cost structure is twofold. First, clients pay only for the HPC resources they use, and CoreWeave cloud instances are highly configurable. Second, CoreWeave’s Kubernetes-native infrastructure and networking architecture produce performance advantages, including industry-leading spin-up times and responsive auto-scaling capabilities that allow clients to use compute more efficiently. CoreWeave competitors charge for idle compute capacity to maintain access to GPUs and use legacy networking products whose performance degrades with scale.

“CoreWeave’s infrastructure is purpose-built for large-scale GPU-accelerated workloads — we specialize in serving the most demanding AI and machine learning applications,” said Brian Venturo, CoreWeave co-founder and chief technology officer. “We empower our clients to create world-changing technology by delivering practical access to high-performance compute at scale, on top of the industry’s fastest and most flexible infrastructure.”

CoreWeave leverages a range of open-source Kubernetes projects, integrates with best-in-class technologies such as Determined.AI and offers support for open-source AI models including Stable Diffusion, GPT-NeoX-20B and BLOOM as part of its mission to lead the world in AI and machine learning infrastructure.

Founded in 2017, CoreWeave provides fast, flexible, and highly available GPU compute resources that are up to 35 times faster and 80% less expensive than large, generalized public clouds. An Elite Cloud Service Provider for Compute and Visualization in the NPN, CoreWeave offers cloud services for compute-intensive projects, including AI, machine learning, visual effects and rendering, batch processing and pixel streaming. CoreWeave’s infrastructure is purpose-built for burstable workloads, with the ability to scale up or down in seconds.

More information about the NVIDIA HGX H100 offering is now available on the CoreWeave site at https://coreweave.com/products/hgx-h100.

About CoreWeave

CoreWeave is a specialized cloud provider, delivering a massive scale of GPU compute resources on top of the industry’s fastest and most flexible infrastructure. CoreWeave builds cloud solutions for compute intensive use cases — digital assets, VFX and rendering, machine learning and AI, batch processing and pixel streaming — that are up to 35 times faster and 80% less expensive than the large, generalized public clouds. Learn more at www.coreweave.com.
