
Azure HPC Cache - High Performance Storage Cache

Azure HPC Cache accelerates access to on-premises NAS storage for HPC workloads running in Azure.

Pricing Model: Pay per cache throughput tier
Availability: Selected Azure regions
Data Sovereignty: Cache in Azure, source on-premises or cloud
Reliability: 99.9% SLA

What is Azure HPC Cache?

Azure HPC Cache creates a high-speed caching layer in Azure that accelerates access to file-based data stored on-premises or in Azure Blob Storage. It is designed for high-performance computing workloads that read data intensively and would otherwise be bottlenecked by network latency to remote storage.

The cache presents data through NFS, making it accessible to Linux-based compute clusters. Once data is cached, compute nodes read at cloud-native speeds without waiting for each request to traverse the network to the source storage.
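To make this concrete, here is a minimal sketch of mounting the cache's NFS export on a Linux compute node. The cache mount address (10.1.0.5), namespace path (/datasets), and local mount point are hypothetical placeholders, and the NFS options shown are typical client settings to verify against current Azure guidance.

    # Minimal sketch: mount an Azure HPC Cache NFS export on a Linux compute node.
    # The cache mount address (10.1.0.5) and namespace path (/datasets) are
    # hypothetical placeholders; substitute your cache's own values.
    import subprocess
    from pathlib import Path

    CACHE_MOUNT_ADDRESS = "10.1.0.5"      # one of the cache's client mount addresses
    NAMESPACE_PATH = "/datasets"          # junction defined in the aggregated namespace
    LOCAL_MOUNT_POINT = Path("/mnt/hpccache")

    LOCAL_MOUNT_POINT.mkdir(parents=True, exist_ok=True)

    # Typical NFS client options for HPC Cache; confirm against current Azure guidance.
    nfs_options = "hard,proto=tcp,mountproto=tcp,retry=30"

    subprocess.run(
        [
            "mount", "-t", "nfs", "-o", nfs_options,
            f"{CACHE_MOUNT_ADDRESS}:{NAMESPACE_PATH}",
            str(LOCAL_MOUNT_POINT),
        ],
        check=True,  # raise if the mount fails
    )
    print(f"Cache namespace {NAMESPACE_PATH} mounted at {LOCAL_MOUNT_POINT}")

Once mounted, jobs read and write under the mount point exactly as they would against any other NFS filer.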

Core Features

  • NFS caching: Cache data from on-premises NFS or Azure Blob Storage
  • Aggregated namespace: Present multiple storage targets as a single mount point (see the sketch after this list)
  • Read and write caching: Accelerate both read-heavy and write-back workloads
  • Scalable throughput: Choose throughput tiers from 2 Gbps to over 20 Gbps
  • Automated tiering: Keep frequently accessed data in cache
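
The sketch below illustrates the aggregated namespace from a client's perspective: one mount point whose top-level directories (namespace junctions) each map to a different storage target. The mount point and junction names are hypothetical.

    # Minimal sketch: from the client's point of view, the aggregated namespace is
    # a single mount point whose top-level directories (junctions) map to
    # different storage targets. All paths below are hypothetical examples.
    from pathlib import Path

    MOUNT_POINT = Path("/mnt/hpccache")   # single client mount of the cache

    # e.g. /mnt/hpccache/onprem-seismic -> on-premises NFS filer
    #      /mnt/hpccache/blob-results   -> Azure Blob Storage container
    for junction in sorted(p for p in MOUNT_POINT.iterdir() if p.is_dir()):
        print(f"storage target junction: {junction}")

Because jobs only ever reference paths under the single mount point, storage targets can be added or swapped behind the namespace without changing job scripts.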

Typical Use Cases

HPC Cache is used in scenarios where compute jobs need fast access to large datasets that cannot be fully migrated to cloud storage. Common applications include engineering simulations, video rendering, seismic analysis, and genomics processing.

Benefits

  • Run cloud compute against on-premises data without full migration
  • Reduce time-to-result for data-intensive simulations
  • Single namespace simplifies job configuration
  • No changes required to existing NFS workflows

Frequently Asked Questions

What storage systems can HPC Cache front?

HPC Cache supports on-premises NFS v3 servers (NetApp, Dell EMC, and others) as well as Azure Blob Storage, including NFS 3.0-enabled blob storage. New storage targets can be added without reconfiguring compute jobs.
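
As an illustration of adding a target, the following is a hedged sketch using the azure-mgmt-storagecache Python SDK. The subscription, resource group, cache name, NFS server address, usage model, and export paths are hypothetical, and model or parameter names may differ between SDK versions, so check the current API reference before use.

    # Hedged sketch: attach an on-premises NFSv3 server to an existing cache as a
    # new storage target, exposed at /datasets/seismic in the aggregated namespace.
    # All names are hypothetical; model and parameter names may differ between
    # versions of the azure-mgmt-storagecache SDK.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storagecache import StorageCacheManagementClient
    from azure.mgmt.storagecache.models import (
        NamespaceJunction, Nfs3Target, StorageTarget,
    )

    SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
    RESOURCE_GROUP = "hpc-cache-rg"         # hypothetical resource group
    CACHE_NAME = "my-hpc-cache"             # hypothetical cache name

    client = StorageCacheManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    target = StorageTarget(
        target_type="nfs3",
        nfs3=Nfs3Target(target="10.0.0.4",              # on-prem NFS server address
                        usage_model="READ_HEAVY_INFREQ"),
        junctions=[NamespaceJunction(namespace_path="/datasets/seismic",
                                     nfs_export="/export/seismic")],
    )

    # Long-running operation; wait until the target is attached to the cache.
    client.storage_targets.begin_create_or_update(
        RESOURCE_GROUP, CACHE_NAME, "onprem-seismic", target,
    ).result()
    print("Storage target 'onprem-seismic' attached.")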

How much data can the cache hold?

Cache storage ranges from several terabytes to over 100 TB depending on the throughput tier selected. The cache automatically manages which data to keep based on access patterns.

Is HPC Cache suitable for write-intensive workloads?

Yes. HPC Cache supports write-back mode where writes are cached and asynchronously flushed to the source storage. This reduces latency for applications that write intermediate results.
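
Because write-back flushing is asynchronous, cached writes should be explicitly flushed to the source before a cache is stopped or deleted. The sketch below assumes the azure-mgmt-storagecache SDK exposes a flush operation on the caches client; resource names are hypothetical and the call should be verified against the current API reference.

    # Hedged sketch: flush dirty (write-back) data from the cache to its storage
    # targets, e.g. before stopping or deleting the cache. Resource names are
    # hypothetical; verify that the SDK version in use exposes this flush call.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storagecache import StorageCacheManagementClient

    client = StorageCacheManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Long-running operation; blocks until cached writes reach the source storage.
    client.caches.begin_flush("hpc-cache-rg", "my-hpc-cache").result()
    print("All cached writes flushed to the storage targets.")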

How do I size the cache?

Size based on working set (the data actively used during jobs) rather than total dataset size. Azure provides sizing guidance based on throughput requirements and concurrent client count.
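
As a rough illustration of that approach, the sketch below chooses a cache configuration from a placeholder tier table based on working-set size and required aggregate throughput. The tier figures are illustrative placeholders, not current Azure SKU values.

    # Rough sizing sketch: pick the smallest cache tier that covers both the
    # working set and the required aggregate throughput. The tier table is a
    # placeholder -- substitute Azure's current HPC Cache tiers and sizes.
    from dataclasses import dataclass

    @dataclass
    class CacheTier:
        name: str
        throughput_gbps: float   # aggregate throughput of the tier
        max_cache_tb: float      # largest cache size offered at this tier

    TIERS = [                    # illustrative placeholder values only
        CacheTier("small", 2, 12),
        CacheTier("medium", 8, 48),
        CacheTier("large", 20, 100),
    ]

    def suggest_tier(working_set_tb: float, required_gbps: float) -> CacheTier:
        """Return the smallest tier covering both working set and throughput."""
        for tier in TIERS:
            if tier.max_cache_tb >= working_set_tb and tier.throughput_gbps >= required_gbps:
                return tier
        raise ValueError("Requirements exceed the listed tiers; consider "
                         "partitioning the workload across multiple caches.")

    # Example: 10 TB active working set, 20 clients at ~20 MB/s (~160 Mbps) each.
    required_gbps = 20 * 160 / 1000          # about 3.2 Gbps aggregate
    print(suggest_tier(working_set_tb=10, required_gbps=required_gbps))

In practice, validate the estimate against Azure's published tier specifications and a representative test run.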

Integration with innFactory

As a Microsoft Solutions Partner, innFactory helps you implement Azure HPC Cache: storage architecture design, integration with compute clusters, and performance optimization.

Microsoft Solutions Partner

innFactory is a Microsoft Solutions Partner. We provide expert consulting, implementation, and managed services for Azure.

Partner designation: Microsoft Data & AI

Similar Products from Other Clouds

Other cloud providers offer comparable services in this category. As a multi-cloud partner, we help you choose the right solution.


Ready to start with Azure HPC Cache?

Our certified Azure experts help you with architecture, integration, and optimization.

Schedule Consultation