Hewlett Packard Enterprise (HPE) today announced the expansion of its HPE GreenLake for File Storage capabilities designed to power large-scale enterprise AI and data lake workloads. The latest iteration introduces 100 percent high-density all-flash options, propelling the company to the forefront of AI storage innovation.
Kamal Kashyap, Director, India – Storage Business Unit, Hewlett Packard Enterprise India, said, “To drive AI initiatives and remain competitive in today’s marketplace, acquiring an AI-ready file storage solution offering enterprise-grade performance, efficiency, and scalability customised to AI is essential. HPE GreenLake for File Storage plays a significant role in amplifying the efficiency of high-volume data applications, ensuring enterprise-level performance at AI scale. It is not just a storage solution, but a catalyst for transformation. As businesses today move into or expand their AI initiatives, we at HPE are empowering them to harness the potential of AI, extract maximum value from their data, and enhance productivity, all while promoting sustainability.”
When compared with the currently available version of HPE GreenLake for File Storage, the new options offer four times the capacity and up to two times the system performance per rack unit. These improvements increase AI throughput by a factor of two and reduce power consumption by up to 50 percent. Through these enhancements, HPE is taking another major step to enable customers to achieve enterprise performance, simplicity, and enhanced efficiency, all at the scale of AI and data lakes. With the latest high-density storage racks, HPE GreenLake for File Storage has increased the capacity density of the high-end offering by a factor of seven compared with what was published in mid-2023. In addition, HPE GreenLake for File Storage now offers up to 2.3 times the capacity density of competitors.
Powering enterprise performance with AI
HPE GreenLake for File Storage accelerates the most data-intensive applications with enterprise performance at AI scale. That performance spans every stage of AI, from data aggregation and preparation through training and tuning to inference. And it is not just peak performance on a small data set at a single point in time; it is fast, sustained performance across the full scale of your data for the most demanding, data-intensive AI applications, including generative AI (GenAI) and large language models (LLMs). Enterprise-scale AI performance helps extract more value from all the aggregated data, accelerating insights and providing a real competitive edge.
HPE GreenLake for File Storage has a disaggregated, shared-everything, highly resilient modular architecture that allows performance and capacity to scale independently, and it is designed for exabyte scale. With all-NVMe speed for fast, predictable performance, and with no front-end caching, data movement between media, or tiered data pipelines, it can supercharge the most data-intensive AI applications.
Enhancing efficiency with AI
HPE GreenLake for File Storage can bring down AI storage costs with four times the capacity density per rack unit and half the power consumption. It can also lower carbon footprint with industry-leading data reduction, non-disruptive upgrades, and an AI storage as-a-service consumption model that helps eliminate overprovisioning. By allowing performance and capacity to scale independently, it delivers higher efficiency at lower cost and maximises GPU utilisation, and therefore GPU ROI, with enterprise performance at AI scale.
HPE GreenLake for File Storage provides overload-free snapshots and native replication, superior flash efficiency, and enhanced data reduction via a similarity algorithm that, unlike compression and deduplication, reduces data with both a global and a fine-grained approach. Savings are 2:1 for life sciences data; 3:1 for pre-reduced backups, pre-compressed log files, and HPC and animation data; and 8:1 for uncompressed time-series data.
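As a rough illustration (not an HPE tool), the workload-specific reduction ratios above translate raw flash into effective capacity as follows; only the ratios themselves come from the announcement, and the 100 TB starting figure is a hypothetical example:

```python
# Illustrative only: estimate effective (logical) capacity from raw capacity
# using the data-reduction ratios cited in the announcement.

def effective_capacity_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Effective capacity = raw capacity x reduction ratio (e.g. 3.0 for 3:1)."""
    return raw_tb * reduction_ratio

# Ratios quoted above, keyed by workload type.
REDUCTION_RATIOS = {
    "life_sciences": 2.0,             # 2:1
    "pre_reduced_backups": 3.0,       # 3:1, also pre-compressed logs
    "hpc_and_animation": 3.0,         # 3:1
    "uncompressed_time_series": 8.0,  # 8:1
}

raw_tb = 100.0  # hypothetical 100 TB of raw flash
for workload, ratio in REDUCTION_RATIOS.items():
    print(f"{workload}: {effective_capacity_tb(raw_tb, ratio):.0f} TB effective")
```

The same 100 TB of raw flash thus presents anywhere from 200 TB to 800 TB of usable capacity depending on how reducible the workload's data is.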
With support for optimised GPU utilisation via InfiniBand, NVIDIA GPUDirect, and RDMA, HPE GreenLake for File Storage accelerates AI workloads by improving performance for model training and tuning through faster checkpoints. Front-end host connectivity to InfiniBand networks, including the NVIDIA Quantum-2 InfiniBand platform, adds flexibility. Customers can scale up to 720 PB of effective capacity (assuming 3:1 data reduction) for large-scale enterprise AI file data.
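Working the headline figure backwards as a sanity check (again illustrative arithmetic only): the 720 PB effective-capacity ceiling assumes 3:1 data reduction, which implies the raw flash underneath it:

```python
# Illustrative arithmetic: raw flash needed to present a given effective
# capacity at a stated data-reduction ratio. Not an HPE sizing tool.

def raw_capacity_pb(effective_pb: float, reduction_ratio: float) -> float:
    """Raw capacity = effective capacity / reduction ratio."""
    return effective_pb / reduction_ratio

print(raw_capacity_pb(720, 3.0))  # 240.0 PB of raw flash behind 720 PB effective
```

A workload that reduces better or worse than 3:1 would shift the effective ceiling proportionally in either direction.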