Express Computer

Speeding up the Cloud with Flash


With low latency and reduced power consumption, flash memory can help accelerate application performance and enhance server utilization in cloud environments, writes Greg Huff

The rapid increase in processor performance and the deployment of highly virtualized data centers have placed increasing stress on conventional storage, to the point that it is now often the main bottleneck to application performance and, just as importantly, to server utilization. Flash memory, with its low latency and reduced power consumption, holds the key to accelerating application performance and increasing server utilization. As with anything new, flash was initially considered risky by cautious IT managers who believed it would be difficult or disruptive to deploy. Success by early adopters is now changing that view.

Breaking through the storage bottleneck
Multi-core processors and virtualized, multi-processor servers have dramatically improved server storage input/output (I/O) rates, and bandwidth and I/O operations per second (IOPS), the preferred metrics for assessing and improving storage performance, now surpass 200,000 to 250,000 IOPS for common workloads. The problem is that most databases and other applications with heavy I/O processing require writes to storage to complete before continuing, causing transactions to stall and servers to be under-utilized. This dynamic has made I/O latency the primary limiting factor in server storage performance.

This shift has renewed the focus on the enormous I/O latency gap between a server’s memory and hard disk drive (HDD) storage: roughly 100 nanoseconds versus 10 milliseconds, a difference of five orders of magnitude. I/O latency to a storage area network (SAN) or network-attached storage (NAS) is even higher, owing to the intervening Fibre Channel or Ethernet network, which also frequently becomes congested with deep I/O queues waiting to be serviced.
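
A quick back-of-the-envelope calculation, using the approximate latencies quoted above, shows both the size of that gap and why synchronous writes cap transaction rates (illustrative figures, not measurements):

# Approximate latencies quoted above; illustrative only.
MEMORY_LATENCY_S = 100e-9   # ~100 nanoseconds for server memory
HDD_LATENCY_S = 10e-3       # ~10 milliseconds for a hard disk drive

# The gap spans roughly five orders of magnitude.
print(f"HDD is ~{HDD_LATENCY_S / MEMORY_LATENCY_S:,.0f}x slower than memory")

# If each transaction must wait for one synchronous write to disk,
# latency alone caps the per-thread transaction rate.
print(f"At most ~{1 / HDD_LATENCY_S:,.0f} synchronous disk writes per second per thread")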

Flash memory breaks through the bottleneck caused by storage latency by bridging the gap between main memory and even the fastest-spinning, shortest-stroked HDDs. Whether deployed as a solid state drive (SSD) or as an adapter, flash can accelerate application and workload performance by up to 30-fold, an increase that improves server utilization.

Direct-attached solid state storage
The closer the data is to the processor, the better the performance. This is why applications requiring high performance normally use direct-attached storage (DAS). With a typical SSD read latency of about 200 microseconds and write latencies often around 100 microseconds, using SSDs to supplement or replace HDDs can substantially improve application performance. And because SSDs have the same connectors and host interfaces as HDDs, they can be deployed seamlessly in standard server storage bays, or other external or internal configurations. Even the logical unit number (LUN) assignments can remain unchanged.

Although both Serial ATA (SATA) and Serial-Attached SCSI (SAS) SSDs can be used for DAS, SAS is the better choice for higher performance. While some applications can saturate the bandwidth of existing SAS SSDs, which operate at 6 Gb/s, 12 Gb/s SAS drives are now becoming available. And dual active/active interfaces hold the potential for 24 Gb/s SAS. By contrast, SATA drives, which target client devices, operate only at 3 Gb/s or 6 Gb/s with no roadmap for higher throughput.
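A rough conversion from line rate to usable bandwidth helps put those figures in perspective. The sketch below assumes 8b/10b encoding, which SAS and SATA use at these line rates; real throughput also depends on protocol overhead and on the drive itself.

# Rough conversion from nominal line rate to usable bandwidth, assuming
# 8b/10b encoding (10 bits on the wire per data byte); protocol overhead
# reduces real-world throughput further.
def usable_mb_per_s(line_rate_gbps: float) -> float:
    return line_rate_gbps * 1e9 / 10 / 1e6

for name, gbps in [("SATA 3 Gb/s", 3), ("SATA/SAS 6 Gb/s", 6), ("SAS 12 Gb/s", 12)]:
    print(f"{name}: ~{usable_mb_per_s(gbps):.0f} MB/s per link")

# Two active 12 Gb/s ports (dual active/active) give roughly double the bandwidth.
print(f"Dual active/active 12 Gb/s SAS: ~{2 * usable_mb_per_s(12):.0f} MB/s aggregate")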

Another option is to put the flash memory directly on the Peripheral Component Interconnect Express (PCIe) bus. Current PCIe cards consume up to 25 Watts of power (versus the 9 Watts in a storage bay), enabling higher flash performance at higher capacities, and have more lanes for higher throughput (8 versus the 1 or 2 in a storage bay).
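For a sense of scale, the sketch below multiplies approximate per-lane PCIe bandwidth by lane count; it assumes PCIe 2.0 and 3.0 rates, the generations current at the time of writing, and ignores protocol overhead.

# Approximate usable bandwidth per PCIe lane (one direction), by generation.
# PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b encoding.
LANE_MB_S = {"PCIe 2.0": 5e9 * 8 / 10 / 8 / 1e6,      # ~500 MB/s per lane
             "PCIe 3.0": 8e9 * 128 / 130 / 8 / 1e6}   # ~985 MB/s per lane

for gen, per_lane in LANE_MB_S.items():
    for lanes in (1, 2, 8):   # storage-bay links versus an x8 PCIe flash card
        print(f"{gen} x{lanes}: ~{per_lane * lanes / 1000:.1f} GB/s")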

A new class of products has emerged that integrates flash caching directly into PCIe-based RAID controller cards. These cards require no extra cables, drive bays, LUNs, or mount points for the flash. Hot, or frequently accessed, data is automatically recognized and stored in flash on the card, while cool data is automatically migrated to HDDs attached to the RAID controller, typically improving application and workload performance 4- to 5-fold. These controllers also dramatically shorten rebuild times and maintain performance in the event of a hard disk failure. Such plug-in devices enable simple upgrades and promise to be the most transparent, least disruptive way to accelerate applications and workloads.
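The article does not describe the controllers' caching algorithm, so the sketch below is only a generic illustration of promotion by access frequency: a hypothetical, greatly simplified block cache that promotes frequently read blocks into flash and serves everything else from the HDDs.

from collections import Counter

class HotDataCache:
    """Hypothetical, greatly simplified illustration of frequency-based
    hot-data promotion. A real caching controller works on block ranges,
    tracks recency as well as frequency, and handles writes, persistence,
    and RAID integration."""

    def __init__(self, capacity_blocks: int, promote_after: int = 3):
        self.capacity = capacity_blocks
        self.promote_after = promote_after
        self.access_counts = Counter()    # how often each block has been read
        self.flash = {}                   # block id -> data cached in flash

    def read(self, block_id, read_from_hdd):
        if block_id in self.flash:        # hot data: served from flash
            return self.flash[block_id]
        data = read_from_hdd(block_id)    # cool data: served from the HDDs
        self.access_counts[block_id] += 1
        if self.access_counts[block_id] >= self.promote_after:
            if len(self.flash) >= self.capacity:
                # Evict the least-frequently-accessed cached block.
                coldest = min(self.flash, key=lambda b: self.access_counts[b])
                del self.flash[coldest]
            self.flash[block_id] = data   # promote the hot block into flash
        return data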

Deploying an all-solid-state Tier 0 is now even feasible for some applications. Decisions about using flash for a particular application typically focus only on the storage layer, pitting HDDs against SSDs with an emphasis on the capital expenditure. But a look at the application layer reveals that SSDs often reduce total cost of ownership despite their higher up-front cost.

Solid state storage is more reliable, easier to manage, faster to replicate and rebuild, and consumes less power than HDDs. These advantages make it easier to satisfy service-level agreements and avoid penalties for falling below acceptable service levels. Just as importantly, the superior performance of SSDs produces far higher server utilization, reducing the number of servers, and the associated software licenses and service contracts, required to accommodate a given application workload. In some cases, SSDs can reduce other system-related costs by a factor of 5 to 10 while delivering a better user experience.
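To see how higher per-server throughput translates into fewer servers and licenses, here is a simplified consolidation sketch; every number in it is a hypothetical placeholder, not a figure from the article, and real sizing depends on the actual workload.

import math

# All figures below are hypothetical placeholders for illustration only;
# substitute measured IOPS ceilings and real per-server costs.
workload_iops = 500_000           # total I/O demand to be served
iops_per_server_hdd = 25_000      # assumed ceiling for an HDD-backed server
iops_per_server_flash = 200_000   # assumed ceiling for a flash-accelerated server
annual_cost_per_server = 15_000   # assumed hardware + licenses + support

for label, ceiling in [("HDD-backed", iops_per_server_hdd),
                       ("Flash-accelerated", iops_per_server_flash)]:
    servers = math.ceil(workload_iops / ceiling)
    print(f"{label}: {servers} servers, ~${servers * annual_cost_per_server:,} per year")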

Flash cache acceleration for SAN and NAS
With flash memory capacity far exceeding what is possible with Dynamic RAM (terabytes versus gigabytes), flash caching has become a highly effective and cost-effective way to accelerate performance and improve virtualized server utilization. Flash memory is also non-volatile, another important advantage in a cache for write operations.

Like DAS, flash cache typically delivers the highest performance when deployed directly in the server on the PCIe bus. Intelligent caching software places hot data in the low-latency flash cache, a process that is transparent to applications. With no external connections and no intervening network to a SAN or NAS, the hot data is quickly accessible. And even though flash memory has a higher latency than DRAM, flash delivers superior performance because its significantly higher capacity dramatically increases the cache hit rate. Holding an application’s entire working dataset in the flash cache is even becoming commonplace.
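The reason a larger but slower flash cache can beat a smaller DRAM cache follows from the usual average-access-time arithmetic; the hit rates and latencies below are assumed values chosen only to illustrate the effect.

# Average access latency = hit_rate * cache_latency + (1 - hit_rate) * backing_latency.
# All figures below are illustrative assumptions, not measurements.
def avg_latency_us(hit_rate, cache_latency_us, backing_latency_us=10_000):  # HDD/SAN ~10 ms
    return hit_rate * cache_latency_us + (1 - hit_rate) * backing_latency_us

# A small DRAM cache with a modest hit rate versus a much larger flash cache
# whose capacity lets it hold most, or all, of the working set.
print(f"DRAM cache,  60% hits: {avg_latency_us(0.60, 0.1):8.1f} us average")
print(f"Flash cache, 95% hits: {avg_latency_us(0.95, 200):8.1f} us average")
print(f"Flash cache, 100% hits (entire working set): {avg_latency_us(1.0, 200):.1f} us average")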

Efforts are also underway to virtualize flash cache, in effect making it a pooled resource like virtualized servers and SANs. Moving an application to another virtual machine (VM) currently requires copying the cache to the new host. With virtualized flash cache, these moves will be immediate and transparent.

Emerging Express standards
PCIe flash adapters deliver exceptional application performance but lack standardization and critical storage device attributes such as external serviceability. These limitations will begin to disappear starting in 2013 with the advent of new standards optimized for flash. Support for the PCIe interface on an externally accessible storage midplane will be provided by the new Express Bay standard, which uses the new SFF-8639 storage connector. The Express Bay provides four dedicated PCIe lanes and up to 25 Watts of power to accommodate ultra-high-performance, high-capacity enterprise SSDs in 2.5- or 3.5-inch drive form factors.

Because the Express Bay is a superset of and can coexist with standard drive bays, this evolutionary enhancement assures compatibility with existing SAS and SATA drives. And support for new SATA Express (SATAe) and SCSI Express (SCSIe) SSDs will also be provided in standard bays by multiplexing the PCIe protocols atop existing SAS/SATA lanes.

Will flash ever completely replace traditional spinning media? Not any time soon, as there is no real advantage to using SSDs in slower storage tiers or for bulk data storage. But in the fastest tiers, flash is already delivering substantial benefits, freeing servers and applications to do more work with less infrastructure, and the number of such applications is growing fast as datacenter managers see the growing ROI that flash delivers. For most users, the best practice is to continue using HDDs for their high capacity and low cost, while beginning to integrate flash where the additional performance is required and the benefits outweigh the higher initial cost.

Greg Huff is Senior Vice President of Corporate Strategy for LSI.
