John Edwards
Contributing writer

For enterprise storage, persistent memory is here to stay

News
Jun 06, 2019 | 8 mins
Data Center | Enterprise Storage

Persistent memory – also known as storage class memory – has tantalized data center operators for many years. Now the technology may finally be ready to deliver on that promise.


It’s hard to remember a time when semiconductor vendors haven’t promised a fast, cost-effective and reliable persistent memory technology to anxious data center operators. Now, after many years of waiting and disappointment, technology may have finally caught up with the hype to make persistent memory a practical proposition.

High-capacity persistent memory, also known as storage class memory (SCM), is fast and directly addressable like dynamic random-access memory (DRAM), yet is able to retain stored data even after its power has been switched off—intentionally or unintentionally. The technology can be used in data centers to replace cheaper, yet far slower traditional persistent storage components, such as hard disk drives (HDD) and solid-state drives (SSD).

Persistent memory can also be used to replace DRAM itself in some situations without imposing a significant speed penalty. In this role, persistent memory can deliver crucial operational benefits, such as lightning-fast database-server restarts during maintenance, power emergencies and other planned or unplanned reboots.

Many strategic operational applications and databases, particularly those that require low latency, high durability and strong data consistency, can benefit from persistent memory. The technology also has the potential to accelerate virtual machine (VM) storage and deliver higher performance to multi-node, distributed-cloud applications.

In a sense, persistent memory marks a rebirth of core memory. “Computers in the ‘50s to ‘70s used magnetic core memory, which was direct access, non-volatile memory,” says Doug Wong, a senior member of Toshiba Memory America’s technical staff. “Magnetic core memory was displaced by SRAM and DRAM, which are both volatile semiconductor memories.”

One of the first persistent memory devices to come to market is Intel’s Optane DC. Other vendors that have released persistent memory products or plan to do so include Samsung, Toshiba Memory America and SK Hynix.

Persistent memory: performance + reliability

With persistent memory, data centers have a unique opportunity to gain faster performance and lower latency without enduring massive technology disruption. “It’s faster than regular solid-state NAND flash-type storage, but you’re also getting the benefit that it’s persistent,” says Greg Schulz, a senior advisory analyst at vendor-independent storage advisory firm StorageIO. “It’s the best of both worlds.”

Yet persistent memory offers adopters much more than speedy, reliable storage. In an ideal IT world, all of the data associated with an application would reside within DRAM to achieve maximum performance. “This is currently not practical due to limited DRAM and the fact that DRAM is volatile—data is lost when power fails,” observes Scott Nelson, senior vice president and general manager of Toshiba Memory America’s memory business unit.

Persistent memory transports compatible applications to an “always on” status, providing continuous access to large datasets through increased system memory capacity, says Kristie Mann, Intel’s director of marketing for data center memory and storage. She notes that Optane DC can supply data centers with up to three times more system memory capacity (as much as 36TB), system restarts in seconds rather than minutes, 36% more virtual machines per node, and up to eight times better performance on Apache Spark, a widely used open-source distributed general-purpose cluster-computing framework.

System memory currently represents 60% of total platform costs, Mann says. She observes that Optane DC persistent memory provides significant customer value by delivering 1.2x performance/dollar on key customer workloads. “This value will dramatically change memory/storage economics and accelerate the data-centric era,” she predicts.

Where will persistent memory infiltrate enterprise storage?

Persistent memory is likely to first enter the IT mainstream with minimal fanfare, serving as a high-performance caching layer for high-performance SSDs. “This could be adopted relatively quickly,” Nelson observes. Yet this intermediary role promises to be merely a stepping-stone to increasingly crucial applications.

Over the next few years, persistent memory technology will impact data centers serving enterprises across an array of sectors. “Anywhere time is money,” Schulz says. “It could be financial services, but it could also be consumer-facing or sales-facing operations.”

Persistent memory supercharges anything data-related that requires extreme speed at extreme scale, observes Andrew Gooding, vice president of engineering at Aerospike, which delivered the first commercially available open database optimized for use with Intel Optane DC.

Machine learning is just one of many applications that stand to benefit from persistent memory. Gooding notes that ad tech firms, which rely on machine learning to understand consumers’ reactions to online advertising campaigns, should find their work made much easier and more effective by persistent memory. “They’re collecting information as users within an ad campaign browse the web,” he says. “If they can read and write all that data quickly, they can then apply machine-learning algorithms and tailor specific ads for users in real time.”

Meanwhile, as automakers become increasingly reliant on data insights, persistent memory promises to help them crunch numbers and refine sophisticated new technologies at breakneck speeds. “In the auto industry, manufacturers face massive data challenges in autonomous vehicles, where 20 exabytes of data needs to be processed in real time, and they’re using self-training machine-learning algorithms to help with that,” Gooding explains. “There are so many fields where huge amounts of data need to be processed quickly with machine-learning techniques—fraud detection, astronomy… the list goes on.”

Intel, like other persistent memory vendors, expects cloud service providers to be eager adopters, targeting various types of in-memory database services. Google, for example, is applying persistent memory to big data workloads on non-relational databases from vendors such as Aerospike and Redis Labs, Mann says.

High-performance computing (HPC) is yet another area where persistent memory promises to make a tremendous impact. CERN, the European Organization for Nuclear Research, is using Intel’s Optane DC to significantly reduce wait times for scientific computing. “The efficiency of their algorithms depends on … persistent memory, and CERN considers it a major breakthrough that is necessary to the work they are doing,” Mann observes.

How to prepare storage infrastructure for persistent memory

Before jumping onto the persistent memory bandwagon, organizations need to carefully scrutinize their IT infrastructure to determine the precise locations of any existing data bottlenecks. This task will be primarily application-dependent, Wong notes. “If there is significant performance degradation due to delays associated with access to data stored in non-volatile storage—SSD or HDD—then an SCM tier will improve performance,” he explains. Yet some applications, such as compute-bound workloads where CPU performance is the bottleneck, will probably not benefit from persistent memory.

Developers may need to reevaluate fundamental parts of their storage and application architectures, Gooding says. “They will need to know how to program with persistent memory,” he notes. “How, for example, to make sure writes are flushed to the actual persistent memory device when necessary, as opposed to just sitting in the CPU cache.”
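To make that concrete, the following is a minimal sketch in C of what such a flush looks like using PMDK’s libpmem library, one common way to program against persistent memory today. The file path, pool size and record contents are illustrative assumptions, not code from any vendor quoted here; the point is simply that the application itself must persist the data (here via pmem_persist, or pmem_msync as a fallback) before it can be considered durable, rather than letting it sit in the CPU cache.

/*
 * Hedged sketch: writing a record to persistent memory with PMDK's libpmem
 * and explicitly flushing it out of the CPU cache. The path and record
 * layout are hypothetical; assumes a DAX-mounted filesystem and libpmem
 * installed (build with: cc flush_example.c -lpmem).
 */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define POOL_SIZE (4 * 1024 * 1024)   /* 4 MiB example mapping */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map (creating if needed) a file on a persistent-memory-aware filesystem. */
    char *pmem_addr = pmem_map_file("/mnt/pmem/example-pool", POOL_SIZE,
                                    PMEM_FILE_CREATE, 0666,
                                    &mapped_len, &is_pmem);
    if (pmem_addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Write a record; at this point it may still sit in the CPU cache. */
    const char *record = "order=42;status=confirmed";
    strcpy(pmem_addr, record);

    if (is_pmem) {
        /* Flush the cache lines so the data is durable on the pmem device. */
        pmem_persist(pmem_addr, strlen(record) + 1);
    } else {
        /* Fallback when the mapping is not real persistent memory. */
        pmem_msync(pmem_addr, strlen(record) + 1);
    }

    pmem_unmap(pmem_addr, mapped_len);
    return 0;
}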

To leverage all of persistent memory’s potential benefits, significant changes may also be required in how code is designed. When moving applications from DRAM and flash to persistent memory, developers will need to consider, for instance, what happens when a program crashes and restarts. “Right now, if they write code that leaks memory, that leaked memory is recovered on restart,” Gooding explains. With persistent memory, that isn’t necessarily the case. “Developers need to make sure the code is designed to reconstruct a consistent state when a program restarts,” he notes. “You may not realize how much your designs rely on the traditional combination of fast volatile DRAM and block storage, so it can be tricky to change your code designs for something completely new like persistent memory.”
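One widely used way to get that consistent-on-restart behavior is to persist the payload first and only then persist a small “valid” flag, so a crash can never expose a half-written record. The sketch below, again using PMDK’s libpmem and a hypothetical record layout, illustrates the ordering; it is an assumption about how an application might structure this, not code from Aerospike or any other vendor quoted here.

/*
 * Hedged sketch of a common crash-consistency pattern on persistent memory:
 * persist the payload before persisting a "valid" flag, so that a program
 * restarting after a crash sees either the old state or the complete new
 * record, never a partial write. Struct layout is illustrative only.
 */
#include <libpmem.h>
#include <stdint.h>
#include <string.h>

struct pmem_record {
    char     payload[240];  /* application data */
    uint64_t valid;         /* 0 = incomplete, 1 = committed */
};

/* Write a record so it is either fully visible or ignored after a crash. */
static void write_record(struct pmem_record *rec, const char *data)
{
    /* Step 1: mark the slot invalid before reusing it. */
    rec->valid = 0;
    pmem_persist(&rec->valid, sizeof(rec->valid));

    /* Step 2: write and persist the payload. */
    strncpy(rec->payload, data, sizeof(rec->payload) - 1);
    rec->payload[sizeof(rec->payload) - 1] = '\0';
    pmem_persist(rec->payload, sizeof(rec->payload));

    /* Step 3: only now flip the flag; a crash before this point leaves
       the previous consistent state rather than a half-written record. */
    rec->valid = 1;
    pmem_persist(&rec->valid, sizeof(rec->valid));
}

/* On restart, skip records whose flag never reached persistent memory. */
static int record_is_committed(const struct pmem_record *rec)
{
    return rec->valid == 1;
}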

Older versions of operating systems may also need to be updated to accommodate the new technology, although newer OSes are gradually becoming persistent memory aware, Schulz says. “In other words, if they detect that persistent memory is available, then they know how to utilize that either as a cache, or some other memory.”

Hypervisors, such as Hyper-V and VMware, now know how to leverage persistent memory to support productivity, performance and rapid restarts. By utilizing persistent memory along with the latest versions of VMware, a whole system can see an uplift in speed and also maximize the number of VMs to fit on a single host, says Ian McClarty, CEO and president of data center operator PhoenixNAP Global IT Services. “This is a great use case for companies who want to own less hardware or service providers who want to maximize hardware to virtual machine deployments.”

Many key enterprise applications, particularly databases, are also becoming persistent memory aware. SQL Server and SAP’s flagship HANA database management platform have both embraced persistent memory. “The SAP HANA platform is commonly used across multiple industries to process data and transactions, and then run advanced analytics … to deliver real-time insights,” Mann observes.

In terms of timing, enterprises and IT organizations should begin persistent memory planning immediately, Schulz recommends. “You should be talking with your vendors and understanding their roadmap, their plans, for not only supporting this technology, but also in what mode: as storage, as memory.”