
NVDIMM: classification and advantages

Analysis of the main NVDIMM storage devices and their features.
Every good article about storage devices can only begin with an established classification, which we will set aside for a few paragraphs, and with a graph of the aspect of greatest interest: the classic pyramid of the memory hierarchy, the growth trend in the amount of data produced, or a comparison of the various types of devices in terms of euro/GB.
Here is the classic hierarchical representation of storage devices: at the top of the pyramid we find the devices with the lowest access times (CPU registers, L1 cache, L2, …, DRAM), while going down towards the base we find devices with progressively increasing latency (SSD, HDD, …).

To allow the CPU to execute instructions, data must first be carried from the devices at the base of the pyramid up to its top, climbing the memory hierarchy exactly as the stone blocks once did when the pyramid was built.
Too bad that the scale used in the graph does not convey the enormous effort required to move the blocks. So let’s assume that one meter of pyramid height corresponds to one ns (nanosecond, a billionth of a second). A nanosecond corresponds, approximately, to the duration of a clock cycle, i.e. the time it takes the CPU to access its fastest (and smallest) memory area: one of its registers.
To retrieve data present in the CPU caches, it is necessary to go down a few meters, while to access data in the “fast” DRAM it is necessary to go down several tens of meters. To find out how many, we can use a simple tool [1]:
[root@pmserver Linux]# ./mlc --latency_matrix
Intel(R) Memory Latency Checker - v3.9
Command line parameters: --latency_matrix
Using buffer size of 2000.000MiB
Measuring idle latencies (in ns)...
                Numa node
Numa node            0       1
       0          81.7   130.5
       1         130.4    81.6
In the best case, it takes about 81 meters; in the worst case, 130. And these are still favorable scenarios. What would happen if the data were on the SSD with the best latency available today (about 10 microseconds, i.e. millionths of a second)?
The blocks to be lifted would no longer be at the base of the pyramid, but on the slopes of Mount Everest; after reaching the summit, our involuntary construction worker would then have to climb another 8 pyramids stacked on top of it before being able to lay the block at the top.

Even worse is the case in which the data is stored on a common hard disk (access time in the order of thousandths of a second): it would be a real journey to the center of the Earth, since we would have to descend thousands of km underground!
Drawing the graph at the right scale, we can see an incredible gap in the memory hierarchy. This gap separates the various types of memory according to the historical classification into volatile memories and permanent memories. The former are very fast but unable to preserve data in the absence of power; the latter have a large capacity and retain information when the computer is off. Another distinguishing feature of the former is that they are byte-addressable, i.e. able to access very small pieces of data, while permanent storage devices are generally block-addressable: they cannot access single bytes but must operate on larger blocks (e.g. 4 kB).
In this context, a new type of device has appeared: non-volatile/persistent memory, a category to which NVDIMM devices (Non-Volatile Dual In-line Memory Modules) belong. These devices are housed in the slots commonly used for RAM. Currently, there are three types of NVDIMM:
- NVDIMM-N: modules containing both flash memory (persistent) and DRAM. Data protection in the event of a power failure is typically guaranteed by a high-capacity capacitor, which allows the contents of the DRAM to be saved to the flash memory. They offer performance similar to DRAM, but limited capacity, and are byte-addressable.
- NVDIMM-F: flash-type memory which, unlike traditional SSDs housed on the SATA/SAS/PCIe bus, takes advantage of the higher performance offered by the DDR4 bus. These modules can provide large capacities, but with significantly lower performance than DRAM, and allow exclusively block access.
- NVDIMM-P: combines the properties of flash memory (large capacity, block access) with those of DRAM (low latency, byte-addressability).
P-type devices are particularly interesting, both for their performance (in terms of latency and throughput) and for their versatility: they can in fact operate both as persistent memory and as volatile memory, allowing both block and byte access.
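As a first practical step, a quick way to check which modules are installed on a Linux server is to query them with the ipmctl utility (specific to Intel persistent memory) and the generic ndctl utility. This is only a sketch, assuming both tools are installed; the hostname reuses the one from the example above, and the actual output depends on the platform:
[root@pmserver ~]# ipmctl show -dimm         # inventory of the installed persistent memory modules
[root@pmserver ~]# ipmctl show -topology     # how the memory slots are populated (DRAM vs NVDIMM)
[root@pmserver ~]# ndctl list -D             # the same modules, as seen by the kernel NVDIMM subsystem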
To appreciate the potential of this type of device, let’s consider one of the first NVDIMM-P implementations available, the Intel DC Persistent Memory Module (PMM). These memory modules come in larger sizes (128GB, 256GB, 512GB) than those commonly available for DRAM, which allows up to 6TB to be made available on a common dual-socket server [2]. How can we use this data area? Essentially in two different ways (a provisioning sketch follows the list):
- Memory mode: the Intel DCPMMs are configured to be used as main memory, while the system DRAM acts as a cache. In this mode, a volatile memory area is created, with performance comparable to that of DRAM.
- App Direct mode: it allows the creation of persistent memory areas, with a consequent increase in latency (a few microseconds).
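The choice between the two modes is made when provisioning the modules, which on Linux can be done with ipmctl. The following is a hypothetical session to be adapted to the platform, not a definitive recipe; a reboot is required before the goal takes effect:
[root@pmserver ~]# ipmctl create -goal MemoryMode=100                  # use 100% of the DCPMM capacity in Memory mode
[root@pmserver ~]# ipmctl create -goal PersistentMemoryType=AppDirect  # or: provision the capacity for App Direct mode
[root@pmserver ~]# ipmctl show -goal                                   # review the pending goal, then reboot to apply it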
Memory mode enables the creation of servers with a considerable amount of memory, which is usually obtainable only in machines with a high number of sockets, with the corresponding penalty in terms of higher cost and system complexity. Some usage scenarios are evident: analysis of large datasets that cannot be distributed over multiple machines, in-memory databases, platforms hosting a large number of virtual machines or containers, and much more!
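Once the system is back up in Memory mode, the DCPMM capacity should simply show up as ordinary system RAM. A quick sanity check could look like this (the exact figures obviously depend on the modules installed):
[root@pmserver ~]# ipmctl show -memoryresources   # how the capacity is split between Memory mode and App Direct
[root@pmserver ~]# free -g                        # the capacity provisioned in Memory mode appears as normal RAM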
App Direct mode allows applications, appropriately modified, to have a persistent data area available, accessible in the same way as DRAM. The need to modify applications might seem to limit the adoption of this technology in the immediate future, but fortunately there is a rather simple way to take advantage of the performance offered by Intel DCPMMs: using the modules as Storage over App Direct. This allows these devices to be accessed as if they were ordinary block devices (e.g. HDDs), in a way that is completely transparent to applications. It is therefore possible to use common file systems (xfs, ext4) to easily exploit Intel DCPMMs and create storage areas suitable for any type of I/O.
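To give an idea of how simple this is, here is a minimal sketch of the steps on Linux. It assumes ndctl is installed, that an App Direct region is already provisioned, and that /dev/pmem0 and /mnt/pmem are just example names for the resulting device and mount point:
[root@pmserver ~]# ndctl create-namespace --mode=fsdax   # expose an App Direct region as a /dev/pmemX block device
[root@pmserver ~]# mkfs.xfs /dev/pmem0                   # create a regular file system on it (ext4 works as well)
[root@pmserver ~]# mount -o dax /dev/pmem0 /mnt/pmem     # mount with DAX to bypass the page cache where supported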
Having said that, some questions arise: do NVDIMMs perform as well as claimed? How complicated are they to use?
In the next episode we will try to answer these questions, starting with some simple high-level checks.
Stay tuned!
[1] Intel(R) Memory Latency Checker – https://software.intel.com/content/www/us/en/develop/articles/intelr-memory-latency-checker.html
[2] The number and configuration of the NVDIMMs depend on the features of the platform; consult the vendor's manual.