After the CPU and GPU, memory is the most important component of any computing device, be it a smartphone, a desktop, or a server. Without memory, no computer can run reliably. But when it comes to memory (RAM), there are many types found in different devices. Most mainstream systems use DDR4 memory, while graphics cards are equipped with GDDR5 or GDDR6 memory. Both are based on DRAM (Dynamic Random Access Memory), but there are some key differences.
Many people confuse DDR4 with GDDR5 memory and often use the terms interchangeably, which is wrong. Then there’s LPDDR4 memory used in smartphones and other mobile devices, as well as HBM used in servers and exascale systems. In this post, we explore the differences between DDR4 and GDDR5 memory, along with a brief look at HBM, LPDDR4, and the newer GDDR6 standard.
Double Data Rate Generation Four (DDR4)
Nearly every kind of memory is based on dynamic random access memory or DRAM.
DDR4 is the latest iteration of DRAM. Released in 2014, it initially focused on reducing voltage and power consumption rather than increasing operating frequencies. With the arrival of AMD’s Ryzen processors and their MCM design, however, memory speed has gained renewed importance, as the Infinity Fabric interconnect that links the chiplets runs in step with the memory clock.
DDR4 vs DDR3
Aside from the obvious faster frequencies, the primary advantage of DDR4 memory over DDR3 is higher DIMM capacity: up to 64 GB per module, whereas DDR3 tops out at 16 GB. DDR4 also draws considerably less power and runs at a lower voltage (1.2 V versus 1.5 V).
With that out of the way, let’s move on to the main comparison.

DDR3 vs GDDR5
- DDR3 runs at a higher voltage than GDDR5: 1.5 V as standard, with low-voltage DDR3L modules at 1.35 V and overclocked kits reaching 1.65 V. GDDR5, on the other hand, typically operates at around 1.35-1.5 V.
- Both DDR4 and DDR3 use a 64-bit memory controller per channel, which results in a 128-bit bus for dual-channel memory and a 256-bit bus for quad-channel. GDDR5 memory, on the other hand, leverages a puny 32-bit controller per channel.
- Where CPU memory configurations have wider but fewer channels, GPUs can support any number of 32-bit memory channels. This is the reason many high-end GPUs, like the GeForce RTX 2080 Ti and RTX 2080, have a 352-bit and 256-bit bus width, respectively.
Both these cards are connected to 1 GB memory chips via 8 (for the 2080) and 11 (for the Ti) 32-bit memory controllers or channels. GDDR5/6 can also operate in what is called clamshell mode, where each channel, instead of being connected to one memory chip, is split between two. This allows manufacturers to double the memory capacity and makes hybrid memory configurations, like the GTX 660 with its 192-bit bus width, possible.
- Another core difference between DDR3 and GDDR5/6 memory involves the I/O cycles. Like SATA, DDR3 can only perform one operation (read or write) per cycle. GDDR5 can handle input (read) as well as output (write) on the same cycle, essentially doubling the effective bus width.
- All this might put DDR3/4 memory in a bad light, but each configuration actually suits its workload. CPUs are largely sequential in nature, while GPUs run thousands of parallel cores. The former benefits from low latency and narrower channels; GPUs require much higher bandwidth and can tolerate looser timings.
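The bus-width, capacity, and bandwidth figures above are simple arithmetic, and can be sketched as follows (the helper functions are illustrative, not from any real API):

```python
# Sketch of the bus-width and bandwidth arithmetic discussed above.
# Helper names are my own; the numbers come from the article.

def bus_width_bits(num_channels: int, channel_bits: int) -> int:
    """Total memory bus width: channels x bits per channel."""
    return num_channels * channel_bits

def capacity_gb(num_channels: int, chip_gb: int = 1, clamshell: bool = False) -> int:
    """VRAM capacity: one chip per channel, or two in clamshell mode."""
    return num_channels * (2 if clamshell else 1) * chip_gb

def peak_bandwidth_gbs(bus_bits: int, transfer_rate_mtps: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bytes) x (transfers per second)."""
    return bus_bits / 8 * transfer_rate_mtps / 1000

# CPU side: dual-channel DDR4 -> 2 x 64-bit = 128-bit bus
print(bus_width_bits(2, 64))           # 128
# GPU side: RTX 2080 -> 8 x 32-bit channels = 256-bit bus, 8 x 1 GB chips
print(bus_width_bits(8, 32))           # 256
print(capacity_gb(8))                  # 8
# Dual-channel DDR4-3200 vs a 256-bit GDDR6 card at 14000 MT/s per pin
print(peak_bandwidth_gbs(128, 3200))   # 51.2
print(peak_bandwidth_gbs(256, 14000))  # 448.0
```

The contrast is stark: even fast dual-channel DDR4 delivers roughly a tenth of the bandwidth of a mid-sized GDDR6 bus, which is exactly the trade-off the bullet points describe.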
GDDR5 vs GDDR5X vs GDDR6
GDDR6 was preceded by GDDR5X, which was more of a half-generation upgrade. GDDR5X features transfer rates of up to 14 Gbit/s per pin, twice as much as GDDR5. GDDR5X has two modes:
- The memory controller can run at twice the speed (double data rate) with a prefetch of 8n. This is identical to how GDDR5 runs.
- There’s also a quad-data-rate mode that increases the prefetch to 16n.
GDDR6, like GDDR5X, has a 16n prefetch but it’s divided into two channels. So GDDR6 fetches 32 bytes per channel for a total of 64 bytes just like GDDR5X and twice that of GDDR5. While this doesn’t improve memory transfer speeds over GDDR5X, it allows for more versatility.
GDDR6 can fetch the same amount of data as GDDR5X but in two separate channels, allowing it to function like two smaller chips instead of one, in addition to a wider single one.
Other than that, GDDR6 also doubles the density to 16 Gb per chip (2x that of GDDR5X) and significantly improves bandwidth by raising the per-pin transfer rate from 12 Gbps to up to 16 Gbps.
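The prefetch figures above can be sanity-checked with a little arithmetic. A sketch, using a hypothetical helper:

```python
def bytes_per_access(prefetch_n: int, channel_bits: int, channels: int = 1) -> int:
    """Data fetched per memory access: prefetch depth x channel width in bytes,
    summed over the number of channels."""
    return prefetch_n * channel_bits // 8 * channels

print(bytes_per_access(8, 32))      # GDDR5: 8n on a 32-bit channel -> 32 bytes
print(bytes_per_access(16, 32))     # GDDR5X: 16n on a 32-bit channel -> 64 bytes
print(bytes_per_access(16, 16, 2))  # GDDR6: 16n on two 16-bit channels -> 64 bytes
```

GDDR5X and GDDR6 move the same 64 bytes per access; GDDR6 simply splits the interface into two independent 16-bit channels, which is where the extra flexibility comes from.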
High Bandwidth Memory (HBM)
First popularized by AMD’s Fiji graphics cards, high bandwidth memory or HBM is a low-power memory standard with a very wide bus. HBM achieves substantially higher bandwidth than GDDR5 while drawing much less power in a smaller form factor.
HBM adopts clocks as low as 500 MHz to stay within a low TDP target and makes up for the loss in clock speed with a massive bus (usually 4096 bits wide). AMD’s Radeon RX Vega cards are the best example of HBM2 in consumer hardware. HBM2 solved the 4 GB capacity limit of HBM1, but limited yields coupled with a memory shortage prevented AMD from capitalizing on the consumer GPU front.
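To see how a 500 MHz clock still yields enormous bandwidth, run the numbers on that 4096-bit bus (a sketch with an illustrative helper; HBM, like all DDR memory, transfers data on both clock edges):

```python
def ddr_bandwidth_gbs(bus_bits: int, clock_mhz: float) -> float:
    """Peak GB/s for a double-data-rate bus: bytes per transfer x 2 transfers per clock."""
    return bus_bits / 8 * clock_mhz * 2 / 1000

# HBM1 on Fiji: 4096-bit bus at 500 MHz
print(ddr_bandwidth_gbs(4096, 500))  # 512.0
```

That 512 GB/s matches the figure Fiji cards shipped with, despite a clock a fraction of what GDDR5 ran at.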
LPDDR4 vs DDR4
LPDDR4 is the mobile equivalent of DDR4 memory. Compared to DDR4, it offers reduced power consumption at the cost of bandwidth. LPDDR4 uses dual 16-bit channels for a 32-bit total bus, whereas DDR4 uses 64-bit channels. LPDDR4 therefore halves the bus width but makes up for it with a lower operating voltage of 1.1 V.
On the plus side, enhanced variants of the memory also allow a 16n prefetch, or 32 bytes per channel (64 bytes across both), like GDDR6 memory.
This allows for greater power efficiency in smartphones and longer battery life. Micron’s LPDDR4 tops out the standard with a 2133 MHz clock for a transfer rate of 4266 MT/s, while Samsung follows shortly after with a 1600 MHz clock and a transfer rate of 3200 MT/s.
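The clock-to-transfer-rate figures quoted above follow directly from the double data rate (two transfers per I/O clock cycle); a quick sketch:

```python
def transfer_rate_mtps(io_clock_mhz: int) -> int:
    """DDR memory transfers data on both clock edges: MT/s = 2 x I/O clock in MHz."""
    return 2 * io_clock_mhz

print(transfer_rate_mtps(2133))  # Micron LPDDR4: 4266 MT/s
print(transfer_rate_mtps(1600))  # Samsung LPDDR4: 3200 MT/s
```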