Memory, or RAM, is one of the most important components of a computer; without it, you can't even boot up your PC. Modern computers employ several different kinds of memory. Most desktop and laptop systems use DDR4 memory, while graphics cards and display adapters are equipped with GDDR5 or GDDR6 memory. Both are based on DRAM (Dynamic Random Access Memory), but there are some key differences.
Many people confuse the two and use them interchangeably. There's also LPDDR4 memory, used in smartphones and other mobile devices, and HBM, utilized in servers and exascale computers. In this post, we explore the differences between DDR4 and GDDR5 memory, along with a brief explanation of HBM, LPDDR4 and the newer GDDR6 standard.
Double Data Rate Fourth Generation (DDR4)
Nearly every kind of memory is based on dynamic random access memory or DRAM.
DDR4 is the latest mainstream iteration of DRAM. Released in 2014, it initially focused on reducing voltage and power consumption rather than increasing operating frequencies. With the arrival of AMD's Ryzen processors and their MCM (multi-chip module) design, however, memory frequency has become increasingly important.
DDR4 vs DDR3
Aside from the obvious (higher frequencies), the primary advantage of DDR4 memory over DDR3 is larger DIMM capacities: up to 64 GB, where DDR3 is limited to 16 GB. DDR4 also draws considerably less power and runs at a lower voltage.
With that out of the way, let's see how DDR3 and DDR4 compare with GDDR5.
DDR3 vs DDR4 vs GDDR5
- DDR3 runs at 1.35-1.65 volts depending on the variant, while DDR4 lowers this to 1.2 V. GDDR5, for its part, typically operates at around 1.35-1.5 V.
- Both DDR4 and DDR3 use a 64-bit memory controller per channel, which results in a 128-bit bus for dual-channel memory and a 256-bit bus for quad-channel. GDDR5 memory, on the other hand, leverages a puny 32-bit controller per channel.
- Where CPU memory configurations have wider but fewer channels, GPUs can support almost any number of 32-bit memory channels. This is why many high-end GPUs, like the GeForce RTX 2080 Ti and RTX 2080, have 352-bit and 256-bit bus widths, respectively.
Both RTX 20-series cards are connected to 1 GB memory chips through 32-bit memory controllers (channels): eight for the 2080 and eleven for the 2080 Ti. GDDR5/6 can also operate in what is called clamshell mode, where each channel, instead of being connected to one memory chip, is split between two. This lets manufacturers double the memory capacity, and it makes hybrid memory configurations like the GTX 660 with its 192-bit bus width possible.
- Another core difference between DDR3 and GDDR5/6 memory involves the I/O cycles. Like SATA, DDR3 can perform only one operation (read or write) per cycle. GDDR5 can handle input (read) as well as output (write) on the same cycle, essentially doubling the effective bus width.
- All this might seem to put DDR3/4 memory in a bad light, but each configuration actually suits its workload. CPUs are largely sequential in nature, while GPUs run thousands of parallel cores. The former benefits from low latency and slimmer channels, while GPUs require much higher bandwidth and can tolerate looser timings.
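The channel arithmetic above can be sketched in a few lines of Python (a minimal sketch; the helper names are my own, not part of any real API):

```python
# Total bus width = per-channel controller width x number of channels.
def total_bus_width(channel_width_bits, num_channels):
    return channel_width_bits * num_channels

# CPU-style configurations: wide 64-bit channels, few of them.
assert total_bus_width(64, 2) == 128   # dual-channel DDR3/DDR4
assert total_bus_width(64, 4) == 256   # quad-channel

# GPU-style configurations: narrow 32-bit channels, many of them.
assert total_bus_width(32, 8) == 256   # eight controllers (RTX 2080 class)
assert total_bus_width(32, 11) == 352  # eleven controllers (RTX 2080 Ti class)

# Clamshell mode splits each channel between two chips, doubling capacity.
def capacity_gb(num_channels, chips_per_channel, chip_size_gb):
    return num_channels * chips_per_channel * chip_size_gb

assert capacity_gb(8, 1, 1) == 8    # normal mode with 1 GB chips
assert capacity_gb(8, 2, 1) == 16   # clamshell mode doubles it
```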
GDDR5 vs GDDR5X vs GDDR6
GDDR6 was preceded by GDDR5X, which was more of a half-generation upgrade. GDDR5X features transfer rates of up to 14 Gbps per pin, roughly twice as much as GDDR5.
This was achieved by using a deeper prefetch. Unlike GDDR5's 8n prefetch, GDDR5X has a 16n prefetch architecture. This allows it to fetch 64 bytes (512 bits) of data per access, while GDDR5 is limited to 32 bytes.
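The prefetch math works out as follows; a quick Python sketch (the helper name is my own):

```python
# Bytes delivered per memory access = prefetch depth x channel width / 8.
def fetch_bytes(prefetch_n, channel_width_bits):
    return prefetch_n * channel_width_bits // 8

assert fetch_bytes(8, 32) == 32    # GDDR5: 8n prefetch on a 32-bit channel
assert fetch_bytes(16, 32) == 64   # GDDR5X: 16n prefetch on a 32-bit channel
```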
GDDR6, like GDDR5X, has a 16n prefetch, but it is divided across two channels. GDDR6 fetches 32 bytes per channel for a total of 64 bytes, the same as GDDR5X and twice as much as GDDR5. While this doesn't improve transfer speeds over GDDR5X, it allows for more versatility.
Because GDDR6 fetches the same amount of data as GDDR5X but over two independent channels, each chip can function as two smaller chips rather than only as one wide one.
Other than that, GDDR6 also doubles the density to 16 Gb per chip (versus 8 Gb for GDDR5X) and significantly improves bandwidth by raising the per-pin data rate from 12 Gbps to as much as 16 Gbps.
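Peak bandwidth is simply the per-pin data rate multiplied by the bus width; a quick Python check (the function name is my own):

```python
# Peak bandwidth in GB/s = per-pin data rate (Gb/s) x bus width (bits) / 8.
def peak_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

assert peak_bandwidth_gbs(8, 256) == 256.0    # GDDR5 at 8 Gbps on a 256-bit bus
assert peak_bandwidth_gbs(14, 352) == 616.0   # GDDR6 at 14 Gbps on a 352-bit bus
```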
High Bandwidth Memory (HBM)
First popularized by AMD's Fiji graphics cards, high bandwidth memory, or HBM, is a low-power memory standard with a very wide bus. HBM achieves substantially higher bandwidth than GDDR5 while drawing much less power in a smaller form factor.
HBM adopts clocks as low as 500 MHz to meet a low TDP target and makes up for the loss in per-pin speed with a massive bus (usually 4096 bits across a four-stack configuration). AMD's Radeon RX Vega cards are the best-known example of HBM2 in consumer hardware. HBM2 solved the 4 GB capacity limit of first-generation HBM, but limited yields coupled with a memory shortage prevented AMD from capitalizing on it in the consumer GPU market.
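HBM's numbers illustrate how a wide bus compensates for a low clock; a small Python sketch (the function name is my own):

```python
# Peak bandwidth in GB/s = per-pin data rate (Gb/s) x bus width (bits) / 8.
def peak_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

# First-gen HBM: a 500 MHz clock with double-data-rate signaling gives
# 1 Gb/s per pin, but spread across a 4096-bit bus (four stacks).
assert peak_bandwidth_gbs(1.0, 4096) == 512.0   # e.g. Fiji's 512 GB/s

# GDDR5 at 8 Gb/s per pin needs a much higher clock on a narrower 256-bit bus:
assert peak_bandwidth_gbs(8.0, 256) == 256.0
```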
LPDDR4 vs DDR4
LPDDR4 is the mobile equivalent of DDR4 memory. Compared to DDR4, it offers reduced power consumption but does so at the cost of bandwidth. LPDDR4 has dual 16-bit channels resulting in a 32-bit total bus. In comparison, DDR4 has 64-bit channels.
At the same time, LPDDR4 has a prefetch of 16n per channel, for a total of (16 words x 16 bits) 256 bits, or 32 bytes, per channel. That works out to 512 bits, or 64 bytes, across both channels.
DDR4, on the other hand, has two independent 8n-prefetch banks per channel, which can execute two separate 8n prefetches. This is done by using a multiplexer to time-division multiplex the internal banks.
This allows LPDDR4 to run at far greater power efficiency in smartphones, translating into longer battery standby times. Micron's LPDDR4 tops out the standard with a 2133 MHz clock for a transfer rate of 4266 MT/s, while Samsung follows with a 1600 MHz clock and a transfer rate of 3200 MT/s.
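These figures follow from double-data-rate signaling and the prefetch widths described above; a small Python sketch (the helper names are my own):

```python
# DDR signaling transfers data on both clock edges.
def transfer_rate_mts(clock_mhz):
    return clock_mhz * 2

assert transfer_rate_mts(2133) == 4266   # Micron's top LPDDR4 bin
assert transfer_rate_mts(1600) == 3200   # Samsung's 3200 MT/s parts

# Bytes per access = prefetch depth x channel width x channel count / 8.
def prefetch_bytes(prefetch_n, channel_width_bits, num_channels):
    return prefetch_n * channel_width_bits * num_channels // 8

assert prefetch_bytes(16, 16, 2) == 64   # LPDDR4: 16n on two 16-bit channels
assert prefetch_bytes(8, 64, 1) == 64    # DDR4: one 8n prefetch on a 64-bit channel
```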