What the heck are memory timings anyway? RAM is already complicated enough: you have to deal with memory speed, capacity, and whether it’s running in single- or dual-channel mode, all of which affect performance in different ways. But on top of all that, you have to contend with four more numbers: the primary memory timings. If you’ve ever looked at a memory module, you’ll see an extra four numbers that look something like this: 8-8-8-24. Those are the memory timings.
Before we get to the memory timings themselves, let’s have a quick look at how RAM transfer speeds are calculated.
Memory or DDR transfer rates?
DDR4 RAM features a double data rate, meaning that 2 transfers take place per cycle. What else did you think “DDR” stood for? Let’s take standard DDR4-2133. The actual, nominal frequency of this memory is 1066 MHz, but effectively this amounts to 2133 million transfers per second (MT/s). Because they’re effectively the same, people also refer to DDR4 memory as running at 2133 MHz, since a DDR module at 1066 is the equivalent of a single-pumped module at 2133. RAM is connected to the CPU across a 64-bit bus.
So, in order to calculate the total transfer rate (which is expressed in bytes, not bits), you’ll need to multiply the effective speed (2133MHz) by the bus width. Then divide this by 8, since 1 byte is eight bits. For 2133 MHz DDR4, this translates into 17,064 MB/s of bandwidth, or approximately 17 GB/s. Now, if you’re running in dual channel mode, your CPU is connected over 2x 64-bit buses to the RAM, meaning that your effective bandwidth is approximately 34 GB/s.
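The arithmetic above can be sketched in a few lines of Python. This is a hypothetical helper written for illustration, not part of any library:

```python
# Peak DDR4 bandwidth from the transfer rate and bus width.
# peak_bandwidth_mbs is an illustrative name, not an existing API.

def peak_bandwidth_mbs(transfer_rate_mts: int, bus_width_bits: int = 64,
                       channels: int = 1) -> float:
    """Peak bandwidth in MB/s: transfers/s x bits per transfer / 8."""
    return transfer_rate_mts * bus_width_bits * channels / 8

single = peak_bandwidth_mbs(2133)            # DDR4-2133, single channel
dual = peak_bandwidth_mbs(2133, channels=2)  # same kit in dual channel

print(single)  # 17064.0 MB/s, i.e. ~17 GB/s
print(dual)    # 34128.0 MB/s, i.e. ~34 GB/s
```

Doubling the channel count simply doubles the bus width, which is why dual-channel mode lands at roughly 34 GB/s for the same modules.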
Theoretically, this is all you’d need to know in order to determine RAM speed. In practice, there are other factors. This is where RAM timings come into the picture. Each of the four RAM timing numbers represents a different variable. Let’s start with the first:
tCL (CAS Latency):
This refers to the delay (latency) between your CPU requesting data from the RAM and the time that the RAM starts sending it. The lower the CAS latency, the less delay. The number refers to the number of clock cycles of delay introduced. For example, CL 9 means a delay of nine clock cycles between the CPU requesting data and the RAM starting the transfer.
tRCD (RAS to CAS Delay):
This has to do with the way data is organized in RAM: in a matrix made of logical rows and columns. tRCD refers to the delay between the activation of the row holding a piece of data and the activation of the column it sits in.
tRP (RAS Precharge):
RAS Precharge is functionally related to tRCD. Only one row in the data matrix can be active at a time. tRP refers to the length of time between closing access to one row and initiating access to another. The Precharge command is issued once data has been collected from a given row; it closes the row that was in use and allows a new one to be activated.
tRAS (Active to Precharge Delay):
This refers to the minimum number of cycles a row must remain active between the command that opens it and the Precharge command that closes it. In effect, it puts a floor on how quickly accesses to different rows can follow one another.
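Putting the four together: the dash-separated string printed on a module’s label lists the timings in the order tCL-tRCD-tRP-tRAS. A small, hypothetical Python helper to label the numbers:

```python
# Map a label like "8-8-8-24" onto the four primary timings.
# parse_timings is an illustrative helper, not an existing API.

def parse_timings(spec: str) -> dict:
    names = ("tCL", "tRCD", "tRP", "tRAS")
    values = (int(v) for v in spec.split("-"))
    return dict(zip(names, values))

print(parse_timings("8-8-8-24"))
# {'tCL': 8, 'tRCD': 8, 'tRP': 8, 'tRAS': 24}
```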
How are Timings and RAM Speed connected?
If you’re familiar with the concept of buffering, it’s easy to see how RAM timings can have a notable impact on overall RAM performance. Each timing figure represents the time taken by operations on the RAM module itself, something distinct from the transfer rate. Regardless of how high the RAM is clocked, your overall performance will be limited by how fast data can be stored and retrieved on the module.
These two factors are interconnected, though. A higher clock speed means a higher transfer rate, which in turn means that the CPU is fed data faster (and can, therefore, request it faster). But each cycle also gets shorter, while the physical delays inside the DRAM chips stay roughly the same, so the timing numbers (which count cycles) have to increase as clock speed increases. Otherwise, delays on the RAM module will leave the CPU sitting idle between instances of data access.
At first glance, it might not seem like a big delay: a CL 9 RAM module is only delaying transfer by nine clock cycles, right? Well, yes. But this happens every time that particular operation (the CPU requesting data) takes place. Factor that in for all four timings, and you’ll see that these delays stack up, because they can occur millions of times per second.
With loose enough timings, this can result in effective read/write rates that are slower than what you’d get at a lower clock with tighter timings.
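To see why, convert the timing number into wall-clock time: one memory-clock cycle lasts 2000 / (transfer rate in MT/s) nanoseconds, since the clock runs at half the transfer rate. A quick sketch (the kits below are illustrative examples, not specific products):

```python
# Absolute (wall-clock) CAS latency: cycles are shorter at higher clocks,
# so the timing number alone doesn't tell you the real delay.
# cas_latency_ns is an illustrative helper, not an existing API.

def cas_latency_ns(cl_cycles: int, transfer_rate_mts: int) -> float:
    # One memory-clock cycle = 2000 / MT/s nanoseconds
    # (the clock runs at half the transfer rate).
    return cl_cycles * 2000 / transfer_rate_mts

print(round(cas_latency_ns(9, 2133), 2))   # DDR4-2133 CL9  -> 8.44 ns
print(round(cas_latency_ns(16, 3000), 2))  # DDR4-3000 CL16 -> 10.67 ns
```

Here the slower-clocked CL9 kit actually starts sending data sooner than the faster-clocked CL16 kit, even though the CL16 kit wins on raw bandwidth.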
How do you Tweak RAM Timings?
First, identify the highest frequency your RAM will achieve with the default timing settings. Most BIOSes will compensate for higher-than-stock RAM speeds by loosening the RAM timings: if the timings stayed tight while the transfer rate increased, the module would have less breathing room, with potential instability as a result.
Once you’ve found your maximum stable clock rate (remember to raise your RAM voltage accordingly; 1.35 V is generally considered safe for DDR4), start tightening your RAM timings in small increments of one cycle or so per timing. What you’re aiming for is a combination of timings and clock speed similar to manufacturer-tweaked overclocked modules: visit Newegg and check out the specs of a “target” overclocked module to see what’s realistic.
Make sure the timings are stable
To stability test your memory, use Memtest64. We suggest running 5 loops at a time. If you luck out, you’ll have overclocked memory with clock speeds and timings that approximate much faster factory OC’d modules. A life hack? Get yourself lower-clocked DDR4 from established names like Samsung. Even if the base price is a bit higher, these modules tend to have plenty of overclocking headroom on them. I run a pair of ADATA 2400 MHz modules and a pair of Samsung 2133 MHz modules in dual channel mode, with all four happily running at 3000 MHz.