Difference between memory and core clock speed

According to Google’s English dictionary (provided by Oxford Languages, the publisher of the Oxford English Dictionary), clock speed is the operating speed of a computer or its microprocessor, expressed in cycles per second (megahertz). Most modern CPUs, GPUs, and RAM chips have their clock speeds expressed in gigahertz instead of megahertz. And this is relatively easy to understand, at least when it comes to CPUs and GPUs: nowadays, you have a base clock speed and a boost clock speed (Intel CPUs have both single-core and all-core boost speeds, but that’s a story for another time), both shown in gigahertz, and that’s it.

Expressing memory clock speed is a bit more complex. When it comes to RAM, most modern computers use DDR, or double data rate, memory. DDR memory transfers data twice per clock cycle, once on the rising edge and once on the falling edge of the clock signal. This means that the effective RAM speed is twice its raw frequency, which is why you see DDR4 memory labeled as “2133,” “3200,” or “3600,” even though its raw frequency is 1066MHz, 1600MHz, and 1800MHz, respectively. Just open CPU-Z and click on the “Memory” tab, and you’ll see the raw frequency of your memory, not the effective clock speed.
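The raw-to-effective relationship can be sketched in a few lines (a minimal illustration of the doubling described above; note the DDR4-2133 label rounds a raw clock of roughly 1066.5MHz):

```python
# DDR effective speed is the raw memory frequency doubled, since data moves
# on both the rising and falling clock edges. Raw frequencies below are the
# DDR4 examples from the text.
raw_frequencies_mhz = [1066, 1600, 1800]

for raw in raw_frequencies_mhz:
    effective = raw * 2  # double data rate: two transfers per clock cycle
    print(f"{raw} MHz raw -> {effective} MT/s effective")
```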

Now, let’s talk about graphics cards. They feature two different clock speeds: the core clock and the memory clock.

GPU (Core) clock speed

Each graphics card has both a GPU chip and the memory chips that store video data, and each of these has its own independent clock speed. The core clock, or GPU clock speed, is the frequency of the GPU chip, and this value is easy to understand: it’s simply the frequency at which the GPU runs.

Modern GPUs, similar to modern CPUs, have multiple clock speeds depending on the workload. Nvidia GPUs have base and boost clock speeds. The boost clock is the one to pay attention to, since this is the GPU frequency under load; in other words, the frequency when playing games or running graphically intensive workloads (e.g., rendering). In the case of the RTX 3080, the reference boost clock is 1710MHz. This is the boost clock the GPU should at least achieve under load, and in real-world tests, the boost clock of the RTX 3080 Founders Edition sits at about 1920-1950MHz.

AMD counterparts, on the other hand, have three different clock speeds listed: base, game, and boost clock. Of the three, you should be interested in the boost clock, which, according to AMD, is the maximum GPU frequency the chip can achieve under load. For the RX 6800 XT, the boost clock is set at 2250MHz, which, similarly to the RTX 3080, is surpassed in real-world tests. And that’s all there is to know about the GPU or core clock speed: it’s simply the frequency of the GPU when under load.

Memory clock speed

Now, the memory clock speed is the frequency of the video memory found on every graphics card. But just looking at the frequency doesn’t show you the whole picture. For instance, the memory clock of the RTX 3070 is set at 1750MHz, while the RTX 3080 has its memory clock set at 1188MHz. Yet the memory on the RTX 3080 is noticeably faster than the memory used on the RTX 3070, even though its base clock is lower. This is because the RTX 3080 uses newer GDDR6X memory, while the RTX 3070 is “only” using GDDR6 memory.

GDDR6X memory is faster than GDDR6 even though it runs at a lower frequency because it has a higher data rate (the speed at which data is transferred within the computer or between a peripheral device and the computer, according to PCMag). For instance, the GDDR6X used on the RTX 3080 has a data rate of 19Gbps (gigabits per second), while the GDDR6 used in the RTX 3070 has a data rate of 14Gbps.

The data rate is also called the effective memory clock, and this is the memory spec you should be interested in, not the base memory clock. But how can one type of memory achieve a higher data rate while running at a lower frequency? The science behind it is too complex to explain in detail here, but the gist is that it all boils down to memory architecture.

For instance, the GDDR5 memory used in graphics cards such as the GTX 1060 and the GTX 1660 is double data rate, the same as DDR system RAM. But GDDR5X (used, for instance, on the GTX 1080 Ti), GDDR6, and GDDR6X are all quad data rate. This means they can push four bits of data during one clock cycle instead of just two like GDDR5, and they do this by sending four signals per clock cycle instead of two. The final advantage GDDR6X has over GDDR6, the one that lets it reach a higher data rate at lower frequencies, is something called PAM4.

PAM4 is a new kind of signaling used exclusively in GDDR6X memory. Instead of the two-level signaling used in older video memory architectures (GDDR5, GDDR5X, GDDR6, etc.), PAM4 uses four signal levels, so one PAM4 signal can carry two bits of data instead of just one. This means that, when sending the four signals mentioned earlier during one clock cycle, GDDR6X memory can transfer eight bits of data instead of four. This is what allows GDDR6X to be faster than GDDR6 even though it runs at lower frequencies.
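The per-cycle differences described above can be summarized in a short sketch (a simplified model of this section’s explanation; it leaves out the SGRAM factor covered in the calculation section, and the dictionary layout is our own):

```python
# Bits each memory type moves per clock cycle on one data line, following
# the model above: signals per cycle × bits carried per signal.
memory_types = {
    "GDDR5":  {"signals_per_cycle": 2, "bits_per_signal": 1},  # double data rate
    "GDDR5X": {"signals_per_cycle": 4, "bits_per_signal": 1},  # quad data rate
    "GDDR6":  {"signals_per_cycle": 4, "bits_per_signal": 1},  # quad data rate
    "GDDR6X": {"signals_per_cycle": 4, "bits_per_signal": 2},  # quad data rate + PAM4
}

for name, spec in memory_types.items():
    bits_per_cycle = spec["signals_per_cycle"] * spec["bits_per_signal"]
    print(f"{name}: {bits_per_cycle} bits per clock cycle")
```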

So, when looking at the memory clock speed of a graphics card, do not focus on the base memory clock/base frequency. Focus on the effective memory clock, which is usually expressed in Gbps, or gigabits per second. The great thing is that modern graphics cards all have their memory speed shown in Gbps instead of megahertz or gigahertz. As we already said, the RTX 3080 has an effective memory clock of 19Gbps, while the RTX 3070 has an effective memory clock of 14Gbps. The RX 6800 XT, for instance, has an effective memory clock of 16Gbps.

Last but not least, we have the memory bus, which represents the number of lanes the graphics card memory has at its disposal for transferring data. The RTX 3080 has a 320-bit memory bus, while the RTX 3070 has a 256-bit memory bus. In other words, the RTX 3080’s memory has 320 lanes for transferring video data, while the RTX 3070’s has 256. The memory bus width is needed for calculating the maximum memory bandwidth, but it’s not essential when discussing the effective memory clock.

How to calculate effective memory clock and memory bandwidth

Now that we’ve explained the effective memory clock, let’s show you how to calculate it. Below, we’ll calculate the effective memory clock for GDDR5, GDDR5X, GDDR6, and GDDR6X.

Calculating effective memory clock of GDDR5 memory

Let’s calculate the effective memory clock of the GTX 1060 (2002MHz frequency, 8Gbps effective clock). As we already said, GDDR5 memory is double data rate, so we should multiply the base frequency by 2 (two bits of data per clock cycle) and then divide the result by 1000 to get gigabits instead of megabits. But doing that only gives us about 4Gbps (2002MHz × 2 / 1000 ≈ 4, not 8). Why? Because the memory used in graphics cards is SGRAM (synchronous graphics RAM), a specialized form of SDRAM.

While SGRAM is single-ported (it supports sequential read and write operations), like SDRAM, it can keep two memory pages open at once, thus simulating dual-ported memory. In other words, SGRAM can write two bits of data simultaneously instead of just one like regular SDRAM. This means we should multiply the base clock by 2 and then by 2 again. If we do this, the result is about 8Gbps (2002MHz × 2 × 2 / 1000 ≈ 8), which is the correct result.

Calculating effective memory clock of GDDR5X and GDDR6 memory

Now, let’s repeat the process for GDDR5X and GDDR6 memory. For GDDR5X, we’re using the GTX 1080 Ti (1375MHz, 11Gbps effective clock). Remember, GDDR5X memory is quad data rate, so we multiply the base clock by four instead of two, and then by two again for the SGRAM dual-page factor:

1375MHz × 4 × 2 / 1000 = 11

For the GDDR6 effective memory clock calculation, we’re using the RTX 3070 (1750MHz, 14Gbps effective clock):

1750MHz × 4 × 2 / 1000 = 14

Calculating effective memory clock of GDDR6X memory

Now, let’s calculate the GDDR6X effective clock using the RTX 3080 (1188MHz, 19Gbps effective clock). Remember, GDDR6X memory is quad data rate and uses PAM4 signaling, which means two bits per signal. In other words, we multiply the base frequency by four (quad data rate), then by two (the SGRAM factor), and then by two again (PAM4) to get the effective memory clock:

1188MHz × 4 × 2 × 2 / 1000 ≈ 19
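All of these calculations follow the same pattern, so they can be wrapped in one small helper (a sketch of the model used above; the function name and parameter defaults are our own):

```python
def effective_clock_gbps(base_mhz, signals_per_cycle, bits_per_signal=1):
    """Effective memory clock in Gbps, per the model above:
    base frequency × signals per cycle × SGRAM dual-page factor (2)
    × bits per signal (2 for PAM4, otherwise 1), converted to gigabits."""
    sgram_factor = 2
    return base_mhz * signals_per_cycle * sgram_factor * bits_per_signal / 1000

print(effective_clock_gbps(2002, 2))                     # GTX 1060, GDDR5     -> ~8
print(effective_clock_gbps(1375, 4))                     # GTX 1080 Ti, GDDR5X -> 11
print(effective_clock_gbps(1750, 4))                     # RTX 3070, GDDR6     -> 14
print(effective_clock_gbps(1188, 4, bits_per_signal=2))  # RTX 3080, GDDR6X    -> ~19
```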

Remember that by overclocking the video memory, you’re also increasing its effective memory clock. Furthermore, you can undervolt your graphics card and overclock the memory at the same time to get lower temperatures and slightly higher performance.

Calculating RTX 3080 memory bandwidth

Last but not least, let’s calculate the memory bandwidth of the RTX 3080. This isn’t as important as the effective memory clock, but it’s good to know how to do it. We multiply the effective memory clock (19Gbps) by the card’s bus width (320 bits) and then divide the result by eight, since memory bandwidth is expressed in gigabytes rather than gigabits, and there are eight bits in a byte:

19Gbps × 320 / 8 = 760 gigabytes per second (760GB/s)
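As a quick check, the same arithmetic in code (a sketch; the figures are the RTX 3080 numbers used above, and the function name is our own):

```python
def memory_bandwidth_gb_s(effective_clock_gbps, bus_width_bits):
    """Bandwidth in GB/s: effective clock (Gbps per pin) × bus width (pins)
    ÷ 8 bits per byte."""
    return effective_clock_gbps * bus_width_bits / 8

print(memory_bandwidth_gb_s(19, 320))  # RTX 3080 -> 760.0 GB/s
```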