What is the difference between core clock and shader clock?
Thread starter: MaxAwesome · Start date: Feb 16

Hey guys, I have an ASUS GT (the one with the Glaciator cooler) and I have it clocked at:

Core: … MHz (stock … MHz)
Shader: … MHz (stock … MHz)
Mem: … MHz / … MHz effective (stock … MHz / … MHz effective)

This is without a doubt a great overclock, especially for the memory.
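A quick note on the "effective" memory figure: GDDR3 transfers data on both clock edges, so the advertised effective rate is twice the real memory clock. A minimal sketch of that arithmetic, where the 900 MHz value is an assumed example (the thread's actual clocks are not preserved above):

```python
def effective_memory_clock(real_mhz, transfers_per_cycle=2):
    """Return the effective (marketing) rate for DDR-style memory.

    GDDR3 moves data twice per clock cycle, so the effective clock
    is simply the real clock times the transfers per cycle.
    """
    return real_mhz * transfers_per_cycle

# Assumed example value, not from the thread:
print(effective_memory_clock(900))  # 1800 (MHz effective)
```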

Not bad at all :lol: That's above Ultra performance right there. But now this brings me to my question: when overclocking, which of these clock domains will yield more performance?

Is it the core clock?

Shader "cores" are processing units that handle complex tasks like lighting.

The shader domain does not take over from the GPU core, but the core can place instructions on the shaders, and increasing the shader clock makes those functions run faster. The two clocks are completely separate from each other: increasing one by a certain amount will not correspond to the same performance increase as raising the other, and either one can hit instability earlier or later depending on the card.
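To put rough numbers on what the shader clock governs, here is an illustrative peak-throughput estimate. The figures below (112 stream processors, a 1500 MHz shader clock, and 3 floating-point ops per SP per cycle, as NVIDIA counted for G80/G92-era parts) are assumptions for the sake of the example, not values taken from this thread:

```python
def peak_gflops(shader_clock_mhz, stream_processors, flops_per_clock=3):
    """Estimate theoretical peak GFLOPS from the shader-domain clock.

    Peak arithmetic rate scales linearly with the shader clock,
    which is why shader overclocks raise raw compute throughput.
    """
    return shader_clock_mhz * 1e6 * stream_processors * flops_per_clock / 1e9

# Assumed example card: 112 SPs at a 1500 MHz shader clock.
print(round(peak_gflops(1500, 112)))  # 504
```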

Increasing both will have a greater overall effect. Different cards overclock differently, even within the same model, so your question is, therefore, subjective.

I understand. I left them linked (locked), and currently they're both maxed out, haha, but I still don't know the difference functionally, or whether I could lower one.

The core clock is the frequency at which the graphics processor runs; I believe it instructs what the shaders etc. do. The shader clock is the frequency at which the shaders themselves work.

So I am guessing that increasing the core clock means the shaders will be given instructions more frequently, and so spend less time waiting?

Increasing the shader clock should increase the speed of rendering directly, though in some cases increasing the core clock can have the same effect. This is one setting you will have to play with to find the right combination of core vs. shader clock. Now, since the tool you're using seems to use different terminology, here's the translation for core clock and shader clock taken from geforce.com: the processor clock refers to the shader-domain frequency, which governs the speed of the special-function units and stream processors (otherwise known as shader cores or CUDA cores) within a streaming multiprocessor.
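The "play with the combination" advice can be illustrated with a toy bottleneck model. This model and all its numbers are my own assumption, not anything from the thread: frame throughput is limited by whichever domain runs out of cycles first, so raising only one clock eventually stops helping.

```python
def throughput(core_mhz, shader_mhz, core_work=1.0, shader_work=2.4):
    """Toy model: arbitrary 'work' is cycles each domain needs per
    frame unit; the slower domain sets the overall frame rate."""
    return min(core_mhz / core_work, shader_mhz / shader_work)

base = throughput(600, 1500)         # -> 600.0  (core-bound)
core_oc = throughput(660, 1500)      # -> 625.0  (now shader-bound)
shader_only = throughput(600, 1700)  # -> 600.0  (no gain: still core-bound)
```

Note how overclocking the shader domain alone does nothing here because the core is the bottleneck, while a core overclock helps only until the shader domain becomes the new limit.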

As for which setting to touch to best alleviate the heat issues, I can't give you a definitive answer, but lowering both clocks by the same proportion seems a sensible start. This way you likely wouldn't produce a bottleneck in the graphics system, keeping performance at a good level while still decreasing heat generation significantly.
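One way to cut heat without upsetting the balance between domains is to lower every clock by the same fraction. A minimal sketch of that idea, where the stock clocks and the 10% reduction are assumed example values:

```python
def underclock(clocks_mhz, fraction=0.10):
    """Lower every clock domain by the same fraction, preserving
    the core/shader/memory ratio to avoid creating a bottleneck."""
    return {name: round(mhz * (1 - fraction)) for name, mhz in clocks_mhz.items()}

# Assumed example stock clocks, not from the thread:
print(underclock({"core": 600, "shader": 1500, "memory": 900}))
# {'core': 540, 'shader': 1350, 'memory': 810}
```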

If it is possible to tinker with the settings without needing to reboot for them to take effect (I have no first-hand experience of this), you might also just try changing one of the clocks at a time and see how it affects performance and temperatures. There are also plenty of overclocking guides, and while you're aiming for the opposite, some of them might contain relevant information on how the different parameters affect graphics performance.



