
GTX 970 vs 780?



Upgrading from my GTX 580. Tossing up between the new 970 and the 780, which are similarly priced; but one area where the 970 (and indeed the 980) loses to the older 780 is texel fill rate: http://www.anandtech.com/show/8568/the-geforce-gtx-970-review-feat-evga/13

 

Is this likely to be a relevant factor in how fast a card is in the Max 2015 viewport? A typical scene would be no more than 5M polys.


Most likely there will be no difference in viewports - at least no meaningful difference, and probably the 970 will be a tad ahead.

Initial benchmarks suggest the 970 is almost equal to the 780, and the 980 a bit faster than the Titan Black / 780 Ti.

 

But that's for benches / games.

 

In CAD and games alike, the scene data the GPU draws on screen has to be prepared and submitted by the CPU, frame by frame.

This process is single-threaded, so having a 5960X won't be better than, say, a 4590K. On the contrary: the higher-clocked CPU with "enough" cores plus one to spare will be better.

 

I don't understand the specifics, but apparently the way the CPU offloads work to the GPU is more efficient in games and 3D benchmarks than in CAD viewports - perhaps because the precision tolerances are far lower in games than in CAD. So after you reach a certain performance level - roughly a GTX 760 - faster GPUs don't really help with complex scenes, simply because the CPU cannot process the data fast enough for the extra GPU stream processors to matter.
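To put a rough number on the "CPU can't feed the GPU" idea, here is a toy sketch in plain C++ (nothing to do with how Max's Nitrous viewport is actually written; the object count and per-object cost are made-up assumptions). One thread "prepares" every draw call, and the time that loop takes is a hard ceiling on viewport FPS no matter which card sits in the slot:

// Toy model, not real viewport code: one CPU thread has to "prepare" every
// draw call before the GPU gets anything, so the time spent here caps FPS.
#include <chrono>
#include <cstdio>

int main() {
    const int kObjects = 20000;      // hypothetical object/batch count per frame
    volatile double sink = 0.0;      // keeps the compiler from deleting the loop

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kObjects; ++i) {
        // stand-in for per-object CPU work: culling, state changes, API submission
        for (int j = 0; j < 500; ++j) sink += i * 1e-6 * j;
    }
    auto t1 = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    // If one frame's worth of CPU prep takes `ms` milliseconds, the viewport
    // cannot exceed 1000/ms fps no matter how fast the GPU is.
    std::printf("CPU prep: %.2f ms per frame -> ceiling of ~%.0f fps\n", ms, 1000.0 / ms);
    return 0;
}

The real viewport does far more per object than this, which is why past a certain point a faster GPU just ends up waiting on the CPU.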

 

In your case, I would get a 970 simply because it is newer, has lower power consumption, and has 4GB of RAM should you like to play with ActiveShade / V-Ray RT etc. For the latter, I think the Kepler GK110-based cards - like the 780 - are faster than the new Maxwell, but that's probably an optimization issue plus the fact that CUDA performance probably did not improve a lot with Maxwell.

 

What did improve massively is OpenCL performance, which now reaches R9 290 levels (nVidia used to be hands-down bad at OpenCL) and is at last pretty good.


Interesting point about the CPU bottleneck - so presumably that'd be the same even with the most expensive Quadro?

 

Re: CUDA, the Maxwells have slightly fewer CUDA cores than the Keplers, don't they?

 

Thanks for the insight though, will likely go with a 970.


Yes, but compute performance per core with the Maxwell architecture is vastly improved over Kepler, and the onboard cache per core/SM cluster is several times what it was in the Kepler GPUs.

 

Applications that were optimized to use the limited per-core resources Kepler had, spread over ~2.5K cores (like V-Ray RT after 2.4), will give the massive GK110 Keplers a slight upper hand. For generic GPGPU, Maxwell (and AMD's R9 series) is roughly twice as fast. There is no comparison.

 

[Charts: LuxMark v2.0 OpenCL benchmark results]

 

I've added my own chart to illustrate Maxwell's great compute potential with the 750 Ti, a $120 card that matches or slightly surpasses the GTX Titan in this OpenCL bench, while it trumps the GTX 670.

I did not have the whole lineup at hand to test, but it would of course beat the 780/770 as well (the 760 is slightly slower than the 670).

 

And that is with 640 Maxwell Cores.


I have a GTX 670 but I'm ordering a GTX 980 this week. Gonna be siiiick! Putting the 670 up for sale if anyone's interested!

 

I'm working a lot with Unreal Engine 4... so everything is about the GPU!

 

My PC:

Alienware X51 small form factor

Core i7 2600

GTX 980

8GB RAM (soon to become 16GB, which is the max my PC can handle, unfortunately)

128GB SSD + 1TB HDD

Win 7


That's probably true... the viewport is horribly optimized in Max 2013... I don't know about more recent versions, but I have the feeling it's not much better!

 

I get 120 fps in the Unreal Engine viewport, with my scene fully lit/textured etc. And that's with my GTX 670.

 

Going 9xx is for GPU rendering, real-time, and gaming!



Just installed 3 GTX 980s for Iray, but Max reverts back to CPU rendering. I guess Iray doesn't support the new Maxwell architecture yet. Now I am playing the waiting game until a new driver / Iray update is released. Anybody hear/see anything different?
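While you wait for the update, one thing you can do is confirm the cards are at least visible to CUDA, so you know it's an Iray support gap and not a driver problem. This is just my own quick check with the CUDA runtime API (nothing from Iray or Max); a GTX 980 should report compute capability 5.2:

// My own sanity check (plain CUDA runtime API, not part of Iray or Max):
// list the CUDA devices the driver exposes and their compute capability.
// A GTX 980 should report 5.2 (Maxwell). If the cards show up here but Iray
// still falls back to CPU, the renderer build simply doesn't know Maxwell yet.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA devices visible to the driver.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, compute capability %d.%d, %.1f GB VRAM\n",
                    i, prop.name, prop.major, prop.minor,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}

Compile it with nvcc and run it: if all three 980s show up there, the hardware and driver are fine and it really is just Iray lagging behind the new architecture.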


Just an old general rule of thumb I use for gaming graphics cards, which is what the TS is asking about, and I think it still applies:

 

1st digit refers to the release generation (the higher, the newer)

2nd digit refers to the class (performance & sophistication level) of the device (the higher the number, the higher the class)

3rd and following digits just refer to the variant.

 

E.g.

GTX 580 is an older generation than both the 970 and the 780, but in the same class as the 780.

While the 970 is lower in class than both the 580 and the 780, it is a newer release, meaning newer tech (the little snippet below just decodes the same digits).
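The decoding above is literally just digit arithmetic. Here is a throwaway snippet (my own, nothing official from nVidia) that splits a 3-digit model number the way described:

// Throwaway helper (my own, nothing official): split a 3-digit GeForce model
// number into the generation digit and the class/tier digit described above.
#include <cstdio>

void describe(int model) {
    int generation = model / 100;        // 580 -> 5, 780 -> 7, 970 -> 9
    int tier       = (model / 10) % 10;  // 580 -> 8, 780 -> 8, 970 -> 7
    std::printf("GTX %d: generation %d, class/tier %d\n", model, generation, tier);
}

int main() {
    describe(580);  // older generation, high tier
    describe(780);  // newer generation, same high tier
    describe(970);  // newest generation, one tier lower
    return 0;
}

Of course the class digit says nothing about absolute speed across generations, which is exactly why a 970 can still trade blows with a 780.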

 

Based on that, I would tend to choose the 780 if I were you. The tech is not too far outdated, and you keep the class level of the device for a reasonable price. If one is available on the market, look for a 790, for you-know-why reasons :-)


Since I'm planning to follow your choice and build an Intel-based workstation with 2 or more GTX 970s in SLI, focused on Iray rendering... I would be glad to hear if you get a chance to test, or any news about effective use of these Maxwell cards with that rendering engine.

Thanks in advance

 

Guido


http://blog.boxxtech.com/2014/11/17/geforce-gtx-rendering-benchmarks-and-comparisons/

 

Nice reference charts.

Not too closely related to the original post's question, but I think it can help! ;)

 

I dropped my 2 cents on that post in the 1st comment when it was first published.

Unfortunately it discusses V-Ray RT GPU performance and not the 3ds Max viewport, which is a completely different animal.

