
Would a Quadro 2000 1GB be better than a GTX 670?



Hello everyone,

 

For the past 2 months I have been planning to get a GPU, as I have a very crappy 9500GT which lacks a lot.

 

My comp specs:

 

i7 2nd Gen 3.4GHz at stock

10GB OCZ 1600MHz DDR3 RAM

ASUS Maximus IV Extreme mobo

Nvidia 9500GT 1GB

NZXT Lexa S case with 4 fans.

 

My questions are:

1) Which one would be better for GPU rendering, given that the 670 has more CUDA cores?

2) I do have some passion for gaming; if I opt for the Quadro, would it serve me there too?

3) While working with the Maxwell Render plugin for Max, pressing M for the MXM library lags a lot. Is this a problem with the GPU?

 

Awaiting your replies :)


1) The Quadro 2000 is, in general, a very weak GPU compared to the GeForce GTX line. The GTX 670 will definitely beat it in GPU rendering, just as a GTX 560 / 560 Ti / 570 / 580 etc. would.

 

I have never actually worked with the 2000, so I don't know how it performs in viewport acceleration.

I have an FX 1700 at work, and it is mediocre, but it is a much older card. I doubt you will have much of an issue getting good viewport acceleration with a GTX 560 Ti 2GB, and a 670 2GB will only make it better.

 

The 670 has several times the CUDA cores the Quadro 2000 has, but those are not directly comparable, as the Kepler (6xx) cards use a completely different CUDA core architecture. As an example, the GTX 680 - the current flagship based on the Kepler architecture - cannot outperform the GTX 580 (Fermi architecture) in GPU rendering, despite having 3x the CUDA cores and the same memory bandwidth.

 

The Quadro 2000, though, with 7x fewer cores and much, much slower clock and RAM speeds than the 670, is no match - at least for brute-force calculations.

In viewport acceleration they might be comparable due to driver optimizations for the Quadro, but low-end Quadros are being, and should be, gradually phased out, as they no longer deliver what they promise over the much cheaper mainstream cards.
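
To put rough numbers on why the core counts are not directly comparable: theoretical FP32 throughput is roughly 2 ops per core per clock x cores x shader clock. A minimal sketch in Python, using nVidia's published core counts and clocks (quoted from memory, so treat them as approximate):

# Theoretical FP32 GFLOPS = 2 (FMA) x CUDA cores x shader clock in GHz.
# Core counts / clocks are nVidia's published figures, quoted from memory.
cards = {
    "GTX 580 (Fermi)":     (512,  1.544),  # Fermi shaders run at the "hot" clock
    "GTX 680 (Kepler)":    (1536, 1.006),  # Kepler shaders run at the base clock
    "GTX 670 (Kepler)":    (1344, 0.915),
    "Quadro 2000 (Fermi)": (192,  1.250),
}
for name, (cores, clock_ghz) in cards.items():
    print(f"{name:21s} {cores:4d} cores  ~{2 * cores * clock_ghz:5.0f} GFLOPS")

On paper the 680 is roughly double a 580 and the 670 roughly five times a Quadro 2000, yet as said above the GPU renderers do not (yet) turn that paper advantage into real speed.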

 

 

2) The Quadro 2000 is pretty slow for modern games. It should be similar to a GT 550 1GB or something like that in the GeForce world.

Depending on your gaming habits, a GTX 560 Ti 2GB would work fine for both gaming and light GPU rendering, and it is $200-220, half the price of a 670.

If you go up to $300+ for a 570, especially the 2.5GB version, you will get equal or better GPU rendering vs. a 670, and no noticeable difference in single-monitor 1080/1200p gaming (unless you go for 120Hz, i.e. more than a 60 fps minimum).

 

3) I don't think Maxwell Render is GPU accelerated. The PC struggles to generate previews of all the materials etc. when you open your library, which is why it slows down / lags a lot.

 

Save some money by going with the 5xx series, and get a decent cooler so you can overclock your CPU a tad... by 1GHz maybe, which is not that hard with a 2600K and air cooling.


It's true, Maxwell is strictly CPU, even Fire. Fire just appears to act like a GPU realtime renderer because it's optimized to "do the easy calculations first" and give you an image quickly.

 

Dimitris, would you agree that it's generally true that Quadro products are undesirable for gaming? I've always heard the sentiment that they are "accurate, not fast." We have Quadro 6000s at my studio and they're absolutely beastly, but I've obviously never thrown gaming at them.


Well, this is a table I've assembled based on info floating around...

As you can see, the texture processing power of the Quadros is nothing that great, so I wouldn't expect any of them - even the Quadro 6000 - to be a good gaming card.

The GF prefix in core code names is for Fermi, and GK is for Kepler.

 

[Image: nVidia_Comparison.jpg]
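
To give a sense of scale for the texture-throughput point, here is a quick back-of-the-envelope sketch (texture fill rate is roughly TMUs x core clock; the TMU counts and clocks are spec-sheet figures quoted from memory, so double-check them):

# Texture fill rate (GTexels/s) ~= TMUs x core clock (GHz); spec figures from memory.
for name, tmus, clock_ghz in [("Quadro 6000 (Fermi)", 56, 0.574),
                              ("GTX 580 (Fermi)",     64, 0.772),
                              ("GTX 680 (Kepler)",   128, 1.006)]:
    print(f"{name:20s} ~{tmus * clock_ghz:6.1f} GTexels/s")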

 

Keep in mind, though, that gaming cards today are expected to drive multiple monitors at the same time, using medium/small textures and relatively simple geometry, but with FSAA, complex filtering and so on - the complete opposite of a creative application's workflow, where viewports rarely cover more than one monitor (or a portion of one), yet we occasionally have extremely complicated models, many huge textures, etc.

 

Thus gaming cards are obviously geared towards powerful GPUs, while workstation cards are armed with more VRAM and rely on software and driver optimizations for their specific role. The driver portion has been tested in the past, when some Quadros were 100% identical to mainstream GeForces and users could soft-mod or hard-mod them (remove or add a capacitor on the board) to turn a GeForce into a Quadro, fooling the driver and gaining 3D/CAD performance while losing some D3D gaming performance.

 

In the chart you will see some popular nVidia choices. I am not responsible for the numbers, which, by the way, are theoretical output based on the architecture layout and clocks, not measured performance. They are the numbers nVidia claims in its brochures and on its site.

 

You can see the amazing progress within less than 3 years: the increase in GFLOP output per watt spent is insane with the Kepler cards. So is theoretical GFLOP output in general, but unfortunately the new architecture translates directly into gaming performance gains only. Computational tasks like GPU rendering still have to be optimized for the new architecture. I have no idea how well the new Kepler GTX cards fold proteins etc. either.
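
As a quick illustration of that per-watt jump, using the same theoretical GFLOPS figures and the published TDPs (244W for the GTX 580, 195W for the GTX 680, again quoted from memory):

# Theoretical GFLOPS per watt of board TDP (approximate figures).
for name, gflops, tdp_w in [("GTX 580 (Fermi)", 1581, 244),
                            ("GTX 680 (Kepler)", 3090, 195)]:
    print(f"{name:17s} ~{gflops / tdp_w:4.1f} GFLOPS per watt")

Roughly 6.5 vs. 15.8, so more than double on paper.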

 

I did not include Teslas other than the K20, which was announced with "big Kepler", the GK110 chip that is still outside the GTX line and perhaps will never be added to it. The C2050 etc. Fermi-generation Teslas are identical to the GTX 580 in most numbers (same GF110 / 512 cores), which proves that the GTX cores were far from "inferior" to the Quadro ones - simply different. Quality control does play a role of course, and many of the cards in both lines are exactly that: products of QC. The best chips off the GK104 line, for example, might become GTX 680s and 690s; some with slight defects might get a few clusters disabled and instead become 680Ms and 670s. The lower-yield chips become 660 Tis or something like that, and the unusable ones are thrown away.

 

Maybe - maybe - the very best chips out of the foundry become Teslas / Quadros (when comparable) and the less perfect ones become GTXes etc. But if it weren't for the extra VRAM or other unique features of the workstation line, I doubt they would be worth buying on the "reliability" argument alone, as the price difference is so big you could replace a GTX 3-4 times over its lifetime and still get better value for money out of it. That is why the GTX 580 3GB was, and still is, so popular: it still has the best applied computational potential and a decent buffer. The raw potential of the Kepler cards - yet to be unleashed in VRay RT at least - is also great, and you can get a 4GB GTX 670 for less than what a 3GB 580 cost only 3 months ago. A VRay 2.x update/patch that fully utilized Keplers in RT GPU would simply be great. Today a GTX 680, with 2x the theoretical GFLOPs of a GTX 580, is actually slower.


Wow, fantastic in-depth answer.

 

The price jump from any other card to the Quadro 6000 has always struck me as pure insanity, and I think that from a cost/benefit perspective most people would obviously agree. However, unlike most companies, mine just seems to tick whatever the highest-end choice is when configuring a new computer, so who am I to complain?

 

I have noticed eerily good stability. We run 200mil+ poly models through a variety of processing/CG/CAD software and I can honestly report no crashes. However, who's to say that another card wouldn't perform just as well? The lesser Quadros don't interest me but I'd be really curious to throw a gaming card in here one day.

 

The only software to give us headaches has, oddly, been Cinema 4D, with rather severe OpenGL problems. Strange for an app with a time-tested reputation for stability. PNY was quick to offer support, though.

 

3) While working with the Maxwell Render plugin for Max, pressing M for the MXM library lags a lot. Is this a problem with the GPU?

Anwar, I have this problem in Max too. It's not a GPU problem; it's that it tries to refresh every single thumbnail every time you bring up the material library. There are settings in the Maxwell plugin, as well as in the Max module, for changing or turning off the thumbnail refresh.
