Sticky: Video card buying advice

So, after all, can someone help me work out whether I should get a GTX 670 or a FirePro W5000 (for VRay 3.0 and CPU-only rendering)? Is it worth going for the FirePro, or a higher-end GTX? This is mainly about viewport performance, since I'll be getting enough CPU power to render on the CPU. I won't be doing any GPU rendering at all.

 

 

Do you already have a 670 or are you getting a new GPU regardless?

If you have a 670, stick with it. It is about as fast as a 760 (or slightly faster), and faster but more power-hungry than a 750 Ti.

If you don't have a GPU, the 750 Ti is a great card for the price. Note that the 750 is not the same card as the 750 Ti.

 

 

The W5000 won't give you a real edge in 3DS. It would be great if you were running SolidWorks and similar engineering / industrial-design CAD/CAM packages that lean on OpenGL, but 3DS, Maya etc. don't care much. The W5000 was easily a better choice than the K2000, but the new K2200 (same core as the 750 Ti, as it happens, with far better OpenGL drivers than any GTX) is easily the safe choice for an OpenGL workstation card in the $400-450 range, and it phases out the W5000 and in some cases even the K4000.


Thanks Dimitris. Since the last time I wrote, there has been a tendency in my office to look into some of these "higher end" graphics cards, not to do rendering, but to be able to visualize within minutes what the latest rendering is going to look like. So, if I understood correctly, we are looking at doing some tests with GPU rendering on a workstation with VRay 3.0 RT, and then doing the final pass on the CPU. If I understand correctly, this gives an immediate visualization of the rendering without all the extras, which I can add afterwards to the CPU rendering with VRay. I hope I make some sense, but that's kind of my workflow sometimes: being able to show my boss what the rendering will look like within a few minutes, knowing that I still have to do a final CPU rendering with elements and passes that the GPU is not able to do. Currently I use MR, so I have to show a low resolution; sometimes it takes "many minutes", and it simply does not show anything someone can make a final decision about.

 

So, based on what you wrote, the K2200 gives me good value for money for the workstations. The question is: "will it be efficient enough to do renderings in VRay RT fast enough that I can visualize a concept within minutes?" I'm not sure what the answer is; perhaps at this point I'd have to try it out for myself. But the data you give is important. Also, what are the important specifications to look for when buying a GPU for this? And at this point in the game, for my purposes, should I look at getting a Tesla instead of a K-series Quadro?

Edited by padre.ayuso

Just use "Progressive" sampler in Vray3, it works "almost" flawlessly (will work supposedly like it should in next service pack).

Has no limits and provides identical look as adaptive sampler.

 

RT is simply a lost cause. It never went anywhere and has far too many limitations.

 

None of the cards you listed is good enough for GPU rendering, for that matter. If you buy a GTX 970 with 4GB of VRAM, you will have a card powerful enough to do some GPU rendering, yet cheap enough that you won't have wasted much money if you realize GPU rendering might not be what you expected.

 

The Quadro/Tesla series are not more powerful in terms of performance; they are mostly slightly underclocked versions of the same cards as the GTXs, with unlocked features: double-precision floating point for precise scientific calculations, a different set of drivers that still benefits certain CAD applications (Quadro), a wider color pipeline (14-bit LUT, Quadro), or additional memory (NOT performance) for rendering (Quadro and Tesla with 6-12GB of VRAM).
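To put rough numbers on that double-precision point, here is a quick back-of-the-envelope sketch in Python (core counts, clocks and FP64 ratios are assumed reference values from public spec sheets, purely illustrative):

# Rough single- vs. double-precision peak estimate, in GFLOPS.
# All specs below are assumed reference values, for illustration only.
cards = {
    # name: (cuda_cores, base_clock_ghz, fp64_ratio)
    "GTX 780 (GK110)":   (2304, 0.863, 1 / 24),  # FP64 capped on GeForce
    "GTX Titan (GK110)": (2688, 0.837, 1 / 3),   # FP64 unlockable in the driver
    "Quadro K6000":      (2880, 0.900, 1 / 3),   # full-rate pro FP64
}

for name, (cores, ghz, fp64) in cards.items():
    sp = cores * ghz * 2   # 2 FLOPs per core per clock (fused multiply-add)
    dp = sp * fp64
    print(f"{name:20s} SP ~{sp:5.0f} GFLOPS   DP ~{dp:5.0f} GFLOPS")

The single-precision gap between pro and GeForce cards is small; the double-precision gap is the "unlocked feature" you pay for, and it only matters for scientific computing, not rendering.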

Edited by RyderSK

^This.

If you can work around Vray RT's limitations, you will need a fast GTX (or several of them).

The K2200 is a respectable GPGPU card for the wattage, but that is in comparison with the pathetic K2000 or the anemic K4000.

A 750 Ti will do better in VRay RT (albeit with only 2GB of VRAM), for 1/4 the cost.

 

 

If you want to add more cards, there is no need to go Tesla. Any GTX can work in tandem with a K2200, a W5000 or another GTX: have one as a dedicated video card and the other as a compute card. In fact you can add as many GPUs as you can fit in your case / on your mobo, or even distribute GPGPU jobs over the LAN - if, say, you were using them to render an animation sequence.
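If you want to script the bookkeeping of which card is which, here is a minimal sketch using NVIDIA's NVML Python bindings (it assumes the pynvml package and an NVIDIA driver are installed, and of course it only sees the NVIDIA cards, not a W5000):

# Enumerate NVIDIA GPUs so you can decide which one to dedicate to the
# display and which to GPGPU work. Requires the pynvml package.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name}, {mem.total / 2**30:.1f} GB VRAM")
finally:
    pynvml.nvmlShutdown()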

 

 

The K2200 is decent as workstation cards go, yes, and if you have to have something with the Quadro label for your employer to be happy, it's a value buy. But if you are using it for 3DS, Revit, Sketchup, AutoCAD etc., it won't do much for you. Might as well get a 750 Ti.


I have a 750 Ti in my work computer and it has been great for the cost. I even use it with Lumion, which demands a lot of GPU performance, and I have been very satisfied.

 

I just put in an order for upgrade parts for one of my home computers...Zotac GTX 970, 512GB Samsung 850 Pro SSD, i7-3770. Very excited to see how these parts perform. They should last me until DDR4 becomes the norm, and then I will do a full refresh. I was considering waiting for Skylake, but it seems Intel is targeting low power for that series, so I will likely wait beyond Skylake to build a new desktop.

 

I still have an i7 920 at home that I've had to replace the mobo on once (an accident involving iced tea); that thing is a tank, so glad I adopted that series early on!


I have an ATI FirePro V7800 (FireGL), used mostly with Rhino/3ds Max and also Revit/AutoCAD/Photoshop. It seems to be a robust card and I've had no issues with it.

 

My company is thinking of investing in Lumion or something similar. I've tested the demos on my computer and it doesn't run very well at all. I've seen the benchmarks, and the FirePro scores very low for this kind of software. What is my solution to this? Is it possible to get a dual-card machine and have one card dedicated to Lumion? Cards with high benchmark scores are not particularly expensive, but I don't want it to be at the expense of my other card and the other software I use.

 

Most GPU-accelerated software lets you choose which GPU in your workstation to use for the task, but Lumion, being a real-time engine, might by default go for the one driving the display, i.e. the main one. You might just send them an email; they will be able to answer this fairly quickly.

 

With that said, the V7800 is an older, low-to-mid-range card, and any current mid-to-top mainstream card you get will probably be better even for CAD tasks, which makes it pointless to even keep it.

 

What's "robust" about it ? The fact it's workstation grade doesn't really say much about it, neither is "no issue", which any non-faulty graphic card should be. No reason to be attached to it just because it's former "pro" card.


Again, good points by Juraj.

There is nothing "magic" about workstation cards other than better-crafted drivers. These drivers contain optimizations for certain generations of certain software and/or graphics engines.

 

 

Older pieces of hardware - GPUs in this case - are still covered by driver packages, but that doesn't mean developers update what is contained in those packages for each card individually. E.g. a new Maya comes out with a new graphics engine mode: don't bet that the driver you've just downloaded for your V7800 is anything more than a repackaged / rewarmed version of what was already there with a tweak or two, with all the real optimization work - if any - having gone into the current Wxxxx series only.

 

 

Bearing the same brand name doesn't mean it is the same architecture or gets the same work, so it is natural for 4-5 year old products like the V7800 to be neglected before being officially abandoned. But even if we were talking about a newer GPU, I doubt Lumion favors "drivers" over the raw fill rate / compute that is simply not there in a 4-year-old, single-slot workstation card optimized for OpenGL viewports. A GTX 750 will match it at half the power draw; a 970 will embarrass it.

 

 

Of course most GPU-related things are independent of CPU power up to a certain point: if you have a 4-5 year old CPU prepping frames or compute work for a 3-4 year old GPU, you are in a different ballpark than a new system trying to do certain intensive things, while at the same time you might work just fine with older software suites that carry tech that is 1-2 generations old - like most CAD software, for example.


I'm in no way attached to it, and if the recommendation is to buy a mid-to-top end card, I will speak to my boss about it. What mid-range cards would you recommend?

 

What are people using to run 3ds Max/Lumion within a budget of £300-500?

 

At the lower end of that spectrum (~300) it's the GTX 970 with 4GB of VRAM. At the higher end (>500), somewhere around December the GTX 980 will come out with 8GB of VRAM (currently there is only a 4GB version, so it only offers a performance benefit over the 970). It will be a monster GPU, as it will let you work with quite complex scenes without the possibility of hitting the ceiling of what can fit into memory.
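To make that memory ceiling concrete, here is a back-of-the-envelope VRAM estimate in Python (all the counts are made-up example values; the point is just the arithmetic):

# Rough VRAM footprint estimate for a GPU-rendered scene.
num_4k_textures = 60          # 4096x4096, RGBA 8-bit, no mipmaps
tri_count = 30_000_000        # unique triangles in the scene
bytes_per_tri = 3 * 12 + 12   # three float3 positions plus a face normal

texture_bytes = num_4k_textures * 4096 * 4096 * 4
geometry_bytes = tri_count * bytes_per_tri
total_gb = (texture_bytes + geometry_bytes) / 2**30

print(f"textures ~{texture_bytes / 2**30:.1f} GB, "
      f"geometry ~{geometry_bytes / 2**30:.1f} GB, total ~{total_gb:.1f} GB")
# ~3.8 GB of textures + ~1.3 GB of geometry: already past a 4GB card,
# comfortably inside 8GB.

That is why the extra buffer matters far more than extra cores for GPU rendering: a scene that doesn't fit in VRAM typically doesn't render on the GPU at all.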


The PC boards on the 9xx cards already have spots for double the RAM chips to be soldered on - including the 970s & 980s.

 

 

Chances are that an 8GB version of the 970 will be out soon after - if not at the same time as - the 8GB 980, much as was the case with the 6xx and 7xx cards (reference 2GB GK104 cards shared boards with the reference 4GB cards, and the reference 780 shared its board with the Titan).

 

 

8GB is overkill for anything viewport-related, and it is nearly impossible for a single GPU to make use of that much at once over a 256-bit bus (which is what the 980/970 use).

It becomes relevant with Tri-SLI or similar setups and large-resolution or multi-monitor configurations - and always for gaming.
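The bus arithmetic behind that claim is simple enough to check in a few lines of Python (the 7 Gbps effective GDDR5 rate is the assumed reference spec for the 970/980):

# Peak bandwidth of a 256-bit bus vs. an 8 GB buffer.
bus_width_bits = 256
effective_rate_gbps = 7.0                                  # GDDR5, per pin
bandwidth_gbs = bus_width_bits / 8 * effective_rate_gbps   # ~224 GB/s

vram_gb = 8
full_pass_ms = vram_gb / bandwidth_gbs * 1000
print(f"peak bandwidth ~{bandwidth_gbs:.0f} GB/s; "
      f"one full pass over {vram_gb} GB takes ~{full_pass_ms:.0f} ms")

That is ~36 ms just to touch every byte once, i.e. the GPU can sweep the full 8GB only about 28 times per second - fine for parking big, rarely-touched scene data, useless for feeding the shaders every frame.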

 

 

The only programs that can probably push that much data onto a GPU are GPU rendering engines, but I don't know whether Lumion has been designed to exploit that much RAM. It remains to be seen. Soon™.


I have no issues working with Kepler cards & Revit, and I see no reason for the Maxwell line to have issues either.

In my current HP Z workstation I have a K2000, and I dare say that compared to the GTX 660 2GB-powered machine I had right before it, I see no improvement graphics-wise. I dare say Sketchup worked a tad better with the 660, but Revit doesn't care whatsoever.

 

 

Of course I am working on pretty beefy Revit files, and the whole process is more CPU-limited than anything else, but I did not feel hindered by the 660, and I bet a 970 would be the same if not better.

 

 

Again, I think there is a "myth" around GPUs and software being so critically paired and yada yada, but with Autodesk D3D software (like Revit) I don't see any "cutting edge" updates pushing performance forward. At best you can hope for better adaptive degradation algorithms and some work going into 3DS, but the ACAD & Revit engines appear too outdated to take advantage of the real potential of modern GPUs, and driver optimization appears to be a placebo.

 

 

As for the certified & recommended hardware list for Autodesk products...let's say it is outdated and not compiled by an independent / unbiased entity. The list contains only workstation cards by default (FirePros & Quadros), but I would think that if a 2010 low-end Quadro with 512MB of RAM cuts it as "recommended", a GTX 9xx can pull it off.

 

 

After all, the real hardware requirement is DX11 support and nothing more, and it doesn't even change for the "recommended high performance" system spec for Revit 2015. So yeah, I would take the certified / recommended list with a grain of salt.


Thanks for the advice; now I just need to convince my IT manager to take that list with a grain of salt!

 

 

Then just make this simple argument: Lumion and other GPU-accelerated apps care about GFLOPS of compute powa. (Single precision works fine; no need to sweat double precision.)

 

 

GFLOPS

K5000: 2100

K4200: 2100

K5200: 3000

GTX 970: 3500

GTX 780: 4000

GTX 980: 4600

K6000, 780Ti, Titan Black: 5200 ***
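Those figures aren't magic, by the way: theoretical single-precision GFLOPS is just cores x clock x 2 (one fused multiply-add per core per clock). A quick sanity check in Python, with assumed reference base clocks, so expect small deviations from the list above:

# Sanity check of theoretical SP GFLOPS: cores * GHz * 2 (one FMA = 2 FLOPs).
for name, cores, ghz in [
    ("GTX 970",     1664, 1.050),
    ("GTX 780",     2304, 0.863),
    ("GTX 980",     2048, 1.126),
    ("Titan Black", 2880, 0.889),
]:
    print(f"{name:12s} ~{cores * ghz * 2:4.0f} GFLOPS")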

 

 

Don't sweat the numbers or the ***. Tell him that you would "need" at least a K5200 Quadro to get even close to what a 970 will do.

Then leave out the GFLOPS and the CUDA cores and the rest of the crap, and let him ask for & justify the $$$ to the guys paying instead.

 

But if you go for $ & the "compatibility list" as your justification, you will end up with a K2000 as I did (thanks to our IT guys - not that a 9xx was an option for a factory-configured HP workstation).

 

 

*** Of course those are theoretical max rates. In reality the Maxwell architecture (the GTX 9xx cards, and the K2200 on the Quadro side) does better in real life than Kepler cards with nominally faster GPUs, because it has more "cache" that allows more operations to happen within the GPU. Kepler has more CUDA cores per cluster unit, so it has a higher theoretical throughput that doesn't translate to linear gains in real life - unless we are talking about really small computational units.

 

 

Thus a 980 with 2048 Maxwell cores does better than a Titan Black with 2880 Kepler cores.

But the extra cache comes at a cost: more die space. The GM204 on the 980 is ~400mm2 in area, the GK104 (680/770/K5000) was ~300mm2, and the GK110 (Titan / K6000) is ~550mm2. We know it is increasingly hard to go past the GK110's size, which means the "big Maxwell" chip - say a Titan 2 - would be in the region of 2500 cores (roughly 25% more) at ~550mm2, while the GK110 had 87% more cores than the GK104.

Edited by dtolios

  • 3 weeks later...

Regarding 3DS Max viewport performance for architectural visualization: is there any card right now worth getting over the GTX 750 Ti in the price range up to ~220 euros? I'm talking about single buildings (low- and high-rise) with a decent amount of vegetation, or interior design with a few light sources and the like. No GPU rendering, pure viewport performance.


At the lower end of that spectrum (~300) it's the GTX 970 with 4GB of VRAM. At the higher end (>500), somewhere around December the GTX 980 will come out with 8GB of VRAM (currently there is only a 4GB version, so it only offers a performance benefit over the 970). It will be a monster GPU, as it will let you work with quite complex scenes without the possibility of hitting the ceiling of what can fit into memory.

 

I wanted to buy a normal GTX 980, but since I've heard an 8GB version is coming, I'll have to wait. Even 6GB would be fine for me. I want to use Octane Render more seriously, and this card is going to be nice. I hope to be able to rely on Octane Cloud (or whatever version works on X.IO) for the most intensive tasks in the VERY near future. Fingers crossed!


Because so many were wondering (sarcasm), I ended up getting an Asus Strix GTX 970 4GB this week.

I was expecting the 9xx series to be good, so I had sold my GTX Titan (not the Black) for pretty much full retail, spending the last few months with a GTX 750 Ti that I had kept as a backup since it first came out.

 

 

I didn't know if the 8GB version would be worth it for what I am doing. I wasn't using the 6GB on my Titan for GPGPU, so I figured I wasn't risking much, and I'd keep $80-100 in the bank (the minimum premium I expect the 8GB version to carry over the 4GB - that's what it was for the 4GB 760/770/670/680 over the 2GB versions, and it could be more for the 9xx series, which doubles the buffer).

 

 

It appears to be pretty solid.

The build quality of the Asus board is pretty convincing. The fan shroud is all aluminum and very solid, inspiring quality. The card is big, but it does fit behind my push-pull front AX360 rad, so it worked for me.

The card is virtually silent - no coil whine (yet) - and the fans don't even spin for regular 3D viewport work and mild gaming. I don't even know if I should watercool it as I was initially planning :p

 

 

My only regret in letting the Titan go is that I did not get to test SPECapc for 3ds Max 2015 with it...

I will soon add a page to my lil' blog for those results...it won't be as thorough as the Maya one, as most of the GPUs used are no longer in my possession.

For sure you will see the K4000 / K2000 / V7900 / W5000 / 970 / 750 Ti.

Edited by dtolios

  • 1 month later...
  • 2 months later...
I don't know what you mean by 2x4, but yes, it fits; it depends a lot on the case though.

Sorry man, it was part of the full name of the motherboard; I mustn't have copied it properly...

Asus X99-A, M.2 x4, 1 x SATA Express port, 8 x SATA 6Gb/s ports, 6 x USB 3.0, ATX

