
Desktop Workstation Build - Workstation GPU or Gaming?



I have been rendering for years with Vray for Rhino on my laptop. My school, RPI, used to give us VPN access for distributed rendering, but the administration no longer maintains the system, so the only way to render is without textures (why render at all?). I have recently started rendering with Mental Ray in 3ds Max, which will keep working for several years after college thanks to the 3-year student key.

 

So I'm thinking of building my own render station. I have a budget of $2000. I have attached a picture of my Newegg cart, with options for the GPU. Please let me know what you think!

 

[Attached image: Capture.PNG - screenshot of the Newegg cart]


I believe the photo I posted had everything I need. I don't understand why people buy workstation GPUs if the gaming ones are more powerful, on paper. I realize there's a difference in drivers, and updating drivers makes a HUGE difference, but still... why?

 

I like to do animations, and this will be separate from my T530 laptop, which has a workstation GPU (an NVS 5400M, I think?). I like ATI better, since it's cheaper. I don't think it matters much, correct?

 

Does anyone also know if student licenses through Autodesk work on multiple machines?


Haven't heard of any memory issues with the R9 280X (or other Radeons, tbh) - at least not at the hardware level.

The R9 280X is an updated 7970 GHz Edition GPU, a pretty much proven design over the last 3-4 years.

 

Workstation cards are pushed by computer companies simply because they can mark them up for a greater profit.

They do have their place in the form of optimized drivers, slower / more stable timings & often more RAM or ECC RAM (usually only for the top-of-the-line models, past the $1,000-1,500 mark).

 

Some software engines love WS cards, others - like most Autodesk products - don't really care.

Usually OpenGL applications favor them, but D3D applications - like most Autodesk products - run fine on gaming cards with "generic" factory drivers. Since you can get gaming cards with the same, if not more, raw performance for much less money, it is hard to justify a workstation card unless you get some significant perks to outweigh the extra cost.

 

Solidworks, Catia, Siemens NX, ProE etc. all put them to good use. Maya does OK with gaming cards since Viewport 2.0; Rhino is probably too CPU-limited to benefit from uber-fast GPUs, same for Sketchup.

3DS, Revit, AutoCAD etc. are all D3D; quite often Quadro/FirePro drivers might even introduce issues, and manufacturers often don't even bother to pay Autodesk to add them to the hardware compatibility list.


Haven't heard of any memory issues with the R9 280X (or other Radeons, tbh) - at least not at the hardware level.

The R9 280X is an updated 7970 GHz Edition GPU, a pretty much proven design over the last 3-4 years.

Back in Greece, Dimitri, the problems with 280X cards are a very well-known issue, especially with some editions from Asus, XFX and others. Greek tech forums are loaded with complaints and reports of RMA requests. The fact is that these "X" editions are factory (highly) overclocked versions that have reached the hardware limits of the card, especially on the VRAM side, which leads to artifacts in many cases. I thought I should mention this, since I've witnessed many forum threads about faulty 280X's up to now.


I don't understand why people buy workstation GPUs if the gaming ones are more powerful, on paper.

 

It is mostly because of marketing. If the decision maker is uninformed, it is just easier for them to stipulate "Autodesk Certified" hardware.

 

http://www.nvidia.com/object/autodesk-design-suite.html

 

There is a long legacy of users hacking the gaming cards to accept the professional drivers, proving that there is little difference in the hardware.

 

http://www.tomshardware.com/news/Nvidia-GTX-690-Quadro-K5000,21656.html

 

Since it is considered a professional application used to make money, they charge differently for it. It is not much different from Dell/HP/Lenovo etc. charging a premium for "workstations" that have the same, or in some cases under-performing, components compared to their "gaming" offerings.


For some reason these posts don't appear under my profile for cgarchitect.

 

Thanks for the help. I assumed Benjamin was right, but I kept wondering why they pitch the difference. I've heard the driver argument, but I never knew people could force the Quadro/FireGL drivers onto a "gaming" card.

 

Considering that most gaming cards are more powerful on paper and in the real world, I figured I'd go with those.

 

I got an offer from a student at my school for his machine, but I feel it isn't worth it - or is it? Let me know what you think. The computer might be worth the price he's asking, although I wouldn't pay that for it. But is it a start toward a greater machine?

 

I've also heard rumors about the R9 390X coming out soon. I believe they'll drop the prices on the current generation, so it might make more sense for me to wait...

 

Here is the machine I am being offered:

 

Hey!

These are the desktop specs:

Processor: AMD Phenom II X4 965 Black Edition 3.4GHz

Graphics Card: XFX Radeon HD 5770

RAM: 8GB OCZ DDR3 at 800 MHz

Motherboard: Gigabyte GA-MA770T-UD3P

Hard Drive: Western Digital Caviar Black Edition WD6402AAEX 640GB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5"

 

I am asking for $400 for the complete desktop with the case and power supply. I think this is a fair price considering I could sell the parts individually and easily get $500 out of it all. Let me know if you have any questions or offers. Thanks


I could sell the parts individually and easily get $500 out of it all

 

Your friend is a funny one. Anyway, there is zero reason why you should buy a build like that. There is nothing you could do with it or upgrade about it; it's simply outdated, and far below any current needs in every regard.

 

Regarding other things:

 

-forcing pro drivers onto gaming cards is a practice of the past; it was never a particularly successful endeavor, it only worked briefly for certain models, and it wasn't a 100% working solution either. It's definitely not possible today.

-AMD cards are still good contenders in the gaming arena, but I would avoid them for CGI work since OpenCL doesn't get anywhere near the level of attention that nVidia's proprietary CUDA does. Most software vendors either support it poorly (VrayRT GPU) or not at all (almost every other GPU renderer on the market runs purely on CUDA).

-Mental Ray is CPU only. iRay is CPU & CUDA, so again, nVidia only.

 

-why did you supply a list without a motherboard and CPU in your original post?

 

Have a look at Dimitris's blog and build a proper workstation.

 

http://pcfoo.com/2014/12/cg-workstation-the-pro-q4-2014/


As far as the used machine goes, I would not bother. You can build something faster for the same money if you go for used parts, or buy a new one for a bit more - again, faster - or even a $600 or so laptop that would match it in speed as far as modeling / rendering goes.

 

As far as new GPUs coming in the near future - well, this is practically true every semester... AMD will announce the 300 series, then nVidia will bring more GM200 cards a few months later, and perhaps both of them will renew their lines & pricing for next Xmas season, and the story goes on... If you need something, you get it and make the best use of it. Waiting for the next best thing = there will always be an excuse to buy nothing.


CPU renderer (Vray/VrayRT, MentalRay, Corona, Maxwell, etc.) = Runs on the CPU only. The more cores, and the higher their frequency (and the newer their architecture), the better the performance.

 

GPU renderer (VrayRT GPU, Octane, Redshift, etc.) = Runs on the GPU only. The more GPU [stream (AMD) / CUDA (nVidia)] cores, and the higher their frequency, the better the performance. Almost all of them run on CUDA only, with the exception of VrayRT, which runs on either, but is poorly optimized for OpenCL (Chaos Group points fingers at AMD).

 

Combined GPU & CPU renderer (iRay, Thea, Indigo etc.) = Able to use both CPU and GPU. How much each contributes depends on the architecture of the particular renderer. If it's primarily a CPU renderer, like Thea and Indigo, the GPU only provides additional speedup; if it's primarily a GPU renderer, like iRay, 95% of the performance comes from the GPU. Even though this sounds best on paper, in practice it doesn't hold up very well against the specialized competitors above.

 

Theoretical difference between OpenCL and CUDA = Irrelevant, because you're not choosing based on which one is better on paper (that would be OpenCL, since it's an open standard, just like OpenGL/Vulkan should be superior to DirectX - but in reality what matters is that proprietary standards get pushed into practice harder by hardware vendors), but based on which one you'll actually get to use, because the software vendors decide that. And on that field, CUDA is the undisputed winner, without any need for discussion. If 90% of renderers use CUDA, you will get to use CUDA. End of needless comparison.
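Not from the original post, but here is a tiny Python sketch just to make the categories above concrete. The engine names and classifications are taken straight from the post; the table layout and helper function are purely my own illustration (not any real renderer API), and Thea/Indigo are left out because the post doesn't say which GPU API they use.

```python
# Toy summary of the renderer categories described above.
# Classifications follow the post; the structure itself is illustrative only.

RENDERERS = {
    # name:          (device,   gpu_api)
    "V-Ray":         ("cpu",    None),
    "Mental Ray":    ("cpu",    None),
    "Corona":        ("cpu",    None),
    "Maxwell":       ("cpu",    None),
    "Octane":        ("gpu",    "cuda"),
    "Redshift":      ("gpu",    "cuda"),
    "V-Ray RT GPU":  ("gpu",    "cuda_or_opencl"),
    "iRay":          ("hybrid", "cuda"),
}

def gpu_accelerated_engines(vendor):
    """Engines that can use the GPU at all, given an 'nvidia' or 'amd' card."""
    usable = []
    for name, (device, api) in RENDERERS.items():
        if device == "cpu":
            continue  # CPU-only engines don't care which GPU you have
        if api == "cuda" and vendor != "nvidia":
            continue  # CUDA-only engines need an nVidia card
        usable.append(name)
    return usable

print(gpu_accelerated_engines("nvidia"))  # all four GPU-capable engines
print(gpu_accelerated_engines("amd"))     # only V-Ray RT GPU (via OpenCL)
```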



 

Very enlightening and well-collected info, Juraj, for those of us who haven't tried out all of these renderers. Sticky it is...


I've been reading into a lot of these renderers. My experience is in Vray and Mental Ray, albeit not that deep, but most PBR engines seem to work alike on the software side.

 

Since I'm leaving college, I'm trying to decide on hardware that would match my desired set of rendering abilities. Now I have to buy my own software! ARGHHH. It seems like Mental Ray would be the most commonly used at basic architecture firms, being that it's included with 3DS Max, which is a program I've only recently started to explore.

 

To me, it sounds like the best option now would be an Nvidia GPU and an Intel CPU; not many websites compare that setup to an ATI/AMD one. I realize that AMD supposedly has more processor cores, but I've read they aren't true cores. It seems the hardware isn't as clear-cut as it used to be. AMD seems to constantly plan for the future, but never delivers in the present. I am partial to them because they are the underdogs; either way, it would be faster than my laptop.

 

That being said, Juraj, would you recommend that I get a renderer that utilizes both, or stick with V-Ray, supposedly the best? I have to make a decision on the software side before I can move forward on the hardware.

 

THANKS EVERYONE, this community is fantastic!


I found this machine for $200. I think it's a great start; I can add some HDD and some memory, then a new GPU!

http://albany.craigslist.org/sys/4935721685.html

 

If you don't want to look at the link, here are the specs:

Components:

Cooler Master Elite 310

MSI M5A97 LE R2.0 AM3+

AMD FX 8350 Black Edition 4.0 GHz

Kingston HyperX 8 GB DDR3

MSI TwinFrozr III GTX 660 2 GB

Corsair CX750M

2x 7200 RPM 160 GB HDDs (RAID 0 (320 GB)), 7200 RPM 1 TB HDD


AMD seems to constantly plan for the future, but never delivers in the present. I am partial to them because they are the underdogs,

 

That's pretty much it. When it comes to the gaming world, both AMD CPUs and GPUs deliver good performance at a good price, often at a slightly better price/performance point (it's not really a drastic difference) than Intel and nVidia, which can dictate higher margins since they're the market leaders.

 

But when it comes to CGI work, using apps like 3dsMax and various GPU renderers, for whatever reason (whether it's really the hardware architecture or poor support from developers, we will never know for 100%), they perform worse. FX series CPUs get easily beaten by average i5s, AMD drivers crash far more often in 3dsMax than any nVidia driver ever did, OpenCL provides less speedup than CUDA does, etc. - the list goes on forever.

So while it may be sentimental to support the smaller (and morally perhaps more righteous?) company, pragmatism should prevail: always go for whatever gives you the best option without making sacrifices.

 

I don't want to go into rendering engine comparisons; that can never be truly unbiased ground, and it inflames too many emotions. Nonetheless, you should primarily consider the engines that are most popular within the community. The reason is simple: their development is faster (fueled by more income), their resources (assets, tutorials, ...) are more plentiful thanks to the bigger community, their use is more prevalent in professional studios, etc.

 

That currently has a clear winner: V-Ray. MentalRay, however its fans try to counter-argue, is a thing of the past. It has no development, it's cumbersome compared to the current market leaders, and achieving comparable visual quality requires more effort. The only positive is that it's bundled with Max, so it comes at no additional cost. But that's about the only benefit it has.

I personally use Corona, imho right now the second most popular engine, thanks to its very easy and comfortable workflow.

 

I could write something about each of the multitude of engines above; each has something great going for it, but imho their underdog position precludes them from being suggested as real competitors. That's not necessarily negative, but if you want to choose one of them, you should be advanced enough to know why. If you're not at that level of knowledge, it's better to keep your hands away. An objective discussion of them would be the size of a small research paper...


I found this machine for $200. I think it's a great start; I can add some HDD and some memory, then a new GPU!

http://albany.craigslist.org/sys/4935721685.html

 


 

OK, that's not really that bad of a deal, especially since the 660 is still a very nice card. If you're trying to save budget, you can always carry that GPU over to a future build once you scrap everything else.


So those specs were a scam listing... damn, oh well, onward and upward.

 

OK, so I figured Intel is boss; every rendering program loves cores and clock speeds. Why not double the ante and start with this:

 

http://www.ebay.com/itm/HP-Z800-Dual-Xeon-Hex-Core-2-67GHz-12GB-RAM-1x1TB-1x500GB-2x300GB-HDDs-/291408713663?pt=LH_DefaultDomain_0&hash=item43d95017bf

 

Fill up the RAM slots, add an SSD for the OS, and put in a better video card. Then I have a rendering beast!

 

Thanks for the help everyone, this has been a great conversation.


Well, you will do all that and end up at a $1000 budget. For the same money, you could build a machine like that new, from the ground up.

 

Don't be fooled into thinking that just because it's dual Xeon it's powerful. First of all, they're 5-year-old Xeons, and there have been 4 architectures introduced since. Then, they're quite mildly clocked. All together, in rendering it's about equal to a current hexacore i7 on a slight overclock (like the i7 5820K). For everything else, like modelling etc., it will only use a single core of those two CPUs, and that will be roughly 1/4 of the performance of a modern-day i7. That's hardly good. And lastly, nothing lasts forever: something among the CPU/motherboard/PSU will go awry sometime in the next 3 years if you use that machine for rendering; you can basically count on that.



 

I know very little about Xeon processors, as I have literally never cared about them until this point. I do notice that the current-generation E5 and E7 are WAY more expensive. I've done some research, and it seems that nobody has bothered to compare ATI and nVidia cards side by side in Vray, but either card will function well, albeit certain issues may exist with specific drivers, and those and the software are constantly being updated.

 

So, in your professional opinion, what is a good way to balance performance and cost, in terms of the Xeon-style setup mentioned previously? Those are rather cheap, but still powerful. Is there a specific dual-Xeon architecture? I don't know the names, like the Haswell ones on the i7 side(?)

(edit: reading into it, the Haswell and Ivy Bridge architectures are behind the E7 and E5 too)

 

You have been a great help thus far; you should have a bitcoin tip jar!

Edited by matthewhickey

Pairing-capable Xeons (E5-2xxx vX) are always expensive, and only become cheap once their performance is already considered low due to moral age (moral age depends on technological progress: the faster it is, the faster moral age advances compared to physical age).

 

So the Xeons in that refurbished HP workstation were also expensive and considered fast - 5 years ago, at a price of 1200 dollars per unit. So was the Pentium 4... and the Athlon 64, etc.

 

So you are correct that they are cheap, but wrong about them being powerful. As I said, both together are about as powerful as a single modern hexacore i7 when it comes to rendering, because you can still utilize all 12 cores together. But workstation tasks (running 3dsMax, Photoshop, etc.) only use one core, so you will be using 1 core that is clocked at only 2.7 GHz and is 4 generations old, with each generation bringing anywhere from 5 to 30% performance increase. In reality, even a cheap Haswell i5 will be faster at 3dsMax modeling tasks.
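A rough back-of-the-envelope sketch of that comparison (mine, not Juraj's). The clock speeds, core counts and the per-generation uplift are illustrative assumptions within the 5-30% range quoted above, not benchmarks.

```python
# Toy model: performance ~ cores * clock, discounted per architecture generation.

def perf(cores, clock_ghz, generations_behind, uplift_per_gen=0.10):
    """Rendering scales roughly with cores * clock; older architectures are
    discounted by an assumed ~10% IPC gain per generation they are behind."""
    arch = (1 + uplift_per_gen) ** (-generations_behind)
    return cores * clock_ghz * arch

# Hypothetical dual X5670 box: 2 x 6 cores @ 2.67 GHz, ~4 generations old.
old_multi  = perf(cores=12, clock_ghz=2.67, generations_behind=4)
old_single = perf(cores=1,  clock_ghz=2.67, generations_behind=4)

# Hypothetical modern hexacore i7 on a slight overclock (~4.0 GHz).
new_multi  = perf(cores=6, clock_ghz=4.0, generations_behind=0)
new_single = perf(cores=1, clock_ghz=4.0, generations_behind=0)

print(f"rendering (all cores): old {old_multi:.1f} vs new {new_multi:.1f}")
print(f"modelling (one core) : old {old_single:.1f} vs new {new_single:.1f}")
# With these made-up numbers the old box stays roughly competitive for all-core
# rendering, but falls clearly behind in the single-core speed that 3dsMax
# modelling and Photoshop depend on.
```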

 

Now regarding their physical age: they're probably 5 years old, and so is the whole workstation. You can't expect it to run another 5 years without something going to the grave.

 

The price seems good only at first glance; those workstations are actually worth far less. Just watch the auction - it will end in one day, and no one will bid on it. It's not a good deal.

 

Regarding contemporary 2P Xeons: yes, they're more expensive than i7-based workstations, but they also aggregate much more performance in a single box. So it's more a question of budget and comfort, which comes at a cost. They become effective in performance/value terms starting from roughly 5000 euros a node. Without such a budget, they make little sense.

 

I would suggest you simply build a proper i7 machine within your budgetary means and use it for the next 4 years.


Wow, you really know your stuff. I understand a lot more now about scalability, budgets, purpose, etc.

 

What would you recommend I offer on this computer?

http://newyork.craigslist.org/mnh/sys/4912812296.html

 

COMPONENTS DETAILS

Motherboard Asus P6t deluxe LGA1366

CPU Intel Xeon X5670 6-core 12-thread

RAM Kingston Hyper 12GB DDR3 1866

PSU EVGA 750w modular 80plus

Heatsink Corsair H100i water cooling

GPU Gigabyte R9 280X 3GB 384bit

SSD Corsair 120GB

CASE Corsair Midtower

 

I will be using Vray 3.0 for 3dsMax, so it uses mainly the CPU, with support for GPU rendering (a 390X is possible in the future). I think this is a great place for me to start, but the Xeon is an X5670, so it's a tad old. But it's upgradable. If anything, I'd upgrade to an 18-core when it becomes financially feasible. Let me know what you think, and once again,

 

Thanks for all the help! I greatly appreciate the knowledge and thoroughness of this community.


But it's upgradable.

 

No, it isn't. LGA 1366 is a 6-year-old socket. Since then there have been 2 regular sockets (1155 & 1150) with 3 architectures (Sandy -> Ivy -> Haswell), plus their respective high-end variations (Ivy Bridge-E, Haswell-E).

 

Regarding the X5670, it can't be upgraded either; there is no successor. Current Xeons are based on Haswell-E and fit the LGA 2011-3 socket. The 18-core one fits only high-end workstation motherboards (which cost 500+ euros each), and the Xeon itself costs 3000 euros and... that's another long discussion.

 

Regarding how powerful this "6-core/12-thread" 5-year-old Xeon is: well, its rendering performance is roughly that of a 200-dollar Haswell i5 (4 cores / 4 threads). An amazingly... obsolete deal.

Worth 800 dollars? Of course not.

The R9 280X is obviously nowhere near a GTX 970 either, more like an older 670. Not bad, but you're buying this machine to also work as a workstation, not just to play Call of Duty on a budget.

 

Now, this is the last stupid craigslist/eBay offer I'm commenting on. Either you go to Dimitris's blog and build something like he outlines - http://pcfoo.com/ -

or you're on your own, wasting money on stupid deals from imbeciles selling their own overpriced gaming machines because they know other imbeciles know even less and will buy them.

 

 

I understand budgets can be tight and one wants the best for his money. I came from super ramen-eating college days myself, and I learned CGI on the laptops of dormitory girlfriends. I also kept building mountain bikes from second-hand parts for years, since that was the only way to afford them, until I realized it had become such a money hog anyway, completely unworthy of their actual performance.

So when I bought my first workstation 4 years ago - an i7 2600K, 8 GB RAM, 64 GB SSD - I simply took a 1200-euro loan. It was absolutely worth it: the computer, upgraded over the years, survived and still works as a fileserver in its current form (2 TB of SSDs, 32 GB RAM), and others would still be working on it for another 2 years at least.

 

Computers aren't cars. The development is so rapid that they lose value instantly, the moment you click the "order" button. Anything you buy that is older than a year has since been succeeded by a same-price successor that is half again faster, or has a similar offering at half the price with the same performance as new. It is very rarely worth buying anything older in the hope of finding some magic deal. The exception is perhaps GPUs, which get resold immediately the moment a successor arrives, by enthusiasts who need the 'latest shiny toy'. But we're talking a one-year cycle, not 3+.
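A tiny worked example of that depreciation rule of thumb (mine, with made-up numbers, not real prices or benchmarks): if each year brings a same-price part that is ~50% faster, the used part has to sell at roughly two thirds of the new price or less just to break even on performance per dollar.

```python
# Illustrative only: how performance-per-dollar erodes under the
# "same price, ~50% faster every year" assumption from the post.

def perf_per_dollar(perf, price):
    return perf / price

new_part      = perf_per_dollar(perf=150, price=300)  # this year's model
year_old_part = perf_per_dollar(perf=100, price=300)  # same price, ~50% slower

break_even_price = 100 / new_part  # price at which the old part matches the new one

print(f"new part      : {new_part:.2f} perf/$")
print(f"year-old part : {year_old_part:.2f} perf/$")
print(f"old part only breaks even below ${break_even_price:.0f}")
# -> $200, i.e. two thirds of the new price; above that, the newer part
#    is simply the better buy.
```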

Edited by RyderSK
