Off-The-Shelf Render Node Recommendations


David Turner

You might want to consider building them yourself. It might seem daunting, but it is really not that bad. With the money you save by not paying someone else to build one, you could build another node or go with all top-of-the-line components. Building it yourself also helps you understand the node, which makes it easier to troubleshoot later.

 

There are plenty of videos on YouTube on how to build your PC that are highly informative and make it a pretty easy process, even for a first-time builder.


I also agree with Scott, but it is also true that not everybody feels comfortable building or doing maintenance themselves. Some people and small companies also value warranties and tech support when they are offered for about the same price you would spend building your own.

For example, at the company where I work they got us a few of these Dells to replace our old render nodes. They are not exactly what we spec'd, but surprisingly they perform pretty well. You can get them with an i7 or a Xeon, with all the Dell corporate support and all that mumbo jumbo :p

 

 

Myself, I always build my workstations, but a while back during a project crunch I bought two off-lease HP Z600s at $750 each. They are dual six-core Xeons at 2.6 GHz with 32 GB of RAM. Maybe I could build an i7 with similar performance, but for the price, and with no time to build a PC, I think these are a good deal too.



Dell/IBM workstation stuff with on-site support has always been an amazing deal for companies; with bulk pricing you can pay as little as half of the off-the-shelf price per unit, and that's unbeatable. It's just funny when freelancers buy them convinced they're getting a superior build or something :- ) Then, and only then, is it a pure rip-off.

 

Recently I don't have much time, or rather patience, to build stuff, so I found a local college student who lives close by and has a lot of time; he's very cheap and builds with perfect attention to detail. It's a win/win situation.

 

 

Regarding the link above, spec it with dual 2660 v3 Xeons (you can go higher if you wish, just not lower), 64 GB of RAM (you can go higher, but you can't have more in your workstation, so such asymmetry would be useless), a 256 GB SSD and the simplest passively cooled GPU available (like a GT 720 for 30 dollars). See what price they can offer you. If the price is fine with you, then go for it.


Our render nodes don't even have GPU cards at all. They just use the onboard/motherboard graphics and we log onto them using remote desktop.

 

Regarding onboard graphics, the LGA 2011 platforms (v1/2/3), which is where dual Xeons belong, don't have them.

 

But you can configure the UEFI to skip the GPU check if you want a headless system and run it with a software-emulated GPU over remote desktop (a feature mostly present in the Server editions of Windows, but I found out you can do it with the regular ones as well). Still, considering it's a 30 dollar investment and it runs 3ds Max almost as well as a 980 Ti :- ) (ok, I am hyperbolizing here, but it's perfectly usable), you'll have a more versatile system with the cheap GPU, especially if this is going to be your only major node.
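
If you do go the headless route, here is a small sketch for checking that such nodes answer on the standard Remote Desktop port before you try to log in. This is only an illustration, not part of anyone's setup above, and the hostnames are made up.

```python
# Check that a list of headless, GPU-less render nodes accepts connections on
# the standard Remote Desktop port (3389). Hostnames are hypothetical.
import socket

RENDER_NODES = ["rn-01", "rn-02", "rn-03"]  # hypothetical node names

def rdp_reachable(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if the node accepts a TCP connection on the RDP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for node in RENDER_NODES:
    print(f"{node}: {'up' if rdp_reachable(node) else 'unreachable'}")
```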


Ok, then the second part of my post applies ;- )

 

The GPU built into the motherboard, not the CPU.

 

No, LGA 2011 doesn't have an integrated GPU in the socket (also, the last time it was integrated in the CPU was the consumer-grade 1155, Sandy Bridge).

There is an exception to this, which is a small subset of SuperMicro boards that additionally have a soldered-on pseudo-GPU (like an ASPEED chip), but that is neither here nor there. More common choices like the Asus WS boards adhere to the design and don't include this, so the need for a separate cheap GPU stands. Or just use software emulation.


We are currently adding some new machines to our farm, we have gone pre-built and piecemeal in the past. The best of both worlds (for us) is to buy bare bones servers, which are pre-built with everything except processors, memory and hard drives. This means the cases have cooling, power supply and motherboard pre-loaded. We put these together quickly and we purchase extra parts, which we can swap out easily when components fail.

 

Due to the per-machine cost of licensing we decided to go with the most horsepower we could afford (there is a threshold beyond which things get expensive with diminishing returns). The current machines will each have 2 Xeons (24 cores total) and 128 GB of memory per node, which should give our farm a good bump in speed (we are building 15 of them). But when you start to add this many machines, there are important considerations: power, cooling, and noise.
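
For a rough sense of that threshold, here is a back-of-the-envelope sketch; the hardware prices and the per-node licence figure are placeholders, not our actual quotes.

```python
# Back-of-the-envelope cost-per-core comparison, including the per-machine
# licence that pushes you toward fewer, beefier nodes. All figures are
# placeholders, not actual quotes.
LICENSE_PER_NODE = 1000  # hypothetical render/OS licensing cost per machine

configs = [
    {"name": "1x 8-core i7",    "cores": 8,  "hw_cost": 1500},
    {"name": "2x 12-core Xeon", "cores": 24, "hw_cost": 5500},
    {"name": "2x 14-core Xeon", "cores": 28, "hw_cost": 9000},
]

for c in configs:
    per_core = (c["hw_cost"] + LICENSE_PER_NODE) / c["cores"]
    print(f'{c["name"]}: ${per_core:.0f} per core including the licence')
```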

 

We have a dedicated server room here at our main office, but need to get an additional APC UPS (APC Symmetra LX 16kVA) unit to support the power load. In our previous NYC office (open floor plan with no dedicated server room) we had a small farm of these types of machines and had to get a special rack (NetShelter CX 38U) which has built-in cooling and noise suppression. The heat was one thing but when all the machines were fired up, it sounded like a jet engine.

 

Just some considerations: the cost of hardware is one thing, then there is licensing, power, cooling, and noise suppression, and don't forget about the cost of a Windows license.

 

-Nils Norgren


It means that with server platforms for Xeons you can go up to 1536 GB of RAM depending on the motherboard, but what would be the purpose if you don't use that much in your personal workstation (which, if it's an LGA 2011-based i7, tops out at 'only' 64 GB)? You only need as much memory in your node as your scene requires.

Since you will still render quick previews on your workstation or co-render in distributed rendering/Backburner, I presume your scenes fit within this amount; otherwise you wouldn't be able to work comfortably (it would have to swap to the hard disk, which makes everything very slow).

 

There is no performance increase from the amount of RAM alone (beyond having enough memory to feed all the cores, and that breaking point is roughly around 1 GB per core), so while putting 128 GB or more of memory in a server-based node could be rather future-proof (and very expensive...), unless you have scenes that take 100 GB to render right now, you don't need it. And if you do, then you need that much in your workstation as well, and thus a server-based workstation too.
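
If you want to put that rule of thumb into numbers, a minimal sketch is below; the ~1 GB per core figure comes from the paragraph above, and the example scene size is just an assumption.

```python
# Minimal RAM-sizing sketch following the rule of thumb above. The ~1 GB per
# core figure and the example numbers are assumptions; plug in your own.
def suggest_node_ram_gb(cores: int, scene_working_set_gb: float,
                        per_core_gb: float = 1.0) -> int:
    """Suggest a node RAM size rounded up to a common configuration."""
    needed = max(scene_working_set_gb, cores * per_core_gb)
    for size in (16, 32, 64, 128, 256):
        if needed <= size:
            return size
    return 512  # beyond this you are firmly in server territory

# Hypothetical example: 24-core dual-Xeon node, scenes peaking around 40 GB.
print(suggest_node_ram_gb(cores=24, scene_working_set_gb=40))  # -> 64
```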


We have two air coolers that keep us and the machines cool. Or are you talking about processor temperature?

 

I am talking about room cooling. In the front of our racks we have a cold air supply that is around 60°F/15°C, and the exhaust fans at the back of the racks register about 99°F/37°C. We have a dedicated air handling unit for the server room, which is enclosed. If/when the cooling unit goes offline the room can shoot up to 120°F/48°C in 10-20 minutes, so we monitor the room and can initiate a remote shutdown if this happens.
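
The monitoring/remote-shutdown part can be as simple as a small script. This is only a sketch, with a placeholder for whatever temperature sensor you actually query, and made-up node names and threshold.

```python
# Sketch of the "monitor the room, shut the nodes down if cooling fails" idea.
# read_room_temp_c() is a placeholder for whatever sensor/SNMP/IPMI query you
# actually have; node names and the threshold are made up.
import subprocess
import time

RENDER_NODES = ["rn-01", "rn-02", "rn-03"]  # hypothetical hostnames
SHUTDOWN_ABOVE_C = 40                       # pick a threshold for your room

def read_room_temp_c() -> float:
    """Placeholder: replace with your real temperature reading."""
    raise NotImplementedError

def shutdown_node(host: str) -> None:
    # Standard Windows remote shutdown; needs admin rights on the target node.
    subprocess.run(["shutdown", "/s", "/f", "/m", f"\\\\{host}", "/t", "0"],
                   check=False)

while True:
    if read_room_temp_c() > SHUTDOWN_ABOVE_C:
        for node in RENDER_NODES:
            shutdown_node(node)
        break
    time.sleep(60)  # poll once a minute
```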

 

The point was that the cost of adding render capacity needs to include power, cooling, noise, and IT considerations.

 

-Nils



 

I guess your room is small compared to what you have inside it. Machines and humans both produce heat.


I think Nils is talking about a dedicated rack room, and humans generally stay out of there unless something goes wrong. With the amount of racks you have, you wouldn't want workers sitting next to those. Plus, to keep the machines happy, you need the room temp a bit cooler than most workers would like, unless you want them wearing parkas and gloves while they work.

We had one where I used to work with a dedicated AC unit on its own temp control as well. We had a failure on that AC unit over a weekend, and by Monday morning you could easily have cooked a ham in that room it was so hot. Our temp warning system failed as well, so no one was notified of the increase in temp. We damn near had a major loss on our hands. So cooling, and monitoring of said cooling, is probably a major investment you need to think about when looking at spending a good chunk of money on render boxes. You need to protect that investment.



 

Scott's got it right. We don't allow anyone in the server room; there is no need for anyone to go in there. It is not only cold, the air is whipping around, and it's so loud you can't hold a conversation without shouting.

 

-N

[Attached photo: IMG_1576.JPG]


Yes, I agree with Nils, and maybe there is some confusion when you don't have any background on who is who in here, but for instance, Nils at Neoscape is in a very different environment than freelancing Joe at home.

 

But there are still many things we can learn from the bigger guys like Nils.

At our arch firm (about 400 employees) we also have a dedicated everything for the computer servers. I've been in there, and it is loud and cold, clean and crazy too.

 

As a part-time freelancer I settled on two dual Xeons plus my workstation. When all of them are rendering it gets loud and hot in my little home office (15 ft x 10 ft), which is why, instead of getting more computers, I just rely on render farms; for me it is way more cost effective than buying and licensing more computers.

 

As cloud technology gets more affordable, I think that even for small companies cloud rendering will be a better option than having 2 extra computers at the home/office.

I know we old dudes like to have things within sight, but sometimes I'd rather send renders out than fry myself during a warm summer here in Southern California.


Juraj,

I guess that I could pretend to completely understand your reply, but I will cut to the chase.

Please see below for what I have spec'd-out and the price.

Do you think that it's worth it or should I shop around some more?

 

- 2x Fourteen-Core/28-Threads Intel® Xeon® E5-2697 v3 2.6GHz-35MB Shared L3 Cache-9.6GT/s QPI-145W-22nm

- 256GB DDR4-2133 ECC Registered 72bit Interleave Supermicro®

- On-Board or Basic Graphics Card accordingly to M/B specifications

- 500GB 7200RPM SATA 6Gb/s Seagate® Barracuda™ NCQ 16MB Cache ST500DM002

- Genuine Microsoft® Windows® 7 Professional Edition SP1 64-bit

- Supermicro® Intel® C612 C.S. X10DAL-i -2xPCIe 3.0 x16-1xPCIe x8(in 16x)-2xPCIe x4(in x8)-(Some PCIe slots disabled with a single CPU)-To - 256GB-DDR4 R-ECC 2133MHz.-2xGbE-10xSATA 6Gb RAID 0/1/10/5-9xUSB 2.0/3.0x-7.1HD-audio-1xTPM-ThunderBolt Add-On Card supp.

- 2x Corsair® H75 Silent Liquid CPU Cooling System Radiator with 12cm Fan for Quiet Extreme Performance with superior thermal conveyance for Two installed CPUs

 

As Configured: $10,899.00 USD

 

I plan just to set it up as another workstation dedicated to rendering. I wish that I had the space and resources for all of the discussed server room cooling options, but I don't.

Thanks.

-DT


2 x 14 cores!!!!!!!!

now that's rendering power...

 

What Juraj explained was that it really makes no sense to put that much RAM in your render nodes when the workstation can't match it. Not sure what you have as a workstation now. If you are thinking of upgrading your workstation too, then try to keep things matched.

At least not too different RAM-wise.

For example, you are planning to put 256 GB of RAM in your render node. Can your current workstation match that number?

 

I would recommend a larger hard drive, they are cheap now, or a RAID setup; shoot, you can even go solid state, which will really speed things up. Depending on your scenes, loading files and textures and writing the final image can be a very stressful task.

 

Where do you store your textures and models? On a server or a NAS drive? And is your network up to the task too?

 

I don't want to give you a headache, really, but thinking that upgrading CPU and RAM will speed up the whole process falls a little short. There are different factors that can optimize your workflow, and if you are in a good monetary position right now, you should maybe reconsider your whole network and work machines so you get the best investment.

 

If you feel happy with the way things work now, that machine will really cook images faster than it takes to load them :p


Francisco,

 

Thanks for the reply.

 

My planned workflow is to have my scenes stored on my server (not a render server), check them out using Perforce, work on them locally on my primary workstation and then check them back into the server. Once I am ready to render (the final, high-resolution image), I would check them out onto the rendering workstation and render from there.
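
For reference, that round trip could be scripted roughly like this, driven from Python via the standard p4 command-line client; the depot path is hypothetical and should be adjusted to the actual Perforce setup.

```python
# Rough sketch of the check-out / check-in round trip described above.
import subprocess

SCENE = "//depot/projects/example/scene_final.max"  # hypothetical depot path

def p4(*args: str) -> None:
    subprocess.run(["p4", *args], check=True)

# On the primary workstation: get the latest revision and open it for edit.
p4("sync", SCENE)
p4("edit", SCENE)

# ...work on the scene locally, then check it back in:
p4("submit", "-d", "Scene ready for final high-res render", SCENE)

# On the rendering workstation: pull the submitted revision and render there.
p4("sync", SCENE)
```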

 

My primary workstation (HP Z820) has 80 GB of RAM, and my current situation might be cause to upgrade that as well.

 

I am not a fan of SSDs currently. They have too little space for the money, and HP Z820s can be fussy when you mix different drive types and sizes. Although I might add a drive to the currently spec'd machine. My current render workstation is an HP Z820 as well, but I don't have the time/patience/expertise to upgrade it.

 

-DT

