
A simple render farm setup



Hi all,

 

I'm a CAD guy and would like to venture more into visualisation and rendering, initially producing high-quality stills.

I'm going to purchase V-Ray for rendering and Rhino for modelling, and would like to find the fastest way of producing high-quality stills.

I am going to buy a new system (i7 920, Asus P6T, 8GB RAM, etc.) but would like to know if I can make use of my old spare machines:

2x Dell laptops (Celeron 1.33GHz CPU, 1GB RAM)

1x PC (1.8GHz CPU, I think, 2GB RAM)

Can I use these in a render farm? What difference will it make to render times? Is it simple to do?

Like I said, I'm new to rendering!!

Thanks


Apparently it is easy to do; however, many problems can arise, as I've recently found out.

 

I successfully set up a render farm, then did exactly the same thing at another company and all hell broke loose.

 

Anyhow, attached is a pretty good start :)

 

There are some great guys on here willing to invest time and effort to help you achieve your goal, as I found out (although we never solved my problem, as the guys I was working for shelved the render farm due to an intense workload - so at least I have something to look forward to... not).

 

Ask questions; I'm sure they will be answered.

 

I can't comment on your setup, as I have only set up render farms using 3ds Max and mental ray so far, but yes, I would imagine your other computers could be used as extra power in some way.


Thanks for that! It looks useful.

 

I'm sure I can find many other documents on the web about setup, but I would still like to hear about first-hand experience. I suppose the initial question is... is it worth it with the additional 3 computers at that power?

 

I am going to be following training CDs and tutorials, so I am interested in using just the one computer (not rendering on one and working on the other). So I want the fastest render speed. If adding the other computers is only going to add up to 10%, it probably won't be worth it for me :rolleyes:

Edited by hardcak

I'd say that adding those laptops to the render farm won't help that much unless you are producing quite simple scenes that don't require much RAM to render. We have a small farm in our office, but our new machines tend to render about 5 times quicker or more, and large scenes won't even render on the older machines. For reference, our older machines have 2GB RAM, and we still manage to crash them with large poly counts...
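To put rough numbers on that, here's a toy throughput model - the relative speeds are guesses based on the "5 times quicker or more" figure above, not benchmarks, and it assumes the old machines can load the scene at all:

```python
# Rough throughput model for a mixed farm (relative speeds are guesses).
nodes = {
    "new i7 920 workstation": 1.00,     # baseline
    "Celeron 1.33GHz laptop #1": 0.10,  # assumed ~10% of the new machine
    "Celeron 1.33GHz laptop #2": 0.10,
    "1.8GHz desktop": 0.15,
}
combined = sum(nodes.values())
print(f"Estimated farm speed vs the workstation alone: {combined:.2f}x")
# ~1.35x: the three old boxes together add roughly a third at best,
# and only for scenes that fit in their 1-2GB of RAM.
```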

 

Setting up the farm isn't that difficult using Backburner, but as Dave mentioned, there are often things that stop the farm from working, and it can be difficult to work out exactly what the problem is... Either way, have a go and see if they do help...


Can anybody give me some typical examples of render times for simple scenes based on a system setup (or better, mine without the added computers)? I'm trying to get an idea!

 

The computer I'm using at the moment is very low spec, so it's hard to judge any sort of times - that's why I'm going to invest in a new one!


There is no such thing as a typical render time, I'm afraid... There are hundreds of factors that affect render time - number of polys, light bounces, GI settings, materials, DOF and other effects, motion blur, etc. - so no one will be able to give you any times... The best way is to just have a go... Model a scene and render it on both machines...

 

I think all you really need to know is that the more RAM you have and the better the CPU, the faster you will render... Spec the machine to your budget and go from there...

 

People on here tend to be quite helpful, but "give me a typical render time" is too open-ended to get a specific answer...


I won't be doing animations, and the stills don't have to be really big (maybe 700x700px) as I'm only training.

 

Just concerned about how the tutorials will go, as I will be constantly going through many iterations :confused: I can't see it going very smoothly. Anyway, I have specced my machine to my max budget, so there's not much I can do about it.

Edited by hardcak

The single biggest thing that sped up my working process when it came to producing several iterations was discovering the 'region render' setting in Max, which I found out about far too late in life. Until then I was always rendering out a complete image with every iteration and change, which took a lot of time.

 

If you're new to Max, check this feature out; it is a MASSIVE time saver when it comes to rendering.
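To put a number on it, here's a toy calculation; it assumes render time scales roughly with pixel count (GI prepasses and scene parsing mean the real saving is smaller, but it's still large):

```python
# Fraction of the frame a region render touches (sizes are assumptions).
full_w, full_h = 700, 700        # the OP's target still size
region_w, region_h = 200, 200    # region around the part being tweaked
fraction = (region_w * region_h) / (full_w * full_h)
print(f"Region covers {fraction:.1%} of the frame")  # ~8.2%
```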

 

Regarding render farms, I regret not getting the absolute fastest machines I could; at the time I instead went for a greater number of slower ones. My slower machines render at half the speed of my newer ones and are already pretty obsolete. I can achieve with 3 machines now what I used to do with the other 7 - seems like a lot of wasted space and network leads to me.

Edited by Bewdy

I won't be doing animations, and the stills don't have to be really big (maybe 700x700px) as I'm only training.

 

If you are not going to render animations, just "little stills", maybe it is not necessary to set up a render farm; one or two powerful workstations will be enough. I agree with Bewdy's "region render" comments... they will help a lot with your educational purposes.


  • 1 year later...

Has anyone built a farm since then (August 10th, 2009) using modern hardware?

 

It would be interesting to see people's experiences since this thread originally began almost two years ago.

 

I figured it would be better to revive this thread than to create a whole new one.

 

I am considering building multiple dual six-core Xeon systems and arranging them together with render-management software to create a modern setup.

 

I am not sure if Quadro/FireGL-based GPU cards are best, or whether I should install several NVIDIA Tesla cards.

[Images: NVIDIA Tesla C2050/C2070, NVIDIA Quadro 4000/5000/6000, ATI FireGL V8650]

Your views?


Are you certain that Tesla cards will help you? Do your research there as you may be wasting your money. Most render farm boxes are lots of CPU grunt and RAM, nothing more.

 

I still stick to my old formula of "if you can't buy it off the shelf then leave it alone". I also do a comparative analysis between all the ideas I have and work out bang for buck based on the latest Cinebench results. Right now I am getting the best value building 1U 6-core Phenoms with DDR3 RAM. The parts are available anywhere, and the hardware combination works out well.
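Here's a sketch of that bang-for-buck comparison; the Cinebench-style scores and prices below are made-up illustrative numbers, not real quotes:

```python
# Rank candidate render nodes by benchmark points per dollar (all figures assumed).
candidates = [
    # (node, Cinebench-style score, cost per box in USD)
    ("1U 6-core Phenom", 5.9, 900),
    ("1U i7 + video card", 6.8, 1300),
    ("1U dual Xeon", 9.5, 2600),
]
for name, score, cost in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
    print(f"{name:20s} {1000 * score / cost:6.2f} points per $1000")
```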

 

Ultimately, Xeons or multiple Opterons would be faster per machine, but the cost per box versus processing power reduces their value, and neither can be fixed in-house with a quick trip to Tiger Direct. i7s are a close second because they run faster, but unless you use the old socket type you can't get onboard video, so you need a card, you impede cooling in the 1U case, etc. I lean towards i7s a lot but still end up doing the Phenoms.

 

I have found that the new SSD drives are a MUST. They make everything smoother and cooler again.


Wow, that is the first I've heard of a Phenom cluster - very impressive, and glad to see you are getting good results.

 

I feel the Teslas' power could be tapped, but not without manually tweaking the source code and piping it through the CUDA architecture.

 

Essentially, I am seeking a good open-source renderer that could be customized to take advantage of all hardware, be it a CPU, video-card GPU, Tesla, or a combination of them all. I am no expert programmer, so when I speak of programming the Tesla, that is a year or two from now. I just want the setup I end up getting this year to scale and pack a punch. Add to that, I am stuck in an apartment right now, and rumor has it 1600W is the maximum strain I can put on the outlets before the circuit breaker trips.

 

Ten 140W systems equates to 1400W; add a hair dryer being turned on to the equation, and "pop" goes the circuit breaker.
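The circuit arithmetic from that paragraph, as a quick sketch (all figures are the assumed ones above):

```python
# Outlet budget check for an apartment render farm (figures assumed above).
circuit_limit_w = 1600             # rumored limit before the breaker trips
node_draw_w = 140                  # per-node draw under render load
nodes = 10
farm_draw_w = nodes * node_draw_w                # 1400 W
spare_w = circuit_limit_w - farm_draw_w          # 200 W
print(f"Farm draw: {farm_draw_w} W, spare: {spare_w} W")
# A ~1500 W hair dryer blows straight past the remaining 200 W: "pop".
```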

 

The scenario I don't want to accept is having to wait until I move into a house or rent commercial space to get the power I'm after.



 

Seriously? You are looking at buying ten Tesla machines and you want an open-source renderer? Am I missing something?


+1 on your scheme being out of whack.

 

First off, do you need a render farm for professional work, or is this hobby/learning?

 

If it is professional work, I would stick to CPU rendering for now, as it will provide you with faster times using biased rendering and more flexibility on the output. The vast majority of the special extras out there have been designed to work with CPU rendering. This means that if your plan is to make money doing production work, I would recommend sticking to CPU rendering for 2 more years.

 

There are a few people doing high-quality production work on a GPU, but I do really mean there are only a few.

 

CPU gives you more flexibility, and there are still not enough cores on the GPU to beat a biased CPU engine. There will be, but broad adoption of the technology is still very young.

 

If it is for hobby/learning, and you are just starting out, then I would simply buy a robust GTX card for $550 and learn the fundamentals of how to code for the technology. By the time you learn the fundamentals of taking advantage of the GPU's power, the card(s) you are using will be out of date and will need to be replaced.

 

By the time you learn how to use those Teslas the way you want, they are going to be old technology. The graphics-card world is changing too fast right now to make the type of investment you are talking about unless you already know how to program for them.

 

There is another advantage to starting on a more affordable setup compared to the 10-card Tesla setup...

 

If you have 10 Tesla cards, then you have a lot of GPU power. So you will begin to throw code at it, and it will crunch it like it is a walk in the park. So you will throw more code at it, and it will crunch that like a walk in the park too. But you will be throwing this code at a more robust system than most people will have, so you will have no quality-control measure of how it might perform in a real-world environment.

 

I really believe that if you learn how to take advantage of the card in a lower-powered system, you will learn how to keep your code lean, clean, sleek, and fast. And believe me, when rendering, speed is important, so anything that teaches you tricks for keeping things efficient will only help you in the long run.

 

Besides that... think of how many dinners with wine you can buy if you don't drop all of your money on those Teslas. Just saying...

Edited by Crazy Homeless Guy

I rent 2x 56U high racks in a data centre 2 floors down. It works out as less money and better facilities. I suggest checking what is in your area.

 

Good idea; my worry is about piping data back and forth. I figure it works out well for you, since you are in the same building and get maximum transfer speeds. If it is OK to ask, what specs are the nodes in your dual-cabinet/rack setup?

 

 

Seriously? You are looking at buying ten Tesla machines and you want an open-source renderer? Am I missing something?

I apologize for that; I did not mean to say I am buying ten nodes, each with a Tesla - I do not have that kind of capital at the moment. The desire for an open-source renderer is for the sake of "working under the hood": getting direct access to the code to improve it and integrate the hardware in the particular way I am seeking, versus contracting a team of software engineers to develop a one-off in-house solution, as that does not seem feasible either, nor am I at that level, hehehe.

 

 


 

 

Very well put, and I like the various angles from which you present the different options. I also appreciate the advice on maximizing available low-end hardware, resulting in cleaner and leaner code plus speed, from the low end all the way up to HPC levels of computational power.

 

The setup is for professional work, by the way, but I do realize there is only so much I can do from home; like Archiform 3D said above, he rents a set of racks for rendering power.

 

Regarding a hybrid CPU, GPU, and Tesla setup, wouldn't the power (in theory) of all three working together outperform raw CPU clustering, though?

 

Also... I have not contacted Cray for pricing, but on an even more commercial, big-budget level, a setup like this seems quite attractive (watch the video)...

 

Video: Horton Wison Deepwater Uses Cray CX1000 System for Offshore Simulations

http://www.cray.com/Assets/Videos/HortonWison/video.html


I watched the Cray video, which confuses me slightly more. The company interviewed appears to be using the farm to compute data simulations of currents and other environmental factors that would affect oil rigs.

 

I am guessing these calculations would be very fast if they could be offloaded to the GPU, because they presumably need to be linear calculations in order to process properly. So going GPU on this would make sense today, as long as you have the technical knowledge or software to tap into the GPU's power for the computations.

 

What I was referring to earlier was in relation to image rendering on the GPU. The GPU can provide some really nice results, but a CPU with a biased engine can still outperform a GPU unbiased engine on both speed and price.

 

At least in all of the reading I have done, I have not seen anything that suggests the opposite. At least not yet. There are a lot of people pouring their time and effort into development, so I am sure prices will drop while core counts increase, making GPU image rendering more cost- and time-effective in the near future.

 

Here is my simple understanding of the difference between CPU and GPU calculations...

 

GPU rendering excels at simple, linear instructions that it can spread across hundreds or thousands of cores. A GPU can run those calculations very quickly compared to a CPU.

 

CPU rendering excels at complex calculations that can take advantage of time-saving algorithms that approximate what is happening. GPUs cannot do this, or at least cannot do it quickly compared to a CPU.
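A toy illustration of that contrast (not a real renderer; the functions and numbers are invented for the example):

```python
# GPU-friendly work: each pixel is an independent calculation - a pure
# map that could run on thousands of cores at once.
def shade(px: int) -> float:
    return px * 0.5 + 0.1            # stand-in for a per-sample shader

frame = [shade(p) for p in range(1000)]

# CPU-friendly work: each step may reuse earlier results (think
# irradiance caching), so it is data-dependent and much harder to
# spread across cores.
cache = {}
def shade_cached(px: int) -> float:
    key = px // 4                    # approximate from a nearby cached sample
    if key not in cache:
        cache[key] = shade(px)
    return cache[key]

frame_cached = [shade_cached(p) for p in range(1000)]
```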

 

So I don't think it is as simple as just throwing them both into the mix, and expecting excellent results.

 

Also, I don't know what your budget is, but I would guess the CPU rack solution shown in that video was in the rough range of $25-45k.

Edited by Crazy Homeless Guy


 

I think you are trying to re-invent the wheel. It's difficult to give advice without knowing what kind of production work you're doing. If it's 3D animation work and you don't know what renderer you're going to be using, that leads me to believe you are either a novice or a well-seasoned pro with the bonus of a programming background.

If you're the latter, I don't think people here can help you much. If you're the former, you really are biting off more than you can chew.


If you're tired of thinking about it and just want something that will work out of the box, I'd give BOXX a chance: http://www.boxxtech.com/products/RenderBOXX/rendering_Series.asp

 

Indeed a good solution, and I am still weighing that option; I actually went there before discovering this great forum and topic. I do like their solutions, but I want to research a bit further before committing to that interest.

 


 

Very good point on GPU vs CPU usage/effectiveness. I went ahead and contacted Cray, and that 7U rack-mounted setup of 18 nodes (each node containing dual six-core Xeon CPUs - 12 cores combined - and 24GB of memory), totalling 216 cores and 432GB of memory, runs in a ballpark of $150,000 to $200,000, according to the sales agent.

 

I really think I am going to end up renting render time from an off-site service for now and/or building a few Xeon-based workstations, with an emphasis on CPU cores and memory, as you suggested.

 

How complex are your renders, and how long do they take with the solutions you have in place now, if you don't mind me asking?

 


 

Indeed, I do feel like I was eyeing too big a cake for my appetite; re-inventing the wheel would not be a wise use of my time or funds, so I need to be more cautious with both regarding my goal of obtaining rendering power.

 

Regarding my experience, I have been involved with the industry for about 11 years now, but render solutions were never my focus or concern until recently, when I sought a more independent route.

 

I wish to ask you the same as I did Crazy Homeless Guy... "How complex are your renders, and how long do they take with the solutions you have in place now, if you don't mind me asking?"

Edited by First CG Architect

How complex are your renders, and how long do they take with the solutions you have in place now, if you don't mind me asking?

 

Not a problem... As for complexity: the majority of our models originate in Revit and are then moved to Max for visualization and rendering. The projects cover a broad range - airports, healthcare, judicial, mixed use, etc. - so the models are usually large and heavy. I don't know the poly counts, but Max is usually eating somewhere between 6GB and 11.5GB of RAM.

 

For rendering we use V-Ray, with dedicated quad-core Xeon machines for the farm.

 

Renderings are typically 4,000 to 5,000 pixels wide, and take 50-90 minutes when distributed across 3 or 4 machines. It all depends on complexity, but those are average numbers for our typical projects.

 

When we render an animation, we expand the farm to all of the CPUs we can after everyone leaves for the evening. We topped the farm out a couple of months ago at 960 CPU cores. They were quad-core Xeons with hyperthreading, so think of it as 480 physical CPU cores and 480 hyperthreaded cores, which translates to 120 physical computers.

 

I should also mention that GPU rendering isn't really an option when your scenes tend to be the size we currently face. Your entire model, plus textures, and I believe room for the rendered image, needs to fit in the physical RAM of the graphics card.
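A back-of-envelope version of that VRAM check; every size below is an assumption for illustration, not a measurement from our scenes:

```python
# Will the scene fit on the graphics card? (All sizes are assumptions.)
geometry_gb = 4.0      # meshes for a heavy Revit-to-Max scene
textures_gb = 1.5
framebuffer_gb = 0.3   # a ~5,000px-wide image plus element passes
needed_gb = geometry_gb + textures_gb + framebuffer_gb

card_vram_gb = 6.0     # e.g. a Tesla C2070 of that era
verdict = "fits" if needed_gb <= card_vram_gb else "does not fit"
print(f"Need {needed_gb:.1f} GB, card has {card_vram_gb:.1f} GB: {verdict}")
# A scene that makes Max eat 11.5 GB of system RAM simply won't fit.
```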


... and I believe room for the rendered image, needs to fit in the physical RAM of the graphics card.

 

There will be a time when we all laugh about rendering from RAM+CPU :)) but who wants to stand in a line waiting? BTW, what are the oldest PCs you harvest to do work? I think there is a lower cutoff where older machines become counter-productive, e.g. you wait too long for a frame to finish.

