having problem with render farm


Dave Buckley

i have set up a render farm, however i think i've missed a fundamental step.

 

the setup is as follows, i have 2 workstations both with a license of max on (A and B), i also have a server acting as a slave.

 

when working from A, all my textures and scene files are saved on my desktop.

 

i start the manager and monitor on computer A, i then run the server app on the slave. the slave appears as an idle server in my monitor.

 

i then send the scene to do a net render and it appears in the monitor queue.

 

at this point i assume the slave should jump to action. it doesn't.

 

if i then assign the slave to the job, it returns an error about starting plugins. i haven't got any third party plugins though. they are all default installs.

 

if i then stop the server app running on the slave and start the server on workstation A, then the render works as i imagined.

 

i then ran a test using workstation B. i started the server app on workstation B and it once again appeared as an idle slave in the monitor window on workstation A. so i know that all of the machines involved can see each other on the network.

 

however workstation B does not start to render the scene submitted by workstation A. again i get the same error message about 'starting plugins'

 

i don't know what i've done.

 

i was hoping to be able to create scene files using A and B and then send them to the slave to free up the workstations. when A and B aren't being used, they too can be used as slaves.

 

i don't have access to a dedicated machine to run the manager/monitor, therefore i have chosen A as the master workstation to run the manager and monitor.

 

both A and B are creating/saving scenes/maps locally on their local desktop.

 

i have a feeling this is the problem but don't know how to get round it. any suggestions?

 

do i need to work from a shared network drive all the time and configure all of the machines involved to have the same user paths with full UNC naming?

 

or can i still work locally on each workstation and send them to the slave as and when? if so, then how?

 

this is kind of urgent. i need to fix it by 8am and it's now 9pm BST :( confused please help

 

sorry for the long post but the attachment may help explain my setup


A couple of quick things I noticed:

1) Don't leave your maps and materials on your desktop.

 

2) I have 2 computers running Backburner, and before I submit a job to the render queue I click on file -> asset manager and convert everything to UNC pathing.

 

3) I have all of my materials on a common network share that has the same drive letter no matter what machine I'm on.

 

4) I had to uninstall BB on my existing computer and reinstall it - I had upgraded Max versions on that and it didn't like to play with a clean BB install on my new computer
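Point 3 above (one share, mapped to the same drive letter everywhere) is a one-off setup step per machine. A sketch, assuming invented server and share names:

```shell
:: Run once on each machine so N: means the same thing everywhere.
:: "fileserver" and "maps" are placeholder names - substitute your own share.
net use N: \\fileserver\maps /persistent:yes
```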

 

hope this helps - not in front of my machines at the present time


so i guess having two machines with local scenes on each wouldn't work, because then each time i submit a render job i'd have to configure the user paths of each slave to match the user paths of the submitting workstation.

 

however if i have everything on a network share for example N:

 

then my texture images would be in N:\textures, and so then . . . what do i do here?


As Joel says, always uninstall Backburner and reinstall Backburner when upgrading Max. If the Backburner install sees a version of Backburner already on the computer, it will skip the install, even if it is an older version that is already installed.

 

As for the plug-ins... Assuming you don't have a network machine to store the plugin directory on, you might try repathing your slave machine to read from the plugin directory on your workstation.

 

Open the plugin.ini file, which is either in your 3dsmax root directory, or Documents and Settings depending on how you have your setup configured. Then change the lines of text so that they point to your workstation.

 

Below is a snip from my plugin.ini, hopefully that gives you an idea of what I mean....

 

 

[Directories]

 

standard plugins=\\group\files\SF Render Farm\Visualization Lab\Configuration\3dsMax2009\Plugins

 

Yours might read something like...

 

standard plugins="\\davescomputer\program files\autodesk\3dsmax\plugins\"

 

 


Edited by Crazy Homeless Guy

if its a case of changing user paths, then what happens if workstation A is using project folder X and workstation B is using project folder Y, both project folders are on the network location.

 

both jobs get submitted, but the paths on the server can surely only be configured for one of the project folders


ok yeh so let's say i don't have a network share. what happens if i have two jobs in the queue. one from workstation A, one from B, both have local UNC paths associated with the jobs. if i repath the server to workstation A, then it's still going to fail on job B as the paths will then no longer be valid. is this right? how do i make sure the server can co-operate with both machines


"\\davescomputer\program files\autodesk\3dsmax\plugins\"

 

"Davescomputer" is computer A. The above is how the plugin.ini file should look on computer B. Both computers will read from the same plugin directory, which will be hosted on your workstation.

 

This requires your drive on computer A to be shared so that computer B can access the file on it.
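For reference, one way to create that share on computer A. The share name and install path below are examples only, not necessarily your setup:

```shell
:: Run on computer A to publish its Max plugin folder on the network.
:: Computer B could then point its plugin.ini at \\davescomputer\maxplugins
net share maxplugins="C:\Program Files\Autodesk\3dsmax\plugins"
```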


ok i get that bit, but let's say i have two workstations both with seats of max, both sending jobs to one slave.

 

the jobs being sent by the two workstations are local to those workstations.

 

surely i can't configure the .ini file on the server machine to take this into account, it's one or the other surely.

 

which i guess is why i need the network share. so then if everything is looking at the same place, then i will never have a problem even if i want to start using workstations as slaves when they are not in use.

 

i guess the setup i'm talking about would mean configuring the .ini file on the server machine for each job, which defeats the object.

 

or am i still missing something

 

workstation A sending jobs that are local to A(manager and monitor)

workstation B sending jobs that are local to B

server C doing jobs sent by both


 

Maybe I am missing something, or missed part of the problem? ...why does changing the ini on C defeat the object?

 

What I am basically describing is that you make Workstation A double as your network asset server. This is where all textures and plugins will reside. Workstation A, Workstation B, and Server C would all pull the textures and plugins from the same place. This would guarantee that they all have the same information for the rendering.


ah right ok.

 

what i was saying that A submits a job to C that contains textures and the scene local to A

 

B submits a job to C with textures and scene local to B

 

the job from both B and A are in the queue. C can only be configured to look at either A or B, so my current setup isn't going to work, unless i manually change the paths, on C to look at A when job A is top of my queue, then when job A finishes, i will need to reconfigure paths on C to match that of B in order to complete that job.

 

(for the above example, A is taking textures and plugins from A's desktop, therefore all paths for jobs submitted by A are local to A; B is taking textures and plugins from B's desktop, therefore all paths for jobs submitted by B are local to B; both are submitting to C) - that is how it is at the minute - i am yet to configure any paths on either A, B or C as i wasn't aware i needed to :)

 

we'll get there eventually ;)


If all 3 machines are on the same network, then they should be able to see each other, which means they each have a unique network name.

 

so, stop thinking in terms of drive letters. :)

\\DavesBigHonkingComputer\textures

is the same for all three computers.

 

You should be able to do the following:

Work with your scene locally, with local drive letters

You can save it, modify it, etc.

save it just before you submit it to the render queue

then click file -> asset tracking (i forget - it's not in front of me) and convert everything to a UNC path

then net render and submit

all of your materials and everything should now have NO drive letters, rather the machine name in the path
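As a rough illustration of what that conversion step does (this is not Max's actual code, and the machine and share names below are invented), a drive-letter path is rewritten against a known drive-to-share mapping:

```python
# Toy sketch of the "convert to UNC" idea.
# The share table maps a local drive letter to the UNC root that
# other machines on the network would use to reach the same files.
def to_unc(path, shares):
    drive = path[:2].upper()   # e.g. "C:"
    root = shares.get(drive)
    if root is None:
        return path            # no known share; leave the path as-is
    return root + path[2:]     # swap the drive letter for the UNC root

shares = {"C:": r"\\davescomputer\c"}
print(to_unc(r"C:\maps\brick.jpg", shares))
# -> \\davescomputer\c\maps\brick.jpg
```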


Right, ..so why can't B pull textures and plugins from A as well, while you are working on the project? This way you would only have to maintain 1 master library.

 

Then, if you sent a job to C from B, C would also pull the textures from A. Or, if you opened the B job on A, everything would still be perfect.

Edited by Crazy Homeless Guy

ok guys, i now understand that this is a method i could use.

 

what i need to know is whether the way it is at the minute is the reason it fails as described in the first post?

 

A and B are working on different projects, i only want one server machine doing the jobs.

 

what you are saying is that A and B need to be getting assets from the same location in order for it to work.


Does the job that is causing the problem render fine on both A and B? ...meaning, both of those machines have all of the files?

 

...there is a chance that this file had a plugin associated with it at one time, and there are still remnants of that plugin in the file. Which would mean tracking down those remnants and getting rid of them. They can leave traces in several places.

 

Edit: Even if this does not solve the problem, it is a better way to be set up in general.

Edited by Crazy Homeless Guy

Travis is probably having a beer or three by now (530p) - I'm waiting on my morning tea break (1030am)... hehehe

 

Hmmmm I don't use any plug-ins, so I can't advise you there.

 

I'll apply some basic troubleshooting to this and we'll see what happens.

 

01) Try to render the scene on the local machine (NOT a net render - untick net render)

02) if that works, then we know the scene 'works'

03) if that doesn't work, try a very basic scene. Like, make a plane and add some primitives and use scanline or mental ray (not vray, as it's a 3rd party plug-in)

04) does that net render?

05) start adding some textures, render locally and then net render (but click file -> asset tracking -> convert to UNC) - does it render both locally and on the network?

 

hmmm that's all I can think of now.

 

I had a problem with textures, but when I used Convert to UNC, everything was resolved. It doesn't matter where the textures are, it uses the machine name to get to them.


Dave,

 

Not sure if you've already said (I skim read the posts as there were quite a few...) but what format are you rendering out? I assume you are trying to render individual frames, which you will put together later in After Effects or something similar right? If you're trying to render out as a movie format, such as AVI, or MOV it won't be able to use more than 1 machine...

 

I've just been having problems with our mini-farm (about 6 machines) and encountered plenty of problems... As already said, make sure nothing is looking at a mapped drive letter, otherwise it won't work. Make sure you have admin rights on each of the machines. And make sure you're rendering out as images to put together later... Make sure all versions of Backburner are the same on each machine. Those are most of the problems we've had.

 

Let us know how you get on...

 

Carl


i don't understand what is going wrong.

 

here is what i have done so far exactly.

 

all machines involved are on the same network

 

installed max on both workstations and the single slave.

 

opened scene file on one of the workstations, this scene file is local to the workstation as are all of the textures (they are all on the desktop)

 

i run the server app on the slave

 

i run the manager and monitor on one of the workstations.

 

i connect the monitor to the manager on the workstation

 

the monitor picks up the slave machine name running the server app

 

i then do a net render from the workstation.

 

it sits in the queue but the slave machine that is sitting idle in the monitor doesn't jump to the job.

 

this is where i am up to.

 

i haven't configured any paths/ini files yet on any machine, everything is default installs

 

i don't know which things with regards to paths/ini files i need to configure on which machines


I forgot another thing that caused errors, but it doesn't sound like you've got errors yet, as the slave hasn't even started... Anyway, you need to open up MAX on each of the machines and get rid of the ViewCube and the Navigation Wheel, as they cause problems when using Backburner.

 

Have you tried going to your render job in the Monitor, select the computer that doesn't seem to be responding, then click on the 'Remove Server icon' at the top of the monitor window. Then, click the button next to it to re-assign it and that sometimes helps...


i have had an error, because the slave didn't jump to the job automatically, i right clicked on it and said assign to job.

 

It then gave me a failed error saying it could not start plugins?

 

So i did a bit of trouble shooting.

 

I closed down the server on the slave.

 

i then started the server app on the machine sending the job, basically sending it to itself via net render. it jumped straight to the job and completed the task???

 

i then carried on trouble shooting.

 

i started the server app on a different workstation that has max installed. and resent the job, same error as with the slave before.

 

so it must be something to do with seeing the workstation across the network, as the local-net render (if that makes sense) worked.

 

but i need to know fully what i need to configure


Is that the exact error? "Could not start plugins?" ...or was it something slightly different?

 

Are you able to launch 3dsmax on the server machine? ...can you render standalone on this machine? ...even if you don't have a license for it, you should get a 30 day trial the first time you launch. This won't affect your network rendering.

 

Also, delete the 3dsmax.ini file. When Max launches the next time, this file will be recreated. This often fixes a lot of problems with Max.
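If you'd rather not delete it outright, renaming works just as well, since Max only looks for the exact filename. The path below is only an example; the actual location depends on your Max version and Windows setup:

```shell
:: Rename instead of delete, so the old settings can be restored if needed.
:: Adjust the path to wherever your install keeps 3dsmax.ini.
ren "C:\Documents and Settings\%USERNAME%\Local Settings\Application Data\Autodesk\3dsmax\3dsmax.ini" 3dsmax.ini.bak
```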


 

...opened scene file on one of the workstations, this scene file is local to the workstation as are all of the textures (they are all on the desktop)....

 

 

 

Hi Dave,

I think this is the culprit. I haven't done netrendering in a while (like, 3dsmax7.5 era), but here's what i was doing last time:

 

1. if your textures are located locally, they have to be in a shared folder, and from my humble experience you can't share the desktop. Textures are like x-refs (if you're like me, started with autocad), so you have to take the point of view of the slave machine: can it see the textures of the job it's given? If the job points to textures located on the desktop, it won't find them (unless you copied your textures onto the slave's desktop, which is redundant work). I'm not a script guy myself, so i never messed with the ini file, but what i do is that when i choose a texture i browse to it through the network, even though it's located locally, i.e. the path starts \\davescomputer\... That way it kinda assures me that the path can be found on the network, and when you send the job to the slave it points back to your local textures folder.

 

2. If your local max machine has plugins, i believe these should be installed on the slave as well. This one i'm not too sure about, though.
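The first point can be sanity-checked quickly: any asset path that still starts with a drive letter will only resolve on the machine that submitted it. A throwaway sketch (the paths are invented):

```python
# Quick sanity check: which job asset paths could a slave resolve?
# Drive-letter paths only exist locally; UNC paths name the machine,
# so any box on the LAN can reach them (given share permissions).
def visible_over_network(path):
    return path.startswith("\\\\")  # UNC paths begin with \\machine\share

assets = [r"C:\Documents and Settings\dave\Desktop\brick.jpg",
          r"\\davescomputer\maps\brick.jpg"]
for a in assets:
    status = "reachable" if visible_over_network(a) else "local only"
    print(a, "->", status)
```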

 

But one thing that bugged me all the time, and that i never really figured out, is why some pc's on the LAN start to disappear and reappear sporadically. Our LAN guy blamed it on XP, maybe it was a hub issue, but it's irritating cos slaves disappear every now and then.

 

Hope your job turns out well. Update us on how you solved it too.

