V-Ray RT GPU rendering



Do any of you guys use V-Ray RT? I have just installed two NVIDIA Kepler Quadro K5000 GPU cards, with a Quadro 4000 as the primary display card, and I am having issues with RT. I render large luxury homes, 20,000 sq ft under roof, and it seems that when I have a large scene it crashes V-Ray or does not render out right. I have tried large interior shots with the same problem. I have been in communication with V-Ray, and it seems I am being taken around the block on the RT problem. I understand that RT on the new GPU cards is new technology and the bugs need to be worked out, but the Kepler K5000s cost a lot of money, and if RT is not going to work right I need to return the cards. Any feedback?


Regarding RT itself, you can check the other thread that's going; a few of us are still waiting for a driver fix to run it on our AMD 79xx cards. Your errors seem different from what I'm running into, so maybe we can solve this without Chaos Group ^_^.

 

Are you running the CUDA or the OpenCL engine in RT? Whichever it is, try running the other and compare the results.

 

I'd suggest simplifying the materials, lights, and object types in your scene (convert objects to Editable Polys). Make sure GI is enabled in your V-Ray production renderer (it only gives me a warning in RT when I don't, but it's nice to clear it out of the way), that all your materials are V-Ray native, and ditto for the cameras and lights. Disable exposure control and effects. There's a list of V-Ray features that RT is known not to support, so check those out too. I get the impression that production V-Ray is much more forgiving and stable with unsupported assets and functions than RT, so incompatibilities that aren't apparent in your scene will only crash the latter.

 

Hope that helps.

 

Riley


I use all V-Ray materials, and only from the supported list for RT; I have read what is not supported by V-Ray RT. The Kepler K5000 cards are the biggest GPU cards on the market to date as far as memory; NVIDIA just released them in the States three weeks ago. This is what they were using on the machines at the AU show in Vegas last week. They can't tell me why I am having problems. V-Ray is going to send me the nightly builds to see if that fixes the problem. I have the right drivers installed in the machine, and we reset the BIOS on the computer per NVIDIA. NVIDIA is saying I might only be able to run the two Kepler GPU cards, and to take the Quadro 4000 card out and see how it runs. My partner was at the AU show in Vegas last week, and he just got back and said the engineers at V-Ray are going to look at the issue this week and get back with us. We have been jacking with this for three weeks now. Boxx Computers has been helping out, trying to figure out what's causing this problem. I was just wondering if anyone else is having this problem.


Per the theoretical performance numbers provided by NVIDIA, the Quadro K5000 uses a fully enabled GK104 GPU clocked at around 700MHz core, paired with 4GB of VRAM operating at 5.4GHz effective, and NVIDIA lists the card's max power consumption at an incredibly low 122W. I have two of these cards, so would that mean 8GB of VRAM with a 1400MHz GPU core? What are your thoughts?
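For what it's worth, here is a quick sanity check on those spec-sheet numbers; a sketch only, and the 256-bit memory bus is my assumption based on the GK104, not something quoted above:

```python
# Back-of-envelope math on the quoted Quadro K5000 numbers.
# Assumption: the GK104 uses a 256-bit memory bus (not stated in the post).

mem_data_rate_ghz = 5.4          # effective GDDR5 data rate quoted above
bus_width_bits = 256             # assumed GK104 bus width
bandwidth_gb_s = mem_data_rate_ghz * bus_width_bits / 8
print(f"Theoretical memory bandwidth: {bandwidth_gb_s:.1f} GB/s")  # ~172.8 GB/s
```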


So my question is: what is the primary use for RT? Are you guys just using this for cars, jewelry, and smaller content, or for adjusting lights and textures?

We render large 20,000 sq ft homes with a lot of content, and RT is not working right. We were at the AU show, and they had a large building that rendered out OK. The V-Ray guys are trying to fix the issue; it's not our scenes, I had them check the files. The computer they had at the AU conference is the same computer I have at our design studio. I will post whatever solution V-Ray, NVIDIA, and Boxx come up with, if any.


I usually use RT for the initial lighting setup of my scenes, before things start to get heavy (furniture, cars, trees, etc.). I've seen a few videos where the Boxx team used a workstation with six or eight GTX 600-series gaming cards, and their scenes rendered flawlessly; however, one guy did mention that they had to reduce texture sizes in order to use RT, because of memory issues. By the way, a decent $300 gaming card has a higher core clock (MHz) than the Quadro 4000; I'm not sure exactly how that helps either rendering or the viewport...



 

Gaming cards are "crippled": either with a hardware "switch" or simply through drivers, certain functions and instruction sets are re-routed or removed completely. Quadro and GTX cards of the last few years share the same GPU design and are manufactured at the same time.

The measurable performance deficiencies with gaming cards are in double-precision calculations and viewport performance, especially in OpenGL mode.

 

Raw graphics performance and single-precision calculations, which are apparently what games require, are favored by gaming-card drivers, which contain specific settings for each and every popular game; the higher clock speeds apparently favor GPU renderers as well.
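To get a rough feel for the single- vs double-precision gap in general, here is a CPU-side NumPy timing sketch. It is illustrative only: on gaming GPUs the FP64 penalty is enforced much more aggressively by the driver/silicon, often down to 1/8 or 1/24 of the FP32 rate.

```python
# Illustrative fp32 vs fp64 timing on the CPU with NumPy.
# Gaming GPUs throttle fp64 far harder than this; workstation cards less so.
import time
import numpy as np

n = 2000
for dtype in (np.float32, np.float64):
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    t0 = time.perf_counter()
    a @ b                                  # single- or double-precision matmul
    print(f"{dtype.__name__}: {time.perf_counter() - t0:.3f} s")
```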

Thus "workstation/pro" cards like the Quadro/Tesla/FirePro lines do not have any advantage over high-end gaming cards in anything except coming in versions with more onboard VRAM, and even that is changing now, as gaming cards with 4 and even 6GB are available at increasingly lower prices.

 

Now, Texture sizes...

Textures are perhaps the largest asset renderers have to handle. In their current implementation, renderers like V-Ray RT automatically shrink any large texture to 512px. They do that to simplify things, and it happens outside of our control, regardless of the available VRAM and regardless of whether we use a gaming or workstation card.
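If the automatic shrink is what's costing you texture quality, one workaround (my own sketch, not an official V-Ray workflow) is to pre-resize the heaviest maps yourself, so at least the resampling quality is under your control:

```python
# Sketch: batch-downsize textures ahead of time so the resampling quality
# is chosen by you rather than by RT's automatic 512px shrink.
# Requires Pillow (pip install Pillow); folder names here are placeholders.
from pathlib import Path
from PIL import Image

MAX_SIZE = 512                       # match RT's current internal limit

src = Path("textures")               # hypothetical source folder
dst = Path("textures_rt")            # hypothetical output folder
dst.mkdir(exist_ok=True)

for tex in src.glob("*.jpg"):
    img = Image.open(tex)
    img.thumbnail((MAX_SIZE, MAX_SIZE), Image.LANCZOS)   # high-quality resample
    img.save(dst / tex.name, quality=90)
```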

Rumor has it that this will change in various ways, from letting the resizing adapt to the available on-board VRAM, to letting the cards access main system memory, stretching the available buffer well past their on-board capacity.

 

2-3 years? Who knows...

 

Quoting the K5000 question above: "I have two of these cards, so would that mean 8GB of VRAM with a 1400MHz GPU core?"

 

Unfortunately, the VRAM is not shared between the cards.

Each and every card has to fit the scene + assets individually.

Each compatible card receives the model's geometry, lighting setup, and (resized) texture/shader assets, and starts "blasting away" independently. The renderer assembles the final image from the calculations performed by the individual cards.
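In other words, a rough model of how multi-GPU RT capacity and throughput add up, per the behavior described above (the core count is the published K5000 spec):

```python
# Rough model of multi-GPU RT: memory is the MINIMUM any one card can hold
# (the scene is duplicated to each card), while compute roughly sums.

cards = [
    {"name": "Quadro K5000", "vram_gb": 4, "cuda_cores": 1536},
    {"name": "Quadro K5000", "vram_gb": 4, "cuda_cores": 1536},
]

usable_vram = min(c["vram_gb"] for c in cards)      # scene must fit on EACH card
total_cores = sum(c["cuda_cores"] for c in cards)   # sampling work is split

print(f"Scene budget: {usable_vram} GB (not {sum(c['vram_gb'] for c in cards)} GB)")
print(f"Aggregate CUDA cores: {total_cores}")
```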

 

The K5000 is an "unlocked", slower-clocked GTX 680 with ECC RAM support.

V-Ray RT doesn't seem to care about that much. A GTX 680 4GB will most likely be faster in all aspects of GPU rendering (and did I mention how much cheaper it is?).

