
Wanting to learn about CPU vs. GPU rendering



I'm just starting to learn about this and whether GPU rendering is something I may want to utilize in a home render farm. If anyone has good links to places where I can start learning about the subject, that would be helpful. I've tried doing a search here, but quickly became overwhelmed by all the comments and realized I may need to back up and get a more basic understanding before worrying about which shaders will or won't work with GPU. THANKS!


The field is evolving far too quickly to find a comprehensive and "unbiased" (no pun intended) article about it.

Would you trust all those bombastic ("1000 times faster!") articles sponsored by nVidia? :-)

 

There are a few fully GPU-accelerated engines (I'll note a few dominant characteristics next to each):

 

VrayRT - part of Vray; supports roughly 70% of VrayAdvanced's feature set. Easiest to try if Vray is currently your engine.

iRay - until recently part of 3dsMax; supports MentalRay shaders, easy to use, slowly developed, and will eventually become abandonware. The core is licensed to 'easy-to-use-but-limited' tools such as Keyshot, popular with artists and car enthusiasts.

Octane - the most developed GPU engine on the market; choice number 1 when going this route.

Redshift - Vray's GPU equivalent; offers robust tweakability and the fastest raytracing, at the expense of ease of use.

 

Then there are various GPU "co-accelerated" engines, but they don't offer the same kind of speed boost full GPU engines do: you need the whole scene allocated within the GPU's memory to get the speedup, and data transfer over PCI Express is still not fast enough for this to work well.
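
To get a feel for the gap, here's a rough back-of-envelope sketch in Python (the bandwidth figures are nominal assumed values, not benchmarks):

```python
# Rough sketch: why streaming scene data over PCIe hurts GPU rendering.
# Bandwidth numbers are nominal/assumed, not measurements.

scene_gb = 20.0            # assumed scene size that doesn't fit in VRAM
pcie3_x16_gbps = 12.0      # ~practical PCIe 3.0 x16 throughput, GB/s (theoretical ~16)
gddr5_gbps = 336.0         # GTX 980 Ti on-board memory bandwidth, GB/s (spec value)

print(f"Streaming the scene once over PCIe: {scene_gb / pcie3_x16_gbps:.1f} s")
print(f"Reading the same data from VRAM:    {scene_gb / gddr5_gbps:.2f} s")
# A path tracer touches scene data many times per frame, so this gap
# multiplies - which is why the whole scene ideally lives in VRAM.
```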

 

To get the most out of them, you need a few decently powerful GPUs, at which point they become slightly more price/performance effective than a similarly priced CPU option. The magnitude is usually up to two or three times faster (not 100-1000 times as in marketing studies...).

 

You would be comparing something like 4x 980 Ti vs 2x 2680v3 Xeons: 5000 euros for each type of workstation, but the former would be up to twice as fast when using a similar type of engine (for example VrayRT (GPU) vs VrayAdvanced (CPU), or let's say Octane (GPU) vs Corona (CPU)).
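
Put as trivial arithmetic (same illustrative figures as above, nothing measured):

```python
# Back-of-envelope price/performance from the figures above (illustrative only).
gpu_box = {"name": "4x GTX 980 Ti", "price_eur": 5000, "relative_speed": 2.0}
cpu_box = {"name": "2x Xeon E5-2680 v3", "price_eur": 5000, "relative_speed": 1.0}

for box in (gpu_box, cpu_box):
    perf_per_eur = box["relative_speed"] / box["price_eur"]
    print(f'{box["name"]}: {perf_per_eur * 1000:.2f} speed units per 1000 EUR')
# Same budget, up to ~2x the throughput on the GPU side - *if* the scene
# fits in VRAM and the engine supports the features you need.
```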

 

With GPU engines you're still mostly limited by on-board memory (the GTX 980 Ti has 6 GB, the GTX Titan X has 12 GB). Your whole scene has to fit within it in order to render (there are exceptions via "memory cycling", but that currently slows down the process). Next year (2016-2017), cards with 16-32 GB will come on the market, so the limitation is getting less severe each year. GPU engines are also generally more memory-efficient thanks to aggressive compression algorithms, so the same scene will have a smaller footprint.
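
If you want at least an order-of-magnitude guess at the footprint, something like this sketch works (every constant in it - bytes per triangle, compression ratio, texture counts - is an assumption, and real engines allocate BVHs, framebuffers, and shader state on top):

```python
# Very rough VRAM footprint estimate for a GPU renderer (all constants assumed).

def texture_mb(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed in-memory size of one texture, in MB."""
    return width * height * channels * bytes_per_channel / 1024**2

def geometry_mb(triangles, bytes_per_triangle=100):
    """Assumed ~100 bytes/triangle: vertices, normals, UVs, BVH overhead."""
    return triangles * bytes_per_triangle / 1024**2

textures = [(4096, 4096)] * 30 + [(2048, 2048)] * 100   # hypothetical scene
tex_total = sum(texture_mb(w, h) for w, h in textures)
geo_total = geometry_mb(15_000_000)                      # 15M triangles, assumed

compression = 0.5   # assumed: engine compresses textures to ~half size
total_gb = (tex_total * compression + geo_total) / 1024
print(f"Textures: {tex_total:.0f} MB, geometry: {geo_total:.0f} MB")
print(f"Estimated footprint: ~{total_gb:.1f} GB vs 6 GB on a GTX 980 Ti")
```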

 

The best way is still to simply try it and see if you like it. The benefits (mostly raw rendering speed) and drawbacks currently balance out: GPU engines are hard to develop, and both the hardware architecture and the software languages are limited, so the feature set lags behind their CPU counterparts and will mostly stay that way in the near future.

 

Previously, one of the biggest benefits was the 'interactive viewport', but nowadays all major CPU engines offer their own versions as well.


  • 6 months later...

Hi Juraj,

I know this thread is a bit old, but I still really hope you can respond to my question.

What do you really mean by "memory cycling"? Were you referring to ray bundle size and the rays-per-pixel ratio?

I've always wondered, and have been googling everywhere, whether there is some leeway or trick to overcome this memory limitation (GPU rendering wise).

The other thing is that even if, as you said, GPU engines are generally more efficient and take a smaller memory footprint, we are still left blind: we have no tool to help us estimate whether a scene, with its polys and its shader texture map file sizes, will fit in a given amount of VRAM on such a GPU card. There's no statistical ratio or any kind of tool for this, or maybe I'm the one in the dark :-P


Hi Thomas, thanks for pointing that out. Yes I have; that guide at least gives clues for a rough estimate regarding polygons. But looking at the render log each time we render with RT, we can see that the most dominant RAM occupant is consistently the textures, and for those we still have no believable clue.


There are always estimates (for polys and textures), but then things like the framebuffer and various algorithms (like displacement) will skew them, so it's pretty useless to try making such estimates.

I personally still don't use GPU engines; some of my recent scenes take 40+ GB if I want to render them at 360 VR-ready resolution (12k). That's a lot...
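
Just the framebuffer alone is telling at that resolution; a quick calculation (the render-element count and bit depth are my assumptions):

```python
# Quick check on framebuffer cost at 12k 360 VR resolution (assumptions noted).
width, height = 12288, 6144       # assumed 2:1 equirectangular at "12k"
channels, bytes_per = 4, 4        # RGBA, 32-bit float per channel
elements = 10                     # assumed number of render elements/AOVs

one_pass_gb = width * height * channels * bytes_per / 1024**3
print(f"One float32 RGBA buffer: {one_pass_gb:.2f} GB")
print(f"With {elements} render elements: {one_pass_gb * elements:.1f} GB")
# And that's before geometry, textures, and displacement - so 40+ GB
# for a heavy scene at this resolution is plausible.
```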

 

But I can't lie, I very much like watching the GPU progress with each generation.

I currently own various dual-Xeon setups (2x 2670v1, 2x 2680v2, 2x 2698v4), and you know what's interesting? If we discount the core count (which jumped to "a lot" in v3/v4), the performance upgrade from SandyBridge-E to Broadwell-E is laughable. Intel is out of juice; they're simply milking the market.

 

By "memory cycling" I meant the layman term for out-of-core rendering, when the renderer can swap in and out data that would otherwise not fit within memory at same time. It does come with performance hit, and will still have even when it will be hardware-supported in near future ( although nVidia has been promising this...since forever ?).

 

If I were doing product shots, cars, or what have you, I would build a single 7x GTX 1080 workstation. But since I still need to render large scenes, my ever-growing CPU farm has me covered. For now, and I'd guess for at least a few more years...


I'm really interested in the possibilities of GPU rendering too. I currently have a GTX 980 which I bought to use with Vray; however, I'm disappointed to report that with 2.5 it doesn't seem to be supported for hardware rendering, unless I'm mistaken. So I've yet to really see the performance benefit. With CPU rendering in RT, I find the viewport painfully slow; I'm still using an i5 2400 chip.

 

But what I don't want to do is spend, say, £3k on upgrading a load of render nodes, only to discover that building a single multi-GPU machine would have been a faster and easier option.


I'll be doing a review of Redshift 2. Imho they've made excellent improvements, and now I can finally see why it's getting popular :-). Have you guys seen the shaders? That's otherworldly stuff.

 

ChaosGroup should just give up on the RT engine. No one uses it.

 

Octane, meanwhile, is still niche; with all the money from Otoy, that's kind of strange.

