3ds Max GPU impact on rendering - GPU vs CPU



Hi guys, I've been using 3ds Max with V-Ray for a few years now, but only on a standard PC. I'm in the process of buying a new workstation; I have all the components I wish to buy, but have come unstuck on the GPU.

 

I am unsure how the GPU affects render times. I'm currently using V-Ray with the CPU only on long render runs, up to 48 hours rendering vehicle/character animations. Can I speed up these renders with the GPU, should I stick to pure CPU power, or can I combine the two?

 

If I can use the CPU & GPU in unison, would investing in a Quadro card such as the K4000 yield any improvement over, say, a GTX 780?

 

I'm essentially asking two questions: 1. Is it beneficial to use the GPU & CPU together when rendering, or simply the CPU? If so, how can this be done?

 

2. If using the GPU to render long animation scenes, does a Quadro card have a better impact than a GTX?

 

Thanks for any answers! :)

Link to comment
Share on other sites

1. The GPU is not used for any aspect of "normal" renders with V-Ray Advanced, Mental Ray or any other traditional renderer. It never has been. It will get there, but it isn't there yet.

It will make virtually no difference whether you render on a workstation with "on-board" graphics/IGP, a GTX 780, a GT 610, a Radeon 7750 or any Quadro. The fact that machines labeled "workstations" and marketed to people "rendering stuff" come with options for "workstation" GPUs, mainly Quadros & FirePros, has nothing to do with rendering itself.

 

2. V-Ray RT GPU is a different rendering method, not 100% compatible with all the features of V-Ray Advanced (yet). It is an "unbiased" method that uses brute force to literally calculate the whole GI solution ray by ray and bounce by bounce. These very small "problems" are a waste for the long, complicated compute threads of modern CPUs: the CPU is "done" with each one very fast, but then it has to wait for the next problem in the queue to come up. Calculating hundreds of thousands or millions of bounces with 8, 12 or 24 threads (depending on what your CPU(s) have) is tedious and takes a lot of time.

 

 

Rendering engines utilize certain "shortcuts", in a nutshell grouping neighboring pixels and interpolating (irradiance mapping is one such technique), that allow for faster rendering times. These techniques characterize a rendering engine as "biased": it doesn't independently calculate each and every pixel of the final frame, but "cheats" by interpolating results.
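To make the biased/unbiased distinction concrete, here is a toy sketch in plain Python (nothing to do with V-Ray's actual internals; the function and the numbers are made up): brute force evaluates an expensive lighting function at every pixel, while an irradiance-map-style shortcut evaluates it only at sparse sample points and interpolates the rest, trading a little accuracy for far fewer evaluations.

```python
import math

calls = {"n": 0}

def irradiance(x):
    """Stand-in for an expensive per-pixel GI evaluation."""
    calls["n"] += 1
    return 0.5 + 0.5 * math.sin(x * 0.1)  # smooth "lighting" across the image

WIDTH = 256
STEP = 8  # the biased pass evaluates only every 8th pixel

# "Unbiased" brute force: evaluate every pixel independently.
calls["n"] = 0
brute = [irradiance(x) for x in range(WIDTH)]
brute_calls = calls["n"]

# "Biased" shortcut: sparse samples + linear interpolation between them.
calls["n"] = 0
samples = {x: irradiance(x) for x in range(0, WIDTH, STEP)}
samples[WIDTH - 1] = irradiance(WIDTH - 1)  # anchor the right edge
biased = []
for x in range(WIDTH):
    x0 = (x // STEP) * STEP
    x1 = min(x0 + STEP, WIDTH - 1)
    t = (x - x0) / (x1 - x0) if x1 > x0 else 0.0
    biased.append(samples[x0] * (1 - t) + samples[x1] * t)
biased_calls = calls["n"]

print(f"brute force:  {brute_calls} evaluations")   # 256
print(f"interpolated: {biased_calls} evaluations")  # 33
print(f"max error: {max(abs(a - b) for a, b in zip(brute, biased)):.4f}")
```

On a smooth signal like this the interpolated result stays very close to the brute-force one at an eighth of the cost; real scenes have sharp lighting detail, which is exactly where biased shortcuts can smear or miss things.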

 

 

The massive parallelism built into the hundreds or thousands of simple compute units (aka CUDA cores, shaders, etc.) in GPUs is very efficient at calculating exactly these small problems. Instead of calculating bounces in a handful of CPU threads, you get to calculate one bounce per shader (roughly speaking) on each clock cycle, so if you are throwing thousands of shaders at the task you can achieve decent rendering speeds: so much faster than the CPU in the same task that there is no merit in including the CPU in this "loop", as it will just burn electricity. Also, for most intensive GPU tasks you need at least one CPU thread "open" to feed data back and forth to the GPU efficiently, so occupying the CPU 100% with something else might even be counter-productive.
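A loose analogy of this in NumPy (this runs on the CPU, so it only illustrates the data-parallel idea, not actual GPU execution): a Python for-loop plays the role of a few serial threads grinding through one tiny problem at a time, while a single vectorized call hands the whole batch of independent "bounce" evaluations to many lanes at once, which is the same reason thousands of GPU shaders chew through them so quickly.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
# A million independent "bounce" directions: random unit vectors in 3D.
d = rng.normal(size=(1_000_000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
normal = np.array([0.0, 0.0, 1.0])  # surface normal to shade against

# Serial style: one cosine term at a time (only 10k, it is slow).
t0 = time.perf_counter()
serial = np.array([float(v @ normal) for v in d[:10_000]])
t_serial = time.perf_counter() - t0

# Data-parallel style: all million at once, like thousands of shader lanes.
t0 = time.perf_counter()
parallel = d @ normal
t_parallel = time.perf_counter() - t0

print(f"loop (10k items):      {t_serial * 1000:.1f} ms")
print(f"vectorized (1M items): {t_parallel * 1000:.1f} ms")
```

The per-item throughput gap you see here is the crude analogue of why a pile of simple shaders beats a few fast cores on this exact kind of workload.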

 

The best GPU for the job is usually the one with the better aggregate of shaders × core clock. Quadro or Tesla cards don't have features that give them an edge over "gaming" cards atm. Some claim that Quadro cards are better binned (higher-quality chips) and might last longer; otherwise the differences are purely in software (firmware/drivers) and, in some cases, ECC RAM, which might be useful for scientific compute applications but not so much for graphics (if anything, ECC is a tad slower).

 

So, from fastest to "less fast" it should be like:

 

GTX 780Ti > K6000 > GTX Titan > GTX 780 >> GTX 770/680 > K5000 > GTX 670 > GTX 760 >> GTX 660 >> K4000.

A K4000 is slow for GPU rendering, and a K2000 is nearly useless for it (too slow). You can use them, sure, but the compute power per $ is horrible. I am mentioning nVidia cards only because the current V-Ray RT GPU (2.x) is horribly optimized for AMD cards (despite the latter probably being much better at compute than nVidia).
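As a rough illustration of the shaders × clock rule of thumb, the cards above can be ranked from their CUDA-core counts and boost clocks. The figures below are ballpark numbers from memory, worth double-checking against the spec sheets; the ordering that falls out is close to, though not identical with, the list above, since memory bandwidth and architecture also play a role.

```python
# Approximate CUDA cores and boost clocks (MHz); ballpark figures,
# not official spec-sheet values.
cards = {
    "GTX 780 Ti":   (2880, 928),
    "Quadro K6000": (2880, 900),
    "GTX Titan":    (2688, 876),
    "GTX 780":      (2304, 900),
    "GTX 770":      (1536, 1085),
    "GTX 680":      (1536, 1058),
    "GTX 670":      (1344, 980),
    "GTX 760":      (1152, 1033),
    "Quadro K5000": (1536, 706),
    "GTX 660":      (960, 1033),
    "Quadro K4000": (768, 810),
}

def metric(spec):
    cores, clock = spec
    return cores * clock  # crude "compute throughput" proxy

ranking = sorted(cards, key=lambda c: metric(cards[c]), reverse=True)
for name in ranking:
    print(f"{name:>13}: {metric(cards[name]) / 1e6:.2f} M core-MHz")
```

Note how the metric puts a GTX 780 Ti at more than four times a K4000, which is the whole "compute power per $" point: the gaming card costs less and computes far more.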

 

The key words for CPU vs. GPU rendering are "biased/unbiased".

V-Ray Advanced (i.e. the normal V-Ray you are using now) is a biased engine. It pre-calculates/predicts and interpolates results in smart ways to save time. That's what irradiance map / light cache etc. are: passes that "cheat" your way out of the need to calculate everything ray by ray and bounce by bounce. Many of the features/options/fx of V-Ray are actually based on this "biased" method, and thus still unavailable to V-Ray RT GPU. Maybe in future versions they will iron everything out, and CPUs will be used less and less in the process.

Edited by dtolios

Dimitris,

 

You know a lot about this topic. Thank you.

 

I really need better VIEWPORT performance with 3ds Max and Mental Ray.

Can I pick one of these cards if I want to improve VIEWPORT performance with Mental Ray?

 

 

So, from fastest to "less fast" it should be like:

 

GTX 780Ti > K6000 > GTX Titan > GTX 780 >> GTX 770/680 > K5000 > GTX 670 > GTX 760 >> GTX 660 >> K4000.

A K4000 is slow for GPU rendering, and a K2000 is nearly useless for it (too slow). You can use them, sure, but the compute power per $ is horrible. I am mentioning nVidia cards only because the current V-Ray RT GPU (2.x) is horribly optimized for AMD cards (despite the latter probably being much better at compute than nVidia).

 

 

EDIT: I have a K4000 video card and a 750W power supply.

Edited by yourfather

Your assigned renderer doesn't affect your viewport performance much.

It is more about the API used (OpenGL/Direct3D, or Direct3D with Nitrous) and the shading mode (e.g. simple shaded, shaded with highlighted edges, wireframe, etc.). So the 3ds Max viewport is the 3ds Max viewport, regardless of Mental Ray or V-Ray.

That said, if you have a progressive render engine running in parallel as an ActiveShade window for real-time feedback, which is what iRay or V-Ray RT GPU can do, then the sequence I gave above is valid for that ActiveShade part.

 

For pure viewport performance, the version of 3ds Max used is also of great importance. 3ds Max 2012 is vastly worse with GTX cards than 2013, and 2014 is even faster than 2013 with GTX cards.

 

I don't think any GTX will give you a meaningful viewport performance increase over the K4000.

Any of them, especially the 770 or faster in the scale above, would be a good addition if you want to use iRay or any other real-time renderer.

A 750W PSU is plenty for any card on the list (plus your CPU and the K4000).


  • 4 months later...

Hello! I found this thread very helpful, thanks!!

I just wanted to know if the graphics card will influence baking PhysX solutions (with RayFire, for example).

Will a faster card solve a baked debris collision in less time?

 

Just like the PhysX solutions that I made for this scene.

 

Thanks in advance!!

Caroline

Debris_81-b.jpg

Edited by rosarein
add attachment

  • 4 weeks later...

Hello guys!

 

I'm absolutely new to the forum, so first of all forgive me for any beginner mistakes.

 

I kindly ask for your thoughts on what I went through and summarized below.

 

Recently I got interested in GPU rendering with V-Ray RT. It is not me but my wife, an interior designer, who uses 3ds Max and V-Ray; however, she is not at all aware of hardware configuration, so it is my role to deliver proper equipment at a reasonable price. As I'm not an expert, I went searching the web for answers. A few years ago my understanding was that the CPU, plus the necessary RAM size, was the key point. Recently I discovered that GPU support is becoming more popular. Unfortunately, going through tons of opinions, tests and comparisons, I did not find all the right answers, so I decided to put the question to the source of knowledge about V-Ray's potential and possibilities.

Now, at first I'm planning to buy a laptop that would occasionally be used for rendering while traveling. I'm looking at one of the i7-47xx processors and 8 to 16 GB of RAM. With CPU rendering I more or less know what to expect. The questions are:

- Having this i7, is there any sense in investing in a powerful graphics card?

- What would be the minimum nVidia model that would beat the CPU in rendering time several times over (let's say at least 5 times)?

- Are the 840M or GT 755 worth looking at from that perspective at all?

- Is V-Ray RT this quick with the correct hardware mostly for ActiveShade, or for production rendering as well?

 

I would have similar questions for my desktop workstation, wondering whether it is worth upgrading by spending a couple hundred dollars on a graphics card, or not at all. My current desktop CPU is an i5-2400 supported by 16 GB of DDR3 RAM and an nVidia 8500 GT, on an ASUS P8P67 LE motherboard.

I got a couple of more powerful graphics cards available for one day of testing. These were a Quadro FX 1800 and a Quadro 6000. I made a very simple test, as I'm not an expert at all: I took one of the scenes created by my wife (or downloaded from the web), switched the ActiveShade window on, and then ran a production render as well. Here are my observations:

- Both the 8500 GT and the FX 1800 did not even start ActiveShade, and no progress was observed for production render processing; meanwhile the CPU kept going, though quite slowly, in ActiveShade.

- The Quadro 5000 worked with ActiveShade, however still slower than the CPU.

I wonder if there is something I would need to change in the settings to get better results on the GPU, or did I just do something wrong? I expected a $2000 card to show something better than this, but maybe this is not the right one, or not enough, and much more must be spent for better results. One thing I noticed was a message when ActiveShade started on the GPU: "GI is disabled but V-Ray RT always uses GI". Indeed, comparing the ActiveShade results from CPU and GPU, the GPU one was more illuminated, while the CPU one showed only the individual lights' effects in the ActiveShade window. Still, with either CPU or GPU the ActiveShade update took quite some minutes for a simple open office scene.

 

As this topic is highly complicated for me, and all the tests, comments, articles and other material I found cover a number of different hardware combinations, I would appreciate it if you could answer my questions as specifically as possible.

 

I think I made my questions specific, with specific software and hardware, so I would appreciate straightforward answers rather than the generic statements I already went through on other forums.

 

Regards

Kamil


  • 1 month later...
I have a wonderful new answer for you! (I joined just so I could share this.) I bought a new card recently (a Quadro K4000) and remembered that "GPU" renderers required cards like this, which I had put to the back of my mind for years, as my card at the time was a measly Quadro FX 4600.

So I started to look into them again, and voilà, at last: in addition to the photorealistic iRay and V-Ray RT there is a new option for people who want to render fast as hell without worrying about total accuracy (or who want a cartoon look, CGI-animation style, etc.). It's called FurryBall and it works with all major 3D packages. It has two renderers: a normal mode where you can do whatever you want, with unlimited options and looks possible, and a raytrace mode for total accuracy. It's the first CGI thing I have gotten this excited about in a long time (well, that and my Quadro Kepler K4000 ;~). Here's the info page: http://furryball.aaa-studio.eu/

 

 

As you can see from my Animated work here:

I go for a more "illustrated" feel in my CGI and don't need physical accuracy; I need freedom and fast, fast, fast renders!! Scenes like these go from 3-12 minutes a frame (rendering on 12 CPU cores) to more like 12-20 SECONDS (or faster) rendering on CPU+GPU. I also wanted to mention that you can control what does what, but GPU renderers in general will make use of every GPU and CPU to get the job done!! The world begins anew today for smaller animation studios and freelance character animators like me!

 

 

I have found it helps to offload PhysX to the CPU to leave more room on the GPU for interactivity (in the Nvidia Control Panel: 3D Settings > Set PhysX Configuration; it defaults to Auto, and switching to CPU can help a GPU render).

 

 

ENJOY!


  • 2 months later...

Hello everyone, I find this thread very interesting,

but I have a question that I would love to find an answer to:

I am a university student (studying architecture) and I recently bought a computer (an HP H8-1403) for my rendering projects. This computer has an AMD Radeon HD 7570 graphics card. This card doesn't work with 3ds Max, so I use my Intel i7-3770 to render everything. Sometimes it takes more than 48 hours to render a project.

My question is: if I buy an EVGA GeForce GTX 750 Ti, will it help me render faster or not? If not, do you know any graphics card that would help me? (My computer has a 430W power supply.)

Thanks a lot (sorry for the mistakes, I am a French person).


I am not sure what you mean by engine, but I normally use iRay or Mental Ray for rendering.

 

As for GPU-accelerated rendering, I don't think I am using it, because I don't know how (I would love to know more). Also, 3ds Max doesn't support my graphics card (I tried to use the GPU to render but it didn't find it; my graphics card doesn't have CUDA cores).

Sorry, I don't know a lot about this stuff, so please be patient with me.


Engine = the framework around which an operation gets facilitated. Nitrous, for example, is a "viewport engine" within 3ds Max: it takes data describing geometry, textures and the physical properties of light, and creates an approximate 2D reconstruction of that on screen.

 

 

Mental Ray, iRay, V-Ray etc. are rendering engines. They take the same set of data from the model file + assets, and raytrace/render one or more 2D frames of it through your cameras etc.

 

 

iRay is an nVidia proprietary engine, and nVidia GPU accelerated by default. Thus, naturally, your AMD card is incompatible with it. iRay is not the same thing as 3ds Max, though; it is just a tool within 3ds Max.

 

 

Mental Ray, on the other hand, is a CPU-based engine. GPUs play no role in how fast your CPU renders through Mental Ray, V-Ray, or other x86-based rendering engines.

 

The whole idea of my question is to define what you are trying to do, and with which tool in 3ds Max, so that all of us together can figure out whether what you are trying to do does in fact require a different piece of hardware to be accelerated, and whether that would be a GPU.

Trying to accelerate a non-GPU-based operation/engine by upgrading your GPU is obviously not the way to go.

Edited by dtolios

If you are after iRay compatibility, you will need to add an nVidia GPU.

Which one depends on your budget and the version of 3DS Max / iray you are using.

 

 

I believe Maxwell architecture GPUs (e.g. GTX 750Ti / GTX 9xx, Quadro 2200, 4200, 5200) are supported with 3ds Max 2015 SP2 and newer. If you have an older version of 3ds Max, iRay won't work with the Maxwell architecture, so you will have to go for a 6xx or 7xx GTX card.

 

 

If you do have 2015 SP2, then I definitely suggest going with Maxwell cards.

The 750Ti if you are on a low budget, or the 970 if you can afford it.

 

 

Follow the instructions below to make sure your software is updated:

http://blog.mentalray.com/2014/10/10/mental-ray-iray-maxwell-support-in-3ds-max-2015-sp2/


This is one of the best explanations ever posted here ;) Very helpful for people starting their CGI adventure :)


I know almost nothing about 3ds Max, but it so happened that I was asked to choose a laptop for an architecture student, so the first thing I want to say is a genuine thank-you to Dimitris Tolios for his detailed explanation; it was very helpful.

From the above I have learned that the GPU determines viewport performance, and that rendering of the final frames can be done by the GPU or the CPU depending on the engine.

 

The budget and range of products allows purchasing:

i7 4710 HQ and GeForce 840m

or

i5 4200H and GTX 760m

 

Most likely I should choose the more powerful CPU, but only if there won't be any problems with viewport performance on the 840M?

P.S. The projects most likely will not be very demanding, and far from pro level.


The i5 4200H is an "abnormal" i5 by desktop standards, as it is a dual core with Hyper-Threading = 2C/4T.

The i7 4710HQ is a more conventional i7 by desktop standards: a quad core with HT = 4C/8T.

 

For general viewport performance the i5 + GTX combo will likely be faster, but it will seriously lag behind the i7 in rendering performance.
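The rendering gap can be sketched with a quick Amdahl-style estimate. The numbers here are hypothetical assumptions, not benchmarks: suppose the render is ~95% parallelizable and Hyper-Threading adds ~30% on top of the physical cores.

```python
def effective_threads(cores, smt_bonus=0.30):
    """Physical cores plus a modest assumed Hyper-Threading uplift."""
    return cores * (1 + smt_bonus)

def speedup(threads, parallel_fraction=0.95):
    """Amdahl's law: the serial part caps the gain from extra threads."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / threads)

i5 = speedup(effective_threads(2))  # i5-4200H: 2C/4T
i7 = speedup(effective_threads(4))  # i7-4710HQ: 4C/8T
print(f"i5 2C/4T speedup vs. 1 thread: {i5:.2f}x")
print(f"i7 4C/8T speedup vs. 1 thread: {i7:.2f}x")
print(f"i7 renders ~{i7 / i5:.1f}x faster (same clocks assumed)")
```

With those assumptions the i7 lands at roughly 1.8x the i5's rendering speed, which is where the "nearly double" figure for a quad over a dual core comes from.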

 

What is the budget? I like the Lenovo Y50 as a "decent" ArchViz laptop in the $1000 (USD) range:

$950 gets you an i7, GTX 860M 2GB, 8GB RAM, a 1080p 15" screen and a 500GB+8GB hybrid HDD/SSD.

$1100 takes you to 16GB RAM and a 4GB GTX 860M, with a 1TB+8GB hybrid HDD/SSD.


Hello All,

 

I'm the new guy on this forum; nevertheless, I found this thread one of the most valuable of all, which is why I would like to take the opportunity to ask you too. I'm going to buy a new computer. I like V-Ray RT in 3ds Max (my current version is 2012). I'm one of the lucky ones not limited by budget; my budget is around $3600. My choice is:

 

Motherboard: "ASUS RAMPAGE V EXTREME"

Processor: "Intel Core i7-5820K"

Cooler: "NOCTUA NH-U14S"

RAM: "Corsair 16GB KIT DDR4 2666MHz CL16 Vengeance LPX"

Graphics card (3x SLI): "GAINWARD GTX 690 4GB DDR5" / "some GTX 780 Ti"

SSD: "Intel 530 240GB SSD bulk"

Disk (2x mirror): "Seagate Barracuda 7200.14 1000GB with Advanced Format"

Case: "CM STORM Trooper"

PSU: "EVGA SuperNOVA 1300 G2"

 

My question is whether these components are a good choice in terms of value for price, given what today's market offers. Please focus on the graphics card; I'm not sure which has better performance.

 

I do architectural rendering and short videos (10-30 seconds at 24 frames per second). To be honest, I really like V-Ray RT on the GPU even if it has a few limitations.
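On the PSU side, a back-of-the-envelope power budget suggests the 1300W unit has headroom even with three high-end cards. The TDP figures below are approximate ballpark values (the three-card line assumes GTX 780 Ti class boards); verify against each part's spec sheet before buying.

```python
# Approximate peak draw in watts; ballpark figures, not exact spec values.
parts = {
    "i7-5820K":          140,
    "GTX 780 Ti #1":     250,
    "GTX 780 Ti #2":     250,
    "GTX 780 Ti #3":     250,
    "motherboard + RAM":  60,
    "SSD + 2x HDD":       25,
    "fans / misc":        25,
}

total = sum(parts.values())
psu = 1300
headroom = psu - total
print(f"Estimated peak draw: {total} W")
print(f"Headroom on a {psu} W unit: {headroom} W ({headroom / psu:.0%})")
```

Staying well under the PSU's rated output also keeps it in its most efficient load range during long renders.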

 

Many thanks in advance for any reply.

 

Jan Vaško


The Lenovo Y50 looks good in every aspect and I like the brand, but the announced budget is only $700, maybe $750. So I have to sacrifice something, but I'm not sure what is more comfortable in real use: slower rendering but a faster viewport, or vice versa.

 

There is no way the GTX 9xxM/8xxM or whatever will offer a performance difference over the GT 840M comparable to what a quad-core i7 offers over a dual-core i5 (i.e. nearly double the rendering speed).

 

 

The GT 840M is meh by desktop standards, but many users make it work with their laptop workflow; it's fine. Even the top-of-the-line rMBP has nothing more than a GeForce GT 750M.


Thank you again; the i7 4710HQ (2.5 GHz), 8192 MB RAM, GT 840M 2GB looks like the better option now.

So is it going to be fine with the 840M, or is it worth upgrading the 840M to a 750M/850M?

There is an alternative like this:

i7 4700HQ (2.4 GHz), 6144 MB RAM, NV GT 750M 4GB, but I'm afraid 6 GB of RAM is a bad idea.

 

In the worst case I can try to insist on a budget increase to get both the i7 and the 750M, if it is really worth it.


The GT750M should be in the region of 10% faster than a GT840M.

If the GT840M would be "unusably slow" in any scenario, so would a GT750M.

 

 

It isn't worth the fuss, if you are asking me.

 

 

We have to have realistic expectations of:

 

 

1) What kind of tool we can buy for that little money;

2) What kind of limitations mobile devices impose;

3) What kind of benefit we will see from getting the best hardware, when the software we are using is actually dated and cannot take advantage of it;

4) What kind of work architecture students produce (hint: unless you are trying to push the limits of your PC, it won't really matter).

Edited by dtolios

All the information I'm reading in these threads about CPU vs. GPU and GeForce vs. GTX vs. Quadro is really interesting, and only now, after days of reading through it all, am I starting to understand it. But I still have one question: if I only use V-Ray Advanced and render with the CPU, am I wasting my money if I get a GTX 770? Should I just get a cheaper GeForce, since I won't be rendering with the GPU? My CPU is an i7 and I have 6 GB of RAM. Oh, and is that 6 GB way too low? I read somewhere else on here that even 8 GB is too low these days.

 

Thanks a million for all the advice on here!


Hi, I've finished my master's in architecture and I need to buy a new laptop that I'll be using for work. Mainly I'll be working with these sorts of software: AutoCAD, Photoshop, 3ds Max, Rhino, SketchUp.

 

Could you help me decide which of these would be the better graphics card for my needs?

 

NVIDIA GTX 850 4GB or

NVIDIA QUADRO K1100M 2GB GDDR5?

