3ds Max GPU impact on rendering - GPU vs CPU



  • 3 weeks later...

Dimitris :) you are not correct at all! Both of the most widely used rendering engines now support the GPU: mental ray, via iray, which is available with AutoCAD and 3ds Max 2014, and VRay, via VRay RT, available for both 3ds Max and SketchUp. They all report incredible speed for a much lower price than standard CPU rendering, and the technology is only getting started; iray and VRay RT have been available since 2014, and the future lies right in front of these GPU technologies.

 

So it is a superb upgrade for every mid-level 3D animator, architect and so on.

http://area.autodesk.com/renderingr/iray

 

Dimitris, you are wrong! Since 2014 :)

" 1. The GPU is not used for any aspect of "normal" renders with VRay advanced, Mental Ray or any other traditional renderer. It never had. It will get there, but it is not right now.

It will make virtually no difference whether you render on a workstation with "on-board" graphics/IGP, a GTX 780, a GT 610, a Radeon 7750 or any Quadro. The fact that machines labeled as "workstations" and marketed to people "rendering stuff" come with options for "workstation" GPUs, mainly Quadros & FirePros, has nothing to do with "rendering" itself. "


  • 2 weeks later...

Sorry, but I do not stand corrected. I am cautious with my choice of words, even with my limited English skills.

 

Mental Ray =/= iray

VRay Adv =/= VRay RT

 

I am pretty aware of what GPU rendering is and I do use it regularly for testing purposes / material tweaking / lighting tweaking etc.

 

GPU rendering has amazing potential and it is used by many pros, but it is still limited, so those "many pros" are a very, very small percentage of the people in CG ArchViz. It will become more mainstream as more features are added.

 

For most people, GPU renderers work great for providing feedback on what the CPU render will look like, not as the exclusive production renderer for the final images. "It's just not there yet."


  • 2 weeks later...

Hello!

 

I have a question for you guys, since you know much more about the field of architectural visualisation. I am an architecture student and I have a Lenovo Y510P laptop. Yes, I am aware that laptops are not the optimal solution for rendering, but it is still much more powerful than my desktop computer. Which would be the faster and more effective way for me to render in 3ds Max: the CPU or the GPU? My hardware is an Nvidia GeForce GT 755M SLI and a 4th-gen Intel i7-4700MQ processor.

http://www.game-debate.com/hardware/index.php?gid=2017&graphics=GeForce%20GT%20755M%20SLI

http://www.game-debate.com/hardware/index.php?pid=1865&cpu=Core%20i7-4700MQ%204-Core%202.4GHz

 

Thank you very much for your help!

Link to comment
Share on other sites

  • 2 weeks later...

Hello!

 

How about an Alienware 13-inch laptop with a GTX 860M with 2 GB of GDDR5, 16 GB of DDR3 memory, a good SSD and this new feature called the "Graphics Amplifier"?

Put a serious graphics card in the Amplifier and do some serious rendering... Alienware said "460 watt internal power supply" - is that enough for a serious card or not?

Just an idea. What do you guys think about it?


It is Dell/Alienware's take on an external GPU, much like the Thunderbolt solutions that have been available for some time now.

 

http://www.anandtech.com/show/8653/alienware-graphics-amplifier

 

Yes, you can use it for GPGPU, but it will be device-specific, it will probably use a short, expensive cable, and it will have the size and weight of a full ITX system, acting as a big "docking station" for your laptop.

 

The chassis alone will be priced at $300 or so, and that includes the chassis, PSU & cable. The Alienware 13 and the external GPU will be another decent chunk on top.

 

I don't see why this is so unique. You could build an ITX PC with a more powerful CPU than the one in the Alienware 13, and a PSU that can support the same GPU, for a bit more than $300 before the GPU is factored in, then get any laptop you want to use on the go. In the long run you would have more versatility and power, while maintaining upgradeability and without locking yourself into the Alienware ecosystem.


  • 2 weeks later...

Hi all!

Just wanted to give my own input. After a lot of research, and being new to VRay, I ended up buying a few GTX 970s to get several of my office workstations up to doing GPU previews for visualization purposes, and then kicking the final rendering to the CPU (VRay Adv). Thanks to Dimitris for all his help and his answers.

My workstation has two GTX 970s from Zotac. Viewport performance is great, actually better than anything I have had before. With VRay RT, the visualizations have been awesomely fast. Sure, I only let it render for about 10 to 15 minutes with lots of lights and geometry (resolution 2000, HDTV), so in some cases it is not perfectly noiseless, but that was not the purpose. With that setup I can get a visualization to either the client or the final designer in minutes, whereas before it would take four times as long to reach similar quality.

Having said that, two GTX 970s have done great for me and saved my neck in more instances over the last two days than I can count. So again, thanks Dimitris. And to anyone else still thinking about it: make the jump, it is worth it. If later you want to move to higher-end cards to change your workflow, you can do that then.


  • 3 weeks later...

Hi all,

 

Firstly, many thanks for sharing opinions that help people make decisions. I've read all the posts and have some idea now, but I need some advice about the laptop I am going for.

 

I mostly use SketchUp and VRay for SketchUp, and also the KeyShot renderer.

I am a freelance interior designer. I usually work with different companies, so I need a laptop that I can take with me everywhere.

 

Could you please give me some advice on whether this laptop is OK for my software? (It is a bit expensive and seems like a gaming laptop, but I cannot find a cheaper one for 3D rendering. I am not sure about the processor in this laptop.)

 

Intel® Core™ i7-4860HQ Processor

2.4 GHz / 3.6 GHz with Turbo Boost

32 GB RAM

NVIDIA GeForce GTX 980M (GDDR5 4 GB)

1 TB HDD, 7200 rpm

512 GB SSD

 

link here

 

Thank you very much,


IMO it is better than the computer I got last year. I think it will be good and will last. I would not suggest making it your rendering station, though - just use it for the modeling, mapping and lighting. Maybe Dimitris has some other input on laptops.

 

Thank you Alex,

 

I am still searching. According to my research, Quadro cards are not really necessary, and the Nvidia GTX 980M looks like a good choice. 3D rendering speed and the quality of the final image depend on CPU power, but I still cannot be sure which processor is best.


1. The GPU is not used for any aspect of "normal" renders with VRay Advanced, Mental Ray or any other traditional renderer. It never has been. It will get there, but it is not there right now.

It will make virtually no difference whether you render on a workstation with "on-board" graphics/IGP, a GTX 780, a GT 610, a Radeon 7750 or any Quadro. The fact that machines labeled as "workstations" and marketed to people "rendering stuff" come with options for "workstation" GPUs, mainly Quadros & FirePros, has nothing to do with "rendering" itself.

 

2. VRay RT GPU is another rendering method, different from and not 100% compatible with all the features of VRay Advanced (yet). It is an "unbiased" method that uses brute force to literally calculate the GI solution ray by ray and bounce by bounce. These very small "problems" are a waste for the long, complicated compute threads of modern CPUs: the CPU is "done" with each one very fast, but then has to wait for the next problem in the queue to come up. Calculating hundreds of thousands or millions of bounces with 8 or 12 or 24 threads (depending on how many your CPU(s) have) is tedious and takes a lot of time.
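To make the "ray by ray, bounce by bounce" idea concrete, here is a toy sketch in Python - not VRay's actual code, just a made-up, minimal illustration of brute-force Monte Carlo GI. The point is that every sample is a short, independent chain of bounces, so a frame becomes millions of tiny jobs rather than a few big ones:

```python
import random

def trace_sample(max_bounces=4):
    """One GI sample: follow a single ray through a few random bounces.
    Each call is a tiny, independent problem - ideal for massive parallelism."""
    energy, radiance = 1.0, 0.0
    for _ in range(max_bounces):
        if random.random() < 0.1:      # toy probability that this bounce hits a light
            radiance += energy         # pick up the light's contribution
            break
        energy *= 0.6                  # the surface absorbs part of the energy
    return radiance

def render_pixel(samples=256):
    """Brute force: average many independent samples per pixel, no interpolation."""
    return sum(trace_sample() for _ in range(samples)) / samples

# A 640x480 frame at 256 samples per pixel is roughly 78 million trace_sample() calls.
print(render_pixel())
```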

 

 

Rendering engines utilize certain "shortcuts", which in a nutshell involve grouping neighboring pixels and interpolating (irradiance mapping is one such technique), allowing for faster render times. These techniques characterize a rendering engine as "biased", as it does not independently calculate each and every pixel of the final frame, but "cheats" by interpolating results.
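A hypothetical sketch of that "biased" shortcut, loosely inspired by what an irradiance map does and heavily simplified: evaluate the expensive GI only at sparse sample points and interpolate the pixels in between.

```python
def biased_row(width=16, step=4, compute_gi=lambda x: (x % 7) / 7.0):
    """Evaluate costly GI only every `step` pixels and interpolate the rest.
    `compute_gi` is a stand-in for an expensive brute-force evaluation."""
    keys = list(range(0, width, step))
    if keys[-1] != width - 1:
        keys.append(width - 1)                      # make sure the row's end is sampled
    samples = {x: compute_gi(x) for x in keys}      # only len(keys) expensive calls
    row = []
    for x in range(width):
        lo = max(k for k in keys if k <= x)
        hi = min(k for k in keys if k >= x)
        t = 0.0 if hi == lo else (x - lo) / (hi - lo)
        row.append((1 - t) * samples[lo] + t * samples[hi])   # cheap interpolation
    return row

print(biased_row())   # 16 pixel values from only 5 real GI evaluations
```

The speed-up comes from trading exactness for interpolation, which is exactly why the result is called "biased".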

 

 

The massive parallelism built into the hundreds or thousands of simple compute units (aka CUDA cores, shaders, etc.) in GPUs is very efficient at exactly these small problems. Instead of calculating bounces in a handful of CPU threads, you calculate one bounce per shader (or something like that) on each clock cycle, so if you are throwing thousands of shaders at the task you can achieve decent rendering speeds - so much faster than the CPU (at the same task) that there is no merit in keeping the CPU in this "loop"... it will just "burn" electricity. Also, for most intensive GPU tasks you need at least one CPU thread "open" to feed data back and forth to the GPU efficiently, so occupying the CPU 100% with something else might even be counter-productive.
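As a rough CPU-side analogy of that scattering (purely illustrative - a real GPU path goes through CUDA/OpenCL kernels, not Python processes): because every sample is independent, the work can simply be spread over however many workers you have, whether that is 8 CPU threads or thousands of shaders.

```python
from concurrent.futures import ProcessPoolExecutor
import os
import random

def bounce_job(seed, max_bounces=4):
    """One independent GI sample - the kind of tiny problem a single shader handles."""
    rng = random.Random(seed)
    energy, radiance = 1.0, 0.0
    for _ in range(max_bounces):
        if rng.random() < 0.1:
            radiance += energy
            break
        energy *= 0.6
    return radiance

if __name__ == "__main__":
    samples = 100_000
    # A CPU spreads these jobs over a handful of workers; a GPU spreads the same
    # kind of independent jobs over thousands of shaders instead.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        total = sum(pool.map(bounce_job, range(samples), chunksize=1000))
    print("average radiance:", total / samples)
```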

 

The best GPU for the job is usually the one with the better aggregate of shaders × core clock. Quadro or Tesla cards don't have features that give them an edge over "gaming" cards at the moment. Some claim that Quadro cards are better binned (higher-quality chips) and might last longer; otherwise the differences are purely in software (firmware/drivers) and, in some cases, ECC RAM, which might be useful for scientific compute applications but not so much for graphics (if anything, ECC is a tad slower).

 

So, from fastest to "less fast" it should be like:

 

GTX 780Ti > K6000 > GTX Titan > GTX 780 >> GTX 770/680 > K5000 > GTX 670 > GTX 760 >> GTX 660 >> K4000.

A K4000 is slow for GPU rendering, and a K2000 is nearly useless for it (too slow). You can use them, sure, but the compute power per dollar is horrible. I am mentioning nVidia cards only, simply because the current VRay RT GPU (2.x) is horribly optimized for AMD cards (despite the latter probably being much better at compute than nVidia).
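A hypothetical back-of-the-envelope version of that shaders × core clock rule of thumb. The core counts and clocks below are approximate reference-card figures, and the metric ignores architecture, memory bandwidth and drivers, so treat it only as a rough ordering, not a benchmark:

```python
# Approximate reference specs: (CUDA cores / shaders, base core clock in MHz)
cards = {
    "GTX 780 Ti":   (2880, 875),
    "GTX Titan":    (2688, 837),
    "GTX 780":      (2304, 863),
    "GTX 770":      (1536, 1046),
    "GTX 760":      (1152, 980),
    "GTX 660":      (960, 980),
    "Quadro K4000": (768, 810),
}

# Rough compute score: shaders * clock in GHz (ignores IPC, memory, drivers).
scores = {name: cores * mhz / 1000 for name, (cores, mhz) in cards.items()}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>13}: ~{score:,.0f} shader-GHz")
```

Run it and the ordering comes out roughly like the list above, with the Quadro K4000 far behind the gaming cards despite its much higher price.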

 

The key words for CPU vs. GPU rendering are "biased" and "unbiased".

VRay Advanced (i.e. the normal VRay you are using now) is a biased engine. It pre-calculates/predicts and interpolates results in smart ways to save time. That's what irradiance map, light cache etc. are: passes that "cheat" your way out of the need to calculate everything ray by ray and bounce by bounce. Many of the features/options/effects of VRay are actually based on this "biased" method, and are thus (still) unavailable in VRay RT GPU. Maybe in future versions they will iron everything out, and CPUs will be used less and less in the process.

 

Hi Dimitris!

 

Great explanation of the use of GPUs. Don't mean to be a bother, but I still have some questions!

 

1. What is this "certified hardware" that Autodesk talks about for 3DS Max? See the attached image. The disclaimer at the end makes me think it's bullshit but correct me if I'm wrong.

2. I'm gonna buy a graphics card ASAP. I mainly use Revit (which uses only the CPU) and I will be using Lumion also, for large projects several hundred acres in size, but not very complicated. Eventually I'm gonna be using 3DS Max, most likely by linking Revit into 3DS Max and using it only for rendering (but keeping my options open; may use 3DS Max for modelling too) so I feel it would be wise to consider its requirements too while buying the graphics card.

I narrowed down my search, based on affordability and the number of PassMark points that Lumion says is recommended for large and very large models, to the following:

GeForce GTX 760 - 4969 PassMark points

 

GeForce GTX 660 - 4117 PassMark points

 

Radeon HD 7950 - 4687 PassMark points

 

Radeon R9 285 - 5187 PassMark points

 

Radeon R9 270x - 4515 PassMark points

 

But as you can see, the GTX 760 is at the bottom of the list in the attached image. So I was just wondering whether I should go by that list, or whether the 760 and the others in my shortlist will do the trick. What are your views on this?

 

Oh, I have an AMD FX-4100 3.6 GHz processor

16 GB DDR3 RAM

ASUS M5A78L-M LX V2 motherboard (can be changed)

 

And thanks in advance!

 

Autodesk_GPU_certification_medium.jpg

