Quadro vs. GTX for 3DS Max machine: Here are my priorities...



Hi,

 

I'm building a machine for several 2D/3D programs, but 3DS Max is my current bottleneck, so I want to tailor my hardware to it.

 

Here are my top priorities, in order of importance:

1. Viewport performance with thousands of objects. I would regularly be working with scenes that have ~10 million polygons, as well. (I use instances where I can, but lots of unique stuff in them). I want to minimize viewport lag and the need for bounding box substitution. I also want to conquer the distracting "flickering" (flipping back and forth between wireframe and realistic mode).

2. Quick preview renders. I don't care as much about final render speed, as that's something I can get up and walk away from. I spend 95+% of my time in the viewport building scenes. What I really want is fast preview renders, nitrous, IRay, etc...

 

Things I care about somewhat:

1. Final render speed, should I go the GPU rendering route. I currently use Mental Ray but am thinking of switching to Vray soon.

2. Power consumption

3. Bang for the buck. (Performance is important and I'm willing to spend 1K on a graphics card, perhaps a little more. But if I'm only getting a barely noticeable performance bump for twice the price, then I'll go with the cheaper option.)

4. Photoshop performance/compatibility. This is one of the reasons I wish to stick with Nvidia (I use the Nvidia GPU accelerated features frequently as a heavy duty PS user.)

 

Things I don't care about at all:

1. Animation rendering/performance

2. Fluid dynamics, particles, etc...

3. Longevity

4. Gaming

 

Things I'm not sure whether or not I should be caring about:

1. Error checking (apparently Quadros are better at this).

2. Heat (does this slow down performance?)

3. "Accuracy" - this is a term I've seen thrown around among heavy Autocad users in reference to Quadro superiority, though with what appears to be differing definitions. I'm 95% sure this isn't a priority to me, as I'm dealing with concept art/matte painting and not precision-based software.

 

General questions:

1. If I opt to not collapse most of my modifiers, what part of the graphics card handles them? I had assumed that extra processing power (CUDA) would play a bigger part if this is the case.

2. Am I correct in assuming that I don't need much memory on the graphics card for viewport performance? I read somewhere that the viewport doesn't consume much, maybe edging on 2GB, so 8GB would be WAY overkill.

3. Is there a hypothetical situation where a large scene could be opened by the Quadro but not by a GTX?

 

 

Information about my current build (subject to reluctant change, as I already have these parts):

Not sure if other info is relevant. Oh, and I'm using Max 2014 but may upgrade to 2015.

 

 

--------------------------------------------------------

 

Nvidia Quadro K4200, GTX 980/970, or something else...?

 

So I was going to buy the Nvidia Quadro K4200, because I was told that its drivers supported better viewport performance. However, I saw many users on various forums say that they were not impressed by the viewport performance bump it provided, and that it was negligible. They were in the "GTX-has-much-better-specs-for-the-price" camp: "Team GTX."

 

I've then seen people with the viewpoint that specs don't matter at all, and that it's "all in the drivers" ("Team Quadro"). They proclaim that Quadro's superior drivers make a dramatic difference in the Max workflow and are totally worth the hefty price. They also say that there are important hardware differences as well, that it's not just optimized Quadro/throttled GTX drivers.

 

"Team GTX" then counters that this USED to be true, but that Quadro and GTX have converged in recent years. They give anecdotes on how well their

Many of the benchmarks and discussions online are either outdated (Quadro NON-Kepler series compared, for instance), or they just compare just gaming cards/workstation cards without crossover. I've used head-to-head benchmark sites which show the GTX 980 being superior by a wide margin. But again, the benchmarks seem to be targeted at gamers.

 

Further complicating things is the GTX 970/980 vs. Titan question. It seems that there is little advantage offered by the Titan to justify the price for me.

 

--------------------------------------------------------

 

 

I'm new to this sphere and don't have much trial-and-error to draw upon, so any guidance would be greatly appreciated. I hope I brought enough specifics to the table that this won't be seen as a generic contribution to the debate.


I'm sure others will weigh in with some good advice, but I would say that for your needs GTX is the way to go. Quadro has benefits for engineering software packages etc. with high poly counts and where accuracy is crucial. For concept art in 3ds max I think they offer no real benefit at all. If you are hoping to use iRay or VrayRT for look development of scenes containing millions of polys, as you describe, you'd be better off with a GTX card (or multiple) with a lot of onboard memory.


All the "driver optimized workflow" that people still believe is simply harkening to old times (pre-2012), where that did apply in certain cases (OpenGL viewports were all the rage, nVidia offered tailored drivers for certain apps like 3dsMax). Today it only apply to niche CAD application (Catya,etc..).

 

It has zero relevance today: most viewports, like 3dsMax 2014+, run on DirectX 11+ and take almost zero benefit from the unlocked drivers. People also work less in wireframe modes now that the shaded ones carry very little penalty and look much better.

 

Quadro still has a lot of other benefits (purely artificial ones, driven by business decisions and market-segmentation strategy), like unlocked 14-bit LUT output (for which you would need an appropriate wide-gamut monitor and a fully tailored color pipeline to benefit) for precise color-grading environments (think the crème de la crème of post-production houses), double-precision floating point (for very precise calculations in economics, medicine and other scientific fields), and generally higher on-board memory options (though only in very high-end models like the K6000, and even that will soon be available through semi-'regular' options like the Titan). But these don't apply much to our industry.

 

If you're very serious about GPU rendering (iRay, etc.) being your main renderer of choice, then it makes sense to wait for a GTX 980 with 8GB of memory so you can fit as big a scene as possible into it. Or wait for the rumored Titan 2 with 12GB of RAM. Regarding your question about what justifies the price of the Titan series... well, very little. Originally it came with higher memory, which was offset a year later by a similar (but rare) 6GB 780 offering. The Titan 2 should perhaps be more specialized, and will again come with higher memory for a niche audience, which in nVidia's eyes warrants asking a hefty premium (though not as hefty as for a Quadro; it's a compromise).


I just want to point out this is one of the best "what computer should I buy?" threads I've seen in a while. The OP did a great job of outlining her workflow, needs, concerns, etc. and has clearly done some research on her own, and I believe she got some well-deserved good advice.

 

Unfortunately, in two hours someone is going to ask the exact same question without having done any leg work on his/her own (including reading this thread).


 

Yep, very refreshing to see in an ocean of "Guyz, plx, should I buy Pentium II or 3xTitanZ? It's for rendering with Google. I am from Italy so I can't use my brain on internet."

 

This might well be the most well-thought-out hardware request :- )



Hi,

 

I'm building a machine for several 2D/3D programs, but 3DS Max is my current bottleneck, so I want to tailor my hardware to it.

 

Here are my top priorities, in order of importance:

1. Viewport performance with thousands of objects. I would regularly be working with scenes that have ~10 million polygons, as well. (I use instances where I can, but lots of unique stuff in them). I want to minimize viewport lag and the need for bounding box substitution. I also want to conquer the distracting "flickering" (flipping back and forth between wireframe and realistic mode).

 

 

3dsMax doesn't really scale well with GPU performance, almost regardless of the card used. Any current GPU (even a lower-end one) offers decent performance in Max 2014+, but the higher-end counterparts offer relatively little additional improvement on top of that. Other software like Maya scales better and more linearly, but Max is simply doomed to incomplete solutions in this regard. The Nitrous viewport is a great improvement over previous versions, but hardly ideal.

 

2. Quick preview renders. I don't care as much about final render speed, as that's something I can get up and walk away from. I spend 95+% of my time in the viewport building scenes. What I really want is fast preview renders, nitrous, IRay, etc...

 

GPU rendering, on the other hand, scales very linearly, so a 30 percent faster GPU will mostly increase your rendering speed by about 30 percent. If you're after GPU rendering, go as high as your budget allows (GTX 980 or above), or a little below that for the best value/performance (GTX 970).
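
To put that near-linear scaling in concrete terms, here is a minimal back-of-the-envelope sketch in Python; the baseline time and throughput figures are placeholders, not measured benchmarks:

```python
# Rough illustration of near-linear GPU render scaling.
# All figures below are hypothetical placeholders, not benchmark results.

def estimated_render_time(baseline_minutes, baseline_throughput, new_throughput):
    """Scale a known render time by the relative GPU throughput."""
    return baseline_minutes * (baseline_throughput / new_throughput)

baseline = 30.0        # minutes for a preview render on the current card
current_speed = 1.0    # normalized throughput of the current card
faster_speed = 1.3     # a card that renders ~30% faster

print(round(estimated_render_time(baseline, current_speed, faster_speed), 1))
# ~23.1 minutes: a ~30% faster GPU cuts render time by roughly the same factor.
```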

 

Things I care about somewhat:

1. Final render speed, should I go the GPU rendering route. I currently use Mental Ray but am thinking of switching to Vray soon.

 

Depends on your needs. The GPU route is still slightly less universal, and limiting for certain types of work (mainly work that exceeds the GPU's lower memory limits, although there are rendering engines that bypass this, such as Redshift, at a performance cost). VrayRT in the recent Vray update (3.10.xx) can take advantage of a faster algorithm (Light Cache, where previously it was fully path traced), but it doesn't offer the same options as regular Vray Advanced (CPU). It's probably better to decide this first and tailor your choice to a particular engine.

 

2. Power consumption

 

High-end GPUs are power-hungry by default, although this has greatly improved in the most recent nVidia generation (Maxwell). The difference between a Quadro and a GTX of the same family is not big; Quadros are slightly lower-clocked in general, but that doesn't translate into much lower consumption. As long as the power consumption fits under your PSU's rating, you're fine.
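
As a rough sanity check on that, you can total the component draw against the PSU rating; the wattages below are illustrative assumptions, not specs for any particular part in this build:

```python
# Crude PSU headroom check. All wattages are illustrative assumptions.

components_watts = {
    "cpu_under_load": 90,
    "gpu_under_load": 180,            # one high-end GTX-class card
    "motherboard_ram_drives_fans": 60,
}

psu_rating_watts = 650

total = sum(components_watts.values())
headroom = psu_rating_watts - total
print(f"Estimated load: {total} W, headroom: {headroom} W")
# Keeping ~30-40% headroom covers transient spikes and keeps the PSU in its
# efficient range; budget roughly another 150-200 W per additional GPU.
```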

 

3. Bang for the buck. (Performance is important and I'm willing to spend 1K on a graphics card, perhaps a little more. But if I'm only getting a barely noticeable performance bump for twice the price, then I'll go with the cheaper option.)

 

Obviously there are diminishing returns and lower performance per dollar the higher you go, but the difference is less drastic than it used to be. Both the GTX 980 and 970 offer a good deal, but the 970 is superior in this regard. Depending on your priorities, multiple upper-mid-range GPUs like the 970 can be a better deal for GPU rendering than a single 980 or multiple 980s. But for viewport performance and CPU-only rendering, even a 970 is way above what you need.
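
One way to frame that trade-off is simple performance per dollar; the prices and render scores below are purely hypothetical placeholders used only to show the arithmetic:

```python
# Toy performance-per-dollar comparison. Prices and scores are hypothetical
# placeholders, not current market data or benchmark results.

options = {
    "1x GTX 980": {"price": 550, "render_score": 125},
    "1x GTX 970": {"price": 350, "render_score": 100},
    "2x GTX 970": {"price": 700, "render_score": 195},  # GPU rendering scales ~linearly
}

for name, o in options.items():
    print(f"{name}: {o['render_score'] / o['price']:.3f} points per dollar")
```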

 

4. Photoshop performance/compatibility. This is one of the reasons I wish to stick with Nvidia (I use the Nvidia GPU accelerated features frequently as a heavy duty PS user.)

 

You're fine with anything you pick here. Most Adobe apps support OpenCL and CUDA acceleration equally.

 

 

Things I'm not sure whether or not I should be caring about:

1. Error checking (apparently Quadros are better at this).

 

"Pro" cards are often equipped with ECC memory, which can correct certain types of internal data corruption. This happens very infrequently and doesn't affect at all regular workstation tasks. It's only important in scientific calculations, where errors cannot be tolerated.

 

2. Heat (does this slow down performance?)

 

Quadros can actually be worse in this regard, as they are most commonly available only in nVidia's reference design, which is subpar compared to other vendors (like Asus, Gigabyte, etc.) selling GTX cards with multi-fan coolers. Heat only slows performance when it reaches an extreme and the card throttles. That doesn't happen under regular circumstances.

 

3. "Accuracy" - this is a term I've seen thrown around among heavy Autocad users in reference to Quadro superiority, though with what appears to be differing definitions. I'm 95% sure this isn't a priority to me, as I'm dealing with concept art/matte painting and not precision-based software.

 

"Pro" cards offer double (64bit) floating point calculations option, which are much slower and take more bandwidth, but offer better range of magnitudes that can be represented and higher precision. But it has no benefit for common rendering and CGI tasks, which are all single (32bit) floating point for performance reasons.

 

General questions:

1. If I opt to not collapse most of my modifiers, what part of the graphics card handles them? I had assumed that extra processing power (CUDA) would play a bigger part if this is the case.

 

As I answered in the first question, performance in the 3dsMax viewport doesn't drastically improve with a more powerful GPU. A lot of these processes also depend only on single-core CPU performance.

 

2. Am I correct in assuming that I don't need much memory on the graphics card for viewport performance? I read somewhere that the viewport doesn't consume much, maybe edging on 2GB, so 8GB would be WAY overkill.

 

For the viewport, generally, no: 2GB can be enough for the average user, although I frequently hit its limits, so I run 4GB in my workstation (but my scenes can reach hundreds of millions of polygons, not just tens). GPU memory does matter for GPU rendering, though, where the whole scene has to fit inside it or it won't render (with the exception of Redshift, which offers a memory cycling feature at a performance cost).

 

3. Is there a hypothetical situation where a large scene could be opened by the Quadro but not by a GTX?

 

In 3dsMax 2014/2015? I don't believe so.


Juraj, I really hope the 980 with 8gb coming in March is going to happen! I want a 980 but I would almost feel cheated to buy the 4gb right now hehe! Unless the 8gb is like 1000$, then I'll stick to the regular 980 I guess.

 

Where did you see the March release date :- ) ? The Samsung chips (they were supposedly waiting for) were supposed to be in full production by Q1/2015, but whether that means anything for nVidia, I am not sure.


Where did you see the March release date :- ) ? The Samsung chips (they were supposedly waiting for) were supposed to be in full production by Q1/2015, but whether that means anything for nVidia, I am not sure.

 

Probably a shady source haha... but I want to believe it's coming soon :-) Maybe the 4 extra GB of RAM won't make a huge difference (it's for Unreal Engine only). Most stuff isn't done at ultra high-res anyway! I've decided to stick to Corona and CPU for my offline renderings anyway. Not gonna have to pursue the dream of getting a rack full of 6GB 780s, lol.


I would just like to add one thought.

I don't know what type of rendering you will be doing, product viz or architectural viz, but if you have experience with GPU rendering you know that your limit will always be the RAM, as Juraj explained.

If you are new to this, please review your workflow and do not believe everything that Autodesk and NVidia try to sell you.

I see several people new to rendering (engineers, architects, IT people) who get sold on all the Autodesk/NVidia propaganda: that GPU is "the best way", that cloud rendering is the future, and all that.

Yes and no, the price tag on all that is very high compared to the flexibility and capacity that you get with CPU renderers.

 

Just be careful. It seems you do your homework and that's great; that's exactly what you should do, and don't believe everything these companies tell you.

I am very tired of arguing with the CAD technician at my office. Every time he goes to one of those shows, he comes back trying to make every machine in this company render with the great and magnificent NVidia super-duper $$$$$ card.

Then he always asks me why his Revit rendering is taking forever or won't render, why the cloud rendering can't upload his monster file, and why his glossies don't look the same as my V-Ray renderings. These big companies take advantage of many people's ignorance on this subject.


Heh, this seems to be an often-mentioned story among people from larger departments here :- ) I can only imagine how it plays out in real life, but I mentally compare it to the unethical practice of selling overpriced cookware to old people that's pretty popular in my country...

 

I almost forgot to add that iRay, even when fueled by a vast battalion of GPUs, is still slower than an average CPU renderer (Vray, Corona, etc.). For those who do benefit from GPU rendering, or are simply fans of it, I would advise choosing one of the more advanced and actively evolving renderers like Octane (precise but slower, with an easier workflow overall) or Redshift (a sort of Vray-ish renderer that just runs on the GPU, with a bunch of cool features).


I agree that this is one of the most well-prepared threads on the forum. I take part in many hardware-oriented forums, and I admit this phenomenon is rare.


Depends on your needs. The GPU route is still slightly less universal, and limiting for certain types of work (mainly work that exceeds the GPU's lower memory limits, although there are rendering engines that bypass this, such as Redshift, at a performance cost). VrayRT in the recent Vray update (3.10.xx) can take advantage of a faster algorithm (Light Cache, where previously it was fully path traced), but it doesn't offer the same options as regular Vray Advanced (CPU). It's probably better to decide this first and tailor your choice to a particular engine.

 

I think Juraj "hit the nail on the head" with this statement. The problem is that the OP must answer this question before anything else. Viewport performance in Autodesk's products, and OpenGL/OpenCL support in Adobe CC etc., would be fine with a GeForce consumer card like a GTX 970. The whole story here is whether iRay, VrayRT or other GPU-based renderers are going to suit the OP's needs on the rendering side (materials, light settings, final output, etc.). I know, for example, that many car designers use iRay successfully, but this is a very specific area of 3D visualization. Architects and other designers seem to mostly prefer CPU renderers. If this matter is settled by the OP, or answered by members with greater experience in the OP's field of work, everything else would be almost automatically answered. I sense that a GTX 970 with 8GB of VRAM (coming out soon, I guess) would fit like a glove in any case. If GPU rendering is chosen/approved, then 2 or 3 of them together would make a pretty fast rendering machine. If not, the 4790K would be a decent chip for most tasks.

 

PS In any case, I would choose a better motherboard than the Asrock H97M-Pro4.

Edited by nikolaosm

Maybe the 4 extra GB of ram won't make a huge difference (it's for unreal engine only).

If it is for real-time engines only, the latest move to combined VRAM in SLI/CrossFire could have a huge impact. I don't know if all of the RAM will really add up, because the allocation seems to be dynamic, but even if you only get 13-14GB out of two 8GB cards, or 20GB with 3 cards, this will change everything. And there are also the 12GB cards...

I'm not sure if they will manage to use it for GPU rendering - I don't think it is possible at the moment, because normally the whole scene has to be stored in the VRAM, but who knows...

 

http://wccftech.com/geforce-radeon-gpus-utilizing-mantle-directx-12-level-api-combine-video-memory/


The 300 series from AMD is speculated to have the ability to combine VRAM in CrossFire configurations, but it is not 100% certain, and if it does, that would probably only be under the Mantle API, which is a 3D graphics acceleration scenario, not a GPGPU one.

 

The 8GB versions of the 980 and perhaps 970 should drop after the AMD 300 series gets launched.

If the top-of-the-line 300 turns out to be that good, we will probably see a "Titan II" or whatnot being announced for the near future, just to break some momentum should some of the enthusiasts start thinking of switching to red, i.e. business as usual for the last decade :p

 

CUDA 6 allows for unifying system RAM and GPU memory, but I don't know if any current mainstream developer is adapting its GPGPU rendering engine for that, and/or when we will be able to get our hands on a commercially available product that does.

 

I would not hold my breath. Will take some time.


that would probably only be under the Mantle API, which is a 3D graphics acceleration scenario, not a GPGPU one.

 

That's what I said - Mantle for AMD, and DirectX 12 will bring it to nVidia too. As far as I know it should be possible with the current cards as well - at least for AMD (R9 2X0).


First off, thank you everyone for the thorough replies. You guys have by far been the best help here.

 

It sounds like I should clarify to you (and myself) what my rendering objectives are. I am much closer to an architectural visualizer than a product visualizer, and no, I don't work in-house (yet), so this is my own purchase.

The type of work I'm focusing on for the long haul is detailed fantasy/sci-fi cityscape renders for film pre-visualization, closer to matte painting in quality than the traditional 2D photobash variety of concept art. (As of recently, studios are demanding higher quality images for use in marketing, as well as for high end pitch work to some degree.) 3D is a huge plus for quickly showing multiple angles.

 

So while there's a good amount of crossover with architectural rendering, this means that accuracy of lighting and material behavior is less important to me than those in the architectural world. More important to me is the ability to churn out "complexity" very quickly.

 

That said, I've been working with a LOT of glass (think, light shining through multiple layers of it) and mirrored surfaces, so speeding up preview renders in that department would be really useful.

 

Wish I was allowed to show you guys actual examples.

 

 

 

3dsMax doesn't really scale well with GPU performance, almost regardless of the card used. Any current GPU (even a lower-end one) offers decent performance in Max 2014+, but the higher-end counterparts offer relatively little additional improvement on top of that. Other software like Maya scales better and more linearly, but Max is simply doomed to incomplete solutions in this regard. The Nitrous viewport is a great improvement over previous versions, but hardly ideal.

This helps to clarify things, thanks. I was aware that the viewport used a single core, but not to what extent the CPU was used overall. What do you think of the idea of boosting to 4.4GHz on my CPU? I've never overclocked before, so I'm hesitant.

 

 

Depends on your needs. The GPU route is still slightly less universal, and limiting for certain types of work (mainly work that exceeds the GPU's lower memory limits, although there are rendering engines that bypass this, such as Redshift, at a performance cost). VrayRT in the recent Vray update (3.10.xx) can take advantage of a faster algorithm (Light Cache, where previously it was fully path traced), but it doesn't offer the same options as regular Vray Advanced (CPU). It's probably better to decide this first and tailor your choice to a particular engine.

Interesting. I'm not sure whether I will be buying Vray or not. I've heard good things about it, especially that it's more intuitive. It would be safe to assume I'm using Mental Ray for now anyway. By "doesn't offer the same options" I'm not sure what you mean. Will have to look into it.

 

 

Obviously there are diminishing returns and lower performance per dollar the higher you go, but the difference is less drastic than it used to be. Both the GTX 980 and 970 offer a good deal, but the 970 is superior in this regard. Depending on your priorities, multiple upper-mid-range GPUs like the 970 can be a better deal for GPU rendering than a single 980 or multiple 980s. But for viewport performance and CPU-only rendering, even a 970 is way above what you need.

Yeah, I'm thinking that I'll just try one for now and see if it fits my needs. I still feel like I'd be getting a great deal with the 980 when compared to a Quadro, hah.

 

 

For the viewport, generally, no: 2GB can be enough for the average user, although I frequently hit its limits, so I run 4GB in my workstation (but my scenes can reach hundreds of millions of polygons, not just tens). GPU memory does matter for GPU rendering, though, where the whole scene has to fit inside it or it won't render (with the exception of Redshift, which offers a memory cycling feature at a performance cost).

This helps a lot. I read one user say that he wasn't using much memory with a "big file," but he never described how big the file was, or what other variables there were. I had assumed that the whole scene didn't have to "fit inside it," so that is also good to know. It sounds like I would still be safe within the 4GB range, since I haven't come close to a hundred million polygons! Curious as to what file size that makes on disk.

 

 

 

Juraj, I really hope the 980 with 8gb coming in March is going to happen! I want a 980 but I would almost feel cheated to buy the 4gb right now hehe! Unless the 8gb is like 1000$, then I'll stick to the regular 980 I guess.

Yeah, this is good to know. I could definitely wait until March, though I know that "soon" for companies can mean a very different thing for consumers...

 

 

If you are new to this, please review your workflow and do not believe everything that Autodesk and NVidia try to sell you.

I see several people new to rendering (engineers, architects, IT people) who get sold on all the Autodesk/NVidia propaganda: that GPU is "the best way", that cloud rendering is the future, and all that.

Yes and no, the price tag on all that is very high compared to the flexibility and capacity that you get with CPU renderers.

I am indeed new to all these hardware considerations. Being completely self-taught on the software has made me cautious, because I haven't had a chance to hobnob with veteran pros/teachers and compare hardware firsthand. I also had the suspicion that competition for the gaming market has driven prices down so much for gamers that Nvidia and the like attempt to subsidize the narrow margins and GTX R&D costs with extra profits from the Quadro line.

 

 

PS In any case, I would choose a better motherboard than the Asrock H97M-Pro4.

Is this recommendation due to only having one CPU/GPU slot, or for other reasons? It seemed to have everything I needed and got decent reviews, but if you can make a good case for something better I'll see if I can resell it and buy a new one. I actually got it from a friend, though, so I haven't done as much homework as I would've if I'd bought one myself...

Edited by kirstenzirngibl

Is this recommendation due to only having one CPU/GPU slot, or for other reasons? It seemed to have everything I needed and got decent reviews, but if you can make a good case for something better I'll see if I can resell it and buy a new one. I actually got it from a friend, though, so I haven't done as much homework as I would've if I'd bought one myself...

The ASRock will be just fine. No worries. Only if you want to overclock your CPU, as you mentioned above, should you look for a Z97 instead, because the H97 boards are only capable of very limited overclocking.

 

Btw, you're already at 4.4GHz single-core and 4.2GHz all-core turbo with the 4790K, so overclocking this CPU will only give you a few hundred MHz more (maybe 4.6-4.8GHz depending on your cooling). And I just looked into the manual of the ASRock H97 Pro4 ( ftp://66.226.78.21/manual/H97%20Pro4.pdf ) and it supports "Non-Z overclocking", which means that you can fully overclock K-CPUs with this board. I'm not sure if there are any limitations regarding voltage, CPU power phases or other settings compared to the Z97 boards, but overclocking is basically possible.
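
To put numbers on how little is left on the table, a quick calculation (assuming, as an approximation, that single-threaded viewport work scales roughly with clock speed):

```python
# Rough estimate of single-threaded gains from overclocking a 4790K.
# Assumes performance scales linearly with clock speed, which is approximate.

stock_single_core_turbo_ghz = 4.4
overclock_targets_ghz = [4.6, 4.8]   # a typical range depending on cooling

for target in overclock_targets_ghz:
    gain_pct = (target / stock_single_core_turbo_ghz - 1) * 100
    print(f"{target} GHz: ~{gain_pct:.1f}% faster single-core, at best")
# Roughly 4.5-9% in the best case, which is why viewport gains are modest.
```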

Edited by numerobis

This helps to clarify things, thanks. I was aware that the viewport used a single core, but not to what extent the CPU was used overall. What do you think of the idea of boosting to 4.4GHz on my CPU? I've never overclocked before, so I'm hesitant.

 

Your CPU already has a "turbo" feature, which overclocks cores depending on how many are in use. If a single core is in use at the moment, it will already clock it quite high. You will not get very drastic viewport improvements past this point.

 

Interesting. I'm not sure whether I will be buying Vray or not. I've heard good things about it, especially that it's more intuitive. It would be safe to assume I'm using Mental Ray for now anyway. By "doesn't offer the same options" I'm not sure what you mean. Will have to look into it.

 

GPU compute frameworks like CUDA or OpenCL are more limited than the general x86 CPU architecture, so there are usually limitations when implementing the same feature set as a regular CPU renderer (number of textures, displacement, etc.). It improves steadily, but it's still not fully on par.

 

This helps a lot. I read one user say that he wasn't using much memory with a "big file," but he never described how big the file was, or what other variables there were. I had assumed that the whole scene didn't have to "fit inside it," so that is also good to know. It sounds like I would still be safe within the 4GB range, since I haven't come close to a hundred million polygons! Curious as to what file size that makes on disk.

 

It doesn't correlate that way. My scenes (uncompressed) take about 1GB of disk space on average, but at render time they need anywhere from 16 to 40GB. Thus GPU rendering is obviously not for me. The GPU has to fit the geometry, render-time geometry like displacement, textures, the framebuffer, and various other data, so it's not purely about polygon count, and it's hard to guess how much a scene will take.
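
A rough way to see why disk size and render-time footprint diverge is to add up what the GPU actually has to hold; every per-item size below is a ballpark assumption for illustration, not a measured figure:

```python
# Ballpark GPU memory estimate for a render. All sizes are rough assumptions;
# real usage depends heavily on the renderer and scene.

BYTES_PER_TRIANGLE = 100      # vertices, normals, UVs, indices, BVH overhead
BYTES_PER_TEXEL = 4           # uncompressed RGBA8

triangles = 50_000_000        # render-time geometry, incl. displacement tessellation
textures_4k = 60              # number of 4096x4096 textures kept on the GPU
framebuffers_gb = 0.5         # frame buffer, render elements, working buffers

geometry_gb = triangles * BYTES_PER_TRIANGLE / 1024**3
textures_gb = textures_4k * 4096 * 4096 * BYTES_PER_TEXEL / 1024**3
total_gb = geometry_gb + textures_gb + framebuffers_gb

print(f"geometry ~{geometry_gb:.1f} GB, textures ~{textures_gb:.1f} GB, "
      f"total ~{total_gb:.1f} GB")
# A scene that is ~1 GB on disk can easily expand past a 4-8 GB card at render time.
```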


The type of work I'm focusing on for the long haul is detailed fantasy/sci-fi cityscape renders for film pre-visualization, closer to matte painting in quality than the traditional 2D photobash variety of concept art. (As of recently, studios are demanding higher quality images for use in marketing, as well as for high end pitch work to some degree.) 3D is a huge plus for quickly showing multiple angles.

 

So while there's a good amount of crossover with architectural rendering, this means that accuracy of lighting and material behavior is less important to me than those in the architectural world. More important to me is the ability to churn out "complexity" very quickly.

 

That said, I've been working with a LOT of glass (think, light shining through multiple layers of it) and mirrored surfaces, so speeding up preview renders in that department would be really useful.

 

Wish I was allowed to show you guys actual examples.

 

As a side note to all the hardware questions that have been very well explained here, after reading your post again I would like to point out a few things.

For the type of work you are trying to pursue, I would recommend considering two major factors when choosing software.

First is compatibility. From what I understand, you will start as a solo artist or freelancer, with the hope of jumping into a studio. If so, I would recommend learning and crafting your skills in software that is more standard. Mental Ray was used very widely in the VFX industry, but development is so slow that many studios are jumping to V-Ray or other software such as Arnold, RenderMan and so on. I am not sure of the popularity of Octane or other GPU renderers that are not in-house rendering solutions. Again, I may be wrong, but from the people I know, the software above is the standard. Besides, in studios most of the comps are finished in Fusion or NUKE, so your scene is re-rendered within that software's render engine. I would recommend looking into those packages too, unless you will only be focused on creating assets for further development.

 

Also, you need software that gives you enough flexibility to produce images that are photoreal and not photoreal. Most of the GPU engines tend to be very photoreal; you can bend the rules of course, but that's their goal. Redshift seems to be more flexible in this regard, but I don't think they have released a 3ds Max-compatible plugin yet.

Mental Ray can be very flexible, and sometimes very fast if you are not using GI, but again it is sad to see how much Autodesk/NVidia have forgotten the software. No matter what hardware you use, it won't be as fast as you hope.

IMHO I would recommend trying to save money for V-Ray; in the meantime you should try Corona renderer (still free). It is way faster than Mental Ray, and it really helps you concentrate more on the artistic side of design instead of thinking about how to optimize your scene for fast renders. But again, I don't think there are many studios putting their money into Corona yet.

You should also try Octane render and see if that look helps your workflow, then invest money in hardcore video cards if GPU rendering is your thing.

 

Also, about preview renders: when you are new to the software, you tend to do more previews than when you are more comfortable with it. I remember I used to do a preview after adding each material or each object; nowadays I can go much longer adjusting things and setting up my scene without needing that many previews, because I understand what needs to be done before the image gets close to finished. Yes, this is experience, but I just point it out so you don't feel that preview renders will be this important in your workflow your whole life.

 

just random thought ;)


As a side note to all the hardware questions that have been very well explained here, after reading your post again I would like to point out a few things.

For the type of work you are trying to pursue, I would recommend considering two major factors when choosing software.

First is compatibility. From what I understand, you will start as a solo artist or freelancer, with the hope of jumping into a studio. If so, I would recommend learning and crafting your skills in software that is more standard. Mental Ray was used very widely in the VFX industry, but development is so slow that many studios are jumping to V-Ray or other software such as Arnold, RenderMan and so on. I am not sure of the popularity of Octane or other GPU renderers that are not in-house rendering solutions. Again, I may be wrong, but from the people I know, the software above is the standard. Besides, in studios most of the comps are finished in Fusion or NUKE, so your scene is re-rendered within that software's render engine. I would recommend looking into those packages too, unless you will only be focused on creating assets for further development.

 

I think I still want to stay in the realm of "creating assets for further development." There's a line to walk between being a designer and being the visualizer, and I want to make sure I always have a foot in the earlier design process. It sounds like Vray might be a safe bet. I thought I heard that Renderman was optimized for animation rather than still scenes, but that was a while back.

 

Also, you need software that gives you enough flexibility to produce images that are photoreal and not photoreal. Most of the GPU engines tend to be very photoreal; you can bend the rules of course, but that's their goal. Redshift seems to be more flexible in this regard, but I don't think they have released a 3ds Max-compatible plugin yet.

Mental Ray can be very flexible, and sometimes very fast if you are not using GI, but again it is sad to see how much Autodesk/NVidia have forgotten the software. No matter what hardware you use, it won't be as fast as you hope.

IMHO I would recommend trying to save money for V-Ray; in the meantime you should try Corona renderer (still free). It is way faster than Mental Ray, and it really helps you concentrate more on the artistic side of design instead of thinking about how to optimize your scene for fast renders. But again, I don't think there are many studios putting their money into Corona yet.

You should also try Octane render and see if that look helps your workflow, then invest money in hardcore video cards if GPU rendering is your thing.

This is some great advice; perhaps I get carried away with features that have diminishing returns for the render time. I tend to do a lot of post editing in my work - more than the average visualizer on this forum. What I need most is a blueprint of where light falls on complex geometry, curved reflective surfaces, and through lots of glass.

I'm probably also limiting myself regarding the artistry that can be achieved through rendering alone. I'd love to play with somewhat cel-shaded techniques and grimy lens effects.

Sounds like I have some experimenting to do - I'll download Corona and get to researching some of this stuff as soon as I'm done with my current gig. Also, would you consider the GTX 970 a "hardcore video card"?

 

Also, about preview renders: when you are new to the software, you tend to do more previews than when you are more comfortable with it. I remember I used to do a preview after adding each material or each object; nowadays I can go much longer adjusting things and setting up my scene without needing that many previews, because I understand what needs to be done before the image gets close to finished. Yes, this is experience, but I just point it out so you don't feel that preview renders will be this important in your workflow your whole life.

 

just random thought ;)

It would be great to get there someday! I also feel like I've been spoiled somewhat by using Vue and Keyshot. They have fairly gratifying real time render updates (especially Keyshot.) I moved on from them when I wanted more control over my scene, but sometimes it feels like taking the bumpers off the bowling alley - a rite of passage!


  • 9 months later...

 

Hello Kirsten Zirngibl

 

I know it has been a while since your post, but can you share your final decision on hardware with us? I am in charge of hardware management at a small studio and I keep studying to improve my hardware knowledge.

 

Sorry for my bad English :)

 

Best regards

