
Best graphics card for 3ds Max viewport shading



Hi, I know this question has been asked a million times before, but I have been searching and I keep getting mixed opinions and results.

I'm looking for a good graphics card that can deliver usable FPS in 3ds Max's viewport, as my 4870 is starting to fail me.

Currently I can get a Quadro 4000 for ~$135 or a GTX 660 for ~$150.

Which one would be better for viewport shading in 3ds Max?

I have an Intel Xeon W3680 and 24 GB of RAM, and my scenes are usually around 15-25 million polys.

I use V-Ray 3.0 and Max 2015.

Thanks :)
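
For perspective on those scene sizes, here is a back-of-the-envelope sketch (Python) of roughly how much VRAM the geometry alone might need. Every number in it, the bytes per vertex and the vertex-sharing factor, is an assumption for illustration, not a measured value:

# Rough VRAM estimate for viewport geometry only (no textures).
# bytes_per_vertex and verts_per_poly are assumed, not measured.

def geometry_vram_mb(polys, bytes_per_vertex=32, verts_per_poly=0.6):
    vertex_bytes = polys * verts_per_poly * bytes_per_vertex   # position/normal/UV
    index_bytes = polys * 3 * 4                                # 3 indices x 4 bytes each
    return (vertex_bytes + index_bytes) / 1024 ** 2

for polys in (15_000_000, 25_000_000):
    print(f"{polys // 1_000_000} M polys -> ~{geometry_vram_mb(polys):,.0f} MB")

Even at 25 million polys that is well under 1 GB, so under these assumptions it is textures and viewport effects, not raw geometry, that usually fill a card.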


Sounds like you have a similar setup to me. I have two GTX 770s, and my frame rate is around 280 FPS with a 9-million-poly scene in wireframe. The total CUDA cores are around 3,000 combined, which is about the same as the high-end Quadros.

Quadros aren't worth the cost, especially since you're not using the GPU to render (assuming you're not using V-Ray RT). Not to mention that if your board supports it, you can do an SLI setup with two Tesla cards and still be under the cost of a Quadro 5000, with better refresh rates and faster GPU rendering.
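
To make that core-count comparison concrete, including the two cards you're choosing between, here is a quick tally. The core counts are from the vendor spec sheets as I recall them and the prices are the ones quoted in this thread, so treat both as approximate:

# Quick CUDA-core tally for the cards discussed in this thread.
# Core counts from spec sheets as recalled; prices as quoted above.
cards = [
    ("Quadro 4000",  256,  135),
    ("GTX 660",      960,  150),
    ("GTX 770",      1536, None),
    ("2x GTX 770",   3072, None),
    ("Quadro K6000", 2880, 4500),
]
for name, cores, price in cards:
    price_str = f"~${price}" if price else "n/a"
    print(f"{name:<13} {cores:>5} cores  {price_str}")

If those numbers hold, the GTX 660 offers roughly four times the cores of the Quadro 4000 at about the same price.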


I was looking at Nvidia Quadro cards... is there a purpose to these cards? Considering the prices and the not-so-crazy specs compared to the latest GTX 900 series and Titans (especially the Titan Z), it doesn't look like a good deal at all!

The Quadro K6000 is around $4,500... I fail to see the point of such a graphics card.

 

Here are two really good answers to that question.

 

"It's called market segmentation. nVidia produces one very configurable chip and then sells it in different configurations that essentially have the same elements and hence the same bill of materials to different market segments, although one would usually expect to find elements of higher quality on the more expensive Quadro boards.

 

 

What differentiates Quadro from GeForce is that GeForce usually has its double-precision floating-point performance severely limited, e.g. to 1/4 or 1/8 of that of the Quadro/Tesla GPUs. This limitation is purely artificial, imposed solely to differentiate the gamer/enthusiast segment from the professional segment. Lower DP performance makes GeForce boards bad candidates for things like scientific or engineering computing, and those are the markets the money streams from. Also, Quadros (arguably) have more display channels and faster RAMDACs, which allows them to drive more and higher-resolution screens, the sort of setup perceived as professional for CAD/CAM work.

 

 

Hardware-wise, the Quadro and GeForce cards are often identical. Indeed, it is sometimes possible to convert some models from GeForce into Quadro simply by uploading new firmware and changing a couple of resistor jumpers.

The difference is in the intended market, and hence the cost.

Quadro cards are intended for CAD. High-end CAD software still uses OpenGL, whereas games and lower-end CAD software use Direct3D (aka DirectX).

Quadro cards simply have firmware that is optimised for OpenGL. In the early days OpenGL was better and faster than Direct3D, but now there is little difference. Gaming cards only support a very limited subset of OpenGL, hence they don't run it very well.

Some CAD companies, e.g. Dassault with SolidWorks, actively push high-end cards by offering no DirectX support at any useful level of performance.

Other CAD companies, such as Altium with Altium Designer, decided that forcing their customers to buy more expensive cards is not worthwhile when Direct3D is as good as (if not better than) OpenGL these days.

Because of the cost, there are often other differences in the hardware, such as less overclocking and more memory, but these have relatively minor effects compared with the firmware support."

- http://stackoverflow.com/questions/10532978/difference-between-nvidia-quadro-and-geforce-cards
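
A quick arithmetic illustration of that double-precision cap. The Quadro/Tesla figure below is a made-up round number, not a real spec; only the 1/4 and 1/8 ratios come from the quote above:

# GeForce DP throughput capped at 1/4 or 1/8 of the Quadro/Tesla figure.
# quadro_dp_gflops is a made-up round number for illustration.
quadro_dp_gflops = 1600

for label, factor in [("Quadro/Tesla", 1.0),
                      ("GeForce, 1/4 cap", 0.25),
                      ("GeForce, 1/8 cap", 0.125)]:
    print(f"{label:<17} ~{quadro_dp_gflops * factor:,.0f} DP GFLOPS")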



 

Pretty complete answer, hehe. Thanks!


  • 3 weeks later...

 

 

I have a 660 OC, but I'm not sure that adding another one would let it handle more polygons than it does now. You load your scene onto one card, so if you add another one in SLI it will only be faster; it can't handle more polygons than a single card... or am I wrong?


  • 3 months later...

 

Personally, I prefer the GTX cards, as their refresh rates seem better within 3ds Max. I've tried putting them in SLI mode, but since the software doesn't utilize SLI, I found it didn't really do much to improve the frame rate. Currently I run two GTX 770s, and they can handle a 14-million-poly scene in Realistic shading mode with edges without any lag in the viewport.
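
On the polygon question: in SLI each GPU keeps a full copy of the scene, so VRAM is mirrored rather than pooled, and the ceiling is set by one card. A minimal sketch of that logic, with the scene footprint and card sizes assumed for illustration:

# SLI mirrors the scene into each card's memory, so the polygon/texture
# ceiling is set by ONE card, not by the sum. Sizes are assumptions.

def fits_in_vram(scene_mb, card_vram_mb):
    """card_vram_mb: per-card VRAM for each GPU in the rig.
    The whole scene must fit on the smallest single card."""
    return scene_mb <= min(card_vram_mb)

scene_mb = 1800                                  # assumed heavy-scene footprint
print(fits_in_vram(scene_mb, [2048]))            # one 2 GB card: True
print(fits_in_vram(scene_mb, [2048, 2048]))      # SLI pair: True, but no extra headroom
print(fits_in_vram(scene_mb, [1024, 1024]))      # two 1 GB cards: False; SLI doesn't add up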


I posted a thread twice, but it just doesn't appear?! So I'm going to post my question here; I apologize for going off topic (or maybe not).

 

I want to buy one of these cards: the AMD R9 390X or the NVIDIA GTX 980 Ti. They are not available yet, but the specs have already leaked and will most likely turn out to be true.

 

AMD Radeon R9 390X - http://videocardz.com/55146/amd-radeon-r9-390x-possible-specifications-and-performance-leaked

 

NVIDIA GeForce GTX 980 Ti - http://videocardz.com/55299/nvidia-geforce-gtx-980-ti-to-be-released-after-summer

 

So which one would you recommend, and why? The software I use is 3ds Max, V-Ray, Photoshop, and AutoCAD. What I'm most interested in is viewport performance. As far as I know, if you use V-Ray RT you have no option but to go with Nvidia, because V-Ray RT and AMD don't like each other; is this still the case? I don't use V-Ray RT, so I'm interested only in viewport performance.

 

What do you guys think? Thanks in advance.


Once you reach GK104-level performance (pretty much a GTX 760), the CPU is more often than not the bottleneck, so getting a GK110 (780/780 Ti/Titan) or a 9xx GPU doesn't really bring anything more to the table as far as the viewport goes: the CPU simply cannot feed enough data to the card for the extra shaders to matter.

 

Thus, even with a GTX 750 Ti or 660 Ti, or an R9 270 if you are on the red side, you already shift most of the viewport-acceleration bottleneck to the CPU. There are some cases where 2 GB of VRAM might start to be a hindrance, for certain types of models with high-resolution textures at high screen resolutions, but again, people tend to be over-ambitious when guesstimating what kind of GPU they really need.
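
As a rough illustration of how it is textures, rather than geometry, that eat a 2 GB card. This assumes uncompressed RGBA8 maps with full mip chains (~1.33x overhead); real scenes vary:

# Rough texture VRAM estimate: width * height * 4 bytes (RGBA8),
# times ~1.33 for a full mip chain, times the number of maps.

def texture_mb(width, height, count, bytes_per_texel=4, mip_factor=4/3):
    return width * height * bytes_per_texel * mip_factor * count / 1024 ** 2

print(f"Forty 2K maps: ~{texture_mb(2048, 2048, 40):.0f} MB")   # comfortably under 2 GB
print(f"Forty 4K maps: ~{texture_mb(4096, 4096, 40):.0f} MB")   # blows past a 2 GB card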

 

Better to shift your attention to getting a faster CPU, unless of course you have other plans for your GPU outside the 3ds Max viewport, like GPGPU or gaming, either of which can potentially use up all the GPU grunt you can throw at it.
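
A crude way to test this on your own machine: compare viewport FPS at full size against FPS with the viewport shrunk to roughly a quarter of the pixels. If the number barely moves, the GPU was never the limiter. A sketch of that heuristic; the tolerance threshold is a guess:

# If cutting the pixel count doesn't raise FPS, the GPU wasn't the
# bottleneck -- the CPU feeding draw calls was. Threshold is a guess.

def likely_bottleneck(fps_full, fps_quarter, tolerance=0.15):
    """fps_full: FPS at full viewport size; fps_quarter: at ~1/4 the pixels."""
    if fps_quarter <= fps_full * (1 + tolerance):
        return "CPU-bound: fewer pixels didn't help"
    return "GPU-bound: shrinking the viewport raised FPS"

print(likely_bottleneck(24.0, 25.5))   # barely changed -> CPU-bound
print(likely_bottleneck(24.0, 58.0))   # big jump -> GPU-bound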

Edited by dtolios
