ahmedtawfiq Posted December 12, 2014

Hi, I know this question has been asked a million times before, but I've been searching and I'm getting mixed opinions and results. I'm looking for a good graphics card that can deliver usable FPS in 3ds Max's viewport, as my 4870 is starting to fail me. Currently I can get a Quadro 4000 for ~$135 or a GTX 660 for ~$150. Which one would be better for viewport shading in 3ds Max? I have an Intel W3680 and 24 GB of RAM, and my scenes are usually around 15-25 million polys. I use V-Ray 3.0 and Max 2015. Thanks.
artmaknev Posted December 14, 2014

I have a GTX 660 Ti. It's a pretty good card, but it slows down past 10 million polys, so I'm also looking for a better GPU in the same poly range as you (15-25 million). I'm not sure the Quadro 4000 is better; I had that card at work and the experience was similar to the 660 Ti.
philippelamoureux Posted December 15, 2014

I think which version of Max you use might be important. I used to run 2013 and it was much slower than 2014 or 2015 with my GTX 670. At the same time, it's not really optimized anyway, heh! Try Unreal Engine 4 and see what a real viewport should be!
Graphite Posted December 16, 2014

Sounds like you have a similar setup to mine. I have two GTX 770s, and my frame rate is around 280 FPS with a 9 million poly scene in wireframe. Total CUDA cores are around 3,000 combined, which is about the same as the high-end Quadros. Quadros aren't worth the cost, especially since you're not using the GPU to render (assuming you're not using V-Ray RT). Not to mention, if your board supports it, you can do an SLI setup with two Tesla cards and still be under the cost of a Quadro 5000, with better refresh rates and faster GPU rendering.
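If you want to sanity-check the CUDA core math on your own machine, here is a minimal Python sketch using PyCUDA (assuming it is installed alongside the CUDA driver). The cores-per-SM lookup table is my assumption and only covers the Kepler and Maxwell generations discussed in this thread:

```python
import pycuda.driver as cuda

cuda.init()

# Approximate CUDA cores per streaming multiprocessor, keyed by
# compute capability. Values outside this table print as "unknown".
CORES_PER_SM = {
    (3, 0): 192, (3, 5): 192,   # Kepler (GTX 660/770, Quadro K series)
    (5, 0): 128, (5, 2): 128,   # Maxwell (GTX 980)
}

for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    cc = dev.compute_capability()
    sms = dev.get_attribute(cuda.device_attribute.MULTIPROCESSOR_COUNT)
    cores = sms * CORES_PER_SM.get(cc, 0)
    print(f"{dev.name()}: {sms} SMs, ~{cores or 'unknown'} CUDA cores")
```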
philippelamoureux Posted December 16, 2014

You might want to start considering V-Ray RT now, with the new SP1 addition of light cache support in RT. https://labs.chaosgroup.com/index.php/rendering-rd/5-minute-v-ray-rt-gpu-rendering-architectural-interiors/
philippelamoureux Posted December 20, 2014

I was looking at Nvidia Quadro cards... is there a purpose to these cards? Considering the prices and the not-so-crazy specs compared to the latest GTX 900 series and Titans (especially the Titan Z), it doesn't look like a good deal at all! The Quadro K6000 is around $4,500... I fail to see the point of such a graphics card.
Graphite Posted December 20, 2014

Here are two really good answers to that question:

"It's called market segmentation. Nvidia produces one very configurable chip and then sells it in different configurations that essentially have the same elements, and hence the same bill of materials, to different market segments, although one would usually expect to find higher-quality components on the more expensive Quadro boards. What differentiates Quadro from GeForce is that GeForce usually has its double-precision floating-point performance severely limited, e.g. to 1/4 or 1/8 of that of the Quadro/Tesla GPUs. This limitation is purely artificial and imposed solely to differentiate the gamer/enthusiast segment from the professional segment. Lower DP performance makes GeForce boards bad candidates for things like scientific or engineering computing, and those are the markets the money streams from. Also, Quadros (arguably) have more display channels and faster RAMDACs, which allows them to drive more and higher-resolution screens, the sort of setup perceived as professional for CAD/CAM work."

"Hardware-wise, the Quadro and GeForce cards are often identical. Indeed, it is sometimes possible to convert some models from GeForce into Quadro simply by uploading new firmware and changing a couple of resistor jumpers. The difference is in the intended market, and hence the cost. Quadro cards are intended for CAD. High-end CAD software still uses OpenGL, whereas games and lower-end CAD software use Direct3D (aka DirectX). Quadro cards simply have firmware that is optimized for OpenGL. In the early days OpenGL was better and faster than Direct3D, but now there is little difference. Gaming cards only support a very limited subset of OpenGL, hence they don't run it very well. Some CAD companies, e.g. Dassault with SolidWorks, actively push high-end cards by offering no performant support for DirectX. Other CAD companies, such as Altium with Altium Designer, decided that forcing their customers to buy more expensive cards is not worthwhile when Direct3D is as good as (if not better than) OpenGL these days. Because of the cost, there are often other differences in the hardware, such as less overclocking, more memory, etc., but these have relatively minor effects compared with the firmware support."

- http://stackoverflow.com/questions/10532978/difference-between-nvidia-quadro-and-geforce-cards
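If you want to see that double-precision cap for yourself, here is a rough FP32-vs-FP64 throughput check in Python with CuPy (the library choice is mine, not from the quoted answer, and it assumes CuPy and a CUDA-capable card). Matrix multiplication is not a pure-FLOP microbenchmark, so treat the ratio as indicative only:

```python
import time
import cupy as cp

def gflops(dtype, n=2048, iters=10):
    # Build two n x n matrices on the GPU in the requested precision.
    a = cp.random.random((n, n)).astype(dtype)
    b = cp.random.random((n, n)).astype(dtype)
    cp.matmul(a, b)                      # warm-up run
    cp.cuda.Stream.null.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        cp.matmul(a, b)
    cp.cuda.Stream.null.synchronize()    # wait for the GPU to finish
    dt = (time.perf_counter() - t0) / iters
    return 2 * n ** 3 / dt / 1e9         # ~2*n^3 FLOPs per matmul

sp = gflops(cp.float32)
dp = gflops(cp.float64)
print(f"FP32: {sp:.0f} GFLOP/s, FP64: {dp:.0f} GFLOP/s, ratio {sp / dp:.1f}x")
```

On a GeForce card you should see a much larger FP32/FP64 ratio than on an equivalent Quadro or Tesla.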
philippelamoureux Posted December 21, 2014

Pretty complete answer, hehe. Thanks!
erkutacar Posted January 6, 2015

No way, I had a Quadro 5000... total waste of money! It's basically the same card as a GTX 580. Right now I'm using dual 980 SCs and I still don't get much FPS in the viewport, but I'm on Max 2013; the newer versions are faster.
komyali Posted January 6, 2015

I have a 660 OC, but I'm not sure that adding another one would let me handle more polygons than it does now. Your scene lives inside one card, so if you add another in SLI it will only be faster; it can't handle more polygons than a single card... or am I wrong?
Graphite Posted April 13, 2015

Personally I prefer the GTX cards, as their refresh rates seem better within 3ds Max. I've tried putting them in SLI mode, but since the software doesn't utilize SLI, I found it didn't really do much to improve frame rate. Currently I run two GTX 770s, and they were able to handle a 14 million poly scene in Realistic shaded mode with edges without any lag in the viewport.
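Komyali is right about the memory, by the way: each card in SLI keeps its own full copy of the scene, so per-GPU VRAM, not the sum, is what limits poly count. You can watch this yourself by polling nvidia-smi while a heavy scene is open; a small sketch, assuming nvidia-smi is on your PATH:

```python
import subprocess

# Ask the driver for per-GPU memory figures. With a mirrored SLI scene,
# both cards should show roughly the same memory.used.
out = subprocess.check_output(
    ["nvidia-smi",
     "--query-gpu=index,name,memory.used,memory.total",
     "--format=csv,noheader"],
    text=True,
)
for line in out.strip().splitlines():
    print(line)   # e.g. "0, GeForce GTX 770, 1450 MiB, 2048 MiB"
```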
miroslavstoyanov Posted April 13, 2015

I posted a thread twice but it just doesn't appear?! So I'm going to post my question here; I apologize for the off-topic (or maybe not). I want to buy one of these cards: the AMD R9 390X or the NVIDIA GTX 980 Ti. They are still not available, but the leaked specs will most likely turn out to be true.

AMD Radeon R9 390X - http://videocardz.com/55146/amd-radeon-r9-390x-possible-specifications-and-performance-leaked
NVIDIA GeForce GTX 980 Ti - http://videocardz.com/55299/nvidia-geforce-gtx-980-ti-to-be-released-after-summer

So which one would you recommend, and why? The software I use is 3ds Max, V-Ray, Photoshop, and AutoCAD. What I'm most interested in is viewport performance. As far as I know, if you use V-Ray RT you have no option but to go with Nvidia, because V-Ray RT and AMD don't like each other; is this still the case? I don't use V-Ray RT, so I'm interested only in viewport performance. What do you guys think? Thanks in advance.
philippelamoureux Posted April 13, 2015

I now have my GTX 980 Superclocked edition, and I don't really notice a difference in the 3ds Max viewport compared to my previous GTX 670. Can't wait to try V-Ray RT 3 SP1, though! At the root, the 3ds Max viewport kind of sucks anyway, imo.
Dimitris Tolios Posted April 13, 2015 (edited)

Once you reach GK104-level card performance (pretty much a 760), the CPU is more often than not the bottleneck, so getting a GK110 (780/780 Ti/Titan) or a 9xx GPU doesn't really bring anything more to the table as far as the viewport goes: the CPU simply cannot feed enough data to the card for the extra shaders to matter. So technically, even with a GTX 750 Ti or 660 Ti, or an R9 270 if you are on the red side, you have already shifted most of the viewport-acceleration bottleneck to the CPU. There are some cases where 2 GB of VRAM might start being a hindrance for certain types of model with large textures at high screen resolutions, but again, people tend to be over-ambitious when guesstimating what kind of GPU they really need. Better to shift your attention to getting a faster CPU, unless of course you have other plans for your GPU outside the 3ds Max viewport, like GPGPU or gaming, either of which can potentially use up all the GPU grunt you can throw at it.

Edited April 13, 2015 by dtolios
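Dimitris' bottleneck claim is easy to test: orbit a heavy scene and watch whether one CPU core pegs near 100% while the GPU sits mostly idle. A minimal sketch, assuming psutil is installed and nvidia-smi is on the PATH:

```python
import subprocess
import psutil

# Sample for ~10 seconds while you tumble the viewport in 3ds Max.
for _ in range(10):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    gpu = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader"],
        text=True,
    ).strip()
    print(f"busiest core: {max(per_core):.0f}%  |  GPU util: {gpu}")
```

If the busiest core is saturated while GPU utilization stays low, a faster GPU will not help the viewport; a faster CPU (per core) will.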