
Gpu



What is the best card right now for 3ds Max + V-Ray, in terms of performance per money? I am planning on two monitors (maybe one will be a 3D monitor), and I am also looking at GTX cards with room for an SLI connection.

The best performance/money cards for me:

GTX 660 Ti MSI http://www.newegg.com/Product/Product.aspx?Item=N82E16814127696 , and a second one:

GTX 660 Ti EVGA http://www.newegg.com/Product/Product.aspx?Item=N82E16814130837

Should I go with the EVGA because of its two fans?


The best card for what you want is not easy to define. Will you also be gaming on this PC? If not, a 3D monitor and SLI capability are irrelevant for 3ds Max.

 

For purely viewport acceleration in a 3D CAD machine, even a Quadro 600 will work better than a 660 Ti - it works better than my GTX 670 SC 4GB... :mad:

A Quadro 2000 will be better, of course, if you can swallow the price tag. As far as gaming and/or iray/V-Ray RT go, those are far slower than a 5xx/6xx GTX, but...

 

Between those two 660s:

MSI's dual-fan design is proven to be one of the best out there. Along with Gigabyte's, their open-shroud fan design has given them better temperatures and lower noise levels compared with the single-fan and/or closed-shroud cooling options from other manufacturers.

The only downside is that with open or no shrouds, a portion of the hot air is not exhausted directly outside the case but is released inside it, raising the air temperature around other components a tad. In cases with good airflow this is almost irrelevant, and if the component with the highest heat dissipation is happy (the GPU, in most mid-range to high-end systems), everything is happy.

 

For cases with tight internals / limited airflow, a closed shroud could be a better option. Note that this EVGA's cooling solution is not fully closed either - just not as open as the MSI's.

 

Most single-fan reference coolers, for both nVidia and AMD cards, are closed shroud - i.e. there is a single exhaust direction towards the rear of the card. With modern cards offering multiple display ports, this becomes increasingly problematic, as even dual-slot cards are left with only a small usable grill area for exhaust.


4 weeks later...

Not gaming as in playing games, but running a game engine :D

But I don't want to spend a lot of money; my limit is 250€ ($330), though 200€ ($265) would be best for me. Prices are a little bit hotter in Europe than in America :(

What should I buy, Radeon or GeForce? Personally I don't like Radeon...

I also don't want a card under 1.25 GB, because my dead GTX 470 was like that, so now I need more :) 2 GB would be ideal.

Who is the best manufacturer of the GTX 660? I think that is the best card for me?


Hmm... so the picture of your artifacts with Steam in the background should not have concerned me - got it :)

 

At this price point, close to $300 or so, the best card for all-around use is the Radeon 7950. But if what you are after is CUDA, or it has to be nVidia, then I guess the 660/660 Ti is the choice to make.

 

You might get a good deal on an older (even used?), cheaper GTX 570 with 1.25-2.5GB RAM, or a GTX 580 with 1.5-3GB. If CUDA is what you want it for, those will be as fast as, or a tad faster than, the top-of-the-line 6xx cards. And as far as "game engines" go, for a single 1080p-1200p monitor (talking about where the engine is running, not how many monitors you have hooked up) those 5xx cards work just great - even with the latest engines.


Which one is better in the 3ds Max viewport?

 

ASUS GTX680-DC2T-2GD5 GeForce GTX 680 2GB 256-bit GDDR5 ($540)

Core Clock: 1137MHz

Boost Clock: 1201MHz

CUDA Cores: 1536

 

Or

 

ASUS GTX680-DC2-4GD5 GeForce GTX 680 4GB 256-bit GDDR5 ($570)

Core Clock: 1006MHz

Boost Clock: 1058MHz

CUDA Cores: 1536


This is a non-question. Both are the same card; one has slightly higher clocks, so it will be slightly faster.

 

I don't believe you will be working on scenes that need more than 2GB of VRAM, but if you can get the 4GB version for only $30 more, go for it.

 

However, if all you care about is viewport performance, for this kind of money you can get a Quadro 2000, or a used Quadro 4000, that will beat those cards.

 

And please - try to place your posts in topics that are relevant, and/or name your topics in a way that is descriptive and helpful to others searching for similar subjects.

An almost out-of-context question in a topic named "gpu" is counterproductive in every possible way.

Edited by dtolios

Thank you, Dimitris, for your reply.

Excuse me for the misplaced post. I read your previous posts in this topic and thought you were talking about graphics cards, so I asked here.

I'm confused about which spec matters most in a graphics card for the viewport: core clock? memory (interface/bandwidth)? CUDA cores?

And for GPU rendering, which one is faster, the Quadro 4000 or the GTX 680?


I am confused too :)

Generally, GeForce drivers do only so-so with viewports. This is balanced out by the raw performance of high-end gaming cards, so a GTX 580/680 etc. is generally "good enough" for most people as far as viewports go, but it won't surpass a Quadro 4000 - and in many cases not even a Quadro 2000 or 600 - as the Quadros have drivers that work better with most programs, despite their comparatively underwhelming hardware.

 

You also have to "know" what kind of architecture you are dealing with: each generation might be a direct evolution of the previous one, or based on a completely different architecture:

 

For example, the Quadro 5000/4000/2000/600 are all 1st-gen Fermi architecture, which was also improved upon and used in 5th-generation GTX cards like the 560/570/580. For GPU rendering, 4xx and 5xx are comparable - i.e. a 4xx with the same core count as a 5xx but slower clocks will be roughly as much slower as the clock difference indicates.
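That "same architecture, scale by cores × clock" rule of thumb can be sketched numerically. The card specs below are the published reference figures for two Fermi cards; the model itself is only a rough estimate, not a benchmark:

```python
def relative_speed(cores, clock_mhz):
    """Naive per-architecture throughput estimate: cores x clock.
    Only meaningful within one architecture (e.g. Fermi vs Fermi),
    never across Fermi vs Kepler."""
    return cores * clock_mhz

# Reference specs for two Fermi-generation cards:
gtx470 = relative_speed(448, 607)   # GTX 470: 448 cores @ 607 MHz
gtx570 = relative_speed(480, 732)   # GTX 570: 480 cores @ 732 MHz

print(f"GTX 570 vs GTX 470: {gtx570 / gtx470:.2f}x")  # ~1.29x
```

So the 570's extra cores and clock predict roughly a 30% advantage over the 470, which is in the ballpark of what GPU-render benchmarks of the era showed.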

 

The 6xx cards are based on the "Kepler" architecture, which is different from "Fermi": much higher clocks and many more cores, but worse compute performance per core. The GTX 680 has 3x the cores of the GTX 580, yet in general it is worse for GPU rendering and GPU-accelerated apps.

 

Fermi is thus much more "efficient" per core for computation tasks, yet that doesn't mean all Fermi cards are faster than Kepler cards - remember, the mid-range Quadro cards are less powerful as far as raw hardware goes, and the Fermi Quadros are based on the 4xx cores, so a 670/680 will outperform those in pure computation scenarios.

 

Another great advantage of the Kepler cards is that they run much cooler and draw less power, so if you were after absolute performance within a reasonably sized case, you could power 4x Kepler cards with a 1000-1200W PSU, whereas with GTX 580s you would be stressing it with anything more than 3x.

The 10-15% computation advantage of the 580 is not enough to outweigh adding an extra 670/680.
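A back-of-the-envelope version of that PSU budgeting might look like this. The TDP figures are the nominal board powers (GTX 680 ≈ 195W, GTX 580 ≈ 244W); the 250W baseline for the rest of the system and the 20% headroom factor are assumptions, not specs:

```python
def psu_needed(gpu_tdp_w, n_gpus, rest_of_system_w=250, headroom=1.2):
    """Estimate PSU wattage: GPUs plus CPU/board/drives, with 20% margin."""
    return (gpu_tdp_w * n_gpus + rest_of_system_w) * headroom

# Nominal board powers: GTX 680 ~195W (Kepler), GTX 580 ~244W (Fermi)
print(f"4x GTX 680: {psu_needed(195, 4):.0f} W")  # 1236 W
print(f"3x GTX 580: {psu_needed(244, 3):.0f} W")  # 1178 W
```

Four Keplers land inside a 1200W-class PSU's envelope, while three 580s already push close to it - which is the point made above.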

 

670/680 cards also draw much less power when idling, and come with larger VRAM buffers - up to 4GB, instead of 3GB for the 580.

Edited by dtolios

Wow, you are great with hardware info, Dimitris!! You should work for NVIDIA or Intel...!

A very useful post for me; I finally understand the differences between GTX and Quadro, and between Kepler and Fermi, very clearly.

And one more question: in a workstation with a Core i7 3770K and a GTX 680, which is faster, GPU rendering or CPU rendering?


"in a workstation with a corei7 3770k and gtx 680 which one is faster, GPU rendering or CPU rendering?"

I don't think you can compare them directly, but pure logic will tell you the GPU is much faster than the CPU. I am using Max and V-Ray; V-Ray doesn't have full GPU rendering, only RT, which I use for previews, because with GPU rendering I can't use the full power of V-Ray, and it doesn't support some map types. But I think the next V-Ray will have an option for full GPU potential... I hope so :D You can use iray for GPU rendering; I never have, but I will try it. Also, I am not sure what would happen to GTX cards if you stress them at 100% for 24 hours or more - that is why Quadro cards are better :) Dimitris will tell me if I am right :D

 

Now, about my cards: I made a decision, I am going for GTX 660 Ti cards, looking at Gigabyte, $351 in Germany... We don't have Newegg here in Europe :(

Which manufacturer do you think I should choose for the GTX 660 Ti?

 

P.S.

GTX660-Ti-N66TOC-2GD-2048MB

Is this overclocked?

Edited by komyali

Consumer cards have been doing A LOT of non-stop computing in the last few years, especially for Folding@home, BOINC, Bitcoin mining, etc. - applications that utilize GPUs at almost 100% and can be left running for weeks or months.

 

Yes, there is wear and tear, and eventually cards with even minor defects will give in under such stress, but that is true for both GTX and Quadro cards (and of course the AMD equivalents, which are also used a lot). I doubt that any GPU rendering workflow can actually be harsher than the above distributed-computing routines, or as uninterrupted for as long.

 

Cards run hot, but well within their specs - at least in my experience folding with my GTX 670.

Get a card with a good warranty. A mild O/C is fine, provided you have good cooling.

Yes, this Gigabyte has a mild factory O/C. You can always do a quick Google search and find a card's original specs to compare:

http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-660ti/specifications

 

Reference 660 Ti = 915MHz base / 980MHz boost

This one runs at 1032 MHz
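For reference, that factory overclock works out to roughly 13% over the stock base clock:

```python
reference_base = 915   # MHz, reference GTX 660 Ti base clock
factory_base   = 1032  # MHz, this Gigabyte card's base clock

oc_percent = (factory_base - reference_base) / reference_base * 100
print(f"Factory overclock: +{oc_percent:.1f}%")  # +12.8%
```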

 

Your 4xx died without such harsh conditions. It happens. Usually a small quality-control failure lets a sub-component slip in and compromise the integrity of the board - usually those pieces are capacitors and/or MOSFETs that handle power. Remember, newer GPUs are made to run at 80-90°C, and few consumer electronics can handle that for long. The CPU and GPU chips themselves are built to the highest standards, and an actual core failing is extremely rare - but not the little guys around them.

Edited by dtolios

I cannot vote for either. I just know that the open-shroud coolers from MSI and Gigabyte appear to give better temperatures on similarly clocked cards compared with the reference boards.

 

Asus has been receiving some bad critiques lately as far as support goes, but it is also a fact that the percentage of buyers who opt for Asus is significantly higher than for any other company. Dissatisfied customers are far more likely to go vocal and complain in communities or user reviews, so it is hard for someone trying to be objective to trust a couple of bad user reviews here and there without knowing what share of the market a company controls (i.e. if Asus has sold 500K cards, you will probably see more complaints than for a company that sold 100K cards, even if the latter has worse QC).
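The point about normalizing complaints by sales volume can be made with a two-line calculation. All the numbers here are made up for illustration, mirroring the hypothetical 500K vs 100K example above:

```python
def complaint_rate(complaints, units_sold):
    """Complaints per unit sold - the number that actually matters."""
    return complaints / units_sold

big_vendor   = complaint_rate(500, 500_000)   # more complaints in absolute terms
small_vendor = complaint_rate(150, 100_000)   # fewer complaints, higher rate

print(big_vendor < small_vendor)  # True: the bigger vendor's QC looks better
```

The vendor with more total complaints can still have the lower defect rate, which is why raw review counts alone are a poor QC signal.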
