
6GB GTX Titan or 2x 3GB GTX 780 for V-Ray RT GPU



Hi all,

 

I'm configuring a new system for 3ds Max and V-Ray. While building scenes I like to spend quite a bit of time adjusting lighting and materials in V-Ray RT for immediate feedback, as opposed to the usual cycle of test render, tweak a light, test render, tweak the same light again, test render, and so on.

 

At the minute I'm on a single GTX 580, but I usually just use V-Ray RT in CPU mode on my i7 2600K. It can be OK sometimes, but with a high polycount (20M+) and V-Ray proxies enabled it lags quite a bit and crashes now and again.

 

So if I upgraded to an i7 3970X, does anyone know whether a single 6GB GTX Titan would be better than 2x 3GB GTX 780s, as the price is fairly similar? And if anyone has used either before, could you let me know how it handles fairly complex scenes, both interior and exterior? It's a big enough investment for me, so any feedback would be greatly appreciated.

 

Cheers,

 

Stephen.


I have 2x GTX 580 3GB and think I will be quite limited by the amount of RAM. My projects will have to be a little "easier" on that front and I have to think twice when I model, but it's okay since I'm still an architecture student.

 

Is it a 1.5GB GTX 580 you have now?

 

Unfortunately the speed improvement two generations later (from GTX 580 to 780) hasn't been that great when you render with physically accurate render engines. 6GB of memory on the GPU, though, is a very neat improvement.

 

http://www.tomshardware.com/reviews/geforce-gtx-titan-opencl-cuda-workstation,3474-15.html

 

Tom's Hardware has some really good GPU render engine benchmarks. No V-Ray RT, but hopefully it compares reasonably well with iray or Octane. I'm pretty sure V-Ray 3.0 will change these things quite drastically. It will be way faster on newer cards like the GTX 780, but the GTX 580 might keep up okay. I hope so, since I will keep my 2x GTX 580 3GB for a while and render some sequences with them.

 

http://www.tomshardware.com/reviews/geforce-gtx-780-performance-review,3516-26.html

 

This is a pretty good link as well.

 

Good luck, hope it helps a bit.


Thanks Dean,

 

Where would I find out such information for certain? I'm not a hardware enthusiast by any stretch, but it's scary to think I may have made a mistake like this and spent a lot of money on a secondary card that would be useless for what I need it for. Again, cheers for the heads up.

 

Stephen.


As Dean said, the video RAM can't be summed up; every card works independently on the full scene data. This also applies to cards with two chips, where you have to divide the advertised RAM capacity of the card in half (GTX 690/790).

Hopefully this whole RAM problem will be solved with the next generation of NVIDIA cards (Maxwell, February-March 2014), which are announced to be able to use system RAM. (http://www.theinquirer.net/inquirer/news/2255921/nvidias-maxwell-gpu-architecture-will-access-system-ram )


It is certain that VRAM in cards doesn't add up. Not for VRay RT, not for any other progressive rendering or GPGPU application.

Not for games either - doesn't matter if you SLI or not. Just doesn't.

 

Each GPU needs to have all the assets available in its own VRam buffer to get started.

 

Dual-GPU cards like the GTX 690 or the Radeon 7990 are often advertised as "4GB" and "6GB" cards, but that is marketing bluntly lying: they are two individual GPUs on a single PCB, SLI/CrossFired together, with one being the primary and the other the secondary. Each one has half of the "advertised" RAM for its own exclusive use, and if an application doesn't support SLI/CFX (like the 3ds Max or Maya viewports as of the 2014 versions, for example), you are only using one GPU and half the RAM.

 

On the "pro" side, you don't need to pair them with identical models as you do with SLI/CFX: i.e. adding a 780 to your current system will be working fine, with you having the option to render with either the 580 (actually the fastest single GPU GTX in VRay RT before the 780 and Titan) or the 780 or both simultaneously - given the scene "fits" in both's VRam individually.

If, say, you need more than 1.5GB of RAM and you have a 1.5GB 580 and the 780, only the 780 will kick in, even if you select them both as active compute cards.
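To put rough numbers on that rule, here is a minimal sketch (plain Python, nothing V-Ray-specific; the 2.2GB scene footprint is made up): capacities never pool, so each card either holds the whole scene or sits the render out.

# Illustrative only: VRAM does not add up across cards. Each GPU must hold
# the entire scene in its own memory, or it drops out of the render.
def usable_gpus(scene_gb, gpus):
    """gpus: list of (name, vram_gb) pairs. Returns the cards that can contribute."""
    return [name for name, vram_gb in gpus if vram_gb >= scene_gb]

# Hypothetical 2.2GB scene on a 1.5GB GTX 580 plus a 3GB GTX 780:
print(usable_gpus(2.2, [("GTX 580 1.5GB", 1.5), ("GTX 780 3GB", 3.0)]))
# -> ['GTX 780 3GB']  (only the 780 kicks in, exactly as described above)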

 

If you want to use the RT engine as an ActiveShade window for checking lighting and shaders from different angles etc., it is actually useful to have two or more cards: a dedicated viewport-accelerating card that doesn't stumble each time you touch your middle mouse button, and one or more GPGPU cards that do just V-Ray RT acceleration. For the final image export, if your "viewport" card has enough VRAM and meaningful GPGPU performance, it can always be added to the mix to help (the K2000, for example, while good at viewports, is nearly useless in GPGPU, and the K4000 is not that much better; twice nearly nothing is not a feat).

Again, ANY mix of cards and performance levels can be used. The slowest one won't drag the others down; results will always come faster, but burning extra watts through slow cards is not that productive.


  • 1 month later...

 

If you have a card like the K5000 or K6000, do you need a second card for Maya + V-Ray? For Maya by itself, everything I read says the K5000 is best (I did not see K6000 benchmarks). When Maya + V-Ray is discussed, everyone says a GTX Titan for GPU rendering, but I have yet to find any mention of a separate viewport card being used; am I right to assume they all have at least two GPUs?

 

With up to three cards to choose from, for Maya 2013 + V-Ray 60+% of the time and Adobe CC the rest, which config would be best? Would a K5000 for the viewport and a Titan for GPU rendering outperform a single Quadro K6000?

 

Thanks a lot for all your thoughtful feedback.

 

C


You don't need 2x GPUs for VRay. One, either Quadro or GTX, will work.

You just have to realize that when you demand GPU compute tasks, you cannot expect fluid viewport performance at the same time.

It is the exact equivalent of rendering on all cores/threads with "normal" priority, and trying to do other CPU intensive tasks simultaneously: something has to give, one of the two or both will be slower and/or sluggish.

 

Having more than one GPU and dedicating one as the primary viewport card allows the rest to perform GPGPU tasks, while the former does nothing but its primary role: moving geometry on the screen. A K2000 or K4000 will do fine as the viewport accelerator in this scenario, though, as mentioned above, their GPGPU potential (in case you wanted to go all-out rendering with those on top of the GTX) is very limited.
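As a rough illustration of that split (just a sketch with made-up device names, not how V-Ray or the driver actually assigns roles), the idea is simply to keep the card driving the monitors out of the compute pool:

# Hypothetical helper: reserve the display card for the viewport, hand the rest to RT.
def split_roles(gpus, display_index=0):
    viewport = gpus[display_index]
    compute = [g for i, g in enumerate(gpus) if i != display_index]
    return viewport, compute

viewport, compute = split_roles(["Quadro K4000", "GTX Titan", "GTX Titan"])
print(viewport)  # 'Quadro K4000' - drives the monitors / OpenGL viewport
print(compute)   # ['GTX Titan', 'GTX Titan'] - ticked as the V-Ray RT compute devices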

 

Using this logic, of course, upgrading from a K4000 to a K5000 will set you back almost as much as a GTX Titan.

That means for the same budget you can have both a pretty decent OpenGL card and a GPGPU (CUDA or OpenCL) accelerator, or just a K5000 that might be a tad more fluid in the viewport with very complex scenes but nowhere close to a Titan in GPGPU.

 

If the K5000 + GTX Titan duo is within your reach and you are serious about progressive GPU renderings, you could also consider a K4000 + 2x GTX Titans, etc.

 

As far as the K6000 goes...

The K6000 has more cores than the Titan (full GK110 die with 2880 cores, like the 780 Ti = 7-8% faster clock for clock) and has 12GB of RAM. But it costs as much as 5x Titans, or a K5000 and 3x Titans, etc.

Of course the Titan can easily be overclocked to cover that 7% without blinking (much like a 780 can be pushed above stock Titan speeds).

 

I don't know how Adobe CC deals with more than one card, to be honest: I don't know if the GPU used for GPGPU acceleration is set automatically, or if you can select from a list of compatible devices, much like progressive rendering engines allow you to.

In either case, the K6000 will offer negligible advantage in speed for anyone not afraid of a 10% overclock (stock coolers allow Titans and 780s to hit 1200MHz clocks without overheating; that's a 30+% OC over stock Titan speeds), given of course that you don't care about the 12GB of VRAM. By the same logic, the 780 and 780 Ti remain alternatives to the Titan, as the only real difference in favor of the Titan is VRAM.
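For what it's worth, the arithmetic behind those percentages (taking the Titan's stock boost clock of roughly 876MHz as the reference; exact clocks vary from board to board) works out like this:

titan_cores, full_gk110_cores = 2688, 2880      # Titan vs 780Ti / K6000 / Titan Black
core_gap = full_gk110_cores / titan_cores - 1
print(round(core_gap * 100, 1))                 # ~7.1% more cores, clock for clock

stock_boost = 876.0                             # MHz, reference Titan boost clock
print(round(stock_boost * (1 + core_gap)))      # ~939 MHz on a Titan erases that core gap

print(round((1200 / stock_boost - 1) * 100))    # 1200MHz is ~37% over the stock boost clock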

Even then, it is sad that a Radeon 7950, which sells for $180-200 lately, humiliates any of the above in OpenCL and Adobe CC.

 

With the above in mind, the Quadro + GTX duo (or trio) is a far more versatile solution and in most GPGPU cases vastly faster than a single, more expensive Quadro.

Edited by dtolios

  • 2 months later...

Hi,

Sorry to bump this old thread, and apologies for my English.

 

I am an architect and want to build a new computer, but I'm a little confused about graphics cards.

I usually work with ArchiCAD, 3ds Max, V-Ray RT, iray, SketchUp, and sometimes Revit and Vectorworks.

I may be trying Maxwell Render soon.

 

I read carefully what you said here, but I still have a few questions.

 

Do you think a PNY Quadro 4000 could work instead of a Quadro K2000?

Is the Quadro 4000 really that much less capable?

 

How many polygons will it take to saturate the GTX 780's 3GB of memory?

 

Which configuration do you think is best in terms of price/efficiency:

Quadro K2000 + GTX-780

Quadro 4000 + 2xGTX-780

Quadro 4000 + GTX Titan

Quadro 4000 + GTX Titan + GTX-780 (very (too) expensive !)

 

Finally, in case there were three graphics cards, which motherboard would you choose?

(16x 8x 8x: Asus Z87-WS C2)

(8x 8x 4x: Asus Z87-DELUXE C2 / Asus Z87-PRO C2)

(8x 4x 4x: Asus Z87-EXPERT C2)

(8x 8x 2x: Asus Z87-PLUS C2)

 

That's a lot of questions, but I'm a simple guy when it comes to hardware.

Thank you very much for your answers.

 

Have a good day.


  • 2 weeks later...
Have you read up on NVIDIA's Maximus configurations? This may be of interest:

 

http://www.nvidia.com/object/multi-gpu-technology.html

 

Well, that's what we are trying to do, only replacing super-expensive Teslas with GTX cards as accelerators.

Ray tracing apparently doesn't depend on DP/FP64 performance, so DP-limited GTX cards work as well as their "pro" counterparts.

 

Maximus is a great option, but far from a necessity for the most part, as most apps can utilize multiple GPUs regardless of it being enabled.


OK, I see - yes, it does seem expensive. I'm interested in this subject because I splurged on a Quadro 6000 a couple of years ago, and as great as it is, I still ask myself if I shopped smart. I'm forever reading that yet another great CG guy uses a gaming card, and I wonder if I missed a trick; but of course, without using both on the same system, one can't easily compare.

Incidentally, I was advised by a more knowledgeable peer that the GPU RAM would be an issue with those two-in-one cards. I guess that's part of what one needs to work around, right?

Edited by TomasEsperanza


 

I don't get what you mean by "two in one cards".

If you mean dual-GPU cards like the GTX 590 or 690, which are technically two cards in SLI mode on a single PCB, then yes, you are right; there is some misinformation going around, as those are marketed as 3GB and 4GB cards respectively, while in reality each of the two GPUs has access only to its own memory bank, or half of the total memory on the card. The memory controllers of the two GPUs do not communicate, and the RAM is not shared.

 

Maximus has no issue with that; in fact, that is how most GPGPU programs use GPU accelerators: each card has to fit whatever needs to be fitted in its own memory, and a card that can't simply doesn't contribute to the computation, while the rest carry on business as usual.

 

You don't need identical cards, and you don't need SLI between them. Just a CUDA- or OpenCL-compatible GPU and enough memory on that GPU for the given task.


I just want to say thanks to Dimitris, who has obviously been very helpful to those of us asking questions here. Cheers Mate :)

 

Also, this topic gets me wondering:

 

How does one upgrade from a Quadro 6000 (not the K model), considering these three specific factors: a) two large monitors are being used, b) large scenes are present, and c) high-quality RT (V-Ray) is required?

 

At present the scenes and the monitors are OK; however, add RT to that and either the monitors, or RT, or both can become sluggish with heavy scenes.

 

If Dimitris or anyone else can recommend an efficient (both financially and otherwise) augmentation to this hardware configuration, it would be much appreciated.

 

I am hoping that the addition of a GTX may help, but how, I am not so sure. Both Viewport and RT quality are desired.

 

Regards,

 

Tom


Thanks Juraj,

 

No, I'm not rich :) (in fact, because I have already spent a lot is exactly why I need to be astute now!)

So when you say "nicely" priced, I assume this would be the less expensive and more economical addition?

(I do think a 12GB K6000 does seem extravagant)

 

The 6GB GTX Titan is attractive; would you recommend using the Quadro 6000 or the GTX Titan for RT? (Which way around?)

 

Thanks

Edited by TomasEsperanza

They are both the same card at the core (together with the Tesla K40) and will provide the same performance under CUDA, but the K6000 is more versatile with 12GB of RAM, which is the bare minimum I would even consider if GPU rendering is used to achieve the final result and not only for tests early on. Of course, it costs 4600 dollars/3600 euros, and that is not so nice. The Titan still has a hefty price tag, but it's actually a steal for what it offers. 6GB is OK; if one can compromise and make scenes fit into that, it's the best choice out there. Highly clocked 780/780 Ti cards offer slightly better performance, but with much less VRAM, thus compromising GPU rendering versatility further.

But you own a Quadro 6000, which still costs something like 3000+ even now, way past its time :-) Maybe it's not really necessary to upgrade it right now. It would be worth waiting to see what this year brings; the NVIDIA Maxwell generation is slowly coming out at the moment.


Yes, always worth keeping one eye on the market. I think I'll go read about the NVIDIA Maxwell generation; I wasn't aware of it, to be honest. Cheers

 

Maxwell in the official 8xx series will be out in the second half of 2014; that's some time from now, and I don't know if they will come out with a model that rivals the GK110 cards in compute straight away. It could be 2015 when they launch the GM110 or whatever the replacement will be.

 

The first Maxwell part out is the 750, which is promising in gaming and especially in performance per watt versus the 650 it is replacing, yet still too slow to compete with high-end Kepler cards.


So have you guys seen the latest "Black edition" Titan? :-) Same performance as the 780 Ti, but with the 6GB of RAM of the Titan. Smart choice; now an exclusive design from some of the vendors would be really appreciated.

 

It will be a fancy card, but I would not get my hopes up for it... the original Titan already has a GK110 with 14 of 15 SMX units enabled. That's 2688 CUDA cores.

The "Black" will be featuring all 15 SMX units, or 2880 cores. That's 7% more.

 

Also open is the scenario of the Black not featuring the same FP64 performance, and though this is of little importance for most users looking at its GPGPU performance for the CG world (and of course games, the prime market for the card), it might be important in defining its appeal to other markets (e.g. scientific research or financial modeling). Should this be a limited-supply card, like the original Titan, the fewer parties interested in volume, the longer the card will be available and the longer it will stick with a price at or below MSRP (the Titan was selling for more than MSRP a couple of months after its launch due to supply shortages).

 

Most of the performance advantage the 780 Ti (and the new Black) have over the Titan is due to higher stock clocks.

But the Titan (and the non-Ti 780) have better voltage control and can be overclocked higher than either of their full-GK110 counterparts, easily overcoming the 7% core deficit. For our purposes, heavy overclocking of either version is not that wise (too much power demand when deploying more than 1-2 cards), but a few MHz can be gained without raising the voltage at all.

 

In a nutshell, be ready for used Titans to hit the second-hand market as enthusiasts chase the latest and greatest - that's probably the only tangible benefit.


I am deathly afraid of used computer components :-) It's odd, because I've been building high-end mountain bikes my whole life entirely from used parts (otherwise I couldn't afford anything).

 

I don't see the card that negatively. I am aware it's not a drastic improvement, but it's like buying a 780 Ti with 6GB of RAM, and that's a pretty good thing to have. [It also seems to be clocked slightly higher than the 780 Ti, so it's not just the shader cores.]

 

I'm going to buy a 4K monitor very soon and then have some fun in CryEngine :-)

 

I am also pretty interested in how the 790 will benchmark. It's funny how far GPUs have come over the past years... and CPUs have not.

Edited by RyderSK

I am not at all negative about the card... just disappointed about the turn of things. It's this awkward period where crypto-mining has practically removed high-end AMD products from the market (or is pushing prices of new and used parts to insanity, at least in the US), and NVIDIA is messing with us, releasing products that are just a "reheat" but still demanding four figures for them.

 

The only release rumor that got me kind of hyped was that of a "790", i.e. a dual-GK110 card. Some sources were mentioning 10GB (i.e. 5GB per GPU, which would be a realistic number if a 320-bit RAM bus were used, with 5 of the 6 64-bit memory controllers of the GK110 active). Now, that would be a hell of a GPGPU card for the consumer. It would have fewer SMX units per GPU, but the compute density would still be amazing: imagine 6 GK110 cores alongside a Quadro/FirePro, or 8 GK110s fitting in a single tower using a readily available E-ATX board... It would be an expensive piece of kit, but...
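For the record, the bus arithmetic behind that rumor (just the numbers as stated above, nothing confirmed) is straightforward:

controllers_active, controller_width = 5, 64    # 5 of the GK110's six 64-bit memory controllers
print(controllers_active * controller_width)    # 320-bit bus, as the rumor suggests

rumored_total_gb, gpus_on_card = 10, 2
print(rumored_total_gb / gpus_on_card)          # 5.0 GB per GPU - and, again, not pooled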

 

The Titan Black I see as nothing more than my current Titan... it probably doesn't even have a black PCB! (no pimp, pfff).

 

The R9 290X and of course the upcoming Maxwell cards (second half of 2014... probably teasers around early fall, readily available before Christmas as usual) are far more exciting IMHO than yet "another" single-GK110 card that will be pushing $1,000.

 

EDIT: First Titan Black Review

 

Apparently FP64 is still there, and though there is no substantial increase in performance, as expected, this is surely a fast card. If it weren't for CUDA, though... there are far better offerings.

Edited by dtolios

  • 3 weeks later...

Hello,

Well, over the past few years I've read a lot about GPU rendering. The last time I read about the subject, V-Ray RT was not 100% ready for production. In general, the cards are expensive and their RAM insufficient. My first question is this: have things changed?

As part of my research, I ended up buying a GTX 680 3GB last year. I didn't see any improvement over the 580 I had before. In fact, comparing times on a specific benchmark scene from the V-Ray forums, I was something like 30-40% slower than people using the same card as me. Plus, only a specific driver version was working with V-Ray RT; after every driver update I did, RT stopped working. My advice so far? Don't base your research on benchmarks. Unfortunately, a lot of parameters play a major role in how a GPU behaves from system to system. I still haven't figured out why mine is so slow. It is slow in both V-Ray and After Effects... while it runs every game benchmark great, it doesn't fit my work needs.

So my final question is this: does anyone here have actual experience with the Titan? It's not only the performance, it's also the temperature. My 680 runs burning hot when I run RT, and this card is simply not made for that kind of abuse. It's not a cheap card at all; I think I spent around 600 euros on it. The Titan is also promoted as a gaming card, right? So, is there anyone with actual experience of the card? Thank you.

