Chris MacDonald Posted March 26, 2014
Can't believe there isn't already a topic about this. Quite a game-changer, I think... especially when you consider GPU render engines don't need to be Quadros, the way fluid dynamics etc. do. So... who's buying one?!
http://www.techradar.com/news/computing-components/graphics-cards/nvidia-announces-new-5k-ready-titan-z-graphics-card-1237079
http://blogs.nvidia.com/blog/2014/03/25/titan-z/
Dimitris Tolios Posted March 26, 2014 (edited)
Yep, a beast of a card... it is actually two 780 Ti/Titan Blacks (i.e. full GK110 cores with all 15 SMX units enabled - the original Titan had 14/15). The compute density achievable with this beast is nuts... you can have up to 8x GK110 in a midi case!
Ofc it is a dual-GPU card, and since unified memory is not a feature of Kepler cards (it's promised with the upcoming Maxwell, but you never know), apples to apples it should not be referred to as a "12GB card" but as a 2x 6GB card: each GPU has access to 6GB, and for both gaming and GPGPU each one needs to load all the assets into its own buffer... both AMD and nVidia are misleading when they advertise their twin-GPU cards with "X memory onboard".
I don't know who will be getting one... I probably won't; MSRP is at $3,000. Also be aware that EVGA has already announced upcoming 6GB versions of the 780 and 780 Ti, so those who bought one of those from EVGA in the last 2-3 months might qualify to get them through the step-up program. Ofc you need to be in a region covered by this.
Since SLI is not a "thing" for modeling app viewports, this card has no utility for us unless it is purely for GPGPU. And since you can buy 3x Titan Blacks for the same price, this thing is out mostly for show and for claiming the absolute fastest-card crown regardless of cost.
I have no idea on its power demand yet... it should be less than 2x 780 Ti/Titan Black, probably at the expense of base/turbo clocks, as those cards do draw in the region of 250-300W when pushed (around 350-380W at the wall plug, including CPU/mobo etc., under typical 100% GPU and some CPU utilization). Even with lower voltages/clocks, this card should be pulling more than 400W under load, needing at least 2x 8-pin + 1x 6-pin or 3x 8-pin aux connectors to be on the "safe" side.
I would call it a great ITX case card... too bad the ITX cases I've found are SFX/Flex-ATX PSU powered, and we don't get bigger than 450W PSUs for those. Oh well, think positive ($3,000 aside): AMD managed to stay just above 300W with the 7990, maybe nVidia can hold this one under 400W.
Edited March 26, 2014 by dtolios
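A quick back-of-the-envelope on the connector figures above (just a sketch - the wattages are the PCI Express spec limits of 75W from the slot, 75W per 6-pin and 150W per 8-pin aux connector; the helper function is mine, for illustration):

```python
# Spec-compliant power budget for a graphics card, per PCIe limits.
PCIE_SLOT_W = 75                       # power deliverable through the x16 slot
AUX_W = {"6pin": 75, "8pin": 150}      # per auxiliary connector

def board_power_budget(aux_connectors):
    """Max in-spec draw for a card with the given aux connectors."""
    return PCIE_SLOT_W + sum(AUX_W[c] for c in aux_connectors)

# The two layouts suggested in the post:
print(board_power_budget(["8pin", "8pin", "6pin"]))  # 450
print(board_power_budget(["8pin", "8pin", "8pin"]))  # 525
```

Either layout leaves headroom over the estimated 400W+ load, which is why those were called the "safe" options.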
Chris MacDonald Posted March 26, 2014 Author
Ofc it is a dual-GPU card, and since unified memory is not a feature of Kepler cards (it's promised with the upcoming Maxwell, but you never know), apples to apples it should not be referred to as a "12GB card" but as a 2x 6GB card: each GPU has access to 6GB, and for both gaming and GPGPU each one needs to load all the assets into its own buffer... both AMD and nVidia are misleading when they advertise their twin-GPU cards with "X memory onboard"
Well, that's a deal breaker. They really, really do need to stop advertising this way!
beestee Posted March 26, 2014
No love for the Iray VCA? At $50,000 it kinda reminds me of the old SGI workstations required to run Max.
klonk Posted March 26, 2014
No love for the Iray VCA? At $50,000 it kinda reminds me of the old SGI workstations required to run Max.
That's probably also because Nvidia is the old SGI graphics department that was sold off, if I remember correctly.
Dimitris Tolios Posted March 26, 2014 (edited)
You could have built an equivalent iray VCA system for quite some time now... just replace the Titans I'm mentioning there with K6000 cards (your choice if you have to have 12GB per GPU, Chris)... 8x $4,500 give or take, plus a couple of Xeons (you don't really need 2x 10-cores if it is an iray workstation) and the $3,500 chassis, and you might have some decent change left from those $50,000.
Edit: to be fair, the VCA has 2x 10GBit NICs... but this Tyan barebone has been out for 2 years, so it's not cutting edge (some retailers are pushing it for $2,500 now).
Edited March 27, 2014 by dtolios
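Adding up that DIY bill of materials (all prices are the ballpark figures quoted in the post, and the "Xeons + misc" line is an assumed placeholder for the CPUs/RAM, not a real quote):

```python
# Rough DIY "VCA-equivalent" build cost vs. the $50,000 turnkey box.
parts = {
    "K6000 x8": 8 * 4500,              # 8 cards at ~$4,500 each
    "chassis (Tyan barebone)": 3500,   # the $3,500 chassis mentioned
    "Xeons + misc": 4000,              # assumed: a couple of mid-range Xeons etc.
}
total = sum(parts.values())
print(total)          # 43500
print(50000 - total)  # 6500 -> the "decent change" left over
```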
RyderSK Posted March 26, 2014
Yeah, I am also annoyed about the 12GB advert. I knew it was dual-GPU, but I still thought something could have been different about this one and they really meant 2x 12GB... but it looks like not. Well...
The Iray VCA, lol. I don't know what to say about their rotating-car propaganda and super exaggerated speed-up claims; it was funny 4 years ago, it's still funny now. But I guess people are easily impressed. When I can get 12GB of VRAM on a single card for under 1,500 euros (and not 4,000+ like the Quadro K6000 or Tesla K40), then and only then will I try it. In the meantime, I have 124 Ivy Bridge buckets for a fraction of such price, with identical rendering speed in any engine and no memory limitations.
Dimitris Tolios Posted March 27, 2014
It's funny because for the VCA, the same ppl specify "12GB per GPU", as it is a product more or less aimed at people who want "more" from their investment. You would expect the same treatment for consumers putting down $3,000 (or even a third of that) for a single card, but I guess it still works for them. And when it works, they won't fix it.
Dimitris Tolios Posted March 27, 2014
I won't start a new thread, but let's throw some fire(pro) in here... AMD FirePro W9100 - fully enabled Hawaii GPU, 2816 SPs (around 40% more than the W9000) and... 16GB of VRAM! (8Gb GDDR5 modules; 8Gb = 8 gigabit = 1GB = 1 gigabyte), plus up to six (6) 4K displays simultaneously. Man, that's some OpenCL grunt... now, MAKE IT WORK WITH THAT FRIKKIN VRAY RT GPU!
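Since the Gb-vs-GB thing trips people up constantly, here is the conversion spelled out (the sixteen-module count is my inference from 16GB built out of 8Gb chips, not a confirmed board layout):

```python
# Gigabit (Gb) vs gigabyte (GB): memory chips are rated in bits.
def gbit_to_gbyte(gbit):
    """Convert a gigabit figure to gigabytes (8 bits per byte)."""
    return gbit / 8

module_gb = gbit_to_gbyte(8)   # one 8Gb GDDR5 module = 1GB
print(module_gb)               # 1.0
print(16 * module_gb)          # presumably 16 such modules -> 16GB of VRAM
```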
Chris MacDonald Posted March 27, 2014 Author
IF that was ever made to work with V-Ray RT, I'd be all over it like a tramp on a hot Big Mac.
Dimitris Tolios Posted March 27, 2014
IF that was ever made to work with V-Ray RT, I'd be all over it like a tramp on a hot Big Mac.
LoL... (so sad to laugh.) Well, to be more precise, compatibility does come and go with both 2.4 and now 3.0. It's the applied performance that doesn't line up with the rest of the OpenCL benches and the Luxmark renderer. AMD's devs hinted at a breakthrough with Blender - it was a tweet, and I don't follow Twitter enough to know details - perhaps Cycles will get a proper-for-AMD OpenCL mode; otherwise they would not hint/brag etc. In Chaosgroup's test scene (a simple scene used in the forums for members to see whether they are in the ballpark of how fast they should be, and roll back drivers if necessary), 7970s are getting worse times than GTX 670s - around 2.5x slower than Titans & K6000s - when in other benches even 7950s are much faster than Titans. The W9100, much like its 290X sibling, should be a beast, but delivers like a kitten.
RyderSK Posted March 27, 2014
I don't know what it is with Chaos's implementation of OpenCL; they always hint that it's AMD's fault, but I just don't know. They couldn't get much proper performance out of Xeon Phi either.
Dimitris Tolios Posted March 27, 2014
I don't know what it is with Chaos's implementation of OpenCL; they always hint that it's AMD's fault, but I just don't know. They couldn't get much proper performance out of Xeon Phi either.
I know. It is a back-and-forth thing, one side blaming the other. But seeing Adobe managing to exploit AMD's advantage in OpenCL - and not just in a benchmark here and there - I tend to believe that Chaosgroup has a decent share of the responsibility. Many complain that OpenCL is trickier than CUDA to implement, as support is not as mature, but... if more than one vendor gets it right, the problem can't be blamed on just one side.
beestee Posted March 27, 2014
Just saw this interesting tidbit on the Chaosgroup forums in response to a post about the VCA:
"We actually got V-Ray RT GPU running on it - final images in a few seconds, it's pretty cool. Best regards, Vlado"
Might not be too hard to justify the steep price if you can keep the thing chugging. I could see it being viable for a studio with 10 artists or more. Might even be a good option if somebody wanted to start their own render farm service.
Dimitris Tolios Posted March 27, 2014 (edited)
Thing is, the whole idea behind such an investment only flies when iray or V-Ray RT are actually "nearly real time". You don't care about imperfections and lack of features etc., because the mere fact that you can orbit around the thing in real time is... well, that: important enough in itself for niche applications that favor speed over absolute quality.
So it is not about how big a studio needs to be to justify working with it; it's more about the need to present stuff live, something better suited to a large architecture or industrial design studio (like the car in nVidia's example) than to arch viz studios - unless ofc you are hired as a consultant exactly to facilitate that live rendering presentation.
For production stills, you can buy lots of x86 compute with a $50,000-100,000 render farm - especially if you are not buying an OEM solution and not writing your own code for it (e.g. what film studios do on large productions, like Avatar, where technically they invented their own GPU rendering engine for the film). In this scenario there is little to nothing changing in how you approach things; you don't have to re-adapt or compromise your workflow due to the RT engine's limitations. Both approaches boil down to "moar cores" really.
Thus I don't know if you could really "sell" the iray remote farm idea that well at this point. Could be a hit, I don't know; it seems niche right now.
Edited March 27, 2014 by dtolios
johnrygielski Posted April 13, 2014
I only see the Titan Z being really useful in cluster computing environments, where physical compute density is at a premium. It's a triple-slot-width card, so 2 Titan Blacks will give you more performance for $1,000 less and only use one more slot. I say more performance because a single Titan Black delivers 5 TFLOPs, whereas a Titan Z delivers 8 TFLOPs instead of 10, probably because it's clocked lower for power considerations. So only in large clusters will that one-slot-width saving really amount to something that justifies the price premium.
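Putting dollar figures on that comparison (prices and TFLOP numbers are the ones quoted in the post: Titan Black ~$1,000 at ~5 TFLOPs single-precision, Titan Z $3,000 at 8 TFLOPs):

```python
# Cost per TFLOP for the two options discussed above.
def dollars_per_tflop(price_usd, tflops):
    return price_usd / tflops

print(dollars_per_tflop(2 * 1000, 2 * 5))  # 2x Titan Black -> 200.0 $/TFLOP
print(dollars_per_tflop(3000, 8))          # 1x Titan Z     -> 375.0 $/TFLOP
```

So outside of density-constrained clusters, the pair of Titan Blacks is nearly half the price per unit of compute.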
Dimitris Tolios Posted April 13, 2014
I only see the Titan Z being really useful in cluster computing environments, where physical compute density is at a premium. It's a triple-slot-width card, so 2 Titan Blacks will give you more performance for $1,000 less and only use one more slot.
It is highly dependent on the slot configuration. Many ATX boards will be unable to fit more than 2 such cards, but those would probably fit only 3x dual-slot cards anyway, so with 2x Titan Z you end up with 4 vs 3 GK110 cores. mATX and ITX builders are also not really catered for: mATX boards cannot do more than one, and only a handful of ITX cases will be able to do one, as most are limited to 1-2x PCI bays. On mobos with provisions for 4x dual-slot cards (Quad-Fire/Quad-SLI capable, with 4-5x PCIe 16x slots), you won't be able to do better with the Titan Z than with 4x separate Titans/780 Tis, unless of course you watercool the whole thing, in which case the cards - using a regular 780/Titan bracket - can be limited to 2 slots width (actually I've seen mods where ppl de-solder the Titan/780's dual-height DVI-D ports and turn it into a single-slot card when a full-cover waterblock provides the cooling, but that's a bit drastic).
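The slot arithmetic above can be sketched out like this (card widths are from the posts: Titan/780 Ti at 2 slots, Titan Z at 3; the board clearances of 6 and 8 slot widths are my illustrative assumptions for a 3-card and a 4-card ATX layout):

```python
# GK110 cores you can physically fit, given total slot-width clearance.
def cores_fit(total_slot_widths, card_width, cores_per_card):
    return (total_slot_widths // card_width) * cores_per_card

# 3x-dual-slot board: Titan Z wins on density (4 cores vs 3).
print(cores_fit(6, 2, 1), cores_fit(6, 3, 2))  # 3 4
# 4x-dual-slot board: it's a wash (4 cores either way), at a higher price.
print(cores_fit(8, 2, 1), cores_fit(8, 3, 2))  # 4 4
```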