bradccc Posted February 23, 2013

Main specs of the two cards (differences only):

                          GTX Titan           GTX 690
CUDA cores                2688                3072 (2 x 1536)
Base clock                837 MHz             915 MHz
Boost clock               876 MHz             1019 MHz
Texture fill rate         187.5 GTexels/s     234 GTexels/s
Memory                    6144 MB GDDR5       4096 MB GDDR5 (2048 per GPU)
Memory interface width    384-bit             512-bit (256-bit per GPU)
Memory bandwidth          288.4 GB/s          384 GB/s
OpenGL                    4.3                 4.2

The Titan has a few other advantages, such as its HDMI output, but those don't gain you any performance. OK, so I want to ask: why did they release the Titan as the high-end card with these specs?
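As a sanity check on the bandwidth row: theoretical peak is just the effective memory data rate times the bus width in bytes. Here is a minimal CUDA sketch (assuming a CUDA-capable card and toolkit; it uses only the standard runtime API) that recomputes the figure for whatever GPU is installed:

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        // memoryClockRate is reported in kHz, memoryBusWidth in bits.
        // GDDR5 transfers twice per clock, hence the factor of 2.
        double gbps = 2.0 * p.memoryClockRate * 1e3   // Hz, doubled for DDR
                    * (p.memoryBusWidth / 8.0)        // bytes per transfer
                    / 1e9;                            // -> GB/s
        printf("%s: %d-bit bus, %.1f GB/s theoretical peak\n",
               p.name, p.memoryBusWidth, gbps);
    }
    return 0;
}
```

For the Titan that works out to 2 x 3004 MHz x 48 bytes, roughly 288.4 GB/s, which matches the table.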
Scott Schroeder Posted February 24, 2013

http://www.pcmag.com/article2/0,2817,2415642,00.asp

"The GeForce GTX Titan is the new kid on the block. With a single GK110 (Kepler) GPU, the GTX Titan has the potential to give high-end gamers similar performance to the GTX 690 with more efficient power and cooling needs. Like the GTX 680, the GTX Titan can be configured as a 3-way SLI system. The GTX Titan's clock speed is lower at 837MHz base and 876MHz boosted, but overclockers should be able to top 900MHz easily. The GTX Titan dynamically overclocks the GPU based on temperature, rather than raw power and voltage controls. The end result is a card that operates in a wider selection of chassis choices. You can drop two GTX Titan cards into a mid-tower with an 850W power supply, a configuration that would alternately starve and bake a pair of GTX 690 cards."
branskyj Posted March 2, 2013

"Why did they release Titan for high end with these specs?"

1. In a perfect world: NVidia grew a conscience and decided to address the issue of low memory for people doing GPGPU on a budget (I know 1000.00 USD is a lot, but a Tesla costs more).

2. In the real world: NVidia is only selling 10000 units AFAIK. It's a niche product; they want to fill the gap between the current and next generation of cards, and to remain competitive in the high-end segment. Who knows, if it were 500.00 I might even have taken the plunge.
Dimitris Tolios Posted March 9, 2013

"Why did they release Titan for high end with these specs?"

Depending on what you are after, the Titan can be a bargain. It is not the fastest card you can buy (the dual-GPU 690 is faster overall), but it is the fastest single GPU. Not enough to make it worth it for everyone, but it has its niche.

For GPU rendering I would honestly prefer a 690 if it came at the same price with 2x4GB, but as it sits, it is hard to opt for the dual-GPU board, even if it gets you more raw processing power: the scene has to fit in each GPU's memory, so the 690 effectively behaves like a 2GB card. Many gamers dislike SLI/Crossfire, as those solutions are not perfect, so the 690 wasn't really competing anyway. nVidia needed something to dethrone the 7970 GHz Edition as the fastest single GPU for gaming, and with the Titan they managed to recycle the GK110 chips that never made it into K20 spec. Yes, the GTX Titan is named after the Titan supercomputer, which uses K20s.
Zdravko Barisic Posted March 14, 2013

It's all about DRIVERS and apps, so those numbers don't matter too much...
Dimitris Tolios Posted March 14, 2013

"It's all about DRIVERS and apps, so those numbers don't matter too much..."

This is largely true as far as viewport acceleration goes, but existing drivers - at least with nVidia - don't distinguish between gaming and workstation cards for GPU computing: both types allow full utilization of the GPU.

You can verify this by measuring performance - e.g. the time VRay RT needs to render a set number of passes - and checking it against what you would expect from "aggregating" stream/CUDA processors and MHz. In theory, a card of a given architecture, say Kepler, with 1000 cores at 1000 MHz (1000 GHz aggregate) should be roughly 25% faster than a card with 512 cores at 1562 MHz (~800 GHz aggregate) - equivalently, the second card is about 20% slower. Real-life VRay numbers do back this up today, so you can take an educated guess that VRay RT gets the full potential of each card equally well, and that the drivers are not castrating GPGPU performance or biasing it towards Tesla over GTX.

The same comparison lets you deduce that GeForce drivers are intentionally "castrated" in viewport work to reinforce the "need" for Quadro cards: the gaming cards are as fast as promised all-around when properly utilized, but nVidia (and AMD) don't want them to be. Perhaps if more and more software becomes OpenCL-accelerated, making today's default viewport acceleration paths less important, we will see such biased drivers killing performance in computation tasks too, but that is not the case today.
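To make the aggregate arithmetic concrete, here is a minimal sketch of that heuristic - the core counts and clocks are the made-up example cards from above, not benchmark data:

```
#include <cstdio>

// Back-of-the-envelope heuristic from the post: within one
// architecture, throughput scales roughly with cores x clock.
struct Card { const char *name; int cores; double mhz; };

static double aggregate_ghz(const Card &c) {
    return c.cores * c.mhz / 1000.0;  // cores x clock, in "GHz"
}

int main() {
    Card a = {"card A", 1000, 1000.0};
    Card b = {"card B",  512, 1562.0};
    printf("%s: %.0f GHz aggregate\n", a.name, aggregate_ghz(a));  // 1000
    printf("%s: %.0f GHz aggregate\n", b.name, aggregate_ghz(b));  // ~800
    printf("expected edge for A: ~%.0f%%\n",
           100.0 * (aggregate_ghz(a) / aggregate_ghz(b) - 1.0));   // ~25%
    return 0;
}
```

It is only a sanity-check number, of course - it says nothing about memory capacity, bandwidth, or driver behavior - but when measured render times track it this closely, the drivers clearly aren't holding the gaming cards back in compute.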