
Thread: GTX Titan VS GTX 690

  1. #1
    Junior Member bradccc's Avatar
    Join Date
    Jun 2012
    Posts
    3

    Name
    Brad Ccc
    Forum Username
    bradccc

    Spain

    Default GTX Titan VS GTX 690

    Main specs of the two cards (differences only):

                              GTX Titan                      GTX 690
    CUDA Cores                2688                           3072
    Base Clock                837 MHz                        915 MHz
    Boost Clock               876 MHz                        1019 MHz
    Texture Fill Rate         187.5 GTexels/sec              234 GTexels/sec
    Memory Config             6144 MB GDDR5                  4096 MB (2048 per GPU) GDDR5
    Memory Interface Width    384-bit                        512-bit (256-bit per GPU)
    Memory Bandwidth          288.4 GB/sec                   384 GB/sec
    OpenGL                    4.3                            4.2

    The Titan has a few other advantages, such as its HDMI output, but those don't add performance. OK, so I want to ask: why did they release the Titan as the high-end card with these specs?
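
    As a quick sanity check on the bandwidth row: peak GDDR5 bandwidth is just the effective memory clock times the bus width in bytes. A minimal Python sketch, assuming the 6008 MHz effective memory clock both cards list on NVIDIA's spec pages:

    Code:
    # Peak memory bandwidth = effective clock (MHz) x bus width (bytes) / 1000.
    # The 6008 MHz effective GDDR5 clock is an assumption taken from NVIDIA's
    # spec pages for both cards.
    def mem_bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
        return effective_clock_mhz * (bus_width_bits / 8) / 1000

    print(mem_bandwidth_gb_s(6008, 384))      # GTX Titan: ~288.4 GB/s
    print(2 * mem_bandwidth_gb_s(6008, 256))  # GTX 690 (both GPUs): ~384.5 GB/s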

  2. #2
    Veteran Member VelvetElvis's Avatar
    Join Date
    Oct 2002
    Location
    Denver, CO
    Posts
    1,347

    Name
    Scott Schroeder
    Forum Username
    VelvetElvis

    United States

    Default Re: GTX Titan VS GTX 690

    http://www.pcmag.com/article2/0,2817,2415642,00.asp

    "The GeForce GTX Titan is the new kid on the block. With a single GK110 (Kepler) GPU, the GTX Titan has the potential to give high-end gamers similar performance to the GTX 690 with more efficient power and cooling needs. Like the GTX 680, the GTX Titan can be configured as a 3-way SLI system. The GTX Titan's clock speed is lower at 837MHz base and 876MHz boosted, but overclockers should be able to top 900MHz easily. The GTX Titan dynamically overclocks the GPU based on temperature, rather than raw power and voltage controls. The end result is a card that operates in a wider selection of chassis choices. You can drop two GTX Titan cards into a mid-tower with an 850W power supply, a configuration that would alternately starve and bake a pair of GTX 690 cards."
    Scott S.

  3. #3
    Senior Member
    Join Date
    Feb 2010
    Location
    UK
    Posts
    229

    Name
    Julian Bransky
    Forum Username
    branskyj

    United Kingdom

    Default Re: GTX Titan VS GTX 690

    "Why did they release Titan for high end with these specs" ?

    1. In a perfect world- NVidia grew conscious and decided to address the issue with low memory for people doing GPGPU on a budget (I know 1000.00 USD is a lot but Tesla costs more.
    2. In a real world- NVidia is only selling 10000 units AFAIK. It's a niche product, they want to fill the gap between the current and next generation cards, want to remain competative at the high- end segment.

    How knows, if it was 500.00 even I would have taken the plunge.

  4. #4
    Veteran Member dtolios's Avatar
    Join Date
    Jan 2012
    Location
    Los Angeles (imported)
    Posts
    1,035

    Name
    Dimitris Tolios
    Forum Username
    dtolios

    Greece

    Default Re: GTX Titan VS GTX 690

    Quote Originally Posted by branskyj View Post
    "Why did they release Titan for high end with these specs" ?
    Depending on what you are after, the Titan can be a bargain.
    It is not the fastest board you can buy, but it is the fastest single-GPU card. Not enough to make it worth it for everyone, but it has its niche.
    For GPU rendering I would honestly prefer a 690, had those come at the same price with 2x4GB, but as it stands it is hard to opt for the dual-GPU board, even if it gets you more raw processing power.

    Many gamers dislike SLI/Crossfire since those solutions are not perfect, so the 690 wasn't really competing anyway. nVidia needed something to dethrone the 7970 GHz Edition as the fastest single GPU for gaming, and with the Titan they managed to recycle the GK110 chips that never made it into K20 spec.

    Yes, the GTX Titan is named after the Titan supercomputer, which uses K20s.
    Forgive my rants - I can be laconic in Greek if you prefer.
    PCFoo.com // DIY PC Resource Site

  5. #5
    Veteran Member okmijun's Avatar
    Join Date
    May 2004
    Location
    Kragujevac
    Age
    40
    Posts
    659

    Name
    Zdravko Barisic
    Forum Username
    okmijun

    Serbia and Montenegro

    Default Re: GTX Titan VS GTX 690

    it's all about DRIVERS and apps, so those raw numbers don't matter all that much...

  6. #6
    Veteran Member dtolios's Avatar
    Join Date
    Jan 2012
    Location
    Los Angeles (imported)
    Posts
    1,035

    Name
    Dimitris Tolios
    Forum Username
    dtolios

    Greece

    Default Re: GTX Titan VS GTX 690

    Quote Originally Posted by okmijun View Post
    it's all about DRIVERS and apps, so those raw numbers don't matter all that much...
    This is very true as far as viewport acceleration goes, but existing drivers - at least with nVidia - don't make a distinction between gaming and workstation cards for GPU computing: both types allow full utilization of the GPU. You can verify this by measuring performance - i.e. the time VRay RT needs to render a set number of passes - and the results are consistent with what you would expect from "aggregating" stream/CUDA processors and MHz -

    i.e. a card of the same architecture, say Kepler, with 1000 cores at 1000 MHz (1000 core-GHz aggregate) should be roughly 25% faster than a card with 512 cores at 1562 MHz (~800 core-GHz aggregate), in theoretical numbers.
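
    Put as a quick sketch (Python; the two cards are the hypothetical examples above, not real SKUs):

    Code:
    # The "aggregate core-GHz" heuristic from the post: cores x clock.
    # Only meaningful between cards of the same architecture, and it
    # ignores memory bandwidth - a first-order estimate, not a benchmark.
    def core_ghz(cores, clock_mhz):
        return cores * clock_mhz / 1000

    a = core_ghz(1000, 1000)  # 1000 core-GHz
    b = core_ghz(512, 1562)   # ~800 core-GHz
    print(f"card A should be ~{(a / b - 1) * 100:.0f}% faster")  # ~25%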

    If this is backed up by real-life VRay performance numbers - and today it is - you can make an educated guess that VRay RT has access to the full potential of each card equally well, and that it is not the drivers that castrate performance or bias it towards Tesla cards vs. GTX cards.

    The latter also helps you deduce that GeForce drivers are intentionally "castrated" in viewport acceleration to reinforce the "need" for Quadro cards: the "gaming" cards are actually as fast as promised all-around when properly utilized, but nVidia (and AMD) don't want them to be.

    Perhaps as more and more software becomes OpenCL accelerated, making today's default viewport acceleration techniques less important, we will see such biased drivers "killing" performance in computation tasks too - but that is not the case today.
    Forgive my rants - I can be laconic in Greek if you prefer.
    PCFoo.com // DIY PC Resource Site
