Showing results for tags 'titan x'.

Found 2 results

  1. Hi all! Apologies in advance for the lengthy post - I've tried to do as much research as humanly possible before seeking any third-party advice.

     We built a V-Ray RT GPU-oriented workstation a few months back (thanks Dimitris for the helpful advice) and, after a few teething issues, all has been going great. We have now used all of the available space in our build with 3 Titan X's, and due to increased workload we really need to double that GPU render power with 3 more SC Titan X's (chosen for the 12 GB of VRAM - we expect to replace these once a Pascal alternative is available; from what I understand the GTX 1080 doesn't utilize NVLink, so I don't think it's a viable option).

     I've been researching the most cost-effective solution on various forums and so far I've come up with the following options:

     1. PCIe risers / splitters (the best option appears to be http://amfeltec.com/products/flexible-x4-pci-express-4-way-splitter-gpu-oriented/ at around £300). These look like the cheapest and most scalable route, however I'm learning that we may be capped by the number of GPUs that Windows can handle (we're currently running Windows 8.1). Am I right in thinking that we could, in essence, get up to 8 Titan X's being utilized by one workstation?
     2. PCIe expanders / enclosures (Netstor, Cubix, Amfeltec etc., averaging £2k before GPUs). These seem like an expensive option, given that the splitters above are essentially the same thing minus the housing and fans, and that we could build our whole workstation again for the equivalent cost if not less (minus the Titans, of course). Also, wouldn't the same Windows GPU limitations apply here?
     3. Workstation tear-down and rebuild with further expansion in mind (£?). No idea where to start here - any recommendations would be hugely appreciated if you feel this is the route you would take.
     4. An additional, identical workstation (the shopping list comes to £1,700 before GPUs). This is obviously the easiest option for me, as we've already built the one here and know the pitfalls of the current config. However, I expect this could be another can of worms in terms of getting DR running for both animations and stills, not to mention file sharing, network latency issues, etc. (I've never used DR in a production environment, and I currently have all assets saved on a secondary local hard drive with cloud backup.)
     5. GPU render node (£?). Again, no idea where to start here - any recommendations would be hugely appreciated if you feel this is the route you would take.

     Additional notes I've taken down from various sources - hopefully you can help me sort the wheat from the chaff:

     1. PCIe lane counts (x16, x8, etc.) don't have a detrimental impact on V-Ray RT rendering speed, however ray bundle size and rays per pixel will need adjusting to maximise efficiency.
     2. One Titan X counts as 2 GPUs under Windows - Device Manager tells me different?
     3. Windows limits the number of available GPUs to 8 (the sketch below is a quick way to check what the driver actually exposes).
     4. The GTX 1080 doesn't utilize NVLink, so its memory won't be stackable.
     5. A 40-lane CPU (specifically the 5960X in our case) will be able to serve all available GPUs.
     6. Each Titan X needs a minimum of 250 W of available power supply (see the rough power-budget sketch after the build spec below).

     Main goals: stable, scalable, cost-effective. (I know the Titan X's can be costly, but there doesn't seem to be a cheaper option available that fits our needs right now.)
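As a quick way to check notes 2 and 3 above, here is a minimal sketch (assuming Python and nvidia-smi - which ships with the NVIDIA driver - are available on the machine) that asks the driver directly how many CUDA devices it exposes to the OS, instead of counting entries in Device Manager:

```python
# Minimal sketch: list the CUDA devices the NVIDIA driver exposes.
# Assumes nvidia-smi is on the PATH (it is installed with the driver).
import subprocess

out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=index,name,memory.total",
     "--format=csv,noheader"],
    text=True,
)
gpus = [line for line in out.strip().splitlines() if line]
print(f"Driver reports {len(gpus)} GPU(s):")
for line in gpus:
    print(" ", line)
```

If this reports fewer devices than are physically installed, the cap is being hit at the driver/OS level rather than inside V-Ray.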
     Our current workstation build is as follows:

     • Corsair Obsidian 750D (we were in the Fractal Design R5, but the third Titan X forced an upgrade)
     • Asus X99 Deluxe
     • i7-5960X with Corsair H100i
     • 64 GB Crucial Ballistix Sport
     • 1 TB Samsung 850 Pro (x2)
     • Titan X SC Hybrid (x3)
     • EVGA SuperNOVA 1300 Gold

     The above workstation was our first ever custom build and I'm really happy with the results. The route for expansion seems somewhat bewildering to me, though, and the more forum posts I read the further I get from a conclusion - hopefully someone can chuck their two cents in and help us reach a final decision. Thanks in advance!
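On note 6 and the 1300 W PSU in the build above, a rough back-of-envelope power budget looks something like the sketch below. The per-component figures are assumptions: 250 W per Titan X (from the note), the 5960X's 140 W TDP, a flat 100 W allowance for the board, drives, fans and AIO pumps, and an ~80% sustained-load guideline.

```python
# Back-of-envelope PSU headroom check - a rough model, not a sizing tool.
GPU_WATTS = 250        # per Titan X, as per note 6
CPU_WATTS = 140        # i7-5960X TDP
BASE_WATTS = 100       # motherboard, SSDs, fans, AIO pumps (estimate)
PSU_WATTS = 1300       # EVGA SuperNOVA 1300
HEADROOM = 0.80        # keep sustained load at or below ~80% of rating

for gpu_count in (3, 4, 5, 6):
    load = gpu_count * GPU_WATTS + CPU_WATTS + BASE_WATTS
    status = "within" if load <= PSU_WATTS * HEADROOM else "over"
    print(f"{gpu_count} GPUs: ~{load} W sustained ({status} "
          f"{HEADROOM:.0%} of {PSU_WATTS} W)")
```

By this rough model the current three cards fit comfortably, a fourth already pushes past the 80% guideline, and five or six exceed the PSU's rating outright - so any cards added via splitters or enclosures would need their own power source.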
  2. I've just started testing RT GPU rendering for the first time, and I seem to be getting excessive render times (8 hours) to produce near-noiseless images (max noise 0.02) for relatively simple interior scenes. Obviously, with optimisation, V-Ray Adv can outperform these render times tenfold. I believe I have set everything up correctly in RT: CUDA, in-process, coherent tracing, max render time 0, ray bundle size 512, rays per pixel 128, trace depth 5, resize textures on, probabilistic lights disabled. I'm also using light cache (+ glossy rays). I can see that material / light subdivs etc. aren't applicable, so I'm wondering what could be the cause of these excessive render times - or is this expected?

     Workstation config: Titan X, 5960X, 64 GB Ballistix Sport, 3ds Max 2016, V-Ray 3.3.

     FYI, we had anticipated using RT as part of our sign-off process and purchasing another Titan X, but we're unsure if that's a wise choice now... Any comments / advice will be appreciated!
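One thing worth ruling out before buying a second card: whether the GPU is actually saturated for the whole 8 hours, or spending long stretches idle. A small polling sketch (again assuming Python and nvidia-smi on the PATH; the 10-second interval is arbitrary) can be left running alongside the render until interrupted with Ctrl+C:

```python
# Diagnostic sketch: poll GPU utilisation and memory while a render runs.
import subprocess
import time

while True:
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader"],
        text=True,
    )
    print(out.strip())
    time.sleep(10)
```

Sustained utilisation near 100% suggests the scene genuinely needs that many samples to reach a 0.02 noise threshold; long idle stretches would point at a bottleneck outside the GPU (asset transfers, the light cache prepass, and so on).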