

Hi all! Apologies in advance for the lengthy post - I've tried to do as much research as humanly possible before seeking any third-party advice.

 

We built a V-Ray RT GPU-oriented workstation a few months back (thanks Dimitris for the helpful advice) and, after a few teething issues, all has been going great. We have now utilized all of the available space in our build with 3 Titan X's, and due to increased workload we really need to double that GPU render power (3 more SC Titan X's, chosen for the 12 GB of VRAM - we expect to replace these once a Pascal alternative is available; from what I understand the 1080 doesn't utilize NVLink, so I don't think that's a viable option).

 

I've been researching on various forums the most cost effective solution and so far I've come up with the following options:

 

1. PCIe risers / splitters (the best option appears to be the Amfeltec flexible x4 PCI Express 4-way GPU-oriented splitter, http://amfeltec.com/products/flexible-x4-pci-express-4-way-splitter-gpu-oriented/ , ~£300): these look like the cheapest and most scalable route to go down; however, I'm learning that we may be capped by the number of GPUs that Windows can handle (we're currently running Windows 8.1). Am I right in thinking that we could, in essence, get up to 8 Titan X's utilized by one workstation?

 

2. PCIe expanders / enclosures (Netstor, Cubix, Amfeltec etc., avg. £2k before GPUs): these seem like an expensive option given that they are in essence the same thing as the above (plus the housing and fans), and for the equivalent cost, if not less, we could build our workstation again (minus the Titans, of course). Also, wouldn't the same Windows GPU limitation apply here?

 

3. Workstation tear-down and rebuild with further expansion in mind (£?): no idea where to start here - any recommendations would be hugely appreciated if you feel this is the route you would take.

 

4. Additional / identical workstation (shopping list comes to £1,700 before GPUs): this is obviously the easiest option for me, as we've already built the one here and know the pitfalls that come with the current config. However, I expect this could be another can of worms in terms of getting DR running for both animations and stills, not to mention file sharing, network latency issues, etc. (I've never messed about with DR in a production environment :p and I currently have all assets saved on a secondary local hard drive with cloud backup - see the reachability sketch after this list.)

 

5. GPU render node (£?): no idea where to start here either - again, any recommendations would be hugely appreciated if you feel this is the route you would take.
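On the DR worry in option 4: before kicking off a distributed job it helps to confirm that each render box actually answers on the V-Ray render server's TCP port, so one dead node doesn't stall the queue. A minimal sketch in Python - the hostnames are made-up placeholders and the port number is an assumption, so substitute whichever port your V-Ray version's spawner / render server actually listens on:

import socket

# Hypothetical render nodes on the LAN - replace with your own machine names / IPs.
RENDER_NODES = ["ws-main", "render-node-01"]
# Assumption: the port the V-Ray render server listens on; check your own install.
RENDER_SERVER_PORT = 20204

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if the host accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for node in RENDER_NODES:
        status = "OK" if is_reachable(node, RENDER_SERVER_PORT) else "UNREACHABLE"
        print(f"{node}: {status}")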

 

Additional notes that I have taken down from various sources - hopefully you can help me sort the wheat from the chaff:

1. PCIe lane counts (x16, x8, etc.) don't have a detrimental impact on V-Ray RT rendering speed; however, ray bundle size and rays per pixel will need adjusting in order to maximise efficiency.

2. One Titan X supposedly counts as 2 GPUs under Windows - Device Manager tells me different? (A quick way to check what the driver actually reports is sketched after this list.)

3. Windows limits the number of available GPUs to 8.

4. The GTX 1080 doesn't utilize NVLink, so its memory won't be stackable.

5. A 40-lane CPU (specifically the 5960X in our case) will be able to serve all available GPUs.

6. Each Titan X needs a minimum of 250 W of available power supply capacity.
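To sanity-check notes 2, 3 and 6 on a given box, something like the sketch below can be run: nvidia-smi -L is the NVIDIA driver's own device listing (so it shows exactly how many GPUs the OS / driver expose), and the wattage tally just reuses the rule-of-thumb figure above - the 300 W platform allowance is my own assumption, not a measured number.

import subprocess

GPU_TDP_W = 250    # per-card budget from note 6
PLATFORM_W = 300   # assumption: CPU, board, drives, fans and headroom
PSU_W = 1300       # the SuperNova 1300 in the current build

def list_gpus() -> list[str]:
    """Return one line per GPU as reported by the NVIDIA driver."""
    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    gpus = list_gpus()
    print(f"Driver reports {len(gpus)} GPU(s):")
    for gpu in gpus:
        print(" ", gpu)
    draw = len(gpus) * GPU_TDP_W + PLATFORM_W
    print(f"Rough power budget: {draw} W of {PSU_W} W available")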

 

Main goals: Stable - scalable - cost effective

(I know the Titan X's can be costly but there doesn't seem to be a cheaper option available that fits our needs right now.)

 

Our current workstation build is as follows:

Corsair Obsidian 750D (we were in the Fractal R5 but the third Titan X forced an upgrade :mad: )

X99 Deluxe

5960X with H100i

64 GB Ballistix Sport

1 TB 850 Pro (x2)

Titan X SC Hybrid (x3)

SuperNOVA 1300 Gold

The above workstation was our first ever custom build and I'm really happy with the results. The route for expansion seems somewhat bewildering to me though, and the more forum posts I read the further from a conclusion I get :confused: - hopefully someone can chuck their 2 cents in and help us reach a final decision. Thanks in advance!


I just have 5 minutes, so not a proper answer, but:

 

IMHO, option 3: if you're feeling a bit adventurous, simply sell the board (you will easily get 70% of its value back, they sell well) and buy the X99-E WS.

 

Strip the GPUs down to single-slot with aftermarket heatsinks and connect them into a single water loop.

 

Build this (image from the Tom Glimps blog):

 

[Image: interior of the 7-GPU, 1000-OctaneBench rig]

 

http://tomglimps.com/7_gpu_workstation-1000_octanebench/

 

(I would personally use two loops, with a separate one for the CPU.)


Now that looks interesting!!

 

Custom water loops will be a brand new venture for us so I will do my best to build up a shopping list and post back here for thoughts / comments if you don't mind. Thanks for your help & enjoy the rest of your weekend!


Disclaimer: since I have very limited knowledge about running something like PCIe splitters, I won't comment on this option right now, but I'll do a bit of research and come back. It could prove to be the most reasonable and economical solution, provided it doesn't clamp performance and can be housed reasonably. Have you visited the Otoy/Octane forums? This seems like it would be their niche.

 

Some brief thought about the above build, its price feasibility, and your plan to upgrade in future to even more powerful GPUs like the Titan P brought up quite a few issues:

 

 

- General advice with such complicated builds is to find an experienced local builder specializing in enthusiast builds; they aren't usually expensive, and most have spent the past 10 years obsessing about this stuff 24/7. But you can do it yourself too - building the loop is on the simple side these days, but putting the heatsinks onto the Titans requires nerves of steel, as does modifying and voiding the warranty of any expensive item :- )

 

- There are rack-server HPC motherboards (from Tyan, Supermicro, etc.) which take 8 GPUs (and more, but Windows 7/8/10 only supports up to 8 GPUs) in a dual-unit configuration, so there's no need to void warranties and mess with a water loop - but those are pretty much all dual-socket server boards.

While it's possible to run a single CPU in a dual-socket board (even an i7, I believe), you lose access to half of the PCIe slots, so that solution wouldn't work. You would have to buy two Xeons.

 

- And even that (while very pricey) might have its limitations, because the upcoming Pascal Titans are rumoured to be bottlenecked by CPU single-core performance (as ridiculous as that would be from nVidia), and even the best E5 Xeons have turbo bins that only go up to 3.5 GHz. I don't believe this will be a severe bottleneck, but... who knows. Let's wait for some benchmarks; the card is around the corner.

 

The single model that goes higher than 3.5 GHz is the Xeon E5-2689 v4, with a top turbo bin of 3.9 GHz - but two of them will run you 5,000 euros. That would make for a severely over-priced workstation, given that you won't utilize the CPU performance. It is, however, the only solution where you can run 8 GPUs in a single PC (no performance loss from scaling via distributed rendering, and no need to wait for DR to kick in).

 

- As much as I would say to copy the build above, which is very utilitarian (no unnecessary frills or expense), it would only work if you kept the current-generation Titans. The build above consumes ..., but you plan to switch to Titan Ps, which will consume up to ...

You would also need (WAY) more radiator area, and that will be a problem to stuff into a case like the Corsair 900D, as good as that case is. You would have to go for an enthusiast case like a CaseLabs Magnum.

 

All in, whichever path you chose, this would make for a very expensive machine. So maybe the single (non-rack) machine isn't such an economical solution, and it even goes a bit far into the fantasy realm (voiding the warranty on far too expensive items), although it would be nice, tidy and silent.

 

It's possible, but I guess it's time to properly investigate those other options instead :- )

Edited by RyderSK

"Nerves of steel" - haha, I was bad enough fitting the evga hybrid upgrades to the cards and they're a closed loop. The one thing that struck me with the above build was the fact that cutting the DVI ports would render the GPUs worthless for resale (I expect to keep up with the rat race we would look at upgrading over the next year or so) which poses an issue in itself.

 

I've spent hours researching the splitters on the Octane forums - my head ended up burnt out trying to figure out the best option. (I did, however, manage to rule out the Amfeltec GPU cluster due to its x1 limitations, although http://elasticpictures.com/1662/octane-render-render-node-expansion/ makes out as though this is no issue.) I've got a niggling feeling that the x4 splitter at http://amfeltec.com/products/flexible-x4-pci-express-4-way-splitter/ is the best bet, as I've seen quite a few success stories (including one from Mate Steinforth on Tom Glimps's Twitter feed).

 

In a perfect world I'd have waited and just upgraded to a Titan P-oriented build over the next couple of months, but workload is dictating otherwise. The last Titan X we bought was used from eBay and works flawlessly, so I'd consider going down that route again in the interim (more than viable at £450 per unit). I also recently found out that there is a 48-day money-back guarantee through PayPal on any (new or used) electrical component, which is ample time for us to test the cards for any flaws. Which brings me back round full circle (getting dizzy now): what is going to be the most cost-effective option to get these cards hooked up without speed or reliability being compromised?

 

FYI, this is the exact PCIe splitter we were looking at: https://www.thedebugstore.com/amfeltec-SKU-042-43-GPU-PCIe-splitter-ext.html

Edited by luketulley1
ADDITIONAL INFO

- And even that (while very pricey) might have its limitations, because the upcoming Pascal Titans are rumoured to be bottlenecked by CPU single-core performance

 

The CPU limitation for the Pascal cards should only apply for realtime 3D, where the CPU has to calculate the geometry, not for GPGPU.

 

 

I think I would also lean towards an extender solution if you need more than 4 cards. Cutting the panels off such expensive cards would not be an option for me.


Thanks! That's interesting to know in terms of the Pascal cards - it makes sense if we consider that the entire scene and textures are currently loaded onto the GPU before rendering in V-Ray RT and similar engines. At worst, I'm guessing it would take a little longer to load the scene assets prior to rendering - fingers crossed!
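On the "everything is loaded to the GPU" point, a rough back-of-the-envelope check of whether a scene's textures will fit inside a Titan X's 12 GB can be done before uploading anything - the texture counts, bit depth and the 2 GB geometry allowance below are illustrative assumptions, not measurements of any real scene:

VRAM_GB = 12.0            # Titan X frame buffer
GEOMETRY_BUDGET_GB = 2.0  # assumption: rough allowance for meshes / acceleration structures

def texture_gb(width: int, height: int, channels: int = 3, bytes_per_channel: int = 1) -> float:
    """Uncompressed in-memory size of a single texture, in gigabytes."""
    return width * height * channels * bytes_per_channel / 1024**3

# Hypothetical texture set: fifty 4K maps plus two hundred 2K maps.
textures = 50 * texture_gb(4096, 4096) + 200 * texture_gb(2048, 2048)
headroom = VRAM_GB - GEOMETRY_BUDGET_GB - textures
print(f"Textures: ~{textures:.1f} GB, estimated headroom: {headroom:.1f} GB")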

 

I've been reading various threads by Tutor over on the Octane forums; he seems to have had multiple success stories with the splitters (though he's also hit various limitations). I think it might be time to bite the bullet!!


Some thoughts:

 

  • A single box with more than 4 GPUs and no modding of the back plates will be tough to implement, regardless of PCIe extenders / splitters. You will need an open-frame custom enclosure if you are using some ribbon solution, or a dedicated GPGPU enclosure like the Tyan FT77 series, as no ATX case has provisions for more than 8 slots / 4 dual-width cards.
  • "Enough PCIe lanes" on the CPU side is almost a non-issue for pure GPGPU performance. Even with 1-2 lanes per GPU you get more than enough bandwidth (rough numbers are sketched below), so not only a 5960X but any 4C i7, or even a 2C i3 or Pentium, would yield similar results. PCIe is not the bottleneck unless we are talking 3/4-way SLI, which is irrelevant to GPGPU and is actually being phased out by nVidia, starting with the 10xx cards that don't support more than 2-way SLI for games. I don't know if this will change with the new "Titan" based on Pascal.
  • Networking works with V-Ray RT GPU, so you can add cards in cheap boxes with basic CPU / RAM that all act as slaves to your main WS. Just have enough ventilation and PSU capacity in each. Honestly, I don't think any solution other than LAN distributed expansion is really "scalable".
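To put rough numbers on the bandwidth point above: PCIe 3.0 moves roughly 1 GB/s per lane in each direction, so a narrower link mainly stretches the one-off scene upload rather than the rendering itself. A small illustrative sketch - the 8 GB scene size is a made-up placeholder:

# Approximate upload time for one scene over different PCIe 3.0 link widths.
PCIE3_GBPS_PER_LANE = 0.985   # usable throughput per lane after 128b/130b encoding
SCENE_GB = 8.0                # hypothetical scene + texture payload

for lanes in (1, 4, 8, 16):
    seconds = SCENE_GB / (lanes * PCIE3_GBPS_PER_LANE)
    print(f"x{lanes:<2}: ~{seconds:.1f} s to upload {SCENE_GB:.0f} GB")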

Edited by dtolios
