Dimitris Tolios Posted November 30, 2012 (edited)

Is it possible to combine different GPUs to render together from separate machines? Like Distributed Rendering in VRay, but with GPUs from various machines?

Yes. Distributed GPU rendering is possible, and I believe it is supported by VRay RT GPU. If I am not mistaken you can read about it on the Chaos Group forums (free to join, though you need a valid VRay licence to be allowed to post). I haven't tried it myself.

Side note: Avatar was rendered using distributed GPU rendering with hundreds of GPUs (again, I believe it was on 512MB-768MB GTX 2xx cards back then)... the renderer was custom coded though. It is funny that we struggle and want more VRAM for what are actually simpler scenes. Today, all textures are forced/down-sampled to 512px, so VRAM usage is actually far less than what the production renderer reports when we render - most of that RAM goes to textures that are usually much bigger than 512px.

The issue is that powerful GPU cards are pretty expensive to begin with, and require an adequately fast CPU/mobo/RAM combination to run, so to get a single GPU-accelerated node you end up spending almost as much as two CPU-based nodes would cost. There are also certain limitations and missing features in VRay RT GPU compared with the production CPU renderer, e.g. displacement modifiers, VRay fur, certain bugs with invisible lights etc.

Having a workstation that locally produces GPU-accelerated previews or renders simple animations is one factor you have to balance in your workflow. In a way the argument runs the same as when you are deciding between a faster-for-modelling single-CPU i7 workstation and a dual-CPU Xeon system that is faster when rendering but slower when modelling - or the fusion of the two, in the form of an i7 workstation plus independent render nodes, where the first does almost nothing but modelling while the latter carry the rendering tasks, without stopping you from queuing up more scenes and/or producing more models.

As the number of CPU TFlops scales up through a render farm, you have to find the niche where distributed GPU rendering - fewer machines with massive brute-force computation through GPUs - outperforms a larger number of CPU-only machines. At some point, when all the features of the production renderer are implemented in the RT GPU version and the net result is the same in both cases, it will be easier to run the logistics and draw a conclusion. Right now you have to balance not just cost and time efficiency, but also the importance of certain features in your workflow, which is totally subjective.

By reading the above rants, I realize that I have little respect for Elizabeth Clarkson Zwart (or for proper syntax, for that matter), who said: "The older I grow, the less important the comma becomes. Let the reader catch his own breath". I disrespect both comma and period... which reflects on the readers too... sorry guys.
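To put rough numbers on the texture footprint point above, here is a back-of-the-envelope sketch. The texture count and sizes are hypothetical, and it assumes uncompressed 8-bit RGBA textures while ignoring mipmaps, compression and geometry, so real numbers will differ:

```python
# Back-of-the-envelope VRAM estimate for a hypothetical texture set.
# Assumes uncompressed 8-bit RGBA textures; ignores mipmaps, compression
# and geometry, so real-world numbers will differ.

BYTES_PER_PIXEL = 4  # 8-bit RGBA

def texture_set_mb(width, height, count=1):
    """Memory footprint of `count` textures of width x height, in MB."""
    return width * height * BYTES_PER_PIXEL * count / (1024 ** 2)

full_res    = texture_set_mb(4096, 4096, count=100)  # ~6400 MB at 4K
downsampled = texture_set_mb(512, 512, count=100)    # ~100 MB forced to 512px

print(f"100 textures at 4K: {full_res:.0f} MB")
print(f"same set at 512px:  {downsampled:.0f} MB")
```

In that hypothetical scene, forcing everything down to 512px takes the texture set from several GB to around 100 MB, which is why the RT GPU engine gets away with far less VRAM than the production renderer reports.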
Dimitris Tolios Posted November 30, 2012 (edited)

Question ref. CPU: if I'm not planning to overclock my 3820, is it really useful to purchase a CPU cooler like the H80 or H100?

If you don't plan on O/Cing, a Cooler Master Hyper 212 EVO will, I think, be a great choice (I wrote earlier that the version doesn't matter, but it does, as older versions do not have an s2011 mount). If the 3820 came with a stock cooler, that would probably be more than sufficient, but with the 212 you have the peace of mind that you are getting something very reliable and easily better than any stock cooler. Pricing is in the $25-30 range and it is pretty amazing in this class.

I would dare to say that 4.3-4.5GHz with a ceiling of 80°C and less than 1.38-1.39V under Prime95 is easily attainable with a 212. The same speeds won't produce more than 75-77°C under full load when rendering, and idle temperatures will be in the high 30s.
mondex Posted November 30, 2012

Thx Dimitris
mondex Posted November 30, 2012

So, to conclude this thread (or not!), here's my final spec:

ASUS P9X79 LGA 2011 Intel X79 SATA 6Gb/s USB 3.0 ATX Intel Motherboard - £200
Intel Core i7-3820 Quad-Core Processor 3.6 GHz 10 MB Cache LGA 2011 - £225
EVGA GF GTX 660 Ti SC+ 3GB GDDR5 - £255
Corsair CMZ16GX3M2A1600C10 Vengeance 16GB 1600MHz CL10 DDR3 Memory Two Module Kit - £59
Corsair Obsidian Series Black 650D Mid Tower Computer Case - £122.66
Cooler Master Hyper 212 EVO - £29
Western Digital WD1002FAEX Caviar Black 1 TB 7200 RPM Internal Hard Disk Drive - £71
Samsung 840 Series 120GB 2.5 inch SATA Solid State Drive - £78
2x Acer S240HLbid 24'' Full HD widescreen LCD monitor with LED Backlight - £125 each (£250)
Corsair Enthusiast Series 850-Watt 80 Plus Bronze Certified Power Supply - £94
LG GH24NS90.AUAA50B 24x SATA Bare Internal DVD Rewriter - Black

Total on Amazon: £1,407.78

Is there anything to flag about the RAM or the PSU? I'm keen on getting quality stuff rather than cheap - even if it means losing a bit of power.
mondex Posted November 30, 2012

I did have to calm down and downgrade my initial "dream"! I'll be happy with that.
artmaknev Posted November 30, 2012

Make sure you get two HDDs so you can create a mirrored RAID - if one goes bad, you will have the other one as a backup. Also get 24GB of RAM; it's not that expensive, but it will boost your performance on heavy scenes.
mondex Posted November 30, 2012

Thanks Art: I'll copy that info into my file. Tomorrow I'm going to see a friend who can help me find suppliers and good deals in the area. Whatever cash I can save, I'll put into that.
Dimitris Tolios Posted November 30, 2012

Setup looks solid. A few pieces of advice (I hate getting too technical, but then again I cannot help myself):

* SSD: avoid the Samsung 840 non-Pro. It is based on TLC NAND chips; long story short, it has a considerably shorter life expectancy than MLC, which is what the Samsung 830, 840 Pro and most high-performance SSDs use. In my humble opinion it is well worth the price difference to get the 840 Pro, or to settle for the 830 despite the slight performance drop in some areas.

* PSU: I would consider getting a smaller PSU - maybe a 700-750W, to play it safe should you wish to add a second GPU, a 3930K or both in the future - but get an 80+ Silver or Gold rated unit. Those units are "greener" and consume less electricity, but on average they also have higher-quality components that will last longer (almost all PSUs have an expiration date linked to the life expectancy of their capacitors).

* HDD: backup is important. I don't know what fits your style best. You can choose from a LAN-attached file server (preferably some RAID 1 or 5 SOHO solution), cloud backup, a USB or other type of external enclosure, and finally internal RAID 1/5 - or of course combinations of the above. Should you end up with more than one PC rendering with the same assets, network storage becomes more and more important, and is usually more secure.
mondex Posted November 30, 2012

Cool! I agree ref. the PSU: going green is the way to go. I also switched the SSD specs to avoid the issues you mentioned. As for HDD backup, I'll look into Art's suggestion and see what can be done down here: my partners have servers. Thanks guys, you've been super great!
dougjohnston1 Posted November 30, 2012

Boxx computers is what we use. The company is out of Austin, Texas.
artmaknev Posted November 30, 2012

Boxx is nice if you can afford it, but for those who can build their own workstation, Boxx is slightly overpriced.
Dimitris Tolios Posted November 30, 2012

Boxx is nice if you can afford it, but for those who can build their own workstation, Boxx is slightly overpriced.

"Slightly"? "Slightly" above the already "barely" overpriced HPs, Dells and Lenovos? Some Boxx configs are nice, but to be honest, if you can build it yourself or have someone custom build it by copying their config 90-100%, you get it more than "slightly" cheaper, and with an equal if not better factory warranty. I don't know about actual "support".
artmaknev Posted November 30, 2012

It is the support - I heard they have outstanding support and warranty, so I guess that's where the other half of your money goes.
asiacc Posted December 31, 2012

Hello, I would like to buy a new computer. Please help me.

Intel Core i7-3770K 3.5 GHz 8MB cache s. 1155
GIGABYTE GA-Z77X-UD4H Intel Z77 LGA 1155 (3x PCI-E/VGA/sound/GLAN/SATA3/USB3/RAID/DDR3/SLI/CrossFire)
Patriot Viper 3 DDR3 2x8GB 2133MHz CL11 XMP PV316G213C1K
128 GB Samsung SSD
MSI GeForce GTX 560 Ti, 2GB GDDR5 (256-bit)

Is it a good configuration? I am also wondering whether I should choose a Gigabyte motherboard, or perhaps MSI? Or is Asus better? I have always used Gigabyte, but now I am not sure.

And one question about the graphics card - which of these should I choose:

MSI GeForce GTX 650 2048MB GDDR5/128-bit DVI/HDMI PCI-E (1071/5000) (OC version)
EVGA GeForce GTX 560 Ti 2048MB GDDR5/256-bit DVI/HDMI PCI-E (822/4000)
ASUS GeForce GTX 650 2048MB GDDR5/128-bit DVI/HDMI PCI-E (1071/5000) (OC version)
ASUS GeForce GTX 660 2048MB GDDR5/192-bit DVI/HDMI/DP PCI-E (1137/6108) (OC - TOP version, DirectCU II cooler)
MSI GeForce GTX 660 Ti 2048MB GDDR5/192-bit DVI/HDMI/DP PCI-E (1019/6008) (OC version)
Christian Cole Posted January 6, 2013

Hi guys, I've been following this thread closely as I'm putting together some hardware myself. My only question would be: is there any benefit to choosing the Intel Core i7-3820 Sandy Bridge-E (LGA 2011) over the newer Intel Core i7-3770K Ivy Bridge (LGA 1155), other than price? I have the 3770K in my list as of now, along with an Asus P8Z77-V PRO, but wanted to check whether I'm making a mistake.
Dimitris Tolios Posted January 6, 2013

Hello. Your question is not that hard to answer.

For the vast majority of users, there are very few occasions where the 3820 outperforms the 3770K, or the 2700K for that matter. Ivy Bridge CPUs like the 3770K have better IPC, so at comparable clock speeds they offer a 5-10% performance gain. The 3820 has more cache, but very few tasks benefit from it, so it generally performs like a Sandy Bridge 1155 part. It is also not cheaper than an 1155 solution: the 3820 itself might be cheaper through certain retailers/offers, but s2011 motherboards usually make up the difference and then some.

What s2011 offers is compatibility with the 39xx hex-cores, in case you feel the need to upgrade to something with better rendering performance in the near future, and it will also be compatible with the upcoming Ivy Bridge-based s2011 CPUs (IB-E). It is unclear whether we will be lucky enough to get more than 6 cores with IB-E, but the performance should nonetheless improve over the current 39xx CPUs.

X79-based mobos offer advantages for multi-GPU setups with more than 2-3 powerful cards, but I think such applications are rarely applicable. More people will probably like the fact that X79 mobos usually offer 8 DIMM slots, with capacity for up to 64GB of RAM versus the 32GB ceiling the 1155 socket is limited to. Honestly, I doubt most of us really use more than 32GB (or even 16GB), but some people do, and they welcome being able to do so without having to go to a Xeon or Opteron solution.

LGA 1155 is unfortunately at the end of its operational life: the next architecture - Haswell - will use LGA 1150, leaving systems with 2nd and 3rd gen 1155 i7 CPUs unable to be upgraded to anything better in the future. Haswell won't be readily available before spring or early summer, I think, so if you need a new workstation today, going 3770K won't be a horrible choice. It will still be darn fast.

Disclaimer: I happen to have a 3820 system, as I was able to find the CPU at a great price, paired with a $50 discount on a bundled mobo (Asus P9X79-Pro). I ended up paying $45 more than a 3770K paired with the equivalent Asus P8Z77-Pro mobo would have cost, so I opted for that with the wet dream of a 6- or 8-core CPU being dropped in there sometime in the future.
nitinsharma Posted January 6, 2013

Hi, I am new to this forum, so please forgive me if I am asking a question that has been asked before; it is also a bit off topic, but as I am new I don't know why I am not able to start a new thread.

How much of a performance hit would I take if I opt for the FX 8350 over the i7-3770K? I will only work in 3ds Max Design and use VRay and mental ray. If I render a frame on the i7-3770K in 1 hour, then I think the FX 8350 will take at most 1 hour 10 minutes to render the same frame - is that true? If yes, then isn't it a good idea to get frames 10 minutes later and save money to buy a better GPU? Please enlighten me on this...
Dimitris Tolios Posted January 6, 2013

It depends on your workload type. Yes, in purely multi-threaded applications like rendering, the FX 8350 is roughly 9-10% slower than the i7-3770K and roughly 12-14% faster than an i5-3570K. That is in Cinebench 11.5; other renderers might vary slightly.

Single-thread performance, which is important for certain plugins/modifiers etc. (for example, I think the Cloth modifier in 3ds Max is not multithreaded, or uses up to 2 cores? Please correct me if I am wrong - I don't know if things changed in Max 12 or 13), is usually much faster on Intel CPUs than on AMD CPUs - often by 50% or more. For general modelling this is usually irrelevant, as most 3.5-4GHz CPUs available today are already pretty fast.

Depending on your budget / target GPU / applications / workload, the switch to an FX 8350 will have a different impact. Surely, if you plan on rendering a lot of frames, the i7 will come out on top, and if you do it professionally (not aspire to, actually do it), a 3930K might pay itself back fast, as there we are not talking about single-digit performance gains. If you do the occasional still etc., 10% is a drop in the ocean most of the time, as most people spend hours and hours modelling and setting up scenes, and would only save minutes with a slightly faster processor.

3570K / 3770K / FX 8320 or 8350... all could work just fine for the casual user.
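To put that ~10% gap in perspective over an animation, here is a rough back-of-the-envelope sketch. The per-frame time and frame count are hypothetical, not benchmarks, and it treats "10% slower" as 10% lower rendering throughput:

```python
# Rough arithmetic on the ~10% multi-threaded gap over an animation.
# The per-frame time and frame count below are hypothetical, not benchmarks.

i7_frame_min = 60.0   # assume one frame takes 60 min on the i7-3770K
deficit = 0.10        # FX 8350 assumed ~10% lower rendering throughput

fx_frame_min = i7_frame_min / (1.0 - deficit)           # ~66.7 min per frame
frames = 300                                             # e.g. 10 s of animation at 30 fps
extra_hours = (fx_frame_min - i7_frame_min) * frames / 60.0

print(f"FX 8350 frame time: ~{fx_frame_min:.1f} min")
print(f"Extra time over {frames} frames: ~{extra_hours:.1f} hours")
```

Under those assumptions the gap is only a few minutes per frame, but over a 300-frame animation it adds up to roughly 33 extra hours - a drop in the ocean for a single still, significant for regular animation work.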
nitinsharma Posted January 6, 2013

I have recently started to learn 3ds Max and have covered basic modelling, but I want to upgrade and get ready for demanding operations. I noticed today that while working in the viewport of 3ds Max, only one core of my Phenom 965 BE gets utilized, whereas during rendering each and every core works at full load. Later I found on the net that the "viewport" of 3ds Max 2012 is single-threaded. So in this situation, would I benefit from an Intel i7-3770K?
Dimitris Tolios Posted January 6, 2013

Yes and no. While most current production renderers are purely CPU based, viewport performance is in most cases heavily GPU dependent. So, if what you have is a PC struggling when you orbit around your scene etc., then the GPU is more important to upgrade than the CPU.

The Phenom - believe it or not - has better IPC per core than the Piledriver architecture, and since you have one of the fastest quad-core Phenoms, in many ways the FX 8350 won't be an upgrade in anything but pure rendering tasks. You actually have to O/C the 8350 to "beat" the old Phenom in many cases, as its 600MHz starting advantage is not enough to cover the difference.

Not just the "viewport" - 3ds Max in general is single-threaded for almost everything but the actual rendering process, and in that case it is really the mental ray or VRay etc. plugins for 3ds Max that are multi-threaded. 3ds Max itself is not - at least not for the most part.

What kind of GPU do you have now? Are you sure you are doing your best to optimize your workflow, using layers, proxies etc. to lower the complexity of your scenes while working? It is usually better to try to improve our technique than to blame the hardware for the shortcomings. Even if you had a top-of-the-line Quadro or FirePro, you could easily be careless with high-poly models and bring the machine to its knees within minutes.
nitinsharma Posted January 6, 2013

I am using a GTX 275 and I am not upgrading it because I am saving for a Quadro 4000. So is it better to wait and upgrade to Haswell directly, while getting the Quadro 4000 now, which I can afford?
Dimitris Tolios Posted January 6, 2013

If you are really interested in 3ds Max, a Quadro 4000 is probably not a bad investment. There are also a few nice deals going around on eBay - I don't know how active eBay is in your country, and/or whether you can access deals from the UK or US, which usually have a better "used" market.

I would only consider the 83xx series as a drop-in upgrade, compatible with your current mobo, and only if you plan to do a lot of rendering - videos etc. Otherwise your Phenom, while definitely "lacking" in comparison with current Intel CPUs, is not lacking that much. I think it would be a good choice to hold back on CPU/mobo upgrades and see whether Haswell and LGA 1150 will be offered in a better package as far as price/performance/upgradability goes, sometime in mid-to-late 2013 or so.

If you are serious about rendering a lot on your local machine, without bothering with render nodes etc. (though turning your current rig into a render node with a basic GPU/PSU is hardly a bad choice if you switch to another platform - either Intel or AMD), a 3930K and LGA 2011 is clearly a better choice - not too far-fetched if you are willing to drop the money for a Quadro 4000.

No current GTX (5 or 6 series) will be a much better choice for viewport acceleration in comparison with your old 275. They did go a long way as far as GPU compute acceleration goes, but not for the viewport (clearly a choice Nvidia made to push Quadro sales).
Christian Cole Posted January 7, 2013

Thanks Dimitris - exactly the info I needed. I won't be looking to upgrade the CPU, so I'll stick to the 3770K.
gurmukh23 Posted January 10, 2013

I noticed the setup had a dual 24" option for the monitors. Would a single 27" be better, especially if you do a lot of post work and paint in most of the details?
Dimitris Tolios Posted January 10, 2013

Yes and no. First of all, it is good to distinguish between 27" screens - there are 1080p and 1440p 27" panels. A 1080p 27" offers no benefit over a 1080p 22-24" panel - just bigger pixels in a device that takes up more space on your desk and forces you to sit further back to see from edge to edge. In my opinion - and that's all I can offer - that is of no practical use. It is actually impractical in most ways, at least for productivity work. Some people might prefer it for gaming over a smaller 1080p, some might brag about having a bigger monitor, but that's it.

Dual screens are pretty important when you want reference material on screen at the same time as your model/image, either during initial creation and/or during post-production.

1920x1080 = 2.07 MP
1920x1200 = 2.30 MP
2560x1440 = 3.69 MP

Obviously, if you need a lot of real estate on a single screen, the 1440p option is great, but those monitors usually run for $650+. Some people opt to order 1440p monitors aimed at Asian markets directly from Korea, many of them easily found through eBay and small importers for $370-400 (including shipping - some countries might add import taxes on top of that). Almost all available 1440p monitors - including those "cheaper" options - are made with the same LG 1440p IPS panel and offer very similar to identical characteristics to the top-shelf brands.

Great 1200p 23-24" IPS monitors can be found for half that (or less), and decent 1080p monitors can be found for $120, more or less (with DVI connectors - that's important). Users with small budgets can start with any of the above. Other than color accuracy, which in most cases is trivial*, the only thing you will be missing with a basic screen is pretty much "more space", which you can add later on by buying another screen.

Again, I am a big advocate of dual screens, as I find it vastly easier to organize multiple apps running simultaneously. If your workflow revolves around a single app at a time, and you don't work from reference photos/drawings etc. (beyond those you might import into your viewport), then by all means, a 1440p will work great.

More and more cards offer more DisplayPort outputs than DVI (newer cards are usually limited to a single DVI port), so make sure you have accounted for whatever DP-to-DVI or HDMI-to-DVI adapters or adapter cables you will need to connect more than one screen.

*If color accuracy is important, there is no substitute for a good colorimeter plus calibration suite, with which you can make mainstream monitors as accurate (or should I say uniform?) as most monitors, regardless of price. Factory calibration can only go so far, and of course screens deteriorate and drift away from their original specs and calibration over time.
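For what it is worth, a quick pixel-count comparison of the two setups being discussed - just arithmetic on the resolutions listed above, with the dual-24" and single-27" configurations taken as hypothetical examples:

```python
# Pixel-count comparison: two 1080p panels vs one 1440p panel.
# The monitor counts are just the two example setups from the thread.

def megapixels(width, height, count=1):
    return width * height * count / 1e6

dual_1080p   = megapixels(1920, 1080, count=2)  # ~4.15 MP across two screens
single_1440p = megapixels(2560, 1440)           # ~3.69 MP on one screen

print(f'dual 24" 1080p:   {dual_1080p:.2f} MP')
print(f'single 27" 1440p: {single_1440p:.2f} MP')
```

So two basic 1080p panels actually give you slightly more total real estate than a single 1440p screen, just split across two bezels - which is why the choice comes down to workflow rather than raw pixel count.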