inpow watir Posted June 23, 2016

I recently watched this video ==> and paid attention to the RAM usage of each card... it turns out the GTX 1070 seems to consume more RAM than the GTX 1080 (in other words, does that mean GDDR5X has more efficient memory management?). In relation to GPU rendering, V-Ray GPU rendering in particular, can we guesstimate that the GTX 1080 manages memory better than the GTX 1070 when loading polygons into VRAM? I'm really curious whether this affects only viewport performance, or whether it also affects memory allocation when using these cards for GPGPU rendering. I'd really love to hear from anyone with hands-on tests on this. :-)
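For anyone who wants to reproduce that measurement instead of eyeballing a video overlay, here is a minimal sketch using NVIDIA's NVML library (my own illustration, not from the video; it assumes the overlay reads the same per-card memory counters NVML exposes):

```cpp
#include <cstdio>
#include <nvml.h>

// Print per-card VRAM usage via NVML (ships with the NVIDIA driver;
// link against nvml.lib / -lnvidia-ml).
int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);
    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        nvmlDeviceGetHandleByIndex(i, &dev);

        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        nvmlDeviceGetName(dev, name, sizeof(name));

        nvmlMemory_t mem;  // total / free / used, in bytes
        nvmlDeviceGetMemoryInfo(dev, &mem);
        printf("%s: %llu / %llu MiB used\n", name,
               mem.used / (1024 * 1024), mem.total / (1024 * 1024));
    }
    nvmlShutdown();
    return 0;
}
```

Running this while a scene loads on each card would give directly comparable numbers, rather than two separate overlay readings from a video.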
beestee Posted June 23, 2016

I don't have hands-on experience, just an additional observation on the video: the 1080's graphics memory appears to run at a notably higher frequency, so maybe that is what is making the difference.
Dimitris Tolios Posted June 23, 2016

The "full" GP104 comprises 40 SMP units at 64 shaders per SMP: 40 x 64 = 2,560 "cores" = GTX 1080. Each SMP unit carries 4 texture mapping units, so the GTX 1080 has 40 x 4 = 160 TMUs.

The cut-down GTX 1070 has only 30 SMP units active, which works out to 1,920 cores and 120 TMUs.

So the same "job" is spread over ~33% more units in the 1080, which I am guessing has an easier time / less overhead using the Pascal texture compression engine to compress those textures. On top of that, I believe the GDDR5X chips do some compression of their own, but I doubt that is the biggest driving factor here. The 1080 is just "more" of a card as a whole. Too bad it will be overshadowed by the GP110 (or whatever it ends up appearing as) in 5-6 months.
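The arithmetic above is easy to check; a quick sketch (the 64-shaders-per-SMP and 4-TMUs-per-SMP figures are taken from the post itself, not verified against NVIDIA's documentation):

```cpp
#include <cstdio>

int main() {
    // Per-SMP figures as stated in the post above.
    const int shadersPerSmp = 64;
    const int tmusPerSmp    = 4;

    const int smp1080 = 40;  // "full" GP104 (GTX 1080)
    const int smp1070 = 30;  // cut-down GP104 (GTX 1070)

    printf("GTX 1080: %d cores, %d TMUs\n",
           smp1080 * shadersPerSmp, smp1080 * tmusPerSmp);  // 2560 / 160
    printf("GTX 1070: %d cores, %d TMUs\n",
           smp1070 * shadersPerSmp, smp1070 * tmusPerSmp);  // 1920 / 120

    // (40 - 30) / 30 = ~33% more active units on the 1080.
    printf("1080 advantage: %.0f%%\n",
           100.0 * (smp1080 - smp1070) / smp1070);
    return 0;
}
```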
Francisco Penaloza Posted June 23, 2016

For what it's worth, here are some tests from Chaos Group comparing the GTX 1080 with the Titan X and the GTX 980: https://plus.google.com/+VladimirKoylazov/posts/R1XPacoWvB4
inpow watir Posted June 24, 2016

Hi guys, I also found this, dated 16 Jun 2016, which is a bit worrying (beware before you decide to buy or upgrade a card for GPGPU purposes). Take a look here: https://devtalk.nvidia.com/default/topic/942442/gtx-1080-does-not-support-with-octane-render/

It says there that it's not just Octane:
- V-Ray's GPU support fails.
- Redshift GPU renderer crashes.
- FurryBall GPU renderer doesn't see any GPU and renders black with errors.
- Octane: same as the OP.

Not sure if this should be split into a new thread or is better kept here.
inpow watir Posted June 24, 2016

Other comparison videos show the opposite... weird, since it's inconsistent. Or perhaps it means nothing at all for viewport performance or GPGPU rendering, haha. I can't find a reliable reference so far, other than waiting for Mr. Vlado to finish his V-Ray tests. ;-)
VukDjordjevic Posted June 26, 2016

Has anyone had a chance yet to try the GTX 1070 in 3ds Max and actually test the viewport performance? I'm on a 750 Ti and considering upgrading to the 1070, mainly for better viewport performance, since I usually work on large files and masterplans and tend to produce mainly bird's-eye render views... I would love to hear from someone who actually owned a 970 and upgraded to the 1070. And sorry for the off-topic.
ralphdecapite Posted June 27, 2016

It will be really interesting to see the performance of the new cards once drivers ship with CUDA 8.0, which is built specifically for the Pascal architecture.
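This also connects to the renderer crashes reported above: Pascal parts identify as compute capability 6.x, which toolkits older than CUDA 8.0 do not target natively. A minimal sketch (my own illustration, compiled with nvcc, not code from any of these renderers) of the kind of startup check an application can do:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, compute capability %d.%d\n",
               i, prop.name, prop.major, prop.minor);
        if (prop.major >= 6)
            printf("  Pascal or newer: wants a CUDA 8.0+ build "
                   "(or embedded PTX for JIT) to run.\n");
    }
    return 0;
}
```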
inpow watir Posted June 28, 2016

What really appeals to me about the new Pascal cards, if we are not talking about CUDA core counts, is the higher memory capacity per card, which gives more flexibility and possibilities, and their wattage efficiency. Otherwise, the old GTX 780 is still pretty decent for certain scenes. I am considering upgrading to a 1070 myself, mainly for those two reasons.
Dimitris Tolios Posted June 28, 2016

I don't think you will see energy-bill savings considerable enough to justify the move from a 780 unless you are rendering on the GPU all the time - and I mean ALL THE TIME - but if you have cooling issues as-is, a lower-TDP card might work better for you. It is very hard to justify such moves based on Wh and $ if we are talking about single cards and a few hours of 100% utilization a week. In the US, where electricity is super cheap compared to many EU countries, it is even harder.

If we are talking about dedicated GPGPU solutions with multiple cards, or cases where the hardware is pushed at or near 100% for prolonged periods, efficiency does play a major role and scales beyond the PC itself: private offices and homes often need disproportionately more air conditioning because of that extra heat, and it becomes a vicious circle.

There are specific reasons, though, that could turn the argument toward more power-efficient cards, beyond $ savings or even heat comfort around them. E.g., I was looking into a mITX build myself, and although I don't see considerable performance gains between a 1070 and, say, a mildly overclocked 780 Ti, the latter will do its thing while pulling and dissipating an extra 100 W. That is a big share of the total system consumption of a rig with a stock 1070, and in the limited volume of an ITX case and the more restricted envelope of an SFX PSU, it can make a big difference.
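To put rough numbers on that argument, a back-of-the-envelope sketch (the 100 W delta comes from the post above; the electricity price and weekly hours are hypothetical placeholders):

```cpp
#include <cstdio>

int main() {
    // Hypothetical figures: an older card pulling ~100 W more than a
    // stock 1070 under load, at a US-ish electricity price.
    const double extraWatts   = 100.0;  // extra draw of the older card
    const double pricePerKwh  = 0.12;   // USD/kWh; varies a lot by country
    const double hoursPerWeek = 10.0;   // casual GPU-rendering workload

    double kwhPerYear  = extraWatts / 1000.0 * hoursPerWeek * 52.0;
    printf("Casual use: %.0f kWh/year -> about $%.2f/year\n",
           kwhPerYear, kwhPerYear * pricePerKwh);

    // At 24/7 full utilization the picture changes considerably.
    double kwh247 = extraWatts / 1000.0 * 24.0 * 365.0;
    printf("24/7:       %.0f kWh/year -> about $%.2f/year\n",
           kwh247, kwh247 * pricePerKwh);
    return 0;
}
```

At a few hours a week the difference is pocket change (around $6/year under these assumptions); near-constant utilization pushes it past $100/year per card before air conditioning, which is exactly the distinction the post draws.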
beestee Posted June 29, 2016

We have put in an order for an AMD RX 480 8GB to test as a replacement for the 750 Ti that we have been using in most of the AutoCAD/Revit machines at our office.
inpow watir Posted June 30, 2016

I understand your point of view, Dimitris, but unfortunately not everybody lives in the US or Europe - just like me. For a one-man company like mine, the PC you work on most of the time is often also the main render machine, so utilization is no longer casual but near-constant and close to the max; Wh and $ are therefore still nice things to consider.

Anyway, apart from that, if Wh and $ are no concern, I agree with Benjamin: skip the NVIDIA option altogether and head for the RX 480(X). It gives you an acquisition-cost bonus as well. :-D