Viewport performance guess: GTX 1070 consumes more VRAM than GTX 1080



I recently watched this comparison video and paid attention to the VRAM usage of each video card... it turns out the GTX 1070 seems to consume more VRAM than the GTX 1080 (in other words, does that mean GDDR5X has more efficient memory management?)
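
For anyone who wants to measure this on their own card instead of eyeballing a video, per-GPU VRAM usage can be read programmatically. A minimal sketch using the pynvml bindings (assuming the nvidia-ml-py package is installed; this is just one way to check, not how that video was made):

# Minimal sketch: read current VRAM usage per GPU via NVML.
# Assumes the nvidia-ml-py package (imported as pynvml) is installed.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # byte counts
    print(f"GPU {i}: {mem.used / 1024**2:.0f} MiB used of {mem.total / 1024**2:.0f} MiB")
pynvml.nvmlShutdown()

Running it while orbiting the same scene in the viewport on each card would give comparable numbers.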

 

In relation to GPU rendering, V-Ray GPU rendering in particular, can we guesstimate that the GTX 1080 has better memory management than the GTX 1070 when it comes to loading polygons into VRAM?

 

Really curious whether this affects only viewport performance, or whether it also affects memory allocation management if we use these cards for GPGPU rendering..

 

I'd really love to hear from anyone who has done hands-on testing related to this.. :-)


The "full" GP104 is comprised of 40 SMP units = 64 shaders / SMP x 40 SMP = 2560 "cores" = GTX 1080.

Each SMP unit carries 4 texture mapping units, so for the GTX 1080: 40 SMP × 4 = 160 texture mapping units.

 

The cut-down GTX 1070 has only 30 SMP units active, which works out to 1920 cores and 120 texture mapping units.

 

So the same "job" is spread over roughly 33% more hardware on the 1080, which I am guessing has an easier time / less overhead using the Pascal texture compression engine to compress those textures better.
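
To make the arithmetic concrete, here is a tiny sketch deriving those counts and the ~33% figure from the per-SMP numbers quoted above (nothing here beyond the figures already in this post):

# Derive GP104 core/TMU counts from active SMP units (figures from this post).
SHADERS_PER_SMP = 64
TMUS_PER_SMP = 4

def gp104_counts(active_smps):
    return active_smps * SHADERS_PER_SMP, active_smps * TMUS_PER_SMP

cores_1080, tmus_1080 = gp104_counts(40)  # full GP104: 2560 cores, 160 TMUs
cores_1070, tmus_1070 = gp104_counts(30)  # cut down:   1920 cores, 120 TMUs

print(cores_1080 / cores_1070 - 1)  # ~0.333: the 1080 has ~33% more units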

 

On top of that, I believe the GDDR5X chips do some compression of their own, but I doubt this is the biggest driving factor here. The 1080 is just "more" of a card as a whole.

 

Too bad it will be overshadowed by the GP110 in 5-6 months (or whatever it appears as that soon) :p


Hi Guys..

I also found this, dated 16 Jun 2016, which is a bit worrying (beware before you decide to buy / upgrade your card for GPGPU purposes). Take a look here: https://devtalk.nvidia.com/default/topic/942442/gtx-1080-does-not-support-with-octane-render/

 

It says there that it's not just Octane; quoting that thread:

V-Ray GPU support - fails.

Redshift GPU renderer - crashes.

FurryBall GPU renderer - doesn't see any GPU, renders black with errors.

Octane - same as the OP.
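
For context, those failures fit the timing: Pascal reports CUDA compute capability 6.1, and renderers compiled only against older CUDA toolkits simply didn't know the architecture yet. A small sketch to check what your own card reports, again via the pynvml bindings (an illustrative check, not what those renderers do internally):

# Report each GPU's CUDA compute capability via NVML (nvidia-ml-py package).
# Pascal cards like the GTX 1070/1080 report 6.1; a renderer built only
# against an older CUDA toolkit won't recognize that architecture.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    h = pynvml.nvmlDeviceGetHandleByIndex(i)
    major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(h)
    print(f"GPU {i}: compute capability {major}.{minor}")
pynvml.nvmlShutdown()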

 

Dunno if this should be split off into a new thread or is better here instead.


Other comparison videos show it the other way around.. weird, as it's inconsistent.. or perhaps it has no meaning at all for viewport performance or GPGPU rendering, haha.. I can't find a reliable reference so far, other than waiting for Mr. Vlado to finish his tests on V-Ray.. ;-)


Has anyone had a chance to try the GTX 1070 in 3ds Max yet and actually test the viewport performance? I'm on a 750 Ti and I am considering upgrading to the 1070, mainly for better viewport performance, since I usually work on large files and masterplans and tend to produce mainly bird's-eye render views... I would love to hear from someone who actually owned a 970 and upgraded to the 1070. And sorry for the off-topic :(


What's really appealing to me about the new Pascal cards, if we are not talking about CUDA core counts, is the higher memory capacity per card, which gives more flexibility and possibility, and their wattage efficiency.

Otherwise the old GTX 780 is still pretty decent for certain scenes.

I am also considering upgrading to the 1070 myself, mainly for those two reasons above..


I don't think you will see considerable energy bill savings to justify the move from a 780 unless you are rendering on the GPU all the time - and I mean ALL THE TIME - but if you have cooling issues as-is, a lower-TDP card might work better for you.

 

It is very hard to justify such moves based on Wh and $ if we are talking about single cards and a few hours of 100% utilization a week. In the US, with electricity being super cheap vs. many EU countries, it is even harder. If we are talking about dedicated GPGPU solutions with multiple cards, or cases where we push the hardware at or close to 100% for prolonged periods, efficiency does play a major role and scales beyond the PC itself, as private offices / homes often need to be air-conditioned disproportionately more due to that extra heat, and it's a vicious circle.
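
To put rough numbers on that, a back-of-the-envelope sketch (the wattage delta, weekly hours and electricity price are illustrative assumptions, not measurements):

# Back-of-the-envelope yearly cost of ~100 W of extra GPU draw.
# All inputs are illustrative assumptions.
extra_watts = 100        # extra draw of the hungrier card at full load
hours_per_week = 10      # hours of ~100% GPU utilization per week
price_per_kwh = 0.12     # USD; many EU countries are 2-3x this

kwh_per_year = extra_watts / 1000 * hours_per_week * 52
print(f"~{kwh_per_year:.0f} kWh/year -> ~${kwh_per_year * price_per_kwh:.0f}/year")
# ~52 kWh/year -> ~$6/year at a few hours a week: negligible.
# At 24/7 utilization (8760 h/year) the same delta is ~876 kWh -> ~$105/year,
# before counting the extra air-conditioning load it creates.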

 

There are specific reasons, though, that could turn the argument and point you to more power-efficient cards, beyond $ savings or even heat comfort around them:

 

E.g. I was looking into a mITX build myself, and although I don't see considerable performance gains between a 1070 and, say, a mildly o/c'ed 780 Ti, the latter will do its thing while pulling and dissipating 100 W of extra heat. That's a big % of the total system consumption a rig with a stock 1070 would have, and in the limited volume of an ITX case and the more restricted envelope of an SFX PSU, that can make a big difference.
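
A quick sanity check on that percentage (the system wattage is my own rough assumption for a stock-1070 mITX rig):

# Share of total system power added by ~100 W of extra GPU draw.
# The 300 W full-load figure is a rough assumption, not a measurement.
stock_1070_system_watts = 300
extra_gpu_watts = 100

print(f"{extra_gpu_watts / stock_1070_system_watts:.0%} more heat to dissipate")
# ~33% on these assumptions - real headroom out of an SFX PSU's ~450-600 W.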


I understand your point of view, Dimitris, but unfortunately not everybody lives in the US or Europe.

Just like me. For a one-man company like mine, the PC you work on most of the time is often also the main render machine, so yeah.. utilization is no longer casual but rather constant and close to the max, so Wh and $ concerns are still nice things to consider..

 

Anyway, apart from that, if Wh and $ are no concern, I agree with Benjamin to avoid the NVIDIA option altogether and head for the RX 480(X).. It gives you an acquisition cost bonus as well. :-D
