
GTX 675M vs. AMD 7970M



Greetings

 

Like everyone else who asks this question, I came across this forum while searching for information about 3D rendering programs and GPUs. I am an architecture student looking into purchasing a new laptop, and this post in particular is about the GPU. I've narrowed my choices down to the AMD 7970M or the Nvidia GTX 675M. I was interested in the GTX 680M, but I was told that it would be a bad choice because it is Kepler-based. As you might be aware, the AMD 7970M has been having some driver issues. That's the only thing steering me away from it, but the fact that it's more powerful and cheaper than the GTX 675M isn't easy to overlook. The programs I mainly use are AutoCAD 2013 (2D), Revit 2013, and Photoshop CS5.5 (looking to upgrade to CS6 soon), and for rendering (more important) 3ds Max 2013 + Vray 2.3, all running on Windows 7 Ultimate x64. I don't use Vray RT, but I am interested in it since it can save time. Which graphics card would you suggest? Any other insight you could provide would be greatly appreciated. Thanks in advance.

 

Also, the laptop I'm looking into getting is an Alienware M18X R2 with this configuration:

 

3rd Generation Intel® Ivy Bridge Core™ i7-3720QM (2.6GHz - 3.6GHz, 6MB Intel® Smart Cache, 45W Max TDP)

16GB Kingston HyperX DDR3 1600MHz CL9 Dual Channel Memory

SAMSUNG 256 GB 830 Series Sata III SSD for boot and programs

500GB (w/ 4GB SSD Memory) Seagate XT 7200RPM NCQ Hybrid 32MB Cache for storage


Disclaimer: the blocks of text below are boring and offer my personal opinion, nothing more, nothing less.

 

Don’t over-think it.

Unless you go all-out for the custom-built Clevos with desktop components etc., a laptop is a laptop.

 

I just went through architecture (grad) school using an $800 Acer 7745G i7 for 3 years (2.5 really, as I had another cheap laptop the first quarter). OK, it wasn’t stock, being upgraded to 12GB RAM and a Momentus XT 500 (later a 256GB SSD too, as it had 2x HDD bays; that SSD has since moved to my desktop), but it served me well – actually outperforming, on a daily basis, same-year MBPs 3x as expensive, much more expensive low-end Alienwares, etc.

 

I worked alongside all of them – only the G74s that 3-4 people in the undergrad program had were faster.

The 7745 was still half as thick and almost half the weight if you include the power bricks (it still had a hefty 120W PSU…it does draw a lot of power, as speed doesn’t come free). Newer MBPs were of course faster, featuring 2nd gen i7s, but their graphics were the same or worse – not much Apple can do about it; portability rather than performance was always their focus.

 

Prioritize your needs: portability vs. speed, but what does affect speed?

 

Most of the programs you will be using won’t really care much about your GPU. Rendering in Vray with the same 3rd gen i7 inside a $700 Acer or a $1,500 chassis won't make any difference in render time.

 

The M18 is just a fancy, big and heavy gaming laptop. The 3rd gen i7 is surely a very good chip, offering nearly (low-end, non-overclockable) desktop performance, but that is true of any laptop with a 3rd gen i7 – more than decent models start at $800-1000. In the $950-1000+ range you already get 1080p monitors, and that pretty much leaves the GPU as the only performance advantage of going with something more expensive. Most of these high-performance “mobile” cards like the 675M, 680M and equivalent AMD versions consume more power under load than a whole MBP or most i7 laptops out there…thus the laptop chassis that has to dissipate that heat has to be big.

 

In the case of the M18, being even bigger than an Asus G75, I think the size is its worst quality. Remember that you will be hauling the thing back and forth to school (I don’t know if you will be using mass transit / bicycle / car etc.), between classes every few hours, for lunch breaks, for fooling around after school etc.…a big, bulky laptop with a power supply to match (the latter will be 240W and 4 lb / 1.9 kg on its own) is rarely a good idea. Also, the battery life away from the charger is simply ridiculous (though switchable graphics improve it over the 2nd gen i7 models). Just a warning: this is not a laptop to walk around with – you will struggle even to find a padded backpack that fits it and doesn't look like crap.

 

Personally I believe the G75 is a better deal. It has everything the M18 has other than the cheesy multi-color lights (it still has a backlit keyboard), and it is smaller and slicker looking – but that’s personal preference. It is also a bit cheaper, and it is an Asus: statistically the most reliable laptops around.

 

Now, back to the GPU…

 

Gaming laptops offer a range of GPUs to choose from. Gaming-wise, the 675M is an absolute beast, and the 680M is even better. These cards will set you back around $600, yet their performance barely touches what $150-200 desktop cards do, as the mobile chips are restricted in their energy consumption (and the heat produced is also a much bigger issue).

 

Vray RT will "work" with the high-end M GPUs - it won't "fly". It will work OK for setting up your scene with a mid-range 640M or 660M (Kepler chips), or with the 560M/580M=675M chips. The best of those cards struggle just to match a desktop GTX 560 - a card that is flirting with the really low end as far as desktop CUDA cards go - which means that your expectations for price/performance have to be much different in the mobile computing world. Always keep power draw in mind when you are guesstimating performance: you have a 150-200W top-of-the-line laptop, while that GTX 560 Ti might draw more than that on its own. It is science, not magic: the "machine" that goes through more energy produces more work. Matching that 200W desktop card with a 130-150W mobile card is as "magic" as it gets.
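To put some rough numbers on that power argument, here is a back-of-the-envelope sketch in Python; the wattages are ballpark figures I am assuming for illustration, not measured TDPs:

```python
# Rough power-budget comparison (ballpark wattage assumptions, not measurements).
desktop_gtx_560_ti_w = 170   # a desktop GTX 560 Ti class card on its own, roughly
whole_laptop_w       = 180   # an entire high-end gaming laptop under load, roughly
mobile_gpu_w         = 100   # a 675M / 7970M class mobile chip, roughly

print(f"Desktop card vs. whole laptop: {desktop_gtx_560_ti_w / whole_laptop_w:.2f}x the power")
print(f"Desktop card vs. mobile GPU:   {desktop_gtx_560_ti_w / mobile_gpu_w:.2f}x the power")
# If performance scales even loosely with power, a ~100W mobile chip
# cannot realistically be expected to match a ~170W desktop card of the same generation.
```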

 

A bit better or a bit worse GPU acceleration doesn’t matter for most of this stuff, as we are talking about 5-10% lower speeds for the Kepler parts vs. the equivalent Fermi models: when the performance benefits over CPU-only previews are there, you should be happy. Getting 18x faster ActiveShade with VRay RT instead of 20x because you didn't go all-out on your GPU is not a big deal.
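As a quick sanity check on that 18x vs. 20x example (the speedups here are the illustrative round numbers from the sentence above, not benchmarks):

```python
# Illustrative ActiveShade speedups over a CPU-only preview,
# using the 18x / 20x round numbers from above (not benchmarks).
top_gpu_speedup = 20.0   # hypothetical "all-out" GPU
mid_gpu_speedup = 18.0   # hypothetical cheaper GPU

gap_pct = (top_gpu_speedup - mid_gpu_speedup) / top_gpu_speedup * 100
print(f"Gap between the two GPUs: {gap_pct:.0f}%")   # 10%
print(f"Either one vs. CPU-only:  {mid_gpu_speedup:.0f}x to {top_gpu_speedup:.0f}x faster")
```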

 

The 640M/660M consume much, much less energy and find their way into much smaller, cheaper and more portable laptops (like MBPs, which are nowhere near cheap, but are pretty thin and light). The 680M is supposedly the fastest mobile GPU today, but I have no sense of how fast it is in VRay RT or viewport acceleration – I've never seen it in action. I have to say that I did use a G74 with a 3GB GTX 580M, and it was definitely not faster than my 1GB Radeon 5850M in viewport acceleration and OpenGL programs. SketchUp and Rhino 3D are very happy with my AMD. I did not work with it in Revit or 3ds Max, which are D3D-based and I am sure will be faster on the GTX.

 

In the long run, a good laptop to get you through school doesn’t have to be expensive.

Past the $1,000 mark, laptops don’t have better build quality in the electronics to justify the higher price tag. Other than the better screen and chassis, which rarely fail (and usually are not covered by any warranty when you do manage to break them, as it is usually physical damage), be clear in your head that a $1,000 Asus, a $2,000 Alienware, and a $2,500-3,000 MBP have the same statistical failure rate, as the components that fail are usually made by the same OEM suppliers.

 

Finishing my endless rant…

 

If you are paying for it yourself, save your money and get a decent laptop that will get you through with decency. If you want something slightly better than the $1,000-1,200 gaming laptops with 1080p screens and “good enough” GPUs, I would say don’t go above $1,400-1,500 – what a mid-range G75 or perhaps other Alienware models will cost you. Stay away from the really big and heavy stuff if you can.

If you are “milking” your parents to get what you want, the $2,000-2,500 budget is there, and you want to get as much as you can out of it, then a decent laptop that focuses on portability plus a desktop workstation will make a much more powerful combination.

 

With desktops you can do custom builds that blow away Alienware and pretty much anything other “gaming desktop” assemblers/manufacturers put out there. Always talking price/performance.

 

If you do need a better-than-average gaming laptop as well as a mobile workstation, the big, fat Alienwares and G75s become better deals.

 

Keep in mind that this forum doesn't have thousands of users on their feet ready to answer questions - especially over weekends - so you have to be patient instead of adding multiple posts about the same question.


Thank you for your comment. I understand the M18X is a large laptop, but that really doesn't bother me. As far as the comment about failure rates, I plan on purchasing their protection plan. I drive myself to campus, so I don't have to worry about lugging it around, and when I'm on campus I'm always in studio, so the majority of the time the laptop will be on my desk. The reason I want something portable is that I do all my work at school, so having a desktop would be almost pointless; I would also be able to take it to work with me. As far as the GPU goes, were you saying that it wouldn't really matter?


GPU acceleration usability in VRay RT is relevant...

Since not all features are supported and/or there are slight bugs here and there, you might not be able to use it as your "production" engine in its current implementation for more complex stuff.

Arch viz in school projects is rarely that complex, to be honest (and it should not be; you are studying architecture, not arch viz itself).

 

So, if you end up using it as a fast "preview" engine, setting up your shaders and lights etc. on the fly, even a 660M will be much faster than the CPU itself. Of course the 670M/675M/680M cards will be even faster, and while a 10-20% performance difference is not negligible, when you are already multiple times faster than the CPU alone it is not a deal breaker.

 

I honestly believe that the real productivity difference between "best" and "second best" very rarely gets you your money's worth back, especially when we are talking about school work.

 

I was told to stay away from Kepler-based GPUs. How do you feel about the AMD 7970M?


Who told you to stay away from Kepler? Any particular reason?

The 7970M is an amazing card. It is the second fastest single mobile GPU - losing to the 680M by a very small margin...

 

According to notebookcheck, the 7970M is 70% faster than the 675M - the best Fermi-based card - and the 680M is 81% faster...not too shabby for Kepler :p

 

That is in 3DMark 11 though, not an OpenCL/CUDA compute comparison.
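Taking those quoted 3DMark 11 percentages at face value (675M as the baseline), a quick bit of arithmetic shows how close the top two actually are:

```python
# Quick arithmetic on the quoted notebookcheck 3DMark 11 percentages.
gtx_675m = 1.00              # baseline: the best Fermi-based mobile card
hd_7970m = gtx_675m * 1.70   # "70% faster than the 675M"
gtx_680m = gtx_675m * 1.81   # "81% faster than the 675M"

gap_pct = (gtx_680m / hd_7970m - 1) * 100
print(f"680M over 7970M: {gap_pct:.1f}% faster")   # ~6.5% - the "very small margin"
```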

Still, keep in mind that the 680M is pretty much a lower-clocked, power-optimized GTX 670 (same shaders/CUDA cores), while the 675M could be remotely compared to the GTX 560.

 

The 7970M could be compared to a desktop 7870 - as far as GCN architecture, cores and memory bandwidth go - again with much lower power consumption and clocks.

 

I would expect the 7970M to deliver better viewport performance in 3D content creation apps with OpenGL viewports, and comparable performance to the 680M in Direct3D. Apparently both smoke the 580M/675M in D3D, so 3ds Max should not be a problem for either.

----

To be honest, I was very skeptical about what to expect of Kepler in VRay RT after watching it get embarrassed by Fermi in some compute tests, but multiple users of the program report real-life test results timing GTX 580 vs. GTX 680 cards, and they get 4-7% differences in rendering times (the time needed to cast a fixed number of rays per pixel), which is not bad at all. It is not an improvement in performance, but it surely does run cooler and draws much, much less power.

 

Benchmark results cannot always be directly extrapolated to real-life performance differences.


I've been searching this topic like crazy, so here are some comments given to me on NotebookReview.

 

 

Hey, I saw your comment on the post about the AMD 7970M vs. GTX 680M. You stated that you were deciding between the two. I'm kind of in the same boat right now, but my choice is between the AMD 7970M and the GTX 675M. I was told that the GTX 680M, being the new Kepler architecture, would be terrible for 3ds Max. I use Max a lot with Vray for rendering. I also use AutoCAD, Revit, and the Adobe suite. Any input you can give me on the different GPUs would be greatly appreciated.

 

Well, I have no plans of getting either GPU or a new laptop at the moment.

However, the AMD (7970M) is a lot better than the 680M in compute performance (OpenCL).

3ds Max has yet to fully transition to OpenCL (though it will).

The other programs you mentioned will definitely benefit from the 7970M.

Plus, the 7970M is cheaper than the 680M.

 

 

Thank you for answering back. What about CUDA? If I'm not mistaken, 3ds Max is using it now. Also, in your opinion, if you were to choose a GPU, would it be the 7970M? I'm just trying to get as much advice as I can. This will be my first time configuring my own laptop, so I want to make sure I spend my money wisely. Any other suggestions you can make would be greatly appreciated.

 

Max currently utilizes CUDA, yes; however, the CUDA capabilities of the latest Kepler graphics cards have been severely cut down by Nvidia (intentionally) to focus more on gaming.

To sum it up, Kepler cards are MUCH slower in CUDA-oriented tasks compared to Fermi.

Realistically speaking though, the AMD variants offer the same or similar performance for a much lower cost, which is why I'd personally go with them and their increased support for OpenCL.

 

Also... there are a few external plugins for 3ds Max, if I'm not mistaken, that can use OpenCL. They are in experimental stages (sadly), but as I said, it's probable that Autodesk will officially transition to OpenCL relatively fast.

It's entirely up to you.

My advice would be to further study the differences between the two GPUs.

Also... have you used CUDA heavily up until now, and on what kind of computer/laptop?

Check with Autodesk and see if they have any news regarding OpenCL integration into Max.

 

I use Max heavily myself, but I haven't really kept up with that news because of a lack of money to get anything new.

 

 

 

The laptop that I will be using is the Alienware M18X. As for your other question, I've never used CUDA before because I've never used Vray RT. I'm interested in looking into it though, since it could save time. Once again, thank you for taking the time to answer my question.

 

It's a pretty powerful laptop in its own right.

What was the previous configuration (computer) you used for Max, if you don't mind me asking?

 

As I said before... Vray RT currently uses predominantly CUDA, but I wouldn't really opt for the Kepler variant because it's a lot slower than last-gen Fermi.

 

You will benefit either way, because an entry-level Ivy Bridge quad-core is already pretty fast for that kind of work - granted, it doesn't come close to the GPU, but still.

 

The way I see it, you have several options:

1. Get the 7970M and pay less, and you will be able to use OpenCL in other programs (excluding Vray RT for now - until they decide to implement OpenCL).

2. Get the previous-generation Nvidia top-end GPU (renamed to 675M - it's Fermi, on the previous manufacturing process, just rebranded) - you will get pretty good CUDA performance for what you need, but that GPU is over 50% slower than the 7970M or 680M.

3. Get the 680M (by paying through the nose) and live with the diminished CUDA capability compared to Fermi (though honestly, you WOULD see acceleration in Vray RT with it either way).

 

I wouldn't get the 680M simply because it's overpriced. The 7970M is basically its equal power-wise in games (and it BLASTS it into oblivion when it comes to compute performance - OpenCL), but much cheaper, and the market is moving towards OpenCL (so it's just a matter of time before Vray implements it).

 

 

The comments above were given to me there.
