
GTX or Quadro?



I kind of thought that is what you meant. Since I understand the point, I am curious how much of a performance gain there is to such builds. I am thinking of building a second workstation. My first one was based on an i7 2600K at 3.4 GHz overclocked to 4.5 GHz, with 16 GB of 1600 MHz RAM and an ATI FirePro GL7800 video card. I have a 128 GB SSD and two 1 TB Barracudas. I run regular air cooling, but I have hit temps above 80°C when doing renderings that last longer than 3 hours. I've actually topped out at 93°C, which concerns me. Part of me suspects the processor was not seated correctly with thermal paste, but I've let it go since I normally run between 35-50°C doing normal 3D work with quick renderings. Plus, the unit has run quite stable, and pretty hard, over the last 3 years.

 

I've done some research, and many say that jumping from my current i7 to the new Extreme i7 is not worth the money. I am willing to make the jump if I would see around a 20% gain in rendering speed with V-Ray or V-Ray RT, but I have not read anywhere that this is yet the case. I did hear the next Intel chip release could be worth it if it hits 8 cores and uses the new RAM coming out. So at this point my thoughts center on keeping my current config and waiting until early 2015 to build new again.

 

I am curious what your opinion would be.

 

Clock for clock, a new 4930K will be 10% faster, per core, than a 2600K.

That means that for regular modeling etc. (mostly single-threaded operations) you are right, the upgrade isn't really worth it and the 2600K holds its own surprisingly well.

 

For pure rendering though, a 4930K offers 50% more cores, so along with the added 10% efficiency per thread you could see as much as 65% better performance overall. Yes, the s2011 platform is expensive, so it won't be an "easy" choice to make when you still have a respectable CPU for everyday tasks.
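To put rough numbers on that estimate, here is a minimal back-of-the-envelope sketch in Python. The core counts and the ~10% per-core figure are just the values quoted above, not benchmark results:

```python
# Back-of-the-envelope estimate of the rendering speedup discussed above.
# Figures come from the thread, not from measured benchmarks.

cores_2600k = 4          # i7 2600K: 4 cores
cores_4930k = 6          # i7 4930K: 6 cores
per_core_gain = 1.10     # ~10% faster per core, clock for clock

core_scaling = cores_4930k / cores_2600k       # 1.5x from the extra cores
total_scaling = core_scaling * per_core_gain   # assumes rendering scales roughly linearly with cores

print(f"Estimated rendering speedup: {total_scaling:.2f}x "
      f"(~{(total_scaling - 1) * 100:.0f}% faster)")
# -> Estimated rendering speedup: 1.65x (~65% faster)
```

This assumes near-linear scaling across cores, which offline renderers get close to but rarely hit exactly, so treat the 65% as an upper bound.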

 

Yes, the next Extreme line will probably have 8-core, Haswell-E based models in the $600 range, and will use DDR4. More details should be out around Q3 2014, and you should be able to buy them in Q4 2014. We don't know how expensive DDR4 will be, or whether the initial batches will offer any performance advantage over mature DDR3. As it is, RAM seems to be far from a bottleneck for DCC/CAD applications.

 

As for your 2600K temps, what kind of "regular" air cooler are you using?



I've actually topped out at 93°C, which concerns me. Part of me suspects the processor was not seated correctly with thermal paste, but I've let it go since I normally run between 35-50°C doing normal 3D work with quick renderings. Plus, the unit has run quite stable, and pretty hard, over the last 3 years.

93°C is really high for a 2600K at 4.5GHz! I would say anything above 70°C under normal rendering load is too high for a 2600K. They normally stay quite cool compared to their successors, the 3770K and 4770K, where more than 90°C is normal at 4.5GHz on air.

There must be something wrong with your setup: a bad cooler, wrong mounting, or too high a voltage for this kind of overclock. I'm running three 2600/2700Ks at 4.7GHz and 4.8GHz (air cooled) and they normally stay below 65°C while rendering.
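If you want to keep an eye on the temps while a render is running, something like the following works. This is just a minimal sketch that assumes a Linux box with the third-party psutil package installed (its sensors_temperatures() call is not available on Windows):

```python
# Minimal sketch: log the hottest reported CPU temperature while a render runs.
# Assumes Linux/BSD with the third-party psutil package installed;
# psutil.sensors_temperatures() is not available on Windows.
import time
import psutil

def log_cpu_temps(interval_s=10, samples=6):
    """Print the hottest core/package temperature every interval_s seconds."""
    for _ in range(samples):
        temps = psutil.sensors_temperatures()  # dict: sensor name -> list of readings
        readings = [t.current for entries in temps.values() for t in entries]
        if readings:
            print(f"hottest sensor: {max(readings):.1f} °C")
        else:
            print("no temperature sensors reported")
        time.sleep(interval_s)

if __name__ == "__main__":
    log_cpu_temps()
```

On Windows, tools like HWMonitor or Core Temp give you the same per-core readings without any scripting.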

 

But concerning water cooling... the main advantage of a good water-cooling setup is that you get more cooling capacity at much lower noise. You can use much bigger cooling surfaces than even the biggest air coolers on the market, along with bigger, slower-running fans. I'm using a hexacore i7 3930K at 4.6GHz and a GTX 560 Ti in my workstation, all cooled by a 400x400 mm radiator (a MoRa 3 LT like this one: http://www.manuel-aka-mdk.net/gallery3/var/resizes/PC-Stuff/008/MoRa3_4x180mm_11.jpg?m=1308684426) mounted on the outside of my case with four very slow 180mm fans. The result is a very quiet system, at the noise level of maybe a 2.5" HDD.

 

I've done some research, and many say that jumping from my current i7 to the new Extreme i7 is not worth the money. I am willing to make the jump if I would see around a 20% gain in rendering speed with V-Ray or V-Ray RT, but I have not read anywhere that this is yet the case. I did hear the next Intel chip release could be worth it if it hits 8 cores and uses the new RAM coming out. So at this point my thoughts center on keeping my current config and waiting until early 2015 to build new again.

 

It's true that an i7 Extreme (3960X/3970X/4960X) is not worth the money - but only because you can get the same speed from an overclocked 3930K/4930K.

The performance gain of a 3930K at 4.5GHz over your 2600K at 4.5GHz will be roughly 50%, simply because you get 50% more cores of the same chip generation - a 4930K at the same speed will be a few percent faster, if you manage to get it to that speed (I have one that hit a wall at 4.2GHz - really bad).

 

But if you can wait, you probably won't have to wait until 2015 - only until Q3 2014 - to get an 8-core Haswell-E (with DDR4), if the latest release dates hold...

But then it's possible you'd have to buy the Extreme-series CPU to get 8 cores - it's not clear yet whether the cheaper one (the 5930K) will have 8 cores or only 6.

 

edit: I didn't see that Dimitris had already answered... ;)

Edited by numerobis

While I am, of course, not happy that CPUs are moving so slowly at the "moment" (the past 3 years), 2600K to 4930K is a great jump, and well worth it imho. I bought a 2600K the very week it came out 3 years ago, and at the moment it's the slowest part in my "farm" (the rest are 1x 3930K and 3x 4930K). It will go; the 60% rendering difference at stock speeds makes it absolutely obsolete. We're talking 60% - that's a few hours off each render - how is that not worth the upgrade? For me it's not even worth keeping in the farm, since it's so much slower than its "bigger" brothers that it contributes only marginally to the overall rendering performance. Swapping it for another 4930K is quite a cheap investment (roughly 750 euros for CPU + board).

 

And that's not counting all the benefits of the LGA2011 platform.

 

Lastly, I don't subscribe to the philosophy of waiting for the "next iteration around the corner". That goes on forever. If you can use the power now, why not? You probably know when the performance is simply not enough (though with CGI that is basically always - it's super hungry, and you can NEVER have enough performance). The time it shaves off test renderings alone pays for the investment in a few weeks (if not days).
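For anyone who wants to sanity-check that claim with their own numbers, here is a small sketch. The 60% figure and the ~750 euro upgrade cost come from the post above; the render length and renders-per-week are made-up placeholders:

```python
# Rough sketch of what a ~60% rendering-speed difference means in practice.
# The 60% figure and ~750 EUR upgrade cost are quoted from the thread;
# the render length and renders-per-week below are placeholder assumptions.

speedup = 1.60            # new CPU renders ~60% faster at stock speeds
old_render_hours = 8.0    # placeholder: a typical overnight render on the old CPU
renders_per_week = 10     # placeholder: test + final renders per week
upgrade_cost_eur = 750    # CPU + board, as quoted above

new_render_hours = old_render_hours / speedup
hours_saved_each = old_render_hours - new_render_hours
hours_saved_weekly = hours_saved_each * renders_per_week

print(f"Per render: {old_render_hours:.1f} h -> {new_render_hours:.1f} h "
      f"({hours_saved_each:.1f} h saved)")
print(f"Per week:   {hours_saved_weekly:.1f} h saved, for a {upgrade_cost_eur} EUR upgrade")
# -> Per render: 8.0 h -> 5.0 h (3.0 h saved)
# -> Per week:   30.0 h saved, for a 750 EUR upgrade
```

Plug in your own render times and volume; whether the payback takes days or months depends entirely on how much you render.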

