
Buying a computer (3ds Max, VRay, Mari)



I have my computer now, and I am very happy. Thanks again to everyone for the help.

 

Something I am curious about: the CPU temperature. Is it normal that it's 30°C when no software is running? And when I used VRay RT on the GPU, the CPU was at 50°C. A little hot, isn't it?

How hot does your CPU get when you render with VRay on the CPU?


CPU temperature under load can reach or exceed 70°C, with the mid-50s actually being on the low side, but of course this is a function of the CPU type and model, and the ambient temperature it operates in. That is why most good cooler reviews talk about "delta" temperatures: how many degrees over ambient the temperature settles at (under load and/or idling).
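The "delta" figure those reviews quote is just a subtraction; here is a minimal sketch (the temperatures are illustrative, not measurements from any specific CPU):

```python
def delta_over_ambient(chip_temp_c: float, ambient_temp_c: float) -> float:
    """Cooler reviews report how far above room temperature a chip settles."""
    return chip_temp_c - ambient_temp_c

# Illustrative numbers: a CPU idling at 30°C in a 22°C room has an 8°C idle
# delta; the same CPU at 55°C under load has a 33°C load delta.
print(delta_over_ambient(30, 22))  # 8
print(delta_over_ambient(55, 22))  # 33
```

The point of the delta is that it stays comparable between reviews done in differently heated rooms, while the absolute temperature does not.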

 

Same case with GPUs: GPUs run a bit hotter, but that does not mean it is dangerous. Powerful cards draw a lot of power, as we wrote above, and this power has to be dissipated using a (comparatively) small heat sink. Equilibrium comes at high temperatures (or high deltas from ambient) for most cards, with operating temperatures at 100% load normally ranging from 70°C (for medium/low-end cards) to 90°C (mid/high-end cards).

 

Chips are designed to work under these conditions, and are not in real danger of "wearing out", at least not before the hardware is outdated and replaced (a decade or so). All modern GPUs/CPUs have built-in thermal sensors that will reduce the operating speed and the power fed to the chip to prevent it from burning out, so in general you are safe. Prolonged operation just below the point where these "throttling down" thresholds kick in does stress the chip, though, and most people try to avoid it by using better coolers.

 

Heavily overclocked chips might wear out faster, and this degradation becomes obvious over time, as the chips gradually become unstable at the same temperature/Vcore/speed settings where they initially worked perfectly fine, and need readjustments to be stable again (higher Vcore, better cooling, or even lower speeds). We are talking about situations with hefty Vcore increases, though, which of course lead to much higher operating temperatures that altogether far exceed the original specifications: the 3770K has a stock maximum consumption (full load on all threads) of around 77W, while the same chip overclocked @ 4.8-4.9GHz might reach 200W... that's 2.6x more heat, and naturally such extremely overclocked hardware operates in the 70-80°C range even with high-end watercooling.
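The heat arithmetic in the paragraph above is easy to check (the wattage figures are the ballpark numbers quoted above, not exact measurements):

```python
# Ballpark i7-3770K figures from the discussion above (approximate).
stock_watts = 77.0   # stock, full load on all threads
oc_watts = 200.0     # heavily overclocked @ ~4.8-4.9GHz

# Essentially all electrical power drawn by the chip ends up as heat
# that the cooler has to remove.
heat_ratio = oc_watts / stock_watts
print(f"~{heat_ratio:.1f}x more heat to dissipate")  # ~2.6x
```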

 

CPU temperatures in the high 80s or 90°C and above should be avoided for prolonged usage, to avoid damaging the chip prematurely, but most high-end GPUs and high-end RAM chips can operate at these temperatures within their specifications.

Edited by dtolios

Hi everyone.

I need help with my new PC. I'm an architecture student, and the programs I use most often are AutoCAD 2013 (2D), Photoshop CS6, and for rendering (more important) Sketchup 8 + VRay 1.49, running on Windows 7 Ultimate x64.

I just bought a new desktop: i5 3570K, Cooler Master 212 EVO, ASUS P8Z77-V LX, 8GB Corsair Vengeance 1600, Corsair TX650W, Seagate Barracuda 1TB, max resolution 1920 x 1080.

I need advice on the GPU.

My budget is about 300, max 350 Euro.

 

I was thinking of:

 

EVGA GTX 570 HD Double Shot, 1GB
Memory clock: 3800 MHz
Memory interface: 320-bit
Core clock: 732 MHz
CUDA cores: 480
$255 (~210 Euro)

http://www.amazon.com/gp/product/B0057608W2/ref=ox_sc_act_title_3?ie=UTF8&smid=ATVPDKIKX0DER

 

 

 

http://www.newegg.com/Product/Productcompare.aspx?Submit=ENE&N=100006662&IsNodeId=1&Description=GTX%20570&bop=And&CompareItemList=-1%7C14-130-662%5E14-130-662-TS%2C14-130-620%5E14-130-620-TS

 

EVGA GTX 570 SC, 1GB
Memory clock: 3900 MHz
Memory interface: 320-bit
Core clock: 797 MHz
CUDA cores: 480
305 Euro

http://www.amazon.it/gp/product/B004S6Z0FM/ref=ox_sc_act_title_2?ie=UTF8&smid=A3V3RSA42JIMIT

 

 

EVGA GTX 560 SC, 2GB
Memory clock: 4008 MHz
Memory interface: 256-bit
Core clock: 850 MHz
CUDA cores: 336
250 Euro

http://www.amazon.it/GF-GTX-560-2GB-GDDR5/dp/B005GNKKU6/ref=sr_1_12?s=electronics&ie=UTF8&qid=1342812970&sr=1-12

 

Quadro 2000, 1GB: 380 Euro (a little out of budget)

http://www.ebay.com/itm/380443486572?ssPageName=STRK:MEWAX:IT&_trksid=p3984.m1438.l2649#ht_6189wt_1272

 

Could you please at least post a guide on how to choose a GPU based on the programs that I use, and how to enable the relevant options?

For example, I think VRay uses OpenCL and the CUDA cores are irrelevant, at least for me, since I don't use VRay RT, which needs CUDA. But I'm not sure.

The same goes for the amount of memory: I don't know if 1GB is enough for the complexity of my work (not professional, but moderate).

My choice would be the EVGA GTX 570 HD Double Shot, for the price, and because it is just a little less powerful than the EVGA GTX 570 SC.

 

Thanks


The programs you mention above do not utilize CUDA, and other than Photoshop CS6 none utilizes OpenCL.

 

Adobe introduced CUDA acceleration with the CS5.5 suite, but with CS6 they have re-authored the programs/filters that benefited from GPU acceleration around OpenCL, to open up to all GPU manufacturers (AMD, that is, as nVidia works with either API) and to give a great boost to CPUs with integrated OpenCL GPUs (AMD APUs).

 

VRay does not use OpenCL or CUDA outside of the VRay RT GPU iteration, and RT is not supported yet in VRay for Sketchup.

Vray RT was recently introduced for VRay for Rhino 3D 1.5, but it is CPU accelerated only. Most likely, if and when they upgrade VRay for Sketchup, they will add RT, but I doubt it will be GPU-acceleration enabled, just like in Rhino. Hopefully they will prove me wrong (yes, I do use VRay with SU a lot...).

 

So, at the moment you are mostly interested in viewport performance: whichever GPU you choose, you won't see any real difference outside of some PS CS6 filters (and games, of course).

 

Also keep in mind that GPU-accelerated renderings are VRAM-limited in a lot of cases, depending on your scene's complexity.

The GTX 570 is a noticeably faster card for GPU renderings than the GTX 560 Ti, but that's only when your scene can fit in its frame buffer.

Depending on this limitation, defining a clear winner between a GTX 570 1.2GB and a GTX 560 Ti 2GB is hard: for smaller scenes that fit in 1.2GB (1GB or so will be available for the renderer if the card is also acting as your main display driver), the 570 will be definitely faster. If the scene does not fit in 1-1.2GB, then GPU acceleration simply will not work at all on the 570. 2GB will "buy you" more options, even if you make some speed compromises (do some reading on the various posts on the matter; a quick search in this forum will reveal a few).
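The all-or-nothing frame-buffer logic above can be sketched as a quick feasibility check (the ~200MB display overhead and the function names are assumptions for illustration, not measured values):

```python
DISPLAY_OVERHEAD_MB = 200  # assumed rough cost of also driving the monitor

def usable_vram_mb(total_mb: int, drives_display: bool = True) -> int:
    """Rough VRAM left over for a GPU renderer."""
    return total_mb - DISPLAY_OVERHEAD_MB if drives_display else total_mb

def scene_fits(scene_mb: int, total_mb: int, drives_display: bool = True) -> bool:
    """GPU rendering is all-or-nothing: the whole scene must fit in VRAM."""
    return scene_mb <= usable_vram_mb(total_mb, drives_display)

# GTX 570 ~1.2GB vs GTX 560 Ti 2GB, both also driving the monitor:
print(scene_fits(900, 1280))   # True  - a small scene fits either card
print(scene_fits(1500, 1280))  # False - the 1.2GB card cannot render it at all
print(scene_fits(1500, 2048))  # True  - the 2GB card still can
```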

 

As far as viewport acceleration goes, both the GTX 560 and 570 are powerful cards; at least for a budget workstation, they are more than enough.

AMD cards are respectable contenders as well. In applications using OpenGL (different from OpenCL), Radeons tend to provide better viewport acceleration. In Direct3D applications results vary, with nVidia being on top most of the time - always comparing similarly priced cards.

I tend to see better viewport performance from Radeons with Sketchup, though mind you I don't own a fast desktop GTX to compare with. My conclusion is based on comparing similarly priced mobile GPUs in workgroups at my school: the same SU model on a GT 5xxM nVidia laptop vs. an AMD 56xx/58xxM laptop, a Dell Inspiron One all-in-one with an nVidia GPU vs. an iMac with an AMD GPU, etc. And no, it's not platform based; PCs with AMD RadeonM cards are usually much better with Sketchup than MBPs with nVidia GPUs. Maybe it was a driver/compatibility glitch, but I would say my 5670M 1GB 128-bit is actually better in SU8 viewport performance than an otherwise beastly GTX 560M 3GB I used in an Asus G74 (same - huge - model, used sitting next to each other).

Again, these are my personal observations and IMHO.


Thanks for your advice.

I don't want to annoy you, but I want to ask you something else.

Since I'm not interested in games and use Photoshop only for simple projects, my real problem is the rendering process with VRay.

I'm currently working on an urban scene in Sketchup. It has 79,365 edges and 39,412 faces.

It's the most complex work I have ever dealt with, and it's not finished yet; I still have to add the materials and reflections.

I have no idea how to measure the complexity in other ways.

How can I know whether a program uses OpenGL or OpenCL?

For example, does VRay for Sketchup use OpenGL, OpenCL, or something else?

How do I choose the GPU most appropriate for my work? Which parameters should I look for?

After I choose my GPU and install it, how can I enable the option that uses that kind of acceleration?

It would be great if you have any personal advice.

 

Thanks for all.


VRay for Sketchup does not use either OpenGL or OpenCL.

 

OpenGL (Open Graphics Library) is a standard specification defining a cross-language, multi-platform API for writing applications that produce 2D and 3D graphics. OpenGL was developed by Silicon Graphics. Direct3D (or D3D) is an equivalent API developed by Microsoft.

 

Most programs involving 3D graphics today use one of these APIs to accelerate the drawing of geometry on screen - i.e. to accelerate the viewport of 3ds Max, your 3D game, etc. Some platforms can switch between these APIs, like 3ds Max does. Sketchup uses OpenGL only.

 

OpenCL is a broader framework that allows cross-communication and data exchange between compatible devices, including CPUs, GPUs, etc., plus the API to interface these parallel-computing capabilities with other applications (software).

 

CUDA is something like OpenCL, but it is proprietary to nVidia and unavailable to other hardware manufacturers, while OpenCL is supported by all CPU and GPU manufacturers to some extent. OpenCL is, however, more complicated (and newer in many areas) than CUDA, and takes more effort for developers without prior experience to implement, so CUDA-accelerated applications have been more prominent.

 

VRay does not use either OpenGL or D3D. It is a CPU-based renderer and has nothing to do with what is displayed on screen in real time. Having a faster GPU does nothing for VRay; it won't render faster.

 

Your workflow might improve with really complex models when you have a GPU that better suits your 3D content creation software (e.g. Sketchup), as you will be able to display more stuff and orbit around it faster on your screen. That's viewport acceleration, and it improves modelling speed, not rendering speed.

 

VRay RT uses OpenCL on either the CPU or the GPU (referred to as VRay RT GPU), trying to emulate the production renderings that regular VRay (CPU only) would produce. It is helpful for setting up lighting and cameras, and many people use it for their final production images as well. It is not for everybody, as it imposes various limitations, with hefty hardware requirements being the main issue (fast GPUs with a lot of RAM), and incompatibility with various modifiers, displacement maps, etc. among the others. Also, VRay RT computes light bounces with brute force only, without the option for Light Cache / Irradiance Map and the other sampling methods that make CPU rendering more efficient. In some cases the results are perfectly acceptable, and with a fast GPU, renderings that would take hours can be realized in minutes.

 

Unfortunately, VRay RT for Sketchup hasn't been released yet, nor has it been announced.

VRay RT was released recently for Rhino 3D, but it is CPU only - so again, no matter what GPU you choose, for programs other than Max (iRay/VRay RT), Mari (CUDA only), etc., you won't get rendering speed gains.


Thanks again Dimitris. So practically you are saying that it is better to overclock my CPU from 3.4 to about 4.5 or 5 GHz if I want to see some improvement, and that for the moment it is pointless to buy an expensive GPU that will give me no significant improvement in the programs that I use.

I have to say, I have asked on so many sites, but this is the first valid and understandable answer.


Everything is relative: what you are doing, how patient you are while doing it, etc. What is passable for me might be unbearable for you.

A decent GPU will help you boost your productivity with a smoother running viewport. It won't speed up your rendering time. That is still reserved for Vray RT GPU, iRay, Octane etc users.

 

A GTX 560 Ti is still a great GPU for a budget workstation. Both the 1GB and 2GB versions will work, with the latter being a bit more flexible in case you want to try GPU-accelerated renderings. It is not that cheap yet, but there is a wave of used 560/570/580s coming out of gaming rigs that upgraded to 670/680s. I would consider buying one of those if I could find one at a good price. Also, all the Radeons in that price range (€200 or less) are good performers for viewport acceleration, and might actually even outperform nVidias in OpenCL too, but unfortunately, more often than not, newer drivers ruin compatibility with VRay RT. Same old story. The Sketchup viewport likes Radeons nonetheless.

 

High-end i5s and i7s (the Ks) beg for at least a bit of O/C. I don't know how high you can push the Ivy Bridge i5 - I doubt you will hit 5GHz with this cooler and mobo, but 4.5-4.6 is usually not that hard.
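For the CPU-bound rendering case, the best-case gain from that overclock scales roughly linearly with clock speed; a back-of-envelope sketch (idealized scaling, so real-world gains will be somewhat lower):

```python
stock_ghz = 3.4
target_ghz = 4.5

# Idealized assumption: render time scales inversely with clock speed.
speedup = target_ghz / stock_ghz      # ~1.32x at best
new_minutes = 60 / speedup            # a 60-minute render drops to ~45 minutes
print(f"best case: {speedup:.2f}x faster, 60 min -> {new_minutes:.0f} min")
```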


So practically you are saying that the only solutions are these:

1 - Overclock the CPU for faster rendering with Sketchup, and buy a low/medium-range (cheaper) GPU for the viewport.

2 - Buy a medium-range (more expensive) GPU with CUDA cores, work in Sketchup, and use a program that takes advantage of a GTX GPU, like Lumion, for rendering.


  • 1 month later...

Dimitris, it seems like you definitely know a lot about this subject. I've been searching around for about a month with almost the same question. I too am an architecture student looking to build a new machine. My choice has come down to the Alienware M18x.

 

This will be my configuration:

 

3rd Generation Intel® Ivy Bridge Core™ i7-3720QM (2.6GHz - 3.6GHz, 6MB Intel® Smart Cache, 45W Max TDP)

16GB Kingston HyperX DDR3 1600MHz CL9 Dual Channel Memory

SAMSUNG 256 GB 830 Series Sata III SSD for boot and programs

500GB (w/ 4GB SSD Memory) Seagate XT 7200RPM NCQ Hybrid 32MB Cache for storage

 

Now my question also pertains to the GPU. My choices are the AMD 7970M or the nVidia GTX 675M. I was interested in the GTX 680M, but I was told that it being Kepler-based would be a bad choice. Now, as you might be aware, the AMD 7970M has been having some driver issues. That is the only thing steering me away from it, but the fact that it's more powerful and cheaper than the GTX 675M isn't easy to overlook. The programs I mainly use are AutoCAD 2013 (2D), Revit 2013, Photoshop CS5.5 (looking to upgrade to CS6 soon), and for rendering (more important) 3ds Max 2013 + VRay 2.3, running on Windows 7 Ultimate x64. Which graphics card would you suggest? Any other insight you could provide would be greatly appreciated. Thanks in advance.


  • 3 weeks later...
Which software do you recommend for checking GeForce temperature and fan speed?

http://www.techpowerup.com/gpuz/ - GPU-Z supports a lot of cards, and it's a great, free and lightweight monitoring app.

Along with CPU-Z, it has probably been the most popular monitoring tool of the last few years.

 

EVGA, MSI and ASUS - and I guess other companies too - have their own monitoring programs, usually enabling you to tweak/overclock your card, alter the fan speed / GPU temperature curve, etc. Most of them work with cards from other manufacturers; MSI Afterburner is, I believe, the most popular.
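If you prefer scripting these readings instead of a GUI tool, nVidia cards whose driver ships the `nvidia-smi` command-line utility can report them too; below is a sketch that parses its CSV output (the sample string stands in for real command output, since actual values depend on your card):

```python
import csv
import io

def parse_gpu_readings(csv_text: str) -> dict:
    """Parse output like `nvidia-smi --query-gpu=temperature.gpu,fan.speed --format=csv`."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header = [h.strip() for h in rows[0]]
    values = [v.strip() for v in rows[1]]
    return dict(zip(header, values))

# Sample output with illustrative values (run the real command to get yours):
sample = "temperature.gpu, fan.speed [%]\n52, 40 %"
readings = parse_gpu_readings(sample)
print(readings["temperature.gpu"])  # 52
```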

