
Dear people with more knowledge than me



Dear people with more knowledge than me

 

I am about to invest a stupidly big amount of money in a prebuilt PC workstation (for the warranty, service and so on).

I come from Mac, so I really have no idea what to look for.

 

 

It needs to be a good all-round workstation (modelling, V-Ray rendering and mental ray rendering in 3ds Max). I would like the option to use both GPU and CPU render power without either of them lacking too much.

 

Question one:

Vision DTP-352 - Intel® Core™ i7-3820 4x3.60GHz (Turbo 3.80GHz) 10MB cache

Or

Vision DTP-352 - Intel® Core™ i7-3930K 6x3.20GHz (Turbo 3.80GHz) 12MB cache

 

Will it make any difference?

 

AND

 

nVidia Quadro K4000 3GB

Or

Asus GTX Titan 6GB GDDR5

 

I will not be able to understand long technical explanations, so please keep it simple; I am technologically challenged when it comes to PCs.

 

Thanks from

Jakob (Denmark)

 

 

(Moderator edit: Vendor link removed)


I don't think I'd buy a Quadro unless you know you need it and why. For GPU rendering (iray, for example), the Titan 6GB is the current high end. (I'm not sure what the best card for V-Ray GPU is right now, but I'm sure the Titan is at or near the top of that list.)

 

The 3930K is a 6-core and the 3820 is a 4-core. The 3930K will run a render (such as a conventional, non-GPU production render in mental ray or V-Ray) faster than the 3820, and will run some games faster, but that's it - it won't speed up other 3ds Max tasks. Since you're buying a high-end GPU, if you know you want to lean on GPU rendering more than CPU rendering and you want to save money, this is an area where you can comfortably do so by going with the 3820.
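
As a very rough back-of-the-envelope sketch in Python (assuming an idealized cores x clock scaling, which real renders only approximate - they usually scale a bit worse than linearly, so treat this as an upper bound):

# Rough relative CPU render throughput: cores x clock.
# Idealized assumption; real renders scale slightly worse.
i7_3820 = {"cores": 4, "ghz": 3.6}
i7_3930k = {"cores": 6, "ghz": 3.2}

def throughput(cpu):
    return cpu["cores"] * cpu["ghz"]

speedup = throughput(i7_3930k) / throughput(i7_3820)
print(f"3930K vs 3820 render throughput: ~{speedup:.2f}x")  # ~1.33x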

 

If your budget is enormous and you just want both CPU and GPU rendering to run fast, get the 3930K or the newer 4930K and, since the motherboards for those have capacity for extra video cards, get the Titan to run GPU renders on and also get something like a FirePro W7000 to hook the monitor up to (so the Titan can devote its entire capacity to the render). The 3930K/4930K will also have advantages if you want to get into overclocking (which is an advanced subject that you don't have to think about right now).


Andrew pretty much summed it up...

It doesn't take much thought to see that for multithreaded tasks like rendering, "more cores at similar speeds" is usually "better". But certain traits do make a difference regardless of price: e.g. Quadro driver optimizations sometimes help cheaper or on-paper underwhelming hardware perform better than gaming cards with more raw power. That applies to viewports only, though (i.e. what you see at the limited resolution of the screen, not the actual rendering).

 

CPU-wise, I would not pick a "ready-made" system with a 3820. The s2011 platform has certain niche advantages over s1150 (4770K): better support for more than dual SLI (a gaming niche - SLI and PCIe lanes have little to no effect on GPGPU rendering, even a 1x PCIe lane works just fine), support for a maximum of 64GB of unbuffered RAM versus 32GB for s1155/s1150, and of course access to 6-core CPU parts.

 

The 3820 is slower than the 4770K by a margin worth mentioning (not slow in any way, just a tad slower), and getting an s2011 chip that is a tad slower and doesn't capitalize on the above niches is pointless. The new 4820K might be worth it as an s2011 quad, as it at least offers a higher base clock than the 4770K.

 

Titans and 780s are the best-performing GPGPU (CUDA) cards for progressive rendering (V-Ray/iray). They are practically identical in speed; it is the VRAM that makes the difference in the size of scene you can process with each. The only Quadros that offer decent GPGPU performance are VERY pricey, namely the K5000 (~the raw power of a GTX 680/770) and the K6000 (~a Titan, but with 12GB VRAM).


Well, as I've tried to say above, there is no "clear-cut" answer.

Is the 4930K better than the 3930K? Yes, but by a very small margin. Outside of power consumption, there is no real reason for anyone who already has a 3930K to think about upgrading to one.

 

The Titan is kind of overkill for most applications (including gaming), and it is seriously overpriced for its niche. I imagine a boutique like MM Vision could be adding a margin above retail, making the whole price/performance ratio even worse.

If you know that you need 6GB of VRAM (doubtful), sure, there's no other choice really, but if you don't, why go there?

Viewport performance is nothing spectacular, nor really better than what a 770 will give you; GPGPU rendering performance will be almost the same with a 780 (3GB), and almost double with 2x 780s, which don't cost that much more than a single (marked-up) Titan, etc.

 

DDR3-1333 is way too slow for that kind of investment... 1600/1866 is worth the price difference (which should be very, very small over 1333, really); above 1866 the returns are negligible in real-life performance with all Intel CPUs.

 

Also, MM Vision appears to be using Raidmax PSUs in its configs, which are not highly regarded.


I'm a believer in video cards with large amounts of RAM (and think it's cool that you can get gamer-grade ones with 4-6GB these days), but only for GPU rendering purposes. I'd go for something like the Titan or the 4GB 770 if I needed it for running, say, iray or V-Ray RT GPU. (That, and I can see where you'd want it to run new games on high detail settings.) But regular CPU-based production renders (normal mental ray and V-Ray, etc.) don't use GPUs at all, the 3ds Max interface doesn't benefit from a huge amount of video card RAM, and the benefits of a fast GPU to the Max interface are probably not worth the expense. If not for GPU rendering, I'd be looking at under-$400 video cards instead of $1000 ones.


Thank you so much, Dimitris & Andrew, I really appreciate your comments. But the thing is, where do I go from here?

 

The only reason I would buy from MM Vision is that I think I need the 4 years of support, warranty and so on.

 

I am taking the biggest risk of my life: quitting a long-term education and spending a big amount of money, all for one reason - to do archviz for a living. So I am going to need a high-end, all-round workstation that is ready for everything I or the fast-evolving computer industry may throw at it.

 

Is there a high-end workstation hardware guide where I can find the names of the parts, purchase them, and get someone to put it all together professionally?

 

 

All the best.

 

Jakob


Some values for V-Ray RT from this thread: http://forums.chaosgroup.com/showthread.php?52415-GPU-benchmarks

 

2x GTX 580 3GB = 1m 28s
1x GTX 690 4GB = 1m 31s
1x GTX Titan 6GB = 1m 46s
1x GTX 780 Superclocked 3GB = 1m 56s
1x GTX 780 3GB = 2m 05s
1x GTX 580 3GB = 2m 44s
1x GTX 680 2GB = 3m 03s
1x GTX 670 4GB = 3m 07s
1x GTX 570 1.26GB = 3m 12s
1x GTX 470 1.26GB = 3m 57s
1x GTX 560 Ti 1GB = 4m 50s
1x GTX 460 1GB = 6m 40s
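
For easier comparison, here are those times normalized against the Titan in a few lines of Python (plain arithmetic on the values above; lower time = faster):

# V-Ray RT benchmark times from the list above, in seconds.
times_s = {
    "2x GTX 580 3GB": 88, "GTX 690 4GB": 91, "GTX Titan 6GB": 106,
    "GTX 780 SC 3GB": 116, "GTX 780 3GB": 125, "GTX 580 3GB": 164,
    "GTX 680 2GB": 183, "GTX 670 4GB": 187, "GTX 570 1.26GB": 192,
    "GTX 470 1.26GB": 237, "GTX 560 Ti 1GB": 290, "GTX 460 1GB": 400,
}
titan = times_s["GTX Titan 6GB"]
for card, t in sorted(times_s.items(), key=lambda kv: kv[1]):
    print(f"{card:16s} {t // 60}m {t % 60:02d}s  ({titan / t:.2f}x Titan)")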


'So I am going to need a high-end, all-round workstation that is ready for everything I or the fast-evolving computer industry may throw at it.'

 

Dude, just buy what you need and don't spend too much money on parts thinking it will be wise for the future.

 

What software do you use, how much time do you spend with it and what is the most important thing for you?


That's good advice. The only thing I would add: if you go with one of the 4-core CPUs on an 1155 or 1150 motherboard, go with one of the boards (like the Asus "WS" ones) that have the added PCIe hardware to run two video cards at full speed, and a bigger power supply than you think you need right now, in case you want to add a second video card for GPU computing later. (I think we need to start treating the ability to accommodate that upgrade as a requirement for 3D workstations.)

 

I'm trying that now on a box that has only one card (a GTX 760 4GB, which is really no slouch), and running a render job on the card that's driving the monitor makes the computer unable to do anything else at the same time.



Pardon my post, but nothing really needs to be added for current GPGPU applications to reach full speed, as long as you have at least a 1x PCIe lane feeding the card.

 

s1150 (4770K/4670K plus all smaller i7/i5s) and s1155 (3770K/3570K plus all smaller i7/i5s) CPUs support 16 PCIe 3.0 lanes. So, in theory, only one PCIe 3.0 16x card.

 

Bandwidth-wise, that is double what the older PCIe 2.0 protocol supports, so an SLI configuration with a 3770K and 2x PCIe 3.0 cards runs with each card using 8 PCIe 3.0 lanes, but with the same bandwidth as if it were 2x 16x PCIe 2.0.

 

There is no single-GPU card out there that really cares about PCIe 3.0 8x (which = PCIe 2.0 16x) versus PCIe 3.0 16x, as there is no single card that can saturate the available bandwidth of 8 PCIe 3.0 lanes.

The total bandwidth comes into play when we run SLI or CrossFireX with powerful cards at high resolutions in multi-monitor setups, where the exchange of information between the two or more synced cards is continuous and happens both ways simultaneously. In these extreme cases, with 2-3 cards and 3 monitors in surround, etc., you might see a low single-digit advantage for 3.0 16x vs. 3.0 8x (i.e. 3-5%). Nothing significant enough to say "you've reached the limit of the platform".

The PLX controllers added to some boards facilitate exactly that: more card-to-card connections, not CPU/main memory to card(s).

 

In GPGPU work, like progressive rendering, protein folding, crypto mining, etc., the GPUs in the chain work independently of each other.

Each one receives a package to process - in progressive rendering, that's the whole scene, assets, etc. - and fires away until what was asked of it is "done", which might be "raytrace 1,000 rays for up to 16 bounces for each of the pixels in the output image".

So the PLX or a CPU with more lanes wouldn't add anything to it.
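
A minimal Python sketch of that independent dispatch pattern (render_on_gpu here is a hypothetical stand-in for what a CUDA engine like V-Ray RT does internally, not a real API):

from concurrent.futures import ThreadPoolExecutor

def render_on_gpu(gpu_id, scene, pixel_range):
    # 1. The whole scene is uploaded to the card once (the only
    #    PCIe-heavy step).
    # 2. The GPU then traces its assigned pixels with no cross-GPU
    #    communication, so extra PCIe lanes or PLX chips add nothing.
    first, last = pixel_range
    return f"GPU {gpu_id}: traced pixels {first}-{last}"

scene = "whole scene + assets"               # every GPU gets a full copy
chunks = [(0, 499_999), (500_000, 999_999)]  # one pixel range per GPU

with ThreadPoolExecutor() as pool:
    jobs = [pool.submit(render_on_gpu, i, scene, c)
            for i, c in enumerate(chunks)]
    for job in jobs:
        print(job.result())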

 

Well, you might say, isn't the card receiving packages more slowly if it is not "fed" by 16x lanes but by 8x, 4x or 1x?

Yes, it is... but 16x 3.0 is 32GB/s (two-way; 16GB/s one-way), going down to 1GB/s for 1x 3.0 (one-way)...

That's VERY fast... even huge scenes will load in fractions of a second, and most cards will have their memory saturated in less than 3-4 seconds. Note that this is for the initial "startup" of the real-time engine, not for each and every update to a light or a material.
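
To put rough numbers on that, a quick Python check (assuming a hypothetical 3GB scene and ~1GB/s one-way per PCIe 3.0 lane, ignoring protocol overhead):

# Back-of-the-envelope scene upload times per lane width.
GB_PER_S_PER_LANE = 1.0  # approx. one-way PCIe 3.0 bandwidth per lane
scene_gb = 3.0           # assumption: scene + textures filling a 3GB card

for lanes in (16, 8, 4, 1):
    seconds = scene_gb / (GB_PER_S_PER_LANE * lanes)
    print(f"PCIe 3.0 x{lanes:>2}: ~{seconds:.2f}s to upload {scene_gb:.0f}GB")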

 

So the worst-case scenario for running all your GPGPU cards at PCIe 1x is waiting 2-3 more seconds...

The card driving the main display would probably prefer more than a 1x lane, but I doubt it needs 16x or even 8x for 3D CAD work.

 

The issue with single-card GPGPU setups is that the card has lined up millions of operations before you decide to issue an "orbit and pan model" request, so it has to put them on hold one by one as the higher-priority request comes through, clear the pipelines, tell the buffer RAM that work (packages) is coming from a new direction, etc. The system stutters for a few moments while it comes to its senses and resumes viewport tasks over GPGPU ones.

Also, keep in mind that s2011 and nVidia drivers don't get along very well... there has been an issue with cards operating at 2.0 regardless of available lanes, and even when you "hack" the drivers to make them run at 3.0, the results don't surpass a 3770K with 2x cards.


Just forget about GPU rendering, straight up. There is no archviz scene (of a quality worthy of 2013) that will fit even within the Titan's 6GB (or even the upcoming Quadro's 12GB). No other argument is needed; it's simply not used in archviz production outside of basic testing. Even product visualizers can't fit a single high-poly car inside it at marketing quality. There is a SINGLE studio in the whole of Europe (Delta Tracing) fully utilising GPU rendering, and only for extremely limited scenes. Everyone else just plays with it as an enthusiast.

 

You can still buy a Titan, though; unless money is very tight (which seems to be your case), it's hardly a bad investment, but you can live with much cheaper stuff as well (the budget-friendly but nicely powerful 760...). Save that money for a nice 27" IPS monitor instead? And 32GB of RAM. Not 16, if you can possibly avoid it.

 

Since it looks like this will be your only computer for some time, go with the hexa-core (3930K/4930K).

 

The guys above are giving you awesome, in-depth, but too-universal advice. I usually love to read it, but it's getting almost too nerdy now; it doesn't even address your point anymore...

 

 

Anyway, 2 years ago I was in the same shoes as you. I decided to quit school and sold my bicycle to afford a single 2600K. That's more than enough to do fine as a freelancer, even today, if you manage your time well. I wouldn't worry so much; it's not that great a risk. It's a rather easy, nicely broad market today, and the opportunities are endless. There are more important things to worry about than hardware and GPU bullshit; 99 percent of it will be marketing yourself ;-) Trust me.



I just have a simple question, and I don't think I need to start another topic for it:

 

Where would the GTX 770 sit in the list of V-Ray RT results above?

I'm considering 2x 770s, since some come with 4GB whereas the 780 only has 3GB. Would you consider 2x 770s an alternative to a Titan?

Thank you very much.


The 770 should be slightly faster than a 680 (technically it is the same card, with slightly faster clocks).

2x 770s should be ahead of a Titan for GPU rendering. Not a big difference, but ahead.

 

Keep in mind that we are talking about GPGPU progressive rendering alone - like V-Ray RT GPU.

More than one card doesn't add to viewport performance in 3D CAD applications.

 

Lately I've seen some very good offers on the 670 4GB too (sub-$300); a pair of those should also be a great choice for GPGPU (it always was, it just got cheaper after the 7xx cards came out and shops started clearing stock).

 

Remember: this is for GPGPU rendering ONLY. Viewports in 3ds Max, Maya, C4D, AutoCAD, Revit, etc. DON'T CARE about SLI/CFX. Only one card is used for those.


Joel, those tests only cover performance in games - which are usually set up to take advantage of SLI. Generally, 3D authoring apps such as Max don't benefit from SLI.

 

It goes like this:

 

- When you're interacting with the program (in the modeling interface, for example) Max relies heavily on one GPU and one CPU core.

- When rendering, it's a heavy user of all available CPU cores but not the GPU.

- If you run a GPU render (e.g. in iray) it will use all the compatible CPU and GPU resources that you tell it to use. (For example, only nVidia GPUs are usable in iray, while V-Ray RT GPU can use both nVidia and AMD/ATI GPUs; the regular mental ray and V-Ray engines that most people use in production don't use GPUs at all.)

