
Discrete Workstation - AMD or INTEL



Hi, I'm a new member of the forum, but I've been doing some research in the posts over the past few days. I'm looking to build a new PC (a discrete workstation) to work with SketchUp + V-Ray, Rhino with Grasshopper, AutoCAD, and in the future Revit and 3ds Max. I'm a recently graduated architect, and I have budget limitations (plus here in Brazil the taxes on imported computer parts are astronomical).

 

I'm working on an old Athlon II X4 640 machine with 4GB RAM and an old Radeon 6650 1GB.

 

With my budget, I can go in one of two directions:

 

AMD with better VGA

 

FX-8120 @ 4.5GHz

Gigabyte 990FXA-UD3

8GB DDR3 1866 (or 16GB)

GTX 670 2GB

 

INTEL with faster processor

 

i7-4770K or i7-3770K

~US$150 motherboard (Gigabyte or ASUS)

8GB DDR3 1866

GTX 650 Ti 1GB 128-bit OR Quadro 600 1GB GDDR3 (is this Quadro model sufficient for architecture CAD applications and 3D modeling? Is it better than the GTX?)

 

My questions are:

 

- Is the AMD config sufficient to use V-Ray RT?

- Is the speed difference between the processors dramatic?

 

Please, feel free to change components!

 

Thanks!


  • 2 weeks later...

The speed difference between the two GTX cards is substantial for V-Ray RT GPU, but both are underwhelming in viewport performance, and there you probably won't see big differences. I doubt either GTX card will be really faster than your current card outside of games (and of course V-Ray RT GPU won't work with the AMD card).

 

The cheaper Quadro 600 probably surpasses both of them with ease in the viewport, but it will be horrible in CUDA/OpenCL compute. Thanks to its drivers, the Quadro's viewport advantage will be the only "dramatic" difference. Still, nothing new here.

 

The 8120 is oldish. The 8320 is a tad faster and consumes a tad less power, so I don't know why you would go with the 8120 unless its price is much lower. The Gigabyte you've listed is a "cookie cutter" choice for AM3+.

 

The 3770K, and even more so the 4770K, is considerably faster than the 83xx series in single-threaded applications, and a tad faster in rendering (on average; in some cases it's a toss-up, or an overclocked 83xx might be better). Most applications are still single-threaded, and there the performance difference is notable.

 

I work on an 8350 / 16GB machine daily now, and I have to say my home 3930K looks better and better every day. The 8350 is by no means slow, but it is slower. Sure, an overclocked 8350 would be a tad better, but my home PC is already clocked @ 4.8GHz, which is hard for any AMD Vishera to catch up to. Add the 5-10% IPC increase per generation between Sandy Bridge (my chip), Ivy Bridge, and Haswell, and I would imagine that a 4770K @ 4.4-4.5GHz would be simply untouchable by anything less than a 5+ GHz 2700K, i.e. cherry-picked chips with extreme cooling etc.
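To put rough numbers on that clock-vs-IPC hand-waving, here is a back-of-the-envelope sketch. The IPC factors are my own assumptions for illustration (Sandy Bridge as the 1.00 baseline), not measurements:

```python
# Crude single-thread comparison: perf ~ clock * relative IPC.
# IPC factors below are assumed ballpark figures, not benchmark data.
chips = {
    "3930K @ 4.8GHz (Sandy Bridge)": (4.8, 1.00),
    "4770K @ 4.4GHz (Haswell)":      (4.4, 1.12),  # assumed ~12% IPC gain over Sandy
    "FX-8350 @ 5.0GHz (Vishera)":    (5.0, 0.75),  # assumed ~25% lower IPC per core
}

def relative_perf(clock_ghz, ipc_factor):
    """Rough single-thread score: clock multiplied by an assumed IPC factor."""
    return clock_ghz * ipc_factor

for name, (clock, ipc) in chips.items():
    print(f"{name}: {relative_perf(clock, ipc):.2f}")
```

Under those assumptions a 4.4GHz Haswell already edges out a 4.8GHz Sandy Bridge, and even a 5GHz Vishera lands well behind both, which is the point I was making above.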

 

Unless you are really pressed by your budget, I would recommend sticking with intel for the CPU. An i5 will work great for Revit-heavy workloads and other CPU-intensive tasks where you won't be rendering locally a lot and the 8350's rendering advantage won't show often, but the i7s leave no room for an 8350 outside of, perhaps, affordable render nodes.

 

For gaming it would be a no-brainer: the better GPU in the AMD build would propel it ahead 95% of the time, and saving a bit on the CPU to get a better GPU is always a safe bet. The new 760 is also a great deal, almost matching 670 performance while being 25-30% cheaper or so.

 

That from an AMD supporter.


Hi Dimitris,

 

You say that the FX 8350 is slower than the i7 3930K / i7 4770K / i7 3770K in single-threaded apps (which I believe, given the design of the FX 8350), but is it possible to say how much slower, in percentages (at stock speed, without any overclocking)?

 

And in which cases do you notice the FX 8350 being slower? Can you please give specifics? Thank you for your time.


I've kept the charts/figures below from Anandtech. There are some variations across reviews, but the big-picture ranking is identical: the 8350, despite being a good leap forward over the 8150, cannot hold its own in single-threaded work against anything from the Sandy Bridge architecture (i7-2700K) onward. It is a great chip for the price as the heart of a rendering node, but if you are after good all-around performance for a CG workstation, most intel CPUs will serve you better.

 

[Anandtech benchmark charts: Cinebench 11.5 intel vs. AMD, plus additional single- and multi-threaded results]


  • 3 weeks later...

Thank you.

 

But the graphs you posted are synthetic benchmarks. Sure, they give a good indication, but hands-on experience is what I am interested in.

 

Because you stated earlier that you have an 8350 at work and an i7 at home, I was curious just how much slower the 8350 is in the real world.

 

Could you please explain in what ways you noticed differences between the 8350 and the i7 in software, for example 3ds Max?


Cinebench is hardly a synthetic benchmark. It is a multithreaded rendering test based on a real rendering engine by Maxon (the makers of Cinema 4D), and one of the very few benchmarks where the 8350 shows its strengths.

 

Unfortunately, where I work I don't use the 8350 for rendering, but for single-threaded-heavy tasks (large Revit models and SketchUp).

It is a stock-clocked 8350, of course, and it is notably slower than either an i5 or an i7 in those tasks, as you rarely see more than 1 core (or 12.5% of the total CPU power) utilized. And that is at stock speeds. My 4.7GHz 3930K is simply in a different league; there is little point comparing the two directly.

 

Yes, the budget is different, but for single-threaded work, using a 4.7GHz 3930K or a 4.7GHz i5-2500K makes almost no difference: performance will be very close, and both are well above the 8350.

 

I want to be clear: I'm not saying the 8350 (and similarly the 8320) is slow, especially if you overclock it as you show in your sample build, where the 8320 will clock almost as high and save some extra cash. It is a pretty fast CPU, and the difference over your current system will be notable.

 

It's just that AMD never made the insane leap forward that the Sandy Bridge architecture did (which was something like 45-50% faster than intel's previous generation of Core CPUs), so it is comparatively slower.

 

Slower doesn't mean slow, but it is notably slower than intel per clock, per core.

AMD's answer to that "deficiency" was "more cores", but unfortunately the software industry has not caught up yet: most applications other than rendering engines, math and compute applications, video transcoders, and a few new games don't really care about more than one core, much less more than 4. For those apps that do care, the FX CPUs do pretty well.
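The "more cores vs. faster cores" trade-off above is really just Amdahl's law: only the parallel fraction of your workload scales with core count. A quick sketch (the 20% / 95% parallel fractions are illustrative guesses for CAD work vs. rendering, not measurements):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work scales with cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Mostly single-threaded CAD/BIM work (assume ~20% parallel): 8 cores barely help.
print(amdahl_speedup(0.20, 8))   # ~1.21x
# Rendering (assume ~95% parallel): 8 cores pay off handsomely.
print(amdahl_speedup(0.95, 8))   # ~5.93x
```

That is why the FX-8350 makes a fine render node but a mediocre Revit/SketchUp machine: the serial part of the work dominates, and there per-core speed is all that matters.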

 

So, depending on how multi-thread-heavy your workflow is, the 8320/8350 might be as good as or better than a similarly priced intel build - which means an i5 quad - but as soon as i7s are within your reach, there is no contest: intel is faster.

 

If it were for gaming, I would say even an FX-4300 or 6300 is fine; just go for the best GPU, as that's what matters most (and again, AMD Radeons offer the best price/performance with a few exceptions). If you were building the best price/performance rendering node, again the FX-8350 scores pretty high. But for a CG/DCC workstation with today's software, which requires both single-threaded and multi-threaded performance, intel is a much safer bet.

 

EDIT: with all the new gaming consoles being 8-core AMD x86 chips, there is high hope that developers will be forced to gradually adapt their code and embrace multiple threads. It won't be an immediate change, though. It might take years, as the new specs are monstrous compared to the 7+ year old PS3/X360 in both single-core performance and GPU power, while gaming resolutions are still restricted to 1080p.

 

It will take time for them to really need to tap into multithreaded power and build engines that can seamlessly be processed in parallel across multiple cores. Then this knowledge and these methods will take time to permeate other programs' development practices.
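For what it's worth, the reason renderers parallelize so easily is that their work splits into independent pieces. A toy sketch of that pattern (the `shade_tile` function is a made-up stand-in for real shading work):

```python
from multiprocessing import Pool

def shade_tile(tile_index):
    """Stand-in for rendering one image tile: pure CPU work, no shared state."""
    return sum(i * i for i in range(50_000))

if __name__ == "__main__":
    tiles = range(16)
    # Tiles are independent, so a process pool can spread them over all cores.
    # This independence is exactly what most DCC application code lacks.
    with Pool() as pool:
        results = pool.map(shade_tile, tiles)
    print(len(results))
```

Code that is tangled with shared state (a live Revit model, an undo stack) can't be split up this cleanly, which is why those apps stay stuck on one core.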

 

It is also not clear whether that will happen at all, as we are already seeing a push towards OpenCL and offloading compute tasks to GPUs: chips that feature massive parallelism with hundreds or thousands of cores, though specialized for specific tasks, unlike the x86 architecture, which has grown pretty complex in many ways. Where the balance will settle, and when, is something we little users cannot possibly know. Time will tell.

 

Edited by dtolios
