
When will we see an 8-core chip?



Aleksey - 4 cores are better than 1 for several reasons. For starters, four cores can execute more instructions in parallel than a single core can, which makes the chip far more efficient. Also, higher clock speeds draw more power, and more power means more heat: a 12GHz processor would run very hot and require a lot of power. While it may seem that 4 "slower" cores wouldn't measure up to 1 super-fast core, in practice it's the more efficient, and indeed faster, way to get work done. Multi-core processors are also much cheaper to manufacture than a super-fast single core would be.
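To put rough numbers on that trade-off, here is a minimal Amdahl's-law sketch. The 5% serial fraction is an illustrative assumption, not a measurement of any real workload:

```python
# Rough Amdahl's-law sketch of why 4 slower cores can rival 1 fast core.
# The serial fraction below is an illustrative assumption.

def speedup(cores, serial_fraction):
    """Amdahl's law: speedup over a single core at the same clock."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

serial_fraction = 0.05  # assume 5% of the work cannot be parallelized

# A hypothetical 12 GHz single core is 4x faster at everything...
single_core_12ghz = 4.0
# ...but 4 x 3 GHz cores get close on parallel-friendly work,
# at far lower clock speed, power draw, and heat.
quad_core_3ghz = speedup(4, serial_fraction)

print(f"12 GHz single core: {single_core_12ghz:.2f}x")
print(f"4 x 3 GHz cores:    {quad_core_3ghz:.2f}x")  # ~3.48x
```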

As far as boot speeds go, as mentioned, that's up to your hard drive. All of your system resources are loaded from the hard drive at boot-up; your CPU has little to do with it. If you want to boost your boot-up speeds, check out some SSDs. SSDs are solid-state drives (no moving parts) with much higher throughput than any mechanical hard disk drive.



Now that's an expensive idea. Still, it looks like the prices on those are set to drop as a few more companies enter the market, so an SSD in a desktop might be a workable option for a lot of people in a couple of years.

 

Keep in mind that today's SSDs are faster than magnetic hard drives at seeking but slower at sequential read/write. Using current technology, an SSD will improve boot speed, because of the sheer number of small files that need to be read, but won't improve performance for large-file handling. (But anyway, what's the big deal with boot speed? Who cares if a desktop takes an extra minute to boot? That happens maybe once a day.)
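If you want to see that seek-versus-sequential gap on your own drive, here is a rough sketch. It is unscientific (OS caching will skew the numbers), and the file path, file size, and block size are placeholders:

```python
# Crude comparison of sequential vs. random reads on a drive.
# Place the test file on the drive you want to measure.
import os
import random
import time

PATH = "testfile.bin"        # placeholder path on the drive under test
SIZE = 64 * 1024 * 1024      # 64 MB test file
BLOCK = 4096                 # 4 KB reads, like the small files read at boot

if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))

with open(PATH, "rb") as f:  # sequential pass in 1 MB chunks
    t0 = time.time()
    while f.read(1024 * 1024):
        pass
    seq = time.time() - t0

with open(PATH, "rb") as f:  # 2,000 random 4 KB reads
    t0 = time.time()
    for _ in range(2000):
        f.seek(random.randrange(0, SIZE - BLOCK))
        f.read(BLOCK)
    rnd = time.time() - t0

print(f"sequential: {SIZE / 2**20 / seq:.1f} MB/s")
print(f"random 4K:  {2000 * BLOCK / 2**20 / rnd:.1f} MB/s")
```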


  • 1 month later...
That's what I've got now, and yes, Max and V-Ray use all 8 cores.

 

 

Yes, but it does not utilize them fully. If I remember correctly, there shouldn't be much of a difference between a dual quad 2.4 GHz and a dual quad 3.0 GHz because of certain bandwidth limitations.

 

I was working on a Mac Pro (or whatever its name is) when the 8-core version came out. It wasn't a whole lot faster than an overclocked quad core. The speedup is somewhere along the lines of 150%: you actually get 8 buckets, but it's not twice as fast as rendering with 4 buckets.

 

I know that a certain percentage of the scaling gets lost, but it's in the 90s with duals and quads; with dual quads it's less.
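For concreteness, "in the 90s" and the 150%-of-a-quad figure translate into parallel efficiency (speedup divided by core count) like this. All numbers are the rough figures from this thread, not benchmarks:

```python
# Putting rough numbers on the scaling losses described above.

def efficiency(speedup, cores):
    """Parallel efficiency as a percentage: measured speedup / cores."""
    return 100.0 * speedup / cores

print(f"dual,      1.9x: {efficiency(1.9, 2):.0f}%")  # 'in the 90s'
print(f"quad,      3.7x: {efficiency(3.7, 4):.0f}%")  # 'in the 90s'

# '150% of a quad' for 8 cores: 1.5 * 3.7x is roughly 5.6x over one core.
print(f"dual quad, 5.6x: {efficiency(5.6, 8):.0f}%")  # ~70%, i.e. 'less'
```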


Yes, but it does not utilize them fully. If I remember correctly, there shouldn't be much of a difference between a dual quad 2.4 GHz and a dual quad 3.0 GHz because of certain bandwidth limitations.

 

I was working on a Mac Pro (or whatever its name is) when the 8-core version came out. It wasn't a whole lot faster than an overclocked quad core. The speedup is somewhere along the lines of 150%: you actually get 8 buckets, but it's not twice as fast as rendering with 4 buckets.

 

I know that a certain percentage of the scaling gets lost, but it's in the 90s with duals and quads; with dual quads it's less.

 

I'm sure all bandwidth issues will be taken into account on the new X58 platform.

Or maybe your 8-core system's rendering setup wasn't properly configured; when that happens, even a 4-core won't be fully utilized.


The speedup is somewhere along the lines of 150%: you actually get 8 buckets, but it's not twice as fast as rendering with 4 buckets.

 

I know that a certain percentage of the scaling gets lost, but it's in the 90s with duals and quads; with dual quads it's less.

 

Your computer is broken; my dual-processor quad-core system is twice as fast as my dual-processor dual-core, which is twice as fast as my dual-processor single-core system.


I am SORRY -- But I just have to laugh!!!

 

I remember when I was running a 192-core RenderDrive setup... Now that was AMAZING!!!

 

4,000 frames a night...

 

One thing I realized, after having infinite power for a while... it all comes back to talent, polygons, and design material. At some point, your Nvidia graphics card will do it ALL -- in real time. At that point -- it gets back to ideas...

 

8 cores, how cute!


192 cores... that's all?

Go render on this:

 

'Ranger'

 

Operating System: Linux

Number of Nodes: 3,936

Number of Processing Cores: 62,976

Total Memory: 123 TB

Peak Performance: 504 TFLOPS

Total Disk: 1.73 PB (shared), 31.4 TB (local)
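For scale, the headline numbers above divide out evenly per node. A quick check, assuming the quoted figures are exact:

```python
# Per-node layout implied by the Ranger specs quoted above.
nodes = 3936
cores = 62976
memory_tb = 123

print(cores / nodes)             # 16.0 cores per node
print(memory_tb * 1024 / nodes)  # 32.0 GB of memory per node
```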


  • 2 weeks later...

I guess my question is: how many cores do you need in a workstation with today's software? Can anything above two realistically be used? What kind of model and software would actually require 8 cores?

 

That line of thinking leads to a much larger philosophical question. With the ever-increasing hardware that's readily available today, what is the software doing, on a macro basis, to use the power? I guess the BIM market is the only segment where real horsepower is required, outside of massive models with billions of polygons.
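For what "using the power" has to look like on the software side, here is a minimal sketch: split the frame into independent buckets and hand them to one worker per core. The image size, bucket size, and the shade() body are illustrative placeholders, not any particular renderer's internals:

```python
# Minimal sketch of how a renderer keeps many cores busy: divide the
# image into buckets and farm them out to a process pool.
import multiprocessing as mp

WIDTH, HEIGHT, BUCKET = 1920, 1080, 64  # placeholder frame/bucket sizes

def shade(bucket):
    """Placeholder for real ray-tracing work on one 64x64 bucket."""
    x0, y0 = bucket
    return bucket, sum((x0 + dx) * (y0 + dy)
                       for dx in range(BUCKET) for dy in range(BUCKET))

if __name__ == "__main__":
    buckets = [(x, y) for x in range(0, WIDTH, BUCKET)
                      for y in range(0, HEIGHT, BUCKET)]
    # One worker per core: software structured this way scales with cores.
    with mp.Pool(mp.cpu_count()) as pool:
        for bucket, value in pool.imap_unordered(shade, buckets):
            pass  # a real renderer would write the bucket into the frame
```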


  • 3 weeks later...
  • 2 months later...

Larrabee?

 

http://www.intel.com/pressroom/archive/releases/20080804fact.htm

 

http://news.cnet.com/8301-13512_3-10006184-23.html

 

"a broad potential range of highly parallel applications including scientific and engineering software will benefit from the Larrabee native C/C++ programming model." ...

 

"The Larrabee architecture fully supports IEEE standards for single and double precision floating-point arithmetic. Support for these standards is a pre-requisite for many types of tasks including financial applications. " ...

 

"The paper will be available at this Web site: http://doi.acm.org/10.1145/1360612.1360617."

 

It all depends on what the meaning of "Many" is...

 

It will be interesting, because there is the 'work/creation/process' part of the equation and then the rendering function. How long would it take the V-Ray (et al.) guys to port to the C++ library functions? 15 minutes? 45 minutes? It can't be that long...


It all comes back to talent, polygons, and design material. At some point, your Nvidia graphics card will do it ALL -- in real time. At that point -- it gets back to ideas...

 

... that's the real question... when will such a thing get here? [GPU rendering, that is ;)]


GPU rendering isn't suited for the kind of work we do; from what I understand, it's difficult for the people creating the engines to incorporate the power of the graphics card into a rendering application. I do think someone will eventually figure it out, and we'll see a massive reduction in render times. I hope it comes soon.


Whether it is Nvidia, Intel, a revised version of RenderDrive, cloud computing, or some other platform has yet to be determined; however, the embedded support for 64-bit and double-precision calculations is a game changer. Previously, all of the rendering supported on GPUs was single-precision, so it's true: they were not great for high-end rendering. Double precision is why the x86 platform (i386 et al.) and its descendants are used for rendering. They calculate, and calculate well...
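A tiny illustration of the single- versus double-precision gap being described: float32 carries about 7 significant digits, so past 2^24 it cannot even represent every whole number, while float64 carries roughly 15-16 digits:

```python
# Single vs. double precision: float32 runs out of integer precision
# at 2**24 = 16,777,216; float64 does not.
import numpy as np

print(np.float32(16_777_216) + np.float32(1))  # 16777216.0 - the +1 is lost
print(np.float64(16_777_216) + np.float64(1))  # 16777217.0 - double keeps it
```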


  • 3 weeks later...

Just remember, the hardware side of things is an "eco-system". The big boys in research, finance, Forbes 1000 reporting, databases, and other fields support the creation of "BIG-IRON".

 

Day one, the 6-ways will be expensive. That is just the way it goes. However, the meek shall inherit the earth: the current multi-core CPUs, 2 or 4 cores, were cutting edge just a couple of years ago.

 

Here is the thing about the hardware: by August 2010, give or take, Intel will have a 16-way CPU with full-fledged functionality. By 2012 or so, they should have a native 32-way CPU. That will all trickle down to the desktop modelling, rendering, and production desk to create cinema-quality games, renderings, and off-the-shelf imaging.


  • 8 months later...

Without some significant changes in the way software is developed and used, these 16-, 32-, 64-, and 128-core hardware platforms will remain largely irrelevant and impractical.

 

I'd like to see mental images/Autodesk or the other rendering technology developers add functionality that would let a single system spawn multiple, simultaneous rendering jobs.

 

Two 8-core rendering jobs on a 16-core machine are going to be much more efficient than a single 16-core job/process, because the rendering software just doesn't scale well past about 8 cores.
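A back-of-the-envelope model of that claim; the efficiency figures (good scaling up to 8 cores, poor beyond) are illustrative assumptions, not measurements of any particular renderer:

```python
# Throughput model: two well-scaling 8-core jobs vs. one poorly-scaling
# 16-core job. Efficiency values are illustrative assumptions.

def job_speed(cores, efficiency):
    """Effective throughput of one rendering job, in single-core units."""
    return cores * efficiency

one_big = job_speed(16, 0.60)        # one 16-core job: 9.6x a single core
two_small = 2 * job_speed(8, 0.90)   # two 8-core jobs: 14.4x combined

print(f"one 16-core job: {one_big:.1f}x")
print(f"two 8-core jobs: {two_small:.1f}x")
```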

 

Hybrid approaches such as Larrabee and Caustic Graphics' CausticRT may be game changers, but again, software developers will need to change how they have been doing things...

