
Intel i7: Quadro FX 1800 or GeForce series



Hi!

I am very confused about graphics cards. Please suggest:

Quadro FX 1800 or GeForce series (GeForce GTS 250, 1024 MB DDR3)?

I am doing 3D architectural work and V-Ray rendering. Are these the proper cards?

If not, please suggest alternatives. My PC config is:

- 64-bit Intel Core i7 920 @ 2.67 GHz (8 CPUs)

- 4 GB RAM

- 64-bit 3ds Max 8 and 2009 for 3D architectural work with V-Ray

- Many proxies, cars, trees, and plants in my projects


With 3ds Max we find little difference between the cards, but for professional work we believe the FX 1800, or preferably the FX 3800, would provide better long-term value. The thing to consider is that GPU-accelerated renderers are getting closer to market, so a card with the most memory and GPU cores will serve you well. GTX 285 cards can be bought with up to 2 GB DDR3 memory, and that card has 240 GPU cores.


I am still in the initial phase of testing and comparing, but according to benchmarks, and frames per second in 3ds Max wireframe mode, my home card is outpacing the card I am testing at work.

 

My home card is a sub-$200 GTX 260. The card I am testing at work is a Quadro FX 4600; granted, the GTX 260 is newer than the FX 4600 in my computer. But still, I would much rather drop $200 on my GTX 260 than $800 on a 4600.

 

I am having trouble convincing IT of this, though. They are caught up in Dell's marketing speak about how the GTX is a gamer card and not a professional card.

Edited by Crazy Homeless Guy

GPU-accelerated rendering has been the "next big thing" for years and so far has produced nothing but vaporware. Unless you're very excited about spending the money, don't take it into consideration when making a hardware decision. By the time the GPU-accelerated renderers actually exist, the current video cards will no longer be current, and you'll be able to read reviews of released products that might address such questions as "is a Quadro better than a GeForce for GPU rendering?"


I agree with you Travis and Andrew.

 

The Quadro line is the "professional" line of cards from Nvidia; they simply have more refined drivers and somewhat better technical support. From a cost standpoint, the GeForce line is a much better bang for the buck. It is a catch-22, I know, but I would go for the GeForce card. I really don't see the need for a Quadro card, even for CAD or 3ds Max, even though Nvidia's marketing claims the Quadro delivers superior performance for CAD and 3ds Max.

I say that is hogwash; Nvidia is just trying to get more money for a card that is architecturally the same as the GeForce card. The only difference is that the Quadro line comes with more GPU RAM, but that really isn't a big issue these days.

 

Now, with that said, I do think the Fermi cards will be a huge opening, especially when GPU rendering becomes mainstream. I would then spend the money on a Tesla card with 6 GB of RAM on board rather than the GeForce-based Fermi line with less RAM.

 

For the present, spend the money on a regular GeForce card, not the Quadro line. When GPU rendering becomes mainstream, that is when I think the professional line of cards will really be useful. Hope this helps.

Edited by Slinger

I've been working with GPU acceleration for scientific computing for almost four years. I can tell you it boosts our application on a four-card system about 300x over a quad-core processor. For "Avatar", Weta used GPU acceleration: "to optimize artistic iterations on Avatar's huge data sets, we [Weta] moved the bulk of the calculation to a pre-computation step." (See our blog for the reference: "Are You Ready For The GPU Revolution? Part 1" at blog.renderstream.com.)

 

And they just announced that they are going to use MachStudio's GPU-accelerated renderer for their TV commercial pipeline. I think it is now moving beyond vaporware.

 

Anyway, for 3ds Max I think GTX cards are fine; we have never seen a difference in render times. Maya is a different story: there we've seen render times nearly 10% faster using an FX 3800 card than with a far more powerful GTX 285.

 

So, as I mentioned, buy a 2 GB GTX 285 card and you will be in great shape.

Edited by John.RenderStream
Error correction: Octane was not used for the render engine

Avatar wasn't GPU rendered; they used RenderMan. There are some scientific computing applications out there, but there's no GPU renderer you can get to speed up your production, and we're in the same state we've been in for years: companies like nVidia making press releases telling us this is the next big thing, with nothing usable to show for it. I can pay nVidia for a high-end card (and many are, based on as-yet unrealized GPU computing claims), but I can't get the software to run on it, so nVidia can do nothing at all for me that my five-year-old FireGL can't. That's vaporware.

 

BTW, of course you've never seen GPU affect 3DSMax render time. 3DSMax does not have a GPU renderer. The Maya renderer probably ran faster with a Quadro as a side effect of having to load less of the OpenGL subsystem into software.


Everyone seems to be in agreement that the Quadro cards are not all that for Max, but I thought I would post the calculated performance benchmarks here anyway.

 

These are not real-world benchmarks, just testing-app benchmarks. But, as you can see, the GTX 260 kicked the 4600's butt.

 

These were tested in different systems. In theory, the system the 4600 was tested on is the faster overall system, with dual quad-core 3.0 GHz Xeons; but the system the 260 was tested on has an SSD, paired with an i7 and 12 GB of RAM.

 

In terms of physical size these cards are nearly identical.

 

Edit: The 4600 was tested on a winXP 64 system, and the 260 was tested on a Win7 64 system.


Thanks for posting, Travis, but those systems are so different. It would be great to have one system, pop both graphics cards into the same setup, run the benchmarks, and see what happens.

 

I still agree the Quadro line is not worth the price premium at all.


> Thanks for posting, Travis, but those systems are so different. It would be great to have one system, pop both graphics cards into the same setup, run the benchmarks, and see what happens.
>
> I still agree the Quadro line is not worth the price premium at all.

 

Different, yes, but it should be noted that the CPU in the dual quad Xeon system with the 4600 posts speeds nearly double those of my quad-core, hyperthreaded i7 920. The i7 cannot touch the dual quad 3 GHz Xeons in performance. It can cut the advantage with extreme overclocking, but most of the numbers I have seen show that closing only 25% of the gap.
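The arithmetic behind that claim can be sketched quickly. Note that the "nearly double" Xeon figure and the "25% of the gap" overclocking figure are the poster's rough estimates, normalized here to the stock i7 920:

```python
# Rough arithmetic behind the claim above, normalizing the stock i7 920's
# CPU throughput to 1.0. The "nearly double" and "25% of the gap" numbers
# are the poster's estimates, not measured benchmarks.
stock_i7 = 1.0
dual_xeon = 2.0                         # dual quad-core 3.0 GHz Xeons: ~2x
gap = dual_xeon - stock_i7              # 1.0
overclocked_i7 = stock_i7 + 0.25 * gap  # extreme OC closes ~25% of the gap
print(overclocked_i7)                   # 1.25 -- still well short of 2.0
```

So even a heavily overclocked i7 would, by these estimates, land around 1.25x, nowhere near the Xeon pair.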


CGArchitect is currently doing an exhaustive graphics card evaluation and we should see results soon.

 

As for GPU acceleration and "Avatar", I urge you to read our blog:

"Are You Ready For The GPU Revolution? Part 1"

 

In the conclusion is a link that discusses what Weta did, but here is an excerpt from our conclusion:

 

"...in the collaboration between nVidia and Weta Digital on the production of Avatar, which is discussed in this article: [go to link]. As stated by Weta's head of R&D, the final beauty pass was still rendered with RenderMan; however, 'to optimize artistic iterations on Avatar's huge data sets, we [Weta] moved the bulk of the calculation to a pre-computation step. The issues we needed to solve weren't as much about rendering as they were about high-performance computing, and we realized that using the massively parallel power of a GPU to solve problems is NVIDIA's expertise,' says Sebastian Sylwan, Weta's head of research and development..."

 

We do have a ways to go to a fully accelerated render engine, but by my last count there are eight companies demonstrating GPU-accelerated engines. We think these are all heterogeneous systems, using the CPU and the GPU each to do what it does better than the other, but that should be no surprise. What differs now is that the programming tools are becoming available, and there are a lot of smart people working on the problem. While I think mature answers are 18 months out, Octane and companies like them are beginning to make me rethink that. Stay tuned, and unless you need the memory, don't buy your 6 GB Fermi C2070 just yet.

Edited by John.RenderStream

Exactly. They're not on the market and will not be on the market soon. Look at how quickly GPU development goes, and by the time GPU renderers hit the market the high-end GPU you buy now in anticipation of the new technology will be one, two or three generations old. Further, it is impossible to predict right now how good the GPU renderers will be, how much power they'll need, how appropriate the current cards will be, or how much the technology will cost.

 

So, consider the GPU renderers to be vaporware for the time being and do not buy a video card expecting to use it for them. There is a long history of customers being burned by making purchasing decisions in anticipation of vaporware (call it "irrational exuberance") and the point I'm making is that it would be foolish not to learn from those mistakes. Ask anybody who bought a video card to use with Duke Nukem Forever in 1997 and you'll get the idea.


If those are your two choices, and you are only running 3ds Max, then I think you are okay with the GTX 260. If you go with the GTX 260, make sure it is one of the newer ones with the Core 216, because it is faster than the older version. Otherwise, if your budget allows, I suggest the GTX 275, which is significantly faster, or the GTX 285, which is faster still and has more memory. I added a reference table from our blog.

 

Good luck

John

 

Reference Table

Video Card        | Memory (GB)             | Single Precision (GFLOP/s) | Double Precision (GFLOP/s) | Bandwidth (GB/s)
GTX 260 Core 216  | 0.896                   | 805                        | 67                         | 112
GTX 275           | 0.896                   | 1011                       | 84 (est.)                  | 127
GTX 280           | 1.024                   | 933                        | 78                         | 142
GTX 285           | 1.024 (2.048 on some)   | 1062                       | 89                         | 159
GTX 295           | 1.792 (2 x 0.896)       | 1789                       | 149                        | 224
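As a sanity check on the single-precision column: for GT200-class GeForce cards, the theoretical peak is shader cores x shader clock x 3 FLOPs per cycle (a MAD plus a dual-issued MUL). A small sketch; the shader-clock values below are Nvidia's reference clocks as I recall them, not figures from the table:

```python
# Sketch: where the single-precision GFLOP/s figures above come from.
# GT200-class cards can in theory issue a MAD (2 FLOPs) plus a MUL (1 FLOP)
# per shader core per shader-clock cycle, so:
#   GFLOP/s = cores * shader_clock_GHz * 3
# Shader clocks below are assumed reference values, not from the table.

cards = {
    # name: (shader cores, shader clock in GHz)
    "GTX 260 Core 216": (216, 1.242),
    "GTX 275": (240, 1.404),
    "GTX 280": (240, 1.296),
    "GTX 285": (240, 1.476),
    "GTX 295": (480, 1.242),  # dual-GPU board: 2 x 240 cores
}

def peak_sp_gflops(cores: int, clock_ghz: float) -> float:
    """Theoretical peak single-precision GFLOP/s (MAD + MUL dual issue)."""
    return cores * clock_ghz * 3

for name, (cores, clock) in cards.items():
    print(f"{name}: {peak_sp_gflops(cores, clock):.0f} GFLOP/s")
```

Run it and the results line up with the table to within a GFLOP/s of rounding, which is reassuring about where those numbers came from.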

Edited by John.RenderStream
Table needed correction

Enh. Unless you have some special, huge-poly-count needs, I'd cheap out on this part: go with a $100-200 card instead of a $300-400 card, and save the extra money in case anything else comes up that you need. Honestly, most of these cards are more powerful than you could need for the current software product cycle (and probably the next one), and this board is full of people who are perfectly happy with cards that currently cost $100, such as a 9800 GT or an 8800 GTS.


Thank you so much, John.

Please check my system config, attached.

OK, finally I am going with the GTX series. But if you don't mind, please answer one more question.

I think you know I am doing 3D architectural work and V-Ray rendering in 3ds Max, with many proxies, trees, cars, etc. So my question is: is a GTX card proper for all of that?


For urban environments with crowds and heavy traffic, you may want more memory. But if you are planning more sparse settings, which are more typical, then any of these cards will work.

 

One thing not discussed is the power supply unit. The GTX cards take two 6-pin supplementary power connectors and the GTS only one; can your current system handle the extra connector? Also, the PSU should be 500 W or larger, whereas the GTS 250 needs 450 W or larger. We tend to use 650 W and larger, but that allows for expansion.

 

Also, the GTX cards are longer: the GTS 250 is 228.6 mm and the GTX 260 is 267 mm. So check whether the GTX will fit in your enclosure.
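The three checks above (PSU wattage, 6-pin connectors, card length) can be sketched as one quick helper. The card figures come from this post; the example system values are hypothetical placeholders, so substitute your own PSU rating and case clearance:

```python
# Sketch of the compatibility checks above: PSU wattage, available 6-pin
# connectors, and card length vs. case clearance. Card requirements are
# from this post; the example system values are hypothetical.

def card_fits(psu_watts, psu_6pin_connectors, case_clearance_mm,
              min_psu_watts, needed_6pin, card_length_mm):
    """Return True if the system meets all three requirements for the card."""
    return (psu_watts >= min_psu_watts
            and psu_6pin_connectors >= needed_6pin
            and case_clearance_mm >= card_length_mm)

# GTS 250: 450 W minimum, one 6-pin connector, 228.6 mm long
# GTX 260: 500 W minimum, two 6-pin connectors, 267 mm long
print(card_fits(500, 2, 280, 500, 2, 267.0))  # hypothetical 500 W / 280 mm case: True
print(card_fits(450, 1, 280, 500, 2, 267.0))  # same case, 450 W PSU, one 6-pin: False
```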

 

Good luck, I've enjoyed all of the excellent posts.


[4 weeks later...]

Just wondering, why are you guys only mentioning Nvidia cards? If you're using OpenGL in any way, shape or form, I would stay far away from ATI cards; but since Max lost its OpenGL roots a while ago in favor of DX, I'd think ATI might have some serious advantages here.

I mean, for about 70 USD more than a GTX 260 Core 216, you could buy a 5850 that will literally run circles around pretty much any Nvidia card in DX.

 

The only other thing I might consider is whether I have anything dependent on CUDA; otherwise, the AMD field seems much greener these days.

 

Just a thought


> I mean if you're using OpenGL in any way shape or form I would stay far away from any ATI cards

 

No way - FireGL/FirePro cards are great for OpenGL. My FireGL v5200 beats current gamer cards in some OpenGL areas, and for OpenGL in Revit an ATI card is the only way to go.


Quadros aren't as bad as GeForces, and a lot of people don't encounter the nVidia bugs because OpenGL is turned off by default (probably because of the nVidia bugs). But if you were going to buy any machines specifically for Revit use, what you want (ranked preferentially) is:

-The current FirePro V5xxx series

-The current FirePro V3xxx series

-A higher end Radeon

-A Quadro

-And distantly behind those, a Geforce. (Don't turn on OpenGL.)


> Just wondering, why are you guys only mentioning Nvidia cards? If you're using OpenGL in any way, shape or form, I would stay far away from ATI cards; but since Max lost its OpenGL roots a while ago in favor of DX, I'd think ATI might have some serious advantages here.
>
> I mean, for about 70 USD more than a GTX 260 Core 216, you could buy a 5850 that will literally run circles around pretty much any Nvidia card in DX.
>
> The only other thing I might consider is whether I have anything dependent on CUDA; otherwise, the AMD field seems much greener these days.
>
> Just a thought

 

Personally, we are agnostic and build systems with either Nvidia or ATI; however, 99% of our business is Nvidia.

 

In part this is because, these days, we are very much tied to Nvidia's CUDA and the PhysX engine. For instance, viewport functions in CS4 are a big thing for our customers, and they are accelerated using CUDA.

 

But if someone wants ATI we indeed like the 5850.

 

Best regards,

John


CUDA in Photoshop? That is so overrated and unnecessary anyway. Stick with OpenGL, and don't encourage nVidia's boneheaded attempts to split the market with useless proprietary technology.

 

Of all the things that need to be accelerated in Photoshop, viewport pan and zoom is near the bottom of the list.

