
Athlons in Production



I'm looking to upgrade my system in the next few weeks and am seriously considering switching over to a dual Athlon system. Has anybody else had any experience with these in production? Any suggestions as far as reliable components or stuff to shy away from? Any stability problems?

 

Thanks,

Jeff


Hi Jeff,

 

I want to build the same system (x2 Athlon MP). I was doing some research and they are supposed to be much faster than the P4. As far as stability, a professor in college said he preferred Athlons over Intel because of speed, and that stability was not an issue (he was running Maya 4 on Win2000). Also, from what I have read, Athlons and 3D Max are supposed to be buddies. So I can't wait until I can afford to build one. I hope this gives you some confidence. Let me know what happens. I'll let you be my guinea pig on this one.


Yes Jeff, great minds think alike!

 

I am putting in an order for a dual AMD 2000+ MP system this week also. The big hold-up right now is trying to get an Elsa Synergy 4 with the new Quadro4. They were supposed to ship last week; hopefully the cards arrive this week.

 

Here is a comprehensive review of dual AMD MP systems vs. P4s, Xeons, etc.

 

http://www.aceshardware.com/read.jsp?id=45000255

 

They ran the systems through many CAD and DCC applications. The dual AMDs stole the show. I'll let you know how things run once I get my system in. Best wishes on your setup! [Are you treating yourself to an LCD monitor?]

 

~Paul


Hi Guys,

 

I've had a dual Athlon 1800+ for a couple of months now, and it's running fine so far. I can't compare it against a P4, but it works well in Lightscape and VIZ.

 

I just got a Quadro4 128 MB card and it flies! No highlight problems in Lightscape either (but it still crashes with the "AutoOrient" command).

 

I use the following driver from Nvidia:

6.13.10.2832

 

In LVS, I had a problem with the block previews (they wouldn't display), but it got resolved by changing the Buffer Flipping Mode to "Use Block Transfer" in the OGL properties of the card driver.



Paul Griger wrote:

Yes Jeff, great minds think alike!

[Are you treating yourself to a LCD monitor?]

 

Yeah, I wish! I want dual 19" displays, so I'm afraid two LCDs are a bit out of my price range right now. As it is, this system is looking to be just over $6,000 Canadian.

 

Pierre-Felix wrote:

I just got a Quadro4 128 MB card and it flies! No highlight problems in Lightscape either (but it still crashes with the "AutoOrient" command).

 

What brand did you end up getting, Pierre-Felix? I have noticed the same crash with my ELSA Gloria II. Doesn't happen at work, so it must be a driver thing.

 

PIXL8TED wrote: 3 gigs of RAM

 

I thought that W2K could only use 2 GB, though. Is this different in XP? Can you use more than two gigs on anything other than Linux?


Brian,

 

While I thought the AMD left the P4 and the Xeon behind in all tests in the linked article, I saw your concern about the observation that radiosity solutions are apparently calculated more slowly on a dual AMD MP system than on an Intel chip with SSE2 instructions.

 

Keen eye; I never caught that paragraph before.

 

“The first tests with radiosity, another function which has been optimized for SSE-2, shows similar boosts (39-40%). The SKULL_HEAD_NEWEST benchmark is Newtek's showcase for SSE-2 optimization: we found the Dual Athlon MP 1800+ to be almost 30% slower than the Dual Xeon 1.7 GHz in this test."

 

You wisely pointed out that a drop in radiosity performance would be of great interest to us, and I agree that this can put a damper on things for those of us contemplating a dual AMD setup.

 

I wonder how the trade-off works out between the AMD chip's excellent times on raytraced solutions and its drop in radiosity solutions. Two things make me stick with the AMD setup:

 

1. A radiosity solution should only need to be calculated once per model. Once that is done, doing multiple renderings of the same model incorporating raytracing makes the raytrace speed of a workstation the more important factor than the radiosity calculation (see the rough arithmetic after this list).

2. Bottom line, price

a. A Xeon system spec'd the same way as the AMD system I have bid out would cost me another $700. I could get an entry-level 17" LCD screen for that money!
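To put rough numbers on point 1, here is a minimal Python sketch. The timings are invented purely for illustration, not taken from any benchmark in this thread:

```python
# Rough arithmetic behind point 1 above. All timings are made-up
# illustrative minutes, not measured results.

def total_time(radiosity_once, raytrace_per_render, n_renders):
    """One radiosity solve, then n raytraced renderings of the model."""
    return radiosity_once + raytrace_per_render * n_renders

# Hypothetical chip A: slower radiosity solve, faster raytracing.
# Hypothetical chip B: faster radiosity solve, slower raytracing.
for n in (1, 5, 20):
    a = total_time(60, 15, n)
    b = total_time(45, 20, n)
    print(f"{n:2d} renders: chip A {a} min, chip B {b} min")

# 1 render:   A 75,  B 65   (B wins while the solve dominates)
# 5 renders:  A 135, B 145  (A pulls ahead)
# 20 renders: A 360, B 445  (raytrace speed is what matters)
```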

 

Now, I know next to nothing about SSE2 instruction sets in Intel chips vs. 3DNow!, MMX, etc., but there is another article here with a VERY technical breakdown of both chips:

 

http://www.aceshardware.com/Spades/read.php?article_id=40000189

 

I'd be interested in what others think of the radiosity solution reportedly being slower on an AMD setup vs. Intel.

 

Paul


Hey Guys,

 

Jeff Mottle dropped me a line and asked me to comment on this thread. It's about Athlons and P4s, so how could I resist? :)

 

If you don't know me, I write the techbits and hardware section for www.3dluvr.com, as well as providing Max hardware support on the discreet forum at support.discreet.com. I also have a small studio in NYC known as ThirdEye, which specializes in design aspects of CG. We've currently got an exhibition in NYC on the 18th-22nd, sponsored by Bombay Sapphire, showcasing our new designs. I also build and maintain a fleet of machines for the Department of Entomology at the University of Maryland.

 

Why am I telling you all this stuff? Because I think my knowledge of computer hardware, combined with my ability to actually use it as an artist, puts me in a unique position to give out some useful info.

 

First off, the current systems I own are as follows:

 

Single Athlon 1900+ XP on an Abit KR7A-133, with 1.0 GB of Mushkin PC2100 2-2-2 DDR.

 

1.8A Pentium IV Northwood on an Asus P4B266-C, with 1.0 GB of Mushkin PC2700 DDR.

 

Dual Athlon 1900+ MP on an Asus A7M266-D, with 1.0 GB of registered ECC Mushkin 2-2-2 PC2100.

 

The video cards in each system, from top to bottom: Leadtek GeForce 2 Pro, Leadtek GeForce 4 Ti 4400, NVIDIA Quadro DCC.

 

All systems are self built.

 

I recently finished writing the updated 3dluvr Single vs. Dual AMD article, featuring over 24 hours of tests, both viewport and rendering. Here are some quick numbers to whet your appetite.

 

[I forgot to mention: all these tests are with Max 4.2 on Windows 2000, running Service Pack 2 and the latest compatibility and security patches. The system was cleaned and defragged before testing, and the two Athlon systems were freshly installed. All except for Arnoldmark, which is command-line driven in one of its current conceptions.]

 

Using the Specmark 3dsmaxR3 Test at 2048x1536

 

Single 1900+ XP: 2 hours, 8 minutes, 35 seconds

Dual 1900+ MP: 1 hour, 9 minutes, 1 second.

 

Using Max scanline raytracing, 1.6 million polygons (2048x1536):

 

www.3dluvr.com/crossbow/incoming/cubes.jpg

 

Single: 27 minutes, 26 seconds

Dual: 14 minutes, 23 seconds.

 

Afterburn volumetric test (2048x1536)

 

Single: 1 hour, 12 minutes, 30 seconds

Dual: 35 minutes, 58 seconds.

 

And the numbers you guys are interested in

 

First...fake radiosity with schlorby's HengeMark

 

Single: 220 seconds

Dual: 118 seconds.

 

And the whopper. Arnoldmark, pure radiosity and GI.

 

Single: 45 minutes, 56 seconds

Dual: 28 minutes.
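For a quick gauge of the scaling, here is a small sketch that converts those times into dual-over-single speedups. The conversions to seconds are my transcription of the figures above, not part of the original test suite:

```python
# Times from the post above, converted to seconds: (single, dual).
tests = {
    "Specmark 3dsmaxR3": (7715, 4141),   # 2:08:35 vs 1:09:01
    "Scanline raytrace": (1646, 863),    # 27:26 vs 14:23
    "Afterburn":         (4350, 2158),   # 1:12:30 vs 35:58
    "HengeMark":         (220, 118),     # 220 s vs 118 s
    "Arnoldmark":        (2756, 1680),   # 45:56 vs 28:00
}

for name, (single, dual) in tests.items():
    print(f"{name:18s} {single / dual:.2f}x faster with two CPUs")

# Roughly 1.6x to 2.0x: the second CPU buys most of a doubling,
# with the GI-heavy Arnoldmark at the low end (~1.64x).
```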

 

The NTSC tests, Web tests, and other film tests on the other 6 scenes will be at www.3dluvr.com when the article posts.

 

Now that you know how fast those second CPUs are, how does the Northwood system compare?

 

To put it bluntly.

 

The Northwood system is slow. Slower than pretty much everything the 1900+ XP deals with. Games, apps, you name it. It looks like it'll be faster in the viewport, though, but I haven't swapped the cards around to check yet.

 

Here's the Northwood attempting to do the radiosity test. (Remember that SSE instructions are included in anything with SSE2, and that the Athlon XP and MP processors have full SSE support, thus gaining acceleration from all SSE2 applications.)

 

(Actually, I thought I had this rendered already, but I don't, so I'm rendering it now; I will post again in an hour or so.)

 

Here are some pure math calculations using some DNA analysis software designed for the Mac.

 

G3: 240 hours

G4: 80 hours

Northwood: 4 hours, 55 minutes

1900+ XP: 3 hours, 50 minutes.

 

There will be a 1900+ XP vs Northwood article on 3dluvr, but it's going to take a long time to do, so don't expect it for another month or so.

 

As for the Newtek test....

 

1) Try calling Newtek and asking to talk to the Lightwave team. Heh. They're not there anymore.

 

2) The Newtek code was specifically designed for the P4... however, I don't think anyone has ever tested it rigorously enough to see whether that performance gain is real, or whether it just shows up in short rendered frames. (Most sites do very short renders, under 300 seconds, which doesn't showcase true performance differences.)

 

3) I've been trying to get an NFR of Lightwave to test, but it seems there isn't anyone to actually contact anymore to get one.

 

4) One of my main Lightwave buddies, RipLee, absolutely LOVES his dual AMDs and doesn't understand what the !@%(@& all the BS about Xeons being faster is about.

 

Hope that helps some until the render finishes.

 



Here's the data for the Northwood, 1.8A GHz.

 

For Arnoldmark Radiosity and GI.

 

2048x1536

 

1 Hour 11 minutes and 39 seconds.

 

[Remember this program is command-line driven, and thus the version of Max does not factor into performance.]
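For anyone who wants to collect this kind of number themselves, timing a command-line render just means wrapping the invocation with a wall clock. A minimal sketch; the executable name and flags below are hypothetical placeholders, not Arnoldmark's actual invocation:

```python
# Minimal wall-clock timing of a command-line render.
# "render.exe" and its arguments are placeholders.

import subprocess
import time

cmd = ["render.exe", "scene.dat", "-res", "2048", "1536"]

start = time.perf_counter()
subprocess.run(cmd, check=True)   # blocks until the render finishes
elapsed = int(time.perf_counter() - start)

h, rem = divmod(elapsed, 3600)
m, s = divmod(rem, 60)
print(f"Wall time: {h}h {m}m {s}s")
```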

 

And before anyone says, "Well, maybe he's biased because he didn't buy that system with his own money": I did. I paid about 1,200 USD for it, and the Athlon system kicks its butt. Well, at least it plays Soldier of Fortune 2 nicely.

 

One thing to note however....

 

Make sure you purchase an Athlon system from a reputable retailer, like www.boxxtech.com or www.alienware.com, unless you have a good amount of experience constructing machines yourself. These machines take a good amount of knowledge to get running (especially if something goes wrong) and require a lot of patience during the first week of running (to finish tweaking and reach optimal stability).

 



Greg,

 

Well, it's very reassuring to have a guru of your caliber on board.

 

Your info is laid out well enough that even a non-tech guy like me can understand the test comparisons.

 

Thanks for your input, some of the best that I have seen for our field to date.

 

I put in the order for my new dual AMD 2000+ MP system today; I cannot wait to run it through the hoops!

 

Paul


We've had this debate a lot on the LWG3d.org forums lately, with people coming in and asking what would be the better system: a P4 Northwood or an Athlon-based one. In Lightwave at the moment, I think I'd have to go with a Northwood-based system, unless you're looking at dual processors, in which case the Athlon would steal the show. Newtek really optimized the hell out of Lightwave when it came to SSE2, and it shows in render times. Tom's Hardware also does a fair job benchmarking the systems under Lightwave and Cinema 4D XL. While I might not like the guy's website for a lot of his other reviews, he's one of the few who uses those software packages.

 

In other software, I seriously doubt the P4 Northwood could touch an Athlon-based system, let alone a dual Athlon. :) Ace's Hardware (already pointed out above) does a great job benchmarking the systems in 3ds Max and Maya, and I'd consider their reviews. Overall, though, I think you'll be happy with the dual Athlon system. I've had the chance to play with a dual Athlon 1.4 GHz system for two weeks, and it never crashed once on me under almost constant rendering.


Well... ever so slightly off topic... but do we really need a $6,000 system? I consider myself a fairly experienced 3D artist with some talent. I can't imagine being able to push a $6,000 machine to its limits.

 

I know people out there who have maybe $3,000 systems... but the real limit is their skills, not the machine. Sure, the machine makes life a little less frustrating, but would it not be worth putting that hard-earned cash into further training or advertising instead?

 

I remember when I was a relative beginner... thinking that I needed a *SHOCK* 600 MHz machine with 512 MB of RAM, etc. But even now, I am getting along without any problems running a P800 laptop with a 16 MB graphics card.

 

Maybe I just wish I had a $6k workstation in my room.


Originally posted by Cesar Rullier:

hey guys, what's the scoop with a single Athlon XP 1.8, for example, vs. a P4 of similar speed?

mzagorski,

 

If you look at my first post and the second post, you'll see the Arnold render compared between a 1.8A Northwood and a 1900+ XP. Here are the two scores again for easy comparison. Both are rendered via command line (basically we're looking at JUST a render score here, no program in particular) with an upcoming radiosity and GI renderer. Both scenes are rendered at 2048x1536.

 

Northwood 1.8A

(Pentium IV with 512 KB L2 cache, Socket 478 mPGA)

 

1 Hour 11 minutes and 39 seconds.

 

Athlon 1900+ XP

(256 KB L2 cache, 128 KB L1 cache, Socket A 462)

 

45 minutes, 56 seconds

 

The Athlon's score is pretty impressive, considering it's 200 MHz slower than the Northwood processor.

 

Andril,

 

I would take the information posted on Tom's Hardware with a grain of salt. Recently all the head editors of the site quit, and it's basically being run by a bunch of kids right now, who are all pretty new to the scene. Tom Pabst no longer oversees the site and instead spends his time working on a TV show where he discusses computer hardware.

 

Recently their reviews have gone sharply downhill, starting with the first use of Lightwave to test processor speed. In that particular review, they claimed that Blizzard exclusively used Lightwave to create Diablo II. Thousands of people emailed the site to correct this mistake, and it was never fixed.

 

This continues with the dual Athlon review, where they make the outrageous claims that the second CPU gives only a 7% increase in performance in 3ds Max, and that Linux compiles FASTER on a single-processor system. These points reveal an utter inaccuracy and an unwillingness to do any sort of research before posting a live article.

 

They are also being sued for slander of a former employee. Additionally, their benchmarks are sometimes impossible to reproduce on other websites, are generally poorly executed, and are not indicative of actual performance.

 

If you do continue to read Tom's Hardware, I would seriously recommend reading a variety of other sites before making purchasing decisions, so you can better counter the site's inaccuracy.

 

Now that I'm done bashing Tom's Hardware...

 

One of the fundamental problems with gaming hardware sites reviewing professional-level hardware is that they don't really understand how the programs work. They tend to bench video card performance at low resolutions (who the hell ever works at 800x600?), and usually record data incorrectly (they just use the FPS timer instead of actually timing the scenes). They also tend to use very fast-rendering scenes, which are usually not very heavy in either their geometry or their setups.

 

Using a light scene to test rendering performance is like jumping in a Ferrari and seeing how quick it goes 0-20 kph. Everyone really wants to know how fast it goes 0-100 kph, but that's not the info they're giving you.

 

I wish I was home to give you an example of what I'm talking about, but let's walk through one anyway.

 

Let's take apollo.max, for example. If you render this bad boy at NTSC res, it'll take about 11 seconds on a single 1900+ XP. It takes around 7-8 seconds on a dual processor. Looking at this data offhand, you could immediately assume that a dual-processor system only gives about a 25% increase in performance.

 

However if you jack the res up to 2048x1536, the increase jumps to around 45-50% in this particular scene.

 

Why is this? It's all in the multithreading of the render subsystems. Most renderers are 90% multithreaded, 10% single-threaded. This means that if you go and test a bunch of really fast-rendering scenes, the performance increases they show are not indicative of actual use. (We all know how long radiosity and GI solutions take to render.)
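Here is the Amdahl's-law arithmetic behind that, as a minimal sketch. The 90/10 split is the figure from the paragraph above; treating a short render as only ~60% parallel is my illustrative assumption:

```python
# Amdahl's law: the speedup two CPUs can give when only part of
# the work is multithreaded.

def dual_speedup(parallel_fraction, cpus=2):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cpus)

print(dual_speedup(0.90))  # ~1.82x: a long render, ~45% less wall time
print(dual_speedup(0.60))  # ~1.43x: a short render where fixed,
                           # single-threaded costs eat 40% of the total
```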

 

Now, with most of the Lightwave reviews, they're doing relatively quick renders (anything under 300 seconds is really fast). This tends to bias towards different stages of the render pipeline, which could artificially enhance the supposed performance of the Pentium IV processor over the Athlon. Or it could expand the performance gap between the two systems if it were rendered at a higher resolution.

 

I've been trying to get an NFR copy of Lightwave 7 to run some 3dluvr benchmarks at a variety of resolutions and scenes, but have been unable to do so.

 

Anyway, that's my $2.50.


Hello. Well, I think we must know the other side of the coin. Here is a test of the latest professional 3D accelerators in 3D Studio MAX 4.26. The systems compared:

 

Pentium4 2200 MHz based computer:

Intel Pentium 4 2200 MHz;

ASUS P4T-E (i850) mainboard;

512 MBytes RDRAM PC800;

Quantum FB AS HDD, 20 GBytes;

 

Athlon XP 2000+ based computer:

AMD Athlon XP 2000+ (1666 MHz);

EPoX 8KHA+ (VIA KT266A) mainboard;

512 MBytes DDR SDRAM PC2100;

Seagate Barracuda IV HDD, 40 GBytes;

 

"As for the platforms, the Intel's one seems an optimal solution for professional applications, and the AMD's platform suits better for games. In our further tests we won't use a stand based on the AMD processors any more, at least, until professional applications are optimized for them. " (march 2002, By Alexander Kondakov) :rolleyes:

 

This is the conclusion, but the whole article is very interesting:

 

digitlife

 

And now I think we are lost again. Who knows which is better? Nobody, I think. But what we really do know is that both systems are very fast compared to old systems, so both are good.


This is the conclusion, but the whole article is very interesting:

 

digitlife

 

Here are the conclusions I found interesting in that article...

 

"Testing technique for accelerators in the 3D Studio MAX 4.26

 

"The method below is used to test the accelerators in the first three scenes.

 

..Installation of an operating system, drivers, 3D MAX 3.1 and all updates."

 

[Hmm... odd... isn't this test for Max 4.26? Hehe.]

 

"The tests were held under the Windows XP Professional."

 

[There are known OGL performance issues with Windows XP]

 

"The tests were carried out in 1280x1024 at 32-bit color depth."

 

[This resolution isn't high enough to stress the cards; a GeForce 2 GTS isn't even stressed at this res.]

 

"The scenes we used in the tests haven't changed much, but we added 11 new scenes which came with the 3D MAX ver.3 in the benchmark folder."

 

[They're using the wrong set. There are Max 4 benchmarks available from Discreet for testing Max 4; they're testing Max 4 with Max 3 scenes. This doesn't make the data invalid; it would just be a lot more useful to use the new updated scenes, which are a bit heavier in their complexity. That's like benchmarking a GeForce4 Ti 4600 with Quake 1.]

 

[What I find most interesting about the review is how much FASTER the Athlon system is than the Pentium IV. It's widely known that P4s are usually very, very speedy in OGL or viewport modeling, but these tests tend to push in the Athlon's favor, especially considering the price difference between the two systems.]

 

[You can see this gap appear in the Geom2 scenes, where the OGL abilities of the P4 manifest themselves as the cards become stressed, which is one of the reasons the tests need to be run at higher resolutions, or updated to use the R4 benchmarks.]

 

Remember when looking at the article to READ the scores and not just look at the length of the bars, as each chart is scaled differently.

 

All in all, Digit-Life does possibly the best job of testing Max (at least the most thorough), better than X-bit labs or CGchannel.

 

Don't get me started on X-bit labs, though; they just piss me off.

 

To sum up this little blurb.

 

Athlons rule in rendering. Pentium IVs rule in OGL. Why? Intel paid everyone to optimize a variety of drivers for their CPUs.

 

Possibly the dream configuration would be something like this...

 

A single 2.4 GHz P4 with a Quadro4 900 XGL and 512 MB-1.0 GB of RAM (you don't need much since you'd just be viewport modeling) as the modeling station, with a whole crapload of dual Athlon systems sitting in 1U and 2U rackmounts as the render boxes.

 

Either way, when you factor in the difference in OGL performance between the two systems (10-15%) vs. the price difference (600 USD per CPU for the 2.4 GHz P4 vs. 200-ish for a 2100+ XP), I believe the Athlon still has the advantage.


Hello Greg Hess and Jeff Mottle,

 

You both are doing an "A1 job" with your web sites! In fact they are the two sites I visit almost daily! Great to see you two team up!

 

Greg, by coincidence, I was just reading your first couple of replies and was thinking of an article I recently read at X-bit labs, which would fit perfectly with this discussion.

 

The title is "Dual-Processor Platforms in 3ds max 4" and can be seen at www.xbitlabs.com/cpu/3dsmax4-dual/index.html .

 

Sorry, it looks like I might be 'getting you started'!

 

It appears that the dual Pentium 4 system outperforms the dual Athlon systems in many of the viewport benchmarks, in some cases by a large factor. The reason given is that the AGP port implementation on the Athlon motherboard is poor. Is this just on certain motherboards, or all of them?
The Athlon compares favorably on other viewport tests and consistently outperforms on final rendering tests with its superior performance in FPU calculations.
The Athlon compares favorable on other viewport tests and consistently outperforms on final rendering tests with its superior performance in FPU calculations.

 

Smooth and quick viewport responsiveness has to be very important to 3D designers while they are interacting with the display, perhaps more important than a final render.

 

I would like your knowledgeable take on this subject.

 

I don't believe you have any head-to-head Pentium vs. Athlon viewport rendering tests at www.3dluvr.com, or did I miss it? I sure would like to see you take that one on!

 

Also, I am in the market for a 3D MAX system but would like to stay under CDN$2000 (US$1300). Could you recommend a single Athlon CPU 'best-bang-for-the-buck' configuration? Could you recommend any more modestly priced system builders?

Also importantly, can you advise on a quality motherboard or point to some good comparison articles?

I am thinking of an Athlon XP 1800+ or 1900+, with one or two NVIDIA cards (as you suggest in order to get dual-monitor display, in another of your great articles), perhaps an older GF2 GTS, Pro, or Ultra (if I can still find them) with the SoftQuadro utility. What does the newer "Ti" designation add to the GF2 Ti and GF3 Ti cards?

 

Also, which GF3 card with the SoftQuadro utility would compare with the DCC? And which GF3 would add the most bang for the buck? How much extra 'bang'?

 

I know you will refer me to your excellent articles at www.3dluvr.com, so I will beat you to the punch. They are http://www.3dluvr.com/content/techz/nvidiadcc.php and http://www.3dluvr.com/content/techz/maxfaq.php.

You did test the GF2 GTS, which held its own, but I can only imagine that a GF2 Ultra with SoftQuadro would help close the gap with the more expensive cards. What are your thoughts?

 

Perhaps we should take this video card discussion to a new thread.

 

As you can see, I'm killing quite a few birds with one stone. This is my first forum posting ever, so perhaps you can forgive me.

 

Again, Greg and Jeff, Keep up the GREAT work!


You both are doing an "A1 job" with your web sites! In fact they are the two sites I visit almost daily! Great to see you two team up!

 

[3dluvr is run by Pedja L., not me. I merely humbly write articles for the technology section, which he was nice enough to give me. It's all his baby, and he deserves full credit for such a fantastic site.]

 

The title is "Dual-Processor Platforms in 3ds max 4" and can be seen at www.xbitlabs.com/cpu/3dsmax4-dual/index.html .

 

Here are some quick notes on that article :)

 

I'm going to paraphrase since I don't feel like reading it again.

 

First off, OpenGL drivers are not multithreaded unless specifically designed for it. Most OpenGL drivers/cards are single-threaded and thus do not make use of the second CPU, making the whole article's use of dual CPUs pretty hilarious. (It's basically a single P3 vs. an Athlon MP [pre-Palomino, I think, aka no SSE instructions yet].)

 

However, if you run software HEIDI, it's multithreaded, and you can actually see the second CPU come into play (a 40-50% increase in viewport performance).

 

The resolution they used to test video performance was 1024x768x32. That's laughable; it pretty much makes every bit of data utterly useless. The scores will change COMPLETELY at higher resolutions. I'm pretty sure the positions will reverse at 1600x1200x32, when memory bandwidth comes into play.

 

"All Lighting benchmarks feature "lite" geometry that is why the major workload is laid upon the AGP bus, which is better implemented in Dual Socket370 platform"

 

The video cards aren't even running at 50% power at that res. The AGP bus has nothing to do with the scores; it's all driver optimizations for the P3 vs. the older Athlon. The only time the AGP bus truly gets tested is if you run out of video card bandwidth, or RAM, and have to swap across the bus. When that happens, YOU KNOW, because your viewports turn into jerky hell instead of heavenly smoothness.

 

Here's another laughable report...

 

"I carried out final rendering of three scenes from 3ds max 4 package with the same settings and resolutions set to 800x600, as the results for platforms tested will correlate similarly no matter in what resolution you run the tests: 640x480 or 1600x1200. Here are these scenes:"

 

Someone want to tell me when a render takes the same amount of time at NTSC res as at film res? These people probably acquired an illegal version of Max, read a few quick articles, and started BSing their way through some benchmarks. Yes, it's harsh; yes, I'm mean; but I deal with having to put out the flames of their crap on a daily basis. Rendering at 800x600 IS NOT going to show you performance differences.

 

[Is this just on certain motherboards or all?]

 

Driver optimizations between the Intel and AMD CPUs. The AGP implementation is pretty much the same. There was recently an article on NVIDIA and ATI making their drivers work faster on Intel chipsets because of Intel funding. I'll try and dig it up.

 

[I don't believe you have any head-to-head Pentium vs. Athlon viewport rendering tests at]

 

It's not done yet. I don't get paid for it, so it must be done in my free time, which, of course, I don't have too much of. It usually takes me a month or so to write an article. First is the Single vs. Dual AMD article, then the Northwood vs. XP.

 

Could you recommend a single Athlon CPU 'best-bang-for-the-buck' configuration?

 

Are you based in the States? Or do you have access to stateside retailers?

 

Could you recommend any more modestly priced system builders?

 

boxxtech.com is pretty inexpensive.

 

Also importantly, can you advise on a quality motherboard or point to some good comparison articles?

 

Please post this question to the support.discreet.com forum (Hardware/OS) so that more people can benefit from the replies.

 

What does the newer "Ti" designation add to the GF2 Ti and GF3 Ti cards?

 

It means a smaller 0.15-micron process, which results in cooler-running cards with smaller dies that can be produced at a lower cost.

 

Also, which GF3 card with the SoftQuadro utility would compare with the DCC?

 

The GeForce 3 Ti 500. I'd really recommend a GF4 Ti 4400 for a new card.

 

You did test the GF2 GTS, which held its own, but I can only imagine that a GF2 Ultra with SoftQuadro would help close the gap with the more expensive cards. What are your thoughts?

 

The GeForce 2 Ti 450 from Gainward is awesome and would be a kickin' workstation card for less than 200 USD.

 

Hold on for a few minutes while I post a retort to another Xbitlabs post.

 

(Here's the retort to their Quadro4 vs. FireGL 8800 article.)

 

A few quick notes on this article....

 

1) They use the wrong set of Max benchmarks. These are the R3 benchmarks, not the R4. So all their comments about this being the "recommended" set of benchmarks are false.

 

A link to the R4 benchmarks (via Discreet):

ftp://ftp.ktx.com/vcards/bmark4.zip

I have informed X-bit labs of this, but they refuse to fix any errors or reply to any emails.

 

2) The data reported in this article does not mirror the other Quadro4 review article. In fact, on some of the same benchmarks, the data is different... hmm, interesting.

 

3) Tests were run at 1280x1024, and not 1600x1200, where the stress starts to show up in heavily textured scenes.

 

4) The quote

"...decided not to run any tests with the enabled Anti-Aliasing, as all the contemporary graphics accelerators can do 3ds max anti-aliasing without performance losses."

is completely and utterly false. There can be a 10-20% drop in viewport performance when switching between AA enabled and disabled.

 

http://www.3dluvr.com/content/techz/nvidiadcc/dcc/dccmaxvsspeedfps.jpg

 

(Quadro DCC fastest and quality settings)

 

5) By only running the tests once, the article becomes biased due to the way Max caches the scene data (i.e., which scenes were opened first in the string of benchmarks determines how the card performs in a certain scene). This can easily be reproduced by working in Max for half an hour or so, taking a benchmark, then performing a full system reboot, opening up a fresh copy of Max, and running the benchmark again. In most cases you'll see different data, especially if you were opening multiple scenes previously.

 

All in all, it's a useful article for getting a general gauge of the different cards' performance, but don't use it as the end-all judge of how a FireGL 8800 will perform in Max 4.
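On point 5 specifically, here is a minimal sketch of the benchmarking loop I'd want to see instead: a warm-up pass to prime the caches, then several timed repetitions, reporting the spread rather than a single number. The run_benchmark callable is a hypothetical placeholder for whatever launches the viewport test:

```python
# Warm up, then time several repetitions and report the spread.
# run_benchmark is a placeholder; wire it to the actual test.

import statistics
import time

def timed_runs(run_benchmark, warmups=1, repeats=3):
    for _ in range(warmups):
        run_benchmark()            # prime caches; discard this result
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        run_benchmark()
        times.append(time.perf_counter() - start)
    return min(times), statistics.median(times), max(times)

# Reporting min/median/max exposes exactly the scene-cache-order
# variance that a single run hides.
```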

 

Greg Hess

 


 



Greg,

Thanks for the feedback.

 

I will check out boxxtech.

 

I am in Canada but could consider purchasing from a retailer in the States. Do you think a Dell or Gateway could put together a decent single-Athlon, budget-conscious system?

 

Regarding display resolution, I work on a 19" monitor at 1152x864 or 1280x1024. Therefore, I don't think I need a bleeding-edge video card.

 

Jeff, perhaps you would like to poll the audience on the size and display resolution used.

 

Greg, I'm looking forward to your articles. Thanks again.


"Do you think a Dell or Gateway could put together a decent single Athlon budget conscience system?"

 

Neither Dell nor Gateway offer AMD CPUs in their systems; they are Intel-only. Give www.alienware.com, www.monarchcomputer.com, www.xicomputer.com, and www.puicorp.com a look as well.

 

I'd especially try to stay away from Dell after their latest announcement that they're breaking away from the ATX standard to create their own Dell standard for power supplies and motherboards. This prevents you from upgrading a variety of components without going through Dell, because their form factor is all screwed up, as are their power supply voltage feeds. It pretty much means that Dell has now joined Compaq in the annoying integrated crap that no professional wants or needs. Great for the consumer, bad for the professional.

 

"Therefore I don't think I need the bleeding edge video card. "

 

I'd still recommend, depending on budget....

 

GeForce 2 Ti, GeForce 3 Ti 500, or GeForce 4 Ti 4400.

 

With more money, a Quadro DCC or Quadro4 750 XGL (the mid-range Quadro4).


Guest MarkH

as has been stated fifty bazillion times over....

 

AMDs generally have better floating-point calculation speeds, which equals faster render times for us, better performance in Photoshop, etc.

 

P4s have recently been shown to work faster with NVIDIA graphics cards, which tend to be the staple for medium-end stations and gaming rigs.

 

If you're going dual AMDs, look into Asus motherboards. Also, if you don't wanna pay through the nose for MPs, look into the Duron processors... and if you do a little poking around on some hardware sites, you can find out how to use the less expensive XPs in dual action.

 

I'm a fan of Hard-OCP and Tech Report

