
Dell Precision 650 workstation (part II)



I'm writing this new topic just to say thanks a lot to everyone for your help and comments. I've been taking a look at all the sites you pointed me to. After that, I had these two options:

 

3DBOXX R3s

 

Dual Athlon MP Processor Model 2600+

1GB PC2100 Registered ECC DDR (2 DIMMS)

NVIDIA Quadro4 980 XGL 128MB

36GB 10,000 rpm U320 SCSI drive

CDRW 52x

16x DVD

 

Total Price: $4,465

 

 

DELL PRECISION 650

 

Dual Intel® Xeon™ Processor 2.80GHz 512K Cache

1GB DDR266 SDRAM Memory ECC (2 DIMMS)

NVIDIA Quadro4 900XGL 128MB

36GB 10,000 rpm U320 SCSI drive

CDRW 48x

16X DVD

 

Total Price: $4,124 (US price. In Spain it's cheaper, and comes with a 17" CRT monitor!)

 

 

And my choice is the DELL because:

- DELL is cheaper (with Dual Xeon 2.8!!!)

- I live in Spain. There is a DELL Spain here.

- Delivery costs are free.

- If I have problems with this machine, a DELL technician will be in my studio the next day.

 

These things are more important to me than the minimal differences between these two workstations. I don't have time to gain the kind of in-depth knowledge you all have about memory, building a workstation myself, etc.

 

And again, thanks a lot to all.

 

Jose Maria Ataide.

Sevilla. Spain.


SCSI isn't necessary for a 3D workstation. It's nice to have, but unless you're doing a lot of compositing/video work, you can easily get by with IDE.

 

Moving to a system without onboard SCSI, and moving to a larger-cache IDE drive (with a hell of a lot more space), should save you 500-800 USD, depending on what particular configuration you were looking at.


Hell... over $4000 for a workstation...

Check this thread out for an at least equal, if not faster, setup for much less than what you are going to pay.

Pay someone to build it and save the rest for other goodies (monitors, printer/s, digicam) and software.

Hell, pay me the flight ticket, I'll come and build it for you... :ebiggrin:


Moving to a system without onboard SCSI, and moving to a larger-cache IDE drive (with a hell of a lot more space), should save you 500-800 USD, depending on what particular configuration you were looking at.
LOL. I am delighted Ataide IS spending his money where it really will pay off in the longer term, with the freedom he buys by using SCSI to push the system out to its absolute maximum. I mean, in a couple of years' time, even if his CPUs seem a bit wimpy, he will still have a decent hard drive and a bullet-proof system in his hands.

 

Go for SCSI; money very well spent, IMHO at least. Then again, I always used to save a lot of money on my CPU/memory to go for the best SCSI hardware and graphics board I could buy. Overall, I have learned to build systems like this, and find it suits the particular way that I use a computer to model in 3D.

 

Dunno; some other guys I know always crave faster CPUs and memory, but that just never really worked for me.

 

I just like the way the mouse moves and dialogue boxes pop up, just the really smooth feel of a dual-CPU, SCSI box. I can tolerate slower CPUs and slower system architecture, but I REALLY like my drive subsystem to be spanking good. (Which does mean good and expensive, but always money well spent from my point of view of working.)


Greg, I spec'd the 3DBOXX with SCSI and the same HD to compare it as equally as possible to the Dell Precision 650, because on the Dell, SCSI is included in the default basic configuration (on its mobo).

 

bigcahunak, this computer is going to be my only 3D workstation, so I couldn't wait around if a component dies (you know, send back the defective part... wait to receive a new one...). And what happens to my work in the meantime? Dell says that if I have problems with defective components, my workstation will be running again the day after I call them.

 

garethace, I absolutely agree with you.

 

Why do so many people here dislike DELL? Why did only one of you say "go ahead with Dell"? Honestly, if you lived here (Spain), would you buy a $4000 computer from BOXX (USA)?

 

 

Jose Maria Ataide

Sevilla. Spain


Why do so many people here dislike DELL?
Two reasons spring to my mind automatically.

 

Reason no.1.

 

If you are willing, knowledgeable enough, and able to keep track of the 12-20 different warranties for the parts and software you need to set up a workstation, then it is nice to build your own, or ask someone to build one for you.

 

On the other hand, Dell manages to consolidate all those 12-20 warranties into one single, very manageable agreement.

 

But that is where the Dell system breaks down for many people, Ataide. You see, many people here may wish to customise their system with some specialised part or alteration.

 

If you do that with a Dell system, it rarely works out for the best, since Dell does not recognise the 'foreign part' that was NOT in your original configuration.

 

Reason no. 2.

 

If you buy systems for clients, as Greg does, mainly from BOXX, it is a wise move. Just imagine: building 300-odd different systems for all those clients means a nightmare of driver co-ordination, keeping track of all the warranties, clients complaining that systems are this or that... Once the clients know they can drive to your own house at any hour of the day or evening, practically hand you a system and say "I want this done by tomorrow"... you are in a world of s***.

 

I have been there; I have built systems for 10 different clients at the same time. When it comes to replacing something as simple as a CPU fan, they will not pay, and your profits evaporate.

 

So I would actually buy Dell if performing the role of IT contractor for a small CG practice. That way, I take less responsibility for the systems.

 

The point about BOXX is that they are more willing to customise to the requirements of the visual artist. They provide a bigger range of options.

 

But most of all, it appears BOXX uses standards-compliant components, so that when you HAVE to adjust or upgrade the system a year from now, you are not stuck because of irregular parts. It is much easier to troubleshoot standard parts, because you can swap out parts individually until you find the source of your problem. That is impossible with Dell, because they use their own components, making the job of fixing an old Dell system a bit of a task indeed.

 

C'mon, you know how US people operate: they fix everything from their car to their lawn sprinkler system. Not like us here in Europe, who pick up the phone for everything; in Europe we are trained 'not to look under the bonnet' from an early age.

 

I.e., to establish a trust-based relationship with a dealer who will sell us a good car, a good fridge, a good computer, whatever product it happens to be.

 

In a vast territory like the USA, cowboys are around every corner and most people there get ripped off half a dozen times in their lifetime. This is just a way of life in America, so people learn independence, resourcefulness, and that general American 'fix-it' mentality.

 

Which I find altogether refreshing and different from the European approach. I mean, have you seen the way ordinary people are taken for a ride when buying home PCs here in Europe?

 

It sickens my stomach.

 

Honestly, if you lived here (Spain), would you buy a $4000 computer from BOXX (USA)?
If it was my own system, yeah, I would. Because I know everything in that system is standard, regular kit, I can easily maintain and upgrade it myself if I so wish later on. I can do practically anything I wish with the BOXX system, but with the Dell it is too much of a sealed contract and a sealed ATX unit. With Dell you are just a serial number written on the side of the machine.

 

Any parts I wish to change in the BOXX I can easily get here in Europe: standard system, standard parts, no probs. But I understand that, from your point of view, going under the bonnet is out of the question. However, if you bring your BOXX system to a good computer repair shop in Spain, they can help you out. If you bring your Dell to a guy in Spain, most likely you are asking for trouble.

 

[ April 28, 2003, 11:52 AM: Message edited by: garethace ]


Originally posted by JM Ataide:

bigcahunak, this computer is going to be my only 3D workstation, so I couldn't wait around if a component dies (you know, send back the defective part... wait to receive a new one...). And what happens to my work in the meantime? Dell says that if I have problems with defective components, my workstation will be running again the day after I call them.

Well... if you don't buy the Dell, then with that amount in your pocket, it doesn't have to be your "only 3D workstation".

Dell is no magic. They buy components and put systems together with them. Only they twist the whole platform so much that it's impossible to change a thing without the Dell guys themselves (who charge millions for every stupid memory upgrade). So Dell is no computing power at all; it's only a name for service, and I doubt that really works anywhere but the US, as I got really bad service and attitude from them when I used to be a client (sucker me...).

 

But what could be so bad about getting a custom-built machine from a local computer shop? You won't have to wait for the Dell guys if you run into problems. All you'll have to do is take your computer down to them, and that's only if something really happens, which is unlikely. Most problems occur during installation; after that there is usually nothing. And if, only if, a component dies on you, just go down again and buy a new one. You're paying Dell all that much for next-day service, for a machine you are going to trash in 2 years anyway.

Also, help your fellow Spanish people earn some money, not only Mr. Dell.

 

It's not that I really care, it's just stupid... and I hate brand-name machines (not only Dell...).


Gare,

 

I agree with a lot of your valid points on SCSI configs. But not when it comes to CG work.

 

You'll never even approach the throughput levels of a fast WD JB drive if you're just moving files around, loading scenes, or rendering.

 

You'll actually find that most CG artists who have been all-SCSI most of their lives will notice no difference in performance switching to an IDE platform, especially if they're not doing disk-intensive operations.

 

In fact, the only real differences are cost and noise.

 

Of course, if you're dealing with specific sub-aspects of CG, like compositing or video editing, or operations which require disk throughput, then an IDE RAID array or SCSI disk makes all the difference.

 

One thing you have to remember though is...

 

Don't go SCSI just because it's SCSI. That's just a waste of money. Go SCSI BECAUSE you NEED SCSI.

 

There seems to be an old adage among individuals who are stuck thinking there is an utterly massive difference between SCSI and IDE. If you go back a few years (around 5), there was. The difference was so huge you could instantly notice whether you were on a SCSI machine as opposed to an IDE one. Those individuals (including myself) vowed never to go back to an IDE system. Unfortunately, this also means they never experienced the advancements in speed and performance that have been made in the past 5 or so years. Sure, SCSI's gotten faster. It always will get faster. But comparably, IDE has gotten A HELL of a lot faster. So fast that it's actually matching, if not exceeding, the performance of mainstream (7200 rpm) SCSI drives. There are even 10,000 rpm IDE drives hitting the market now.

 

Of course you still have certain limitations with IDE, including the whole IRQ BS and the master/slave relationship, which all must be taken into account.

 

One of the biggest limitations of SCSI nowadays is that it's mainly just a disk-only subsystem. Back in the day, all the latest and greatest CD-ROMs, burners, and peripherals were always SCSI. Then somewhere along the way, they all swapped to IDE. It's almost impossible to have an all-SCSI system nowadays unless you make some sort of sacrifice, either in availability or in the performance of secondary drives. (Aka you can't get some of the newer/nicer DVD burners in SCSI form.)

 

If you need SCSI, you can always add it later as an expansion card.

 

I was an all-SCSI guy for almost 5 years before I switched. Then I realized I wasn't ever utilizing the power of SCSI, and went with IDE configs. No performance difference, tons of extra cash :) .

 

(Previous config was Seagate 18/36 Gig 10,000 rpm Cheetahs on a U2W 80 MB/sec adapter card, with Plextor UltraSCSI burners and ROMs).

 

Current config is WD800JBs, Sony DRU500AXs, and a Pioneer 120S.

 

[ April 29, 2003, 07:07 AM: Message edited by: Greg Hess ]


Of course, if you're dealing with specific sub-aspects of CG, like compositing or video editing, or operations which require disk throughput, then an IDE RAID array or SCSI disk makes all the difference.
For my work, you do not want to get lumbered with a slow drive, no matter what kind of drive it is. That work is basically CAD modelling in software like AutoCAD and MicroStation. The OpenGL acceleration in both can be desperate without a fast drive subsystem; they haven't really been optimised for graphics boards at all, the idea being that most people using those packages aren't going to have more than entry-level acceleration. Personally, I recommend striped disk arrays of IDE or SCSI for the AutoCAD/MicroStation workstation, and a separate storage system for valuable files.

 

I think there is a very valid argument to be made nowadays for SCSI drives versus more memory, for certain software. But in the old days, memory was usually smaller, so SCSI was a great way to avoid problems with disk paging and thrashing, especially when you defragged the SCSI drive now and again. I also recommend a really bullet-proof SCSI drive in practices where the boss is just too stupid/mean to put in more than the standard 128MB of Dell memory that comes with the workstation. You wouldn't believe how many 128MB Dell workstations I have run into in my travels. Ouchhhh.

 

I remember reading an amusing story about a guy using RAM Doubler on his Mac. He said: "I had 8MB of memory and bought RAM Doubler, so now I have 16MB; I liked it so much I bought another RAM Doubler and installed it, so now I have 32MB of RAM!" Another common complaint I hear from people who buy Dell workstations: "I think memory upgrades should be put onto CD-ROM, so that upgrading does not require opening the chassis!" Yeah man, for real!

 

 

If you need SCSI, you can always add it later as an expansion card.
That is preferably how I would do it; I don't like the integrated SCSI on Dell boards.

 

(Previous config was Seagate 18/36 Gig 10,000 rpm Cheetahs on a U2W 80 MB/sec adapter card, with Plextor UltraSCSI burners and ROMs).
Ahh! That is the problem: U160 and U320 make a biggggggg difference over the U2W, even though the U2W was a really fine product in the days of early UDMA 66 noisy, unstable, slow IDE drives. Couple a U160 or U320 with a 66MHz/64-bit PCI slot, using a couple of reasonable SCSI drives in a software array (if you cannot stretch to a 3ware SCSI RAID card), and you would be in for a bit of a shock with certain applications.

 

All of my advice does come with one qualification though: you will only see the difference in software that has a really good OpenGL driver, with a good enough Quadro or Wildcat card to take advantage. MicroStation and AutoCAD have terrible graphics acceleration, so the drive subsystem is the one area where I can 'crank it up' a bit, as I am stuck with those packages for 90 percent of my work.

 

I would love to do a lot more with more suitable titles like Z, LightWave, VIZ... but most architectural firms I am with don't pony up for training for other members of the practice, so I have to use what other people will understand better: CAD software that you can model with.

 

The point of using SCSI for a workstation is still suspect though, since you are not using every aspect of its design features and reliability the way a long-term server investment does. There is no great requirement to hot-swap RAID 5 drives without powering down the system; that's the biggest cherry of using SCSI drives for servers, I think.

 

Another big minus with SCSI nowadays is that the top-end 73GB drives are sinfully expensive, before you even shell out for a decent controller card. Compare that to IDE WD JBs with a 3-year warranty and tonnes of space for digital photography/renders at high resolution.


Too much quoting to quote :) .

 

The biggest difference I notice with a primary SCSI system is just responsiveness. Going (this is 100% SCSI or 100% IDE) from 100% IDE to 100% SCSI on a single-processor system has a similar feel to going from a single-processor system to a dual-processor system. It just feels a bit more snappy, a bit faster, a bit... well, you know :) .

 

As for disk speed vs RAM speed/amount... it really depends what you're doing.

 

 

Let's say you're running a DC-DDR 512 PC3200 800 FSB Pentium IV, with a RAID 0 array of 15k 76 Gig 3rd-gen Cheetahs. Now that would be one hell of a fast system.

 

So you're rendering along, and suddenly your client asks for their render in print form. This of course requires an insanely huge render, like 6000x4800 (that's about 40x32 inches at 150 dpi). You hit render.
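The arithmetic behind a print-size render like that can be sketched quickly. This is a minimal sketch assuming a bare 8-bit-per-channel RGBA frame buffer; a real renderer allocates considerably more for scene data, textures, and antialiasing buffers:

```python
# Rough numbers for a print-resolution render.
# Assumes a bare 8-bit RGBA frame buffer (4 bytes/pixel);
# real renderers need far more memory than this minimum.

def print_render_size(width_in, height_in, dpi, bytes_per_pixel=4):
    """Pixel dimensions and raw frame-buffer size in MB for a print job."""
    w_px = width_in * dpi
    h_px = height_in * dpi
    mb = w_px * h_px * bytes_per_pixel / (1024 * 1024)
    return w_px, h_px, mb

w, h, mb = print_render_size(40, 32, 150)
print(f"{w}x{h} pixels, ~{mb:.0f} MB raw frame buffer")
# 6000x4800 pixels, ~110 MB raw frame buffer
```

Even the bare frame buffer is over 100 MB, so on a 512 MB machine the scene, textures, and render buffers together can easily spill into the pagefile.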

 

You watch your RAM spike and suddenly switch to VM... your performance goes to utter shit, pegging what would have been a few-hour render into a day-long disk-thrashing fest.

 

Why? Because your bandwidth is dropping from GIGS per second to MEGS per second. The maximum transfer rate (recorded via storagereview.com) for a 73 (or 76 gig, I forget) Rev3 X15-76LP Cheetah is around the 80 MB/sec mark burst, and drops to around 50 sustained.

 

http://www.storagereview.com/benchimages/ST373453LW_str.png

 

Slap those two in a U160/320 RAID 0, and you're looking at 160 MB/sec burst (theoretical) and 100 MB/sec sustained (theoretical).

 

160 MB/sec vs the transfer rate offered by PC3200 DC-DDR is substantially different.
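To put that gap in perspective, here is a back-of-the-envelope comparison using the figures quoted in this thread: roughly 50 MB/sec sustained per Cheetah, and a nominal 3.2 GB/s peak per PC3200 channel (both idealised numbers, not measurements):

```python
# Idealised bandwidth comparison: two-drive SCSI RAID 0 vs. dual-channel
# PC3200 memory. Figures are nominal peaks, not real-world measurements.

drive_sustained = 50                    # MB/s, one Cheetah, sustained
raid0_bandwidth = 2 * drive_sustained   # RAID 0 striping scales ~linearly

pc3200_channel = 3200                   # MB/s: 400 MT/s x 8 bytes wide
dual_channel = 2 * pc3200_channel

print(f"RAID 0 sustained: {raid0_bandwidth} MB/s")
print(f"DC PC3200 peak:   {dual_channel} MB/s")
print(f"Memory is roughly {dual_channel // raid0_bandwidth}x faster")
```

Even granting the disks their theoretical best case, main memory is more than an order of magnitude ahead, which is why paging hurts so badly.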

 

The SCSI drives may be fast, but in no way can they make up for raw RAM speed.

 

This is an example of a situation where disk throughput doesn't really have any effect. If you run out of RAM while rendering (known as paging), no matter what disk subsystem you have, you're screwed.

 

Though this is an old example, here's what happens with two systems when one runs out of RAM and one doesn't.

 

Single Athlon XP 1600+, 256 MB @ 266 MHz

 

12/22/74 47/125/723 67/98/263 1431

 

Single Athlon XP 1700+, 512 MB @ 133 MHz

 

11/20/66 41/108/577 41/64/179 1107 Rob L

 

There is only a 70-megahertz difference between these two systems. The numbers in bold are the 2048x1536 islands.max renders. Note the significant difference in render speed on the 1600+, due to RAM issues. If you check the 3dluvr benchmarks...

 

http://www.3dluvr.com/content/maxbench.php?min=30&sort=t_time

 

You can actually see how much running out of RAM can affect render times, regardless of disk subsystem.


My situation may be a bit different to yours or Ataide's, Greg. I am mostly a front-line CAD modeller and architectural designer; my skills are needed mostly to keep that end of the production line going, and I don't involve myself much in the rendering and outputting anymore. Basically because, if the firm has that done externally, it is much easier to bill the client with the receipt.

 

To be honest, Ataide did ask specifically about a good box to render on. I tend to separate my rendering systems from my workstation systems, Greg, and optimise each one for the task at hand. In that sense, putting SCSI drives in my render farm machines is a complete waste of resources. A bit like web servers, which cache most of the web pages so the client (you or me) reads them straight from memory, whereas the database servers used to host a web site are thrashing around on the disks quite a lot.

 

I think many CAD files contain millions of lines of data (especially in software based around reference files, like Allplan, TriForma and so forth) arranged very much like a database. Getting quick access to the particular line that specifies the exact vertex you are moving is improved by the disk itself.

 

Whereas rendering algorithms are a repetitive cycle of code that is better contained in the CPU cache (notice how much better the 512KB Northwood was for rendering compared with the 256KB Willamette) or close by in a fast-access memory bank. The Barton core from AMD only comes in single-processor systems but features a whopping 640KB of combined L1+L2 cache. The AMD chips also have a much bigger and faster L1 cache than the P4 Northwood, which explains why AMD chips can keep up with much higher-clocked P4s.

 

This guy Charlie Demerjian, who writes sometimes on The Inquirer, has this to say about building a system using SCSI:

 

http://www.aceshardware.com/forum?read=95034222

 

So you're rendering along, and suddenly your client asks for their render in print form. This of course requires an insanely huge render, like 6000x4800 (that's about 40x32 inches at 150 dpi). You hit render.
Mind you, I have the luxury of using my workstation only for modelling buildings; the rendering and animation are sent out. At home, I sometimes like to whip up a few little VIZ animations, not at a particularly high resolution, but there as well I find the disk subsystem can play a big part. But you have repeatedly mentioned print resolution in our discussions on systems; that is something not to be taken lightly at all by any serious CG professional. I think I have only ever done one high-res print output myself in the last year or so. Lucky me.

 

What I also found: in Photoshop I pieced together four quarters of an image rendered separately in VIZ to save memory, compositing them together in Photoshop afterwards. Without enough memory/processing power, working with high-res print-quality images in Photoshop was a nightmare. I was hosed completely. I even collapsed the image into 1-channel B+W format to work with the 4000-pixel image. Otherwise, I could not have handled the image in Photoshop at all as a 4-channel RGB image.
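The render-in-quarters trick described above can be sketched as follows. This is a minimal illustration: the tiles here are tiny in-memory 2D lists with made-up marker values, standing in for the four quadrant files actually saved out of VIZ:

```python
# Sketch: reassemble one large image from four separately rendered
# quadrants. Tiles are plain 2D lists here; in practice each quadrant
# would be loaded from the file the renderer saved it to.

def assemble(quadrants, tile_w, tile_h):
    """quadrants: {(col, row): 2D pixel list}, with col and row in {0, 1}."""
    full = [[None] * (tile_w * 2) for _ in range(tile_h * 2)]
    for (col, row), tile in quadrants.items():
        for y in range(tile_h):
            for x in range(tile_w):
                full[row * tile_h + y][col * tile_w + x] = tile[y][x]
    return full

# Stand-in 2x2 quadrants, each filled with a distinct marker value:
quads = {(c, r): [[10 * c + r] * 2 for _ in range(2)]
         for c in (0, 1) for r in (0, 1)}
image = assemble(quads, 2, 2)
print(len(image[0]), len(image))   # the full image is twice each tile dimension
```

The point of the trick is that each quadrant render only needs a quarter of the frame-buffer memory at a time; only the final assembly step touches the full-size image.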

 

160 MB/sec vs the transfer rate offered by PC3200 DC-DDR is substantially different. The SCSI drives may be fast, but in no way can they make up for raw RAM speed.
I think the Opteron system running 64-bit Linux must be a big hit here in the future. What a 64-bit operating system may alleviate: when a file is no longer used by the CPU but the RAM is nowhere near full, Windows will default to moving the data from memory to the disk paging file anyhow. This is stupid and a limitation of 32-bit Windows; hopefully Longhorn or a 64-bit Linux workstation might help here, coupled with 4GB+ of memory.

 

Altogether, what I am trying to get across is that I can spec or build a system to perform stably and responsively for front-end CG modelling usage. But from the point of view of print-resolution compositing and image work, I would not have a clue any more than the next guy. I simply haven't enough experience in that department.

 

But do bear in mind the models I am dealing with can be 10MB per reference file; put all together, I would probably have to deal with a CAD model of maybe 100MB.

 

For some reason, with a good fast disk array I can handle files 10-20MB in size, with reference files attached over a Cat 5 network. But when it comes to handling Photoshop RGB files of 10MB or greater, or rendering at print res, I really do need a faster memory/processor system than I have now.

 

[ April 29, 2003, 01:46 PM: Message edited by: garethace ]


Hello

 

Hmm... Interesting stuff...

 

Basically, I would recommend SCSI systems only for NLE systems and server systems.

 

I guess no other application needs such heavy and time-consuming disk access.

 

While a CAD designer uses HDs for maybe 1% of his work session, server (or NLE) based systems need disk access for over ~90% of theirs.

 

If you want faster drives, I would suggest going with an IDE RAID 0 system.

Why? It's easier (easier = cheaper) to replace IDE drives than SCSI drives.

 

And don't forget: for faster drives you will need faster motherboards/systems too (i.e. 64-bit PCI, separate/more than one PCI I/O bus).

 

So the numbers coming from Mr. Gunnman are a little bit confusing to me (if you know what I mean ;) ).

 

Richie


Greg, I think that making good replies to questions about hardware takes a lot of careful consideration, the ability to express oneself, and a keen ability to analyse the problem in all its aspects. The expert posters at other forums can make that look very simple indeed. Then I realised that discussion is very much a skill in itself, regardless of the knowledge aspect. Posters can be investors in tech companies, programmers, administrators, resellers, marketing types, enterprise solution providers... so many different points of view in anything related to the IT scene.

 

It is very much like a whole education about the world rolled up neatly into one convenient package. I never grew up as quickly as when I began to provide technical support for the various businesses I have been with. I realised there are more people skills involved than just pressing the raytrace button all the time.

 

And don't forget: for faster drives you will need faster motherboards/systems too (i.e. 64-bit PCI, separate/more than one PCI I/O bus).
I remember reading through manufacturers' specs for all the components of my last major build. Having sourced all the right components, then trying to order them... I did business with at least half a dozen people to get the right parts.

 

I remember PC800 ECC Rambus memory was hugely difficult to find at the time. Sourcing slot-type 1GHz Pentium III parts was a nightmare; even though the vendor had assured me he had them, the final build was fully realised about two months after it should have been. Dual Pentium 4 solutions were non-existent at the time, and I had very little experience with the AMD platform. Even when I finally got the CPUs, I had to try 10 places before I found suitable SECC2 fans.

 

It was like building a small house, but I was determined to do everything right for once. For my next workstation, I think I might opt for the lazy approach rather than go through it all over again.

 

Ordering all those expensive parts again, sourcing ECC memory, workstation boards, proper power supplies, choosing the correct kind of case, running my cables, sorting out ventilation... before even getting to the trouble of flashing BIOSes on SCSI cards, mobos, graphics boards, even the hard drives themselves.

 

Halfway through the build I would change my plan too, and opt for a different component; this would have knock-on effects for various software... but in the end I did get something I am happy with, and I have not changed the system since.


A poster at Aces asked about this setup:

 

http://www.aceshardware.com/forum?read=95034222

 

over on StorageReview, and this was what SR had to say on the matter:

 

http://forums.storagereview.net/index.php?act=ST&f=2&t=9667&s=4f8c7893726285c06c788fc7aa337f83

 

The applications that poster is using are Matlab, Maple and Houdini. So I imagine some pretty serious rendering and visualisation going on, then.


  • 2 weeks later...
Originally posted by garethace:

quote:
If you need SCSI, you can always add it later as an expansion card.
That is preferably how I would do it; I don't like the integrated SCSI on Dell boards.
The previous dual-Xeon Dell workstation (the i860-based Precision 530) sports an integrated Adaptec U160 SCSI adapter, IIRC. Granted, some of Adaptec's previous RAID controllers had poor RAID 0 performance (the rather new *9320 line uses HostRAID and reportedly performs a great deal better). However, Adaptec's non-RAID controllers appear to have a good rep, and since the Precision 530 comes with a non-RAID controller, it should theoretically not be crap.

 

Furthermore, the Precision 650 (E7505) has an integrated single-channel, RAID 0 capable U320 SCSI controller from LSI Logic. That company seems to be rated among the top-tier RAID controller manufacturers. I believe that some people in the know (at StorageReview) think that only Mylex (which is owned by LSI) makes better RAID controllers than LSI.

 

Of course, none of this guarantees that the particular controllers used in those workstations are any good, but they might be :) . Independent PCI SCSI cards offer many advantages over built-in ones, but I guess it wouldn't hurt to have an integrated controller as long as it's not a piece of junk. Hmm, I lack a conclusion, so I'll shut up now.


Independent PCI SCSI cards offer many advantages over built-in ones, but I guess it wouldn't hurt to have an integrated controller as long as it's not a piece of junk. Hmm, I lack a conclusion, so I'll shut up now.
Watch out for anything integrated, because the BIOS is always upgradable on the retail-box version of a SCSI or RAID controller. OEM add-in controllers and integrated ones come with hardly any support at all. The biggest problem of all is that Dell really writes its own SCSI controller BIOS for the integrated parts, so you will not be supported with a very good BIOS from LSI or even Adaptec. That's where the crunch happens.

 

Similarly with graphics boards: a good retail-box version from an Elsa (a lot of these are gone, unfortunately) or somebody can ultimately be a much better deal than the OEM Dell-packaged one. I have flashed BIOSes on graphics boards too; sometimes I needed a BIOS upgrade on the GPU, mobo and SCSI controller to clean out problems properly. Only with a separate BIOS ROM chip on each component is this operation normally as smooth as it should be.

 

Worst of all, with integrated Adaptec or LSI parts, you simply do not qualify for the 'goodies' later on: the performance-tweaked BIOS with the bugs all ironed out. A small, subtle point, but WOW, what a decisive one it has always been in my own personal experience.

 

To get the max out of any storage system like the U320 kit, you do need dual PCI buses, with an option for 64-bit operation. Otherwise, it is only wheel-spinning. Systems like that are hard to find, and finding a basic mobo with a rock-solid, performance-tweaked BIOS to support a top-end controller is even harder again. I would envy users of top-end Supermicro Xeon boards and cases in this department, actually. ServerWorks has had a lot more success at the top end here than Intel, but Intel makes a good generic product too; I just don't fancy wasting too much money on Adaptec or LSI full-blown RAID kit for an Intel chipset, though.


Thank you for the good info garethace!

 

Originally posted by garethace:

Similarly with graphics boards: a good retail-box version from an Elsa (a lot of these are gone, unfortunately) or somebody can ultimately be a much better deal than the OEM Dell-packaged one. I have flashed BIOSes on graphics boards too; sometimes I needed a BIOS upgrade on the GPU, mobo and SCSI controller to clean out problems properly. Only with a separate BIOS ROM chip on each component is this operation normally as smooth as it should be.

Would you recommend against getting a Quadro board bundled with a Dell workstation, even if it was cheaper that way?

 

Would you take a Supermicro E7505 board over Intel's one? Which would be the deciding factors? Tweakability, stability, design, features, support etc.?

 

To get the max out of any storage system like the U320 kit, you do need dual PCI buses, with an option for 64-bit operation.
Not that it matters, but the Precision 650 has separate PCI-X buses running 64-bit at 66/100 MHz. Not as good as the 133 MHz slots featured on some boards, though. I probably don't need to tell you the specs of Dell machines, since you've worked there - sorry!

 

I just don't fancy wasting too much money on full-blown Adaptec or LSI RAID kit for an Intel chipset though.
Do you refer to full-blown integrated raid kits?

Basically, you have to start with the component parts - the graphics board, the RAID or SCSI board, and the memory - then work back from there to make the kind of system you require. Greg has a hell of a lot of experience with the graphics-board and Xeon-building end of things. I always worked the disk-subsystem performance more, because in geometry-intensive CAD software that counts big.

 

One thing that impressed me most about Intel systems was the Application Accelerator, chipset-driver and storage-driver service packs for Windows - Intel has got much better at writing the necessary service packs for its boards now. I would be happy enough with the RAID expansion card that Dell do, with an Intel chipset of course - but I would still fancy my chances more with a proper 64-bit PCI expansion card. However, when going with 64-bit PCI hardware - RAID cards, DPS compositing boards, ray-tracing boards like the RenderDrive solutions - you would have to go the Tyan or Supermicro route to find boards, chipsets, systems, PSU solutions and chassis suitable to take such a load.

 

I don't think buying an Asus or MSI board is going to provide you with the support - BIOS-wise, revision-wise, and every other wise. I know prices of integrated peripherals and mobos have come down a lot - but the number of faulty boards has gone up. Boards from 2-3 years back (say, the BX and K6 days) really didn't have nearly as many problems with capacitors, voltage, power, AGP, faulty disk-drive controllers and so on.

 

I like the Supermicro boards most of all for their reliability, and would like to have a Supermicro board, case, and power supply together - that way you are fully supported and really off to a great start in building a system. You may need to invest in registered ECC memory though.

 

Would you take a Supermicro E7505 board over Intel's one? Which would be the deciding factors? Tweakability, stability, design, features, support etc.?
http://www.supermicro.com/PRODUCT/MotherBoards/E7505/X5DA8.htm

 

Manual

BIOS

Support

 

You need it ALL, really, when building a great system, I think.

 

You will find that a lot of CG hardware vendors base their systems along the general lines of Supermicro parts for the base, then just customise up from there as the buyer demands. You are building from a really solid base, and coupled with a retail-box Quadro, a 3ware RAID card, and a couple of guaranteed SCSI drives, you cannot go too far off track.

 

[ May 14, 2003, 11:04 AM: Message edited by: garethace ]


Thanks for your thorough reply garethace!

 

Originally posted by garethace:

One thing that impressed me most about Intel systems was the Application Accelerator...

Some people find the IAA to cause trouble, for example when dealing with DMA and optical devices etc. Do you find it flawless?

 

Originally posted by garethace:

I would be happy enough with the RAID expansion card that Dell do...

The PERCs?

 

Originally posted by garethace:

However, when going with ... you would have to go the Tyan, or supermicro route to find boards/chipsets/systems/psu solutions etc,... chassis suitable to take such a load.

Do you consider Tyan and SM to make tougher equipment than Dell and/or Intel?

 

Originally posted by garethace:

...would like to have a supermicro board and supermicro case and power supply...

Do you consider SM's PSUs to be of better quality than those from PC P&C?

 

Originally posted by garethace:

You may need to invest in registered ECC memory though.

How significant is the difference in stability between unbuffered and registered memory?

 

Originally posted by garethace:

...coupled with a retail box for a Quadro...

I ask again: would you not consider getting a Quadro bundled with a Dell box, even though the card would come out cheaper that way?

The posters here at this discussion may have a question or two for you, Martin. Just said I would point it out to you.

 

http://www.cgarchitect.com/ubb/ultimatebb.php?ubb=get_topic;f=1;t=000406;p=1#000003

 

Some people find the IAA to cause trouble, for example when dealing with DMA and optical devices etc. Do you find it flawless?

 

Yes, I have found it working sweetly, even with software RAID :)

 

The PERCs?

 

Those are the ones - the proprietary stand-up slot cards.

 

Do you consider Tyan and SM to make tougher equipment than Dell and/or Intel?

 

Dell are about pushing systems out the door as quickly as possible, and they are very good at that. Intel's job is really just to keep Dell fed with fool-proof chipsets and boards.

 

Dell and Intel do not really provide the level of basic motherboard component-level support available from either Tyan or SM, in my opinion - after waiting an hour on the phone or on email to Dell tech support, try getting anything other than the usual "insert your Dell system CD...". With Tyan and Supermicro, at least you can ask them specifically about the board, troubleshoot, run diagnostics...

 

Do you consider SM's PSUs to be of better quality than those from PC P&C?

 

Always be suspicious of the performance of your PSU, no matter where it comes from. When you are talking about Xeons, Athlons, Pentium 4s, multiple drives, graphics boards with big fans, and chassis with big fans, you really do need a certified PSU. Remember, even though SM advises a 400 W PSU for the board I linked, no PSU has 100% efficiency - a good part of the power drawn from the wall is lost as heat, which is why ventilation of the PSU and great build quality matter. If the system cannot get the power it needs at any moment, your software, OS, everything just caves in.
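To put rough numbers on that - the per-component wattages below are my own illustrative guesses for a dual-Xeon tower of this era, not measured figures, and the 70% efficiency is a typical value for PSUs of the period:

```python
# Rough PSU sizing sketch. All component draws (watts) are illustrative.
draws = {
    "CPU 1": 75,
    "CPU 2": 75,
    "motherboard + RAM": 50,
    "Quadro graphics": 55,
    "SCSI drives (x2)": 30,
    "optical drives + fans": 35,
}
load = sum(draws.values())          # DC watts the components actually draw
rating_margin = 1.3                 # headroom for drive spin-up peaks and aging
print(f"DC load: {load} W -> pick a PSU rated at least {load * rating_margin:.0f} W")

efficiency = 0.70                   # typical early-2000s PSU at this load
wall_draw = load / efficiency       # AC power pulled from the outlet
heat = wall_draw - load             # difference is dissipated inside the PSU
print(f"at the wall: {wall_draw:.0f} W, of which {heat:.0f} W becomes heat")
```

Note the distinction: the PSU's rated wattage is its DC output, so efficiency doesn't reduce what it can deliver - it determines how much extra power comes out of the wall and turns into heat, which is exactly why PSU ventilation matters.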

 

How significant is the difference in stability between unbuffered and registered memory?

 

Cooling of the memory is also important for stability - notice how Supermicro boards and chassis increase the airflow over the memory banks.

 

Depends on how long you are willing to wait between Windows crashes, doesn't it? Some guys here at the forum are doing great big scheduled rendering jobs whilst still using their machine as a workstation. On the other hand, if you don't mind restarting twice or three times a day...

 

I ask again: would you not consider getting a Quadro bundled with a Dell box, even though the card would come out cheaper that way?

 

Depends on the BIOS revision and PCB revision of the Dell card, really, doesn't it? Of course you are better off with a retail-box card - cooling and stability are the key things now with the latest nVidia cards - go for a retail one with the best cooling solution, stability, and performance in benchmarks - see web reviews of the GeForce FX for the variation in quality between manufacturers.

 

Do not make the mistake of thinking Dell is best because of cheap Quadros - analyse the rest of the system first to find the real problems: power supply, board quality, support and driver availability, chassis design. Graphics boards are really no longer well suited to AGP slots - just as the slot Pentium chips were replaced by flip-chip designs for better cooling, the same could eventually happen to graphics acceleration - the cards aren't getting enough air at the moment.

 

With nVidia I would not be too concerned, though, because of the unified driver nVidia uses. To be honest, with a graphics board it really comes down to the driver you install, not the hardware. I can install three different drivers for my Quadro - each one nVidia-certified for different programs - the only problem is that when I install a Bentley-certified driver, performance suffers in some other application.

Dell don't sell SCSI drives any cheaper, and for my own CAD applications a fast drive is as important as a fast GPU.


Your opinions are appreciated garethace.

 

Originally posted by garethace:

Depends on how long you are willing to wait between Windows crashes, doesn't it? Some guys here at the forum are doing great big scheduled rendering jobs whilst still using their machine as a workstation. On the other hand, if you don't mind restarting twice or three times a day...

Are you saying that there's a night-and-day difference in stability between unbuffered and registered DIMMs?

 

Originally posted by garethace:

Of course you are better off with a retail-box card - cooling and stability are the key things now with the latest nVidia cards - go for a retail one with the best cooling solution, stability, and performance in benchmarks - see web reviews of the GeForce FX for the variation in quality between manufacturers.

According to a Dell rep, all Quadro boards are manufactured entirely by NVIDIA and then delivered to OEMs and distributors, so there should be no difference in quality.

 

Originally posted by garethace:

Do not make the mistake of thinking Dell is best because of cheap Quadros - analyse the rest of the system first to find the real problems: power supply, board quality, support and driver availability, chassis design.

Cheap Quadros have nothing to do with choosing Dell. A quiet E7505 system is what's desired, and no commercially available motherboard that I know of (including SM's) has thermally regulated _CPU_ fans. Since the CPU fans are reportedly the loudest parts of a typical E7505 system, this point is crucial. Dell, IBM, Fujitsu-Siemens and HP all have temperature-controlled CPU fans, and Dell is the least expensive and most customizable of the four.

 

Originally posted by garethace:

Graphics boards are really no longer well suited to AGP slots - just as the slot Pentium chips were replaced by flip-chip designs for better cooling, the same could eventually happen to graphics acceleration - the cards aren't getting enough air at the moment.

Do you mean after video cards have transferred to PCI-express?

 

Originally posted by garethace:

Dell don't sell SCSI drives any cheaper, and for my own CAD applications a fast drive is as important as a fast GPU.

This is why I intend to get the cheapest possible IDE drive from Dell and not use it. A 36 GB Cheetah 15k.3 is planned for the OS/pagefile/apps, and a 7200.7 SATA drive for storage (either 1*160 GB or 2*80 GB).

Are you saying that there's a night-and-day difference in stability between unbuffered and registered DIMMs?

 

I have been using ECC Rambus for some time now, and yeah, I find it really hard to crash the system I am using - even with multiple applications running, rendering jobs, Photoshop... My only problems at the moment are the need for faster CPUs and much, much more memory - but definitely, the stability of even a slower solution was worth the considerable extra money I paid for guaranteed ECC memory. I would definitely go that route again, whether buying or building. It is impossible to benchmark this, but while using Autodesk software especially I always complained a lot about stability, and with ECC RIMMs/DIMMs the difference was hugely apparent - I found I could open a couple of DWG drawings without the dreaded memory crash dumps of before. Even the ECC Rambus system with a cheaper IDE drive was very, very stable indeed - something I had not known with Autodesk software until then.
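For what it's worth, the mechanism behind that stability is single-bit error correction in hardware. Here is a toy Python sketch of the idea, using a Hamming(7,4) code - real ECC DIMMs use wider SECDED codes over 64-bit words, but the principle of locating and flipping back a corrupted bit is the same:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Return (corrected codeword, 1-based error position or 0 if clean)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s4      # syndrome points at the flipped bit
    if pos:
        c[pos - 1] ^= 1             # flip the bad bit back
    return c, pos

word = hamming74_encode([1, 0, 1, 1])
bad = list(word)
bad[4] ^= 1                         # simulate one flipped memory bit
fixed, pos = hamming74_correct(bad)
print(fixed == word, pos)           # the flip is found and undone
```

Non-ECC memory has no syndrome to consult, so the same flipped bit silently corrupts whatever data structure it landed in - which is exactly the kind of random crash-dump behaviour described above.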

 

According to a Dell rep all Quadro boards are manufactured entirely by NVIDIA and then delivered to OEMs and distributors. So there should be no difference in quality.

 

Makes it much more important to design the chassis ventilation and power supply well, then, doesn't it? Those new nVidia cards are getting dangerously power-hungry - in fact, I would nearly prefer a cooler-running ATI card at the moment. ExtremeTech have looked at the new NV35 here:

 

http://www.extremetech.com/article2/0,3973,1086857,00.asp

 

A quiet E7505 system is desired and no commercially available motherboard that I know of (including SM) have thermally regulated _CPU_ fans.

 

So what happens when the motherboard service-pack driver that controls the board's power-saving measures starts to act up with your new Windows XP service pack, or DX driver, or IDE service pack? What then? Call Dell today? LOL.

 

If Supermicro or Tyan have not implemented this feature, i.e. deliberately left it out, then perhaps they are the wiser ones. Be prepared for it: in today's world, a quiet system is going to become a rarity. If you go for a top-brand SCSI drive, you will cut out a lot of disk noise - the biggest problem, believe me. A solid chassis will also sort out most CPU-fan noise. The Enermax manually variable power supply units have a switch to regulate the fan, making the system quiet while doing less intensive work.

 

Do you mean after video cards have transferred to PCI-express?

 

No - just that the whole ATX standard for building desktops will not last much longer, for two reasons: air cooling of high-MHz chips, and the instability of the other components being driven faster - the memory, GPU, the PCI bus itself, the northbridge (a particularly problematic one), etc.

 

Drives are becoming very scary in terms of cooling requirements - I do like a system with a blowhole on top. Stands to reason, as heat rises - no-brainer.

 

This is why I intend to get the cheapest possible IDE drive from Dell and not use it. A 36 GB 15k.3 is planned for the OS/pf/apps and 7200.7 SATA for storage (either 1*160 GB or 2*80 GB)

 

Most intelligent move you could ever make - even when you buy a simple desktop or laptop from Dell, always replace the drive with a decent one (WD, Maxtor, Seagate) yourself. You would not believe the amount of hassle that will save you, the software stability, and the general peace of mind. Dell's standard-option drives and memory amounts are shite.


Originally posted by garethace:

I have been using ECC Rambus for some time now, and yeah, I find it really hard to crash the system I am using...

Ok, so you've got ECC RAM, but are the DIMMs registered or unbuffered?

 

Originally posted by garethace:

So what happens when the motherboard service-pack driver that controls the board's power-saving measures starts to act up with your new Windows XP service pack, or DX driver, or IDE service pack? What then? Call Dell today? LOL.

The temp regulation should be independent of the OS (i.e. implemented on a lower level).

 

Originally posted by garethace:

If you go for a top-brand SCSI drive, you will cut out a lot of disk noise - the biggest problem, believe me.

That's why the Seagate Cheetah 15k.3 (15krpm U320 SCSI) is the preferred drive. It's reportedly less annoying than the Raptor, which in turn is less annoying than WD's current 7200 rpm drives.

 

Originally posted by garethace:

Dell's standard-option drives and memory amounts are shite.

Word :ebiggrin:

 

Plan on doing the same with the memory: get the least amount and don't use it, then get 4*512 MB elsewhere.

