
Computer buying help thread for October



I've got to say I agree with Andrew about there not being any real examples of iray's production capabilities, at least not on the mental images iray site. Based on this FAQ PDF, iray might not be any faster than mental ray and is probably slower in many cases because of the brute-force method it uses, which I agree is probably much like Maxwell. When I first saw it I thought it was another real-time render engine, but they make a point of saying it's specifically a final production tool. I also find it annoying that you can't just see what this service costs; instead you have to e-mail someone to find out. That screams expensive to me, and it's not a good way to launch a new product.

 

http://www.mentalimages.com/fileadmin/user_upload/PDF/iray%20FAQ%20%2812%20March%202009%29.pdf



I very much agree with you, AJ; you put my thoughts into better technical language. The GPU stuff so far sounds like a big hype pushed by the GPU lovers. There is no evidence on the net that proves it is adaptable to production work; it's just another viewport renderer, and for it they want us to invest thousands in technology that isn't even bug-free yet. Adaptive GI is the only solution right now for any production-quality rendering, and that is what has been proven on CPU renderers to this day. GPU is hype.

 

Until Nvidia comes up with a gaming GPU card with, say, a minimum of 16 GB or more of memory, and until we can use something like Octane (still in primitive stages), there's no point listening to these GPU lovers. GPU solutions still have a long way to go. Or let's write to Nvidia and ATI to make a new PC with a GPU instead of a CPU. :D

Edited by rats

I think you are taking what I am saying in my posts the wrong way. I am supporting cloud computing resources, which can also be CPU-based. I am sure some of you will say that cloud computing is hype as well, even though it is now being used by a lot of people who know that it is not. GPU rendering scales very well in the cloud, and until some of you try it once it is available, I would not just go off and completely attack it as if it were threatening your way of life.

 

CPU/GPU rendering in the cloud is becoming much more affordable, and very soon GPU rendering will be available as a REMOTE service. Everyone keeps talking about buying thousands of dollars' worth of GPUs, as if you are going to have 30 GPUs buzzing away next to your desk. I don't think you realize the power that is required and the heat they give off. This type of equipment works much better in clusters inside a datacenter with proper cooling.

 

Yes, some of this technology is still in development, but at the same time some are already using it. Just to prove a point: the largest motion graphics studios are ALREADY using this technology and it is giving them huge results. They are able to process scenes for fast-action movies in a fraction of the time by using GPUs and special storage that runs as fast as RAM to extend the video RAM. Can anyone argue that all of the technology that visualization artists in the architecture field use comes from the motion graphics and gaming industries? They usually use the latest technology first, and then it makes its way to the AEC industry.

 

You are not going to get artistic-looking renderings from a GPU as if it were a magical device. Post-render production is still done just as it is now with CPU-based rendering, with a lot of work done after the fact in Photoshop to add a beautiful illustrator's touch.


Ehh, how can you get "40GB models rendered almost instantly using 4 Tesla cards" but then have those beginner-grade-looking renders from the GPU Cloud showcased on your site take 3 minutes per frame to render? *scratches head*


FJ, yes, it was rendered on 4 Tesla Fermi GPUs very fast. The renderings on our website that you refer to are from Revit; inside Revit this quality was not possible, and at high settings they would take hours.

 

When I say almost instantly, I mean it renders the model to very high quality almost instantly once you stop moving the model. Then it takes just a few minutes to render to a much higher quality. We state this on the website, and we will show some recorded video demos soon.

 

I hope that helps answer your questions.


My question's still not answered. Of course, "cloud computing" for rendering is a workable solution, because it's just another name for render farming, which we all know is useful in many applications. What I want is any real evidence that iray, which I'm going to specifically pick on here because they asked for it with that ridiculous video from the nVidia event, has any substantive merit at all that can sell it to a user who is already good at Vray, mental ray or anything else of that nature. So far that case has not been made.

 

Until the question of whether GPU rendering has merit in a high-quality production environment is answered, the question of whether the GPUs belong in the office or in the datacenter is moot. Why would a user who is used to high-quality rendering pay an outside service to run jobs that could be done faster and/or better on his own PC?

 

Don't you guys see the problem? If I contend, for the sake of argument, that all iray renders are either slow, or easy, or shit, or some combination of those, and I want somebody to disprove this, all it takes is a link to some final-product-worthy samples that are good, fast and difficult. But I'm seeing nothing that fits that description. Can anybody post links to some samples?


NVIDIA is sending me a Quadro 2000 (which they are marketing towards the arch market), so I can run some tests between mental ray, iray and iray on the cloud. What I will need are some good scenes to render; they need to render in mental ray. If anyone can provide a number of these, we can run some good tests. I'll get Ry and Renderstream involved too so we can run these tests across several levels and types of CUDA-enabled cards and on the cloud. I personally don't know what the results will be, but I'm keen to find out.


Hey,

 

I think calling iRay "shit" several times is a bit out of line, especially for a moderator of the forum.

 

I am not promoting an outside service to do anything that the artist can't do. The service is simply processing data for them. Some of the most respected rendering studios have used cloud services on the CPU for years; this is really no different. I do not understand how you can possibly think that rendering on the CPU is faster than on the GPU. Let's look at the statistics, then, if we can't find rendering samples right away: a GPU can render 1000x faster than a CPU.

 

Really, I am not trying to promote my company in the forums; I am trying to help people understand more about GPU technology. Jeff told me himself that people on the forums think you need to go out and buy 30 GPUs.

 

You keep coming back to the idea that I am saying the user should go out and buy a bunch of GPUs. iRay is already inside Max; if you have the latest version then it is free, and I seriously doubt that any hardcore rendering artist does not have a CUDA-enabled GPU, so for the most part they can already use iRay for themselves for free. It is so new that maybe you are just not seeing the renderings to compare against mental ray renderings. MR has been out for over 20 years; there has been a lot of time for people to make a lot of nice work with mental ray. Give the users a chance to come up with some good work. Would Picasso have created a new masterpiece right after the paintbrush was reinvented?

 

We are all on the same side, and you are arguing about two products made by the same company. I am sure there are examples out there of what you want to see.

 

FYI: RevUp Render is not only about the cloud. We are working on a way to create an appliance with special proprietary technology that will allow us to place this inside studios, in your dedicated datacenter. It is not all about the public cloud but also the private cloud. Call it a render farm or whatever you want, but the principle is what we have been using for years; we are just working on ways to make it more efficient. We have servers that can be placed inside your office and fit 10 high-end graphical workstations on them. You can easily see the benefits of this: each user has access to HPC-type hardware, which you will never get inside a workstation. Then our concept is simple: when you want to render, you send the model out to the GPU cluster, and that could be inside your office as well.


FJ, Revit models are never 40 GB :-). The biggest I have seen is 1 GB or so, and I think Revit starts to get unstable with files much larger than that. The 40 GB model was a demo that mental images had at the Nvidia GTC. It was an amazing demo showing a 40 GB scientific-type 3D model being rendered very fast on 4 Tesla 2050 GPUs. It was rendered with iRay, though.


Ry,

 

I do want to see the results, and I'm not calling iray or your products shit. I'm using that word (and the bluntness of my language reflects my frustration with the current hype-to-product ratio) to differentiate between renders that users can give to clients and renders that they can't. Speed without quality is already easy to achieve and is useless, so only examples of speed with quality are useful.

 

If the people here think you need to buy a lot of GPUs (I don't think I've seen anybody seriously suggesting more than 3 in a workstation, but maybe I missed somebody) that's because that's what's needed to run Vray RT-GPU at high speed. I don't think anybody meant to rule out the possibility of using farm services with GPU capacity.

 

But I'm getting more than a bit fed up with lofty and unproven (and too often incorrect) statements like "a GPU can render 1000x faster than a CPU" or "from what I have seen the quality is much higher than what you can achieve with the older rendering engines", and with talk of what one will or might someday be able to do with GPUs or farmed solutions being used to support (too often bad) advice on what users ought to buy. This isn't productive, and it sounds like your company is making some serious investments in it, so I'm hoping your internal results back all this up.


Well then, if it's just yet another render farm service, I don't see how it matters whether it's a CPU or GPU cloud (except maybe for the mentioned "design collaborative review" purposes), since most available services of the sort already have THz worth of CPU power at their disposal.

So in that regard it all comes down to $/GHz-hr.
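
Just to put the $/GHz-hr point in concrete terms, here is a minimal back-of-envelope sketch in Python; the rate, core count and frame times below are made-up placeholders for illustration, not quotes from any actual service:

```python
# Rough render-cost estimate for a farm billed per GHz-hour.
# All numbers are hypothetical placeholders for illustration only.

def cloud_cost(cores, ghz_per_core, wall_hours, rate_per_ghz_hr):
    """Cost of keeping `cores` cores at `ghz_per_core` GHz busy for `wall_hours`."""
    return cores * ghz_per_core * wall_hours * rate_per_ghz_hr

# Example: a 100-frame animation at 20 CPU-core-minutes per frame,
# spread over 32 cores at 2.66 GHz, at a made-up $0.10 per GHz-hr.
core_hours = 100 * 20 / 60      # ~33.3 core-hours of total work
wall_hours = core_hours / 32    # ~1.04 hours of wall-clock time on 32 cores
print(f"${cloud_cost(32, 2.66, wall_hours, 0.10):.2f}")  # roughly $8.87
```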

Also, I don't see many people here switching their pretty much established workflows to some other rendering system compatible with this GPU cloud only to benefit from what seems to be some sort of enhanced viewport navigation, which probably won't even handle things like displacement, instancing, SSS, DoF, volume lights, fog or particle systems.

I must admit I've only checked out iray superficially, but if it really were such a great breakthrough I'm sure we'd be hearing about it by now.


I have zero experience with GPU rendering as of yet, so I am borrowing one of the clay-model examples I have run across on another board. I am also borrowing that person's clay-model rendering, since I currently don't have access to iRay.

 

The first rendering in the list took 0:46.

The second rendering in the list took 1:32.

 

The CPU version was rendered on a dual quad-core Xeon at 2.66 GHz with hyperthreading, 12 GB of RAM, and a Quadro 1800.

The GPU version was rendered on a Tesla 2050 + GTX 470, Core i7 960, 12 GB of RAM.

 

Now, the CPU version used a biased engine, and the GPU version of course used an unbiased engine.

 

I think it is important to follow what is going on with iRay and other GPU engines, as they will more than likely be the way of the future. But for my money, for day-to-day production, CPU rendering still appears to be both more affordable and faster.

 

I would like to see more complex tests, including roughly 5,000-pixel-wide production renders, but right now this is as far as I have made it. The model I am testing is far too simple to accurately represent a real-world situation.

 

If possible, I would like to see the test model that everyone is going to use for GPU rendering be put through its paces on CPU rendering as well. Yes, it is not a direct comparison, but this thread talks a lot about speed vs. quality. On the technical side, arch viz is a lot about speed vs. quality regardless of which engine you are rendering with.

 

Oh, and as a side note: if I turn down the samples on the light to what I would use for production, my render time for the clay render drops to 22 seconds. All of my CPU times include calculation of the lighting solution.
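
For what it's worth, here is the arithmetic on those clay-render times as a small Python sketch; I'm assuming the first time listed (0:46) is the CPU render and the second (1:32) is the GPU render, which is how I read the ordering above:

```python
# Speed ratios for the clay-model test, assuming the first listed time (0:46)
# is the CPU (biased) render and the second (1:32) is the GPU (unbiased) render.
cpu_default = 46       # seconds, dual quad Xeon 2.66 GHz, original light samples
cpu_production = 22    # seconds, light samples turned down to production settings
gpu = 92               # seconds (1:32), Tesla 2050 + GTX 470

print(f"GPU vs CPU (default samples):    {gpu / cpu_default:.1f}x slower")     # ~2.0x
print(f"GPU vs CPU (production samples): {gpu / cpu_production:.1f}x slower")  # ~4.2x
```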

 

I did start to play with full-material renders, and the speed difference was not quite so dramatic, but I didn't play with them for a great deal of time.

 

The scene I was testing came from this thread….

http://www.vizdepot.com/forums/showthread.php?t=10291

 

EDIT: The third image is the CPU version with curves tweaked slightly in post to more closely match the GPU version.

Edited by Crazy Homeless Guy

Mr. RevUp, rather than arguing for your cloud nine (GPUs), why don't you just render a nice-looking typical residential or commercial building and post the comparative time results? Why argue with nothing to back it up? And Jeff, as you stated, we definitely and desperately need an unbiased comparison of the same (or similar) scenes on GPU and CPU engines.

 

The rest seems to me just an ego argument from a (virtual) technology representative.

 

And "GPU is 1000x faster" is a fool's statement.

 

Also, please all refer to the Octane website (I think it's www.refractivesoftware.com); at least there we can see some decent renderings of practically acceptable quality. They have the same memory limitation in their software as iray does, yet iray doesn't show anything even close to what's shown on Octane's site.

 

Mr. RevUp, please tell your Nvidia guys to make a new system where the CPU is replaced by their CUDA GPU and the motherboard chipset can feed it with the required RAM. Maybe then the unbiased GPU and the adaptive biased CPU rendering will be at par in quality, complexity and speed. Even then, I think a fast unbiased engine and a slow (so-called) biased adaptive engine will come out almost even in performance and quality, and that's only once these GPU engines (renderers) incorporate all the missing stuff like SSS, instancing, etc.

 

 

The amount of time you have wasted arguing over these past few days, Mr. RevUp, could have been used to render some sample scenes, or scenes of your own, and post the results here.

 

Following is one link I found about iray. Here too, it took 15 minutes to render a simple scene on a Quadro 5000.

 

http://jeffpatton.net/2010/10/27/optimize-interior-scenes-for-iray/

Edited by rats

All of us should check this out:

 

http://vizdepot.com/forums/showthread.php?t=10291&page=1&pp=15

 

iray has a long way to go... heheeheheh...

 

And yes, RevUpRender, since GPU rendering is at least 10x faster (which it is not, at least not iray) and of such great quality, say just like a Maxwell render, then if my Maxwell scene takes 10 nights to render, this GPU renderer will take only one night. So I don't need your service for rendering; I should be happy buying a single Quadro or Tesla with 6 GB of RAM, since your farm doesn't have any card with more RAM than that either.

Edited by rats

Now, be nice.

 

Let's see here, this scene is a fairly reasonable test. Not very complex geometry and only a few materials, but it relies on GI and has some caustics and some glossies, so it's reasonably difficult. Now, is it fast? Hard to say. I don't have iray on my home PC, but looking at the stats posted there it looks like a GTX 470 is about 30% faster than an i7-920 at running this, a GTX 460 is 14% faster than a 4GHz Core 2 Quad, and any GPU that's not a Fermi is slow at this.
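
To be clear about where a figure like "30% faster" comes from, it's presumably just the ratio of the render times posted in that thread. A tiny sketch with placeholder times (not the actual Vizdepot numbers, which I don't have in front of me):

```python
# "X% faster" here just means the ratio of two render times, minus one.
def percent_faster(slower_seconds, faster_seconds):
    return (slower_seconds / faster_seconds - 1) * 100

# Placeholder times for illustration only; not the actual Vizdepot figures.
print(percent_faster(260, 200))  # 260 s on the CPU vs 200 s on the GPU -> 30.0 (% faster)
```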

 

My CPU at home is nothing special, a quad core i5 2.8GHz with 4GB (and a Radeon 5750 but that's not important here) with 64-bit Max 2011. If I load that scene and switch to mental ray and don't do anything special, I get a render that's not as good as what's on that Vizdepot thread in 9:37. I'll see if I can improve that a bit...


Mr Rats,

 

This discussion started because you were talking about building some kind of six-processor server for rendering. I suggested that you might want to wait and see what is coming out with GPU technology before you spend a lot of money on a system with six Xeon processors. I think the type of system you were thinking about buying, or did buy, is far more impractical than what I am talking about.

 

I said a GPU can be 1000x faster than the CPU, and what I meant is that it can be for some applications. Rendering happens to be one of the applications that runs very well on a GPU, and that is all I was trying to say to you and AJLynn. Over the last two years I have been working with a lot of very talented 3D programmers who specialize in making software run on the GPU, and I have seen some really interesting things. If you had gone to NVIDIA GTC then you would not be arguing with me, because you would have seen all the amazing things being done with the GPU and CUDA, and not just rendering. Something like a database might not be an application that runs very well on the GPU. Granted, I was not throwing out an exact figure, as I was speaking generally, but there are cases where GPUs are simply many, many times faster than the CPU for rendering.

 

http://www.ixbt.com/video3/images/cuda/gflops.png Take a look at this chart, which does not even show the latest cards and stops in 2008. You can see that the GPU is roughly 10x faster in achieving peak GFLOP/S. Of course it is not always that much faster, but I was referring to the general technology. I am sure AJLynn will try to tear apart this statement too. I am just trying to give you guys some knowledge from my experience working with GPUs. You could be a little kinder, since I am spending my time trying to help and answer your questions as best I can right now. Honestly, though, the responses you have been giving me make me really not want to spend much more time going back and forth with you.
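
For context, the usual back-of-envelope arithmetic behind those peak GFLOP/S figures looks roughly like the sketch below. These are the commonly quoted theoretical paper specs for two example parts I picked myself (not taken from the chart), not measured rendering throughput:

```python
# Theoretical peak single-precision GFLOP/S = units * clock (GHz) * FLOPs per unit per cycle.
# Commonly quoted paper specs for two example parts; not measured rendering speed.

def peak_gflops(units, clock_ghz, flops_per_cycle):
    return units * clock_ghz * flops_per_cycle

gtx_480 = peak_gflops(480, 1.401, 2)  # 480 CUDA cores, 1.401 GHz shader clock, FMA -> ~1345
i7_920  = peak_gflops(4, 2.66, 8)     # 4 cores, 2.66 GHz, 4-wide SSE add + mul     -> ~85

print(f"{gtx_480:.0f} vs {i7_920:.0f} GFLOP/S, about {gtx_480 / i7_920:.0f}x on paper")
```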

 

Some of the statements you are making, like your suggestion that Nvidia should "make a new system where the CPU is replaced by their CUDA GPU and the motherboard chipset can feed it with the required RAM," just show how unfamiliar you are with this technology. iRay does use CUDA; it is built on CUDA. Again, mental ray has been out for over 20 years and iRay for just one year. I am sure they are working on the features you are talking about, such as instancing.

 

As we stated, we are going to do some tests in the near future. When we will do these and have results to post I cannot say; you will just have to wait until then.

 

As AJLynn said, it really is not productive for me to keep responding. It is not about ego; I have just been trying to respond with the knowledge I have right now.


Hmm, OK. I apologize for my words if they came across badly, RevUp, and thanks for your advice on my query. And no, I am not building a system of six Xeons; it was four Opterons, each at a cost of $300, less than a tenth of a decent Quadro. GPU rendering does look promising (if you don't mind, please refer to the www.refractivesoftware.com gallery), but it is not practical for production rendering of even a slightly bigger project with a dozen 3D trees and cars, and it doesn't look like it will be in the near future. If I invest in the quad-Opteron system, with a total of 32 cores, it seems certain that my current software, materials and setup will get me productive results reasonably fast, on time, and with decent quality. I am looking at an investment horizon of no more than a couple of years from now; maybe by then GPU technology and hardware will be affordable and in a practically usable state (just looking at the sample scenes is amusing). So what I have learned from all this is that the CPU still wins for a near-future investment; the GPU can be kept for small adventures. And yes, the quad-Opteron system costs less than the Maxer from 3DATS.

 

Thanks...


Hey AJ, please check Octane's gallery at www.refractivesoftware.com; it does look promising, but only with tiny scenes.

 

Hey RevUp, you know the technology better and are also a developer of GPU stuff; please help us all by making a good renderer that is xxxx times faster, as you believe it can be, and practically usable, along with hardware that won't have a memory bottleneck.

Edited by rats

"If you had gone to NVIDIA GTC then you would not be arguing with me, because you would have seen all the amazing things being done with the GPU and CUDA, and not just rendering. Something like a database might not be an application that runs very well on the GPU. Granted, I was not throwing out an exact figure, as I was speaking generally, but there are cases where GPUs are simply many, many times faster than the CPU for rendering."

 

Someone posted a section of the GTC conference video online where they were showing real time cloud rendering with iRay. It did look extremely impressive until you looked a little closer....

 

When they did the comparison between a biased engine, mental ray, and an unbiased engine, iRay, the unbiased engine running on a RealityServer came across as being 1000 times faster. It was presented as crazy fast compared to the 'old' solution.

 

Now, when you looked at what they presented as the old solution, it was shocking. nVidia had set up the biased CPU rendering with settings that are never used in production environments. They basically set that test up to fail, to make it look like iRay was that much faster than the way we are currently doing things.

 

Let's say you have a Porsche and an Audi lined up next to each other. You weigh the Porsche down with a 2-ton trailer, and use bad gas for good measure. Which car do you think is going to look more impressive on its 0-to-60 time? In that case the Porsche would have been set up to fail.

 

I use a biased engine, which speeds things up a great deal in rendering. If I understand correctly, biased rendering cannot currently be done on the GPU because, while the GPU is fast, it cannot handle the complex calculations that biased rendering utilizes. Those calculations are best handled by high-end CPUs.

 

Now, say I needed to use an unbiased engine because I absolutely needed everything to be physically accurate; then yes, GPU computation appears to be the better choice. And again, this would be because unbiased calculations require simple math, and so the problem becomes a measure of how quickly you can do an enormous number of simple calculations.
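
A toy illustration of that "many simple calculations" point: unbiased rendering is at heart Monte Carlo estimation, where every sample is cheap and independent of every other, which is exactly the kind of work that spreads trivially across thousands of GPU threads. A minimal Python sketch of the idea (not iRay's actual algorithm):

```python
import random

# Toy Monte Carlo estimator of pi: every sample is a cheap, independent calculation,
# so the work parallelizes trivially; that is the property GPU path tracers exploit.
def estimate_pi(samples):
    hits = sum(1 for _ in range(samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

print(estimate_pi(1_000_000))  # error shrinks like 1/sqrt(N), much like unbiased render noise
```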

 

In my world, being physically accurate is currently not that important to me, because I have not seen rendering speeds that justify the time penalty that comes with being physically accurate.

 

Now we are starting to be presented with claims that physically accurate rendering may be able to blow non-physically-accurate rendering speeds out of the water. The problem people are having is that there is little proof that this technology is ready for production use. When you look at the speeds people are getting with the cards in their computers, it has been disappointing so far. Maybe that will change, but I am talking about what I can do today, and what I can do with a hardware spending budget. That is why I am curious how much a RealityServer actually costs.

 

What I am seeing is that I can get a fast rendering workstation for a biased CPU renderer for under $2,000, plus in my case another $1,000 for the biased engine. Or I can get a Tesla 2050 for $2,000 and maybe another $1,500 for the workstation to go with it. When I compare the rendering speeds I get from those two options head to head, the biased engine is winning on speed, and is cheaper to implement.

 

I know you stated earlier that we don't need a RealityServer sitting next to us; we can farm this off to the cloud. This is where I am confused about how successful a workflow this would be for me, what the cost would be, and how predictable the results would be.

 

Currently, when I am using our biased CPU-based farm, I test the rendering on my local machine. When I am testing, I can distribute buckets of the image out to other CPUs on the farm. Then when I go to render, I can either queue up a high-res render locally and distribute it out for speed, or, if rendering an animation, submit it to Backburner and have all of the farm machines render individual frames.

 

Typically, the time I get when distributing a 5,000-pixel-wide production render across 3 machines is roughly 1.5 to 2 hours, depending on the complexity of lights and materials.

 

My farm is mainly built of Dell 3500s with single hyperthreaded Xeons in them. I believe these machines were about $1,500 each, though I would need to check that. These machines have very low-end Quadro cards in them, so they won't be useful at all for GPU rendering. If I render an animation at 720p, I can typically expect anywhere from 6 minutes a frame to 25 minutes a frame, depending on the complexity of lighting and materials.

 

I have 10 dedicated machines on the farm, but can expand to 80-90 when people leave for the evening. This gives me a lot of local power to do things.
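
To put some rough numbers on that capacity, here is a quick sketch using the per-frame times quoted above; the 12-hour overnight window is my assumption, not a measured figure:

```python
# Rough overnight throughput for the farm described above.
# The 12-hour window is an assumption; per-frame times are the quoted 720p figures.
def frames_overnight(machines, minutes_per_frame, window_hours=12):
    return machines * (window_hours * 60) // minutes_per_frame

print(frames_overnight(10, 25), frames_overnight(10, 6))   # 10 dedicated machines: 288 to 1200 frames
print(frames_overnight(80, 25), frames_overnight(80, 6))   # expanded to 80 machines: 2304 to 9600 frames
```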

 

Currently, biased CPU solutions work very well in environments like the one I am in because the render engine is fairly cheap, the hardware needed to get good results is becoming cheaper, and the results are very predictable. And maybe most importantly, we understand the rate at which the technology will advance and change, and can therefore calculate the impact of cost in real dollar terms over a given time period.

 

For me, all of these things are currently question marks when it comes to GPU rendering. The technology is new, there is a huge question mark in terms of costs, and there are not enough functioning workflows set up to measure its predictability and speed. And unfortunately, until there are, everything will remain marketing speak, not proof of process.

 

It is difficult to justify spending extra money in a low-profit-margin industry if there is not significant proof that the money spent will yield both faster and better results.

Edited by Crazy Homeless Guy

Ratnakar - Some very nice things in that gallery. BTW, you can't make a PC with just a GPU and no CPU, or at least not a PC as we know it. There are the Tesla boxes but that's a bit different.

 

Ry - The gigaflops numbers are misleading. Some gigaflops are more useful than others. Take what you read about them with several large grains of salt.

 

Travis - I think we're in agreement on most of this.

 

BTW, I spent a few more minutes on that render from that Vizdepot thread. I was able to get it in 9 minutes, then changed it to a bit more traditional lighting with a normal mr area light and photon caustics, and got rid of most of the noise, which gave 15 minutes and a much more usable render. So I'd put the CPU (currently $205 on Newegg) at about the level of a GeForce GTX 470 (about $260), though I'm maybe being too kind to the GPU because I have no idea how long it would take to clear the noise in iray. (An AMD 1090T would have done it in under 11 minutes, and that's $230.)
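
Putting the price/performance arithmetic in one place, since that's really the comparison I'm making: the times are the ones from my test above, the GTX 470 time is the rough parity I'm guessing at (its actual time to clear the noise in iray is unknown), and "price x time" is just a crude figure of merit for this one scene:

```python
# Crude price/performance comparison for this one (admittedly tiny) benchmark.
# The GTX 470 time is an assumed rough parity with the tuned CPU render, not a measurement.
options = {
    "i5 quad 2.8 GHz, mental ray (tuned)": (205, 15),   # price in $, render time in minutes
    "AMD 1090T, mental ray (estimated)":   (230, 11),
    "GeForce GTX 470, iray (assumed)":     (260, 15),
}
for name, (price, minutes) in options.items():
    print(f"{name}: ${price}, {minutes} min, price x time = {price * minutes}")
```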

 

Moral of the story: it's not how many cores you have, it's how you use them.

 

Note: I still want to be able to buy into the GPU thing. Somebody convince me. With pictures, not words.


Just a quick response for right now, but something I thought of is a program I downloaded and played around with a couple of months ago. It is called Design Garage, from NVIDIA. Some of you may know of it already, but if you have a capable GPU you can download it and see some amazing results very fast. Here is a forum post with some results: http://www.evga.com/forums/tm.aspx?m=289470&mpage=1

 

Of course these are not specifically architectural scenes, but like I said, this technology always comes from the entertainment industry first, and we get to use it last. There are, however, some renders here with very detailed architectural scenes as backdrops. I used Design Garage with two GTX 480 Fermi GPUs and was able to get very fast, almost real-time photorealistic results. The other reason I mention this is that RealityServer functions very similarly to this software. mental images had a demo of a car at their GTC booth with a scene around it, and the whole thing was rendering almost instantly on many GPUs at the same high quality as the examples at this link.

 

When I referred to the amazing things being done with the GPU that I saw at GTC, I was not referring to the 3ds Max iRay technology demo; I was talking about all the amazing things I saw there overall, from medical research to video games, all doing things that would simply not be possible on the CPU. There is no way you can argue with that if you know the things I am referring to.


Also, while I am not using my personal workstation much right now, the GTX 480 I have in it is crunching away on Folding@home with the advanced GPU client, folding proteins for cancer research about 10x faster than I could with the i7 CPU that is also in this system. I have used both, and the GPU client goes through jobs of 50,000 units in just a few hours, while I think it used to take days to do the same on the CPU.

