Thinking about realtime tools.


Hulag

Lately I have been thinking about the real-time visualization business and its use in architecture. I have been looking at different tools, and one of the biggest problems for me is the lighting. Is radiosity really necessary for a real-time presentation of a building, at the expense of having everything be static? I must say radiosity looks really good in still images, and I think it is genuinely needed there: it gives much better quality to a single image, and that is required because the viewer will have time to really look at every single pixel rendered. But in a real-time demo the person will be moving around, and I think what's really required there is dynamism, with the lighting made as good as possible within that. So I think a solution like Doom 3 is better than using, say, rtre: while rtre lets you use radiosity, it's all very static compared to Doom 3. And what's even better is that Doom 3, for example, doesn't require any rendering time, which means you don't need a render farm (if you work in a big firm, of course :) ) and you don't have to sit waiting for the radiosity render to finish.

I would like to know what you people think about this, because I am a bit disappointed by most of the architectural real-time presentations I have seen, and by what I have been able to do with the standard real-time tools.


For a good real-time presentation you will need to render it with GI or radiosity for it to be convincing. I have yet to see a video game that comes close to a real look. While the newer game engines are nice, and you won't need to render a million frames, you will need to bake the lighting, and you will need a superb graphics card.

 

Next, it depends on how you want to present. If it's online, you won't be able to do much because of bandwidth. Distribution via CD is better, but you have to assume that most people's graphics cards are poor and old, so again you will be limited. We are working on real-time presentations that require a custom Shuttle computer, so we can control which graphics card is used, the processor, etc.

 

Real time will be limited to certain presentations, such as kiosks, where someone can control the hardware. It'll be a while before it's something you can 'distribute' or hand off to a client.


I assume you have seen the previous thread about Doom 3...? If not, I think it brings up some of the limitations of Doom 3. Half-Life 2 looks much more promising (download the demos linked there - you'll be glad you did).

 

You brought up a really good point though:

 

Texture Baking + GI = Max7 w/ Mental Ray

 

Anyone given it a try yet?


For a good real-time presentation you will need to render it with GI or radiosity for it to be convincing.

My point was that a real-time presentation doesn't have to be just a model with nice textures and lighting; it has to be more of a presentation showing what a person might actually do in a building. You would have NPCs all around (supposing you are showing something like an office building), and you would let the user turn lights, fans, etc. on and off. What I have seen is that most VR programs (aimed at architects) are all VERY static, and the only rtre demo I saw that wasn't static used something called vertex lighting (from some googling, it seems they set a lighting value for every vertex in the scene and interpolate among them), which looks really bad compared to Doom 3 or even an old game such as Quake 1. And I think Doom 3 doesn't look too bad even in an open scenario such as this screenshot (screenshot_8.jpg). I don't know about you people, but I would sure give up some lighting quality to have more interactive demos.
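For what it's worth, here is roughly what that vertex-lighting description seems to boil down to. A minimal sketch, assuming a simple Lambert term evaluated once per vertex; all names are illustrative, not rtre's actual code:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// One lighting value per vertex: a clamped N.L term against the light
// direction, computed per vertex rather than per pixel.
float vertexIntensity(const Vec3& normal, const Vec3& toLight) {
    return std::max(0.0f, dot(normal, toLight));
}

// The rasterizer then blends the three per-vertex values with barycentric
// weights, so any lighting detail smaller than a triangle is lost; that is
// why it looks coarse next to Doom 3's per-pixel approach.
float interpolate(float i0, float i1, float i2, float b0, float b1, float b2) {
    return b0 * i0 + b1 * i1 + b2 * i2;
}
```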

 

Next, it depends on how you want to present. If it's online, you won't be able to do much because of bandwidth. Distribution via CD is better, but you have to assume that most people's graphics cards are poor and old, so again you will be limited. We are working on real-time presentations that require a custom Shuttle computer, so we can control which graphics card is used, the processor, etc.

I'm looking at demos in a "controlled environment", for people that may invest in a building and things like that. So yeah, I would have a dedicated computer for presentations, just as some studios have dedicated computers for rendering (and some even have render farms).

 

I assume you have seen the previous thread about Doom 3...? If not, I think it brings up some of the limitations of Doom 3. Half-Life 2 looks much more promising (download the demos linked there - you'll be glad you did).

Yeah, I saw it, and HL2 looks really impressive too, but its lighting is largely static as well; not everything casts shadows. Basically they seem to have all the static geometry with baked GI (called lightmaps, I think) and all the dynamic geometry with vertex lighting. At least that's what it looks like in the demos I have seen. So yeah, it would make more sense to use HL2-style technology if you plan to make a demo that isn't too interactive, but then why wouldn't you just use rtre or Visuall?


Also, what do you people think of rendering times? Isn't it nice not to have to wait hours to get the GI baked into the geometry, at the cost of some visual quality? Sometimes that's not a problem, but usually I'm on tight schedules, and since I don't have two computers or a render farm, I have to wait without doing much while everything is being rendered. I think that's a big plus for people that don't have that many resources: at the very least you want another computer doing the rendering while you keep working on other stuff.


That's what I like about real-time stuff; offline rendering just takes too long for me... I like instant gratification :D

 

Talking about lightmaps: they are really a pre-baked radiosity lighting solution with added "lighting" information (e.g. a moving light... basically like a dynamic light painting in VIZ4). It's not really a true lighting simulation. What I foresee in the next couple of years is the development of real-time radiosity/GI which would do limited calculation in the rendering tree... a more optimised approach. If you have seen Lightscape in action, you will understand what I mean... progressively refining the lighting solution in real time while not impacting performance.
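To make the "pre-baked" point concrete, here is a minimal sketch (illustrative C++, not any particular engine's code) of how a baked lightmap typically combines with the diffuse texture at draw time:

```cpp
struct Color { float r, g, b; };

// The baked lightmap texel just modulates the diffuse texel, channel by
// channel. All the GI was frozen into the lightmap at bake time, so moving
// a light source afterwards changes nothing on screen.
Color shadeStaticSurface(const Color& diffuse, const Color& lightmap) {
    return { diffuse.r * lightmap.r,
             diffuse.g * lightmap.g,
             diffuse.b * lightmap.b };
}
```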

 

I think the most impressive thing about HL2 is that it's also a good simulation: it's got physics simulation (both single rigid objects and linked multi-body objects like ragdoll physics) and acoustic simulation (supposedly...). Even the weight of objects in a large body of water is simulated... THAT in itself is an achievement and can give designers possibilities never dreamed of before...


I think the most impressive thing about HL2 is that it's also a good simulation: it's got physics simulation (both single rigid objects and linked multi-body objects like ragdoll physics) and acoustic simulation (supposedly...). Even the weight of objects in a large body of water is simulated... THAT in itself is an achievement and can give designers possibilities never dreamed of before...

Exactly. I think the physics engine is important to make buildings more interactive, not in HL2's "shoot-at-this-barrel-it-will-roll" kind of way, but in more useful ways, such as running more useful simulations. Just think, for example, if you told a client: "Look, you not only get to run around your building, you can also see what would happen in case of a disaster where all the lights go off, things fall, etc." I think that adds so much more to the presentation, and it can make your client happier. And I guess you all know what happens when your clients are happy :)
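The "things fall" part doesn't actually demand much from an engine. A minimal sketch of the kind of per-frame update involved (explicit Euler integration under gravity with a hard floor; purely illustrative, not HL2's API):

```cpp
struct Body {
    float height;    // metres above the floor
    float velocity;  // vertical velocity in m/s
};

// One simulation step: integrate gravity, then velocity, then resolve a
// crude floor collision. Real engines are far more elaborate, but this is
// the core loop that lets props drop when the "disaster" is triggered.
void step(Body& body, float dt) {
    const float g = -9.81f;
    body.velocity += g * dt;
    body.height   += body.velocity * dt;
    if (body.height < 0.0f) {
        body.height = 0.0f;
        body.velocity = 0.0f;
    }
}
```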


  • 3 weeks later...

Hi All,

 

Lately I have been thinking about the real-time visualization business and its use in architecture. I have been looking at different tools, and one of the biggest problems for me is the lighting...

 

I agree! Speaking as a real-time software developer, our major concern is light. Modern gfx cards are so powerful that polycount isn't really a big issue; in addition, fillrate and antialiasing capabilities now allow very high resolutions at good framerates.

 

...the viewer will have time to really look at every single pixel rendered. But in a real-time demo the person will be moving around, and I think what's really required there is dynamism, with the lighting made as good as possible within that.

 

...or maybe the person will stand still, up close to a wall, and really look closely at the details!? The point is that you never really know what the user will do in a real-time application. So you need to settle on some overall quality that works for both static and moving situations.

 

So I think a solution like Doom 3 is better...

 

Yep, it sure looks astonishing! However, it still requires so much more graphics-hardware horsepower than what you can do with lightmaps. In addition, the hard-edged shadows IMHO aren't a viable solution for visualizations!? And my guess is that the nature of the Doom 3 rendering methods requires even stricter level design than previous game engines... making it even more difficult to create accurate models. However... these are only initial concerns; I haven't actually tried building an architectural kind of level for Doom 3 yet.

 

Real time will be limited to certain presentations, such as kiosks, where someone can control the hardware. It'll be a while before it's something you can 'distribute' or hand off to a client.

 

I disagree. Both Web3D applications and bigger standalone applications are currently used and distributed to end users. I guess the game industry is a good way to measure it: just subtract a few years and you get the kind of technology "normal" users have. Doing that, I must point out that a game like Quake 3 is five years old! And the original Unreal is even older, so I guess there is a lot of potential real-time visualization power out there :-)

 

My point was that a real-time presentation doesn't have to be just a model with nice textures and lighting; it has to be more of a presentation showing what a person might actually do in a building. You would have NPCs all around (supposing you are showing something like an office building), and you would let the user turn lights, fans, etc. on and off. What I have seen is that most VR programs (aimed at architects) are all VERY static...

 

...Just think, for example, if you told a client: "Look, you not only get to run around your building, you can also see what would happen in case of a disaster where all the lights go off, things fall, etc."

 

I agree, but you still have to limit your scene in a clever way. To make a scene work for real-time purposes, you need to fake a lot of things. So if you have a lot of static light (which is actually most of the light in a scene), there is no need to make it dynamic. Also, you can't make a scene 100% dynamic; you need to limit its functionality to some degree.

 

IMHO the solution is to find some way to combine HDR rendering with dynamic light at a per-pixel level. Shadow volumes are simply too expensive right now compared to the result you actually get, but some other shadow technique might be viable, though not necessarily needed!
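For the HDR half of that, the display step is a tone-mapping operator. A minimal sketch using the simple Reinhard curve, which is just one possible operator, not what any particular engine does:

```cpp
#include <cmath>

// Map an unbounded HDR luminance into [0,1) with the Reinhard curve
// L/(1+L), then gamma-correct for a conventional 2.2-gamma display.
float toneMap(float hdrLuminance) {
    return hdrLuminance / (1.0f + hdrLuminance);
}

float toDisplay(float hdrLuminance) {
    return std::pow(toneMap(hdrLuminance), 1.0f / 2.2f);
}
```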

 

NPCs, and entourage like plants, etc., are very important; however, they steal a great deal of power if you model them, and IMHO RPCs simply don't work that well. They make the scene seem artificial!? And on a related note, they do not work if you add stereoscopy.

 

Also, what do you people think of rendering times? Isn't it nice not to have to wait hours to get the GI baked into the geometry, at the cost of some visual quality?

 

This is a very interesting point you've got there! Our research shows that the bottleneck in the content-creation pipeline isn't the actual export and viewing in real time, but the "max -> real-time preparation" step, lighting definitely being one of the biggest issues! We are currently doing R&D on this process to auto-generate a lot of the lighting work; however, it could be a very cool feature to be able to compose your light in a Doom 3 style way, and later bake a GI version with HDR for the static parts!

 

Best regards

Thomas Rued

Digital Arts


...or maybe the person will stand still, up close to a wall, and really look closely at the details!? The point is that you never really know what the user will do in a real-time application. So you need to settle on some overall quality that works for both static and moving situations.

I agree with you; someone may decide to stand still. I think a solution to that is the use of normal maps, where you can have a lot of detail encoded in the normal map without adding more geometry.

 

Yep, it sure looks astonishing! However, it still requires so much more graphics-hardware horsepower than what you can do with lightmaps. In addition, the hard-edged shadows IMHO aren't a viable solution for visualizations!? And my guess is that the nature of the Doom 3 rendering methods requires even stricter level design than previous game engines... making it even more difficult to create accurate models. However... these are only initial concerns; I haven't actually tried building an architectural kind of level for Doom 3 yet.

Here I think a very important part of visualization is the need for instant feedback, so I don't think using stencil shadows is bad at all. Of course soft shadows look better to the eye, but if your shadows are all derived from the positions of the light sources and the geometry, and they get updated in real time, I think that's more useful for the person looking at the scene. For example, stencil shadows look bad compared to nicely rendered shadows, but in the case of SketchUp, being able to see the shadows for a given day, season and hour is very useful, and that's why people use it rather than waiting for shadows to render.
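The day/season/hour part of that SketchUp-style preview reduces to computing a sun angle from date and time. A sketch of the standard simplified solar geometry, fine for a shadow study but not for precise solar engineering:

```cpp
#include <cmath>

const float kPi = 3.14159265f;

// Solar declination in radians for a day of the year (1..365), using
// Cooper's approximation.
float declination(int dayOfYear) {
    return (23.45f * kPi / 180.0f) *
           std::sin(2.0f * kPi * (284 + dayOfYear) / 365.0f);
}

// Sun altitude in radians from latitude (radians), day of year, and local
// solar time in hours; the hour angle is 15 degrees per hour from noon.
float sunAltitude(float latitude, int dayOfYear, float solarHour) {
    float dec = declination(dayOfYear);
    float hourAngle = (solarHour - 12.0f) * 15.0f * kPi / 180.0f;
    return std::asin(std::sin(latitude) * std::sin(dec) +
                     std::cos(latitude) * std::cos(dec) * std::cos(hourAngle));
}
```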

 

I disagree. Both Web3D applications and bigger standalone applications are currently used and distributed to end users. I guess the game industry is a good way to measure it: just subtract a few years and you get the kind of technology "normal" users have. Doing that, I must point out that a game like Quake 3 is five years old! And the original Unreal is even older, so I guess there is a lot of potential real-time visualization power out there :-)

I think it has more to do with the architects than with the technology itself. I have found that architects in general get a little nervous at the idea of people looking at things they don't really want them to look at. Most of the time they prefer to be in the same room with the person who is going through the interactive demo.

 

I agree, but you still have to limit your scene in a clever way. To make a scene work for real-time purposes, you need to fake a lot of things. So if you have a lot of static light (which is actually most of the light in a scene), there is no need to make it dynamic. Also, you can't make a scene 100% dynamic; you need to limit its functionality to some degree.

I think the visualization system should be as flexible as possible. That means, for example, that if someone wants to visualize what a small earthquake in a house would look like, with swinging and flickering lights, they can. The more flexible the better, but that doesn't mean you actually have to use all of it.

 

IMHO the solution is to find some way to combine HDR rendering with dynamic light at a per-pixel level. Shadow volumes are simply too expensive right now compared to the result you actually get, but some other shadow technique might be viable, though not necessarily needed!

Well, to implement HDR lighting you need support for fragment programs (pixel shaders, for Direct3D people), which means you can probably also do shadow volumes at a fair speed, supposing you can do the HDR lighting effects at a good speed.

 

This is a very interesting point you've got there! Our research shows that the bottleneck in the content-creation pipeline isn't the actual export and viewing in real time, but the "max -> real-time preparation" step, lighting definitely being one of the biggest issues! We are currently doing R&D on this process to auto-generate a lot of the lighting work; however, it could be a very cool feature to be able to compose your light in a Doom 3 style way, and later bake a GI version with HDR for the static parts!

In big studios with render farms it isn't such a big issue, but for all the other studios and freelancers it really makes them sick to wait. I would definitely give up lighting quality for instant feedback, and many of the people I have talked to feel the same. But here on CGarchitect most people seem to feel that they need visual quality more than instant feedback. It seems the issue has those two clear opposing views.


I agree with you; someone may decide to stand still. I think a solution to that is the use of normal maps, where you can have a lot of detail encoded in the normal map without adding more geometry.

 

Yep, in theory this is a nice trick. However, if you think about it, normal maps and bump mapping really need some kind of dynamic lighting mechanism to come into their own! Without that, you might as well bake the detail into the diffuse map directly. So we are back to the beginning, discussing real-time light!
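To show why, a minimal sketch of per-pixel normal-map shading (illustrative C++, not a real shader): the stored normal is re-evaluated against the current light direction every frame, which is exactly the dynamic part you lose when you bake:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Normal maps store components in [0,1]; decode back to [-1,1].
Vec3 decodeNormal(const Vec3& texel) {
    return { texel.x * 2.0f - 1.0f,
             texel.y * 2.0f - 1.0f,
             texel.z * 2.0f - 1.0f };
}

// Clamped N.L against the *current* light direction; move the light and
// the apparent surface relief follows. Bake this once and it stops moving.
float lambert(const Vec3& n, const Vec3& toLight) {
    float d = n.x * toLight.x + n.y * toLight.y + n.z * toLight.z;
    return std::max(0.0f, d);
}
```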

 

Here I think a very important part of visualization is the need for instant feedback, so I don't think using stencil shadows is bad at all. Of course soft shadows look better to the eye, but if your shadows are all derived from the positions of the light sources and the geometry, and they get updated in real time, I think that's more useful for the person looking at the scene. For example, stencil shadows look bad compared to nicely rendered shadows, but in the case of SketchUp, being able to see the shadows for a given day, season and hour is very useful, and that's why people use it rather than waiting for shadows to render.

 

I see your point, and likewise Hulag's earlier point about getting quick response at the cost of visual quality. In general I think this is a very good idea and something to be researched further. However, shadow volumes are still too expensive IMHO compared to what you get and what you pay for it. Users might not care about this big overhead, but I'm afraid the use of shadow volumes will indirectly influence other aspects the user isn't aware of. An example could be a scene with a lot of detailed objects. These all work fine on a modern gfx card, since the geometric processing power is so big; however, if you add shadow volumes to this scene, it's probably too heavy to work with in any way. Consequently, the user will cut down on the detailed models to make the scene work with the stencil shadows. In the end he might use this setup to bake light anyway, but now he is using the lower-detail model, resulting in a worse end result, unaware that he could actually have used the detailed model at nearly the same speed once he stopped using the shadow volumes!

 

Still, I agree that day/night scenarios, the sun's movement through a room, etc. are all good examples of the use of dynamic light and shadows; I just don't think shadow volumes are a step in the right direction.

 

I think it has more to do with the architects than with the technology itself. I have found that architects in general get a little nervous at the idea of people looking at things they don't really want them to look at. Most of the time they prefer to be in the same room with the person who is going through the interactive demo.

 

I have experienced similar situations with architects, and in one case we made what we called a "rollercoaster" presentation to work around it. We simply made a traditional camera-path animation of the scene, but made the camera's rotation controllable by the user. That way he still got a real-time feel for the scene, but was in many ways restricted by us. Even though this might be one step in the right direction, I think it shows that architectural presentations in real time need much more work on the interaction side. We can't just rely on game-style interaction!
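As a sketch of that "rollercoaster" scheme (illustrative C++; the names and structure are made up, not our actual code): the position follows an authored waypoint path while the user controls only the look direction:

```cpp
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

// Camera position: linear interpolation along authored waypoints, t in [0,1].
Vec3 pathPosition(const std::vector<Vec3>& wp, float t) {
    float f = t * (wp.size() - 1);
    int i = static_cast<int>(f);
    if (i >= static_cast<int>(wp.size()) - 1) return wp.back();
    float u = f - static_cast<float>(i);
    return { wp[i].x + u * (wp[i + 1].x - wp[i].x),
             wp[i].y + u * (wp[i + 1].y - wp[i].y),
             wp[i].z + u * (wp[i + 1].z - wp[i].z) };
}

// Camera orientation: the user's yaw/pitch only, which is the one degree
// of freedom handed to the viewer in the "rollercoaster" setup.
Vec3 lookDirection(float yaw, float pitch) {
    return { std::cos(pitch) * std::sin(yaw),
             std::sin(pitch),
             std::cos(pitch) * std::cos(yaw) };
}
```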

 

Well, to implement HDR lighting you need support for fragment programs (pixel shaders, for Direct3D people), which means you can probably also do shadow volumes at a fair speed, supposing you can do the HDR lighting effects at a good speed.

 

No, you can't connect HDR light and shadow volumes like that. The problem with shadow volumes is the work required of the CPU to generate the volume itself, and then the additional rendering of those extra triangles.
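To make that CPU cost concrete: the classic stencil-shadow algorithm re-extracts the silhouette every time the light or the mesh moves. A minimal sketch of that step, with an illustrative data layout (not Doom 3's code):

```cpp
#include <vector>

// Each edge is shared by two triangles; an edge is on the silhouette when
// exactly one of those triangles faces the light.
struct Edge { int faceA, faceB; };

std::vector<int> findSilhouette(const std::vector<Edge>& edges,
                                const std::vector<bool>& facesLight) {
    std::vector<int> silhouette;
    for (int i = 0; i < static_cast<int>(edges.size()); ++i) {
        if (facesLight[edges[i].faceA] != facesLight[edges[i].faceB])
            silhouette.push_back(i);
    }
    return silhouette;
}
// The volume is then extruded from these edges away from the light and
// rendered into the stencil buffer: extra CPU work and extra triangles,
// every frame, for every shadow-casting light.
```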

 

HDR rendering is solely a GPU thing, and therefore scales better with the rapid growth of GPU power.

 

...But here on CGarchitect most people seem to feel that they need visual quality more than instant feedback. It seems the issue has those two clear opposing views.

 

Hmm, interesting point! I wasn't aware of these two opposing sides. Maybe some other CGarchitect readers could share their opinion on this!?

 

Regards

Thomas Rued

Digital Arts


Yep, in theory this is a nice trick. However, if you think about it, normal maps and bump mapping really need some kind of dynamic lighting mechanism to come into their own! Without that, you might as well bake the detail into the diffuse map directly. So we are back to the beginning, discussing real-time light!

Bake it then! If you can show correct shadows based on the lighting of the scene, do it. It adds a lot of detail without adding more geometry, and the "shadows" in the texture will look right. That does mean you have to bake it into the diffuse map, and you can't reuse that baked texture anywhere you want; it is specific to the triangles being rendered. Or you can put it in the lightmap. So even if it isn't dynamic, it still shows more detail than having no shadows at all, and all the shadows are based on the actual light sources.
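What that baking amounts to, as a minimal sketch (illustrative; it assumes the N.L term was evaluated once against the scene's fixed light, as in the earlier normal-map sketch):

```cpp
struct Texel { float r, g, b; };

// Fold a pre-computed, clamped N.L term into the diffuse texel. The result
// is tied to this surface and this lighting, which is exactly why the baked
// texture can't be reused on arbitrary triangles.
Texel bakeDetail(const Texel& diffuse, float lambertTerm) {
    return { diffuse.r * lambertTerm,
             diffuse.g * lambertTerm,
             diffuse.b * lambertTerm };
}
```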

 

I see your point, and likewise Hulag's earlier point about getting quick response at the cost of visual quality. In general I think this is a very good idea and something to be researched further. However, shadow volumes are still too expensive IMHO compared to what you get and what you pay for it. Users might not care about this big overhead, but I'm afraid the use of shadow volumes will indirectly influence other aspects the user isn't aware of. An example could be a scene with a lot of detailed objects. These all work fine on a modern gfx card, since the geometric processing power is so big; however, if you add shadow volumes to this scene, it's probably too heavy to work with in any way. Consequently, the user will cut down on the detailed models to make the scene work with the stencil shadows. In the end he might use this setup to bake light anyway, but now he is using the lower-detail model, resulting in a worse end result, unaware that he could actually have used the detailed model at nearly the same speed once he stopped using the shadow volumes!

 

Still, I agree that day/night scenarios, the sun's movement through a room, etc. are all good examples of the use of dynamic light and shadows; I just don't think shadow volumes are a step in the right direction.

I think the question here isn't whether hard-edged shadows are better or worse than soft shadows. I think the real question is: "We need instant feedback, are you going to give me the tools or not?" Stencil shadows are the fastest solution right now, and even if they are hard on the CPU (supposing you don't use SVBSPs at all), they are the fastest way to provide the feedback that's needed.

 

I have experienced similar situations with architects, and in one case we made what we called a "rollercoaster" presentation to work around it. We simply made a traditional camera-path animation of the scene, but made the camera's rotation controllable by the user. That way he still got a real-time feel for the scene, but was in many ways restricted by us. Even though this might be one step in the right direction, I think it shows that architectural presentations in real time need much more work on the interaction side. We can't just rely on game-style interaction!

Yeah, I saw the Nykredit demo, and I think that's a nice solution. But yes, I agree that demos should be much more interactive.

 

No, you can't connect HDR light and shadow volumes like that. The problem with shadow volumes is the work required of the CPU to generate the volume itself, and then the additional rendering of those extra triangles.

 

HDR rendering is solely a GPU thing, and therefore scales better with the rapid growth of GPU power.

It depends on the optimizations you use: with SVBSPs the cost has more to do with the operations on the video card than with generating the shadow volumes themselves. Also, looking at the current market, most architects and CG people have powerful computers but not very up-to-date video cards, so it isn't such a big concern either.

