
Next major CG breakthrough?


martin walker

I've just been looking at work I did only three years ago, which was scan-line rendered... and it looks so dated.

 

So what do people think will be next, now that GI, caustics, SSS, etc. are an everyday thing?

 

I personally think some sort of realistic noise/imperfection calculation will be the next step: no wall is perfectly flat, no edge is perfectly straight, reflections aren't uniform across a material, and so on. I appreciate we can replicate this with current technology, but a "hyper-realistic" rendering pass would be great :)
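
For illustration, here is a minimal sketch of what such an imperfection pass might do, assuming a simple perturb-along-normals model (the function name and parameters are made up for the example):

```python
import numpy as np

def imperfection_pass(vertices, normals, amplitude=0.002, seed=0):
    """Nudge 'perfect' geometry with low-amplitude noise along normals.

    vertices, normals: (N, 3) arrays; amplitude is in scene units.
    A production version would use coherent noise (e.g. Perlin)
    instead of white noise so the imperfections vary smoothly.
    """
    rng = np.random.default_rng(seed)
    offsets = rng.normal(scale=amplitude, size=len(vertices))
    return vertices + normals * offsets[:, None]

# A 'perfectly flat' wall becomes subtly uneven:
wall = np.array([[x, y, 0.0] for x in range(4) for y in range(3)])
wall_normals = np.tile([0.0, 0.0, 1.0], (len(wall), 1))
bumpy_wall = imperfection_pass(wall, wall_normals)
```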

 

What do you guys think will be the next "big thing"?


Well, in the immediate future I think render engines like Maxwell will become more popular as computer speeds increase and the software becomes more refined. I also expect to see much more virtual reality, as well as real-time rendering becoming something we can use as easily as we now use the preview window.


I want viewport interaction to be a render preview, with reflections, bump, displacement, etc., all at 300 dpi resolution. There's the next opportunity for nVidia: make cards that drive 6000x4000, or whatever (monitor makers too, like IBM and ViewSonic did).
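
For what it's worth, the 6000x4000 figure follows from simple arithmetic; a quick check, assuming a roughly 20x13 inch print purely for illustration:

```python
# Pixels needed to preview a print at full 300 dpi
# (the print size is just an illustrative assumption).
print_width_in, print_height_in = 20.0, 13.3
dpi = 300
pixels = (round(print_width_in * dpi), round(print_height_in * dpi))
print(pixels)  # (6000, 3990) -- roughly the 6000x4000 asked for above
```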


I think the next breakthrough will be more toward the artistic side of things, with advances in non-photorealistic methods. After all, if I need a really good watercolor, there are about four people I can call; but if I need a realistic rendering of a kitchen, I can contact about three dozen freelancers and expect roughly the same level of product from each at this point.


  • 3 weeks later...

There was a great thesis paper on the Chaos site about multi-sampled HDRI scenes. Currently an HDRI gives amazing light information, but only from a fixed spot in space. The example was a hallway with an open door in the background; the room beyond it was a strong sunlight source. If you place an object in a scene lit by a single HDRI, it doesn't matter where that object sits in space: it won't react much to the change in light. Move your 3D object closer to the light source and it won't go from shadow to light. The thesis was about capturing many HDRI sample points walking down the hall, so that when your object moves closer to the lit room it receives more light. Anyway, he explains it better than I can; a sketch of the idea follows the link below.

 

http://www.thereisnoluck.com/thesis.php
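
The thesis has the details, but the core idea can be sketched simply: blend several captured probes by their proximity to the shaded object. A minimal illustration, assuming probes at known positions and simple inverse-distance weighting (all the data here is placeholder):

```python
import numpy as np

def sample_lighting(position, probe_positions, probe_hdris, eps=1e-6):
    """Blend several HDRI light probes by proximity to 'position'.

    probe_positions: (P, 3) capture locations, e.g. along a hallway
    probe_hdris:     (P, H, W, 3) HDR environment maps, one per probe
    Returns an interpolated (H, W, 3) environment map, so an object
    moving toward the lit doorway picks up more of that probe's light.
    """
    d = np.linalg.norm(probe_positions - position, axis=1)
    w = 1.0 / (d + eps)   # inverse-distance weights
    w /= w.sum()
    return np.tensordot(w, probe_hdris, axes=1)

# Three probes down a hallway; the last sits nearest the sunlit room.
positions = np.array([[0, 0, 0], [0, 0, 5], [0, 0, 10]], dtype=float)
probes = np.random.rand(3, 64, 128, 3).astype(np.float32)  # placeholders
env_near_door = sample_lighting(np.array([0, 0, 9.0]), positions, probes)
```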


I think it will probably move more toward ease of use of existing systems. From what I understand, modo is taking a step in that direction... and although it's arguably a slightly different kettle of fish, there is going to be the inevitable question of where archviz fits into BIM.


With the diffusion of cheaper high-res 3D printers on the market, I think visualization will become more of a dynamic design process. In other words, it will be a must during the design process in all sorts of offices. The cost of CG should fall over the next few years, since the software is becoming more popular and easier to use. For CG professionals like us, new technologies will always be available; it's up to us to stay up to date with them. Wouldn't it be nice to have something like a 3D hologram rendered image? Imagine the client being able to visualize it on his computer screen somehow.

 

mcorrea


Yeah, lol... a pair of sunglasses that, when you put them on, lets you view a virtual environment that has been previously downloaded onto them. Not only that, but they would detect other pairs of glasses in the vicinity, like a sort of advanced 3D Bluetooth, and represent each wearer in the same 3D environment as a person/avatar. This could allow teams to examine spaces together. It will be the norm eventually, I'm sure... but I'm willing to bet money something similar has already been invented, lol.

 

But this is a hardware thing, so it's maybe the usual case of hardware having to catch up with software...


I would pray for more psychoperceptual routines to be incorporated into post-production (or, more likely, applied to the tone mapping of luminance values), as well as better and more realistic material properties that incorporate not just "model functions" (BRDF/BTDF) but actual data from sampled real-world materials. Right now most CG packages use simplified material models; even if we have some control over how they "look" and render, they are far from actual real-world materials. Too bad we don't have a library of real-world materials that have been scanned properly for our use (gonioreflectometry).
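
To make the "model function" versus measured data distinction concrete, here is a rough sketch; the analytic version is a textbook Blinn-Phong, and the measured version is a stand-in for a tabulated lookup (real scanned databases such as MERL use a more careful angular parameterization):

```python
import numpy as np

def blinn_phong_brdf(n, l, v, kd=0.8, ks=0.2, shininess=50.0):
    """A simplified analytic 'model function' BRDF, the kind most
    packages ship with (n, l, v are unit vectors)."""
    h = (l + v) / np.linalg.norm(l + v)            # half vector
    return kd / np.pi + ks * max(np.dot(n, h), 0.0) ** shininess

def measured_brdf(table, n, l, v):
    """How gonioreflectometer data would be used instead: look up
    tabulated reflectance rather than evaluate a formula. The 1-D
    half-angle indexing here is a simplification of the real thing."""
    h = (l + v) / np.linalg.norm(l + v)
    theta_h = np.arccos(np.clip(np.dot(n, h), 0.0, 1.0))
    idx = min(int(theta_h / (np.pi / 2) * len(table)), len(table) - 1)
    return table[idx]
```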

 

Lastly, I want to see some kind of "participating media" support happen. Most renderers assume there is no medium between the luminaire and the surfaces in the scene. This is not how it is in the real world: depending on the dust, temperature, and humidity, there's a fair amount of light scattering and absorption going on.
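
The standard starting point for participating media is Beer-Lambert attenuation: light traveling a distance d through a medium with extinction coefficient sigma_t keeps exp(-sigma_t * d) of its energy. A tiny sketch with made-up coefficients:

```python
import math

def transmittance(distance, sigma_a=0.02, sigma_s=0.05):
    """Beer-Lambert transmittance through a homogeneous medium.

    sigma_a: absorption, sigma_s: scattering (per scene unit;
    the values are arbitrary illustrations). A renderer with
    participating media would attenuate every light-to-surface
    path by this factor and add the in-scattered light back in.
    """
    sigma_t = sigma_a + sigma_s        # extinction coefficient
    return math.exp(-sigma_t * distance)

# Light traveling 10 units through dusty air keeps about half its energy:
print(transmittance(10.0))  # ~0.497
```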


Hmmm, a breakthrough for CG would be getting clients that are easy to deal with :D

 

But in all honesty, I think it's the aforementioned incorporation of real-time rendering, which could support something along the lines of the virtual-glasses theme, for immediate immersion and presentation. Sure, the idea isn't new, but the future is in refining the concept so it fits into the workflow (without a lantern battery).

 

Or, if it's not immersion, it's the capability and computing power to pump out full GI animations with all the bells and whistles, with frame times that no longer require a huge farm to complete a full sequence.


Workflow...

Workflow...

Workflow...

 

How hard is it to build a great scene, with a great landscape, great materials, and great lighting, that is worthy of publishing?

 

Intel just promised an 80-core processor within 5 years. Maybe then there will be enough power for a great scene to take less than 40 days...
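
If rendering scaled perfectly, 80 cores would turn 40 days into half a day; Amdahl's law says otherwise. A back-of-the-envelope check, assuming (purely for illustration) that 95% of the work parallelizes:

```python
def amdahl_speedup(cores, parallel_fraction=0.95):
    """Amdahl's law: speedup is capped by the serial fraction,
    no matter how many cores you add."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (4, 8, 80):
    print(cores, "cores:", 40 / amdahl_speedup(cores), "days")
# 80 cores gives roughly a 16x speedup, so about 2.5 days -- much
# better, but the serial fraction, not the core count, sets the limit.
```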

 

Just dreaming!


I can see Mr Smith and Neo coming along...

 

For me, tomorrow's picture is simply a dynamic picture: a still image, but with dynamic parameters, like wanting to see the building in the rain, or with the windows open, or at night, etc. It would be a sort of new file format with new data that we can fill in...
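
As a sketch of what such a file might carry, imagine the still image shipping with a small parameter block the viewer can flip through (every field name here is invented for illustration):

```python
# One scene plus named parameters that could be toggled without
# re-modelling; a renderer/viewer would re-shade accordingly.
dynamic_picture = {
    "scene": "office_tower.scene",
    "camera": {"position": [12.0, 4.5, 30.0], "fov": 55},
    "parameters": {
        "time_of_day": {"options": ["morning", "noon", "night"], "default": "noon"},
        "weather": {"options": ["clear", "rain", "fog"], "default": "clear"},
        "windows": {"options": ["closed", "open"], "default": "closed"},
    },
}
```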

Eh! Hello Morpheus, come back!


Well, I wouldn't call it a breakthrough... but real-time will probably become a big player.

But I also think (and this has already been voiced) the other direction is non-real-time, more expressive styles of CG, which would probably be more interesting than real-time because of the artistic factor, which of course is supreme.


Faster hardware, the advent of DirectX 10 games (look up Crysis), more demands from our clients for more detailed environments, real-time or not... this all leads to greater labor demands in our field, because it means we have to create more geometry, higher-res textures, more lights, etc.

The video game industry is a good analogy: the faster the hardware gets, the more "stuff" they need to create. Bigger teams, longer hours and development cycles. The next big thing in our niche industry will be the sheer number of people required, and the need for experienced management to handle teams of that size.

