mirkonev

Members
  • Posts: 12
  • Joined

Personal Information
  • Country: Poland

mirkonev's Achievements
  • Newbie (1/14)
  • Reputation: 10

  1. How do you have your materials defined? If it's a set of textures, you could consider importing them into Substance Designer. It can pack them into a '.sbsar' file that works as a drag-and-drop asset for Unity, with the texture channels properly shuffled. Their Asset Store is a huge market, though the offering of archviz models is quite scarce at the moment.
  2. Is anybody here particularly experienced with the mentioned Flatiron or a comparable render-to-texture solution? Please PM me.
  3. Yes, pricing really is quite steep for Simplygon now. I tested their cloud version a few months ago to check its potential for easing model preparation for VR. Here is my summary from then (based on the retopology of a typical sofa modeled for offline rendering). Blender's output is on the left, Simplygon's on the right. Both programs were set to reduce the original 225K-triangle mesh with a factor of 0.2 (the final mesh keeps 0.2 times the original polygon amount):
     - Blender produces better-looking topology and model silhouette using Decimate in Collapse mode compared to Simplygon's automatic method, as visible on the attached screenshot (this needs to be re-checked, as I did not use Simplygon's option to better preserve the silhouette)
     - Simplygon's algorithm is better at preserving UVs, with Blender having a few bigger stretches on the output model
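For reference, the Blender side of that comparison can be reproduced with a short bpy script. This is a minimal sketch, assuming a recent Blender with the high-poly mesh selected as the active object (the 0.2 ratio matches the factor above):

```python
import bpy

# Assumes the high-poly mesh (e.g. the 225K-triangle sofa) is the active object.
obj = bpy.context.active_object

# Rough triangle estimate before decimation (each n-gon yields n-2 triangles).
tris_before = sum(len(p.vertices) - 2 for p in obj.data.polygons)

# Decimate in Collapse mode, keeping 0.2 * the original polygon amount.
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.decimate_type = 'COLLAPSE'
mod.ratio = 0.2
bpy.ops.object.modifier_apply(modifier=mod.name)

tris_after = sum(len(p.vertices) - 2 for p in obj.data.polygons)
print(f"{tris_before} -> {tris_after} triangles")
```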
  4. @Francisco This exact project took me a bit longer than 5 hours, due to the additional functionality with gamepad controls. Other than that, yes, 5 hours is enough time to make that presentation, and it does not include baking and unwrapping. We are talking about unwrapping the UV0 channel, of course? That's the only channel the input file needs to have, containing the mapping onto the baked VRay textures. @Benjamin Definitely, for people producing renders in VRay, Flatiron is a good tool to automatically bake your scene for use in real-time engines. As for interactivity, I would add the possibility to change materials and textures on the baked scene. On the other hand, there are solutions that automatically unwrap and lightmap a scene in game engines, so almost zero manual unwrapping is needed (such as Unity's Enlighten). The quality does not match VRay's, but I would certainly call it decent for an automatic (and free!) solution to be used in a VR scene.
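If that UV0 unwrap has to be done by hand, a non-overlapping layout can also be produced in Blender before export. A minimal sketch, assuming a recent Blender with the mesh as the active object (operator defaults are used; margins and packing would need tuning for a real scene):

```python
import bpy

obj = bpy.context.active_object

# Ensure there is a UV layer; on FBX export this becomes UV0 in Unity,
# the channel that maps the mesh onto the baked textures.
if not obj.data.uv_layers:
    obj.data.uv_layers.new(name="UVMap")

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
# Non-overlapping unwrap suitable for baked textures; defaults kept for brevity.
bpy.ops.uv.lightmap_pack()
bpy.ops.object.mode_set(mode='OBJECT')
```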
  5. Here are four screenshots from the scene and one view showing rendering statistics (tris/draw calls/FPS) on a GTX 780 Ti. IMHO, this level of quality certainly won't serve as the final visualization of a project. Still, the workflow I mentioned can enable a very quick VR visualization of a project that meets most of the needs of clients wanting VR. This visualization is owned by and was done for the Japanese digital creative agency of Makoto Shirose.
  6. @Robert M, I will ask my client whether he agrees to me sharing a screenshot or two in here. @Juraj As I mentioned, the scene's lighting was baked into textures with VRay. As that was not part of my work, I did not count that time; my input was an 'FBX file from MAX with a model linked to the baked-out textures that VRay produced'. I can certainly confirm that the time needed grows exponentially with the quality level. What I want to point out is that a quality level that satisfies most of the needs I have encountered so far can be met within the time schedule I mentioned.
  7. To answer the OP's question: there is significantly less time needed to produce a VR walkthrough than what seems to be the general opinion in here and in the rest of the forums. I recently finished a hybrid Oculus/Vive visualization in a few hours, following this workflow:
     - my input was an FBX file from MAX with a model linked to the baked-out textures that VRay produced
     - ye ol' drag and drop to Unity yielded a ready-to-use asset with the textures already linked to materials and to the model (people usually waste a lot of time re-linking textures to materials; a proper FBX export is all that is needed)
     - manual setup of collision took an hour, as performance-friendly primitives were used; this can be much quicker if, for example, a naming convention in the initial input file is used for mesh colliders (see the sketch after this post)
     - a simple setup of the scene with no active lights and static geometry yielded a great framerate, even though the whole model had almost 2 million triangles. As all the lighting information was already baked into textures, ambient light was all that was needed. Forward rendering and 8X MSAA
     - importing the free, also drag-and-drop, OSVR library for VR support means that the same build (application) works on both Vive and Oculus
     - from my experience, this kind of application meets the majority of 'I want this project in VR' client requests. It can be done in ~5 hours. If lighting needs to be baked in Unity (you only have textures applied to the model), it can be done at decent quality in ~2 hours (a naming convention in the initial file can make this faster as well)
     - a prepared Unity project template with the non-model-specific steps can be used in future projects, cutting the time further
     TL;DR: a medium-sized two-floor apartment with VRay lighting baked into textures needs around 5 hours of setup (with Unity) to produce a walkthrough application that works on both Oculus and Vive.
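The collider naming convention mentioned above could be applied on the DCC side before export. A minimal sketch, assuming Blender's Python API and a hypothetical '_COLBOX'/'_COLMESH' suffix convention (the suffixes, the 500-triangle threshold, and the Unity-side import script that would consume them are all assumptions, not an established standard):

```python
import bpy

# Hypothetical convention: a name suffix tells a Unity-side import script
# (not shown here) which collider to attach on import.
#   _COLBOX  -> cheap primitive box collider
#   _COLMESH -> mesh collider, only where a primitive can't approximate the shape
TRI_THRESHOLD = 500  # assumed cutoff between "simple prop" and "detailed geometry"

for obj in bpy.data.objects:
    if obj.type != 'MESH' or obj.name.endswith(("_COLBOX", "_COLMESH")):
        continue
    # Each n-gon contributes n-2 triangles after triangulation.
    tri_estimate = sum(len(p.vertices) - 2 for p in obj.data.polygons)
    obj.name += "_COLBOX" if tri_estimate < TRI_THRESHOLD else "_COLMESH"
```

On the Unity side, a short editor script would then read these suffixes during FBX import and add the matching collider components.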
  8. @landvr1 ... Thanks, quality posts on this forum, I definitely plan to stick around. Storytelling is something that VR media have proven to be good at. Almost all currently available content for both Vive and Oculus is far more of an 'experience' than a game. Even though it steers away from this forum topic a bit, I found this blog post very insightful about storytelling in VR: https://storystudio.oculus.com/en-us/blog/5-lessons-learned-while-making-lost/ I would definitely like to hear from an architect or designer how, and whether, they would leverage storytelling in their work...
  9. My experience so far is also steering real-time rendered presentations into the design phase of the project. And I definitely agree that, due to the different roles and nature of offline and real-time solutions, both of them have a place in a project already today, with a tendency for real-time solutions to take over some roles of offline media and, as quality increases, to keep taking more of them. Currently, my typical scenario is one where small to medium-sized projects cannot be well thought out, or where initial ideas cannot be grasped by end clients from 2D plans. Then, architects and designers ask for a VR (real-time) solution to help with communication and design decisions. One major pitfall at the moment is material exchange. A standardized, widely supported format for material definition would really ease the use of different media for different purposes at different times. I hope Substance Designer files become that standard. P.S. Regarding the quality of real-time GI distribution, here is an example of the detail that can be achieved at the moment (image by Alex Lovett using Enlighten):
  10. The Beast lightmapper proved itself very well in Unity versions below 5. The newer Enlighten is much less reliable and predictable, to the point that the older Beast got hacked into Unity 5+ versions. Why are LPV and VXGI useless for archviz? Is the resolution too low, or are they too rendering-expensive?
  11. This topic could maybe benefit from an update on the state of lighting/GI solutions in the currently widespread game engines. My two cents: generally, solutions that aim to show scene light distribution/GI can be separated into:
     - precomputed or 'baked': these solutions precompute the light distribution in the scene, and any lights or geometry added afterwards will not affect it. Example (notice that you can perfectly well program interactive actions such as changing the colors of walls, but the scene light distribution will not change accordingly). Best quality / low cost > VR ready / no dynamic changes in light distribution.
     - precomputed real-time: this is the term used to describe solutions that precompute relations between scene parts and then use this data in real time to change the light distribution if, for example, one big surface changes color and starts to bounce back more light, or a new light is added to the scene. Changing the geometry is not possible at runtime, as the relations between parts (visibility, weight factors) are precomputed (a toy numeric sketch of this idea follows below). Example (notice how the bounced light behind the sofa changes as its color is changed at 0:23, or the day & night cycle at 2:34): Unity Interactive Scene. Good quality / high cost > VR ready on at least a GTX 970 / dynamic changes supported besides moving geometry (you can perfectly well move geometry at runtime, of course, but the light distribution will not change accordingly).
     - fully real-time: solutions that support all changes to the scene at runtime and reflect those changes in the light distribution: changed geometry (height of walls, a wider window, a moved sofa), materials (the floor is now dark wood, so it reflects less light) and lights (adding new ones, changing current ones). Examples (Unity's SEGI beta solution, Unreal's VXGI development branch) and the previously linked VXGI example videos. Good quality / very high cost > Unity SEGI beta used in successful VR tests, not sure about VXGI / all dynamic runtime changes are reflected in the scene light distribution.
     State per game engine:
     - Unreal: the release version contains the precomputed solution 'Lightmass'; a separate development branch has NVidia's fully real-time 'VXGI'
     - Unity: the release version comes with the licensed third-party solution 'Enlighten', which offers both precomputed and precomputed real-time functionality. Also, an independent developer has created a fully real-time solution for Unity: Sonic Ether's SEGI beta on the Asset Store (the example video linked above uses the SEGI beta)
     - CryEngine: has a fully real-time solution; not sure about other options
     - Stingray: has a precomputed solution; not sure about other options
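To make the 'precomputed real-time' category concrete, here is a toy numeric sketch in Python (my own illustration, not any engine's actual code): the patch-to-patch weights stand in for Enlighten-style precomputed relations, so repainting a surface changes the bounce cheaply at runtime, while moving geometry would invalidate the precomputed table.

```python
import numpy as np

# Toy model of "precomputed real-time" GI. The patch-to-patch weights play
# the role of the precomputed relations (visibility, weight factors); at
# runtime only the cheap bounce loop reruns.

rng = np.random.default_rng(0)
n = 8                                          # patches in the toy scene
weights = rng.random((n, n))
np.fill_diagonal(weights, 0.0)                 # a patch doesn't light itself
weights /= weights.sum(axis=1, keepdims=True)  # "precomputed" once, offline

emitted = np.zeros(n)
emitted[0] = 1.0                               # patch 0 is the light source
albedo = np.full(n, 0.3)

def radiosity(albedo, bounces=3):
    # Iterative bounce: direct light plus light reflected off other patches.
    light = emitted.copy()
    for _ in range(bounces):
        light = emitted + albedo * (weights @ light)
    return light

before = radiosity(albedo)
albedo[3] = 0.9                                # "repaint" one big surface at runtime
after = radiosity(albedo)
print(np.round(after - before, 4))             # neighbours receive more bounced light

# Moving geometry would change `weights`, which is exactly the precomputed
# part -- hence this class of solutions can't handle geometry changes.
```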
  12. Hello to everybody! After lurking on the forum for some time, I decided to become an active part of it. I do real-time/VR visualization tutoring and development, so I hope to contribute to constructive discussions about it.