Light & Art: Using SVOTI to Build Better Games

Just a quick post to link to an interview I did with an excellent website.

Thanks Eighty Level!

The Cloud VR

The Cloud VR is a quick experiment with virtual reality in CRYENGINE.

Since 3.8.1, the engine officially supports the Oculus Rift. I've been experimenting with VR since the beginning of Oculus with the DK1, and I think that, if the technology is used correctly, it could be a great evolution in how we consume media. It could change education, culture, cinema, video games... so many fields can benefit from it. With this small demo I just wanted to check the tech side: all the console variables, some performance tips, and the way our eyes understand volume and distance.

There is nothing to do in it; I didn't have time to make it interactive, even though I have a few ideas. It was actually made in two hours on a Saturday night.

You can find the download link here:

My inspiration for this project is the recent work of Charles Pétillon at London's Covent Garden.

Making of: Photogrammetry Père-Lachaise

Photogrammetry is a great way to generate 3D models, and scanned data is now often used in the video game and VFX industries. Megascans is a great example; the beta will start soon, and they've done amazing work generating physically based textures and detailed geometry.

There is now a lot of information on the subject; Classy Dog has made some cool videos about it.

That Saturday afternoon the sky was overcast, so I decided to make a more complex model than last time. Here is the spot, 40 photos:

I used Agisoft PhotoScan to make the model and textures. Medium settings; with my 8 GB of RAM it wasn't possible to generate a really dense point cloud. I'll try again later on a more powerful computer.

From that model I generated two 8192px textures. The rest is easy: export and integrate into the Cryengine.

What I didn't do, and should have, is remove all the lighting information from the textures. I'm still working on a proper way to do it. It's quite complicated, and the best approach is to capture the environment as an HDRI. But I don't have the equipment yet, just a mid-range camera.

A team from Epic did a good job for their Kite level: Video

So in the demo it's not possible to change the lighting. Everything is baked. Sadly.

Actually, the biggest part of this small project was building the Flowgraph logic:

First, I made the rotation logic. In fact, the camera is still; it's the whole level that rotates. That's possible here because the level is very simple: every piece of geometry is linked to one main geom entity. But it brings some artifacts: while the objects are rotating, the rendering is buggy, and I don't know why yet, so I'm thinking about a system where the camera rotates instead. In the current setup, when a mouse button is pressed, I take the X coordinate and transform it into a vector, which in the end rotates the main entity.
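The drag-to-rotate idea can be summed up in a few lines. This is a minimal sketch with hypothetical helper names; the real version is wired up with Flowgraph nodes, not code.

```python
def rotation_step(current_yaw_deg, mouse_x_delta, sensitivity=0.25):
    """Map a horizontal mouse delta to a new yaw angle, in degrees.

    In the demo this yaw drives the rotation of the main geom entity
    that every other piece of geometry is linked to, so rotating one
    entity spins the whole level around the still camera.
    """
    return (current_yaw_deg + mouse_x_delta * sensitivity) % 360.0
```

The `sensitivity` value is an assumption; the Flowgraph uses whatever scale factor feels right when dragging.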

The next piece of logic was the zoom, quite a bit more complicated in fact, because I wanted to clamp it between FOV 30 and 45. So I used a combination of Gate and InRange nodes.

What I wanted next was to link the depth of field to the zoom system, to get a kind of physically based camera: the more you zoom, the stronger the blur and the shorter the range. The focus distance is defined by a raycast from the camera to the geometry.

For the rendering part, I sharpen the image a bit and use linear filtering on the textures, to avoid blur on the mipmaps.

You can find the main topic here, and download the demo here.

Baron Haussmann 0.4

After the release of version 3.8.1 of the Cryengine, I wanted to test the new SVOTI feature.

Two goals: first, showcase the feature as a tech demo; second, see if it's possible to achieve high-quality archviz with real-time GI.

I decided to test it with an indoor scene, to push the limits of the system. Indoor scenes need precision.

I used the Baron Haussmann scene by Bertrand Benoit, optimized it for real time, and the GI system worked really well right away. After days of tweaks, optimizations, and a lot of help from Vladimir Kajalin (who made the system), the Baron Haussmann demo version 0.4.6 is available for download.

This version is stable and works pretty well, but it lacks many features I will introduce for the final 1.0 release. For the moment, the sky intensity is not dynamic, reflections from the voxels do not work yet, the cubemaps are static, and it's missing the furniture that will hopefully bring life to the apartment.

There are not a lot of images, and no video from me, because it's still very much a work in progress, and I don't want to spread it until I consider it done.

The demo is heavily demanding; my goal is realism, and the optimization process is not complete. The SVOTI system is used at its best, with high-quality settings: minimum voxel size, 2 bounces, fully dynamic. You can find the full information about the demo here, or directly download Baron Haussmann 0.4.6.

In this demo, it is possible to:

-Visit the apartment freely; the physics is optimized.

-Change the time of day with the right and left mouse buttons, and see how the GI reacts as the sun moves.

-See the apartment with GI disabled, with 1 bounce, or with 2 bounces.

-Zoom with [E]; a raycast determines the distance and range of the depth of field.

-Activate/deactivate the many lights in the apartment with [R] (dynamic GI).


Here are some of the flowgraphs I made for the logic.

The global FG for the level looks like this:

Depth of field:

When [E] is pressed, I get the current FOV and interpolate it to 35 over 0.3 seconds. At the same time I activate the DoF node and open the gate that traces a raycast every frame to get the distance. I found out that the raycast is not generated from the exact position of the camera, so I add one meter, then multiply this value by 1.5 to set the range. When [E] is released, I disable the DoF node and close the gate.
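The distance correction described above is simple arithmetic. A sketch of the per-frame computation (the node names and wiring live in the Flowgraph, not in code):

```python
def dof_from_raycast(ray_hit_distance):
    """Derive depth-of-field focus distance and range from the
    per-frame camera raycast.

    The ray doesn't start exactly at the camera position, so one
    meter is added to compensate; the DoF range is then that
    corrected distance multiplied by 1.5.
    """
    focus_distance = ray_hit_distance + 1.0
    focus_range = focus_distance * 1.5
    return focus_distance, focus_range
```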


For every light there is a proximity trigger; when you're inside it, you can toggle the light. When [R] is pressed, the gametoken value (a bool) flips to activate or deactivate the light. In this example, I also modify the material, to change the diffuse color and add some glow.
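The trigger-gated toggle can be sketched like this, with a hypothetical token store standing in for the engine's gametokens:

```python
# Hypothetical stand-in for the engine's gametoken storage.
game_tokens = {"light_on": False}

def on_r_pressed(token, inside_trigger):
    """Flip a light's bool gametoken, but only while the player is
    inside that light's proximity trigger; otherwise the press is
    ignored. Returns the (possibly updated) token value.

    In the demo, flipping the token also swaps the light fixture's
    material: brighter diffuse color and some glow when it's on.
    """
    if inside_trigger:
        game_tokens[token] = not game_tokens[token]
    return game_tokens[token]
```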

For the release of version 3.8.3 of the Cryengine, I'll move to version 0.5, which will add some lighting improvements and new possibilities.

SVOTI Cryengine scene 1

I've been testing the new Sparse Voxel Octree Total Illumination feature of the Cryengine, and it's amazing. It's fast, accurate, allows 2 bounces in real time, and even voxel reflections. Here is a first quick test with a full time-of-day animation:

In this scene I use integration mode 2, with specular, a cone max length of 30, and a diffuse cone width of 8.
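For reference, these settings map onto the engine's SVOTI console variables. The cvar names below are from memory and should be checked against your engine version's svoTI cvar list:

```
e_svoTI_IntegrationMode = 2
e_svoTI_ConeMaxLength = 30
e_svoTI_DiffuseConeWidth = 8
```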

After rain pavement

The first time I tried photogrammetry. Overcast weather, a bit of free time: let's go outside and take some photos! This is part of a future project I'd like to do, so there's more to come.

Here is an example with pavement: 18 photos, walking around it, nothing particularly precise.

I used PhotoScan; around 2 million polygons, since I wanted to test the maximum possible, with an 8K texture.

Great software: simple, and pretty fast.

Rendered in Octane, PMC kernel, 500 samples/pixel, camera DoF, around 7 minutes at 1800x1200.

The problem was the color grading. I don't have a gray card (yet), so the texture is not a correct albedo: it contains lighting information. I tried to remove the shadows in Photoshop anyway, but the shadows in the final image are too dark. A specular map is present; a bump map isn't necessary with so many polys. The solution to avoid a blueish render was, this time, to desaturate the HDRI.

Marble SSS

It's not easy to get good SSS in Octane. Only two shaders support it: the specular and the diffuse. I don't really understand why we need to mix materials to get the effect. Well, I don't really like mixing a diffuse and a glossy material, so this is a specular material. Not many tests, but the result is not bad. I'm really looking forward to seeing a good implementation of a skin shader in Octane, like Arnold has.

Model by Bertrand Benoit.

(Yes, it is way too glossy)

Parisian apartment

Old renders; these were actually my first renders with Octane. Models by Bertrand Benoit. At the time I was testing the GI: how much depth do we need for a realistic render without too much render time? Conclusion: it depends on the scene, of course, but most of the time a diffuse depth of 3 or 4 at most (2 or 3 bounces).

And I played with the shader parameters, because, well, that's what I like to do.

Cornell Box

Yep, I couldn't sleep the other night, so here is my Cornell Box: fog volume inside, PMC kernel, diffuse depth 4, glossy depth 6, caustic blur 0.01.


I've always wanted to recreate virtual water realistically. It can be really hard (interactions, soft particles, different wind effects, etc.), but now, with the right tools, really simple. So I chose the simple way this time. Well, you know, it was just three hours during a night I couldn't sleep.

At work I tested Houdini (and its great renderer, Mantra), and its ocean system was quite impressive. And a guy, Guillaume Plourde, created a plugin for 3ds Max, Hot4Max: few settings, easy to set up. Here are some quick tests.

I used Octane Render, Direct Lighting mode, about 2 minutes per frame at 720p, a physical sky, and a glass shader with absorption.

I hope to have more time to continue these little tests.