
Photogrammetry is a great way to generate 3D models, and scanned data is now widely used in the video game and VFX industries. Megascans is a great example: the beta will start soon, and they've done amazing work generating physically based textures and detailed geometry.

There is now a lot of information on the subject; Classy dog made some cool videos about it.

That Saturday afternoon the sky was overcast, so I decided to make a more complex model than last time. Here is the spot, 40 photos:

I used Agisoft PhotoScan to build the model and textures, on medium settings: with my 8 GB of RAM it's not possible to generate a really dense point cloud. I'll try again later with a more powerful computer.

From that model I generated two 8192 px textures. The rest is easy: export and integrate into CryEngine.

What I didn't do, and should have done, is remove all the lighting information from the textures. I'm still working on a proper way to do it. It's quite complicated, and the best approach is to capture the environment as an HDRI. But I don't have the equipment yet, just a mid-range camera.
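
To make the idea concrete, here is a minimal sketch of what de-lighting boils down to (not something I actually did for this scene): if you can estimate the incident lighting per texel, for example by baking the captured HDRI onto the same UV layout, you divide it out of the photographed colour to approximate the albedo. The buffer layout and the `delight` function here are just assumptions for illustration.

```cpp
// Naive de-lighting sketch: albedo is roughly the photographed colour divided by
// the estimated irradiance. Both buffers are plain RGB float arrays; how you get
// the irradiance map (e.g. by re-lighting the mesh with the captured HDRI and
// baking it to the same UV layout) is the hard part and is assumed here.
#include <algorithm>
#include <cstddef>
#include <vector>

struct RGB { float r, g, b; };

std::vector<RGB> delight(const std::vector<RGB>& photoTexture,
                         const std::vector<RGB>& irradianceMap)
{
    std::vector<RGB> albedo(photoTexture.size());
    const float eps = 1e-4f;   // avoid division by zero in dark texels
    for (std::size_t i = 0; i < photoTexture.size(); ++i)
    {
        albedo[i].r = std::min(photoTexture[i].r / std::max(irradianceMap[i].r, eps), 1.0f);
        albedo[i].g = std::min(photoTexture[i].g / std::max(irradianceMap[i].g, eps), 1.0f);
        albedo[i].b = std::min(photoTexture[i].b / std::max(irradianceMap[i].b, eps), 1.0f);
    }
    return albedo;
}
```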

A team from Epic did a good job of this for their Kite level: Video

So in the demo it's not possible to change the lighting. Everything is baked. Sadly.

Actually, the biggest part of this small project was building the Flowgraph logic:

First, I made the rotation logic. The camera is actually still; it's the whole level that rotates. That's possible here because the level is very simple: every piece of geometry is linked to one main geom entity. But it brings some artifacts: while the objects are rotating, the rendering is buggy, and I don't know why yet, so I'm considering a system where the camera rotates instead. In practice, when a mouse button is pressed, I take the X coordinate and turn it into a rotation vector, which in the end rotates the main entity.
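
Expressed as code instead of nodes, that graph does roughly the following. The input and entity hooks are hypothetical stand-ins for the Flowgraph input and entity nodes; only the math mirrors what the graph does.

```cpp
// Rough C++ equivalent of the rotation Flowgraph: while a mouse button is held,
// the horizontal mouse coordinate drives the yaw of the main "geom" entity that
// every prop is linked to (the camera itself never moves).
#include <cmath>

// Hypothetical engine hooks, declared here only to show the shape of the logic.
bool  isMouseButtonDown();
float getMouseX();                       // current mouse X, in pixels
void  setMainEntityYaw(float degrees);   // writes the rotation of the parent entity

static float s_lastMouseX = 0.0f;
static float s_yawDegrees = 0.0f;

void updateRotation()
{
    const float mouseX = getMouseX();
    if (isMouseButtonDown())
    {
        const float sensitivity = 0.2f;                  // degrees per pixel of mouse travel
        s_yawDegrees += (mouseX - s_lastMouseX) * sensitivity;
        s_yawDegrees  = std::fmod(s_yawDegrees, 360.0f); // keep the angle bounded
        setMainEntityYaw(s_yawDegrees);                  // rotates the whole linked level
    }
    s_lastMouseX = mouseX;
}
```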

The next piece of logic was the zoom, which was actually more complicated, because I wanted to clamp it between FOV 30 and 45. So I used a combination of Gate and InRange nodes.
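
In code form, the Gate/InRange combination amounts to clamping the FOV, roughly like this (the wheel input and camera setter are again hypothetical stand-ins):

```cpp
// Rough equivalent of the zoom Flowgraph: mouse-wheel steps change the camera FOV,
// and the Gate/InRange nodes keep it inside the 30-45 degree window.
#include <algorithm>

float getMouseWheelDelta();          // hypothetical: +1 / -1 per wheel notch
void  setCameraFov(float degrees);   // hypothetical: writes the view camera FOV

static float s_fovDegrees = 45.0f;   // start fully zoomed out

void updateZoom()
{
    const float stepDegrees = 2.5f;                      // zoom speed per wheel notch
    s_fovDegrees -= getMouseWheelDelta() * stepDegrees;  // wheel up means zoom in
    s_fovDegrees  = std::clamp(s_fovDegrees, 30.0f, 45.0f);
    setCameraFov(s_fovDegrees);
}
```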

After that, I wanted to link the depth of field to the zoom system, to get some kind of physically based camera: the more you zoom in, the stronger the blur and the shorter the in-focus range. The focus distance is given by a raycast from the camera to the geometry.
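
The idea in code: remap the current FOV to a blur strength and focus range, and take the focus distance from the camera raycast. The raycast and DoF setter below are hypothetical hooks; only the mapping is the point.

```cpp
// Rough equivalent of the depth-of-field Flowgraph: the further you zoom in
// (lower FOV), the stronger the blur and the shorter the in-focus range.
// Focus distance comes from a ray cast from the camera into the geometry.

float castCameraRay();   // hypothetical: distance to the geometry under the view centre
void  setDepthOfField(float focusDist, float focusRange, float blurAmount);  // hypothetical

void updateDepthOfField(float fovDegrees)   // fovDegrees comes from the zoom logic, 30..45
{
    // 0 at FOV 45 (zoomed out), 1 at FOV 30 (zoomed in)
    const float zoom = (45.0f - fovDegrees) / (45.0f - 30.0f);

    const float blurAmount = 0.2f + 0.8f * zoom;   // more zoom, stronger blur
    const float focusRange = 10.0f - 8.0f * zoom;  // more zoom, shorter sharp range (metres)
    const float focusDist  = castCameraRay();      // focus on whatever the camera looks at

    setDepthOfField(focusDist, focusRange, blurAmount);
}
```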

For the rendering, I sharpen the image a little and use a linear filter on the textures to avoid the blur from the mipmaps.



You can find the main topic here, and download the demo here.