
Environment reconstruction + HDR projections by Xuan Prada

I've been working on the reconstruction of this fancy environment in Hackney Wick, East London.
The idea behind this exercise was to recreate the environment in terms of shape and volume, and then project HDRIs onto the geometry. Doing this we get more accurate lighting contribution, occlusion, reflections and colour bleeding, and much better interaction between the environment and the 3D assets, which basically means better integrations for our VFX shots.

I tried to make it as simple as possible, spending just a couple of hours on location.

  • The first thing I did was draw some diagrams of the environment and, using a laser measurer, cover the whole place, writing down all the information I would need later when working on the virtual reconstruction.
  • Then I did a quick map of the environment in Photoshop with all the relevant information. Just to keep all my annotations clean and tidy.
  • Drawings and annotations would have been good enough for this environment, just because it's quite simple, but in order to make it better I decided to scan the whole place. Lidar scanning is probably the best solution for this, but I decided to do it using photogrammetry. I know it takes more time, but you get textures at the same time: not only texture placeholders, but true HDR textures that I can use later for projections.
  • For the photogrammetry process I took around 500 shots of the whole environment, every single one composed of 3 bracketed exposures, 3 stops apart. This gives me a good dynamic range for this particular environment.
  • The photogrammetry solve produced a very dense point cloud, just perfect for geometry reconstruction.
  • I combined the 3 brackets to create rectilinear HDR images, then exported them as both HDR and LDR. The .exr HDRs will be used for texturing and the .jpg LDRs for the photogrammetry solve (there is a minimal merge sketch after this list).
  • I also shot a few equirectangular HDRIs with an even higher dynamic range. Then I projected these in Mari using the environment projection feature. Once I completed the projections from the different tripod positions, I covered the remaining areas with the rectilinear HDRs.
  • These are the five different HDRI positions and some render tests.
  • The next step is to create a proxy version of the environment. Having the 3D scan, this is so simple to do, and the final geometry will be very accurate because it's based on photos of the real environment. You could also do a very high detail model, but in this case the proxy version was good enough for what I needed.
  • Then, high resolution UV mapping is required to get good texture resolution. Every single one of my photos is 6000x4000 pixels. The idea is to project some of them (we don't need all of them) through the photogrammetry cameras. This means great texture resolution if the UVs are good. We could even create full 3D shots and the resolution would hold up.
  • After that, I imported into Mari a few cameras exported from Photoscan and the corresponding rectilinear HDR images, applied the same lens distortion to them and projected them in Mari and/or Nuke through the cameras, always keeping the dynamic range.
  • Finally, I exported all the UDIMs (around 70) to Maya, all of them 16-bit images with the original dynamic range required for 3D lighting.
  • After mipmapping them (there is a mipmapping sketch after this list), I did some render tests in Arnold and everything worked as expected. I can play with the exposure and get great lighting information from the walls, floor and ceiling. I also did a few render tests with this old character.
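
For reference, this is roughly how each set of 3 brackets could be merged into an HDR plus a tone-mapped LDR copy. It's a minimal sketch using OpenCV; the file names and exposure times are placeholders, not the actual values used on location.

```python
# Minimal sketch: merge 3 bracketed exposures (3 stops apart) into an HDR
# for texturing plus a tone-mapped 8-bit LDR for the photogrammetry solve.
# File names and exposure times below are placeholders.
import cv2
import numpy as np

files = ["IMG_0001_under.jpg", "IMG_0001_mid.jpg", "IMG_0001_over.jpg"]
times = np.array([1 / 1000.0, 1 / 125.0, 1 / 15.0], dtype=np.float32)  # assumed base of 1/125s

images = [cv2.imread(f) for f in files]
hdr = cv2.createMergeDebevec().process(images, times=times)
cv2.imwrite("IMG_0001.hdr", hdr)  # HDR image, keeps the full dynamic range

ldr = cv2.createTonemap(gamma=2.2).process(hdr)  # tone-mapped copy for the solve
cv2.imwrite("IMG_0001.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))
```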
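
To mipmap the exported UDIMs for Arnold, a loop like this one could be used. It's a minimal sketch that shells out to maketx; the texture folder and naming convention are placeholders, not the ones from this project.

```python
# Minimal sketch: convert the exported UDIM .exr textures into mipmapped .tx
# files for Arnold by calling maketx. Folder and naming convention are placeholders.
import glob
import subprocess

for exr in sorted(glob.glob("/textures/env/env_hdr.1*.exr")):  # e.g. env_hdr.1001.exr ...
    tx = exr.replace(".exr", ".tx")
    subprocess.check_call(["maketx", "-o", tx, exr])
```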

VFX footage input/output by Xuan Prada

This is a very quick and dirty explanation of how footage, and especially colour, is managed in a VFX facility.

Shooting camera to Lab
The RAW material recorded on set goes to the lab, where it is converted to .dpx, the standard film format. Sometimes they might use .exr, but it's not that common.
A lot of movies are still shot on film; in those cases the lab will scan the negatives and convert them to .dpx to be used along the pipeline.

Shooting camera to Dailies
The RAW material recorded on set goes to dailies. The cinematographer (DP) or the DI department applies a primary LUT or colour grade to be used throughout the project.
The original scans with the LUT applied are converted to low quality scans, and .mov files are generated for distribution.

Dailies to Editorial
The editorial department receives the low quality scans (Quicktimes) with the LUT applied.
They use these files to make the initial cuts and for bidding.

Editorial to VFX
VFX facilities receive the low quality scans (Quicktimes) with the LUT applied. They use these files for bidding.
Later on they will use them as reference for colour grading.

Lab to VFX
The lab provides high quality scans to the VFX facility. This is pretty much RAW material, and the LUT needs to be applied.
The VFX facility will have to apply the film LUT to the work they create from scratch.
When the VFX work is done, the VFX facility renders out exr files.

VFX to DI
DI will do the final grading to match the Editorial Quicktimes.

VFX/DI to Editorial
High quality material produced by the VFX facility goes to Editorial to be inserted in the cuts.


The basic practical workflow would be:

  • Read raw scan data.
  • Read Quicktime scan data.
  • .dpx scans are usually in LOG colour space.
  • .exr scans are usually in LIN (linear) colour space.
  • Apply LUT and other color grading to the RAW scans to match the Quicktime scans.
  • Render out to Editorial using the same color space used for bringing in footage.
  • Render out Quicktimes using the same colour space used for viewing. If viewing, for example, in sRGB you will have to bake in the LUT.
  • Good Quicktime settings: colorspace sRGB, codec Avid DNxHD, 23.976 fps, depth millions of colours, RGB levels, no alpha, 1080p/23.976 DNxHD 36 8-bit.
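
As a rough illustration of that round trip, here is a minimal Nuke Python sketch. The node types and knobs are standard Nuke ones, but the file paths, the LUT file and the colorspace names are placeholders and would come from your show's setup.

```python
# Minimal sketch of the read / LUT / write round trip in Nuke's Python API.
# Paths, the LUT file and colorspace names are placeholders.
import nuke

# Read the high quality scans in the colorspace they were delivered in (dpx -> LOG)
plate = nuke.nodes.Read(file="/shots/sh010/plate/plate.%04d.dpx", colorspace="Cineon")

# Apply the show LUT provided by the DP so it matches the editorial Quicktimes
show_lut = nuke.nodes.OCIOFileTransform(file="/shots/luts/show_grade.cube")
show_lut.setInput(0, plate)

# ... comp work happens in between ...

# Render back to editorial in the same colorspace the footage came in with
to_editorial = nuke.nodes.Write(file="/shots/sh010/comp/comp.%04d.dpx", colorspace="Cineon")
to_editorial.setInput(0, plate)

# Render a viewing Quicktime with the LUT baked in, in the viewing colorspace (sRGB)
to_review = nuke.nodes.Write(file="/shots/sh010/mov/comp_review.mov", colorspace="sRGB")
to_review.setInput(0, show_lut)
```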

Colour Spaces in Mari by Xuan Prada

Mari is the standard tool these days for texturing in VFX facilities. There are many reasons for it, but one of the most important is that Mari is probably the only dedicated texturing software that can handle colour spaces. In a film environment this is a very important feature, because working without control over colour profiles is pretty much like working blind.
That's why Mari and Nuke are the standard tools for texturing. We also include ZBrush as a standard tool for texture artists, but only for displacement map work where colour management doesn't play a key role.

Right now colour management in Mari is not complete, at least not as good as Nuke's, where you can control colour spaces for input and output sources. But Mari offers basic colour management tools that are really useful in film environments. We have Mari Colour Profiles and OpenColorIO (OCIO).

As texture artists we usually work with Float Linear and 8-bit Gamma sources.

  • I've loaded two different images in Mari. One of them is a linear .exr and the other one is a Gamma 2.2 .tif.
  • With the colour management set to none, we can check both images to see the differences between them.
  • We'll get same results in Nuke. Consistency is extremely important in a film pipeline.
  • The first way to manage colour spaces in Mari is via LUTs. Go to the colour space section and choose the LUT of your project, usually provided by the cinematographer. Then change the Display Device and select your calibrated monitor. Change the Input Color Space to Linear or sRGB depending on your source material. Finally, change the View Transform to your desired output, like Gamma 2.2, Film, etc.
  • The second, and recommended, method for colour management in Mari is using OCIO files. We can load these kinds of files in Mari in the Color Manager window. These files are usually provided by the cinematographer or the production company in general. Then just change the Display Device to your calibrated monitor, the Input Color Space to your source material and finally the View Transform to your desired output.
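
For reference, the same kind of OCIO config that Mari loads can also be exercised from Python. This is a minimal sketch using the PyOpenColorIO bindings (OCIO v1-style API); the config path and the colour space names are placeholders and depend on what your production provides.

```python
# Minimal sketch: apply an OCIO colour transform with the OCIO v1-style
# Python bindings. The config path and the colour space names ("linear",
# "sRGB") are placeholders; use the ones defined in your show's config.
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("/path/to/config.ocio")
processor = config.getProcessor("linear", "sRGB")

# Transform a single linear RGB value to the display colour space
linear_pixel = [0.18, 0.18, 0.18]
display_pixel = processor.applyRGB(linear_pixel)
print(display_pixel)
```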

Breaking a character's face in Modo by Xuan Prada

A few years ago I worked on Tim Burton's Dark Shadows at MPC. We created a full CG face for Eva Green's character Angelique.
Angelique had a fight with Johnny Depp's character Barnabas Collins, and her face and upper body get destroyed during the action.

In that case, all the broken parts were painted by hand as texture masks, and then the FX team generated 3D geometry and simulations based on those maps, using them as guides.

Recently I had to do a similar effect, but in this particular case, the work didn't require hand painting textures for the broken pieces, just random cracks here and there.
I did some research about how to create this quickly and easily, and found out that Modo's shatter command was probably the best way to go.

This is how I achieved the effect in no time.

First of all, let's have a look at Angelique, played by Eva Green.

 

  • Once in Modo, import the geometry. The only requirement to use this tool is that the geometry has to be closed. You can close the geometry quick and dirty; this is just to create the broken pieces, and later on you can remove all the unwanted faces.
  • I have already painted texture maps for this character, and I have a good UV layout, as you can see here. This breaking tool is going to generate additional faces, adding new UV coordinates, but the existing UVs will remain as they are.
  • In the Setup tab you will find the Shatter command.
  • Apply, for example, the Uniform type.
  • There are some cool options like number of broken pieces, etc.
  • Modo will create a material for all the interior pieces that are going to be generated. So cool.
  • Here you can see all the broken pieces generated in no time.
  • I'm going to scale down all the pieces in order to create a tiny gap between them. Now I can see them easily.
  • In this particular case (as we did with Angelique) I don't need the interior faces at all. I can easily select all of them using the material that Modo generated automatically.
  • Once all the faces are selected, just delete them.
  • If I check the UVs, they seem to be perfectly fine. I can see some weird stuff caused by the fact that I quickly closed the mesh, but I don't worry about it at all, as I will never see those faces.
  • I'm going to start again from scratch.
  • The uniform type is very quick to generate, but all the pieces are very similar in scale.
  • In this case I'm going to use the cluster type. It will generate more random pieces, creating nicer results.
  • As you can see, it looks a bit better now.
  • Now I'd like to generate local damage in one of the broken areas. Let's say that a bullet hits the piece and it falls apart.
  • Select the fragment and apply another shatter command. In this case I'm using cluster type.
  • Select all the small pieces and disable the gravity parameter under dynamics tab.
  • Also set the collision set to mesh.
  • I placed a sphere on top of the fragments, then activated its rigid body component. With the gravity force activated by default, the sphere will hit the fragments, creating a nice effect.
  • Play with the collision options of the fragments to get different results.
  • You can see the simple but effective simulation here.

  • This is a quick clay render showing the broken pieces. You can easily increase the complexity of this effect with little extra cost.
  • This is the generated model, with the original UV mapping and high resolution textures applied in Mari.
  • Works like a charm.

Introduction to scatterers in Clarisse by Xuan Prada

Scatterers in Clarisse are just great. They are very easy to control, reliable and they render in no time.
I've been using them for matte painting purposes: just feed them a bunch of different trees to create a forest in 2 minutes, add some nice lighting and render at an insane resolution. Then use all the 3D material with all the needed AOVs in Nuke and you'll have full control to create stunning matte paintings.

To make this demo a bit more fun, instead of trees I'm using cool Lego pieces :)

  • Create a context called obj and import the grid.obj and the toy_man.obj
  • Create another context called shaders and create generic shaders for the objs.
  • Also create two textures and load the images from the hard drive.
  • Assign the textures to the diffuse input of each shader, and then assign each shader to the corresponding obj.
  • Set the camera to see the Lego logo.
  • Create a new context called crowd, and inside of it create a point cloud and a scatterer.
  • In the point cloud set the parent to be the grid.
  • In the scatterer set the parent to be the grid as well.
  • In the scatterer set the point cloud as geometry support.
  • In the geometry section of the scatterer add the toy_man.
  • Go back to the point cloud and in the scattering geometry add the grid.
  • Now play with the density. In this case I’m using a value of 0.7

  • As you can see all the toy_men start to populate the image.

  • In the decimate texture add the Lego logo. Now the toy_men stick to the logo.
  • Add some variation in the scatterer position and rotation.
  • That’s it. Did you realise how easy it was to set up this cool effect? And did you check the polycount? 108.5 million :)
  • In order to make this look a little bit better, we can remove the default lighting and do some quick IBL setup.

Final render.

Find references with Google sets by Xuan Prada

Some time ago I wrote a post about my workflow to find image references and general information, useful when you are in the researching phase of your work.

I’ve found a nice Google tool which helps you find information related to a topic you type in.

For example, if you need references for American cars, but you only know the companies “Chevrolet” and “Buick”, Google gives you other related American car companies.
It’s very useful!

Google Sets.