
Intro to Gaussian splatting by Xuan Prada

Bibliography

  • https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

  • https://github.com/aras-p/UnityGaussianSplatting

  • https://www.youtube.com/watch?v=KFOy354zf9E

Hardware

- You need an Nvidia GPU with at least 24GB VRAM.

Software

  • Git https://git-scm.com/downloads

  • Once installed, open a command line and type git --version to check if it's working.

  • Anaconda https://www.anaconda.com/download

  • It will install all the packages and wrappers that you need.

  • CUDA toolkit 11.8 https://developer.nvidia.com/cuda-toolkit-archive

  • Once installed open a command line and type nvcc --version to check if it's working.

  • Visual Studio with C++ https://visualstudio.microsoft.com/vs/older-downloads/

  • Once installed, open Visual Studio Installer and install the "Desktop development with C++" workload.

  • Colmap https://github.com/colmap/colmap/releases

  • This tool is used to estimate the camera poses.

  • Add it to environment variables.

  • Edit the environment variables, double click on the "Path" variable, add a new entry and paste the path where Colmap is stored.

  • ImageMagick https://imagemagick.org/script/download.php

  • This tool is for resizing images (a batch resize example is shown after this list).

  • Test it by typing these lines one by one in the command line.

  • magick logo: logo.gif

  • magick identify logo.gif

  • magick logo.gif win:

  • FFMPEG https://ffmpeg.org/download.html

  • Add it to environment variables.

  • Open a command line and type ffmpeg to check if it's working.

  • To extract frames from a video, go to the folder where FFmpeg was downloaded.

  • Type ffmpeg.exe -i pathToVideo.mov -vf fps=2 out%04d.jpg (fps=2 extracts two frames per second).

  • Finally restart your computer.
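
As mentioned above, ImageMagick is used to resize the photos. The logo commands only verify the install, so here is an example of a typical batch resize (not part of the original notes); note that mogrify overwrites the files in place, so run it on a copy of your photo folder:

magick mogrify -resize 50% *.jpg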

How to capture Gaussian splats?

  • Same rules as photogrammetry, but fewer images are needed.

  • Do not move too fast, we don't want blurry frames.

  • Take between 200 - 1000 photos.

  • Use a fixed exposure, otherwise it will create flickering in the final model.

Processing

  • Create a folder called "dataset".

  • Inside create another folder called "input" and place all the photos.

  • Now we need to use Colmap to obtain the camera poses. You could use RealityCapture or Metashape to do the same thing.

  • We can do this from the command line, but for simplicity let's use the GUI (a command-line sketch is included after this list).

  • Open Colmap, File - New. Set the database to your "dataset" folder and call it database.db. Set the images path to the "input" folder. Save.

  • Processing - Feature extraction. Enable "shared for all images" if the zoom didn't change between your photos. Click on Extract. This will take a few minutes.

  • Processing - Feature matching. Sequential is faster, exhaustive is more precise. This will take a few minutes.

  • Save the Colmap scene in "dataset" - "colmap". (create the folder).

  • Reconstruction - Reconstruction options. Uncheck multiple_models as we are reconstructing a single scene.

  • Reconstruction - Start reconstruction. This is the longest step, potentially hours, depending on the number of photos.

  • Once Colmap has finished you will see the camera poses and the sparse pointcloud.

  • File - Export model and save it in "dataset/distorted/sparse/0" (create the directories first).
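
For reference, the same Colmap steps can be run from the command line instead of the GUI. This is only a sketch based on Colmap's standard CLI tools, so double check the exact flag names with colmap -h for your version, and create the dataset/distorted/sparse folder before running the mapper:

colmap feature_extractor --database_path dataset/database.db --image_path dataset/input --ImageReader.single_camera 1
colmap sequential_matcher --database_path dataset/database.db
colmap mapper --database_path dataset/database.db --image_path dataset/input --output_path dataset/distorted/sparse --Mapper.multiple_models 0

Here --ImageReader.single_camera 1 plays the role of "shared for all images", and --Mapper.multiple_models 0 matches unchecking multiple_models in the GUI.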

Train the 3D Gaussian splatting model

  • Open a command line and type git clone https://github.com/graphdeco-inria/gaussian-splatting --recursive

  • This will be downloaded into your user folder as gaussian-splatting.

  • Open an anaconda prompt and go to the directory where the gaussian-splatting was downloaded.

  • Type these lines one at a time.

  • SET DISTUTILS_USE_SDK=1

  • conda env create --file environment.yml

  • conda activate gaussian_splatting

  • cd to the folder where gaussian-splatting was downloaded.

  • Type these lines one at a time.

  • pip install plyfile tqdm

  • pip install submodules/diff-gaussian-rasterization

  • pip install submodules/simple-knn

  • Before training the model we need to undistort the images.

  • Type python convert.py -s $FOLDER_PATH --skip_matching ($FOLDER_PATH is your "dataset" folder).

  • This is going to create a folder called sparse and another one called stereo, and also a couple of files.

  • Train the model.

  • python train.py -s $FOLDER_PATH -m $FOLDER_PATH/output

  • This will train the model and export two pointclouds, one at 7000 iterations and another one at 30000 iterations (see the note after this list for where they end up).
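
Assuming the default output layout of the gaussian-splatting repository (treat these paths as an assumption and check your own output folder), the two exported pointclouds should land here; these are the .ply files you will point the viewer or Unity at later:

$FOLDER_PATH/output/point_cloud/iteration_7000/point_cloud.ply
$FOLDER_PATH/output/point_cloud/iteration_30000/point_cloud.ply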

Visualizing the model

  • Download the viewer here: https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/binaries/viewers.zip

  • From a terminal: SIBR_gaussianViewer_app -m $FOLDER_PATH/output

  • Unity 2022.3.9f1

  • Load the project. https://github.com/aras-p/UnityGaussianSplatting

  • Tools - Gaussian splats - Create.

  • Select the pointcloud, create.

  • Select the gaussian splats game object and attach the pointcloud.

  • Do your thing!

Professional photogrammetry 02 by Xuan Prada

Here is the trailer for "Professional photogrammetry episode 02".

In this 4-ish hour video I will talk about scanning in natural environments. We will cover everything needed for scanning and processing two different types of natural assets.

The first asset is a 3D asset, the second one is a ground surface. Both scanning and processing workflows are completely different for each, but hopefully after this video, you should be able to grow your own library of natural environment assets.

We go through equipment, scanning patterns, camera settings, etc. Then we will process everything in order to create final high quality assets ready for visual effects and real time.

I hope you enjoy this video, if you like it I guess I will do more videos about scanning in the wild/city.

Thanks!
Xuan.

All the info on my Patreon.

Professional photogrammetry 01 by Xuan Prada

Hello patrons,

Let's take a break from the USD training, we'll come back to it in the next video.
For now, let's enjoy a 4-hour video about professional photogrammetry.
I reckon this will be a short series, maybe 3 videos about photogrammetry acquisition and processing.

In today's video we will talk about cross polarization in a controlled environment for capturing 3D assets. We will discuss how polarization works, camera gear, how to take photos properly and how to process the data to create nice looking assets for VFX and real time.

All the information on my Patreon.

I hope you like it.
Thanks!
Xuan.

Introduction to Reality Capture by Xuan Prada

In this 3 hour tutorial I go through my photogrammetry workflow using Reality Capture in conjunction with Maya, ZBrush, Mari and UV Layout.

I will guide you through the entire process, from capturing footage on set to asset completion. I will explain the most basic settings needed to process your images in Reality Capture, to create point clouds, high resolution meshes and placeholder textures.
Then I will continue to develop the asset in order to make it suitable for any visual effects production.

These are the topics included in this tutorial.

- Camera gear.
- Camera settings.
- Shooting patterns.
- Footage preparation.
- Photogrammetry software.
- Photogrammetry process in Reality Capture.
- Model clean up.
- Retopology.
- UV mapping.
- Texture re-projection, displacement and color maps.
- High resolution texturing in Mari.
- Render tests.

Check it out on my Patreon feed.

On-set tips: Creating high frequency detail by Xuan Prada

In a previous post I mentioned the importance of having high frequency details whilst scanning assets on-set. Sometimes if we don't have that detail we can just create it. Actually sometimes this is the only way to capture volumes and surfaces efficiently, especially if the asset doesn't have any surface detail, like white objects for example.

If we are dealing with assets that are being used on set but won't appear in the final edit, it is likely that those assets are not painted at all. There is no need to spend resources on it, right? But we might need to scan those assets to create a virtual asset that will ultimately be used on screen.

As mentioned before, if we don't have enough surface detail it will be very difficult to scan assets using photogrammetry, so we need to create high frequency detail our own way.

Let's say we need to create a virtual asset of this physical mask. It is completely plain and white, and we don't see much detail on its surface. We can create high frequency detail just by painting some dots, or placing small stickers across the surface.

In this particular case I'm using a regular DSLR + multi zoom lens, a tripod, a support for the mask and some washable paint. I prefer to use small round stickers because they create fewer artifacts in the scan, but I ran out of them.

I created this support a while ago to scan fruits and other organic assets.

The first thing I usually do (if the object is white) is to cover the whole object with neutral gray paint. It is much easier to balance the exposure photographing gray than white.

Once the gray paint is dry I just paint small dots or place the round stickers to create high frequency detail. The smaller the better.

Once the material has been processed you should get a pretty decent scan. Probably an impossible task without creating all the high frequency detail first.

Manfrotto Befree for visual effects by Xuan Prada

I've been using Manfrotto Befree tripods for a while now, and I just realized that they are a perfect tool for my on-set work.
I rarely use them as a primary tripod, especially when working with big and heavy professional DSLRs and multi zoom lenses. In my opinion these tripods are not stable enough to support such heavy pieces of gear.

I mean, they are if you are taking "normal" photos, but in VFX we usually do bracketing all the time, like for texturing references or HDRIs. The combination of the gear, plus the rotation of the mirror, plus the quick pace of the bracketing, will result in slightly different brackets, which obviously means that the alignment process will not be perfect. I wouldn't recommend using these tripods for bracketing with big camera bodies and multi zoom lenses. I do use them for bracketing with prime lenses such as a 28mm or 50mm. They are not that heavy and the tripods seem to be stable enough with these lenses.

I do strongly recommend these tripods for photogrammetry purposes when you have to move around the subject or set. Mirrorless cameras such as a Sony A7 or Sony a6000 plus prime lenses are the best combination when you need to move a lot around the set.

I also use Befrees a lot as support tripods. They just fit my Akromatic kits perfectly, both Mono and Twins. Befree tripods are tiny and light, so I can easily move around with two or three at once, and they even fit in my backpacks or hard cases.

As you can see below, these tripods offer great flexibility in terms of height and expansion. They are tiny when compact and middle sized when expanded completely. Check the features on Manfrotto's site.

I also use these tripods as support for my photogrammetry turntable.
Moving around with such a small setup has never been so easy.

Obviously I also use them for regular photography. Just attach my camera to the provided ball head and start shooting around the set.
Finally, I also use the Befree to mount my Nodal Ninja. Again you need to be careful while bracketing and always use a remote trigger, but having the possibility to move around with two or three of these tripods is just great.

There are two different versions (both in aluminium and carbon fibre). Both of them come with a ball head and quick release plate. But the ball head on the smallest tripod is fixed and can't be removed, which is quite limiting because you won't be able to attach most of the accessories normally used for VFX.

Promote Control + 5D Mark III by Xuan Prada

Each camera works a little bit differently regarding the use of the Promote Control System for automatic tasks. In this particular case I'm going to show you how to configure both the Canon EOS 5D Mark III and the Promote Control for their use in VFX look-dev and lighting image acquisition.

  • You will need the following:
    • Canon EOS 5D Mark III
    • Promote Control
    • USB cable + adapter
    • Shutter release CN3
  • Connect both cables to the camera and to the Promote Control.
  • Turn on the Promote Control and press simultaneously right and left buttons to go to the menu.
  • In setup menu 2, "Use a separate cable for shutter release", select yes.
  • In setup menu 9, "Enable exposures below 1/4000", select yes. This is very important if you need more than 5 brackets for your HDRIs.
  • Press the central button to exit the menu.
  • Turn on your Canon EOS 5D Mark III and go to the menu.
  • Mirror lock-up should be off.
  • Long exposure noise reduction should be off as well. We don't want to vary noise level between brackets.
  • Find your neutral exposure and pass the information on to the Promote Control.
  • Select the desired number of brackets and you are ready to go.



Akromatic base by Xuan Prada

As VFX artists we always need to place our color charts and lighting checkers (or practical spheres) somewhere on the ground while shooting bracketed images for panoramic HDRI creation. And we know that every single look-development and / or lighting artist is going to request at least all these references for their tasks back at the facility.

I'm tired of seeing my VFX peers on set placing their lighting checkers and color charts on top of their backpacks or hard cases to make them visible in their HDRIs. In the best scenario they usually put the lighting checkers on a tripod with its legs bent.

I've been using my own base to place my lighting checkers and all my workmates keep asking me about it, so it's time to make it available for all of you working on set on a daily basis.

The akromatic base is light, robust and made of high quality stainless steel. It is super simple to attach your lighting checkers to it and keep them safe and, more importantly, visible in all your images. Moving all around the set with your lighting checkers and color charts from take to take is now simple, quick and safe.

The akromatic base is compatible with our lighting checkers "Mono" and "Twins".

HDRI shooting (quick guide) by Xuan Prada

This is a quick introduction to HDRI shooting on set for visual effects projects.
If you want to go deeper on this topic please check my DT course here.

Equipment

The list below is professional equipment for HDRI shooting. Good results can be achieved using amateur gear, you don't necessarily need to spend a lot of money on HDRI capturing, but the better the equipment you own, the easier, faster and better results you'll get. Obviously this gear is based on my taste.

  • Lowepro Vertex 100 AW backpack
  • Lowepro Flipside Sport 15L AW backpack
  • Full frame digital DSLR (Nikon D800)
  • Fish-eye lens (Nikkor 10.5mm)
  • Multi purpose lens (Nikkor 28-300mm)
  • Remote trigger
  • Tripod
  • Panoramic head (360 precision Atome or MK2)
  • akromatic kit (grey ball, chrome ball, tripod plates)
  • Lowepro Nova Sport 35L AW shoulder bag (for akromatic kit)
  • Macbeth chart
  • Material samples (plastic, metal, fabric, etc)
  • Tape measure
  • Gaffer tape
  • Additional tripod for akromatic kit
  • Cleaning kit
  • Knife
  • Gloves
  • iPad or laptop
  • External hard drive
  • CF memory cards
  • Extra batteries
  • Data cables
  • Witness camera and/or second camera body for stills

All the equipment packed up. Try to keep everything small and tidy.

All your items should be easy to pick up.

Most important assets are: Camera body, fish-eye lens, multi purpose lens, tripod, nodal head, macbeth chart and lighting checkers.

Shooting checklist

  • Full coverage of the scene (fish-eye shots)
  • Backplates for look-development (including ground or floor)
  • Macbeth chart for white balance
  • Grey ball for lighting calibration 
  • Chrome ball for lighting orientation
  • Basic scene measurements
  • Material samples
  • Individual HDR artificial lighting sources if required

Grey and chrome spheres, extremely important for lighting calibration.

Macbeth chart is necessary for white balance correction.

Before shooting

  • Try to carry only the indispensable equipment. Leave cables and other stuff in the van, don’t carry extra weight on set.
  • Set up the camera, clean lenses, format memory cards, etc., before you start shooting. Extra camera adjustments may be required at the moment of shooting, but try to establish exposure, white balance and other settings before the action. Know your lighting conditions.
  • Have more than one CF memory card with you all the time ready to be used.
  • Have a small cleaning kit with you all the time.
  • Plan the shoot: Write a shooting diagram with your own checklist, with the strategies that you would need to cover the whole thing, knowing the lighting conditions, etc.
  • Try to plant your tripod where the action happens or where your 3D asset will be placed.
  • Try to reduce the cleaning area. Don't put anything at your feet or around the tripod, you will have to hand paint it out later in Nuke.
  • When shooting backplates for look-dev use a wide lens, something around 24mm to 28mm, and always cover more space, not only where the action occurs.
  • When shooting textures for scene reconstruction always use a Macbeth chart and at least 3 exposures.

Methodology

  • Plant the tripod where the action happens, stabilise it and level it
  • Set manual focus
  • Set white balance
  • Set ISO
  • Set raw+jpg
  • Set aperture
  • Meter the exposure
  • Set neutral exposure
  • Read histogram and adjust neutral exposure if necessary
  • Shoot a slate (operator name, location, date, time, project code name, etc)
  • Set auto bracketing
  • Shoot 5 to 7 exposures, 3 stops apart, covering the whole environment (see the small example after this list)
  • Place the akromatic kit where the tripod was placed, and take 3 exposures. Keep half of the grey sphere hit by the sun and half in shade.
  • Place the Macbeth chart 1m away from tripod on the floor and take 3 exposures
  • Take backplates and ground/floor texture references
  • Shoot reference materials
  • Write down measurements of the scene, especially if you are shooting interiors.
  • If shooting artificial lights take HDR samples of each individual lighting source.
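
As a quick sanity check of the bracketing maths (this little snippet is mine, not part of the original checklist): with aperture and ISO fixed, 7 exposures spaced 3 stops apart around a hypothetical 1/60s neutral exposure span 18 stops between the darkest and brightest frames.

neutral_shutter = 1.0 / 60                        # hypothetical metered neutral exposure, in seconds
brackets = [neutral_shutter * 2 ** (3 * i) for i in range(-3, 4)]
for t in brackets:
    print(f"1/{round(1 / t)} s" if t < 1 else f"{t:.1f} s")
# prints: 1/30720 s, 1/3840 s, 1/480 s, 1/60 s, 1/8 s, 1.1 s, 8.5 s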

Final HDRI equirectangular panorama.

Exposures starting point

  • Daylight, sun visible: ISO 100, f/22
  • Daylight, sun hidden: ISO 100, f/16
  • Cloudy: ISO 320, f/16
  • Sunrise/sunset: ISO 100, f/11
  • Interior, well lit: ISO 320, f/16
  • Interior, bright ambient: ISO 320, f/10
  • Interior, bad light: ISO 640, f/10
  • Interior, dark ambient: ISO 640, f/8
  • Low light situation: ISO 640, f/5

That should be it for now, happy shooting :)

Photography assembly for matte painters by Xuan Prada

In this post I'm going to explain my methodology to merge different pictures or portions of an environment in order to create a panoramic image to be used for matte painting purposes. I'm not talking about creating equirectangular panoramas for 3D lighting, for that I use ptGui and there is not a better tool for it.

I'm talking about blending different images or footage (video) to create a seamless panoramic image ready to use in any 3D or 2D program. It can be composed using only 2 images or maybe 15, it doesn't matter.
This method is much more complicated and requires more human time than using ptGui or any other stitching software. But the power of this method is that you can use it with HDR footage recorded with a Blackmagic camera, for example.

The pictures that I'm using for this tutorial were taken with a nodal point base, but they are not calibrated or anything like that. In fact they don't need to be. Obviously taking pictures from a nodal point rotation base will help a lot, but the good thing about this technique is that you can use different angles taken from different positions, and also different focal lengths and different film backs from various digital cameras.

  • I'm using these 7 images taken from a bridge in Chiswick, West London. The resolution of the images is 7000px wide so I created a proxy version around 3000px wide.
  • All the pictures were taken with the same focal length, same exposure and with the ISO and White Balance locked.
  • We need to know some information about these pictures. In order to blend the images into a panoramic image we need to know the focal length and the film back or sensor size.
  • Connect a view meta data node to every single image to check this information. In this case I was the person who took the photos, so I know all of them have the same settings, but if you are not sure about the settings, check one by one.
  • I can see that the focal length is 280/10 which means the images were taken using a 28mm lens.
  • I don't see film back information but I do see the camera model, a Nikon D800. If I google the film back for this camera I see that the size is 35.9mm x 24mm.
  • Create a camera node with the information of the film back and the focal length.
  • At this point it would be a good idea to correct the lens distortion in your images. You can use a lens distortion node in Nuke if you shot a lens distortion grid, or just eyeball it.
  • In my case I'm using the great lens distortion tools in Adobe Lightroom, but this is only possible because I'm using stills. You should always shoot lens distortion grids.
  • Connect a card node to the image and remove all the subdivisions.
  • Also deactivate the image aspect to have 1:1 cards. We will fix this later.
  • Connect a transform geo node to the card, and its axis input to the camera.
  • If we move the camera, the card is attached to it all the time.
  • Now we are about to create a custom parameter to keep the card aligned to the camera all the time, with the correct focal length and film back. Even if we play with the camera parameters, the image will be updated automatically.
  • In the transform geo parameters, RMB and select manage user knobs, then add a floating point slider. Call it distance. Set the min to 0 and the max to 10.
  • This will allow us to place the card in space always relative to the camera.
  • In the transform geo translate z, press = to type an expression and write -distance.
  • Now if we play with the custom distance value it works.
  • Now we have to refer to the film back and focal length so the card matches the camera information when it's moved or rotated.
  • In the x scale of the transform geo node type this expression: (input1.haperture/input1.focal)*distance, and in the y scale type: (input1.vaperture/input1.focal)*distance, input1 being the camera axis (a quick numeric check of this expression follows this list).
  • Now if we play with the distance custom parameter everything is perfectly aligned.
  • Create a group with the card, camera and transform geo nodes.
  • Remove the input2 and input3 and connect the input1 to the card instead of the camera.
  • Go out of the group and connect it to the image. There are usually refreshing issues so cut the whole group node and paste it. This will fix the problem.
  • Manage knobs here and pick the focal length and film back from the camera (just for checking purposes)
  • Also pick the rotation from the camera and the distance from the transform geo.
  • Having these controls here we won't have to go inside of the group if we need to use them. And we will.
  • Create a project 3D node and connect the camera to the camera input and the input1 to the input.
  • Create a switch node below the transform geo node and connect its input1 to the project3D node.
  • Add another custom control to the group parameters. Use the pulldown choice, call it mode and add two lines: card and project 3D.
  • In the switch node's which knob add an expression: parent.mode
  • Put the mode to project 3D.
  • Add a sphere node, scale it big and connect it to the camera projector.
  • You will see the image projected on the sphere instead of being rendered on a flat card.
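
To make the scale expression less abstract, here is a quick numeric check (mine, not part of the original setup), using the Nikon D800 film back and the 28mm focal length from this example, and a hypothetical distance of 5 units:

haperture, vaperture = 35.9, 24.0            # Nikon D800 film back in mm
focal = 28.0                                 # focal length in mm
distance = 5.0                               # hypothetical value of the custom distance knob

scale_x = (haperture / focal) * distance     # same maths as the x scale expression
scale_y = (vaperture / focal) * distance     # same maths as the y scale expression
print(round(scale_x, 2), round(scale_y, 2))  # 6.41 4.29: the card exactly fills the frame at that distance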

Depending on your pipeline and your workflow you may want to use cards or projectors. At some point you will need both of them, so it's nice to have quick controls to switch between them.

In this tutorial we are going to use the card mode. For now leave it as card and remove the sphere.

  • Set the camera in the viewport and lock it.
  • Now you can zoom in and out without losing the camera.
  • Set the horizon line by playing with the rotation.
  • Copy and paste the camera projector group and set the horizon in the next image by doing the same as before; locking the camera and playing with the camera rotation.
  • Create a scene node and add both images. Check that all the images have an alpha channel. Auto alpha should be fine as long as the alpha is completely white.
  • Look through the camera of the first camera projector and lock the viewport. Zoom out and start playing with the rotation and distance of the second camera projection until both images are perfectly blended.
  • Repeat the process with every single image. Just do the same as before; look through the previous camera, lock it, zoom out and play with the controls of the next image until they are perfectly aligned.
  • Create a camera node and call it shot camera.
  • Create a scanline render node.
  • Create a reformat node and type the format of your shot. In this case I'm using a super 35 format which means 1920x817
  • Connect the obj/scene input of the scanline render to the scene node.
  • Connect the camera input of the scanline render to the shot camera.
  • Connect the reformat node to the bg input of the scanline render node.
  • Look through the scanline render in 2D and you will see the panorama through the shot camera.
  • Play with the rotation of the camera in order to place the panorama in the desired position.

That's it if you only need to see the panorama through the shot camera. But let's say you also need to project it in a 3D space.

  • Create another scanline render node and change the projection mode to spherical. Connect it to the scene.
  • Create a reformat node with an equirectangular format and connect it to the bg input of the scanline render. In this case I'm using a 4000x2000 format.
  • Create a sphere node and connect it to the spherical scanline render. Put a mirror node in between to invert the normal of the sphere.
  • Create another scanline render and connect its camera input to the shot camera.
  • Connect the bg input of the new scanline render to the shot reformat node (super 35).
  • Connect the obj/scn input of the new scanline render to the sphere node.
  • That's all that you need.
  • You can look through the scanline render in the 2D and 3D viewport. We got all the images projected in 3D and rendered through the shot camera.

You can download the sample scene here.

New akromatic lighting checkers by Xuan Prada

News from akromatic.

"Based on the feedback and requirements of some VFX Facilities, we decided to release a new flavour of our calibrated paint.

Some Look-Development Artists prefer to use grey balls with higher specular components and other Artists are more comfortable using less shiny spheres.
It is a matter of personal preference, so let us know which one is your flavour.

Original spheres: Gloss average around 30%
New spheres: Gloss average around 18%
Both of them are calibrated as Neutral Greys and hand painted."

New grey sphere, half hit by the sun, half in shade.

New grey flavour, close up. Soft lighting transition.

The mirror side remains the same. Carefully polished by hand.

Mirror side, close up.

All the information here.

Animated HDRI with Red Epic and GoPro by Xuan Prada

Not too long ago, we needed to create a lightrig to light a very reflective character, something like a robot made of chrome. This robot is placed in a real environment with a lot of practical lights, and these lights are changing all the time.
The robot will be created in 3D and we need to integrate it into the real environment, and as I said, all the lights will be changing intensity and temperature, some of them flickering all the time and very quickly.

And we are talking about a long sequence without cuts, that means we can’t cheat as much as we’d like.
In this situation we can't use standard equirectangular HDRIs. They won't be good enough to light the character, as the lighting changes will not be covered by a single panoramic image.

Spheron

The best solution for this case is probably the Spheron. If you can afford it or rent it on time, this is your tool. You can get awesome HDRI animations to solve this problem.
But we couldn’t get it on time, so this is not an option for us.

Then we thought about shooting HDRIs as usual, one equirectangular panorama for each lighting condition. It worked for some shots, but in others, where the lights were changing very fast and blinking, we needed to capture live action videos. Tricks animating the transition between different HDRIs wouldn't be good enough.
So the next step would be to capture HDRI videos with different exposures to create our equirectangular maps.

The regular method


The fastest solution would be to use our regular rigs (Canon 5D Mark III and Nikon D800) mounted on a custom base supporting 3 cameras with 3 fisheye lenses. They would have to overlap by around 33%.
With this rig we should be able to capture the whole environment while recording with a steadicam, just walking around the set.
But obviously those cameras can't record true HDR. They always record h264 or some other compressed video. And of course we can't bracket videos with those cameras.

Red Epic

To solve the .RAW video and the multi bracketing we ended up using Red Epic cameras. But using 3 cameras plus 3 lenses is quite expensive for on set survey work, and it is also quite a heavy rig to walk all around a big set.
Finally we used only one Red Epic with an 18mm lens mounted on a steadicam, and on the other side of the arm we placed a big akromatic chrome ball. With this ball we can get around 200-240 degrees of coverage, even more than using a fisheye lens.
Obviously we will get some distortion on the sides of the panorama, but honestly, have you ever seen a perfect equirectangular panorama for 3D lighting being used in a post house?

With the Epic we shot .RAW video at 5 brackets, recording the akromatic ball all the time and just walking around the set. The final resolution was 4k.
We imported the footage into Nuke and converted it using a simple spherical transform node to create true HDR equirectangular panoramas. Finally we combined all the exposures.

With this simple setup we worked really fast and efficiently. Precision was accurate in reflections and lighting, and the render time was ridiculous.
Can’t show any of this footage now but I’ll do it soon.

GoPro

We had a few days to make tests while the set was being built. Some parts of the set were quite inaccessible for a tall person like me.
In the early days of set construction we didn't have the full rig with us, but we wanted to make quick tests, capture footage and send it back to the studio, so lighting artists could make some Nuke templates to process all the information later on while shooting with the Epic.

We did a few tests with the GoPro Hero 3 Black Edition.
This little camera is great, light and versatile. Of course we can't shoot .RAW, but at least it has a flat colour profile and can shoot at 4k resolution. You can also control the white balance and the exposure. Good enough for our tests.

We used an akromatic chrome ball mounted on an akromatic base, and on the other side we mounted the GoPro using a Joby support.
We shot using the same methodology that we developed for the Epic. Everything worked like a charm, getting nice panoramas for previs and testing purposes.

It was also fun to shoot with quite an unusual rig, and it helped us to get used to the set and to create all the Nuke templates.
We also did some render tests with the final panoramas and the results were not bad at all. Obviously these panoramas are not true HDR but for some indie projects or low budget projects this would be an option.

Footage captured using a GoPro and akromatic kit

In this case I'm reflected in the centre of the ball, which doesn't help to get the best image. The key here is to use a steadicam to reduce this problem.

Nuke

Nuke work is very simple here, just use a spherical transform node to convert the footage to equirectangular panoramas.

Final results using GoPro + akromatic kit

Few images of the kit

Nikon D800 bracketing without remote shutter by Xuan Prada

I don’t know how I came to this setting in my Nikon D800 but it’s just great and can save your life if you can’t use a remote shutter.

The thing is that a few days ago the connector where I plug my shutter release fell apart. And you know that shooting brackets or multiple exposures is almost impossible without a remote trigger. If you press the shutter button without a release trigger you will get vibration or movement between brackets, and this will end up with ghosting problems.

With my remote trigger connection broken, my only option was to take my camera body to the Nikon repair centre, but my previous experiences were too bad and I knew I would lose my camera for a month. The other option would be to buy the great CamRanger, but I couldn't find it in London and couldn't wait for it to be delivered.

On the other hand, I found on the internet that a lot of Nikon D800 users have the same problem with this connection, so maybe this is a problem related to the construction of the camera.

The good thing is that I found a way to bracket without using a remote shutter, just pushing the shutter button once, at the beginning of the multiple exposures. You need to activate one hidden option in your D800.

  • First of all, activate your brackets.
  • Turn on the automatic shutter option.
  • In the menu, go to the timer section, then to self timer. There go to self timer delay and set the time for the automatic shutter.

Just below the self timer option there is another setting called number of shots. This is the key setting: if you put a 2 there, the camera will shoot all the brackets after pressing the shutter release just once.
If you have activated the delayed shutter option, you will get perfect exposures without any kind of vibration or movement.

Finally you can set the interval between shots, 0.5s is more than enough because you won’t be moving the camera/tripod between exposures.

And that’s all that you need to capture multiple brackets with your Nikon D800 without a remote shutter.
This saved my life while shooting for akromatic.com the other day :)

Fixing “nadir” in Nuke by Xuan Prada

Sometimes you may need to fix the nadir of the HDRI panoramas used for lighting and look-development.
It's very common that your tripod appears on the ground in your pictures, especially if you use a Nodal Ninja panoramic head or similar. You know, one of those pano heads where you need to shoot separate images for the zenith and nadir.

I usually do this task in other specific tools for VFX panoramas like PtGui, but if you don't have PtGui the easiest way to handle this is in Nuke.
It is also very common, when you work in a big VFX facility, that other people work on the stitching process of the HDRI panoramas. If they are in a hurry they might stitch the panorama and deliver it for lighting, forgetting to fix small (or big) imperfections.
In that case, I'm pretty sure that you as a lighting or look-dev artist will not have PtGui installed on your machine, so Nuke will be your best friend to fix those imperfections.

This is an example that I took a while ago, one of the brackets for one of the angles. As you can see I'm shooting remotely with my laptop, but it's covering a big chunk of the ground.

When the panorama was stitched, the laptop became a problem. This panorama is just a preview, sorry for the low image quality.
Fixing this in an equirectangular panorama would be a bit tricky, even worse if you are using a Nodal Ninja type pano head.
So, find below how to fix it in Nuke. I’m using a high resolution panorama that you can download for free at akromatic.com

  • First of all, import your equirectangular panorama in Nuke and use your desired colour space.
  • Use a spherical transform node to see the panorama as a mirror ball.
  • Change the input type to “Lat Long map” and the output type to “Mirror Ball“.
  • In this image you can see how your panorama will look in the 3D software. If you think that something is not looking good in the “nadir” just get rid of it before rendering.
  • Use another spherical transform node but in this case change the output type to “Cube” and change the rx to -90 so we can see the bottom side of the cube.
  • Using a roto paint node we can fix whatever you need/want to fix.
  • Take another spherical transform node, change the input type to “Cube” and the output type to “Lat Long map“.
  • You will notice 5 different inputs now.
  • I’m using constant colours to see which input corresponds to each specific part of the panorama.
  • The nadir should be connected to the input -Y
  • The output format for this node should be the resolution of the final panorama.
  • I replace each constant colour with black.
  • Each black colour should also have an alpha channel.
  • This is what you get. The nadir that you fixed as a flat image is now projected all the way across the final panorama.
  • Check the alpha channel of the result.
  • Use a merge node to blend the original panorama with the new nadir.
  • That's it. Use another spherical transform node with the output type set to Mirror Ball to see how the panorama looks now. As you can see, we got rid of the distortions on the ground.

Shooting gear for VFX photography by Xuan Prada

This is the gear and setup that I've been using lately for my shoots.
I’ve been shooting in Northern Spain for a few days surrounded by amazing places.

These two images were taken with my iPhone and show all my HDRI-for-VFX gear to be used "on the go". The panoramas for this location will be posted on akromatic.com soon.

For now, you can check another panorama taken that same day with the same gear.
Find it below.

More information at akromatic.com

Digital Colour Checkers by Xuan Prada

Just in case you forget your real colour checkers, you can download these ones and use them on your iPad mini, iPad air or iPhone.
They don’t replace the original ones but at least, you won’t be completely lost on set.

The only thing you have to do is download the following images and open them in your device.
I recommend setting the brightness to 100%.

Colour checker for iPad mini.

White Balance checker for iPad mini.

Colour checker for iPad air.

White Balance checker for iPad air.

Colour and White Balance checker for iPhone.