modelling

VDB as displacement by Xuan Prada

The sphere is the surface that needs to be deformed by the presence of the cones. The surface can't be modified in any way; we need to stick to its topology and shape. We want to do this dynamically, using just a displacement map, and of course we don't want to sculpt the details by hand, as the animation might change at any time and we would have to re-sculpt.

The cones are growing from frame 0 to 60 and moving around randomly.

I'm adding a for-each connected piece loop and, inside the loop, an edit node to increase the volume of the original cones a little bit.

Just select all in the group field, and set the transform space to local origin by connectivity, so each cone scales from its own center.

Add a VDB from polygons node, set it to distance VDB and add some resolution; it doesn't need to be super high.

Then I just cache the VDB sequence.

Create an attribute from volume node to pass the Cd attribute from the VDB cache to the sphere.
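
Most of this chain can also be built with a few lines of Python. A minimal hou sketch, assuming the usual SOP type names (vdbfrompolygons, attribfromvolume) and hypothetical file nodes for the geometry; parameter names can differ between Houdini builds, so check them in the UI:

    import hou

    geo = hou.node('/obj').createNode('geo', 'vdb_displacement')

    cones = geo.createNode('file', 'cones_in')     # the animated cones
    sphere = geo.createNode('file', 'sphere_in')   # the untouchable sphere

    # Distance VDB from the cones; resolution doesn't need to be super high.
    vdb = geo.createNode('vdbfrompolygons', 'cones_vdb')
    vdb.setFirstInput(cones)
    vdb.parm('voxelsize').set(0.02)

    # Sample the volume onto the sphere (set Cd as the attribute name in the
    # node's parameters, then cache the sequence with a filecache).
    xfer = geo.createNode('attribfromvolume', 'vdb_to_cd')
    xfer.setFirstInput(sphere)
    xfer.setInput(1, vdb)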

To visualize it better you can just add a visualizer mapped to the attribute.

In shading, create a user data float, read the Cd attribute and connect it to the displacement.
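
On the shading side, something like this should wire it up with HtoA. This is just a sketch; the VOP type name arnold::user_data_float and the OUT_material/displacement input names follow HtoA's usual naming, but verify them in your install:

    import hou

    mat = hou.node('/mat').createNode('arnold_materialbuilder', 'sphere_mat')

    # Read the Cd attribute off the geometry at render time.
    ud = mat.createNode('arnold::user_data_float', 'read_cd')
    ud.parm('attribute').set('Cd')

    # Wire the value into the displacement input of the material output.
    out = mat.node('OUT_material')
    out.setNamedInput('displacement', ud, 0)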

If you are looking for the opposite effect, you can easily invert the displacement map.

Detailing digi doubles using generic humans by Xuan Prada

This is probably the last video of the year; let's see about that.

This time it's all about getting your concept sculpts into the pipeline. To do this, we are going to use a generic humanoid, usually provided by your visual effects studio. This generic humanoid would have perfect topology, great UV mapping, some standard skin shaders, isolation maps to control different areas, grooming templates, etc.

This workflow will drastically speed up the way you approach digital doubles or any other humanoid character, like this zombie here.

In this video we will focus mainly on wrapping a generic character around any concept sculpt to get a model that can be used for rigging, animation, lookdev, cfx, etc. And once we have that, we will re-project all the details from the sculpt and apply high resolution displacement maps to get all the fine details like skin pores, wrinkles, skin imperfections, etc.

The video is about 2 hours long and we can use this character in the future to do some other videos about character/creature work.

All the info on my Patreon site.

Thanks!

Xuan.

Houdini topo transfer - aka wrap3 by Xuan Prada

For a little while I have been using Houdini's topo transfer tools instead of Wrap 3. I'm not saying that it can fully replace Wrap 3, but for some common and easy tasks, like wrapping generic humans to scans for both modelling and texturing, I can definitely use Houdini now instead of Wrap 3.

Wrapping generic humans to scans

  • This technique will allow you to easily wrap a generic human to any actor’s scan to create digital doubles. This workflow can be used while modeling the digital double and also while texturing it. Commonly, a texture artist gets a digital double production model in t-pose or a similar pose that doesn’t necessarily match the scan pose. It is a great idea to match both poses to easily transfer color details and surface details between the scan and the production model.

  • For both situations, modeling or texturing, this is a workflow that usually involves Wrap3 or other proprietary tools for Maya. Now it can also easily be done in Houdini.

  • First of all, open the ztool provided by the scanning vendor in Zbrush. These photogrammetry scans are usually around 13–18 million polygons, too dense for the wrapping process. You can just decimate the model and export it as .obj.

  • In Maya, roughly align your generic human and the scan. If the pose is very different, use your generic rig to match (roughly) the pose of the scan. Also make sure both models have the same scale. Scaling issues can be fixed in Wrap 3, or in Houdini in this case, but I think it is better to fix them beforehand; in a vfx pipeline you will be publishing assets from Maya anyway. Then export both models as .obj.

  • It is important to remove teeth, the interior of the mouth and other problematic parts from your generic human model. This is something you can do in Houdini as well, even after the wrapping, but again, better to do it beforehand.

  • Import the scan in Houdini.

  • Create a topo transfer node.

  • Connect the scan to the target input of the topo transfer.

  • Bring the base mesh and connect it to the source input of the topo transfer.

  • I had issues in the past using Maya units (decimeters), so better to scale by 0.1 just in case (see the sketch after this list).

  • Enable the topo transfer, press enter to activate it. Now you can place landmarks on the base mesh.

  • Add a couple of landmarks, then ctrl+g to switch to the scan mesh, and align the same landmarks.

  • Repeat the process all around the body and click on solve.

  • Your generic human will be wrapped pretty much perfectly to the actor’s scan. Now you can continue with your traditional modeling pipeline, or in case you are using this technique for texturing, move into Zbrush, Mari and/or Houdini for transferring textures and displacement maps. There are tutorials about these topics on this site.
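
The landmark placement itself is interactive, but the plumbing before it can be scripted. A minimal sketch, assuming the SOP type is called topotransfer and that source/target map to the first/second inputs (check both in your Houdini version):

    import hou

    geo = hou.node('/obj').createNode('geo', 'wrap_setup')

    scan = geo.createNode('file', 'scan_in')   # photogrammetry scan (target)
    base = geo.createNode('file', 'base_in')   # generic human (source)

    # Work around Maya-unit (decimeter) issues by scaling the source by 0.1.
    fix = geo.createNode('xform', 'unit_fix')
    fix.setFirstInput(base)
    fix.parm('scale').set(0.1)

    topo = geo.createNode('topotransfer', 'wrap')
    topo.setFirstInput(fix)    # source input: the generic human
    topo.setInput(1, scan)     # target input: the scan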

Transferring texture data

  • Import the scan and the wrapped model into Houdini.

  • Assign to the scan a classic shader with the photogrammetry texture connected to its emission color. Disable the diffuse component.

  • Create a bakeTexture rop with the following settings (a scripted version of this setup follows the list).

    • Resolution = 4096 x 4096.

    • UV object = wrapped model.

    • High res object = scan.

    • Output picture = path_to_file.%(UDIM)d.exr

    • Format = EXR.

    • Surface emission color = On.

    • Baking tab = Tick off Disable lighting/emission and Add baking exports to shader layers.

    • If you get artifacts in the transferred textures, in the unwrapping tab change the unwrap method to trace closest surface. This is common with lidar, photogrammetry and other dirty geometry.

    • You can run the baking locally or on the farm.

  • Take a look at the generated textures.
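
For reference, the same bake can be set up in Python. The vm_* parameter names below are the ones I'd expect on a baketexture ROP, but treat them as assumptions and verify them on the node in your Houdini version:

    import hou

    rop = hou.node('/out').createNode('baketexture', 'transfer_bake')

    rop.parm('vm_uvobject1').set('/obj/wrapped_model')   # UV object
    rop.parm('vm_uvhires1').set('/obj/scan')             # high res object
    rop.parm('vm_uvoutputpicture1').set('$HIP/tex/color.%(UDIM)d.exr')

    # Resolution, the emission-only toggles and the trace closest surface
    # unwrap method live on the node's other tabs, as listed above.
    rop.render()   # run locally, or submit the node to the farm instead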

Creases from Maya to Houdini by Xuan Prada

This is a quick tip on how to take crease information from Maya to Houdini to be rendered with Arnold. If you are like me and use Houdini as a scene assembler, this is something you will have to deal with sooner or later.

  • In Maya, I have a simple cube with creases; on the right side you can see how it looks once subdivided twice.

  • Not only can you take crease information into Houdini, you can also export subdivision information and HtoA will interpret it automatically. Make sure you add catclark subdivision type and 2 iterations, or whatever you need.

  • When exporting the alembic caches you need to include the arnold parameters that take care of subdivision and creases. Actually, there is no extra parameter for creases; by including the subdivision parameters you will already get the crease information.

  • Note that the arnold parameters in Maya start with the ar_ prefix, for example ar_subdiv_iterations. But in Houdini, arnold parameters don’t use the ar prefix. Because of that, make sure you export the parameters without the ar prefix (see the export sketch after this list).

  • All this can of course happen automatically in your pipeline while publishing assets. It actually should, to make artists’ lives easier and avoid mistakes.

  • That’s it. If you import the alembic cache in Houdini, both creases and subdivisions should render as expected. This information can be overridden in sops with arnold parameters.
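
As a sketch of the Maya side, the alembic export could look like this (asset name and frame range are hypothetical, and it's worth verifying how -attrPrefix treats the prefix in your Maya version before relying on it):

    import maya.cmds as cmds

    job = (
        '-root |zombie_GEO '
        '-attrPrefix ar_ '    # pick up ar_subdiv_type, ar_subdiv_iterations, etc.
        '-uvWrite -writeVisibility '
        '-frameRange 1001 1100 '
        '-file /tmp/zombie_GEO.abc'
    )
    cmds.AbcExport(j=job)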

On-set tips: Creating high frequency detail by Xuan Prada

In a previous post I mentioned the importance of having high frequency detail whilst scanning assets on-set. Sometimes if we don't have that detail we can just create it. Actually, sometimes this is the only way to capture volumes and surfaces efficiently, especially if the asset doesn't have any surface detail, like white objects for example.

If we are dealing with assets that are being used on set but won't appear in the final edit, it is probable that those assets are not painted at all. There is no need to spend resources on that, right? But we might need to scan those assets to create a virtual asset that will ultimately be used on screen.

As mentioned before, if we don't have enough surface detail it will be very difficult to scan assets using photogrammetry, so we need to create high frequency detail our own way.

Let's say we need to create a virtual asset of this physical mask. It is completely plain and white; we don't see much detail on its surface. We can create high frequency detail just by painting some dots, or placing small stickers across the surface.

In this particular case I'm using a regular DSLR + multi zoom lens, a tripod, a support for the mask and some washable paint. I prefer to use small round stickers because they create fewer artifacts in the scan, but I ran out of them.

I created this support a while ago to scan fruits and other organic assets.

The first thing I usually do (if the object is white) is cover the whole object with neutral gray paint. It is much easier to balance the exposure photographing against gray than white.

Once the gray paint is dry I just paint small dots or place the round stickers to create high frequency detail. The smaller the better.

Once the material has been processed you should get a pretty decent scan. Probably an impossible task without creating all the high frequency detail first.

UV to Mesh by Xuan Prada

My friend David Munoz Velazquez just pointed me to this great script to flatten geometries based on UV mapping, pretty useful for re-topology tasks. In this demo I use it to create nice topology for 3D garments in Marvelous Designer. Then I can apply any new simulation changes to the final mesh using morphs. Check it out.
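
This isn't David's script, but the core idea fits in a few lines; here is a Houdini Python SOP sketch, assuming uv has been promoted to a point attribute:

    import hou

    # Move every point to its UV-space location, flattening the mesh.
    node = hou.pwd()
    geo = node.geometry()

    uv = geo.findPointAttrib('uv')
    for point in geo.points():
        u, v, _ = point.attribValue(uv)
        point.setPosition(hou.Vector3(u, v, 0.0))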

Breaking a character's FACE IN MODO by Xuan Prada

A few years ago I worked on Tim Burton's Dark Shadows at MPC. We created a full CG face for Eva Green's character Angelique.
Angelique had a fight with Johnny Depp's character Barnabas Collins, and her face and upper body gets destroyed during the action.

In that case, all the broken parts were painted by hand as texture masks, and then the FX team generated 3D geometry and simulations based on those maps, using them as guides.

Recently I had to do a similar effect, but in this particular case, the work didn't require hand painting textures for the broken pieces, just random cracks here and there.
I did some research about how to create this quickly and easily, and found out that Modo's shatter command was probably the best way to go.

This is how I achieved the effect in no time.

First of all, let's have a look to Angelique, played by Eva Green.

 

  • Once in Modo, import the geometry. The only requirement to use this tool is that the geometry has to be closed. You can close the geometry quick and dirty; this is just to create the broken pieces, and later on you can remove all the unwanted faces.
  • I already painted texture maps for this character. I have a good UV layout as you can see here. This breaking tool is going to generate additional faces, adding new uv coordinates, but the existing UVs will remain as they are.
  • In the setup tab you will find the Shatter command tool.
  • Apply, for example, the uniform type.
  • There are some cool options like number of broken pieces, etc.
  • Modo will create a material for all the interior pieces that are going to be generated. So cool.
  • Here you can see all the broken pieces generated in no time.
  • I'm going to scale down all the pieces in order to create a tiny gap between them, so I can see them easily (see the sketch after this list for the idea behind it).
  • In this particular case (as we did with Angelique) I don't need the interior faces at all. I can easily select all of them using the material that Modo generated automatically.
  • Once all the faces are selected, just delete them.
  • If I check the UVs, they seem to be perfectly fine. I can see some weird stuff caused by the fact that I quickly closed the mesh, but I don't worry about it at all; I would never see these faces.
  • I'm going to start again from scratch.
  • The uniform type is very quick to generate, but all the pieces are very similar in scale.
  • In this case I'm going to use the cluster type. It will generate more random pieces, creating nicer results.
  • As you can see, it looks a bit better now.
  • Now I'd like to generate local damage in one of the broken areas. Let's say that a bullet hits the piece and it falls apart.
  • Select the fragment and apply another shatter command. In this case I'm using cluster type.
  • Select all the small pieces and disable the gravity parameter under dynamics tab.
  • Also set the collision set to mesh.
  • I placed a sphere on top of the fragments, then activated its rigid body component. With the gravity force activated by default, the sphere will hit the fragments creating a nice effect.
  • Play with the collision options of the fragments to get different results.
  • You can see the simple but effective simulation here.
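
By the way, the math behind the gap trick from earlier is trivial; here is a generic sketch (plain Python, not the Modo API) of scaling each fragment about its own centroid:

    import numpy as np

    def shrink_fragment(points, factor=0.97):
        """Scale an (N, 3) array of fragment points toward their centroid,
        opening a small uniform gap between neighbouring pieces."""
        centroid = points.mean(axis=0)
        return centroid + (points - centroid) * factor

    # Example: a single triangle fragment shrunk by 3%.
    fragment = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    print(shrink_fragment(fragment))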

  • This is a quick clay render showing the broken pieces. You can easily increase the complexity of this effect with little extra cost.
  • This is the generated model, with the original UV mapping with high resolution textures applied in Mari.
  • Works like a charm.

Retopology tools in Modo by Xuan Prada

A few months ago I wrote a post about retopology tools in Maya. I'm not using those tools anymore; now I deal with retopology using Modo.
I'm doing a lot of retopo these days working with 3D scanners and decimated Zbrush models coming from the art department.

Pretty much all the 3D packages these days have similar retopology tools, but working with Modo I feel more freedom and I'm more comfortable doing this kind of task.

These are the tools that I usually use.

  • Before starting I like to set the 3D scan as "static mesh". Doing this hides all the item's components, making this process much easier.
  • Pen tool: I use this tool to draw my first polygon. That's it; after drawing the first poly face I drop the tool and don't use it anymore.
  • The type of geo should be polygon and make sure the option "make quads" is activated.
  • As I said, I draw just one face and drop the tool.

To carry on with retopology I use the "topology pen tool" which combines all the other retopology options. I use this tool to do 90% of the work.

These are some of its options.

  • LMB: Move vertex, edges and faces.
  • Shift+LMB & drag edges: Extrude edges.
  • Shift+LMB & drag points: Create faces.
  • Shift+RMB & drag edges: Extrude edge loops.
  • RMB & drag edges: Move edge loops.
  • CTRL+MMB: Delete components (faces, edges and vertices).
  • Inner snap: This option allows you to weld interior vertices.
  • Sculpt -> smooth: Allows you to relax the geometry, very useful to get a better distribution of the edge loops.
  • If you work with symmetry you probably want to align the middle points to the center of the world.
  • In order to do so, select all of them and go to vertex -> set position.
  • Then you can assign a common value to all of them.
  • To create the mirror press "shift+v".
  • To merge both parts just select both items, RMB and click on merge meshes.
  • Once merged just select the points and go to vertex -> merge to weld them.
  • Topology sketch tool: Allows you to draw polygons very quickly.
  • Contour tool: Allows you to draw curves that will be connected using the bridge option. Very useful for cylinder-like parts such as arms or legs.
  • Obviously if you draw more curves you will get more resolution to match the 3D scan.
  • You can create a very rough geometry quickly and then add resolution using the "topology pen tool".

Quick Lidar processing by Xuan Prada

Processing Lidar scans to be used in production is a very tedious task, especially when working on big environments, generating huge point clouds with millions of polygons that are painful to move in any 3D viewport.

To clean those point clouds, the best tools are usually the ones that the 3D scanner manufacturers ship with their products. But sometimes they are quite complex and not artist friendly.
Also, most of the time we receive the Lidar from on-set workers and we don't have access to those tools, so we have to use mainstream software to deal with this task.

If we are talking about very complex Lidar, we will have to spend a good amount of time cleaning it. But if we are dealing with simple Lidar of small environments, props or characters, we can clean them quite easily using MeshLab or Zbrush.

  • Import your Lidar in MeshLab. It can read the most common Lidar formats.
  • This Lidar has around 30 M polys. If we zoom in we can see how good it looks.
  • The best option to reduce the amount of geo is called Remeshing, Simplification and Reconstruction -> Quadric Edge Collapse Decimation.
  • We can play with Percentage reduction. If we use 0.5 the mesh will be reduced to 50% and so on.
  • After a few minutes (so fast) we will get the new geo reduced down to 3 M polys.
  • Then you can export it as .obj and open it in any other program, in this case Nuke.
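
If you need to run this unattended, the same decimation can be scripted with the open-source pymeshlab bindings. A small sketch; the filter name matches recent pymeshlab releases (older ones called it simplification_quadric_edge_collapse_decimation):

    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh('lidar_scan.ply')

    # 0.5 keeps roughly 50% of the faces, like the Percentage reduction field.
    ms.meshing_decimation_quadric_edge_collapse(targetperc=0.5)

    ms.save_current_mesh('lidar_scan_decimated.obj')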

Another alternative to MeshLab is Zbrush. But the problem with Zbrush is its memory limitation. Lidar scans are very big point clouds and Zbrush doesn't manage memory very well.
But you can combine MeshLab and Zbrush to process your Lidars.

  • Try to import your Lidar in Zbrush. If you get an error, try this.
  • Open Zbrush as Administrator, and then increase the amount of memory used by the software.
  • I’m importing now a Lidar processed in MeshLab with 3 M polys.
  • Go to Zplugin -> Decimation Master to reduce the number of polys. Just introduce a value in the percentage field. This will generate a new model based on that value against the original mesh.
  • Then click on Pre-Process Current. This will take a while depending on how complex the Lidar is and your computer’s capabilities.
  • Once finished click on Decimate Current.
  • Once finished you will get a new mesh with 10% of the polys of the original mesh.

Retopology tools in Maya 2014 by Xuan Prada

These days we use a lot of 3D scans in VFX productions.
They are very useful, they've got a lot of detail and we can use them for different purposes. 3D scans are great.

But obviously, a 3D scan needs to be processed in so many ways, depending on the use you are looking for. It can be used for modelling reference, for displacement extraction, for colour and surface properties references, etc.

One of the most common uses is as a base for modelling tasks.
If so, you would need to retopologize the scan to convert it into a proper 3D model, ready to be mapped, textured and so on.

In Maya 2014 we have a few tools that are great and easy to use.
I’ve been using them for quite a while now when processing my 3D scans, so let me explain which tools I use and how I use them.

  • In this 3D scan you can see a lot of nice details. They are very useful for many different tasks.
  • But if you check the actual topology you will realize it is quite useless at this point in time.
  • Create a new layer and put the 3D scan inside it.
  • Activate the reference option, so we can’t select the 3D scan in the viewport, which is quite handy.
  • In the snap options, select the 3D scan as Live Surface.
  • Enable the modelling kit.
  • Use live surface as transform constraints.
  • This will help us stick the new geometry on top of the 3D scan with total precision (this setup can also be scripted; see the sketch after this list).
  • Use the Quad Draw tool to draw polygons.
  • You will need 4 points to create a polygon face.
  • Once you have 4 points, press shift to see a preview of the actual polygon.
  • Shift click will create the polygon face.
  • Draw as many polygons as you need.
  • LMB to create points. MMB to move points/edges/polys. CTRL+LMB to delete points/edges/polys.
  • CTRL+MMB to move edge loops.
  • If you want to extrude edges, just select one and CTRL+SHIFT+LMB and drag to a desired direction.
  • To add edge loops SHIFT+LMB.
  • To add edge loops in the exact center SHIFT+MMB.
  • To draw polygons on the fly, click CTRL+SHIFT+LMB and draw in any direction.
  • To change the size of the polygons CTRL+SHIFT+MMB.
  • To fill an empty space with a new polygon click on SHIFT+LMB.
  • To weld points CTRL+MMB.
  • If you need to do retopology for cylindrical, tubular or similar surfaces, it is even easier and faster.
  • Just create a volume big enough to contain the reference model.
  • Then go to Modeling Toolkit, edit -> Shrinkwrap Selection.
  • The new geometry will stick on to the 3D scan.
  • The new topology will be clean, but maybe you were hoping for something more tidy and organized.
  • No problem, just select the quad draw tool. By default the relax tool is activated. Paint wherever needed and voila, clean and tidy geometry following the 3D scan.
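
As mentioned in the list, the layer/reference/Live Surface setup can be scripted so you can jump straight into Quad Draw. A small maya.cmds sketch with placeholder names:

    import maya.cmds as cmds

    scan = 'scan_GEO'   # hypothetical name of the 3D scan mesh

    # Put the scan in a display layer and set it to reference (unselectable).
    layer = cmds.createDisplayLayer(scan, name='scan_LYR', noRecurse=True)
    cmds.setAttr(layer + '.displayType', 2)   # 2 = reference

    # Make the scan the Live Surface so the new geometry snaps onto it.
    cmds.makeLive(scan)

    # cmds.makeLive(none=True) drops the live surface when you're done.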

Zbrush insufficient memory error! by Xuan Prada

You have probably experienced this error a few times already, haven’t you?
It is quite common, especially when you are working with huge assets.

It happened to me a lot of times last week when working with a 40 UDIM asset and trying to export 32 bit displacement maps.
My machine couldn’t handle it and Zbrush started giving an error saying “Insufficient memory”.

If this happens to you and you don’t know how to extract your displacement maps out of Zbrush, don’t worry, this small trick could help you.

  • Execute Zbrush using your root account on Mac or an Administrator account on Windows.
  • In Windows just right click on the Zbrush icon and select “run as administrator”.
  • On Mac, start a terminal and log in as root.
  • Then execute Zbrush.
  • Then in Zbrush go to Preferences -> Mem and increase the Compact Memory.
  • That’s it. It should work now.
  • Unfortunately this trick only worked for me with simple displacement; it didn’t work with vector displacement :(

Zbrush to Maya and Vray 2.0 by Xuan Prada

I know how tricky it can sometimes be to make your Zbrush displacements look great outside Zbrush.
Maya, Softimage, Vray, Renderman or Arnold, just to name a few, treat Zbrush displacements in different ways.
Let me explain my way of exporting displacement from Zbrush to Maya and Vray 2.0.

- First of all, if you are working with a final asset you will have to export your displacement using your base geometry imported in Zbrush. If you did the sculpt from scratch in Zbrush, you may want to export your lowest subdivision mesh, create a good uv mapping and re-project your sculpted detail onto that mesh.
If this is the case, check this.

  • Go to the lowest subdivision level.
  • Turn off all your layers.
  • Export as .obj
  • This is the object that you are about to render. If you had imported a base mesh before, you won’t need to export it again; it will be in your 3D application already.
  • Go back to the highest subdivision level.
  • Turn on all your layers.
  • Go down to the lowest subdivision level.
  • Store a new morph target and import the previously exported .obj or your original base mesh from your 3D application.
  • Your sculpted model will be substituted by the original mesh with no sculpt information.
  • Click on switch morph target to activate your sculpted mesh again.
  • You are ready to export the displacement maps, just check my settings below for 16 bits, 32 bits and vector displacement.
  • Finally to set-up your shaders and render settings for Zbrush displacements in Maya and Vray 2.0 check my previous post about it.

Projecting details in Zbrush by Xuan Prada

  • Export the lowest subdivision model.
  • Export the highest resolution model.
  • Work on the uv mapping using the lowest resolution model.
  • Go back to Zbrush and import the high resolution model.
  • Now import the low resolution model.
  • Select the high resolution model and go to Subtool -> Insert -> and select the low resolution model.
  • Once inserted you will see both models overlapped in the viewport.
  • You need to be completely sure that only the two models you’re interested in are shown. All the additional stuff in your Zbrush scene should be hidden.
  • Select the low resolution model and subdivide it as much as you need.
  • Store a Morph Target so you can always come back to the starting point in case you need it later (and you will).
  • With the low model selected go to Subtool -> Project -> Project All
  • The most important parameters are Distance and PA Blur. Try to use low values for Distance and keep blur at 0. This is a trial and error process. The default distance value is a really good starting point.
  • Once the projection process is done, check your model.
  • If you find big errors in the mesh, try using a Morph brush to reveal your original mesh. Remember that we stored a Morph Target a while ago. By revealing the original model you can easily remove projection artifacts and sculpt quick fixes.
  • You are ready to export the displacement maps for this model. Just select the low resolution model and go back to the lowest subdivision level.
  • Check the screenshots to see the parameters that I’m using for 16 bit, 32 bit and vector displacement.
  • Check the final displacement maps.

You can watch a detailed video tutorial with all these steps here, only available in Spanish.
