Saturday, July 28, 2012

Quick tutorial: Create a vector displacement stamp from an existing model


When it comes to creating highly detailed stamps and stencils for a sculpting workflow, the vector displacement method in Mudbox cannot be beaten.
Vector displacement stamps can help you create stunning surface effects with ease.
For the creation of stamps and stencils, vector displacement requires that the target mesh have the same topology as the source mesh.
There are a few tricks for achieving this. Once you understand the process, creating a stamp is easy.
The key to creating a vector displacement map from existing geometry (or from a portion thereof) is to create a regular displacement map as a starting point.
Mudbox provides two ways to do this: via the map extraction UI, for more accurate results using a higher bit-depth image format; and directly within the viewport filter, for quicker results.
Let’s take a look at both methods.
First, create a simple plane by selecting Create > Mesh > Plane.
Leave the default settings for the plane as they are. This plane will be your background for baking the displacement map.
Next, import your model via File > Import.
In this example, we’re using a model of an old, beaten-up car. Be sure the model is placed at the origin, facing up in the Y axis.
Any details you wish to capture in the map should be pulled up above the plane. Use the transformation tools in the Select/Move Tools tray if your model is not positioned in this manner.
Everything above the plane is included in your final displacement stamp.
Viewport filter method
For the ‘quick and dirty’ method to create a simple displacement map, go to the Object List tab (top right corner beside the Layers tab).
Right-click on the Top camera and select Look Through. You’re now looking down the Y axis.
Zoom in and frame the plane to within the boundaries of the viewport.
You can also drag the edges of the viewport to match the exact square shape of the plane.
Next, go to the Viewport Filters tab (in the top-right corner by the Layers and Object List tabs) and toggle Screen Distance on.
Click the Save 16-bit Image button in the Screen Distance Properties tab and select a location to save.
For other image formats, select Render > Save Screen Image and select the resolution, image format and save location.
This method is fast and effective, but compared with map extraction it is limited in bit depth. The higher the bit depth of the image, the more accurately it captures the data from the model, yielding better results in your stamp.
That’s where the alternative method for creating a displacement map comes in.
Map extraction UI method
To use a higher-bit-depth image to store more detail, first select Maps > Extract Texture Maps > New Operation. Select Displacement Map.
To create your stamp, bring in any existing geometry you like, then scale, rotate and position it within your stamp area to establish the design.
In the Target Models list, click Add All, then highlight and remove everything except the plane.
In the Source Models space, click Add All; this time, highlight and remove the plane.
In the Search Distance box, click Best Guess for an appropriate envelope to calculate the sampling. Select your image size. (1K is fine for most circumstances: remember you are just using this map as a starting point to sculpt with.)
Down in the Base File Name box, click the envelope icon, select your path and then choose an image format.
The higher the bit depth, the more detail you will preserve in the map.
Import the displacement map
Now select File > New Scene followed by Create > Mesh > Plane. Subdivide the plane up a few levels (to at least Level 6).
Import your displacement map as a stencil into the Stencil tray by selecting the arrow icon and then Add Stencil.
Select the Sculpt Tool and then use the [S] key to position the stencil over the plane. Create a new sculpt layer.
You will need to adjust the strength of your Sculpt Tool to displace the geometry as close as possible to the depth of the original mesh.
Once you’ve used the displacement map to re-create the base model shape, it’s up to you to sculpt and change the mesh to your desired effect.
In the video, I compare a regular displacement map used as a base mesh with a final vector displacement map.
The regular displacement map provides a starting point for some simple sculpting.
Don’t be afraid to use the Pose tools to bend or twist the geometry as well. Vector displacement is accurate for storing overhanging and undercutting geometry, as well as very smooth or fine details.
Expert tip
After you import geometry for your stamp, pay attention to its subdivision level: you may need to increase it to remove any faceting.
By Craig Barr
Technical marketing specialist at Autodesk
the-area.com/blogs/craig

Discover Scott Spencer’s workflow tips for ZBrush


Weta Workshop’s Scott Spencer shares some of his techniques for sculpting better-looking models in less time
These tips accompany a tutorial in 3D World issue 147, where you’ll find additional context and detail for this project.

Tutorial: Animate a rotating pivot in 3ds Max


Learn how to animate a rotating pivot point using Autodesk’s 3ds Max software
Usually this type of problem is best handled by a simple rig of a few null objects. But even though a simple hierarchical rig or a scripted controller is an option, the first thing most users want to do is animate the pivot point itself.
It hasn’t always been possible in 3ds Max to animate the pivot point of an object so it can rotate around different pivots at different points in time. In more recent releases (the 2010 Subscription Pack and above), however, a new controller makes an animated pivot point possible.
This functionality came along with the addition of CAT (Character Animation Toolkit) to the base package of 3ds Max. CAT has the functionality to animate pivots for rigs: it achieves this by adding a new Transform controller.
Fortunately, this controller can be added to any object to give it the ability to have an animated pivot point. Let’s have a look at this using a basic rolling cube example.
First, create a cube in the Top viewport. Go to the Motion panel and open the Assign Controller rollout.
Here, you will assign a new controller to the Transform track of the object.
It’s common to assign a controller to a Position, Rotation or Scale track, but it’s not too often that you would assign a controller to replace the entire Transform.
Select the main Transform track and click the Assign Controller button. From the list that appears, you may see a few new CAT controllers you haven’t seen before.
Choose CATHDPivotTrans. Now you have an extra track to play with that will allow you to animate your pivot point.
In the Motion panel, click the SubObject button to get into PivotControls SubObject mode. Move the pivot point to the left corner of the cube.
Turn on Autokey. At frame 10, rotate the cube over onto its side. Next, go back into SubObject PivotControls mode; with Autokey still on, set a hold keyframe for the pivot point at frame 9.
At frame 10, move the pivot point to the next forward edge. Now you can hop back out of SubObject mode and rotate around the new pivot point. This will make the box look as though it’s rolling up on its edges over the ground.
While switching to SubObject mode can be cumbersome, this is a great way to improve your animation toolset. You can even apply this controller to custom bone or vehicle rigs.
Expert tip
When animating the pivot point, use the Stepped animation curve. This lets you set fewer keyframes because it eliminates the need for hold keys.

Top VFX technique as used by MPC artists


Niall Flinn, head of FX at MPC, explains how to simulate believable sand and soil particles
In the ‘150 CG secrets’ article in issue 150 of the magazine, Niall Flinn discusses the need to achieve a realistic particle size distribution when simulating granular materials such as soil.
Niall explains: When simulating granular materials like soil, randomise the particle sizes. Don’t just use a rand() function, since you will end up with as many large particles as small ones. In Maya’s nParticles, you can fix this via the ramp editor under the Radius Scale tab, with the Radius Scale Input set to Randomized ID. Just tweak the curve to give a natural-looking distribution of radii.
The following techniques can be used to achieve this distribution in Maya:
You can use Gaussian noise to create a distribution weighted toward smaller radii. In Maya, this is easily accomplished using the gauss() function. I typically create a particle creation expression that looks something like this:


float $minRad = 0.01;
float $sd = 0.1;
radiusPP = abs( gauss( $sd )) + $minRad;


In this example, $minRad is the smallest radius you want to see, and $sd is the standard deviation of the function.
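If you want a feel for this distribution outside Maya, the expression translates directly to Python’s standard library, with random.gauss playing the role of MEL’s gauss(). This is an illustration only; the production version stays in the MEL creation expression:

```python
import random

def sample_radius(min_rad=0.01, sd=0.1):
    """Half-Gaussian radius, mirroring the MEL: abs(gauss(sd)) + min_rad."""
    return abs(random.gauss(0.0, sd)) + min_rad

random.seed(42)
radii = [sample_radius() for _ in range(100000)]

# About 68% of a Gaussian lies within one standard deviation, so roughly
# two-thirds of the radii land within sd of the minimum: small grains dominate.
small = sum(1 for r in radii if r < 0.01 + 0.1)
print(small / len(radii))  # ~0.68
```

No particle ever drops below $minRad, and large radii become increasingly rare rather than impossible, which is exactly the natural-looking spread you want in granular material.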
Another option is to use a simple weighted noise algorithm (again in a creation expression) by feeding the results of successive rand() calls into one another:
float $min = 0.01;
float $max = 0.1;
// Each call reuses the previous result as the new upper bound,
// biasing the distribution toward $min
float $rand = rand($min, $max);
$rand = rand($min, $rand);
$rand = rand($min, $rand);
radiusPP = $rand;
The above code produces a distribution overwhelmingly weighted toward smaller values. With three rand() calls it’s relatively expensive, but it generally only needs to run once for each particle.
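The same nested-rand trick is easy to sanity-check in Python (again, purely an illustration of the distribution, not production code):

```python
import random

def nested_rand_radius(min_r=0.01, max_r=0.1, passes=3):
    """Mirror of the MEL expression: each successive uniform draw uses the
    previous result as its upper bound, biasing values toward min_r."""
    r = random.uniform(min_r, max_r)
    for _ in range(passes - 1):
        r = random.uniform(min_r, r)
    return r

random.seed(0)
radii = [nested_rand_radius() for _ in range(100000)]
mean = sum(radii) / len(radii)
print(mean)  # well below 0.055, the mean of a single uniform draw
```

Each pass roughly halves the expected distance from the minimum, so three passes pull the average radius down to around 0.02, while still allowing the occasional large grain.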

Top tip: Make the most of specularity


Anders Langlands, CG supervisor at MPC, explains why specular properties are important for creating realistic materials
In the ’150 CG secrets’ article in issue 150, Anders Langlands discusses the need to prioritise specularity in order to achieve realistic materials.
Anders explains: The most important thing with any material is to get the specular properties right. The specular/glossy response of a surface (or lack of it) tells us whether it’s made of metal, plastic or rubber. You know you’ve got a shader right when you can look at an image and instantly know what each surface would feel like to the touch.
This means getting two things right:
1. Your Fresnel settings determine what your surface is actually made of. If you don’t apply a Fresnel falloff, you’re going to be limited to something that looks like metal. For anything else, apply a Fresnel falloff with an index of refraction (IOR) appropriate to your material:
Water: 1.34
Skin: 1.4
Plastic and glass: 1.5-1.7.
If you’re unsure, just set it to 1.5: better to have something slightly wrong than no Fresnel at all. 
For the best metal effects, use measured reflectance data (‘nk data’) to get the subtle shifts in the colour and brightness of reflections as the incidence angle changes. If your renderer doesn’t support this, you can often use a very high IOR with a standard Fresnel to get some of the same effect.
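To get a feel for what those IOR values imply, you can compute the reflectance at normal incidence, F0 = ((n − 1)/(n + 1))², a standard consequence of the Fresnel equations for dielectrics. A short Python sketch:

```python
def f0_from_ior(n):
    """Reflectance at normal incidence for a dielectric: ((n-1)/(n+1))^2."""
    return ((n - 1.0) / (n + 1.0)) ** 2

for name, ior in [("water", 1.34), ("skin", 1.4), ("plastic/glass", 1.5)]:
    print(f"{name}: F0 = {f0_from_ior(ior):.3f}")
# water ~0.021, skin ~0.028, plastic/glass 0.040: only a few per cent of
# light reflects head-on; the Fresnel falloff then strengthens the
# reflection toward grazing angles.
```

This is why a missing Fresnel falloff reads as metal: metals reflect strongly at all angles, while these dielectrics reflect almost nothing face-on.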
 
2. If specular reflections are the most important part of a shader, bump and displacement maps are the most important part of your texture stack. 
If you’re using ZBrush or Mudbox for overall displacement, make sure you’re happy with your base model and displacement before you start shading.
If you’re going to try extracting bump maps from your colour textures as a starting point, do so immediately after you get a first colour pass. Once you have your bump and displacement down, move on to specular maps. If you’ve done your displacement well, these can usually be quite simple.
What you want to paint is the ‘exponent’, ‘glossy’ or ‘shininess’ map, depending on what your software calls it – in other words, the one that controls the shape of the specular highlight. You should always use an energy-conserving specular BRDF so that as the highlight gets broader, it also gets dimmer. If your software does not support an energy-conserving BRDF, you can plug the inverse of your shininess map into the specular strength parameter to fake this effect.
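As an illustration of why an energy-conserving lobe dims as it broadens, here’s a small Python sketch. It uses the common (n + 2)/(2π) normalisation for a Blinn-Phong lobe – the article doesn’t name a specific BRDF, so treat this as one representative choice – alongside the inverse-shininess fake for non-normalised shaders:

```python
import math

def blinn_phong_peak(shininess):
    """Peak intensity of a normalised Blinn-Phong lobe, (n + 2) / (2 * pi):
    as shininess drops the highlight broadens, and the peak dims so the
    total reflected energy stays constant."""
    return (shininess + 2.0) / (2.0 * math.pi)

def faked_strength(shininess, k=1.0):
    """Crude fake for non-normalised shaders: drive specular strength with
    the inverse of the shininess map so broad highlights dim as well."""
    return k / shininess

print(blinn_phong_peak(200.0) > blinn_phong_peak(20.0))  # True
print(faked_strength(200.0) < faked_strength(20.0))      # True
```

Both approaches capture the same intuition: a rough surface spreads the same reflected energy over a wider highlight, so each point of it must be dimmer.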
Once you’ve got your displacement right and your Fresnel settings correct, you can just tweak your shininess settings until your CG material matches your reference images.

How to convert 2D footage to stereoscopic 3D


Developing a process for efficient 2D to S3D conversion
In this article, Autodesk’s Media and Entertainment EMEA Business Development Manager, Nick Manning, explains how to develop a process to take 2D footage and turn it into stereoscopic 3D (S3D).
Over the past few years, demand for stereoscopic cinema has grown steadily. Many films today are now released in stereoscopic 3D (S3D) at the cinema, and there are large libraries of movies that can be converted to S3D and re-released for new audiences to enjoy.
As available content grows, so too will demand for S3D, especially since stereo-enabled hardware such as televisions and computers are increasingly becoming available to consumers.
With this growth in demand comes the challenge of the intensive labour involved in re-dimensionalising 2D footage. The technology to shoot directly in stereo exists, but it is expensive – and in the case of older 2D footage, re-shooting in 3D is obviously impossible. Having the option of converting 2D to S3D is therefore a valuable addition to a production toolset, and there needs to be a cost-effective way of doing it.
James Cameron's Avatar was shot in stereoscopic 3D, but how do you change 2D footage to S3D?

Finding a solution

Currently, the process of converting 2D footage to 3D can be approached in several different ways:
2D displacements and distortions of footage: In this method, the user produces depth maps that displace an otherwise flat image in 3D space. Once a stereo camera rig is in place and the artist applies the appropriate depth maps, the software can be used to generate the illusion of depth. This is a quick and effective way of converting simple 2D shots to 3D as artists can hand paint depth information. On simple scenes, artists can use luminance information to generate a depth map and distort the footage appropriately.
Cards in 3D space: This is a more complex process than 2D displacements and distortions of footage. The artist rotoscopes the footage and isolates objects into cards, each of which can be positioned closer or further from the camera. The key advantage is that artists are utilising real 3D space, the stereo camera field of view can be set and artists can position each card according to real-life dimensions to obtain a more accurate representation of the scene. However, a key challenge is that there might be areas that need to be filled in when converted to stereo due to parallax effects.
Match-moved geometry with texture projections: The basis of this method involves recreating the 3D scene with basic geometry that matches the set and objects in the scene. Footage is projected onto the geometry which provides more accurate depth information to create the stereoscopic effect. While this method can be more accurate, it is more labour intensive and requires a 3D artist to create the sets. With characters, 3D representations of characters have to be created and match-moved to the footage.
All three approaches to stereo conversion are viable. However, the best solution is to use high-quality software solutions that allow users to employ all of the different approaches and mix them where appropriate.
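The first of these approaches – displacing a flat image using a depth map – can be sketched in a few lines. The following Python toy (my own illustration, not code from any production package) shifts each pixel of one scanline horizontally in proportion to its depth to synthesise the second eye, and shows exactly where the occlusion gaps that need painting in come from:

```python
def shift_row(row, depth_row, max_disparity=4):
    """Synthesise one scanline of the second eye: shift each pixel
    horizontally in proportion to its depth (0.0 = far, 1.0 = near).
    A tiny z-buffer lets near pixels win where shifts collide; the gaps
    left behind are the 'missing information' that has to be painted in
    (here we naively hold the last written value)."""
    width = len(row)
    out = [None] * width
    zbuf = [-1.0] * width
    for x, (px, d) in enumerate(zip(row, depth_row)):
        nx = x + int(round(d * max_disparity))
        if 0 <= nx < width and d > zbuf[nx]:
            out[nx] = px
            zbuf[nx] = d
    last = row[0]
    for x in range(width):
        if out[x] is None:
            out[x] = last      # fill occlusion gap with background
        else:
            last = out[x]
    return out

row = [10, 20, 30, 40, 50, 60, 70, 80]            # greyscale scanline
depth = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]  # near object at x = 2..3
print(shift_row(row, depth))
```

The near object (values 30 and 40) shifts by the full disparity, and the positions it vacates are revealed background that the original camera never saw – the parallax problem described under the cards approach above.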

Scoping the challenges

Stereo conversion can be complex and labour intensive, typically involving a range of disparate challenges. What users need above all is access to high-quality visual effects software solutions that enable them to tackle and overcome these issues.

Simulating eyes

At the most basic level, the artist has to simulate how human eyes work, in a way that is practical to use within a compositing environment. Some software solutions enable users to write their own scripts and presets for stereo camera rigs. In Autodesk Flame 2012, a stereo camera rig is ready to use: the FBX Camera in Flame is a pre-rigged camera with many of the settings commonly required for high-quality stereo work, such as toe-in and inter-axial separation.
This stereo camera rig comes with workflow enhancements. The zero-parallax plane and safe stereo views can be displayed and adjusted as semi-opaque overlays in the scene. As layers are moved towards or away from the camera, the rig auto-scales each layer so that it retains its perceived size.

Parallax effects

A common issue with generating offsets for the left or right eyes is that they fundamentally look at objects from different viewpoints. In some cases, this means that there is missing image information as the second eye sees ‘around’ objects.
Fixing this requires a certain amount of manual labour in painting in the missing information. Again the best software solutions in this area can help in that they typically feature integrated tools enabling artists to automatically generate the missing information.
Image courtesy of Andy Davenport - the producer of the world's first stereoscopic 3D cinema advert

Viewing stereo

Another key requirement is the ability to view the stereo itself. The best solutions in this area enable artists to use Dubois anaglyph stereo in the primary UI monitor viewport, which doesn’t require any special stereo monitor. This is a cost-effective way to enable a stereo workflow for most artists in a studio. For aggregating content and monitoring the entire production, software solutions should ideally be able to output full-screen previews of video images via HD-SDI, which can then be viewed on stereo monitors.

Rotoscoping and cutouts

In this context, users should look for software that contains masking and tracking tools well suited to isolating elements across multiple frames efficiently. Artists should be able to more quickly generate usable masks across multiple frames, helping reduce the time spent doing rotoscoping.
There is also a workflow efficiency advantage in having these masking, tracking and rotoscoping tools integrated into the main software solution.

Mixing 2D and 3D footage

Artists should also look for software solutions that feature real-time 3D environments which allow them to approach compositing from a true 3D perspective. Using real-time shading technology running on modern GPUs, the artist then needs to be able to achieve high-quality 3D composites in real-time.
In a typical production, elements may come from a variety of sources that may or may not be in stereo. Users will benefit from the ability to combine stereoscopic 3D and mono footage and to mix and match these elements seamlessly.

Characters and 3D geometry

Characters and 3D objects (environments and objects) present a challenge because the shot requires collaboration with 3D artists. In addition to having to communicate between multiple artists, there is a workflow challenge in sharing data between the 3D application and the compositing application.
In this context, there are several benefits that stem directly from using a workflow featuring high-quality software solutions from a single vendor. Typically, they include tighter integration between the applications using FBX and the ability to achieve efficient character modelling, rigging and animation.

Workflow and data interchange

A common challenge faced by productions is in getting data between people and between different software packages. Key issues include:
Exporting and reading data. Can artists use the different packages to read the same formats? If not, are there development resources available to create the necessary importers/exporters?
Storing and managing data. At a basic level, artists can move data represented as images, or raw geometry. This can create a logistical issue as they will often end up with a lot of data (per frame, per layer, different versions) that needs to be managed.
In this context, using software solutions that can read and write the FBX format, enables the passing of 3D geometry data between the 3D and compositing applications to be achieved more easily, without needing to render out hard mattes/image files.

Stereo finishing

In a typical stereo conversion pipeline, the majority of individual shots are processed by artists using individual compositing seats. The data then has to be aggregated, sequenced and finished. This can create a workflow issue if multiple finishing tools are being used.
Here, once again, artists need integrated software solutions that allow them to process individual shots and convert them to stereo. The project can then be aggregated, incorporating shots sequenced on a timeline and adding depth and colour-grading in real-time.

A real-world solution

Today, the practice of stereoscopy is becoming increasingly popular in film due to the latest developments in digital cinema and computer graphics. Growing numbers of studios are releasing animated and live-action feature films in S3D format.
For artists working on these productions, the ability to achieve an integrated 2D/3D workflow and to create, edit and view stereo content using industry-leading software solutions can be a key competitive differentiator, especially if those solutions enable them to tackle many of the key challenges they face along the way.
Artists are able to make creative decisions within the context of what the audience will see – helping to eliminate guesswork and giving them a greater ability to use stereo as a storytelling aid. Stereoscopic 3D is likely to be an important element of the film production industry long into the future.

Free 3D training: Disney’s 12 classic principles of animation


Master Disney’s 12 classic principles of animation in 3D with Steve Lambert’s regular series of articles on the fundamentals of CG. This week: squash and stretch
In this series, Weta Workshop’s animation director Steve Lambert will guide you through the 12 classic principles of Disney animation – starting this week with the concepts behind squash and stretch.
FOR: Any software | TIME TAKEN: 10 minutes | TOPICS COVERED: Squash and stretch | ALSO REQUIRED: Maya (to view scene files)
Download the project and animation files for this training: fundamentals.zip (12.3MB)
Back in 1981, Frank Thomas and Ollie Johnston – two of Disney’s ‘Nine Old Men’ – published The Illusion of Life: a landmark book that set out in print the 12 principles of animation that have guided the company’s animators since the 1930s. Over the next 12 weeks, I’m going to show you how to apply these classic animation principles to your 3D work.
It’s important to note that none of the principles stand in isolation from each other. All combine to create a successful animation – but just like a toolbox, not every job requires every tool. Bear in mind that I will also hesitate to refer to anything as ‘wrong’ or ‘incorrect’. They’re principles, not rules!
Here, we’ll cover the first principle: squash and stretch. The caramel-covered marshmallow of the animation chocolate box, squash and stretch is the animator’s attempt to mimic the way objects deform in motion.
It is about so much more than the bouncing balls often used to demonstrate it: it can be used to convey weight, accentuate movement and enhance a character’s flexibility. It isn’t just for cartoony animation, either.
Adding squash and stretch to Andy and the ball really makes the animation come alive. This still doesn't show it off, so make sure you download the Zip file to watch the animation in action
One thing to bear in mind when squash-and-stretching is the need to maintain a constant volume. When you animate an arm stretching, the thickness of the limb should decrease.
Think of a rubber band: if you pull the ends, the rubber is distributed along a greater distance, so the band thins out. The same is true if you’re squashing an object: the mass has to go somewhere, and it generally bulges outward – keeping the volume, if not the shape, constant.
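This volume-preserving rule is easy to express numerically: stretch one axis by a factor s and scale the other two by 1/√s, so the product of the three scale factors stays at 1. A quick software-agnostic Python sketch:

```python
import math

def stretch_scales(s):
    """Stretch along Y by factor s and thin X and Z by 1/sqrt(s), so the
    product of the three scale factors (i.e. the volume) stays constant."""
    side = 1.0 / math.sqrt(s)
    return (side, s, side)

sx, sy, sz = stretch_scales(2.0)   # stretch the limb to twice its length
print(round(sx * sy * sz, 6))      # volume preserved: 1.0
```

The same formula works for a squash: feed in s < 1 and the other two axes bulge outward by 1/√s, just like the rubber band in reverse.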

Squashing and stretching Andy

To illustrate this, I’ve put together a few example animations, which you can find in the Animation folder of the Fundamentals download. (If you’re using Maya, you can also explore the corresponding scene files.)
The Andy01 clip shows Andy, our character for the next 12 issues, running from a billiard ball with no squash and stretch.
In Andy02, I’ve applied squash and stretch to both Andy and the ball. You can see how the stretching of Andy’s body as he falls and the compression as he hits the ground add a bit of punch to the animation, emphasising his weight and movement.
In this example, however, the ball no longer seems right. The squishiness kills the impression that it is a hard, rigid object. Replace the texture to make the ball a basketball, though (see Andy03), and the result is much more believable.

Facial animation

The second example is of a basic facial animation take.
Take01 has no squash and stretch.
In Take02, I’ve distorted the features to elongate the expression. This drags the face along the path of movement and enhances the motion. Take03 goes one step further, changing the shape of the entire head as it moves. I’ve intentionally made the result over the top, but this sort of thing can be done more subtly to great effect.
That’s it for this week. Next time, we’ll look at principle two: anticipation. Bet you can’t wait…

Free 3D training: Disney’s 12 classic principles of animation: anticipation


Master Disney’s 12 classic principles of animation in 3D with Steve Lambert’s regular series of articles on the fundamentals of CG. This issue: anticipation
Back in 1981, Frank Thomas and Ollie Johnston – two of Disney’s ‘Nine Old Men’ – published The Illusion of Life: a landmark book that set out in print the 12 principles of animation that have guided the company’s animators since the 1930s.
In this series, Weta Workshop’s animation director Steve Lambert will show you how to apply these classic animation principles to your 3D work.
Last week we covered the concepts behind squash and stretch; this week it’s all about the next move.

Last time, we talked about squash and stretch: this time, we’ll look at anticipation, a principle that is all about broadcasting thought, intent and directing focus

FOR: Any software | TIME TAKEN: 10 minutes | TOPICS COVERED: Anticipation | ALSO REQUIRED: Maya (to view scene files)
Download the project and animation files for this training: fundamentals2.zip (10.4MB)
Anticipation can be used to prepare the viewer for an action about to be performed. There are many obvious examples, such as a pitcher winding up to throw a ball, or a bow being pulled to fire an arrow. It is the reverse action of the one about to follow.
Anticipation is not limited to the character performing the action. It can direct attention to another action or object – for example, a look or gesture (possibly off-screen) can direct us to something happening outside our focus area, or even point us to an object that the character might be about to pick up.
Anticipation can also imply thought, because it shows that the character intends to do something and that they are not just moving from one position to another.
Most actions (with the exception of mechanised movement) have some sense of anticipation, and the bigger or more dramatic the action, the bigger the preceding anticipation.
But it can also be very subtle – the weight shift from one leg to another before starting a walk or the intake of breath before a sigh.
I’ve made a few examples to illustrate the above. In the first clip (Anticipation_01.mov), I’ve used little to no anticipation on either character.
When the first one runs off, it comes as a surprise: the viewer’s attention may not be in the right place, so they may miss the first part of the action. This can make for jarring viewing and confuse the narrative.
In the second clip (Anticipation_02.mov), I’ve applied a degree of anticipation to both the characters. You focus on the boy as he pulls back before leaning in; then your attention is pulled to the girl before she runs off. It’s not much, but it’s enough to prepare you for their manoeuvres.
In the third and final clip (Anticipation_03.mov), I’ve exaggerated the anticipation to really wind up the run. This is also an example of a ‘surprise’ anticipation, where the anticipation might indicate one thing is about to happen, but another actually takes place.
In this case, the viewer might think she’s leaning in to respond to the boy, but this movement is then used to reverse her direction.
As with any of these guidelines, anticipation can easily be overdone, and you should experiment until you achieve the timing that works best. Just don’t forget the 13th and unwritten principle – the principles of animation are tools, not rules.
That’s it for this week. Next time, we’ll look at principle three: staging.