Saturday, July 28, 2012

How to convert 2D footage to stereoscopic 3D


Developing a process for efficient 2D to S3D conversion
In this article, Autodesk’s Media and Entertainment EMEA Business Development Manager, Nick Manning, explains how to develop a process to take 2D footage and turn it into stereoscopic 3D (S3D).
Over the past few years, demand for stereoscopic cinema has grown steadily. Many films are now released in stereoscopic 3D (S3D) at the cinema, and there are large libraries of movies that can be converted to S3D and re-released for new audiences to enjoy.
As available content grows, so too will demand for S3D, especially as stereo-enabled hardware such as televisions and computers becomes increasingly available to consumers.
Alongside this growth in demand, however, remains the challenge of the intensive labour involved in re-dimensionalising 2D footage. The technology to shoot natively in stereo exists, but it is expensive, so the option of converting 2D to S3D is a valuable addition to a production toolset. In the case of older 2D footage, re-shooting in 3D is obviously impossible, so there needs to be a cost-effective way of converting that footage to S3D.
James Cameron's Avatar was shot in stereoscopic 3D, but how do you change 2D footage to S3D?

Finding a solution

Currently, the process of converting 2D footage to 3D can be approached in several different ways:
2D displacements and distortions of footage: In this method, the user produces depth maps that displace an otherwise flat image in 3D space. Once a stereo camera rig is in place and the artist applies the appropriate depth maps, the software can generate the illusion of depth. This is a quick and effective way of converting simple 2D shots to 3D, as artists can hand-paint depth information. On simple scenes, artists can even use luminance information to generate a depth map and distort the footage accordingly (a minimal displacement sketch follows at the end of this section).
Cards in 3D space: This is a more complex process than 2D displacements and distortions. The artist rotoscopes the footage and isolates objects onto cards, each of which can be positioned closer to or further from the camera. The key advantage is that artists are working in real 3D space: the stereo camera's field of view can be set, and each card can be positioned according to real-life dimensions for a more accurate representation of the scene. A key challenge, however, is that parallax effects can expose areas that need to be filled in once the shot is converted to stereo.
Match-moved geometry with texture projections: This method involves recreating the 3D scene with basic geometry that matches the set and the objects within it. Footage is projected onto the geometry, which provides more accurate depth information for creating the stereoscopic effect. While this method can be more accurate, it is more labour-intensive and requires a 3D artist to build the sets. For characters, 3D representations have to be created and match-moved to the footage.
All three approaches to stereo conversion are viable. However, the best solution is high-quality software that allows users to employ all of the different approaches and mix them where appropriate.
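To make the first of these approaches concrete, the sketch below derives a crude depth map from luminance and displaces pixels horizontally to synthesise a second eye. It is a minimal illustration in Python with NumPy (neither of which the article prescribes); a real conversion would rely on hand-painted depth maps and far more careful filtering and gap filling.

```python
import numpy as np

def luminance_depth(rgb):
    """Crude depth proxy: treat brighter pixels as closer. This is only an
    assumption for simple scenes; production depth maps are usually hand painted.
    Expects rgb as floats in [0, 1], shape (h, w, 3)."""
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def displace_eye(rgb, depth, max_disparity=8):
    """Synthesise one eye of a stereo pair by shifting each pixel horizontally
    in proportion to its depth value (0..1). Gaps left by the shift are filled
    with the nearest pixel to the left, as a rough stand-in for artist paint-in."""
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    shift = np.round(depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = min(w - 1, x + shift[y, x])
            out[y, nx] = rgb[y, x]
    holes = out.sum(axis=2) == 0
    for y in range(h):
        for x in range(1, w):
            if holes[y, x]:
                out[y, x] = out[y, x - 1]
    return out

# Usage: right_eye = displace_eye(frame, luminance_depth(frame))
```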

Scoping the challenges

Stereo conversion can be complex and labour intensive, typically involving a range of disparate challenges. What users need above all is access to high-quality visual effects software solutions that enable them to tackle and overcome these issues.

Simulating eyes

At the most basic level, the artist has to simulate how human eyes work, in a way that is practical to use within a compositing environment. Some software solutions enable users to write their own scripts and presets for stereo camera rigs. In Autodesk Flame 2012, a stereo camera rig is ready to use: the FBX Camera in Flame is a pre-rigged camera with many of the settings commonly required for high-quality stereo work, such as toe-in and inter-axial separation.
This stereo camera rig comes with workflow enhancements. The zero-parallax plane and safe stereo views can be displayed and adjusted as semi-opaque overlays in the scene. As layers are moved towards or away from the camera, the rig auto-scales each layer so that it retains its perceived size.
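As a rough illustration of the geometry such a rig manages, the sketch below computes the on-screen parallax of a point for a given inter-axial separation and convergence distance, and the auto-scale factor that keeps a layer's perceived size constant as it moves in depth. It assumes a simple pinhole camera and is not a description of Flame's actual implementation.

```python
def screen_parallax(interaxial, focal, convergence, depth):
    """Horizontal parallax, in the same units as focal (e.g. mm on the sensor),
    for a point at `depth` when the rig is converged so that points at
    `convergence` sit on the zero-parallax plane. Positive values appear
    behind the screen, negative values in front of it."""
    return interaxial * focal * (1.0 / convergence - 1.0 / depth)

def autoscale_factor(old_distance, new_distance):
    """Scale to apply to a layer moved from old_distance to new_distance so
    that its projected (perceived) size on screen stays the same."""
    return new_distance / old_distance
```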

Parallax effects

A common issue with generating offsets for the left or right eyes is that they fundamentally look at objects from different viewpoints. In some cases, this means that there is missing image information as the second eye sees ‘around’ objects.
Fixing this requires a certain amount of manual labour to paint in the missing information. Again, the best software solutions in this area help by providing integrated tools that enable artists to generate much of the missing information automatically.
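As an example of what an automatic first pass at this might look like, the sketch below fills the pixels exposed by the view offset using OpenCV's Telea inpainting. The choice of OpenCV and of that particular algorithm is an assumption for illustration only; in production an artist would still paint or clone detail back into the larger gaps.

```python
import cv2

def fill_parallax_holes(eye_img, hole_mask, radius=3):
    """Fill the pixels exposed by the view offset (hole_mask == 255) using
    Telea inpainting. eye_img must be 8-bit, 1 or 3 channels; hole_mask must
    be an 8-bit single-channel mask of the same size."""
    return cv2.inpaint(eye_img, hole_mask, radius, cv2.INPAINT_TELEA)
```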
Image courtesy of Andy Davenport - the producer of the world's first stereoscopic 3D cinema advert

Viewing stereo

Another key requirement is the need to view the stereo itself. The best solutions in this area enable artists to view Dubois anaglyph stereo in the primary UI monitor viewport, which doesn’t require a special stereo monitor. This is a cost-effective way to enable a stereo workflow for most artists in a studio. For aggregating content and monitoring the entire production, software solutions should ideally be able to output full-screen previews of video images via HD-SDI, which can then be viewed on stereo monitors.
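For illustration, the sketch below builds a plain red-cyan anaglyph from a left/right pair. The Dubois method mentioned above is more sophisticated, mixing all channels of both eyes through least-squares-optimised colour matrices to reduce ghosting, but the basic channel split below shows the underlying idea.

```python
import numpy as np

def simple_anaglyph(left_rgb, right_rgb):
    """Basic red-cyan anaglyph: red channel from the left eye, green and blue
    from the right. Both inputs are (h, w, 3) arrays of matching size."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]   # red from the left eye
    out[..., 1] = right_rgb[..., 1]  # green from the right eye
    out[..., 2] = right_rgb[..., 2]  # blue from the right eye
    return out
```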

Rotoscoping and cutouts

In this context, users should look for software with masking and tracking tools well suited to isolating elements across multiple frames efficiently. Being able to generate usable masks across many frames quickly helps reduce the time spent on rotoscoping.
There is also a workflow efficiency advantage in having these masking, tracking and rotoscoping tools integrated into the main software solution.
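As one illustration of how tracking can speed up rotoscoping, the sketch below warps a hand-drawn mask from one frame to the next using dense optical flow, giving the artist a starting point to correct rather than a mask to redraw. Python and OpenCV are assumed purely for illustration; this is not how any particular package's tools are implemented.

```python
import cv2
import numpy as np

def propagate_mask(prev_gray, next_gray, prev_mask):
    """Carry a roto mask drawn on the previous frame forward to the next frame.
    prev_gray/next_gray are 8-bit greyscale frames; prev_mask is an 8-bit mask."""
    # Flow from the next frame back to the previous one, so every pixel in the
    # new frame knows where to sample the old mask.
    flow = cv2.calcOpticalFlowFarneback(next_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_mask, map_x, map_y, cv2.INTER_NEAREST)
```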

Mixing 2D and 3D footage

Artists should also look for software solutions that feature a real-time 3D environment, allowing them to approach compositing from a true 3D perspective. Using real-time shading technology running on modern GPUs, the artist should be able to achieve high-quality 3D composites in real time.
In a typical production, elements may come from a variety of sources that may or may not be in stereo. Users will benefit from the ability to combine stereoscopic 3D and mono footage and to mix and match these elements seamlessly.
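One simple way to drop a mono element into an existing stereo composite is to give it an artificial parallax per eye based on the depth at which it should sit. The sketch below reuses the pinhole assumptions from the rig sketch above; the helper name and the pixels-per-millimetre conversion are illustrative, not any product's API.

```python
def card_offsets(interaxial, focal, convergence, card_depth, px_per_mm):
    """Horizontal pixel offsets for a mono element so that it reads at
    `card_depth` when composited into a stereo pair. Half of the parallax is
    applied to each eye, in opposite directions."""
    parallax_mm = interaxial * focal * (1.0 / convergence - 1.0 / card_depth)
    parallax_px = parallax_mm * px_per_mm
    return -parallax_px / 2.0, parallax_px / 2.0  # (left-eye, right-eye) shift
```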

Characters and 3D geometry

Characters and 3D geometry (environments and props) present a challenge because the shot requires collaboration with 3D artists. In addition to the communication needed between multiple artists, there is a workflow challenge in sharing data between the 3D application and the compositing application.
In this context, there are several benefits that stem directly from using a workflow featuring high-quality software solutions from a single vendor. Typically, these include tighter integration between the applications via FBX and the ability to achieve efficient character modelling, rigging and animation.

Workflow and data interchange

A common challenge faced by productions is in getting data between people and between different software packages. Key issues include:
Exporting and reading data. Can artists use the different packages to read the same formats? If not, are there development resources available to create the necessary importers/exporters?
Storing and managing data. At a basic level, artists can move data represented as images or raw geometry. This can create a logistical issue, as they will often end up with a large amount of data (per frame, per layer, different versions) that needs to be managed (an illustrative naming sketch follows this list).
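As a purely illustrative example of keeping that per-frame, per-layer, per-version data organised, the sketch below builds file paths from one possible naming convention. The layout shown is an assumption, not a standard used by any particular facility.

```python
from pathlib import Path

def element_path(root, shot, layer, version, frame, ext="exr"):
    """One possible convention:
    <root>/<shot>/<layer>/v<version>/<shot>_<layer>_v<version>.<frame>.<ext>"""
    name = f"{shot}_{layer}_v{version:03d}.{frame:04d}.{ext}"
    return Path(root) / shot / layer / f"v{version:03d}" / name

# Usage: element_path("/proj", "sh010", "fg_roto", 2, 1047)
```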
In this context, using software solutions that can read and write the FBX format allows 3D geometry data to be passed between the 3D and compositing applications more easily, without the need to render out hard mattes or image files.

Stereo finishing

In a typical stereo conversion pipeline, the majority of individual shots are processed by artists using individual compositing seats. The data then has to be aggregated, sequenced and finished. This can create a workflow issue if multiple finishing tools are being used.
Here, once again, artists need integrated software solutions that allow them to process individual shots and convert them to stereo. The project can then be aggregated, with shots sequenced on a timeline and depth and colour grading applied in real time.
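A typical depth-grading move at this stage is horizontal image translation: shifting one eye a few pixels re-converges the whole shot, pushing it behind or in front of the screen. The sketch below shows the idea in its crudest form; NumPy is assumed for illustration only, and the wrapped edges it produces would be cropped or filled in a real finishing tool.

```python
import numpy as np

def shift_convergence(eye_img, pixels):
    """Shift one eye horizontally by `pixels` to move the zero-parallax plane.
    np.roll wraps the exposed edge around, which a finishing tool would instead
    crop or pad."""
    return np.roll(eye_img, pixels, axis=1)
```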

A real-world solution

Today, the practice of stereoscopy is becoming increasingly popular in film due to the latest developments in digital cinema and computer graphics. Growing numbers of studios are releasing animated and live-action feature films in S3D format.
For artists working on these productions, the ability to achieve an integrated 2D/3D workflow and to create, edit and view stereo content using industry-leading software solutions can be a key competitive differentiator, especially if these solutions enable them to tackle many of the key challenges they face along the way.
Artists are able to make creative decisions within the context of what the audience will see, helping to eliminate guesswork and giving them a greater ability to use stereo as a storytelling aid. Stereoscopic 3D is likely to be an important element of the film production industry long into the future.
