
TUTORIAL • April 1998

3D AND DIGITAL VIDEO: Bringing it All Together

More and more, 3D designers are being called upon to add compositing and video editing skills to their pool of knowledge. Follow along as we mix it up with 3D graphics and video using LightWave and After Effects.

by Chris Tome

View the QuickTime movie (2.1MB) created using the techniques described in this article.

The months have flown by, another year is upon us, and once again broadcasters and content creators flock to Las Vegas for the extravaganza known as NAB (National Association of Broadcasters). While this show covers everything from satellite dishes to oscilloscopes and character generators, NAB has grown to accommodate 3D designers and content creators with a wide range of products and companies exhibiting at NAB Multimedia World, which is held at the Sands convention center just a few short blocks from the show's main site at the Las Vegas Convention Center. About the only thing that's not cutting-edge is the taxi service, as any NAB veteran will tell you.

As broadcast graphics have become more creative and complex, viewers have gotten increasingly savvy and demanding of quality visuals. More so than in other industries, the broadcast content creator must be extremely versatile, possessing the ability to create video sequences, 3D animations, and computer graphics, and the know-how to bring these elements together into one cohesive piece. In the past, much of the work was done on high-end, proprietary systems, such as those available from Quantel (Harry, a CG system for broadcast graphics) and Discreet Logic (Flint, Flame, etc.), but today most of these high-end tasks can be achieved on desktop computers.

In this article, we will create a 30-second promotional advertisement for the Haight Ashbury Street Fair, dubbed the Summer of Love in honor of the 30th anniversary of the 1967 San Francisco celebration of the same name. The piece will be created as a fictitious local TV spot for the 1998 festival and must reflect the dynamics and unique nature of the event. The footage for this project was shot on digital tape at last year's street fair. We will be using LightWave 3D from NewTek and Adobe After Effects, with compositing and layering done in whichever program is most appropriate for the specific task.


Screen shot of the composited 3D text and digital video in After Effects. Notice the light ripple on the background video.
Along with live-action footage from the street fair, we will incorporate 3D animated peace signs, text, and bars with bevels created in LightWave. We will be mapping video to some elements and compositing others inside of After Effects.

The video

The video was shot using a Sony DCR-PC7, a small, consumer-level, handheld Mini-DV format digital camera. One of the best things about this camera (aside from its excellent image quality) is that it's very unobtrusive, and people seem to be much more comfortable around it than they would be around larger, conventional video cameras. The other main advantage is its support of FireWire, or IEEE 1394, a high-speed digital video input and output interface originally created by Apple.

Using a Spark card from Digital Processing Systems (DPS), the footage was recorded straight from the FireWire connection to a Cybernetics RAID (redundant array of independent disks): a CY-10XP, a 10GB Fast/Wide SCSI array. Any time you are working with digital video, plan on having a lot of free hard disk space and AV-quality drives for better playback. A RAID is invaluable in a video editing workstation and is much faster than typical AV drives, because it achieves its throughput by writing files to multiple disks at once. The Cybernetics unit I used achieved sustained transfer rates of just under 18MB per second and played back the video files in near real-time; those sustained rates ensured that all the data was recorded, with no dropped frames.

The Spark digitized the video to the RAID in interlaced mode, which is necessary to output the final to standard NTSC video format. Interlaced video has two fields, or images, for every frame of video, which, in the case of NTSC, translates into 30 frames per second (fps), or 60 fields per second. This makes the image look fuzzy or shifted on a computer monitor, but it will look just fine when output to tape. This is because a TV monitor is interlaced: it draws the odd-numbered lines first, then goes back and draws the even-numbered lines to fill the frame. A computer monitor, on the other hand, is non-interlaced, and draws the screen line by line from top to bottom in a single pass. If you are fortunate enough to have an NTSC monitor in your studio, you can view the video on that, which is the preferred way to work with digital video.
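To make the frame/field arithmetic concrete, here is a minimal Python sketch. The helper names are my own, and NTSC's true rate of 29.97fps is rounded to 30, as in the text:

```python
# Illustrative sketch of NTSC frame/field arithmetic (hypothetical helpers;
# true NTSC runs at 29.97fps, rounded here to 30 as in the article).
FPS = 30
FIELDS_PER_FRAME = 2  # each frame = one odd field + one even field

def fields_for_duration(seconds):
    """Total interlaced fields drawn in the given number of seconds."""
    return seconds * FPS * FIELDS_PER_FRAME

def field_order(frame_lines):
    """Split a frame's scanline indices the way an interlaced TV draws them:
    odd-numbered lines first, then even-numbered lines."""
    odd = [y for y in range(frame_lines) if y % 2 == 1]
    even = [y for y in range(frame_lines) if y % 2 == 0]
    return odd, even

print(fields_for_duration(1))  # 60 fields in one second of video
```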

The Spark card digitizes full frames in 720 x 480 mode at 30fps, which is exactly what's needed for this project. Even with a RAID drive and a fast Pentium Pro, the video files are too large to work with in After Effects in real-time, so I recommend making proxies. Proxies are low-resolution files used as stand-ins for the high-resolution originals. Making a proxy can be accomplished in any video editing application, such as Premiere, After Effects, or Speed Razor. Simply load the clip you wish to convert and make a movie with the resolution set to a proxy size of 180 x 135 (which preserves the 4:3 display aspect of the original 720 x 480 footage, whose D1 NTSC pixels are not square). You may also want to use compression and lower the bit depth from 24-bit to 8-bit to save memory while you are assembling your project. In my case, a 40MB file compressed with the Cinepak codec at 180 x 135 and 24 bits came down to only 3MB.
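The proxy sizing and the compression ratio above work out as follows; this is a small illustrative Python sketch, and the helper names are my own, not part of any application:

```python
# Hypothetical helpers illustrating proxy sizing and compression savings.
# D1 NTSC frames are 720x480 with non-square pixels, so they DISPLAY at 4:3;
# a square-pixel proxy therefore uses 4:3 dimensions such as 180x135.
def proxy_size(width=180, display_aspect=(4, 3)):
    """Return square-pixel proxy dimensions for a given width and aspect."""
    aw, ah = display_aspect
    return width, width * ah // aw

def compression_ratio(original_mb, compressed_mb):
    """How many times smaller the compressed file is."""
    return original_mb / compressed_mb

print(proxy_size())                        # (180, 135)
print(round(compression_ratio(40, 3), 1))  # the 40MB -> 3MB Cinepak example
```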

For the Haight Ashbury Street Fair project, I created low-resolution versions of the footage to use as proxies in After Effects. The proxies are scaled down from the original clips so that compositing, effects, and transitions can be created and viewed in real-time or close to it. The original clips were scaled from 720 x 480 to 180 x 135, which made them more manageable. 3D designers familiar with the process of rendering will notice that After Effects handles final output the same way a 3D program does.

When the final movie is rendered, the software lets you swap the full-resolution originals back in for the proxies. This means a much longer final render, but the process results in a broadcast-quality, first-generation digital master, which is the highest quality possible.

After rendering the finished movie, we'll convert the AVI file into a Perception AVI for output to tape on the Perception Video Recorder (PVR), also from DPS. This will play the file back in real-time on the PVR, allowing us to record the video out to tape for delivery to the "client."

I installed the Perception card in the system after digitizing the video using the Spark card, not realizing I had a digital capture option on the PVR, which means I could have recorded using only the PVR. This oversight ultimately worked to my advantage, however, as the FireWire output on the Spark is superior in quality to the S-Video output also available on the Sony DV camera, which, aside from Component, is the only other format the video inputs on the PVR will accept. If you have a Sony DV camera in your set-up, I highly recommend the Spark card as the best way to get the video (and audio) into your system using FireWire. Additionally, the PVR capture card does not record the audio track, but the Spark card does.

Shooting video footage is an art unto itself, and if you are uncomfortable shooting your own video, you may want to subcontract the job to a professional videographer to obtain footage for your commercial project. While I did not originally shoot this video with any particular project in mind, I was able to find footage that I thought would work well. Typically, a project of this nature would initially involve storyboarding with the video being shot to match the boards. Because the video was shot before this project was conceived, I had to work backwards, digitizing the footage I wanted and creating a timeline for it.

Creating Elements in LightWave

For this project, I wanted to experiment with some unique tools and techniques that were new to me. The first thing I decided to do was to create some simple geometry and map video footage to it as the 3D elements moved. However, LightWave doesn't currently offer the capability of importing AVI files. Originally, I had planned on compositing using both After Effects and LightWave (not just After Effects), so I went online to the #lightwave channel on IRC (Internet relay chat) and asked if anyone had a solution to my dilemma. One LightWave user showed me a link to a third-party developer called Burnt Pixels (www.burntpixels.com), which distributes a plug-in called AVI Load that can be used to import AVIs into LightWave. A couple of hours after I e-mailed Burnt Pixels, I received the plug-in in a return e-mail. For anyone working with LightWave and video files, this plug-in is a must-have. It works incredibly well, and although it doesn't offer OpenGL texture support in Layout (a fairly insignificant omission), at $55, it is worth every penny.

The AVI Load plug-in works by first loading the .avi file as an image, which creates a directory of .frm files. The first frame of the image is available to use as a texture, but if you want to use the video sequence as a map, you have to hit Load Sequence in the Images panel and add one of the numbered .frm files to load the video. The video doesn't show up until you render, but it works just fine and handles .avi files of any size effortlessly.


A shot of the LightWave Layout interface, with the music text and the ripple-displaced plane that the video will be mapped to when rendered.
One of the techniques this project gave me an opportunity to experiment with was Front Projection Mapping, which allows an object or scene element to be perfectly matched with the background scene footage. If you've seen the film Predator, you'll remember how the camouflaged alien looked running through the jungle, as if it were embossed onto the jungle backdrop. Front Projection Mapping has many uses, but for this project, it was used to make scene elements animate over time, while revealing the same or different video in the background and maintaining the aspect ratio of the source footage. Using Front Projection Mapping, I included footage of the crowd dancing at the festival, and animated peace symbols flying around the screen mapped with the same video. This gave the appearance that the animated peace symbols were embossed on and moving through the video, an easy but cool effect. I also created a series of beveled bars with a video clip of a musical group mapped to them, each of which rotates and moves off-screen in turn to reveal another video clip behind it.

Towards the end of the sequence, as the first set of bars leaves the screen, the background video is replaced by a flat plane with a ripple displacement map applied as the word "music!" goes sailing by. The plane looks very distorted, but it only affects the video a little, giving it a shadowy, light ripple effect. Even with OpenGL textures turned on, the video sequence is not displayed in the window; this is a limitation of the AVI Load plug-in, and sequence preview would make a nice addition to a future update. This entire scene was generated in LightWave without the need for After Effects, thanks to AVI Load. Without this plug-in, I would have had to render separate files with Alpha channels and composite all those files in After Effects. That approach has its advantages (like being able to move elements around more easily and apply different effects to different parts of the piece), but if you can get everything set up in LightWave or your favorite 3D application, it can be a real time-saver. Strictly speaking, this technique is not true compositing, but rather mapping video onto 3D elements and combining them in an animation.

For the part of the video in which the Haight Ashbury logo appears, I decided to animate and render the 3D elements with an Alpha channel, then apply effects and composite the animation with the video in After Effects. I did this for two reasons: first, to illustrate how to combine 3D elements and video in After Effects, and second, to do some special effects and play with the placement of the animation in After Effects. This technique provides more control on the back end and lets you experiment with some of the 2D special effects plug-ins available for After Effects. I rendered the animation to a series of Targa files, a 32-bit format with an 8-bit Alpha channel. In an Alpha channel, black is transparent, white is opaque, and 254 levels of gray represent varying degrees of transparency. An Alpha channel from an image can be viewed directly in Photoshop or in the After Effects Preview window. Another thing to remember when rendering for video is to use Title Safe mode and to turn on NTSC Legal Colors.
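The role the Alpha channel plays in compositing can be sketched in a few lines of Python. This is an illustrative per-channel "over" blend, not code from After Effects:

```python
# Minimal sketch of how an 8-bit alpha value drives compositing: 0 is fully
# transparent, 255 fully opaque, with 254 intermediate levels in between.
def over(fg, bg, alpha):
    """Blend one 8-bit channel value of a foreground pixel over a background
    pixel, weighted by the foreground's 8-bit alpha (the "over" operation)."""
    a = alpha / 255.0
    return round(fg * a + bg * (1.0 - a))

print(over(200, 50, 255))  # fully opaque: foreground wins (200)
print(over(200, 50, 0))    # fully transparent: background shows (50)
print(over(200, 50, 128))  # mid-gray alpha: roughly an even blend
```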

Using After Effects

The final composition was rendered in After Effects, including the live-action clips and 3D CG elements. I found After Effects intuitive and easy to use. Many of the standard operations found in 3D applications are also implemented in After Effects, albeit for a 2D environment. There are many plug-ins available for After Effects, such as the Final Effects Complete plug-ins from MetaCreations, which let users create bubbles, warps, transitions, and more. These special effects filters can produce a wide variety of digital video effects, such as particles, distortion effects, glows, and ripples. From rotation and scaling to third-party plug-in effects, the values can be changed over time by setting keyframes (which will be discussed shortly) in the timeline just as in 3D applications.

After Effects can also be used to manipulate Bezier curves in order to smooth out animation paths. Most 3D designers will find the majority of tools in After Effects familiar, although their layout differs from most 3D applications. For those who might need a little more help learning After Effects, or for users who want to learn more advanced techniques, there is an excellent series of training tapes from Total Training Inc., called Total AE. I viewed the first video in the nine-tape series, and it was very professionally done in both presentation and content. You can get more information from Total Training's web site at www.totalae.com.

After Effects has numerous keyboard shortcuts, and it's a good idea to learn them; doing so will dramatically increase your productivity and allow you to work much faster. Don't be intimidated by the interface: it looks complicated, but the manual includes a cohesive tutorial that covers all the basics thoroughly.

After Effects has two main areas of importance: the Project and Composition windows. In the Project window, you load, organize, and manage all of your content-video, animation, stills, audio, and more. The Composition window is where you combine scene elements to make the finished video. After importing scene elements using the File> Import> Import File(s) pull-down menu, drag-and-drop the elements from the Project window into the Composition window (if no composition is created, make one by selecting Composition> New Composition from the pull-down menu). Once in the Composition window, scene elements can be properly ordered and edited, and special effects can be applied. Items can be moved, scaled, rotated, have their transparency adjusted, and much more.

Keyframing is simple and can be done either numerically or interactively in the Composition window. Each element in the composition has a little arrow next to its name, and when clicked on, it turns downward to reveal the different ways the element can be changed or manipulated over time. Additionally, the I key goes to the In point of the element (start), and the O key goes to the Out point (end), two keyboard shortcuts well worth memorizing. A small, light-blue slider at the top of the Composition window can be dragged to any point in the timeline, or it can be moved by clicking on the time readout in the upper left corner of the Composition window and entering the numeric point you wish to reach. Almost all of the functions in After Effects can be performed both interactively and numerically, which offers greater control and precision over scene elements.

For this project, I wanted to create a fade, a transition in which the first clip dissolves into the next. This turned out to be extremely easy. First, give the Opacity control under the first clip's Geometrics menu three keyframes: set the first and second keyframes to 100%, at zero and five seconds, and the third, at the six-second mark, to 0%. For the second clip, do the reverse, with the first keyframe set to 0% opacity and the second and third keyframes to 100%. Voilà! Instant fade-out/fade-in. Though some of the special effects in After Effects can get quite complicated, keyframing is easy, and making changes at any point in the project is equally effortless. Another example of After Effects' ease of use is in creating a simple cut from one clip to another: all you have to do is place one clip's end point up against another clip's start point, and the cut happens automatically.
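Under the hood, a fade like this is just linear interpolation of opacity between keyframes. Here is a hedged Python sketch of that behavior, with hypothetical function and clip names and the keyframe values described above:

```python
# Illustrative sketch of opacity keyframing: linear interpolation between
# (time, opacity) pairs, mirroring how a dissolve is keyframed above.
def opacity_at(t, keyframes):
    """keyframes: sorted list of (time_seconds, opacity_percent) pairs."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keyframes[-1][1]  # hold the last keyframe's value

clip_a = [(0, 100), (5, 100), (6, 0)]    # holds full, then fades out
clip_b = [(0, 0), (5, 100), (6, 100)]    # fades in, then holds full

print(opacity_at(5.5, clip_a))  # halfway through the dissolve: 50.0
```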

An interesting thing happened when I first tried rendering the sequence to test some transitions: the footage was positioned off-screen, as if the video's center point were at its bottom right corner, pushing the files off past the upper left of the Composition window. The positions of all of the AVI files in the Composition window had been set improperly to X and Y values of 90 and 67.5, respectively (exactly half the 180 x 135 proxy dimensions, rather than half the full frame). As a solution, I went into the Geometrics portion of each AVI file and changed the Position values to 360 and 240, the dead center of a 720 x 480 video file, after which everything rendered properly.
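The arithmetic behind that fix is simply that a centered layer's Position equals half the composition's dimensions; a trivial sketch with a hypothetical helper name:

```python
# With a layer's anchor at its center, the Position value that centers it in
# a composition is half the composition's width and height.
def centered_position(comp_width, comp_height):
    return comp_width / 2, comp_height / 2

print(centered_position(720, 480))  # (360.0, 240.0), full D1 NTSC frame
print(centered_position(180, 135))  # (90.0, 67.5), the stray proxy values
```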

For the 3D animated Haight Ashbury Street Fair text, I rendered the 120-frame animation in LightWave at 720 x 480 pixels with the D1 NTSC pixel size, as NTSC pixels are not square like a computer monitor's, but rectangular. I brought the sequence into After Effects by choosing File> Import> Footage File, clicking on the first frame of the animation sequence, turning on the Targa sequence check box, and hitting OK. This brought all of the animation stills into After Effects as a sequence, which can be composited with the video elements using the included Alpha channel. After Effects makes it easy to check your Alpha channels: just double-click on the footage to open a preview window, and at any point in the sequence, click the little white box displayed near the R, G, and B color boxes at the bottom of the window to view the Alpha channel.

Compositing in After Effects is very easy and intuitive. Once a clip is dragged into the Composition window, a bar appears in the timeline indicating the clip's position and duration. From there, drag the bar to the appropriate place in the timeline. If your clip is too long, double-click on the clip name to bring up a preview window that lets you visually reset the clip's In and Out points and shorten it to the needed duration. Once my video files were digitized and imported into After Effects, I found I had far too much footage and had to chop time off different clips to make the piece work better. Clips can be reordered by click-dragging a filename above or below other clips; a dark line between clips indicates where the clip will land when you release the mouse button.

One thing to make sure of when rendering your final video is to use the proper compression codec. For this tutorial, I rendered my final output on a DPS Perception board, so I chose the DPS NT AVI codec, which is included with the Perception. The DPS codec also writes a .pvd file, which the Perception will recognize, that allows it to load the video file for real-time playback to either a composite or S-Video deck. For the files to be uploaded to the 3D Design web site, I created two versions, one at 360 x 240 for high bandwidth, and one at 180 x 135 for low bandwidth. I used the Cinepak codec for both files and set the codec at 90% quality.

A common mistake that many first-time video editors make is overdoing the effects. Often, less is more, and an effect or transition should not be used simply because it can be. Consider carefully which effects suit the subject of the project: exploding transitions might be good for a monster truck rally commercial, but not for the Summer of Love festival.

Making it real

With any 3D application, there is a wealth of tools available to create the kind of digital video effects no one has ever seen before. While there are wipes, transitions, and a variety of special effects built into video editing software, a 3D animation program that can import video sequences for use as textures offers a cornucopia of possibilities for creating digital video effects that are completely original. Whether you create elements separately and composite them in your video editing program, map video onto 3D elements, or both: have fun, experiment, and enjoy a virtually limitless array of ways to combine 3D and digital video into something original, compelling, and fun. 3D

Chris Tome is the editor in chief of DesignFreak.com.