GAMING & GRAPHICS

Vol.32 No.4 November 1998
ACM SIGGRAPH



Embrace Your Limitations — Cut-Scenes in Computer Games



Richard Rouse III
Paranoid Productions



“Cut-scenes” have featured prominently in computer games for at least the last decade, and if one looks hard enough one will notice their presence in games even older than that. A good early example is Pac Man, which sported amusing little interludes between some levels, featuring the characters from the game in humorous comic sketches. These functioned as a reward to the player for getting through N-many levels of the game, and the promise of still more cut-scenes later in the game provided extra incentive for addicted kids to pump still more quarters into the arcade cabinet. In some limited way, the Pac Man cut-scenes also told a story to the player, filling him in on, say, just what Pac Man and Ms. Pac-Man did when they weren’t eating little white dots.

The appeal of cut-scenes to designers who wish to tell stories in their computer games is obvious. Instead of working in the tricky and mostly unexplored arena of storytelling in a completely interactive environment — that is to say, telling stories during the actual gameplay — designers are able to convey whatever plot elements they feel necessary through non-interactive segments which actually interrupt the gameplay proper. For instance, if a designer absolutely wants the player to see nasty boss monster Gargantutron tunneling out of the ground and has a really nifty effect for it, why risk that the player may be looking at something else when Gargantutron shows up on the level? Instead, briefly take away the player’s control of what he or she looks at, and force the player to watch a prerendered animation of Gargantutron emerging from the bowels of the Earth. Another, perhaps more cynical, explanation for the predominance of non-interactive cut-scenes in games is that the interactive entertainment industry is riddled with people who wish they were working on movies, as anyone who has worked in gaming for any amount of time can attest.

But I’m not arguing for the elimination of non-interactive cut-scenes in interactive entertainment. Far from it. I see them as another useful tool in interactive storytelling. What concerns me most are cut-scenes which don’t graphically fit in any way with the game they supplement, movies which seem to have been filmed in an entirely different universe from the one that the player encounters in the game itself. Surprisingly enough, this is the norm for our industry.

Methods that Lead to Inconsistency

The worst case scenario is when computer game cut-scenes are outsourced to film or CGI houses which have nothing or very little to do with the creation of the in-game graphics, and are merely working from concept sketches provided to them, or, worse yet, vague text descriptions of what storyline is supposed to unfold in each given cinematic. As a result, more often than not game cut-scenes look nothing like the art which appears during gameplay. Instead of functioning as smooth transitions between interactive segments in the game, the cut-scenes become jarring disruptions which break any suspension of disbelief the player might have developed while playing the game.

In terms of visual incongruity, the worst offenders of all seem to be the live-action video cut-scenes which were so ballyhooed in the gaming industry five years ago and which everyone now seems to be moving away from. Aside from the fact that these live action segments were often badly filmed, acted and scripted, the visual dissimilarity between the gameplay graphics (be they sprite-based or real-time 3D) and the digitized actors appearing in the cut-scenes should have set off warning bells in designers’ or producers’ heads. It seems almost inherently true that filmed actors are going to stick out like so many sore body parts from the in-game graphics, and achieving any sort of visual continuity between them and the gameplay visuals is all but impossible.

One would think that prerendered CGI cut-scenes would have been more suited to providing continuity with gameplay graphics, but more often than not this simply isn’t the case. Though I’ll be the first to admit that CGI scenes have come closer than live action video cut-scenes, they often still look like they are taking place in a realm altogether different from the one in which the gameplay takes place. This is mainly because artists and animators are relatively free to use whatever quantity of polygons they desire to create a prerendered scene, whereas polygon counts used during gameplay segments often need to be strictly limited. Artists have a natural desire to make whatever bit of art they’re currently working on look as good as possible. If they’re able to use a million polygons in the prerendered cut-scenes they’re certainly going to use them, even if they can’t in the gameplay artwork.

For instance, consider a game which uses a real-time 3D engine, such as Quake. In such a gameplay environment, artists and animators are strictly limited in the number of polygons they can use, since the engine can only handle N-many polygons on the screen at once. So while an animator might want to use at least several thousand polygons for a vaguely realistic humanoid figure, the game engine limits them to a couple of hundred. From my experience, few things frustrate animators, who are accustomed to using however many polygons a piece needs, more than suddenly having to use a very small number. But, forced (often at gunpoint) by the producer to limit themselves to only a few hundred polygons, the artists make do, swearing that some day they’ll be able to make really swell looking models once again. And then, when it comes time to do the cut-scenes, praise the gods! The animators are now free to use however many polygons they want, since these scenes are prerendered on Silicon Graphics machines and then just played back in the game as Smacker or QuickTime movies. And so the artists go wild, taking their in-game model of 200 polygons and increasing its polygon count ten or even a hundred fold. What the heck, they may throw out the model and make an entirely different one just for use in the cinematic. This results in high-poly cut-scene renderings which — though beautiful — barely resemble the graphics displayed during the actual gameplay. And when the player gets to one of these cut-scenes she can’t help but think (unless she’s a particularly computer graphics savvy lady), “Man, why can’t the graphics in the game look this good!”
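To put rough numbers on that squeeze, here is a minimal sketch; the triangle counts and scene breakdown are purely illustrative figures of my own, not drawn from Quake or any particular engine, but they show why the in-game character ends up at a couple of hundred polygons while its prerendered twin faces no such ceiling.

```cpp
#include <cstdio>

int main() {
    // Illustrative numbers only: a late-90s real-time engine that can push
    // roughly 6,000 triangles per frame, split between level geometry and
    // however many characters happen to be in view.
    const int frameBudget   = 6000; // triangles the engine can draw per frame
    const int levelGeometry = 3500; // spent on walls, floors and props
    const int visibleActors = 12;   // monsters and characters on screen at once

    const int perActor = (frameBudget - levelGeometry) / visibleActors;
    std::printf("In-game budget per character: ~%d triangles\n", perActor); // ~208

    // The prerendered cut-scene has no comparable cap: the same character can
    // be rebuilt at tens of thousands of triangles, which is exactly why it
    // stops looking like it belongs to the same world as the gameplay model.
    const int cutSceneModel = perActor * 100; // the "hundred fold" case above
    std::printf("Cut-scene model: %d triangles\n", cutSceneModel);
    return 0;
}
```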

Figure 1: A cut-scene from Interstate ‘76.

Figure 2: An image from Interstate ‘76 gameplay.

Some Games that Get It Right

But not all games are guilty of making their cut-scenes look exceedingly different from their in-game graphics. A good example of a game that gets its cut-scenes right is Interstate ‘76. Probably best described as the Car Wars role playing game done in an arcade game style and infused with a 1970s American sensibility, the game includes many well-done cut-scenes which add immensely to the gaming experience. What’s especially beautiful about the game’s non-interactive interludes is that the cut-scenes match visually with the gameplay graphics. In terms of the color palette used, the low-polygon look all the characters have, and the stylization of the characters, the gameplay and the cut-scenes form one cohesive storytelling whole. Everything looks like it takes place in the same universe.

Some animators have been quick to point out to me that the cut-scenes for Interstate ‘76 are not actually that low polygon, and indeed if one examines the scenes closely one will count many more polygons on the screen than could actually be rendered in real time using the game’s engine. However, to the layman who isn’t so savvy about graphics techniques, the scenes look similar, even if they’re technically not, and “fooling” the player is, after all, what we’re ultimately concerned with. The game also benefits by using identical voice-acting during both the cut-scenes and the gameplay, as well as by having the cut-scenes lead directly into the gaming action. Primarily, however, it’s the consistent visual look which makes the game a smooth experience for the player.

Though Interstate ‘76’s designers managed to create visual cohesion between prerendered cut-scenes and real-time rendered gameplay, an even better method, one sure to yield consistent results, is to use the game’s primary graphics engine to handle the cut-scenes. The Pac Man cut-scenes I mentioned previously are a good, though simplistic, example of this. There was no technology for prerendered movie playback in the early 1980s when Pac Man was released, and I’ll bet that the cut-scenes were hard-coded manipulations of Pac Man’s graphics engine. In any event, their graphics match exactly with those found in the game, and visual continuity is maintained throughout.

Some more modern examples come to us in a number of games in development which have licensed the Quake engine. Two titles in particular spring to mind: Sin and Half-Life. These games, like Pac Man, use the game’s regular drawing capabilities to generate, in real time, the cut-scenes the player sees. This of course means that the non-interactive interludes are subject to the same polygon count restrictions prevalent during the rest of the game, and the complexity of the scenes the designers are able to show is, as a result, quite constrained. It’s my guess that both games, instead of hard-coding their cut-scenes as Pac Man no doubt did, use a special, complex scripting language to govern the placement and movement of characters on the screen, as well as the movement of the camera. Despite polygon limitations, the cut-scenes I have seen for Sin are quite well done, and provide a wonderfully seamless continuity between the interactive and non-interactive segments of the game. Though I have yet to see Half-Life in action, the screen shots I’ve seen from both interactive and non-interactive segments of the game seem to match perfectly. With that game’s heavy emphasis on storyline, it’s good to know that the game will present its story in a consistent visual style, allowing for the maximum amount of immersion for the player.
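To make that guess concrete, here is a minimal sketch of what such an in-engine cut-scene script could look like. Every structure and command name below is my own invention, assumed for illustration rather than taken from Sin, Half-Life or the Quake engine; the point is simply that the scene is ordinary data which the regular renderer plays back with its normal models and polygon budgets.

```cpp
#include <string>
#include <vector>

// Hypothetical in-engine cut-scene script: a list of timed commands that the
// game's normal update/render loop steps through, instead of handing control
// to a prerendered movie.

struct Vec3 { float x, y, z; };

enum class Op { MoveCamera, MoveActor, FaceActor, Say, Wait };

struct Command {
    Op          op;
    std::string actor;   // which entity the command targets, if any
    Vec3        target;  // destination or look-at point
    std::string line;    // dialogue text, used only by Say
    float       seconds; // duration, used only by Wait
};

// The engine would advance to the next command when the current one finishes,
// all the while drawing the same low-polygon models used during gameplay.
std::vector<Command> introScene() {
    return {
        { Op::MoveCamera, "",     { 4.0f, 1.7f, -8.0f } },
        { Op::MoveActor,  "boss", { 0.0f, 0.0f,  0.0f } },
        { Op::FaceActor,  "boss", { 4.0f, 1.7f, -8.0f } },
        { Op::Say,        "boss", {}, "You should not have come here." },
        { Op::Wait,       "",     {}, "", 2.5f },
    };
}
```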

A pleasant side effect of using the game’s engine to handle cut-scenes is that the resolution, screen size, frame rate and overall quality of the cut-scenes on the player’s screen are all identical to what the player will see during gameplay. Though prerendered movie playback technology has vastly improved in recent years, and said playback gets better and better as the megahertz speed of the target platform increases, it doesn’t take an expert to spot the pixelation that occurs when a movie is playing full screen versus the usually much sharper (if lower poly) graphics one will see during gameplay. Using the in-game engine, these graphical inconsistencies go away, providing the smoothest possible experience for the player.

Figure 3: A non-interactive cut-scene from The Last Express.

Figure 4: A gameplay screenshot from The Last Express.

An Old Pro

A designer who has been putting cut-scenes in his games as long as anyone is Jordan Mechner, creator of Karateka, Prince of Persia, Prince of Persia 2 and, most recently, The Last Express. Though the first three of these games are arcade adventures and the last is a more “pure” adventure, all masterfully use cut-scenes to communicate their story, and all but Prince of Persia 2 use the gameplay graphics engine to render these interludes. The result in all the games is a very cinematic feel, with complete graphical continuity between the gameplay and non-gameplay sections. Not too long ago I had the pleasure of interviewing Mechner for Inside Mac Games magazine. One of the questions I asked him was whether his use of the game’s gameplay graphics engine in the storytelling interludes was an effort to make those cut-scenes visually indistinguishable from the gameplay. He answered:

“Absolutely. I think part of the aesthetic of all three of those games is that if you sit back and watch it, you should have as smooth a visual experience as if you were watching a film. Whereas if you’re playing it, you should have a smooth experience controlling it. It should work both for the player and for someone who’s standing over the player’s shoulder watching. Cut-scenes and the gameplay should look as much as possible as if they belong to the same world... [This is the] basic principle you have in Last Express: say you’re in point-of-view, you see August Schmidt walking to you down the corridor, then you cut to a reaction shot of Cath, the player’s character, seeing him coming. Then you hear August’s voice, and you cut back to August, and without realizing it you’ve shifted in to a third-person type of scene. Then as soon as it’s over, August walks away, cut to Cath looking at August, and when you cut back you’re back in point-of-view and now you’re controlling it again.”

Mechner makes an interesting point that the player, who interacts with the game directly, and the over-the-shoulder viewer, who watches it as he would a movie, should both have a smooth graphical experience while viewing the game. Whereas one might be able to argue that a stylistic break in the graphics between the interactive and non-interactive sections makes some sense to a player who has, simultaneously with the visual switch, lost control of the game, it makes no sense at all to someone who’s just watching the game and not playing it.

The only Mechner game that didn’t use the game’s engine to render cut-scenes on the fly was Prince of Persia 2, which used still frames that appear handpainted for its interludes. These create a sharp break in the continuity of the game, emphasizing to the player a loss of control whenever they come up. I mentioned this to Mechner, and he replied: “I agree with you about that. There’s a distancing effect to those cut-scenes, they make you feel like you’re watching a storybook. But it was the effect we were going for at the time.”

Richard Rouse III is Lead Designer and President of Paranoid Productions and has published two games to date: Odyssey - The Legend of Nemesis and Damage Incorporated. He is currently serving as Lead Designer and AI Programmer at Leaping Lizard Software on the forthcoming Centipede 3D, to be published by Hasbro Interactive and due out — you guessed it — by Christmas. Your feedback to this column is encouraged at the address below.

Richard Rouse III
2124 I St. N.W. #306
Washington, D.C. 20037

Tel: +1-202-861-5513


The copyright of articles and images printed remains with the author unless otherwise indicated.

Embrace Your Limitations

Recently some coworkers and I were discussing the problem of getting our game — the forthcoming Centipede 3D — to run faster by cutting down on the polygon counts of various objects in the game world. In particular, we talked about how we could make a decent looking mushroom in fewer than 24 polygons, since mushrooms are the most commonly found object in the world of Centipede, sometimes with 70 or so appearing on the screen at once. One of them pointed out that the best way would be to have two pyramids — a larger one on top of a smaller one — and in such a simple construction we’d have a model that, in a minimalist or perhaps even cubist way, could represent a mushroom. I was suddenly struck by the idea that if, from the project’s inception, we had striven for a more minimalist look, both our frame rate problems and our artistic inconsistencies would have been resolved. Instead of having insects that tried to look real but failed, looking instead like they were made out of (at most) 90 polygons, we could have had insects that looked like cubist representations of insects, which gamers would recognize as deliberately minimalist in style.
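For the curious, here is a minimal sketch of that two-pyramid mushroom; the proportions and vertex layout are made up on the spot, one plausible arrangement among many, but counting the triangles shows how far under the 24-polygon ceiling such a minimalist model sits, even with 70 of them on screen.

```cpp
#include <array>
#include <cstdio>

// The "two pyramids" mushroom described above: a four-sided stem pyramid plus
// a four-sided cap pyramid (with its square underside filled in) comes to
// 4 + 4 + 2 = 10 triangles, comfortably under the 24-polygon ceiling.

struct Vec3 { float x, y, z; };
struct Tri  { int a, b, c; };  // indices into the vertex array

const std::array<Vec3, 10> kVerts = {{
    // stem base: a small square on the ground
    {-0.2f, 0.0f, -0.2f}, { 0.2f, 0.0f, -0.2f},
    { 0.2f, 0.0f,  0.2f}, {-0.2f, 0.0f,  0.2f},
    // cap underside: a larger square at the top of the stem
    {-0.6f, 0.8f, -0.6f}, { 0.6f, 0.8f, -0.6f},
    { 0.6f, 0.8f,  0.6f}, {-0.6f, 0.8f,  0.6f},
    // stem apex (centre of the cap's underside) and cap apex
    { 0.0f, 0.8f,  0.0f}, { 0.0f, 1.2f,  0.0f},
}};

const std::array<Tri, 10> kTris = {{
    // stem: four sides rising from the base square to vertex 8
    {0, 1, 8}, {1, 2, 8}, {2, 3, 8}, {3, 0, 8},
    // cap underside: the large square, split into two triangles
    {4, 6, 5}, {4, 7, 6},
    // cap: four sides rising from the underside square to the apex (vertex 9)
    {4, 5, 9}, {5, 6, 9}, {6, 7, 9}, {7, 4, 9},
}};

int main() {
    std::printf("Mushroom: %zu vertices, %zu triangles\n",
                kVerts.size(), kTris.size());
    // Seventy of these on screen is still only around 700 triangles.
    std::printf("70 mushrooms on screen: %zu triangles\n", 70 * kTris.size());
    return 0;
}
```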

At this point in the discussion I blurted out the half-joking exhortation “Embrace your limitations!” which got a big round of guffaws from all present. But thinking about it later I came to see the statement as less humorous and more generally true to what we should be attempting. As game creators, we need to recognize early in the development cycle what our limitations are, and figure out how we can make the best game while working around those limitations. And if the in-game graphics are only going to be able to use N-many polygons, and we all agree that visual consistency demands that the game’s cut-scenes match the style of the gameplay art, then we need to make the cut-scenes use only N-many polygons as well. Or at least they must appear to use that few polygons, even if we use a few more to “round off the edges.”

Of course all is not so easy for the game designer who strives for visual consistency. What of the marketing people who, if there are four screen shots on the back of the game’s box, like to take three of them from the beautiful, movie-quality cut-scenes and only one from the in-game art? They would surely cry bloody murder if all of the screen shots now looked as “bad” as the gameplay graphics. How would they pull the wool over the eyes of the gaming public if the game had a consistent visual style? But in a perfect world, where the marketing people don’t take a look at the game until it’s done, I hope that we as designers and artists see the importance of trying to maintain a visual smoothness throughout our games, an effect which leads the player to perceive the game as a more professional product. Until the day comes when there are no non-interactive cut-scenes in interactive entertainment, we need to make our games look as similar as possible both when the player is interacting with them and when she’s not, whether she’s playing the game herself or watching it over someone else’s shoulder.