Vol.32 No.2 May 1998
Game Graphics During the 8-bit Computer Era
The technologies being employed in current games have advanced to the point where computer game companies are now leaders in graphics research. Indeed, the requirement for realistic real-time graphics has arguably driven graphics research in areas such as image-based rendering and visibility processing. This article will explore the 8-bit computer industry (from about 1982 to 1990) and in particular the graphics architectures, algorithms and techniques being employed at that time in computer games. Rather than attempt a complete review of all the machines available at the time, I'll concentrate on what I know best: the Commodore 64, which was undoubtedly the most successful of the 8-bit machines, but will also have a brief look at the Atari 400/800 and Sinclair Spectrum for comparison.
In later sections, I'll outline the architecture of the 64's graphics subsystem (and compare it with some of its main rivals), list some of the graphical techniques used in different genres of games and will also explore some of the more esoteric effects that could be squeezed from the 64 by exploiting quirks of its video chip.
The golden era of the 8-bit computer game began around 1982 and continued until about 1990. Following on from the success of hobbyist computer kits (like the Sinclair ZX-80), a number of computer companies simultaneously released a range of powerful, preassembled home computers, epitomized by the Commodore 64, the Sinclair Spectrum and the Atari 400/800 (which I will simply refer to as the Atari 800 or just Atari). In fact, there were many more contenders, and a summary of these is given in Table 1.
Table 1: A summary of a selection of the large range of 8-bit home computers that appeared on the market circa 1982.
The market leaders were the 64, the Spectrum and the BBC (in Europe) and the Atari (in the U.S.). Amstrad was able to make a major impression, capturing some of the market share in the mid-eighties with the CPC-464, but until the advent of the 16-bit machine (heralded by the Commodore Amiga and the Atari ST), the Commodore 64 was the most popular home computer. Its popularity was almost certainly due to the graphics and sound capabilities rather than operating system (the 64's implementation of BASIC was notoriously bad) or speed (the processor was clocked slower than most of its contemporaries). It was a simple matter to achieve basic animation effects -- the 64 had hardware support for scrolling and sprites. Sprites are small graphic elements of fixed width and height that may be positioned independently of the main screen and were provided for the implementation of moving characters in games. These features encouraged experimentation, and an entire generation of programmers became familiar with the architecture and began to push the boundary of what was possible.
Figure 1: Classic Atari titles: (a) Miner 2049er, (b) Defender and (c) Galaxian.
Figure 2: The Atari Player Missile (PM) graphics.
The Atari 800
The Atari (see the Planet Atari Web site  for more information) had the most powerful graphics system, which is not surprising given the machine's lineage. The GTIA chip (George's Television Interface Adaptor) provided hardware support for sprites (called Player/Missile Graphics or simply PM graphics), a large number of video modes and a display list processor, the ANTIC, allowing mode changes per raster line for advanced display effects. Both chips were memory mapped and had a large number of registers controlling their operation.
Five eight-pixel wide "players" could be displayed at varying horizontal positions (see Figure 2). These columns spanned the entire height of the display. To move a player's graphic horizontally, the horizontal position register of the player was updated. For vertical movement, the bitmap data associated with the player was shifted in memory. The fifth player sprite could optionally be split into four two-pixel wide sprites, each with independent horizontal control. These were designed for displaying missiles. This arrangement was ideally suited to certain types of games -- particularly the Space Invaders genre -- a good example of which is Galaxian, shown in Figure 1(c).
Inter-sprite and sprite-background priorities could be specified, and the GTIA chip would detect all collisions between sprites and background and latch these in registers, indicating the sprites which had been involved in the collision. The implementation was more flexible than that of the Commodore 64, in which a single bit registered sprite-sprite collisions and another flagged sprite-background collisions, requiring further testing of extent overlaps to determine which sprite had been involved. This is analogous to the broad and narrow collision detection phases in use in most physically based animation systems.
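The 64's flag-then-test flow can be sketched as a two-phase scheme. The sprite representation below (x, y, width, height tuples, sized 24x21 as on the 64) and the function names are illustrative only:

```python
def boxes_overlap(a, b):
    """Axis-aligned extent test between two (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def narrow_phase(sprites, collision_flag):
    """Broad phase: the hardware latches a single sprite-sprite collision
    bit (as on the C64). Narrow phase: only if that bit is set does the
    software test pairwise extents to find which sprites actually collided."""
    if not collision_flag:
        return []
    pairs = []
    for i in range(len(sprites)):
        for j in range(i + 1, len(sprites)):
            if boxes_overlap(sprites[i], sprites[j]):
                pairs.append((i, j))
    return pairs
```

The broad phase is nearly free (one register read); the O(n^2) extent tests only run when the hardware says a collision happened somewhere.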
The ANTIC chip was responsible for interpreting the display buffer for the GTIA chip. It was the ANTIC chip that determined the resolution and number of colors available on the display, and it did so by selecting one of a large number of both text-based and bitmap graphics modes. Unique to the Atari, however, was the display list, which later became an integral part of the Commodore Amiga's graphics architecture.
The display list was a list of commands interpreted by the ANTIC chip and accessed via DMA. Each command of the display list was capable of selecting one of the 16 display modes, which determined the resolution, number of colors and the interpretation of the display buffer. The display list had its own flow control implemented using jump commands, so effectively the ANTIC chip was a processor operating in parallel with the 6502. Potentially, each line of the display could have its own entry in the display list, thus allowing selective control over each raster line; many modes could exist on the screen at the same time.
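A toy interpreter conveys the idea of per-line mode selection with jump-based flow control. The entry format here is invented for illustration and is far simpler than ANTIC's actual instruction encoding:

```python
def run_display_list(display_list, total_lines):
    """Walk a simplified display list. Each entry is (mode, irq_flag) and
    selects the display mode for one scanline; a ('JMP', target) entry
    redirects interpretation, giving the list its own flow control.
    Returns the mode chosen for each line plus the lines that requested
    an interrupt."""
    modes, irq_lines = [], []
    pc = 0
    while len(modes) < total_lines:
        entry = display_list[pc]
        if entry[0] == 'JMP':          # flow control: loop or branch
            pc = entry[1]
            continue
        mode, irq = entry
        if irq:                        # a real ANTIC would raise a DLI here
            irq_lines.append(len(modes))
        modes.append(mode)
        pc += 1
    return modes, irq_lines
```

A list that alternates two modes forever needs only three entries, the last a jump back to the start, so many modes can coexist on one screen.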
Hardware support for scrolling was provided through X and Y scroll registers, allowing the entire display to be shifted in the horizontal or vertical direction by up to 15 pixel positions. For larger scrolls, the start of screen memory, but not the screen data itself, was shifted in memory. Each display list command could enable/disable scrolling for its associated line, thus allowing split screen scrolling. Finally, each display list entry was capable of flagging an interrupt request, thus control could be passed to the CPU when the raster scan reached a certain point in the display, facilitating synchronization of the display and the software. These techniques were also in common use on the Commodore 64, but significantly less support was provided, and display list functionality could only be emulated in software.
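The interplay of fine and coarse scrolling can be modelled as below. The 16-position fine-scroll range matches the registers described above, but the byte arithmetic and names are simplified assumptions, not the Atari's actual register semantics:

```python
def scroll_step(fine, screen_start, fine_range=16, bytes_per_step=2):
    """Advance a horizontal scroll by one pixel: bump the fine-scroll
    register, and when it wraps past its range, reset it and coarse-scroll
    by moving the screen-start pointer. The screen data itself never moves."""
    fine += 1
    if fine >= fine_range:
        fine = 0
        screen_start += bytes_per_step   # 16 pixels covered, at 8 px/byte
    return fine, screen_start
```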
Depending on the mode, a number of colors could be displayed on the screen selected from a palette of 16 hues. Uniquely, the brightness of these colors could also be specified -- there were eight luminance settings -- giving a total palette of 128 colors. In a few modes, this could be extended to 16 luminance settings, giving 256 possible colors.
Figure 3: Classic Spectrum titles: (a) Sabrewulf, (b) KnightLore, and (c) Manic Miner.
The Sinclair Spectrum
The Spectrum distinguished itself by having no hardware support for sprites, which became a major stumbling block for graphics programmers developing for the machine. In fact, the Spectrum was a marvel of minimalist engineering, lacking even a dedicated video chip. All video I/O was performed via a ULA which controlled the lower 16K of RAM, of which 6912 bytes were used for the display buffer. The display was made up of a 256x192 bitplane with a zero representing a pixel to be colored with the background color and a one indicating the use of the foreground color, as was the case with many other machines. However, an attribute buffer of 768 bytes encoded unique foreground and background colors for each 8x8 pixel square, selected one of two brightness values and toggled flashing. So from the 16 available colors, two were possible in each block.
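The mapping from a pixel to its bitmap byte and its attribute byte can be written down directly. The interleaved bitmap layout below is the Spectrum's standard memory map (bitmap at 0x4000, attribute file at 0x5800); only the function name is invented:

```python
def spectrum_addresses(x, y):
    """Map pixel (x, y) on the 256x192 Spectrum display to the address of
    its byte in the bitplane and of its 8x8-block attribute byte. The
    bitplane interleaves the three bit-fields of y; the 768-byte attribute
    file is a simple 32x24 grid of blocks."""
    bitmap = 0x4000 | ((y & 0xC0) << 5) | ((y & 0x07) << 8) \
                    | ((y & 0x38) << 2) | (x >> 3)
    attr = 0x5800 + (y >> 3) * 32 + (x >> 3)
    return bitmap, attr
```

Note that every pixel in an 8x8 block maps to the same attribute byte, which is exactly why two colors per block is a hard limit.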
As a result, it was very difficult to avoid color bleeding artifacts. When an animating character (usually implemented as arrays of 8x8 pixel blocks) moved smoothly across a background, if the character was a different color than the background, it was often impossible to serve the color requirements of both character and background graphics within single blocks. Usually the character color was used both for foreground and background graphics. This made the character color appear to have bled into the background. To minimize this, many games either a) avoided color altogether, b) confined animation steps to multiples of eight pixels in any direction or c) created a thick border surrounding the character to minimize the effect of bleeding.
As with the 64 and Atari, it was possible to synchronize the software with the display using interrupt handlers invoked in response to ULA interrupts to prevent flicker. For more details see the Planet Spectrum Web site.
Figure 4: Classic Commodore 64 titles: (a) Ghosts 'n Goblins, (b) Impossible Mission and (c) Paradroid.
The Commodore 64
The 64's graphics capabilities were provided by MOS Technology's 6567/6569 VIC-II chip (Video Interface Controller). These devices were originally designed for cabinet-based games and graphics workstations and had excellent graphics capabilities, surpassed only by the Atari's GTIA/ANTIC devices. The VIC device supported three character based display modes, two bitmap modes, eight hardware sprites, hardware assisted scrolling and a palette of 16 colors (but no luminance control). As with the Atari, the device was memory mapped and addressed 16K of Display RAM (DRAM). It had a 12-bit data bus to allow simultaneous connection to main memory (8 bits) and 4-bit static RAM which contained the color information for the screen.
Since the launch of the Commodore 64, the VIC chip has been reverse engineered to the point where probably every nuance of its operation is now understood. This has allowed programmers to take advantage of some quirks of the design which facilitate certain graphical effects that would be impossible to achieve through software alone. For more details about the inner workings of the Commodore 64, visit the CBM Document Page, and for an excellent review of the VIC chip functionality, read Christian Bauer's technical article. Christian is the designer of Frodo, an excellent Commodore 64 emulator.
When in a character-based mode, 1000 bytes of screen memory were used to specify the character symbol to use in each of the 40x25 character positions. Each character was itself a block of 8x8 pixels. The default character set was available in ROM, but the VIC could be pointed at RAM to allow the creation of user-defined characters. The VIC could generate and display 256 such characters. The foreground color for each character position was supplied by the color RAM mentioned earlier.
Figure 5: Display memory layout in bitmap mode. Display bytes were ordered as they would be in character mode to simplify the implementation of the scanning hardware in the VIC chip.
In bitmap mode, a full 8000 bytes were used to address the 320x200 pixels of the display. To facilitate cheaper implementation via the VIC’s memory scanning architecture, the bitmap data was arranged rather unusually as shown in Figure 5. As can be seen, this arrangement was similar to the memory scanning sequence the VIC would adopt for character based modes.
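In this layout the byte holding a given pixel is found with character-cell arithmetic rather than a simple row-major offset. A sketch (the function name is mine; the address formula follows the layout in Figure 5):

```python
def c64_bitmap_offset(x, y):
    """Byte offset of pixel (x, y) within the C64's 8000-byte bitmap.
    The screen is scanned as 25 rows of 40 8x8-pixel cells, exactly as in
    character mode: 320 bytes per cell row, 8 consecutive bytes per cell,
    plus the scanline within the cell."""
    return (y >> 3) * 320 + (x >> 3) * 8 + (y & 7)
```

Within that byte, the leftmost pixel sits in the most significant bit, so the bit position is `7 - (x & 7)`.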
The 64 had eight independent sprites, each being a block of 24x21 pixels (i.e. 63 bytes of graphics data per sprite). Unlike the Atari, the 64's sprites were free to move both horizontally and vertically. The VIC resolved collisions between sprites and between sprites and screen data and latched this information in registers to be read by the software (or would raise an interrupt if enabled). Sprites, like the Atari's PM graphics, could be stretched vertically and horizontally by a factor of two. Display priority was fixed between sprites, with sprite zero always in front and sprite seven in the back, but priority with the screen data could be specified by the user, allowing for basic depth effects exploited in many 3D games. Unlike the Spectrum, sprite colors were managed independently of the background graphics and so there were no color bleeding artifacts.
Hardware scrolling allowed the entire screen image to be offset by up to seven pixels in either the horizontal or vertical direction. For scrolls larger than this, the software was responsible for shifting the display memory appropriately when the hardware scroll limit was reached. To achieve independently scrolling regions within the same screen, the programmer had to implement more complicated raster methods.
Figure 6: A sprite in a) normal mode and b) in multicolor mode.
The 64 had a fixed palette of 16 colors. Border and background colors were specified using the appropriate VIC register. Foreground colors could be specified for individual character positions using the color RAM (in both character based and bitmap modes). The VIC chip also supported a multicolor version of each mode -- and multicolor sprites. In all cases, when multicolor mode was selected, pairs of bits in display memory were used to specify the color (background, multicolor1, multicolor2 and foreground). Whereas the foreground color could vary from character position to character position, the remaining three colors were fixed for the entire display. A consequence of multicolor mode was a halving of the display resolution; it was frequently dubbed "fat pixel mode." Figure 6 illustrates both normal and multicolor sprites.
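Following the bit-pair scheme just described, one byte of display memory decodes into four double-wide pixels. This sketch covers the bitmap/character case as described above; multicolor sprites assign the bit pairs slightly differently:

```python
def decode_multicolor_byte(byte, bg, mc1, mc2, fg):
    """Decode one display byte in multicolor ('fat pixel') mode: each
    pair of bits selects one of four colors, so a byte yields four
    double-wide pixels instead of eight single-wide ones."""
    palette = {0b00: bg, 0b01: mc1, 0b10: mc2, 0b11: fg}
    return [palette[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0)]
```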
Figure 7: Definitive 8-bit computer games: a) Encounter, b) Tornado Low Level, c) Elite, d) Lords of Midnight, e) Stunt Car Racer, f) The Hobbit, g) Ant Attack, h) KnightLore, and i) Head over Heels.
3D Graphics on an 8-bit Computer?
It's fair to say that around 1985, when the 8-bit games industry was in full swing, computer graphics used in games were quite primitive when compared to the state of the art in graphics research. At the time when the Hemicube method for radiosity and distribution ray tracing were being developed, the pinnacle of graphical achievement in the games scene was some clever visibility determination in the seminal KnightLore (see Figure 7(h)) from Ultimate Play The Game (now called Rare). The technique, named "filmation," was remarkable at the time though, and represented the first real attempt at detailed 3D isometric graphics. Since then there have been a large number of games employing the technique, which was achieved using depth ordered drawing. In almost all cases, the data to be drawn was aligned to a grid and viewed from fixed orientations (usually permitting rotation of the view through 90 degrees) thus simplifying the depth ordering.
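With grid-aligned data and a fixed viewpoint, the depth ordering reduces to a sort on a simple key. This is a sketch of the painter's-style idea behind such isometric engines, not Ultimate's actual filmation code; the block representation (x, y, z grid tuples) is assumed:

```python
def isometric_draw_order(blocks):
    """Back-to-front drawing order for grid-aligned isometric blocks:
    later draws overwrite earlier ones, so sorting by distance from the
    viewer yields consistent visibility without any general
    hidden-surface algorithm. For a fixed diagonal viewpoint, depth is
    simply x + y, with height z breaking ties (lower blocks first)."""
    return sorted(blocks, key=lambda b: (b[0] + b[1], b[2]))
```

Rotating the view through 90 degrees just means swapping or negating the axes in the sort key, which is why those fixed orientations were cheap to support.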
Figure 8: Using sprite background priority for 3D depth effects. In (a) the sprite has lower priority and appears behind the wall, whereas in (b) its priority has been raised and so it appears in front of the wall (thus it appears as if the ball has traveled around the corner of the tower). This priority switch was done through software.
Figure 9: By drawing elements of the image in depth order, a 3D image was created with consistent visibility.
Figure 10: The visibility determination algorithm employed by Elite resolved local visibility only via backface culling. Note the incorrect visibility indicated by the arrow.
The simplest method used to convey the impression of depth involved the use of sprite-background or sprite-sprite priority to achieve a degree of hidden surface removal. Nebulus used this effect to achieve the appearance of rotation around a central tower, as can be seen in Figure 8. In such circumstances, usually the graphics were tailored to avoid any ambiguity (i.e. a sprite should never need to be both in front of one piece of the foreground and behind another). This method was used to create some of the least CPU intensive 3D effects.
Isometric graphics (originally appearing in Sega's Zaxxon arcade game) are probably best represented by the Ultimate Play The Game's filmation games series which began with KnightLore in 1984.
One of the earliest of the wireframe based games was David Braben and Ian Bell's Elite, originally released on the BBC Micro and which remained the best selling game for a long time. Its combination of space trading, vast playing area and atmosphere more than made up for the rather sluggish frame rates, which often dropped as low as one a second if a number of ships were being displayed simultaneously. Elite implemented back-face culling per object but no global visibility testing was performed (see Figure 10). One of the major innovations was the superlative 3D radar control, which remains one of the most intuitive 3D navigation controls I have come across. It was patented by the authors.
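Per-object back-face culling of this kind is a single dot-product test per face. A minimal sketch (the face representation and view direction are assumptions, not Elite's data structures):

```python
def backface_cull(faces, view_dir=(0.0, 0.0, 1.0)):
    """Keep only the faces whose outward normal points toward the viewer,
    i.e. whose dot product with the view direction is negative. This
    resolves visibility within one convex-ish object but performs no
    global test, so one object cannot occlude another -- the artifact
    marked by the arrow in Figure 10."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [f for f in faces if dot(f['normal'], view_dir) < 0]
```

On a 6502 with no hardware multiply, even this one test per face was a significant cost, which helps explain Elite's frame rates.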
Other noteworthy examples include Mercenary, which defined the standard for Commodore 64 wireframe graphics with update speeds significantly faster than those of Elite, and which allowed you to discover the joys of flying a piece of cheese! Stunt Car Racer, shown in Figure 7(e), showed what could be done with filled polygons, and though slow to update it managed to -- ironically -- convey a convincing sense of speed and momentum.
A special mention must go to Andrew Braybrook who possibly is still the most famous of Commodore 64 programmers. I was enthralled by the ''Game Diaries'' that he published in popular magazines of the time chronicling the development of both Paradroid and Morpheus. Andrew was undoubtedly responsible for the huge interest in the use of bas-relief for imparting a sense of 3D to a game (you get the same effect by passing an image with good contrast through an embossing filter), and it became a favorite method of mine when designing 64 graphics. See Figure 11 for some examples. Other games making use of this technique included Uridium, Sanxion and Parallax.
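The embossing comparison can be made concrete: a basic emboss filter takes the difference between each pixel and a diagonal neighbour, biased to mid-grey, so contrast edges become light and dark ridges. This is a generic sketch of the filter, not any particular game's artwork pipeline:

```python
def emboss(image):
    """Simple emboss over a 2D list of 0-255 intensities: each output
    pixel is mid-grey (128) plus the difference between the pixel and its
    upper-left neighbour, clamped to [0, 255]. Rising edges turn light,
    falling edges turn dark, which reads as raised relief."""
    h, w = len(image), len(image[0])
    out = [[128] * w for _ in range(h)]   # borders stay flat mid-grey
    for y in range(1, h):
        for x in range(1, w):
            v = 128 + image[y][x] - image[y - 1][x - 1]
            out[y][x] = max(0, min(255, v))
    return out
```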
Whereas the Atari had its ANTIC chip and the associated display list, to achieve similar results on the 64 or the Spectrum you were required to implement your own interrupt handlers called at key moments during a screen refresh. Many of the more esoteric effects possible with the VIC chip relied on precise manipulation of VIC registers during each refresh. However, raster interrupts were a necessity if you required smooth scrolling, flicker-free screen updates or split screen display modes.
Everyone knows that in order to eliminate flicker, you must synchronize the update of the display with the frame refresh (in particular, avoid drawing into an area of the screen that is currently under the raster beam). Current video hardware usually implements this via a double buffer switch which is synchronized in this manner. On the 64, the normal method was to enable VIC raster interrupts and request an interrupt on a line just beyond the bottom of the visible display, during the vertical blank. The interrupt handler was then responsible for updating the display before the raster returned to refresh the next frame.
A trivial implementation of a split screen mode involved simply requesting a raster interrupt at the line at which you wished the split to occur. The interrupt handler then simply switched modes as required and reinitialized the raster interrupt to occur sometime during the vertical blank period, allowing the mode to be flipped back in time for the next raster refresh. This worked quite well for static screens and horizontal scrolling, but when vertical scrolling was required within a split window and sprites were allowed to cross the split boundary, the timing of the split became more critical. Wherein lay the problem? The VIC was capable of locking out the CPU when it required the bus for graphics data accesses. If this happened at a split point, the result could be a nasty flickering line around the split point, representing the delay introduced by the CPU halt.
Figure 11: The bas-relief effect was a great way to simulate raised surfaces: (a) Herobotix, (b) Sanxion, and (c) Parallax.
Another raster trick was to increase the number of sprites being displayed simultaneously. You simply needed to change each sprite's vertical position once the raster had completely displayed it. The VIC did not keep track of the number of times a sprite was displayed; it simply examined the contents of the sprite y-position registers and at each raster line displayed those sprites that lay on the current line. Sprites could therefore be reused as many times as required with the proviso that a sprite could not occupy a single raster line more than once. This was known as sprite multiplexing. There were some difficulties in determining the optimum raster interrupt line after which sprites would be repositioned, and this required an optimization step which minimized sprite splitting. Unfortunately, there was not always a solution (i.e. if the software required more than eight sprites on a raster line, something had to give). But through clever scheduling it was possible to minimize the problems. Some games suffered terribly from sprite break-up, and the best example of this was Commando.
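The scheduling idea can be sketched as a greedy assignment: sort the game's actors by vertical position, then give each one the first hardware sprite whose previous use has already scrolled past. This models the planning step only (the actual repositioning happened in raster interrupt handlers), and all names here are mine:

```python
def multiplex(actor_ys, hw_sprites=8, sprite_height=21):
    """Greedy sprite-multiplexing schedule. actor_ys lists each actor's
    top scanline. Returns (plan, dropped): plan is a list of
    (hw_sprite, actor_index) assignments in display order; dropped lists
    actors that could not be placed because more than hw_sprites actors
    overlapped one raster line -- the 'sprite break-up' case."""
    free_at = [0] * hw_sprites            # scanline at which each slot is free again
    plan, dropped = [], []
    for idx, y in sorted(enumerate(actor_ys), key=lambda a: a[1]):
        for s in range(hw_sprites):
            if free_at[s] <= y:           # this slot finished before the actor starts
                plan.append((s, idx))
                free_at[s] = y + sprite_height
                break
        else:
            dropped.append(idx)           # something has to give
    return plan, dropped
```

Each entry in the plan corresponds to a raster interrupt: once slot s has finished drawing, its y-register is rewritten for its next assignment further down the screen.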
How Far Have We Come?
The Commodore 64's reign ended in the early nineties. This marked the end of the 8-bit computer (the 64 was probably the last of the popular 8-bit computers) and suddenly the 16-bit and 32-bit eras were upon us. Now in the late nineties, all previous machines have been surpassed by the PC, which currently holds the home computer crown. Games today are rarely the result of a single programmer; they involve teams of programmers, graphics artists, musicians, directors, actors and script writers (and I'm sure there are probably grips and Foley artists and hairdressers).
Steven Collins is the director of the Image Synthesis research group and a lecturer in the Department of Computer Science at Trinity College Dublin. He wrote three Commodore 64 games, two of which were published: 1987's original Herobotix and a 1990 port of the Badlands coin-op. His Ph.D. research focused on modeling phenomena resulting from light interaction with specular surfaces. He is a member of ACM and Eurographics.
In the golden 8-bit era, the programmers were the heroes. Everyone waited for the next release from the aforementioned Mr. Braybrook, or Jeff Minter, Tony Crowther, Paul Noakes, Geoff Crammond, David Braben, Steve Turner, John Phillips and so many others. Usually the programmer was also responsible for the graphics (though not always), but often somebody else would provide the music. The famous musicians of the time were Rob Hubbard, Martin Galway, Ben Dalglish and the Maniacs of Noise among others. The Commodore 64's SID chip (sound interface device) was an excellent three-oscillator sound generator with resonant filtering that was pushed to the limits by these guys. With multiplexed chords, pattern based sequencing and sampled drum sounds, some of the music created was quite amazing. The music for Parallax -- about 20 minutes worth -- by Galway, and Masters of Magic by Hubbard were among the very best.
I suppose the attraction back then was the accessibility; you felt that you too could partake in the programmer's quest for the ultimate game. Today the industry has grown up and we go to college, get our degrees and then get a day job with a games company.
Current 3D technology (with Direct3D, OpenGL and the plethora of 3D acceleration cards) and 16 or 32-bit multichannel sound and genetic algorithms for creature intelligence and CD-ROMs with gigabytes of level data have certainly changed the face of the computer game. It's now an interactive immersive environment with entities and goals and strategies.
So have things really changed? What metric might we use to judge this? If I were to apply the metric of level of excitement generated, or the fear, or the sense of achievement at having completed a goal, then we haven't moved at all. I consider Paradroid and Impossible Mission to have been the best of the 8-bit crop. I would now consider Quake II to be the best of the current crop. I get equally as much enjoyment out of each. All I can really conclude is that the technology and the industry have grown up, but I haven't.