Why Were Old Video Games So Pixelated?


    Pixelated Super Mario Bros box art excerpt on a blue background
    Nintendo

    If you didn’t grow up in the 20th century, you might wonder why video and computer games once used such blocky, pixelated graphics. We’ll explore the origins of pixel art and how graphics have exploded in complexity over time.

    The Short Answer: Resolution Was Limited by Cost and Available Technology

    The pixelated artwork in older video games—where the display resolution is low enough that the pixels are obvious and blocky—was largely the result of low-resolution television sets and the high cost of memory chips and digital logic at the time those games were created relative to today.

    While it was possible to create an HD-resolution digital still image by the late 1970s, the technology to animate it in real time did not exist until much later. And that technology remained far too expensive to put into a mass-produced entertainment product that consumers could afford until the mid-2000s.

    A child playing River Raid on an Atari 800XL computer.
    Benj Edwards

    Game designers did what they could with the limited technology of the era, using mosaic-like, blocky, pixelated graphics to illustrate their games for consoles such as the Atari 2600, NES, Sega Genesis, and many more.

    “We would sketch on graph paper, and then digitize those drawings,” says Joe Decuir, who co-created the Atari 2600 game console and programmed one of its earliest games. “I remember vividly how clunky the tanks looked in Combat.”

    The EGA version of Loom for the IBM PC. Lucasfilm

    Even artists on the PC gaming side had to deal with relatively low-resolution, low-color images compared to today. “Having to work in these giant bricks of solid color and be limited to just 16 terrible colors that had been chosen for us ahead of time, from which there was no varying, was a tremendous obstacle,” says Mark Ferrari, who drew the EGA graphics for Lucasfilm games on the IBM PC such as Zak McKracken, Loom, and The Secret of Monkey Island.

    But artists embraced the limitations and made timeless classics anyway. Let’s take a deeper look into what technical capabilities led to those limitations and why pixelated game art became less necessary over time.

    How Video Game Graphics Work

    Digital game graphics are all about pixels—how you store them, how you process them, and how you display them. More pixels per inch means more detail, but the more pixels you have, the more hardware power you need to drive them.

    The word “pixel” originated as an abbreviation of the term “picture element,” coined by computer researchers in the 1960s. Pixels are the smallest possible part of any digital image, regardless of resolution. In modern computers, they’re usually square—but not always, depending on the aspect ratio of the display device.

    A bitmap of Mario from Super Mario Bros. on the NES.
    Benj Edwards / Nintendo

    In abstract terms, most video game graphics work by storing a grid of pixels (known as a bitmap) in a portion of video memory called a frame buffer. A special circuit then reads that memory and translates it into an image on the screen. The amount of detail (resolution) and the number of colors you can store in that image are directly related to how much video memory you have available in your computer or game console.
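    Here's a minimal sketch of that idea in Python. The resolution, the one-byte-per-pixel layout, and the set_pixel helper are illustrative assumptions for the sake of the example, not a description of any real console's hardware:

```python
# A tiny frame buffer as a flat array of bytes (illustrative only).
# Assumes one byte per pixel; real consoles often packed several
# pixels into each byte to save precious memory.
WIDTH, HEIGHT = 256, 240                   # NES-like resolution
frame_buffer = bytearray(WIDTH * HEIGHT)   # 61,440 bytes of "video memory"

def set_pixel(x, y, color_index):
    """Store a palette index for the pixel at (x, y) in the bitmap."""
    frame_buffer[y * WIDTH + x] = color_index

set_pixel(10, 20, 3)  # paint one pixel with palette entry 3

# A display circuit would scan this memory top to bottom, left to right,
# translating each stored value into a colored dot on the screen.
```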

    Some early console and arcade games didn’t use frame buffers. In fact, the Atari 2600 console, released in 1977, kept its costs low by using dedicated logic to generate a signal on the fly as the television scan line moved down the screen. “We were trying to be cheap, but that put the vertical in the hands of the programmers, who were a lot smarter than the hardware designers realized,” says Decuir of the 2600.

    In these pre-frame-buffer games, graphical detail was limited by the cost of the supporting circuitry (as in Atari’s early discrete-logic arcade games) or by the size of the program code (as in the Atari 2600).
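    Here's a loose Python sketch of that "racing the beam" control flow. Everything in it is a stand-in: real Atari 2600 games were hand-tuned assembly code, and graphics_for_line is a hypothetical placeholder for the per-scan-line work a real game performed:

```python
# A loose sketch of generating a frame with no frame buffer
# (illustrative Python, not real Atari 2600 code).
SCAN_LINES = 192  # visible scan lines in a typical 2600 frame

def graphics_for_line(game_state, line):
    # Hypothetical stand-in for the per-line register writes a real
    # 2600 program performed (sprite positions, playfield, colors).
    return game_state.get(line, 0)

def draw_frame(game_state):
    # The program re-decides the picture for every scan line, in time
    # with the TV's electron beam sweeping down the screen.
    for line in range(SCAN_LINES):
        pattern = graphics_for_line(game_state, line)
        if pattern:
            print(f"line {line:3d}: {pattern:08b}")

draw_frame({100: 0b00111100})  # a single blocky shape on line 100
```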

    Exponential Changes in Memory and Resolution

    The pace of improvement in the technical capabilities of computers and game consoles has been exponential over the past 50 years, which means the cost of digital memory and computing power has decreased at a rate that defies common sense.

    That’s because improvements in chip fabrication technology have allowed manufacturers to cram exponentially more transistors into a given area of silicon, enabling dramatic increases in memory capacity, CPU speed, and graphics chip complexity.

    “Really, it’s how many transistors could you use?” says Steve Golson, co-designer of the Atari 7800’s graphics chip and a co-creator of Ms. Pac-Man, among other games. “With a few tens of thousands of transistors, you have the Atari 2600. With tens of billions of transistors, you get modern consoles. That’s a million times more. And clock speeds have increased from a few megahertz to a few gigahertz. That’s a thousand-fold increase.”

    Casino Poker for the Fairchild Channel F made the best of a 102×58 pixel display.

    The cost of transistors affected every electronic component that used them, including RAM chips. At the dawn of the computerized game console in 1976, digital memory was very expensive. The Fairchild Channel F used a mere 2 kilobytes of RAM to store a bitmapped image of the screen—just 128×64 pixels (102×58 visible), with only one of four colors per pixel. RAM chips of similar capacity to the four used in the Channel F retailed for about $80 total at the time, or about $373 adjusted for inflation.

    Fast-forward to 2021, when the Nintendo Switch includes 4 gigabytes of RAM that can be shared between working memory and video memory. Let’s assume a game uses 2GB (2,000,000 kilobytes) of it as video RAM. At 1976 RAM prices, those 2,000,000 kilobytes would have cost $80 million, which is over $373 million in today’s dollars. Insane, right? That’s the logic-defying nature of exponential change.
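    If you want to check that math, here's the same back-of-the-envelope arithmetic in Python, using only the figures quoted above:

```python
# Re-running the article's back-of-the-envelope RAM math.
channel_f_bits = 128 * 64 * 2            # 2 bits per pixel (4 colors)
print(channel_f_bits / 8 / 1024)         # 2.0 kilobytes of video RAM

price_per_kb_1976 = 80 / 2               # about $80 for 2 KB in 1976
switch_vram_kb = 2_000_000               # the assumed 2 GB of video RAM
cost_1976 = switch_vram_kb * price_per_kb_1976
print(f"${cost_1976:,.0f}")              # $80,000,000 at 1976 prices

inflation = 373 / 80                     # same multiplier as $80 -> $373
print(f"${cost_1976 * inflation:,.0f}")  # about $373,000,000 today
```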

    As the price of memory has dropped since 1976, console makers have been able to include more video RAM in their consoles, allowing for much higher-resolution images. With more resolution, individual pixels have become smaller and harder to see.

    The Mario in Mario Odyssey uses roughly as many pixels as the entire NES system resolution. Benj Edwards / Nintendo

    The Nintendo Entertainment System, released in 1985, could produce a 256×240 image (61,440 pixels). Today, a Sony PlayStation 5 can produce a 3840×2160 image (4K), and potentially one as high as 7680×4320 (8K, or 33,177,600 pixels). That’s a 53,900% increase in video game console resolution over the past 36 years.
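    That percentage is just the ratio of the two pixel counts:

```python
# Quick check of the resolution-growth figure above.
nes_pixels = 256 * 240           # 61,440
eight_k_pixels = 7680 * 4320     # 33,177,600
increase = (eight_k_pixels - nes_pixels) / nes_pixels * 100
print(f"{increase:,.0f}%")       # 53,900%
```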

    Even if it had been possible to display high-definition graphics in the 1980s, there was no way to move those images out of memory and paint them onto a screen 30 or 60 times per second. “Consider Pixar’s wonderful animated short film The Adventures of André & Wally B.,” says Golson. “In 1984, this film required a $15 million Cray supercomputer to create.”

    In 1984, it took a $15 million Cray supercomputer hours to render each frame of the short film The Adventures of André & Wally B. Pixar

    For The Adventures of André & Wally B., Pixar rendered detailed 512×488 frames at a rate of roughly one frame every two to three hours. Later, higher-resolution works took even longer to render and required multi-million-dollar, world-class equipment. When it came to real-time photorealistic graphics, says Golson, “It simply could not be done using the hardware available in 1984. Let alone at a price point to be sold to consumers.”

    TV Set Resolution Was Low, Limiting Detail

    Of course, for a console to display a 4K image like today’s high-end consoles can, you need a display capable of showing it, and no such display existed in the 1970s and ’80s.

    Prior to the HDTV era, most game consoles relied on display technology developed in the 1950s—long before anyone anticipated playing high-resolution home video games. Those TV sets were designed to receive broadcasts over the air via an antenna that plugged into the back.

    “The only way to connect to the TV was through the antenna input,” says Steve Golson, recalling his work on the Atari 7800 in 1984. “Thus the console had to generate a compatible signal that looked like it was coming from your antenna. So you were limited by the possible resolution of an analog NTSC broadcast signal.”

    Double Dragon on the NES

    Ideally, the NTSC analog TV signal can handle about 486 interlaced lines that are about 640 pixels wide (although this varies by implementation due to the standard’s analog nature). But early on, game console designers discovered that they could save memory by using only one of NTSC’s two interlaced fields per frame, producing a very stable 240-line progressive image, now called “240p” among enthusiasts. To keep the 4:3 aspect ratio, they limited the horizontal resolution to about 320 pixels, although the exact number varied considerably between consoles.
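    The memory savings from that trick were substantial. As a rough comparison, assuming a simple 8-bit-per-pixel frame buffer (an illustrative format; real consoles varied widely):

```python
# Rough frame-buffer memory comparison (illustrative format only).
full_ntsc = 640 * 480         # both interlaced fields, full width
mode_240p = 320 * 240         # one field, ~320 pixels wide for 4:3
print(full_ntsc / mode_240p)  # 4.0 -- a quarter of the video memory
```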

    The NTSC signal also restricted the number of colors you could generate without having them bleed together or wash out. “And you had to make it look nice for the many people who still had black-and-white TVs! This further limited your color choices,” says Golson.

    To get around this restriction, personal computers began using higher-resolution non-television displays in the early 1980s. “The IBM PC and its clones inspired a big market for separate color monitors that could handle at least VGA (640×480),” adds Joe Decuir. “However, game players didn’t get those until the 1990s, for PC-connected gaming.”

    At 512×448 pixels, Nintendo’s Popeye was a high-resolution game for 1982, but it required an expensive arcade machine and a special monitor to work. Nintendo

    Some vintage arcade games, such as Nintendo’s Popeye (1982), took advantage of much higher resolutions (512×448) made possible by arcade monitors running a non-standard interlaced video mode, but those games could not be translated to home consoles at the time without graphical compromises.

    Storage Space Set Limits on Graphical Complexity, Too

    In both console and computer games, the complexity of graphics was limited not just by display capabilities and logic speed, but also by how they were stored on removable media that could be distributed to customers.

    “These days, people don’t really begin to understand what a limited environment we were working in in terms of storage space and processing time,” says Mark Ferrari. “Disk space was really precious in those days.”

    A 5.25″ floppy disk and a 3.5″ floppy disk. Benj Edwards

    At the time Ferrari drew his graphics for Lucasfilm, a game had to fit on a handful of floppy disks that could each store only about 1.4 megabytes. Even though Lucasfilm compressed its game artwork, the limit on how much detail Ferrari could include came not just from the resolution of the IBM PC graphics card, but also from the storage capacity of the floppy disks themselves.

    But, like memory prices, the cost of storing graphics data on removable media has dropped exponentially as well. On the console side of things, a Fairchild Channel F cartridge held about 2 kilobytes of data in 1976, whereas Nintendo Switch Game Cards can store up to 32,000,000 kilobytes of data (32GB). That’s 16 million times more storage space, providing a lot more room for detailed graphics data.
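    That ratio comes straight from the two capacities:

```python
# Cartridge storage growth, using the article's figures.
channel_f_cart_kb = 2            # Fairchild Channel F cartridge, 1976
switch_card_kb = 32_000_000      # 32 GB Nintendo Switch Game Card
print(f"{switch_card_kb / channel_f_cart_kb:,.0f}x")  # 16,000,000x
```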

    The End of the Visible Pixel … and a New Beginning

    In 2010, Apple introduced a “Retina display” on the iPhone 4—a screen with a resolution high enough that the naked eye (at a standard viewing distance) could no longer distinguish individual pixels. Since then, these ultra-high-res displays have moved to tablets, desktops, and laptop computers.

    An iPhone with a Retina display. Apple

    For a little while, it looked like pixel art’s days might finally be over for good. But low-res pixel art hasn’t disappeared. In fact, it’s on an upswing.

    Beginning in the late 2000s, indie game developers began to embrace the retro pixel art aesthetic in earnest. They did it partially for nostalgic reasons, and also because in some cases, it’s easier for a small team of developers to create simpler block graphics than detailed, high-resolution illustrations that look professional. (As with everything, there are exceptions—creating convincing and smooth animations with 2D sprites is a very labor-intensive process, for example.)

    Blocky-pixel games like Stardew Valley and Minecraft evoke feelings of a simpler time, while also providing the conveniences that come with modern game design.

    Stardew Valley uses pixel art to evoke nostalgic memories. ConcernedApe LLC

    Mark Ferrari looks at these modern-day pixel artists with awe and reverence. “I was doing pixel art because there was no alternative. It wasn’t a choice, it was a necessity,” says Ferrari. “People doing pixel art now are all doing it by choice. There’s not a technical imperative in the world right now to be doing pixel art anymore. But they are choosing this as an aesthetic because they love it.”

    So even though pixel art was once a restriction, it’s now a treasured art aesthetic that will likely never disappear, and it’s all thanks to that very short period in history when artists did what they could with the limited technology of the time. Pixels forever!




