
Digital Image Basics

Written by Jonathan Sachs Copyright 1996-1999 Digital Light & Color

Introduction
To capture, store, modify, and view photographic images with digital equipment, the images must first be converted to a set of numbers in a process called digitization or scanning. Computers are very good at storing and manipulating numbers, so once your image has been digitized you can use your computer to archive, examine, alter, display, transmit, or print your photographs in an incredible variety of ways.

Pixels and Bitmaps


Digital images are composed of pixels (short for picture elements). Each pixel represents the color (or gray level, for black and white photos) at a single point in the image, so a pixel is like a tiny dot of a particular color. By measuring the color of an image at a large number of points, we can create a digital approximation of the image from which a copy of the original can be reconstructed. Pixels are a little like grain particles in a conventional photographic image, but they are arranged in a regular pattern of rows and columns and store their information somewhat differently. A digital image is a rectangular array of pixels, sometimes called a bitmap.


Types of Digital Images


For photographic purposes, there are two important types of digital images: color and black and white. Color images are made up of colored pixels, while black and white images are made up of pixels in different shades of gray.

Black and White Images


A black and white image is made up of pixels, each of which holds a single number corresponding to the gray level of the image at a particular location. These gray levels span the full range from black to white in a series of very fine steps, normally 256 different grays. Since the eye can distinguish only about 200 different gray levels, this is enough to give the illusion of a stepless tonal scale, as illustrated below:

Assuming 256 gray levels, each black and white pixel can be stored in a single byte (8 bits) of memory.

Color Images
A color image is made up of pixels, each of which holds three numbers corresponding to the red, green, and blue levels of the image at a particular location. Red, green, and blue (sometimes referred to as RGB) are the primary colors for mixing light; these so-called additive primary colors are different from the subtractive primary colors used for mixing paints (cyan, magenta, and yellow). Any color can be created by mixing the correct amounts of red, green, and blue light. Assuming 256 levels for each primary, each color pixel can be stored in three bytes (24 bits) of memory. This corresponds to roughly 16.7 million different possible colors. Note that for images of the same size, a black and white version will use one third the memory of a color version.
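As a quick sketch of this storage arithmetic in Python (the function name and image dimensions are just illustrative):

```python
def image_bytes(width, height, channels):
    """Uncompressed storage: one 8-bit byte per channel per pixel."""
    return width * height * channels

# Grayscale stores 1 channel per pixel; 24-bit RGB color stores 3.
print(image_bytes(1000, 1000, 1))  # 1000000 bytes for black and white
print(image_bytes(1000, 1000, 3))  # 3000000 bytes for color
print(256 ** 3)                    # 16777216 possible 24-bit colors
```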


Binary or Bilevel Images


Binary images use only a single bit to represent each pixel. Since a bit can only exist in two states (on or off), every pixel in a binary image must be one of two colors, usually black or white. This inability to represent intermediate shades of gray is what limits their usefulness in dealing with photographic images.

Indexed Color Images


Some color images are created using a limited palette of colors, typically 256 different colors. These images are referred to as indexed color images because the data for each pixel consists of a palette index indicating which of the colors in the palette applies to that pixel. There are several problems with using indexed color to represent photographic images. First, if the image contains more different colors than are in the palette, techniques such as dithering must be applied to represent the missing colors and this degrades the image. Second, combining two indexed color images that use different palettes or even retouching part of a single indexed color image creates problems because of the limited number of available colors.
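The lookup scheme can be sketched as follows; the palette and pixel values here are invented for illustration:

```python
# Each pixel of an indexed color image stores a palette index (one byte)
# rather than a full RGB triple (three bytes). Palette is hypothetical.
palette = [
    (0, 0, 0),        # index 0: black
    (255, 255, 255),  # index 1: white
    (255, 0, 0),      # index 2: red
]

# A tiny 2x3 image stored as palette indices
indexed = [
    [0, 1, 2],
    [2, 1, 0],
]

# Expanding to full RGB for display is a simple table lookup per pixel
rgb = [[palette[i] for i in row] for row in indexed]
print(rgb[0][2])  # (255, 0, 0)
```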


Resolution
The more points at which we sample the image by measuring its color, the more detail we can capture. The density of pixels in an image is referred to as its resolution. The higher the resolution, the more information the image contains. If we keep the image size the same and increase the resolution, the image gets sharper and more detailed. Alternatively, with a higher resolution image, we can produce a larger image with the same amount of detail. For example, the following images illustrate what happens as we reduce the resolution of an image while keeping its size the same: the pixels get larger and larger and there is less and less detail in the image:

[Figure: the same image at Original (400x262), Half Size (200x131), Quarter Size (100x65), and Eighth Size (50x32) resolution]
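One simple way to halve an image's resolution is to average each 2x2 block of pixels; this toy Python function (not from the article) shows the idea for a grayscale image:

```python
def halve_resolution(pixels):
    """Downsample a grayscale image by averaging each 2x2 block.
    `pixels` is a list of rows; width and height must be even."""
    out = []
    for y in range(0, len(pixels), 2):
        row = []
        for x in range(0, len(pixels[0]), 2):
            total = (pixels[y][x] + pixels[y][x + 1] +
                     pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(total // 4)
        out.append(row)
    return out

img = [
    [10, 30, 200, 200],
    [10, 30, 200, 200],
    [0, 0, 50, 70],
    [0, 0, 90, 110],
]
print(halve_resolution(img))  # [[20, 200], [0, 80]]
```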


As we reduce the resolution of an image while keeping its pixels the same size, the image gets smaller and smaller while the amount of detail (per square inch) stays the same:

Color Terminology
While pixels are normally stored within the computer according to their red, green, and blue levels, this method of specifying colors (sometimes called the RGB color space) does not correspond to the way we normally perceive and categorize colors. There are many different ways to specify colors, but the most useful ones work by separating out the hue, saturation, and brightness components of a color.

Primary and Secondary Colors and Additive and Subtractive Color Mixing
Primary colors are those that cannot be created by mixing other colors. Because of the way we perceive colors using three different sets of wavelengths, there are three primary colors. Any color can be represented as some mixture of these three primary colors. There are two different ways to combine colors: additive and subtractive color mixing. Subtractive color mixing is the one most of us learned in school, and it describes how two colored paints or inks combine on a piece of paper. The three subtractive


primaries are Cyan (blue-green), Magenta (purple-red), and Yellow (not Blue, Red, and Yellow as we were taught). Additive color mixing refers to combining lights of two different colors, for example by shining two colored spotlights on the same white wall. The additive color model is the one used in computer displays, as the image is formed on the face of the monitor by combining beams of red, green, and blue light in different proportions. Color printers use the subtractive color model and use cyan, magenta, and yellow inks. To compensate for the impure nature of most printing inks, a fourth color, black, is also used, since the black obtained by combining cyan, magenta, and yellow inks is often a murky dark green rather than a deep, rich black. For this and other reasons, commercial color printing presses use a 4-color process to reproduce color images in magazines. A color created by mixing equal amounts of two primary colors is called a secondary. In the additive color system, the primary colors are:

- Red
- Green
- Blue

and the secondary colors are:

- Cyan
- Magenta
- Yellow

In the subtractive color system, the roles of the primaries and secondaries are reversed.

Color Gamut
In the real world, the ideal of creating any visible color by mixing three primary colors is never actually achieved. The dyes, pigments, and phosphors used to create colors on paper or computer screens are imperfect and cannot recreate the full range of visible colors. The actual range of colors achievable by a particular device or medium is called its color gamut, and this is mostly but not entirely determined by the characteristics of its primary colors. Since different devices such as computer monitors, printers, scanners, and photographic film all have different color gamuts, the problem of achieving consistent color is quite challenging. Different media also differ in their total dynamic range: how dark is the darkest achievable black and how light is the brightest white.


Color Management
The process of getting an image to look the same between two or more different media or devices is called color management, and there are many different color management systems available today. Unfortunately, most are complex, expensive, and not available for a full range of devices.

Hue
The hue of a color identifies what is commonly called color. For example, all reds have a similar hue value whether they are light, dark, intense, or pastel.

Saturation
The saturation of a color identifies how pure or intense the color is. A fully saturated color is deep and brilliant; as the saturation decreases, the color gets paler and more washed out until it eventually fades to neutral.

Brightness
The brightness of a color identifies how light or dark the color is. Any color whose brightness is zero is black, regardless of its hue or saturation. There are different schemes for specifying a color's brightness and depending on which one is used, the results of lightening a color can vary considerably.

Luminance
The luminance of a color is a measure of its perceived brightness. The computation of luminance takes into account the fact that the human eye is far more sensitive to certain colors (like yellow-green) than to others (like blue).
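As an illustration, here is a luminance computation using the Rec. 601 weights used in television (one common choice of coefficients; other standards weight the channels slightly differently):

```python
def luminance(r, g, b):
    """Perceived brightness from RGB levels, using the Rec. 601 weights:
    green contributes most and blue least, matching the eye's sensitivity."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure green looks much brighter than pure blue at the same RGB level
print(round(luminance(0, 255, 0), 1))  # 149.7
print(round(luminance(0, 0, 255), 1))  # 29.1
```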

Chrominance
Chrominance is a complementary concept to luminance. If you think of how a television signal works, there are two components: a black and white image which represents the luminance and a color signal which contains the chrominance information. Chrominance is a 2-dimensional color space that represents hue and saturation, independent of brightness.


Viewing Images on a Monitor


Before you can get consistent results working with digital images, you must create a standard viewing environment. Most computer monitors use a standard set of phosphors (the chemical compounds on the face of the screen that are responsible for creating its primary colors). These phosphors, combined with the electronic circuitry that activates them, create color images on your computer screen. Getting an accurate match between what you see on your monitor and prints or other output requires that a number of variables be controlled:

Viewing Conditions
First, it is very important for critical evaluation of photographic images on a computer monitor to work in an environment with subdued lighting. When strong room light falls on the face of the monitor, it makes it almost impossible to discern shadow detail and seriously alters the appearance of many of the colors.

Monitor Brightness Control


Next, you must set the brightness control on your monitor correctly. If the level is too high, blacks will start to become grays and colors will be washed out. If the level is too low, there will be loss of shadow detail and colors will appear murky.

Gamma
Since the human eye has a nonlinear response curve (it is more sensitive to variations in low light than to an equal variation in bright light), it is important to track this nonlinearity with the display of different gray levels. The curve that relates the amount of light emitted from the monitor to the 256 different gray levels in the digital image is called the gamma curve. There are several different standard gamma curves in common use. They all follow a power law of the following form:

b = v^γ

where b is the brightness of the light emitted from the screen, v is the gray level value in the image, and γ is the monitor gamma. The standard value for γ that most closely approximates the response of the human eye is 2.22. This is the value used by the television industry for storing and displaying video images. The prepress


industry has standardized on a value of 1.8, which more closely corresponds to the characteristics of printing presses. Macintosh computers often use the value 1.4 for some unknown reason, while most uncorrected PC monitors have a gamma of about 2.3. If an image was designed to be displayed on a monitor with a gamma of 2.2 and the monitor's actual gamma is set to 1.8, the image will appear too light and washed out. Conversely, if an image was adjusted to look good on a monitor with a gamma of 1.4 or 1.8 and is viewed at 2.2, it will look too dark. For this reason, it is very important to set your monitor to the same gamma setting that will be used when the image is printed. If gamma is ignored, there is no way to guarantee that prints will not be too light or too dark.
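The gamma curve can be sketched directly from the power law above (the function name and normalization to the 0.0-1.0 range are illustrative):

```python
def gamma_encoded_brightness(v, gamma=2.2, levels=256):
    """Relative screen brightness b = v**gamma, with the gray level v
    normalized from 0..levels-1 to the 0.0-1.0 range."""
    return (v / (levels - 1)) ** gamma

# On a gamma 2.2 monitor, mid-scale gray (level 128 of 255) emits well
# under half the maximum light, tracking the eye's nonlinear response.
print(gamma_encoded_brightness(0))    # 0.0 (black)
print(gamma_encoded_brightness(255))  # 1.0 (white)
print(gamma_encoded_brightness(128))  # roughly 0.22
```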

White Point
Finally, there is an important characteristic of computer monitors called white point. This is usually given as a color temperature, and for most uncorrected monitors it runs about 9300 degrees. What this means is that the white that is displayed on the monitor when you send it maximum values of red, green, and blue has a distinctly bluish cast. If you were going to view your prints under a light of this same color temperature, you would get a good match (all other variables being set properly). Different lights like incandescent, quartz halogen, fluorescent, and daylight (morning, evening, indirect, etc.) all have different color temperatures, and prints viewed under these different lighting conditions will look radically different. The standard white point used in the graphic arts industry is a 5000 degree color temperature, and most light boxes are set to roughly this color. Viewing prints under fluorescent lighting can be particularly deceptive, as mercury vapor lamps have several spikes in the green part of the spectrum whereas daylight has a relatively smooth spectrum. This can give rise to considerable variation in certain colors. Two monitors that use the same phosphors, are both viewed in subdued light, have their gamma set the same, have their brightness control set properly, and have the same white point should produce nearly identical displays.


Color Spaces
A color space is a mathematical system for representing colors. Since it takes at least three independent measurements to determine a color, most color spaces are three-dimensional. Many different color spaces have been created over the years in an effort to categorize the full gamut of possible colors according to different characteristics. Picture Window uses three different color spaces:

RGB
Most computer monitors work by specifying colors according to their red, green, and blue components. These three values define a 3-dimensional color space called the RGB color space. The RGB color space can be visualized as a cube with red varying along one axis, green varying along the second, and blue varying along the third. Every color that can be created by mixing red, green, and blue light is located somewhere within the cube. The following images show the outside of the RGB cube viewed from two different directions:

The eight corners of the cube correspond to the three primary colors (Red, Green, Blue), the three secondary colors (Cyan, Magenta, Yellow) and black and white. All the different neutral grays are located on the diagonal of the cube that connects the black and the white vertices.
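A small Python sketch of the cube's corners and its neutral diagonal (names are illustrative):

```python
from itertools import product

# The eight corners of the RGB cube: each channel is either 0 or 255.
corners = set(product((0, 255), repeat=3))
print(len(corners))  # 8: black, white, 3 primaries, 3 secondaries

def is_neutral_gray(r, g, b):
    """Neutral grays lie on the cube diagonal where R = G = B."""
    return r == g == b

print(is_neutral_gray(128, 128, 128))  # True: mid-gray is on the diagonal
print(is_neutral_gray(255, 0, 0))      # False: pure red is a corner
```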


HSV (Hue Saturation Value)


The HSV color space attempts to characterize colors according to their hue, saturation, and value (brightness). This color space is based on a so-called hexcone model, which can be visualized as a solid with a hexagonal face at one end that tapers down to a single point at the other. The hexagonal face is derived by looking at the RGB cube centered on its white corner. The cube, when viewed from this angle, looks like a hexagon with white in the center and the primary and secondary colors making up the six vertices of the hexagon. This color hexagon is the one Picture Window uses in its color picker to display the brightest possible version of every color based on its hue and saturation. Successive cross-sections of the HSV hexcone as it narrows to its vertex are illustrated below, showing how the colors get darker and darker, eventually reaching black.

[Figure: HSV hexcone cross-sections at V = 100%, 75%, 50%, and 25%]
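Python's standard colorsys module performs an RGB-to-HSV conversion of this kind, with channel values in the 0.0-1.0 range and hue expressed as a fraction of the color circle:

```python
import colorsys

# Pure red: hue 0.0, fully saturated, full value
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)  # 0.0 1.0 1.0

# Darkening a color lowers V but leaves hue and saturation alone,
# like moving down the HSV hexcone toward its black vertex.
h, s, v = colorsys.rgb_to_hsv(0.5, 0.0, 0.0)
print(h, s, v)  # 0.0 1.0 0.5
```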

HSL (Hue Saturation Lightness)


The HSL color space (also sometimes called HSB) attempts to characterize colors according to their hue, saturation, and lightness (brightness). This color space is based on a double hexcone model which consists of a hexagon in the middle that converges down to a point at each end. Like the HSV color space, the HSL space


goes to black at one end, but unlike HSV, it tends toward white at the opposite end. The most saturated colors appear in the middle. Note that unlike in the HSV color space, the central cross-section has 50% gray in the center, not white.

[Figure: HSL double hexcone cross-sections at L = 25%, 50%, and 75%]
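The standard colorsys module also covers this space, under the name HLS:

```python
import colorsys

# Python's standard library calls this space HLS and returns the
# components in the order (hue, lightness, saturation), all in 0.0-1.0.
h, l, s = colorsys.rgb_to_hls(1.0, 0.0, 0.0)   # pure red
print(h, l, s)  # 0.0 0.5 1.0 -- fully saturated, mid lightness

# White sits at the top point of the double hexcone, black at the bottom.
print(colorsys.rgb_to_hls(1.0, 1.0, 1.0)[1])  # 1.0
print(colorsys.rgb_to_hls(0.0, 0.0, 0.0)[1])  # 0.0
```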

Continuous Tone vs. Halftone Images


There are many different technologies used to reproduce photographic images. Some printing technologies such as laser and ink jet printers work by creating individual dots, each of which is one of a small number of solid colors. For example, a black and white laser printer can only print black or white pixels -- it cannot produce gray dots. To create the illusion of a photographic image, these tiny dots must be clustered in different proportions to reproduce the different colors and gray levels in the image. This dot clustering process is called halftoning or dithering and it can be done in many different ways.

[Figure: the same image as a continuous tone image, an ordered dither halftone, and an error diffusion halftone]

Devices such as computer monitors, dye sublimation printers, and film recorders can create dots of any color without halftoning, and these are called continuous tone printers because they can reproduce a continuous range of gray levels or colors.
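The idea behind error diffusion can be sketched in one dimension: quantize each pixel to pure black or white and carry the rounding error forward to the next pixel. This is a simplified cousin of methods like Floyd-Steinberg, which diffuse the error in two dimensions:

```python
def error_diffusion_1d(row, threshold=128):
    """Dither one row of gray pixels (0-255) to pure black (0) or
    white (255), pushing each pixel's quantization error onto the
    next pixel so the local average brightness is preserved."""
    out = []
    error = 0.0
    for value in row:
        value = value + error
        out.append(0 if value < threshold else 255)
        error = value - out[-1]
    return out

# A flat 50% gray stripe comes out as alternating black and white dots,
# which blend back into gray when viewed from a distance.
print(error_diffusion_1d([128] * 6))  # [255, 0, 255, 0, 255, 0]
```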


Because a continuous tone printer packs more information into each dot, printers that use halftoning or dithering require higher resolution to achieve comparable image quality, by a factor of roughly 6-8. Thus a continuous tone printer that outputs images at 200 dpi is roughly similar in quality to a halftone printer that outputs dots at roughly 1200-1600 dpi. For this reason, even a 600 dpi laser printer makes relatively coarse images, equivalent to only about 75 dpi continuous tone output. For printed output to appear photographic when viewed from a standard distance of about 10 inches requires between 150 and 300 dpi. At 150 dpi, most images will appear noticeably soft, while between 200 and 300 dpi the incremental improvement is quite small. Beyond 300 dpi there is still some improvement, but for most images and most observers it will be virtually unnoticeable.

How images are digitized


The process of converting an image to pixels is called digitizing or scanning and this function can be performed in many different ways.

Film Scanners
This type of scanner is sometimes called a slide or transparency scanner. They are specifically designed for scanning film, usually 35mm slides or negatives, but some of the more expensive ones can also scan medium and large format film. These scanners work by passing a narrowly focused beam of light through the film and reading the intensity and color of the light that emerges.

Flatbed Scanners
This type of scanner is sometimes called a reflective scanner. They are designed for scanning prints or other flat, opaque materials. These scanners work by shining white light onto the object and reading the intensity and color of the light that is reflected from it, usually a line at a time. Some flatbed scanners have available transparency adapters, but for a number of reasons, in most cases these are not very well suited to scanning film. On the other hand, flatbed scanners can be used as a sort of lensless camera to directly digitize flat objects like leaves.


Digital Cameras
One of the most direct ways to capture an image is a digital camera which uses a special semiconductor chip called a CCD (charge coupled device) to convert light to electrical signals right at the image plane. The quality of the images created in this manner is closely related to the number of pixels the CCD can capture. Affordable digital cameras suffer from relatively low resolution, limited dynamic range, and low ISO film speed equivalent, and consequently do not always produce high quality digital images. To get images with quality comparable to film photography currently requires very expensive digital cameras.

Video Frame Grabbers


This type of scanner uses a video camera to capture a scene or object and then converts the video signal that comes out of the camera to a digital image in your computer's memory. A video camera can be used to digitize scenes containing 3-dimensional objects, but it usually produces much lower image quality than film or flatbed scanners.

Scanning Services
Photo CD is a service started by Kodak a number of years ago whereby your film can be scanned using a high quality scanner and written to a compact disk your computer can access. Using the Photo CD service is an inexpensive way to get high quality scans of your images without purchasing a scanner. Many other scanning services are available which can scan prints or film to floppy disks or removable disk cartridges. These vary from low resolution snapshot quality images to professional drum scans at very high resolution.


Making fine prints in your digital darkroom: Pixels, images, and files
by Norman Koren



Pixels and images

A digital image is a rectangular grid of pixels, or "picture elements," illustrated by the image on the right, which consists of 52x35 pixels; 1820 total. It has been enlarged 5x (to 260x175 pixels) to make the pixels visible. Pixels appear as squares when enlarged in this way. Digital images exist as an array of bytes in a computer's RAM memory or as files in memory cards, hard drives, and CDs or DVDs. Several popular file formats are described below.

Each pixel typically consists of 8 bits (1 byte) for a Black and White (B&W) image or 24 bits (3 bytes) for a color image-- one byte each for Red, Green, and Blue. 8 bits represents 2^8 = 256 tonal levels (0-255). 16-bit B&W and 48-bit color image formats are also available; each 16-bit value represents 2^16 = 65,536 tonal levels. Editing images in 16/48 bits produces the highest quality results; you can save images in 8/24 bits after editing is complete. An image's resolution is the total number of pixels, e.g., 1600x2000 = 3.2 Megapixels, which corresponds to 3.2 Megabytes inside your computer for 8-bit B&W images or 9.6 Megabytes for 24-bit (3 bytes/pixel) color images. There are several other definitions of "resolution." See the explanation below.

Digital images are obtained from digital cameras or by scanning film or prints. Scanners are specified by their dpi or ppi resolution-- dots (actually, pixels) per inch they can obtain from the source. Scanning the original source-- the negative or slide-- always produces better quality than scanning a print.

Printers are specified by their dpi (dots per inch) "resolution," typically 720, 1440 or 2880 for Epson. This number is the stepper motor pitch, not the actual visual resolution. It typically requires several printer dots to represent one image pixel. You don't need to worry about the correspondence between image pixels and printer dots; this is handled by the image editor and printer driver software.
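The megapixel and megabyte arithmetic works out as follows (using 1 MB = 1,000,000 bytes, matching the figures above):

```python
def megapixels(width, height):
    """Total pixel count in millions."""
    return width * height / 1_000_000

def storage_mb(width, height, bytes_per_pixel):
    """Uncompressed in-memory size in megabytes."""
    return width * height * bytes_per_pixel / 1_000_000

print(megapixels(1600, 2000))     # 3.2
print(storage_mb(1600, 2000, 1))  # 3.2 (8-bit B&W: 1 byte/pixel)
print(storage_mb(1600, 2000, 3))  # 9.6 (24-bit color: 3 bytes/pixel)
```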

Image resolution and print size

Considerable confusion arises because image size is specified by the number of pixels, the "resolution" in dots or pixels per inch (dpi or ppi), and the physical size (width and height). But the only attribute that counts is the number of pixels. The image on the left was taken on the Canon EOS-10D, converted from RAW using the Canon File Viewer Utility (which I've since replaced with Capture One DSLR LE), adjusted for color (though far from finished), then resized to 260x175 pixels in my favorite image editor, Picture Window Pro. The dialog box used to resize the image is shown on the right. The original (Current) image is 3072 pixels wide and 2048 pixels high-- straight out of the EOS-10D. The 180 dpi "Resolution" (strictly speaking, it should be ppi-- pixels per inch) is set when the image is converted from RAW format. This number is arbitrary and has no effect on image quality. It is informational only. The same holds for the Width of 17.07 inches and Height of 11.38 inches. They are calculated from the equation,
size = pixels/dpi (or pixels/ppi)

It makes no difference whether the image is 3437 dpi, 22.7x15.1 mm (the actual size of the EOS-10D digital sensor) or 17.07 dpi, 15x10 feet (a billboard); each of the 3072x2048 (6.3 million) pixels (6.3 megapixels) is exactly the same. I didn't change the Resolution (dpi setting) when I resized the image, hence the "size" of the new 200x133 pixel image is tiny: 1.11x0.74 inches. This "size" is completely unrelated to the size you see on your monitor.

To add to the confusion, the word "Resolution" has several meanings. It can be the highest spatial frequency where a line pattern is visible: see the series on Image sharpness and MTF. It often refers to the total pixel count of an image, e.g., 3072x2048 pixels for the EOS-10D. I prefer either of these definitions to the dpi/ppi setting, which has nothing to do with the total detail present in an image, and which can be changed without changing a single pixel. But we're stuck with it in image editing programs.

You can easily change Resolution (dpi), hence Width and Height, without changing pixel count, i.e., you can rescale the image without resizing (i.e., resampling) it. In Picture Window Pro, open the Resize dialog box (above), click on the arrow to the right of Preserve, then select File Size and Proportions. When you change any one of Width, Height, or Resolution (dpi), the other two follow. In Photoshop, open the Image Size dialog box, shown below, and leave Resample Image unchecked. Check Resample Image if you want to change the pixel count-- to resize the image.
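The size = pixels/ppi relation, as a quick Python sketch using the numbers above:

```python
def print_size_inches(pixels_wide, pixels_high, ppi):
    """Physical print size from pixel count and the ppi setting:
    size = pixels / ppi. Changing ppi rescales without resampling."""
    return pixels_wide / ppi, pixels_high / ppi

# The EOS-10D's 3072x2048 pixels tagged at 180 ppi:
w, h = print_size_inches(3072, 2048, 180)
print(round(w, 2), round(h, 2))  # 17.07 11.38

# Same pixels, different ppi tag: a different "size", identical image data.
w, h = print_size_inches(3072, 2048, 300)
print(round(w, 2), round(h, 2))  # 10.24 6.83
```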

If you right-click on the image in Picture Window Pro, then click on Display info, the Window Info box appears. The properties of the resized 200x133 pixel image are shown on the left. The Size numbers are the same as the New column in the Resize dialog box, above. But the File Size, 8,305 bytes, is much smaller than the 79,800 byte image size (200x133 pixels x 3 bytes/pixel). This is the result of JPEG compression. GIF and PNG files are also compressed, but TIFF files normally are not: image and file sizes are the same. I usually sharpen an image (using the simple Sharpen transformation, with Amount around 70%) after resizing it down.

In Picture Window Pro you select the print size when you print; the Width and Height attributes are ignored. This doesn't exactly hold for Photoshop. Photoshop's default Print... command tries to print images at the specified Width and Height. When you try to print the original 17.07x11.38 inch image on letter-sized paper, an annoying message appears: "The image is larger than the paper's printable area; some clipping will occur." There are two solutions. 1. You can rescale the image using the Image Size dialog box, according to the instructions above. You rarely need to resize it. 2. Click on Print Options... instead of Print... in Photoshop 6 (Print with Preview... in Photoshop CS). The first time you open this box for an image, Scale is set to 100% and Scale to Fit Media is unchecked. If you check Scale to Fit Media, Scale is adjusted so the image fits the page. You may need to click Page Setup... to adjust borders and orientation (Portrait or Landscape). Or you can leave Scale to Fit Media unchecked and manually set Scale. A small page preview in the Print Options box helps with the setting, which will be remembered as long as the image remains open.

How many pixels do you need for a sharp print?

Print PPI   Perceived print quality

300   Outstanding. As sharp as most printers can print; about as sharp as the eye can see at normal viewing distances.
200   Excellent. Close to 300 PPI for small prints, 8x11 (or A4) and smaller. Outstanding quality in large prints, 11x17" (or A3) and larger, which tend to be viewed from greater distances.
150   OK for large prints. Adequate, but not optimum, for small prints.
100   Adequate, but not optimum, for large prints. Mediocre for small prints.

300 pixels per inch (ppi) is about as sharp as the eye can see on an inkjet print; it can be very impressive in a print from a sharp image file. Remember, these numbers are actual pixels per inch on the print, not the ppi setting of the image file. When an image is sent to the printer, the image editor or printer driver resizes it to the printer's native resolution-- 720 dpi for Epson Photo printers; 600 dpi for HP and Canon. No manual resizing is required. There is some controversy about how good a job image editors do (particularly Photoshop). Read Qimage Print Quality Challenge to learn more. I'm pleased with the results I get from Picture Window Pro. Most digital images must be resized down for the monitor display-- for web pages or e-mail. Many people are careful to scale the resized images to 72 dpi. Absolutely unnecessary. I know of no web browser or viewing software that pays any attention to the dpi setting. Most monitors actually display 80-100 pixels per inch, anyway.

Image file formats


Several file formats are available for image storage. The most important are TIFF, JPEG, GIF, and PNG. The primary difference between them is the type and amount of image compression. Compression reduces the amount of storage space required by an image. For example, a 1600x2000 pixel 24-bit color image (3 bytes per pixel) requires 9.6 megabytes to store without compression; it requires considerably less with compression. There are two types of compression.
- Lossless compression maintains all image detail, bit-by-bit. Typical compression ratios are 30-50%, depending on the detail in the image. The finer the detail, the less the compression. Lossless compression is used by the PNG format. It is available, though rarely used, with the TIFF format.

- Lossy compression sacrifices detail in order to achieve higher compression ratios. The amount of compression depends on the detail in the image and the quality level selected when the image is saved. JPEG and GIF use different types of lossy compression.
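The principle behind lossless compression can be sketched with toy run-length encoding; real formats such as PNG use far more sophisticated schemes, but the round trip is likewise exact:

```python
def rle_encode(pixels):
    """Toy lossless compression: store each run of equal gray values
    as a [count, value] pair. Smooth areas compress well; fine detail
    compresses poorly, as the article notes."""
    runs = []
    for value in pixels:
        if runs and runs[-1][1] == value:
            runs[-1][0] += 1
        else:
            runs.append([1, value])
    return runs

def rle_decode(runs):
    """Exact inverse: expand each [count, value] pair back to pixels."""
    return [value for count, value in runs for _ in range(count)]

flat = [255] * 8 + [0] * 4
print(rle_encode(flat))                      # [[8, 255], [4, 0]]
print(rle_decode(rle_encode(flat)) == flat)  # True: bit-for-bit identical
```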

In addition to the standard formats, many digital cameras have the option of storing images in RAW format-- unaltered data straight out of the image sensor. The information in RAW files replicates the mosaic pattern of the Bayer filter arrays used in most digital cameras. RAW files do not conform to any standard; they are unique to each camera and manufacturer. Canon calls them CRW; Nikon calls them NEF. They must be converted to a standard format by a RAW converter (or de-mosaicing program) before they can be opened by an image editor. I discuss RAW conversion for the Canon EOS-10D here; I explain how to use RAW files to obtain a significant quality advantage in Tonal quality and dynamic range in digital cameras.

The image on the right is used to illustrate compression. It is a 24-bit color 260x175 pixel image (3 bytes per pixel), and therefore contains 136,500 bytes (136.5 kB). It was stored as a 100% Quality JPEG, 46.3 kB in size. The compression ratio is 136.5/46.3 = 2.95x. 100% JPEGs are virtually indistinguishable from uncompressed TIFF images-- they contain a tiny loss that would only become visible after the file was repeatedly opened and saved. The storage required for this image in each format is shown below in (bold blue). Web browsers support JPEG, GIF, and PNG formats, but not TIFF.

TIFF files (identified by the .tif extension) are normally uncompressed. Lossless compression is available, but not universally supported. TIFF is highly versatile; it can store 16/48-bit images and metadata, such as Description, Artist's name, and Copyright, in tagged fields. This makes TIFF files slightly larger than the images they contain. TIFF is the format of choice for saving images intended for high quality printed output. (138.5 kB)

PNG (.png) is another format worth considering. It uses lossless compression: image quality is equal to TIFF but file size can be considerably smaller, though generally larger than JPEG. PNG is supported by most web browsers and image editors, but it isn't widely used-- it's undoubtedly the most underrated image file format. I occasionally use it for Web display of Black & White images, such as the gamma test pattern, because it compresses almost as well as JPEG, which doesn't support B&W. PNG was created to circumvent the LZW patent used in GIF and TIFF formats. (96.3 kB; 1.42x)

GIF (.gif) is best for Web display of block graphics and charts. Its limited color palette makes it a poor choice for high-quality photographic images, although it's often used for thumbnails. GIF files can be very compact; they lack the wavy edge artifacts of JPEG files. (38.2 kB; 3.6x)

JPEG (.jpg) is the most popular format for Web display of photographic images. Its principal feature is lossy compression, which can result in artifacts, most notably a wavy appearance near boundaries. The loss in quality and the stored image size depend on the Quality setting used to save the image. Higher quality JPEGs are larger but have more detail and fewer artifacts. JPEGs typically range from 1/5 to 1/25 the size of TIFFs. This is illustrated below for an image saved by Picture Window Pro with JPEG quality levels of 90%, 70%, and 30%.

File formats are discussed in more detail in A few scanning tips by Wayne Fulton. Irfanview is a great free utility that can read and write image files in almost any format.

The images below illustrate the effects of the JPEG Quality setting on image compression with Picture Window Pro. The dialog box is shown on the right. I use quality levels between about 80 and 95% for most images in this site.

90% quality; 15.3 kB; 8.9x. Quality is close to 100%, but a little waviness-- the classic JPEG artifact-- is visible in the sky near the trees.

70% quality; 8.9 kB; 15.3x. Image quality is still pretty good, but wavy artifacts are significantly worse and detail in the rocks is reduced.

30% quality; 6.1 kB; 22.4x. Image quality is quite poor. Wavy artifacts are obvious; there is significant loss of detail, and square zones are visible in the sky.

Image quality and size vs. Quality setting for JPEG images

Every program has its own JPEG Quality settings; there is no universal standard. Photoshop has settings between 0 and 12; IrfanView (a handy little editing utility) has settings between 0 and 100. The relationship between settings in different programs is anything but linear. PW Pro Quality = 70% (8.9 kB) is roughly equivalent to IrfanView Quality = 50% (7.9 kB) and Photoshop Quality = 3 (33 kB in Photoshop CS; 13.8 kB in Photoshop 6). Photoshop files are considerably larger, even though they have no embedded ICC profiles, perhaps because they have extra space reserved for Exif data (digital camera settings). This can be a disadvantage for small images used for Web display.

Many digital cameras store images in JPEG format. This can lead to problems because image quality can deteriorate rapidly when images are saved and reloaded in JPEG format. JPEGs should be used with extreme caution for images intended for high quality printed output; if possible they should be avoided altogether. If you plan to do much editing on a JPEG image, it's best to convert it to TIFF. No quality is lost when images are stored and reloaded in TIFF format.

Additional file formats

You may occasionally encounter these, but they're not recommended for general use. Macintosh users should add the appropriate three letter suffix (JPG, TIF, PNG, etc.) to file names to make it easier for us Windows users to read.

JPEG 2000 An update of JPEG that offers a higher degree of compression at the same image quality level, as well as an option for lossless compression. It also has options for fast display when full resolution isn't needed. It should be an excellent format once it's widely adopted, but few web browsers support it as of early 2004. I wouldn't use it until all major web browsers (Internet Explorer, Mozilla, and the Mac browsers) have supported it for at least a year.

PSD Adobe Photoshop Document. Designed by Adobe for use within Photoshop; not supported by web browsers or most image editors. It should not be used for file interchange. To be used with programs other than Photoshop, PSD files generally have to be converted to a standard format such as TIFF. Irfanview is a great little program for translating them.

BMP Windows bitmap format. A widely-supported uncompressed format that lacks the versatility of TIFF. TIFF should be used in its place.

There are about three dozen additional formats, most of which you'll never encounter. The Graphics File Formats Page describes many of them in gory detail. Kodak has a tutorial on file formats that promotes their technically-excellent proprietary formats and mostly ignores standard formats. Another reminder of how this once-great company has fallen from grace.


Making fine prints in your digital darkroom
Light and color: an introduction
by Norman Koren



[Photograph: Rhode Island]

This page introduces the basic concepts of light and color. Color theory is dealt with in more depth in the series on Color management.

Light and Color


We begin with a review of light and color. The concepts presented here-- additive and subtractive color and their respective primaries-- are critically important for image editing. You may skip to the next section if you are familiar with them.

The human eye is sensitive to electromagnetic radiation with wavelengths between about 380 and 700 nanometers. This radiation is known as light. The visible spectrum is illustrated on the right. The eye has three classes of color-sensitive light receptors called cones, which respond roughly to red, green and blue light (around 650, 530 and 460 nm, respectively). A range of colors can be reproduced by one of two complementary approaches:

Additive color: Combine light sources, starting with darkness (black). The additive primary colors are red (R), green (G), and blue (B). Adding R and G light makes yellow (Y). Similarly, G + B = cyan (C) and R + B = magenta (M). Combining all three additive primaries makes white.

Subtractive color: Illuminate objects that contain dyes or pigments that remove portions of the visible spectrum. The objects may either transmit light (transparencies) or reflect light (paper, for example). The subtractive primaries are C, M and Y. Cyan absorbs red; hence C is sometimes called "minus red" (-R). Similarly, M is -G and Y is -B.

The two approaches are illustrated on the right and described in the table below.

Unfortunately, ideal C, Y and M inks don't exist; the subtractive primaries don't entirely remove their complements (R, B and G). This isn't a problem for film, where light is transmitted through three separate dye layers, but it has important consequences for prints made with ink on reflective media (i.e., paper). Combining C, Y and M usually produces a muddy brown. Black ink (K) must be added to the mix to obtain deep black tones. CMYK color is highly device dependent-- there are many algorithms for converting RGB to CMYK. Photographic editing should be done in RGB (or related) color spaces. Conversion to CMYK (usually with colors added to extend the printer color gamut) should be left to the printer driver software.

Additive color
Light sources: beams of light or dots of light on monitor screens. Primaries: Red (R), Green (G), Blue (B). Light from independent sources is added. Adding red and green makes yellow (R + G = Y); similarly, G + B = C and R + B = M. Adding all three additive primaries in roughly equal amounts creates gray or white light.

Subtractive color
Objects that transmit or reflect light: film or prints, typically illuminated by white light. Primaries: Cyan (C), Magenta (M), Yellow (Y). Portions of the visible light spectrum are absorbed by inks, which contain dyes or pigments, or by dye layers in photographic film or paper. Each subtractive primary removes one of the additive primary colors from the reflected or transmitted image: Cyan (C) removes red, hence it is known as minus red (-R); similarly, M is -G and Y is -B. Combining two subtractive primaries makes an additive primary (see illustration). Combining all three subtractive primaries in roughly equal amounts creates gray or black.
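The idealized mixing rules in the table can be sketched numerically. This is a sketch with perfect primaries only; as the text warns, real inks don't behave this ideally, and the function names are just for illustration:

```python
# Additive: light sources sum, starting from black.
def add_light(*sources):
    return tuple(min(1.0, sum(s[i] for s in sources)) for i in range(3))

RED, GREEN, BLUE = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# Subtractive: each ideal ink multiplies the light by its transmittance,
# removing exactly one additive primary (cyan = minus red, etc.).
CYAN, MAGENTA, YELLOW = (0, 1, 1), (1, 0, 1), (1, 1, 0)

def ink_mix(light, *inks):
    for ink in inks:
        light = tuple(l * t for l, t in zip(light, ink))
    return light

WHITE = (1, 1, 1)
print(ink_mix(WHITE, CYAN, MAGENTA))          # (0, 0, 1): blue
print(ink_mix(WHITE, CYAN, MAGENTA, YELLOW))  # (0, 0, 0): black
```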

You can obtain a wide range of colors, but not all the colors the eye can see, by combining RGB light. The gamut of colors a device can reproduce depends on the spectrum of the primaries, which can be far from ideal. To complicate matters, the eye's response doesn't correspond exactly to R, G and B, as commonly defined (the description above is oversimplified). Device color gamut and the eye's response are discussed in detail in the page on Color Management.

Color models
If you lighten or darken color images you need to understand how color is represented. Unfortunately there are several models for representing color. The first two should be familiar; the latter two may be new.

RGB - Red, Green, Blue. The additive primary colors, used for monitor screens and most image file formats. There are actually a number of RGB color spaces-- sRGB, Adobe RGB 1998, Bruce RGB, Chrome 2000, etc.-- differing from each other in the purity of their primary colors, which affects their gamut-- the range of colors they can represent. They are discussed in Color Management.

CMY(K) - Cyan, Magenta, Yellow. The subtractive primary colors: the complements of the additive primaries (cyan is -red; magenta is -green; yellow is -blue). Widely used in inks for printing, with black (K) added because C, Y, and M pigments and inks rarely give deep, rich black tones by themselves (they tend to make a muddy brown). CMYK is important to the prepress industry, but most photographers don't need to be concerned with it. Most high quality photographic printers have additional inks (light M, light C and gray may be added to the basic four), so they aren't really CMYK; the printer driver software converts RGB files into ink densities.

HSV - Hue, Saturation, Value. Hue is what we perceive as color. S is saturation: 100% is a pure color; 0% is a shade of gray. Value is related to brightness. HSV and HSL (below) are obtained by mathematically transforming RGB. HSV is identical to HSB.

HSL - Hue, Saturation, Lightness. H is the same as in HSV, but L and V are defined differently. S is similar for dark colors but quite different for light colors. Also called HLS.

It is not practical to use RGB or CMY(K) to adjust brightness or color saturation because each of the three color channels would have to be changed, and changing them by the same amount to adjust brightness would usually shift the color (hue). HSV and HSL are practical for editing because the software only needs to change V, L, or S. Image editing software typically transforms RGB data into one of these representations, performs the adjustment, then transforms the data back to RGB. You need to know which color model is used because the effects on saturation are very different.

HSV color is shown here in an illustration from Jonathan Sachs' tutorial, "The Basics of Digital Images" (right click on the link to save it in Adobe PDF format). V = max(R,G,B). Maximum Value (V = 1 or 100%) corresponds to pure white (R=G=B=1) and to any fully saturated color (at least one RGB value at 1 and one at 0, i.e., no gray component: W = min(R,G,B) = 0). V = 0 is pure black, regardless of H and S. The HSV color model can be depicted as a cone, widest at the top (V = 1), coming to a point at the bottom (V = 0; pure black). (I use the "V"-like appearance of the cone as a mnemonic to remember "HSV." The names of the color models are pretty arbitrary.) efg has a technically detailed explanation of the HSV color model, complete with a Java applet.
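The transform-adjust-transform sequence can be sketched with Python's standard colorsys module (a minimal sketch; darken_hsl is a hypothetical helper name, and real editors use their own optimized conversions):

```python
import colorsys

def darken_hsl(rgb, factor=0.8):
    """Darken a color by scaling L in HSL space; H and S are untouched."""
    h, l, s = colorsys.rgb_to_hls(*rgb)           # RGB -> HSL (colorsys order: H, L, S)
    return colorsys.hls_to_rgb(h, l * factor, s)  # adjust L, then back to RGB

# A mid-gray stays neutral -- only lightness changes:
print(darken_hsl((0.5, 0.5, 0.5)))  # (0.4, 0.4, 0.4)
```

Adjusting only one coordinate is what avoids the hue shifts that channel-by-channel RGB edits would cause.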

HSL color. Maximum color saturation takes place at L = 0.5 (50%). L = 0 is pure black and L = 1 (100%) is pure white, regardless of H or S. The HSL color model can be depicted as a double cone, widest at the middle (L = 0.5), coming to points at the top (L = 1; pure white) and bottom (L = 0; pure black).

Now the important part. What you must remember about the HSV and HSL color models is:

Darkening in HSV reduces saturation. Lightening in HSV increases saturation.

Darkening in HSL increases saturation when L > 0.5. Lightening in HSL reduces saturation when L > 0.5.
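These rules can be spot-checked with Python's standard colorsys module, using HSL saturation as a rough stand-in for perceived saturation (an illustrative sketch, not the author's procedure):

```python
import colorsys

# Darken a light, fully saturated red in HSV (halve V, keep H and S),
# then measure its HSL saturation before and after.
r, g, b = 1.0, 0.5, 0.5
h, s, v = colorsys.rgb_to_hsv(r, g, b)
darker = colorsys.hsv_to_rgb(h, s, v * 0.5)        # darkened in HSV

_, l_before, s_before = colorsys.rgb_to_hls(r, g, b)
_, l_after, s_after = colorsys.rgb_to_hls(*darker)

# HSL saturation drops from 1.0 to about 0.33:
# darkening in HSV reduces saturation.
print(s_before, s_after)
```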

HSV: Best representation of saturation
HSL: Best representation of lightness

HSV and HSL were developed to represent colors in systems with limited dynamic range (pixel levels 0-255 for 24-bit color). The limitation forces a compromise. HSV represents saturation much better than brightness: V = 1 can be a pure primary color or pure white; hence "Value" is a poor representation of brightness. HSL represents brightness much better than saturation: L = 1 is always pure white, but when L > 0.5, colors with S = 1 contain white, hence aren't completely saturated. In both models, hue H is unchanged when L, V, or S are adjusted.

HSV and HSL are illustrated above for red (H=0). S varies from 0 to 1 along the horizontal axis; V and L vary from 0 to 1 along the vertical axis. The right side of the HSV illustration (S=1) always has maximum saturation (G=B=0), but the top (V=1) varies from pure white at S=0 to pure red at S=1. The top of the HSL illustration (L=1) is pure white for all values of S. It would be nice to be able to represent brightness and saturation properly in one system, but you can't have it both ways.

Transformations for adjusting brightness and saturation in Picture Window Pro let you choose between HSV and HSL. HSV is the default. I usually prefer HSL when the primary intent of the transformation is to adjust brightness; I sometimes prefer HSV when the primary intent is to adjust saturation (I work with brightness more often). But I'm not rigid about these preferences; I often try out both to see how they appear in the Preview window. I don't recommend HSL when you want to darken white or nearly white tones. They can take on an unnatural color cast.


Relationship between RGB, HSV, and HSL color representation (for math geeks)

H is the same for HSV and HSL. We won't give the equations here; you can find them in efg's HSV lab report. Expressed in degrees (0-360), for any nonzero x: H = 0 for Red (x,0,0); 60 for Yellow (x,x,0); 120 for Green (0,x,0) (illustrated below); 180 for Cyan (0,x,x); 240 for Blue (0,0,x); and 300 for Magenta (x,0,x). H can also be represented on a scale of 0 to 1. Assume R, G, and B can have values between 0 and 1. Let W = min(R,G,B) = the gray component.

HSV (HSB)
V = max(R,G,B)
S_HSV = (V-W) / V
Any color with R, G, or B = 1 has V = 1. Maximum saturation occurs when W = 0.

HSL (HLS)
L = (V+W) / 2
S_HSL = (V-W) / (V+W) = (V-W) / (2L) ; L <= 0.5
S_HSL = (V-W) / (2-V-W) = (V-W) / (2-2L) ; L > 0.5
A bright, fully saturated color (max(R,G,B) = V = 1; min(R,G,B) = W = 0; S_HSV = S_HSL = 1) must have L = 0.5. L = 1 corresponds to pure white.
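The formulas translate directly to Python and agree with the standard colorsys module, which can serve as a check (a sketch; the function name and variable names follow the text and are otherwise arbitrary):

```python
import colorsys

def hsv_hsl_saturations(r, g, b):
    """Compute S_HSV, L, and S_HSL from the formulas in the text."""
    V, W = max(r, g, b), min(r, g, b)   # value and gray component
    s_hsv = 0.0 if V == 0 else (V - W) / V
    L = (V + W) / 2
    if V == W:
        s_hsl = 0.0                     # neutral gray: saturation is zero
    elif L <= 0.5:
        s_hsl = (V - W) / (V + W)
    else:
        s_hsl = (V - W) / (2 - V - W)
    return s_hsv, L, s_hsl

# A bright, fully saturated red: S_HSV = S_HSL = 1 and L = 0.5.
print(hsv_hsl_saturations(1.0, 0.0, 0.0))  # (1.0, 0.5, 1.0)
```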

V, L, and S illustrated for H = 0.333 (120; Green)

HSV: Best representation of saturation

HSL: Best representation of lightness

V, L, and H illustrated for S = 1 (maximum saturation)

The Y, C, and M bands are much narrower than the R, G, and B bands. I'm not sure why; I suspect it results from specific properties of HSL and HSV (where V = max(R,G,B) rather than, say, mean(R,G,B)). Y, C, and M appear brighter than R, G, and B at similar V and L levels because both V and L are related to max(R,G,B).

Some interesting relationships

(1) For W = min(R,G,B) = 0 (no gray; maximum saturation): S_HSL = S_HSV = 1; L = V/2; L <= 0.5.
(2) For V = max(R,G,B) = 1 and S_HSL = 1: L = 1 - (S_HSV / 2); S_HSV = 2(1-L); L >= 0.5.

These relationships imply that (1) the bottom half of the HSL L-H plot for S=1 (above, right) is identical to the HSV V-H plot for S=1 (above, left), and (2) the top half of the HSL S-H plot is identical to an HSV S-H plot for V=1 (not shown); i.e., the HSL L-H plot (above, right) combines two HSV plots.

All saturation equations have V-W in the numerator; they differ in denominator scaling. In both representations, S is a measure of relative saturation. S is 0 when W = V (R = G = B; neutral gray); S = 1 when W is at its minimum allowable value for a given value of V or L. For HSV, W = 0 when S = 1. For HSL with L <= 0.5, maximum saturation takes place when W = 0: S_HSL = (V-W) / (V+W) = 1. When L > 0.5, W must be greater than 0. Maximum saturation takes place when W = 2L-V takes its minimum value, Wmin. In this case V = 1, so Wmin = 2L-1, and S_HSL = (V-W) / (2-V-W) = (1-(2L-1)) / (2-1-(2L-1)) = (2-2L) / (2-2L) = 1.

V and L don't correspond with perceived luminance; for example, blue and gray or white with the same V or L values would have very different luminance. The PAL luminance signal, Y = 0.30R + 0.59G + 0.11B, corresponds more closely to perceived luminance.
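The luminance point is easy to check numerically (a sketch; the weights are the ones quoted above):

```python
def luma(r, g, b):
    """Y = 0.30R + 0.59G + 0.11B -- approximate perceived luminance."""
    return 0.30 * r + 0.59 * g + 0.11 * b

# Pure blue and pure green both have V = L = 1 in HSV/HSL terms,
# but blue contributes far less to perceived brightness:
print(round(luma(0, 0, 1), 2))  # 0.11
print(round(luma(0, 1, 0), 2))  # 0.59
```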
