Imaging_Tutorial(1) - School of Computing and Mathematics

WEEK 2
COM602
TUTORIAL 1
Tutor: Dr Raymond Bond
EXERCISES
1. Complete task 1 given in appendix 1. You need to calculate image
resolution, aspect ratio, number of pixels, bits and compression ratios etc.
You will need a calculator.
2. Complete task 2 given in appendix 1. Think about how image compression
works in order to match up the image sizes.
3. Run-Length Encoding: Watch the following illustration and think about
lossless compression algorithms and the fact that they are not always
complicated.
http://tinyurl.com/lf8a7wr
4. Go to the following URL and convert binary into text (ASCII) and vice versa.
Think about how binary is converted into other information such as colour
in RGB format for representing pixels.
http://tinyurl.com/3uk4h
5. Go to the following URL and interact with the RGB simulation. Think about
how various coloured lights interact to create a myriad of other colours.
http://tinyurl.com/kcodp8z
6. During the module you will learn about the following technologies. Do a
Google search on each technology and write down what you think each
technology might be best used for.
a. SVG
b. Canvas
c. CSS3 transforms, transitions and animations
d. WebGL
e. Unity3D
7. Download and read your assignment brief available on Blackboard under
assessments. Feel free to ask questions.
8. Please look over the reading material available in appendix 2.
APPENDIX 1
Part 1: Fill in the rows (the first one is done for you!)

| Width | Height | Aspect ratio (W/H) | Number of pixels (W x H) | Bits in image (8 bits per colour, RGB) | Converted to kilobits (1 kilo = 1000) | New image size after compression (kbits) | Compression ratio | New size if original reduced by 75% (kbits) |
|-------|--------|--------------------|--------------------------|----------------------------------------|---------------------------------------|------------------------------------------|-------------------|---------------------------------------------|
| 128   | 96     | 1.33               | 12288 px                 | 294912                                 | 294.912                               | 126                                      | 0.43              | 73.728                                      |
| 176   | 144    | 1.22               | 25344 px                 | 608256                                 | 608.256                               | 240                                      | 0.39              | 152.064                                     |
| 144   | 288    | 0.5                | 41472 px                 | 995328                                 | 995.328                               | 120                                      | 0.12              | 248.832                                     |
| 640   | 450    | 1.42               | 288000 px                | 6912000                                | 6912.000                              | 3000                                     | 0.43              | 1728                                        |
| 1280  | 1024   | 1.25               | 1310720 px               | 31457280                               | 31457.280                             | 200                                      | 0.01              | 7864.320                                    |
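The first row of the table can be checked with a few lines of Python (a sketch using the table's own conventions: 24 bits per pixel and 1 kilo = 1000):

```python
# Check the first row of the Part 1 table (128 x 96, 24 bits per pixel).
width, height = 128, 96
pixels = width * height              # 12288
bits = pixels * 24                   # 294912 (8 bits per RGB channel)
kbits = bits / 1000                  # 294.912 (1 kilo = 1000)
compressed_kbits = 126               # given compressed size
ratio = compressed_kbits / kbits     # compressed / uncompressed, ~0.43
reduced_kbits = kbits * 0.25         # 73.728 (reduced by 75 percent)
print(pixels, bits, kbits, round(ratio, 2), reduced_kbits)
```

The same few lines, with different width, height and compressed size, verify every other row.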
Part 2: Match the images to their image size:

Image 1 – Colour; Image 2 – Colour; Image 3 – B&W

A. 119 Kbytes – Image 2 – no repetitive pixels
B. 46 Kbytes – Image 1 – repetitive pixels, stores key bytes using run-length encoding
C. 20 Kbytes – Image 3 – lowest bit depth
APPENDIX 2
Introduction to Digital Imaging
Dr Raymond Bond – University of Ulster
What does ‘Digital’ mean?
In a general sense, anything ‘Digital’ relates to computer technology. I’m sure you have heard of the ‘Digital
Revolution’. More specifically, the word ‘Digital’ comes from the word ‘digit’ because ‘Digital’ refers to data
that is stored or processed using the digits ‘0’ and ‘1’, otherwise known as ‘binary’.
What is ‘Binary’?
Just the way we use a base-10 number system in everyday life (0,1,2,3,4,5,6,7,8,9), binary is a base-2 system
(0,1) used in computer technology. With the binary system, one bit (which stands for ‘binary digit’)
represents a state that can either be ‘on’ or ‘off’ and this is represented using the values ‘1’ (on) and ‘0’ (off)
or vice versa. When using a large number of bits, a lot of information can be represented on a computer. For
example, the following binary digits can be translated into the words, ‘Welcome to Digital Imaging’.
01010111 01100101 01101100 01100011 01101111 01101101 01100101 00100000 01110100
01101111 00100000 01000100 01101001 01100111 01101001 01110100 01100001 01101100
00100000 01001001 01101101 01100001 01100111 01101001 01101110 01100111
Visit the following Web address and see for yourself:
www.roubaixinteractive.com/PlayGround/Binary_Conversion/Binary_To_Text.asp
It is worth noticing that each letter in the sentence ‘Welcome to Digital Imaging’ is represented using 8 bits
(for example, the letter ‘W’ in the word ‘Welcome’ is represented in binary as ‘01010111’). Translating
binary to text is done using a standard called the American Standard Code for Information Interchange
(ASCII).
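As a sketch, the translation between text and 8-bit ASCII binary can be reproduced in a few lines of Python (the helper names are our own):

```python
# Each character maps to an 8-bit ASCII code, e.g. 'W' -> 01010111.
def text_to_binary(text):
    return ' '.join(format(ord(ch), '08b') for ch in text)

def binary_to_text(bits):
    return ''.join(chr(int(b, 2)) for b in bits.split())

encoded = text_to_binary('Welcome')
print(encoded)                   # 01010111 01100101 ...
print(binary_to_text(encoded))   # Welcome
```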
Computer scientists use a set of standard terminology when referring to a specific number of bits. For
example, 8 bits is also called a byte, and 16 bits is the same as 2 bytes. Refer to the following list for more
definitions:
 1 bit = a single digit (either ‘1’ or ‘0’)
 8 bits = 1 byte (a combination of 1s and 0s)
 1024 bytes = 1 KB (kilobyte)
 1024 kilobytes = 1 MB (megabyte)
 1024 megabytes = 1 GB (gigabyte)
 1024 gigabytes = 1 TB (terabyte)
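These unit conversions can be sketched in Python, using the 1024 convention from the list above (`describe_size` is a made-up helper name):

```python
# Convert a count of bits into bytes, kilobytes and megabytes
# using the 1024-based units listed above.
def describe_size(num_bits):
    num_bytes = num_bits / 8       # 8 bits = 1 byte
    kb = num_bytes / 1024          # 1024 bytes = 1 KB
    mb = kb / 1024                 # 1024 KB = 1 MB
    return num_bytes, kb, mb

print(describe_size(8388608))      # (1048576.0, 1024.0, 1.0)
```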
What is ‘Digital Imaging’?
Digital Imaging is the art and science of capturing, creating and editing digital images.
Capturing digital images from the real world can be done using:
 Digital Cameras (a compact camera or a DSLR = ‘Digital Single Lens Reflex’)
 Camera phones (iPhone, Blackberry etc)
 Digital Camcorders
 Web Cams
 Digital Scanners
What is a ‘Digital Image’?
A digital image is a graphic stored using the binary system. Therefore, every image or photo you view on a
computer is a digital image as opposed to an analogue image (i.e. a printed photo). Most digital images are
raster graphics.
What is a ‘Raster Graphic’?
A raster graphic is also known as a ‘bitmap image’. A bitmap image/raster graphic is made up of pixels. The
word pixel stands for ‘picture element’. A pixel is simply a small square dot. Everything you see on your
computer screen is made up of small square dots/pixels. Today, computer screens can display millions of
pixels, which give us the high quality detail as seen in digital images and digital videos. The resolution on a
typical Apple iMac LCD screen is 1920 pixels wide by 1200 pixels in height. Therefore, by using the basic area formula from maths, we can calculate the actual number of pixels on the screen: 1920 × 1200 = 2,304,000 pixels.
People refer to High Definition (HD) when the resolution is more than the standard-definition (768×576). A
typical resolution of an HD screen is 1280×720 pixels.
Each pixel in a digital image (or raster graphic) displays a specific colour. However, the number of bits that
have been assigned to each pixel dictates the range of colours each pixel can display. In digital imaging, this
is called ‘bit-depth, sometimes referred to as the ‘colour-depth’’.
What is ‘Bit-Depth’?
As an example, let's imagine 1 bit has been assigned to each pixel. This 1 bit can only represent two states (‘1’ or ‘0’). Therefore each pixel can only display one of two colours. This is why bit-depth is commonly referred to as ‘colour-depth’.
Take a look at the Figure to the right. Each square represents a pixel and each pixel can only be black or white because the bit-depth is 1-bit. You could say that the figure displays a map of the bits (you can see where the word ‘bitmap’ comes from). A digital image with a bit-depth of 1-bit is called a bitonal image.
Nevertheless, it is unlikely that the bit-depth for an image will be 1 bit. A gray-scale image usually has a bit-depth of 8 bits and digital colour images usually have a bit-depth of 24 bits. A digital image with a 24-bit bit-depth allows each pixel to represent one of 16.7 million colours. Refer to the following list, which details the range of colours that can be displayed with various different bit-depths.
 1 bit (2^1) = 2 colours
 2 bits (2^2) = 4 colours
 3 bits (2^3) = 8 colours
 4 bits (2^4) = 16 colours
 8 bits (2^8) = 256 colours
 16 bits (2^16) = 65,536 colours
 24 bits (2^24) = 16.7 million colours
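The pattern in this list is simply that the number of colours is 2 raised to the power of the bit-depth, which a short Python loop can reproduce:

```python
# The range of colours doubles with every extra bit: colours = 2 ** depth.
for bit_depth in (1, 2, 3, 4, 8, 16, 24):
    print(f'{bit_depth:>2} bits -> {2 ** bit_depth:,} colours')
```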
What is ‘Resolution’?
Resolution is often referred to as the number of pixels in an image. For example, an image may be 800x600
(800 pixels wide by 600 pixels in height = 480,000 pixels in the image). This kind of resolution is called
‘dimensional resolution’. However, there is also ‘linear resolution’ which is different.
Linear resolution is the number of Pixels Per Inch (ppi). This has also been called Dots Per Inch (dpi).
Interestingly, computer screens have traditionally displayed around 72 dpi. For example, if you opened an image that was 300 dpi on such a computer, the display screen (e.g. an LCD) would only show the image at 72 dpi. However, a printer would proceed to print this image at 300 dpi. Hence a printed image can have a higher dpi resolution when compared to a computer display screen.
At this point you might think the more pixels and the higher the bit-depth an image has, the better.
However, a computer scientist will say that a lot of pixels with a high bit-depth is costly in terms of memory,
file size and bandwidth. You might respond and say that we have high capacity hard drives today, however,
in terms of browsing the Internet, no one wants to wait on a Web page loading because the digital images
are too big in terms of file size.
How do you know the file size of a bitmap image?
The file size of an uncompressed bitmap image can be worked out using the following equation:
File Size in bytes = (width x height x bit depth) / 8
For example, if the dimensions of a bitmap image are 2048 pixels (in width) x 3072 pixels (in height) with a bit depth of 24, the file size can be worked out:
(theWidth x theHeight x bit-depth) / 8 = (2048 x 3072 x 24) / 8 = 18,874,368 bytes.
NOTE: Dividing by 8 simply converts the bits into bytes.
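The equation can be sketched directly in Python (the function name is our own):

```python
# Uncompressed bitmap size in bytes: (width x height x bit depth) / 8.
def bitmap_size_bytes(width, height, bit_depth):
    return (width * height * bit_depth) // 8  # dividing by 8 converts bits to bytes

print(bitmap_size_bytes(2048, 3072, 24))  # 18874368
```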
What is ‘Digital Image Compression’?
A number of computer algorithms have been developed to compress an image and reduce the size of the
actual file. There are, however, two types of compression techniques, i.e. ‘Lossy’ and ‘Lossless’. Lossy is
when the image loses information, detail and quality as a result of the compression technique. Lossless is
where the image is reduced in file size but still retains its original quality when decompressed. The
following image illustrates how the quality of an image degrades the more a lossy compression technique is
applied (i.e. low, medium, high compression).
Example of digital image compression in use:
1. An author of a webpage compresses a digital image (e.g. a picture saved as a .JPG file) and inserts it into a webpage.
2. The webpage along with the compressed digital image is uploaded to a server.
3. A user downloads the webpage along with the compressed digital image.
4. The user's browser decompresses the digital image and displays it.
NOTE: The software routines used to compress and decompress multimedia files are often called ‘codecs’.
Most compression methods involve removing data from the file and restoring it when the file is decompressed. They most likely use one or more of the methods below:
Repetition
In most digital images some pixels are redundant, i.e. the same
information can be listed over and over again. For example, look at the
image to the right, the bottom half of the image contains a lot of white
pixels.
As discussed, each pixel is stored on
a computer as a binary number
representing the colour of that pixel.
For example, if there is a row of 200
pixels where every pixel is white,
the same binary information is stored 200 times. This could instead be
stored as one instruction that basically says 'the next 200 pixels are all
white'. This would dramatically reduce the file size. This compression
technique is called Run Length Encoding (RLE). Therefore, instead of
using one byte per pixel, RLE uses one byte to represent numerous
pixels. This is done by storing ‘key bytes’. The computer will read a
‘key-byte’ which represents a decimal number. Imagine a key-byte
represents the number 27. This means that the colour represented in
the next byte will also be the colour used for the next 27 pixels.
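As an illustration, a minimal run-length encoder and decoder might look like this in Python (a sketch, not a real image codec; each run is stored as a count plus a value, mirroring the ‘key byte’ idea above):

```python
# Minimal run-length encoding over a row of pixel values.
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, p])       # start a new run
    return runs                       # list of [count, value] pairs

def rle_decode(runs):
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

row = ['white'] * 200 + ['black'] * 3
encoded = rle_encode(row)
print(encoded)                        # [[200, 'white'], [3, 'black']]
```

Two hundred identical pixels collapse into a single count-plus-colour pair, which is exactly why the white lower half of an image compresses so well.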
Averaging
Suppose an image contains six adjacent pixels that are slightly different shades of the same colour. For these
six pixels you need to store six different numbers. If you average the colour and replace the different shades
with one colour then you could run a repetition compression scheme to further reduce the file size. This
method is used in the JPEG format, however this method can result in the image looking slightly blocky.
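A toy sketch of the averaging idea, using made-up 8-bit grey levels rather than a real JPEG step:

```python
# Six adjacent pixels with slightly different shades of the same colour.
shades = [200, 201, 199, 202, 198, 200]

# Replace them all with their average; the resulting run of identical
# values then compresses well with run-length encoding.
average = round(sum(shades) / len(shades))
flattened = [average] * len(shades)
print(flattened)  # [200, 200, 200, 200, 200, 200]
```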
Selectivity
In a digital image certain sections stand out more than others, for example the sharp edge of an image. By
being selective and increasing the compression on those sections that do not stand out, a smaller file size
for the perceived quality can be achieved.
Remember image compression schemes can be divided into two categories:
1. Lossy techniques (e.g. JPEG)
A filter function is applied to the image, which reduces the quantity of data, but some of the original data in
the image is lost. This means that the original image cannot be exactly reproduced from the compressed
image.
2. Lossless techniques (e.g. BMP’s RLE, TIFF’s LZW)
Lossless refers to compression schemes that conserve space on disk without sacrificing any data in the
image. The original image is always reproduced.
NOTE: Most images on the web are compressed with the JPEG or the GIF system. Photographic images are most
often compressed as JPEGs, and line drawings are most often compressed with GIF.
Compression Ratio
Definition: The Compression Ratio here is the size of the compressed image divided by the size of the original image. It is basically how much the digital image has actually been compressed.
Compression Ratio = Compressed image size / Uncompressed image size
Some compression schemes yield ratios that are dependent on the image content: a busy image of a crowd of people compresses poorly and may yield a ratio close to 1, whereas an image of a blue sky and ocean compresses well and may yield a low ratio. The lower the compression ratio, the smaller the compressed file size.
What file formats are available to store raster images?
There are a large number of file formats for storing raster images. The table below details just a sample of
the available formats.
| File format                      | Extension used | Bit-depth | Web compatible | Support for Image Compression |
|----------------------------------|----------------|-----------|----------------|-------------------------------|
| Bitmap                           | .bmp           | Variable  | Yes            | No                            |
| Tagged Image File Format         | .tiff          | Variable  | No             | No                            |
| Joint Photographic Experts Group | .jpg           | 24        | Yes            | Yes                           |
| Portable Network Graphics        | .png           | Variable  | Yes            | Yes                           |
| Graphics Interchange Format      | .gif           | 8         | Yes            | Yes                           |
| Photoshop Document               | .psd           | Variable  | No             | No                            |
At the start of each digital image format is a ‘header’ section, which tells the computer what kind of file it is.
For example, 'this is an image file using the JPEG format'. After the header section is the actual data that
represents the image. Without the header section, the computer would not know what format the file is in.
The format of a file refers to the way the numbers are arranged.
The BMP file format
What do we mean when we say file format? Let's use the BMP image file format as an example and examine
it in more detail. The BMP is the native bitmap image file format of the Microsoft Windows environment.
The file must somehow store pixel values - but what else?
Elements of a BMP file specification
Each bitmap file contains a bitmap-file header, a bitmap-information header, a colour table, and an array of
bytes that defines the bitmap bits. Refer to the BMP file specification figure.
Number of bits per pixel
As discussed, the number of bits per pixel is basically the bit-depth. The possible values for the BMP format
are:
 1 bit - (black/white)
 4 bits - (16 colours)
 8 bits - (256 colours)
 24 bits - (16.7 million colours)
In 1-bit mode the colour table has to contain 2
entries (usually white and black). If a bit in the
image data is 0, it points to the first palette entry. If
the bit is 1, it points to the second. In 4-bit mode the
colour table must contain 16 colours. Every byte in
the image data represents two pixels. The byte is
split into the higher 4 bits and the lower 4 bits and
each value points to a palette entry. In 8-bit mode
every byte represents a pixel. The value points to an
entry in the colour table which contains 256 entries.
In 24-bit mode three bytes represent one pixel. The
first byte represents the red part, the second the
green and the third the blue part. In 24-bit mode
there is no need for a palette because every pixel
contains a literal RGB-value, so the palette is
omitted.
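The byte-splitting in 4-bit mode can be sketched in Python (the helper name is our own):

```python
# In 4-bit BMP mode one byte holds two pixels: the high nibble is the
# first pixel's palette index, the low nibble is the second's.
def split_nibbles(byte_value):
    return (byte_value >> 4) & 0x0F, byte_value & 0x0F

print(split_nibbles(0x4A))  # (4, 10) -> palette entries 4 and 10
```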
Colour Look-up Table (CLUT)
The colour look-up table contains as many elements as there are colours in the bitmap. As already identified, the colour look-up table is not present for
bitmaps with a bit-depth of 24-bits. The colours in the table should appear in order of importance. This
helps a display driver render a bitmap on a device that cannot display as many colours as there are in the
bitmap.
Pixel Data
The pixel data, immediately following the colour look-up table, consists of an array of byte values
representing consecutive rows, or "scan lines," of the bitmap. Each scan line consists of consecutive bytes
representing the pixels in the scan line, in left-to-right order. The number of bytes representing a scan line
depends on the colour format and the width, in pixels. The scan lines in the bitmap are stored from bottom
up. This means that the first byte in the array represents the pixels in the lower-left corner of the bitmap
and the last byte represents the pixels in the upper-right corner.
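As a sketch, locating a pixel within such a bottom-up array might look like this (an illustration only; real BMP scan lines are also padded to 4-byte boundaries, which this sketch ignores):

```python
# First byte of pixel (x, y) in a bottom-up 24-bit bitmap, where y = 0
# is the TOP row as seen on screen.
def pixel_offset(x, y, width, height):
    row_from_bottom = height - 1 - y           # scan lines are stored bottom-up
    return (row_from_bottom * width + x) * 3   # 3 bytes per pixel at 24 bits

print(pixel_offset(0, 0, 100, 50))   # top-left pixel -> offset 14700
print(pixel_offset(0, 49, 100, 50))  # bottom-left pixel -> offset 0
```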
What is Digital ‘Sampling’?
A digital image is said to be a sampled representation of a scene. Given that actual scenes have continuous colour, a digital image is a sample of those colours that can be used to represent the scene. Sampling has
also been called digitization. Below is an illustration of a 10x10 grid (100 pixels), which is a low sample of
pixels from a scene.
What are the problems with sampling?
When a scene is sampled, two basic problems arise:
1. Loss of detail. When an item falls between two sample points the item may not be included.
2. Inaccurate representation of the sampled data. When an item is larger than the sample point, this
could lead to inaccurate representation of the sampled data.
What is Anti-aliasing?
This is defined as the removal of jagged edges and the correct representation of sub-pixel detail.
There are hardware solutions to anti-aliasing. For example, ‘Pixel Phasing’ entails finer positioning of the electron beam in a CRT monitor. However, practical and financial considerations make this unsuitable for most applications, so we must look at solving the problem using software. Anti-aliasing in software can take three approaches.
1st approach: Supersampling - Increase the sample points, resulting in a greater number of smaller steps.
This is an expensive solution as doubling the resolution quadruples the memory cost.
2nd approach: Postfiltering - Apply a smoothing filter to the completed image. It removes jagged edges, but as the filter is applied globally to the image, the appearance of other objects is also affected. It also does nothing to resolve the problem of lost detail.
3rd approach: Prefiltering - 2 ideas (Crow 77 & 81) (a) Intensity
related to area - treat a pixel as bounding a finite area and illuminate
according to the amount of the area covered by the object being drawn,
(b) Spread the effect - the effect of objects influencing a pixel is spread
over neighbouring pixels using an intensity matrix centered on the pixel
and decreasing towards the edges.
As an example of anti-aliasing we will use the Intensity related to area
method. Take a look at the donut shape in the figure. We are going to use
a 9x9 grid to sample the shape.
Using the rules of intensity related to area we will look at the coverage in each sample square and illuminate
the area with the corresponding intensity i.e:
• 100% coverage => full intensity
• 75% coverage => 75% intensity
• 50% coverage => 50% intensity
• 25% coverage => 25% intensity
• 0% coverage => 0% intensity (the background colour)
This gives a soft-edging effect...
At this magnification the result looks fairly
unimpressive, but at normal viewing resolutions anti-aliasing can significantly reduce aliasing effects, albeit
at the expense of a certain fuzziness.
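The intensity-related-to-area rule above can be sketched in a few lines of Python (assuming 8-bit intensities, so full intensity is 255):

```python
# Intensity related to area: each sample square is illuminated in
# proportion to how much of it the shape covers.
def pixel_intensity(coverage, full_intensity=255):
    coverage = max(0.0, min(1.0, coverage))  # clamp to the 0%-100% range
    return round(coverage * full_intensity)

for cover in (1.0, 0.75, 0.5, 0.25, 0.0):
    print(f'{cover:.0%} coverage -> intensity {pixel_intensity(cover)}')
```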
Digital Colour Theory
What is colour theory?
It is the theory behind colour mixing and colour combination. Colour mixing is a process where particular
colours are combined to create other colours. This can simply be done using physical ink or pigments.
However, colour mixing in digital technology (televisions, data projectors, LCD and Plasma screens etc) is
done using coloured beams of light as opposed to physical ink.
How do we humans perceive colour?
Rods and cones of the human eye! Rods are sensitive to light, whereas cones are more sensitive to colour. The 6 to 7 million cones in the human eye can be divided into ‘red’ cones (64%), ‘green’ cones (32%), and ‘blue’ cones (2%).
How do we create colours?
From school we should know that by combining primary colours, a large number of other colours can be created (these have been called secondary and tertiary colours etc.). However, usually only three primary colours are needed to produce a large range of other colours. This is because human colour vision is said to be ‘trichromatic’. Remember the three types of cones in the human eye.
The primary colours chosen and the technique used to create other colours is called a colour model. There
are three popular colour models called RYB, RGB and CMYK. Red, Yellow and Blue (RYB) are the primary
colours we used at School to mix and create other colours. However, this RYB colour model cannot produce
as many colours as we were led to believe and as a result, this colour model has not been implemented in
digital technology.
What is CMYK?
CMYK stands for Cyan, Magenta, Yellow and Key (or blacK). These CMYK colours
are actual inks that are mixed at different levels to create a large number of
different colours. As a result, CMYK is the colour model used by most modern
printers. Yes, that’s why printers often need four ink cartridges (one for each
colour). The ‘K’ in CMYK stands for ‘Key’, which is shorthand for ‘key printing
plate’. To put it simply, this is black ink.
CMYK is known as a ‘subtractive colour model’. This is because the model
subtracts from white light (white paper) and moves towards black by mixing the
three colours - Cyan, Magenta and Yellow. When these three colours are equally
mixed together, the result is a black colour and when these colours are totally
subtracted, the result is white. This is because white is usually the background colour of the canvas (e.g. the
actual colour of the paper).
A person might ask, if CMY can produce black, why do we need the extra black ink (the ‘K’ in CMYK)?
Although CMY can create black, it is often a muddy black (almost a brown-like colour) and because it is
expensive to create black using the three inks (CMY), it makes sense to have dedicated black ink.
NOTE: When black is created using CMY, it is called ‘composite black’ or ‘process black’, and when black is
created using the dedicated black ink, this black is called ‘rich black’.
What is RGB?
The acronym RGB stands for Red, Green and Blue. Again, these colours can
be mixed to create a large number of other colours. Notice the similarity
between RGB and the three various kinds of cones in the human eye.
Unlike CMYK, the RGB colour model is used to create colour on electronic
and digital devices such as televisions, computer screens, LCDs, CRTs, LED
screens, Plasma screens, data projectors and mobile phones. Whereas the CMYK colour model uses actual ink to create other colours, the RGB model combines three differently coloured beams of light (i.e. a Red beam, a Green beam and a Blue beam) to create additional colours.
The RGB colour model has been called an ‘additive colour model’. This is because coloured light beams are
added together to create additional colours. When the full intensity of the Red, Green and Blue light beams
are combined, the result is white. Likewise, when the RGB colours are completely subtracted, the result is
black (because the absence of light is black). Notice that the RGB model is the opposite of the CMYK model,
where the full intensity of CMY creates black and the absence of CMY creates white.
What does RGB have to do with Digital Imaging?
Each pixel in most digital images gets its colour from the RGB model. In fact, each pixel is digitally stored
using three separate values, each value representing the intensity of Red, Green and Blue respectively. In
Photoshop, these are called the Red, Green and Blue Channels. Interestingly, if you could zoom into a
computer screen at the pixel level, you would see that each pixel on the physical screen is made up of three
close but separate RGB light sources. However, from our normal viewing distance, these separate light
sources cannot be seen, which gives us the illusion of solid color.
With respect to Chapter 1, when we say that the bit-depth of a colour image is 24-bits, this means each pixel
assigns 8-bits to represent the Red intensity, 8-bits for the Green intensity and 8 bits to represent the Blue
intensity (8+8+8 = 24 bits). As you know from binary, 8 bits can be used to represent 256 distinct states (this is calculated by 2 to the power of 8: 2^8 = 2x2x2x2x2x2x2x2 = 256). Therefore, each pixel can represent 256
intensities of Red, 256 intensities of Green and 256 intensities of Blue. Think about the combinations and
colours you can create using 256 reds, 256 greens and 256 blues. Well, you can calculate this by
256x256x256 = 16.7 million colours. A bit-depth of 24 bits is often referred to as ‘true colour’.
How do we numerically represent RGB colours?
You will find that there are a number of ways to specify an RGB colour. The following is a table of the
common approaches used to denote RGB colours in computing.
The arithmetic approach uses float values
between 0.0 and 1.0 to represent the
intensity of each the RGB colours. In this
approach, (1.0, 0.0, 0.0) would produce a
solid red given the red channel is set to
1.0 whilst the green and blue channels
have been set to 0.0.
The percentage approach is similar to the
arithmetic approach. In this approach,
each RGB channel is represented by a
percentage. For example, (0%, 100%, 0%)
would produce green.
The digital 8 bit per colour channel (bit-depth = 24-bits) approach has been adopted by many image
processing applications such as Adobe Photoshop. This is where each of the RGB colour channels are
represented using 256 values (it's not 255 values because you need to count 0 as a value, i.e. 0-255 = 256 values). This approach has been adopted because 256 values can be represented using 8 bits. Although
these numeric representations may seem intuitive, you will find that RGB colour values are very often
represented using hexadecimal notation.
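The three numeric notations can be related with a small Python sketch (solid red expressed in each notation):

```python
# The same solid red in the three notations described above.
arithmetic = (1.0, 0.0, 0.0)                              # floats 0.0-1.0
percentage = tuple(round(c * 100) for c in arithmetic)    # 0-100 per channel
digital    = tuple(round(c * 255) for c in arithmetic)    # 0-255 per channel

print(percentage)  # (100, 0, 0)
print(digital)     # (255, 0, 0)
```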
What is hexadecimal?
Just the way we use a base-10 number system and the binary number system is base-2, hexadecimal is a
base-16 number system. To be base-16, the hexadecimal system uses both numbers and letters to represent 16 different states. Therefore one hexadecimal (or hex) digit can have one of the following values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. Before we go any further and explain the hexadecimal system, try understanding the
following list. If you understand the following, you understand the various number systems we have
mentioned.
 Using the Base-2 system (binary), two bits can represent 4 different states (2^2), i.e. 10, 01, 00, 11
 Using the everyday Base-10 system, two digits can represent 100 different states (10^2), i.e. from 00 to 99
 Using the hexadecimal Base-16 system, two hex digits can represent 256 different states (16^2), i.e. from 00 to FF
 Using the Base-2 system, three bits can represent 8 different states (2^3)
 Using the everyday Base-10 system, three digits can represent 1000 different states (10^3), i.e. from 000 to 999
 Using the hexadecimal Base-16 system, three hex digits can represent 4096 different states (16^3), i.e. from 000 to FFF
and so on…
Why do we see RGB colour values represented in hexadecimal? The reason is that each hexadecimal digit is equivalent to 4 binary bits (4 bits is also called a nibble). Therefore two hexadecimal digits are equivalent to 8 binary bits (or one byte). Given that 8 bits are assigned to each of the RGB colour channels, two hexadecimal digits can be used to represent each of the three RGB values; as a result, six hexadecimal digits can represent the exact RGB colour. The hexadecimal system is therefore more user-friendly when compared to the binary
system. For example, the colour Red can be represented in binary as ‘111111110000000000000000’ or in
hexadecimal as ‘FF0000’.
NOTE: To avoid confusing hexadecimals with the other number systems, hexadecimal numbers are normally
prefixed with ‘0x’ or ‘#’ to indicate that the number is actually hexadecimal and not for example decimal. For
example, the hexadecimal number #FF0000 represents the colour Red given FF means put the Red channel to
full intensity, the following 00 means put the Green channel to zero intensity and likewise the following Blue
channel is put to 00.
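A small Python sketch of the hex-to-RGB relationship described in the NOTE (the helper names are our own):

```python
# Two hex digits per channel: '#FF0000' -> red at full intensity.
def hex_to_rgb(hex_colour):
    h = hex_colour.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(r, g, b):
    return f'#{r:02X}{g:02X}{b:02X}'

print(hex_to_rgb('#FF0000'))   # (255, 0, 0)
print(rgb_to_hex(255, 0, 0))   # #FF0000
```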
The following Figure shows the colour picker feature in Photoshop. You will recognize the hexadecimal and
the RGB options that can be used to specify a colour; however, you may not be familiar with the Hue Saturation Brightness (HSB) model that can also be used to specify a digital colour.
What is HSB?
Like RGB, Hue Saturation
Brightness (HSB) is another
colour model used to specify a
colour in the digital domain.
Hue defines the tint of the
colour, Saturation defines the
intensity of the colour and
Brightness obviously defines
the brightness or lightness
(also called value) of the
colour.
The HSB colour model is based on a cylinder (refer to the image on the left). The circle at the top end of the cylinder has been called the colour wheel (refer to the image on the right). As you can see from the colour wheel, the Hue is represented in degrees, for example, 0 degrees = red and 60 degrees = yellow. Therefore, when using Photoshop the Hue can be specified by a value between 0-360 degrees. As illustrated in the cylinder, the Saturation is defined by moving between the edge and the centre of the circle. When at the centre of the circle, the Hue is said to be fully desaturated. Defining the brightness has also been illustrated in the cylinder diagram.
Unlike the Hue, both Saturation and Brightness are defined using percentages. For example, 100%
saturation is full intensity in terms of saturation and 100% Brightness is full brightness.
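Python's standard `colorsys` module implements this model (it calls it HSV); a small sketch converting Photoshop-style HSB values to 8-bit RGB:

```python
import colorsys

# HSB (= HSV): hue in degrees (0-360), saturation and brightness as
# percentages, as in the Photoshop colour picker.
def hsb_to_rgb(hue_deg, sat_pct, bri_pct):
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360, sat_pct / 100, bri_pct / 100)
    return round(r * 255), round(g * 255), round(b * 255)

print(hsb_to_rgb(0, 100, 100))    # (255, 0, 0) -> red
print(hsb_to_rgb(60, 100, 100))   # (255, 255, 0) -> yellow
```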
What is a Colour Gamut?
We know that computer monitors and screens use the RGB colour model and most printers use the CMYK colour model. However, each device uses these models in a slightly different way and is said to have its own colour profile. A colour gamut is simply the range of colours a device can produce. The Figure illustrates a number of common colour gamuts. You can see the colour gamut for human vision in the background. This is called the CIE colour space, which was created by the International Commission on Illumination (CIE) in 1931. Notice there is no RGB gamut that can represent the entire colour spectrum in human vision. Also, notice there are different RGB profiles with different colour gamuts. Most digital images use the sRGB colour profile, whilst professional photographers like to use the ProPhoto RGB colour profile because they want to have more colour variation. Notice the CMYK colour gamut in the diagram; this CMYK gamut is actually for an Epson printer. See how small a typical CMYK colour gamut is in comparison to the RGB gamuts. This can be a problem for many professionals, since they use the RGB model in Photoshop and then print their photos using the CMYK colour model.
What is Gamut Conversion?
Gamut conversion is where an image is converted from one colour space into another. The most common
being from an RGB colour profile to a CMYK colour profile. When an image is scanned, the colour in the
image is converted to the RGB colour space (this is called ‘quantization’) and when the image is printed, the
RGB colours are converted to CMYK. As we saw from the previous Figure, this is a problem because not all
RGB colours can be reproduced in CMYK. Ideally, the gamut conversion is done using colour profiles, i.e. the
proper colour profile for the image (e.g. sRGB) and the proper CMYK profile for the printer. Nevertheless
when you print an RGB image using a CMYK printer, the printed image is only an approximation. This is
why knowledge of colour and colour management is important, particularly in the digital imaging industry!
If you want to find out more about digital colour theory, you should research how a Super-Video Graphic
Array (SVGA) works and how a Liquid Crystal Display (LCD) works. Or even go back in history and look at
how an old Cathode Ray-Tube (CRT) works.