NEHRU ARTS AND SCIENCE COLLEGE
DEPARTMENT OF ELECTRONICS AND COMMUNICATION SYSTEM
E- LEARNING MATERIAL
CLASS: II B.Sc. ELECTRONICS AND COMMUNICATION SYSTEM
SUBJECT: TELEVISION ENGINEERING
BATCH: 2010-2013
SEMESTER: IV
STAFF NAME: S. VENKATESAN
TELEVISION ENGINEERING
COLOR TELEVISION
Color television is part of the history of television, the technology of television and practices
associated with television's transmission of moving images in color video. In its most basic form,
a color broadcast can be created by broadcasting three monochrome images, one each in the
three colors of red, green and blue (RGB). When displayed together or in fast succession, these
images will blend together to produce a full color image as seen by the viewer.
One of the great technical challenges of introducing color broadcast television was the desire to
reduce the high bandwidth, three times that of the existing black-and-white (B&W) standards,
into something more acceptable that would not use up most of the available radio spectrum.
After considerable research, the National Television System Committee introduced the NTSC
system, which encoded the color information separately from the brightness and greatly
reduced the resolution of the color information in order to conserve bandwidth. The brightness
image remained compatible with existing B&W television sets, at slightly reduced resolution,
while color televisions could decode the extra information in the signal and produce a
limited-color display. The higher resolution B&W and lower resolution color images combine in the eye
to produce a seemingly high resolution color image. The NTSC standard represents a major
technical achievement.
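The separation of brightness from color can be illustrated numerically. The sketch below is a minimal Python illustration: it forms a luminance signal as a weighted sum of R, G and B plus two color-difference signals. The classic NTSC luma weights (0.299, 0.587, 0.114) are used; the function name is our own, not part of any standard.

```python
# Minimal sketch: separating brightness (luminance) from color information.
# The luma weights 0.299/0.587/0.114 are the classic NTSC values; the helper
# name is illustrative, not part of any standard API.

def encode_luma_chroma(r, g, b):
    """Return (Y, B-Y, R-Y) for normalized RGB values in the range 0..1."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance: compatible with B&W sets
    return y, b - y, r - y                  # color-difference signals carry the color

if __name__ == "__main__":
    # A saturated orange: the luminance alone drives a B&W receiver, while the
    # two difference signals describe the hue and saturation for a color set.
    y, b_y, r_y = encode_luma_chroma(1.0, 0.5, 0.0)
    print(f"Y = {y:.3f}, B-Y = {b_y:.3f}, R-Y = {r_y:.3f}")
```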
Although introduced in the U.S. in 1953,[2] only a few years after black-and-white televisions
had been standardized there, high prices and a lack of broadcast material greatly slowed its
acceptance in the marketplace. Although the first colorcast, the Rose Parade, occurred that
January, it was not until the late 1960s that color sets started selling in large
numbers, due in some part to the introduction of GE's Porta-Color set in the spring of 1966
along with the first all-color primetime season beginning that fall.
By the early 1970s though, color sets had become standard, and the completion of total
colorcasting was achieved when the last of the daytime programs converted to color and joined
with primetime in the first all-color season in 1972.
Color broadcasting in Europe was not standardized on the PAL format until the 1960s, and
broadcasts did not start until 1967. By this point many of the technical problems in the early sets
had been worked out, and the spread of color sets in Europe was fairly rapid.
By the mid-1970s, the only stations broadcasting in black-and-white were a few high-numbered
UHF stations in small markets, and a handful of low-power repeater stations in even smaller
markets such as vacation spots. By 1979, even the last of these had converted to color and by the
early 1980s B&W sets had been pushed into niche markets, notably low-power uses, small
portable sets, use as video monitor screens in lower-cost consumer equipment, and the
television production and post-production industry.
UNIT I
TELEVISION STANDARDS
Geometric form & Aspect ratio of the picture – Vertical scanning – Horizontal scanning – Number of
scanning lines – Interlaced scanning – Vertical and horizontal resolution – Negative modulation – Complete channel
bandwidth – Reception of VSB signals – Allocation of frequency band for TV signal transmission – Standards of
TV systems – Complete channel bandwidth – Composite video signal – CCIR-B standards – Camera tubes.
SECTION A
1) Explain raster scan?
A raster scan, or raster scanning, is the rectangular pattern of image capture and reconstruction
in television. By analogy, the term is used for raster graphics, the pattern of image storage and
transmission used in most computer bitmap image systems. The word raster comes from the
Latin word rastrum (a rake), which is derived from radere (to scrape); see also rastrum, an
instrument for drawing musical staff lines
2) Explain scan lines?
In a raster scan, an image is subdivided into a sequence of (usually horizontal) strips known as
"scan lines". Each scan line can be transmitted in the form of an analog signal as it is read from
the video source, as in television systems, or can be further divided into discrete pixels for
processing in a computer system. This ordering of pixels by rows is known as raster order, or
raster scan order. Analog television has discrete scan lines (discrete vertical resolution), but does
not have discrete pixels (horizontal resolution) – it instead varies the signal continuously over the
scan line. Thus, while the number of scan lines (vertical resolution) is unambiguously defined, the
horizontal resolution is more approximate, according to how quickly the signal can change over
the course of the scan line.
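As a small illustration of raster (row-by-row) order, the hypothetical Python snippet below walks a tiny bitmap one scan line at a time; the image size and pixel values are made up for the example.

```python
# Minimal sketch of raster-scan order: pixels are visited row by row,
# left to right within each row (row-major order).

WIDTH, HEIGHT = 4, 3                      # a tiny hypothetical bitmap
pixels = list(range(WIDTH * HEIGHT))      # dummy pixel values 0..11

for y in range(HEIGHT):                   # one scan line at a time
    line = [pixels[y * WIDTH + x] for x in range(WIDTH)]
    print(f"scan line {y}: {line}")
```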
3)Define negative modulation?
Modulation in which an increase in brightness corresponds to a decrease in amplitude-modulated
transmitter power; used in United States television transmitters and in some facsimile systems.
Modulation in which an increase in brightness corresponds to a decrease in the frequency of a
frequency-modulated facsimile transmitter. Also known as negative transmission.
4)Explain Composite video?
Composite video is the format of an analog television (picture only) signal before it is combined
with a sound signal and modulated onto an RF carrier. In contrast to component video (YPbPr) it
contains all required video information, including colors in a single line-level signal. Like
component video, composite-video cables do not carry audio and are often paired with audio
cables (see RCA connector).
5) Explain signal modulation?
Composite video can easily be directed to any broadcast channel simply by modulating the
proper RF carrier frequency with it. Most home analog video equipment records a signal in
(roughly) composite format: LaserDiscs store a true composite signal, while consumer videotape
formats (including VHS and Betamax) and lesser commercial and industrial tape formats
(including U-Matic) use modified composite signals (generally known as "color-under").
On playback, these devices often give the user the option of outputting the baseband signal
or modulating it onto a VHF or UHF frequency compatible with a TV tuner (i.e. appearing on
a selected TV channel). The professional uncompressed digital videocassette format known as
D-2 recorded and reproduced standard NTSC composite video signals directly, using PCM
encoding of the analog signal on the magnetic tape.
SECTION B
1)Define Interlaced scanning?
To obtain flicker-free pictures, analog CRT TVs write only odd-numbered scan lines on the
first vertical scan; then, the even-numbered lines follow, placed ("interlaced") between the
odd-numbered lines. This is called interlaced scanning. (In this case, positioning the even-numbered lines does require precise position control; in old analog TVs, trimming the
Vertical Hold adjustment made the scan lines space properly. If slightly misadjusted, the scan
lines would appear in pairs, with spaces between.) Modern high-definition TV displays use
data formats like progressive scan in computer monitors (such as "1080p", 1080 lines,
progressive), or interlaced (such as "1080i").
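A tiny sketch of the interlace idea, for illustration only (the line count and field order are simplified): the frame is drawn as two fields, odd-numbered lines first, then the even-numbered lines placed between them.

```python
# Minimal sketch of interlaced scanning: the display is drawn as two fields,
# first the odd-numbered lines, then the even-numbered lines between them.

TOTAL_LINES = 11                                  # small, hypothetical line count

odd_field  = list(range(1, TOTAL_LINES + 1, 2))   # lines 1, 3, 5, ...
even_field = list(range(2, TOTAL_LINES + 1, 2))   # lines 2, 4, 6, ...

print("field 1 (odd lines): ", odd_field)
print("field 2 (even lines):", even_field)
# Displayed in quick succession, the two fields interleave into one full frame.
```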
Raster scans have been used in (naval gun) fire-control radar, although they were typically
narrow rectangles. They were used in pairs (for bearing, and for elevation). In each display, one
axis was angular offset from the line of sight, and the other, range. Radar returns brightened the
video. Search and weather radars have a circular display (Plan Position Indicator, PPI) that
covers a round screen, but this is not technically a raster. Analog PPIs have sweeps that move
outward from the center, and the angle of the sweep matches antenna rotation, up being north, or
the bow of the ship.
2)Define Standard-definition television?
" Standard-definition television (SDTV) is a television system that uses a resolution that is not
considered to be either enhanced-definition television (EDTV) or high-definition television (HDTV). The
term is usually used in reference to digital television, in particular when broadcasting at the same (or
similar) resolution as analog systems. The two common SDTV signal types are 576i, derived from the
European-developed PAL and SECAM systems with 576 interlaced lines of resolution; and 480i, based
on the American NTSC system.
In the USA, digital SDTV is broadcast in the same 4:3 aspect ratio as NTSC signals.[1] However,
in areas that used the PAL or SECAM analog standards, standard-definition television is now
usually shown with a 16:9 aspect ratio, with the transition occurring between the mid-1990s and
mid-2000s. Older programs with a 4:3 aspect ratio are shown in 4:3.
Standards that support digital SDTV broadcast include DVB, ATSC Standards and ISDB. The
last two were originally developed for HDTV, but are more often used for their ability to deliver
multiple SD video and audio streams via multiplexing, than for using the entire bitstream for one
HD channel.
In ATSC Standards, SDTV can be broadcast in 720 pixels × 480 lines with 16:9 aspect ratio
(40:33 rectangular (unsquare) pixel), 720 pixels × 480 lines with 4:3 aspect ratio (10:11
rectangular pixel) or 640 pixels × 480 lines with 4:3 ratio. The refresh rate can be 24, 30 or 60
frames per second.
Digital SDTV in 4:3 aspect ratio has the same appearance as regular analog TV (NTSC, PAL,
SECAM) without the ghosting, snowy images and white noise. However, if the reception is poor,
one may encounter various other artifacts such as blockiness and stuttering.

3)Describe the theory of Pixel aspect ratio?
When standard-definition television signals are transmitted in digital form, the pixels have a
rectangular shape, as opposed to the square pixels that are used in modern computer monitors and
modern implementations of HDTV. The table below summarizes pixel aspect ratios for various
kinds of SDTV video signal. Note that the actual image (be it 4:3 or 16:9) is always contained in
the center 704 horizontal pixels of the digital frame, regardless of how many horizontal pixels
(704 or 720) are used. In case of digital video signal having 720 horizontal pixels, only the center
704 pixels contain actual 4:3 or 16:9 image, and the 8 pixel wide stripes from either side are
called nominal analogue blanking and should be discarded before displaying the image. Nominal
analogue blanking should not be confused with overscan, as overscan areas are part of the actual
4:3 or 16:9 image.
Video Format    Resolution    Pixel Aspect Ratio    Equivalent square-pixel resolution
PAL 4:3         704×576       12:11                 768×576
PAL 4:3         720×576       12:11                 786×576
PAL 16:9        704×576       16:11                 1024×576
PAL 16:9        720×576       16:11                 1048×576
NTSC 4:3        704×480       10:11                 640×480
NTSC 4:3        720×480       10:11                 654×480
NTSC 16:9       704×480       40:33                 854×480
NTSC 16:9       720×480       40:33                 872×480
The pixel aspect ratio is always the same for corresponding 720 and 704 pixel resolutions
because the center part of a 720 pixels wide image is equal to the corresponding 704 pixels wide
image.
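The "equivalent square-pixel resolution" column in the table above can be reproduced by stretching the stored width by the pixel aspect ratio. The sketch below is illustrative only; rounding to the nearest even number of pixels is an observation that happens to match the table, not a formal rule.

```python
from fractions import Fraction

# Reproduce the "equivalent square-pixel resolution" column of the table above:
# the stored width is stretched by the pixel aspect ratio and rounded to the
# nearest even number of pixels (video frame widths are conventionally even).
formats = [
    ("PAL 4:3",   704, 576, Fraction(12, 11)), ("PAL 4:3",   720, 576, Fraction(12, 11)),
    ("PAL 16:9",  704, 576, Fraction(16, 11)), ("PAL 16:9",  720, 576, Fraction(16, 11)),
    ("NTSC 4:3",  704, 480, Fraction(10, 11)), ("NTSC 4:3",  720, 480, Fraction(10, 11)),
    ("NTSC 16:9", 704, 480, Fraction(40, 33)), ("NTSC 16:9", 720, 480, Fraction(40, 33)),
]

for name, width, height, par in formats:
    square_width = 2 * round(width * par / 2)      # e.g. 704 * 40/33 = 853.3 -> 854
    print(f"{name}: {width}x{height} (PAR {par}) -> {square_width}x{height}")
```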
SECTION C
1)Explain television?
Television (TV) is a telecommunication medium for transmitting and receiving moving images
that can be monochrome (black-and-white) or colored, with accompanying sound. "Television"
may also refer specifically to a television set, television programming, or television transmission.
The etymology of the word has a mixed Latin and Greek origin, meaning "far sight": Greek tele
(τῆλε), far, and Latin visio, sight (from video, vis- to see, or to view in the first person).
Commercially available since the late 1920s, the television set has become commonplace in
homes, businesses and institutions, particularly as a vehicle for advertising, a source of
entertainment, and news. Since the 1970s the availability of video cassettes, laserdiscs, DVDs
and now Blu-ray Discs has resulted in the television set frequently being used for viewing
recorded as well as broadcast material. In recent years Internet television services, e.g. iPlayer
and Hulu, have made television content available via the Internet.
Although other forms such as closed-circuit television (CCTV) are in use, the most common
usage of the medium is for broadcast television, which was modeled on the existing radio
broadcasting systems developed in the 1920s, and uses high-powered radio-frequency
transmitters to broadcast the television signal to individual TV receivers.
The broadcast television system is typically disseminated via radio transmissions on designated
channels in the 54–890 MHz frequency band.[1] Signals are now often transmitted with stereo
and/or surround sound in many countries. Until the 2000s broadcast TV programs were generally
transmitted as an analog television signal, but in 2009 the USA went almost exclusively digital.
A standard television set comprises multiple internal electronic circuits, including those for
receiving and decoding broadcast signals. A visual display device which lacks a tuner is properly
called a video monitor, rather than a television. A television system may use different technical
standards such as digital television (DTV) and high-definition television (HDTV). Television
systems are also used for surveillance, industrial process control, and guiding of weapons, in
places where direct observation is difficult or dangerous.
Amateur television (ham TV or ATV) is also used for non-commercial experimentation, pleasure
and public service events by amateur radio operators. Ham TV stations were on the air in many
cities before commercial TV stations came on the air.
2)Briefly explain about Horizontal Resolution (NTSC video)?
A PC screen may have a resolution of 800 × 600, which means 800 pixels (dots) going across
horizontally (width) and 600 pixels going down vertically (height).
TV engineers, however, only speak about TV resolutions in terms of the number of lines going
across (resolution width), not down vertically (resolution height). Why? Because all TVs have
exactly the same number of lines going down (resolution height), but not all TVs have the same
number of discernible dots going across. For example, an American TV picture will always scan
(project) 480 horizontal lines (resolution height), but the number of lines going across
(resolution width) will always depend on the quality of the TV and the signal broadcast to it.
A VHS video will only offer about 210 dots across, while a TV station may offer about 330 dots
across.
TV engineers use a test pattern to determine a TV's resolution. This test pattern consists of many
closely spaced vertical lines.
The engineer increases the number of lines until it is impossible to see any individual lines
because they have all blurred into each other. When the lines cannot be seen any more, the
maximum resolution of the TV has been reached. These test lines are stacked from left to right.
Because the lines are stacked from left to right, the number of discernible lines across on the TV
screen is called the horizontal resolution.
So when we say a TV has 485 lines, we mean it has a maximum resolution of 485 dots across.
But to say a TV has 485 dots across is never correct, since it will always be less unless the signal
quality is perfect. If we take into account signal loss and low broadcast quality, we are looking
at something like 330 lines.
TV screens have an aspect ratio of 1.33:1 and are slightly oblong.
Video Format              Horizontal Resolution (resolution width)
Standard VHS              210 lines
Hi8                       400 lines
Laserdisc                 425 lines
DV                        500 lines
DVD                       540 lines (some actual digital frame sizes: 720(w)×480(h), 704(w)×480, or 352(w)×480)
Typically, for actual NTSC signals, 485 lines are used for displaying the picture (because real
NTSC signals are interlaced, that equals 242.5 lines for each of the two fields making up the
frame).
"We suggest capturing at a resolution that most closely matches the resolution of the video
source.
For video sources from VHS, Hi8, or Laserdisc, SIF resolution of 352x240 will give good
results.
For better sources such as a direct broadcast feed, DV, or DVD video, Half D1 resolution of
352x480**is fine
3)Describe the theory of Display resolution?
(Figure omitted: a chart of the most common display resolutions, grouped by display aspect ratio.)
The display resolution of a digital television or display device is the number of distinct pixels in
each dimension that can be displayed. It can be an ambiguous term, especially as the displayed
resolution is controlled by different factors in cathode ray tube (CRT), flat panel, or projection
displays using fixed picture-element (pixel) arrays.
It is usually quoted as width × height, with the units in pixels: for example, "1024x768" means
the width is 1024 pixels and the height is 768 pixels. This example would normally be spoken as
"ten twenty-four by seven sixty-eight" or "ten twenty-four by seven six eight".
One use of the term “display resolution” applies to fixed-pixel-array displays such as plasma
display panels (PDPs), liquid crystal displays (LCDs), digital light processing (DLP) projectors,
or similar technologies, and is simply the physical number of columns and rows of pixels
creating the display (e.g., 1920×1080). A consequence of having a fixed grid display is that, for
multi-format video inputs, all displays need a "scaling engine" (a digital video processor that
includes a memory array) to match the incoming picture format to the display.
Note that the use of the word resolution here is a misnomer, though common. The term “display
resolution” is usually used to mean pixel dimensions, the number of pixels in each dimension
(e.g., 1920×1080), which does not tell anything about the resolution of the display on which the
image is actually formed: resolution properly refers to the pixel density, the number of pixels per
unit distance or area, not total number of pixels. In digital measurement, the display resolution
would be given in pixels per inch. In analog measurement, if the screen is 10 inches high, then
the horizontal resolution is measured across a square 10 inches wide. This is typically stated as
"lines of horizontal resolution, per picture height"; for example, analog NTSC TVs can
typically display 486 lines of "per picture height" horizontal resolution, which is equivalent to
648 total lines of actual picture information from left edge to right edge. This would give
NTSC TV a display resolution of 648×486 in actual lines of picture information, but in "per picture
height" terms a display resolution of 640×480.
4)Explain about 8VSB?
8VSB is the modulation method used for broadcast in the ATSC digital television standard.
ATSC and 8VSB modulation is used primarily in North America; in contrast, the DVB-T
standard uses COFDM.
A modulation method specifies how the radio signal fluctuates to convey information. ATSC and
DVB-T specify the modulation used for over-the-air digital television; by comparison, QAM is
the modulation method used for cable. The specifications for a cable-ready television, then,
might state that it supports 8VSB (for broadcast TV) and QAM (for cable TV).
8VSB is an 8-level vestigial sideband modulation. In essence, it converts a binary stream into an
octal representation by amplitude modulating a sinusoidal carrier to one of eight levels. 8VSB is
capable of transmitting three bits (2³ = 8) per symbol; in ATSC, each symbol includes two bits
from the MPEG transport stream which are trellis modulated to produce a three-bit figure. The
resulting signal is then band-pass filtered with a Nyquist filter to remove redundancies in the side
lobes, and then shifted up to the broadcast frequency.[1]
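A heavily simplified sketch of the idea: three-bit symbols are mapped onto eight amplitude levels. Real ATSC 8VSB derives the third bit from a trellis encoder and then pulse-shapes the levels with a Nyquist (root-raised-cosine) filter; the direct mapping and the level set (±1, ±3, ±5, ±7) below are for illustration only.

```python
# Simplified illustration of 8VSB symbol mapping: each 3-bit group selects
# one of eight amplitude levels. The real ATSC system produces the third bit
# with a trellis encoder and applies Nyquist pulse shaping, omitted here.

LEVELS = [-7, -5, -3, -1, 1, 3, 5, 7]      # eight-level constellation

def bits_to_symbols(bits):
    """Group a bit list into 3-bit symbols and map them to amplitude levels."""
    symbols = []
    for i in range(0, len(bits) - 2, 3):
        value = bits[i] * 4 + bits[i + 1] * 2 + bits[i + 2]   # 0..7
        symbols.append(LEVELS[value])
    return symbols

print(bits_to_symbols([1, 0, 1,  0, 0, 0,  1, 1, 1]))   # -> [3, -7, 7]
```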
Modulation Technique
Vestigial sideband modulation (VSB) is a modulation method which attempts to eliminate the
spectral redundancy of pulse amplitude modulated (PAM) signals. It is well known that
modulating a real data sequence by a cosine carrier results in a symmetric double-sided passband
spectrum. The symmetry implies that one of the sidebands is redundant, and thus removing one
sideband with an ideal brickwall filter should preserve the ability for perfect demodulation. As
brickwall filters with zero transition bands cannot be physically realized, the filtering actually
implemented in attempting such a scheme leaves a vestige of the redundant sideband, hence the
name “VSB”.
Throughput
In the 6 MHz (megahertz) channel used for broadcast ATSC, 8VSB carries a symbol rate of
10.76 Mbaud, a gross bit rate of 32 Mbit/s, and a net bit rate of 19.39 Mbit/s of usable data. The
net bit rate is lower due to the addition of forward error correction codes. The eight signal levels
are selected with the use of a trellis encoder. There are also similar modulations 2VSB, 4VSB,
and 16VSB. 16VSB was notably intended to be used for ATSC digital cable, but quadrature
amplitude modulation (QAM) has become the de facto industry standard instead.
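The gross and net figures quoted above are consistent with a quick calculation; the sketch below simply reproduces the arithmetic (symbol rate × 3 bits per symbol, with FEC and sync overhead accounting for the difference).

```python
# Quick arithmetic check of the ATSC 8VSB throughput figures quoted above.

symbol_rate = 10.76e6            # symbols per second (Mbaud)
bits_per_symbol = 3              # 8 levels -> 3 bits per symbol

gross = symbol_rate * bits_per_symbol
net = 19.39e6                    # usable payload after FEC and sync overhead

print(f"gross bit rate = {gross / 1e6:.1f} Mbit/s")          # about 32.3 Mbit/s
print(f"FEC/sync overhead = {100 * (1 - net / gross):.0f}%") # about 40%
```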

power saving advantages
A significant advantage of 8VSB for broadcasters is that it requires much less power to cover an
area comparable to that of the earlier NTSC system, and it is reportedly better at this than the
most common alternative system, COFDM. Part of the advantage is the lower peak-to-average
power ratio needed compared to COFDM. An 8VSB transmitter needs to have a peak power
capability of 6 dB (four times) its average power. 8VSB is also more resistant to impulse noise.
Some stations can cover the same area while transmitting at an effective radiated power of
approximately 25% of analog broadcast power. While NTSC and most other analog television
systems also use a vestigial sideband technique, the unwanted sideband is filtered much more
effectively in ATSC 8VSB transmissions. 8VSB uses a Nyquist filter to achieve this. Reed–
Solomon error correction is the primary system used to retain data integrity.
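The "6 dB (four times)" peak-to-average figure mentioned earlier in this answer follows directly from the decibel definition for power ratios; a tiny check, for illustration:

```python
# Check of the peak-to-average power figure: 6 dB corresponds to roughly 4x.

def db_to_power_ratio(db):
    return 10 ** (db / 10)

print(db_to_power_ratio(6))    # about 3.98, i.e. roughly four times the average power
```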
In the summer of 2005, the ATSC published standards for Enhanced VSB, or E-VSB.[1] Using
forward error correction, the E-VSB standard will allow DTV reception on low power handheld
receivers with smaller antennas in much the same way DVB-H does in Europe, but still using
8VSB transmission.
Disputes over ATSC's use
For some period of time, there had been a continuing lobby for changing the modulation for
ATSC to COFDM, the way DVB-T is transmitted in Europe, and ISDB-T in Japan. However,
the FCC has always held that 8VSB is the better modulation for use in U.S. digital television
broadcasting. In a 1999 report, the Commission found that 8VSB has better threshold or carrier-to-noise (C/N) performance, has a higher data rate capability, requires less transmitter power for
equivalent coverage, and is more robust to impulse and phase noise.[2] As a result, it denied in
2000 a petition for rulemaking from Sinclair Broadcast Group requesting that broadcasters be
allowed to choose between 8VSB or COFDM as is most appropriate for their area of coverage.[3]
The FCC report also acknowledged that COFDM would "generally be expected to perform better
in situations where there is dynamic multipath," such as mobile operation or in the presence of
trees that are moving in high winds. Since the original FCC report, further improvements to VSB
reception technologies as well as the introduction of E-VSB option to ATSC have reduced this
challenge somewhat.
Because of continued adoption of the 8VSB-based ATSC standard in the U.S., and a large
growing ATSC receiver population, a switch to COFDM is now essentially impossible. Most
analog terrestrial transmissions in the US were turned off in June 2009, and 8VSB tuners are
common to all new TVs, further complicating a future transition to COFDM.
8VSB vs COFDM
The previously cited FCC Report also found that COFDM has better performance in dynamic
and high level static multipath situations, and offers advantages for single frequency networks
and mobile reception. Nonetheless, in 2001, a technical report compiled by the COFDM
Technical Group concluded that COFDM did not offer any significant advantages over 8VSB.
The report recommended in conclusion that receivers be linked to outdoor antennas raised to
roughly 30 feet (9 m) in height. Neither 8VSB nor COFDM performed acceptably in most indoor
test installations. [4]
However, there were questions whether the COFDM receiver selected for these tests (a
transmitter monitor[2] lacking normal front-end filtering) colored these results. Retests that
were performed using the same COFDM receivers with the addition of a front end band pass
filter gave much improved results for the DVB-T receiver, but further testing was not pursued.[3]
The debate over 8VSB versus COFDM modulation is still ongoing. Proponents of COFDM
argue that it resists multipath far better than 8VSB. Early 8VSB DTV (digital television)
receivers often had difficulty receiving a signal in urban environments. Newer 8VSB receivers,
however, are better at dealing with multipath. Moreover, 8VSB modulation requires less power
to transmit a signal the same distance. In less populated areas, 8VSB may outperform COFDM
because of this. However, in some urban areas, as well as for mobile use, COFDM may offer
better reception than 8VSB. Several "enhanced" VSB systems were in development, most
notably E-VSB, A-VSB, and MPH. The deficiencies of 8VSB with regard to multipath reception
can be dealt with by using additional forward error-correcting codes, such as those used by ATSC-M/H for Mobile/Handheld reception.
The vast majority of USA TV stations use COFDM for their studio-to-transmitter links and news
gathering operations; note, however, that these are point-to-point communication links and not broadcast transmissions.
UNIT II
TELEVISION RECEIVER SECTION
Monochrome receiver block diagram – Receiving antennas – Balun – IF filters – RF tuners – VHF stage and
response – Video detector – Sound section – Video amplifiers – DC restoration – Picture tubes.
SECTION A
1) Explain Video Content Analysis?
Video Content Analysis (VCA) is the capability of automatically analyzing video to detect and
determine temporal events not based on a single image. As such, it can be seen as the automated equivalent of
the biological visual cortex. This technical capability is used in a wide range of domains including
entertainment[1], health care, retail, automotive, transport, home automation, safety and security[2]. The
algorithms can be implemented as software on general purpose machines, or as hardware in specialized video
processing units.
2)Explain Sound TV ?
Sound TV was a free-to-air television channel following the tradition of the variety show, which
has not been popular in Britain since the 1980s. It aspired to give television exposure to acts
(young and old) unable to acquire airtime on other channels.
The managing director of the channel was comedian and folk singer Richard Digance, a talent
popular on variety shows such as the Sunday evening Live from... (Her Majesty's/the
Piccadilly/the Palladium) series (produced by LWT for ITV) and also on Summertime Special, a
moderately popular variety showcase of the 1980s.
Chris Tarrant and Mike Osman were executives and associates and Cornish comedian Jethro was
a director.
The channel was managed by Information TV, a factual channel which broadcasts on the same
frequency between midnight and 16:00. Sound TV's launch was delayed several times under its
working title of The Great British Television Channel.
3)Define CRT?
The cathode ray tube (CRT) is a vacuum tube containing an electron gun (a source of
electrons) and a fluorescent screen, with internal or external means to accelerate and deflect the
electron beam, used to create images in the form of light emitted from the fluorescent screen.
The image may represent electrical waveforms (oscilloscope), pictures (television, computer
monitor), radar targets and others. CRTs have also been used as memory devices, in which case
the visible light emitted from the fluorescent material (if any) is not intended to have significant
meaning to a visual observer (though the visible pattern on the tube face may cryptically
represent the stored data).
The CRT uses an evacuated glass envelope which is large, deep (i.e. long from front screen face
to rear end), fairly heavy, and relatively fragile. As a matter of safety, the face is typically made
of thick lead glass so as to be highly shatter-resistant and to block most X-ray emissions,
particularly if the CRT is used in a consumer product.
4)Define amplifier?
Amp was a music video program on MTV that aired from 1997 to 2001. It was aimed at the
electronic music and rave crowd and was responsible for exposing many electronica acts to the
mainstream. When co-creator Todd Mueller (who'd worked on this with Jane King) left the show
in 1998, it was redubbed Amp 2.0. The show aired some 46 episodes in total over its run.
In its final two years, reruns were usually shown from earlier years. Amp's time slot was moved
around quite a bit, but the show usually aired in the early morning hours on the weekend, usually
2am to 4am. Because of this late night time slot, the show developed a small but cult like
following. A few online groups formed after the show's demise to ask MTV to bring the show
back and air it during normal hours, but MTV never responded to the requests.
SECTION B
1)Explain antenna?
A television antenna, or TV aerial, is an antenna specifically designed for the reception of over the air
broadcast television signals, which are transmitted at frequencies from about 41 to 250 MHz in the VHF
band, and 470 to 960 MHz in the UHF band in different countries. To cover this range antennas generally
consist of multiple conductors of different lengths which correspond to the wavelength range the antenna
is intended to receive. The length of the elements of a TV antenna is usually half the wavelength of the
signal they are intended to receive. The wavelength of a signal equals the speed of light (c) divided by the
frequency. The design of a television broadcast receiving antenna is the same for the older analog
transmissions and the digital television (DTV) transmissions which are replacing them. Sellers often
claim to supply a special "digital" or "high-definition television" (HDTV) antenna as an advisable
replacement for an existing analog television antenna even when the existing one is satisfactory; this is
misinformation intended to generate sales of unneeded equipment.
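Since the element length is about half the wavelength and λ = c / f, as noted above, the required length for any channel frequency is easy to estimate. The short Python sketch below illustrates this; the example frequencies are arbitrary choices within the VHF and UHF ranges mentioned above.

```python
# Estimate TV antenna element length as half the wavelength: length = c / (2f).

C = 3e8                                  # speed of light in m/s

def half_wave_length_m(frequency_hz):
    return C / (2 * frequency_hz)

for f in (55e6, 200e6, 600e6):           # arbitrary VHF/UHF example frequencies
    print(f"{f / 1e6:.0f} MHz -> element length of about {half_wave_length_m(f):.2f} m")
```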
2) Draw the monochrome TV receiver block diagram?
3) Explain types of tuners?
Analog TV tuners
Analog television cards output a raw video stream, suitable for real-time viewing but ideally
requiring some sort of video compression if it is to be recorded. More advanced TV tuners
encode the signal to Motion JPEG or MPEG, relieving the main CPU of this load. Some cards
also have analog input (composite video or S-Video) and many also provide a tuner (radio).
Hybrid tuners
A hybrid tuner has one tuner that can be configured to act as an analog tuner or a digital tuner.
Switching between the systems is fairly easy, but cannot be done immediately. The card operates
as a digital tuner or an analog tuner until reconfigured.
Combo tuners
This is similar to a hybrid tuner, except there are two separate tuners on the card. One can watch
analog while recording digital, or vice versa. The card operates as an analog tuner and a digital
tuner simultaneously. The advantages over two separate cards are cost and utilization of
expansion slots in the computer. As many regions around the world convert from analog to
digital broadcasts, these tuners are gaining popularity.
Like the analog cards, the Hybrid and Combo tuners can have specialized chips on the tuner card
to perform the encoding, or leave this task to the CPU. The tuner cards with this 'hardware
encoding' are generally thought of as being higher quality.[citation needed] Small USB tuner sticks
have become more popular in 2006 and 2007 and are expected to increase in popularity. These
small tuners generally do not have hardware encoding due to size and heat constraints.
While most TV tuners are limited to the radio frequencies and video formats used in the country
of sale, many TV tuners used in computers use DSP, so a firmware upgrade is often all that's
necessary to change the supported video format. Many newer TV tuners have flash memory big
enough to hold the firmware sets for decoding several different video formats, making it possible
to use the tuner in many countries without having to flash the firmware. However, while it is
generally possible to flash a card from one analog format to another due to the similarities, it is
generally not possible to flash a card from one digital format to another due to differences in
decode logic necessary.
Many TV tuners can function as FM radios; this is because there are similarities between
broadcast television and FM radio. The FM radio spectrum is close to (or even inside) that used
by VHF terrestrial TV broadcasts. And many broadcast television systems around the world use
FM audio. So listening to an FM radio station is simply a case of configuring existing hardware.
SECTION C
1) Explain types of Television antenna?
A television antenna, or TV aerial, is an antenna specifically designed for the reception of over
the air broadcast television signals, which are transmitted at frequencies from about 41 to
250 MHz in the VHF band, and 470 to 960 MHz in the UHF band in different countries. To
cover this range antennas generally consist of multiple conductors of different lengths which
correspond to the wavelength range the antenna is intended to receive. The length of the
elements of a TV antenna is usually half the wavelength of the signal they are intended to
receive. The wavelength of a signal equals the speed of light (c) divided by the frequency. The
design of a television broadcast receiving antenna is the same for the older analog transmissions
and the digital television (DTV) transmissions which are replacing them. Sellers often claim to
supply a special "digital" or "high-definition television" (HDTV) antenna as an advisable
replacement for an existing analog television antenna even when the existing one is satisfactory;
this is misinformation intended to generate sales of unneeded equipment.[1][2]
Television antennas are used in conjunction with a television tuner, which is included in
television sets.

Simple/indoor
See also: dipole antenna
(Figure omitted: a common "rabbit ears" set-top antenna of an older model.)
Simple half-wave dipole antennas for VHF, or loop antennas for UHF, made to be placed
indoors, are often used for television (and VHF radio); these are often called "rabbit ears" or
"bunny aerials" because of their appearance. The length of the telescopic "ears" can be adjusted
by the user, and should be about one half of the wavelength of the signal for the desired channel.
These are not as efficient as an aerial rooftop antenna since they are less directional and not
always adjusted to the proper length for the desired channel. Dipole antennas are bi-directional,
that is, they receive evenly forward and backwards, and also cover a broader band than antennas
with more elements. This makes them less efficient than antennas designed to maximise the
signal from a narrower angle in one direction. Coupled with the poor placing, indoors and closer
to the ground, they are much worse than multi-element rooftop antennas at receiving signals
which are not very strong, although for nearby transmitters they
may be adequate and cheap. These simple antennas are called set-top antennas because they were
often placed on top of the television set or receiver.
The actual length of the ears is optimally about 91% of half the wavelength of the desired
channel in free space.[3] Quarter-wave television antennas are also used. These use a single
element, and use the earth as a ground plane; therefore, no ground is required in the feed line.
See also: Dipole antenna#Quarter-wave antenna
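Using the 91% figure quoted above, the optimum "ear" length for a given channel frequency can be estimated as follows; the sketch and the 200 MHz example frequency are illustrative only.

```python
# Optimum telescopic "ear" length: about 91% of half the free-space wavelength.

C = 3e8                                   # speed of light in m/s

def rabbit_ear_length_m(frequency_hz):
    return 0.91 * C / (2 * frequency_hz)

print(f"{rabbit_ear_length_m(200e6):.2f} m")   # about 0.68 m for a 200 MHz VHF channel
```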
Outdoor
See also: Yagi antenna
An aerial or rooftop antenna generally consists of multiple conductive elements that are arranged
such that it is a directional antenna. The length of the elements is about one half of the signal
wavelength. Therefore, the length of each element corresponds to a certain frequency.
In a combined VHF/UHF antenna the longer elements (for picking up VHF frequencies) are at
the "back" of the antenna, relative to the device's directionality, and the much shorter UHF
elements are in the "front", and the antenna works best when "pointing" to the source
of the signal to be received. The smallest elements in this design, located in the "front", are UHF
director elements, which are usually identical and give the antenna its directionality, as well as
improving gain. The longest elements, located in the "back" of the antenna form a VHF phased
array. Other long elements may be UHF reflectors.[4] Another common aerial antenna element is
the corner reflector, a type of UHF reflector which increases gain and directionality for UHF
frequencies.
An antenna can have a smaller or larger number of directors; the more directors it has (requiring
a longer boom), and the more accurate their tuning, the higher its gain will be. For the commonly
used Yagi antenna this is not a linear relationship. Antenna gain is the ratio of the signal received
from the preferred direction to the signal from an ideal omnidirectional antenna. Gain is
inversely proportional to the antenna's acceptance angle. The thickness of the rods on a Yagi
antenna and its bandwidth are inversely proportional; thicker rods provide a wider band.[5]
Thinner rods are preferable to provide a narrower band, hence higher gain in the preferred
direction; however, they must be thick enough to withstand wind.
Two or more directional rooftop antennas can be set up and connected to one receiver. Antennas
designed for rooftop use are sometimes located in attics.
Sometimes television transmitters are organised such that all receivers in a given location need
receive transmissions in only a relatively narrow band of the full UHF television spectrum and
from the same direction, so that a single antenna provides reception from all stations.[6]
Types of outdoor antenna
(Figures omitted: a UHF television antenna; an antenna pole mounted on a chimney, reaching 35 feet (10.7 meters) off the ground.)
Small multi-directional: The smallest of all outdoor television antennas. They are designed to
receive equal amounts of signal from all directions. These generally receive signals up to a
maximum of thirty miles away from the transmitting station, greatly depending on the type. However,
obstacles such as large buildings or thick woods may greatly affect the signal. They come in many
different styles, ranging from small dishes to small metal bars, some can even mount on existing
satellite dishes.
Medium multi-directional: A step up from the small multi-directional, these also receive
signals from all directions. These usually require an amplifier in situations when long cable
lengths are between the television receiver and the antenna. Styles are generally similar to small
multi-directionals, but slightly larger.
Large multi-directional: These are the largest of all multi-directional outdoor television
antennas. Styles include large "nets" or dishes, but can also greatly vary. Depending on the type,
signal reception usually ranges from 30 to up to 70 miles.
Small directional: The smallest of all directional antennas, these antennas are multi-element
antennas, typically placed on rooftops. This style of antenna receives signals generally equal to
that of large multi-directionals. One advantage that small directionals hold, however, is that they
can significantly reduce "ghosting" effects in the television picture.
Medium directional: These antennas are the ones most often seen on suburban rooftops.
Usually consisting of many elements, and slightly larger than the small directionals, these
antennas are ideal for receiving television signals in suburban areas. Signal usually ranges from
30 to 60 miles away from the broadcasting station.
Large directional: The largest of all common outdoor television antennas, these antennas are
designed to receive the weakest available stations in an area. Larger than the medium directional,
this type of antenna consists of many elements and is usually used in rural areas, where reception
is difficult. When used in conjunction with an amplifier, these antennas can usually pick up
stations from 60 up to and over 100 miles, depending on the type.
The use of outdoor antennas with an amplifier can improve the signal on low-signal-strength
channels. If the signal quality is low, repositioning the antenna onto a higher mast will improve
the signal.
Installation
(Figures omitted: a short antenna pole next to a house; multiple Yagi TV aerials in Israel.)
See also: Radio masts and towers
Antennas are commonly placed on rooftops, and sometimes in attics. Placing an antenna indoors
significantly attenuates the signal available to it. [7] [8] Directional antennas must be pointed at the
transmitter they are receiving; in most cases great accuracy is not needed. In a given region it is
sometimes arranged that all television transmitters are located in roughly the same direction and
use frequencies spaced closely enough that a single antenna suffices for all. A single transmitter
location may transmit signals for several channels.[9]
Analog television signals are susceptible to ghosting in the image, multiple closely spaced
images giving the impression of blurred and repeated images of edges in the picture. This was
due to the signal being reflected from nearby objects (buildings, trees, mountains); several copies
of the signal, of different strengths and subject to different delays, are picked up. This was
different for different transmissions. Careful positioning of the antenna could produce a
compromise position which minimized the ghosts on different channels. Ghosting is also
possible if multiple antennas connected to the same receiver pick up the same station, especially
if the cables connecting them to the splitter/merger were of different lengths or the
antennas were too close together.[10] Analog television is being replaced by digital, which is not
subject to ghosting.
Rooftop and other outdoor antennas
Aerials are attached to roofs in various ways, usually on a pole to elevate it above the roof. This
is generally sufficient in most areas. In some places, however, such as a deep valley or near taller
structures, the antenna may need to be placed significantly higher, using a lattice tower or mast.
The wire connecting the antenna to indoors is referred to as the downlead or drop, and the longer
the downlead is, the greater the signal degradation in the wire.
The higher the antenna is placed, the better it will perform. An antenna of higher gain will be
able to receive weaker signals from its preferred direction. Intervening buildings, topographical
features (mountains), and dense forest will weaken the signal; in many cases the signal will be
reflected such that a usable signal is still available. There are physical dangers inherent to high or
complex antennas, such as the structure falling or being destroyed by the weather. There are also
varying local ordinances which restrict and limit such things as the height of a structure without
obtaining permits. For example, in the USA, the Telecommunications Act of 1996 allows any
homeowner to install "An antenna that is designed to receive local television broadcast signals",
but that "masts higher than 12 feet above the roof-line may be subject to local permitting
requirements." [11]
Indoor antennas
As discussed previously, antennas may be placed indoors where signals are strong enough to
overcome antenna shortcomings. The antenna is simply plugged into the television receiver and
placed conveniently, often on the top of the receiver ("set-top"). Sometimes the position needs to
be experimented with to get the best picture. Indoor antennas can also benefit from RF
amplification, commonly called a TV booster. Indoor antennas will never be an option in weak
signal areas.
Attic installation
Sometimes it is desired not to put an antenna on the roof; in these cases, antennas designed for
outdoor use are often mounted in the attic or loft, although antennas designed for attic use are
also available. Putting an antenna indoors significantly decreases its performance due to lower
elevation above ground level and intervening walls; however, in strong signal areas reception
may be satisfactory.[12] One layer of asphalt shingles, roof felt, and a plywood roof deck is
considered to attenuate the signal to about half.[13]
Multiple antennas, rotators
(Figure omitted: two aerials set up on one roof, spaced horizontally and vertically.)
It is sometimes desired to receive signals from transmitters which are not in the same direction.
This can be achieved, for one station at a time, by using a rotator operated by an electric motor to
turn the antenna as desired. Alternatively, two or more antennas, each pointing at a desired
transmitter and coupled by appropriate circuitry, can be used. To prevent the antennas interfering
with each other, the vertical spacing between the booms must be at least half the wavelength of
the lowest frequency to be received (distance = λ/2).[14] The wavelength at 54 MHz (Channel 2)
is about 5.6 meters (λ × f = c), so the antennas must be a minimum of about 2.8 meters, or roughly 110 inches, apart.
It is also important that the cables connecting the antennas to the signal splitter/merger be exactly
the same length, to prevent phasing issues, which cause ghosting with analog reception. That is,
the antennas might both pick up the same station; the signal from the one with the shorter cable
will reach the receiver slightly sooner, supplying the receiver with two pictures slightly offset.
There may be phasing issues even with the same length of down-lead cable. Bandpass filters or
"signal traps" may help to reduce this problem.
For side-by-side placement of multiple antennas, as is common in a space of limited height such
as an attic, they should be separated by at least one full wavelength of the lowest frequency to be
received at their closest point.
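The stacking rules above (vertical spacing of at least λ/2 at the lowest frequency, side-by-side spacing of at least one full wavelength) amount to a simple wavelength calculation, sketched below for illustration.

```python
# Minimum antenna spacing from the stacking rules described above.

C = 3e8                                   # speed of light in m/s

def wavelength_m(frequency_hz):
    return C / frequency_hz

lowest_freq = 54e6                        # Channel 2 (lowest frequency to be received)
lam = wavelength_m(lowest_freq)

print(f"wavelength: about {lam:.2f} m")                     # about 5.6 m
print(f"vertical spacing (lambda/2): about {lam / 2:.2f} m") # about 2.8 m between booms
print(f"side-by-side spacing (lambda): about {lam:.2f} m")   # full wavelength apart
```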
Often when multiple antennas are used, one is for a range of co-located stations and the other is
for a single transmitter in a different direction.
UNIT III
SYNC SEPARATOR
Sync separator – Basic principle – Noise in sync pulses – Vertical and horizontal sync separation –
Automatic frequency Control (AFC) – Horizontal AFC – Vertical and horizontal output stage – EHT generation.
SECTION A
1) Explain what a horizontal AFC circuit comprises?
A horizontal AFC circuit comprises:
phase detector means supplied with a horizontal synchronizing signal separated from a television
video signal and with a comparison signal and carrying out phase comparison, said phase
detector means having a transistor supplied at the base thereof with the horizontal synchronizing
signal;
filter means for filtering the output of said phase detector means;
horizontal oscillator means supplied with the output of said filter means for oscillating with an
oscillation frequency controlled thereby;
horizontal deflection means for forming the output signal of said oscillator means into a
horizontal deflection pulse;
2)Define horizontal AFC?
A horizontal AFC circuit comprising a phase detector circuit supplied with a horizontal
synchronizing signal separated from a television video signal and with a comparison signal and
carrying out phase comparison, a filter circuit for filtering the output of the phase detector
circuit, a horizontal oscillator circuit supplied with the output of the filter circuit and oscillating
with an oscillation frequency controlled thereby, a horizontal deflection circuit for forming the
output signal of the horizontal oscillator circuit into a horizontal deflection pulse, a wave shaping
circuit operating upon being supplied with the output pulse of the horizontal deflection circuit to
wave shape this output pulse and to supply the resulting output signal thereof as said comparison
signal to the phase detector circuit, means for supplying a control pulse of a pulse width
corresponding to a vertical blanking period of the television video signal, and loop gain control
means supplied with the control pulse and operating to cause the loop gain of the horizontal AFC
circuit to be relatively large in the pulse width duration and to cause the loop gain to be relatively
small in a period other than said pulse width duration.
SECTION B
1) Write a note on Automatic Frequency Control?
In radio equipment, Automatic Frequency Control (AFC) is a method (or device) to automatically keep
a resonant circuit tuned to the frequency of an incoming radio signal. It is primarily used in radio
receivers to keep the receiver tuned to the frequency of the desired station.
In radio communication AFC is needed because, after the bandpass frequency of a receiver is
tuned to the frequency of a transmitter, the two frequencies may drift apart, interrupting the
reception. This can be caused by a poorly controlled transmitter frequency, but the most common
cause is drift of the center bandpass frequency of the receiver, due to thermal or mechanical drift
in the values of the electronic components.
Assuming that a receiver is nearly tuned to the desired frequency, the AFC circuit in the receiver
develops an error voltage proportional to the degree to which the receiver is mistuned. This error
voltage is then fed back to the tuning circuit in such a way that the tuning error is reduced. In
most frequency modulation (FM) detectors an error voltage of this type is easily available. See
Negative feedback.
AFC is also called Automatic Fine Tuning (AFT) in radio and TV receivers. It became rare in
this application, late in the 20th century, as the more effective frequency synthesizer method
became cheaper and more widespread.
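A minimal discrete-time sketch of the idea, assuming nothing about any particular receiver: the error is proportional to the mistuning and is fed back to pull the local tuning toward the incoming signal. The frequencies and loop gain below are arbitrary illustrative values.

```python
# Minimal sketch of an AFC loop: an error proportional to the mistuning is
# fed back to the tuning, reducing the error on every iteration.

signal_freq = 100.0e6        # incoming carrier (Hz), hypothetical
tuned_freq  = 100.3e6        # receiver initially mistuned by 300 kHz
loop_gain   = 0.5            # fraction of the error corrected per step

for step in range(8):
    error = tuned_freq - signal_freq          # the "error voltage" (here in Hz)
    tuned_freq -= loop_gain * error           # negative feedback pulls tuning back
    print(f"step {step}: offset = {tuned_freq - signal_freq:,.0f} Hz")
```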
2)Explain Sync Separator?
(Figures omitted: a portion of a PAL video signal showing, from left to right, the end of a video line, front porch, horizontal sync pulse, back porch with color burst, and the beginning of the next line; the beginning of a frame showing several scan lines with the terminal part of the vertical sync pulse at the left; and complete PAL video frames showing the scan lines, the vertical blanking interval with vertical sync, and the beginning of the next frame.)
Image synchronization is achieved by transmitting negative-going pulses; in a composite video
signal of 1 volt amplitude, these are approximately 0.3 V below the "black level". The horizontal
sync signal is a single short pulse which indicates the start of every line. Two timing intervals are
defined - the front porch between the end of displayed video and the start of the sync pulse, and
the back porch after the sync pulse and before displayed video. These and the sync pulse itself
are called the horizontal blanking (or retrace) interval and represent the time that the electron
beam in the CRT is returning to the start of the next display line. The vertical sync signal is a
series of much longer pulses, indicating the start of a new field. The sync pulses occupy the
whole of line interval of a number of lines at the beginning and end of a scan; no picture
information is transmitted during vertical retrace. The pulse sequence is designed to allow
horizontal sync to continue during vertical retrace; it also indicates whether each field represents
even or odd lines in interlaced systems (depending on whether it begins at the start of a
horizontal line, or mid-way through). In the TV receiver, a sync separator circuit detects the sync
voltage levels and sorts the pulses into horizontal and vertical sync. Loss of horizontal
synchronization usually resulted in an unwatchable picture; loss of vertical synchronization
would produce an image rolling up or down the screen.
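A very rough sketch of sync separation in software, assuming a composite signal sampled as voltages between 0 V (sync tip) and 1 V (peak white) with blanking at 0.3 V: anything below a threshold between sync tip and blanking is treated as sync, and long sync runs are classed as vertical rather than horizontal. The threshold and run lengths are illustrative, not taken from any standard.

```python
# Rough sketch of a software sync separator for a 0-1 V composite signal:
# samples below a threshold (between the 0 V sync tip and the 0.3 V blanking
# level) are sync; long sync runs indicate vertical sync, short ones horizontal.

SYNC_THRESHOLD_V = 0.15          # midway between sync tip (0 V) and blanking (0.3 V)
LONG_PULSE_SAMPLES = 20          # illustrative boundary between H and V pulses

def separate_sync(samples):
    pulses, run = [], 0
    for v in samples:
        if v < SYNC_THRESHOLD_V:
            run += 1
        elif run:
            pulses.append("vertical" if run >= LONG_PULSE_SAMPLES else "horizontal")
            run = 0
    return pulses

# A toy waveform: a short (horizontal) pulse followed by a long (vertical) pulse.
waveform = [0.3] * 5 + [0.0] * 4 + [0.7] * 10 + [0.0] * 25 + [0.3] * 5
print(separate_sync(waveform))   # ['horizontal', 'vertical']
```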
3) Describe the CRT flyback power supply design and operation principles?
Most of the receiver's circuitry (at least in transistor- or IC-based designs) operates from a
comparatively low-voltage DC power supply. However, the anode connection for a cathode-ray
tube requires a very high voltage (typically 10-30 kV) for correct operation.
This voltage is not directly produced by the main power supply circuitry; instead the receiver
makes use of the circuitry used for horizontal scanning. Direct current (DC), is switched though
the line output transformer, and alternating current ([AC]) is induced into the scan coils. At the
end of each horizontal scan line the magnetic field which has built up in both transformer and
scan coils by the current, is a source of latent electromagnetic energy. This stored collapsing
magnetic field energy can be captured. The reverse flow, short duration, (about 10% of the line
scan time) current from both the line output transformer and the horizontal scan coil is
discharged again into the primary winding of the flyback transformer by the use of a rectifier
which blocks this negative reverse emf. A small value capacitor is connected across the scan
switching device. This tunes the circuit inductances to resonate at a much higher frequency. This
slows down (lengthens) the flyback time from the extremely rapid decay rate that would result if
they were electrically isolated during this short period. One of the secondary windings on the
flyback transformer then feeds this brief high voltage pulse to a Cockcroft–Walton voltage
multiplier. This produces the required EHT supply. A flyback converter is a power supply circuit
operating on similar principles.
Typical modern design incorporates the flyback transformer and rectifier circuitry into a single
unit with a captive output lead, (known as a diode split line output transformer),[15] so that all
high-voltage parts are enclosed. Earlier designs used a separate line output transformer and a
well insulated high voltage multiplier unit. The high frequency (15 kHz or so) of the horizontal
scanning allows reasonably small components to be used.
SECTIONC
1)Explain about Synchronization?
Synchronizing pulses added to the video signal at the end of every scan line and video frame
ensure that the sweep oscillators in the receiver remain locked in step with the transmitted signal,
so that the image can be reconstructed on the receiver screen.[6] [7] [8]
A sync separator circuit detects the sync voltage levels and sorts the pulses into horizontal and
vertical sync.
Horizontal synchronization
The horizontal synchronization pulse (horizontal sync, or HSYNC) separates the scan lines. The
horizontal sync signal is a single short pulse which indicates the start of every line. The rest of
the scan line follows, with the signal ranging from 0.3 V (black) to 1 V (white), until the next
horizontal or vertical synchronization pulse.
The format of the horizontal sync pulse varies. In the 525-line NTSC system it is a 4.85 µs-long
pulse at 0 V. In the 625-line PAL system the pulse is a 4.7 µs synchronization pulse at 0 V. This is
lower than the amplitude of any video signal (blacker than black) so it can be detected by the
level-sensitive "sync stripper" circuit of the receiver.
Vertical synchronization
Vertical synchronization (Also vertical sync or V-SYNC) separates the video fields. In PAL and
NTSC, the vertical sync pulse occurs within the vertical blanking interval. The vertical sync
pulses are made by prolonging the length of HSYNC pulses through almost the entire length of
the scan line.
The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The
sync pulses occupy the whole of line interval of a number of lines at the beginning and end of a
scan; no picture information is transmitted during vertical retrace. The pulse sequence is
designed to allow horizontal sync to continue during vertical retrace; it also indicates whether
each field represents even or odd lines in interlaced systems (depending on whether it begins at
the start of a horizontal line, or mid-way through).
The format of such a signal in 525-line NTSC is:
- pre-equalizing pulses (6 to start scanning odd lines, 5 to start scanning even lines)
- long-sync pulses (5 pulses)
- post-equalizing pulses (5 to start scanning odd lines, 4 to start scanning even lines)
Each pre- or post-equalizing pulse consists of half a scan line of black signal: 2 µs at 0 V,
followed by 30 µs at 0.3 V.
Each long sync pulse consists of an equalizing pulse with the timings inverted: 30 µs at 0 V,
followed by 2 µs at 0.3 V.
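The pulse counts and half-line timings above can be strung together mechanically; the sketch below builds the sequence as (level in volts, duration in µs) pairs, purely as a bookkeeping illustration of the description.

# Bookkeeping sketch of the 525-line vertical interval described above.
EQUALIZING = [(0.0, 2.0), (0.3, 30.0)]     # one equalizing pulse = half a scan line
LONG_SYNC  = [(0.0, 30.0), (0.3, 2.0)]     # one long (serrated) sync pulse = half a scan line

def vertical_interval(odd_field=True):
    pre  = 6 if odd_field else 5           # pre-equalizing pulses
    post = 5 if odd_field else 4           # post-equalizing pulses
    return EQUALIZING * pre + LONG_SYNC * 5 + EQUALIZING * post

seq = vertical_interval(odd_field=True)
print(len(seq) // 2, "half-line pulses,", sum(d for _, d in seq), "µs in total")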
In video production and computer graphics, changes to the image are often kept in step with the
vertical synchronization pulse to avoid visible discontinuity of the image. Since the frame buffer
of a computer graphics display imitates the dynamics of a cathode-ray display, if it is updated
with a new image while the image is being transmitted to the display, the display shows a
mishmash of both frames, producing a page tearing artifact partway down the image.
Vertical synchronization eliminates this by timing frame buffer fills to coincide with the vertical
blanking interval, thus ensuring that only whole frames are seen on-screen. Software such as
computer games and computer-aided design (CAD) packages often offer vertical
synchronization as an option, because it delays the image update until the vertical blanking
interval. This produces a small penalty in latency, because the program has to wait until the
video controller has finished transmitting the image to the display before continuing. Triple
buffering reduces this latency significantly.
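A small sketch of the trade-off described above, using a hypothetical swap_buffers routine and an assumed fixed 60 Hz refresh; the point is only that, with v-sync enabled, a finished frame is held until the vertical blanking interval before it is shown.

import time

REFRESH_HZ = 60.0                 # assumed display refresh rate
FRAME_TIME = 1.0 / REFRESH_HZ

def render(frame):
    pass                          # draw the frame into the back buffer (placeholder)

def swap_buffers():
    pass                          # hypothetical: present the back buffer to the display

next_vblank = time.monotonic()
for frame in range(3):
    render(frame)
    # V-sync: wait for the next vertical blanking interval, then swap, so the
    # display never shows a half-updated frame (at the cost of some latency).
    time.sleep(max(0.0, next_vblank - time.monotonic()))
    swap_buffers()
    next_vblank += FRAME_TIME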
Two timing intervals are defined - the front porch between the end of displayed video and the
start of the sync pulse, and the back porch after the sync pulse and before displayed video. These
and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the
time that the electron beam in the CRT is returning to the start of the next display line.
Horizontal hold and vertical hold
The lack of precision timing components available in early television receivers meant that the
timebase circuits occasionally needed manual adjustment. The adjustment took the form of
horizontal hold and vertical hold controls, usually on the rear of the television set. Loss of
horizontal synchronization usually resulted in an unwatchable picture; loss of vertical
synchronization would produce an image rolling up or down the screen.
UNIT IV
COLOUR TELEVISION
Nature of color – Color perception – Compatibility – Three color theories – Chromaticity diagram –
Luminance and color difference signals – weighting factors – color picture tube – Bandwidth for color signal
transmission – PAL Color TV systems- Block diagram of color TV Receiver
Colors (TV channel).
Colors, (Hindi: कलर्स) known as Aapka Colors in the U.S., is a Hindi language Indian general
entertainment channel based in Mumbai,[1] part of the Viacom 18 family, which was launched on
July 21, 2008.[2] The channel gained huge popularity soon after its launch with Fear Factor:
Khatron Ke Khiladi, hosted by Bollywood actor Akshay Kumar, and its strong ratings briefly
placed it at the top among Hindi general entertainment channels such as STAR Plus, Zee TV,
Sony TV, Imagine TV, STAR One and Sahara One. The network has successfully completed its
first year.
Currently, the channel features a number of successful shows, such as Bigg Boss, Balika Vadhu,
Uttaran, Na Aana Is Des Laado, and Laagi Tujhse Lagan. The channel's most popular show,
Balika Vadhu, was ranked among the top 5 shows on Indian television's TRP charts within 3
months of its launch.[3]
On 21 January 2010, Colors became available on Dish Network in the U.S., where it is called
Aapka Colors (Respectfully your Colors) because of a clash with Colours TV.[4] Amitabh
Bachchan served as brand ambassador for the UK and USA launches.[5]
Colors launched in the United Kingdom and Ireland on Sky on 25 January 2010.[6] On 9
December 2009, INX Media confirmed that Colors had bought 9XM's Sky EPG slot on channel
829 and on 5 January 2010, Colors secured a deal to join the VIEWASIA subscription
package.[7][8] EPG tests began on 4 January 2010 using the 9XM stream, followed by Colors'
own video and audio on 8 January.[9][10] Initially the channel was available free-to-air and then
subsequently was added to the VIEWASIA package on 19 April 2010.[11] Colors was added to
Virgin Media on 1 April 2011, as a part of the Asian Mela pack.[12]
Most of the shows on Colors are produced by IBC Corporation's subsidiary, IBC Television.
They include: Sasural Simar Ka, Parichay, Havan, Mukti Bandhan and Phulwa.
SECTION A
1)Write a note on the PAL colour TV system?
PAL, short for Phase Alternating Line, is an analogue television colour encoding system used
in broadcast television systems in many countries. Other common analogue television systems
are NTSC and SECAM. This section primarily discusses the PAL colour encoding system; broadcast
television systems and analogue television standards further define frame rates, image resolution
and audio modulation, including the 625-line / 50 field (25 frame) per second standard with which
PAL is most commonly used.
2)Explain TV?
Television (TV) is the most widely used telecommunication medium for transmitting and receiving moving images
that are either monochromatic ("black and white") or color, usually accompanied by sound. "Television" may also
refer specifically to a television set, television programming or television transmission. The word is derived from
mixed Latin and Greek roots, meaning "far sight": Greek tele (τῆλε), far, and Latin visio, sight (from video, vis- to
see, or to view in the first person).
SECTION B
1)Explain cathode ray tube (CRT)?
The cathode ray tube (CRT) is a vacuum tube containing an electron gun (a source of
electrons) and a fluorescent screen, with internal or external means to accelerate and deflect the
electron beam, used to create images in the form of light emitted from the fluorescent screen.
The image may represent electrical waveforms (oscilloscope), pictures (television, computer
monitor), radar targets and others. CRTs have also been used as memory devices, in which case
the visible light emitted from the fluorescent material (if any) is not intended to have significant
meaning to a visual observer (though the visible pattern on the tube face may cryptically
represent the stored data).
The CRT uses an evacuated glass envelope which is large, deep (i.e. long from front screen face
to rear end), fairly heavy, and relatively fragile. As a matter of safety, the face is typically made
of thick lead glass so as to be highly shatter-resistant and to block most X-ray emissions,
particularly if the CRT is used in a consumer product.
2)Explain primary colours in TV?
The RGB color model is an additive color model in which red, green, and blue light is added
together in various ways to reproduce a broad array of colors. The name of the model comes
from the initials of the three additive primary colors, red, green, and blue.
The main purpose of the RGB color model is for the sensing, representation, and display of
images in electronic systems, such as televisions and computers, though it has also been used in
conventional photography. Before the electronic age, the RGB color model already had a solid
theory behind it, based in human perception of colors.
RGB is a device-dependent color model: different devices detect or reproduce a given RGB
value differently, since the color elements (such as phosphors or dyes) and their response to the
individual R, G, and B levels vary from manufacturer to manufacturer, or even in the same
device over time. Thus an RGB value does not define the same color across devices without
some kind of color management.
Typical RGB input devices are color TV and video cameras, image scanners, and digital
cameras. Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma,
etc.), computer and mobile phone displays, video projectors, multicolor LED displays, and large
screens such as JumboTron, etc. Color printers, on the other hand, are not RGB devices, but
subtractive color devices (typically CMYK color model).
This section discusses concepts common to all the different color spaces that use the RGB color
model, which are used in one implementation or another in color image-producing technology.
SECTION C
1)Briefly explain Color perception?
What you should know from this lecture:
- Light & wavelengths
- Spectral power distribution
- Trichromacy theory
  o Color matching experiment
  o Photoreceptor spectral sensitivities
  o Color blindness
- Color opponency
- Color constancy, chromatic adaptation, & simultaneous color contrast
For a simple, online introduction to color vision, see "Breaking the Code of Color" at the
Howard Hughes Medical Institute web page:
- How Do We See Colors
- Red, Green and Blue Cones
- Color Blindness: More Prevalent Among Males
- Judging a Color
Physics of Color/Wavelength
Color vision begins with the physics of light. Isaac Newton discovered the fundamental decomposition of
light into separate wavelength components (a drawing from his notebook is reproduced below on the left).
If we pass light through a prism, the result is a spectrum, the colors of the rainbow (below on the right).
Visible light corresponds to a small range of the electromagnetic spectrum, roughly from 400 nm (which
appears blue) to 700 nm (appears red) in wavelength.
Spectral power distribution (SPD) is a plot of energy versus wavelength. The SPD can be
measured using a spectro-radiometer. The diagram below of a spectro-radiometer shows a light
source, a prism that splits the light into its separate components, a slit that passes only a narrow
band of wavelengths (ideally it would pass only one wavelength), and a photodetector that
measures how much light there is at that wavelength. By moving the slit and detector, one can
measure the amount of energy at each wavelength. Most lights contain energy at many
wavelengths. A light that contains only one wavelength is called a monochromatic light. Any
light can be characterized as the sum of a bunch of monochromatic lights, and that is what is
plotted in the SPD graph (note that this is just like characterizing a sound as the sum of a bunch
of pure tones). You can tell from the SPDs plotted below that both of those lights will have a
reddish-yellowish appearance because most of the energy is at the long wavelengths.
Color Matching and Trichromacy
Nineteenth-century scientists, first Young and then our old friend Helmholtz, performed a simple
perceptual experiment to infer that there must be 3 types of photoreceptors in our eyes.
This figure is a diagram of the classic color matching experiment. A box is split into two
chambers, one chamber has a test light, the other chamber has three primary lights (the 3
primaries can be almost any 3 light sources as long as they are different from one another). A
small hole in the box allows a subject to see the colors from the 2 chambers right next to one
another. The subject's task is to adjust 3 knobs that set the intensities of the 3 primaries so as to
match the test light as closely as possible.
The results:
1. This task is possible to do. In almost all circumstances (the exceptions involve technicalities we
won't discuss in this class), subjects can match any test light whatsoever as a sum of three
primary lights, where all they can do is vary the intensity of each constituent primary.
2. Lights that are physically different can look identical. Such pairs of lights are called metamers or
metameric lights. The test light SPD is typically different from the SPD of the combination of the
primaries. Using a spectro-radiometer, you can tell that the lights in the two chambers are
different. Using your eye, they look identical.
3. Three primaries are always enough to match any test light. With three primaries there is only one
way to set the knobs to get the match. Two primaries are not enough; there is no way to achieve a
match for most light sources. Four is too many; with four primaries there is an infinite number of
different settings that can achieve a match.
4. People behave like linear systems in the color matching experiment. Matching obeys the scalar
rule: if you double the intensity of the test light, subjects will double the settings of the 3 knobs. It
also obeys the additivity rule: if you add any two test lights, the subject will set the 3 knobs to the
sum of the settings obtained when matching the individual test lights. You might expect people's
behavior to be more complicated than this, considering all the neural activity that must go into observing the
lights, making a decision, adjusting the knobs, etc. The incredibly simple behavior in this
experiment calls out for a simple and basic explanation (see below).
The SPDs shown above are a pair of metamers: two lights that are physically different, yet look
identical. The one on the left is the SPD of the light coming from the sun. The one on the right is
the SPD of the light coming from a TV screen. The intensities of the red, green, and blue
phosphors in the TV were adjusted to give a perceptual match to the color of sunlight.
The color matching experiment is the basis for the design of color TV. Three types of phosphors
are painted on the CRT screen that glow red, green and blue. Yet, the TV can produce the
appearance of most colors (yellow, purple, orange, etc.). The designers of color TV took
advantage of the results of the color matching experiment: 3 primaries are all you need.
Physiological Basis of Trichromacy
The explanation of the color matching experiment is that there are three types of cone photoreceptors. All
that matters is the response of the 3 cone types. With 3 primaries, you can get any combination of
responses in the 3 cone types, so you can match the appearance of any test light.
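Because matching behaves linearly, the knob settings can be written as the solution of a small linear system: if each column of a 3x3 matrix holds the (L, M, S) responses to unit intensity of one primary, the match is just the set of intensities that reproduces the test light's cone responses. The numbers in the sketch below are invented for illustration.

import numpy as np

# Columns: (L, M, S) cone responses to unit intensity of primaries 1, 2, 3 (made-up values).
P = np.array([[0.90, 0.30, 0.02],
              [0.40, 0.80, 0.05],
              [0.02, 0.10, 0.90]])

test = np.array([0.60, 0.50, 0.30])      # (L, M, S) responses evoked by the test light

knobs = np.linalg.solve(P, test)         # primary intensities giving the same cone responses
print(knobs)                             # a metamer: same (L, M, S), different SPD
# (In real experiments a "negative" setting means that primary is added to the test side.)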
Denis Baylor, at Stanford, measured the spectral sensitivities of macaque monkey rods and
cones. To do this, you chop up the retina. Then, you manage to get a single rod or cone into a
glass pipette. Then, you shine a light on it and measure the resulting electrical current. This is
repeated for many different wavelengths and for each of the three cone classes.
The figure above shows plots of rod spectral sensitivity - relative response versus wavelength - from Baylor's measurements. The height of the curve at a certain wavelength corresponds to the
probability that a photopigment molecule will absorb (and isomerize) a photon of light with that
wavelength. The greater the probability of isomerization, the greater the response from the cone.
Rods are most sensitive to 500 nm monochromatic light. Note that 500 nm is a pretty short
wavelength - the range of wavelengths in visible light is about 400-700 nm. Most cones are
sensitive to longer wavelengths than this. Because of this, the brightness of a blue object
compared to a red one increases during dark adaptation, called the Purkinje shift.
You may have noticed that under low light conditions (when your eye is dark adapted), you don't
see colors. Rather, everything appears as some shade of gray. All rods have the same
photopigment (rhodopsin) and hence all rods have the same spectral sensitivity. With only one
spectral sensitivity, there's no way to discriminate wavelength. Wavelength is totally confounded
with intensity - this is the principle of univariance.
The figure above shows plots of the cone spectral sensitivities - relative response versus
wavelength - for each of the 3 cone types. S cones are most sensitive to short wavelengths. L
cones are most sensitive to long wavelengths. M cones' peak sensitivity is to middle
wavelengths. Note that the y-axis is on a log scale. These are amazing measurements, precise to
6 orders of magnitude.
Changing the wavelength of a monochromatic light changes the relative responses of the three
cone types. This is the basis of your ability to discriminate the colors of the rainbow (wavelength
discrimination). Each wavelength evokes a unique ratio of cone responses.
The cone responses to any test light can be computed by multiplying the test light SPD by the
spectral sensitivities of each cone, and then summing over wavelength.
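That computation is just a weighted sum over wavelength; the sketch below uses crude Gaussian stand-ins for the cone sensitivity curves (not Baylor's actual data) purely to show the arithmetic.

import numpy as np

wavelengths = np.arange(400, 701, 10)                     # nm

def bump(peak_nm, width_nm=40.0):
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

sensitivities = np.stack([bump(565),                      # L cones (stand-in curve)
                          bump(535),                      # M cones
                          bump(440)])                     # S cones

spd = bump(600, width_nm=60.0)                            # a long-wavelength (reddish) test light

cone_responses = sensitivities @ spd                      # multiply by the SPD, sum over wavelength
print(dict(zip("LMS", np.round(cone_responses, 2))))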
The SPD of light reaching the eye depends on the SPD of the light source multiplied by the
surface reflectance. The response of each photoreceptor depends on the SPD of the light reaching
the eye multiplied by the spectral sensitivity of the photopigment.
Each point of a scene is illuminated by various light sources, each of which has its own SPD
(upper-left). Surfaces are characterized by the proportion of the light landing on them that is
reflected (e.g., towards your eye), known as a spectral reflectance function (below-left in the
figure). This surface is blueish, as it mostly reflects short-wavelength light. The product
(wavelength by wavelength) of the illuminant and reflectance yields the color signal, which is
the SPD of the light heading toward your eye from the surface. This signal is analyzed by your
three cone photoreceptors, which respond differentially due to their individual spectral
sensitivities (above-right). The only information your brain has to work with to characterize the
color percept of each point in the scene is the set of three responses to each surface by the three
cone types (below-right).
Color mixture: An issue that can be confusing about color and trichromacy is that
colored lights behave differently from colored pigments.
Lights mix "additively" meaning that the spectral power distribution of the sum of two lights is
the sum of the two spectral power distributions. Mixing more of one of the primaries gives more
light. This is what happens when you control the intensities of the 3 primary lights in the color
matching experiment or when your TV presents color with a mixture of 3 phosphors.
"Subtractive" color mixture is the term that is used when mixing pigments (like paints or inks).
In this case, it is the absorption of the pigments that is being combined. Mixing more of one of
the pigments gives less reflected light. The spectral power distribution of a light reflecting off of
a pigmented surface is the spectral power of the incident light multiplied by the
reflectance of the surface. Mixing more pigment (more paint) reduces the reflectance (absorbs
more light) and hence reduces the spectral power distribution of the reflected light at one or more
wavelengths.
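The difference between the two mixing rules can be shown with toy spectra: adding lights adds their SPDs, while stacking pigments multiplies their reflectances (and so removes light). All of the curves below are invented for illustration.

import numpy as np

wavelengths = np.linspace(400, 700, 31)
illuminant = np.ones_like(wavelengths)                    # flat "white" light, assumed

long_light  = np.where(wavelengths > 600, 1.0, 0.0)       # toy reddish light
mid_light   = np.where((wavelengths > 500) & (wavelengths <= 600), 1.0, 0.0)
additive    = long_light + mid_light                      # lights add: more total light

long_paint  = np.where(wavelengths > 600, 0.9, 0.1)       # toy reflectance curves
mid_paint   = np.where((wavelengths > 500) & (wavelengths <= 600), 0.9, 0.1)
subtractive = illuminant * long_paint * mid_paint         # pigments multiply: less reflected light

print("additive total power:   ", additive.sum())
print("subtractive total power:", subtractive.sum())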
Summary of trichromacy theory: There are three cone types that differ in their photopigments.
The three photopigments are each selective for a different range of wavelengths. If two lights
evoke the same responses in the three cone types, then the two lights will look the same. All that
matters is the excitation in the three cone types. Each cone outputs only a single number (i.e.,
satisfies univariance). It tells us how many photons it has absorbed, but nothing about which
photons they were (i.e., which wavelength). There are lots of lights out there that are physically
different, but result in the same cone excitations (such lights are called metamers). Trichromacy
is the basis of color technology in the print industry and color TV.
Color blindness: There are two basic forms of color blindness. Either the person has only a
single type of receptor (called a monochromat) or has 2 types (and is called a dichromat).
- A dichromat only requires 2 primary lights to successfully complete the color matching
experiment. A dichromat will accept a trichromat's match, but a trichromat will not typically
accept a dichromat's match. In other words, some stimuli that look different to the trichromat are
metamers for the dichromat.
- A monochromat requires only 1 primary light to match any test light.
- A rod monochromat is missing all 3 cone types; they only have rods. They don't see color at all,
only different shades of gray. They also have to wear dark sunglasses during the daytime.
Otherwise their photoreceptors would be fully bleached and they would be effectively blind.
About 7% of males have an impairment in their ability to discriminate red-green colors. This common,
sex-linked defect is explained by the close proximity of the two genes on the X chromosome. Try an
online color blindness simulator to "see" what it would be like to be color blind.
Color Opponency
The color purple looks both reddish and blueish. The color orange looks both reddish and yellowish.
Turquoise looks both blueish and greenish. But you've never seen a color that looks both green and red. Nor
have you ever seen a color that looks both yellow and blue. This fundamental observation led Hering
(another 19th century psychophysicist) to propose the opponent colors theory of color perception.
Color opponency was established with the hue cancellation experiment, in which subjects were
instructed to adjust a mixture of red and green lights until it appeared neither reddish nor greenish.
At this point, it typically appeared yellow (notably, not reddish-greenish). Likewise, one can
adjust a mixture of blue and yellow lights to appear neither blueish nor yellowish.
For many years, the notion of opponent colors was viewed as a competing/alternate theory to
trichromacy. Today, we understand how the two theories fit together.
Trichromacy falls out from the fact that you have three cone types with different spectral
sensitivities. In the retina, the cone signals get recombined into opponent mechanisms:
1. White/black: adds signals from all three cones types, L+M+S.
2. Red/green: L-M
3. Yellow/blue: L+M-S
A color appears reddish when the red/green mechanism gives a positive response, greenish when
the red/green mechanism gives a negative response. Likewise for yellow/blue.
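As a sketch, the recombination listed above can be written as three signed sums of the cone signals. Only the signs come from the text; the equal weightings and the example values are assumptions for illustration.

def opponent(L, M, S):
    return {"white_black": L + M + S,
            "red_green":   L - M,        # positive -> reddish, negative -> greenish
            "yellow_blue": L + M - S}    # positive -> yellowish, negative -> blueish

print(opponent(L=0.6, M=0.5, S=0.3))     # slightly reddish and yellowish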
Color opponency in the retina: Color opponency requires very specific wiring in the retina.
The blue-yellow mechanism, for example, must receive complementary inputs from specific
cone types (e.g., inhibition from S cones, excitation from L and M cones). Anatomists have
identified a special subclass of ganglion cells, called bistratified cells, that do just that. The
anatomical substrate for red/green opponency is still unknown.
This figure is a diagram of the blue-yellow pathway in the retina. S cones (shown in blue in the
figure) connect to a special subclass of bipolar cells (called the S-cone bipolar cells). L and M
cones connect to another type of bipolar cells. The B/Y bistratified ganglion cell receives
complementary inputs from the two bipolar cell classes, providing excitation from the S cones
and inhibition from the L and M cones.
In this figure, the S cones are filled with fluorescent dye. It turns out to be easy to stain the S
cones, because their photopigment is very different from the other two types. Most of the cones
are L and M cones; there are only a few S cones. Because it's easy to find the S cones,
anatomists have been able to identify the retinal circuitry for blue-yellow.
Note that there aren't very many S cones (yellow in the above pictures) compared to the L and M
cones (dark in the above pictures). As a consequence of this, the blue-yellow pathway has poor
spatial resolution. The blue and yellow colors in the stripes below are all the same. When viewed
from a sufficiently large distance, the fat stripes look more saturated than the thin stripes because
the thin stripes are at the spatial resolution limit of the S cone mosaic.
Color Constancy and Chromatic Adaptation
Take a photograph under fluorescent light, and compare it to the same picture taken under
daylight. The colors come out totally differently - greenish under the fluorescent light and
reddish under daylight - unless you do some "color correction" while developing the film.
But you wouldn't see it that way if you were in the room. To you the colors would look pretty
much the same under both illuminants. This phenomenon is called color constancy, analogous to
brightness constancy that we discussed earlier. The eye does not act like a camera, simply
recording the image. Rather, the eye adapts to compensate for the color (SPD) of the light
source.
Above is another example of a pair of photographs taken under different lighting conditions
without color correction. The physical characteristics of the light reaching the camera are very
different depending on the color of the illuminant. This results in dramatically different
photographs. But if you were there when the pictures were taken, this object would look pretty
much the same to you under both illuminants.
Glance at the penguin and dragon pictures above by fixating on the dot between them. The
penguin picture looks very blueish and the dragon looks very yellowish. Next, hold your
gaze on the dot between the blue and yellow fields. Continue staring at that dot for 30 secs or so.
Then look back at the penguin and dragon by fixating the dot between them. What do you see?
Why?
The change in percept following adaptation is due to chromatic adaptation. Chromatic
adaptation is like light and dark adaptation but instead of adapting just to light and dark, it adapts
to whatever the color is of the ambient illumination.
Each cone type adapts independently. For example, a given L cone adapts according to local
average L cone excitation. Likewise for the M cones. Thus, the retinal image adjusts to
compensate not only for the overall intensity of the light source, but also to compensate for the
color of the light source.
Chromatic adaptation, like light adaptation, can give rise to dramatic aftereffects. For example,
adapt to this green, black, and yellow flag for 60 secs, then look at a white field and you will see
an afterimage of a red, white, and blue flag. Red/green, blue/yellow, black/white are
complementary colors. Normally, when you look at a white field, L and M cones give about the
same response so the red/green opponnent colors mechanism does not respond at all. If you adapt
to green, the M cone sensitivity is reduced. Then, when you look at a white field, the L:M cones
are out of balance; the L cones are now more sensitive than the M cones so the red/green
mechanism gives a positive response and you see red instead of white. This only lasts for a
couple of seconds because the M cone sensitivity starts to readjust right away.
The visual system is designed to try to achieve perceptual constancy. But, as with the various
brightness illusions I showed earlier, color adaptation also results in some misperceptions. The
colored afterimage is an undesirable consequence of chromatic adaptation coupled with color
opponency. Usually chromatic adaptation does the right thing: it compensates for the color of the
illuminant.
Simultaneous color contrast (analogous to simultaneous brightness contrast). The X on the left is
surrounded by yellow. The X on the right is surrounded by gray. The paint/pigment of the two
X's is identical, yet the color appearance is quite different because the surrounding context is
different. Color perception, like brightness perception, depends on contrast/surrounding context.
2)Describe the theory of analog TV?
Analog television
Analog (or analogue) television is television in which encoded analog audio and analog video
are broadcast as an analog signal:[1] one in which the message conveyed by the
broadcast signal is a function of deliberate variations in the amplitude and/or frequency of the
signal. All broadcast television systems preceding digital transmission of digital television
(DTV) were systems utilizing analog signals. Analog television may be distributed wirelessly
(over the air) or by cable.
Early Monochrome Analog receiver
Development
Main article: History of Television
The earliest mechanical television systems used spinning disks with patterns of holes punched
into the disc to "scan" an image. A similar disk reconstructed the image at the receiver.
Synchronization of the receiver disc rotation was handled through sync pulses broadcast with the
image information. However these mechanical systems were slow, the images were dim and
flickered severely, and the image resolution was very low. Camera systems used similar spinning
discs and required intensely bright illumination of the subject for the light detector to work.
Analog television did not really begin as an industry until the development of the cathode-ray
tube (CRT), which uses a steered electron beam to "write" lines of electrons across a phosphor
coated surface. The electron beam could be swept across the screen much faster than any
mechanical disc system, allowing for more closely spaced scan lines and much higher image
resolution, while slow-fade phosphors removed image flicker effects. Also far less maintenance
was required of an all-electronic system compared to a spinning disc system.
Standards
Further information: Broadcast television system
Broadcasters using analog television systems encode their signal using NTSC, PAL or SECAM
analog encoding[2] and then use RF modulation to modulate this signal onto a Very high
frequency (VHF) or Ultra high frequency (UHF) carrier. Each frame of a television image is
composed of lines drawn on the screen. The lines are of varying brightness; the whole set of lines
is drawn quickly enough that the human eye perceives it as one image. The next sequential frame
is displayed, allowing the depiction of motion. The analog television signal contains timing and
synchronization information so that the receiver can reconstruct a two-dimensional moving
image from a one-dimensional time-varying signal.
In many countries, over-the-air broadcast television of analog audio and analog video signals is
being discontinued, to allow the re-use of the television broadcast radio spectrum for other
services such as datacasting and subchannels.
The first commercial television systems were black-and-white; the beginning of color television
was in the 1950s.[3]
A practical television system needs to take luminance, chrominance (in a color system),
synchronization (horizontal and vertical), and audio signals, and broadcast them over a radio
transmission. The transmission system must include a means of television channel selection.
Analog broadcast television systems come in a variety of frame rates and resolutions. Further
differences exist in the frequency and modulation of the audio carrier. The monochrome
combinations still existing in the 1950s are standardized by the International Telecommunication
Union (ITU) as capital letters A through N. When color television was introduced, the hue and
saturation information was added to the monochrome signals in a way that black & white
televisions ignore. In this way, backwards compatibility was achieved. That concept is true for all
analog television standards.
However there are three standards for the way the additional color information can be encoded
and transmitted. The first was the American NTSC (National Television Systems Committee)
color television system. The European/Australian PAL (Phase Alternation Line rate) and the
French-Former Soviet Union SECAM (Séquentiel Couleur Avec Mémoire) standard were
developed later and attempt to cure certain defects of the NTSC system. PAL's color encoding is
similar to the NTSC systems. SECAM, though, uses a different modulation approach than PAL
or NTSC.
In principle all three color encoding systems can be combined with any scan line/frame rate
combination. Therefore, in order to describe a given signal completely, it's necessary to quote the
color system and the broadcast standard (a capital letter). For example, the United States uses
NTSC-M, the UK uses PAL-I, France uses SECAM-L, much of Western Europe and Australia
uses PAL-B/G, most of Eastern Europe uses PAL-D/K or SECAM-D/K and so on.
However not all of these possible combinations actually exist. NTSC is currently only used with
system M, even though there were experiments with NTSC-A (405 line) and NTSC-I (625 line)
in the UK. PAL is used with a variety of 625-line standards (B,G,D,K,I,N) but also with the
North American 525-line standard, accordingly named PAL-M. Likewise, SECAM is used with
a variety of 625-line standards.
For this reason many people refer to any 625/25 type signal as "PAL" and to any 525/30 signal
as "NTSC", even when referring to digital signals, for example, on DVD-Video which don't
contain any analog color encoding, thus no PAL or NTSC signals at all. Even though this usage
is common, it is misleading as that is not the original meaning of the terms PAL/SECAM/NTSC.
Although a number of different broadcast television systems were in use worldwide, the same
principles of operation apply.[4]
Displaying an image
A cathode-ray tube (CRT) television displays an image by scanning a beam of electrons across
the screen in a pattern of horizontal lines known as a raster. At the end of each line the beam
returns to the start of the next line; at the end of the last line it returns to the top of the screen. As
it passes each point the intensity of the beam is varied, varying the luminance of that point. A
color television system is identical except that an additional signal known as chrominance
controls the color of the spot.
Raster scanning is shown in a slightly simplified form below.
When analog television was developed, no affordable technology for storing any video signals
existed; the luminance signal has to be generated and transmitted at the same time at which it is
displayed on the CRT. It is therefore essential to keep the raster scanning in the camera (or other
device for producing the signal) in exact synchronization with the scanning in the television.
The physics of the CRT require that a finite time interval is allowed for the spot to move back to
the start of the next line (horizontal retrace) or the start of the screen (vertical retrace). The
timing of the luminance signal must allow for this.
The human eye has a characteristic called persistence of vision. Quickly displaying successive
scan images will allow the apparent illusion of smooth motion. Flickering of the image can be
partially solved using a long persistence phosphor coating on the CRT, so that successive images
fade slowly. However, slow phosphor has the negative side-effect of causing image smearing
and blurring when there is a large amount of rapid on-screen motion occurring.
The maximum frame rate depends on the bandwidth of the electronics and the transmission
system, and the number of horizontal scan lines in the image. A frame rate of 25 or 30 hertz is a
satisfactory compromise, while the process of interlacing two video fields of the picture per
frame is used to build the image. This process doubles the apparent number of video fields per
second and further reduces flicker and other defects in transmission.
Close up image of analog color screen
Other types of display screens
Plasma screens and LCD screens have been used in analog television sets. These types of display
screens use lower voltages than older CRT displays. Many dual-system television receivers,
equipped to receive both analog and digital transmissions, have an analog tuner and must use a
television antenna.
Receiving signals
The television system for each country will specify a number of television channels within the
UHF or VHF frequency ranges. A channel actually consists of two signals: the picture
information is transmitted using amplitude modulation on one frequency, and the sound is
transmitted with frequency modulation at a frequency at a fixed offset (typically 4.5 to 6 MHz)
from the picture signal.
The channel frequencies chosen represent a compromise between allowing enough bandwidth
for video (and hence satisfactory picture resolution), and allowing enough channels to be packed
into the available frequency band. In practice a technique called vestigial sideband is used to
reduce the channel spacing, which would be at least twice the video bandwidth if pure AM was
used.
Signal reception is invariably done via a superheterodyne receiver: the first stage is a tuner
which selects a television channel and frequency-shifts it to a fixed intermediate frequency (IF).
Signal amplification, from the microvolt range to fractions of a volt, is performed by the IF
stages.
Extracting the sound
At this point the IF signal consists of a video carrier wave at one frequency and the sound carrier
at a fixed offset. A demodulator recovers the video signal and sound as an FM signal at the offset
frequency (this is known as intercarrier sound).
The FM sound carrier is then demodulated, amplified, and used to drive a loudspeaker. Until the
advent of the NICAM and MTS systems, TV sound transmissions were invariably monophonic.
Structure of a video signal
The video carrier is demodulated to give a composite video signal; this contains luminance,
chrominance and synchronization signals;[5] this is identical to the video signal format used by
analog video devices such as VCRs or CCTV cameras. Note that the RF signal modulation is
inverted compared to the conventional AM: the minimum video signal level corresponds to
maximum carrier amplitude, and vice versa. The carrier is never shut off altogether; this is to
ensure that intercarrier sound demodulation can still occur.
Each line of the displayed image is transmitted using a signal as shown above. The same basic
format (with minor differences mainly related to timing and the encoding of color) is used for
PAL, NTSC and SECAM television systems. A monochrome signal is identical to a color one,
with the exception that the elements shown in color in the diagram (the color burst, and the
chrominance signal) are not present.
Portion of a PAL video signal. From left to right: end of a video scan line, front porch, horizontal sync
pulse, back porch with color burst, and beginning of next line
The front porch is a brief (about 1.5 microsecond) period inserted between the end of each
transmitted line of picture and the leading edge of the next line sync pulse. Its purpose was to
allow voltage levels to stabilise in older televisions, preventing interference between picture
lines. The front porch is the first component of the horizontal blanking interval which also
contains the horizontal sync pulse and the back porch.[6][7]
The back porch is the portion of each scan line between the end (rising edge) of the horizontal
sync pulse and the start of active video. It is used to restore the black level (300 mV) reference
in analog video. In signal processing terms, it compensates for the fall time and settling time
following the sync pulse.[6][7]
In color TV systems such as PAL and NTSC, this period also includes the colorburst signal. In
the SECAM system it contains the reference subcarrier for each consecutive color difference
signal in order to set the zero-color reference.
In some professional systems, particularly satellite links between locations, the audio is
embedded within the back porch of the video signal, to save the cost of renting a second channel.
Monochrome video signal extraction
The luminance component of a composite video signal varies between 0 V and approximately
0.7 V above the 'black' level. In the NTSC system, there is a blanking signal level used during
the front porch and back porch, and a black signal level 75 mV above it; in PAL and SECAM
these are identical.
In a monochrome receiver the luminance signal is amplified to drive the control grid in the
electron gun of the CRT. This changes the intensity of the electron beam and therefore the
brightness of the spot being scanned. Brightness and contrast controls determine the DC shift and
amplification, respectively.
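In other words, the two controls amount to a gain and an offset applied to the luminance before it drives the gun; a one-line sketch with illustrative values and clipping to the valid range:

import numpy as np

def apply_controls(luma, brightness=0.0, contrast=1.0):
    # contrast = amplification (gain), brightness = DC shift; clip to the 0..1 range
    return np.clip(contrast * np.asarray(luma) + brightness, 0.0, 1.0)

print(apply_controls([0.0, 0.25, 0.5, 0.75, 1.0], brightness=0.1, contrast=1.2))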
Color video signal extraction
Color bar generator test signal
A color signal conveys picture information for each of the red, green, and blue components of an
image (see the article on Color space for more information). However, these are not simply
transmitted as three separate signals, because:
- such a signal would not be compatible with monochrome receivers (an important consideration when
color broadcasting was first introduced)
- it would occupy three times the bandwidth of existing television, requiring a decrease in the number
of TV channels available
- typical problems with signal transmission (such as differing received signal levels between different
colors) would produce unpleasant side effects.
Instead, the RGB signals are converted into YUV form, where the Y signal represents the overall
brightness, and can be transmitted as the luminance signal. This ensures a monochrome receiver
will display a correct picture. The U and V signals are the difference between the Y signal and
the B and R signals respectively. The U signal then represents how "blue" the color is, and the V
signal how "red" it is. The advantage of this scheme is that the U and V signals are zero when the
picture has no color content. Since the human eye is more sensitive to errors in luminance than in
color, the U and V signals can be transmitted in a relatively lossy (specifically: bandwidth-limited)
way with acceptable results. The G signal is not transmitted in the YUV system; rather, it is
recovered electronically at the receiving end.
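A sketch of that conversion, using the commonly quoted luminance weights (0.299, 0.587, 0.114) and the usual U/V scalings; treat the exact coefficients as standard textbook values rather than figures taken from this text.

import numpy as np

# Y = 0.299 R + 0.587 G + 0.114 B;  U ~ 0.492 (B - Y);  V ~ 0.877 (R - Y)
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.147, -0.289,  0.436],
              [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(rgb):
    return M @ np.asarray(rgb, dtype=float)

print(rgb_to_yuv([1.0, 1.0, 1.0]))   # white: Y = 1, U = V = 0 (no color content)
print(rgb_to_yuv([0.0, 0.0, 1.0]))   # saturated blue: low Y, large positive U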
Color signals mixed with video signal
In the NTSC and PAL color systems, U and V are transmitted by adding a color subcarrier to
the composite video signal, and using quadrature amplitude modulation on it. For NTSC, the
subcarrier is usually at about 3.58 MHz, but for the PAL system it is at about 4.43 MHz. These
frequencies are within the luminance signal band, but their exact frequencies were chosen such
that they are midway between two harmonics of the horizontal line repetition rate, thus ensuring
that the majority of the power of the luminance signal does not overlap with the power of the
chrominance signal.
In the British PAL (D) system, the actual chrominance center frequency is 4.43361875 MHz.
Rather than being an exact multiple of the scan rate, it sits a quarter of the line frequency (plus a
25 Hz offset) away from a harmonic of it. This frequency was chosen to minimize the
chrominance beat interference pattern that would be visible in areas of high color saturation in
the transmitted picture.
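The commonly quoted relationships behind these figures can be checked with a little arithmetic (these are the standard definitions, not numbers taken from this text): the NTSC subcarrier is 455/2 times its line rate, and the PAL subcarrier is 1135/4 times 15 625 Hz plus a 25 Hz offset.

ntsc_line_rate = 4.5e6 / 286                     # ~15 734.27 Hz
ntsc_subcarrier = (455 / 2) * ntsc_line_rate     # half-line offset interleaves the spectra
print(f"NTSC subcarrier ~ {ntsc_subcarrier / 1e6:.6f} MHz")   # ~3.579545 MHz

pal_line_rate = 15625.0                          # 625 lines x 25 frames per second
pal_subcarrier = (1135 / 4) * pal_line_rate + 25 # quarter-line offset plus 25 Hz
print(f"PAL subcarrier  = {pal_subcarrier / 1e6:.8f} MHz")    # 4.43361875 MHz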
The two signals (U and V) modulate both the amplitude and phase of the color carrier, so to
demodulate them it is necessary to have a reference signal against which to compare them. For this
reason, a short burst of reference signal known as the color burst is transmitted during the back
porch (re-trace period) of each scan line. A reference oscillator in the receiver locks onto this
signal (see phase-locked loop) to achieve a phase reference, and uses its amplitude to set an AGC
system to achieve an amplitude reference.
The U and V signals are then demodulated by band-pass filtering to retrieve the color subcarrier,
mixing it with the in-phase and quadrature signals from the reference oscillator, and low-pass
filtering the results.
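An idealized numerical sketch of that demodulation, assuming a perfect reference oscillator and using a simple average over whole subcarrier cycles as a stand-in for the low-pass filter:

import numpy as np

fsc = 4.43e6                        # subcarrier frequency (roughly PAL), assumed
fs = 20 * fsc                       # sample rate, assumed
t = np.arange(0, 200 / fsc, 1 / fs) # 200 subcarrier cycles

U, V = 0.3, 0.4                     # color-difference values on this piece of a line
chroma = U * np.sin(2 * np.pi * fsc * t) + V * np.cos(2 * np.pi * fsc * t)

u_rec = 2 * np.mean(chroma * np.sin(2 * np.pi * fsc * t))   # mix with in-phase reference, then low-pass
v_rec = 2 * np.mean(chroma * np.cos(2 * np.pi * fsc * t))   # mix with quadrature reference, then low-pass
print(round(u_rec, 3), round(v_rec, 3))                     # ~0.3, ~0.4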
Test card showing "Hanover Bars" (color banding phase effect) in PAL S (simple) signal mode of
transmission.
NTSC uses this process unmodified. Unfortunately, this often results in poor color reproduction
due to phase errors in the received signal. The PAL D (delay) system corrects this by reversing
the phase of the signal on each successive line, and then averaging the results over pairs of lines.
This process is achieved by the use of a 1H (where H = horizontal scan frequency) duration
delay line. (A typical circuit used with this device converts the low frequency color signal to
ultrasonic sound and back again). Phase shift errors between successive lines are therefore
cancelled out and the wanted signal amplitude is increased when the two in-phase (coincident)
signals are re-combined.
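Representing the chrominance as the complex number U + jV, the cancellation can be demonstrated in a few lines: a differential phase error rotates both lines, and averaging the current line with the delayed line (with the V inversion undone) turns the hue error into a small, far less visible saturation loss. This is an idealized model with an assumed error value, not a circuit description.

import numpy as np

U, V = 0.3, 0.4
phase_error = np.deg2rad(20.0)                    # assumed differential phase error

line_n  = (U + 1j * V) * np.exp(1j * phase_error) # received chroma, normal line
line_n1 = (U - 1j * V) * np.exp(1j * phase_error) # next line: V component transmitted inverted

# Delay-line average: re-invert V on the delayed line (complex conjugate) and average.
recovered = 0.5 * (line_n + np.conj(line_n1))
print(np.round(recovered, 3), "vs ideal", U + 1j * V)   # hue preserved, amplitude x cos(error)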
In the SECAM television system, U and V are transmitted on alternate lines, using simple
frequency modulation of two different color subcarriers.
In analog color CRT displays, the brightness control signal (luminance) is fed to the cathode
connections of the electron guns, and the color difference signals (chrominance signals) are fed
to the control grid connections. This simple matrix mixing technique was superseded in later
solid-state signal processing designs.
Synchronization
Synchronizing pulses added to the video signal at the end of every scan line and video frame
ensure that the sweep oscillators in the receiver remain locked in step with the transmitted signal,
so that the image can be reconstructed on the receiver screen.[6] [7] [8]
A sync separator circuit detects the sync voltage levels and sorts the pulses into horizontal and
vertical sync.
Horizontal synchronization
The horizontal synchronization pulse (horizontal sync, or HSYNC) separates the scan lines. The
horizontal sync signal is a single short pulse which indicates the start of every line. The rest of
the scan line follows, with the signal ranging from 0.3 V (black) to 1 V (white), until the next
horizontal or vertical synchronization pulse.
The format of the horizontal sync pulse varies. In the 525-line NTSC system it is a 4.85 µs
pulse at 0 V; in the 625-line PAL system it is a 4.7 µs pulse at 0 V. This is
lower than the amplitude of any video signal (blacker than black) so it can be detected by the
level-sensitive "sync stripper" circuit of the receiver.
Vertical synchronization
Vertical synchronization (also called vertical sync or VSYNC) separates the video fields. In PAL and
NTSC, the vertical sync pulse occurs within the vertical blanking interval. The vertical sync
pulses are made by prolonging the length of HSYNC pulses through almost the entire length of
the scan line.
The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The
sync pulses occupy the whole line interval of a number of lines at the beginning and end of a
scan; no picture information is transmitted during vertical retrace. The pulse sequence is
designed to allow horizontal sync to continue during vertical retrace; it also indicates whether
each field represents even or odd lines in interlaced systems (depending on whether it begins at
the start of a horizontal line, or mid-way through).
The format of such a signal in 525-line NTSC is:
- pre-equalizing pulses (6 to start scanning odd lines, 5 to start scanning even lines)
- long-sync pulses (5 pulses)
- post-equalizing pulses (5 to start scanning odd lines, 4 to start scanning even lines)
Each pre- or post-equalizing pulse consists of half a scan line of black signal: 2 µs at 0 V,
followed by 30 µs at 0.3 V.
Each long sync pulse consists of an equalizing pulse with the timings inverted: 30 µs at 0 V,
followed by 2 µs at 0.3 V.
In video production and computer graphics, changes to the image are often kept in step with the
vertical synchronization pulse to avoid visible discontinuity of the image. Since the frame buffer
of a computer graphics display imitates the dynamics of a cathode-ray display, if it is updated
with a new image while the image is being transmitted to the display, the display shows a
mishmash of both frames, producing a page tearing artifact partway down the image.
Vertical synchronization eliminates this by timing frame buffer fills to coincide with the vertical
blanking interval, thus ensuring that only whole frames are seen on-screen. Software such as
computer games and computer-aided design (CAD) packages often offer vertical
synchronization as an option, because it delays the image update until the vertical blanking
interval. This produces a small penalty in latency, because the program has to wait until the
video controller has finished transmitting the image to the display before continuing. Triple
buffering reduces this latency significantly.
Two timing intervals are defined - the front porch between the end of displayed video and the
start of the sync pulse, and the back porch after the sync pulse and before displayed video. These
and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the
time that the electron beam in the CRT is returning to the start of the next display line.
Horizontal hold and vertical hold
The lack of precision timing components available in early television receivers meant that the
timebase circuits occasionally needed manual adjustment. The adjustment took the form of
horizontal hold and vertical hold controls, usually on the rear of the television set. Loss of
horizontal synchronization usually resulted in an unwatchable picture; loss of vertical
synchronization would produce an image rolling up or down the screen.
Transition to digital broadcasts
Main article: Digital television transition
Main article: Digital television
As of late 2009, ten countries had completed the process of turning off analog terrestrial
broadcasting. Many other countries had plans to do so or were in the process of a staged
conversion. The first country to make a wholesale switch to digital over-the-air (terrestrial
television) broadcasting was Luxembourg in 2006, followed later in 2006 by the Netherlands; in
2007 by Finland, Andorra, Sweden, Norway, and Switzerland; in 2008 by Belgium (Flanders)
and Germany; in 2009 by the United States (high-power stations), southern
Canada, the Isle of Man, Norway, and Denmark. In 2010, Belgium (Wallonia), Spain, Wales,
Latvia, Estonia, the Channel Islands, and Slovenia; in 2011 Israel, Austria, Monaco, Scotland,
Cyprus, Japan (excluding Miyagi, Iwate, and Fukushima Prefectures) and Malta completed the
transition.
In the United States, high-power over-the-air broadcasts are solely in the ATSC digital format
since June 12, 2009, the date that the Federal Communications Commission (FCC) set for the
end of all high-power analog TV transmissions. As a result, almost two million households could
no longer watch TV because they were not prepared for the transition. The switchover was
originally scheduled for February 17, 2009, until the U.S. Congress passed the DTV Delay
Act.[9] By special dispensation, some analog TV signals ceased on the original date.[10] While the
majority of the viewers of over-the-air broadcast television in the U.S. watch full-power stations
(which number about 1800), there are three other categories of TV stations in the U.S.: low-power
broadcasting stations, Class A stations, and TV translator stations. There is presently no
deadline for these stations, about 7100 in number, to convert to digital broadcasting.
Note that in broadcasting, whatever happens in the United States also happens simultaneously in
southern Canada and in northern Mexico, because those
areas are covered by TV stations in the U.S. Furthermore, the major cities of southern Canada
made their transitions to digital TV broadcasts simultaneously with the U.S.: Toronto, Montreal,
Vancouver, Ottawa, Winnipeg, Sault Ste. Marie, Quebec City, Charlottetown, Halifax, and so
forth.
In Japan, the switch to digital occurred on the 24th of July, 2011 (with the exception of
Fukushima, Iwate, and Miyagi prefectures, where conversion was delayed one year due to
complications from the 2011 Tōhoku earthquake and tsunami). In Canada, it is scheduled to
happen August 31, 2011. China is scheduled to switch in 2015. In the United Kingdom, the
digital switchover has different times for each part of the country. However, the entire U.K.
should be on digital TV by 2012.
Brazil switched to digital TV on December 2, 2007, in its major cities, and now it is estimated
that it will take about seven years for complete conversion over all of Brazil, partly because large
parts of the country are sparsely populated and lack electricity and TV service. Australia will
turn off analog TV in steps, network by network and region by region, between 2010 and 2013.[11]
In Malaysia, the Malaysian Communications & Multimedia Commission (MCMC) advertised
for tender bids to be submitted in the third quarter of 2009 for the 470 through 742 MHz UHF
allocation, to enable Malaysia's broadcast system to move into DTV. The new broadcast band
allocation would result in Malaysia's having to build an infrastructure for all broadcasters, using
a single digital terrestrial transmission/TV broadcast (DTTB) channel.
Note also that large portions of Malaysia are covered by TV broadcasts
from Singapore, Thailand, Brunei, and/or Indonesia (from Borneo).
Users may then encode and transmit their television programs on this channel's digital data
stream. The winner was to be announced at the end of 2009 or early 2010. A condition of the
award is that digital transmission must start as soon as possible, and analog switch-off was
proposed for 2015. The scheme may not go ahead, as the new head of government, Najib Tun
Razak, deferred the transition indefinitely in favor of his own 1Malaysia concept, which means
that analog television will continue for longer than originally planned.
Components of a television system
A typical analog television receiver is based around the block diagram shown below:
Sync Separator
Portion of a PAL video signal. From left to right: end of a video line, front porch, horizontal sync pulse,
back porch with color burst, and beginning of next line
Beginning of the frame, showing several scan lines; the terminal part of the vertical sync pulse is at the left
PAL video signal frames. Left to right: frame with scan lines (overlapping together, horizontal sync pulses
show as the doubled straight horizontal lines), vertical blanking interval with vertical sync (shown as a
brightness increase in the bottom part of the signal near the leftmost part of the vertical blanking
interval), entire frame, another VBI with VSYNC, beginning of third frame
Image synchronization is achieved by transmitting negative-going pulses; in a composite video
signal of 1 volt amplitude, these are approximately 0.3 V below the "black level". The horizontal
sync signal is a single short pulse which indicates the start of every line. Two timing intervals are
defined - the front porch between the end of displayed video and the start of the sync pulse, and
the back porch after the sync pulse and before displayed video. These and the sync pulse itself
are called the horizontal blanking (or retrace) interval and represent the time that the electron
beam in the CRT is returning to the start of the next display line. The vertical sync signal is a
series of much longer pulses, indicating the start of a new field. The sync pulses occupy the
whole line interval of a number of lines at the beginning and end of a scan; no picture
information is transmitted during vertical retrace. The pulse sequence is designed to allow
horizontal sync to continue during vertical retrace; it also indicates whether each field represents
even or odd lines in interlaced systems (depending on whether it begins at the start of a
horizontal line, or mid-way through). In the TV receiver, a sync separator circuit detects the sync
voltage levels and sorts the pulses into horizontal and vertical sync. Loss of horizontal
synchronization usually resulted in an unwatchable picture; loss of vertical synchronization
would produce an image rolling up or down the screen.
Timebase circuits
Further information: Oscilloscope
In an analog receiver with a CRT display, sync pulses are fed to horizontal and vertical timebase circuits. These generate modified sawtooth and parabola current waveforms to scan the electron beam in a linear way. The waveform shapes are needed to compensate for the varying distance between the electron-beam source and different points on the screen surface. Each beam-direction switching circuit is reset by the appropriate sync timing pulse. The waveforms are fed to the horizontal and vertical scan coils wrapped around the CRT; the coils produce a magnetic field proportional to the changing current, and this deflects the electron beam across the screen. In the 1950s, the television receiver timebase supply was derived directly from the mains supply, using a simple circuit consisting of a series voltage-dropper resistance and a rectifier valve (tube) or semiconductor diode. This avoided the cost of a large high-voltage mains (50 or 60 Hz) transformer. This type of circuit was used with thermionic valve (tube) technology; it was inefficient and produced a lot of heat, which led to premature failures in the circuitry. In the 1960s, semiconductor technology was introduced into timebase circuits. During the late 1960s in the U.K., power generation synchronous with the scan-line rate was introduced into solid-state receiver designs.[12] These had very complex circuits in which faults were difficult to trace, but made very efficient use of power. In the early 1970s, AC mains (50 Hz) and line-timebase (15,625 Hz) thyristor-based switching circuits were introduced. In the U.K., use of the simple (50 Hz) types of power circuit was discontinued. The design changes were driven by electricity-supply contamination problems arising from EMI[13] and by supply-loading issues due to energy being taken from only the positive half cycle of the mains supply waveform.[14]
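The shape of the deflection waveform can be illustrated with a short sketch (again, not from the original text): a linear sawtooth for the horizontal trace with a simple sinusoidal "S" correction added, standing in for the modified sawtooth described above. The line rate matches the 15,625 Hz figure quoted above, while the correction strength is an assumed, made-up parameter that a real design would match to the tube geometry.

# Illustrative deflection-waveform sketch: linear sawtooth plus a simple "S" correction.
import numpy as np

LINE_FREQ_HZ = 15_625        # 625-line systems scan 15,625 lines per second
SAMPLES_PER_LINE = 1000
S_CORRECTION = 0.15          # assumed correction strength; real sets tune this to the tube geometry

def horizontal_scan_current():
    """One line period of normalized deflection current (trace only; flyback is handled separately)."""
    t = np.linspace(0.0, 1.0, SAMPLES_PER_LINE, endpoint=False)
    ramp = 2.0 * t - 1.0                          # ideal linear sawtooth, -1 .. +1
    # S-correction: the current changes fastest mid-screen and slows near the edges,
    # where a given change in deflection angle moves the spot farther on a flat screen.
    corrected = ramp + S_CORRECTION * np.sin(np.pi * ramp)
    return corrected

if __name__ == "__main__":
    current = horizontal_scan_current()
    print(f"line period = {1e6 / LINE_FREQ_HZ:.1f} µs, samples per line = {current.size}")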
CRT flyback power supply design and operation principles
Further information: Extra high tension
Most of the receiver's circuitry (at least in transistor- or IC-based designs) operates from a
comparatively low-voltage DC power supply. However, the anode connection for a cathode-ray
tube requires a very high voltage (typically 10-30 kV) for correct operation.
This voltage is not produced directly by the main power supply circuitry; instead, the receiver makes use of the horizontal scanning circuitry. Direct current (DC) is switched through the line output transformer, and alternating current (AC) is induced into the scan coils. At the end of each horizontal scan line, the magnetic field that the current has built up in both the transformer and the scan coils is a store of electromagnetic energy, and this energy can be captured as the field collapses. The short-duration reverse current (lasting about 10% of the line scan time) from both the line output transformer and the horizontal scan coil is returned to the primary winding of the flyback transformer through a rectifier that blocks the negative reverse EMF. A small-value capacitor is connected across the scan switching device; it tunes the circuit inductances to resonate at a much higher frequency, which lengthens the flyback time from the extremely rapid decay that would otherwise result during this short period. One of the secondary windings on the flyback transformer then feeds this brief high-voltage pulse to a Cockcroft-Walton voltage multiplier, which produces the required EHT supply. A flyback converter is a power supply circuit operating on similar principles.
Typical modern design incorporates the flyback transformer and rectifier circuitry into a single
unit with a captive output lead, (known as a diode split line output transformer),[15] so that all
high-voltage parts are enclosed. Earlier designs used a separate line output transformer and a
well insulated high voltage multiplier unit. The high frequency (15 kHz or so) of the horizontal
scanning allows reasonably small components to be used.
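To put numbers on the resonant retrace described above, the sketch below (not from the original material) treats the scan inductance and the tuning capacitor as a simple LC circuit whose half-period sets the flyback time; the component values are assumed purely for illustration.

# Illustrative flyback-timing sketch: the retrace interval is set by an LC half-period.
import math

LINE_PERIOD_US = 64.0        # 15,625 Hz line rate -> 64 µs per line
L_TOTAL_H = 1.2e-3           # assumed combined inductance of scan coils and transformer, in henries
C_TUNE_F = 3.3e-9            # assumed tuning capacitance across the scan switch, in farads

def flyback_time_us(l_henries: float, c_farads: float) -> float:
    """Half a resonant period of the tuned circuit: roughly the time the beam takes to fly back."""
    return math.pi * math.sqrt(l_henries * c_farads) * 1e6

if __name__ == "__main__":
    t_fb = flyback_time_us(L_TOTAL_H, C_TUNE_F)
    print(f"flyback time ≈ {t_fb:.1f} µs "
          f"({100 * t_fb / LINE_PERIOD_US:.0f}% of the {LINE_PERIOD_US:.0f} µs line period)")

With these assumed values the retrace takes roughly a tenth of the 64 µs line period, in line with the "about 10% of the line scan time" figure quoted above.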
UNIT V
ADVANCED TECHNIQUES
CCD camera – HDTV – Digital TV – Video Disc – Cable TV – Video Cassette Recorder.
SECTION A
1)Explain HDTV?
HDTV blur is a common term used to describe a number of different artifacts on modern
consumer high-definition television sets.
The following factors are generally the primary or secondary causes of HDTV blur; in some cases more than one of these factors may be in play at the studio or receiver end of the transmission chain.
- Pixel response time on LCD displays (blur in the color response of the active pixel)
- Lower camera shutter speeds common in Hollywood production films (blur in the content of the film)
- Blur from eye tracking of fast-moving objects on sample-and-hold LCD, plasma, or microdisplays[1]
- Resolution resampling (blur due to resizing the image to fit the native resolution of the HDTV)
- Blur due to 3:2 pulldown and/or motion-speed irregularities in frame-rate conversions from film to video
- High and/or lossy compression present in almost all digital video streams
2)Write a note on DIGITAL TV?
Digital television (DTV) is the transmission of audio and video by digital signals, in contrast to
the analog signals used by analog TV. Many countries are replacing broadcast analog television
with digital television to allow other uses of the television radio spectrum.
3)Explain digital video recorder (DVR)?
A digital video recorder (DVR), sometimes referred to by the merchandising term personal
video recorder (PVR), is a consumer electronics device or application software that records
video in a digital format to a disk drive, USB flash drive, SD memory card or other local or
networked mass storage device. The term includes set-top boxes (STB) with direct-to-disk recording facilities, portable media players (PMP) with recording, portable media recorders (PMR) such as camcorders that record onto Secure Digital memory cards, and software for personal computers which
enables video capture and playback to and from a hard disk. A television set with built-in digital
video-recording facilities was introduced by LG in 2007,[1] followed by other manufacturers.
DVR adoption has rapidly accelerated in recent years: in January 2006, ACNielsen recorded
1.2% of US households having a DVR but by February 2011, this number had grown to 42.2%
of viewers in the United States.
SECTION - B
1)Explain CCTV?
Closed-circuit television (CCTV) is the use of video cameras to transmit a signal to a specific
place, on a limited set of monitors.
It differs from broadcast television in that the signal is not openly transmitted, though it may
employ point to point (P2P), point to multipoint, or mesh wireless links. Though almost all video
cameras fit this definition, the term is most often applied to those used for surveillance in areas
that may need monitoring such as banks, casinos, airports, military installations, and convenience
stores. Videotelephony is seldom called "CCTV" but the use of video in distance education,
where it is an important tool, is often so called.[1][2]
In industrial plants, CCTV equipment may be used to observe parts of a process from a central
control room, for example when the environment is not suitable for humans. CCTV systems may
operate continuously or only as required to monitor a particular event. A more advanced form of
CCTV, utilizing Digital Video Recorders (DVRs), provides recording for possibly many years,
with a variety of quality and performance options and extra features (such as motion-detection
and email alerts). More recently, decentralized IP-based CCTV cameras, some equipped with
megapixel sensors, support recording directly to network-attached storage devices, or internal
flash for completely stand-alone operation.
Surveillance of the public using CCTV is particularly common in the United Kingdom, where
there are reportedly more cameras per person than in any other country in the world.[3] There and
elsewhere, its increasing use has triggered a debate about security versus privacy.
2)Define Cable television?
Cable television is a system of providing television programs to consumers via radio frequency
(RF) signals transmitted to televisions through coaxial cables or digital light pulses through fixed
optical fibers located on the subscriber's property, much like the over-the-air method used in
traditional broadcast television (via radio waves) in which a television antenna is required. FM
radio programming, high-speed Internet, telephony, and similar non-television services may also
be provided. The major difference is the change of radio frequency signals used and optical
connections to the subscriber property.
Most television sets are cable-ready and have a cable television tuner capable of receiving cable
TV already built-in that is delivered as an analog signal. To obtain premium television most
televisions require a set top box called a cable converter that processes digital signals. The
majority of basic cable channels can be received without a converter or digital television adapter
that the cable companies usually charge for, by connecting the copper wire with the F connector
to the Ant In that is located on the back of the television set.
The abbreviation CATV is often used to mean "Cable TV". It originally stood for Community
Antenna Television, from cable television's origins in 1948: in areas where Over-the-air
reception was limited by distance from transmitters or mountainous terrain, large "community
antennas" were constructed, and cable was run from them to individual homes. The origins of
cable broadcasting are even older as radio programming was distributed by cable in some
European cities as far back as 1924.
It is most commonplace in North America, Europe, Australia and East Asia, though it is present
in many other countries, mainly in South America and the Middle East. Cable TV has had little
success in Africa, as it is not cost-effective to lay cables in sparsely populated areas. So-called
"wireless cable" or microwave-based systems are used instead.
SECTION - C
1)Briefly explain HDTV?
HDTV blur is a common term used to describe a number of different artifacts on modern
consumer high-definition television sets.
The following factors are generally the primary or secondary causes of HDTV blur; in some cases more than one of these factors may be in play at the studio or receiver end of the transmission chain.
- Pixel response time on LCD displays (blur in the color response of the active pixel)
- Lower camera shutter speeds common in Hollywood production films (blur in the content of the film)
- Blur from eye tracking of fast-moving objects on sample-and-hold LCD, plasma, or microdisplays[1]
- Resolution resampling (blur due to resizing the image to fit the native resolution of the HDTV)
- Blur due to 3:2 pulldown and/or motion-speed irregularities in frame-rate conversions from film to video
- High and/or lossy compression present in almost all digital video streams
Causes
It is common for observers to confuse or misunderstand the source of blurring on HDTV sets.
There are many possible causes, and several of them may be present simultaneously.
Pixel response times need to be below 16.67 milliseconds in order to fully represent the
bandwidth of color changes necessary for 60 Hz video. However, even when this response time
is achieved or surpassed, motion blur can still occur because of the least understood blur effect:
eye tracking.
LCDs often have a greater motion blur effect because their pixels remain lit, unlike CRT
phosphors that merely flash briefly. Reducing the time an LCD pixel is lit reduces motion blur
due to eye tracking by decreasing the time the backlit pixels are on.[2] However, an instant strobe is required to completely eliminate the retinal blurring.[3][4][5]
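The eye-tracking effect can be quantified with a back-of-the-envelope estimate (not from the original text): on a sample-and-hold display the perceived smear is roughly the object's on-screen speed multiplied by the time each frame stays lit. The object speed and duty-cycle figures in the sketch below are assumptions chosen only to show the trend.

# Illustrative eye-tracking blur estimate for a sample-and-hold display.
# Perceived smear (in pixels) is roughly object speed multiplied by the time each frame stays lit.

def blur_pixels(speed_px_per_s: float, refresh_hz: float, duty_cycle: float = 1.0) -> float:
    """duty_cycle = 1.0 models full sample-and-hold; a strobed backlight lowers it."""
    hold_time_s = duty_cycle / refresh_hz
    return speed_px_per_s * hold_time_s

if __name__ == "__main__":
    speed = 960.0                                   # assumed: object crossing half a 1920-pixel screen per second
    for refresh, duty in [(60, 1.0), (120, 1.0), (60, 0.25)]:
        print(f"{refresh:>3} Hz, duty {duty:.2f}: "
              f"≈ {blur_pixels(speed, refresh, duty):.1f} px of smear")

Doubling the refresh rate or strobing the backlight (lowering the duty cycle) both shorten the hold time, which is exactly what the fixes described below aim to do.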
Fixes
Strobing backlight
- Philips created Aptura, also known as ClearLCD, to strobe the backlight in order to reduce the sample time and thus the retinal blurring due to sample-and-hold.[6][7]
- Samsung developed "LED Motion Plus" strobed backlighting, which is available on the "Samsung 81 Series" LCD screens as of August 2007.[8]
- BenQ developed SPD (Simulated Pulse Drive), also more commonly known as "black frame insertion", and claims that its images are as stable and clear as a CRT's.[9][10] This is conceptually similar to a strobing backlight.
100 Hz +
Main article: Motion interpolation
Some displays that run at 100 Hz or more add additional technology to address blurring issues.
Motion interpolation can cut the amount of blur while adding to the latency by inserting extra
synthesized in-between frames. Some LCD TVs supplement the standard 50/60 Hz signal by
interpolating an extra frame between every pair of frames in the signal so the display runs at
100 Hz or 120 Hz depending on which country you live in. The effect of this technology is most
noticeable when watching material that was originally shot on 35mm film, in which case the
typical film judder can be reduced, at the cost of introducing small visual artifacts. Film that is
viewed with this kind of processing can have a smoother look, appearing more like it was shot on
video, in contrast to the typical look of film. [11]
Motion interpolation technology generally may be added to TVs in PAL/SECAM countries if the
TV refreshes at 100 Hz and in NTSC countries if the TV refreshes at 120 Hz.[12] It's notable that
this solution is adequate for movies (which must have blur to begin with to solve double imaging
problems with higher shutter speeds on film) but due to gamers' sensitivity to lag even in the
200ms range, it is often better to turn off all video enhancement effects for video games.[13]
One possible advantage of a 100 Hz + display is superior conversion of the standard 24 frame/s
film speed. Usually movies and other film sources in NTSC are converted for home viewing
using what is called 3:2 pulldown which uses 4 frames from the original to create 5 (interlaced)
frames in the output. As a result 3:2 pulldown shows odd frames for 50 milliseconds and even
frames for 33 milliseconds. At 120 Hz 5:5 pulldown from 24 frame/s video is possible[14]
meaning all frames are on screen for the same 42 milliseconds. This eliminates the jerky effect
associated with 3:2 pulldown called telecine judder. However, to use 5:5 pulldown instead of the
normal 3:2 pulldown requires either support for 24 frame/s output like 1080p/24 from the
DVD/HD DVD/Blu-ray Disc player or the use of reverse telecine to remove the standard 3:2
pulldown. Some TVs (particularly plasma models) do 3:3 pulldown at 72 Hz or 4:4 at 96 Hz.[15]
(for specific models, see list of displays that support pulldown at multiples of the original frame
rate.) PAL countries speed the 24 frame/s film speed by 4% to obtain 25 frame/s, therefore
movies in the PAL format are completely free of Telecine judder effects.
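The pulldown arithmetic above can be checked with a short sketch (not part of the original material); the cadences and display rates are the ones quoted in the text.

# Illustrative pulldown arithmetic for 24 frame/s film on different display rates.

def frame_durations_ms(pattern, display_hz):
    """Each film frame is held for n display periods; return the on-screen time of each frame in ms."""
    period_ms = 1000.0 / display_hz
    return [n * period_ms for n in pattern]

if __name__ == "__main__":
    # 3:2 pulldown on a 60 Hz (field-rate) display: alternate film frames last 3 and 2 periods.
    print("3:2 @ 60 Hz    :", [f"{d:.1f} ms" for d in frame_durations_ms([3, 2], 60)])
    # 5:5 pulldown on a 120 Hz display: every film frame lasts the same 5 periods.
    print("5:5 @ 120 Hz   :", [f"{d:.1f} ms" for d in frame_durations_ms([5, 5], 120)])
    # PAL speed-up: the film simply runs 4% fast at 25 frame/s, one frame per two 50 Hz periods.
    print("2:2 @ 50 Hz PAL:", [f"{d:.1f} ms" for d in frame_durations_ms([2, 2], 50)])

The output reproduces the 50 ms / 33 ms alternation of 3:2 pulldown, the uniform ~42 ms frames of 5:5 pulldown at 120 Hz, and the uniform 40 ms frames of the sped-up PAL presentation.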
Recently, so-called 240 Hz sets have become available. There are two classes of sets that claim 240 Hz. In the better class, Samsung and Sony both create three additional frames of data to supplement the original 60 Hz signal. Other manufacturers who claim 240 Hz to date are merely applying an image strobe to a more traditional 120 Hz approach and calling it 240 Hz.
Both Samsung and Sony allow for strobing the backlight, but do not market the product with an
inflated frequency count. The Sony and Samsung 240 Hz sets also provide for viewing content in
3D, which benefits from the same base technologies of strobing backlights and fast LCD
response times.
Manufacturer Terminology:
- JVC calls its 100 Hz + technology "Clear Motion Drive" and "Clear Motion Drive II 100/120 Hz".[16]
- LG calls its 100 Hz + technology "TruMotion". In the U.S., 120 Hz is called "Real Cinema 24".
- Mitsubishi calls its 100 Hz + technology "Smooth120Hz".[17]
- Samsung calls its 100 Hz + technology AMP, "Auto Motion Plus".[18]
- Sony calls its 100 Hz + technology "Motionflow".[19]
- Toshiba calls its 100 Hz + technology "Clear Frame".[20]
- Insignia (the Best Buy/Future Shop house brand) calls its 120 Hz + technology DCM Plus, for Digital Clear Motion.
Laser TV
Laser TV has the potential to eliminate double imaging and motion artifacts by utilizing a
scanning architecture similar to the way that a CRT works.[21] Laser TV is generally not yet
available from many manufacturers. Claims have been made on television broadcasts such as
KRON 4 News' Coverage of Laser TV from October 2006,[22] but no consumer-grade laser
television sets have made any significant improvements in reducing any form of motion artifacts
since that time. One recent development in laser display technology has been the phosphor-excited laser, as demonstrated by Prysm's newest displays. These displays currently scan at 240 Hz, but are currently limited to a 60 Hz input. This has the effect of presenting four distinct images when eye-tracking a fast-moving object from a 60 Hz input source.
2)Briefly explain the functions of digital television?
Formats and bandwidth
Digital television supports many different picture formats, defined by the broadcast television system as a combination of frame size and aspect ratio (width-to-height ratio).
With digital terrestrial television (DTT) broadcasting, the range of formats can be broadly divided into two categories: high-definition television (HDTV) for the transmission of high-definition video, and standard-definition television (SDTV). These terms by themselves are not very precise, and many subtle intermediate cases exist.
Two of the HDTV formats that can be transmitted over DTV are 1280 × 720 pixels in progressive scan mode (abbreviated 720p) and 1920 × 1080 pixels in interlaced video mode (1080i). Each of these uses a 16:9 aspect ratio. (Some televisions are capable of receiving an HD resolution of 1920 × 1080 at a 60 Hz progressive scan frame rate, known as 1080p.) HDTV cannot be transmitted over current analog television channels because of channel-capacity issues.
Standard definition TV (SDTV), by comparison, may use one of several different formats taking
the form of various aspect ratios depending on the technology used in the country of broadcast.
For 4:3 aspect-ratio broadcasts, the 640 × 480 format is used in NTSC countries, while
720 × 576 is used in PAL countries. For 16:9 broadcasts, the 704 × 480 format is used in NTSC
countries, while 720 × 576 is used in PAL countries. However, broadcasters may choose to
reduce these resolutions to save bandwidth (e.g., many DVB-T channels in the United Kingdom
use a horizontal resolution of 544 or 704 pixels per line).[1]
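To see why these digital formats need compression and careful bandwidth planning, the sketch below (not from the original text) estimates the raw, uncompressed data rate of a few of the resolutions mentioned above; the 8-bit 4:2:0 assumption and the frame rates are illustrative.

# Illustrative raw (uncompressed) data-rate estimates for the formats mentioned above.
# Assumes 8 bits per sample and 4:2:0 chroma subsampling (12 bits per pixel on average).

BITS_PER_PIXEL = 12   # assumed: 8-bit 4:2:0

def raw_mbit_per_s(width: int, height: int, frames_per_s: float) -> float:
    return width * height * BITS_PER_PIXEL * frames_per_s / 1e6

if __name__ == "__main__":
    formats = [
        ("480i SDTV (NTSC)", 640, 480, 30),
        ("576i SDTV (PAL) ", 720, 576, 25),
        ("720p HDTV       ", 1280, 720, 60),
        ("1080i HDTV      ", 1920, 1080, 30),
    ]
    for name, w, h, fps in formats:
        print(f"{name}: {raw_mbit_per_s(w, h, fps):7.1f} Mbit/s uncompressed")

Even standard definition works out to over 100 Mbit/s uncompressed, which is why codecs such as MPEG-2 are needed to fit the broadcast channel's bit budget.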
Each commercial broadcasting terrestrial television DTV channel in North America is permitted
to be broadcast at a bit rate up to 19 megabits per second. However, the broadcaster does not
need to use this entire bandwidth for just one broadcast channel. Instead the broadcast can use
the channel to include PSIP and can also subdivide across several video subchannels (aka feeds)
of varying quality and compression rates, including non-video datacasting services that allow
one-way high-bandwidth streaming of data to computers like National Datacast.
A broadcaster may opt to use a standard-definition (SDTV) digital signal instead of an HDTV
signal, because current convention allows the bandwidth of a DTV channel (or "multiplex") to be
subdivided into multiple digital subchannels, (similar to what most FM radio stations offer with
HD Radio), providing multiple feeds of entirely different television programming on the same
channel. This ability to provide either a single HDTV feed or multiple lower-resolution feeds is
often referred to as distributing one's "bit budget" or multicasting. This can sometimes be
arranged automatically, using a statistical multiplexer (or "stat-mux"). With some
implementations, image resolution may be less directly limited by bandwidth; for example in
DVB-T, broadcasters can choose from several different modulation schemes, giving them the
option to reduce the transmission bitrate and make reception easier for more distant or mobile
viewers.
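The "bit budget" idea can be made concrete with a small sketch (not from the original material); the 19.39 Mbit/s payload figure and the individual subchannel allocations are assumptions used only to show how a multiplex might be divided.

# Illustrative "bit budget" split for one terrestrial DTV multiplex (about 19 Mbit/s in North America).

CHANNEL_BUDGET_MBPS = 19.39   # commonly quoted ATSC payload; treated here as an assumption

def remaining_budget(allocations_mbps):
    """Return what is left of the multiplex after the listed subchannel allocations."""
    used = sum(rate for _, rate in allocations_mbps)
    return CHANNEL_BUDGET_MBPS - used

if __name__ == "__main__":
    plan = [
        ("main HD feed", 12.0),     # assumed HD allocation
        ("SD subchannel 1", 3.0),
        ("SD subchannel 2", 3.0),
        ("PSIP + datacast", 1.0),
    ]
    for name, rate in plan:
        print(f"{name:<16} {rate:5.2f} Mbit/s")
    print(f"{'unused':<16} {remaining_budget(plan):5.2f} Mbit/s")

A statistical multiplexer does essentially this continuously, shifting bits between subchannels as the complexity of their content changes.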
Reception
There are a number of different ways to receive digital television. One of the oldest means of receiving DTV (and TV in general) is using a television antenna (known as an aerial in some countries). This method is known as digital terrestrial television (DTT). With DTT, viewers are limited to whatever channels the antenna picks up, and signal quality will also vary.
Other ways have been devised to receive digital television. Among the most familiar to people
are digital cable and digital satellite. In some countries where transmissions of TV signals are
normally achieved by microwaves, digital MMDS is used. Other standards, such as Digital
multimedia broadcasting (DMB) and DVB-H, have been devised to allow handheld devices such
as mobile phones to receive TV signals. Another way is IPTV, that is, receiving TV via Internet Protocol, relying on a Digital Subscriber Line (DSL) or optical cable line. Finally, an alternative way is to receive digital TV signals via the open Internet. For example, there is P2P (peer-to-peer) Internet television software that can be used to watch TV on a computer.
Some signals carry encryption and specify use conditions (such as "may not be recorded" or
"may not be viewed on displays larger than 1 m in diagonal measure") backed up with the force
of law under the WIPO Copyright Treaty and national legislation implementing it, such as the
U.S. Digital Millennium Copyright Act. Access to encrypted channels can be controlled by a removable smart card, for example via the Common Interface (DVB-CI) standard in Europe and via the Point Of Deployment (POD) module, also known as CableCARD, in the US.
Protection parameters for terrestrial DTV broadcasting
Digital television signals must not interfere with each other, and they must also coexist with
analog television until it is phased out. The following table gives allowable signal-to-noise and
signal-to-interference ratios for various interference scenarios. This table is a crucial regulatory
tool for controlling the placement and power levels of stations. Digital TV is more tolerant of
interference than analog TV, and this is the reason a smaller range of channels can carry an all-digital set of television stations.[citation needed]
System Parameters (protection ratios) | Canada [13] | USA [5] | EBU [9, 12] ITU-mode M3 | Japan & Brazil [36, 37][2]
C/N for AWGN Channel | +19.5 dB (16.5 dB[3]) | +15.19 dB | +19.3 dB | +19.2 dB
Co-Channel DTV into Analog TV | +33.8 dB | +34.44 dB | +34 ~ 37 dB | +38 dB
Co-Channel Analog TV into DTV | +7.2 dB | +1.81 dB | +4 dB | +4 dB
Co-Channel DTV into DTV | +19.5 dB (16.5 dB[3]) | +15.27 dB | +19 dB | +19 dB
Lower Adjacent Channel DTV into Analog TV | −16 dB | −17.43 dB | −5 ~ −11 dB[4] | −6 dB
Upper Adjacent Channel DTV into Analog TV | −12 dB | −11.95 dB | −1 ~ −10 dB[4] | −5 dB
Lower Adjacent Channel Analog TV into DTV | −48 dB | −47.33 dB | −34 ~ −37 dB[4] | −35 dB
Upper Adjacent Channel Analog TV into DTV | −49 dB | −48.71 dB | −38 ~ −36 dB[4] | −37 dB
Lower Adjacent Channel DTV into DTV | −27 dB | −28 dB | −30 dB | −28 dB
Upper Adjacent Channel DTV into DTV | −27 dB | −26 dB | −30 dB | −29 dB
Interaction
Interaction happens between the TV watcher and the DTV system. It can be understood in
different ways, depending on which part of the DTV system is concerned. It can also be an
interaction with the STB only (to tune to another TV channel or to browse the EPG).
Modern DTV systems are able to provide interaction between the end-user and the broadcaster
through the use of a return path. With the exceptions of coaxial and fiber optic cable, which can
be bidirectional, a dialup modem, Internet connection, or other method is typically used for the
return path with unidirectional networks such as satellite or antenna broadcast.
In addition to not needing a separate return path, cable also has the advantage of a
communication channel localized to a neighborhood rather than a city (terrestrial) or an even
larger area (satellite). This provides enough customizable bandwidth to allow true video on
demand.
1-segment broadcasting
1seg (1-segment) is a special form of ISDB. Each channel is further divided into 13 segments. Twelve of the segments are allocated to HDTV, and the remaining segment, the 13th, is used for narrowband receivers such as mobile television or cell phones.
Comparison of analog and digital TV
DTV has several advantages over analog TV, the most significant being that digital channels
take up less bandwidth, and the bandwidth needs are continuously variable, at a corresponding
reduction in image quality depending on the level of compression as well as the resolution of the
transmitted image. This means that digital broadcasters can provide more digital channels in the
same space, provide high-definition television service, or provide other non-television services
such as multimedia or interactivity. DTV also permits special services such as multiplexing
(more than one program on the same channel), electronic program guides and additional
languages (spoken or subtitled). The sale of non-television services may provide an additional
revenue source.
Digital signals react differently to interference than analog signals. For example, common
problems with analog television include ghosting of images, noise from weak signals, and many
other potential problems which degrade the quality of the image and sound, although the
program material may still be watchable. With digital television, the audio and video must be
synchronized digitally, so reception of the digital signal must be very nearly complete;
otherwise, neither audio nor video will be usable. Short of this complete failure, "blocky" video
is seen when the digital signal experiences interference.
Effect on existing analog technology
Television sets with only analog tuners cannot decode digital transmissions. When analog
broadcasting over the air ceases, users of sets with analog-only tuners may use other sources of
programming (e.g., cable, recorders) or may purchase set-top converter boxes to tune in the digital
signals. In the United States, a government-sponsored coupon was available to offset the cost of
an external converter box. Analog switch-off (of full-power stations) took place on June 12, 2009
in the United States[5] and July 24, 2011 in Japan[6] and is scheduled for August 31, 2011 in
Canada,[7] by 2012 in the United Kingdom[8] and Ireland,[9] by 2013 in Australia[10], by 2015 in
the Philippines and Uruguay and by 2017 in Costa Rica.
Environmental issues
The adoption of a broadcast standard incompatible with existing analog receivers has created the
problem of large numbers of analog receivers being discarded during digital television transition.
An estimated 99 million unused analog TV receivers are currently in storage in the US alone[11]
and, while some obsolete receivers are being retrofitted with converters, many more are simply
dumped in landfills[12] where they represent a source of toxic metals such as lead as well as lesser
amounts of materials such as barium, cadmium and chromium.[13]
While the glass in cathode ray tubes contains an average of 3.62 kilograms (8.0 lb) of
lead[14][unreliable source?] (amount varies from 1.08 lb to 11.28 lb, depending on screen size but the
lead is "stable and immobile"[15]) which can have long-term negative effects on the environment
if dumped as landfill,[16] the glass envelope can be recycled at suitably equipped facilities.[17]
Other portions of the receiver may be subject to disposal as hazardous material.
Local restrictions on disposal of these materials vary widely; in some cases second-hand stores
have refused to accept working color television receivers for resale due to the increasing costs of
disposing of unsold TVs. Those thrift stores which are still accepting donated TVs have reported
significant increases in good-condition working used television receivers abandoned by viewers
who often expect them not to work after digital transition.[18]
In Michigan, one recycler has estimated that as many as one household in four will dispose of or
recycle a TV set in the next year.[19] The digital television transition, migration to high-definition
television receivers and the replacement of CRTs with flatscreens are all factors in the increasing
number of discarded analog CRT-based television receivers.
Technical limitations
Compression artifacts and allocated bandwidth
DTV images have some picture defects that are not present on analog television or motion picture
cinema, because of present-day limitations of bandwidth and compression algorithms such as
MPEG-2. This defect is sometimes referred to as "mosquito noise".[20]
Because of the way the human visual system works, defects in an image that are localized to
particular features of the image or that come and go are more perceptible than defects that are
uniform and constant. However, the DTV system is designed to take advantage of other
limitations of the human visual system to help mask these flaws, e.g. by allowing more
compression artifacts during fast motion where the eye cannot track and resolve them as easily
and, conversely, minimizing artifacts in still backgrounds that may be closely examined in a
scene (since time allows).
Effects of poor reception
Changes in signal reception from factors such as degrading antenna connections or changing
weather conditions may gradually reduce the quality of analog TV. The nature of digital TV
results in perfectly decodable video initially, until the receiving equipment starts picking up interference that overpowers the desired signal or the signal becomes too weak to decode. Some
equipment will show a garbled picture with significant damage, while other devices may go
directly from perfectly decodable video to no video at all or lock up. This phenomenon is known
as the digital cliff effect.
For remote locations, distant channels that, as analog signals, were previously usable in a snowy
and degraded state may, as digital signals, be perfectly decodable or may become completely
unavailable. In areas where transmitting antennas are located on mountains, viewers who are too
close to the transmitter may find reception difficult or impossible because the strongest part of
the broadcast signal passes above them. The use of higher frequencies will add to these
problems, especially in cases where a clear line-of-sight from the receiving antenna to the
transmitter is not available.
ALL THE BEST