Recombinant DNA - Degree College Bemina

Recombinant DNA
Recombinant DNA (rDNA) is a form of artificial DNA that is created by combining two or
more sequences that would not normally occur together through the process of gene
splicing.[1] In terms of genetic modification, it is created through the introduction of relevant
DNA into an existing organismal DNA, such as the plasmids of bacteria, to code for or alter
different traits for a specific purpose, such as antibiotic resistance.[2] It differs from genetic
recombination in that it does not occur through natural processes within the cell, but is
engineered.[2] A recombinant protein is a protein that is derived from recombinant DNA.[3]
The recombinant DNA technique was first proposed by Peter Lobban, a graduate student,
with A. Dale Kaiser at the Stanford University Department of Biochemistry. The technique
was then realized by Lobban and Kaiser; Jackson, Symons and Berg; and Stanley Norman
Cohen, Chang, Herbert Boyer and Helling, in 1972–74. They published their findings in
papers including the 1972 paper "Biochemical Method for Inserting New Genetic Information
into DNA of Simian Virus 40: Circular SV40 DNA Molecules Containing Lambda Phage
Genes and the Galactose Operon of Escherichia coli", the 1973 paper "Enzymatic end-to-end
joining of DNA molecules" and the 1973 paper "Construction of Biologically Functional
Bacterial Plasmids in vitro",[4] all of which described techniques to isolate and amplify genes
or DNA segments and insert them into another cell with precision, creating a transgenic
bacterium.
Recombinant DNA technology was made possible by the discovery, isolation and application
of restriction endonucleases by Werner Arber, Daniel Nathans, and Hamilton Smith, for
which they received the 1978 Nobel Prize in Physiology or Medicine. Cohen and Boyer applied in
1974 for a patent on the process for producing biologically functional molecular chimeras which
could not exist in nature; the patent was granted in 1980.
Applications and methods
Cloning and relation to plasmids
A simple example of how a desired gene is inserted into a plasmid. In this example, the gene
shown in white is disrupted when the new gene is added.
The use of cloning is closely tied to recombinant DNA in classical biology: the term
"clone" refers to a cell or organism derived from a parental organism,[2] while in modern biology
it refers to a collection of identical cells derived from the same parent cell.[2] In the classical
sense, recombinant DNA provides the initial cell that the host organism is then expected to
reproduce faithfully as it undergoes further cell division, with bacteria remaining a prime
example owing to their use in medicine as carriers of recombinant DNA inserted into a
structure known as a plasmid.[2]
Plasmids are extrachromosomal, self-replicating, circular forms of DNA present in most
bacteria, such as Escherichia coli (E. coli), containing genes related to catabolism and
metabolic activity[2] and allowing the carrier bacterium to survive and reproduce in
conditions present within other species and environments. These genes often confer
resistance to bacteriophages and antibiotics[2] and some heavy metals, but they can also be
fairly easily removed or separated from the plasmid by restriction endonucleases,[2] which
regularly produce "sticky ends" and allow the attachment of a selected segment of DNA
coding for more "reparative" substances, such as peptide hormone medications including
insulin, growth hormone, and oxytocin. Once useful genes have been introduced into the
plasmid, the carrier bacteria serve as vectors and are encouraged to reproduce so as to
propagate the altered DNA and increase the number of cells containing the recombinant DNA.
The use of plasmids is also key within gene therapy, where related viruses are used as
cloning vectors or carriers: means of transporting and passing on genes in recombinant DNA
through viral reproduction throughout an organism.[2] Plasmids contain three common
features: a replicator, a selectable marker and a cloning site.[2] The replicator or "ori"[2] is
the origin of replication, the site at which DNA replication begins. The marker refers to a
particular gene that usually confers resistance to an antibiotic, but may also refer to a gene
attached alongside the desired one, such as one that confers luminescence to allow
identification of successfully recombined DNA.[2]
The cloning site is a sequence of nucleotides representing one or more positions where
cleavage by restriction endonucleases occurs.[2] Most eukaryotes do not maintain canonical
plasmids; yeast is a notable exception.[5] In addition, the Ti plasmid of the bacterium
Agrobacterium tumefaciens can be used to integrate foreign DNA into the genomes of many
plants. Other methods of introducing or creating recombinant DNA in eukaryotes include
homologous recombination and transfection with modified viruses.
Chimeric plasmids
An example of chimeric plasmid formation from two "blunt ends" via the enzyme T4 ligase.
When recombinant DNA is further altered to host additional strands of DNA, the molecule
formed is referred to as a "chimeric" DNA molecule,[2] with reference to the mythological
chimera, which was a composite of several animals.[2] The presence of chimeric plasmid
molecules is somewhat regular in occurrence, as, throughout the lifetime of an organism,[2]
propagation by vectors ensures the presence of hundreds of thousands of organismal and
bacterial cells that all contain copies of the original chimeric DNA.[2]
In the production of chimeric (from chimera) plasmids, the processes involved can be
somewhat uncertain,[2] as the intended outcome of adding foreign DNA may not always be
achieved and may result in the formation of unusable plasmids. Initially, the plasmid
structure is linearised[2] to allow the addition, by bonding, of complementary foreign DNA
strands to single-stranded "overhangs"[2] or "sticky ends" present at the ends of the DNA
molecule from staggered cleavages produced by restriction endonucleases.[2]
A common vector originally used for the donation of plasmids was the bacterium Escherichia
coli and, later, the EcoRI derivative,[6] which was used for its versatility[6] in accepting
new DNA by "relaxed" replication when inhibited by chloramphenicol and spectinomycin; it
was later replaced by the pBR322 plasmid.[6] In the case of EcoRI, the plasmid can anneal
with foreign DNA via sticky-end ligation, or with "blunt ends" via blunt-end ligation, in the
presence of the phage T4 ligase,[6] which forms covalent links between the 3′-hydroxyl (OH)
and 5′-phosphate (PO4) groups present on blunt ends.[6] Both sticky-end (overhang) ligation
and blunt-end ligation can occur between foreign DNA segments and cleaved ends of the
original plasmid, depending upon the restriction endonuclease used for cleavage.[6]
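The staggered cut that produces these "sticky ends" is easy to illustrate in a few lines of code. The following is a minimal sketch in plain Python (no external libraries) for the EcoRI site G^AATTC; the input sequence and function names are invented for illustration, not taken from any real construct.

```python
# Minimal sketch of how a restriction endonuclease such as EcoRI produces
# "sticky ends". EcoRI recognises the palindromic site GAATTC and cuts between
# G and A on both strands, leaving 5' AATT overhangs. Sequence is hypothetical.

SITE = "GAATTC"   # EcoRI recognition sequence
CUT_OFFSET = 1    # EcoRI cuts after the first base of the site (G^AATTC)

def find_sites(seq: str, site: str = SITE) -> list[int]:
    """Return 0-based positions where the recognition site begins."""
    return [i for i in range(len(seq) - len(site) + 1) if seq[i:i + len(site)] == site]

def digest(seq: str) -> list[str]:
    """Cut the top strand at every EcoRI site and return the fragments.

    Each internal fragment boundary corresponds to a staggered cut that leaves
    a single-stranded AATT overhang on the real double helix.
    """
    cut_points = [pos + CUT_OFFSET for pos in find_sites(seq)]
    fragments, start = [], 0
    for cp in cut_points:
        fragments.append(seq[start:cp])
        start = cp
    fragments.append(seq[start:])
    return fragments

if __name__ == "__main__":
    plasmid_region = "ATTCGGAATTCTTACCGGAATTCGGC"   # hypothetical sequence
    print(find_sites(plasmid_region))               # positions of GAATTC
    print(digest(plasmid_region))                   # downstream fragments start with AATTC...
```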
Synthetic insulin production using recombinant DNA
One breakthrough in recombinant DNA technology was the manufacture of biosynthetic "human"
insulin, which was the first medicine made via recombinant DNA technology ever to be approved
by the FDA. Insulin was the ideal candidate because it is a relatively simple protein and was
therefore relatively easy to copy. It was also so extensively used that, if researchers could
prove that biosynthetic "human" insulin was safe and effective, the technology would gain
acceptance and open opportunities for other products to be made in this fashion.
The specific gene sequence, or polynucleotide, that codes for insulin production in humans
was introduced to a sample colony of E. coli (a bacterium found in the human intestine). Only
about 1 in 10⁶ bacteria take up the sequence. However, because the generation time of E. coli
is only about 30 minutes, this limitation is not problematic, and within a 24-hour period there
may be billions of E. coli carrying the DNA sequence needed to induce insulin production.[7]
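A quick back-of-the-envelope calculation, using only the doubling time and uptake rate quoted above, shows why the low transformation frequency is not a problem. The starting cell count below is a hypothetical figure chosen purely for illustration, and ideal exponential growth is assumed.

```python
# Back-of-the-envelope check of the claim above: with a ~30-minute doubling
# time, a handful of transformed E. coli cells can found a very large
# population within 24 hours, assuming ideal, unconstrained growth.

doubling_time_min = 30
hours = 24
doublings = hours * 60 // doubling_time_min          # 48 doublings in 24 h

transformed_fraction = 1 / 1_000_000                 # ~1 in 10**6 cells takes up the plasmid
starting_cells = 10_000_000                          # hypothetical number of cells exposed to DNA

founders = starting_cells * transformed_fraction     # ~10 transformed founder cells
population = founders * 2 ** doublings               # ideal exponential growth

print(f"{doublings} doublings -> ~{population:.2e} plasmid-bearing cells")
# ~2.8e15 in this idealised sketch; real cultures stop growing far earlier once
# nutrients run out, but 'billions' is easily reached.
```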
However, a sampling of initial reaction showed that Humulin was greeted more as a
technological rather than a medical breakthrough, and that this sentiment was building even
before the drug reached pharmacies.
The Economist concluded: "The first bug-built drug for human use may turn out to be a
commercial flop. But the way has now been cleared-and remarkably quickly, too—for
biotechnologists with interesting new products to clear the regulatory hurdles and run away
with the prizes."[8]
Ultimately, widespread consumer adoption of biosynthetic "human" insulin did not occur
until the manufacturers removed highly purified animal insulin from the market, leaving
consumers with no alternative to the synthetic varieties.
Plasmid
Figure 1: Illustration of a bacterium with plasmid enclosed showing chromosomal DNA and
plasmids.
A plasmid is a DNA molecule that is separate from, and can replicate independently of, the
chromosomal DNA.[1] Plasmids are double-stranded and, in many cases, circular. They
usually occur naturally in bacteria, but are sometimes found in eukaryotic organisms (e.g., the
2-micrometre ring in Saccharomyces cerevisiae).
Plasmid size varies from 1 to over 1,000 kilobase pairs (kbp).[2][3][4] The number of identical
plasmids within a single cell can range anywhere from one to even thousands under some
circumstances. Plasmids can be considered to be part of the mobilome, since they are often
associated with conjugation, a mechanism of horizontal gene transfer.
The term plasmid was first introduced by the American molecular biologist Joshua Lederberg
in 1952.[5]
Plasmids are considered transferable genetic elements, or "replicons", capable of autonomous
replication within a suitable host. Plasmids can be found in all three major domains: Archaea,
Bacteria and Eukarya.[1] Similar to viruses, plasmids are not considered a form of "life" as it
is currently defined.[6] Unlike viruses, plasmids are "naked" DNA and do not encode genes
necessary to encase the genetic material for transfer to a new host, though some classes of
plasmids encode the sex pilus necessary for their own transfer. Plasmid host-to-host transfer
requires direct, mechanical transfer by conjugation or changes in host gene expression
allowing the intentional uptake of the genetic element by transformation.[1] Microbial
transformation with plasmid DNA is neither parasitic nor symbiotic in nature, since each
implies the presence of an independent species living in a commensal or detrimental state
with the host organism. Rather, plasmids provide a mechanism for horizontal gene transfer
within a population of microbes and typically provide a selective advantage under a given
environmental state. Plasmids may carry genes that provide resistance to naturally occurring
antibiotics in a competitive environmental niche, or alternatively the proteins produced may
act as toxins under similar circumstances. Plasmids also can provide bacteria with an ability
to fix elemental nitrogen or to degrade recalcitrant organic compounds which provide an
advantage under conditions of nutrient deprivation.[1]
Vectors
There are two types of plasmid integration into a host bacterium: non-integrating plasmids
replicate independently, as in the top instance, whereas episomes, the lower example, integrate
into the host chromosome.
Plasmids used in genetic engineering are called vectors. Plasmids serve as important tools in
genetics and biotechnology labs, where they are commonly used to multiply (make many
copies of) or express particular genes.[2] Many plasmids are commercially available for such
uses. The gene to be replicated is inserted into copies of a plasmid containing genes that
make cells resistant to particular antibiotics and a multiple cloning site (MCS, or polylinker),
which is a short region containing several commonly used restriction sites allowing the easy
insertion of DNA fragments at this location. Next, the plasmids are inserted into bacteria by a
process called transformation. Then, the bacteria are exposed to the particular antibiotics.
Only bacteria which take up copies of the plasmid survive, since the plasmid makes them
resistant. In particular, the protecting genes are expressed (used to make a protein) and the
expressed protein breaks down the antibiotics. In this way the antibiotics act as a filter to
select only the modified bacteria. Now these bacteria can be grown in large amounts,
harvested and lysed (often using the alkaline lysis method) to isolate the plasmid of interest.
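The selection step just described can be pictured as a simple filter: transformation marks a small fraction of cells as plasmid-bearing, and the antibiotic removes everything else. The sketch below is a toy illustration in Python; the uptake rate and cell counts are invented, and no attempt is made to model real bacterial physiology.

```python
# Toy illustration of antibiotic selection after transformation: only cells
# that took up the plasmid (and therefore carry the resistance gene) survive
# exposure to the antibiotic. All numbers are invented for illustration.

import random

def transform(n_cells: int, uptake_rate: float) -> list[bool]:
    """Return one bool per cell: True if the cell took up the plasmid."""
    return [random.random() < uptake_rate for _ in range(n_cells)]

def select_on_antibiotic(cells: list[bool]) -> list[bool]:
    """Keep only plasmid-bearing (resistant) cells; the rest are killed."""
    return [has_plasmid for has_plasmid in cells if has_plasmid]

if __name__ == "__main__":
    random.seed(0)
    culture = transform(n_cells=100_000, uptake_rate=1e-3)  # hypothetical uptake rate
    survivors = select_on_antibiotic(culture)
    print(f"{len(survivors)} of {len(culture)} cells survive selection")
```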
Another major use of plasmids is to make large amounts of proteins. In this case, researchers
grow bacteria containing a plasmid harboring the gene of interest. Just as the bacterium
produces proteins to confer its antibiotic resistance, it can be induced to produce large
amounts of protein from the inserted gene. This is a cheap and easy way of mass-producing
a gene or the protein it then codes for, for example, insulin or even antibiotics.
However, a plasmid can only contain inserts of about 1–10 kbp. To clone longer lengths of
DNA, lambda phage with lysogeny genes deleted, cosmids, bacterial artificial chromosomes
or yeast artificial chromosomes could be used.
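A rough way to summarise this size-based choice of cloning vehicle is shown below. The capacity figures are approximate, commonly quoted upper limits rather than specifications of any particular vector, so treat the function as a mnemonic, not a design tool.

```python
# Rough helper for picking a cloning vehicle by insert size, following the rule
# of thumb above (plasmids for ~1-10 kb, larger constructs otherwise). The
# capacity figures are approximate, commonly quoted upper limits, not exact
# specifications of any particular vector.

APPROX_MAX_INSERT_KB = [
    ("standard plasmid", 10),
    ("lambda phage (lysogeny genes deleted)", 25),
    ("cosmid", 45),
    ("bacterial artificial chromosome (BAC)", 300),
    ("yeast artificial chromosome (YAC)", 1000),
]

def suggest_vector(insert_kb: float) -> str:
    """Return the smallest class of vector thought to accommodate the insert."""
    for name, max_kb in APPROX_MAX_INSERT_KB:
        if insert_kb <= max_kb:
            return name
    return "no standard vector; consider splitting the insert"

if __name__ == "__main__":
    for size in (4, 18, 40, 150, 800):
        print(f"{size:>4} kb -> {suggest_vector(size)}")
```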
Applications
Disease Models
Plasmids were historically used to genetically engineer the embryonic stem cells of rats in
order to create rat genetic disease models. The limited efficiency of plasmid-based techniques
precluded their use in the creation of more accurate human cell models. Fortunately,
developments in adeno-associated virus recombination techniques and zinc finger nucleases
have enabled the creation of a new generation of isogenic human disease models.
Gene therapy
The success of some strategies of gene therapy depends on the efficient insertion of
therapeutic genes at the appropriate chromosomal target sites within the human genome,
without causing cell injury, oncogenic mutations (cancer) or an immune response. Plasmid
vectors are one of many approaches that could be used for this purpose. Zinc finger nucleases
(ZFNs) offer a way to cause a site-specific double-strand break in the genome and stimulate
homologous recombination. This makes targeted gene correction a possibility in human
cells. Plasmids encoding ZFN could be used to deliver a therapeutic gene to a pre-selected
chromosomal site with a frequency higher than that of random integration. Although the
practicality of this approach to gene therapy has yet to be proven, some aspects of it could be
less problematic than the alternative viral-based delivery of therapeutic genes.[7]
Types
Overview of bacterial conjugation
Electron micrograph of a DNA fiber bundle, presumably of a single bacterial chromosome
loop.
Electron micrograph of a bacterial DNA plasmid (chromosome fragment).
One way of grouping plasmids is by their ability to transfer to other bacteria. Conjugative
plasmids contain so-called tra-genes, which perform the complex process of conjugation, the
transfer of plasmids to another bacterium (Fig. 4). Non-conjugative plasmids are incapable of
initiating conjugation, hence they can only be transferred with the assistance of conjugative
plasmids, by 'accident'. An intermediate class of plasmids is mobilizable; these carry only a
subset of the genes required for transfer. They can 'parasitize' a conjugative plasmid,
transferring at high frequency only in its presence. Plasmids are now being used to
manipulate DNA and may possibly be a tool for curing many diseases.
It is possible for plasmids of different types to coexist in a single cell. Several different
plasmids have been found in E. coli. But related plasmids are often incompatible, in the sense
that only one of them survives in the cell line, due to the regulation of vital plasmid functions.
Therefore, plasmids can be assigned into compatibility groups.
Another way to classify plasmids is by function. There are five main classes:
- Fertility (F) plasmids, which contain tra-genes. They are capable of conjugation (transfer of genetic material between bacteria which are touching).
- Resistance (R) plasmids, which contain genes that can build a resistance against antibiotics or poisons and help bacteria produce pili. Historically known as R-factors, before the nature of plasmids was understood.
- Col plasmids, which contain genes that code for (determine the production of) bacteriocins, proteins that can kill other bacteria.
- Degradative plasmids, which enable the digestion of unusual substances, e.g., toluene or salicylic acid.
- Virulence plasmids, which turn the bacterium into a pathogen (one that causes disease).
Plasmids can belong to more than one of these functional groups.
Plasmids that exist only as one or a few copies in each bacterium are, upon cell division, in
danger of being lost in one of the segregating bacteria. Such single-copy plasmids have
systems which attempt to actively distribute a copy to both daughter cells.
Some plasmids or microbial hosts include an addiction system or "postsegregational killing
system (PSK)", such as the hok/sok (host killing/suppressor of killing) system of plasmid R1
in Escherichia coli.[8] This system produces both a long-lived poison and a short-lived
antidote. Several types of plasmid addiction systems (toxin/antitoxin, metabolism-based,
ORT systems) have been described in the literature[9] and used in biotechnical (fermentation)
or biomedical (vaccine therapy) applications. Daughter cells that retain a copy of the plasmid
survive, while a daughter cell that fails to inherit the plasmid dies or suffers a reduced
growth rate because of the lingering poison from the parent cell. In this way, the overall
productivity can be enhanced.
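The logic of such a post-segregational killing system can be sketched as a toy simulation: a daughter that misses out on the plasmid loses the short-lived antidote and is removed from the population by the long-lived poison. All rates and generation counts below are invented for illustration.

```python
# Toy simulation of a post-segregational killing (PSK) system like hok/sok:
# a daughter cell that fails to inherit the plasmid loses the antidote first
# and is killed by the poison left over from its parent. Rates are invented.

import random

def divide(has_plasmid: bool, loss_rate: float = 0.05) -> list[bool]:
    """Return the plasmid status of the two daughters of a dividing cell.

    Plasmid-free cells die under PSK, so they leave no daughters; a low-copy
    plasmid is occasionally missed by one daughter at division.
    """
    if not has_plasmid:
        return []
    return [True, random.random() > loss_rate]

def grow(generations: int) -> list[bool]:
    population = [True]                 # start from one plasmid-bearing cell
    for _ in range(generations):
        next_gen = []
        for cell in population:
            next_gen.extend(divide(cell))
        population = next_gen
    return population

if __name__ == "__main__":
    random.seed(1)
    final = grow(10)
    print("cells alive:", len(final), "| still carrying the plasmid:", sum(final))
```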
Yeast Plasmids
Other types of plasmids, often related to yeast cloning vectors, include:
- Yeast integrative plasmid (YIp): yeast vectors that rely on integration into the host chromosome for survival and replication, and are usually used when studying the functionality of a solo gene or when the gene is toxic. Also connected with the gene URA3, which encodes an enzyme related to the biosynthesis of pyrimidine nucleotides (T, C).
- Yeast replicative plasmid (YRp): these transport a sequence of chromosomal DNA that includes an origin of replication. These plasmids are less stable, as they can "get lost" during budding.
For further information see: http://dbb.urmc.rochester.edu/labs/sherman_f/yeast/Cont.html
Plasmid DNA extraction
As alluded to above, plasmids are often used to purify a specific sequence, since they can
easily be purified away from the rest of the genome. For their use as vectors, and for
molecular cloning, plasmids often need to be isolated.
There are several methods to isolate plasmid DNA from bacteria, the archetypes of which are
the miniprep and the maxiprep/bulkprep.[2] The former can be used to quickly find out
whether the plasmid is correct in any of several bacterial clones. The yield is a small amount
of impure plasmid DNA, which is sufficient for analysis by restriction digest and for some
cloning techniques.
In the latter, much larger volumes of bacterial suspension are grown from which a maxi-prep
can be performed. Essentially this is a scaled-up miniprep followed by additional purification.
This results in relatively large amounts (several micrograms) of very pure plasmid DNA.
In recent times many commercial kits have been created to perform plasmid extraction at
various scales, purity and levels of automation. Commercial services can prepare plasmid
DNA at quoted prices below $300/mg in milligram quantities and $15/mg in gram quantities
(early 2007).
Conformations
Plasmid DNA may appear in one of five conformations, which (for a given size) run at
different speeds in a gel during electrophoresis. The conformations are listed below in order
of electrophoretic mobility (speed for a given applied voltage) from slowest to fastest:
- "Nicked Open-Circular" DNA has one strand cut.
- "Relaxed Circular" DNA is fully intact with both strands uncut, but has been enzymatically "relaxed" (supercoils removed). You can model this by letting a twisted extension cord unwind and relax and then plugging it into itself.
- "Linear" DNA has free ends, either because both strands have been cut, or because the DNA was linear in vivo. You can model this with an electrical extension cord that is not plugged into itself.
- "Supercoiled" (or "Covalently Closed-Circular") DNA is fully intact with both strands uncut, and with a twist built in, resulting in a compact form. You can model this by twisting an extension cord and then plugging it into itself.
- "Supercoiled Denatured" DNA is like supercoiled DNA, but has unpaired regions that make it slightly less compact; this can result from excessive alkalinity during plasmid preparation.
The rate of migration for small linear fragments is directly proportional to the voltage applied
at low voltages. At higher voltages, larger fragments migrate at continually increasing yet
different rates. Therefore the resolution of a gel decreases with increased voltage.
At a specified, low voltage, the migration rate of small linear DNA fragments is a function of
their length. Large linear fragments (over 20 kb or so) migrate at a certain fixed rate
regardless of length. This is because the molecules 'reptate', with the bulk of the molecule
following the leading end through the gel matrix. Restriction digests are frequently used to
analyse purified plasmids. These enzymes specifically break the DNA at certain short
sequences. The resulting linear fragments form 'bands' after gel electrophoresis. It is possible
to purify certain fragments by cutting the bands out of the gel and dissolving the gel to
release the DNA fragments.
Because of its tight conformation, supercoiled DNA migrates faster through a gel than linear
or open-circular DNA.
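In practice, band positions on such a gel are often interpreted with a semi-log plot: for linear fragments, migration distance is roughly linear in log10(fragment size), so a ladder of known sizes can be used to interpolate the size of an unknown band. The sketch below assumes that relationship; the ladder sizes and measured distances are invented.

```python
# Sketch of estimating a band's size from a gel: for linear fragments,
# migration distance is treated as roughly linear in log10(size), so a ladder
# of known sizes is used to interpolate the size of an unknown band.
# Ladder values and distances below are invented for illustration.

import math

# (size in base pairs, migration distance in mm) for a hypothetical ladder
ladder = [(10000, 12.0), (5000, 20.0), (2000, 31.0), (1000, 39.0), (500, 47.0)]

def estimate_size(distance_mm: float) -> float:
    """Linearly interpolate log10(size) against distance using the ladder."""
    xs = [d for _, d in ladder]              # distances, increasing
    ys = [math.log10(s) for s, _ in ladder]  # log10 of the known sizes
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= distance_mm <= x1:
            frac = (distance_mm - x0) / (x1 - x0)
            return 10 ** (y0 + frac * (y1 - y0))
    raise ValueError("band lies outside the ladder range")

if __name__ == "__main__":
    print(f"band at 35 mm ~ {estimate_size(35.0):.0f} bp")
```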
Simulation of plasmids
The use of plasmids as a technique in molecular biology is supported by bioinformatics
software. These programmes record the DNA sequence of plasmid vectors, help to predict cut
sites of restriction enzymes, and to plan manipulations. Examples of software packages that
handle plasmid maps are Geneious, Lasergene, GeneConstructionKit, ApE, and Vector NTI.
Polymerase chain reaction
A strip of eight PCR tubes, each containing a 100 μL reaction mixture
The polymerase chain reaction (PCR) is a scientific technique in molecular biology to
amplify a single or a few copies of a piece of DNA across several orders of magnitude,
generating thousands to millions of copies of a particular DNA sequence. The method relies
on thermal cycling, consisting of cycles of repeated heating and cooling of the reaction for
DNA melting and enzymatic replication of the DNA. Primers (short DNA fragments)
containing sequences complementary to the target region along with a DNA polymerase
(after which the method is named) are key components to enable selective and repeated
amplification. As PCR progresses, the DNA generated is itself used as a template for
replication, setting in motion a chain reaction in which the DNA template is exponentially
amplified. PCR can be extensively modified to perform a wide array of genetic
manipulations.
Almost all PCR applications employ a heat-stable DNA polymerase, such as Taq polymerase,
an enzyme originally isolated from the bacterium Thermus aquaticus. This DNA polymerase
enzymatically assembles a new DNA strand from DNA building blocks, the nucleotides, by
using single-stranded DNA as a template and DNA oligonucleotides (also called DNA
primers), which are required for initiation of DNA synthesis. The vast majority of PCR
methods use thermal cycling, i.e., alternately heating and cooling the PCR sample to a
defined series of temperature steps. These thermal cycling steps are necessary first to
physically separate the two strands in a DNA double helix at a high temperature in a process
called DNA melting. At a lower temperature, each strand is then used as the template in DNA
synthesis by the DNA polymerase to selectively amplify the target DNA. The selectivity of
PCR results from the use of primers that are complementary to the DNA region targeted for
amplification under specific thermal cycling conditions.
Developed in 1983 by Kary Mullis,[1] PCR is now a common and often indispensable
technique used in medical and biological research labs for a variety of applications.[2][3] These
include DNA cloning for sequencing, DNA-based phylogeny, or functional analysis of genes;
the diagnosis of hereditary diseases; the identification of genetic fingerprints (used in forensic
sciences and paternity testing); and the detection and diagnosis of infectious diseases. In
1993, Mullis was awarded the Nobel Prize in Chemistry for his work on PCR.[4]
PCR principles and procedure
Figure 1a: A thermal cycler for PCR
Figure 1b: An older model three-temperature thermal cycler for PCR
PCR is used to amplify a specific region of a DNA strand (the DNA target). Most PCR
methods typically amplify DNA fragments of up to ~10 kilobase pairs (kb), although some
techniques allow for amplification of fragments up to 40 kb in size.[5]
A basic PCR set-up requires several components and reagents.[6] These components include:
- DNA template that contains the DNA region (target) to be amplified.
- Two primers that are complementary to the 3' (three prime) ends of each of the sense and anti-sense strands of the DNA target.
- Taq polymerase or another DNA polymerase with a temperature optimum at around 70 °C.
- Deoxynucleoside triphosphates (dNTPs; also very commonly and erroneously called deoxynucleotide triphosphates), the building blocks from which the DNA polymerase synthesizes a new DNA strand.
- Buffer solution, providing a suitable chemical environment for optimum activity and stability of the DNA polymerase.
- Divalent cations, magnesium or manganese ions; generally Mg2+ is used, but Mn2+ can be utilized for PCR-mediated DNA mutagenesis, as a higher Mn2+ concentration increases the error rate during DNA synthesis.[7]
- Monovalent cations, typically potassium ions.
The PCR is commonly carried out in a reaction volume of 10–200 μl in small reaction tubes
(0.2–0.5 ml volumes) in a thermal cycler. The thermal cycler heats and cools the reaction
tubes to achieve the temperatures required at each step of the reaction (see below). Many
modern thermal cyclers make use of the Peltier effect which permits both heating and cooling
of the block holding the PCR tubes simply by reversing the electric current. Thin-walled
reaction tubes permit favorable thermal conductivity to allow for rapid thermal equilibration.
Most thermal cyclers have heated lids to prevent condensation at the top of the reaction tube.
Older thermocyclers lacking a heated lid require a layer of oil on top of the reaction mixture
or a ball of wax inside the tube.
Procedure
Figure 2: Schematic drawing of the PCR cycle. (1) Denaturing at 94–96 °C. (2) Annealing
at ~65 °C (3) Elongation at 72 °C. Four cycles are shown here. The blue lines represent the
DNA template to which primers (red arrows) anneal that are extended by the DNA
polymerase (light green circles), to give shorter DNA products (green lines), which
themselves are used as templates as PCR progresses.
Typically, PCR consists of a series of 20-40 repeated temperature changes, called cycles,
with each cycle commonly consisting of 2-3 discrete temperature steps, usually three (Fig. 2).
The cycling is often preceded by a single temperature step (called hold) at a high temperature
(>90°C), and followed by one hold at the end for final product extension or brief storage. The
temperatures used and the length of time they are applied in each cycle depend on a variety of
parameters. These include the enzyme used for DNA synthesis, the concentration of divalent
ions and dNTPs in the reaction, and the melting temperature (Tm) of the primers.[8]
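As a rough illustration of how a primer Tm can be estimated, the sketch below applies the simple "Wallace rule" (Tm ≈ 2(A+T) + 4(G+C) °C), which is only a crude approximation for short oligonucleotides; real primer-design software uses nearest-neighbour thermodynamics. The primer sequences are arbitrary examples, and subtracting 5 °C for the annealing temperature follows the 3-5 °C rule of thumb mentioned in the annealing step below.

```python
# Rough estimate of primer melting temperature (Tm) using the Wallace rule
# (Tm ~ 2*(A+T) + 4*(G+C) degrees C). This is a crude approximation for short
# oligonucleotides only; the example primer sequences are arbitrary.

def wallace_tm(primer: str) -> int:
    """Crude Tm estimate in degrees C for a short DNA primer."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def suggested_annealing(primers: list[str], offset: int = 5) -> int:
    """Annealing temperature: a few degrees below the lowest primer Tm."""
    return min(wallace_tm(p) for p in primers) - offset

if __name__ == "__main__":
    fwd, rev = "AGCGGATAACAATTTCACACAGG", "GTAAAACGACGGCCAGT"
    for name, p in (("forward", fwd), ("reverse", rev)):
        print(f"{name} primer Tm ~ {wallace_tm(p)} C")
    print("suggested annealing temperature ~", suggested_annealing([fwd, rev]), "C")
```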

Initialization step: This step consists of heating the reaction to a temperature of 94–
96 °C (or 98 °C if extremely thermostable polymerases are used), which is held for 1–
9 minutes. It is only required for DNA polymerases that require heat activation by
hot-start PCR.[9]

Denaturation step: This step is the first regular cycling event and consists of heating
the reaction to 94–98 °C for 20–30 seconds. It causes DNA melting of the DNA
template by disrupting the hydrogen bonds between complementary bases, yielding
single-stranded DNA molecules.

Annealing step: The reaction temperature is lowered to 50–65 °C for 20–40 seconds
allowing annealing of the primers to the single-stranded DNA template. Typically the
annealing temperature is about 3-5 degrees Celsius below the Tm of the primers used.
Stable DNA-DNA hydrogen bonds are only formed when the primer sequence very
closely matches the template sequence. The polymerase binds to the primer-template
hybrid and begins DNA synthesis.

Extension/elongation step: The temperature at this step depends on the DNA
polymerase used; Taq polymerase has its optimum activity temperature at 75–
80 °C,[10][11] and commonly a temperature of 72 °C is used with this enzyme. At this
step the DNA polymerase synthesizes a new DNA strand complementary to the DNA
template strand by adding dNTPs that are complementary to the template in 5' to 3'
direction, condensing the 5'-phosphate group of the dNTPs with the 3'-hydroxyl group
at the end of the nascent (extending) DNA strand. The extension time depends both
on the DNA polymerase used and on the length of the DNA fragment to be amplified.
As a rule-of-thumb, at its optimum temperature, the DNA polymerase will polymerize
a thousand bases per minute. Under optimum conditions, i.e., if there are no
limitations due to limiting substrates or reagents, at each extension step, the amount of
DNA target is doubled, leading to exponential (geometric) amplification of the
specific DNA fragment.

Final elongation: This single step is occasionally performed at a temperature of 70–
74 °C for 5–15 minutes after the last PCR cycle to ensure that any remaining single-stranded DNA is fully extended.

Final hold: This step at 4–15 °C for an indefinite time may be employed for short-term storage of the reaction.
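A cycling protocol like the one just described is, in the end, a small table of temperatures and times, and can be written down as plain data. The values below are drawn from the ranges given above but are otherwise arbitrary; any real program depends on the enzyme, the primers and the length of the target.

```python
# A generic three-step PCR program written as plain data, using values from
# the ranges described above; the specific numbers are illustrative only.

protocol = {
    "initial_denaturation": (95, 120),        # (temperature C, seconds), for hot-start enzymes
    "cycles": 30,
    "per_cycle": [
        ("denaturation", 95, 30),
        ("annealing", 58, 30),                # typically ~3-5 C below primer Tm
        ("extension", 72, 60),                # rule of thumb: ~1 min per kb of target
    ],
    "final_extension": (72, 300),
    "hold": 4,
}

def describe(prog: dict) -> None:
    t, s = prog["initial_denaturation"]
    print(f"1. Initial denaturation: {t} C for {s} s")
    print(f"2. Repeat {prog['cycles']} times:")
    for name, temp, secs in prog["per_cycle"]:
        print(f"   - {name}: {temp} C for {secs} s")
    t, s = prog["final_extension"]
    print(f"3. Final extension: {t} C for {s} s")
    print(f"4. Hold at {prog['hold']} C")

if __name__ == "__main__":
    describe(protocol)
```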
Figure 3: Ethidium bromide-stained PCR products after gel electrophoresis. Two sets of
primers were used to amplify a target sequence from three different tissue samples. No
amplification is present in sample #1; DNA bands in sample #2 and #3 indicate successful
amplification of the target sequence. The gel also shows a positive control, and a DNA ladder
containing DNA fragments of defined length for sizing the bands in the experimental PCRs.
To check whether the PCR generated the anticipated DNA fragment (also sometimes referred
to as the amplimer or amplicon), agarose gel electrophoresis is employed for size separation
of the PCR products. The sizes of the PCR products are determined by comparison with a DNA
ladder (a molecular weight marker), which contains DNA fragments of known size, run on
the gel alongside the PCR products (see Fig. 3).
PCR stages
The PCR process can be divided into three stages:
Exponential amplification: At every cycle, the amount of product is doubled (assuming 100%
reaction efficiency). The reaction is very sensitive: only minute quantities of DNA need to be
present.[12]
Levelling off stage: The reaction slows as the DNA polymerase loses activity and as
consumption of reagents such as dNTPs and primers causes them to become limiting.
Plateau: No more product accumulates due to exhaustion of reagents and enzyme.
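These three stages can be reproduced with a very small numerical sketch: each cycle multiplies the product by (1 + efficiency), and the efficiency is assumed to fall as the product approaches some capacity set by the available reagents. The decay model and the numbers are invented purely to make the curve shape visible; this is not a kinetic model of a real reaction.

```python
# Minimal illustration of the three PCR stages listed above. Efficiency starts
# near 1 (perfect doubling) and is assumed to fall off as reagents are
# consumed, giving the levelling-off and plateau phases. The decay model is
# invented purely to show the curve shape.

def simulate(start_copies: float, cycles: int, capacity: float = 1e12) -> list[float]:
    copies = start_copies
    history = [copies]
    for _ in range(cycles):
        efficiency = max(0.0, 1.0 - copies / capacity)   # falls as reagents run out
        copies += copies * efficiency
        history.append(copies)
    return history

if __name__ == "__main__":
    curve = simulate(start_copies=100, cycles=40)
    for cycle in (0, 10, 20, 30, 40):
        print(f"cycle {cycle:>2}: ~{curve[cycle]:.3g} copies")
```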
PCR optimization
In practice, PCR can fail for various reasons, in part due to its sensitivity to contamination
causing amplification of spurious DNA products. Because of this, a number of techniques
and procedures have been developed for optimizing PCR conditions.[13][14] Contamination
with extraneous DNA is addressed with lab protocols and procedures that separate pre-PCR
mixtures from potential DNA contaminants.[6] This usually involves spatial separation of
PCR-setup areas from areas for analysis or purification of PCR products, use of disposable
plasticware, and thoroughly cleaning the work surface between reaction setups. Primer-design techniques are important in improving PCR product yield and in avoiding the
formation of spurious products, and the usage of alternate buffer components or polymerase
enzymes can help with amplification of long or otherwise problematic regions of DNA.
Applications of PCR
Selective DNA isolation
PCR allows isolation of DNA fragments from genomic DNA by selective amplification of a
specific region of DNA. This use of PCR augments many methods, such as generating
hybridization probes for Southern or northern hybridization and DNA cloning, which require
larger amounts of DNA, representing a specific DNA region. PCR supplies these techniques
with high amounts of pure DNA, enabling analysis of DNA samples even from very small
amounts of starting material.
Other applications of PCR include DNA sequencing to determine unknown PCR-amplified
sequences, in which one of the amplification primers may be used in Sanger sequencing, and
the isolation of a DNA sequence to expedite recombinant DNA techniques involving the
insertion of a DNA sequence into a plasmid or into the genetic material of another organism.
Bacterial colonies (E. coli) can be rapidly screened by PCR for correct DNA vector
constructs.[15] PCR may also be used for genetic fingerprinting: a forensic technique used to
identify a person or organism by comparing experimental DNAs through different PCR-based methods.
Some PCR 'fingerprinting' methods have high discriminative power and can be used to identify
genetic relationships between individuals, such as parent-child or between siblings, and are
used in paternity testing (Fig. 4). This technique may also be used to determine evolutionary
relationships among organisms.
Figure 4: Electrophoresis of PCR-amplified DNA fragments. (1) Father. (2) Child. (3)
Mother. The child has inherited some, but not all of the fingerprint of each of its parents,
giving it a new, unique fingerprint.
Amplification and quantification of DNA
Because PCR amplifies the regions of DNA that it targets, PCR can be used to analyze
extremely small amounts of sample. This is often critical for forensic analysis, when only a
trace amount of DNA is available as evidence. PCR may also be used in the analysis of
ancient DNA that is tens of thousands of years old. These PCR-based techniques have been
successfully used on animals, such as a forty-thousand-year-old mammoth, and also on
human DNA, in applications ranging from the analysis of Egyptian mummies to the
identification of a Russian tsar.[16]
Quantitative PCR methods allow the estimation of the amount of a given sequence present in
a sample—a technique often applied to quantitatively determine levels of gene expression.
Real-time PCR is an established tool for DNA quantification that measures the accumulation
of DNA product after each round of PCR amplification.
PCR in diagnosis of diseases
PCR permits early diagnosis of malignant diseases such as leukemia and lymphomas, an
application that is currently among the most highly developed in cancer research and is
already being used routinely. PCR assays can be performed directly on genomic DNA samples to
detect translocation-specific malignant cells at a sensitivity that is at least 10,000-fold
higher than that of other methods.
PCR also permits identification of non-cultivatable or slow-growing microorganisms such as
mycobacteria, anaerobic bacteria, or viruses from tissue culture assays and animal models.
The basis for PCR diagnostic applications in microbiology is the detection of infectious
agents and the discrimination of non-pathogenic from pathogenic strains by virtue of specific
genes.
Viral DNA can likewise be detected by PCR. The primers used need to be specific to the
targeted sequences in the DNA of a virus, and the PCR can be used for diagnostic analyses or
DNA sequencing of the viral genome. The high sensitivity of PCR permits virus detection
soon after infection and even before the onset of disease. Such early detection may give
physicians a significant lead in treatment. The amount of virus ("viral load") in a patient can
also be quantified by PCR-based DNA quantitation techniques (see below).
Variations on the basic PCR technique

Allele-specific PCR: a diagnostic or cloning technique which is based on single-nucleotide polymorphisms (SNPs) (single-base differences in DNA). It requires prior
knowledge of a DNA sequence, including differences between alleles, and uses
primers whose 3' ends encompass the SNP. PCR amplification under stringent
conditions is much less efficient in the presence of a mismatch between template and
primer, so successful amplification with an SNP-specific primer signals presence of
the specific SNP in a sequence.[17] See SNP genotyping for more information.

Assembly PCR or Polymerase Cycling Assembly (PCA): artificial synthesis of long
DNA sequences by performing PCR on a pool of long oligonucleotides with short
overlapping segments. The oligonucleotides alternate between sense and antisense
directions, and the overlapping segments determine the order of the PCR fragments,
thereby selectively producing the final long DNA product.[18]

Asymmetric PCR: preferentially amplifies one DNA strand in a double-stranded DNA
template. It is used in sequencing and hybridization probing where amplification of
only one of the two complementary strands is required. PCR is carried out as usual,
but with a great excess of the primer for the strand targeted for amplification. Because
of the slow (arithmetic) amplification later in the reaction after the limiting primer has
been used up, extra cycles of PCR are required.[19] A recent modification on this
process, known as Linear-After-The-Exponential-PCR (LATE-PCR), uses a limiting
primer with a higher melting temperature (Tm) than the excess primer to maintain
reaction efficiency as the limiting primer concentration decreases mid-reaction.[20]

Helicase-dependent amplification: similar to traditional PCR, but uses a constant
temperature rather than cycling through denaturation and annealing/extension cycles.
DNA helicase, an enzyme that unwinds DNA, is used in place of thermal
denaturation.[21]

Hot-start PCR: a technique that reduces non-specific amplification during the initial
set up stages of the PCR. It may be performed manually by heating the reaction
components to the melting temperature (e.g., 95°C) before adding the polymerase.[22]
Specialized enzyme systems have been developed that inhibit the polymerase's
activity at ambient temperature, either by the binding of an antibody[9][23] or by the
presence of covalently bound inhibitors that only dissociate after a high-temperature
activation step. Hot-start/cold-finish PCR is achieved with new hybrid polymerases
that are inactive at ambient temperature and are instantly activated at elongation
temperature.

Intersequence-specific PCR (ISSR): a PCR method for DNA fingerprinting that
amplifies regions between simple sequence repeats to produce a unique fingerprint of
amplified fragment lengths.[24]

Inverse PCR: is commonly used to identify the flanking sequences around genomic
inserts. It involves a series of DNA digestions and self ligation, resulting in known
sequences at either end of the unknown sequence.[25]

Ligation-mediated PCR: uses small DNA linkers ligated to the DNA of interest and
multiple primers annealing to the DNA linkers; it has been used for DNA sequencing,
genome walking, and DNA footprinting.[26]

Methylation-specific PCR (MSP): developed by Stephen Baylin and Jim Herman at
the Johns Hopkins School of Medicine,[27] and is used to detect methylation of CpG
islands in genomic DNA. DNA is first treated with sodium bisulfite, which converts
unmethylated cytosine bases to uracil, which is recognized by PCR primers as
thymine. Two PCRs are then carried out on the modified DNA, using primer sets
identical except at any CpG islands within the primer sequences. At these points, one
primer set recognizes DNA with cytosines to amplify methylated DNA, and one set
recognizes DNA with uracil or thymine to amplify unmethylated DNA. MSP using
qPCR can also be performed to obtain quantitative rather than qualitative information
about methylation.

Miniprimer PCR: uses a thermostable polymerase (S-Tbr) that can extend from short
primers ("smalligos") as short as 9 or 10 nucleotides. This method permits PCR
targeting to smaller primer binding regions, and is used to amplify conserved DNA
sequences, such as the 16S (or eukaryotic 18S) rRNA gene.[28]

Multiplex Ligation-dependent Probe Amplification (MLPA): permits multiple targets
to be amplified with only a single primer pair, thus avoiding the resolution limitations
of multiplex PCR (see below).

Multiplex-PCR: consists of multiple primer sets within a single PCR mixture to
produce amplicons of varying sizes that are specific to different DNA sequences. By
targeting multiple genes at once, additional information may be gained from a single
test run that otherwise would require several times the reagents and more time to
perform. Annealing temperatures for each of the primer sets must be optimized to
work correctly within a single reaction, and amplicon sizes, i.e., their base pair length,
should be different enough to form distinct bands when visualized by gel
electrophoresis.

Nested PCR: increases the specificity of DNA amplification, by reducing background
due to non-specific amplification of DNA. Two sets of primers are used in two
successive PCRs. In the first reaction, one pair of primers is used to generate DNA
products, which besides the intended target, may still consist of non-specifically
amplified DNA fragments. The product(s) are then used in a second PCR with a set of
primers whose binding sites are completely or partially different from and located 3'
of each of the primers used in the first reaction. Nested PCR is often more successful
in specifically amplifying long DNA fragments than conventional PCR, but it requires
more detailed knowledge of the target sequences.

Overlap-extension PCR: a genetic engineering technique allowing the construction of
a DNA sequence with an alteration inserted beyond the limit of the longest practical
primer length.

Quantitative PCR (Q-PCR): used to measure the quantity of a PCR product
(commonly in real-time). It quantitatively measures starting amounts of DNA, cDNA
or RNA. Q-PCR is commonly used to determine whether a DNA sequence is present
in a sample and the number of its copies in the sample. Quantitative real-time PCR
has a very high degree of precision. QRT-PCR methods use fluorescent dyes, such as
SYBR Green, EvaGreen, or fluorophore-containing DNA probes, such as TaqMan, to
measure the amount of amplified product in real time. It is also sometimes
abbreviated to RT-PCR (Real Time PCR) or RQ-PCR. QRT-PCR or RTQ-PCR are
more appropriate contractions, since RT-PCR commonly refers to reverse
transcription PCR (see below), often used in conjunction with Q-PCR.

Reverse Transcription PCR (RT-PCR): for amplifying DNA from RNA. Reverse
transcriptase reverse transcribes RNA into cDNA, which is then amplified by PCR.
RT-PCR is widely used in expression profiling, to determine the expression of a gene
or to identify the sequence of an RNA transcript, including transcription start and
termination sites. If the genomic DNA sequence of a gene is known, RT-PCR can be
used to map the location of exons and introns in the gene. The 5' end of a gene
(corresponding to the transcription start site) is typically identified by RACE-PCR
(Rapid Amplification of cDNA Ends).

Solid Phase PCR: encompasses multiple meanings, including Polony Amplification
(where PCR colonies are derived in a gel matrix, for example), Bridge PCR [29]
(primers are covalently linked to a solid-support surface), conventional Solid Phase
PCR (where Asymmetric PCR is applied in the presence of solid support bearing
primer with sequence matching one of the aqueous primers) and Enhanced Solid
Phase PCR[30] (where conventional Solid Phase PCR can be improved by employing
high Tm and nested solid support primer with optional application of a thermal 'step'
to favour solid support priming).

Thermal asymmetric interlaced PCR (TAIL-PCR): for isolation of an unknown
sequence flanking a known sequence. Within the known sequence, TAIL-PCR uses a
nested pair of primers with differing annealing temperatures; a degenerate primer is
used to amplify in the other direction from the unknown sequence.[31]

Touchdown PCR (Step-down PCR): a variant of PCR that aims to reduce nonspecific
background by gradually lowering the annealing temperature as PCR cycling
progresses. The annealing temperature at the initial cycles is usually a few degrees
(3–5 °C) above the Tm of the primers used, while at the later cycles, it is a few degrees
(3–5 °C) below the primer Tm. The higher temperatures give greater specificity for primer
binding, and the lower temperatures permit more efficient amplification from the
specific products formed during the initial cycles (a sketch of such a temperature
schedule is given after this list).[32]

PAN-AC: uses isothermal conditions for amplification, and may be used in living
cells.[33][34]

Universal Fast Walking: for genome walking and genetic fingerprinting using a more
specific 'two-sided' PCR than conventional 'one-sided' approaches (using only one
gene-specific primer and one general primer - which can lead to artefactual 'noise')[35]
by virtue of a mechanism involving lariat structure formation. Streamlined derivatives
of UFW are LaNe RAGE (lariat-dependent nested PCR for rapid amplification of
genomic DNA ends),[36] 5'RACE LaNe[37] and 3'RACE LaNe.[38]
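As an example of how one of these variants translates into a concrete program, the sketch below generates the kind of annealing-temperature schedule used in touchdown PCR: start a few degrees above the estimated primer Tm and step down each cycle until a floor a few degrees below Tm is reached. The starting offset, decrement and cycle count are illustrative choices, not recommended values.

```python
# Sketch of a touchdown PCR annealing-temperature schedule: start above the
# estimated primer Tm and step down by a fixed decrement each cycle until a
# floor below Tm is reached, then hold. All numbers are illustrative.

def touchdown_schedule(primer_tm: float, start_above: float = 5.0,
                       end_below: float = 5.0, step: float = 0.5,
                       total_cycles: int = 35) -> list[float]:
    """Return the annealing temperature to use in each cycle."""
    temps = []
    current = primer_tm + start_above
    floor = primer_tm - end_below
    for _ in range(total_cycles):
        temps.append(round(current, 1))
        current = max(floor, current - step)
    return temps

if __name__ == "__main__":
    schedule = touchdown_schedule(primer_tm=60.0)
    print("first five cycles:", schedule[:5])   # 65.0, 64.5, 64.0, ...
    print("last five cycles: ", schedule[-5:])  # held at the 55.0 floor
```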
History
A 1971 paper in the Journal of Molecular Biology by Kleppe and co-workers first
described a method using an enzymatic assay to replicate a short DNA template with primers
in vitro.[39] However, this early manifestation of the basic PCR principle did not receive much
attention, and the invention of the polymerase chain reaction in 1983 is generally credited to
Kary Mullis.[40]
At the core of the PCR method is the use of a suitable DNA polymerase able to withstand the
high temperatures of >90 °C (194 °F) required for separation of the two DNA strands in the
DNA double helix after each replication cycle. The DNA polymerases initially employed for
in vitro experiments presaging PCR were unable to withstand these high temperatures.[2] So
the early procedures for DNA replication were very inefficient, time consuming, and required
large amounts of DNA polymerase and continual handling throughout the process.
The discovery in 1976 of Taq polymerase — a DNA polymerase purified from the
thermophilic bacterium, Thermus aquaticus, which naturally lives in hot (50 to 80 °C (122 to
176 °F)) environments[10] such as hot springs — paved the way for dramatic improvements of
the PCR method. The DNA polymerase isolated from T. aquaticus is stable at high
temperatures remaining active even after DNA denaturation,[11] thus obviating the need to
add new DNA polymerase after each cycle.[3] This allowed an automated thermocycler-based
process for DNA amplification.
When Mullis developed the PCR in 1983, he was working in Emeryville, California for Cetus
Corporation, one of the first biotechnology companies. There, he was responsible for
synthesizing short chains of DNA. Mullis has written that he conceived of PCR while
cruising along the Pacific Coast Highway one night in his car.[41] He was playing in his mind
with a new way of analyzing changes (mutations) in DNA when he realized that he had
instead invented a method of amplifying any DNA region through repeated cycles of
duplication driven by DNA polymerase. In Scientific American, Mullis summarized the
procedure: "Beginning with a single molecule of the genetic material DNA, the PCR can
generate 100 billion similar molecules in an afternoon. The reaction is easy to execute. It
requires no more than a test tube, a few simple reagents, and a source of heat."[42] He was
awarded the Nobel Prize in Chemistry in 1993 for his invention,[4] seven years after he and
his colleagues at Cetus first put his proposal to practice. However, some controversies have
remained about the intellectual and practical contributions of other scientists to Mullis' work,
and whether he had been the sole inventor of the PCR principle (see below).
Patent wars
The PCR technique was patented by Kary Mullis and assigned to Cetus Corporation, where
Mullis worked when he invented the technique in 1983. The Taq polymerase enzyme was
also covered by patents. There have been several high-profile lawsuits related to the
technique, including an unsuccessful lawsuit brought by DuPont. The pharmaceutical
company Hoffmann-La Roche purchased the rights to the patents in 1992 and currently holds
those that are still protected.
A related patent battle over the Taq polymerase enzyme is still ongoing in several
jurisdictions around the world between Roche and Promega. The legal arguments have
extended beyond the lives of the original PCR and Taq polymerase patents, which expired on
March 28, 2005.[43]
Adaptive immune system
A scanning electron microscope (SEM) image of a single human lymphocyte.
The adaptive immune system is composed of highly specialized, systemic cells and
processes that eliminate or prevent pathogenic challenges. Thought to have arisen in the
first jawed vertebrates, the adaptive or "specific" immune system is activated by the “nonspecific” and evolutionarily older innate immune system (which is the major system of host
defense against pathogens in nearly all other living things). The adaptive immune response
provides the vertebrate immune system with the ability to recognize and remember specific
pathogens (to generate immunity), and to mount stronger attacks each time the pathogen is
encountered. It is called adaptive immunity because the body's immune system prepares itself
for future challenges.
The system is highly adaptable because of somatic hypermutation (a process of accelerated
somatic mutations), and V(D)J recombination (an irreversible genetic recombination of
antigen receptor gene segments). This mechanism allows a small number of genes to
generate a vast number of different antigen receptors, which are then uniquely expressed
on each individual lymphocyte. Because the gene rearrangement leads to an irreversible
change in the DNA of each cell, all of the progeny (offspring) of that cell will then inherit
genes encoding the same receptor specificity, including the Memory B cells and Memory T
cells that are the keys to long-lived specific immunity. Immune network theory is a theory of
how the adaptive immune system works that is based on interactions between the variable
regions of the receptors of T cells, B cells, and molecules made by T cells and B cells that
have variable regions.
Functions
Adaptive immunity is triggered in vertebrates when a pathogen evades the innate immune
system and generates a threshold level of antigen.[1]
The major functions of the adaptive immune system include:
- the recognition of specific "non-self" antigens in the presence of "self", during the process of antigen presentation.
- the generation of responses that are tailored to maximally eliminate specific pathogens or pathogen-infected cells.
- the development of immunological memory, in which each pathogen is "remembered" by a signature antibody. These memory cells can be called upon to quickly eliminate a pathogen should subsequent infections occur.
Effector cells
The cells of the adaptive immune system are a type of leukocyte, called a lymphocyte. B
cells and T cells are the major types of lymphocytes. The human body has about 2 trillion
lymphocytes, constituting 20-40% of white blood cells (WBCs); their total mass is about the
same as the brain or liver.[2] The peripheral blood contains 20–50% of circulating
lymphocytes; the rest move within the lymphatic system.[2]
B cells and T cells are derived from the same multipotent hematopoietic stem cells, and are
indistinguishable from one another until after they are activated.[3] B cells play a large role in
the humoral immune response, whereas T-cells are intimately involved in cell-mediated
immune responses. In nearly all vertebrates, B cells (and T-cells) are
produced by stem cells in the bone marrow.[3] T-cells travel to and develop in the thymus,
from which they derive their name. In humans, approximately 1-2% of the lymphocyte pool
recirculates each hour to optimize the opportunities for antigen-specific lymphocytes to find
their specific antigen within the secondary lymphoid tissues.[4]
In an adult animal, the peripheral lymphoid organs contain a mixture of B and T cells in at
least three stages of differentiation:
- naive cells that have matured, left the bone marrow or thymus, and entered the lymphatic system, but that have yet to encounter their cognate antigen;
- effector cells that have been activated by their cognate antigen and are actively involved in eliminating a pathogen; and
- memory cells, the long-lived survivors of past infections.
Antigen presentation
Adaptive immunity relies on the capacity of immune cells to distinguish between the body's
own cells and unwanted invaders.
Association between TCR and MHC class I or MHC class II
The host's cells express "self" antigens. These antigens are different from those on the
surface of bacteria ("non-self" antigens) or on the surface of virally infected host cells
(“missing-self”). The adaptive response is triggered by recognizing non-self and missing-self
antigens.
With the exception of non-nucleated cells (including erythrocytes), all cells are capable of
presenting antigen and of activating the adaptive response.[3] Some cells are specially
equipped to present antigen, and to prime naive T cells. Dendritic cells and B-cells (and to a
lesser extent macrophages) are equipped with special immunostimulatory receptors that
allow for enhanced activation of T cells, and are termed professional antigen presenting
cells (APC).
Several T cell subgroups can be activated by professional APCs, and each type of T cell is
specially equipped to deal with each unique toxin or bacterial and viral pathogen. The type
of T cell activated, and the type of response generated depends, in part, on the context in
which the APC first encountered the antigen.[1]
Exogenous antigens
Antigen presentation stimulates T cells to become either "cytotoxic" CD8+ cells or "helper" CD4+
cells [5]
Dendritic cells engulf exogenous pathogens, such as bacteria, parasites or toxins in the
tissues and then migrate, via chemotactic signals, to the T cell enriched lymph nodes. During
migration, dendritic cells undergo a process of maturation in which they lose most of their
ability to engulf other pathogens and develop an ability to communicate with T-cells. The
dendritic cell uses enzymes to chop the pathogen into smaller pieces, called antigens. In the
lymph node, the dendritic cell will display these "non-self" antigens on its surface by
coupling them to a "self"-receptor called the Major histocompatibility complex, or MHC
(also known in humans as Human leukocyte antigen (HLA)).[1] This MHC:antigen complex is
recognized by T-cells passing through the lymph node. Exogenous antigens are usually
displayed on MHC class II molecules, which activate CD4+ helper T-cells.[1]
Endogenous antigens
Endogenous antigens are produced by viruses replicating within a host cell.[1] The host cell
uses enzymes to digest virally associated proteins, and displays these pieces on its surface to
T-cells by coupling them to MHC. Endogenous antigens are typically displayed on MHC class
I molecules, and activate CD8+ cytotoxic T-cells. With the exception of non-nucleated cells
(including erythrocytes), MHC class I is expressed by all host cells.[1]
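The routing described in the last two subsections reduces, at its simplest, to a two-line rule: exogenous antigen is displayed on MHC class II and activates CD4+ helper T cells, while endogenous antigen is displayed on MHC class I and activates CD8+ cytotoxic T cells. The toy mapping below only restates that rule; real antigen presentation (cross-presentation, for instance) is far less tidy.

```python
# Toy restatement of the rule above: exogenous antigen -> MHC class II ->
# CD4+ helper T cells; endogenous antigen -> MHC class I -> CD8+ cytotoxic
# T cells. Purely illustrative; real presentation pathways are messier.

PRESENTATION_RULES = {
    "exogenous":  {"mhc": "MHC class II", "t_cell": "CD4+ helper T cell"},
    "endogenous": {"mhc": "MHC class I",  "t_cell": "CD8+ cytotoxic T cell"},
}

def presentation(antigen_origin: str) -> str:
    rule = PRESENTATION_RULES[antigen_origin]
    return (f"{antigen_origin} antigen -> displayed on {rule['mhc']} "
            f"-> activates {rule['t_cell']}")

if __name__ == "__main__":
    for origin in ("exogenous", "endogenous"):
        print(presentation(origin))
```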
T lymphocytes
CD8+ T lymphocytes and cytotoxicity
Cytotoxic T cells (also known as TC, killer T cell, or cytotoxic T-lymphocyte (CTL)) are a subgroup of T cells which induce the death of cells that are infected with viruses (and other
pathogens), or are otherwise damaged or dysfunctional.[1]
Killer T cells, also called cytotoxic T lymphocytes or CTLs, directly attack other cells carrying certain
foreign or abnormal molecules on their surfaces.[5]
Naive cytotoxic T cells are activated when their T-cell receptor (TCR) strongly interacts with
a peptide-bound MHC class I molecule. This affinity depends on the type and orientation of
the antigen/MHC complex, and is what keeps the CTL and infected cell bound together. [1]
Once activated, the CTL undergoes a process called clonal expansion in which it gains
functionality and divides rapidly to produce an army of "armed" effector cells. Activated
CTL will then travel throughout the body in search of cells bearing that unique MHC Class I +
peptide.
When exposed to these infected or dysfunctional somatic cells, effector CTL release perforin
and granulysin: cytotoxins which form pores in the target cell's plasma membrane, allowing
ions and water to flow into the infected cell, and causing it to burst or lyse.[1] CTL release
granzyme, a serine protease that enters cells via pores to induce apoptosis (cell death). To
limit extensive tissue damage during an infection, CTL activation is tightly controlled and
generally requires a very strong MHC/antigen activation signal, or additional activation
signals provided by "helper" T-cells (see below).[1]
Upon resolution of the infection, most of the effector cells will die and be cleared away by
phagocytes, but a few of these cells will be retained as memory cells.[3] Upon a later
encounter with the same antigen, these memory cells quickly differentiate into effector
cells, dramatically shortening the time required to mount an effective response.
Innate immune system
The innate immune system comprises the cells and mechanisms that defend the host from
infection by other organisms, in a non-specific manner. This means that the cells of the
innate system recognize and respond to pathogens in a generic way, but, unlike the adaptive
immune system, the innate system does not confer long-lasting or protective immunity to the host.[1] Innate
immune systems provide immediate defense against infection, and are found in all classes
of plant and animal life.
The innate system is thought to constitute an evolutionarily older defense strategy, and is
the dominant immune system found in plants, fungi, insects, and in primitive multicellular
organisms.[2]
The major functions of the vertebrate innate immune system include:
- Recruiting immune cells to sites of infection, through the production of chemical factors, including specialized chemical mediators called cytokines.
- Activation of the complement cascade to identify bacteria, activate cells and promote clearance of dead cells or antibody complexes.
- The identification and removal of foreign substances present in organs, tissues, the blood and lymph, by specialized white blood cells.
- Activation of the adaptive immune system through a process known as antigen presentation.
Anatomical barriers
Anatomical barrier: additional defense mechanisms
Skin: sweat, desquamation, flushing,[3] organic acids[3]
Gastrointestinal tract: peristalsis, gastric acid, bile acids, digestive enzymes, flushing, thiocyanate,[3] defensins,[3] gut flora[3]
Respiratory airways and lungs: mucociliary escalator, surfactant,[3] defensins[3]
Nasopharynx: mucus, saliva, lysozyme[3]
Eyes: tears[3]
The epithelial surfaces form a physical barrier that is largely impermeable to most infectious
agents, acting as the first line of defense against invading organisms.[3] Desquamation of
skin epithelium also helps remove bacteria and other infectious agents that have adhered to
the epithelial surfaces.[3] In the gastrointestinal and respiratory tracts, movement due to
peristalsis or cilia helps remove infectious agents,[3] and mucus traps infectious agents.[3]
The gut flora can prevent the colonization of pathogenic bacteria by secreting toxic
substances or by competing with pathogenic bacteria for nutrients or attachment to cell
surfaces.[3] The flushing action of tears and saliva helps prevent infection of the eyes and
mouth. [3]
Inflammation
Inflammation is one of the first responses of the immune system to infection or irritation.
Inflammation is stimulated by chemical factors released by injured cells and serves to
establish a physical barrier against the spread of infection, and to promote healing of any
damaged tissue following the clearance of pathogens.[4]
Chemical factors produced during inflammation (histamine, bradykinin, serotonin,
leukotrienes, and prostaglandins) sensitize pain receptors, cause vasodilation of the blood
vessels at the scene, and attract phagocytes, especially neutrophils.[4] Neutrophils then
trigger other parts of the immune system by releasing factors that summon other
leukocytes and lymphocytes.
The inflammatory response is characterized by the following symptoms: redness, heat,
swelling, pain, and possible dysfunction of the organs or tissues involved.
Complement system
The complement system is a biochemical cascade of the immune system that helps, or
“complements”, the ability of antibodies to clear pathogens or mark them for destruction by
other cells. The cascade is composed of many plasma proteins, synthesized in the liver,
primarily by hepatocytes. The proteins work together to:
- trigger the recruitment of inflammatory cells;
- "tag" pathogens for destruction by other cells by opsonizing, or coating, the surface of the pathogen;
- disrupt the plasma membrane of an infected cell, resulting in cytolysis of the infected cell and causing the death of the pathogen; and
- rid the body of neutralized antigen-antibody complexes.
Elements of the complement cascade can be found in many nonmammalian species
including plants, birds, fish and some species of invertebrates.[5]
Cells of the innate immune response
A scanning electron microscope image of normal circulating human blood. One can see red blood
cells, several knobby white blood cells including lymphocytes, a monocyte, a neutrophil, and many
small disc-shaped platelets.
All white blood cells (WBC) are known as leukocytes. Leukocytes are different from other
cells of the body in that they are not tightly associated with a particular organ or tissue;
thus, they function similarly to independent, single-celled organisms. Leukocytes are able to
move freely and interact with and capture cellular debris, foreign particles, or invading
microorganisms. Unlike many other cells in the body, most innate immune leukocytes
cannot divide or reproduce on their own, but are the products of multipotent
hematopoietic stem cells present in the bone marrow.[1]
The innate leukocytes include natural killer cells, mast cells, eosinophils, basophils, and the
phagocytic cells (macrophages, neutrophils and dendritic cells); they function within
the immune system by identifying and eliminating pathogens that might cause infection.[2]
Mast cells
Mast cells are a type of innate immune cell that resides in connective tissue and in the
mucous membranes. They are intimately associated with defense against pathogens and with
wound healing, but are also often associated with allergy and anaphylaxis.[4] When activated, mast
cells rapidly release characteristic granules, rich in histamine and heparin, along with various
hormonal mediators, and chemokines, or chemotactic cytokines into the environment.
Histamine dilates blood vessels, causing the characteristic signs of inflammation, and
recruits neutrophils and macrophages.[4]
Phagocytes
The word 'phagocyte' literally means 'eating cell'. These are immune cells that engulf, i.e.
phagocytose, pathogens or particles. To engulf a particle or pathogen, a phagocyte extends
portions of its plasma membrane, wrapping the membrane around the particle until it is
enveloped (i.e. the particle is now inside the cell). Once inside the cell, the invading
pathogen is contained inside an endosome which merges with a lysosome.[2] The lysosome
contains enzymes and acids that kill and digest the particle or organism. Phagocytes
generally patrol the body searching for pathogens, but are also able to react to a group of
highly specialized molecular signals produced by other cells, called cytokines. The
phagocytic cells of the immune system include macrophages, neutrophils, and dendritic
cells.
Phagocytosis of the host's own cells is common as part of regular tissue development and
maintenance. When host cells die, either internally induced by processes involving
programmed cell death (also called apoptosis), or caused by cell injury due to a bacterial or
viral infection, phagocytic cells are responsible for their removal from the affected site. [1] By
helping to remove dead cells preceding growth and development of new healthy cells,
phagocytosis is an important part of the healing process following tissue injury.
A macrophage
Macrophages
Macrophages, from the Greek, meaning "large eaters," are large phagocytic leukocytes,
which are able to move outside of the vascular system by moving across the cell membrane
of capillary vessels and entering the areas between cells in pursuit of invading pathogens. In
tissues, organ-specific macrophages are differentiated from phagocytic cells present in the
blood called monocytes. Macrophages are the most efficient phagocytes, and can
phagocytose substantial numbers of bacteria or other cells or microbes. [2] The binding of
bacterial molecules to receptors on the surface of a macrophage triggers it to engulf and
destroy the bacteria through the generation of a “respiratory burst”, causing the release of
reactive oxygen species. Pathogens also stimulate the macrophage to produce chemokines,
which summon other cells to the site of infection.[2]
Neutrophils
A neutrophil
Neutrophils, along with two other cell types, eosinophils and basophils (see below), are
known as granulocytes due to the presence of granules in their cytoplasm, or as
polymorphonuclear cells (PMNs) due to their distinctive lobed nuclei. Neutrophil granules
contain a variety of toxic substances that kill or inhibit growth of bacteria and fungi. Similar
to macrophages, neutrophils attack pathogens by activating a respiratory burst. The main
products of the neutrophil respiratory burst are strong oxidizing agents including hydrogen
peroxide, free oxygen radicals and hypochlorite. Neutrophils are the most abundant type of
phagocyte, normally representing 50 to 60% of the total circulating leukocytes, and are
usually the first cells to arrive at the site of an infection.[4] The bone marrow of a normal
healthy adult produces more than 100 billion neutrophils per day, and more than 10 times
that many per day during acute inflammation.[4]
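These production figures are easier to appreciate with a little arithmetic. The sketch below is a back-of-the-envelope estimate based only on the numbers quoted above; the blood volume and total leukocyte count it uses are typical textbook values assumed purely for illustration, not figures from this text.

```python
# Back-of-the-envelope neutrophil arithmetic using the figures quoted above.
# Blood volume (~5 L) and total leukocyte count (~7e9 per litre) are assumed
# illustrative textbook values, not taken from this document.

SECONDS_PER_DAY = 24 * 60 * 60

baseline_per_day = 100e9                  # >100 billion neutrophils per day
inflamed_per_day = 10 * baseline_per_day  # >10-fold higher in acute inflammation

print(f"Baseline production: {baseline_per_day / SECONDS_PER_DAY:,.0f} cells/second")
print(f"Acute inflammation:  {inflamed_per_day / SECONDS_PER_DAY:,.0f} cells/second")

# Roughly how many neutrophils are circulating at any one moment?
blood_volume_l = 5.0          # assumed adult blood volume
leukocytes_per_l = 7e9        # assumed total leukocyte count
neutrophil_fraction = 0.55    # midpoint of the 50-60% quoted above
circulating = blood_volume_l * leukocytes_per_l * neutrophil_fraction
print(f"Circulating neutrophils: ~{circulating:.1e}")
```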
Dendritic cells
Dendritic cells (DC) are phagocytic cells present in tissues that are in contact with the
external environment, mainly the skin (where they are often called Langerhans cells), and
the inner mucosal lining of the nose, lungs, stomach and intestines.[1] They are named for
their resemblance to neuronal dendrites, but dendritic cells are not connected to the
nervous system. Dendritic cells are very important in the process of antigen presentation,
and serve as a link between the innate and adaptive immune systems.
An eosinophil
Basophils and eosinophils
Basophils and eosinophils are cells related to the neutrophil (see above). When activated by
a pathogen encounter, basophils release histamine and are important in defense against
parasites; they also play a role in allergic reactions (such as asthma).[2] Upon activation,
eosinophils secrete a range of highly toxic proteins and free radicals that are highly effective
in killing bacteria and parasites, but are also responsible for tissue damage occurring during
allergic reactions. Activation and toxin release by eosinophils is therefore tightly regulated
to prevent any inappropriate tissue destruction.[4]
Natural killer cells
Natural killer cells, or NK cells, are a component of the innate immune system which does
not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such
as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing
self." This term describes cells with low levels of a cell-surface marker called MHC I (major
histocompatibility complex) - a situation that can arise in viral infections of host cells.[5] They
were named "natural killer" because of the initial notion that they do not require activation
in order to kill cells that are "missing self."
γδ T cells
Like other 'unconventional' T cell subsets bearing invariant T cell receptors (TCRs), such as
CD1d-restricted Natural Killer T cells, γδ T cells exhibit characteristics that place them at the
border between innate and adaptive immunity. On one hand, γδ T cells may be considered a
component of adaptive immunity in that they rearrange TCR genes to produce junctional
diversity and develop a memory phenotype. However, the various subsets may also be
considered part of the innate immune system where a restricted TCR or NK receptors may
be used as a pattern recognition receptor. For example, according to this paradigm, large
numbers of Vγ9/Vδ2 T cells respond within hours to common molecules produced by
microbes, and highly restricted intraepithelial Vδ1 T cells will respond to stressed epithelial
cells.
Other vertebrate mechanisms
The coagulation system overlaps with the immune system. Some products of the
coagulation system can contribute to the non-specific defenses by their ability to increase
vascular permeability and act as chemotactic agents for phagocytic cells. In addition, some
of the products of the coagulation system are directly antimicrobial. For example, beta-lysine, a protein produced by platelets during coagulation, can cause lysis of many Gram-positive bacteria by acting as a cationic detergent.[3] Many acute-phase proteins of
inflammation are involved in the coagulation system.
Also increased levels of lactoferrin and transferrin inhibit bacterial growth by binding iron,
an essential nutrient for bacteria.[3]
Pathogen-specificity
The parts of the innate immune system have different specificity for different pathogens.
Pathogen (main examples[6]): phagocytosis[6] / complement[6] / NK cells[6]
Intracellular and cytoplasmic viruses (influenza, mumps, measles, rhinovirus): no / no / yes
Intracellular bacteria (Listeria monocytogenes, Legionella, Mycobacterium, Rickettsia): yes (specifically neutrophils; no for Rickettsia) / no / yes (no for Rickettsia)
Extracellular bacteria (Staphylococcus, Streptococcus, Neisseria, Salmonella typhi): yes / yes / no
Intracellular protozoa (Plasmodium malariae, Leishmania donovani): no / no / no
Extracellular protozoa (Entamoeba histolytica, Giardia lamblia): yes / yes / no
Extracellular fungi (Candida, Histoplasma, Cryptococcus): no / yes / yes
Innate immune evasion
Cells of the innate immune system effectively prevent free growth of bacteria within the
body; however, many pathogens have evolved mechanisms allowing them to evade the
innate immune system.[7][8]
Evasion strategies that circumvent the innate immune system include intracellular
replication, such as in Salmonella, or a protective capsule that prevents lysis by complement
and by phagocytes, as in Mycobacterium tuberculosis.[9] Bacteroides species are normally
mutualistic bacteria, making up a substantial portion of the mammalian gastrointestinal
flora.[10] Some species (B. fragilis, for example) are opportunistic pathogens, causing
infections of the peritoneal cavity. These species evade the immune system through
inhibition of phagocytosis by affecting the receptors that phagocytes use to engulf bacteria
or by mimicking host cells so that the immune system does not recognize them as foreign.
Staphylococcus aureus inhibits the ability of the phagocyte to respond to chemokine signals.
Other organisms such as M. tuberculosis, Streptococcus pyogenes and Bacillus anthracis
utilize mechanisms that directly kill the phagocyte.
Bacteria and fungi may also form complex biofilms, providing protection from the cells and
proteins of the immune system; recent studies indicate that such biofilms are present in
many successful infections, including the chronic Pseudomonas aeruginosa and Burkholderia
cenocepacia infections characteristic of cystic fibrosis.[11]
Innate immunity in other species
Host defense in prokaryotes
Bacteria, and perhaps other prokaryotic organisms, utilize a unique defense mechanism,
called the restriction modification system, to protect themselves from pathogens such as
bacteriophages. In this system, bacteria produce enzymes, called restriction endonucleases,
that attack and destroy specific regions of the viral DNA of invading bacteriophages.
Methylation of the host's own DNA marks it as "self" and prevents it from being attacked by
endonucleases.[12] Restriction endonucleases and the restriction modification system exist
exclusively in prokaryotes.
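The logic of this restriction-modification defense (cut any recognition site that is not protected by methylation) can be sketched in a few lines of code. The toy example below only illustrates the principle and is not a model of real enzyme biochemistry: it uses EcoRI's recognition site GAATTC as the example enzyme, and the sequence and methylated positions are invented.

```python
# Toy model of a restriction-modification system: cut DNA at every recognition
# site that is not marked as methylated ("self"). EcoRI (GAATTC) is the example.

def find_sites(dna: str, site: str = "GAATTC") -> list[int]:
    """Return the start position of every occurrence of the recognition site."""
    return [i for i in range(len(dna) - len(site) + 1) if dna[i:i + len(site)] == site]

def digest(dna: str, methylated: set[int], site: str = "GAATTC") -> list[str]:
    """Cut at every unmethylated site; methylated (host) sites are spared."""
    fragments, start = [], 0
    for pos in find_sites(dna, site):
        if pos in methylated:                 # host DNA is methylated, so it is spared
            continue
        fragments.append(dna[start:pos + 1])  # EcoRI cuts between G and AATTC
        start = pos + 1
    fragments.append(dna[start:])
    return fragments

phage = "ATGAATTCGGCTAGAATTCCTA"  # invented invading phage sequence with two EcoRI sites
print(digest(phage, methylated=set()))    # unprotected: two cuts, three fragments
print(digest(phage, methylated={2, 13}))  # both sites methylated: DNA left intact
```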
Host defense in invertebrates
Invertebrates do not possess lymphocytes or an antibody-based humoral immune system,
and it is likely that a multicomponent, adaptive immune system arose with the first
vertebrates.[13] Nevertheless, invertebrates possess mechanisms that appear to be
precursors of these aspects of vertebrate immunity. Pattern recognition receptors are
proteins used by nearly all organisms to identify molecules associated with microbial
pathogens. Toll-like receptors are a major class of pattern recognition receptor that exists in
all coelomates (animals with a body cavity), including humans.[14] The complement system,
as discussed above, is a biochemical cascade of the immune system that helps clear
pathogens from an organism, and exists in most forms of life. Some invertebrates, including
various insects, crabs, and worms utilize a modified form of the complement response
known as the prophenoloxidase (proPO) system.[13]
Antimicrobial peptides are an evolutionarily conserved component of the innate immune
response found among all classes of life and represent the main form of invertebrate
systemic immunity. Several species of insect produce antimicrobial peptides known as
defensins and cecropins.
Proteolytic cascades
In invertebrates, pattern recognition proteins (PRPs) trigger proteolytic cascades that
degrade proteins and control many of the mechanisms of the innate immune system of
invertebrates, including hemolymph coagulation and melanization. Proteolytic cascades
are important components of the invertebrate immune system because they can be activated
more rapidly than other innate immune reactions, since they do not rely on changes in gene
expression. Proteolytic cascades have been found to function similarly in vertebrates and
invertebrates, even though different proteins are used throughout the cascades.[15]
Clotting mechanisms
In the hemolymph, which makes up the fluid in the circulatory system of arthropods, a gel-like fluid surrounds pathogen invaders, similar to the way blood does in other animals. There are various different
proteins and mechanisms that are involved in invertebrate clotting. In crustaceans,
transglutaminase from blood cells and mobile plasma proteins make up the clotting system,
where the transglutaminase polymerizes 210 kDa subunits of a plasma clotting protein. On
the other hand, in the horseshoe crab clotting system, components of proteolytic
cascades are stored as inactive forms in granules of hemocytes, which are released when
foreign molecules, such as lipopolysaccharides, enter.[15]
Host defense in plants
Members of every class of pathogen which infect humans also infect plants. Although the
exact pathogenic species vary with the infected species, bacteria, fungi, viruses, nematodes
and insects can all cause plant disease. As with animals, plants attacked by insects or other
pathogens use a set of complex metabolic responses that lead to the formation of defensive
chemical compounds that fight infection or make the plant less attractive to insects and
other herbivores.[16] (see: plant defense against herbivory).
Like invertebrates, plants neither generate antibody or T-cell responses nor possess mobile
cells that detect and attack pathogens. In addition, in case of infection, parts of some plants
are treated as disposable and replaceable, in ways that very few animals are able to do.
Walling off or discarding a part of a plant helps stop spread of an infection.[16]
Most plant immune responses involve systemic chemical signals sent throughout a plant.
Plants use pattern-recognition receptors to identify pathogens and to start a basal response,
which produces chemical signals that aid in warding off infection. When a part of a plant
becomes infected with a microbial or viral pathogen, in case of an incompatible interaction
triggered by specific elicitors, the plant produces a localized hypersensitive response (HR), in
which cells at the site of infection undergo rapid programmed cell death to prevent the
spread of the disease to other parts of the plant. HR has some similarities to animal
pyroptosis, such as a requirement of caspase-1-like proteolytic activity of VPEγ, a cysteine
protease that regulates cell disassembly during cell death.[17]
"Resistance" (R) proteins, encoded by R genes, are widely present in plants and detect
pathogens. These proteins contain domains similar to the NOD Like Receptors and Toll-like
receptors utilized in animal innate immunity. Systemic acquired resistance (SAR) is a type of
defensive response that renders the entire plant resistant to a broad spectrum of infectious
agents.[18] SAR involves the production of chemical messengers, such as salicylic acid or
jasmonic acid. Some of these travel through the plant and signal other cells to produce
defensive compounds to protect uninfected parts, e.g., leaves.[19] Salicylic acid itself,
although indispensable for expression of SAR, is not the translocated signal responsible for
the systemic response. Recent evidence indicates a role for jasmonates in transmission of
the signal to distal portions of the plant. RNA silencing mechanisms are also important in the
plant systemic response, as they can block virus replication.[20] The jasmonic acid response
is stimulated in leaves damaged by insects, and involves the production of methyl
jasmonate.[16]
CD4+ T lymphocytes and helper T cells
The T lymphocyte activation pathway. T cells contribute to immune defenses in two major ways:
some direct and regulate immune responses; others directly attack infected or cancerous cells.[5]
CD4+ lymphocytes, or helper T cells, are immune response mediators, and play an
important role in establishing and maximizing the capabilities of the adaptive immune
response.[1] These cells have no cytotoxic or phagocytic activity and cannot kill infected cells
or clear pathogens; instead, they in essence "manage" the immune response by directing other cells
to perform these tasks.
Helper T cells express T-cell receptors (TCR) that recognize antigen bound to Class II MHC
molecules. The activation of a naive helper T-cell causes it to release cytokines, which
influence the activity of many cell types, including the APC that activated it. Helper T-cells
require a much milder activation stimulus than cytotoxic T-cells. Helper T-cells can provide
extra signals that "help" activate cytotoxic cells.[3]
Th1 and Th2: helper T cell responses
Two types of effector CD4+ T helper cell responses can be induced by a professional APC,
designated Th1 and Th2, each designed to eliminate different types of pathogens. The
factors that dictate whether an infection will trigger a Th1 or Th2 type response are not fully
understood, but the response generated does play an important role in the clearance of
different pathogens.[1]
The Th1 response is characterized by the production of Interferon-gamma, which activates
the bactericidal activities of macrophages, and induces B-cells to make opsonizing (coating)
antibodies, and leads to "cell-mediated immunity".[1] The Th2 response is characterized by
the release of Interleukin 4, which results in the activation of B-cells to make neutralizing
antibodies, leading to "humoral immunity".[1] Generally, Th1 responses are more
effective against intracellular pathogens (viruses and bacteria that are inside host cells),
while Th2 responses are more effective against extracellular bacteria, parasites and toxins[1].
Like cytotoxic T-cells, most of the CD4+ helper cells will die upon resolution of infection,
with a few remaining as CD4+ memory cells.
HIV is able to subvert the immune system by attacking the CD4+ T cells, precisely the cells
that could drive the destruction of the virus, but also the cells that drive immunity against all
other pathogens encountered during an organism's lifetime.[3]
A third type of T lymphocyte, the regulatory T cells (Treg), limits and suppresses the immune
system, and may control aberrant immune responses to self-antigens; an important
mechanism in controlling the development of autoimmune diseases.[3]
T cells never act as antigen-presenting cells.
γδ T cells
γδ T cells (gamma delta cells) possess an alternative T cell receptor (TCR) as opposed to
CD4+ and CD8+ αβ T cells and share characteristics of helper T cells, cytotoxic T cells and
natural killer cells. Like other 'unconventional' T cell subsets bearing invariant TCRs, such as
CD1d-restricted Natural Killer T cells, γδ T cells exhibit characteristics that place them at the
border between innate and adaptive immunity. On one hand, γδ T cells may be considered a
component of adaptive immunity in that they rearrange TCR genes via V(D)J recombination,
which also produces junctional diversity, and develop a memory phenotype. On the other
hand, the various subsets may also be considered part of the innate immune
system where a restricted TCR and/or NK receptors may be used as a pattern recognition
receptor. For example, according to this paradigm, large numbers of Vγ9/Vδ2 T cells
respond within hours to common molecules produced by microbes, and highly restricted
intraepithelial Vδ1 T cells will respond to stressed epithelial cells.
B lymphocytes and antibody production
The B lymphocyte activation pathway. B cells function to protect the host by producing antibodies
that identify and neutralize foreign objects like bacteria and viruses.[5]
B Cells are the major cells involved in the creation of antibodies that circulate in blood
plasma and lymph, known as humoral immunity. Antibodies (or immunoglobulin, Ig), are
large Y-shaped proteins used by the immune system to identify and neutralize foreign
objects. In mammals there are five types of antibody: IgA, IgD, IgE, IgG, and IgM, differing in
biological properties; each has evolved to handle different kinds of antigens. Upon
activation, B cells produce antibodies, each of which recognizes a unique antigen and
neutralizes specific pathogens.[1]
Like the T cell receptor, B cells express a unique B cell receptor (BCR), in this case, an
immobilized antibody molecule. The BCR recognizes and binds to only one particular
antigen. A critical difference between B cells and T cells is how each cell "sees" an antigen. T
cells recognize their cognate antigen in a processed form - as a peptide in the context of an
MHC molecule,[1] while B cells recognize antigens in their native form.[1] Once a B cell
encounters its cognate (or specific) antigen (and receives additional signals from a helper T
cell (predominately Th2 type)), it further differentiates into an effector cell, known as a
plasma cell.[1]
Plasma cells are short-lived cells (2-3 days) which secrete antibodies. These antibodies bind
to antigens, making them easier targets for phagocytes, and trigger the complement
cascade.[1] About 10% of plasma cells will survive to become long-lived antigen specific
memory B cells.[1] Already primed to produce specific antibodies, these cells can be called
upon to respond quickly if the same pathogen re-infects the host; while the host
experiences few, if any, symptoms.
Alternative adaptive immune system
Although the classical molecules of the adaptive immune system (e.g. antibodies and T cell
receptors) exist only in jawed vertebrates, a distinct lymphocyte-derived molecule has been
discovered in primitive jawless vertebrates, such as the lamprey and hagfish. These animals
possess a large array of molecules called variable lymphocyte receptors (VLRs for short)
that, like the antigen receptors of jawed vertebrates, are produced from only a small
number (one or two) of genes. These molecules are believed to bind pathogenic antigens in
a similar way to antibodies, and with the same degree of specificity.[6]
Immunological memory
When B cells and T cells are activated some will become memory cells. Throughout the
lifetime of an animal these memory cells form a database of effective B and T lymphocytes.
Upon interaction with a previously encountered antigen, the appropriate memory cells are
selected and activated. In this manner, the second and subsequent exposures to an antigen
produce a stronger and faster immune response. This is "adaptive" because the body's
immune system prepares itself for future challenges. Immunological memory can either be
in the form of passive short-term memory or active long-term memory.
Passive memory
Passive memory is usually short-term, lasting between a few days and several months.
Newborn infants have had no prior exposure to microbes and are particularly vulnerable to
infection. Several layers of passive protection are provided by the mother. In utero,
maternal IgG is transported directly across the placenta, so that at birth, human babies have
high levels of antibodies, with the same range of antigen specificities as their mother. [1]
Breast milk contains antibodies that are transferred to the gut of the infant, protecting
against bacterial infections, until the newborn can synthesize its own antibodies.[1]
This is passive immunity because the fetus does not actually make any memory cells or
antibodies; it only borrows them. Short-term passive immunity can also be transferred
artificially from one individual to another via antibody-rich serum.
Active memory
Active immunity is generally long-term and can be acquired by infection followed by B cells
and T cells activation, or artificially acquired by vaccines, in a process called immunization.
Immunization
Historically, infectious disease has been the leading cause of death in the human population.
Over the last century, two important factors have been developed to combat its spread:
sanitation and immunization.[3] Immunization (commonly referred to as vaccination) is the
deliberate induction of an immune response, and represents the single most effective
manipulation of the immune system that scientists have developed.[3] Immunizations are
successful because they utilize the immune system's natural specificity as well as its
inducibility.
The principle behind immunization is to introduce an antigen, derived from a disease
causing organism, that stimulates the immune system to develop protective immunity
against that organism, but which does not itself cause the pathogenic effects of that
organism. An antigen (short for antibody generator), is defined as any substance that binds
to a specific antibody and elicits an adaptive immune response.[2]
Most viral vaccines are based on live attenuated viruses, while many bacterial vaccines are
based on acellular components of micro-organisms, including harmless toxin components.[2]
Many antigens derived from acellular vaccines do not strongly induce an adaptive response,
and most bacterial vaccines require the addition of adjuvants that activate the antigen
presenting cells of the innate immune system to enhance immunogenicity.[3]
Immunological diversity
Most large molecules, including virtually all proteins and many polysaccharides, can serve as
antigens.[1] The parts of an antigen that interact with an antibody molecule or a lymphocyte
receptor, are called epitopes. Most antigens contain a variety of epitopes and can stimulate
the production of antibodies, specific T cell responses, or both.[1]
An antibody is made up of two heavy chains and two light chains. The unique variable region allows
an antibody to recognize its matching antigen.[5]
A very small proportion (less than 0.01%) of the total lymphocytes are able to bind to a
particular antigen, which suggests that only a few cells will respond to each antigen.[3]
For the adaptive response to "remember" and eliminate a large number of pathogens the
immune system must be able to distinguish between many different antigens, [2] and the
receptors that recognize antigens must be produced in a huge variety of configurations,
essentially one receptor (at least) for each different pathogen that might ever be
encountered. Even in the absence of antigen stimulation, a human is capable of producing
more than 1 trillion different antibody molecules.[3] Millions of genes would be required to
store the genetic information used to produce these receptors, but the entire human
genome contains fewer than 25,000 genes.[7]
This myriad of receptors is produced through a process known as clonal selection.[1][2]
According to the clonal selection theory, at birth, an animal will randomly generate a vast
diversity of lymphocytes (each bearing a unique antigen receptor) from information
encoded in a small family of genes. In order to generate each unique antigen receptor, these
genes will have undergone a process called V(D)J recombination, or combinatorial
diversification, in which one gene segment recombines with other gene segments to form a
single unique gene. It is this assembly process that generates the enormous diversity of
receptors and antibodies, before the body ever encounters antigens, and enables the
immune system to respond to an almost unlimited diversity of antigens.[1] Throughout the
lifetime of an animal, those lymphocytes that can react against the antigens an animal
actually encounters, will be selected for action, directed against anything that expresses
that antigen.
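The scale of this combinatorial diversification is easy to illustrate with a rough count. In the sketch below, the gene-segment numbers (about 40 V, 23 D and 6 J segments for the heavy chain, and about 40 V and 5 J segments for the kappa light chain) are commonly quoted approximate human figures assumed for illustration, not values given in this text, and the calculation deliberately ignores junctional diversity and somatic hypermutation, which raise the real diversity far higher.

```python
# Rough combinatorial count of antibody diversity from V(D)J recombination.
# Segment counts are approximate human figures, used only for illustration.

heavy = {"V": 40, "D": 23, "J": 6}   # heavy-chain gene segments
kappa = {"V": 40, "J": 5}            # kappa light-chain gene segments

heavy_combinations = heavy["V"] * heavy["D"] * heavy["J"]   # 5,520
light_combinations = kappa["V"] * kappa["J"]                # 200

# Any heavy chain can, in principle, pair with any light chain.
total = heavy_combinations * light_combinations
print(f"Heavy-chain combinations: {heavy_combinations:,}")
print(f"Light-chain combinations: {light_combinations:,}")
print(f"Combinatorial diversity before junctional diversity: {total:,}")
# Imprecise joining of the segments (junctional diversity) multiplies this by
# several orders of magnitude, which is how more than 10^12 distinct antibodies
# can arise from only a few hundred gene segments.
```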
It is important to note that the innate and adaptive portions of the immune system work
together and not in spite of each other. The adaptive arm, B and T cells, would be unable to
function without the input of the innate system. T cells are useless without antigen-presenting
cells to activate them, and B cells are crippled without T-cell help. On the other
hand, the innate system would likely be overrun with pathogens without the specialized
action of the adaptive immune response.
Adaptive immunity during pregnancy
The cornerstone of the immune system is the recognition of "self" versus "non-self".
Therefore, the mechanisms which protect the human fetus (which is considered "non-self")
from attack by the immune system, are particularly interesting. Although no comprehensive
explanation has emerged to explain this mysterious, and often repeated, lack of rejection,
two classical reasons may explain how the fetus is tolerated. The first is that the fetus
occupies a portion of the body protected by a non-immunological barrier, the uterus, which
the immune system does not routinely patrol.[1] The second is that the fetus itself may
promote local immunosuppression in the mother, perhaps by a process of active nutrient
depletion.[1] A more modern explanation for this induction of tolerance is that specific
glycoproteins expressed in the uterus during pregnancy suppress the uterine immune
response (see eu-FEDS).
During pregnancy in viviparous mammals (all mammals except Monotremes), endogenous
retroviruses are activated and produced in high quantities during the implantation of the
embryo. They are currently known to possess immunosuppressive properties, suggesting a
role in protecting the embryo from its mother's immune system. Also viral fusion proteins
apparently cause the formation of the placental syncytium[8] in order to limit the exchange
of migratory cells between the developing embryo and the body of the mother (something
an epithelium will not do sufficiently, as certain blood cells are specialized to be able to
insert themselves between adjacent epithelial cells). The immunosuppressive action was the
initial, normal behavior of the virus, similar to HIV; the fusion proteins were a way to spread
the infection to other cells by simply merging them with the infected one (HIV does this
too). It is believed that the ancestors of modern viviparous mammals evolved after an
infection by this virus, enabling the fetus to survive the immune system of the mother.[9]
The Human Genome Project found several thousand endogenous retroviruses (ERVs), classified into 24 families.[10]
Microbiological culture
A culture of Bacillus anthracis
A microbiological culture, or microbial culture, is a method of multiplying microbial
organisms by letting them reproduce in predetermined culture media under controlled
laboratory conditions. Microbial cultures are used to determine the type of organism, its
abundance in the sample being tested, or both. It is one of the primary diagnostic methods of
microbiology and used as a tool to determine the cause of infectious disease by letting the
agent multiply in a predetermined medium. For example, a throat culture is taken by scraping
the lining of tissue in the back of the throat and blotting the sample onto a medium to be able to
screen for harmful microorganisms, such as Streptococcus pyogenes, the causative agent of
strep throat.[1] Furthermore, the term culture is more generally used informally to refer to
"selectively growing" a specific kind of microorganism in the lab.
Microbial cultures are foundational and basic diagnostic methods used extensively as a
research tool in molecular biology. It is often essential to isolate a pure culture of
microorganisms. A pure (or axenic) culture is a population of cells or multicellular organisms
growing in the absence of other species or types. A pure culture may originate from a single
cell or single organism, in which case the cells are genetic clones of one another.
For solidifying the microbial culture medium, agar is generally used. Agar is a gelatinous
substance derived from seaweed. A cheaper substitute for agar is guar gum, which can be used
for the isolation and maintenance of thermophiles.
Bacterial culture
Microbiological cultures utilize petri dishes of differing sizes that have a thin layer of agar-based
growth medium in them. Once the growth medium in the petri dish is inoculated with
the desired bacteria, the plates are incubated in an incubator, usually set at 37 degrees Celsius.
Another method of bacterial culture is liquid culture, in which case desired bacteria are
suspended in liquid broth, a nutrient medium. These are ideal for preparation of an
antimicrobial assay. The experimenter would inoculate liquid broth with bacteria and let it
grow overnight in a shaker for uniform growth, then take aliquots of the sample to test for the
antimicrobial activity of a specific drug or protein (antimicrobial peptides).
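The cell density of such a liquid culture is usually back-calculated from colony counts on a plated, serially diluted sample. The sketch below shows that standard CFU/mL calculation; the colony count, dilution factor and plated volume are invented numbers used only to illustrate the arithmetic.

```python
# Standard CFU/mL back-calculation for a liquid bacterial culture.
# All numeric values are invented for illustration.

def cfu_per_ml(colonies: int, dilution_factor: float, volume_plated_ml: float) -> float:
    """CFU/mL of the original culture = colonies / (dilution factor x volume plated)."""
    return colonies / (dilution_factor * volume_plated_ml)

colonies_counted = 187      # colonies on the plate (countable range is roughly 30-300)
dilution = 1e-6             # 10^-6 serial dilution of the overnight broth
plated_volume_ml = 0.1      # 100 microlitres spread on the agar plate

density = cfu_per_ml(colonies_counted, dilution, plated_volume_ml)
print(f"Original culture: {density:.2e} CFU/mL")   # 1.87e+09 CFU/mL in this example
```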
Virus and phage culture
Virus or phage cultures require host cells for the virus or phage to multiply in. For
bacteriophages, cultures are grown by infecting bacterial cells. The phage can then be
isolated from the resulting plaques in a lawn of bacteria on a plate. Virus cultures are
obtained from their appropriate eukaryotic host cells.
Eukaryotic cell culture
Isolation of pure cultures
For single-celled eukaryotes, such as yeast, the isolation of pure cultures uses the same
techniques as for bacterial cultures. Pure cultures of multicellular organisms are often more
easily isolated by simply picking out a single individual to initiate a culture. This is a useful
technique for pure culture of fungi, multicellular algae, and small metazoa, for example.
Developing pure culture techniques is crucial to the observation of the specimen in question.
The most common method to isolate individual cells and produce a pure culture is to prepare
a streak plate. The streak plate method is a way to physically separate the microbial
population, and is done by spreading the inoculum back and forth with an inoculating loop
over the solid agar plate. Upon incubation, colonies will arise and, hopefully, single cells will
have been isolated from the biomass.
Major histocompatibility complex
Protein images comparing the MHC I (1hsa) and MHC II (1dlh) molecules. (more details...)
The major histocompatibility complex (MHC) is a large genomic region or gene family
found in most vertebrates that encodes MHC molecules. MHC molecules play an important
role in the immune system and autoimmunity.
Proteins are continually synthesized in the cell. These include normal proteins (self) and
microbial invaders (nonself). An MHC molecule inside the cell takes a fragment of those
proteins and displays it on the cell surface. (The protein fragment is sometimes compared to a
hot dog, and the MHC protein to the bun.[1]) When the MHC-protein complex is displayed on
the surface of the cell, it can be presented to a nearby immune cell, usually a T cell or natural
killer (NK) cell. If the immune cell recognizes the protein as nonself, it can kill the infected
cell, and other infected cells displaying the same protein.[2]
Because MHC genes must defend against a great diversity of microbes in the environment,
with a great diversity of proteins, the MHC genes themselves must be diverse. The MHC is
the most gene-dense region of the mammalian genome. MHC genes vary greatly from
individual to individual; that is, the MHC alleles are highly polymorphic (diverse). This
polymorphism is adaptive in evolution because it increases the likelihood that at least some
individuals of a population will survive an epidemic.[2]
There are two general classes of MHC molecules: Class I and Class II. Class I MHC
molecules are found on almost all cells and present proteins to cytotoxic T cells. Class II
MHC molecules are found on certain immune cells themselves, chiefly macrophages and B
cells, also known as antigen-presenting cells (APCs). These APCs ingest microbes, destroy
them, and digest them into fragments. The Class II MHC molecules on the APCs present the
fragments to helper T cells, which stimulate an immune reaction from other cells.[2]
Classification
In humans, the 3.6-Mb (3 600 000 base pairs) MHC region on chromosome 6 contains 140
genes between flanking genetic markers MOG and COL11A2.[3] About half have known
immune functions (see human leukocyte antigen). The same markers in the marsupial
Monodelphis domestica (gray short-tailed opossum) span 3.95 Mb and contain 114 genes, 87
shared with humans.[4]
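Those figures translate into a simple gene-density comparison. The short calculation below uses only the numbers quoted in this paragraph.

```python
# Gene density of the MHC region in human versus opossum,
# using only the figures quoted in the paragraph above.

human_mb, human_genes = 3.6, 140        # human chromosome 6, MOG to COL11A2
opossum_mb, opossum_genes = 3.95, 114   # Monodelphis domestica, same flanking markers
shared_genes = 87

print(f"Human MHC:   {human_genes / human_mb:.1f} genes per Mb")
print(f"Opossum MHC: {opossum_genes / opossum_mb:.1f} genes per Mb")
print(f"Genes shared with human: {shared_genes} of {opossum_genes} "
      f"({100 * shared_genes / opossum_genes:.0f}%)")
```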
Subgroups
The MHC region is divided into three subgroups, class I, class II, and class III.
Name: MHC class I
Function: Encodes non-identical pairs (heterodimers) of peptide-binding proteins, as well as antigen-processing molecules such as TAP and Tapasin.
Expression: All nucleated cells. MHC class I proteins contain an α chain and β2-microglobulin (not part of the MHC; encoded on chromosome 15). They present antigen fragments to cytotoxic T-cells via the CD8 receptor on the cytotoxic T-cells and also bind inhibitory receptors on NK cells.

Name: MHC class II
Function: Encodes heterodimeric peptide-binding proteins and proteins that modulate antigen loading onto MHC class II proteins in the lysosomal compartment, such as MHC II DM, MHC II DQ, MHC II DR, and MHC II DP.
Expression: On most immune system cells, specifically on antigen-presenting cells. MHC class II proteins contain α and β chains and present antigen fragments to T-helper cells by binding to the CD4 receptor on the T-helper cells.

Name: MHC class III region
Function: Encodes other immune components, such as complement components (e.g., C2, C4, factor B), some genes that encode cytokines (e.g., TNF-α), and also hsp.
Expression: Variable (see below).
Class III has a function very different from that of class I and class II, but since it has a locus
between the other two (on chromosome 6 in humans), they are frequently discussed together.
Responses
The MHC proteins act as "signposts" that serve to alert the immune system if foreign material
is present inside a cell. They achieve this by displaying fragmented pieces of antigens on the
host cell's surface. These antigens may be self or nonself. If they are nonself, there are two
ways by which the foreign protein can be processed and recognized as being "nonself":
- Phagocytic cells such as macrophages, neutrophils, and monocytes degrade foreign particles that are engulfed during a process known as phagocytosis. Degraded particles are then presented on MHC Class II molecules.[5]
- If a host cell is infected by a bacterium or virus, or is cancerous, it may display the antigens on its surface with a Class I MHC molecule. In particular, cancerous cells and cells infected by a virus have a tendency to display unusual, nonself antigens on their surface. These nonself antigens, regardless of which type of MHC molecule they are displayed on, will initiate the specific immunity of the host's body.
Cells constantly process endogenous proteins and present them within the context of MHC I.
Immune effector cells are trained not to react to self peptides within MHC, and as such are
able to recognize when foreign peptides are being presented during an infection/cancer.
HLA genes
Codominant expression of HLA genes.
The best-known genes in the MHC region are the subset that encodes antigen-presenting
proteins on the cell surface. In humans, these genes are referred to as human leukocyte
antigen (HLA) genes; however, people often use the abbreviation MHC to refer to HLA gene
products. To clarify the usage, some of the biomedical literature uses HLA to refer
specifically to the HLA protein molecules and reserves MHC for the region of the genome
that encodes for this molecule. This convention is not consistently adhered to, however.
The most intensely studied HLA genes are the nine so-called classical MHC genes: HLA-A,
HLA-B, HLA-C, HLA-DPA1, HLA-DPB1, HLA-DQA1, HLA-DQB1, HLA-DRA, and
HLA-DRB1. In humans, the MHC is divided into three regions: Class I, II, and III. The A, B,
C, E, F, and G genes belong to MHC class I, whereas the six D genes belong to class II.
MHC genes are expressed in codominant fashion.[6] This means that the alleles (variants)
inherited from both progenitors are expressed in an equivalent way:
- As there are three Class-I genes, named in humans HLA-A, HLA-B and HLA-C, and as each person inherits a set of genes from each progenitor, any cell in an individual can express six different types of MHC-I molecules (see figure).
- In the Class-II locus, each person inherits a pair of HLA-DP genes (DPA1 and DPB1, which encode the α and β chains), a pair of HLA-DQ genes (DQA1 and DQB1, for the α and β chains), one HLA-DRα gene (DRA1) and one or two HLA-DRβ genes (DRB1 plus DRB3, -4 or -5). This means that one heterozygous individual can inherit six or eight Class-II alleles, three or four from each progenitor.
The set of alleles present on each chromosome is called the MHC haplotype. In
humans, each HLA allele is named with a number. For instance, for a given individual, the
haplotype might be HLA-A2, HLA-B5, HLA-DR3, and so on. Each heterozygous individual will
have two MHC haplotypes, one in each chromosome (one of paternal origin and the other of
maternal origin).
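The counting described above (two codominantly expressed haplotypes giving six Class-I molecules and six to eight Class-II gene products) can be made explicit in a short sketch. The allele names below are invented placeholders in the style of the example haplotype mentioned above, and the Class-II loci are simplified to one named product per locus.

```python
# Counting classical MHC gene products expressed under codominance, following
# the description above. Allele names are invented placeholders.

maternal = {"HLA-A": "A2", "HLA-B": "B5", "HLA-C": "Cw3",
            "HLA-DP": "DP4", "HLA-DQ": "DQ6", "HLA-DR": ["DR3"]}
paternal = {"HLA-A": "A1", "HLA-B": "B8", "HLA-C": "Cw7",
            "HLA-DP": "DP2", "HLA-DQ": "DQ2", "HLA-DR": ["DR4", "DR53"]}

class_i_loci = ["HLA-A", "HLA-B", "HLA-C"]
class_ii_loci = ["HLA-DP", "HLA-DQ", "HLA-DR"]

# Codominance: the alleles on both haplotypes are expressed at every locus.
class_i = ({maternal[locus] for locus in class_i_loci}
           | {paternal[locus] for locus in class_i_loci})

class_ii = set()
for locus in class_ii_loci:
    for haplotype in (maternal, paternal):
        alleles = haplotype[locus]
        class_ii.update(alleles if isinstance(alleles, list) else [alleles])

print(f"Class I molecules expressed: {len(class_i)}")   # 6 (two alleles at each of 3 loci)
print(f"Class II gene products:      {len(class_ii)}")  # 6-8 depending on the DRB genes
```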
The MHC genes are highly polymorphic; this means that there are many different alleles among
the individuals of a population. The polymorphism is so high that in a mixed
(non-endogamous) population no two individuals have exactly the same set of MHC
genes and molecules, with the exception of identical twins.
The polymorphic regions of each allele are located in the peptide-contact region, the part that
is displayed to the lymphocyte. For this reason, the contact region of each MHC allele
is highly variable: the polymorphic residues of the MHC create specific clefts into which
only certain types of peptide residues can fit. This imposes a
very specific link between the MHC molecule and the peptide, and it implies that each MHC
variant will be able to bind specifically only those peptides that fit properly into
the cleft of the MHC molecule, which is variable for each allele. In this way, MHC
molecules have a broad specificity, because they can bind many, but not all, types of possible
peptides. This is an essential characteristic of MHC molecules: in a given individual, a
few different molecules are enough to display a wide variety of peptides.
On the other hand, within a population the presence of many different alleles ensures that there
will almost always be an individual with a specific MHC molecule able to load the correct peptide
to recognize a specific microbe. The evolution of MHC polymorphism ensures that a
population will not succumb when faced with a new or mutated pathogen, because at least some
individuals will be able to develop an adequate immune response to defeat it.
The variations in the MHC molecules (responsible for the polymorphism) are the result of the
inheritance of different MHC molecules; they are not induced by recombination, as is
the case for the antigen receptors.
Because of the high levels of allelic diversity found within its genes, MHC has also attracted
the attention of many evolutionary biologists.
Molecular biology of MHC proteins
MHC molecules are anchored in the cell membrane at the bottom of the illustration; they can
then bind to immune cells at the top of the illustration. The MHC Class I molecule (left) on
most cells binds to the T-cell receptor (TCR) and CD8 receptor (top). The MHC Class II
molecule (right) on immune cells binds to the TCR and CD4 receptor on other immune cells
(top).
The classical MHC molecules (also referred to as HLA molecules in humans) have a vital
role in the complex immunological dialogue that must occur between T cells and other cells
of the body. At maturity, MHC molecules are anchored in the cell membrane, where they
display short polypeptides to T cells, via the T cell receptors (TCR). The polypeptides may be
"self," that is, originating from a protein created by the organism itself, or they may be
foreign ("nonself"), originating from bacteria, viruses, pollen, and so on. The overarching
design of the MHC-TCR interaction is that T cells should ignore self-peptides while reacting
appropriately to the foreign peptides.
The immune system has another and equally important method for identifying an antigen: B
cells with their membrane-bound antibodies, also known as B cell receptors (BCR).
However, whereas the BCRs of B cells can bind to antigens without much outside help, the
TCRs require "presentation" of the antigen through the help of MHC. Most of the time,
however, MHC molecules are kept busy presenting self-peptides, which T cells should appropriately
ignore. A full-force immune response usually requires the activation of B cells via BCRs and
T cells via the MHC-TCR interaction. This duality creates a system of "checks and balances"
and underscores the immune system's potential for running amok and causing harm to the
body (see autoimmune disorders).
MHC molecules retrieve polypeptides from the interior of the cell they are part of and display
them on the cell's surface for recognition by T cells. However, MHC class I and MHC class II
differ significantly in the method of peptide presentation.
Structure of a molecule of MHC Class-I.
MHC Class-I
In eutheria, the Class-I region contains a group of genes that is conserved between species and arranged in the same order along the region.
MHC Class-I genes (MHC-I) encode glycoproteins with an immunoglobulin-like structure: they consist of one heavy chain of type α, subdivided into three domains (α1, α2 and α3). These three domains are exposed to the extracellular space, and the chain is anchored to the cell membrane through a transmembrane region. The α chain is always associated with a molecule of β2 microglobulin, which is encoded by an independent gene on chromosome 15. These molecules are present on the surface of all nucleated cells.[6]
The most important function of the Class-I gene products is the presentation of intracellular antigenic peptides to cytotoxic T lymphocytes (CD8+). The antigenic peptide sits in a cleft between the α1 and α2 domains of the heavy chain.
In humans, there are several different isotypes (different genes) of Class-I molecules, which can be grouped as:

- "classic molecules", whose function is antigen presentation to T8 lymphocytes: this group includes HLA-A, HLA-B and HLA-C;
- "non-classic molecules" (also named MHC class IB), with specialized functions: they do not present antigens to T lymphocytes, but interact with inhibitory receptors on NK cells; this group includes HLA-E, HLA-F and HLA-G.
Structure of a molecule of MHC Class-II.
MHC Class-II
These genes encode glycoproteins with an immunoglobulin-like structure, but in this case the functional complex is formed by two chains, one α and one β (each with two domains: α1 and α2, β1 and β2). Each chain is anchored to the cell membrane through a transmembrane region, and the two chains face each other, with domains 1 and 2 juxtaposed, in the extracellular space.[6]
These molecules are present mostly on the membrane of antigen-presenting cells (dendritic and phagocytic cells), where they present processed extracellular antigenic peptides to helper T lymphocytes (CD4+). The antigenic peptide sits in a cleft formed by the α1 and β1 domains.
MHC-II molecules in humans present 5-6 isotypes, and can be grouped as:

- "classic molecules", presenting peptides to T4 lymphocytes: this group includes HLA-DP, HLA-DQ and HLA-DR;
- "non-classic molecules", accessory molecules with intracellular functions: they are not exposed on the cell membrane, but on internal membranes in lysosomes, where they normally load the antigenic peptides onto the classic MHC-II molecules; this group includes HLA-DM and HLA-DO.
In addition to the MHC-II molecules, the Class-II region also contains genes coding for antigen-processing molecules, such as TAP (transporter associated with antigen processing) and tapasin.
MHC Class-III
This class includes genes coding for several secreted proteins with immune functions: components of the complement system (such as C2, C4 and factor B) and molecules related to inflammation (cytokines such as TNF-α, LTA and LTB) or heat shock proteins (hsp). Class-III molecules do not share the same function as Class-I and Class-II molecules, but they are located between them on the short arm of human chromosome 6, and for this reason they are frequently described together.
Functions of MHC-I and II molecules
Both types of molecules present antigenic peptides to T lymphocytes, which are responsible for the specific immune response that destroys the pathogen producing those antigens. However, Class-I and Class-II molecules correspond to two different pathways of antigen processing and are associated with two different systems of immune defense:[6]
Table 2. Characteristics of the antigen processing pathways

Composition of the stable peptide-MHC complex
  MHC-II pathway: polymorphic chains α and β; the peptide binds to both chains.
  MHC-I pathway: polymorphic α chain and β2 microglobulin; the peptide binds to the α chain.

Types of antigen presenting cells (APC)
  MHC-II pathway: dendritic cells, mononuclear phagocytes, B lymphocytes, some endothelial cells, epithelium of the thymus.
  MHC-I pathway: all nucleated cells.

T lymphocytes able to respond
  MHC-II pathway: helper T lymphocytes (CD4+).
  MHC-I pathway: cytotoxic T lymphocytes (CD8+).

Origin of antigenic proteins
  MHC-II pathway: proteins present in endosomes or lysosomes (mostly internalized from the extracellular medium).
  MHC-I pathway: cytosolic proteins (mostly synthesized by the cell; they may also enter from the extracellular medium via phagosomes).

Enzymes responsible for peptide generation
  MHC-II pathway: proteases from endosomes and lysosomes (for instance, cathepsin).
  MHC-I pathway: cytosolic proteasome.

Location of loading the peptide on the MHC molecule
  MHC-II pathway: specialized vesicular compartment.
  MHC-I pathway: endoplasmic reticulum.

Molecules implicated in transporting the peptides and loading them on the MHC molecules
  MHC-II pathway: DM, invariant chain.
  MHC-I pathway: TAP (transporter associated with antigen processing).
T lymphocytes belonging to a specific individual show a property called MHC restriction (see below): they can only detect an antigen if it is displayed by an MHC molecule from the same individual. This is because each T lymphocyte has a dual specificity: the T cell receptor (TCR) recognizes at the same time some residues of the peptide and some residues of the displaying MHC molecule. This property is of great importance in organ transplantation, and it implies that, during their development, T lymphocytes must "learn" to recognize the MHC molecules of the individual ("self" recognition) during the complex process of maturation and selection that takes place in the thymus.
MHC molecules can only display peptides. Therefore, as T lymphocytes can only recognize an antigen displayed by an MHC molecule, they can react only to antigens of protein origin (coming from microbes) and not to other types of chemical compounds (lipids, nucleic acids or sugars). Each MHC molecule can display only one peptide at a time, because the cleft in the molecule has room for just one peptide. However, a given MHC molecule has a broad specificity, because it can display many different peptides (although not all).
Peptide processing for peptides associated with MHC-I molecules: proteins present in the cytosol are degraded by the proteasome, and the resulting peptides are transported by the TAP channel into the endoplasmic reticulum, where they become associated with freshly synthesized MHC-I molecules. The MHC-I/peptide complexes enter the Golgi apparatus, where they are glycosylated, and from there they enter secretory vesicles, which fuse with the cell membrane. In this way, the complexes become exposed on the outside of the cell, allowing contact with circulating T lymphocytes.
MHC molecules acquire the peptide that they will display on the outside of the cell during their own biosynthesis, inside the cell. Those peptides therefore come from microbes that are inside the cell, and this is the reason why T lymphocytes, which can only recognize a peptide displayed by an MHC molecule, detect only microbes associated with cells and develop immune responses only against intracellular microbes.
It is important to note that MHC-I molecules acquire peptides from cytosolic proteins, whereas MHC-II molecules acquire peptides from proteins contained in intracellular vesicles. For this reason, MHC-I molecules display "self" peptides, viral peptides (synthesized by the cell itself) or peptides coming from microbes ingested in phagosomes. MHC-II molecules, however, display peptides coming from microbes ingested in vesicles (MHC-II molecules are present only in cells with phagocytic capacity). MHC molecules are stable on the cell membrane only if they display a loaded peptide: the peptide stabilizes the structure of the MHC molecule, whereas "empty" molecules are degraded inside the cell. MHC molecules loaded with a peptide can remain on the cell membrane for days, long enough to ensure that the correct T lymphocyte will recognize the complex and initiate the immune response.
In each individual, MHC molecules can display both foreign peptides (coming from pathogens) and peptides coming from that individual's own proteins. For this reason, at any given moment only a small fraction of the MHC molecules on a cell will display a foreign peptide: most of the displayed peptides will be self peptides, because these are much more abundant. However, T lymphocytes are able to initiate an immune response when a peptide is displayed by as few as 0.1%-1% of the MHC molecules.
On the other hand, self peptides cannot initiate an immune response (except in the case of autoimmune diseases), because T lymphocytes specific for self peptides are destroyed or inactivated in the thymus. However, the presence of self peptides displayed by MHC molecules is essential for the surveillance function of T lymphocytes: these cells constantly patrol the organism, verifying the presence of self peptides associated with MHC molecules. In the rare cases in which they detect a foreign peptide, they initiate an immune response.
Role of MHC molecules in transplant rejection
MHC molecules were identified and named for their role in transplant rejection between mice of different inbred (endogamic) strains. In humans, MHC molecules are the "human leukocyte antigens" (HLA). It took more than 20 years to understand the physiological role of MHC molecules in peptide presentation to T lymphocytes.[7]
As previously described, each human cell expresses 6 MHC class-I alleles (one HLA-A, -B and -C allele from each parent) and 6-8 MHC class-II alleles (one HLA-DP and -DQ and one or two HLA-DR from each parent, plus some combinations of these). The MHC polymorphism is very high: it is estimated that the population contains at least 350 alleles for HLA-A genes, 620 alleles for HLA-B, 400 alleles for DR and 90 alleles for DQ. As these alleles can be inherited and expressed in many different combinations, each individual in the population will most likely express some molecules that differ from those of another individual, except in the case of identical twins. All MHC molecules can be targets for transplant rejection, but HLA-C and HLA-DP molecules show low polymorphism and are most likely less important in rejection.
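To put these allele counts in perspective, the following short sketch (an illustrative Python snippet, not part of the original text, using only the approximate figures quoted above and the simplifying assumption that the loci combine independently) estimates how many distinct genotypes these four loci alone could generate.

    # Approximate allele counts quoted above. Real counts are higher and the
    # loci are inherited together as haplotypes rather than independently,
    # so this is only an order-of-magnitude illustration.
    allele_counts = {"HLA-A": 350, "HLA-B": 620, "HLA-DR": 400, "HLA-DQ": 90}

    def genotypes_per_locus(n_alleles):
        """Unordered pairs of alleles at one locus, including homozygotes."""
        return n_alleles * (n_alleles + 1) // 2

    total = 1
    for locus, n in allele_counts.items():
        g = genotypes_per_locus(n)
        total *= g
        print(f"{locus}: {n} alleles -> {g:,} possible genotypes")

    print(f"Combined estimate across the four loci: about {total:.1e} genotypes")

Under these simplifying assumptions the combined count comes out on the order of 10^18. Real-world matching is somewhat easier than this naive figure suggests, because allele frequencies are very uneven, but it illustrates why, as noted further below, total compatibility is essentially found only between identical twins.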
In a transplant (an organ or stem cell transplantation), MHC molecules act as antigens: they can initiate an immune response in the recipient, thus provoking transplant rejection. The recognition of MHC molecules on cells from another individual is one of the most intense immune responses currently known, and the reason why an individual reacts against the MHC molecules of another individual is fairly well understood.
During T lymphocyte maturation in the thymus, these cells are selected according to the capacity of their TCR to weakly recognize "self peptide:self MHC" complexes. In principle, therefore, T lymphocytes should not react against a "foreign peptide:foreign MHC" complex, which is what is found on transplanted cells. However, what seems to happen is a kind of cross-reaction: T lymphocytes of the recipient can be misled, because the donor MHC molecule is similar to the self MHC molecule in the region that binds the TCR (the variable region of the MHC lies in the binding motif for the peptide it displays). For this reason, the recipient's lymphocytes mistake the complexes present on the cells of the transplanted organ for "foreign peptide:self MHC" and initiate an immune response against the "invading" organ, which is perceived as if it were an infected or tumoral self organ, but with an extremely high number of complexes able to initiate an immune response. The recognition of foreign MHC as self by T lymphocytes is called allorecognition.
There can be two types of transplant rejection mediated by MHC molecules (HLA):

- hyperacute rejection: this happens when the recipient has preformed anti-HLA antibodies, generated before the transplantation; these can be due to previous blood transfusions (which include donor lymphocytes expressing HLA molecules), to the generation of anti-HLA antibodies during pregnancy (against the HLA molecules of the father present in the fetus), or to a previous transplantation;
- acute humoral rejection and chronic dysfunction of the transplanted organ: due to the formation of anti-HLA antibodies in the recipient against the HLA molecules present on the endothelial cells of the transplanted tissue.

In both cases there is an immune reaction against the transplanted organ, which can produce lesions in the organ and eventually loss of function, immediate in the first case and progressive in the second.
For this reason, it is crucial to perform a crossmatch test between donor cells and recipient serum, to detect any preformed anti-HLA antibodies in the recipient directed against donor HLA molecules and thus avoid hyperacute rejection. Normally, the compatibility between HLA-A, -B and -DR molecules is checked: the higher the number of incompatibilities, the lower the five-year survival of the transplant. Total compatibility exists only between identical twins, but nowadays there are global databases of donor information, allowing the optimization of HLA compatibility between a potential donor and a given recipient.
MHC evolution and allelic diversity
MHC gene families are found in all vertebrates, though the gene composition and genomic
arrangement vary widely. Chickens, for instance, have one of the smallest known MHC
regions (19 genes), though most mammals have an MHC structure and composition fairly
similar to that of humans. Research has determined that gene duplication is responsible for
much of the genetic diversity. In humans, the MHC is littered with many pseudogenes.
One of the most striking features of the MHC, in particular in humans, is the astounding
allelic diversity found therein, and especially among the nine classical genes. In humans, the
most conspicuously-diverse loci, HLA-A, HLA-B, and HLA-DRB1, have roughly 1000,
1600, and 870 known alleles respectively (IMGT/HLA), a diversity that is truly exceptional in the human genome; the MHC genes are the most polymorphic in the genome. Population surveys of the other classical loci routinely find tens to a hundred alleles, which is still highly diverse. Many
of these alleles are quite ancient: It is often the case that an allele from a particular HLA gene
is more closely related to an allele found in chimpanzees than it is to another human allele
from the same gene.
In terms of phylogenetics, the marsupial MHC lies between eutherian mammals and the
minimal essential MHC of birds, although it is closer in organization to non-mammals. Its
Class I genes have amplified within the Class II region, resulting in a unique Class I/II
region.[4]
The allelic diversity of MHC genes has created fertile grounds for evolutionary biologists.
The most important task for theoreticians is to explain the evolutionary forces that have
created and maintained such diversity. Most explanations invoke balancing selection (see
polymorphism (biology)), a broad term that identifies any kind of natural selection in which
no single allele is absolutely most fit. Frequency-dependent selection and heterozygote
advantage are two types of balancing selection that have been suggested to explain MHC
allelic diversity. However, recent models suggest that a high number of alleles is not
plausibly achievable through heterozygote advantage alone. A counter-hypothesis, pathogenic co-evolution, has recently emerged; it theorizes that the most common alleles will come under the greatest pathogenic pressure, so there will always be a tendency for the least common alleles to be positively selected for. This creates a "moving target" for
pathogen evolution. As the pathogenic pressure decreases on the previously common alleles,
their concentrations in the population will stabilize, and they will usually not go extinct if the
population is large enough, and a large number of alleles will remain in the population as a
whole. This explains the high degree of MHC polymorphism found in the population,
although an individual can have a maximum of 18 MHC I or II alleles.
MHC and sexual selection
It has been suggested that MHC plays a role in the selection of potential mates, via olfaction.
MHC genes make molecules that enable the immune system to recognise invaders; in
general, the more diverse the MHC genes of the parents the stronger the immune system of
the offspring. It would be beneficial, therefore, to have evolved systems of recognizing
individuals with different MHC genes and preferentially selecting them to breed with.
Yamazaki et al. (1976) showed this to be the case for male mice, which show a preference for
females of different MHC. Similar results have been obtained with fish.[8]
In 1995, Swiss biologist Claus Wedekind determined MHC-dissimilar mate selection
tendencies in humans. In the experiment, a group of female college students smelled t-shirts
that had been worn by male students for two nights, without deodorant, cologne, or scented
soaps. An overwhelming number of women preferred the odors of men with dissimilar MHCs
to their own. However, their preference was reversed if they were taking oral
contraceptives.[9] The hypothesis is that MHCs affect mate choice and that oral contraceptives
can interfere with this. A study in 2005 on 58 test subjects found that the women were more
indecisive when presented with MHCs similar to their own.[10] However, without oral
contraceptives, women had no particular preference, contradicting the earlier finding.[11]
However, another study in 2002 showed results consistent with Wedekind's—paternally
inherited HLA-associated odors influence odor preference and may serve as social cues.[12]
In 2008, Peter Donnelly and colleagues proposed that MHC is related to mating choice in
some human populations.
Rates of early pregnancy loss are lower in couples with dissimilar MHC genes.[citation needed]
Restriction
A given T cell is restricted to recognize a peptide antigen only when it is bound to self-MHC
molecules.
MHC restriction is particularly important when primary lymphocytes are developing and differentiating in the thymus or bone marrow. It is at this stage that T cells die by apoptosis if they express high affinity for self-antigens presented by an MHC molecule or too low an affinity for self MHC. This is ensured through two distinct developmental stages: positive
selection and negative selection.
Beekeeping
Beekeeping, tacuinum sanitatis casanatensis (14th century)
Honey seeker depicted on 8000 year old cave painting near Valencia, Spain[1]
Beekeeping (or apiculture, from Latin apis, bee) is the maintenance of honey bee colonies,
commonly in hives, by humans. A beekeeper (or apiarist) keeps bees in order to collect honey
and beeswax, to pollinate crops, or to produce bees for sale to other beekeepers. A location
where bees are kept is called an apiary or "bee yard". Apiculture also denotes the scientific rearing of honey- and wax-producing bees.
History of beekeeping
Origins
There are more than 20,000 species of wild bees.[2] Many species are solitary,[3] and many others rear their young in burrows and small colonies, like mason bees and bumblebees.
Beekeeping, or apiculture, is concerned with the practical management of the social species
of honey bees, which live in large colonies of up to 100,000 individuals. In Europe and
America the species universally managed by beekeepers is the Western honey bee (Apis
mellifera). This species has several sub-species or regional varieties, such as the Italian bee
(Apis mellifera ligustica), European dark bee (Apis mellifera mellifera), and the Carniolan
honey bee (Apis mellifera carnica). In the tropics, other species of social bee are managed for
honey production, including Apis cerana.
All of the Apis mellifera sub-species are capable of inter-breeding and hybridizing. Many bee
breeding companies strive to selectively breed and hybridize varieties to produce desirable
qualities: disease and parasite resistance, good honey production, swarming behaviour
reduction, prolific breeding, and mild disposition. Some of these hybrids are marketed under
specific brand names, such as the Buckfast Bee or Midnite Bee. The advantages of the initial
F1 hybrids produced by these crosses include: hybrid vigor, increased honey productivity,
and greater disease resistance. The disadvantage is that in subsequent generations these
advantages may fade away and hybrids tend to be very defensive and aggressive.
Wild honey harvesting
Collecting honey from wild bee colonies is one of the most ancient human activities and is
still practiced by aboriginal societies in parts of Africa, Asia, Australia, and South America.
Some of the earliest evidence of gathering honey from wild colonies is from rock paintings,
dating to around 13,000 BCE. Gathering honey from wild bee colonies is usually done by
subduing the bees with smoke and breaking open the tree or rocks where the colony is
located, often resulting in the physical destruction of the nest location.
Domestication of wild bees
At some point humans began to domesticate wild bees in artificial hives made from hollow
logs, wooden boxes, pottery vessels, and woven straw baskets or "skeps". Honeybees were
kept in Egypt from antiquity.[4] On the walls of the sun temple of Nyuserre Ini from the 5th
Dynasty, before 2422 BCE, workers are depicted blowing smoke into hives as they are
removing honeycombs.[5] Inscriptions detailing the production of honey are found on the
tomb of Pabasa from the 26th Dynasty (circa 650 BCE), depicting honey being poured into jars, and
cylindrical hives.[6] Sealed pots of honey were found in the grave goods of Pharaohs such as
Tutankhamun.
In prehistoric Greece (Crete and Mycenae), there existed a system of high-status apiculture,
as can be concluded from the finds of hives, smoking pots, honey extractors and other
beekeeping paraphernalia in Knossos. Beekeeping was considered a highly valued industry controlled by beekeeping overseers, the owners of gold rings depicting apiculture scenes rather than religious ones, as these rings have recently been reinterpreted, contra Sir Arthur Evans.[7]
Archaeological finds relating to beekeeping have been discovered at Rehov, a Bronze- and
Iron Age archaeological site in the Jordan Valley, Israel.[8] Thirty intact hives, made of straw
and unbaked clay, were discovered by archaeologist Amihai Mazar of the Hebrew University
of Jerusalem in the ruins of the city, dating from about 900 BCE. The hives were found in
orderly rows, three high, in a manner that could have accommodated around 100 hives, held
more than 1 million bees and had a potential annual yield of 500 kilograms of honey and 70
kilograms of beeswax, according to Mazar, and are evidence that an advanced honey industry
existed in ancient Israel 3,000 years ago.[9] Ezra Marcus, an expert from the University of
Haifa, said the finding was a glimpse of ancient beekeeping seen in texts and ancient art from
the Near East.[10][11]
In ancient Greece, aspects of the lives of bees and beekeeping are discussed at length by
Aristotle. Beekeeping was also documented by the Roman writers Virgil, Gaius Julius
Hyginus, Varro, and Columella.
Study of honey bees
It was not until the 18th century that European natural philosophers undertook the scientific
study of bee colonies and began to understand the complex and hidden world of bee biology.
Preeminent among these scientific pioneers were Swammerdam, René Antoine Ferchault de
Réaumur, Charles Bonnet, and the blind Swiss scientist Francois Huber. Swammerdam and
Réaumur were among the first to use a microscope and dissection to understand the internal
biology of honey bees. Réaumur was among the first to construct a glass walled observation
hive to better observe activities within hives. He observed queens laying eggs in open cells,
but still had no idea of how a queen was fertilized; nobody had ever witnessed the mating of a
queen and drone and many theories held that queens were "self-fertile," while others believed
that a vapor or "miasma" emanating from the drones fertilized queens without direct physical
contact. Huber was the first to prove by observation and experiment that queens are
physically inseminated by drones outside the confines of hives, usually a great distance away.
Following Réaumur's design, Huber built improved glass-walled observation hives and
sectional hives which could be opened, like the leaves of a book, to inspect individual wax
combs; this greatly improved the direct observation of activity within a hive. Although he
became blind before he was twenty, Huber employed a secretary, Francois Burnens, to make
daily observations, conduct careful experiments, and to keep accurate notes over a period of
more than twenty years. Huber confirmed that a hive consists of one queen who is the mother
of all the female workers and male drones in the colony. He was also the first to confirm that
mating with drones takes place outside of hives and that queens are inseminated by a number
of successive matings with male drones, high in the air at a great distance from their hive.
Together, he and Burnens dissected bees under the microscope and were among the first to
describe the ovaries and spermatheca, or sperm store, of queens as well as the penis of male
drones. Huber is universally regarded as "the father of modern bee-science", and his "Nouvelles Observations sur les Abeilles" ("New Observations on Bees") revealed all the basic scientific truths of the biology and ecology of honeybees.
Invention of the movable comb hive
Early forms of honey collecting entailed the destruction of the entire colony when the honey
was harvested. The wild hive was crudely broken into, using smoke to suppress the bees, the
honeycombs were torn out and smashed up — along with the eggs, larvae and honey they
contained. The liquid honey from the destroyed brood nest was crudely strained through a
sieve or basket. This was destructive and unhygienic, but for hunter-gatherer societies this did
not matter, since the honey was generally consumed immediately and there were always more
wild colonies to exploit. But in settled societies the destruction of the bee colony meant the
loss of a valuable resource; this drawback made beekeeping both inefficient and something of
a "stop and start" activity. There could be no continuity of production and no possibility of
selective breeding, since each bee colony was destroyed at harvest time, along with its
precious queen. During the medieval period abbeys and monasteries were centers of
beekeeping, since beeswax was highly prized for candles and fermented honey was used to
make alcoholic mead in areas of Europe where vines would not grow.
The 18th and 19th centuries saw successive stages of a revolution in beekeeping, which
allowed the bees themselves to be preserved when taking the harvest.
Intermediate stages in the transition from the old beekeeping to the new were recorded for
example by Thomas Wildman in 1768/1770, who described advances over the destructive old
skep-based beekeeping so that the bees no longer had to be killed to harvest the honey.[12]
Wildman for example fixed a parallel array of wooden bars across the top of a straw hive or
skep (with a separate straw top to be fixed on later) "so that there are in all seven bars of
deal" [in a 10-inch-diameter (250 mm) hive] "to which the bees fix their combs".[13] He also
described using such hives in a multi-storey configuration, foreshadowing the modern use of
supers: he described adding (at a proper time) successive straw hives below, and eventually
removing the ones above when free of brood and filled with honey, so that the bees could be
separately preserved at the harvest for a following season. Wildman also described[14] a
further development, using hives with "sliding frames" for the bees to build their comb,
foreshadowing more modern uses of movable-comb hives. Wildman's book acknowledged
the advances in knowledge of bees previously made by Swammerdam, Maraldi, and de
Reaumur—he included a lengthy translation of Reaumur's account of the natural history of
bees—and he also described the initiatives of others in designing hives for the preservation of
bee-life when taking the harvest, citing in particular reports from Brittany dating from the
1750s, due to Comte de la Bourdonnaye.
The 19th Century saw this revolution in beekeeping practice completed through the
perfection of the movable comb hive by Lorenzo Lorraine Langstroth, a descendant of
Yorkshire farmers who emigrated to the United States. Langstroth was the first person to
make practical use of Huber's earlier discovery that there was a specific spatial measurement
between the wax combs, later called "the bee space", which bees would not block with wax,
but kept as a free passage. Having determined this "bee space" (between 5 and 8 mm, or 1/4
to 3/8"), Langstroth then designed a series of wooden frames within a rectangular hive box,
carefully maintaining the correct space between successive frames, and found that the bees
would build parallel honeycombs in the box without bonding them to each other or to the
hive walls. This enables the beekeeper to slide any frame out of the hive for inspection,
without harming the bees or the comb, protecting the eggs, larvae and pupae contained within
the cells. It also meant that combs containing honey could be gently removed and the honey
extracted without destroying the comb. The emptied honey combs could then be returned to
the bees intact for refilling. Langstroth's classic book, The Hive and Honey-bee, published in
1853, described his rediscovery of the bee space and the development of his patent movable
comb hive.
The invention and development of the movable-comb-hive fostered the growth of
commercial honey production on a large scale in both Europe and the USA.
Evolution of hive designs
Langstroth's design for movable comb hives was seized upon by apiarists and inventors on
both sides of the Atlantic and a wide range of moveable comb hives were designed and
perfected in England, France, Germany and the United States. Classic designs evolved in
each country: Dadant hives and Langstroth hives are still dominant in the USA; in France the
De-Layens trough-hive became popular, and in the UK the British National Hive became standard as late as the 1930s, although in Scotland the smaller Smith hive is still popular. In
some Scandinavian countries and in Russia the traditional trough hive persisted until late in
the 20th Century and is still kept in some areas. However, the Langstroth and Dadant designs
remain ubiquitous in the USA and also in many parts of Europe, though Sweden, Denmark,
Germany, France and Italy all have their own national hive designs. Regional variations of
hive evolved to reflect the climate, floral productivity and the reproductive characteristics of
the various subspecies of native honey bee in each bio-region.
The differences in hive dimensions are insignificant in comparison to the common factors in
all these hives: they are all square or rectangular; they all use movable wooden frames; they
all consist of a floor, brood-box, honey-super, crown-board and roof. Hives have traditionally
been constructed of cedar, pine, or cypress wood, but in recent years hives made from
injection molded dense polystyrene have become increasingly important.
Hives also use queen excluders between the brood-box and honey supers to keep the queen
from laying eggs in cells next to those containing honey intended for consumption. Also, with
the advent in the 20th century of mite pests, hive floors are often replaced for part of (or the
whole) year with a wire mesh and removable tray.
Pioneers of practical and commercial beekeeping
The 19th Century produced an explosion of innovators and inventors who perfected the
design and production of beehives, systems of management and husbandry, stock
improvement by selective breeding, honey extraction and marketing. Preeminent among these
innovators were:
Jan Dzierżon was the father of modern apiology and apiculture. All modern beehives are descendants of his design.
L. L. Langstroth, revered as the "father of American apiculture": no other individual has influenced modern beekeeping practice more than Lorenzo Lorraine Langstroth. His classic book The Hive and Honey-bee was published in 1853.
Moses Quinby, often termed 'the father of commercial beekeeping in the United States',
author of Mysteries of Bee-Keeping Explained.
Amos Root, author of the A B C of Bee Culture which has been continuously revised and
remains in print to this day. Root pioneered the manufacture of hives and the distribution of
bee-packages in the United States.
Dr. C.C. Miller was one of the first entrepreneurs to actually make a living from apiculture.
By 1878 he made beekeeping his sole business activity. His book, Fifty Years Among the
Bees, remains a classic and his influence on bee management persists to this day.
Major Francesco De Hruschka was an Italian military officer who made one crucial
invention that catalyzed the commercial honey industry. In 1865 he invented a simple
machine for extracting honey from the comb by means of centrifugal force. His original idea
was simply to support combs in a metal framework and then spin them around within a
container to collect honey as it was thrown out by centrifugal force. This meant that
honeycombs could be returned to a hive undamaged but empty — saving the bees a vast
amount of work, time and materials. This single invention greatly improved the efficiency of
honey harvesting and catalysed the modern honey industry.
In the UK practical beekeeping was led in the early 20th century by a few men, pre-eminently
Brother Adam and his Buckfast bee and R.O.B. Manley, author of many titles, including
'Honey Production In The British Isles' and inventor of the Manley frame, still universally
popular in the UK.
Other notable British pioneers include William Herrod-Hempsall and Gale.
Traditional beekeeping
Fixed comb hives
Wooden hives in Stripeikiai in Lithuania
A fixed comb hive is a hive in which the combs cannot be removed or manipulated for management or harvesting without permanently damaging the comb. Almost any hollow structure can be used for this purpose, such as a log gum, skep or clay pot. Fixed comb hives are no longer in common use in most places, and they are illegal in some places that require inspection for problems such as varroa and American foulbrood.
Modern beekeeping
Movable frame hives
In the United States, the Langstroth hive is commonly used. The Langstroth was the first
successful top-opened hive with movable frames, and other designs of hive have been based
on it. The Langstroth hive was, however, a descendant of Jan Dzierżon's Polish hive designs. In the
United Kingdom, the most common type of hive is the British National Hive, which can hold
Hoffman, British Standard or popular Manley frames, but it is not unusual to see some other
sorts of hive (Smith, Commercial and WBC, rarely Langstroth). Straw skeps, bee gums, and
unframed box hives are now unlawful in most US states, as the comb and brood cannot be
inspected for diseases. However, straw skeps are still used for collecting swarms by hobbyists
in the UK, before moving them into standard hives.
Top bar hives
A few hobby beekeepers are adopting various top-bar hives of the type commonly found in
Africa. Top bar hives were originally used as a traditional beekeeping method in both Greece and Vietnam.[11] These have no frames, and the honey-filled comb is not returned to the hive after extraction, as it is in the Langstroth hive. Because of this, honey production in a top bar hive under minimal management is only about 25% to 50% of that of a Langstroth hive.
Some of the most well known top bar hives are the Kenyan Top Bar Hive (KTBH) with
sloping sides, the Tanzanian Top Bar Hive, which has straight sides and the Vertical Top Bar
Hives such as the Warré or "People's Hive" designed by Abbé Warré in the mid-1900s.
The initial costs and equipment requirements are far lower: often scrap wood or #2 or #3 pine can be used to build a perfectly serviceable hive. Top-bar hives also offer some advantages in interacting with the bees, and the amount of weight that must be lifted is greatly reduced. Top bar hives are being widely used in developing countries in Africa and Asia as a result of the 'Bees For Development' program, and a growing number of beekeepers in the U.S. have adopted various top bar hives over the past two years or so.[15]
Protective clothing
Beekeepers often wear protective clothing to protect themselves from stings.
While knowledge of the bees is the first line of defense, most beekeepers also wear some
protective clothing. Novice beekeepers usually wear gloves and a hooded suit or hat and veil.
Experienced beekeepers sometimes elect not to use gloves because they inhibit delicate
manipulations. The face and neck are the most important areas to protect, so most beekeepers
will at least wear a veil.
No amount of protective clothing, however, will make a face full of aggressive bees flying up from an opened hive pleasant for any beekeeper, so it is rewarding to establish gentle-tempered colonies as soon as possible.
Defensive bees are attracted to the breath, and a sting on the face can lead to much more pain
and swelling than a sting elsewhere, while a sting on a bare hand can usually be quickly
removed by fingernail scrape to reduce the amount of venom injected.
The protective clothing is generally light coloured (but not colourful) and of a smooth
material. This provides the maximum differentiation from the colony's natural predators
(bears, skunks, etc.) which tend to be dark-colored and furry.
The 'stings' retained in the fabric of the clothing will continue to pump out an alarm pheromone that attracts aggressive action and further stinging attacks. Washing suits regularly and rinsing gloved hands in vinegar will minimise this attraction.
Smoker
Bee smoker with heat shield and hook
Smoke is the beekeeper's third line of defense. Most beekeepers use a "smoker" — a device
designed to generate smoke from the incomplete combustion of various fuels. Smoke calms
bees; it initiates a feeding response in anticipation of possible hive abandonment due to fire.
Smoke also masks alarm pheromones released by guard bees or when bees are squashed in an
inspection. The ensuing confusion creates an opportunity for the beekeeper to open the hive
and work without triggering a defensive reaction. In addition, when a bee consumes honey
the bee's abdomen distends, supposedly making it difficult to make the necessary flexes to
sting, though this has not been tested scientifically.
Smoke is of questionable use with a swarm, because swarms do not have honey stores to feed
on in response. Usually smoke is not needed, since swarms tend to be less defensive, as they
have no stores to defend, and a fresh swarm will have fed well from the hive.
Many types of fuel can be used in a smoker as long as they are natural and not contaminated with
harmful substances. These fuels include hessian, twine, burlap, pine needles, corrugated
cardboard, and mostly rotten or punky wood. Indian beekeepers, especially in Kerala, often
use coconut fibers as they are readily available, safe, and of negligible expense. Some
beekeeping supply sources also sell commercial fuels like pulped paper and compressed
cotton, or even aerosol cans of smoke. Other beekeepers use sumac as fuel because it ejects
lots of smoke and doesn't have an odor.
Some beekeepers use "liquid smoke" as a safer, more convenient alternative. It is a
water-based solution that is sprayed onto the bees from a plastic spray bottle.
Effects of stings and of protective measures
Some beekeepers believe that the more stings a beekeeper receives, the less irritation each
causes, and they consider it important for the safety of the beekeeper to be stung a few times a
season. Beekeepers have high levels of antibodies (mainly IgG) reacting to the major antigen
of bee venom, phospholipase A2 (PLA).[16] Antibodies correlate with the frequency of bee
stings.
The entry of venom into the body from bee-stings may also be hindered and reduced by
protective clothing which allows the wearer to remove stings and venom sacs simply with a
tug on the clothing.
Natural beekeeping
There is a current movement that eschews chemicals in beekeeping, and feels that Colony
collapse disorder can be most effectively addressed by reversing trends that disrespect the
needs of the bees themselves. Crop spraying, unnatural conditions in which bees are moved
thousands of miles to pollinate commercial crops, artificial insemination of queens, and sugar
water feeding are thought to all contribute to a general weakening of the constitution of the
honeybee.
Urban or backyard beekeeping
Related to natural beekeeping, urban beekeeping is an attempt to revert to a less
industrialized way of obtaining honey by utilizing small-scale colonies that pollinate urban
gardens. Urban apiculture has undergone a renaissance in the 2000s. Paris, Berlin, London,
Tokyo and Washington, D.C., are among beekeeping cities. Until 2010, beekeeping was
banned in New York City and punishable with a $2,000 fine.[17][18] Urban beekeeping is
commonly practiced in areas that have a pesticide ban. This includes Paris, as well as 156
municipalities in Canada and 3 of 10 Canadian provinces. Beekeeping was illegal in
Vancouver until 2003, for example, but by 2010 there were bees on the roof of Vancouver
City Hall.[19]
Bee colonies
Castes
A colony of bees consists of three castes of bee:

- a queen bee, which is normally the only breeding female in the colony;
- a large number of female worker bees, typically 30,000-50,000 in number;
- a number of male drones, ranging from thousands in a strong hive in spring to very few during dearth or cold season.
The queen is the only sexually mature female in the hive and all of the female worker bees
and male drones are her offspring. The queen may live for up to three years or more and may
be capable of laying half a million eggs or more in her lifetime. At the peak of the breeding
season, late spring to summer, a good queen may be capable of laying 3,000 eggs in one day,
more than her own body weight. This would be exceptional however; a prolific queen might
peak at 2,000 eggs a day, but a more average queen might lay just 1,500 eggs per day. The
queen is raised from a normal worker egg, but is fed a larger amount of royal jelly than a
normal worker bee, resulting in a radically different growth and metamorphosis. The queen
influences the colony by the production and dissemination of a variety of pheromones or
"queen substances". One of these chemicals suppresses the development of ovaries in all the
female worker bees in the hive and prevents them from laying eggs.
Mating of queens
The queen emerges from her cell after 15 days of development and she remains in the hive
for 3–7 days before venturing out on a mating flight. Mating flight is otherwise known as
'nuptial flight'. Her first orientation flight may only last a few seconds, just enough to mark
the position of the hive. Subsequent mating flights may last from 5 minutes to 30 minutes,
and she may mate with a number of male drones on each flight. Over several matings,
possibly a dozen or more, the queen will receive and store enough sperm from a succession
of drones to fertilize hundreds of thousands of eggs. If she does not manage to leave the hive
to mate — possibly due to bad weather or being trapped within part of the hive — she will
remain infertile and become a 'drone layer', incapable of producing female worker bees, and
the hive is doomed.
Mating takes place at some distance from the hive and often several hundred feet up in the
air; it is thought that this separates the strongest drones from the weaker ones, ensuring that
only the fastest and strongest drones get to pass on their genes.
Female worker bees
Almost all the bees in a hive are female worker bees. At the height of summer when activity
in the hive is frantic and work goes on non-stop, the life of a worker bee may be as short as 6
weeks; in late autumn, when no brood is being raised and no nectar is being harvested, a
young bee may live for 16 weeks, right through the winter. During its life a worker bee
performs different work functions in the hive which are largely dictated by the age of the bee.
Period and work activity:
Days 1-3: Cleaning cells and incubation
Day 3-6: Feeding older larvae
Day 6-10: Feeding younger larvae
Day 8-16: Receiving honey and pollen from field bees
Day 12-18: Wax making and cell building
Day 14 onwards: Entrance guards; nectar and pollen foraging
Male bees (drones)
Drones are the largest bees in the hive (except for the queen), at almost twice the size of a
worker bee. They do no work, do not forage for pollen or nectar and are only produced in
order to mate with new queens and fertilize them on their mating flights. A bee colony will
generally start to raise drones a few weeks before building queen cells in order to supersede a
failing queen or in preparation for swarming. When queen raising for the season is over, the
bees in colder climates will drive the drones out of the hive to die, biting and tearing at their
legs and wings.
Differing stages of development
Stage of development | Queen | Worker | Drone
Egg | 3 days | 3 days | 3 days
Larva | 8 days | 10 days | 13 days
Pupa | 4 days | 8 days | 8 days
Total | 15 days | 21 days | 24 days
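The development times can also be kept as a small data structure; the short sketch below (an illustrative Python snippet, not part of the original text) stores the table above and checks that each caste's total equals the sum of its egg, larva and pupa stages.

    # Development times in days for each caste, taken from the table above.
    development = {
        "Queen":  {"egg": 3, "larva": 8,  "pupa": 4,  "total": 15},
        "Worker": {"egg": 3, "larva": 10, "pupa": 8,  "total": 21},
        "Drone":  {"egg": 3, "larva": 13, "pupa": 8,  "total": 24},
    }

    for caste, stages in development.items():
        computed = stages["egg"] + stages["larva"] + stages["pupa"]
        status = "OK" if computed == stages["total"] else "MISMATCH"
        print(f"{caste}: egg + larva + pupa = {computed} days ({status})")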
Structure of a bee colony
A domesticated bee colony is normally housed in a rectangular hive body, within which eight
to ten parallel frames house the vertical plates of honeycomb which contain the eggs, larvae,
pupae and food for the colony. If one were to cut a vertical cross-section through the hive
from side to side, the brood nest would appear as a roughly ovoid ball spanning 5-8 frames of
comb. The two outside combs at each side of the hive tend to be exclusively used for long-term storage of honey and pollen.
Within the central brood nest, a single frame of comb will typically have a central disk of
eggs, larvae and sealed brood cells which may extend almost to the edges of the frame.
Immediately above the brood patch an arch of pollen-filled cells extends from side to side,
and above that again a broader arch of honey-filled cells extends to the frame tops. The
pollen is protein-rich food for developing larvae, while honey is also food but largely energy
rich rather than protein rich. The nurse bees which care for the developing brood secrete a
special food called 'royal jelly' after feeding themselves on honey and pollen. The amount of
royal jelly which is fed to a larva determines whether it will develop into a worker bee or a
queen.
Apart from the honey stored within the central brood frames, the bees store surplus honey in
combs above the brood nest. In modern hives the beekeeper places separate boxes, called
'supers', above the brood box, in which a series of shallower combs is provided for storage of
honey. This enables the beekeeper to remove some of the supers in the late summer, and to
extract the surplus honey harvest, without damaging the colony of bees and its brood nest
below. If all the honey is 'stolen', including the amount of honey needed to survive winter, the
beekeeper must replace these stores by feeding the bees sugar or corn syrup in autumn.
Annual cycle of a bee colony
The development of a bee colony follows an annual cycle of growth which begins in spring
with a rapid expansion of the brood nest, as soon as pollen is available for feeding larvae.
Some production of brood may begin as early as January, even in a cold winter, but breeding
accelerates towards a peak in May (in the northern hemisphere), producing an abundance of
harvesting bees synchronized to the main 'nectar flow' in that region. Each race of bees times
this build-up slightly differently, depending on how the flora of its original region blooms.
Some regions of Europe have two nectar flows: one in late spring and another in late August.
Other regions have only a single nectar flow. The skill of the beekeeper lies in predicting
when the nectar flow will occur in his area and in trying to ensure that his colonies achieve a
maximum population of harvesters at exactly the right time.
The key factor in this is the prevention, or skillful management, of the swarming impulse. If a
colony swarms unexpectedly and the beekeeper does not manage to capture the resulting
swarm, he is likely to harvest significantly less honey from that hive, since he will have lost
half his worker bees at a single stroke. If, however, he can use the swarming impulse to breed
a new queen but keep all the bees in the colony together, he will maximize his chances of a
good harvest. It takes many years of learning and experience to be able to manage all these
aspects successfully, though owing to variable circumstances many beginners will often
achieve a good honey harvest.
Formation of new colonies
Colony reproduction; swarming and supersedure
All colonies are totally dependent on their queen, who is the only egg-layer. However, even
the best queens live only a few years, and a longevity of one or two years is the norm. She can
choose whether or not to fertilize an egg as she lays it; if she does so, it develops into a
female worker bee; if she lays an unfertilized egg it becomes a male drone. She decides
which type of egg to lay depending on the size of the open brood cell which she encounters
on the comb; in a small worker cell she lays a fertilized egg; if she finds a much larger drone
cell she lays an unfertilized drone egg.
All the time that the queen is fertile and laying eggs she produces a variety of pheromones
which control the behavior of the bees in the hive; these are commonly called 'queen
substance' but in reality there are various different pheromones with different functions. As
the queen ages she begins to run out of stored sperm and her pheromones begin to fail. At
some point, inevitably, the queen begins to falter and the bees will decide to replace her by
creating a new queen from one of her worker eggs. They may do this because she has been
damaged (lost a leg or an antenna), because she has run out of sperm and cannot lay fertilized
eggs (has become a 'drone laying queen') or because her pheromones have dwindled to a
point where they cannot control all the bees in the hive anymore.
At this juncture the bees will produce one or more queen cells by modifying existing worker
cells which contain a normal female egg. However, there are two distinct behaviors which the
bees pursue:
1. Supersedure: queen replacement within one hive without swarming
2. Swarm cell production: the division of the hive into two colonies by swarming
Different sub-species of Apis mellifera exhibit differing swarming characteristics which
reflect their evolution in different ecotopes of the European continent. In general the more
northerly black races are said to swarm less and supersede more, whereas the more southerly
yellow and grey varieties are said to swarm more frequently. The truth is complicated
because of the prevalence of cross-breeding and hybridization of the sub species and opinions
differ.
Supersedure is highly valued as a behavioral trait by beekeepers because a hive that
supersedes its old queen does not swarm and so no stock is lost; it merely creates a new
queen and allows the old one to fade away, or alternatively she is killed when the new queen
emerges. When superseding a queen the bees will produce just one or two queen cells,
characteristically in the center of the face of a broodcomb.
In swarming, by contrast, a great many queen cells are created — typically a dozen or more
— and these are located around the edges of a broodcomb, most often at the sides and the
bottom.
New wax combs between basement joists
Once either process has begun, the old queen will normally leave the hive with the hatching
of the first queen cells. When she leaves the hive the old queen is accompanied by a large
number of bees, predominantly young bees (wax-secreters), who will form the basis of the
new hive. Scouts are sent out from the swarm to find suitable hollow trees or rock crevices
and as soon as one is found the entire swarm moves in, building new wax brood combs
within a matter of hours using the honey stores which the young bees have filled themselves
with before leaving the old hive. Only young bees can secrete wax from special abdominal
segments and this is why there tend to be more young bees than old in swarms. Often a
number of virgin queens accompany the first swarm (the 'prime swarm'), and the old queen is
replaced as soon as a daughter queen is mated and laying. Otherwise, she will be quickly
superseded in their new home.
Factors that trigger swarming
It is generally accepted that a colony of bees will not swarm until they have completed all of
their brood combs, i.e. filled all available space with eggs, larvae and brood. This generally
occurs in late Spring at a time when the other areas of the hive are rapidly filling with honey
stores. So one key trigger of the swarming instinct is when the queen has no more room to lay
eggs and the hive population is becoming very congested. Under these conditions a prime
swarm may issue with the queen, resulting in a halving of the population within the hive and
leaving the old colony with a large number of hatching bees. The queen who leaves finds
herself in a new hive with no eggs, no larvae but lots of energetic young bees who create a
new set of brood combs from scratch in a very short time.
Another important factor in swarming is the age of the queen. Queens under a year old are unlikely to swarm unless they are extremely crowded, while older queens are predisposed to swarm.
Beekeepers monitor their colonies carefully in spring and watch for the appearance of queen
cells, which are a dramatic signal that the colony is determined to swarm.
When a colony has decided to swarm, queen cells are produced in numbers varying up to a
dozen or more. When the first of these queen cells is sealed, after 8 days of larval feeding, a
virgin queen will pupate and be due to emerge seven days after sealing. Before leaving, the
worker bees fill their stomachs with honey in preparation for the creation of new honeycombs
in a new home. This cargo of honey also makes swarming bees less inclined to sting and a
newly issued swarm is noticeably gentle for up to 24 hours — often capable of being handled
without gloves or veil by a beekeeper.
A swarm attached to a branch
This swarm is looking for shelter. A beekeeper may capture it and introduce it into a new
hive, helping to meet this need. Otherwise, it will return to a feral state, in which case it will
find shelter in a hollow tree, an excavation, an abandoned chimney, or even behind shutters.
Back at the original hive, the first virgin queen to emerge from her cell will immediately seek
to kill all her rival queens who are still waiting to emerge from their cells. However, usually
the bees deliberately prevent her from doing this, in which case, she too will lead a second
swarm from the hive. Successive swarms are called 'after-swarms' or 'casts' and can be very
small, often with just a thousand or so bees, as opposed to a prime swarm which may contain
as many as ten to twenty thousand bees.
Small after-swarms have less chance of survival and, depleting the original hive, may
threaten its survival as well. When a hive has swarmed despite the beekeeper's preventative
efforts, a good management practice is to give the depleted hive a couple of frames of open
brood with eggs. This helps replenish the hive more quickly, and gives a second opportunity
to raise a queen, if there is a mating failure.
Each race or sub-species of honeybee has its own swarming characteristics. Italian bees are
very prolific and inclined to swarm; Northern European black bees have a strong tendency to
supersede their old queen, without swarming. These differences are the result of differing
evolutionary pressures in the regions where each sub-species evolved.
Artificial swarming
When a colony accidentally loses its queen, it is said to be 'queenless'. The workers realize
that the queen is absent after as little as an hour, as her pheromones fade in the hive. The
colony cannot survive without a fertile queen laying eggs to renew the population. So the
workers select cells containing eggs aged less than three days and enlarge these cells
dramatically to form 'emergency queen cells'. These resemble large, peanut-like
structures about an inch long, which hang from the center or side of the brood combs. The
developing larva in a queen cell is fed differently from an ordinary worker-bee, receiving in
addition to the normal honey and pollen a great deal of royal jelly, a special food secreted by
young 'nurse bees' from the hypopharyngeal gland. This special food dramatically alters the
growth and development of the larva so that, after metamorphosis and pupation, it emerges
from the cell as a queen bee. The queen is the only bee in a colony which has fully developed
ovaries and she secretes a pheromone which suppresses the normal development of ovaries in
all her workers.
Beekeepers use the ability of the bees to produce new queens in order to increase their
colonies, a procedure called splitting a colony. In order to do this, they remove several brood
combs from a healthy hive, taking care that the old queen is left behind. These combs must
contain eggs or larvae less than three days old which will be covered by young 'nurse bees'
which care for the brood and keep it warm. These brood combs and attendant nurse bees are
then placed into a small 'nucleus hive' along with other combs containing honey and pollen.
As soon as the nurse bees find themselves in this new hive and realize that they have no
queen they set about constructing emergency queen cells using the eggs or larvae which they
have in the combs with them.
World apiculture
World honey production and consumption in 2005
(Figures marked * or ** refer to the year noted beside the country.)

Country | Production (1000 metric tons) | Consumption (1000 metric tons) | Number of beekeepers | Number of bee hives

Europe and Russia
Ukraine | 71.46 | 52 | – | –
Russia | 52.13 | 54 | – | –
Spain | 37.00 | 40 | – | –
Germany (*2008) | 21.23 | 89 | 90,000* | 1,000,000*
Hungary | 19.71 | 4 | – | –
Romania | 19.20 | 10 | – | –
Greece | 16.27 | 16 | – | –
France | 15.45 | 30 | – | –
Bulgaria | 11.22 | 2 | – | –
Serbia | 3 to 5 | 6.3 | 30,000 | 430,000
Denmark (*1996) | 2.5 | 5 | 4,000* | 150,000*

North America
United States of America (*2006, **2002) | 70.306* | 158.75* | 12,029** (210,000 bee keepers) | 2,400,000*
Canada | 45 (2006); 28 (2007) [20] | 29 | 13,000 | 500,000

Latin America
Argentina | 93.42 (average 84) [21] | 3 | – | –
Mexico | 50.63 | 31 | – | –
Brazil | 33.75 | 2 | – | –
Uruguay | 11.87 | 1 | – | –

Oceania
Australia | 18.46 | 16 | – | –
New Zealand | 9.69 | 8 | – | –

Asia
China | 299.33 (average 245) | 238 | – | 7,200,000 [21]
Turkey | 82.34 (average 70) | 66 | – | 4,500,000 [21][22]
Iran | – | – | – | 3,500,000 [21]
India | 52.23 | 45 | – | 9,800,000 [21]
South Korea | 23.82 | 27 | – | –
Vietnam | 13.59 | 0 | – | –
Turkmenistan | 10.46 | 10 | – | –

Africa
Ethiopia | 41.23 | 40 | – | 4,400,000
Tanzania | 28.68 | 28 | – | –
Angola | 23.77 | 23 | – | –
Kenya | 22.00 | 21 | – | –
Egypt (*1997) | 16* | – | 200,000* | 2,000,000*
Central African Republic | 14.23 | 14 | – | –
Morocco (*1997) | 4.5* | – | 27,000* | 400,000*

Source: Food and Agriculture Organization of the United Nations (FAO), August 2007.
Other sources: Denmark: beekeeping.com [23] (1996); Arab countries: beekeeping.com [24] (1997); USA: University of Arkansas National Agricultural Law Center [25] (2002) and Agricultural Marketing Resource Center [26] (2006); Serbia [27].
Images of harvesting honey
Smoking the hive
A beekeeper removing frames from the hive
A frame
Using a blower to remove bees from the honey super prior to removal to the honey house
Opening the cells: an uncapping fork
Uncapping the cells by hand, using an uncapping knife
Extracting the honey
Filtering the honey
Pouring in pots (after maturation)
Honey bee types and characteristics
Bee castes: Queen bee · Worker bee · Laying worker bee · Drone
Lifecycle: Beehive · Honey bee life cycle · Brood · Bee learning and communication · Swarming
Western honey bee subspecies and breeds: Buckfast bee · Carniolan honey bee · European dark bee · Italian bee · Maltese honey bee · Africanized bee · Apis mellifera scutellata · Honey bee race
Cultivation: Beekeeping · Apiology · Apiary · Beehive · Langstroth hive · Top-bar hive · Beeswax · Honey · Honey extraction · Honey extractor
Lists: Topics in beekeeping · Diseases of the honey bee
Sericulture
Sericulture, or silk farming, is the rearing of silkworms for the production of raw silk.
Although there are several commercial species of silkworms, Bombyx mori is the most
widely used and intensively studied. According to Confucian texts, the discovery of silk
production by B. mori dates to about 2700 BC, although archaeological records point to silk
cultivation as early as the Yangshao period (c. 5000 – 3000 BC).[1] By about the first half of the
1st century AD it had reached ancient Khotan,[2] and by AD 300 the practice had been
established in India.[citation needed] Later it was introduced to Europe, the Mediterranean and
other Asiatic countries. Sericulture has become one of the most important cottage
industries in a number of countries like China, Japan, India, Korea, Brazil, Russia, Italy and
France. Today, China and India are the two main producers, together manufacturing more
than 60% of the world production each year.
Production
Silkworm larvae are fed mulberry leaves, and, after the fourth molt, climb a twig placed near
them and spin their silken cocoons. The silk is a continuous-filament fiber consisting of
fibroin protein, secreted from two salivary glands in the head of each larva, and a gum
called sericin, which cements the two filaments together. The sericin is removed by placing
the cocoons in hot water, which frees the silk filaments and readies them for reeling. The
immersion in hot water also kills the silkworm larvae.
Single filaments are combined to form yarn. This yarn is drawn under tension through
several guides and wound onto reels. Finally, the yarn is dried, and the now raw silk is
packed according to quality.
Stages of production
The stages of production are as follows:
1. The silk moth lays eggs.
2. When the eggs hatch, the caterpillars are fed mulberry leaves.
3. When the silkworms are about 25 days old, they are 10,000 times heavier than when they
hatched. They are now ready to spin a silk cocoon.
4. The silk is produced in two glands in the silkworm's head and then forced out in liquid form
through openings called spinnerets.
5. The silk solidifies when it comes in contact with the air.
6. The silkworm spins approximately 1 mile of filament and completely encloses itself in a
cocoon in about two or three days. Due to quality restrictions, the amount of usable silk in
each cocoon is small; as a result, about 5,500 silkworms are required to produce 1 kg of silk
(see the rough calculation after this list).
7. The silk is obtained from the undamaged cocoons by brushing the cocoon to find the outside
ends of the filament.
8. The silk filaments are then wound on a reel. One cocoon contains approximately 1,000 yards
of silk filament; the silk at this stage is known as raw silk. Just one thread consists of 48
individual silk filaments, and a whole cocoon can yield at least 4,000 yards of filament in total.
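Taken together, these figures imply a very small yield of silk per cocoon and an enormous total length of filament behind even a single kilogram of raw silk. The sketch below is just back-of-the-envelope arithmetic using the approximate numbers quoted above; the metric conversion of 1,000 yards to about 914 m is a rounded assumption, not a figure from this text.

```python
# Rough yield arithmetic implied by the figures quoted in this section.
COCOONS_PER_KG_RAW_SILK = 5500        # about 5,500 silkworms (cocoons) per 1 kg of raw silk
REELED_FILAMENT_PER_COCOON_M = 914.0  # "approximately 1,000 yards" of filament per cocoon

raw_silk_per_cocoon_g = 1000.0 / COCOONS_PER_KG_RAW_SILK
filament_behind_1_kg_km = COCOONS_PER_KG_RAW_SILK * REELED_FILAMENT_PER_COCOON_M / 1000.0

print(f"raw silk per cocoon: about {raw_silk_per_cocoon_g:.2f} g")
print(f"filament reeled for 1 kg of raw silk: about {filament_behind_1_kg_km:,.0f} km")
```

On these assumptions each cocoon contributes only about 0.18 g of raw silk, and roughly 5,000 km of filament stands behind every kilogram produced.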
Cruelty towards silkworms
Strict Buddhists refrain from wearing or using silk products, as the silkworm pupae are killed in the
process of making silk. Animal activists[which?] say that the practice of boiling cocoons in hot water for silk is
cruel.
Silkworm diseases
Nosema bombycis is a microsporidian that kills 100% of silkworms hatched from infected eggs. The
parasite can be carried over from worms to moths and then to eggs and worms again, and it enters
silkworms through contaminated food. If silkworms pick up the microsporidium in the worm stage, there
are no visible symptoms; however, infected mother moths pass the disease on to their eggs, and 100% of the
worms hatching from diseased eggs die in the worm stage. It is therefore extremely important to rule out all
eggs from infected moths by checking the moth's body fluid under a microscope.
Beauveria bassiana (formerly Botrytis bassiana) is a fungus that destroys the entire silkworm body. The
fungus usually appears when silkworms are reared in cold, humid conditions. The disease is not passed on
to the eggs from the moths, because infected silkworms cannot survive to become moths and lay eggs.
The fungus can also infect other insects.
Grasserie:
1. If grasserie is observed in the chawkie stage, the chawkie larvae must have been infected while
hatching or during chawkie rearing.
2. Chawkie larvae may also get infected when the silkworm egg surface is not disinfected.
3. The larvae likewise get infected when the silkworm rearing house is not disinfected and hygiene is not
practiced effectively during chawkie rearing.
4. The disease develops faster in early instar rearing.
Pebrine: Pebrine is a disease caused by a parasitic microsporidian, Nosema bombycis Nägeli.
1. Diseased larvae show slow growth and an undersized body.
2. Diseased larvae have a pale and flaccid body, and tiny black spots appear on the larval integument.
3. Dead larvae remain rubbery and do not undergo putrefaction shortly after death.
Temperature adaptation
Temperature adaptation is the ability of animals to survive and function at widely different temperatures
as a result of specific physiological adaptations. Temperature is an all-pervasive
attribute of the environment that limits the activity, distribution, and survival
of animals.
Changes in temperature influence biological systems, both by determining the
rate of chemical reactions and by specifying equilibria. Because temperature
exerts a greater effect upon the percentage of molecules that possess
sufficient energy to react (that is, to exceed the activation energy) than upon
the average kinetic energy of the system, modest reductions in temperature
(for example, from 77 to 59°F or from 25 to 15°C, corresponding to only a 3%
reduction in average kinetic energy) produce a marked depression (two- to
threefold) in reaction rate. In addition, temperature specifies the equilibria
between the formation and disruption of the noncovalent (electrostatic,
hydrophobic, and hydrogen-bonding) interactions that stabilize both the higher
levels of protein structure and macromolecular aggregations such as biological
membranes. Maintenance of an appropriate structural flexibility is a
requirement for both enzyme catalysis and membrane function, yet cold
temperatures constrain while warm temperatures relax the conformational
flexibility of both proteins and membrane lipids, thereby perturbing biological
function. See Cell membranes, Enzyme
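The disproportion described above (a roughly 3% fall in average kinetic energy producing a two- to threefold fall in reaction rate) follows from the exponential Arrhenius dependence of reaction rate on temperature, k proportional to exp(-Ea/RT). The short sketch below makes the numbers concrete; the activation energies of 50 and 80 kJ/mol are assumed, textbook-typical values for enzyme-catalysed reactions, not figures taken from this passage.

```python
import math

R = 8.314                        # gas constant, J/(mol*K)
T_WARM, T_COLD = 298.15, 288.15  # 25 °C and 15 °C expressed in kelvin

# Average kinetic energy scales linearly with absolute temperature.
ke_drop = 1.0 - T_COLD / T_WARM
print(f"drop in average kinetic energy: {ke_drop:.1%}")          # about 3%

# Arrhenius: k = A * exp(-Ea/(R*T)); the warm/cold rate ratio depends only on Ea.
for ea_kj_per_mol in (50, 80):   # assumed activation energies
    ea = ea_kj_per_mol * 1000.0
    ratio = math.exp(ea / R * (1.0 / T_COLD - 1.0 / T_WARM))
    print(f"Ea = {ea_kj_per_mol} kJ/mol: rate falls {ratio:.1f}-fold on cooling 25 -> 15 °C")
```

With these assumed activation energies, cooling from 25 to 15 °C depresses the rate about 2.0- to 3.1-fold, in line with the two- to threefold figure quoted above.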
Animals are classified into two broad groups depending on the factors that
determine body temperature. For ectotherms, body temperature is
determined by sources of heat external to the body; levels of resting
metabolism (and heat production) are low, and mechanisms for retaining heat
are limited. Such animals are frequently termed poikilothermic or cold-blooded, because the body temperature often conforms to the temperature of
the environment. In contrast, endotherms produce more metabolic heat and
possess specialized mechanisms for heat retention. Therefore, body
temperature is elevated above ambient temperature; some endotherms
(termed homeotherms or warm-blooded animals) maintain a relatively
constant body temperature. There is no natural taxonomic division between
ecto- and endotherms. Most invertebrates, fish, amphibians, and reptiles are
ectotherms, while true homeothermy is restricted to birds and mammals.
However, flying insects commonly elevate the temperature of their thoracic
musculature prior to and during flight (to 96°F or 36°C), and several species of
tuna retain metabolic heat in their locomotory musculature via a vascular
countercurrent heat exchanger. See Hibernation, Thermoregulation
Adaptation
Adaptation is the evolutionary process whereby a population becomes better suited to its
habitat. This process takes place over many generations,[3] and is one of the basic phenomena
of biology.
The term adaptation may also refer to a feature which is especially important for an
organism's survival.[5] For example, the adaptation of horses' teeth to the grinding of grass, or
their ability to run fast and escape predators. Such adaptations are produced in a variable
population by the better suited forms reproducing more successfully, that is, by natural
selection.
General principles
The significance of an adaptation can only be understood in relation to the total biology of the
species. Julian Huxley[6]
Adaptation is, first of all, a process, rather than a physical part of a body.[7] The distinction
may be seen in an internal parasite (such as a fluke), where the bodily structure is greatly
simplified, but nevertheless the organism is highly adapted to its unusual environment. From
this we see that adaptation is not just a matter of visible traits: in such parasites critical
adaptations take place in the life-cycle, which is often quite complex.[8] However, as a
practical term, adaptation is often used for the product: those features of a species which
result from the process. Many aspects of an animal or plant can be correctly called
adaptations, though there are always some features whose function is in doubt. By using the
term adaptation for the evolutionary process, and adaptive trait for the bodily part or
function (the product), the two senses of the word may be distinguished.
Adaptation is one of the two main processes that explain the diverse species we see in
biology, such as the different species of Darwin's finches. The other is speciation (species-splitting or cladogenesis), caused by geographical isolation or some other mechanism.[9][10] A
favourite example used today to study the interplay of adaptation and speciation is the
evolution of cichlid fish in African lakes, where the question of reproductive isolation is
much more complex.[11][12]
Adaptation is not always a simple matter, where the ideal phenotype evolves for a given
external environment. An organism must be viable at all stages of its development and at all
stages of its evolution. This places constraints on the evolution of development, behaviour
and structure of organisms. The main constraint, over which there has been much debate, is
the requirement that each genetic and phenotypic change during evolution should be
relatively small, because developmental systems are so complex and interlinked. However, it
is not clear what "relatively small" should mean; for example, polyploidy in plants is a
reasonably common large genetic change.[13] The origin of the symbiosis of multiple micro-organisms to form a eukaryotic cell is a more exotic example.[14]
All adaptations help organisms survive in their ecological niches.[15] These adaptive traits
may be structural, behavioral or physiological. Structural adaptations are physical features of
an organism (shape, body covering, armament; and also the internal organization).
Behavioural adaptations are composed of inherited behaviour chains and/or the ability to
learn: behaviours may be inherited in detail (instincts), or a tendency for learning may be
inherited (see neuropsychology). Examples: searching for food, mating, vocalizations.
Physiological adaptations permit the organism to perform special functions (for instance,
making venom, secreting slime, phototropism); but also more general functions such as
growth and development, temperature regulation, ionic balance and other aspects of
homeostasis. Adaptation, then, affects all aspects of the life of an organism.
Definitions
The following definitions are mainly due to Theodosius Dobzhansky.
1. Adaptation is the evolutionary process whereby an organism becomes better able to
live in its habitat or habitats.[16]
2. Adaptedness is the state of being adapted: the degree to which an organism is able
to live and reproduce in a given set of habitats.[17]
3. An adaptive trait is an aspect of the developmental pattern of the organism which
enables or enhances the probability of that organism surviving and reproducing.[18]
Adaptedness and fitness
From the above definitions, it is clear that there is a relationship between adaptedness and
fitness (a key population genetics concept). Differences in fitness between genotypes predict
the rate of evolution by natural selection. Natural selection changes the relative frequencies of
alternative phenotypes, insofar as they are heritable.[19] Although the two are connected, the
one does not imply the other: a phenotype with high adaptedness may not have high fitness.
Dobzhansky mentioned the example of the Californian redwood, which is highly adapted, but
a relic species in danger of extinction.[16] Elliott Sober commented that adaptation was a
retrospective concept since it implied something about the history of a trait, whereas fitness
predicts a trait's future.[20]
1. Relative fitness. The average contribution to the next generation by a phenotype or
a class of phenotypes, relative to the contributions of other phenotypes in the
population. This is also known as Darwinian fitness, selective coefficient, and other
terms.
2. Absolute fitness. The absolute contribution to the next generation by a phenotype
or a class of phenotypes. Also known as the Malthusian parameter when applied to
the population as a whole.[21]
3. Adaptedness. The extent to which a phenotype fits its local ecological niche. This
can sometimes be tested through a reciprocal transplant experiment.
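The link between relative fitness and the rate of change in a population can be made concrete with the standard discrete-generation selection recursion for two heritable types, p' = p*w1 / (p*w1 + (1-p)*w2). The sketch below is purely illustrative: the fitness values (1.0 versus 0.9) and the starting frequency are invented numbers, not data from the text.

```python
def next_frequency(p, w1, w2):
    """One generation of selection on two heritable types with relative fitnesses w1, w2."""
    mean_fitness = p * w1 + (1 - p) * w2
    return p * w1 / mean_fitness

p = 0.01              # the fitter type starts rare
w1, w2 = 1.0, 0.9     # assumed relative fitnesses: a 10% advantage for type 1
for generation in range(101):
    if generation % 20 == 0:
        print(f"generation {generation:3d}: frequency of type 1 = {p:.3f}")
    p = next_frequency(p, w1, w2)
```

Even a modest 10% fitness advantage carries the rare type to near fixation within about a hundred generations, which is why differences in fitness predict the rate of evolution by natural selection.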
Brief history
Adaptation as a fact of life has been accepted by all the great thinkers who have tackled the
world of living organisms. It is their explanations of how adaptation arises that separates
these thinkers. A few of the most significant ideas:[22]
• Empedocles did not believe that adaptation required a final cause (~ purpose), but
"came about naturally, since such things survived". Aristotle, however, did believe in
final causes.
• In natural theology, adaptation was interpreted as the work of a deity, even as
evidence for the existence of God.[23] William Paley believed that organisms were
perfectly adapted to the lives they lead, an argument that shadowed Leibniz, who had
argued that God had brought about the best of all possible worlds. Voltaire's Dr
Pangloss[24] is a parody of this optimistic idea, and Hume also argued against
design.[25] The Bridgewater Treatises are a product of natural theology, though some
of the authors managed to present their work in a fairly neutral manner. The series
was lampooned by Robert Knox, who held quasi-evolutionary views, as the
Bilgewater Treatises. Darwin broke with the tradition by emphasising the flaws and
limitations which occurred in the animal and plant worlds.[26]
• Lamarck put forward a proto-evolutionary theory of the inheritance of acquired traits,
whose main purpose is to explain adaptations by natural means.[27] He proposed a
tendency for organisms to become more complex, moving up a ladder of progress,
plus "the influence of circumstances", usually expressed as use and disuse. His
evolutionary ideas, and those of Geoffroy, fail because they cannot be reconciled with
heredity. This was known even before Mendel by medical men interested in human
races (Wells, Lawrence), and especially by Weismann.
Many other students of natural history, such as Buffon, accepted adaptation, and some also
accepted evolution, without voicing their opinions as to the mechanism. This illustrates the
real merit of Darwin and Wallace, and secondary figures such as Bates, for putting forward a
mechanism whose significance had only been glimpsed previously. A century later,
experimental field studies and breeding experiments by researchers such as Ford and Dobzhansky
produced evidence that natural selection was not only the 'engine' behind adaptation, but was
a much stronger force than had previously been thought.[28][29][30]
Types of adaptation
Changes in habitat
Before Darwin, adaptation was seen as a fixed relationship between an organism and its
habitat. It was not appreciated that as the climate changed, so did the habitat; and as the
habitat changed, so did the biota. Also, habitats are subject to changes in their biota: for
example, invasions of species from other areas. The relative numbers of species in a given
habitat are always changing. Change is the rule, though much depends on the speed and
degree of the change.
When the habitat changes, three main things may happen to a resident population: habitat
tracking, genetic change or extinction. In fact, all three things may occur in sequence. Of
these three effects, only genetic change brings about adaptation.
Habitat tracking
When a habitat changes, the most common thing to happen is that the resident population
moves to another locale which suits it; this is the typical response of flying insects or oceanic
organisms, who have wide (though not unlimited) opportunity for movement.[32] This
common response is called habitat tracking. It is one explanation put forward for the periods
of apparent stasis in the fossil record (the punctuated equilibrium thesis).[33]
Genetic change
Genetic change is what occurs in a population when natural selection acts on the genetic
variability of the population. By this means, the population adapts genetically to its
circumstances.[34] Genetic changes may result in visible structures, or may adjust
physiological activity in a way that suits the changed habitat.
It is now clear that habitats and biota do frequently change. Therefore, it follows that the
process of adaptation is never finally complete.[35] Over time, it may happen that the
environment changes little, and the species comes to fit its surroundings better and better. On
the other hand, it may happen that changes in the environment occur relatively rapidly, and
then the species becomes less and less well adapted. Seen like this, adaptation is a genetic
tracking process, which goes on all the time to some extent, but especially when the
population cannot or does not move to another, less hostile area. Also, to a greater or lesser
extent, the process affects every species in a particular ecosystem.[36][37]
Van Valen thought that even in a stable environment, competing species had to constantly
adapt to maintain their relative standing. This became known as the Red Queen hypothesis.
Intimate relationships: co-adaptations
In co-evolution, where the existence of one species is tightly bound up with the life of
another species, new or 'improved' adaptations which occur in one species are often followed
by the appearance and spread of corresponding features in the other species. There are many
examples of this; the idea emphasises that the life and death of living things is intimately
connected, not just with the physical environment, but with the life of other species. These
relationships are intrinsically dynamic, and may continue on a trajectory for millions of years,
as has the relationship between flowering plants and insects (pollination).
Pollinator constancy: these honeybees selectively visit flowers from only one species, as can
be seen by the colour of the pollen in their baskets:
• Co-extinction
• Infection-resistance
• Mimicry
• Mutualism
• Parasite-host
• Pollination syndrome
• Predator-prey
• Symbiosis
The gut contents, wing structures, and mouthpart morphologies of fossilized beetles and flies
suggest that they acted as early pollinators. The association between beetles and angiosperms
during the early Cretaceous period led to parallel radiations of angiosperms and insects into
the late Cretaceous. The evolution of nectaries in late Cretaceous flowers signals the
beginning of the mutualism between hymenopterans and angiosperms.[38]
Mimicry
A and B show real wasps; the rest are mimics: three hoverflies and one beetle.
Henry Walter Bates' work on Amazonian butterflies led him to develop the first scientific
account of mimicry, especially the kind of mimicry which bears his name: Batesian
mimicry.[39] This is the mimicry by a palatable species of an unpalatable or noxious species.
A common example seen in temperate gardens is the hover-fly, many of which – though
bearing no sting – mimic the warning colouration of hymenoptera (wasps and bees). Such
mimicry does not need to be perfect to improve the survival of the palatable species.[40]
Bates, Wallace and Müller believed that Batesian and Müllerian mimicry provided evidence
for the action of natural selection, a view which is now standard amongst biologists.[41] All
aspects of this situation can be, and have been, the subject of research. [42] Field and
experimental work on these ideas continues to this day; the topic connects strongly to
speciation, genetics and development.[43]
• More on mimicry: Warning Colour and Mimicry, a lecture outline from University College London
The basic machinery: internal adaptations
There are some important adaptations to do with the overall coordination of the systems in
the body. Such adaptations may have significant consequences. Examples, in vertebrates,
would be temperature regulation, or improvements in brain function, or an effective immune
system. An example in plants would be the development of the reproductive system in
flowering plants.[44] Such adaptations may make the clade (monophyletic group) more viable
in a wide range of habitats. The acquisition of such major adaptations has often served as the
spark for adaptive radiation, and huge success for long periods of time for a whole group of
animals or plants.
Compromise and conflict between adaptations
It is a profound truth that Nature does not know best; that genetical evolution... is a story of
waste, makeshift, compromise and blunder. Peter Medawar[45]
All adaptations have a downside: horse legs are great for running on grass, but they can't
scratch their backs; mammals' hair helps temperature, but offers a niche for ectoparasites; the
only flying penguins do is under water. Adaptations serving different functions may be
mutually destructive. Compromise and make-shift occur widely, not perfection. Selection
pressures pull in different directions, and the adaptation that results is some kind of
compromise.[46]
Since the phenotype as a whole is the target of selection, it is impossible to improve
simultaneously all aspects of the phenotype to the same degree. Ernst Mayr.[47]
Consider the antlers of the Irish elk (often supposed to be far too large; in deer, antler size has
an allometric relationship to body size). Obviously antlers serve positively for defence
against predators, and to score victories in the annual rut. But they are costly in terms of
resource. Their size during the last glacial period presumably depended on the relative gain
and loss of reproductive capacity in the population of elks during that time.[48] Another
example: camouflage to avoid detection is destroyed when vivid colors are displayed at
mating time. Here the risk to life is counterbalanced by the necessity for reproduction.
An Indian Peacock's train in full display
The peacock's ornamental train (grown anew in time for each mating season) is a famous
adaptation. It must reduce his maneuverability and flight, and is hugely conspicuous; also, its
growth costs food resources. Darwin's explanation of its advantage was in terms of sexual
selection: "it depends on the advantage which certain individuals have over other individuals
of the same sex and species, in exclusive relation to reproduction."[49] The kind of sexual
selection represented by the peacock is called 'mate choice', with an implication that the
process selects the more fit over the less fit, and so has survival value.[50] The recognition of
sexual selection was for a long time in abeyance, but has been rehabilitated.[51] In practice,
the blue peafowl Pavo cristatus is a pretty successful species, with a big natural range in
India, so the overall outcome of their mating system is quite viable.
The conflict between the size of the human foetal brain at birth, (which cannot be larger than
about 400ccs, else it will not get through the mother's pelvis) and the size needed for an adult
brain (about 1400ccs), means the brain of a newborn child is quite immature. The most vital
things in human life (locomotion, speech) just have to wait while the brain grows and
matures. That is the result of the birth compromise. Much of the problem comes from our
upright bipedal stance, without which our pelvis could be shaped more suitably for birth.
Neanderthals had a similar problem.[52][53][54]
Shifts in function
Adaptation and function are two aspects of one problem. Julian Huxley[55]
Pre-adaptations
This occurs when a species or population has characteristics which (by chance) are suited for
conditions which have not yet arisen. For example, the polyploid rice-grass Spartina
townsendii is better adapted than either of its parent species to their own habitat of saline
marsh and mud-flats.[56] White Leghorn fowl are markedly more resistant to vitamin B
deficiency than other breeds.[57] On a plentiful diet there is no difference, but on a restricted
diet this preadaptation could be decisive.
Pre-adaptation may occur because a natural population carries a huge quantity of genetic
variability.[58] In diploid eukaryotes, this is a consequence of the system of sexual
reproduction, where mutant alleles get partially shielded, for example, by the selective
advantage of heterozygotes. Micro-organisms, with their huge populations, also carry a great
deal of genetic variability.
The first experimental evidence of the pre-adaptive nature of genetic variants in micro-organisms was provided by Salvador Luria and Max Delbrück, who developed fluctuation
analysis, a method to show the random fluctuation of pre-existing genetic changes that
conferred resistance to phage in the bacterium Escherichia coli.
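The logic of the fluctuation test can be sketched numerically. If resistance mutations arise spontaneously during growth, before the bacteria ever meet the phage, then occasional early 'jackpot' mutations leave very different numbers of resistant cells in parallel cultures, and the variance across cultures greatly exceeds the mean; mutations induced only on exposure would instead give narrow, Poisson-like counts. The mutation rate, culture size and number of cultures below are arbitrary illustrative assumptions, not Luria and Delbrück's actual experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def resistant_count(generations=20, mutation_rate=2e-7):
    """Grow one culture by synchronous doublings; resistance mutations arise at cell division."""
    sensitive, resistant = 1, 0
    for _ in range(generations):
        daughters = 2 * sensitive
        new_mutants = rng.binomial(daughters, mutation_rate)  # spontaneous mutations this generation
        sensitive = daughters - new_mutants
        resistant = 2 * resistant + new_mutants               # resistant lineages keep doubling
    return resistant

counts = np.array([resistant_count() for _ in range(50)])
print("mean resistant cells per culture: ", counts.mean())
print("variance across parallel cultures:", counts.var())   # far exceeds the mean
```

The heavy-tailed spread of counts, with a variance much larger than the mean, is the statistical signature that the resistant variants pre-dated the selective agent.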
Co-option of existing traits: exaptation
The classic example is the ear ossicles of mammals, which we know from palaeontological
and embryological studies originated in the upper and lower jaws and the hyoid of their
Synapsid ancestors, and further back still were part of the gill arches of early fish.[59][60] We
owe this esoteric knowledge to the comparative anatomists, who, a century ago, were at the
cutting edge of evolutionary studies.[61] The word exaptation was coined to cover these shifts
in function, which are surprisingly common in evolutionary history.[62] The origin of wings
from feathers that were originally used for temperature regulation is a more recent discovery
(see feathered dinosaurs).
Related issues
Non-adaptive traits
Some traits do not appear to be adaptive, that is, they appear to have a neutral or even
deleterious effect on fitness in the current environment. Because genes have pleiotropic
effects, not all traits may be functional (i.e. spandrels). Alternatively, a trait may have been
adaptive at some point in an organism's evolutionary history, but a change in habitats caused
what used to be an adaptation to become unnecessary or even a hindrance (maladaptations).
Such adaptations are termed vestigial.
Vestigial organs
Many organisms have vestigial organs, which are the remnants of fully functional structures
in their ancestors. As a result of changes in lifestyle the organs became redundant, and are
either not functional or reduced in functionality. With the loss of function goes the loss of
positive selection, and the subsequent accumulation of deleterious mutations. Since any
structure represents some kind of cost to the general economy of the body, an advantage may
accrue from their elimination once they are not functional. Examples: wisdom teeth in
humans; the loss of pigment and functional eyes in cave fauna; the loss of structure in
endoparasites.[63]
Fitness landscapes
Main article: Fitness landscape
Sewall Wright proposed that populations occupy adaptive peaks on a fitness landscape. In
order to evolve to another, higher peak, a population would first have to pass through a valley
of maladaptive intermediate stages.[64] A given population might be "trapped" on a peak that
is not optimally adapted.
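Wright's metaphor can be illustrated with a toy one-dimensional landscape: a population that can only take small uphill steps climbs to the nearest peak and stays there, even though a higher peak exists on the far side of a fitness valley. The landscape function, step size and starting point below are arbitrary choices made only for illustration.

```python
import math

def fitness(x):
    # Hypothetical two-peaked landscape: a low peak near x = 1 and a higher peak near x = 4.
    return math.exp(-(x - 1.0) ** 2) + 2.0 * math.exp(-(x - 4.0) ** 2)

def adaptive_walk(x, step=0.05, iterations=500):
    """Greedy walk: accept a small move only if it increases fitness."""
    for _ in range(iterations):
        for candidate in (x - step, x + step):
            if fitness(candidate) > fitness(x):
                x = candidate
    return x

start = 0.0
peak = adaptive_walk(start)
print(f"population starting at x = {start} ends near x = {peak:.2f}, "
      f"fitness {fitness(peak):.2f} (the global optimum lies near x = 4, fitness about 2)")
```

The walk halts on the lower peak near x = 1 because every small step towards the higher peak first passes through the maladaptive valley, which is exactly the 'trapping' Wright described.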
Extinction
Main article: Extinction
If a population cannot move or change sufficiently to preserve its long-term viability, then
obviously, it will become extinct, at least in that locale. The species may or may not survive
in other locales. Species extinction occurs when the death rate over the entire species
(population, gene pool ...) exceeds the birth rate for a long enough period for the species to
disappear. It was an observation of Van Valen that groups of species tend to have a
characteristic and fairly regular rate of extinction.[65]
Co-extinction
Just as we have co-adaptation, there is also co-extinction. Co-extinction refers to the loss of a
species due to the extinction of another; for example, the extinction of parasitic insects
following the loss of their hosts. Co-extinction can also occur when a flowering plant loses its
pollinator, or through the disruption of a food chain.[66] "Species co-extinction is a
manifestation of the interconnectedness of organisms in complex ecosystems ... While co-extinction may not be the most important cause of species extinctions, it is certainly an
insidious one".[67]
Flexibility, acclimatization, learning
Flexibility deals with the relative capacity of an organism to maintain itself in different
habitats: its degree of specialization. Acclimatization is a term used for automatic
physiological adjustments during life; learning is the term used for improvement in
behavioral performance during life. In biology these terms, rather than adaptation, are preferred for
changes during life which improve the performance of individuals. These adjustments are not
inherited genetically by the next generation.
Adaptation, on the other hand, occurs over many generations; it is a gradual process caused
by natural selection which changes the genetic make-up of a population so the collective
performs better in its niche.
Flexibility
Populations differ in their phenotypic plasticity, which is the ability of an organism with a
given genotype to change its phenotype in response to changes in its habitat, or to its move to
a different habitat.[68][69]
To a greater or lesser extent, all living things can adjust to circumstances. The degree of
flexibility is inherited, and varies to some extent between individuals. A highly specialized
animal or plant lives only in a well-defined habitat, eats a specific type of food, and cannot
survive if its needs are not met. Many herbivores are like this; extreme examples are koalas
which depend on eucalyptus, and pandas which require bamboo. A generalist, on the other
hand, eats a range of food, and can survive in many different conditions. Examples are
humans, rats, crabs and many carnivores. The tendency to behave in a specialized or
exploratory manner is inherited – it is an adaptation.
Rather different is developmental flexibility: "An animal or plant is developmentally flexible
if when it is raised or transferred to new conditions it develops so that it is better fitted to
survive in the new circumstances".[70] Once again, there are huge differences between
species, and the capacities to be flexible are inherited.
Acclimatization
If humans move to a higher altitude, respiration and physical exertion become a problem, but
after spending time in high altitude conditions they acclimatize to the pressure by increasing
production of red blood corpuscles. The ability to acclimatize is an adaptation, but not the
acclimatization itself. Fecundity goes down, but deaths from some tropical diseases also go
down.
Over a longer period of time, some people will reproduce better at these high altitudes than
others. They will contribute more heavily to later generations. Gradually the whole
population becomes adapted to the new conditions. This we know takes place, because the
performance of long-term communities at higher altitude is significantly better than the
performance of new arrivals, even when the new arrivals have had time to make
physiological adjustments.[71]
Some kinds of acclimatization happen so rapidly that they are better called reflexes. The
rapid colour changes in some flatfish, cephalopods, chameleons are examples.[72]
Learning
Social learning is supreme for humans, and is possible for quite a few mammals and birds: of
course, that does not involve genetic transmission except to the extent that the capacities are
inherited. Similarly, the capacity to learn is an inherited adaptation, but not what is learnt; the
capacity for human speech is inherited, but not the details of language.
Function and teleonomy
Adaptation raises some issues concerning how biologists use key terms such as function.
Function
To say something has a function is to say something about what it does for the organism,
obviously. It also says something about its history: how it has come about. A heart pumps
blood: that is its function. It also emits sound, which is just an ancillary side-effect. That is
not its function. The heart has a history (which may be well or poorly understood), and that
history is about how natural selection formed and maintained the heart as a pump. Every
aspect of an organism that has a function has a history. Now, an adaptation must have a
functional history: therefore we expect it must have undergone selection caused by relative
survival in its habitat. It would be quite wrong to use the word adaptation about a trait which
arose as a by-product.[73][74]
It is widely regarded as unprofessional for a biologist to say something like "A wing is for
flying", although that is their normal function. A biologist would be conscious that sometime
in the remote past feathers on a small dinosaur had the function of retaining heat, and that
later many wings were not used for flying (e.g. penguins, ostriches). So, the biologist would
rather say that the wings on a bird or an insect usually had the function of aiding flight. That
would carry the connotation of being an adaptation with a history of evolution by natural
selection.
Teleonomy
Teleonomy is a term invented to describe the study of goal-directed functions which are not
guided by the conscious forethought of man or any supernatural entity. It is contrasted with
Aristotle's teleology, which has connotations of intention, purpose and foresight. Evolution is
teleonomic; adaptation hoards hindsight rather than foresight. The following is a definition
for its use in biology:
Teleonomy: The hypothesis that adaptations arise without the existence of a prior
purpose, but by the action of natural selection on genetic variability.[75]
The term may have been suggested by Colin Pittendrigh in 1958;[76] it grew out of
cybernetics and self-organising systems. Ernst Mayr, George C. Williams and Jacques
Monod picked up the term and used it in evolutionary biology.[77][78][79][80]
Philosophers of science have also commented on the term. Ernest Nagel analysed the concept
of goal-directedness in biology;[81] and David Hull commented on the use of teleology and
teleonomy by biologists:
Haldane can be found remarking, "Teleology is like a mistress to a biologist: he
cannot live without her but he’s unwilling to be seen with her in public". Today the
mistress has become a lawfully wedded wife. Biologists no longer feel obligated to
apologize for their use of teleological language; they flaunt it. The only concession
which they make to its disreputable past is to rename it ‘teleonomy’.[82]
Energy flow (ecology)
In ecology, energy flow, also called the calorific flow, refers to the flow of energy through a
food chain. In an ecosystem, ecologists seek to quantify the relative importance of different
component species and feeding relationships.
A general energy flow scenario follows:
• Solar energy is fixed by the photoautotrophs, called primary producers, like green
plants. Primary consumers absorb most of the stored energy in the plant through digestion, and
transform it into the form of energy they need, such as adenosine triphosphate (ATP),
through respiration. A part of the energy received by primary consumers, herbivores,
is converted to body heat (an effect of respiration), which is radiated away and lost
from the system. The loss of energy through body heat is far greater in warm-blooded
animals, which must eat much more frequently than those that are cold-blooded.
Energy loss also occurs in the expulsion of undigested food (egesta) by excretion or
regurgitation.
• Secondary consumers, carnivores, then consume the primary consumers, although
omnivores also consume primary producers. Energy that had been used by the
primary consumers for growth and storage is thus absorbed into the secondary
consumers through the process of digestion. As with primary consumers, secondary
consumers convert this energy into a more suitable form (ATP) during respiration.
Again, some energy is lost from the system, since energy which the primary
consumers had used for respiration and regulation of body temperature cannot be
utilised by the secondary consumers.
• Tertiary consumers, which may or may not be apex predators, then consume the
secondary consumers, with some energy passed on and some lost, as with the lower
levels of the food chain.
• The final links in the food chain are the decomposers, which break down the organic matter
of the tertiary consumers (or whichever consumer is at the top of the chain) and
release nutrients into the soil. They also break down plants, herbivores and carnivores
that were not eaten by organisms higher on the food chain, as well as the undigested
food that is excreted by herbivores and carnivores. Saprotrophic bacteria and fungi are
decomposers, and play a pivotal role in the nitrogen and carbon cycles.
The energy is passed on from trophic level to trophic level and each time about 90% of the
energy is lost, with some being lost as heat into the environment (an effect of respiration) and
some being lost as incompletely digested food (egesta). Therefore, primary consumers get
about 10% of the energy produced by autotrophs, while secondary consumers get 1% and
tertiary consumers get 0.1%. This means the top consumer of a food chain receives the least
energy, as a lot of the food chain's energy has been lost between trophic levels. This loss of
energy at each level limits typical food chains to only four to six links.
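The arithmetic behind these percentages is simply repeated multiplication by a transfer efficiency of roughly 10% at each trophic step. A minimal sketch (the starting figure of 10,000 energy units fixed by the producers is an arbitrary assumption):

```python
# Roughly 10% of the energy at each trophic level is passed on to the next level.
TRANSFER_EFFICIENCY = 0.10
levels = ["primary producers", "primary consumers", "secondary consumers", "tertiary consumers"]

energy = 10_000.0   # arbitrary units of energy fixed by the producers
for level in levels:
    print(f"{level:20s}: {energy:8.1f} units")
    energy *= TRANSFER_EFFICIENCY   # ~90% lost as heat (respiration) and egesta at each step
```

Running this reproduces the 100% / 10% / 1% / 0.1% cascade described above and shows why food chains rarely support more than four to six links.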
"Symbiology" redirects here. For the study or use of things that represent other things by
association, resemblance, or convention, see Symbology.
Symbiosis
In a symbiotic mutualism, the clownfish feeds on small invertebrates which otherwise
potentially could harm the sea anemone, and the fecal matter from the clownfish provides
nutrients to the sea anemone.
Symbiosis (from the Greek: σύν syn "with"; and βίωσις biosis "living") is a close and often
long-term interaction between different biological species. In 1877 Bennett used the word
symbiosis (which previously had been used of people living together in community) to
describe the mutualistic relationship in lichens.[1] In 1879 the German mycologist
Heinrich Anton de Bary defined it as "the living together of unlike organisms."[2][3] The
definition of symbiosis is in flux, and the term has been applied to a wide range of biological
interactions. A symbiotic relationship may be categorized as mutualistic, commensal, or
parasitic in nature.[4][5] Some symbiotic relationships are obligate, meaning that both
symbionts entirely depend on each other for survival. For example, many lichens consist of
fungal and photosynthetic symbionts that cannot live on their own.[2][6][7][8] Others are
facultative, meaning that they can but do not have to live with the other organism.
Symbiotic relationships include those associations in which one organism lives on another
(ectosymbiosis, such as mistletoe), or where one partner lives inside the other
(endosymbiosis, such as lactobacilli and other bacteria in humans or zooxanthelles in corals).
Symbiotic relationships may be either obligate, i.e., necessary for the survival of at least one
of the organisms involved, or facultative, where the relationship is beneficial but not essential
for survival of the organisms.[9][10]
Physical interaction
Alder tree root nodule
Endosymbiosis is any symbiotic relationship in which one symbiont lives within the tissues
of the other, either in the intracellular space or extracellularly.[10][11] Examples are rhizobia,
nitrogen-fixing bacteria that live in root nodules on legume roots; actinomycete nitrogen-fixing bacteria called Frankia, which live in alder tree root nodules; single-celled algae inside
reef-building corals; and bacterial endosymbionts that provide essential nutrients to about
10%–15% of insects.
Ectosymbiosis, also referred to as exosymbiosis, is any symbiotic relationship in which the
symbiont lives on the body surface of the host, including the inner surface of the digestive
tract or the ducts of exocrine glands.[10][12] Examples of this include ectoparasites such as lice,
commensal ectosymbionts such as the barnacles that attach themselves to the jaw of baleen
whales, and mutualist ectosymbionts such as cleaner fish.
Mutualism
Hermit crab, Calcinus laevimanus, with sea anemone.
Mutualism is any relationship between individuals of different species where both individuals
derive a benefit.[13] Generally, only lifelong interactions involving close physical and
biochemical contact can properly be considered symbiotic. Mutualistic relationships may be
either obligate for both species, obligate for one but facultative for the other, or facultative for
both. Many biologists restrict the definition of symbiosis to close mutualist relationships.
An Egyptian Plover picking the teeth of a Nile crocodile
A large percentage of herbivores have mutualistic gut fauna that help them digest plant
matter, which is more difficult to digest than animal prey.[9] Coral reefs are the result of
mutualisms between coral organisms and various types of algae that live inside them.[14] Most
land plants and land ecosystems rely on mutualisms between the plants, which fix carbon
from the air, and mycorrhizal fungi, which help in extracting minerals from the ground.[15]
An example of mutual symbiosis is the relationship between the ocellaris clownfish that
dwell among the tentacles of Ritteri sea anemones. The territorial fish protects the anemone
from anemone-eating fish, and in turn the stinging tentacles of the anemone protect the
clownfish from its predators. A special mucus on the clownfish protects it from the stinging
tentacles.[16]
Another example is the goby fish, which sometimes lives together with a shrimp. The shrimp
digs and cleans up a burrow in the sand in which both the shrimp and the goby fish live. The
shrimp is almost blind, leaving it vulnerable to predators when above ground. In case of
danger the goby fish touches the shrimp with its tail to warn it. When that happens both the
shrimp and goby fish quickly retract into the burrow.[17]
One of the most spectacular examples of obligate mutualism is between the siboglinid tube
worms and symbiotic bacteria that live at hydrothermal vents and cold seeps. The worm has
no digestive tract and is wholly reliant on its internal symbionts for nutrition. The bacteria
oxidize either hydrogen sulfide or methane which the host supplies to them. These worms
were discovered in the late 1980s at the hydrothermal vents near the Galapagos Islands and
have since been found at deep-sea hydrothermal vents and cold seeps in all of the world's
oceans.[18]
There are also many types of tropical and sub-tropical ants that have evolved very complex
relationships with certain tree species.[19]
Wildlife
Various species of deer are commonly seen wildlife across the Americas and Eurasia.
A Bottlenose Dolphin (Tursiops truncatus) surfs the wave of a research boat on the Banana
River, near the Kennedy Space Center, and is an example of wildlife.
Wildlife includes all non-domesticated plants, animals and other organisms. Domesticating
wild plant and animal species for human benefit has occurred many times all over the planet,
and has a major impact on the environment, both positive and negative.
Wildlife can be found in all ecosystems. Deserts, rain forests, plains, and other areas
including the most developed urban sites, all have distinct forms of wildlife. While the term
in popular culture usually refers to animals that are untouched by human factors, most
scientists agree that wildlife around the world is impacted by human activities.
Humans have historically tended to separate civilization from wildlife in a number of ways
including the legal, social, and moral sense. This has been a reason for debate throughout
recorded history. Religions have often declared certain animals to be sacred, and in modern
times concern for the natural environment has provoked activists to protest the exploitation of
wildlife for human benefit or entertainment. Literature has also made use of the traditional
human separation from wildlife.
Food, pets, traditional medicines
Anthropologists believe that the Stone Age peoples and hunter-gatherers relied on wildlife,
both plant and animal, for their food. In fact, some species may have been hunted to
extinction by early human hunters. Today, hunting, fishing, or gathering wildlife is still a
significant food source in some parts of the world. In other areas, hunting and non-commercial fishing are mainly seen as a sport or recreation, with the edible meat as mostly a
side benefit.[citation needed] Meat sourced from wildlife that is not traditionally regarded as game
is known as bush meat. The increasing demand for wildlife as a source of traditional food in
East Asia is decimating populations of sharks, primates, pangolins and other animals, which
are believed to have aphrodisiac properties.
In November 2008, almost 900 plucked and "oven-ready" owls and other protected wildlife
species were confiscated by the Department of Wildlife and National Parks in Malaysia,
according to TRAFFIC. The animals were believed to be bound for China, to be sold in wild
meat restaurants. Most are listed in CITES (the Convention on International Trade in
Endangered Species of Wild Fauna and Flora) which prohibits or restricts such trade.
"Malaysia is home to a vast array of amazing wildlife. However, illegal hunting
and trade poses a threat to Malaysia's natural diversity."
—Chris S. Shepherd[1]
A November 2008 report from biologist and author Sally Kneidel, PhD, documented
numerous wildlife species for sale in informal markets along the Amazon River, including
wild-caught marmosets sold for as little as $1.60 (5 Peruvian soles).[2] Many Amazon species,
including peccaries, agoutis, turtles, turtle eggs, anacondas, armadillos, etc., are sold
primarily as food. Others in these informal markets, such as monkeys and parrots, are
destined for the pet trade, often smuggled into the United States. Still other Amazon species
are popular ingredients in traditional medicines sold in local markets. The medicinal value of
animal parts is based largely on superstition.
Religion
Many wildlife species have spiritual significance in different cultures around the world, and
they and their products may be used as sacred objects in religious rituals. For example,
eagles, hawks and their feathers have great cultural and spiritual value to Native Americans
as religious objects.
Media
The Douglas Squirrel (Tamiasciurus douglasii) is an example of wildlife.
Wildlife has long been a common subject for educational television shows. National
Geographic specials appeared on CBS beginning in 1965, later moving to ABC and then
PBS. In 1963, NBC debuted Wild Kingdom, a popular program featuring zoologist Marlin
Perkins as host. The BBC natural history unit in the UK was a similar pioneer, the first
wildlife series LOOK presented by Sir Peter Scott, was a studio-based show, with filmed
inserts. It was in this series that David Attenborough first made his appearance which led to
the series Zoo Quest during which he and cameraman Charles Lagus went to many exotic
places looking for elusive wildlife—notably the Komodo dragon in Indonesia and lemurs in
Madagascar. Since 1984, the Discovery Channel and its spin off Animal Planet in the USA
have dominated the market for shows about wildlife on cable television, while on PBS the
NATURE strand made by WNET-13 in New York and NOVA by WGBH in Boston are
notable. See also Nature documentary. Wildlife television is now a multi-million dollar
industry with specialist documentary film-makers in many countries including UK, USA,
New Zealand NHNZ, Australia, Austria, Germany, Japan, and Canada. There are many
magazines which cover wildlife including National Wildlife Magazine, Birds & Blooms,
Birding (magazine), and Ranger Rick (for children).
Tourism
Fuelled by media coverage and the inclusion of conservation education in early school
curricula, wildlife tourism and ecotourism have fast become a popular industry, generating
substantial income for developing nations with rich wildlife, especially in Africa and India. This
increasingly popular form of tourism provides a much-needed incentive for poorer nations to
conserve their rich wildlife heritage and its habitat.
Destruction
Map of early human migrations, according to mitochondrial population genetics. Numbers
are millennia before the present.
This subsection focuses on anthropogenic forms of wildlife destruction.
Exploitation of wild populations has been a characteristic of modern man since our exodus
from Africa 130,000 – 70,000 years ago. The rate of extinctions of entire species of plants
and animals across the planet has been so high in the last few hundred years that it is widely
considered that we are in the sixth great extinction event on this planet: the Holocene Mass
Extinction.
Destruction of wildlife does not always lead to the extinction of the species in question;
however, the dramatic loss of entire species across Earth dominates any review of wildlife
destruction, as extinction is the level of damage to a wild population from which there is no
return.
The four most general causes of the destruction of wildlife are overkill, habitat destruction
and fragmentation, the impact of introduced species, and chains of extinction.[3]
Overkill
Overkill occurs whenever hunting takes place at rates greater than the reproductive capacity of the population being exploited. Its effects are often noticed much more dramatically in slow-growing populations, such as many larger species of fish. Initially, when a portion of a wild population is hunted, the remaining individuals experience an increased availability of resources (food, etc.), which increases growth and reproduction as density-dependent inhibition is lowered; hunting, fishing and so on lower the competition between members of a population. However, if hunting continues at a rate greater than the rate at which new members of the population can reach breeding age and produce more young, the population will begin to decrease in numbers.
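As a rough, hedged illustration of this harvest-versus-recruitment argument (not taken from the source), the sketch below uses a standard logistic growth model with a constant annual harvest; the parameter values and the function name simulate are illustrative assumptions, not figures for any real population.

```python
# Minimal sketch, assuming logistic growth with a constant annual harvest H.
# All numbers here are illustrative, not real population data.

def simulate(r=0.1, K=1000.0, H=20.0, N0=800.0, years=150):
    """Return yearly population sizes under logistic growth minus harvest."""
    N = N0
    history = [N]
    for _ in range(years):
        growth = r * N * (1.0 - N / K)   # density-dependent recruitment
        N = max(N + growth - H, 0.0)     # subtract the harvest, floor at zero
        history.append(N)
    return history

if __name__ == "__main__":
    # For these parameters the maximum sustainable yield is r*K/4 = 25 per year.
    print("H=20 (sustainable) final size:", round(simulate(H=20.0)[-1]))
    print("H=30 (overkill)    final size:", round(simulate(H=30.0)[-1]))
```

Under these illustrative numbers, a harvest below the maximum sustainable yield settles near a stable equilibrium, while a harvest above it drives the population steadily toward zero, which is the qualitative point the paragraph above makes.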
Populations that are confined to islands, whether literal islands or areas of habitat that are effectively an “island” for the species concerned, have also been observed to be at greater risk of dramatic population declines following unsustainable hunting.
Habitat destruction and fragmentation
Deforestation and increased road-building in the Amazon Rainforest are a significant concern
because of increased human encroachment upon wild areas, increased resource extraction and
further threats to biodiversity.
The habitat of any given species is considered its preferred area or territory. Many processes associated with human habitation of an area cause loss of this habitat and decrease the carrying capacity of the land for that species. In many cases these changes in land use cause a patchy break-up of the wild landscape. Agricultural land frequently displays this type of extremely fragmented, or relictual, habitat: farms sprawl across the landscape, with patches of uncleared woodland or forest dotted between occasional paddocks.
Examples of habitat destruction include grazing of bushland by farmed animals, changes to
natural fire regimes, forest clearing for timber production and wetland draining for city
expansion.
Impact of introduced species
Mice, cats, rabbits, dandelions and poison ivy are all examples of species that have become
invasive threats to wild species in various parts of the world.[citation needed] Frequently, species that are uncommon in their home range become out-of-control invaders in distant but similar climates. The reasons for this have not always been clear, and Charles Darwin felt it was
unlikely that exotic species would ever be able to grow abundantly in a place in which they
had not evolved. The reality is that the vast majority of species exposed to a new habitat do
not reproduce successfully. Occasionally, however, some populations do take hold and after a
period of acclimation can increase in numbers significantly, having destructive effects on
many elements of the native environment of which they have become part.
Chains of extinction
This final group concerns secondary effects. All wild populations of living things have many complex intertwining links with other living things around them. Large herbivorous animals such as the hippopotamus have populations of insectivorous birds that feed off the many parasitic insects that grow on the hippo. Should the hippo die out, so too will these groups of birds, leading to further destruction as other species dependent on the birds are affected. Also referred to as a domino effect, this series of chain reactions is by far the most destructive process that can occur in any ecological community.
Another example involves the black drongos and cattle egrets found in India. These birds feed on insects on the backs of cattle, which helps to keep the cattle disease-free. Destroying the nesting habitats of these birds would cause a decrease in the cattle population because of the spread of insect-borne diseases.