Chapter 7. Nutritional Properties of Minerals
Key:
In this chapter we focus on nutritional status, examining the various procedures used to make decisions about mineral requirements for an at-risk population and for an individual. From balanced diets and biomarkers of adequacy to signs of deficiency and toxicity, decisions regarding nutritional adequacy depend on data obtained from carefully designed experiments. Such decisions have meaning only when the recommended values are backed by sound scientific evidence.
Objectives:
1. To examine the various procedures used to evaluate mineral adequacy,
2. To determine the application, value and shortcomings of biomarkers,
3. To examine risk as a factor in inadequate and excessive intake,
4. To connect mineral status with the health of the individual and the population.
I. Assessing Nutrition Status: General Considerations
The goal of nutrition is to link diet and foods with health. This goal can be achieved by
knowing amounts of nutrient that will suffice to assure minimal risk of mineral deficiency or
toxicity in an individual or a population. Evaluating the level of minerals needed to achieve
these goals and the current mineral status of the individual are the two areas that require
attention in nutritional assessment (Fig. 7.1). With age as a variable, guidelines have been set to delineate optimal growth of the young or to maintain the status quo in adults. Using a purified-diet approach it has been possible to identify nutrients with essential roles, and by systematically eliminating minerals from these same diets it is possible to mimic deficiency states and relate the missing mineral to a symptom. Quantities that suppress the symptoms of a deficiency have been determined with a similar evaluation scheme. This evidence-based approach has been used to assemble tables with ranges for specific mineral nutrients. The current status of minerals in an ad hoc population of healthy individuals gives assurance that a certain percentage of a population will stay healthy and avoid the risk of insufficiency or excess when the dietary recommendation is met. But, as with all procedures, there are limitations and dangers of interpretation.

Figure 7.1. Flow chart for making nutritional assessments. Evaluating need draws on setting standards for optima, a population approach, and an experimental approach; evaluating the status of the individual draws on balance studies, biomarkers, and functional tests.
II. Determining the Nutritional Requirements
A. The Balanced Diet Approach to Assess Adequacy: Applications and Shortcomings
Matching intake with excretion is a mainstay of nutritional investigation. The procedure is fairly straightforward: one takes into account the quantity of a specific nutrient in
the diet and compares that to the amount excreted in feces or urine. Excretion represents
turnover, which is an index of replacement. Bodily systems are constantly turning over
components in an endless attempt to keep the system at optimal function. Minerals do not
escape this search for perfection. Figure 7.2 shows the balance scheme. Kin and Kout are rate
constants for input and output, i.e., absorption and excretion, respectively. The system is in
balance when Kin = Kout. When more is taken in than excreted, Kin > Kout and the system is said
to be in “positive balance”. Losing minerals at a rate faster than can be replaced, Kout > Kin,
puts the system in “negative balance”. Positive balance reflects growth, whereas negative balance reflects a wasting condition and could, if not corrected, lead to functional impairments
and the failure of specific systems. A certain amount of the mineral taken in can be transferred
to a storage site represented by C. Storage is not permanent, however, and a dynamic
equilibrium exists between B and C. When k1 = k-1, the amount retained is constant and
balance is achieved in this area as well. But, when k1 > k-1, retention is favored and the system
goes into positive balance. Similarly, when k-1 > k1, the system excretes more than it retains.
Thus, a negative balance impinges on the body’s stores of the mineral, making less available for
rapid mobilization. This is the blueprint for a mineral deficiency.
Figure 7.2. The balance scheme. The mineral is absorbed (A → B) with rate constant Kin and excreted (B → D) with rate constant Kout; the body pool B also exchanges with a storage pool C (retained), governed by the rate constants k1 and k-1.
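To make the bookkeeping behind Figure 7.2 concrete, the following sketch steps the two-pool scheme forward one day at a time. It is a minimal illustration with hypothetical rate constants and pool sizes, not physiological values: when absorbed intake matches output the pools hold steady (balance), and when intake falls short the body and storage pools are drawn down (negative balance).

```python
# A minimal sketch of the two-compartment balance scheme in Figure 7.2.
# All rate constants, pool sizes, and intake values are hypothetical, chosen
# only to illustrate positive vs. negative balance; they are not physiological data.

def simulate_balance(intake, k_out, k1, k_minus1, days=30):
    """Track the body pool (B) and storage pool (C) of a mineral over time.

    intake   -- amount absorbed per day (treated as a constant input to B)
    k_out    -- fraction of the body pool excreted per day
    k1       -- fraction of the body pool moved into storage per day
    k_minus1 -- fraction of the storage pool released back to the body per day
    """
    B, C = 100.0, 50.0          # arbitrary starting pools (mg)
    for _ in range(days):
        excreted = k_out * B
        to_store = k1 * B
        released = k_minus1 * C
        B += intake - excreted - to_store + released
        C += to_store - released
    return B, C

# Intake roughly matching excretion: pools stay near their start (balance).
print(simulate_balance(intake=10.0, k_out=0.10, k1=0.05, k_minus1=0.10))
# Low intake: both pools shrink, i.e., negative balance drawing on stores.
print(simulate_balance(intake=4.0, k_out=0.10, k1=0.05, k_minus1=0.10))
```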
Although balance studies would seem to be straightforward, there are concerns with
the interpretation of data from a balance study approach. Sometimes balance requires days or
even weeks to achieve and the end point is not always clear. To assume that intake and
excretion in a given period occur at uniform rates may be more hypothetical than real. For one,
excretion is under episodic control, not continuous control. Another concern is that the
mineral in question may be multifunctional and one of the functions could be compromised by
internal shifting between factors dependent on the mineral’s presence. This sometimes
becomes apparent when the results of a balance study are not in agreement with results of a
function test (see below). But by far the greatest weakness of a balance approach is
adaptation. A low intake can force an adjustment of the system to excrete less and absorb
more, whereas a high intake can do the opposite. Hence, the system may appear to be in
balance when it may actually be experiencing sub- or super-optimal levels of nutrients internally. These factors, taken together, make the balance study approach less reliable for assessing mineral status.
B. Clinical Approach
Clinical approaches rely mostly on observations performed in a clinical setting. A
suboptimal level of a mineral could cause the appearance of deficiency symptoms similar to
those described above. Anemia, diarrhea, loss of hair, scaly skin, diminished pigment,
substandard cognitive performance and immune system compromise are clinical signs of
mineral deficiencies that are readily observed and diagnosed. In clinical assessments it is
important to link the physical symptoms to an underlying biochemical flaw, a defective enzyme
in a pathway, for example. Only then is it possible to link the overt sign to some
specific component in a failed system.
C. Standards of Optimal Intake
One could argue that optimal performance by the system is the best way to determine
the amount of a specific mineral to meet dietary requirements. Performance judged by mental
acuity, physical adeptness, disease prevention, or even longevity is one such approach.
Assessments of the optimum can also be based on accepted standards of excellence. For
example, some would consider bovine milk nature’s perfect food. Hence, the quantity of
minerals in milk and eggs could set a standard with which to judge all other food sources for
mineral nutritional quality. From the standpoint of mineral nutrition, however, milk is not nature’s perfect food and falls well short of that category. Among the concerns are the very low iron and
copper content of milk and the poor absorbability of calcium caused by the whey proteins that
tend to precipitate in mild acid environments. There is also some concern that milk may be
high in sodium and other minerals. All this leads one to conclude that from a mineral
perspective, milk strays quite a bit from ideal and should not be used as a paradigm for setting
a standard. This conclusion could apply to other food sources that are regarded a priori as
standards of excellence.
III. Assessing Mineral Status of an Individual
Having viewed the candidates for determining the amount of mineral needed to achieve
optimal health status, we can now focus on four of the more common methods in practice
today for assessing an individual’s current status in minerals. These are: (1) measuring body
stores, (2) observing physical responses to increased intake, (3) performing functional assays that directly involve the mineral, and (4) reversing symptoms of an apparent deficiency. While each provides feedback on an individual’s current mineral status, each has its value and limitations, and seldom are all four used for any one mineral. Measuring body stores of a particular mineral, for example, is simply an assessment of what is there. A person whose intake of a certain mineral, iron for example, is subadequate would be suspected of having low body stores of iron. The response to an increased intake of a mineral gives insight into whether the current level is adequate to meet the system’s needs. When combined with a functional test that measures a factor dependent on the mineral’s presence, there is good assurance that the mineral in question is either adequate or subadequate. Finally, reversing the symptoms of a deficiency provides prima facie evidence for substandard levels of a mineral and further points to a biochemical factor that may underlie the deficiency symptoms.
A. Body Stores
The body has a limited capacity to store minerals for future use. This capacity is not the
same for all organs or all minerals. Body stores of iron, for example, can be judged by
measuring serum iron or serum ferritin, i.e., a simple blood test. Less reliable, perhaps, is the measurement of serum transferrin saturation with iron. The latter measures the transport protein that carries iron, whereas serum ferritin and the transferrin receptor are indices of the level of stored iron in the tissues. Both types of parameter can reveal deficiencies or adequacies in iron.
iron. While this may work for iron, a simple blood test may not work for zinc and other
microminerals. This is because only a fraction of the total body mineral may be in the serum.
For example, plasma zinc accounts for only about 0.1% of the body zinc load. Consequently, it
will be necessary to judge zinc by some other means, zinc-dependent enzymes, for example.
Body stores apply mainly to the macrominerals, minerals that are plentiful in body fluids or
minerals such as iron that amass inside cells. They clearly cannot be used to assess the status
of all microminerals.
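As an illustration of how store-based markers can be combined, the sketch below computes transferrin saturation from serum iron and total iron-binding capacity and makes a crude status call from ferritin and saturation together. The cutoff values are assumed, commonly cited adult screening thresholds used here only for illustration; clinical cutoffs vary with age, sex, and inflammation and are not given in this chapter.

```python
# A minimal sketch of combining iron-store biomarkers (serum ferritin and
# transferrin saturation) into a crude status call. The cutoffs below are
# assumed, illustrative screening values only, not authoritative clinical limits.

def transferrin_saturation(serum_iron_ug_dl, tibc_ug_dl):
    """Percent saturation of transferrin: serum iron / total iron-binding capacity."""
    return 100.0 * serum_iron_ug_dl / tibc_ug_dl

def classify_iron_status(ferritin_ng_ml, tsat_percent):
    if ferritin_ng_ml < 15 and tsat_percent < 16:
        return "depleted stores (suspect iron deficiency)"
    if ferritin_ng_ml > 300 and tsat_percent > 45:
        return "possible iron overload"
    return "stores apparently adequate"

tsat = transferrin_saturation(serum_iron_ug_dl=60, tibc_ug_dl=450)  # ~13% saturation
print(round(tsat, 1), classify_iron_status(ferritin_ng_ml=10, tsat_percent=tsat))
```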
B. Overt response to mineral intake
Measuring the daily growth rate of an experimental animal as a function of the mineral
intake is a common procedure in nutritional research. One obtains insight into the
“conditional” amount of the mineral for optimal growth. Referring to this amount as “conditional” implies that the level so observed will also depend on other factors, such as competing minerals in the diet, and the values obtained could reflect this and other influences. This happens when the fixed minerals are linked synergistically to the one under study. Calcium, magnesium, and phosphorus, for example, must be evaluated together (in toto) to obtain a value for the amount of any one of them that gives optimal growth (Figure 7.3). Optimal growth depends on the right
proportions, not the level of any one mineral. Adjusting only one without the others will define
the optimal level for just that one setting or condition. One way out of this dilemma is to set a
standard ratio for two of the minerals and vary the third to a point where a plateau in growth
rate is observed. Repeating this procedure for each mineral will give some insight into the
optimal amount for growth.
[Plot: daily weight gain (g) of guinea pigs versus the log of dietary magnesium (mg/100 g) for three calcium/phosphorus combinations: 0.9% Ca, 0.8% P (normal); 2.5% Ca, 1.7% P; and 3.2% Ca, 0.8% P. The apparent magnesium requirements for the three diets are roughly 1, 2.4, and 4.0 g/kg diet.]
Figure 7.3. Assessing the Dietary Requirement of Magnesium in Guinea Pigs. Note how the requirement of magnesium for an optimal growth rate depends on the concentrations of calcium and phosphorus in the diet. Raising calcium from 0.9% to 3.2% at constant phosphorus increases the magnesium requirement five times.
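The plateau procedure just described can be reduced to a simple rule of thumb. The sketch below, with invented growth data, holds the calcium and phosphorus levels fixed, scans increasing magnesium levels, and takes the requirement as the lowest level whose growth response reaches a chosen fraction of the plateau. The data, the 95%-of-plateau criterion, and the function name are assumptions for illustration, not values from the guinea pig study in Figure 7.3.

```python
# A minimal sketch of the "vary one mineral to a plateau" procedure described above.
# The dose-response numbers are invented; the rule used here (lowest dose reaching
# 95% of the plateau response) is just one reasonable convention.

def estimate_requirement(doses, gains, fraction_of_plateau=0.95):
    """Return the lowest dose whose growth response reaches the given
    fraction of the plateau (taken as the mean of the top two responses)."""
    paired = sorted(zip(doses, gains))
    plateau = sum(sorted(g for _, g in paired)[-2:]) / 2.0
    for dose, gain in paired:
        if gain >= fraction_of_plateau * plateau:
            return dose
    return None

# Hypothetical daily weight gains (g) at increasing dietary magnesium (mg/100 g)
doses = [30, 60, 120, 180, 240, 360, 600]
gains = [1.2, 2.5, 4.3, 5.6, 5.9, 6.0, 6.0]
print(estimate_requirement(doses, gains))  # -> 240 under these assumptions
```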
Microminerals can also be judged by overt responses, but valid quantitative judgments
are not always possible. A deficiency in copper, zinc or iron can stunt growth. Zinc, copper
and iron are antagonistic and interfere with one another’s bioavailability. Thus a value set for
copper must pay heed to the level of zinc and iron in the diet. Mineral interactions at the level
of the gut and internally make it hard to arrive at valid numbers for optimal or even
adequate amounts. Some microminerals, however, eschew detrimental interactions. Raising
the chromium content of a diet improves glucose tolerance possibly through a more efficient
utilization of insulin. Only chromium can cause that change. Very small amounts of minerals
added to test diets are sometimes just above the level of contaminants. This slight margin can blur the distinction between the optimal, adequate, and suboptimal ranges. More often than
not, micronutrient status is best judged by functional assays.
C. Biomarkers: Applications and Limitations
By definition a biomarker is an internal or external signal elicited in response to the
intake of a select nutrient. Ideally the nutrient in question is the only one that elicits a
response from the biomarker. Also important, the strength of that signal should be directly
related to the amount of mineral at the eliciting site, i.e., a variable that depends on the mineral
for expression. The importance of a biomarker in making a crucial decision in nutrition cannot
be overstated. If, for example, we wish to know the amount of iodine needed to prevent a
goiter from forming, we can measure thyroid hormone or thyroglobulin in the plasma. The
protein is a reliable measure because it precedes the formation of thyroid hormone.
Biomarkers allow us to confront a condition indirectly and perhaps more conveniently, yet still
lead to a valid conclusion regarding the amount of mineral needed.
There are other advantages to the use of biomarkers. For one, the biomarker can be
tested by instrumentation common to a clinical laboratory. A change in a biomarker level could
be the first clinical sign of abnormal mineral status. Second, a biomarker is an internal indicator and as such surpasses decisions based solely on the amount of mineral in the food source, which basically ignore the organism’s role entirely; an internal measure obviates concerns about absorption and transport in the assessment. Third, biomarkers can relate to a biochemical
impairment that leads to pathology. For example, a selenium-dependent enzyme is required to
synthesize the thyroid hormone T3 from T4. Either the enzyme or T3 in the plasma could be a
biomarker for a selenium requirement and a window into whether the current amount is adequate or subadequate.
The chief disadvantage of biomarkers is their sometimes dubious link to the nutrient under study. Iron intake, for example, is judged adequate based on blood hemoglobin concentration or blood cell count. Both fall below steady-state values when intake is low. Hemoglobin concentrations, however, depend on iron being mobilized to the site of hemoglobin biosynthesis, a function of transferrin. In order for iron to bind to transferrin, copper is needed to oxidize the iron to its ferric form. Transferrin saturation with iron, therefore, becomes a dubious biomarker for iron adequacy. Further confounding the transferrin marker is the observation that transferrin saturation can vary with the time of day: night time is conducive to low iron saturation, whereas early morning readings are much higher. This time dependence of transferrin saturation is believed to be the reason for between-laboratory discrepancies. Finally, one must consider that a physiological biomarker of low iron, outward signs of anemia, is itself subject to other factors such as a low intake of folate and vitamin B12 as well as copper. The problems encountered in defining iron status are a clear illustration of the difficulties one encounters in finding an ideal biomarker for a nutrient mineral.
Table 7.1 shows how the biomarkers for many minerals are similar. A deficiency in
sodium or potassium is generally not a nutritional concern. Indeed, for sodium, its presence in
excess is to be avoided. Sodium, potassium, magnesium and calcium, which together account
for 99% of the minerals in the body, can usually be quantified by a simple blood or urine analysis
employing automated instrumentation. Micronutrients, however, which include most of the minerals listed in the table, are present in too low a quantity to assess by a simple chemical analysis. These minerals require more sensitive instrumentation, and because copper, zinc, manganese, cobalt, etc., are present only in micromolar quantities in bodily fluids, measurable amounts would require that samples be collected over a 24-hour period.
Table 7.1. Examples of Biomarkers Used to Assess Mineral Status

Mineral       Marker                                               Physiological System
Calcium       blood calcium                                        bone turnover or resorption
Magnesium     blood and urine levels                               bone structure, energy
Sodium        blood and urine levels                               electrolyte balance
Potassium     blood and urine levels                               electrolyte balance
Phosphorus    blood and urine levels                               bone structure
Chloride      blood and urine levels                               electrolyte balance
Iron          blood hemoglobin, transferrin saturation, ferritin   oxygen transport, energy
Copper        plasma ceruloplasmin                                 energy, pigment, antioxidant
Zinc          plasma zinc, zinc enzymes                            growth and development
Manganese     manganese enzymes                                    antioxidant
Selenium      selenium enzymes                                     antioxidant
Vanadium      none                                                 uncertain
Molybdenum    molybdenum enzymes                                   nucleic acid metabolism
Chromium      none                                                 glucose tolerance
Iodine        T3, T4                                               thyroid hormone
Thus, as noted above, micromineral status is generally based on a functional analysis, which is an instantaneous measurement of an enzyme system or physiological process. As seen in the table, microminerals are cofactors for numerous enzymes, and the measurement of these enzymes is straightforward. In experimental nutrition the ebb and flow of enzyme activity is a
window into the effectiveness of the diet designed to meet a level of stasis and avoid a
nutritional deficiency. These same enzymes can be biomarkers of adequacy or inadequacy
when a nutritional deficiency is suspected, which is common in a clinical setting.
D. Functional Assays
Functional assays rely on biomarkers and base the diagnosis on mineral-dependent
responses within the organism. Their focus is on what the mineral is doing internally, not
simply the amount that is in the diet. Functional assays therefore bypass questions of digestion and absorption and, in effect, parallel bioavailability as an assessment. Often a functional assay is a simple
biochemical assay such as measuring the activity of an enzyme in a tissue or the level of a
mineral in the blood. As noted above with biomarkers, functional assays to determine mineral
status can be misleading because other minerals or extraneous factors may elicit nearly
identical biochemical responses.
E. Reversing Deficiencies
Subadequate intake of minerals can be pathogenic. Prior to the development of a
pathology, however, it is likely a biomarker could be activated. The stage is then set to
determine if a mineral suspected of being in limited amounts in the diet is the cause of such
changes. One way to implicate a single mineral is to determine whether the deleterious signs can be reversed when the mineral is added back to the diet, thereby establishing the mineral as the primary cause of the defect. This approach has worked very well in assessing the need for zinc. A severe deficiency in zinc is known to cause skin rashes and hair loss in experimental animals. Children suffering from a genetic impairment in zinc absorption, a condition known as acrodermatitis enteropathica (AE), develop a rash over most of the body (Figure 7.4). Treating the infant with zinc supplements alleviates the rash and returns the skin to a normal appearance, clearly showing that limited zinc in the diet, or limited bioavailability to the tissues, was the causative factor. In a similar way cobalt, a component of vitamin B12 and taken as the B12 complex, can reverse the symptoms of pernicious anemia, a B12-related disorder. Cobalt by itself, however, is mildly toxic. In both these examples the needed correction was brought about by supplementing the missing mineral or mineral complex, and the decisions were based on a single dietary factor.

Figure 7.4. Reversal of zinc deficiency with supplements of zinc.
IV. Assessing Adequacy and Risk of Toxicity
With the above tools in hand, it is now possible to define a range of intake of a certain mineral (or any nutrient) that meets the daily requirement. There are two concerns in the assessment: risk of deficit and risk of excess. First, a biomarker must be selected that will set the criteria for the decisions. Next, one seeks the “estimated average requirement” or EAR, which can be defined as the nutrient intake level that meets the needs of 50% of an apparently healthy population in a particular age (or gender) group of individuals. Once an EAR is set, the recommended nutrient intake (RNI) can be determined. The RNI (or RDA) is the daily intake two standard deviations above the EAR, which in a normal Gaussian distribution is the level that satisfies the needs of 97.5% of the population. This means that an RNI or RDA, when followed, puts only about 2.5% of the population at risk of a deficiency. In addition, one must always be mindful of the “tolerable upper intake level” or UL, which looks at the opposite extreme, i.e., the limit of safe consumption before crossing into the danger (toxic) zone.
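To make the EAR-to-RDA arithmetic explicit, the sketch below applies the two-standard-deviation rule with the requirement’s variation expressed as a coefficient of variation; a 10% CV is the conventional default assumption when the true standard deviation is unknown. The EAR value used is hypothetical.

```python
# A small worked example of the EAR-to-RDA step described above.
# The 10% coefficient of variation is a conventional default assumption;
# the EAR below is purely illustrative and not taken from any DRI table.

def rda_from_ear(ear, cv=0.10):
    """RDA (RNI) = EAR + 2 standard deviations, with SD expressed as cv * EAR."""
    sd = cv * ear
    return ear + 2 * sd

ear_example = 100.0                 # hypothetical EAR, mg/day
print(rda_from_ear(ear_example))    # -> 120.0 mg/day, covering ~97.5% of the group
```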
A. Risk of excess
Together, the risks of insufficiency and excess are the focus of a recommended level of intake. There is a different vocabulary of acronyms associated with the latter, however. In this instance one is concerned with adverse signs, more specifically with the highest intake at which none are observed, the “no observed adverse effect level” (NOAEL). Its companion, the LOAEL or “lowest observed adverse effect level”, refers to the point at which adverse signs are first noticed. Most of these terms are shown in the figure below.
Figure 7.5. Assessments of Risks as a Function of Dietary Intake (the figure indicates the positions of the NOAEL and LOAEL).
Fig. 7.5 above is taken from the Food and Nutrition Board. The EAR is the estimated daily intake that meets the requirement of half a population of apparently healthy individuals, whereas the RDA (RNI) is estimated to meet the needs of 97-98 percent. Thus, the RNI/RDA, when heeded, presents a risk of inadequacy of only 2 to 3 percent, which is deemed acceptable. Remember, setting an RNI/RDA depends on setting an EAR first; in essence, the 50th percentile must be defined before the 97-98th percentile can be calculated. Recall that in statistics the standard deviation is determined after the mean, because the standard deviation represents deviation from the mean.
The uniqueness of the UL is sometimes overlooked. In essence the UL defines the highest intake at which the risks of inadequacy and of excess are both essentially zero. That is comforting for an individual to know, but for a manufacturer who is adding a nutritional supplement to thousands of cartons of a product, putting in more than is needed is something to be strongly avoided.
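The chapter does not spell out how a UL is actually derived, but the general convention is to divide a NOAEL (or, when only that is available, a LOAEL) by an uncertainty factor. The sketch below illustrates that arithmetic; the numbers and the function name are invented for illustration only.

```python
# A small sketch of deriving a tolerable upper intake level (UL) from a NOAEL
# or LOAEL using an uncertainty factor (UF). The values below are hypothetical
# and do not correspond to any specific mineral.

def upper_limit(adverse_effect_level, uncertainty_factor):
    """UL = NOAEL (or LOAEL) divided by an uncertainty factor >= 1."""
    return adverse_effect_level / uncertainty_factor

# Hypothetical example: NOAEL of 40 mg/day and a combined UF of 2
print(upper_limit(adverse_effect_level=40.0, uncertainty_factor=2.0))  # -> 20.0 mg/day
```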
SUMMARY
Assessments of mineral status rely on a bevy of important tests. Some of these use
simple observations that are readily apparent in a clinical setting. Others require more
sophisticated procedures and obtain answers indirectly. The latter make use of biomarkers of
adequacy. Biomarkers operate on the premise that an ideal biomarker is some internal factor
that responds directly, specifically, and quantitatively to changes in a mineral’s homeostasis.
There are, however, caveats in their application. To be valid and reliable, a biomarker must be
both sensitive and specific for the eliciting mineral. Measurements must be repeatable and not
subject to change with time of day, age and gender of the individual and health issues afflicting
the individual. Moreover, any observed differences must correlate strictly with changes in the
mineral’s exposure. Data obtained from biomarkers have been used to formulate tables that
provide values for adequate intake and freedom from risk. Another concern is excessive intake of a mineral and the potential for toxicity. Judgments of this phase of mineral nutrition rely on the appearance of adverse signs that relate to pathogenic changes taking place and signal that a dangerous situation is developing.
Problems
1. To appreciate how terms mentioned in this chapter apply to research, read the
abstract below and answer the questions. The abstract deals with meaningful
recommendations and assessments for pregnant women and appeared in the Journal of the American Dietetic Association, 2003, titled “Comparing Nutrient Intake from Food to the
Estimated Average Requirement Shows Middle to Upper Income Pregnant Women Lack Iron
and Possibly Magnesium”.
OBJECTIVE: To determine whether nutrient intake from food alone was adequate across
trimesters for middle- to upper-income pregnant women when compared with estimated average
requirements (EAR), and to determine whether food intake exceeded the tolerable upper intake level
(UL) for any nutrient. DESIGN: Observational study in which pregnant women completed 3-day
diet records each month during their pregnancy. Records were analyzed for nutrient content, and
usual intake distributions were determined. SUBJECTS/SETTING: Subjects were low-risk women in
their first trimester of pregnancy (living in middle- to upper-income households). Ninety-four
women were recruited, and sixty-three participated. STATISTICAL ANALYSIS PERFORMED:
Nutrient intake data were adjusted to achieve normality by using a power transformation. A mixed
model method was used to assess trends in intake over time, and to estimate mean intake and
within-subjects and between-subjects variance. The usual intake distribution for each nutrient was
determined and compared with the EAR and UL. RESULTS: The probabilities of usual nutrient
intake from food being less than the EAR were highest for iron (.91), magnesium (.53), zinc (.31),
vitamin B6 (.21), selenium (.20), and vitamin C (.12). Women were not at risk of exceeding the UL
from food intake for any nutrient studied. APPLICATIONS/CONCLUSIONS: Study participants did
not consume adequate amounts of iron from food to meet the needs of pregnancy, and therefore iron
supplementation is warranted in this population. Intake of magnesium was suboptimal using the
EAR as a cut-point for adequacy.
Upon reading the abstract, you should be able to answer the following questions.
1. What nutrients were being evaluated?
2. What biomarker was being used to make the evaluation?
3. How was the test performed?
4. What was the purpose of the 3-day diet record?
5. Based on the results, what advice would be most pertinent for the subjects in the study?
6. How would you criticize these data as being relevant to the health status of pregnant women?
Write down your own answers before you read what’s below.
1. The investigators chose four elements (Fe, Mg, Zn, Se) and two vitamins, B6 (pyridoxine) and C (ascorbate), to evaluate. Their supposition was that one or more of these critical markers might suffice to answer the question of whether the diets of these pregnant women were adequate. The main concern is whether women deemed low risk on the basis of economic status were still at risk of inadequate intakes of key nutrients during pregnancy.
2. None. They used 3-day diet records in which the subjects were asked to keep track of what they ate each month during their pregnancy. Knowing these food items and their amounts allowed the investigators to determine the intake of the marker nutrients.
3. By asking subjects to list what they ate over 3-day periods each month. This was followed by using a standard set of reference tables that gave the quantity of each nutrient in the food items.
4. These were the raw data that were tabulated in order to make the evaluations and draw conclusions.
5. There was evidence that the intake of iron and magnesium was likely to be below the recommended amount. The conclusion was based on the probability that usual intake fell below the EAR, the intake at which the risk of a deficiency is 50:50. There was a 91% probability for iron and a 53% probability for magnesium (literally 9 out of 10 and 5 out of 10 for the two minerals, respectively), and there was also evidence that zinc and selenium could be of concern as well.
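As a rough illustration of where probabilities such as 0.91 and 0.53 come from, the sketch below applies the EAR cut-point idea: if usual intakes in a group are approximately normally distributed, the probability of inadequacy is the fraction of that distribution lying below the EAR. The mean, standard deviation, and EAR used here are invented; the published study used a more elaborate adjustment of the intake distribution.

```python
# A rough illustration of the EAR cut-point method: the probability of
# inadequate intake is the area of the usual-intake distribution below the EAR.
# The mean, SD, and EAR below are hypothetical, not the study's actual values.
from statistics import NormalDist

def probability_below_ear(mean_intake, sd_intake, ear):
    """P(usual intake < EAR), assuming usual intake is approximately normal."""
    return NormalDist(mu=mean_intake, sigma=sd_intake).cdf(ear)

# Hypothetical iron example: mean usual intake 15 mg/day, SD 6 mg/day, EAR 22 mg/day
print(round(probability_below_ear(mean_intake=15, sd_intake=6, ear=22), 2))  # ~0.88
```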
6. Keep in mind that a diet record refers only to the levels of individual nutrients in the food source. The data say nothing about nutrient interactions that may affect bioavailability within the body. Likewise, they do not account for individual variation between subjects, the fact that some may absorb more even though less is provided, i.e., the tendency of the system to adapt to a low intake, and they say nothing of what is happening internally. These are limitations of the study that need to be addressed before advice for higher intake is warranted.