The Mathematical Theory of Information

By Thanaporn Sundaravej
The Mathematical Theory of Information by Jan Kåhre
Chapter 1: About Information
The Law of Diminishing Information is the primary focus of The Mathematical
Theory of Information. The author builds his new approach on Shannon's
communication theory, which uses the rules of probability to assess the amount of
information transferred over a telecommunication medium. The theory was later
renamed information theory, or Shannon's classic information theory.
In information theory, transferred information can be expressed as inf(B@A),
the information that B gives about A. Information can be measured in terms of both
quantity and quality: the bit, counting alternatives in a set, is the unit of quantitative
measurement, while the hit is the unit of qualitative measurement. The number of bits
is computed with a logarithm; identifying one message out of N equally likely
alternatives takes log2(N) bits, so a larger number of bits corresponds to more possible
outcomes and hence to higher entropy. In other words, entropy is the uncertainty of the
outcome, or the disorder of a population in a closed system. What Shannon wanted to
know from the theory was how much information can be transmitted over a
communication channel. Unlike the author of this book, Shannon paid little attention to
the quality of information. The author studies the reliability of information, which is
measured in hits. He argues that the reliability rel(B@A) is the probability that the sent
message A is correct when the message B is received.
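
As a concrete illustration of the quantitative measure, the short Python sketch below
(an added example, not taken from the book) computes the number of bits needed to
single out one of N equally likely alternatives, and the Shannon entropy of a general
distribution.

    import math

    def bits_for_alternatives(n: int) -> float:
        """Bits needed to identify one of n equally likely alternatives."""
        return math.log2(n)

    def entropy(probabilities):
        """Shannon entropy in bits: the uncertainty of the outcome."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(bits_for_alternatives(8))    # 3.0 bits for 8 alternatives
    print(entropy([0.5, 0.25, 0.25]))  # 1.5 bits for an uneven distribution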
In the Law of Diminishing Information, the ideal receiver, a recipient who receives
perfectly correct information from a source or sender, does not exist. The principle of
the law is that the information that C (the receiver) gives about A (the source) cannot be
greater than the information that B (the intermediary) gives about A. This can be
expressed as inf(C@A) <= inf(B@A) whenever A, B, and C form a chain A -> B -> C. One
cause of the decrease in transferred information is noise added to the signal.
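
The inequality can be checked numerically. The sketch below (an assumed illustration,
not from the book) sends a uniformly random bit A through two binary symmetric
channels in a chain A -> B -> C and compares inf(B@A) with inf(C@A); composing the
two channels gives the effective error rate e1(1 - e2) + e2(1 - e1), so the second noisy
stage can only lose information about A.

    import math

    def binary_entropy(p: float) -> float:
        """Entropy in bits of a coin with bias p."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def info_through_bsc(error_rate: float) -> float:
        """inf(B@A) for a uniform bit A sent through a binary symmetric channel."""
        return 1.0 - binary_entropy(error_rate)

    e1, e2 = 0.1, 0.1
    e_chain = e1 * (1 - e2) + e2 * (1 - e1)   # error rate of the composed chain
    print(info_through_bsc(e1))       # inf(B@A) ~ 0.531 bits
    print(info_through_bsc(e_chain))  # inf(C@A) ~ 0.320 bits, smaller, as the law predicts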
Chapter 2: The Law of Diminishing Information
A discrete message is the first case used to establish the Law of Diminishing
Information, because a discrete message can be measured concisely and the result
checked exactly. In transferring information, exactly one message is selected from the
alternative elements of an exhaustive set. An example of an exhaustive set is the roll of
a die: the outcome of a toss is one, and only one, of the six faces.
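
The die makes the point concrete. In the tiny sketch below (an added example), the six
faces form the exhaustive set: the probabilities of the mutually exclusive alternatives
sum to 1, and identifying the outcome of one roll takes log2(6), roughly 2.58, bits.

    import math

    faces = [1, 2, 3, 4, 5, 6]
    probabilities = {face: 1 / 6 for face in faces}

    assert abs(sum(probabilities.values()) - 1.0) < 1e-12  # exhaustive, exclusive set
    print(math.log2(len(faces)))  # ~2.585 bits to pin down one roll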
Several mathematical techniques are involved in measuring a discrete message.
Probability is used most: every probability ranges from 0 to 1; the joint probability
P(ai bj) is the probability that ai and bj occur together; and the conditional probability
P(bj | ai) is a cause-and-effect probability, that is, the probability that bj occurs given
that ai has occurred. Matrix algebra is also utilized, because joint and conditional
probabilities naturally take the form of lists and tables. Finally, rules of chain
conditions are employed in the law for discrete finite systems.
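
The matrix view can be made concrete with a small Python sketch (the numbers are
invented for illustration): the joint probabilities P(ai bj) form a table whose rows are
indexed by ai and columns by bj, and the conditional P(bj | ai) = P(ai bj) / P(ai) falls out
of simple row operations.

    import numpy as np

    # Rows index the ai alternatives, columns the bj alternatives.
    joint = np.array([[0.30, 0.10],
                      [0.15, 0.45]])
    assert abs(joint.sum() - 1.0) < 1e-12  # the alternatives form an exhaustive set

    p_a = joint.sum(axis=1)             # marginal P(ai): row sums
    conditional = joint / p_a[:, None]  # P(bj | ai): normalize each row
    print(conditional)                  # each row sums to 1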
In conclusion, the Law of Diminishing Information can be expressed as inf(C@A)
<= inf(B@A) whenever P(ai bj ck) = P(ai | bj) P(ck | bj) P(bj). From this rule it follows
that distortion cannot increase information; if it appears to, the chain condition has
been violated.
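
The chain condition can be tested numerically on any triple joint distribution. The
sketch below (an assumed example with invented numbers) builds P(ai bj ck) from a
known chain A -> B -> C and then verifies the factorization from the marginals alone,
using the equivalent form P(ai bj ck) = P(ai bj) P(bj ck) / P(bj).

    import itertools

    # joint[a][b][c] = P(ai bj ck), built from a chain A -> B -> C so the
    # test is expected to pass.
    p_b = [0.5, 0.5]
    p_a_given_b = [[0.9, 0.2], [0.1, 0.8]]  # p_a_given_b[a][b] = P(ai | bj)
    p_c_given_b = [[0.7, 0.4], [0.3, 0.6]]  # p_c_given_b[c][b] = P(ck | bj)
    joint = [[[p_a_given_b[a][b] * p_c_given_b[c][b] * p_b[b]
               for c in range(2)] for b in range(2)] for a in range(2)]

    def satisfies_chain_condition(joint, tol=1e-12):
        """Check P(ai bj ck) = P(ai | bj) P(ck | bj) P(bj) via marginals."""
        rng = range(2)
        pb = [sum(joint[a][b][c] for a in rng for c in rng) for b in rng]
        pab = [[sum(joint[a][b][c] for c in rng) for b in rng] for a in rng]
        pbc = [[sum(joint[a][b][c] for a in rng) for c in rng] for b in rng]
        return all(abs(joint[a][b][c] - pab[a][b] * pbc[b][c] / pb[b]) < tol
                   for a, b, c in itertools.product(rng, repeat=3))

    print(satisfies_chain_condition(joint))  # True: B screens A off from C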