Information theory

2017-07-27 · Information theory flashcards
  • Kullback–Leibler divergence
    In probability theory and information theory, the Kullback–Leibler divergence (also called discrimination information, the name preferred by Kullback; information divergence; information gain; relative entropy; KLIC; or KL divergence) is a measure of how one probability distribution P differs from a reference distribution Q.
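The definition behind this card, D(P‖Q) = Σ p·log(p/q), can be sketched in a few lines; the distributions below are illustrative examples, not from the card.

```python
import math

def kl_divergence(p, q):
    """D(P || Q) for discrete distributions given as equal-length
    probability lists; base-2 logarithms, so the result is in bits.
    Terms with p_i == 0 contribute nothing, by convention."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]   # illustrative distributions
q = [0.9, 0.1]
```

D(P‖Q) is zero exactly when the distributions agree, and it is not symmetric: D(P‖Q) ≠ D(Q‖P) in general, which is why it is a divergence rather than a distance.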
  • Fisher information
    In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X.
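As a worked formula (the standard definition, with the Bernoulli case as an illustrative example):

```latex
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}
            \log f(X;\theta)\right)^{\!2} \,\middle|\, \theta\right]
          = -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2}
            \log f(X;\theta) \,\middle|\, \theta\right]
% (the second equality holds under standard regularity conditions)

% Example: for X ~ Bernoulli(theta),
% log f(x;\theta) = x\log\theta + (1-x)\log(1-\theta), which gives
I(\theta) = \frac{1}{\theta(1-\theta)}
```

The Bernoulli case shows the intuition: information is largest near θ = 0 or θ = 1, where a single observation is most informative about the parameter.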
  • Bra–ket notation
    In quantum mechanics, bra–ket notation is a standard notation for describing quantum states.
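A small illustration, assuming the usual identification of kets with complex column vectors; the helper names and states below are illustrative, not standard API:

```python
def bra(ket):
    """The bra <psi| is the conjugate transpose of the ket |psi>;
    kets are represented here as plain lists of complex amplitudes."""
    return [amp.conjugate() for amp in ket]

def braket(phi, psi):
    """Inner product <phi|psi> of two kets."""
    return sum(b * k for b, k in zip(bra(phi), psi))

ket0 = [1 + 0j, 0 + 0j]  # computational basis state |0>
ket1 = [0 + 0j, 1 + 0j]  # computational basis state |1>
```

The basis states come out orthonormal, as expected: ⟨0|0⟩ = 1 and ⟨0|1⟩ = 0.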
  • Entropy (information theory)
    In information theory, entropy is the average amount of information produced by a stochastic source of data, i.e. the expected value of the information content of each message; in the classic model, a transmitter sends such messages over a channel to a receiver.
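The standard formula H = −Σ p·log₂(p) can be sketched directly; the example distributions are illustrative:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) in bits; outcomes
    with zero probability contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair_coin = [0.5, 0.5]   # maximal uncertainty for two outcomes
certain = [1.0]          # no uncertainty at all
```

A fair coin carries exactly one bit per toss, a certain outcome carries none, and a uniform choice among four outcomes carries two bits.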
  • Message
    A message is a discrete unit of communication intended by the source for consumption by some recipient or group of recipients.
  • Quantum computing
    Quantum computing studies theoretical computation systems (quantum computers) that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data.
  • Quantum information
    In physics and computer science, quantum information is information that is held in the state of a quantum system.
  • Quantum information science
    Quantum information science is an area of study based on the idea that information science depends on quantum effects in physics.
  • Information model
    An information model in software engineering is a representation of concepts and the relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse.
  • Kolmogorov complexity
    In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of the shortest computer program (in a predetermined programming language) that produces the object as output.
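Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, computable upper-bound proxy; the strings below are illustrative:

```python
import random
import zlib

def compressed_length(s):
    """Length in bytes of the zlib-compressed string: an upper-bound
    proxy for Kolmogorov complexity, which is itself uncomputable."""
    return len(zlib.compress(s.encode("utf-8"), 9))

regular = "ab" * 500   # short description: "'ab', repeated 500 times"
random.seed(0)         # pseudo-random filler of the same length
noisy = "".join(random.choice("abcdefgh") for _ in range(1000))
```

The regular string compresses to a few dozen bytes while the noisy one stays near its raw information content, mirroring the idea that complexity is the length of the shortest description.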
  • MIMO
    In radio, multiple-input and multiple-output, or MIMO (pronounced /ˈmaɪmoʊ/ or /ˈmiːmoʊ/), is a method for multiplying the capacity of a radio link using multiple transmit and receive antennas to exploit multipath propagation.
  • Principle of least privilege
    In information security, computer science, and other fields, the principle of least privilege (also known as the principle of minimal privilege or the principle of least authority) requires that in a particular abstraction layer of a computing environment, every module (such as a process, a user, or a program, depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose.
  • Type I and type II errors
    In statistical hypothesis testing, a type I error is the incorrect rejection of a true null hypothesis (a "false positive"), while a type II error is incorrectly retaining a false null hypothesis (a "false negative").
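Both error rates can be estimated by simulation; the z-test and all parameter values below are an illustrative setup, not part of the card:

```python
import random

def z_test_rejects(sample, mu0, sigma, z_crit=1.96):
    """Two-sided z-test of H0: mean == mu0 at the 5% level."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

random.seed(0)

# Type I (false positive): H0 is true, yet we reject; the long-run
# rejection rate should sit near the 5% significance level.
null_runs = [[random.gauss(0.0, 1.0) for _ in range(30)] for _ in range(2000)]
type_i = sum(z_test_rejects(s, 0.0, 1.0) for s in null_runs) / 2000

# Type II (false negative): H0 is false (the true mean is 0.5),
# yet we fail to reject it.
alt_runs = [[random.gauss(0.5, 1.0) for _ in range(30)] for _ in range(2000)]
type_ii = sum(not z_test_rejects(s, 0.0, 1.0) for s in alt_runs) / 2000
```

The two rates trade off: raising the rejection threshold lowers the type I rate but raises the type II rate for any fixed sample size.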
  • Π-calculus
    In theoretical computer science, the π-calculus (or pi-calculus) is a process calculus.
  • Spatiotemporal pattern
    Spatiotemporal patterns are patterns that occur in a wide range of natural phenomena and are characterized by both a spatial and a temporal patterning.
  • Random number generation
    A random-number generator (RNG) is a computational or physical device designed to generate a sequence of numbers or symbols that cannot be reasonably predicted better than by random chance.
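The computational/physical split shows up directly in Python's standard library: `random` is a seedable pseudorandom generator, while `secrets` draws on the operating system's entropy source. The values below are illustrative:

```python
import random
import secrets

# Pseudorandom: fully determined by the seed, so the same seed
# reproduces the same sequence -- predictable by design.
rng = random.Random(42)
seq_a = [rng.randrange(100) for _ in range(5)]
rng = random.Random(42)
seq_b = [rng.randrange(100) for _ in range(5)]

# Cryptographic: sourced from OS entropy, intended to be
# infeasible to predict (e.g. for tokens and keys).
token = secrets.token_hex(16)   # 16 random bytes as 32 hex characters
```

Reproducibility is a feature for simulation and testing but a fatal flaw for security, which is why the two generators live in separate modules.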
  • Ascendency
    Ascendency is a quantitative attribute of an ecosystem, defined as a function of the ecosystem's trophic network.
  • Name collision
    The term "name collision" refers to the problem that occurs in computer programs when the same identifier is used for different things in two separate areas that are joined, merged, or otherwise go from occupying separate namespaces to sharing one.
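A minimal sketch of the idea, using two made-up module namespaces represented as dictionaries:

```python
# Two hypothetical modules each define a name `parse` for different jobs.
module_a = {"parse": lambda s: s.split(","), "VERSION": "1.0"}
module_b = {"parse": lambda s: s.split(";"), "AUTHOR": "example"}

# Merging the namespaces makes the names collide; the later binding
# silently shadows the earlier one.
merged = {**module_a, **module_b}
result = merged["parse"]("a,b;c")   # module_b's parser wins
```

The same hazard appears with wildcard imports: `from a import *` followed by `from b import *` lets the second module silently replace any names the first one also defined.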
  • Uncertainty coefficient
    In statistics, the uncertainty coefficient, also called proficiency, entropy coefficient or Theil's U, is a measure of nominal association.
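One common form, U(X|Y) = I(X;Y) / H(X), can be computed from a joint probability table; the function below is a sketch of that definition, and the tables used to exercise it are illustrative:

```python
import math

def entropy(probs):
    """H = -sum(p * log2(p)) in bits, skipping zero-probability cells."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def uncertainty_coefficient(joint):
    """Theil's U(X|Y) = I(X;Y) / H(X) for a joint table joint[x][y].
    Ranges from 0 (Y says nothing about X) to 1 (Y determines X)."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    h_joint = entropy([p for row in joint for p in row])
    mutual_info = entropy(px) + entropy(py) - h_joint
    return mutual_info / entropy(px)
```

Normalizing mutual information by H(X) is what makes U a measure of nominal association on a fixed 0-to-1 scale.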
  • Typical set
    In information theory, the typical set is a set of sequences whose probability is close to 2^(-nH), where n is the length of the sequence and H is the entropy of the source distribution.
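The idea rests on the asymptotic equipartition property: for an i.i.d. source, the per-symbol quantity -(1/n)·log₂ P(x₁…xₙ) concentrates around the entropy H, so long sequences have probability near 2^(-nH). A quick empirical check, with an illustrative two-symbol source:

```python
import math
import random

probs = {"a": 0.7, "b": 0.3}   # illustrative i.i.d. source
entropy = -sum(p * math.log2(p) for p in probs.values())

random.seed(1)
n = 100_000
seq = random.choices(list(probs), weights=list(probs.values()), k=n)

# Per-symbol log-probability of the observed sequence; by the AEP
# this should be close to the source entropy for large n.
empirical_rate = -sum(math.log2(probs[s]) for s in seq) / n
```

Almost every long sequence the source actually emits lands in the typical set, which is why typicality underpins source-coding arguments.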