Probability theory

  • History of probability
    Probability has a dual aspect: on the one hand the likelihood of hypotheses given the evidence for them, and on the other hand the behavior of stochastic processes such as the throwing of dice or coins.
  • Stable distribution
    In probability theory, a distribution is said to be stable (or a random variable is said to be stable) if a linear combination of two independent copies of the variable has the same distribution, up to location and scale parameters (the defining equation is sketched after this list).
  • Stochastic
    A stochastic event or system is one that is unpredictable because of the influence of a random variable.
  • Gaussian process
    In probability theory and statistics, a Gaussian process is a statistical model where observations occur in a continuous domain, e.g. time or space, and where every finite collection of the associated random variables has a multivariate normal distribution (written out after this list).
  • Standard deviation
    In statistics, the standard deviation (SD, also represented by the Greek letter sigma σ or the Latin letter s) is a measure of the amount of variation or dispersion of a set of data values (formula after this list).
  • Lebesgue integration
    In mathematics, the integral of a non-negative function of a single variable can be regarded, in the simplest case, as the area between the graph of that function and the x-axis; Lebesgue integration extends this notion to a much larger class of functions (a sketch of the construction follows this list).
  • Probability axioms
    In Kolmogorov's probability theory, the probability P of some event E, denoted P(E), is defined so that P satisfies the Kolmogorov axioms, named after the Russian mathematician Andrey Kolmogorov (stated in full after this list).
  • Typical set
    In information theory, the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution (made precise after this list).
  • Uncertainty theory
    Uncertainty theory is a branch of mathematics based on normality, monotonicity, self-duality, countable subadditivity, and product measure axioms.
  • Entropy (information theory)
    In information theory, systems are modeled by a transmitter, channel, and receiver; entropy is the average amount of information produced by a stochastic source of data (formula after this list).
  • Probability density function
    In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function that describes the relative likelihood for the random variable to take on a value near a given point (characterized after this list).
  • Total variation
    In mathematics, the total variation identifies several slightly different concepts, related to the (local or global) structure of the codomain of a function or a measure (the simplest case is written out after this list).
  • Principles of the Theory of Probability
    Principles of the Theory of Probability is a 1939 book by the philosopher Ernest Nagel.
  • Benford's law
    Benford's law, also called the first-digit law, is an observation about the frequency distribution of leading digits in many real-life sets of numerical data (the predicted frequencies follow this list).
  • Inclusion–exclusion principle
    In combinatorics (combinatorial mathematics), the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets, symbolically expressed as |A ∪ B| = |A| + |B| − |A ∩ B|. The principle is more clearly seen in the case of three sets A, B and C, for which it is given by |A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |A ∩ C| − |B ∩ C| + |A ∩ B ∩ C|. Generalizing the results of these examples gives the principle of inclusion–exclusion.
  • Gauss–Markov theorem
    In statistics, the Gauss–Markov theorem, named after Carl Friedrich Gauss and Andrey Markov, states that in a linear regression model in which the errors have expectation zero, are uncorrelated, and have equal variances, the best linear unbiased estimator (BLUE) of the coefficients is given by the ordinary least squares (OLS) estimator (stated in matrix form after this list).
  • Conditional dependence
    In probability theory, conditional dependence is a relationship between two or more events that are dependent when a third event occurs.
  • Conditional independence
    In probability theory, two events R and B are conditionally independent given a third event Y precisely if the occurrence or non-occurrence of R and the occurrence or non-occurrence of B are independent events in their conditional probability distribution given Y (equivalently, the product rule after this list).
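
Stable distribution, sketched: a minimal statement of the defining property, using the conventional notation X for the random variable and X_1, X_2 for independent copies of it (notation assumed here, not given in the entry above):

    \[
    a X_1 + b X_2 \overset{d}{=} c X + d
    \qquad \text{for every } a, b > 0 \text{ and some } c > 0,\ d \in \mathbb{R},
    \]

where \overset{d}{=} denotes equality in distribution; c and d play the role of the scale and location parameters mentioned in the entry.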
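
Gaussian process, written out: the defining condition, in standard notation assumed here, is that for every finite set of index points t_1, …, t_n the vector of observations is jointly Gaussian:

    \[
    (X_{t_1}, \dots, X_{t_n}) \sim \mathcal{N}(\mu, \Sigma),
    \]

with mean vector \mu and covariance matrix \Sigma determined by the process's mean and covariance functions.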
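
Standard deviation, as a formula: for data values x_1, …, x_N with mean \mu (standard notation, assumed):

    \[
    \sigma = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2 }.
    \]

When estimating from a sample, dividing by N − 1 instead of N (Bessel's correction) gives the usual estimator s.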
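
Lebesgue integration, sketched: for a simple function s = \sum_k c_k \mathbf{1}_{A_k} taking finitely many values c_k on measurable sets A_k (standard notation, assumed), the Lebesgue integral with respect to a measure \mu is defined as

    \[
    \int s \, d\mu = \sum_k c_k \, \mu(A_k),
    \]

and the integral of a general non-negative function f is the supremum of \int s \, d\mu over all simple functions s \le f.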
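
Probability axioms, stated in full: the three Kolmogorov axioms, for a sample space \Omega and events E:

    \[
    P(E) \ge 0, \qquad P(\Omega) = 1, \qquad
    P\Bigl( \bigcup_{i=1}^{\infty} E_i \Bigr) = \sum_{i=1}^{\infty} P(E_i)
    \]

for every countable sequence of mutually exclusive events E_1, E_2, ….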
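
Typical set, made precise: "probability close to two raised to the negative power of the entropy" is usually formalized, for a source with entropy H(X), block length n, and tolerance \epsilon > 0, as the weakly typical set

    \[
    A_\epsilon^{(n)} = \bigl\{ (x_1, \dots, x_n) : 2^{-n(H(X)+\epsilon)} \le p(x_1, \dots, x_n) \le 2^{-n(H(X)-\epsilon)} \bigr\}
    \]

(the notation A_\epsilon^{(n)} is conventional, not taken from the entry above).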
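
Entropy, written out: for a discrete random variable X with probability mass function p,

    \[
    H(X) = - \sum_{x} p(x) \log_2 p(x),
    \]

measured in bits when the logarithm is taken base 2; a fair coin flip, for example, has entropy H = 1 bit.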
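
Probability density function, characterized: the density f of a continuous random variable X satisfies

    \[
    P(a \le X \le b) = \int_a^b f(x) \, dx,
    \]

so probabilities come from integrating the density over intervals; the probability of any single exact value is zero.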
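
Total variation, in its simplest case: for a real-valued function f on an interval [a, b],

    \[
    V_a^b(f) = \sup_{a = x_0 < x_1 < \dots < x_n = b} \sum_{i=1}^{n} \bigl| f(x_i) - f(x_{i-1}) \bigr|,
    \]

the supremum taken over all partitions of [a, b]; the measure-theoretic variants mentioned in the entry generalize this idea.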
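
Benford's law, as a formula: the predicted frequency of the leading digit d ∈ {1, …, 9} is

    \[
    P(d) = \log_{10}\!\left( 1 + \frac{1}{d} \right),
    \]

so a leading 1 occurs about 30.1% of the time and a leading 9 only about 4.6%.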
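
Gauss–Markov theorem, in matrix form: for the model y = X\beta + \varepsilon with E[\varepsilon] = 0 and Var(\varepsilon) = \sigma^2 I (standard notation, assumed; the identity covariance encodes "uncorrelated with equal variances"), the OLS estimator

    \[
    \hat{\beta} = (X^{\top} X)^{-1} X^{\top} y
    \]

has the smallest variance among all linear unbiased estimators of \beta.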
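
Conditional independence, as a product rule: the definition above is equivalent to

    \[
    P(R \cap B \mid Y) = P(R \mid Y) \, P(B \mid Y),
    \]

or equivalently P(R \mid B \cap Y) = P(R \mid Y): once Y is known, learning B provides no further information about R.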