Statistical theory

  • History of probability
    Probability has a dual aspect: on the one hand the likelihood of hypotheses given the evidence for them, and on the other hand the behavior of stochastic processes such as the throwing of dice or coins.
  • Likelihood-ratio test
    In statistics, a likelihood-ratio test is a statistical test used to compare the goodness of fit of two models, one of which (the null model) is a special case of the other (the alternative model); a worked sketch of the test appears after this list.
  • Interval estimation
    In statistics, interval estimation is the use of sample data to calculate an interval of possible (or probable) values of an unknown population parameter, in contrast to point estimation, which produces a single number.
  • Kullback–Leibler divergence
    In probability theory and information theory, the Kullback–Leibler divergence (also called relative entropy, information divergence, information gain, or discrimination information, the name preferred by Kullback) is a measure of how one probability distribution P differs from a second distribution Q; a numerical illustration appears after this list.
  • Likelihood function
    In statistics, a likelihood function (often simply the likelihood) is a function of the parameters of a statistical model given data.
  • Smoothing spline
    The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function.
  • Bias of an estimator
    In statistics, the bias (or bias function) of an estimator is the difference between the estimator's expected value and the true value of the parameter being estimated; a short simulation illustrating this appears after this list.
  • Benford's law
    Benford's law, also called the first-digit law, is an observation about the frequency distribution of leading digits in many real-life sets of numerical data; the expected digit frequencies are computed in a sketch after this list.
  • Entropy (information theory)
    In information theory, entropy is the average amount of information produced by a stochastic source of data; in the classical setting, a system is modeled by a transmitter, channel, and receiver, and entropy quantifies the information carried by the source's messages. A small computation appears after this list.
  • Estimation theory
    Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component.
  • Multidimensional scaling
    Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset.
  • Uncertainty
    Uncertainty is a situation that involves imperfect or unknown information.
  • Variance
    In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean; informally, it measures how far a set of (random) numbers is spread out from its mean. A small check of the two standard forms appears after this list.
  • Window function
    In signal processing, a window function (also known as an apodization function or tapering function) is a mathematical function that is zero-valued outside of some chosen interval; a Hann-window sketch appears after this list.
  • General linear model
    The general linear model is a statistical linear model that may be written as Y = XB + U, where Y is a matrix of multivariate measurements, X is a design matrix, B is a matrix of parameters to be estimated, and U is a matrix of errors; a least-squares sketch appears after this list.
  • Sufficient statistic
    In statistics, a statistic is sufficient with respect to a statistical model and its associated unknown parameter if "no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter".
  • Type I and type II errors
    In statistical hypothesis testing, a type I error is the incorrect rejection of a true null hypothesis (a "false positive"), while a type II error is incorrectly retaining a false null hypothesis (a "false negative").
  • A priori probability
    An a priori probability is a probability that is derived purely by deductive reasoning.
  • Response surface methodology
    In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables.
  • Errors and residuals
    In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "theoretical value".
  • Fisher transformation
    Given a set of N bivariate sample pairs (Xi, Yi), i = 1, ..., N, the Fisher transformation of the sample correlation coefficient r is z = arctanh(r) = (1/2) ln((1 + r)/(1 - r)); z is approximately normally distributed, which makes the transformation useful for testing hypotheses and constructing confidence intervals for the population correlation. A sketch appears after this list.
  • Optimal design
    In the design of experiments, optimal designs (or optimum designs) are a class of experimental designs that are optimal with respect to some statistical criterion.
  • Peirce's criterion
    In robust statistics, Peirce's criterion is a rule for eliminating outliers from data sets, which was devised by Benjamin Peirce.
  • Pivotal quantity
    In statistics, a pivotal quantity or pivot is a function of observations and unobservable parameters whose probability distribution does not depend on the unknown parameters (also referred to as nuisance parameters).
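
Worked examples for several of the entries above follow. Each is a minimal, illustrative sketch in Python: the data, variable names, and parameter values are assumptions made for the example, not taken from the flashcards themselves.

Likelihood-ratio test: assuming normally distributed data, the null model below fixes the mean at 0 while the alternative estimates it; Wilks' theorem gives the approximate chi-squared reference distribution for the statistic.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.3, scale=1.0, size=100)          # illustrative sample

    # Maximum log-likelihood under the alternative (mean estimated) and under
    # the null (mean fixed at 0); the scale is set to its MLE in each case.
    ll_alt = stats.norm.logpdf(x, loc=x.mean(), scale=x.std()).sum()
    ll_null = stats.norm.logpdf(x, loc=0.0, scale=np.sqrt(np.mean(x ** 2))).sum()

    # By Wilks' theorem, 2 * (ll_alt - ll_null) is approximately chi-squared,
    # with degrees of freedom equal to the number of constrained parameters (1).
    lr_stat = 2 * (ll_alt - ll_null)
    p_value = stats.chi2.sf(lr_stat, df=1)
    print(lr_stat, p_value)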
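
Kullback–Leibler divergence: for discrete distributions, D(P||Q) = sum over x of P(x) log(P(x)/Q(x)); the probabilities below are made up, and the computation also shows that the divergence is asymmetric.

    import numpy as np

    p = np.array([0.5, 0.3, 0.2])   # illustrative distribution P
    q = np.array([0.4, 0.4, 0.2])   # illustrative reference distribution Q

    # KL divergence in nats; note D(P||Q) != D(Q||P) in general.
    d_pq = np.sum(p * np.log(p / q))
    d_qp = np.sum(q * np.log(q / p))
    print(d_pq, d_qp)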
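
Bias of an estimator: bias(theta_hat) = E[theta_hat] - theta. A standard illustration (an assumption of this sketch, not part of the flashcard) is that the sample variance with divisor n underestimates the true variance, while divisor n - 1 removes the bias.

    import numpy as np

    rng = np.random.default_rng(1)
    true_var, n, reps = 4.0, 10, 200_000

    samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
    var_biased = samples.var(axis=1, ddof=0)      # divisor n
    var_unbiased = samples.var(axis=1, ddof=1)    # divisor n - 1

    # Estimated bias = average of the estimates minus the true parameter value.
    print(var_biased.mean() - true_var)           # close to -true_var / n
    print(var_unbiased.mean() - true_var)         # close to 0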
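
Benford's law: the expected frequency of leading digit d is P(d) = log10(1 + 1/d). The data set below is a made-up geometric growth series, chosen only because such series tend to follow the law.

    import numpy as np

    digits = np.arange(1, 10)
    benford = np.log10(1 + 1 / digits)                  # P(d) = log10(1 + 1/d)

    data = 1.03 ** np.arange(1, 2001)                   # illustrative data set
    leading = (data / 10 ** np.floor(np.log10(data))).astype(int)  # first significant digit
    observed = np.array([(leading == d).mean() for d in digits])

    for d, expected, seen in zip(digits, benford, observed):
        print(d, round(expected, 3), round(seen, 3))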
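
Entropy (information theory): for a discrete random variable, H(X) = -sum of p(x) log2 p(x), measured in bits. The two distributions below are illustrative; a fair coin carries one bit of entropy, a biased coin less.

    import numpy as np

    def entropy_bits(p):
        # Shannon entropy of a discrete distribution, in bits; 0 * log(0) is treated as 0.
        p = np.asarray(p, dtype=float)
        nonzero = p[p > 0]
        return -np.sum(nonzero * np.log2(nonzero))

    print(entropy_bits([0.5, 0.5]))   # fair coin: 1.0 bit
    print(entropy_bits([0.9, 0.1]))   # biased coin: about 0.47 bits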
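
Variance: Var(X) = E[(X - mu)^2], which equals E[X^2] - (E[X])^2. The discrete distribution below is made up; the check confirms that the two forms agree.

    import numpy as np

    values = np.array([1.0, 2.0, 5.0])    # illustrative outcomes
    probs = np.array([0.2, 0.5, 0.3])     # their probabilities

    mean = np.sum(probs * values)                          # E[X]
    var_def = np.sum(probs * (values - mean) ** 2)         # E[(X - mu)^2]
    var_alt = np.sum(probs * values ** 2) - mean ** 2      # E[X^2] - (E[X])^2
    print(mean, var_def, var_alt)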
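
Window function: the Hann window w[n] = 0.5 * (1 - cos(2 * pi * n / (N - 1))) is zero at the ends of the chosen interval and tapers a signal toward zero there; the sine segment below is illustrative.

    import numpy as np

    N = 64
    n = np.arange(N)
    hann = 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))   # Hann window

    signal = np.sin(2 * np.pi * 5 * n / N)               # illustrative sine segment
    tapered = signal * hann                              # tapering reduces spectral leakage
    print(hann[0], hann[N // 2], hann[-1])               # ~0 at the edges, ~1 in the middle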
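
General linear model: with Y = XB + U, the parameter matrix B can be estimated by ordinary least squares; the design matrix, coefficients, and noise level below are simulated for the example.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # design matrix with intercept
    B_true = np.array([[1.0, 0.0], [2.0, -1.0], [0.5, 3.0]])     # two response variables
    U = rng.normal(scale=0.1, size=(n, 2))                       # error matrix
    Y = X @ B_true + U                                           # Y = XB + U

    B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)                # least-squares estimate of B
    print(np.round(B_hat, 2))                                    # close to B_true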
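
Fisher transformation: z = arctanh(r) is approximately normal with standard error 1 / sqrt(N - 3), which gives a simple confidence interval for the population correlation; the correlation and sample size below are illustrative.

    import numpy as np

    r, N = 0.62, 50                          # illustrative sample correlation and sample size
    z = np.arctanh(r)                        # 0.5 * ln((1 + r) / (1 - r))
    se = 1 / np.sqrt(N - 3)                  # approximate standard error of z

    lo, hi = z - 1.96 * se, z + 1.96 * se    # 95% interval on the z scale
    print(np.tanh(lo), np.tanh(hi))          # back-transformed to the correlation scale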