Statistical theory

  • Likelihood-ratio test
    In statistics, a likelihood-ratio test is a statistical test used to compare the goodness of fit of two models, one of which (the null model) is a special case of the other (the alternative model); a worked sketch follows this list.
  • Type I and type II errors
    In statistical hypothesis testing, a type I error is the incorrect rejection of a true null hypothesis (a "false positive"), while a type II error is incorrectly retaining a false null hypothesis (a "false negative").
  • Bias of an estimator
    In statistics, the bias (or bias function) of an estimator is the difference between the estimator's expected value and the true value of the parameter being estimated; a simulation sketch follows this list.
  • History of probability
    Probability has a dual aspect: on the one hand the likelihood of hypotheses given the evidence for them, and on the other hand the behavior of stochastic processes such as the throwing of dice or coins.
  • Uncertainty
    Uncertainty is a situation that involves imperfect or unknown information.
  • Estimation theory
    Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component.
  • Interval estimation
    In statistics, interval estimation is the use of sample data to calculate an interval of possible (or probable) values of an unknown population parameter, in contrast to point estimation, which is a single number.
  • Multidimensional scaling
    Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset.
  • Kullback–Leibler divergence
    In probability theory and information theory, the Kullback–Leibler divergence (also called relative entropy, information divergence, information gain, discrimination information, KLIC, or simply KL divergence) is a measure of how one probability distribution P differs from a second distribution Q; a numerical sketch follows this list.
  • A priori probability
    An a priori probability is a probability that is derived purely by deductive reasoning.
  • Pivotal quantity
    In statistics, a pivotal quantity or pivot is a function of observations and unobservable parameters whose probability distribution does not depend on the unknown parameters (also referred to as nuisance parameters).
  • General linear model
    The general linear model is a statistical linear model of the form Y = XB + U, where Y holds multivariate measurements, X is a design matrix, B contains the parameters to be estimated, and U the errors; it generalizes multiple linear regression to more than one dependent variable.
  • Peirce's criterion
    In robust statistics, Peirce's criterion is a rule for eliminating outliers from data sets, which was devised by Benjamin Peirce.
  • Smoothing spline
    The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function.
  • Sufficient statistic
    In statistics, a statistic is sufficient with respect to a statistical model and its associated unknown parameter if "no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter".
  • Fisher transformation
    Given a set of N bivariate sample pairs (Xi, Yi), i = 1, …, N, with sample correlation coefficient r, the Fisher transformation is z = arctanh(r) = ½·ln((1 + r)/(1 − r)); z is approximately normally distributed, which makes it useful for tests and confidence intervals for the population correlation (see the sketch after this list).
  • Mathematical statistics
    Mathematical statistics is the application of mathematics to statistics, which was originally conceived as the science of the state — the collection and analysis of facts about a country: its economy, land, military, population, and so forth.
  • Variance
    In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its mean; informally, it measures how far a set of (random) numbers is spread out from its mean (a direct computation from the definition follows this list).
  • Entropy (information theory)
    In information theory, entropy is the average amount of information contained in each message received from a source; it quantifies the uncertainty of the source's possible outcomes (a short sketch follows this list).
  • Likelihood function
    In statistics, a likelihood function (often simply the likelihood) is a function of the parameters of a statistical model given data.
  • Window function
    In signal processing, a window function (also known as an apodization function or tapering function) is a mathematical function that is zero-valued outside of some chosen interval; a Hann-window sketch follows this list.
  • Errors and residuals
    In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "theoretical value".
  • Optimal design
    In the design of experiments, optimal designs (or optimum designs) are a class of experimental designs that are optimal with respect to some statistical criterion.
  • Benford's law
    Benford's law, also called the first-digit law, is an observation about the frequency distribution of leading digits in many real-life sets of numerical data; the exact digit probabilities are computed in a sketch after this list.
  • Response surface methodology
    In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables.
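
Worked sketches

A minimal sketch of a likelihood-ratio test for a binomial proportion, assuming SciPy is available; the fair-coin null (p = 0.5) is nested in the free-p alternative, and Wilks' theorem supplies the approximate chi-square reference distribution. The counts are made up for illustration.

    # Likelihood-ratio test: is a coin fair?  The null model (p = 0.5) is a
    # special case of the alternative model in which p is free (its MLE).
    import math
    from scipy.stats import chi2

    heads, n = 62, 100
    p_hat = heads / n                                # MLE under the alternative

    def log_lik(p):
        # Binomial log-likelihood; the binomial coefficient cancels in the ratio.
        return heads * math.log(p) + (n - heads) * math.log(1 - p)

    lr_stat = -2 * (log_lik(0.5) - log_lik(p_hat))   # -2 log(L0 / L1)
    p_value = chi2.sf(lr_stat, df=1)                 # one restricted parameter
    print(f"LR statistic = {lr_stat:.3f}, p-value = {p_value:.4f}")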
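
A simulation sketch of estimator bias, assuming NumPy: the "divide by n" sample variance systematically underestimates the true variance (its expectation is (n - 1)/n times the true value), while the "divide by n - 1" version is unbiased.

    # Bias of an estimator, illustrated by simulation.
    import numpy as np

    rng = np.random.default_rng(0)
    true_var = 4.0
    n, reps = 5, 200_000
    samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))

    biased   = samples.var(axis=1, ddof=0).mean()    # divide by n
    unbiased = samples.var(axis=1, ddof=1).mean()    # divide by n - 1
    print(f"true {true_var}, E[biased] ~ {biased:.3f}, E[unbiased] ~ {unbiased:.3f}")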
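
A numerical sketch of the Kullback–Leibler divergence between two small, made-up discrete distributions, assuming NumPy; it also shows that the divergence is not symmetric.

    # D(P || Q) = sum_i p_i * log(p_i / q_i), in nats; >= 0, zero only if P = Q.
    import numpy as np

    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.4, 0.4, 0.2])

    def kl(p, q):
        return float(np.sum(p * np.log(p / q)))

    print(f"D(P||Q) = {kl(p, q):.4f} nats, D(Q||P) = {kl(q, p):.4f} nats")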
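
A sketch of the Fisher transformation used to build an approximate confidence interval for a population correlation; the values of r and N are hypothetical.

    # z = arctanh(r) is approximately normal with standard error 1/sqrt(N - 3).
    import numpy as np

    r, N = 0.63, 50
    z = np.arctanh(r)                        # 0.5 * ln((1 + r) / (1 - r))
    se = 1.0 / np.sqrt(N - 3)
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)   # back-transform
    print(f"z = {z:.3f}, approx. 95% CI for rho: ({lo:.3f}, {hi:.3f})")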
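
A direct computation of variance from its definition, the expectation of the squared deviation from the mean, for a small made-up discrete distribution.

    # Var(X) = E[(X - mu)^2] for a discrete random variable.
    import numpy as np

    values = np.array([1.0, 2.0, 3.0, 4.0])
    probs  = np.array([0.1, 0.2, 0.3, 0.4])

    mean = float(np.sum(probs * values))
    var  = float(np.sum(probs * (values - mean) ** 2))
    print(f"mean = {mean:.2f}, variance = {var:.2f}")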
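
A sketch of Shannon entropy, in bits, for a few simple distributions: maximal for a uniform distribution and zero when there is no uncertainty.

    # H(X) = -sum_i p_i * log2(p_i); terms with p_i = 0 contribute nothing.
    import numpy as np

    def entropy_bits(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    print(entropy_bits([0.5, 0.5]))          # 1.0 bit (fair coin)
    print(entropy_bits([0.9, 0.1]))          # ~0.469 bits (biased coin)
    print(entropy_bits([1.0]))               # 0.0 bits (certain outcome)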
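
A sketch of one common window function, the Hann window, which is zero-valued outside the chosen interval and tapers a finite signal segment toward zero at both ends.

    # Hann window: w[n] = 0.5 * (1 - cos(2*pi*n / (N - 1))), n = 0 .. N-1.
    import numpy as np

    N = 8
    n = np.arange(N)
    hann = 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))
    print(np.round(hann, 3))                 # rises from 0 to 1 and back to 0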
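
A short check of the first-digit probabilities predicted by Benford's law, log10(1 + 1/d): the digit 1 leads about 30.1% of the time, the digit 9 only about 4.6%.

    # Benford's law: P(leading digit = d) = log10(1 + 1/d).
    import math

    for d in range(1, 10):
        print(d, f"{math.log10(1 + 1/d):.3f}")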