Regression analysis

  • Likelihood function
    In statistics, a likelihood function (often simply the likelihood) is a function of the parameters of a statistical model, given the observed data (a numerical sketch appears after this list).
  • Receiver operating characteristic
    In statistics, a receiver operating characteristic (ROC), or ROC curve, is a graphical plot that illustrates the performance of a binary classifier system as its discrimination threshold is varied (the computation is sketched after this list).
  • Smoothing spline
    The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function; see Spline (mathematics) for broader coverage, and a scipy-based sketch after this list.
  • Logistic regression
    In statistics, logistic regression (also called logit regression or the logit model) is a regression model in which the dependent variable (DV) is categorical (a minimal fitting sketch follows the list).
  • General linear model
    The general linear model is a statistical linear model that generalizes multiple linear regression to the case of more than one dependent variable.
  • Gauss–Markov theorem
    In statistics, the Gauss–Markov theorem, named after Carl Friedrich Gauss and Andrey Markov, states that in a linear regression model whose errors have expectation zero, are uncorrelated, and have equal variances, the best linear unbiased estimator (BLUE) of the coefficients is the ordinary least squares (OLS) estimator.
  • Coefficient of determination
    In statistics, the coefficient of determination, denoted R² or r² and pronounced "R squared", is a number that indicates the proportion of the variance in the dependent variable that is predictable from the independent variable(s) (computed in a short example after this list).
  • Mixed model
    A mixed model is a statistical model containing both fixed effects and random effects.
  • Structural equation modeling
    Structural equation modeling (SEM) refers to a diverse set of mathematical models, computer algorithms, and statistical methods that fit networks of constructs to data.
  • Generalized estimating equation
    In statistics, a generalized estimating equation (GEE) is used to estimate the parameters of a generalized linear model with a possible unknown correlation between outcomes.
  • Linear model
    In statistics, the term linear model is used in different ways according to the context.
  • Vector generalized linear model
    In statistics, the class of vector generalized linear models (VGLMs) was proposed to enlarge the scope of models catered for by generalized linear models (GLMs).
  • Multivariate adaptive regression splines
    In statistics, multivariate adaptive regression splines (MARS) is a form of regression analysis introduced by Jerome H. Friedman in 1991.
  • Errors and residuals
    In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "theoretical value" (a worked contrast appears after this list).
  • Partial least squares regression
    Partial least squares regression (PLS regression) is a statistical method that bears some relation to principal components regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting both the predicted and the observable variables into a new space.
  • Multinomial logistic regression
    In statistics, multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e., problems with more than two possible discrete outcomes (a softmax sketch follows the list).
  • Curve fitting
    Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints (illustrated with scipy after this list).
  • Optimal design
    In the design of experiments, optimal designs (or optimum designs) are a class of experimental designs that are optimal with respect to some statistical criterion.
  • Standardized coefficient
    In statistics, standardized coefficients or beta coefficients are the estimates resulting from a regression analysis that have been standardized so that the variances of dependent and independent variables are 1.
  • Linear least squares (mathematics)
    In statistics and mathematics, linear least squares is an approach to fitting a mathematical or statistical model to data in cases where the idealized value provided by the model for any data point is expressed linearly in terms of the unknown parameters of the model.
  • Partition of sums of squares
    The partition of sums of squares is a concept that permeates much of inferential statistics and descriptive statistics.
  • Regression validation
    In statistics, regression validation is the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained from regression analysis, are acceptable as descriptions of the data.
  • Component analysis (statistics)
    Component analysis is the analysis of two or more independent variables that together comprise a treatment modality.
  • Ordinary least squares
    In statistics, ordinary least squares (OLS), or linear least squares, is a method for estimating the unknown parameters in a linear regression model. It minimizes the sum of the squared differences between the observed responses in the dataset and those predicted by a linear function of the explanatory variables; visually, this corresponds to the squared vertical distances between each data point and the regression line, and the smaller these differences, the better the model fits the data (the closed-form estimator is sketched after this list).
  • Prediction interval
    In statistical inference, specifically predictive inference, a prediction interval is an estimate of an interval in which future observations will fall, with a certain probability, given what has already been observed (a normal-theory example follows the list).
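
Worked sketches

The short Python sketches below illustrate several of the entries above. All data, parameter values, and variable names are hypothetical, chosen only to make each idea concrete; they are minimal illustrations under stated assumptions, not reference implementations.

Likelihood function: a Bernoulli model, where the likelihood is a function of the success probability p given fixed observed flips.

    import numpy as np

    # Bernoulli likelihood: L(p | x) = p^k * (1 - p)^(n - k) for k successes in n trials.
    x = np.array([1, 1, 0, 1, 1, 1, 0, 1, 0, 1])  # hypothetical coin flips
    k, n = x.sum(), x.size

    p_grid = np.linspace(0.01, 0.99, 99)          # candidate parameter values
    log_lik = k * np.log(p_grid) + (n - k) * np.log(1 - p_grid)

    p_hat = p_grid[np.argmax(log_lik)]            # maximum-likelihood estimate
    print(f"MLE of p is about {p_hat:.2f}")       # close to k/n = 0.70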
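
Receiver operating characteristic: sweeping the discrimination threshold over hypothetical classifier scores and recording the (FPR, TPR) pairs that trace the ROC curve.

    import numpy as np

    # Hypothetical scores from a binary classifier and the true labels.
    scores = np.array([0.95, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])
    labels = np.array([1, 1, 0, 1, 0, 1, 0, 0])

    P, N = labels.sum(), (1 - labels).sum()
    for t in np.sort(np.unique(scores))[::-1]:    # sweep the threshold downward
        pred = scores >= t
        tpr = (pred & (labels == 1)).sum() / P    # true positive rate
        fpr = (pred & (labels == 0)).sum() / N    # false positive rate
        print(f"threshold={t:.2f}  FPR={fpr:.2f}  TPR={tpr:.2f}")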
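
Smoothing spline: a cubic smoothing spline fit to noisy observations with scipy; the smoothing factor s here is an assumed value trading smoothness against fidelity.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = np.sin(x) + rng.normal(scale=0.2, size=x.size)  # noisy observations

    # s controls the smoothing/fidelity trade-off; s=0 would interpolate exactly.
    spline = UnivariateSpline(x, y, k=3, s=2.0)
    y_smooth = spline(np.linspace(0, 10, 200))           # smooth curve estimate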
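
Logistic regression: a minimal numpy fit by gradient ascent on the Bernoulli log-likelihood; the data-generating coefficients are hypothetical.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Simulated data: intercept plus one feature, binary outcome.
    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(100), rng.normal(size=100)])
    true_beta = np.array([-0.5, 2.0])
    y = (rng.random(100) < sigmoid(X @ true_beta)).astype(float)

    # Gradient of the log-likelihood of the logit model is X'(y - p).
    beta = np.zeros(2)
    for _ in range(2000):
        grad = X.T @ (y - sigmoid(X @ beta))
        beta += 0.1 * grad / len(y)
    print(beta)   # roughly recovers true_beta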
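
Coefficient of determination: R² computed as one minus the ratio of residual to total sum of squares, for made-up observations and predictions.

    import numpy as np

    y = np.array([3.0, 5.0, 7.0, 9.0])       # observed responses (hypothetical)
    y_hat = np.array([2.8, 5.3, 6.9, 9.1])   # model predictions (hypothetical)

    ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
    r2 = 1 - ss_res / ss_tot
    print(f"R^2 = {r2:.3f}")                 # proportion of variance explained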
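
Errors and residuals: with an assumed known population mean, errors deviate from that true mean while residuals deviate from the sample mean and sum exactly to zero.

    import numpy as np

    mu = 170.0                                 # assumed (usually unknown) population mean
    sample = np.array([168.0, 172.0, 175.0, 165.0, 171.0])

    errors = sample - mu                       # deviations from the true mean
    residuals = sample - sample.mean()         # deviations from the sample mean

    print(errors.sum())      # generally nonzero
    print(residuals.sum())   # zero by construction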
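
Multinomial logistic regression: the softmax function that converts per-class linear scores into class probabilities; the scores here are hypothetical.

    import numpy as np

    def softmax(z):
        # Subtract the max for numerical stability before exponentiating.
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    scores = np.array([2.0, 1.0, 0.1])   # linear scores for 3 classes
    print(softmax(scores))               # class probabilities summing to 1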
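
Curve fitting: least-squares fitting of an assumed exponential-decay form with scipy.optimize.curve_fit.

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b, c):
        # Hypothetical parametric form: a * exp(-b * x) + c
        return a * np.exp(-b * x) + c

    rng = np.random.default_rng(2)
    xdata = np.linspace(0, 4, 50)
    ydata = model(xdata, 2.5, 1.3, 0.5) + rng.normal(scale=0.1, size=xdata.size)

    popt, pcov = curve_fit(model, xdata, ydata)  # least-squares parameter estimates
    print(popt)                                  # close to (2.5, 1.3, 0.5)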
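
Ordinary least squares: the closed-form estimator from the normal equations, solved on simulated data without forming an explicit matrix inverse.

    import numpy as np

    rng = np.random.default_rng(3)
    X = np.column_stack([np.ones(30), rng.normal(size=30)])  # design matrix with intercept
    y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=30)

    # Normal equations: beta_hat = (X'X)^(-1) X'y
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    print(beta_hat)                  # near (1.0, 2.0)

    residuals = y - X @ beta_hat
    print((residuals ** 2).sum())    # the minimized sum of squared differences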
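
Prediction interval: a 95% normal-theory interval for one future observation, using the t distribution; the measurements are hypothetical.

    import numpy as np
    from scipy import stats

    sample = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.3])
    n = sample.size
    mean, sd = sample.mean(), sample.std(ddof=1)

    # 95% prediction interval for one future observation, assuming normality:
    # mean +/- t_{n-1, 0.975} * sd * sqrt(1 + 1/n)
    t = stats.t.ppf(0.975, df=n - 1)
    half_width = t * sd * np.sqrt(1 + 1 / n)
    print(mean - half_width, mean + half_width)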