Biostatistics (flashcards, studylib.net)

Analysis of Variance
A method of hypothesis testing, normally used with more than two study groups, in which the likelihood of the outcome is estimated by a ratio of the variability between groups divided by the variability within groups.
Variance Between Groups
The component of the total variability in one's data that measures the variability of group means around the grand mean. The between-groups variance measures treatment effects if present; if there is no treatment effect, the between-groups component reflects only error variance.
Variance Within Groups
The component of the total variability in one's data that measures error (i.e., natural) variability. This is measured by the variability of group measurements around the group mean.
Statistical Error
A conclusion error in hypothesis testing. This happens when one incorrectly rejects Ho or incorrectly does not reject Ho.
Sum of Squares
A component of variance that is formed by summing the squared differences of numbers around their mean.
Mean Square
Another term for variance. A mean square is formed by dividing a sum of squares term by its degrees of freedom.
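The two cards above can be illustrated with a short computation. The numbers here are made up purely for illustration:

```python
import numpy as np

# Hypothetical measurements (illustrative only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Sum of squares: summed squared deviations of the numbers around their mean.
ss = np.sum((x - x.mean()) ** 2)

# Mean square: a sum of squares divided by its degrees of freedom (n - 1 here),
# which is simply the sample variance.
df = len(x) - 1
ms = ss / df

print(ss, ms)  # 10.0 2.5
```

Note that `ms` matches `np.var(x, ddof=1)`, confirming that a mean square is just a variance.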
Factors
The systematic sources of variability in an ANOVA.
Contrast Coefficients
Positive and negative integers that are used in a sum of squares equation to test a particular pattern of means in a one-way ANOVA.
Levels of a Factor
The divisions of a factor. Each factor must have at least two levels.
Type I Error
A conclusion error in hypothesis testing when one incorrectly rejects Ho. This is analogous to a false positive.
Planned Comparisons
A method of hypothesis testing within ANOVA in which a specific pattern of group means is tested. A planned comparison is also called an a priori comparison because a global F does not need to be computed.
Post Hoc Comparisons
The hypothesis testing that occurs among group means following a statistically significant global ANOVA. Post-hoc comparisons are also called a posteriori tests because they are conducted only if a global F test reaches statistical significance.
F Ratio
Used in the analysis of variance, the F ratio is the quotient of variance between groups divided by the variance within groups.
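As a sketch of the F ratio, the between- and within-groups mean squares can be computed by hand from three small hypothetical groups:

```python
import numpy as np

# Three hypothetical groups (illustrative data only).
groups = [np.array([1.0, 2.0, 3.0]),
          np.array([2.0, 3.0, 4.0]),
          np.array([4.0, 5.0, 6.0])]

grand_mean = np.concatenate(groups).mean()
k = len(groups)                          # number of groups
n = sum(len(g) for g in groups)          # total number of observations

# Between groups: variability of group means around the grand mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)        # df_between = k - 1

# Within groups: variability of measurements around their own group mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n - k)          # df_within = n - k

f_ratio = ms_between / ms_within
```

A large `f_ratio` suggests that the group means differ by more than natural (error) variability would predict.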
Main Effect
In a factorial design a main effect is the influence of one factor on the dependent variable while disregarding the other factors.
Interaction
In a factorial design an interaction is the combined influence of more than one factor on the dependent variable.
Test for Trend
A specific type of a priori comparison in which one tests for a particular type of relationship (e.g., linear, quadratic) between an independent and a dependent variable.
Orthogonal Components
Independent contributions to the total sum of squares in an ANOVA. In general, the term "orthogonal" means independent.
Dunnett’s T-Test
This is one of the post-hoc tests in which all groups are compared with a reference group. The results of the Dunnett Test should be reported only if the global ANOVA reaches statistical significance.
Factorial Design
An experimental design in which the influence of more than one factor is studied.
Regression line
The straight line that best summarizes the linear relationship between x and y, used to predict values of y from values of x.
Y-intercept
The point along the ordinate (the y-axis) where a line crosses when x = 0.
Slope
The rate of change of a line. The slope describes how fast the line is rising or falling, expressed as the change in y per unit change in x.
Pearson Correlation Coefficient
A parametric measure of the degree of linear association between two variables, bounded by -1 and +1.
Homoscedasticity
One of the main assumptions of regression analysis which states that the variability of the y variable is constant across values of x.
Heteroscedasticity
In regression analysis when the variability of the y variable is not constant across values of x.
Least Squares Criterion
The rule for identifying the best fitting line in a scatterplot. The criterion states that the best fitting line is the one which minimizes the squared deviations of points around the line.
Direct Relationship
A relationship in which two variables vary in the same direction: as one variable increases, the other increases, and as one decreases, the other decreases.
Spearman Correlation Coefficient
A non-parametric measure of association, bounded by + or - 1 which is based upon the same concept of association in the Pearson Correlation, but is computed by way of ranks.
Coefficient of Determination
The square of the Pearson r which describes the proportion of variance in one variable that is explained by the variance in the other.
Coefficient of Alienation
One minus the square of the Pearson r (1 - r^2), which describes the proportion of variance in one variable that is unrelated to the variance in the other.
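The coefficients of determination and alienation can be sketched numerically. The paired data below are invented for illustration:

```python
import numpy as np

# Hypothetical paired measurements.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 5.0])

r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient
determination = r ** 2        # proportion of variance in y explained by x
alienation = 1 - r ** 2       # proportion of variance left unexplained
```

The two coefficients always sum to 1: every bit of variance is either shared with the other variable or not.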
Scatterplot
A graph in which the values of one variable are plotted as a function of another.
Standard Error of Estimate
The average error (in y units) when predicting y from x.
Residuals
The difference between a measurement and the value of the measurement that is predicted by some mathematical model.
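Several of the regression cards above (slope, y-intercept, least squares criterion, residuals, standard error of estimate) come together in one small fit. The data are hypothetical:

```python
import numpy as np

# Hypothetical x/y pairs (illustrative only).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 3.0, 5.0, 6.0])

# np.polyfit with degree 1 finds the least-squares slope and y-intercept:
# the line minimizing the squared deviations of the points around it.
slope, intercept = np.polyfit(x, y, 1)

y_hat = slope * x + intercept   # values predicted by the regression line
residuals = y - y_hat           # observed minus predicted

# Standard error of estimate: average prediction error in y units,
# using n - 2 degrees of freedom (two parameters were estimated).
see = np.sqrt((residuals ** 2).sum() / (len(x) - 2))
```

A handy check: for a least-squares line with an intercept, the residuals always sum to zero.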
Inverse Relationship
A relationship in which two variables vary in opposition. As one increases, the other decreases.
Fisher Transformation
A non-linear re-expression that is used in assessing the confidence interval of a population Pearson Correlation coefficient.
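A minimal sketch of the Fisher transformation in use, with an assumed sample correlation and sample size:

```python
import numpy as np

r, n = 0.6, 30                 # hypothetical sample correlation and sample size

z = np.arctanh(r)              # Fisher z-transformation of r
se = 1 / np.sqrt(n - 3)        # standard error of z

# Build a 95% CI on the z scale, then back-transform to the r scale.
lo = np.tanh(z - 1.96 * se)
hi = np.tanh(z + 1.96 * se)
# (lo, hi) is an approximate 95% confidence interval for the population rho
```

The transformation is needed because the sampling distribution of r is skewed near the bounds of -1 and +1, while z is approximately normal.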
Linearity Assumption
One of the main assumptions of regression analysis which states that a straight line adequately represents the relationship between x and y.
Simpson’s Paradox
The change in the magnitude or direction of a relationship that is due to a confounding variable.
Observed Frequency
Frequencies that are determined from observation (i.e., one's data).
Expected Frequency
Frequencies that are calculated from an expected pattern.
Cell
The intersection of two levels in a contingency table.
Column Percent
The proportion or percentage that is formed within a vertical column from the frequencies in a chi-square test for association.
Row Percent
The proportion or percentage that is formed within a horizontal row from the frequencies in a chi-square test for association.
Odds Ratio
The ratio of the odds of an outcome in one group to the odds of the same outcome in another group, typically computed from a 2x2 contingency table.
Chi^2 Goodness of Fit Test
A form of hypothesis testing with frequencies in which the observed data are compared or "fitted" against expected values.
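The goodness-of-fit statistic itself is just the summed, scaled squared differences between observed and expected counts. A sketch with made-up counts:

```python
import numpy as np

observed = np.array([18.0, 22.0, 20.0, 40.0])  # hypothetical counts in 4 categories
expected = np.array([25.0, 25.0, 25.0, 25.0])  # expected pattern: equal frequencies

# Chi-square statistic: sum of (O - E)^2 / E over all categories.
chi2 = np.sum((observed - expected) ** 2 / expected)
```

The statistic is then compared against a chi-square distribution with (number of categories - 1) degrees of freedom, here 3.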
Chi^2 Test for Association
A chi-square goodness of fit test that is used to study the relationship between two categorical variables.
McNemar Test for Symmetry
A specific type of chi-square test that analyzes the change in a dichotomous variable.
Contingency Table
A table constructed with at least two factors that reveals the frequencies at the intersections of all levels. A contingency table is used in factorial ANOVA and with the chi-square test for association.
Phi Coefficient
A measure of association bounded between + and - 1 that is calculated from a 2x2 contingency table. The Phi coefficient is a specific instance of the Cramer V.
Cramer V
A measure of association bounded between + and - 1 that is calculated from a contingency table that is larger than 2x2.
Relative Benefit (or Risk)
The quotient of the probability of improving with treatment divided by the probability of improving without treatment.
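The odds ratio and relative benefit cards can both be illustrated from a single hypothetical 2x2 table (all counts invented for illustration):

```python
# Hypothetical 2x2 table:
#                 improved   not improved
# treatment           40          10
# no treatment        20          30
a, b, c, d = 40, 10, 20, 30

# Odds ratio via the cross-product form: (a*d) / (b*c).
odds_ratio = (a * d) / (b * c)

# Relative benefit: P(improve | treatment) / P(improve | no treatment).
relative_benefit = (a / (a + b)) / (c / (c + d))
```

Note the two measures answer related but different questions; they approximate each other only when the outcome is rare.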
Effect Size
A measure of the degree of impact of a treatment.
Type II Error
A conclusion error in hypothesis testing when one incorrectly does not reject Ho. This is analogous to a false negative.
Alpha Level
The point in a sampling distribution beyond which one rejects Ho. Alpha is the probability of a Type I error.
Correct Decision
In hypothesis testing when one makes a conclusion that is consistent with reality. In other words, one makes a correct decision to reject Ho when it is false and to not reject Ho when it is true.
Power
The probability of correctly rejecting Ho when it is false. Typically, we want power to be at least .80 (i.e., 80%).
Beta, 1-Beta
Beta is the probability of a Type II error; 1 - Beta is the power of the test (the probability of correctly rejecting Ho).