
Reviews for School Libraries Worth Their Keep: A Philosophy Plus Tricks - Carolyn C. Leopold - Paperback


The average rating for School Libraries Worth Their Keep: A Philosophy Plus Tricks - Carolyn C. Leopold - Paperback, based on 2 reviews, is 3 stars.

Review #1 was written on 2021-03-19 by Jeremy Stephenson, who gave a rating of 3 stars.
2016.11.01 – Howell DC (2013) Statistical Methods for Psychology
Preface
About the Author
01. Basic Concepts 1.1. Important Terms 1.2. Descriptive and Inferential Statistics 1.3. Measurement Scales • Nominal Scales • Ordinal Scales • Interval Scales • Ratio Scales • The Role of Measurement Scales 1.4. Using Computers 1.5. What You Should Know about this Edition
02. Describing and Exploring Data 2.1. Plotting Data • Frequency Distributions 2.2. Histograms 2.3. Fitting Smoothed Lines to Data • Fitting a Normal Curve • Kernel Density Plots 2.4. Stem-and-Leaf Displays 2.5. Describing Distributions 2.6. Notation • Notation of Variables • Summation Notation • Double Subscripts 2.7. Measures of Central Tendency • The Mode • The Median • The Mean • Relative Advantages and Disadvantages of the Mode, the Median, and the Mean • Trimmed Means 2.8. Measures of Variability • Range • Interquartile Range and Other Range Statistics • The Average Deviation • The Mean Absolute Deviation • The Variance • The Standard Deviation • Computational Formulae for the Variance and the Standard Deviation • The Influence of Extreme Values on the Variance and Standard Deviation • The Coefficient of Variation • Unbiased Estimators • The Sample Variance as an Estimator of the Population Variance 2.9. Boxplots: Graphical Representations of Dispersions and Extreme Scores 2.10. Obtaining Measures of Dispersion Using SPSS 2.11. Percentiles, Quartiles, and Deciles 2.12. The Effect of Linear Transformations on Data • Centering • Reflection as a Transformation • Standardization • Nonlinear Transformations
03. The Normal Distribution 3.1. The Normal Distribution 3.2. The Standard Normal Distribution 3.3. Using the Tables of the Standard Normal Distribution 3.4. Setting Probable Limits on an Observation 3.5. Assessing Whether Data are Normally Distributed • Q-Q Plots • The Axes for a Q-Q Plot • The Kolmogorov-Smirnov Test 3.6. Measures Related to z
04. Sampling Distributions and Hypothesis Testing 4.1. Two Simple Examples Involving Course Evaluations and Rude Motorists 4.2. Sampling Distributions 4.3. Theory of Hypothesis Testing • Preamble • The Traditional Approach to Hypothesis Testing • The First Stumbling Block 4.4. The Null Hypothesis • Statistical Conclusions 4.5. Test Statistics and Their Sampling Distributions 4.6. Making Decisions About the Null Hypothesis 4.7. Type I and Type II Errors 4.8. One- and Two-Tailed Tests 4.9. What Does it Mean to Reject the Null Hypothesis? 4.10. An Alternative View of Hypothesis Testing 4.11. Effect Size 4.12. A Final Worked Example 4.13. Back to Course Evaluations and Rude Motorists
05. Basic Concepts of Probability 5.1. Probability 5.2. Basic Terminology and Rules • Basic Laws of Probability • The Additive Rule • The Multiplicative Rule • Sampling with Replacement • Joint and Conditional Probabilities 5.3. Discrete versus Continuous Variables 5.4. Probability Distributions for Discrete Variables 5.5. Probability Distributions for Continuous Variables 5.6. Permutations and Combinations • Permutations • Combinations 5.7. Bayes' Theorem • A Second Example • A Generic Formula • Back to the Hypothesis Testing 5.8. The Binomial Distribution • Plotting Binomial Distributions • The Mean and Variance of a Binomial Distribution 5.9. Using the Binomial Distribution to Test Hypotheses • The Sign Test 5.10. The Multinomial Distribution
06. Categorical Data and Chi-Square 6.1. The Chi-Square Distribution 6.2. The Chi-Square Goodness-of-Fit Test—One-Way Classification • The Tabled Chi-Square Distribution • An Example with More Than Two Categories 6.3. Two Classification Variables: Contingency Table Analysis • Expected Frequencies for Contingency Tables • Calculation of Chi-Square • Degrees of Freedom • Evaluation of X2 • Another Example • Evaluation of X2 • 2 x 2 Tables are Special Cases • Correcting for Continuity • Fisher's Exact Test • Fisher's Exact Test versus Pearson's Chi-Square 6.4. An Additional Example—A 4 x 2 Design • Computer Analyses • Small Expected Frequencies 6.5. Chi-Square for Ordinal Data 6.6. Summary of the Assumptions of Chi-Square • The Assumption of Independence • Inclusion of Nonoccurrences 6.7. Dependent or Repeated Measures 6.8. One- and Two-Tailed Tests 6.9. Likelihood Ratio Tests 6.10. Mantel-Haenszel Tests 6.11. Effect Sizes • A Classic Example • d-family: Risks and Odds • Odds Ratios in 2 x K Tables • Odds Ratios in 2 x 2 x K Tables • r-family Measures • Phi (Φ) and Cramér's V 6.12. Measures of Agreement • Kappa (ϰ)—A Measure of Agreement 6.13. Writing Up the Results
07. Hypothesis Tests Applied to Means 7.1. Sampling Distribution of the Mean 7.2. Testing Hypotheses About Means—σ Known 7.3. Testing a Sample Mean When σ is Unknown—The One-Sample t Test • The Sampling Distribution of s2 • The t Statistic • Degrees of Freedom • Psychomotor Abilities of Low-Birthweight Infants • Things Are Changing • The Moon Illusion • Confidence Interval on µ • But What Is a Confidence Interval • Identifying Extreme Cases • A Final Example: We Aren't Done with Therapeutic Touch • Using SPSS to Run One-Sample t Tests 7.4. Hypothesis Tests Applied to Means—Two Matched Samples • Treatment of Anorexia • Difference Scores • The t Statistic • Degrees of Freedom • Confidence Intervals • The Moon Illusion Revisited • Effect Size • d-Family of Measures • More About Matched Samples • Missing Data • Using Computer Software for t Tests on Matched Samples • Writing up the Results of a Dependent t Test 7.5. Hypothesis Tests Applied to Means—Two Independent Samples • Distribution of Differences Between Means • The t Statistic • Pooling Variances • Homophobia and Sexual Arousal • Confidence Limits on µ1 – µ2 • Effect Size • Confidence Limits on Effect Sizes • Reporting Results • SPSS Analysis • A Second Worked Example • Writing up the Results 7.6. Heterogeneity of Variance: The Behrens–Fisher Problem • The Sampling Distribution of t' • Testing for Heterogeneity of Variance • The Robustness of t with Heterogeneous Variances • But Should We Test for Homogeneity of Variance? • A Caution 7.7. Hypothesis Testing Revisited
08. Power 8.1. The Basic Concept of Power 8.2. Factors Affecting the Power of a Test • A Short Review • Power as a Function of α • Power as a Function of H1 • Power as a Function of n and σ2 8.3. Calculating Power the Traditional Way • Estimating the Effect Size • Recombining the Effect Size and n 8.4. Power Calculations for the One-Sample t • Estimating Required Sample Size • Noncentrality Parameters 8.5. Power Calculations for Differences Between Two Independent Means • Equal Sample Sizes • Unequal Sample Sizes 8.6. Power Calculations for Matched-Sample t 8.7. Turning the Tables on Power 8.8. Power Considerations in More Complex Designs 8.9. The Use of G*Power to Simplify Calculations 8.10. Retrospective Power 8.11. Writing Up the Results of a Power Analysis
09. Correlation and Regression 9.1. Scatterplot 9.2. The Relationship Between Pace of Life and Heart Disease 9.3. The Relationship Between Stress and Health 9.4. The Covariance 9.5. The Pearson Product-Moment Correlation Coefficient (r) • Adjusted r 9.6. The Regression Line • Interpretations of Regression • Intercept • Slope • Standardized Regression Coefficients • Correlation and Beta • A Note of Caution 9.7. Other Ways of Fitting a Line to Data 9.8. The Accuracy of Prediction • The Standard Deviation as a Measure of Error • The Standard Error of Estimate • r2 and the Standard Error of Estimate • Errors of Prediction as a Function of r • r2 as a Measure of Predictable Variability 9.9. Assumptions Underlying Regression and Correlation 9.10. Confidence Limits on Y-hat 9.11. A Computer Example Showing the Role of Test-Taking Skills 9.12. Hypothesis Testing • Testing the Significance of r • Testing the Significance of b • Testing the Difference Between Two Independent bs • Testing the Difference Between Two Independent rs • Testing the Hypothesis that ρ Equals any Specified Value • Confidence Limits on ρ • Confidence Limits Versus Tests of Significance • Testing the Difference Between Two Nonindependent rs 9.13. One Final Example 9.14. The Role of Assumptions in Correlation and Regression 9.15. Factors that Affect the Correlation • The Effect of Range Restrictions • The Effect of Heterogeneous Subsamples 9.16. Power Calculations for Pearson's r • Additional Examples
10. Alternative Correlational Techniques 10.1. Point-Biserial Correlation and Phi: Pearson Correlations by Another Name • Point-Biserial Correlation (rpb) • Calculating rpb • The Relationship Between rpb and t • Testing the Significance of r2pb • r2pb and Effect Size • Confidence Limits on d • The Phi Coefficient (Φ) • Calculating Φ • Significance of Φ • The Relationship Between Φ and X2 • Φ2 as a Measure of the Practical Significance of X2 10.2. Biserial and Tetrachoric Correlation: Non-Pearson Correlation Coefficients 10.3. Correlation Coefficients for Ranked Data • Ranking Data • Spearman's Correlation Coefficient for Ranked Data (rs) • The Significance of rs • Kendall's Tau Coefficient (τ) • Calculating τ • Significance of τ 10.4. Analysis of Contingency Tables with Ordered Data • A Correlational Approach 10.5. Kendall's Coefficient of Concordance (W)
11. Simple Analysis of Variance 11.1. An Example 11.2. The Underlying Model • Assumptions • Homogeneity of Variance • Normality • The Null Hypothesis 11.3. The Logic of the Analysis of Variance • Variance Estimation 11.4. Calculations in the Analysis of Variance • Sum of Squares • The Data • SStotal • SStreat • SSerror • The Summary Table • Sources of Variation • Degrees of Freedom • Mean Squares • The F Statistic • Conclusions 11.5. Writing Up the Results 11.6. Computer Solutions 11.7. Unequal Sample Sizes • Effective Therapies for Anorexia 11.8. Violations of Assumptions • The Welch Procedure • BUT! 11.9. Transformations • Logarithmic Transformation • Square-Root Transformation • Reciprocal • The Arcsine Transformation • Trimmed Samples • When to Transform and How to Choose a Transformation • Resampling 11.10. Fixed versus Random Models 11.11. The Size of an Experimental Effect • Eta-Squared (η2) • Omega-Squared (ω2) • d-Family Measures of Effect Size 11.12. Power • Resampling for Power • An Example • A Little Bit of Theory • Effect Size • Calculating Power using G*Power 11.13. Computer Analyses
12. Multiple Comparisons Among Treatment Means 12.1. Error Rates • Error Rate Per Comparison (PC) • Familywise Error Rate (FW) • The Null Hypothesis and Error Rates • A Priori versus Post Hoc Comparisons • Significance of the Overall F 12.2. Multiple Comparisons in a Simple Experiment on Morphine Tolerance • Magnitude of Effect 12.3. A Priori Comparisons • Multiple t Tests • Linear Contrasts • Sum of Squares for Contrasts • The Choice of Coefficients • The Test of Significance • Orthogonal Contrasts • Orthogonal Coefficients • Bonferroni t (Dunn's Test) • Multistage Bonferroni Procedures • Trimmed Means 12.4. Confidence Intervals and Effect Sizes for Contrasts • Confidence Interval • Effect Size 12.5. Reporting Results 12.6. Post Hoc Comparisons • Fisher's Least Significant Difference (LSD) Procedure • The Studentized Range Statistic (q) 12.7. Tukey's Test • Unequal Sample Sizes and Heterogeneity of Variance • Other Range-Based Tests • Benjamini–Hochberg Test 12.8. Which Test? 12.9. Computer Solutions 12.10. Trend Analysis • Alcohol and Aggression • Unequal Intervals
13. Factorial Analysis of Variance • Notation 13.1. An Extension of the Eysenck Study • Calculations • Interpretation 13.2. Structural Models and Expected Mean Squares 13.3. Interactions 13.4. Simple Effects • Calculation of Simple Effects • Interpretation • Additivity of Simple Effects 13.5. Analysis of Variance Applied to the Effects of Smoking 13.6. Comparisons Among Means 13.7. Power Analysis for Factorial Experiments 13.8. Alternative Experimental Designs • A Crossed Experimental Design with Fixed Variables • A Crossed Experimental Design with a Random Variable • Nested Designs • Calculation for Nested Designs • Summary 13.9. Measures of Association and Effect Size • r-Family Measures • Partial Effects • d-Family Measures • Simple Effects 13.10. Reporting the Results 13.11. Unequal Sample Sizes • The Problem 13.12. Higher-Order Factorial Designs • Variables Affecting Driving Performance • Simple Effects • Simple Interaction Effects 13.13. A Computer Example
14. Repeated-Measures Designs 14.1. The Structural Model 14.2. F Ratios 14.3. The Covariance Matrix 14.4. Analysis of Variance Applied to Relaxation Therapy 14.5. Contrasts and Effect Sizes in Repeated-Measures Designs • Effect Sizes 14.6. Writing Up the Results 14.7. One Between-Subjects Variable and One Within-Subjects Variable • Partitioning the Between-Subjects Effects • Partitioning the Within-Subjects Effects • The Analysis • Assumptions • Adjusting the Degrees of Freedom • Simple Effects • Multiple Comparisons 14.8. Two Between-Subjects Variables and One Within-Subjects Variable • Simple Effects for Complex Repeated-Measures Designs 14.9. Two Within-Subjects Variables and One Between-Subjects Variable • An Analysis of Data on Conditioned Suppression 14.10. Intraclass Correlation 14.11. Other Considerations With Repeated-Measures Analyses • Sequence Effects • Unequal Group Sizes • Matched Samples and Related Problems 14.12. Mixed Models for Repeated-Measures Designs • The Data
15. Multiple Regression 15.1. Multiple Linear Regression • The Regression Equation • Two-Variable Relationships • Looking at One Predictor While Controlling for Another • The Multiple Regression Equation • Another Interpretation of Multiple Regression • A Final Way to Think of Multiple Regression • Review 15.2. Using Additional Predictors • Standardized Regression Coefficients 15.3. Standard Errors and Tests of Regression Coefficients 15.4. A Resampling Approach 15.5. Residual Variance 15.6. Distribution Assumptions 15.7. The Multiple Correlation Coefficient • Testing the Significance of R2 • Sample Sizes 15.8. Partial and Semipartial Correlation • Partial Correlation • Semipartial Correlation • Alternative Interpretation of Partial and Semipartial Correlation • Why Do We Care About Partial and Semipartial Correlations? 15.9. Suppressor Variables 15.10. Regression Diagnostics • Diagnostic Plots • Comparing Models 15.11. Constructing a Regression Equation • Selection Methods • All Subsets Regression • Backward Elimination • Stepwise Regression • Cross-Validation • Missing Observations 15.12. The "Importance" of Individual Variables 15.13. Using Approximate Regression Coefficients 15.14. Mediating and Moderating Relationships • Mediation • An Alternative Approach Using Bootstrapping • Moderating Relationships 15.15. Logistic Regression [Continued in comments due to Goodreads character limit]
Review #2 was written on 2013-01-16 by Bryan Tincher, who gave a rating of 3 stars.
My entry point into formal stats coursework was an intermediate statistics course in grad school, for which this textbook was the required reading, beginning around chapter 11 of the text. I did not find it very approachable, given my needs as a learner. However, after years of study in this area, I now find the text very simple and straightforward. If statistical language and symbols are new to you, I'd recommend something like Seltman (2014) or any of the Andy Field texts.

