CHF106.00
Download is available immediately
Provides an easy-to-understand guide to statistical linear models and their uses in data analysis.

This book defines a broad spectrum of statistical linear models that are useful in the analysis of data. Considerable rewriting was done to make the book more reader-friendly than the first edition. Linear Models, Second Edition is written to be self-contained for a reader with a background in basic statistics, calculus, and linear algebra. The text includes numerous applied illustrations, numerical examples, and exercises, now augmented with computer outputs in SAS and R.

Also new to this edition are:
* A greatly improved internal design and format
* A short introductory chapter to ease understanding of the order in which topics are taken up
* Discussion of additional topics, including multiple comparisons and shrinkage estimators
* Enhanced discussions of generalized inverses, the MINQUE, and Bayes and maximum likelihood estimators for estimating variance components

Furthermore, in this edition the second author adds many pedagogical elements throughout the book. These include numbered examples, end-of-example and end-of-proof symbols, selected hints and solutions to exercises available on the book's website, and references to "big data" in everyday life.

Featuring a thorough update, Linear Models, Second Edition includes:
* A new internal format, additional instructional pedagogy, selected hints and solutions to exercises, and several more real-life applications
* Many examples using SAS and R with timely data sets
* Over 400 examples and exercises throughout the book to reinforce understanding

Linear Models, Second Edition is a textbook for upper-level undergraduate and beginning graduate-level courses on linear models, and a reference for statisticians, engineers, and scientists who use multiple regression or analysis of variance in their work.
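As a brief illustration of the kind of computer output mentioned above, the following is a minimal R sketch (not taken from the book; the data are simulated and the variable names are invented for illustration) that fits a full-rank regression model and produces the coefficient estimates, analysis-of-variance table, and confidence intervals of the sort the text discusses.

# Minimal illustrative sketch in R (simulated data, hypothetical variable names)
set.seed(1)
dat <- data.frame(x1 = rnorm(30), x2 = rnorm(30))
dat$y <- 2 + 1.5 * dat$x1 - 0.8 * dat$x2 + rnorm(30)

fit <- lm(y ~ x1 + x2, data = dat)  # ordinary least-squares fit
summary(fit)   # coefficient estimates, standard errors, R-squared, overall F-test
anova(fit)     # partitioning of the total sum of squares
confint(fit)   # confidence intervals for the regression parameters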
Authors
The late SHAYLE R. SEARLE, PhD, was Professor Emeritus of Biometry at Cornell University. He was the author of the first edition of Linear Models, Linear Models for Unbalanced Data, and Generalized, Linear, and Mixed Models (with Charles E. McCulloch), all from Wiley. The first edition of Linear Models appears in the Wiley Classics Library.
MARVIN H. J. GRUBER, PhD, is Professor Emeritus at Rochester Institute of Technology, School of Mathematical Sciences. Dr. Gruber has written a number of papers and has given numerous presentations at professional meetings during his tenure as a professor at RIT. His fields of interest include regression estimators and the improvement of their efficiency using shrinkage estimators. He has written and published two books on this topic. Another of his books, Matrix Algebra for Linear Models, also published by Wiley, provides good preparation for studying Linear Models. He is a member of the American Mathematical Society, the Institute of Mathematical Statistics and the American Statistical Association.
Contents
Preface xvii
Preface to First Edition xxi
About the Companion Website xxv
Introduction and Overview 1
1. Generalized Inverse Matrices 7
a. Definition and Existence of a Generalized Inverse, 8
b. An Algorithm for Obtaining a Generalized Inverse, 11
c. Obtaining Generalized Inverses Using the Singular Value Decomposition (SVD), 14
2. Solving Linear Equations, 17
a. Consistent Equations, 17
b. Obtaining Solutions, 18
c. Properties of Solutions, 20
The Penrose Inverse, 26
Other Definitions, 30
Symmetric Matrices, 32
a. Properties of a Generalized Inverse, 32
b. Two More Generalized Inverses of X'X, 35
Arbitrariness in a Generalized Inverse, 37
Other Results, 42
Exercises, 44
Distributions and Quadratic Forms 49
Introduction, 49
Symmetric Matrices, 52
Positive Definiteness, 53
Distributions, 58
a. Multivariate Density Functions, 58
b. Moments, 59
c. Linear Transformations, 60
d. Moment and Cumulant Generating Functions, 62
e. Univariate Normal, 64
f. Multivariate Normal, 64
g. Central χ², F, and t, 69
h. Non-central χ², 71
i. Non-central F, 73
j. The Non-central t Distribution, 73
a. Cumulants, 75
b. Distributions, 78
c. Independence, 80
Bilinear Forms, 87
Exercises, 89
Regression for the Full-Rank Model 95
Introduction, 95
a. The Model, 95
b. Observations, 97
c. Estimation, 98
d. The General Case of k x Variables, 100
e. Intercept and No-Intercept Models, 104
Deviations From Means, 105
Some Methods of Estimation, 109
a. Ordinary Least Squares, 109
b. Generalized Least Squares, 109
c. Maximum Likelihood, 110
d. The Best Linear Unbiased Estimator (b.l.u.e.) (Gauss-Markov Theorem), 110
e. Least-squares Theory When The Parameters are Random Variables, 112
a. Unbiasedness, 115
b. Variances, 115
c. Estimating E(y), 116
d. Residual Error Sum of Squares, 119
e. Estimating the Residual Error Variance, 120
f. Partitioning the Total Sum of Squares, 121
g. Multiple Correlation, 122
a. The Vector of Observations y is Normal, 126
b. The Least-square Estimator b is Normal, 127
c. The Least-square Estimator b and the Estimator of the Variance σ̂² are Independent, 127
d. The Distribution of SSE/σ² is a χ² Distribution, 128
e. Non-central χ²'s, 128
f. F-distributions, 129
g. Analyses of Variance, 129
h. Tests of Hypotheses, 131
i. Confidence Intervals, 133
j. More Examples, 136
k. Pure Error, 139
a. Testing Linear Hypothesis, 141
b. Estimation Under the Null Hypothesis, 143
c. Four Common Hypotheses, 145
d. Reduced Models, 148
e. Stochastic Constraints, 158
f. Exact Quadratic Constraints (Ridge Regression), 160
a. The Likelihood Ratio Test, 163
b. Type I and Type II Errors, 164
c. The Power of a Test, 165
d. Estimating Residuals, 166
Summary of Regression Calculations, 168
Exercises, 169
Introducing Linear Models: Regression on Dummy Variables 175
Regression on Allocated Codes, 175
a. Allocated Codes, 175
b. Difficulties and Criticism, 176
c. Grouped Variables, 177