Introduction to econometrics : brief edition
Title:
Introduction to econometrics : brief edition
ISBN:
9780321432513
Edition:
2nd ed.
Publication Information:
Boston : Pearson/Addison Wesley, 2008.
Physical Description:
xxvi, 379 p. : ill. ; 24 cm
General Note:
Includes bibliographical references.

Summary:
Contents
Preface xix
Part One Introduction and Review 1
Chapter 1 Economic Questions and Data 3
1.1 Economic Questions We Examine 4 -- Question 1: Does Reducing Class Size Improve Elementary School Education? 4 -- Question 2: What Are the Economic Returns to Education? 5 -- Quantitative Questions, Quantitative Answers 6
1.2 Causal Effects and Idealized Experiments 6 -- Estimation of Causal Effects 6 -- Forecasting and Causality 8
1.3 Data: Sources and Types 8 -- Experimental versus Observational Data 8 -- Cross-Sectional Data 9 -- Time Series Data 10 -- Panel Data 11
Chapter 2 Review of Probability 15
2.1 Random Variables and Probability Distributions 16 -- Probabilities, the Sample Space, and Random Variables 16 -- Probability Distribution of a Discrete Random Variable 17 -- Probability Distribution of a Continuous Random Variable 19
2.2 Expected Values, Mean, and Variance 21 -- The Expected Value of a Random Variable 21 -- The Standard Deviation and Variance 22 -- Mean and Variance of a Linear Function of a Random Variable 23 -- Other Measures of the Shape of a Distribution 24
2.3 Two Random Variables 27 -- Joint and Marginal Distributions 27 -- Conditional Distributions 28 -- Independence 32 -- Covariance and Correlation 32 -- The Mean and Variance of Sums of Random Variables 33
2.4 The Normal, Chi-Squared, Student t, and F Distributions 37 -- The Normal Distribution 37 -- The Chi-Squared Distribution 41 -- The Student t Distribution 42 -- The F Distribution 42
2.5 Random Sampling and the Distribution of the Sample Average 43 -- Random Sampling 43 -- The Sampling Distribution of the Sample Average 44
2.6 Large-Sample Approximations to Sampling Distributions 46 -- The Law of Large Numbers and Consistency 47 -- The Central Limit Theorem 50
Appendix 2.1 Derivation of Results in Key Concept 2.3 61
Chapter 3 Review of Statistics 63
3.1 Estimation of the Population Mean 64 -- Estimators and Their Properties 65 -- Properties of Ȳ 66 -- The Importance of Random Sampling 68
3.2 Hypothesis Tests Concerning the Population Mean 69 -- Null and Alternative Hypotheses 70 -- The p-Value 70 -- Calculating the p-Value When σY Is Known 72 -- The Sample Variance, Sample Standard Deviation, and Standard Error 73 -- Calculating the p-Value When σY Is Unknown 74 -- The t-Statistic 75 -- Hypothesis Testing with a Prespecified Significance Level 76 -- One-Sided Alternatives 78
3.3 Confidence Intervals for the Population Mean 79
3.4 Comparing Means from Different Populations 81 -- Hypothesis Tests for the Difference Between Two Means 81 -- Confidence Intervals for the Difference Between Two Population Means 82
3.5 Differences-of-Means Estimation of Causal Effects Using Experimental Data 83 -- The Causal Effect as a Difference of Conditional Expectations 83 -- Estimation of the Causal Effect Using Differences of Means 85
3.6 Using the t-Statistic When the Sample Size Is Small 86 -- The t-Statistic and the Student t Distribution 86 -- Use of the Student t Distribution in Practice 90
3.7 Scatterplot, the Sample Covariance, and the Sample Correlation 90 -- Scatterplots 91 -- Sample Covariance and Correlation 92
Appendix 3.1 The U.S. Current Population Survey 103
Appendix 3.2 Two Proofs That Ȳ Is the Least Squares Estimator of μY 104
Appendix 3.3 A Proof That the Sample Variance Is Consistent 105
Part Two Fundamentals of Regression Analysis 107
Chapter 4 Linear Regression with One Regressor 109
4.1 The Linear Regression Model 110
4.2 Estimating the Coefficients of the Linear Regression Model 114 -- The Ordinary Least Squares Estimator 116 -- OLS Estimates of the Relationship Between Test Scores and the Student/Teacher Ratio 118 -- Why Use the OLS Estimator? 119
4.3 Measures of Fit 121 -- The R2 121 -- The Standard Error of the Regression 122 -- Application to the Test Score Data 123
4.4 The Least Squares Assumptions 124 -- Assumption 1: The Conditional Distribution of ui Given Xi Has a Mean of Zero 124 -- Assumption 2: (Xi, Yi), i = 1, . . . , n Are Independently and Identically Distributed 126 -- Assumption 3: Large Outliers Are Unlikely 127 -- Use of the Least Squares Assumptions 128
4.5 The Sampling Distribution of the OLS Estimators 129 -- The Sampling Distribution of the OLS Estimators 130
4.6 Conclusion 133
Appendix 4.1 The California Test Score Data Set 141
Appendix 4.2 Derivation of the OLS Estimators 141
Appendix 4.3 Sampling Distribution of the OLS Estimator 142
Chapter 5 Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals 146
5.1 Testing Hypotheses About One of the Regression Coefficients 147 -- Two-Sided Hypotheses Concerning β1 147 -- One-Sided Hypotheses Concerning β1 151 -- Testing Hypotheses About the Intercept β0 153
5.2 Confidence Intervals for a Regression Coefficient 153
5.3 Regression When X Is a Binary Variable 156 -- Interpretation of the Regression Coefficients 156
5.4 Heteroskedasticity and Homoskedasticity 158 -- What Are Heteroskedasticity and Homoskedasticity? 158 -- Mathematical Implications of Homoskedasticity 161 -- What Does This Mean in Practice? 162
5.5 The Theoretical Foundations of Ordinary Least Squares 164 -- Linear Conditionally Unbiased Estimators and the Gauss-Markov Theorem 165 -- Regression Estimators Other Than OLS 166
5.6 Using the t-Statistic in Regression When the Sample Size Is Small 167 -- The t-Statistic and the Student t Distribution 168 -- Use of the Student t Distribution in Practice 168
5.7 Conclusion 169
Appendix 5.1 Formulas for OLS Standard Errors 178
Appendix 5.2 The Gauss-Markov Conditions and a Proof of the Gauss-Markov Theorem 180
Chapter 6 Linear Regression with Multiple Regressors 184
6.1 Omitted Variable Bias 184 -- Definition of Omitted Variable Bias 185 -- A Formula for Omitted Variable Bias 187 -- Addressing Omitted Variable Bias by Dividing the Data into Groups 189
6.2 The Multiple Regression Model 191 -- The Population Regression Line 191 -- The Population Multiple Regression Model 192
6.3 The OLS Estimator in Multiple Regression 194 -- The OLS Estimator 195 -- Application to Test Scores and the Student-Teacher Ratio 196
6.4 Measures of Fit in Multiple Regression 198 -- The Standard Error of the Regression (SER) 198 -- The R2 198 -- The "Adjusted R2" 199 -- Application to Test Scores 200
6.5 The Least Squares Assumptions in Multiple Regression 200 -- Assumption 1: The Conditional Distribution of ui Given X1i, X2i, . . . , Xki Has a Mean of Zero 201 -- Assumption 2: (X1i, X2i, . . . , Xki, Yi), i = 1, . . . , n Are i.i.d. 201 -- Assumption 3: Large Outliers Are Unlikely 201 -- Assumption 4: No Perfect Multicollinearity 201
6.6 The Distribution of the OLS Estimators in Multiple Regression 203
6.7 Multicollinearity 204 -- Examples of Perfect Multicollinearity 204 -- Imperfect Multicollinearity 207
6.8 Conclusion 208
Appendix 6.1 Derivation of Equation (6.1) 216
Appendix 6.2 Distribution of the OLS Estimators When There Are Two Regressors and Homoskedastic Errors 216
Appendix 6.3 The OLS Estimator With Two Regressors 216
Chapter 7 Hypothesis Tests and Confidence Intervals in Multiple Regression 218
7.1 Hypothesis Tests and Confidence Intervals for a Single Coefficient 219 -- Standard Errors for the OLS Estimators 219 -- Hypothesis Tests for a Single Coefficient 219 -- Confidence Intervals for a Single Coefficient 221 -- Application to Test Scores and the Student/Teacher Ratio 221
7.2 Tests of Joint Hypotheses 223 -- Testing Hypotheses on Two or More Coefficients 223 -- The F-Statistic 225 -- Application to Test Scores and the Student/Teacher Ratio 227 -- The Homoskedasticity-Only F-Statistic 228
7.3 Testing Single Restrictions Involving Multiple Coefficients 230
7.4 Confidence Sets for Multiple Coefficients 232
7.5 Model Specification for Multiple Regression 233 -- Omitted Variable Bias in Multiple Regression 234 -- Model Specification in Theory and in Practice 234 -- Interpreting the R2 and the Adjusted R2 in Practice 235
7.6 Analysis of the Test Score Data Set 237
7.7 Conclusion 242
Appendix 7.1 The Bonferroni Test of a Joint Hypothesis 249
Chapter 8 Nonlinear Regression Functions 252
8.1 A General Strategy for Modeling Nonlinear Regression Functions 254 -- Test Scores and District Income 254 -- The Effect on Y of a Change in X in Nonlinear Specifications 258 -- A General Approach to Modeling Nonlinearities Using Multiple Regression 262
8.2 Nonlinear Functions of a Single Independent Variable 262 -- Polynomials 263 -- Logarithms 265 -- Polynomial and Logarithmic Models of Test Scores and District Income 273
8.3 Interactions Between Independent Variables 275 -- Interactions Between Two Binary Variables 275 -- Interactions Between a Continuous and a Binary Variable 278 -- Interactions Between Two Continuous Variables 284
8.4 Nonlinear Effects on Test Scores of the Student/Teacher Ratio 288 -- Discussion of Regression Results 289 -- Summary of Findings 293
8.5 Conclusion 294
Appendix 8.1 Regression Functions That Are Nonlinear in the Parameters 305
Chapter 9 Assessing Studies Based on Multiple Regression 310
9.1 Internal and External Validity 311 -- Threats to Internal Validity 311 -- Threats to External Validity 312
9.2 Threats to Internal Validity of Multiple Regression Analysis 314 -- Omitted Variable Bias 314 -- Misspecification of the Functional Form of the Regression Function 317 -- Errors-in-Variables 317 -- Sample Selection 320 -- Simultaneous Causality 322 -- Sources of Inconsistency of OLS Standard Errors 323
9.3 Internal and External Validity When the Regression Is Used for Forecasting 325 -- Using Regression Models for Forecasting 325 -- Assessing the Validity of Regression Models for Forecasting 326
9.4 Example: Test Scores and Class Size 327 -- External Validity 327 -- Internal Validity 334 -- Discussion and Implications 335
9.5 Conclusion 336
Appendix 9.1 The Massachusetts Elementary School Testing Data 342
Chapter 10 Conducting a Regression Study Using Economic Data 343
10.1 Choosing a Topic 344
10.2 Collecting the Data 345 -- Finding a Data Set 345 -- Time Series Data and Panel Data 346 -- Preparing the Data for Regression Analysis 347
10.3 Conducting Your Regression Analysis 347
10.4 Writing Up Your Results 348
Appendix 351
References 359
Answers to "Review the Concepts" Questions 363
Glossary 367
Index 373
Key Concepts
Part One Introduction and Review 1
1.1 Cross-Sectional, Time Series, and Panel Data 13
2.1 Expected Value and the Mean 22
2.2 Variance and Standard Deviation 23
2.3 Means, Variances, and Covariances of Sums of Random Variables 36
2.4 Computing Probabilities Involving Normal Random Variables 38
2.5 Simple Random Sampling and i.i.d. Random Variables 45
2.6 Convergence in Probability, Consistency, and the Law of Large Numbers 48
2.7 The Central Limit Theorem 53
3.1 Estimators and Estimates 65
3.2 Bias, Consistency, and Efficiency 66
3.3 Efficiency of Ȳ: Ȳ Is BLUE 68
3.4 The Standard Error of Ȳ 74
3.5 The Terminology of Hypothesis Testing 77
3.6 Testing the Hypothesis E(Y) = μY,0 Against the Alternative E(Y) ≠ μY,0 78
3.7 Confidence Intervals for the Population Mean 80
Part Two Fundamentals of Regression Analysis 107
4.1 Terminology for the Linear Regression Model with a Single Regressor 113
4.2 The OLS Estimator, Predicted Values, and Residuals 117
4.3 The Least Squares Assumptions 129
4.4 Large-Sample Distributions of β̂0 and β̂1 131
5.1 General Form of the t-Statistic 148
5.2 Testing the Hypothesis β1 = β1,0 Against the Alternative β1 ≠ β1,0 150
5.3 Confidence Interval for β1 155
5.4 Heteroskedasticity and Homoskedasticity 160
5.5 The Gauss-Markov Theorem for β̂1 166
6.1 Omitted Variable Bias in Regression with a Single Regressor 187
6.2 The Multiple Regression Model 194
6.3 The OLS Estimators, Predicted Values, and Residuals in the Multiple Regression Model 196
6.4 The Least Squares Assumptions in the Multiple Regression Model 202
6.5 Large-Sample Distribution of β̂0, β̂1, . . . , β̂k 204
7.1 Testing the Hypothesis βj = βj,0 Against the Alternative βj ≠ βj,0 220
7.2 Confidence Intervals for a Single Coefficient in Multiple Regression 221
7.3 Omitted Variable Bias in Multiple Regression 235
7.4 R2 and R̄2: What They Tell You and What They Don't 236
8.1 The Expected Effect on Y of a Change in X1 in the Nonlinear Regression Model (8.3) 259
8.2 Logarithms in Regression: Three Cases 271
8.3 A Method for Interpreting Coefficients in Regressions with Binary Variables 277
8.4 Interactions Between Binary and Continuous Variables 280
8.5 Interactions in Multiple Regression 285
9.1 Internal and External Validity 311
9.2 Omitted Variable Bias: Should I Include More Variables in My Regression? 316
9.3 Functional Form Misspecification 317
9.4 Errors-in-Variables Bias 319
9.5 Sample Selection Bias 321
9.6 Simultaneous Causality Bias 323
9.7 Threats to the Internal Validity of a Multiple Regression Study 325
10.1 Guidelines for Conducting an Empirical Economic Study 350
General Interest Boxes
The Distribution of Earnings in the United States in 2004 34
A Bad Day on Wall Street 40
Landon Wins! 69
The Gender Gap of Earnings of College Graduates in the United States 84
A Novel Way to Boost Retirement Savings 88
The "Beta" of a Stock 120
The Economic Value of a Year of Education: Heteroskedasticity or Homoskedasticity? 163
The Mozart Effect: Omitted Variable Bias? 188
The Returns to Education and the Gender Gap 282
The Demand for Economics Journals 286
Do Stock Mutual Funds Outperform the Market? 321
Subject Terms:

Added Author: