Statistical Modeling of the National Assessment of Educational Progress

The purpose of this book is to evaluate a new approach to the analysis and reporting of the large-scale surveys of the National Assessment of Educational Progress carried out for the National Center for Education Statistics (NCES). The need for a new approach was driven by researchers' demands for secondary analyses more detailed than those published by NCES, and by the need to accelerate the processing and publication of survey results. The new approach is based on a full multilevel statistical and psychometric model for students' responses to the test items, taking into account the design of the survey, the backgrounds of the students, and the classes, schools, and communities in which the students were located. The authors detail a fully integrated single model that incorporates both the survey design and the psychometric model, extending the traditional form of the psychometric model to accommodate the design structure while allowing for student, teacher, and school covariates.
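The general shape of such a multilevel psychometric model can be sketched with a toy simulation. Everything here is illustrative, not the authors' actual specification: student ability is drawn as a school effect plus a student-level residual, and item responses then follow a simple Rasch-type model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_schools, students_per_school, n_items = 50, 20, 30

# Survey-design level: school effects and student abilities
# (the variance values 0.5 and 1.0 are invented for illustration)
school_effect = rng.normal(0.0, 0.5, size=n_schools)  # between-school variation
ability = (school_effect.repeat(students_per_school)
           + rng.normal(0.0, 1.0, size=n_schools * students_per_school))

# Psychometric level: Rasch model, P(correct) = logistic(ability - difficulty)
difficulty = rng.normal(0.0, 1.0, size=n_items)
logits = ability[:, None] - difficulty[None, :]
p_correct = 1.0 / (1.0 + np.exp(-logits))
responses = (rng.random(p_correct.shape) < p_correct).astype(int)

print(responses.shape)  # (1000, 30): students x items
print(responses.mean())
```

A fully integrated analysis would estimate the school, student, and item parameters jointly from `responses`; the simulation only shows how the design structure and the item-response model nest together.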
Education is a hot topic. From the stage of presidential debates to tonight's dinner table, it is an issue that most Americans are deeply concerned about. While there are many strategies for improving the educational process, we need a way to find out what works and what doesn't. Educational assessment seeks to determine just how well students are learning and is an integral part of our quest for improved education. The nation is pinning greater expectations on educational assessment than ever before. We look to these assessment tools when documenting whether students and institutions are truly meeting education goals. But we must stop and ask a crucial question: What kind of assessment is most effective? At a time when traditional testing is subject to increasing criticism, research suggests that new, exciting approaches to assessment may be on the horizon. Advances in the sciences of how people learn and how to measure such learning offer the hope of developing new kinds of assessments: assessments that help students succeed in school by making as clear as possible the nature of their accomplishments and the progress of their learning. Knowing What Students Know explains how expanding knowledge in the scientific fields of human learning and educational measurement can form the foundations of an improved approach to assessment. These advances suggest ways that both the targets of assessment (what students know and how well they know it) and the methods used to make inferences about student learning can be made more valid and instructionally useful. Principles for designing and using these new kinds of assessments are presented, and examples are used to illustrate them. Implications for policy, practice, and research are also explored.
With the promise of a productive research-based approach to assessment of student learning, Knowing What Students Know will be important to education administrators, assessment designers, teachers and teacher educators, and education advocates.
This report examines the effects of both student and school characteristics on mathematics and science achievement levels in the third, seventh, and eleventh grades, using data from the 1985-86 National Assessment of Educational Progress (NAEP). The analyses feature hierarchical linear models (HLM), a regression-like statistical technique that addresses the nesting of students within schools by directly modeling within- and between-school variation in achievement. HLM also allows examination of how school characteristics affect the relationship between student characteristics and achievement within schools. Following an executive summary, the report contains: (1) an introduction covering the background and purpose of the study, a description of the data sources and variables used in the analyses, and an outline of the methodological approach; (2) a summary of the effects of school characteristics on mathematics achievement for each of the three grades with respect to the within-school model and the five between-school models; (3) a corresponding summary for science achievement, together with a comparison of the mathematics and science results; (4) an extensive discussion of the findings in relation to methodological goals, grade-level differences, school size, the separation of socio-economic influences from race-ethnicity, tracking, gender differences, and teacher characteristics; and (5) appendices with technical notes on the variables and the HLM methodology, descriptive statistics for selected characteristics, and supporting tables for the HLM results. In general, the school characteristics examined in the analyses better explained average achievement differences between schools than they did the effects of gender, race-ethnicity, and socioeconomic status on achievement.
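The within- versus between-school variance decomposition at the heart of HLM can be illustrated with a small simulation. The variable names and variance values below are invented for illustration; the estimator shown is the classical ANOVA-style decomposition, not the report's actual fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

n_schools, n_students = 200, 30
true_between, true_within = 0.25, 1.0  # illustrative variance components

# Simulate achievement: school mean + student-level deviation
school_means = rng.normal(0.0, np.sqrt(true_between), size=n_schools)
scores = school_means[:, None] + rng.normal(0.0, np.sqrt(true_within),
                                            size=(n_schools, n_students))

# ANOVA-style estimates of the two variance components
within_var = scores.var(axis=1, ddof=1).mean()
between_var = scores.mean(axis=1).var(ddof=1) - within_var / n_students

# Intraclass correlation: share of achievement variance lying between schools
icc = between_var / (between_var + within_var)
print(round(icc, 3))
```

With the true components above, the intraclass correlation should come out near 0.25 / 1.25 = 0.2, i.e. about a fifth of the achievement variance lies between schools; school-level covariates in an HLM are then asked to explain that between-school share.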
Since 1969, the National Assessment of Educational Progress (NAEP) has been providing policymakers, educators, and the public with reports on academic performance and progress of the nation's students. The assessment is given periodically in a variety of subjects: mathematics, reading, writing, science, the arts, civics, economics, geography, U.S. history, and technology and engineering literacy. NAEP is given to representative samples of students across the U.S. to assess the educational progress of the nation as a whole. Since 1992, NAEP results have been reported in relation to three achievement levels: basic, proficient, and advanced. However, the use of achievement levels has provoked controversy and disagreement, and evaluators have identified numerous concerns. This publication evaluates the NAEP student achievement levels in reading and mathematics in grades 4, 8, and 12 to determine whether the achievement levels are reasonable, reliable, valid, and informative to the public, and recommends ways that the setting and use of achievement levels can be improved.
The National Assessment of Educational Progress (NAEP), known as the nation's report card, has chronicled students' academic achievement in America for over a quarter of a century. It has been a valued source of information about students' performance, providing the best available trend data on the academic achievement of elementary, middle, and secondary school students in key subject areas. NAEP's prominence and the important need for stable and accurate measures of academic achievement call for evaluation of the program and an analysis of the extent to which its results are reasonable, valid, and informative to the public. This volume of papers considers the use and application of NAEP. It provides technical background to the recently published book, Grading the Nation's Report Card: Evaluating NAEP and Transforming the Assessment of Educational Progress (NRC, 1999), with papers on four key topics: NAEP's assessment development, content validity, design and use, and more broadly, the design of education indicator systems.
This paper offers recommendations to the National Center for Education Statistics (NCES) on the development of the background questionnaire for the National Assessment of Adult Literacy (NAAL). The recommendations are written from the viewpoint of a researcher interested in applying sophisticated statistical models to important issues in adult literacy. The paper focuses on five issues, each treated in its own section: sampling, selection bias, measurement, policy modeling, and gauging cohort effects. Each section considers the scope of the issue and then makes recommendations to NCES. These include providing all appropriate sampling weights in the NAAL data; examining contextual effects on the distribution of literacy ability in the population; considering relevant auxiliary variables that would constitute the selection equation; considering the hypothesized number of factors and including at least four variables measuring each factor in the questionnaire; obtaining retrospective data on general and job-specific literacy-related activities; and exploring the possibility of linking NAAL with existing longitudinal surveys. (Contains 21 references.)
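The first recommendation, using the sampling weights in estimation, amounts to replacing simple means with design-weighted means. A minimal sketch, with entirely made-up scores and weights:

```python
import numpy as np

# Hypothetical literacy scores and design weights; the oversampled group
# (last two respondents) gets smaller weights so the weighted estimate
# reflects the population rather than the sample design.
scores = np.array([250.0, 260.0, 300.0, 310.0])
weights = np.array([2.0, 2.0, 0.5, 0.5])  # inverse selection probabilities

unweighted = scores.mean()                             # → 280.0
weighted = np.sum(weights * scores) / np.sum(weights)  # → 265.0
print(unweighted, weighted)
```

Here the unweighted mean overstates the population average because the high-scoring group was oversampled; ignoring the weights would bias every downstream model in the same way, which is why the paper asks NCES to supply all appropriate weights with the data.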
Fit statistics provide a direct measure of assessment accuracy by analyzing how well a measurement model fits an individual's (or group's) response pattern. Students who lose interest during the assessment, for example, will miss exercises that are within their abilities: such students respond correctly to some more difficult items and incorrectly to some less difficult ones. Most assessment programs, including the National Assessment of Educational Progress (NAEP), currently either ignore such response anomalies or assume they do not exist. The use of a weighted total fit mean square as a measure of assessment accuracy was investigated using data from the 1990 and 1992 NAEP assessments. The distribution of fit across individuals was examined for fit and item-type differences, and the practical significance of this type of fit statistic was explored. The authors conclude that this person-fit statistic has little to offer in the analysis of traditional NAEP data. Sixteen tables present the analysis results; Appendix A contains 12 subscale tables, and Appendix B presents software routines. (Contains 62 references.)
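A weighted total fit mean square for one examinee (infit, in Rasch terminology) is Σ(x − P)² / Σ P(1 − P) over the items taken, where x is the scored response and P the model-predicted probability of success. A minimal sketch under a Rasch model; the item difficulties, ability value, and response patterns are illustrative, not taken from the NAEP study:

```python
import numpy as np

def person_infit(responses, ability, difficulties):
    """Weighted total fit mean square (infit) for one examinee.

    Under a Rasch model, values near 1 indicate good fit; values well
    above 1 flag response patterns more erratic than the model predicts.
    """
    p = 1.0 / (1.0 + np.exp(-(ability - difficulties)))  # P(correct) per item
    squared_residuals = (responses - p) ** 2
    information = p * (1.0 - p)                          # model variance per item
    return squared_residuals.sum() / information.sum()

# Nine items ordered from easy to hard (illustrative difficulties)
difficulties = np.linspace(-2.0, 2.0, 9)

# Consistent pattern: passes the easy items, fails the hard ones
consistent = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0])
# Anomalous pattern: fails the easy items, passes the hard ones
erratic = 1 - consistent

infit_consistent = person_infit(consistent, 0.0, difficulties)  # below 1
infit_erratic = person_infit(erratic, 0.0, difficulties)        # well above 1
print(round(infit_consistent, 2), round(infit_erratic, 2))
```

The anomalous pattern is exactly the disengaged-student signature described above: missing easy items while passing hard ones inflates the squared residuals, so its infit is far above 1 even though both patterns have the same raw score.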