Institute Courses and Descriptions
AERI's activities include capacity-building institutes for technical and non-technical audiences in global and domestic settings. Institutes are typically 1-2 weeks long and can rotate among different venues, in consultation with partners.
AERI Inaugural Institute Director: Dr. Madhabi Chatterji (email@example.com).
Course 1: The Logic of Causal Reasoning with Randomized Experiments and Quasi-experiments
Faculty: Dr. Judith Scott-Clayton, Teachers College, Columbia University
Description: The purpose of this course is to provide a brief introduction to the logic of causal inference in the context of randomized experiments and quasi-experiments. We will begin by discussing the fundamental importance and challenge of causal modeling. We will then discuss the causal logic of randomized experiments as well as increasingly popular quasi-experimental methods, including instrumental variables and regression discontinuity designs. The course will be aimed at future policymakers and administrators with the goal of preparing these individuals to understand and critically interpret the findings of evaluations and other studies claiming to test causal relationships, as well as to help such individuals better identify real-world situations in which randomized experiments and/or quasi-experiments may be both useful and feasible. Graduate-level statistics is recommended as a prerequisite; however, the course will focus on broad methodological underpinnings rather than econometric techniques.
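As a flavor of the quasi-experimental logic the course covers, the sketch below simulates a sharp regression discontinuity design and estimates a program effect by fitting a line on each side of an assignment cutoff. All numbers, the cutoff, and the "true" effect are fabricated for illustration only; this is a minimal sketch, not course material.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated sharp RD: students scoring below a cutoff of 50 receive a program
# whose "true" effect is 4 points (known only because we generate the data).
running = rng.uniform(0, 100, size=2000)       # e.g., a placement-test score
treated = (running < 50).astype(float)         # sharp assignment at the cutoff
outcome = 20 + 0.3 * running + 4 * treated + rng.normal(0, 2, size=2000)

def fit_at_cutoff(x, y, cutoff=50):
    """Fit a local line and return its predicted outcome at the cutoff."""
    slope, intercept = np.polyfit(x - cutoff, y, 1)
    return intercept

# Local linear comparison within a bandwidth around the cutoff: the jump in
# predicted outcomes at the cutoff is the estimated program effect.
bw = 10
left = (running >= 50 - bw) & (running < 50)    # treated side
right = (running >= 50) & (running <= 50 + bw)  # untreated side
effect = (fit_at_cutoff(running[left], outcome[left])
          - fit_at_cutoff(running[right], outcome[right]))
print(f"estimated effect at the cutoff: {effect:.2f}")
```

The design choice here mirrors the course's emphasis: a simple comparison of means far from the cutoff would confound the program effect with the running variable itself, while the local linear fit isolates the discontinuity.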
Course 2: Qualitative Interviews: Collecting and Analyzing Data
Faculty: Dr. A. Jordan Wright, Empire State College, The State University of New York
Description: This course will take a hands-on approach to training participants in skills related to qualitative interviewing and data analysis. The first day will focus on interviewing skills and understanding how the interviewer can impact the data collected. The second day will address the major techniques for analyzing qualitative interview data, focusing especially on data-driven techniques. Participants will conduct qualitative interviews on education quality during the course and begin to analyze the data collected.
Course 3: What is a Case Study? Design and Implementation of Case Studies
Faculty: Dr. Lyle Yorks, Teachers College, Columbia University
Description: In this workshop, participants will make critical decisions related to their research problems and purposes. The focus will be on the core design decisions that must be made in designing and conducting a case study for research purposes. Particular attention will be paid to data collection methods, including interviews, surveys, focus groups, observations, and archival methods, and to sequencing them effectively for purposes of triangulation and validity. Emphasis will be placed on choosing, designing, and integrating methods appropriate for the research problem in question, and on addressing validity issues. The challenges of ethics and subject protection in complex field-based case study designs will also be addressed. The workshop will be interactive, with lectures and group discussions interspersed throughout the two days. Attention will be given to the research questions with which participants are working.
Course 4: Cost-Benefit Analysis and Cost-Effectiveness Analysis
Faculty: Dr. Clive Belfield, Queens College, City University of New York, and Teachers College, Columbia University; Dr. Hank Levin, Teachers College, Columbia University
Description: The course is an overview of cost-benefit analysis and cost-effectiveness analysis applied to education. Tools and techniques will be presented, and students will have the opportunity to apply the procedures using actual case studies. Content includes: identification and measurement of costs and benefits; consideration of intangible costs and benefits; calculation of net program benefits; examination of the benefit-cost ratio; conducting a sensitivity analysis on assumptions; and understanding and handling risk factors. Advances in shadow pricing and benefit transfer will be considered.
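The core calculations listed above (present values, net benefits, benefit-cost ratio, and a sensitivity analysis on the discount rate) can be sketched in a few lines. The program, its cost and benefit streams, and the discount rates below are all made-up illustrative numbers, not data from the course.

```python
# Hypothetical cost-benefit sketch: a tutoring program with invented annual
# cost and benefit streams, discounted to present value.

def present_value(stream, rate):
    """Discount a stream of annual amounts (year 0 first) to present value."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(stream))

costs = [1000, 200, 200]        # program cost per student, years 0-2
benefits = [0, 400, 500, 600]   # projected benefits per student, years 0-3

for rate in (0.03, 0.05, 0.07):  # sensitivity analysis over discount rates
    pv_costs = present_value(costs, rate)
    pv_benefits = present_value(benefits, rate)
    net = pv_benefits - pv_costs
    ratio = pv_benefits / pv_costs
    print(f"rate={rate:.0%}: net benefit = {net:7.1f}, B/C ratio = {ratio:.2f}")
```

With these illustrative numbers the net benefit changes sign as the discount rate rises, which is exactly the kind of result a sensitivity analysis on assumptions is meant to expose.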
Course 5: Value-added Models of Evaluation: Statistical Foundations, Assumptions, and Limitations
Faculty: Dr. Francisco Rivera-Batiz, Columbia University
Description: The objective of this course is to offer an introduction to value-added measures of evaluation and accountability in education. The history of how value-added measures were developed and have been utilized in school systems in the U.S. is first presented. The statistical foundations of value-added models are then discussed, and the difficulties and complexities involved in estimating value-added measures are examined. This includes an analysis of case studies as well as recent research using value-added models to measure the impact of teachers and schools on student outcomes. The course concludes with a discussion of the benefits as well as the limitations of value-added analysis. Although a foundational course on statistics and econometrics is highly recommended as a prerequisite, the course will be policy-oriented and the goal will be to explain to a broad audience the statistical underpinnings of value-added measures and their current and potential uses.
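One common statistical foundation the course alludes to is a covariate-adjustment model: regress students' current scores on their prior scores plus teacher indicators, and read the indicator coefficients as value-added estimates. The sketch below uses simulated data with known teacher effects; every number is fabricated for illustration, and real value-added systems are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 3 teachers, 50 students each; the "true" teacher effects
# are known only because we generate the data ourselves (purely illustrative).
true_effects = np.array([0.0, 2.0, -1.0])
teacher = np.repeat(np.arange(3), 50)
prior = rng.normal(50, 10, size=150)           # prior-year score
current = 5 + 0.9 * prior + true_effects[teacher] + rng.normal(0, 3, size=150)

# Covariate-adjustment value-added model: current score on an intercept,
# prior score, and teacher indicators (teacher 0 is the baseline).
X = np.column_stack([np.ones(150), prior,
                     (teacher == 1).astype(float),
                     (teacher == 2).astype(float)])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
print("estimated effects relative to teacher 0:", beta[2], beta[3])
```

Even in this clean simulation the estimates carry sampling noise from only 50 students per teacher, a small window into the estimation difficulties the course examines.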
Course 6: Using Standardized Test Data for Global and Multi-cultural Test-takers: Best Practices and Validity Standards
Faculty: Dr. Madhabi Chatterji & Dr. Michael Lau, Teachers College, Columbia University
Description: This course will provide the foundational principles underlying the design of standardized, large-scale and individual assessments, highlighting how the information generated by these instruments should be appropriately interpreted and used. We will treat validity and reliability issues through a global and multicultural lens. Validity standards that apply to both norm-referenced and criterion-referenced (standards-based) assessments will be discussed. The course will help build skills in interpreting some popular types of scores, scales, and assessment reports for individuals and groups, such as Percentile Ranks, Stanines, various forms of Standard Scores, Cut-scores, Domain-referenced Proficiency Scores, and Percents meeting Proficiency Standards, with reference to errors of measurement and reliability issues. We will apply this knowledge to cases, evaluating factors potentially affecting valid and appropriate use of test data in research, practice and policy contexts. Examples of widely used standardized instruments will be discussed, such as the WAIS/WPPSI, SAT, TIMSS, PISA and NAEP.
Course 7: Assessment, Accountability and Accreditation Models in Higher Education Systems
Faculty: Dr. Judy R. Wilkerson, College of Education, Florida Gulf Coast University
Description: Course participants will build or refine an assessment framework, using a logic model to structure the relationships between the constructs included. The constructs will be defined using U.S. federal accreditation guidelines, international guidelines, U.S. regional and professional accreditation agency guidelines, and locally defined needs and mission. All of these incorporate a focus on student achievement, accountability, and continuous improvement, thereby establishing the potential for high-quality assessment. A standards-based approach will be used to confirm the quality of the assessments through evidence of validity, reliability, fairness, and utility of the data produced, as well as the feasibility of the framework in K-12 and higher education systems.