Glossary

This glossary defines terms used in the evaluation of magnet school programs.

Accountability
The responsibility of program staff to provide evidence to stakeholders and sponsors that a program is effective and in conformity with its coverage, service, legal, and fiscal requirements.
Analysis
Examination of a body of data and information using appropriate qualitative methods or statistical techniques to produce answers to evaluation and research questions.
Assessment
An approach used to examine or consider a process or factor before an intervention is implemented, commonly referred to as a needs assessment.
Attrition
Loss of subjects from a student sample during the course of data collection.
Baseline
Data describing the condition or performance level of participants before the intervention, treatment, or program is implemented.
Benchmark
A point of reference or standard of behavior against which performance is compared.
Coding
The process of translating a given set of data or items into descriptive or analytical categories for analysis.
Comparison Group
In a quasi-experimental design, a carefully chosen group of participants who either do not receive the intervention or receive a different intervention from that offered to the primary intervention group. Unlike a control group, a comparison group is selected rather than randomly assigned to the treatment or non-treatment condition.
Contamination
A threat to the evaluation that arises when the comparison or control group is unintentionally exposed to the intervention being studied.
Control Group
In an experimental design, a randomly assigned group from the same population that does not receive the treatment or intervention that is the subject of the evaluation.
Data Collection Instruments
Tools used to collect information for an evaluation, including surveys, tests, questionnaires, interview instruments, intake forms, case logs and attendance records. Instruments may be developed for a specific evaluation or modified from existing instruments.
Data Collection Plan
A written document describing the specific procedures to be used to gather information or data. The plan describes who will collect the information, when and where it will be collected, and how it will be obtained.
Data Dictionary
A collection of descriptions of the data objects or items in a data model for the benefit of programmers and others. After each data object is given a descriptive name (e.g., sex, age), its relationship to other data objects is described, the type of data (text, image, binary value) is specified, possible predefined values are listed, and a brief textual description is provided.
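As a rough sketch only (the field names, types, and values below are hypothetical and not drawn from any particular MSAP data set), a single data dictionary entry might be recorded along these lines:

```python
# Hypothetical data dictionary entry for one data object in a student-level file
data_dictionary_entry = {
    "name": "sex",                        # descriptive name of the data object
    "related_to": ["student_id"],         # relationship to other objects in the data model
    "type": "text",                       # type of data (text, image, binary value, ...)
    "allowed_values": ["F", "M"],         # possible predefined values
    "description": "Student's sex as recorded on the enrollment form",
}
```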
Data Display
A visual format for organizing information (e.g. graphs, charts, matrices or other designs).
Data Sources
The people, documents, products, activities, events and records from which data are obtained.
Design
The process of creating procedures to follow in conducting an evaluation.
Dosage
How much of the intervention activity was done, how many people were involved and how much of each activity was administered to each participant, classroom or school over a specified length of time.
Effect Size
Measurement of the strength of a relationship or the degree of change.
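The glossary does not prescribe a particular effect-size measure. As one common illustration, Cohen's d expresses the difference between two group means in pooled standard-deviation units; the sketch below uses made-up score lists purely for illustration.

```python
import statistics

def cohens_d(treatment_scores, comparison_scores):
    """Standardized mean difference between two groups (Cohen's d)."""
    mean_t = statistics.mean(treatment_scores)
    mean_c = statistics.mean(comparison_scores)
    # Pooled standard deviation of the two groups
    n_t, n_c = len(treatment_scores), len(comparison_scores)
    var_t = statistics.variance(treatment_scores)
    var_c = statistics.variance(comparison_scores)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Hypothetical post-test scores for a magnet (treatment) and a comparison classroom
print(cohens_d([78, 82, 75, 90, 85], [70, 74, 72, 80, 76]))
```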
Effectiveness
Degree to which the program yields desired/desirable results.
Evaluation Plan
A written document that describes the overall approach or design that will guide the evaluation. The plan includes what evaluation will be done, how it will be done, who will do it, when it will be done, and the purpose of the evaluation. The evaluator and project director develop the plan after consultation with key stakeholders, and it serves as a guide for the evaluation team.
Evaluation Team
A group of project staff members that includes, at minimum, the evaluator, the project director, and representatives of key stakeholders and that has the responsibility to oversee the evaluation process.
Evaluator
An individual who is trained and experienced in designing and conducting evaluations and who uses tested and accepted research methodologies.
Experimental Design
The random assignment of students, classrooms, or schools to either the intervention group (or groups) or the control group (or groups). Randomized experiments are the most effective and reliable research method available for testing causal hypotheses and for drawing causal conclusions, that is, being able to say that the intervention caused the outcomes.
External Evaluator
A person conducting an evaluation who is not employed by or closely affiliated with the organization conducting the intervention; also known as a third-party evaluator.
Fidelity
The extent to which an intervention or program is carried out as designed. It is one important focus of a process evaluation.
Formative Evaluation
An approach that uses data to improve program implementation, address unanticipated problems as they are discovered, and/or document progress toward desired outputs. Formative evaluation generally occurs while the program is still being developed, as in a start-up or pilot phase.
IB
International Baccalaureate
Impact
Social, economic, and/or environmental effects or consequences of the program. Impacts tend to be long-term. They may be positive, negative, neutral or unintended.
Impact Evaluation
An evaluation that assesses the changes in the well-being of individuals that can be attributed to a particular intervention, such as a project, program or policy.
Implementation Evaluation
A form of evaluation that focuses on what happens in a program as it is delivered and documents the extent to which intervention strategies and activities are executed as planned. It requires close monitoring of implementation activities and processes. This type of information can be used to adjust activities throughout a program’s lifecycle. See definition for process evaluation.
Implementation Fidelity
Evidence, based on data, that an intervention has been put into effect as intended.
Indicator
A variable that is empirically connected to the standard.
Inputs
Resources that go into a program including staff time, materials, money, equipment, facilities, and volunteer time.
Institutional Review Board (IRB)
A committee or organization charged with reviewing and approving the use of human participants in research and evaluation projects. The IRB serves as a compliance committee and is responsible for reviewing reported instances of regulatory noncompliance related to the use of human participants in research. IRB approval is required for federally funded, nonexempt, human participants research.
Instrument
A device for collecting data, such as a survey, test, or questionnaire, that can be used in process and outcome evaluations.
Internal Evaluator
A staff member or organizational unit who is conducting an evaluation and who is employed by or affiliated with the organization within which the project is housed.
Logic Model
A diagram showing the logic or rationale underlying a specific intervention. A logic model visually describes the link between (a) the intervention, requirements and activities, and (b) the expected outcomes. It is developed in conjunction with the program theory. See definition for program theory.
Methodology
The process, procedures and techniques used to collect and analyze data.
Magnet School Assistance Program (MSAP)
A federal program that provides financial assistance to school districts or local education agencies to develop or expand magnet school programs designed to promote the reduction, elimination, or prevention of minority group isolation and to provide quality instruction. MSAP is authorized by Title V of the Improving America’s Schools Act.
Objective
A clearly identified, measurable outcome that leads to achieving a goal. The most straightforward method for stating objectives is to specify a percentage increase or decrease in knowledge, skill, attitude, or behavior that will occur over a given time period (e.g., by the end of the academic year, students will demonstrate a 20 percent increase in science scores).
Outcomes
Changes or benefits resulting from activities and outputs. Outcomes answer the questions, “So what?” and “What difference does the program make in people’s lives?” Outcomes may be intended or unintended, positive or negative.
Outcome Evaluation
An evaluation that assesses the extent to which an intervention affects (a) its participants (i.e. the degree to which changes occur in their knowledge, skills, attitudes, or behaviors) and (b) the environments of the school, community, or both. Several important design issues must be considered, including how to best determine the results and how to best contrast what happens as a result of the intervention with what happens without the program.
Outputs
Activities, services, events, products, and participation generated by a program.
Pilot Test
A preliminary test or study of either a program intervention or an evaluation instrument to assess appropriateness of components or procedures and make any necessary adjustments. For example, an agency might pilot test new data-collection instruments developed for an evaluation.
Post-Test
A test or measurement taken after a service or intervention has occurred. The results of a post-test are compared with the results of a pre-test to seek evidence of the change in the participant’s knowledge, skills, attitudes, or behaviors, or of changes in school or community environments, that has resulted from the intervention.
Power Analysis
A method used by the evaluation team to decide on the number of participants necessary to detect meaningful results.
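As a hedged illustration of how such a calculation is often carried out (the effect size, alpha, and power values below are assumptions, not recommendations), the statsmodels library can solve for the sample size needed in a two-group comparison:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size needed to detect a given effect size
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.3,  # assumed small-to-moderate effect
                                   alpha=0.05,       # significance level
                                   power=0.80)       # desired statistical power
print(round(n_per_group))  # roughly 175 participants per group under these assumptions
```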
Pre-Test
A test or measurement taken before a service or intervention begins. The results of a pre-test are compared with the results of a post-test to assess change. A pre-test can be used to obtain baseline data.
Process Evaluation
A form of evaluation that focuses on what happens in a program as it is delivered and documents the extent to which intervention strategies and activities are executed as planned. It requires close monitoring of implementation activities and processes. This type of information can be used to adjust activities throughout a program’s lifecycle. See definition for implementation evaluation.
Program Evaluation
Research, using any of several methods, designed to judge the merit and worth of a program or intervention, as well as test its influence or effectiveness.
Program Implementation Activities
The intended steps identified in the plan for the intervention.
Program Monitoring
The process of documenting the activities of program implementation.
Program Theory
A statement of the assumptions about why the intervention should affect the intended outcomes. The theory includes hypothesized links between (a) the program requirements and activities, and (b) the expected outcomes; it is depicted in the logic model.
Propensity Score
The probability of a unit (e.g., person, classroom, school) being assigned to a particular condition in a study given a set of known covariates. Propensity scores are used to reduce selection bias by equating groups based on these covariates.
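One common way to estimate propensity scores, sketched here with entirely hypothetical covariates and a hypothetical enrollment indicator, is to fit a logistic regression predicting treatment status and take the predicted probabilities:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical covariates (prior test score, attendance rate) and magnet enrollment flag
X = np.array([[620, 0.95], [580, 0.90], [700, 0.98],
              [550, 0.85], [640, 0.93], [600, 0.88]])
enrolled = np.array([1, 0, 1, 0, 1, 0])  # 1 = magnet (treatment), 0 = comparison

# The propensity score is the predicted probability of being in the treatment group
model = LogisticRegression().fit(X, enrolled)
propensity = model.predict_proba(X)[:, 1]
print(propensity)  # scores that could be used to match or weight comparison students
```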
Qualitative Data
Nonnumeric data that can answer the how and why questions in an evaluation. These data are needed to triangulate (see definition for triangulation) results to obtain a complete picture of the effects of an intervention.
Qualitative Evaluation
An evaluation approach that is primarily descriptive and interpretive.
Quasi-Experimental
The nonrandom assignment of students, classrooms, or schools to either the intervention group (or groups) or to the comparison group (or groups). Assignments may be based on matching or other selection criteria. Quasi-experiments cannot test causal hypotheses or support causal conclusions; they identify correlations between the intervention and outcomes.
Random Assignment
A procedure in which sample participants are assigned by chance to experimental or control groups, creating two statistically equivalent groups.
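A minimal sketch of the procedure, using a hypothetical student roster, is to shuffle the list of participants and split it in half:

```python
import random

# Hypothetical roster of consenting students
students = ["s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"]

random.shuffle(students)                 # order is now determined by chance alone
midpoint = len(students) // 2
treatment_group = students[:midpoint]    # receives the magnet intervention
control_group = students[midpoint:]      # does not receive the intervention
print(treatment_group, control_group)
```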
Randomized Controlled Trial
A study that randomly assigns individuals or groups from the target population either to an intervention (experimental) group or to a control group in order to measure the effects of the intervention.
Reliability
The extent to which an instrument, test or procedure produces the same results on repeated trials.
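As one hedged example (test-retest reliability, using made-up survey scores), the correlation between two administrations of the same instrument to the same students gives a simple indication of its stability:

```python
from scipy.stats import pearsonr

# Hypothetical scores from the same students on two administrations of a survey
first_administration  = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0]
second_administration = [3.4, 4.0, 2.9, 3.7, 4.6, 3.1]

# Test-retest reliability: a high correlation suggests the instrument is stable
r, _ = pearsonr(first_administration, second_administration)
print(round(r, 2))
```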
Response Bias
The degree to which a self-reported answer may not reflect reality because of the respondent’s misperception or deliberate deception.
Results
Relevant findings gleaned from the information and data that have been collected and analyzed in an evaluation.
Rigorous Evaluation
An evaluation that uses an experimental or quasi-experimental design for a specific purpose: to determine a program’s effectiveness.
Sample
A subset of a total population. A sample should be representative because information gained from the sample is used to estimate and predict the population characteristics under study.
School Climate
Multidimensional aspects of a school encompassing both characteristics of the school and perceptions of the school as a place to work and learn.
Self-Evaluation
A self-assessment of program processes and/or outcomes by those conducting or involved in the program.
Stakeholders
Individuals who have an interest in a project. Examples include students, teachers, the project’s source of funding, the sponsoring or host organization, internal project administrators, participants, parents, community members and other potential program users.
Statistical Significance
A general evaluation term referring to the conclusion that a difference observed in a sample is unlikely to be attributable to chance. Statistical tests are performed to determine whether one group (e.g., the experimental group) is different from another group (e.g., the control or comparison group) on the measurable outcome variables used in a research study.
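As a small illustration with made-up example scores, an independent-samples t-test is one common way such a comparison is made:

```python
from scipy.stats import ttest_ind

# Hypothetical outcome scores for experimental and comparison groups
experimental = [82, 88, 75, 91, 84, 79, 86, 90]
comparison   = [78, 74, 80, 72, 77, 75, 81, 73]

# Independent-samples t-test; a p-value below the chosen alpha (e.g., 0.05)
# is conventionally taken as evidence the difference is not due to chance
t_stat, p_value = ttest_ind(experimental, comparison)
print(t_stat, p_value)
```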
STEM
Science, Technology, Engineering and Mathematics
Summative Evaluation
An approach that uses data to make judgments and decisions about whether to continue, modify, or end a particular program. In the evaluation of a magnet program there may be both formative and summative uses of data.
Treatment Group
Also called an experimental or intervention group, a treatment group is composed of the individuals receiving the intervention services, products, or activities to be evaluated.
Triangulation
The use of multiple sources of data, observers, methods, and theories in an investigation to verify or corroborate a finding.
Validity
In terms of an instrument, the degree to which it measures what it is intended to measure, also described as the soundness of the instrument. In terms of an evaluation study, the degree to which it uses sound measures, analyzes data correctly and bases its inferences on the study’s findings.
Variable
A measurable attribute of behavior, skill, quality, or attitude that is being studied or observed.