Teaching American History - The Enduring Legacy of the American Revolution: Liberty, Freedom, and Equality

 

Project Evaluation

The evaluation plan is designed to address five key points:

  1. Methods of evaluation that provide for examining the effectiveness of project implementation strategies
  2. The extent to which the methods of evaluation will provide performance feedback and permit periodic assessment of progress toward achieving intended outcomes
  3. The extent to which the methods of evaluation are thorough, feasible, and appropriate to the goals, objectives, and outcomes of the proposed project
  4. The degree to which the methods of evaluation include the use of objective performance measures that are clearly related to the intended outcomes of the project and will produce qualitative and quantitative data to the extent possible
  5. Guidance about effective strategies for replication or testing in other settings.

The evaluation will be directed by two external evaluators, both holding Ph.D.s: a quantitative research psychologist and a qualitative research educator. They will collect both qualitative and quantitative data, conduct surveys, interview teachers, make onsite visits to teacher-participants’ classrooms, analyze test score data, and write evaluation reports. They will also provide input and feedback to the Advisory Board, Teacher Support Teams, and the project director to help monitor and adjust activities and enhance the effectiveness of the Developing Master Teachers grant.

1.  Methods of evaluation that provide for examining the effectiveness of project implementation strategies

To maintain the most rigorous methodological design for assessing program effectiveness, a pre-post non-equivalent control group design is necessary to assess participant and student achievement. Evaluators propose a quasi-experimental design using teacher-participants as the experimental group and a control group consisting of teachers wait-listed to participate the following year. The incentive for control-group participants will be guaranteed inclusion in the summer seminar the following year. Subsequently, additional schools can be promised inclusion in the intervention if they agree to first participate as control schools. This design controls for self-selection into the program, and the demographic profiles of participating schools should be reasonably well matched. Schools will be randomly assigned to experimental or control groups before the intervention begins; it would be impractical, however, to randomly assign individual students to experimental and control groups. Longitudinal data on all participants will also be examined to provide an overall summary of program performance. We believe that outcome effectiveness will be demonstrated more clearly when the experimental group is compared with a control group of teachers not currently receiving the intervention.
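
The school-level random assignment described above amounts to a shuffle-and-split step. A minimal sketch follows; the school names and the `assign_groups` helper are hypothetical, not part of the proposal:

```python
# Illustrative sketch of school-level random assignment (names are hypothetical).
import random

def assign_groups(schools, seed=None):
    """Shuffle the school list and split it into experimental and control halves."""
    rng = random.Random(seed)
    pool = list(schools)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

schools = ["School A", "School B", "School C",
           "School D", "School E", "School F"]
experimental, control = assign_groups(schools, seed=2007)
print("Experimental:", experimental)
print("Control:", control)
```

Fixing the seed makes the assignment reproducible for audit purposes while still being random with respect to school characteristics.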

2.  How methods of evaluation provide performance feedback and permit periodic assessment of progress toward achieving intended outcomes

A quasi-experimental design using pre- and post-test measures will allow for more reliable and valid assessment of process and outcome evaluation goals. Statistical analyses such as independent t-tests, along with other descriptive, univariate, and multivariate statistics, will be used to evaluate progress toward key goals. Control and experimental groups will receive pre- and post-test measures of the intended goals. Surveys, observations, and interviews will be used in all assessments to examine whether significant differences emerge in the experimental group. Further, participant demographic information and rates of attrition will be carefully monitored to assess experimental mortality and its possible correlates.
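
As one illustration of the planned analyses, an independent-samples (Welch's) t statistic comparing pre-to-post gain scores between the two groups can be sketched as follows; the gain scores below are hypothetical, not project data:

```python
# Illustrative sketch: Welch's independent-samples t statistic comparing
# hypothetical pre-to-post gain scores for experimental vs. control teachers.
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

experimental_gains = [12, 9, 15, 11, 14, 10, 13, 12]  # hypothetical
control_gains = [4, 6, 3, 5, 7, 4, 5, 6]              # hypothetical
print(f"Welch's t = {welch_t(experimental_gains, control_gains):.2f}")
```

In practice the evaluators would also compute degrees of freedom and a p-value, and would apply the same comparison to each outcome measure.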

3.  Measuring the extent to which the methods of evaluation are thorough, feasible, and appropriate to the goals, objectives, and outcomes of the proposed project

Evaluators have existing quantitative and qualitative measures designed to assess project goals. In this multiple-methods approach to data collection, quantitative results will be compared against qualitative findings: statistical analyses will examine participants’ average responses, while qualitative data will provide a richer description of the program. In addition, the quantitative measures will be examined using factor analyses to investigate the reliability and validity of the instruments.
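
One common internal-consistency check for such survey instruments is Cronbach's alpha; this is a sketch of one possible reliability statistic, not necessarily the specific procedure the evaluators will use, and the Likert responses below are hypothetical:

```python
# Illustrative sketch: Cronbach's alpha, an internal-consistency estimate
# for a multi-item survey scale. Responses below are hypothetical.
from statistics import variance

def cronbach_alpha(items):
    """items: one list of respondent scores per survey item."""
    k = len(items)
    item_var_sum = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three hypothetical 5-point Likert items, six respondents each
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency.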

4.  The degree to which the methods of evaluation include objective performance measures that are clearly related to the intended outcomes of the project and will produce qualitative and quantitative data to the extent possible
Objective performance measures will first be created to assess process indicators.

Process Indicators Designed to Facilitate Outcome Improvement

Key process indicators to be measured include:

  • Assessment of the implementation of innovative summer seminars utilizing CSC faculty, master elementary and secondary teachers, and public historians
  • Evaluation of ongoing meetings of participants during the academic year to discuss American history content and teaching strategies
  • Monitoring of financial responsibilities
  • Examination of timelines, the allocation of resources and seminar materials, and maintenance of the TAH website
  • Assessment of seminar effectiveness
  • Collection of basic demographic information and baseline data on all participants (control and experimental)

This evaluation provides information about the implementation of the program and the outcomes of project participants, and it will be used to inform ongoing project development as well as to determine whether the project is meeting its stated objectives. Through a combination of qualitative and quantitative approaches, the evaluation team will address the extent to which the preceding goals have been attained.

Methods of Evaluation: Qualitative and Quantitative Analysis

Evaluators will gather descriptive information on the development and implementation of the project. At this stage, we will also provide an in-depth analysis of program participants (experimental and control). Using observations, interviews, and surveys, we will produce descriptive and statistically rigorous information on proposed versus actual implementation of the project, critical issues encountered by providers as they implement the project, descriptions of participants, and lessons learned. Further, when possible, data will be collected by assistants who are blind to the hypotheses, so that the data are less susceptible to observer bias. Qualitative data gathered through this multiple-methods approach will supplement and enrich the quantitative analyses. These methods can identify problems with any specific aspect of the project’s implementation, allowing the Program Director and staff to make necessary adjustments and continue to observe their results. Furthermore, documenting project adjustments and lessons learned along the way will provide a rich resource for others who may want to replicate this project. Data collection formats include: (1) a review of comments made by program participants, (2) observations, and (3) surveys of all program participants.

Measuring Outcomes. The outcome evaluation draws on multiple data sources to obtain detailed participant information. Basic outcome goals include (sample questions noted in quotation marks):

A. Will the workshops enhance and enlarge teacher-participants’ background in American history and their general understanding of its content by addressing gaps in knowledge through professional development workshops?

“Have gaps in the knowledge of American history been identified and addressed through professional development workshops?”

“To what extent have teacher-participants actually increased their content knowledge of that period of American history?”

As measured on a Likert-type scale, participants in the experimental group are expected to exhibit significantly greater understanding of content material, along with greater confidence in their knowledge and presentation of it, from pre-test to post-test and throughout the year. Further, it is expected that over 75% of program participants will report increased content knowledge and greater confidence when discussing content material with peers and students.

B.  How will community partnerships in the State of Vermont strengthen the program design?

“How did community partnerships strengthen the project design?”

The extent to which community partnerships strengthen the program design will be assessed through the perceptions of advisory committee members and participating schools.

C.  Improve teacher practice by modifying and updating curriculum, and improve student success in American history

“What percentage of teacher-participants actually implement the new curriculum in their classrooms?”

“What attributes of the project contributed most to improving educational opportunities and services to support/accommodate the needs of the students?”

“How have teachers’ background and understanding of American history been affected?”

“Have any teacher practices changed in the classrooms?”

“What strategies were most successful for teacher-participant learning and implementation?”

“Are students enjoying the new curriculum, or showing more interest in American history than before?”

“Have schools ultimately updated and revised the American history curriculum in the schools noted above?”

Teacher practices, including modifying, updating, and improving curriculum, will be evaluated quantitatively by comparing experimental and control groups on whether teacher-participants incorporate new teaching strategies and modified or updated curriculum relating to the topics covered at the summer seminars. It is predicted that approximately 75% of program participants will report modified, updated, or improved curriculum, compared to control participants. Qualitative measures such as interviews and observations will be used to further examine differences between control and experimental schools with regard to curriculum topics. Student success will be measured using surveys assessing enjoyment of history and teacher presentation, along with test scores on the topic; these can be compared against control-group teachers teaching the same topics. Student success and general knowledge can also be assessed by surveying student perceptions of their classes, for example:

“I feel I know more about Heroes of Equality after this class.”

“I am more interested in American History this year compared to last year.”   

“My history grades are better when we go over interesting material such as ________.”

“I am more interested in history since we’ve been discussing Heroes of Equality.”
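
The predicted gap between experimental and control groups in the share of teachers reporting updated curriculum could be checked with a two-proportion z-test. A minimal sketch, using hypothetical counts rather than project data:

```python
# Illustrative sketch: two-proportion z statistic for comparing the share of
# experimental vs. control teachers reporting updated curriculum
# (counts are hypothetical).
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 18 of 24 experimental teachers vs. 7 of 24 control teachers (hypothetical)
print(f"z = {two_proportion_z(18, 24, 7, 24):.2f}")
```

A z value beyond roughly 1.96 in magnitude would indicate a difference significant at the conventional 5% (two-tailed) level.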

D. Utilize and incorporate additional resources offered by TAH into teachers’ curricula.

“What percentage of participants utilize resources available to them (primary documents, internet, etc.), not used previously?”

“Have teachers been utilizing more resources than they had in the past?”

“To what extent have such resources complemented new knowledge and curriculum growth?”

Use of additional resources will also be examined and compared between control and experimental groups with regard to number and type of resources used to cover content materials. It is expected that 75% of teacher-participants will report utilizing additional resources compared to control participants.

E. Information sharing among teacher-participants, peers and mentors

“Have teachers been sharing and discussing their new content knowledge with others?”

“How often do teacher-participants meet with mentors and colleagues to discuss content knowledge of that time period of American history?”

Information sharing among participants, peers, and master teachers will be compared between control and experimental groups. It is expected that 100% of teacher-participants will exhibit an increase in information sharing with peers and master teachers, and that teacher-participants’ use of website and library materials will also increase significantly (60%) compared to control participants. While the program facilitates information sharing among peers, mentors, and master teachers, one serious concern is contamination of the control group through “spillover” effects from the experimental group. To control for this problem, teacher-participants will be instructed to share information within their own school systems only.

5.  The extent to which the evaluation will provide guidance about effective strategies for replication or testing in other settings

All results will be presented to the Director and Advisory Council on a quarterly basis. Products generated by the evaluation team will include reports to management staff on program progress and data findings, presented in APA format (title, abstract, literature review, methods, results, and discussion) via mid-year assessments, participant interviews, and a final evaluation report describing the findings. Based on what is learned from these assessments, the evaluation team will describe the ongoing process and implementation of the project throughout the year; identify critical issues during implementation, as identified through program participant assessment; determine the extent to which the program has been implemented as planned; and address the direct outcome evaluation components. As planned, the following year will include the original control group, and the process can be replicated with a different group of participants each year; another school will then be chosen as a control group. This process provides somewhat matched groups of control and experimental participants and gives evaluators a way to replicate results in additional populations and settings.

Further Evaluative Objectives:

Objective I.
Evaluators will engage in an ongoing process evaluation designed to provide vital feedback about the program on a regular basis, allowing for rapid response to identification of problems in program implementation and management. Our goal is to obtain relevant information to provide yearly outcome measurements on program participant status in conjunction with the outcomes stated above.

Objective II.
During the first year of the project, the evaluation team will establish a baseline of all program participants. Further analysis will continue to assess program-specific process evaluation goals and key outcome indicators on a longitudinal basis.

Objective III.
By the end of years one and two, evaluators will have collected data designed to provide information useful in aiding and enhancing program effectiveness and developing effective program evaluation techniques.

Evaluation Plan and Execution

The evaluation plan focuses on providing information about the implementation of the program and the outcomes of project participants. This information will be used to inform ongoing project development and to determine whether the project is meeting its stated objectives. Through a combination of qualitative and quantitative approaches, the evaluation team will address the extent to which the preceding goals have been attained.

The evaluation has three components. The first is a performance monitoring system that will produce quarterly data on participant characteristics, project implementation indicators, and participant outcome indicators. The second is a qualitative component that will be used to describe the development of the project, identify critical issues during implementation, and gather information critical for follow-up and replication. The third is the collection and evaluation of outcome data on project participants using standardized measures.

A synopsis of time periods and execution of the evaluation plan can be seen in the following table.

Teaching American History Grant

Data Sources & Instruments for Quasi-Experimental Research Design

For each source group (teachers, students), instruments are listed by time period (Pre-Test, Incident Specific, Post-Test) along with their purpose.

Sources: TEACHERS (quasi-experimental, randomized experimental & control groups)

Pre-Test
  • Pre-test provided to participants before the summer seminar: an entrance survey of participants’ content knowledge, confidence in teaching the content, perceptions, current use of resources, and teaching strategies for the topics covered each summer

Incident Specific
  • Experimental group attends the summer seminar; control group does not attend

Post-Test
  • Post-test provided to participants after the summer seminar: an exit survey of participants’ content knowledge, confidence in teaching the content, perceptions, current use of resources, and teaching strategies for the topics covered each summer
  • Mid-year evaluations of perceptions of topic material, use of resources in the classroom, and student perceptions of teaching strategies used, with teacher-participants compared to control participants
  • Mid-year workshop, observations, focus groups, and structured interviews with control and experimental participants
  • Focus group interviews (biannually); structured interviews with key individuals (periodic); observation of key field trips and meetings

Purpose
  • To determine whether significant differences exist between experimental and control groups in content knowledge, confidence in teaching the content material in the classroom, current knowledge of the history of “heroes of liberty” (with an emphasis on understanding the importance of both genders), use of resources on the topic, and student perceptions of teaching strategies used
  • Mid-year evaluation of program participants’ perceptions of how the summer workshop has helped them, with responses compared to those of control teacher-participants
  • To examine differences between experimental and control groups in teachers’ perceptions of topic areas, and experimental participants’ perceptions of project effectiveness

Sources: STUDENTS (non-randomized experimental & control groups)

Pre-Test
  • Document-Based Questions (rubric-scored)
  • History Experience Survey
  • American History Quiz (one for each unit/lesson)

Post-Test
  • Document-Based Questions (rubric-scored)
  • History Experience Survey
  • American History Quiz (one for each unit/lesson)

Purpose
  • Capture students’ skill with thinking/writing tasks in history
  • Capture students’ perceptions of their experience learning history
  • Capture students’ knowledge of the project-specific topic
  • Capture context-specific data on students’ work

Note: In this proposed evaluation design, CRE’s budget for a rigorous, independent, external evaluation is set at 12% of the total project budget. As an incentive to promote full-scale participation, and to help demonstrate the effectiveness of the project, teachers are paid a fee by CRE in two installments (at the start and finish of each year) to complete pre- and post-test data collection instruments on their perceptions and knowledge, to administer the student survey and assessment instruments, and to score students’ results on the Document-Based Questions (pre and post) using a common rubric. Additionally, each participating school is paid a fee by CRE at the end of the project to compensate the system for providing assistance with data collection.

 

Copyright 2007, Teaching American History
www.TAHVT.org