
Standard 9.2: Data analysis and data sharing

Program leadership, program partners, and all stakeholders analyze multiple sources of data and share results with stakeholders in a systematic way.

Analyzing data and sharing the results are ongoing processes. The steering committee should establish a regular schedule of meetings to discuss recently collected data. For a small system, this could include anecdotal information as well as the results of surveys and other formal procedures. Program leaders then need to compile or summarize their data in some way to help them answer their research questions.

To be truly useful, the program evaluation requires the following:

  • An agreement on the data to be collected
  • An agreed-upon, consistent, reliable, and transparent way to collect good data
  • A team of people who have been well trained in how to collect the data
  • A collaborative structure for analyzing the data
  • Criteria for interpreting and drawing inferences from the data
  • A schedule for collaboratively using the criteria and reviewing and reflecting on the interpretations of the data
  • A plan for disseminating the data to the teachers, the induction/mentoring program team, the school, the program leadership, and the system

With quantitative data, program leaders might compute averages, minimums and maximums, and medians. They can break out, or disaggregate, the data in different ways in order to compare groups. For example, one could conduct surveys after a beginning teacher orientation and then disaggregate the data to see whether there were any differences between elementary and secondary teachers or between men and women.
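As a minimal sketch, the summarizing and disaggregating described above could look like the following; the survey responses and the 1-to-5 rating scale here are entirely hypothetical.

```python
# Illustrative sketch: summarize and disaggregate hypothetical
# post-orientation survey ratings (1-5 scale) by teacher level.
from statistics import mean, median

# Hypothetical responses: (teacher level, overall rating)
responses = [
    ("elementary", 4), ("elementary", 5), ("elementary", 3),
    ("secondary", 2), ("secondary", 3), ("secondary", 4),
]

# Overall summary statistics
ratings = [rating for _, rating in responses]
print("average:", round(mean(ratings), 2))
print("median:", median(ratings))
print("min/max:", min(ratings), max(ratings))

# Disaggregate by teacher level to compare groups
by_level = {}
for level, rating in responses:
    by_level.setdefault(level, []).append(rating)
for level, vals in sorted(by_level.items()):
    print(f"{level}: average {round(mean(vals), 2)} (n={len(vals)})")
```

The same grouping idea extends to any category the program cares about (school, subject area, years of experience), as long as that category is recorded alongside each response.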

With qualitative data, program leaders might start by looking for themes or patterns (e.g. in exit interviews, many teachers mentioned feeling isolated) as well as differences (e.g. teachers at School A describe the school environment as supportive while teachers at School B describe the environment as hostile).

After summarizing the data, program leaders still need to determine what it means for their program. This interpretation often draws on general knowledge, other data sources, intuition, and the needs and interests of a particular program. For example, a program might discover that secondary teachers are more likely to leave the district than are elementary teachers. At this point, program leaders might decide to gather more data (e.g. conduct focus groups to find out why), share the information with others (e.g. administrators and the school board), or act on the information (e.g. provide additional mentoring supports for secondary teachers).

Important questions that program leaders should ask themselves include the following:

  • How does the program plan to use the data collected for improving the beginning teacher induction/mentoring program?
  • How will the mentors and other individuals doing the evaluations be trained in the collection of data (including the use of the instruments, rubrics, and protocols), the analysis of the data, and ways to provide feedback on the data to the beginning teacher and to the team?
  • How will the program make sure the evaluation is worth having, makes sense, and can be easily used for program improvement?
  • How will the analysis of the data be disseminated?
  • How will the data be used to improve the beginning teacher induction/mentoring program as well as the teaching and learning in the whole system?

A Sample Schedule

A small or new program is likely to start with a few small data-collection measures. These will vary according to the needs of the individual program and might include the following:

  • A needs assessment for beginning teachers (by surveying beginning teachers, mentors, and/or administrators)
  • Evaluations turned in after each workshop
  • A mid-year and end-of-year survey (of beginning teachers, mentors, and/or administrators)

On the other hand, a large, well-resourced system with a mature induction/mentoring program is likely to have a more robust evaluation schedule. At different points during the year, a leadership team often conducts multiple rounds of data analysis and collaborative discussions of possible interpretations and implications of those analyses. For example, a first review often occurs at the end of the first month of the program, which may or may not coincide with the end of the first month of the school term. This review provides structured feedback and helps the team make immediate, differentiated adjustments for individual beginning teachers, to the professional development design, or to the overall program.

A mid-point review could provide longitudinal data about the growth of the beginning teachers to date as well as provide additional information and data on the effectiveness of the induction/mentoring program, including its professional development programs. The end of the semester is often a convenient mid-point.

The end-of-year meeting can be a time for looking back at the trajectory of the data, analyzing and interpreting it, making adjustments to improve program impact, and planning the program and program evaluation for the following year.

In summary, it is important that the leadership team evaluate multiple sources of data and share the results with all stakeholders while being mindful of confidentiality issues.
