Critical Issue: Evaluating Professional Growth and Development

ISSUE: The ultimate worth of professional development for teachers lies in the essential role it plays in improving student learning. Educators therefore must attend to the results of professional development on job performance, organizational effectiveness, and the success of all students. Each professional development effort should be accompanied by a well-designed evaluation plan for determining its effectiveness. Because of the complexity of current changes in education, this plan should draw extensively on ethnographic research. It also should extend over time in order to describe and give value to the interrelationship of individual change and systemic change in education.

OVERVIEW: As schools continue to move through phases of educational reform, one factor consistent across state, district, and school plans is the need for professional development. Every school-improvement effort hinges on the smallest unit; in education, that is the classroom (McLaughlin, 1991). Educational reform requires teachers not only to update their skills and knowledge but also to transform their roles as educators. It establishes new expectations for students, teachers, and school communities that some educators may not be prepared to meet. Professional development helps teachers learn new roles and teaching strategies that will improve student achievement. (See the Critical Issue "Realizing New Learning for All Students Through Professional Development.") Educators have access to an expanding body of knowledge about their content areas, teaching techniques, and meaningful, engaged learning for students. This increased flow of information--along with the current focus on educational standards that emphasize in-depth learning experiences and problem-solving abilities in mathematics, science, social studies, and other disciplines--has made it imperative that teachers be prepared to implement change in the classroom. Guskey (quoted in Asayesh, 1993) states, "If improvement results from change, there must be something to initiate that change. And what's going to initiate it other than staff development?" (p. 24).

To demonstrate its commitment to professional development, the U.S. Department of Education has developed a National Awards Program for Model Professional Development to evaluate and reward schools that have effective professional development programs. In December 1996, five American schools were honored for excellence in teacher development.

Along with the demand for quality professional development comes the need to be accountable. Professional development programs must be assessed to document their value to the school organization, individual educator, and ultimately the students. To ensure the effectiveness of each professional development effort, the National Staff Development Council (1995) recommends that "evaluation must be seen as an ongoing process that is initiated in the earliest stages of program planning and continued beyond program completion" (p. 7).

The development of the evaluation strategy should commence at the beginning of the planning process for each professional development program. Members of an evaluation team are chosen and charged with the responsibility for evaluating each program. As a preliminary step, evaluators determine the purposes of the evaluation. Mullins (1994) states:

"Identifying the purposes of the evaluation makes it possible to formulate the questions that will have to be answered as part of the evaluation. The evaluation team should begin to formulate these questions by talking with as many of the stakeholders of the staff development program as possible." (p. 6)

After the purposes of the evaluation are determined, the evaluation team can plan the evaluation itself. The evaluation design should be based on the intended outcomes of the school-improvement effort. Mullins (1994) notes, "What is to be evaluated is guided by why the evaluation is being done and how the results are going to be used" (p. 6). The evaluation team can ensure the quality of the professional development program by asking questions that focus on the value of the program in achieving school-improvement goals.

Evaluation of a professional development program has two important goals: to improve the quality of the program, and to determine its overall effectiveness. Evaluation that is used to modify or improve a professional development program is called formative evaluation. Formative evaluation is done at intervals during a professional development program. Participants are asked for feedback and comments, which enable the staff developers to make mid-course corrections and do fine-tuning to improve the quality of the program. Formative evaluation helps ensure that each professional development program meets the participants' needs and expectations, is a meaningful experience, and can be translated into action in the classroom. Some staff developers use formative evaluation on a daily basis during their programs.
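The mid-course feedback described above can be tallied very simply. The sketch below, written in Python, averages hypothetical daily participant ratings and flags any item that falls below a cutoff so staff developers know where a correction is needed; the rating items, sample values, and threshold are all illustrative assumptions, not taken from the article.

```python
from statistics import mean

# Hypothetical daily feedback from one workshop session: each participant
# rates relevance and pacing on a 1-5 scale and may leave a comment.
responses = [
    {"relevance": 4, "pacing": 2, "comment": "Too fast in the afternoon."},
    {"relevance": 5, "pacing": 3, "comment": ""},
    {"relevance": 4, "pacing": 2, "comment": "More classroom examples, please."},
]

def flag_midcourse_corrections(responses, threshold=3.0):
    """Average each rated item and flag any item whose average falls
    below the threshold, so the next session can be adjusted."""
    items = sorted({k for r in responses for k in r if k != "comment"})
    averages = {item: mean(r[item] for r in responses) for item in items}
    flags = [item for item, avg in averages.items() if avg < threshold]
    return averages, flags

averages, flags = flag_midcourse_corrections(responses)
# "pacing" averages about 2.33, below the threshold, so it is flagged.
```

A spreadsheet would serve the same purpose; the point is only that formative data gathered at intervals, however simple, gives the staff developer something concrete to act on before the program ends.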

Evaluation to determine the overall effectiveness of a professional development program is called summative evaluation. Summative evaluation is done at the conclusion of the program. It is collected at three levels: educator practices, organizational changes, and student outcomes.

The first level of summative evaluation is to assess the changes in the educators as a result of participating in the professional development program. Participants are asked to describe changes in how they think, what they believe, and what they do in the classroom (Guskey & Sparks, 1991a). They describe their own professional growth and evaluate the program in meeting their personal and professional goals. Such changes in participants can be determined through questionnaires, observations, interviews, self-assessment instruments, and analysis of records (such as minutes of faculty meetings) (Guskey & Sparks, 1991b).

The second level of summative evaluation is to assess the ways in which the school organization has changed. This assessment is critical because research shows that organizational climate and culture strongly influence both initial and continued use of innovation (Joyce, 1990). Professional development efforts will have a greater impact on student outcomes if the organizational culture provides ongoing support for such efforts. Showers (quoted in Asayesh, 1993) describes the component of organizational change that accompanies effective professional development:

"[Effective professional development is] intensive enough to allow people to develop new knowledge and skills. And it has a component in the workplace as well as in the training environment. Somehow there's a support system that follows teachers into the workplace and either provides continuing training or some kind of structure enabling teachers to continue solving problems in the workplace." (p. 25)

Change in the school organization can be determined through interviews, questionnaires, observations, analysis of documents (such as budgets and policies), and minutes of meetings (Guskey & Sparks, 1991b). To assess this level of change, evaluators can look for increased collaboration, an improved relationship between administration and teachers, and general changes in the culture of the school.

Anne Norford, principal of Brownsville Elementary School in Crozet, Virginia, talks about professional development at her school and the changes in the teaching atmosphere that have occurred as a result of professional development. Excerpted from an interview with Anne Norford (North Central Regional Educational Laboratory, 1992).

The third level of summative evaluation is to determine the effect of the professional development process on student learning. Because most school-improvement efforts are intended to influence student achievement, the evaluation ultimately must consider the effectiveness of professional development in that area. Procedures for feedback on results provide teachers with evidence of the effectiveness of their efforts on student learning. Teachers are more likely to proceed with new practices if they see positive results in their students. Student scores on standardized tests should not be the only measure of effectiveness. Teacher-developed achievement tests, student portfolios, and grades can be used to evaluate the effect of professional development (Asayesh, 1993). Schools also may wish to evaluate the extent to which students are responsible for their own learning and are engaged in in-depth investigations of issues and problems. If the professional development effort has a nonacademic focus (such as reducing student violence), the indicators of effectiveness should include authentic forms of assessment (such as dropout rates, student attendance rates, and changes in the amount of school vandalism).
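Tracking several indicators at once, as recommended above, amounts to comparing a baseline against a follow-up and noting which measures moved in the desired direction. The Python sketch below illustrates the idea with made-up indicator names and values (none of the figures come from the article), including nonacademic measures such as attendance and vandalism.

```python
# Hypothetical before/after indicators for one school; the names and
# values are illustrative only.
baseline = {"attendance_rate": 0.91, "dropout_rate": 0.08, "vandalism_incidents": 24}
followup = {"attendance_rate": 0.94, "dropout_rate": 0.06, "vandalism_incidents": 15}

# Improvement means attendance rises while dropout and vandalism fall.
higher_is_better = {"attendance_rate": True, "dropout_rate": False, "vandalism_incidents": False}

def improved_indicators(baseline, followup, higher_is_better):
    """Return the indicators that moved in the desired direction
    between the baseline and the follow-up measurement."""
    improved = []
    for name in baseline:
        delta = followup[name] - baseline[name]
        if delta != 0 and (delta > 0) == higher_is_better[name]:
            improved.append(name)
    return improved

improved = improved_indicators(baseline, followup, higher_is_better)
```

Declaring the desired direction for each indicator up front keeps the evaluation honest: the team decides what would count as success before looking at the follow-up data.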

Margaret B. Tinzmann, senior program researcher at North Central Regional Educational Laboratory in Oak Brook, Illinois, talks about evaluation to determine the effect of professional development on student learning. Excerpted from an interview with Margaret B. Tinzmann (North Central Regional Educational Laboratory, 1997).

Although professional development programs are designed to affect the participants and in turn the students, they typically have an impact on other stakeholders--administrators, counselors, principals, and other instructional staff. To be thorough, evaluation should be drawn from these sources as well as from the program participants. A multifaceted approach, using different types of information from a variety of sources, should be employed. Both quantitative (measurement-driven) and qualitative (narrative-based) data should be included in the process to provide valuable information. Specifically, the information should include data on participant outcomes, organizational outcomes, and student outcomes.

Evaluation results should be presented in a format that can be understood by all stakeholders in the professional development process (Guskey & Roy, 1995). Clear communication of the findings helps ensure that the results are used to guide school improvement efforts and subsequent professional development activities.

Typically, evaluation of a professional development activity is completed at the end of the activity. This evaluation usually is restricted to the participant's initial reaction to the session content and relevance as well as the effectiveness of the presenter or facilitator. Although assessing these factors has value, the most important factor is the long-term impact of the professional development activity. A follow-up evaluation, completed after teachers have had an opportunity to implement ideas gleaned from the activity, is helpful in assessing changes in instructional practice, changes in professional culture (such as increased collaboration and problem solving), and student improvement. To reinforce changes made by teachers and the school organization, follow-up programs and support activities can be scheduled as part of the evaluation process.

Evaluation of professional development efforts need not be a terribly complicated process requiring the assistance of outside experts. Mullins (1994) notes that teachers, counselors, and other professional staff can provide important information concerning the appropriateness of topics and the effectiveness of staff developers. Action research is an example of a powerful evaluation process that typically can be conducted by the school faculty with little or no outside assistance. Calhoun (1994) suggests a five-phase action research cycle: (1) selecting an area or problem of collective interest, (2) collecting data, (3) organizing data, (4) analyzing and interpreting data, and (5) taking action based on this information. Data is then collected again to determine the effectiveness of these efforts.
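Calhoun's five phases form a repeatable loop: after action is taken, data are collected again and the cycle continues. A minimal Python sketch of that loop is below; the phase functions are placeholders that a faculty team would replace with its own data-collection and analysis procedures, and the sample values in the usage example are invented.

```python
def action_research_cycle(problem, collect, organize, analyze, act, rounds=2):
    """Run repeated rounds of collect -> organize -> analyze -> act
    on a problem of collective interest (phase 1), returning the
    findings from each round."""
    history = []
    for round_number in range(1, rounds + 1):
        raw = collect(problem)          # phase 2: collect data
        organized = organize(raw)       # phase 3: organize data
        findings = analyze(organized)   # phase 4: analyze and interpret data
        act(findings)                   # phase 5: take action based on findings
        history.append((round_number, findings))
    return history

# Usage with invented stand-ins: scores are collected, sorted, averaged,
# and the average is logged as the "action" each round.
log = []
history = action_research_cycle(
    "reading comprehension",            # phase 1: area of collective interest
    collect=lambda problem: [72, 75, 74],
    organize=sorted,
    analyze=lambda scores: sum(scores) / len(scores),
    act=lambda finding: log.append(finding),
    rounds=2,
)
```

The second round of collection is what distinguishes action research from a one-shot evaluation: it shows whether the action taken in the previous round actually moved the data.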

Margaret B. Tinzmann, senior program researcher at North Central Regional Educational Laboratory in Oak Brook, Illinois, talks about action research as a means of evaluation that ultimately leads to school improvement. Excerpted from an interview with Margaret B. Tinzmann (North Central Regional Educational Laboratory, 1997).

GOALS:

ACTION OPTIONS: The evaluation team (consisting of teachers, administrators, school board members, parents, and interested community members) and classroom teachers can take the following actions to evaluate professional development programs:

Evaluation Team:

Teachers:

IMPLEMENTATION PITFALLS: Evaluations of professional development programs often focus on superficial issues (sometimes called the "happiness quotient") rather than on the substantive impact on teacher practice and student learning (Fine, 1995). Although each program should contribute to the broader personal and professional growth of the participants, its main goal should be to increase student achievement.

Planning for the evaluation of professional development programs sometimes is done as an afterthought. It is important to think about and design the evaluation process at the beginning of an initiative rather than to tag it on later. From the beginning, the evaluation plan is critical to determine the evaluation's audience (for example, teachers, school board members, and parents) and the questions that will be of interest to each. The audience and their interests should drive both the approach and the tools used in the evaluation.

Evaluation of a professional development activity often consists of an evaluation form filled out at the end of the activity. Such timing does not allow for corrections to be made or concerns to be addressed during the activity. To ensure that programs are useful and relevant, the participants' feedback and comments should be collected at intervals during the program and used to modify and improve it.

If a qualitative approach to evaluation is employed, the findings may be difficult to communicate clearly. For narrative-based data, clear communication of the findings is essential. A case study approach (such as that described by Lieberman, 1995), which has become increasingly popular, could be used to present the evaluation information and to highlight important findings.

After the evaluation has been completed, the data may be set aside and forgotten. Thorough evaluation planning should consider how the data will be used. The evaluation process should be designed so that the data can be used to inform and adjust, not merely to summarize and be set aside. To ensure that evaluation efforts are put to good use, the evaluation team can make recommendations for the implementation of future professional development programs.

The evaluation team may fail to allocate sufficient resources for the collection and analysis of data. The evaluation process and instruments should be designed to mesh with available resources. Short checklists commonly are used as evaluation tools because they gather data quickly; however, they may yield little useful information. On the other hand, long surveys or interviews may yield minimal results if no resources for analysis are available.

Staff developers commonly ask for feedback immediately following a professional development program, but that response rarely accounts for the long-term impact of the experience. A follow-up evaluation, completed after teachers have had time to understand and implement what they learned in a professional development activity, is more useful in assessing changes in teacher practice and student learning.

Evaluation may seem like a time-consuming process that requires technical skills beyond those possessed by most teachers and administrators. Early in the school improvement process, staff members may feel a strong desire to solve problems by quickly moving to some type of action. The discussion of the evaluation process at this time may seem like an unwelcome and unnecessary intrusion into their work. In addition, evaluation questions related to measurable outcomes and indicators of success may seem difficult to answer, especially if the staff members are not accustomed to thinking in those terms.

DIFFERENT POINTS OF VIEW: Because some people do not see the value of investing more time in professional development (refer to the Critical Issue "Finding Time for Professional Development"), they also may oppose the greater time and financial commitments required to evaluate such efforts.

There is ongoing debate about which evaluation methodology is most useful in assessing the impact of professional growth activities. The quantitative approach traditionally has dominated evaluation efforts, but the qualitative approach recently has gained popularity. Some educators argue that a combination of methodologies is most effective; others assert that the two approaches do not mesh. In actuality, the goals and purposes of the evaluation are more important than the evaluation tools or methodology used. For example, a survey can take a quantitative approach if its purpose is to measure and predict, or a qualitative approach if its purpose is to understand. The types of questions asked and the underlying purpose of the survey help determine the most appropriate approach.

ILLUSTRATIVE CASES:

Effects of Evaluation on the TREK Teacher Training Model

Evaluation of Implementing the NCTM Standards for School Mathematics for the 21st Century

Samuel Mason School, Roxbury, Massachusetts

CONTACTS:

Association for Supervision and Curriculum Development (ASCD)
1250 Pitt St.
Alexandria, VA 22314-1453
(703) 549-9110; fax (703) 299-8631
Contact: Nora Gyuk, Manager of Information Services
E-mail: info@ascd.org
WWW: http://www.ascd.org/

National Center for Research on Teacher Learning (NCRTL)
Michigan State University
College of Education
116 Erickson Hall
East Lansing, MI 48824-1034
(517) 355-9302; fax (517) 432-2795
Contact: Robert E. Floden and G. Williamson McDiarmid, Co-Directors
E-mail: floden@msu.edu
WWW: http://ncrtl.msu.edu

National Center for Restructuring Education, Schools, and Teaching (NCREST)
Center for School Reform
Teachers College, Columbia University
525 W. 120th St., Box 110
New York, NY 10027
(212) 678-3432; fax (212) 678-4170
Contact: Dr. Nancy Lauter
E-mail: n157@columbia.edu
WWW: http://www.tc.columbia.edu/~ncrest/

National Commission on Teaching and America's Future
Teachers College, Columbia University
525 W. 120th Street, Box 110
New York, NY 10027
Contact: Margaret Garigan
E-mail: meg47@columbia.edu
WWW: http://www.glef.org/php/orgs.php?id=ORG_306335

National Staff Development Council (NSDC)
P.O. Box 240
Oxford, OH 45056
(513) 523-6029; fax (513) 523-0638
Contact: Stephanie Hirsh
E-mail: NSDCoffice@aol.com
WWW: http://www.nsdc.org/

National Teacher Policy Institute (NTPI)
IMPACT II
285 W. Broadway
New York, NY 10013
(800) 200-8284; fax (212) 941-1787
Contact: Ellen Meyers, Director
E-mail: teachnet@teachnet.org
WWW: http://www.imsa.edu/center/tree/ntpi.html

References


This Critical Issue was written by Cathy J. Cook, Mathematics Education and Professional Development Specialist, Midwest Consortium for Mathematics and Science Education, North Central Regional Educational Laboratory, and Carole S. Fine, Director of Professional Development, North Central Regional Educational Laboratory.

Development and production of this Critical Issue were supported in part by the Midwest Consortium for Mathematics and Science Education.

Date posted: 1997

info@ncrel.org
Copyright © North Central Regional Educational Laboratory. All rights reserved.
Disclaimer and copyright information.