
Why Should Assessment Be Based on a Vision of Learning?

M. Kulieke, J. Bakker, C. Collins, T. Fennimore, C. Fine, J. Herman, B.F. Jones, L. Raack, M.B. Tinzmann
NCREL, Oak Brook, 1990


New Assessment for New Learning

Guidebook 1 explores a new definition of learning which is based on cognitive, philosophical, and multicultural research perspectives. These perspectives suggest that meaningful learning occurs when a learner has a knowledge base that can be used with fluency to make sense of the world, solve problems, and make decisions. Learners need to be self-determined, feel capable, and continually strive to acquire and use the tools they have to learn. They need to be strategic learners who have a repertoire of effective strategies for their own learning. Finally, they need to be empathetic learners who view themselves and the world from perspectives other than their own.

As learning in schools is redefined, both the curriculum and the classroom environment need to be aligned. Guidebook 2 discusses the thinking curriculum, which has a dual agenda of content and process for all students. This curriculum includes in-depth learning; involves students in real-world, relevant tasks; engages students in holistic tasks; and utilizes students' prior knowledge. Guidebook 3 in this series extends the new thinking curriculum into the classroom and suggests that the redefinition of learning requires a collaborative classroom, notable for the changed roles of students and teachers. The collaborative classroom is characterized by shared knowledge among teachers and students, shared authority among teachers and students, teachers as mediators, and heterogeneous groupings of students.

These changes in the definition of learning, the curriculum, and the classroom context lead to Guidebook 4, "Multidimensional Assessment." If the shape of the educational experience for students is indeed being changed, the ways previously used to evaluate successful student learning need to undergo a shift as well. This guidebook presents the evolution currently underway in the concept of student assessment and the implementation of this new vision of assessment in various educational institutions throughout the United States. Four shifts in learning and assessment are discussed: from testing to multiple assessments, from cognitive abilities alone to a range of abilities and talents, from assessment of one or two dimensions to multidimensional assessment, and from testing as an isolated event to assessment as an integral part of instruction.

What Are the Major Changes in Testing and Assessment?

From Testing to Multiple Assessment

In the traditional schooling experience, student learning was most often measured only by testing: specific questions that (1) are tangible and structured and can be administered within a limited time period and (2) usually tap a limited range of cognitive knowledge and skills. Now, with the expanded concept of learning, it is increasingly important to remember that paper-and-pencil testing is only one way to collect information about student learning. The broader concept of assessment is more appropriate. Assessment includes paper-and-pencil testing but may also include other procedures such as rating items on scales, observing student performances, critiquing student products, conducting interviews, and reviewing a student's background or previous performance.

The concept of assessment broadens the kind of information that is collected about students and the way that this information is used in the evaluation of student learning. Assessment needs to tap and build upon the strengths that learners, in all their diversity, bring to the learning situation.

New Assessment Strategies

Grant Wiggins, president of Grant Wiggins & Associates, and others have been developing new assessment strategies such as the portfolio and day-to-day assessment. A portfolio consists of student exhibitions that demonstrate mastery of the skills of inquiry and expression. Credit is given on the basis of what students can actually do in "showing off" their knowledge and know-how. The exhibitions require reading, writing, questioning, speaking, and listening. Exhibitions are the culmination of a series of performances in an area, usually a graduation-level exercise or final class project.

The staff at the Coalition of Essential Schools believes that required periodic public exhibitions, focused on the student's ability to use and display knowledge intelligently, would better ensure that essential skills are taught and learned. Exhibitions can yield an authentic measure of students' abilities to engage in inquiry and skillful expression, and they can motivate and engage students by involving them in a public challenge, such as a project in a science fair or an in-class presentation.

Assessment performances are day-to-day activities that can also be authentic and engaging demonstrations of students' abilities to grapple with the central challenges of a discipline in real-life contexts. Ideally, these performances become an integral part of the instructional cycle, and feedback provided by the teacher and peers is meant to be formative; that is, it is intended to help the student assess his or her strengths and weaknesses, identifying areas of needed growth and mobilizing current capacity. Performances are provocations for what needs to be learned and extensions of what has been learned; they can help push the student to the next level of skill in performance. Performances become tools for reflection on learning accomplished and learning deferred.

School-Based Examples

Many schools are designing and using innovative assessment strategies. Some of these techniques are called authentic assessment, performance-based assessment, portfolio assessment, process assessment, exhibits, demonstrations, and profiles. Regardless of the label, each of these techniques moves beyond measuring student learning with multiple-choice and other simple tests used as single measures at one point in time.

English: At South Medford High School in Medford, Oregon, seniors must complete a three-part Senior Project to graduate. Students first choose a topic of interest to them, conduct research, and write a paper. Then they use the information in the papers to create real-life projects. While these projects are to satisfy requirements for senior English, the rich variety of topics chosen makes these efforts interdisciplinary. One aspiring singer wrote and performed a song which she had learned to orchestrate. Another student wrote about Big Brother and Big Sister programs and recruited students to work with children from broken homes. The third phase of the project is a formal presentation before a panel of faculty and community members, some of whom are experts in the topic. Following the formal presentation, judges ask each senior several questions to evaluate impromptu speaking skills, knowledge level, and poise.

Writing: Oregon's Lacomb Elementary School implemented a new writing program after staff enrolled in for-credit courses on teaching writing as a process and integrating writing across content areas. Staff training also addressed cooperative learning, whole language learning, and assessment. Staff developed holistic scoring procedures to assess writing and document improvement over time. Assessment is ongoing in classrooms; students use feedback from classmates and teachers to assess and improve performance. Conferences with partners and teachers, before and during writing, help students select topics and polish skills. Audiences are the class and the larger community: a Young Authors' Fair gave students opportunities to share their writing, illustrating, reading, and storytelling skills with parents and community members. In turn, local residents serve as role models by sharing their stories and poems with youngsters in school.

Social Studies: In his article, "Teaching to the (Authentic) Test," Grant Wiggins examines high school assessment strategies of two coalition schools. At Hope High School, Providence, Rhode Island, ninth grade students were asked to complete an oral history project based on interviews and written sources, and to present their findings in class. They selected their subjects from topics such as their family, running a small business, substance abuse, teenage parents, or recent immigrants. Students were then asked to create three hypotheses based on their preliminary investigations and four questions they would ask to test each hypothesis. Students were evaluated on such criteria as whether they had investigated three hypotheses, described at least one change over time, selected four appropriate sources for the interviews, asked valid questions, noted important differences between "fact" and "opinion" in answers, used evidence to prove the ultimate best hypothesis, and effectively organized their writing and presentation to the class.

A four-part final for a twelfth grade Humanities course at Thayer High School, Winchester, New Hampshire, asked students to choose a topic and construct their own final test, subject to approval by the project supervisor; submit a written report conforming to stated grammar requirements; make a multi-media, 30-minute oral presentation; and serve on four teams evaluating other students' presentations, playing a different role on each team, either as a journalist who summarizes important details or as a coach who suggests improvements for the presentation.

Science: California teacher Cathy Klinesteker uses the following closure activities for authentic assessment at Evergreen School in Cottonwood: (1) At the end of a unit, students write a paper for another class of students (younger, older, or the same age) explaining the concept. Example: Sixth graders write a book for fourth graders explaining the life cycle of a star. (2) Cooperative groups of students do an artistic representation (poster, model, videotape, slide show, etc.) of a concept. Example: For a communication unit, students created a poster demonstrating their understanding of elements of communication, including nonverbal communication, questioning, paraphrasing, and empathy. The representation must show the interrelatedness and importance of each component. (3) Using equipment or a drawing (depending on the developmental stage of the students), students demonstrate understanding of a scientific principle such as an open electrical circuit; stream cutting during the flood stage of a river; or a food web in the ocean, a pond, a desert, a rain forest, etc. (4) With partners, students prepare a debate demonstrating understanding of two sides of a controversial issue.

Math: At Mendocino Middle School in Mendocino, California, math and science teacher Cory Wisnia's assessment strategies evaluate both students' knowledge of a specific concept or subject area and the life skills they need for the future. One of the ways Wisnia teaches real-life skills in math and science classes is through projects, which also can serve as final measures of learning. He teaches a unit or concept and then assigns projects that demonstrate how well students understand the concept. For example, to assess area and perimeter relationships in math, Wisnia asked the class to use a particular constant, "say 1250 square feet," and design a scale model of a dream home, using graph paper for the floors. These strategies help Wisnia judge how much learning each student has retained. He found that "while I thought my students fully understood area relationships before we started on the project, in fact they really learned much more as they went along, trying to find answers to specific questions such as how many square feet should a bathroom be?"
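
As a rough illustration of the kind of checking such a project invites (a Python sketch built on invented room names and dimensions, not Wisnia's actual assignment materials), the snippet below totals the floor area of a set of rectangular rooms and compares the result with the 1,250-square-foot constant:

    # Illustrative sketch only: check a floor plan against a fixed area budget.
    # The rooms and dimensions below are invented for the example.
    rooms = {                     # width x length, in feet
        "living room": (20, 18),
        "kitchen": (12, 14),
        "bedroom 1": (12, 12),
        "bedroom 2": (11, 12),
        "bathroom": (8, 10),
    }

    budget = 1250  # square feet, the constant given to the class

    total = sum(w * l for w, l in rooms.values())
    print(f"total area: {total} sq ft ({budget - total} sq ft of the budget left)")

    for name, (w, l) in rooms.items():
        print(f"{name}: area {w * l} sq ft, perimeter {2 * (w + l)} ft")

A student working through a plan like this quickly runs into exactly the questions Wisnia describes, such as how large a bathroom reasonably needs to be.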

At Hazen Union School in Hardwick, Vermont, math teacher Jean Hackett stresses performance assessment. Her students work in groups on these projects. Hackett encourages each group to try different group approaches, such as brainstorming or "jigsawing" (each student is responsible for a "piece" of the puzzle) to solve a problem. For example, instead of having students multiply two numbers using a simple computation approach, she challenges them with a problem they can solve only by collecting data and using multiplication. When they are finished, each group has a product from which Hackett is able to assess both the math content her students know and their strategies.

From Cognitive to Multiple Abilities

Historically, a very narrow concept of human intelligence has driven views of human learning. This concept came under close scrutiny in the late 1970s, when there were signs of reawakened interest in theories and research on intelligence. Robert Sternberg, a psychologist at Yale University, was perhaps the most important catalyst for this re-examination, although researchers from many different areas of psychology have joined in the rediscovery of an enhanced concept of intelligence.

Defining Multiple Abilities and Skills

Over the past two decades, Howard Gardner, a cognitive psychologist from Harvard University, has been developing a theory of multiple abilities, talents, and skills. Gardner argues that traditional schooling emphasizes only two abilities: verbal-linguistic (especially in written form) and logical-mathematical. Yet there are many other kinds of knowledge and talents that enrich our lives and help us respond effectively to our environment. He lists the following, while emphasizing that there are many others as well.

* Visual-spatial: Capacity to perceive the visual-spatial world accurately and to modify or manipulate one's initial perceptions

* Bodily-kinesthetic: Abilities to control one's body movements and to handle objects skillfully

* Musical-rhythmical: Abilities to produce and appreciate rhythm, pitch, and timbre, and appreciation of the forms of musical expressiveness

* Interpersonal: Capacities to discern and respond appropriately to the moods, temperaments, motivations, and desires of other people

* Intrapersonal: Knowledge of one's own feelings, strengths, weaknesses, and desires, and the ability to draw upon this knowledge to guide behavior

* Logical-mathematical: Abilities to discern logical or numerical patterns and to handle long chains of reasoning

* Verbal-linguistic: Sensitivity to the sounds, rhythms, and meanings of words; sensitivity to the different functions of language

Expanding Views of Abilities and Skills

Another individual who has been working to expand the way that thinking and learning are viewed is Protase "Woody" Woodford, Head of the Language Group, Testing Development, Educational Testing Service (ETS). He and his colleagues at ETS have been working in foreign languages to develop an expanded concept of the knowledge and skills necessary to be a successful learner of a foreign language.

Woodford believes that language proficiency can be assessed successfully only when students are evaluated on their speaking ability. Since no suitable oral tests were available, ETS developed a scale for assessing language proficiency based on a test used by the U.S. Defense Language Institute. This scale measures proficiency at five levels. At the lowest level, Zero, students are unable to respond, even to a simple greeting such as "Hello." At the Novice level, students' skills range from "low" (recycling memorized material learned in class) to "high" (conversing in very simple language). The third level on ETS' scale is Intermediate; at this level, students no longer rely on memorized information but can carry on narrative conversations, though only in the present tense. The more advanced intermediate students are able to speak in either the future or the past tense, but not both. Advanced students can deal with descriptions and concepts not only in the present tense but also in the future and past, and can do so regularly. The last level is Superior; these students have a full range of competency. They can deal with abstraction, negotiate ideas, and use conditional "if" clauses.

Gardner and Woodford, as well as many others, continue to challenge views of intelligence and learning that have traditionally directed the schooling experience. As these views evolve, it becomes necessary to examine the impact they have on assessing and evaluating student learning.

Toward Multidimensional Assessment

Gardner has suggested that traditional schools have emphasized the assessment of logical-mathematical and verbal-linguistic abilities, leaving other abilities out of the assessment process. Given the axiom that you "inspect what you expect," the message that continues to be sent to students is that only certain dimensions of learning are important. As different abilities and skills become increasingly valued in schools, new visions of assessment increasingly include assessment of these various abilities and skills. Moving to a concept of "multidimensional assessment" means that evaluation of students will be based on a broader concept of intelligence, ability, and learning. Not only will logical and verbal abilities continue to be assessed, but assessment will also include visual, auditory, kinesthetic, intrapersonal, and interpersonal abilities. This means assessing students' repertoire of learning strategies, their skills in communicating with others, and their knowledge as it is applied to day-to-day and culturally diverse contexts.

The new vision of learning and evaluation of student learning reflected in the term multidimensional assessment is broad-based, relevant to real life, process oriented, and based on multiple measures which provide a rich portrayal of student learning.

Figure 1: Type of Assessment

Figure 1 portrays some of the aspects of multidimensional assessment. The first continuum shows movement from decontextualized, atomistic tests (short answer, fill-in-the-blank, multiple choice, true/false, etc.) to authentic, contextualized tasks such as performances and/or products. The second continuum shows movement from a single measure of student learning to multiple measures. The third continuum depicts movement from simple to complex dimensions of learning, while the fourth depicts movement from assessment of few dimensions of intelligence to assessment of many dimensions. Multidimensional assessment taps the power and diversity of active learning, creates multiple sources of information to support instructional decision making, and helps students become more reflective and capable learners.

A group of determined and dynamic teachers at Key School in Indianapolis embraced Gardner's theories and worked with Gardner and his colleagues to develop a curriculum based on his ideas. Key School emphasizes the use of all kinds of abilities by students. In addition to instruction in mathematics and language arts, students also play the violin, speak Spanish, move their bodies to music, enter data into a computer, and play board games that emphasize spatial skills. The curriculum is integrated through the use of schoolwide themes that span all grades and all subjects. Themes are studied in depth for nine weeks.

In implementing the new curriculum, the staff found that standardized tests were usually not useful in measuring the broad range of abilities fostered in the curriculum. Most tests measure only a narrow slice of children's linguistic and mathematical abilities. And even in these areas, the tests fail to reflect students' ability to think critically and creatively, their motivation to learn, or their capacity to engage in self-assessment.

Due to the limitations of such tests, teachers at Key School had to develop assessment strategies that would yield richer, more qualitative information about student achievement and instructional effectiveness. Strategies developed include the following:

* Videotaped portfolios document the learner's interests and accomplishments and assist teachers in evaluating each student's progress from year to year. The portfolios include tapes of an initial interview with the student and excerpts of his or her work throughout the year. Parents receive the videotape at the end of the year; it serves as a rich portrayal of their children's growth and an important supplement to information from report cards and results of standardized tests.

* Each student keeps a journal with weekly entries about the school's themes and his or her projects.

* Students spend a part of each week in a media-rich room where they can choose from a range of board games, puzzles, audio tapes, and other materials requiring the use of one or more of the seven abilities. A teacher observes and records which activities children select and how they tackle problems. That information will be combined with other evidence about the learner's interests and strengths to form a profile for each student.

* Students also carry out an original project each nine-week grading period on a theme that encourages them to think across subject lines. One theme was "connections," which explored how people are connected to their environment. As a project in this theme area, one student constructed a diorama showing ways in which turtles interact with people. Students are helped to assess their own projects by considering how well they illustrate the theme. These projects become part of the student's portfolio, documenting for both the teacher and the student his or her growth over time.

Isolated Versus Integrated Assessment

Teaching to the Test: Accompanying the move from a single test as a measure of student learning to multidimensional assessments has been a move to integrate assessment into the classroom instructional process. Viewing a test only as an "event" signaling the completion of instruction is no longer appropriate for the new vision of learning as a process in which students orchestrate learning strategies in a dynamic flow as they move in and out of different tasks and phases of learning. Assessment, too, needs to be treated as an ongoing, dynamic process.

Figure 2 shows assessment as part of a process that enables students to become successful learners. Assessment, in this schematic, becomes the feedback that enables students to be strategic in their own learning process and enables teachers to adapt the instructional process to meet the needs of their students. Assessment helps teachers communicate expectations and standards of learning and performance to students. Assessment helps students gain information about what is valued, set personal academic expectations, internalize the required knowledge and skills, promote their self-knowledge about performance, understand who is in control of learning, and improve their learning.

Figure 2: Assessment and Instruction

The direct linkage between expectations, instruction, and assessment is evident in Figure 2. The expectations for learning will drive both the instructional and the assessment process. If it is important that expectations and assessment be linked, then it is also appropriate to say that a teacher should be teaching to the assessment, provided that the assessment is authentic. Without a clear relationship between the two, neither students nor teachers can use assessment information to its greatest potential in promoting learning.

Examples of dynamic and ongoing assessments that enable teachers to modify instruction as needed are the techniques developed by Campione and Brown and by Weinstein and her colleagues. While greater attention to these strategies will be given in Guidebook 9, it is helpful to mention them here.

Dynamic Assessment: The concept of dynamic assessment is a natural extension of the idea of integrating assessment and instruction. Based on pioneering work by Jensen and Feuerstein, Campione and Brown are developing dynamic assessment techniques for daily use in the classroom. Teachers employing these techniques present students with increasingly explicit cues and prompts for performing a task. Teachers may limit support to hints about the general approach to a particular problem, or they may need to provide a specific blueprint for solving it. The number of hints required for the student to solve each problem serves as a measure of learning efficiency: the fewer the cues given, the higher the learning efficiency; the more cues given, the lower it is. The assessment of learning efficiency focuses on how much help students need to reach their learning potential in a particular domain, rather than on a static measure of what has already been acquired.
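
To make the scoring idea concrete, here is a minimal Python sketch (an illustration only, not Campione and Brown's published procedure) that converts a record of hint counts into a rough learning-efficiency index in which fewer hints yield a higher score:

    # Illustrative sketch: a 0-to-1 learning-efficiency index based on hint
    # counts, assuming a fixed maximum number of hints per problem.
    def learning_efficiency(hints_per_problem, max_hints=5):
        """Return 1.0 if no hints were needed, 0.0 if the maximum number
        of hints was needed on every problem."""
        if not hints_per_problem:
            raise ValueError("need at least one problem")
        used = sum(min(h, max_hints) for h in hints_per_problem)
        possible = max_hints * len(hints_per_problem)
        return 1.0 - used / possible

    # Example: a student who needed 1, 0, and 3 hints on three problems.
    print(learning_efficiency([1, 0, 3]))  # about 0.73

In practice such an index would be interpreted within a particular domain and alongside what the student finally accomplished, in keeping with the emphasis on learning potential rather than static achievement.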

Assessment of Learning and Study Strategies: Weinstein, Schulte, and Palmer have constructed an instrument, the Learning and Study Strategies Inventory (LASSI), that measures the use of strategies among secondary and postsecondary students. It identifies students' weaknesses and provides data on the effectiveness of training programs for students who have poor learning and study habits. Students' attitude, motivation, time management, anxiety, concentration, information processing, use of study aids, self-testing, and test strategies are measured. The instrument focuses on modifiable learning behaviors. For example, in the area of motivation, students are assessed on their diligence, self-discipline, and willingness to work hard. LASSI enables educators to help students become self-directed and self-managed learners.
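
As a purely hypothetical illustration of how self-report items might be rolled up into subscale scores, the Python sketch below uses invented items, groupings, and a simple average; it is not the actual LASSI item set or its published scoring procedure:

    # Hypothetical illustration of subscale scoring for a self-report
    # strategy inventory; the items and groupings are invented.
    from statistics import mean

    responses = {"m1": 4, "m2": 5, "t1": 2, "t2": 3, "c1": 4}  # 1-5 ratings

    subscales = {
        "motivation": ["m1", "m2"],
        "time_management": ["t1", "t2"],
        "concentration": ["c1"],
    }

    scores = {name: mean(responses[item] for item in items)
              for name, items in subscales.items()}
    print(scores)  # {'motivation': 4.5, 'time_management': 2.5, 'concentration': 4}

A profile of this kind is what allows an advisor to point a student toward, say, time-management support rather than a generic study-skills course.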

The new vision for learning requires an educational environment where instruction and assessment are integrated and where assessment is not considered to be a single event. The concept of dynamic assessment as an active process in which students and teachers participate provides a model for new assessment techniques which are aligned with the new vision for teaching and learning.

How Do You Implement Multidimensional Assessment?

In one sense, much of this essay has been devoted to providing rich examples of multidimensional assessment. Other issues must also be addressed. Schools need to examine their learning and curricular goals and then select tasks that validly assess these goals. Indeed, part of the problem with more traditional assessment tasks has been their misinterpretation and misuse, not only by some schools but also by the public. These dangers are not eliminated simply by adopting new forms of assessment, but by soul-searching consideration of the whole range of issues surrounding assessment.

Some states have taken leadership in developing criteria for such assessment, and others are following their lead. Teachers and other school staff can assume local leadership roles in setting and applying criteria for assessment procedures that are reliable and valid. For example, portfolios and exhibitions will not of themselves ensure that the problem of student assessment has been solved. Teachers need to understand issues of reliability and validity as they relate to these new tasks. What makes a portfolio "good"? Will one teacher view a student's portfolio as excellent and another view it as mediocre? What guidelines ought teachers to provide to students for judging their own work, thus encouraging valid self-assessment?

The problem inherent in these questions is that teachers may use their own idiosyncratic criteria for evaluating portfolio performances and other assessments. All members of a school staff need to develop common sets of criteria for judging students' work. Such efforts are more likely to result in valid and reliable assessment. And by participating in the process of setting criteria, teachers will be more likely to understand and adhere to these criteria.

Purposes of Testing

The call for educational accountability throughout the United States has had a great impact on assessment practices in the public schools. Historically, testing was primarily the domain of the classroom teacher. Today, federal program requirements, state and district accountability concerns, testing for minimum competency, testing for special diplomas, testing for admissions, and national and international assessments have made required assessments prevalent in the schools. As this movement continues, it becomes increasingly important that educators design an assessment program with a clear purpose for each assessment and a clear understanding of the use that will be made of the assessment data.

Generally, the overall purpose of student assessment is to provide valid information for decision making. Since educators, students, parents, the public, and governmental agencies make many different types of decisions, assessments need to be designed for a variety of specific purposes. The purpose of each assessment should be directly related to decisions about students. Those decisions, in turn, are related to decisions about educational programs in schools, districts, states, and nations. Some decisions about students made from assessment information are:

* Diagnosis: Monitoring students' strengths, weaknesses, and progress in specific areas

* Placement: Matching students to appropriate levels of instruction

* Guidance and Counseling: Helping students make appropriate educational and vocational decisions

* Admissions: Choosing students to be admitted into various programs

* Certification: Determining mastery of specified criteria

Some decisions made about educational programs are:

* Evaluation: Judging the value and effectiveness of instructional programs and delivery systems

* Accountability: Reporting results to specific publics

* Research/Planning: Identifying educational areas requiring further study

Criteria for an Assessment Program

Although there are many purposes for assessment data and many different decision makers who use this information, an important consideration for schools and districts continues to be the alignment among expectations, instruction, and assessment. Schools continue to be bombarded with expectations placed on them by society through state, district, legal, and other requirements; by available tests and other instructional materials; and by professional standards. With these increased expectations, the issue of alignment becomes increasingly difficult, but it remains essential to understanding the meaning of assessment information.

According to Joan Herman of the UCLA Center for Research on Evaluation, Standards, and Student Testing (CRESST), educators will be increasingly challenged (1) to articulate the significant outcomes which constitute their new vision of multidimensional learning; (2) to design and implement effective strategies to help students achieve these outcomes; (3) to assess the extent to which these outcomes are attained in fair, valid, and appropriate ways; and (4) to use the results of these new assessments to improve student learning. The match between significant learning goals, instructional programs, and assessment tools is a key element in implementing such an approach; the match, in fact, undergirds the validity of the assessment itself. At present, however, the match is problematic: traditional assessments simply do not adequately reflect the new vision, and their utility is thus limited. New measures will have to be created.

Focusing more directly on criteria for the assessments themselves, Sheila Valencia states the following: (1) Sound assessment is anchored in authenticity: authenticity of tasks, texts, and contexts. This means assessing the orchestration, integration, and application of skills in meaningful contexts. (2) Assessment must be continuous. This means that drafts, plans, and sketches become as important a part of the total picture as final products. (3) Assessments must measure or sample a wide range of cognitive processes and/or abilities. (4) Assessments must provide for active, collaborative reflection by both teacher and students. This means that students must also be active participants in designing assessment tasks and be given increasing responsibility for using assessment data to monitor and improve their own learning.

Some additional criteria for developing assessments should be considered. First, assessments should be fair to different cultural and family backgrounds as well as actively access and mobilize the unique resources bestowed upon learners because of their backgrounds. Second, assessments should have a variety of real audiences, beyond a solitary teacher, including peers, parents, and community members.

A Call for a New Vision of Assessment in Education

Dissatisfaction with the limitations of testing and continued faith in the value of creative, multifaceted assessment have given rise to proposals for new assessment alternatives at all levels of education, from the National Assessment of Educational Progress to those charged with responsibilities for state and district assessments to those interested in better classroom assessments. The underlying ideas of all these proposals share some common threads: these alternative assessments ask students to perform, create, produce, or do something that requires them to use higher-level, problem-solving skills; the assessment tasks themselves represent meaningful instructional activities; and the tasks are relevant to real life or are common to a particular discipline. Connecticut and California are among the states dealing with large-scale assessment in innovative ways.

California

California's Department of Education believes that a major purpose of assessment is to evaluate the quality of learning. Recognizing that the multiple-choice tests traditionally used in large-scale assessment measure only isolated skills and facts, the California Assessment Program (CAP) has set a goal of measuring students' ability to apply what they have learned to real-life situations. CAP will align assessment with California's new curriculum mandates: that curricula be literature-based, value-laden, culturally rich, and integrated across content areas. Assessment will reflect new outcomes: that students gain the knowledge and skills necessary to analyze, organize, interpret, evaluate, and communicate life experiences.

New types of assessment being introduced include open-ended questions, essay, portfolio, oral, and integrated performance assessments. The new program also calls for assessment in more content areas at more grade levels.

In the mathematics area, open-ended questions allow students to solve problems creatively and construct answers that demonstrate the depth of their understanding. When answers are written, students are able to show others how to solve problems, and teachers gain important insights into how much a student knows. An example at the twelfth grade level is:

James knows that half of the students from his school are accepted at the public university nearby. Also, half are accepted at a local private college. He thinks that this adds up to 100 percent, so he will surely be accepted at one or the other institution. Explain why James may be wrong. If possible, use a diagram in your explanation.
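
One way an evaluator might frame the expected reasoning (a sketch of a possible answer, not CAP's scoring guide) is the inclusion-exclusion rule, written here in LaTeX:

    P(\text{public} \cup \text{private})
        = P(\text{public}) + P(\text{private}) - P(\text{public} \cap \text{private})
        = 0.5 + 0.5 - P(\text{public} \cap \text{private})

Because some students are accepted at both institutions, the overlap term can be greater than zero, so the chance of being accepted at one or the other lies anywhere between 50 and 100 percent; James cannot conclude that he will surely be accepted.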

A team of teachers is developing an integrated language arts assessment tool. Assessment of students' work occurs as they are engaged in typical classroom activities - reading, talking, writing, and thinking about literature and human experience. Assessment tasks challenge students to discover what is important, meaningful, and valuable in a text and to write responses that give evaluators insight into the processes of student thinking and construction of meaning. The goal is to develop students' capacities for flexible, insightful, productive thinking. For example, eighth grade students are asked to write an evaluative essay about the worth of a book or type of music and then support their judgments. Students set criteria, analyze the subject, and select evidence to support their judgments.

In science, students will demonstrate knowledge of scientific concepts and processes, problem-solving ability, and performance of scientific investigations. For example, given a handful of materials, sixth grade students are asked to design and perform an experiment to determine which of two magnets is stronger. Open-ended questions are also used; they engage students in creating hypotheses, designing investigations, and writing about social and ethical issues in science.

Assessments of student achievement in history and social sciences will allow students to demonstrate breadth of learning as well as the ability to clarify issues, recognize relationships, determine causes and effects, interpret evidence, and argue for a position. Test instruments will assess a deep knowledge of eras and events rather than isolated minutiae. Testing will provide feedback on a new, enriched curriculum which immerses students in a historical context, involving them in the literature, historical documents, and lives of the people of an era. Assessment activities include debating, dramatizing, and defending a point of view orally or in writing. Finding out how well students can think about history will be as important as finding out what they know.

Responses from both teachers and students to the field-testing programs have been very positive. Students actually enjoy taking the tests, and teachers are eager to receive training.

Connecticut

The state of Connecticut is also a leader in the use of multidimensional assessment. Its work is driven by a carefully articulated vision of learning, Connecticut's Common Core of Learning (CCL).

The Common Core, adopted in 1987, sets forth what the state believes should be the outcomes of education in its public schools. It details what graduates should know and be able to do as a result of the entire K-12 school experience. The Core is meant to provide integrated and interdependent learning outcomes in three major areas: Attributes and Attitudes, Skills and Competencies, and Understanding and Applications. The state emphasizes that CCL is not meant to define minimum competencies, but rather a set of standards for an educated citizen. Although the Core is not mandated, it is meant to give local districts a clear direction as they develop curriculum.

In conjunction with the Common Core of Learning, the Connecticut State Department of Education initiated a multi-year assessment project to measure the extent to which their students are achieving the outcomes described in the Core. They are working to develop a new model which includes exhibitions, hands-on performance experiences, student portfolios and other tasks that require students to demonstrate knowledge-in-use.

Illustrative of the CCL performance tasks is the Food Market Comparison described below.

Many local food markets claim to have the lowest prices. But what does this really mean? Does it mean that every item in their store is priced lower, or just some of them? How can you really tell which supermarket will save you the most money? Your assignment is to design and carry out a study to answer this question.

The students are directed to work in small research groups to design, undertake, and evaluate a study. They are graded on their group work and their individual reports on the project. Each student is also required to keep a log documenting the process.
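
A minimal Python sketch of one way a research group might tabulate its findings appears below; the stores, items, and prices are invented for illustration, and the actual task leaves the study design entirely to the students:

    # Illustrative sketch: compare the cost of one common basket of items
    # across stores. Store names, items, and prices are invented.
    basket = ["milk", "bread", "eggs", "apples"]

    prices = {                      # dollars per item, by store
        "Store A": {"milk": 2.49, "bread": 1.89, "eggs": 1.59, "apples": 3.10},
        "Store B": {"milk": 2.29, "bread": 2.05, "eggs": 1.75, "apples": 2.95},
    }

    totals = {store: sum(p[item] for item in basket) for store, p in prices.items()}
    for store, total in sorted(totals.items(), key=lambda kv: kv[1]):
        print(f"{store}: ${total:.2f}")
    print("Lowest basket total:", min(totals, key=totals.get))

Even a small exercise like this forces the design questions the task is after: which items belong in the basket, how to handle sales and package sizes, and whether one basket can fairly represent a whole store.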

Connecticut also provides sustained training and support to help teachers understand, design, and evaluate performance tasks.

The segments of this program reflect a continuing belief in the importance of assessment (for accountability, for program and curriculum improvement, and for feedback to teachers, parents, and students) but strongly call for new perspectives on assessment and alternative measurement approaches. Hard thinking about what students should accomplish, how they should develop, and what specific components they need to acquire provides the basic building blocks for constructing better visions of assessment for the future.

References

Arter, J. (1989, November). Assessing communications competencies and speaking and listening: A consumer's guide. Portland, OR: Northwest Regional Educational Laboratory.

Arter, J., & Salmon, J.R. (1987, April). Assessing higher-order thinking skills: A consumer's guide. Portland, OR: Northwest Regional Educational Laboratory.

Arter, J., Stiggins, R., & Spandel, V. (1990). Classroom assessment of writing, speaking, and listening (Vol. I: How To). Unpublished manuscript prepared for the Utah State Board of Education, NWREL, Evaluation and Assessment Program, Portland, OR.

Baker, E.L. Can we fairly measure the quality of education? (CSE Tech. Rep. No. 290). Los Angeles, CA: Center for the Study of Evaluation.

Berglund, R.L. (Dec. 1989-Jan. 1990). Convention sessions address whole language evaluation. Reading Today, p. 34.

Brooks, G. (1987). Speaking and listening: Assessment at age 15. Assessment of Performance Unit (APU), Department of Education and Science of Great Britain, London, England.

Brown, A., & Campione, J. (1986). Academic intelligence and learning potential. In R.J. Sternberg & D. Detterman (Eds.), What is intelligence? (pp. 39-49).  Hillsdale, NJ: Lawrence Erlbaum Associates.

Brown, J.S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42.

California State Department of Education. (1986). Handbook for planning an effective writing program (3rd ed.). Sacramento: Author.

Campione, J.C., & Brown, A.L. (in press). Guided learning and transfer: Implications for approaches to assessment. In N.F. Frederikson, R. Glaser, A. Lesgold, & M. Shafto (Eds.), Diagnostic monitoring of skill and knowledge acquisition. Hillsdale, NJ: Lawrence Erlbaum Associates.

Case, R., & Sandieson, R. (no date). A developmental approach to the identification and teaching of central conceptual structures in middle school mathematics and science. Unpublished manuscript, Ontario Institute for Studies in Education, Center for Applied Cognitive Science, Toronto.

Case, R., Sandieson, R., & Dennis S. (1986). Two cognitive-developmental approaches to the design of remedial instruction. Cognitive Development, 1, 293-333.

Collins, C. (1979). Criteria and referenced pre-reading test as indices of first grade reading achievement. Reading Improvement, 16(3), 182-189.

Collins, C. (in press). Extending thinking through the language arts. Englewood Cliffs, NJ: Prentice Hall.

Collins, C., Gardner, E., Madden, R., Rudman, H., Karlsen, B., Marvin, J., & Callis, R. (1984). Handbook of instructional strategies. New York: Psychological Corporation.

Collins, C., & Mangieri, J. (in press). Building the quality of thinking in and out of schools in the 21st century.  Hillsdale, NJ: Lawrence Erlbaum Associates.

Dorr-Bremme, D.W., & Herman, J.L. (1986). Assessing student achievement: A profile of classroom practices. Center for the Study of Evaluation, University of California, Los Angeles.

Educational Testing Service. (1987). Learning by doing: A manual for teaching and assessing higher order thinking in science and mathematics (Report No. 17-HOS-DO). Princeton, NJ: Author.

Figueroa, R.A. (1989). Psychological testing of linguistic-minority students: Knowledge gaps and regulations. Exceptional Children, 56(2), 145-152.

Figueroa, R.A., & Amato, C. (1989). Issues in special education. Santa Barbara: University of California Linguistic Minority Research Project. No. 1 in the Research and Policy Series.

Fox, B.J. (1990). Teaching reading in the 1990's: The strengthened focus on accountability. Journal of Reading, 33, 336-339.

Frederikson, J.R., & Collins, A. (1989). A systems approach to educational testing. Educational Researcher, 18(9), 27-33.

Gardner, H. (1987). Beyond the IQ: Education and human development. Harvard Educational Review, 57(2), 187-193.

Gardner, H. (1989). Zero-based arts education: An introduction to Arts PROPEL. Studies in Art Education, 30(2), 71-83.

Gardner, H. (in press). Assessment in context: The alternative to standardized testing. In B. Gifford (Ed.), Report to the commission on testing and public policy.

Gardner, H., & Hatch, T. (1989). Multiple intelligences go to school. Educational Researcher, 18(8), 4-10.

Hatch, T., & Gardner, H. (1986). From testing intelligence to assessing competences: A pluralistic view of intellect. Roeper Review, 8, 147-150.

Herman, J. (1990). Accountability applies not only to schools, but to testing, too. R&D Review, 4(3).

Herman, J. (in press). Research in cognition and learning: Implications for achievement testing practice. In M. Wittrock & E. Baker (Eds.), Testing and cognition. Englewood Cliffs: Prentice-Hall.

Jensen, M.R., & Feuerstein, R. (1987). The learning potential assessment device: From philosophy to practice. In C.S. Lidz (Ed.), Dynamic assessment: An interactional approach to evaluating learning potential (pp. 379-402). New York: Guilford Publications, Inc.

Kerins, T. (1988). The new Illinois test in reading. Teaching Thinking and Problem Solving, 10(5), 1-4.

Kneedler, P.E. (1985). Assessment of critical thinking skills in history-social science. Sacramento: California State Department of Education.

Krechevsky, M., & Gardner, H. (in press). The emergence and nurturance of multiple intelligences. In M.J.A. Howe (Ed.), Encouraging the development of exceptional abilities and talents. British Psychological Society.

Laboratory of Comparative Human Cognition (1982). Culture and intelligence. In R. Sternberg (Ed.), Handbook of human intelligence (Vol.2, pp. 642-722). New York: Cambridge University Press.

Lave, J. (1988). Cognition in practice. New York: Cambridge University Press.

Lazear, D.G. (1989). Multiple intelligences and how we nurture them. Cogitare: Newsletter of the ASCD Network on Teaching Thinking, pp. 1, 4-5.

Lester, F.K. (1988). An assessment model for mathematical problem solving. Teaching Thinking and Problem Solving, 10(5), 4-7.

Lewandowski, L.J., & Martens, B.K. (1990). Selecting and evaluating standardized reading tests. Journal of Reading, 33, 384-388.

Linn, R.L. (1990). Dimensions of thinking: Implications for testing. In B.F. Jones & L. Idol (Eds.), Dimensions of thinking and cognitive instruction: Implications for educational reform. Hillsdale, NJ: Lawrence Erlbaum Associates.

Loucks-Horsley, S. (1989). Science assessment: What is and what might be. Educational Leadership, 46(7), 84-85.

Lucas, C.K. (1988). Toward ecological evaluation. Part one. The Quarterly of the National Writing Project and the Center for the Study of Writing. 10(1), 1-7.

Lucas, C.K. (1988). Toward ecological evaluation. Part two. The Quarterly of the National Writing Project and the Center for the Study of Writing. 10(2), 4-10.

Martinez, M.E., & Lipson, J.I. (1989). Assessment for learning. Educational Leadership, 46(7), 73-75.

Mervar, K. & Hiebert, E.H. (1989, December). Students' self-selection abilities and amount of reading in literature-based and conventional classrooms. Paper presented at the annual meeting of the National Reading Conference, Austin, TX.

National Council of Teachers of English (1986). The uses and abuses of testing. Urbana, IL: Author.

Nickerson, R.S. (1989). New directions in educational assessment. Educational Researcher, 18(9), 3-8.

Nickerson, R.S. (1989). Special issues on educational assessment. Educational Researcher, 18(9), 1-2.

Norris, S.P. (1989). Can we test validly for critical thinking? Educational Researcher, 18(9), 21-26.

Norris, S.P. (in press). Effect of eliciting verbal reports of thinking on critical thinking test performance. Journal of Educational Measurement.

Olson, L. (1988). Children flourish here: 8 teachers and a theory changed a school world. Education Week, 18(1), 18-19.

Peters, C.W., & Wixson, K.K. (1988). The new Michigan reading tests: Assessing reading as thinking. Teaching Thinking and Problem Solving, 10(5), 1-4.

Phillips, L.M. (1989). Developing and validating assessments of inference ability in reading comprehension (Tech. Rep. No. 451). Champaign: University of Illinois at Urbana-Champaign Center for the Study of Reading.

Quellmalz, E., Burry, J., & Herman, J. (1988). Analytic scales for assessing students' expository and narrative writing skills (CSE Resource Paper No. 5). Los Angeles, CA: Center for the Study of Evaluation.

Resnick, L.B., Bill, V., & Lesgold, S. (1989, September). Developing thinking abilities in arithmetic class. Paper presented at the Third European Conference for Research in Learning and Instruction, Madrid.

Rogoff, B. (1982). Integrating context and cognitive development. In M. Lamb & A. Brown (Eds.), Advances in developmental psychology (Vol. 2, pp. 125-169). Hillsdale, NJ: Lawrence Erlbaum Associates.

Salinger, T. (1988). New directions in assessment: Some considerations. Paper presented at the annual meeting of the National Reading Conference.

Scribner, S. (1986). Thinking in action: Some characteristics of practical thought. In R. Sternberg & R.K. Wagner (Eds.), Practical intelligence: Origins of competence in the everyday world. New York: Cambridge University Press.

Sewell, T.E. (1988). Intellectual assessment of at-risk students: Classification vs. instructional goals. Teaching Thinking and Problem Solving, 10(6), 1-5.

Siegler, R.S. (1989). Strategy diversity and cognitive assessment. Educational Researcher, 18(9), 15-21.

Snow, R.E. (1989). Toward assessment of cognitive and conative structures in learning. Educational Researcher, 18(9), 8-15.

Spandel, V., & Stiggins, R.J. (1990). Creating writers: Linking assessment and writing instruction. New York: Longman.

Stiggins, R.J. (1986). Evaluating students by classroom observation: Watching students grow (Reference & Resource Series). Washington, DC: National Education Association.

Stiggins, R.J. (1987). Design and development of performance assessments. Educational Measurement: Issues and Practice, 6(3), 33-42.

Stiggins, R.J. (1988). Revitalizing classroom assessment: The highest instructional priority. Phi Delta Kappan, 69, 363-368.

Stiggins, R.J., Quellmalz, E., & Ruble, E. (1988). Assessing higher order thinking skills in the classroom: A teacher's handbook (rev. ed.). Washington, DC: National Education Association.

Valencia, S. (1990). A portfolio approach to classroom reading assessment: The whys, whats, and hows. The Reading Teacher, 43, 338-342.

Weinstein, C.E. (1978). Elaboration skills as a learning strategy. In H.F. O'Neil, Jr. (Ed.), Learning strategies. New York: Academic Press.

Weinstein, C.E., Zimmerman, S.A., & Palmer, D.R. (1988). Assessing learning strategies: The design and development of the LASSI. In C.E. Weinstein, E.T. Goetz & P.A. Alexander, Learning and study strategies: Issues in assessment, instruction, and evaluation. San Diego: Academic Press, Inc.

Wiggins, G. (1989). A true test: Toward more authentic and equitable assessment. Phi Delta Kappan, 70, 703-714.

Wiggins, G. (1989). Teaching to the (authentic) test. Educational Leadership, 46(7), 41-47.

Wolf, D.P. (1987-1988). Opening up assessment. Educational Leadership, 45(4), 35-39.

Wolf, D.P. (1989). Portfolio assessment: Sampling student work. Educational Leadership, 46(7), 35-39.

Zessoules, R., Wolf, D., & Gardner, H. (1988). A better balance: Arts PROPEL as an alternative to discipline-based art education. In J. Burton, A. Lederman & P. London (Eds.), Beyond discipline-based art education. Dartmouth, MA: Southern Methodist University, Art Education Department.

Video Sources

Project Zero. This interview of Howard Gardner, including a classroom demonstration of Immigrants, was developed and copyrighted by NCREL (1990).

Performance-Based Assessment. This collage of performance-based assessments was developed and copyrighted by Browne & Johnson Production, Inc., One Grafton Common, Grafton, MA  01519.

California Assessment Program (CAP) Test. This videotape was excerpted from Assessing Authentic Achievement (1990) with permission of the California State Department of Education, Sacramento, CA.

Hazen Union School. This interview of Jean Hackett and the video archives of her performance-based assessments in math were developed and copyrighted by NCREL (1990).

Key Elementary School. This segment, which includes video archives from Key Elementary School plus classroom demonstrations of teachers using performance-based assessments, was developed and copyrighted by NCREL (1990).
