
References

American Association for the Advancement of Science [AAAS], Project 2061. (1993). Benchmarks for science literacy. New York: Oxford University Press. Retrieved September 6, 2020, from http://www.project2061.org/publications/bsl/default.htm

American Educational Research Association. (2000, July). AERA position statements: High-stakes testing in prek–12 education. Retrieved September 6, 2020, from http://www.aera.net/policyandprograms/?id=378

Amrein, A. L., & Berliner, D. C. (2002, March 28). High-stakes testing, uncertainty, and student learning. Education Policy Analysis Archives, 10(18). Retrieved September 6, 2020, from http://epaa.asu.edu/epaa/v10n18/

Anderson-Inman, L., Ditson, L. A., & Ditson, M. T. (1999, April). Computer-based concept mapping: Promoting meaningful learning in science for students with disabilities. Information Technology and Disabilities, 5(1–2). Retrieved September 6, 2020, from http://www.rit.edu/~easi/itd/itdv5n12/article2.htm

Atkin, J. M., Black, P., & Coffey, J. (Eds.). (2001). Classroom assessment and the National Science Education Standards. Washington, DC: National Academies Press.

Barchfeld-Venet, P. (2005). Formative assessment: The basics. Alliance Access, 9(1), 2–3. Retrieved September 6, 2020, from http://ra.terc.edu/publications/Alliance_Access/Vol9-No1/w05.pdf

Bennett, R. E. (1998). Reinventing assessment: Speculations on the future of large-scale educational testing. Princeton, NJ: Educational Testing Service Policy Information Center.

Bennett, R. E. (2003). Inexorable and inevitable: The continuing story of technology and assessment. The Journal of Technology, Learning, and Assessment, 1(1). Retrieved September 6, 2020, from http://www.bc.edu/research/intasc/jtla/journal/v1n1.shtml

Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.

Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–148. Retrieved September 6, 2020, from http://www.pdkintl.org/kappan/kbla9810.htm

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academies Press.

Braun, H. (2001). Assessment and evaluation issues in the age of e-learning (Preparing Tomorrow's Teachers to Use Technology Program Technical Report). Washington, DC: U.S. Department of Education.

Burmaster, E. (2003, October). Grow Wisconsin: Invest in people initiative. Panel discussion at the Wisconsin Economic Summit IV in Milwaukee, WI.

Burstein, J., Marcu, D., Andreyev, S., & Chodorow, M. (2001, July). Towards automatic classification of discourse elements in essays. Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, Toulouse, France. Retrieved September 6, 2020, from http://acl.ldc.upenn.edu//P/P01/P01-1014.pdf

Bybee, R. W., & Kennedy, D. (2005). Math and science achievement. Science, 307(5709), 481.

CEO Forum on Education and Technology. (2001). School technology and readiness report: Key building blocks for student achievement in the 21st century. Retrieved September 6, 2020, from http://www.ceoforum.org/downloads/report4.pdf

Cohen, J. (1977). Statistical power analysis for the behavioral sciences (Rev. ed.). New York: Academic Press.

Commission on Instructionally Supportive Assessment. (2001). Building tests to support instruction and accountability. Retrieved September 6, 2005, from http://www.nea.org/accountability/buildingtests.html

Consortium for School Networking. (2003). Vision to know and do: The power of data as a tool in educational decision making. Washington, DC: Author.

Domenech, D. A. (2000, December). My stakes well done: The issue isn't academic benchmarks, it's the misguided use of a single test. School Administrator Web Edition.

Duschl, R. D., & Gitomer, D. H. (1997). Strategies and challenges to change the focus of assessment and instruction in science classrooms. Educational Assessment, 4(1), 37–73.

Foertsch, D. J. (1999). Understanding assessment: An introduction to using published tests and developing classroom tests. Unpublished manuscript, North Central Regional Educational Laboratory, Oak Brook, IL.

Foltz, P. W., Gilliam, S., & Kendall, S. A. (2000). Supporting content-based feedback in online writing evaluation with LSA. Interactive Learning Environments, 8(2), 111–129. Retrieved September 6, 2020, from http://www.k-a-t.com/papers/ILE_foltz2000.pdf

Fontana, D., & Fernandes, M. (1994). Improvements in mathematics performance as a consequence of self-assessment in Portuguese primary school pupils. British Journal of Educational Psychology, 64(3), 407–417.

Frederiksen, J. R., & White, B. J. (1997, March). Reflective assessment of students' research within an inquiry-based middle school science curriculum. Paper presented at the annual meeting of the American Educational Research Association, Chicago.

Haertel, G., & Means, B. (2000). Stronger designs for research on educational uses of technology: Conclusions and implications. Menlo Park, CA: SRI International.

Herman, J. L. (1997, October). Large-scale assessment in support of school reform: Lessons in the search for alternative measures (CSE Technical Report 446). Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing. Retrieved September 6, 2020, from http://www.cse.ucla.edu/CRESST/Reports/TECH446.pdf

Heubert, J. P., & Hauser, R. M. (Eds.). (1999). High stakes: Testing for tracking, promotion, and graduation. Washington, DC: National Academy Press. Retrieved September 6, 2020, from http://www.nap.edu/html/highstakes/

Hickey, D. T., Kindfield, A., Horwitz, P., & Christie, M. A. (2003). Integrating curriculum, instruction, assessment and evaluation in a technology-supported genetics learning environment. American Educational Research Journal, 40(2), 495–538.

Joint Committee on Testing Practices. (2004). Code of fair testing practices in education. Washington, DC: Joint Committee on Testing Practices. Retrieved September 6, 2020, from http://www.apa.org/science/FinalCode.pdf

Kintsch, E., Steinhart, D., Stahl, G., & LSA Research Group. (2000). Developing summarization skills through the use of LSA-based feedback. Interactive Learning Environments, 8(2), 87–109. Retrieved September 6, 2020, from http://lsa.colorado.edu/papers/ekintschSummaryStreet.pdf

McAninch, A. R. (1993). Teacher thinking and the case method: Theory and future directions. New York: Teachers College Press.

McCurdy, B. L., & Shapiro, E. S. (1992). A comparison of teacher monitoring, peer monitoring, and self-monitoring with curriculum-based measurement in reading among students with learning disabilities. Journal of Special Education, 26(2), 162–180.

National Council of Teachers of Mathematics. (2000). Principles and Standards for School Mathematics. Reston, VA: Author.

National Research Council. (1996). National Science Education Standards. Washington, DC: National Academies Press.

National Research Council. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academies Press.

No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002). Retrieved September 6, 2020, from http://www.ed.gov/legislation/ESEA02/

North Central Regional Educational Laboratory & Metiri Group. (2003). enGauge 21st century skills for 21st century learners: Literacy in the Digital Age. Naperville, IL: North Central Regional Educational Laboratory. Retrieved September 6, 2020, from http://www.ncrel.org/engauge/skills/skills.htm

Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Committee on the Foundations of Assessment, Board on Testing and Assessment, Center for Education, National Research Council. Washington, DC: National Academy Press. Retrieved September 6, 2020, from http://www.nap.edu/books/0309072727/html/

Reich, R. B. (2001, May). Drop your standards. The American Prospect Online. Retrieved September 6, 2020, from http://www.prospect.org/webfeatures/2001/05/reich-r-05-11.html

Rose, D. H., & Meyer, A. (2002). Teaching every student in the Digital Age: Universal design for learning. Alexandria, VA: Association for Supervision and Curriculum Development. Retrieved September 6, 2020, from http://www.cast.org/teachingeverystudent/ideas/tes/

Rothman, R., Slattery, J. B., Vranek, J. L., & Resnick, L. B. (2002, May). Benchmarking and alignment of standards and testing (CSE Technical Report 566). Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing. Retrieved September 6, 2020, from http://www.cse.ucla.edu/CRESST/Reports/TR566.pdf

Russell, M. (2000). It's time to upgrade: Tests and administration procedures for the new millennium. Secretary's Conference on Educational Technology 2000. Retrieved September 6, 2020, from http://www.ed.gov/rschstat/eval/tech/techconf00/russell_paper.html

Russell, M., & Haney, W. (1997, January). Testing writing on computers: An experiment comparing student performance on tests conducted via computer and via paper-and-pencil. Education Policy Analysis Archives, 5(3). Retrieved September 6, 2020, from http://olam.ed.asu.edu/epaa/v5n3.html

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.

Sawyer, R. J., Graham, S., & Harris, K. R. (1992, September). Direct teaching, strategy instruction, and strategy instruction with explicit self-regulation: Effects on the composition skills and self-efficacy of students with learning disabilities. Journal of Educational Psychology, 84(3), 340–352.

Seltzer, M., Choi, K., & Thum, Y. M. (2002, April). Examining relationships between where students start and how rapidly they progress: Implications for constructing indicators that help illuminate the distribution of achievement within schools (CSE Technical Report 560). Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing. Retrieved September 6, 2005, from http://www.cse.ucla.edu/CRESST/Reports/TR560.pdf

Senge, P., Cambron-McCabe, N., Lucas, T., Smith, B., Dutton, J., & Kleiner, A. (2000). Schools that learn. New York: Doubleday.

Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 1–14. Retrieved September 6, 2020, from http://35.8.171.42/aera/pubs/er/arts/29-07/shep01.htm

Stiggins, R. J. (1999). Assessment, student confidence, and school success. Phi Delta Kappan, 81(3), 191–198. Retrieved September 6, 2020, from http://www.pdkintl.org/kappan/k9911sti.htm

Stiggins, R. J. (2004, September). New assessment beliefs for a new school mission. Phi Delta Kappan, 86(1), 22–27.

U.S. Department of Education, Office of Elementary and Secondary Education. (2002). No Child Left Behind: A desktop reference. Washington, DC: Author. Retrieved September 6, 2020, from http://www.ed.gov/admins/lead/account/nclbreference/index.html

Vendlinski, T., & Stevens, R. (2003). Assessing student problem-solving skills with complex computer-based tasks. The Journal of Technology, Learning, and Assessment, 1(3). Retrieved September 6, 2020, from http://www.bc.edu/research/intasc/jtla/journal/v1n3.shtml

Wolf, F. M. (1986). Meta-analysis: Quantitative methods for research synthesis (SAGE University Series on Quantitative Applications in the Social Sciences, No. 07-059). Newbury Park, CA: SAGE.

Wright, A. W. (2001, October). The ABCs of assessment. The Science Teacher, 60–64. Retrieved September 6, 2020, from http://science.nsta.org/enewsletter/2004-03/tst0110_60.pdf

Return to "Multiple Dimensions of Assessment That Support Student Progress in Science and Mathematics."