Critical Issue:
Using Scientifically Based Research to Guide Educational Decisions

This Critical Issue was written by Jonathan Margolin, Program Associate, Learning Point Associates, and Beth Buchler, Educational Consultant, New-Learning Educational Services.

ISSUE: The No Child Left Behind Act requires educational programs and practices to be based on scientifically based research. This federal policy affects practicing educators in the curriculum areas of reading, mathematics, and science. It also affects instructional strategies, professional development, parent involvement, and all federally funded programs. The intent of these requirements is for teachers and administrators to improve their schools based on scientific knowledge as well as professional wisdom.

This Critical Issue focuses on how educators can use scientifically based research to inform teaching practices, curriculum decisions, and schoolwide programs. It is intended to guide teachers and administrators toward understanding, locating, and applying scientifically based research to improve student learning.


Overview | Goals | Action Options | Pitfalls | Points of View | Cases | Contacts | References

OVERVIEW: Quality teachers ask tough questions in their classrooms. They want their students to probe, investigate, and inquire about what they are learning and doing. Today, teachers and administrators are also being asked tough questions about the evidence for the effectiveness of the educational programs and methods they select for use in their classrooms. The reasons for this change are the provisions of the No Child Left Behind (NCLB) Act of 2001 that require federally funded educational programs to be built on scientifically based research (SBR). For example, a school using Title I funds to support the introduction of a new literacy approach must investigate the scientific evidence upon which that program is based. As a result, the way schools make critical decisions about curriculum and instruction will change. For this reason, it is important to understand the motivation behind these requirements.

Why Is SBR Important?

The trend toward scientifically based educational programs and practices has been a long time coming. For example, the standards of the National Staff Development Council (NSDC, 2004) state that the content of staff development should provide educators with "research-based instructional strategies." The reason for this standard is eloquently expressed as follows:

"The charisma of a speaker or the attachment of an educational leader to an unproven innovation drives staff development in far too many schools. Staff development in these situations is often subject to the fad du jour and does not live up to its promise of improved teaching and higher student achievement. Consequently, it is essential that teachers and administrators become informed consumers of educational research when selecting both the content and professional learning processes of staff development efforts." (NSDC, 2004)

These words, referring to professional development, apply just as well to all of the other decisions educators make about curriculum, materials, and instructional methods. The imperative for incorporating SBR comes not only from federal law but from common sense as well. With budgets tighter and district demands greater, educators need to be able to evaluate the evidence for the effectiveness of costly programs and materials. SBR is the "gold standard" for such evidence (Coalition for Evidence-Based Policy, 2003). Educators therefore need to understand SBR, how it affects success in their schools, and how to integrate it into their educational decision making. Before exploring SBR itself, it is important to have a clear understanding of the scope of the federal requirements.

NCLB Programs That Require SBR

Let us consider the different programs whose monies must be spent on practices supported by SBR. The first four "titles" of the NCLB Act (2002) contain the majority of the references to SBR that are of broad interest; they are summarized here:

- Title I: Improving the Academic Achievement of the Disadvantaged (including the Reading First program)
- Title II: Preparing, Training, and Recruiting High Quality Teachers and Principals
- Title III: Language Instruction for Limited English Proficient and Immigrant Students
- Title IV: 21st Century Schools (including the Safe and Drug-Free Schools and Communities program)

This is not a comprehensive list, and details for each of the programs vary. Consult the NCLB Desktop Reference (Office of the Under Secretary, 2002) for more information on specific programs.

How the NCLB Act Defines SBR

Scientifically based research is defined in the NCLB legislation as "research that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs" (NCLB, 2002). In this section, we attempt to explain the law's definition of SBR in plain language.

To begin, the point of the SBR requirements is to ensure that federal funds are spent on "what works"—programs that are likely to make a strong impact on student achievement (Comprehensive School Reform Program Office, 2002). What sorts of research, then, constitute scientific evidence of effectiveness? When examining this question, Feuer, Towne, and Shavelson (2002) noted that "No method is good, bad, scientific, or unscientific in itself: Rather, it is the appropriate application of method to a particular problem that enables judgments about scientific quality" (p. 8). The NCLB legislation, along with guidelines from the U.S. Department of Education, defines scientific research with the goal of determining what works in educational programs and practices. For example, scientific evidence for a literacy program would need to demonstrate convincingly that the program causes an improvement in reading. The NCLB legislation describes criteria for research that meets this lofty standard.

Six Criteria for SBR

The NCLB Act presents a detailed definition of SBR focused on six criteria. An explanation and discussion of each of these criteria follows.

1. Research that employs systematic, empirical methods that draw on observation or experiment. The defining principle of scientific evidence is systematic empiricism. Empiricism is "watching the world," relying on careful observation of events to make conclusions (Stanovich & Stanovich, 2003, p. 33). Systematic empiricism requires making those observations in a careful, structured manner in order to answer a specific question. In the realm of educational research, systematic empiricism requires a precise definition of the intervention or program being studied and careful measurement of its outcomes.

Example: Following a state referendum restricting "bilingual education" in California, the popular media reported a dramatic improvement in the academic achievement of nonnative English speakers in a school district that switched to an English-only curriculum. This result was touted as evidence for the superiority of English-only instruction over bilingual instruction. However, this report did not constitute scientific evidence, because no one had systematically compared the two differing curriculum approaches. In fact, upon careful analysis, it became known that the school did not even have a bilingual curriculum to start with. Subsequent studies that precisely defined the key features of bilingual education and evaluated specific educational outcomes can be called systematic and empirical (see Krashen, 2002).

This criterion requires quantitative research, the hallmark of which is the use of numerical measurement of student outcomes. In order to know if one method truly caused an improvement, it is necessary to quantify the improvement in student performance. For example, studies about the effectiveness of certain mathematics instructional practices measure the improvement in mathematics ability, perhaps by quantifying changes over time in the percentage of math problems that students are able to answer.
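To make the idea concrete, here is a minimal sketch, using entirely hypothetical numbers, of how such an improvement might be quantified as a change in the percentage of problems answered correctly:

```python
# A minimal sketch with hypothetical numbers: quantifying improvement as the
# change over time in the percentage of math problems answered correctly.
problems_on_test = 40
correct_fall, correct_spring = 22, 31  # hypothetical counts for one class

pct_fall = 100 * correct_fall / problems_on_test
pct_spring = 100 * correct_spring / problems_on_test
print(f"Fall: {pct_fall:.0f}% correct; Spring: {pct_spring:.0f}% correct "
      f"(a gain of {pct_spring - pct_fall:.0f} percentage points)")
```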

2. Research that involves rigorous data analyses that are adequate to test the stated hypotheses and justify the general conclusions drawn.

Data from a study must be analyzed using appropriate statistical procedures that can support the conclusions drawn; failure to apply such procedures calls the results into question. Reputable research does not make strong claims for the effectiveness of a program or practice based on modest differences or gains in student achievement. Statistical analysis is needed to determine whether the results are both significant and important.

Example: Research on the influence of class size on literacy achievement compared the reading ability of students in classrooms with 12 to 15 students to that of students in classrooms with 20 to 25 students. The students from the smaller classes scored higher on reading achievement tests. The researchers calculated the statistical significance of this difference to determine how likely it was that such a result could have occurred by chance.
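As an illustration of the calculation described above, the following sketch applies an independent-samples t-test to hypothetical reading scores for the two class-size groups; the data and group sizes are invented for illustration, not drawn from any actual study:

```python
# A minimal sketch (hypothetical data) of testing whether the difference
# between small-class and large-class reading scores is statistically
# significant, using an independent-samples t-test.
from scipy import stats

small_class_scores = [78, 85, 82, 90, 76, 88, 84, 81, 79, 87]
large_class_scores = [72, 75, 80, 70, 74, 78, 73, 69, 77, 71]

t_stat, p_value = stats.ttest_ind(small_class_scores, large_class_scores)

# A small p-value (conventionally below .05) means a difference this large
# would rarely arise by chance alone.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```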

A great deal of technical expertise is necessary to understand whether statistical procedures have been performed and reported adequately. Fortunately, the publication of research in reputable sources and the replication of the results by different researchers give the layperson some degree of confidence that the research claims are above board. On a superficial level, quality research reports basic statistical information, such as sample sizes, group means, and measures of statistical significance.

3. Research that relies on measurements or observational methods that provide reliable and valid data across evaluators and observers, across multiple measurements and observations, and across studies by the same or different investigators.

Scientific research needs to use reliable methods of collecting data. A reliable testing instrument will give the same result each time it is used on the same person or situation. Whenever a study evaluates students in a manner that relies on human judgment, as with assessments of writing ability, it is essential for the research to report interrater reliability, an index of how closely the different raters agree. Studies that rely on testing instruments typically establish test-retest reliability by administering the instrument to the same group of people twice. The main point is that SBR documents the reliability of its procedures for data collection.
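The sketch below illustrates, with invented scores, the two reliability checks just described: test-retest reliability as the correlation between two administrations of the same test, and interrater reliability as simple percent agreement between two raters:

```python
# A minimal sketch (hypothetical data) of two common reliability indices.
import numpy as np

# Test-retest reliability: the same ten students take the same test twice.
first_administration = np.array([70, 82, 65, 91, 77, 88, 73, 69, 85, 80])
second_administration = np.array([72, 80, 66, 93, 75, 90, 71, 70, 84, 82])
test_retest_r = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest reliability (r): {test_retest_r:.2f}")  # near 1.0 = reliable

# Interrater reliability: two raters score the same ten essays on a 1-4 rubric.
rater_a = np.array([3, 2, 4, 3, 1, 2, 3, 4, 2, 3])
rater_b = np.array([3, 2, 4, 2, 1, 2, 3, 4, 2, 3])
print(f"Interrater agreement: {np.mean(rater_a == rater_b):.0%}")
```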

Data about a particular outcome (e.g., mathematics achievement) are valid if they truly reflect that outcome and not some unrelated factor.

Example: Research that examines the effect of art education on mathematics achievement should use a measure that reflects that outcome and is not influenced by unrelated outcomes. For example, if the test of this outcome contains questions that are difficult to understand, then the test may measure verbal ability as well as mathematics achievement. Its validity would then be in doubt.

4. Research that is evaluated using experimental or quasi-experimental designs in which individuals, entities, programs, or activities are assigned to different conditions and with appropriate controls to evaluate the effects of the condition of interest, with a preference for random-assignment experiments, or other designs to the extent that those designs contain within-condition or across-condition controls.

Experimental design. This criterion specifies that in order to be deemed scientific by the NCLB Act, research needs to conform to an experimental or quasi-experimental design. The reasoning is that it is difficult to understand the effectiveness of any educational approach without comparing it to a different approach. For this reason, this criterion states that evidence for the effectiveness of any practice needs to include a comparison group to show what would happen if that practice had not been used. An ideal comparison group is similar to the group using the practice in every important way that could influence the outcome of interest. Because the comparison group allows researchers to control for the influence of external factors unrelated to the intervention, it is sometimes called a control group. By contrast, the group of people (or schools) that uses the practice under investigation is typically called the treatment or experimental group.

Example: Consider an educational program to decrease tobacco use among teenagers. Suppose that the promoters of the program tout its effectiveness by noting that a school that began using this program in 2002 reported a decrease in smoking from that point on. Is this convincing evidence? According to NCLB guidelines, no. Because there are so many other variables that affect the smoking rate, it is not possible to identify any one cause. After all, perhaps an increase in the cigarette tax—or a national advertising campaign to deter tobacco use—caused the decline. To make a claim about the effectiveness of the educational intervention, researchers would need to compare the students at the school that implemented the program to students at a similar school that did not implement it.

This criterion makes an additional statement about comparison groups and treatment groups: The best way to assign people to these groups is through a random process. Random assignment is the hallmark of the experimental design. When researchers randomly assign students (or classrooms or schools) to the experimental or control groups, any given participant in the study has an equal chance of ending up in the control group or the treatment group. The purpose of this procedure is to make sure that the two groups are as equivalent as possible in terms of the background characteristics that could influence the outcome variables. Any preexisting differences between the comparison group and the treatment group can confound—that is, spoil—the results. Random assignment eliminates, for the most part, the concern that the control group comprises people (or schools) that are fundamentally different from the treatment group.

Example: Continuing the tobacco education example, if the students in the treatment school were from poorer families than students at the comparison school, they might be more affected by an increase in the tobacco tax. This preexisting difference, rather than the program, could account for a decrease in the use of tobacco in the school where the anti-tobacco program was implemented. However, suppose the researchers randomly assigned 20 schools, all of which were similar in their major demographic traits, to either the treatment or the control condition. If the treatment group reported a substantial decrease in student tobacco use in comparison to the control group, one could be highly confident that the education program worked.
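The mechanics of random assignment are simple, as this sketch of the hypothetical 20-school scenario shows (the school names are placeholders):

```python
# A minimal sketch of random assignment: 20 hypothetical schools are randomly
# split into treatment and control groups, so that preexisting differences
# are distributed between the groups by chance alone.
import random

schools = [f"School {i}" for i in range(1, 21)]
random.shuffle(schools)          # put the schools in random order
treatment_group = schools[:10]   # first half implements the program
control_group = schools[10:]     # second half does not

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```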

Practical and ethical concerns with experimental design. Random assignment is not always possible, for both practical and ethical reasons. As a practical matter, the administration of a school district might insist on deciding which of its elementary schools adopt a new curriculum. It would therefore not be possible to randomly assign schools (or classrooms or students) to particular treatment or control groups to study the effectiveness of this curriculum. Other practical dilemmas abound but are beyond the scope of this discussion. As an ethical matter, random assignment often is not an appropriate way to determine which students in a school benefit from an experimental approach.

Quasi-experimental design. Because of these concerns, most educational research does not utilize a pure experimental design, but rather a quasi-experimental design. One such approach is to select a comparison group that closely matches the treatment group on all relevant factors. For example, a study of an intensive professional development program might select five schools to participate in the program, and five other similar schools to serve as comparison schools. Although this sounds very much like an experiment, it lacks the key factor of random assignment; the schools that received the program may have volunteered or been selected to participate. For this approach to be considered SBR under the NCLB standards, the five comparison schools would need to closely match the treatment schools on all of the factors that could influence the intended outcome of the program (e.g., demographic composition, academic achievement, and timing of evaluation).
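The following sketch illustrates one simple way such matching might be done; the school names, matching factors, and distance measure are all hypothetical simplifications:

```python
# A minimal sketch of matching for a quasi-experimental design: each
# treatment school is paired with the closest unused comparison candidate
# on two factors (percentage of low-income students and prior test scores).
def distance(a, b):
    """Crude distance across the two matching factors."""
    return (abs(a["pct_low_income"] - b["pct_low_income"])
            + abs(a["prior_score"] - b["prior_score"]))

treatment_schools = [
    {"name": "T1", "pct_low_income": 62, "prior_score": 48},
    {"name": "T2", "pct_low_income": 35, "prior_score": 61},
]
candidates = [
    {"name": "C1", "pct_low_income": 60, "prior_score": 50},
    {"name": "C2", "pct_low_income": 37, "prior_score": 59},
    {"name": "C3", "pct_low_income": 80, "prior_score": 41},
]

for school in treatment_schools:
    match = min(candidates, key=lambda c: distance(school, c))
    candidates.remove(match)  # each comparison school is matched only once
    print(f"{school['name']} matched with {match['name']}")
```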

It must be noted that this criterion has generated much controversy due to what some perceive as its exclusion of legitimate methods of scientific research such as qualitative designs and other nonexperimental approaches. These objections are discussed in the Different Points of View section.

5. Research that ensures that experimental studies are presented in sufficient detail and clarity to allow for replication or, at a minimum, offer the opportunity to build systematically on their findings.

Scientific research is open to the public. A person who claims to have discovered an effective teaching technique needs to submit evidence for its effectiveness to public scrutiny. If the results are sound, and the practice is truly effective, other people should be able to get the same results. For this reason, SBR must be reported in sufficient detail to allow for replication of the intervention and the scientific findings. One type of replication involves practitioners reproducing the educational intervention in their own schools. Another type of replication is more demanding; it involves another researcher attempting to replicate the original findings by following the same research procedures. This process is important because it allows researchers to independently confirm the legitimacy of purported scientific evidence. For this reason, scientific research needs to include all of the details about the educational intervention, participants, materials, outcome measures (e.g., tests and questionnaires), and the statistical procedures that were employed. Vague reporting of methods or results is a red flag, because it suggests that the authors have something to hide. By the same token, successful replication of the research by a variety of independent sources helps ensure that the research is truly objective.

Example: In the early 1990s, psychology researchers published a study in which they claimed that listening to a Mozart sonata temporarily boosted the IQ of college students. The results of this small study were reported widely in the popular media, unleashing a torrent of marketing of classical music as a way to improve intelligence. Subsequent researchers have precisely replicated the methods of the original experiment but have not replicated the findings of increased IQ. For this reason, the validity of the original findings is highly doubtful.

6. Research that has been accepted by a peer-reviewed journal or approved by a panel of independent experts through a comparably rigorous, objective, and scientific review.

The process of peer review is essential to SBR. Many journals of educational research, such as the American Educational Research Journal, accept articles based on the review of other researchers who understand the research topic. The purpose of peer review is to submit research to public criticism—to shine the light of objectivity generated by independent minds. This process helps to screen out poor-quality research, especially research that has serious problems in any of the areas discussed here. Journals vary in the stringency of their standards, so peer review is a minimal standard. Yet precisely because it is minimal, its absence is a sure sign that a particular method is lacking in quality (Stanovich & Stanovich, 2003). It is possible to determine whether a journal is peer reviewed by reading its editorial policy for acceptance of manuscripts. In summary, SBR is submitted to public scrutiny through peer review and is replicated by independent researchers. Educators should therefore be wary of programs or practices whose only support comes from unpublished "in-house" studies conducted by their commercial vendors.

What Falls Outside of the NCLB Definition?

The foregoing criteria define a standard for SBR that focuses on the question of "what works" in educational practice. It should be noted that there are other important questions in education for which different research approaches are appropriate. Some of these questions include "What does a successful school or classroom look like?" and "What are the risk factors associated with dropping out of school?" Research approaches such as qualitative case studies, descriptive research, and correlational studies are well suited to answering these questions.

The importance of these methods is discussed further in the section on Different Points of View.

Evaluating the Evidence Base

We have defined the general approach to SBR and discussed different types of research questions. We have seen that certain research designs are more appropriate for establishing causality than others. The task of evaluating the evidence base is more complex, however, than considering the quality of individual studies in isolation. Rather, educators need to take into account three perspectives when weighing the evidence in favor of adopting a particular program or practice: (1) the theoretical base of the reform practice or program, (2) implementation and replicability information, and (3) evidence of effects on student achievement (Comprehensive School Reform Program Office, 2002).

Theoretical base. It is important to examine and understand the theory behind a practice: the set of guiding principles behind a program that explain why and how the program works. For example, a guiding principle behind a science curriculum might be that students learn best within the context of solving a problem that is intellectually and socially meaningful to them. A clearly stated theory gives a program greater coherence because teachers understand how students are supposed to learn and what purpose the various components of the program serve. For example, when a teacher understands the theory of direct, explicit phonics instruction, that teacher is less likely to modify the program in ways that defeat its purpose. Conversely, with a clear grasp of the theoretically important elements of a program, a school is able to adapt it to local circumstances in a manner that does not compromise the program's effectiveness.

The theoretical base helps to fill in gaps in SBR. Many, if not most, educational programs do not yet have sufficient research to earn the distinction of being "evidence-based." Programs that are based on a coherent, well-established theory are far preferable to programs whose theoretical base has not been substantiated (Stanovich & Stanovich, 2003).

Critical features: In a pertinent publication (Comprehensive School Reform Program Office, 2002), the U.S. Department of Education identified questions to ask about the quality of a program's theoretical base.

Implementation and replicability information. Certain programs can work better in some situations than in others. For this reason, local stakeholders must examine whether a particular program will be successful within their local context. They should look for evidence that the program has been implemented successfully in schools that are similar to their own based on a number of factors, such as demographic characteristics (e.g., socioeconomic status, race, locale), student achievement levels, school size, and teacher experience. The research base itself may indicate whether a program has been successful in a variety of settings and with different populations, or whether its success is limited to a narrow range of conditions. Case studies are very useful for understanding the vagaries of implementation in great detail. By highlighting critical factors for the success of a program, case studies often constitute a critical piece of information in helping schools decide if a program will work for them. It is also useful to examine statistics on the number of different schools that have fully implemented a program and their settings. In summary, it is important to understand the circumstances in which a program or practice is most effective, even if it has a strong research base.

Critical features: The U.S. Department of Education (Comprehensive School Reform Program Office, 2002) has identified questions to ask when judging the quality of implementation and replicability evidence.

Evidence of effectiveness. When it comes to evaluating evidence of effectiveness, it is most important to keep in mind that a single study does not provide a "base" of evidence. Any single study may have numerous flaws in how it controls for confounding variables (although experimental studies are generally the least susceptible to these flaws). When a practice receives support from several different high-quality studies, the strength of the big picture overcomes the flaws that any one study may have (Stanovich & Stanovich, 2003).
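As a rough illustration of how evidence accumulates across studies, the sketch below averages hypothetical effect sizes weighted by sample size. (Formal meta-analysis is considerably more involved; this only conveys the idea that several consistent studies outweigh the flaws of any single one.)

```python
# A minimal sketch with hypothetical numbers: pooling effect sizes from
# several studies, weighting each study by its sample size.
studies = [
    {"effect_size": 0.35, "n": 120},
    {"effect_size": 0.28, "n": 300},
    {"effect_size": 0.41, "n": 80},
]

total_n = sum(s["n"] for s in studies)
pooled = sum(s["effect_size"] * s["n"] for s in studies) / total_n
print(f"Sample-weighted average effect size: {pooled:.2f}")
```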

Critical features: The U.S. Department of Education (Coalition for Evidence-Based Policy, 2003) has provided guidelines for a school to judge whether the research base of an educational intervention provides evidence of effectiveness.

These guidelines are presented in detail in A User Friendly Guide published by the U.S. Department of Education.

There is an additional factor to consider when judging the evidence base for a particular program. As Robert Slavin (2003) notes, there is a distinction between a program that is "based on scientifically based research" and a program that has itself been subjected to rigorous testing. For example, a program to improve reading may consist of instructional components that are supported by scientific research, such as direct instruction of phonics. Yet the particular program itself—as characterized by how it organizes and emphasizes its various research-based components—might not have been tested. All else being equal, it is preferable to look for rigorous testing of the program or practice itself.

Conclusion

Gathering, synthesizing, and using SBR are the steps to making good decisions about educational programs, products, and practices. Although studying the evidence base is time consuming, proper consideration of SBR gives educators greater confidence in their decision making and may lead to greater opportunity for students to succeed.

GOALS

ACTION OPTIONS:

Teachers, administrators, and policymakers can team together to pursue action options for achieving their goal of using SBR to create the best educational opportunities for students.

IMPLEMENTATION PITFALLS: Perhaps the greatest pitfall in the use of SBR is the paucity of experimental studies about important topics in education. It is widely acknowledged that an increase in government funding for educational research is necessary to achieve the goal of evidence-based education (Committee on Scientific Principles for Education Research, 2002; Raudenbush, 2002). As a result, educators who fail to find any high-quality studies that answer their questions may grow frustrated with the goal of utilizing research.

Due to the federal requirements regarding SBR, many vendors are touting their products and services as "evidence based." A pitfall is unwarranted reliance on inadequate research designs to substantiate these claims. One example is the pre-post design, in which student performance is measured at some point before and some point after a school adopts a new practice. The increase in scores over time does not demonstrate causality because there is no comparison group. (See Coalition for Evidence-Based Policy, 2003, and Slavin, 2003, for additional examples.)
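A small numerical sketch, with invented scores, shows how a pre-post design can mislead: if scores are rising everywhere, the adopting school improves even when the program adds nothing, and only a comparison group reveals this:

```python
# A minimal sketch (hypothetical numbers) of the pre-post pitfall. Scores
# rise districtwide, so the adopting school's gain looks like a program
# effect until it is compared against a similar school without the program.
pre_adopting, post_adopting = 55.0, 61.0      # school that adopted the program
pre_comparison, post_comparison = 54.0, 60.0  # similar school that did not

naive_gain = post_adopting - pre_adopting
gain_vs_comparison = naive_gain - (post_comparison - pre_comparison)

print(f"Pre-post gain (looks like an effect): {naive_gain:.1f} points")
print(f"Gain relative to comparison school:   {gain_vs_comparison:.1f} points")
```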

There is likely to be confusion between programs that are based on scientific research and programs that have themselves been rigorously tested (Slavin, 2003). This distinction is subtle yet important: The individual components of a program may all be supported by research, but the way that the program organizes and emphasizes those components may not be. The experience of the New York City schools in selecting a reading curriculum is a case in point. School officials selected a reading program whose major components were amply supported by research. Yet, in the view of some critics, the program itself had not been rigorously tested and, therefore, was not sufficiently scientifically based. As a result, the New York City school district had to switch its reading program in order to ensure federal funding (Manzo, 2004). Thus, educational leaders must understand this distinction and be clear about what level of research rigor they need to meet in order to qualify for funding.

Another pitfall, ironically, is overreliance on SBR. Potentially, a school can be so swayed by the evidence for the effectiveness of an educational program that it might fail to verify whether such a program is a good match for its own conditions and needs. Thus, educators must look beyond the evidence for a program's effectiveness and also consider evidence for successful implementation in schools similar to theirs.

Finally, the limited amount of time for educators to study the research can be a pitfall. Reviewing the research literature is a time-consuming process. Nevertheless, the up-front investment in finding the most effective education program will save time in the long run.

DIFFERENT POINTS OF VIEW:

The NCLB legislation excludes research that lacks a comparison group from the definition of SBR (see What Falls Outside of the NCLB Definition in the Overview section). Some researchers object to this definition on several grounds.

In essence, these critics contend that the entire evidence base for a given practice should be considered, including not only experimental and quasi-experimental research but also qualitative, descriptive, and correlational studies.

ILLUSTRATIVE CASES:

Actual research studies can illustrate the features of experimental design that are the hallmark of SBR. The three features are quantitative data, control groups, and (in experimental studies) random assignment. (Quasi-experimental studies lack random assignment.)

Experimental Study

Title: "The Effects of Thinking Aloud During Reading on Students' Comprehension of More or Less Coherent Text"

Purpose: The study was designed to test whether the effectiveness of think-aloud strategies for reading comprehension depended on the coherence of the text.

Design: "In order to compare the effects of both coherent text and active engagement on students' comprehension, we [randomly] assigned students to one of four conditions. In one condition, students read the original text silently; in a second condition, students read the original text thinking aloud; in a third condition, students read the revised text silently; and in a fourth condition, students read the revised text with thinking aloud" (Loxterman, Beck, & McKeown, 1994, p. 354).

Data Collection: "After reading the text, students in all four conditions were asked to recall what they had read and to answer a set of open-ended questions" (Loxterman et al., 1994, p. 356). The authors measured how many "content units" each participant recalled and scored their question responses against model answers.

Table 1. Design Features of the Experimental Study

Design Feature                                                Present in study?
Quantitative measures of student achievement or performance   Yes
Control group                                                 Yes
Random assignment of participants                             Yes

Note: The authors used a process called stratified random sampling, which ensures that the randomly assigned groups are equivalent in their overall reading ability.

Quasi-Experimental Study

Title: "Fourth-Year Achievement Results on the Tennessee Value-Added Assessment System for Restructuring Schools in Memphis"

Purpose: This study examined how restructuring schools compared in student achievement gains to nonrestructuring schools over a four-year period.

Design: The study compared 53 restructuring schools with 23 nonrestructuring schools that had been matched on major demographic characteristics or were from the same district (Ross, Sanders, Wright, Stringfield, Wang, & Alberg, 2001).

Data Collection: Performance on the Tennessee Comprehensive Assessment Program (TCAP) was compared between the restructuring and nonrestructuring schools.

Table 2. Design Features of the Quasi-Experimental Study

Design Feature                                                Present in study?
Quantitative measures of student achievement or performance   Yes
Control group                                                 Yes
Random assignment of participants (in this case, schools)     No

Pre-Post Design

A pre-post design examines a particular variable at two points in time: before and after an intervention. It does not provide evidence about causality or effectiveness due to its lack of a comparison group, although it describes what happens after an intervention. Consider the research report that follows.

Title: "Data Supporting the Four Blocks Framework"

Purpose: This report documented the changes in reading and language arts achievement at several schools that had implemented the Four Blocks, a model of literacy instruction (Cunningham & Hall, 2002).

Design: For several schools, the study reported average reading and language arts achievement scores from before and after the implementation of the Four Blocks. In other words, the study utilized a pre-post design. Table 3 reports the change in achievement at one of the schools; the literacy model was implemented in the 2000–01 school year. The percentage of third graders meeting or exceeding state reading standards increased the year following implementation (Cunningham & Hall, 2002).

Table 3. Results of the Illinois Standards Achievement Test for Reading
for Third Graders in 2000 and 2001

               Academic Warning   Below Standards   Meets Standards   Exceeds Standards
Reading 2000         4%                38%               47%               10%
Reading 2001         0%                28%               48%               23%

Table 4. Design Features of the Pre-Post Study

Design Feature                                                Present in study?
Quantitative measures of student achievement or performance   Yes
Control group                                                 No
Random assignment of participants (in this case, schools)     No

CONTACTS:

What Works Clearinghouse
2277 Research Boulevard, MS 6M
Rockville, MD 20850
866-992-9799
http://www.w-w-c.org/

Resources for Locating and Understanding SBR

The principles provided in this Critical Issue for evaluating research are useful guidelines that can help identify promising programs and help rule out those with poor support. However, the process of evaluating the quality of the research base is highly technical and requires a great deal of skill and expertise. For this reason, it is necessary to turn to objective expert sources for guidance.

What Works Clearinghouse. One resource for identifying evidence-based practices is the What Works Clearinghouse (WWC), sponsored by the U.S. Department of Education. Established in 2002 by the department's Institute of Education Sciences, the clearinghouse is designed to provide educators, policymakers, and the public with a valuable source of scientific evidence of what works in education. Each year (beginning in 2003), the WWC investigates the evidence base for a set of educational topic areas. These topic areas are chosen to meet the needs of K–12 and adult educators who need to identify effective and replicable practices. For each topic area, the WWC produces an Evidence Report that makes causal statements about which educational interventions work for achieving a particular student goal. These conclusions are based on a systematic review of existing research on the topic.

In Year 1, the WWC will issue reports on its first set of selected topics.

Scholarly sources. Scholarly, peer-reviewed journals are a primary source for research about educational practices.

NCREL resources. The North Central Regional Educational Laboratory (NCREL) has collected many resources to help identify and understand SBR.

Comprehensive school reform. Resources are available that summarize the research base behind various models of comprehensive school reform (CSR).

Additional Resources

The National Clearinghouse for Comprehensive School Reform (NCCSR) offers a Web-based workshop titled Identifying Research-Based Solutions for School Improvement that aims to provide educators with the skills they need to find, identify, and make good use of the best available educational research.

From the U.S. Department of Education, Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide is "intended to serve as a user-friendly resource that the education practitioner can use to identify and implement evidence-based interventions, so as to improve educational and life outcomes for the children they serve" (p. iii).

A Handbook for Classroom Instruction That Works, a book written by Mid-continent Research for Education and Learning (McREL) researchers R.J. Marzano, J.S. Norford, D.E. Paynter, D.J. Pickering, and B.B. Gaddy, discusses nine instructional strategies that have been demonstrated through SBR to be effective in improving student achievement across all content areas and grade levels. More information is available at McREL's Web site.

No Child Left Behind: A Desktop Reference, from the U.S. Department of Education, provides detailed information on federal SBR requirements for specific programs funded under the NCLB legislation.

A Center on Education Policy publication, From the Capital to the Classroom: State and Federal Efforts to Implement the No Child Left Behind Act, reports on the first year of a six-year study. Chapter 5, "Using Scientifically Based Research to Improve Education," is especially relevant.

 

References


Posted: 2004
