kheru2006 (kheru2006) wrote,

School-Based Assessment: East Meet West KL 2005 I

 School-Based Assessment and Assessment for Learning: Concept, Theory and Practice

Dr. Lorayne Dunlop-Robertson, Ontario Institute for Studies in Education: University of Toronto

Assessment policy in the province of Ontario, Canada has undergone significant change in the past decade – change aimed at transforming curriculum and assessment for the approximately two million students in Ontario’s elementary and secondary schools. In a series of sweeping policy changes, the government introduced changes to school finance and governance while reorganizing the school boards. This shifted decision making on curriculum and assessment policy to a central authority. It was accompanied by rapid curriculum and assessment policy changes, along with significant changes in the traditional instruments of assessment and evaluation policy implementation, such as revised curriculum guides and new report cards. A range of new assessment policy implementation instruments was also introduced, such as exemplar booklets and an online curriculum unit planner. This paper seeks to examine the theory behind the direction of the assessment policy changes; to summarize the policy changes as reflected in policy documents; and to reflect on the instruments designed to support the assessment policy changes.

Why Assess? The Changing Purposes of Assessment and Evaluation

Traditionally, student assessment and evaluation information has been collected for the central purpose of communicating the results of student achievement (Marzano, 2000). For decades, the grading, reporting and communicating of student learning have been a key responsibility for teachers (Guskey, 1996). In the province of Ontario, Canada, teachers are required to report student progress to parents or guardians a minimum of three times during the school year. Schools and school districts also use the results of student evaluations for communication purposes when they inform their constituents of the progress of their schools or their districts. A second traditional purpose for student assessment and evaluation has been to “select and sort” students. Based on the results of certain key evaluations, students gain access to various programs or courses, such as entrance to university or college. In Ontario for example, students must pass an exit examination in literacy before they are permitted to graduate from secondary school. Traditionally, assessment results have been norm-referenced, comparing students to one another, and student achievement has been presented as a bell curve or “normal distribution” (Marzano, 2000, p. 17).

Within the last decade, however, a strong case has been put forth to broaden the purposes of student assessment and evaluation beyond the traditional ones of reporting and sorting (Marzano, 2000; McMillan, 2004; Shepard, 2000; Stiggins, 1994; Wiggins, 1998). Educators are realizing that student assessment can serve other purposes such as improving student learning, improving teaching effectiveness, and increasing the levels of student engagement with the material. Assessment and evaluation strategies can also be teaching strategies – another means to educate and to help students understand. Student assessment and evaluation tasks can also be used to support more effective planning toward meeting the learning outcomes of courses or units of study. More recently, it has also been suggested that assessment and evaluation strategies can be used to engage students more deeply in their learning. In the sections that follow, these three broader purposes for student assessment and evaluation are explored.

Wiggins (1998) sees student assessment as potentially “educative”. He advocates that student assessment can be used for the purposes of educating and improving student learning, rather than solely for reporting. This view of assessment theory can be illustrated in a simple way with two scenarios. In one classroom, students are given a multiple-choice test. The test questions are secret because the teacher wants to ensure validity and fairness in the test. The teacher has built the test from item banks of questions provided by the district. The teacher administers the test, scores it, and communicates the results with precision and a degree of certainty that the test has been rigorous. In this scenario, however, the test itself cannot be used as a learning tool, because the answers are carefully guarded so they can be reused another year. The students do not know which questions they answered incorrectly or how their thinking “went wrong”. The teacher is clearly communicating the results to the students, but the results are not a learning tool that the students can use effectively to improve. The teacher assigns a mark or a letter grade, which could potentially motivate students, but the letter grade or percentage itself does not improve student learning.

Contrast this with a second scenario where the teacher uses a more authentic, performance-based assessment. In this second scenario, the students are required to produce a product – a letter to the editor, which is one of the expectations or outcomes for their grade. The criteria for the assignment and the scoring of the assignment are posted. The teacher provides some models of the task from a booklet of samples of student work that has been provided by the district. The students complete the assignment, and then write a self-assessment of their work on this task. The self-assessment is a key task because the students analyze and explain what they understood and did not understand about the task. The teacher grades the assignments, writes feedback to the students, and reports on the results of the evaluation. This time, the students have some key information – focused, personalized feedback - that will help them to improve. The teacher is using the “test” for multiple purposes: to communicate student progress, to give students feedback for improvement, and to increase students’ understanding of both the subject and the criteria for quality work. The test has become a learning tool, giving students focused feedback to assist them in their learning.

In this second scenario, the assessment task can have further applications if the teacher uses it also to analyze the effectiveness of the teaching. Through an analysis of the errors and strengths of the students’ work, the teacher decides what lessons need to be reinforced or perhaps even taught again in a different way. The assessment has a second purpose – to guide program decisions and to help with planning.

Assessment for planning is perhaps best illustrated through the use of a circle to describe the cycle of planning learning. Picture a teacher who is receiving a new class of thirty-five students at the start of a school year. While she knows that the learning plan for the year must be based on the expectations or outcomes of the curriculum, she does not know the skills of the students on entry to her class. Experience tells her that there will be variations in reading ability and mathematics acquisition that will span several grades. Some of the students will be strong in number skills while others will be strong in the social sciences. It would be a waste of time to re-teach material that students already know, and it would also be a waste of time to introduce topics before students are ready, so the teacher undertakes some diagnostic assessment. This is not complex assessment, but a series of simple assessments designed by the teacher herself, to gauge the prior knowledge of the students. She is also looking for indicators of how the students learn, such as their reading ability; their writing ability; their ability to concentrate; and their ability to listen to instructions and process them (Sutton, 1995). The teacher is also checking the students’ skills in mathematics. Applying this information helps the teacher to select reading resources that her students can grasp, and she has some idea of where to begin in mathematics. She learns which students can handle a significant quantity of printed material on a page, and which ones will need support with written materials.

The teacher uses diagnostic assessment numerous other times during the year whenever she wants to determine students’ prior learning relative to the learning outcomes for history or science or other subjects. With so many outcomes to be taught for the subjects in the grade she is teaching, she uses diagnostic assessment for several purposes:

• To avoid repetition of previously-learned material,
• To determine connections with other subjects and prior learning,
• To plan learning that is an attainable “next step” for the students.

It is important that the learning should be a challenge to the student (not a repetition) and still attainable. Sutton (1995) refers to this intended or planned learning as the learning that is within the “extended grasp” of a student (p.22).

After the teacher has diagnosed the prior learning of the class, she plans a portion of their learning in a subject. This could be a unit, a topic or a module. The teacher asks a key assessment question, “What will students know and understand as a result of the learning in this unit?” She lists the learning outcomes for the unit, and designs a summative assessment task that will allow students to demonstrate what they have learned. This summative task or culminating task is designed to be as authentic a task as possible. The task reflects real life and is engaging or interesting to the students. The task has a defined purpose, and is generally rich or complex in its design. The task is one that is “worthy” of the efforts of the students. The teacher has decided on the intended end product of the lesson. Deciding this, she then begins to plan the unit and the daily lessons, using the summative task as her guidepost.

This is the opposite of an approach to planning that begins with the textbook or with the learning activities. Instead, the first consideration is the end product – evidence of the outcomes of student learning relative to an established standard. Wiggins and McTighe (1998) refer to this process as “backward design” (p. 8) because the teacher is going about this in the opposite way to the traditional. The teacher is considering first what can be accepted as evidence that the students have learned and have understood the learning outcomes. Wiggins and McTighe refer to the teacher as the designer who undertakes three steps: the identification of the desired results, the determination of the acceptable evidence of learning, and the planning of the learning experiences and instruction (p. 9). In order for teachers to accomplish this, Wiggins and McTighe encourage teachers to decide which of the curriculum outcomes are worth being familiar with, which ones represent important knowledge, and which curriculum outcomes represent “enduring understandings” about the subject or topic – the knowledge and skills that are at the heart of the subject discipline. The final assessment is designed to allow students to provide evidence that they have grasped these enduring understandings (p. 13).

As the lessons in the unit are taught each day, the teacher uses a third form of assessment, formative assessment, to determine whether or not the students have understood and grasped the material of the day’s lesson. In this case, the teacher is using formative assessment for two purposes: to provide ongoing feedback to the student, and to “inform” the teaching of the next lesson. Sutton (1995) refers to this process as “feed forward to the next learning task” (p. 66). This ongoing form of assessment ensures that the teacher does not go on to the next lesson until she has determined whether or not the students have mastered the learning outcomes of the previous lesson. This ongoing assessment can be time consuming, but it may also take the form of a quick homework check or an analysis of the students’ application of the lesson through written exercises. What is key is that the assessment serves a useful purpose for student learning, which is a higher form of accountability to students than completing the daily assessments in order to have a mark in a markbook. The goal is to provide clear feedback that assists the student toward the attainment of the learning outcome (Sutton, 1995). In order to do this, the feedback must be given to students in a timely way, and must be in a format that is meaningful to the students. Hattie (1992), as cited in Marzano (2000), finds in a review of 7,827 studies of education that “accurate feedback to students can increase their level of knowledge and understanding by 37 percentile points” (p. 25). Assigning a grade with an explanation of the strengths, weaknesses, and next steps is meaningful formative assessment feedback.

The teacher assigns the planned summative assessment in order to capture the extent to which students have grasped the material. She explains the intended outcomes of the summative assessment or culminating task, along with descriptors of the levels of quality in the completed work. During the completion of the summative task, she may organize ongoing formative assessment for learning, in the form of student self-assessment and peer assessments. Two of the implications of this process are that the teacher may “marginally reduce the quantity of the teaching in the interests of the quality of the learning”; and students may need explicit training in assessing against given criteria (Sutton, 1995, p. 69). The teacher uses the results of the summative assessment for purposes of reporting to parents, reporting to the teacher at the next level, and for planning the next unit (the feedforward application). The concept map below illustrates this broader purpose of assessment: the application of assessment in planning for student learning.

[Concept map: the application of assessment in planning for student learning (Dunlop-Robertson, 2005)]

A third broader purpose of assessment for learning is that assessment can be used to deepen student engagement with the learning material. Wiggins (1998) states that,

We sacrifice our aims and our children’s intellectual needs when we test what is easy to test rather than the complex and rich tasks that we value in our classrooms and that are at the heart of our curriculum. That is, we sacrifice information about what we truly want to assess and settle for score accuracy and efficiency. That sacrifice is possible only when all of us misunderstand the role assessment plays in learning. In other words, the greatest impediment to achieving the vision described is not standardized testing. Rather, the problem is the reverse: we use the tests we do because we persist in thinking of the assessment as not germane to learning and therefore best done expediently. (p. 7)

Wiggins advocates that once assessment is seen to be educative, it becomes a “major, essential, and integrated part of teaching and learning.” (p. 8). He encourages an examination of current testing practices to move toward the view of curriculum as a set of performance tests of mastery of key outcomes. For example, a test for a driver’s permit requires a performance. Teacher evaluation is based on performance. Many student skills can be demonstrated well only through a performance (such as playing an instrument, or demonstrating skills in physical education). He encourages teachers to make the performance or demonstration of the learning as “adult-like” as possible – stating that traditional tests may engage students’ attention but they do not engage students’ respect, passion and persistence (p. 16). The key to providing tasks that engage students is to use authentic forms of assessment.

Authenticity in assessment involves providing assessment tasks that have a purpose. These tasks mimic real-life and real-world applications of knowledge at a high level of intellectual skill and performance. They are tasks that students find to be engaging because they can see that the content is relevant to them for life-long learning. The tasks are generally complex. Authentic tasks involve application and synthesis and other forms of higher learning (Bloom, 1956, as cited in Wiggins, 1998). While there is generally only one right answer in a traditional test, in an authentic task the result is a quality product or performance that differs from student to student, but the indicators of quality do not change. The scoring for quality in an authentic task is made clear from the outset, and the feedback from the task is designed to provide students with next steps to consider in their learning. McMillan (2004) cites research by Brookhart (1997) finding that,

Recent research on motivation suggests that teachers must constantly assess students and provide feedback that is informative. By providing specific and meaningful feedback to students and encouraging them to regulate their own learning, teachers encourage students to enhance their sense of self-efficacy and self-confidence, important determinants of motivation. (p. 12)

An authentic assessment task also has validity; in other words, the assessment task assesses what it purports to assess. For example, asking students to write an explanation of how a microscope works could be considered more an assessment of writing than of the actual performance of correct use of a microscope. The authentic assessment task should not limit the student by its design. A valid authentic assessment task is one that allows the student to demonstrate what he or she knows, can do, and understands. Another key criterion of authentic assessment that has not, as yet, been addressed is that the authentic assessment task must be feasible, given the expected workload of the students and their teachers. In summary, authentic assessment tasks engage students for the following reasons: they are purposeful and linked to real life; they are individualized or closer to the student as a person; they allow the student to demonstrate understandings and a grasp of the knowledge and skills; and they provide focused feedback for improving student learning.

These recent advances in classroom assessment and evaluation theory are summarized by McMillan (2004). Traditional assessment of outcomes (isolated skills and facts) has been replaced by assessments that integrate outcomes and applications of knowledge. The assessment tasks are more authentic and contextualized. The standards are no longer secret but public. Assessment and evaluation no longer occur after the instruction but during the instruction, and considerable feedback is provided to the students. Single assessments have been replaced by multiple assessments. In other words, assessment of learning is being replaced by assessment for learning.

In an era of increased accountability for student learning relative to agreed-upon international standards, authentic assessment as described in this paper appears to be working against some long-held beliefs about objectivity, fairness, reliability and validity in student evaluation. Shepard (2000) explains how earlier assessment theory was based on theories of motivation, theories of cognitive development and theories of scientific measurement. Many teachers continue to believe that tests must be uniformly administered to ensure fairness and objectivity. Shepard suggests that a reconceptualization of assessment theory is needed to match new conceptions about teaching and learning. She argues that new forms of assessment are needed “to be compatible with and to support” the social-constructivist view of learning that has been advocated by key theorists such as Vygotsky (1978) because fixed theories of intelligence have been replaced “with new understanding that cognitive abilities are developed through socially supported interactions” (p.7).

Stiggins (2002) also addresses the changing landscape of assessment theory. He finds that the assessment landscape in the United States over the past fifty years has led to the clearer articulation of higher assessment standards, more rigorous assessment against those standards, and increased accountability on the part of educators. He sees a flaw, however, in the “belief in the power of accountability-oriented standardized tests to drive school improvement” (p. 762). The flaw is that only some students are motivated to higher excellence by high-stakes testing. The testing is having the opposite effect on the motivation of many other students: they are becoming discouraged learners in the face of the intimidation of the tests, and assessment policies do not seem to accommodate this concern. He advocates a more powerful vision in which assessment for learning and the assessment of learning are both important (p. 762). In order for this change to take place, he argues that teachers need the assessment tools to accomplish this task.

Research has demonstrated that improving classroom assessment – assessment for learning – can have a strong impact on student achievement. Bloom (1984), as cited in Stiggins (2002), demonstrates that changing the classroom instructional environment (one of the changes being assessment for learning) could produce “differences from one to two standard deviations in student achievement attributable to the differences between experimental and control conditions” (p. 763). In their 1998 review of the literature, Black and Wiliam determine that improving classroom assessment can raise standards, citing effect sizes of one-half to one standard deviation. More importantly, they find that improving classroom assessment advantages the lower achievers while raising the overall standards. They argue that “… standards can be raised only by changes that are put into direct effect by teachers and pupils in classrooms. There is a body of firm evidence that formative assessment is an essential component of classroom work and that its development can raise standards of achievement. We know of no other way of raising standards for which such a strong prima facie case can be made” (p. 143).

In summary, there is a theoretical and research basis that points toward the usefulness of a broader set of purposes for student assessment. In the section that follows, changes in the Ontario assessment and evaluation policies and instruments are described relative to these theoretical constructs.

Ontario Education: Curriculum and Assessment Policy Changes

In 1995, a Conservative government with an agenda of sweeping educational reform was elected in Ontario, Canada. For the next five years, the reforms to curriculum and assessment policy continued until there was virtually a complete reform of curricula for all of the grades in the school system, ending with the publication of a new Grade 12 curriculum in 2001. In published news releases, the Ministry linked some of the changes to an earlier provincial consultation report, For the Love of Learning (Queen’s Printer, 1994), while stating that other changes reflected the need to show fiscal responsibility while improving the quality of the school system.

One of the earliest reforms was the establishment of both a testing program and an “arms-length” agency of the government, the Education Quality and Accountability Office or EQAO (Queen’s Park, November 1995). At this time, there were no census assessments of Ontario students, and there were no exit examinations for secondary school. According to the press releases, the EQAO was designed to respond to the public’s demand for closer scrutiny and greater accountability. EQAO introduced a system of testing for all students in Grades 3 and 6 in Language and Mathematics, and for all students in Grade 9 in Mathematics. The test instruments are a combination of multiple-choice items and essay items. The result for the individual student is a Level from 1 to 4 reported for Language and for Mathematics. School results and district results are published.

The Ministry introduced a secondary school graduation requirement – a literacy test for students in Grade 10. This test is a performance-based literacy assessment, and the results are reported to individual students as either successful or unsuccessful. The school results and the district results are published. Students who are not successful in the test are encouraged to take remedial courses during their remaining years in secondary school.

Commencing in 1997, the Ministry of Education introduced sweeping changes to its curriculum, beginning with new policy documents in Language and Mathematics for the elementary schools. Prior to this time, the published elementary curriculum policy, The Formative Years (1967), had remained essentially unchanged for thirty years. This earlier document did not have grade-specific outcomes. Many individual school districts had developed their own grade-specific curriculum outcomes and established their own systems for assessment, evaluation and reporting of student grades. The new curriculum was intended to bring consistency across the province. At the same time, the government announced that the province’s school boards would be re-organized for efficiency. The following year, the Ministry of Education reduced the 129 major school boards to 66 new district school boards. The first task of the newly-reorganized school boards was to implement new elementary curriculum and assessment policies, working under a reduced funding model.

By 1998, new elementary curriculum was introduced for all of the subjects in elementary schools. The new curricula included grade-specific learning outcomes organized into strands. An entirely new element was also introduced at this time, intended to assist teachers with the assessment of student performance: “The Levels of Achievement Chart.” This chart is explained in the following way in The Ontario Curriculum Grades 1-8: Language 1997,

The achievement levels are brief descriptions of four possible levels of student achievement… (p. 5). A student will be assessed on how well he or she reasons, communicates, organizes ideas and applies language conventions. For each of these categories, there are four levels of achievement. These levels contain brief descriptions of degrees of achievement on which teachers will base their assessment of children’s work. (p. 8)

The introduction of the levels of achievement charts appears to have been an attempt to meet two goals: to bring greater consistency to student assessment across the province, and to broaden the levels of cognitive development at which students in the province were assessed. Teachers were to judge their assessment of student performance, not just on knowledge, but on the student’s demonstrated ability to reason, communicate, organize ideas, and to apply the skills. This was a key change for both curriculum and assessment.

In 1999, the Ministry published revised secondary school curricula for Grade 9, followed by new curriculum in each of the subsequent years for Grades 10 through 12. These curriculum documents also present grade- and course-specific learning expectations and achievement level charts for all of the subjects in secondary school. There is one difference: in the secondary school curriculum, the levels of achievement charts are more consistent from subject to subject. Secondary students are assessed across the four categories of knowledge and skills:

• Knowledge / Understanding
• Thinking / Inquiry
• Communication
• Application / Making Connections

Again, the outcome of this change is the requirement for teachers to assess student learning above the level of knowledge acquisition. The Ministry of Education also published a policy document, Program Planning and Assessment (2000). In this document, the requirements for assessment are prescriptive. In order to ensure validity and reliability, teachers are advised to conduct assessments over a period of time that are varied in nature and “designed to provide opportunities for students to demonstrate the full range of the learning” (p. 13). Teachers are advised to give students clear directions for improvement and to use samples of student work to provide evidence to substantiate marks assigned for student achievement. In this policy statement, the final grade for the course is to be determined in the following way: 70 percent of the final grade is to be based on evaluations throughout the course, and 30 percent is to be based on a final evaluation that may be “an examination, performance, essay and/or other method of evaluation suitable to the course content and administered toward the end of the course” (p. 15). In conclusion, the secondary assessment policy states that “In all of their courses, students must be provided with numerous and varied opportunities to demonstrate the full extent of their achievement of the curriculum expectations, across all four categories of knowledge and skills” (p. 15).
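As a rough illustration of the 70/30 weighting described in the policy, the calculation can be sketched as follows. The function name, the use of a simple average for the term evaluations, and the sample marks are all hypothetical choices for this example; the policy itself does not prescribe how the 70 percent term component is aggregated.

```python
# Hypothetical sketch of the 70/30 final-grade split described in the policy.
# The averaging of term marks is an assumption made for this illustration.

def final_grade(term_marks: list[float], final_evaluation: float) -> float:
    """Combine term evaluations (70%) with a final evaluation (30%).

    term_marks: percentage scores from evaluations throughout the course.
    final_evaluation: percentage score on the culminating examination,
                      performance, essay, or other final method.
    """
    if not term_marks:
        raise ValueError("at least one term evaluation is required")
    term_average = sum(term_marks) / len(term_marks)
    return round(0.7 * term_average + 0.3 * final_evaluation, 1)

# Example: strong term work (average 83.3) combined with a weaker exam (70).
print(final_grade([82, 78, 90], 70))  # 0.7 * 83.33 + 0.3 * 70 = 79.3
```

The weighting means that a single final evaluation cannot outweigh the evidence gathered throughout the course, which is consistent with the policy's emphasis on numerous and varied opportunities to demonstrate achievement.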

The current iteration of these charts is now available online for public consultation. In the current version of the charts (Ministry of Education, Ontario, 2004), teachers are encouraged to base their assessments “on clear performance standards and on a body of evidence collected over time” (p. 4). The performance standards are presented to give teachers a “common framework” to guide the development of assessment tasks across a variety of aspects, to assist in planning instruction, and to support the provision of meaningful feedback to students. In the latest iteration, the assessment categories have been standardized across all subjects as follows: knowledge and understanding; thinking; communication; and application.

With the introduction of new curriculum, the Ministry of Education also introduced new standardized provincial report cards. The report cards use letter grades to report student progress in Grades 1 to 6, and percentages to report student progress in Grades 7 to 12. One of the most significant changes of the new report cards was the requirement for teachers to evaluate students’ learning skills separately from the evaluation of their achievement of the curriculum outcomes. This presented a significant change for teachers, who had traditionally factored student effort, homework completion and other factors into the composite evaluation (percentage or letter grade) for a student.

To support the implementation of new assessment practices and new report cards for elementary and secondary schools, the Ministry of Education developed an online electronic curriculum planner, a software application for curriculum planning that contains a resource library. One of the resources is the Assessment Companion (2002), which is also available online. With this resource, teachers can review current assessment policy, review assessment literacy terms and see explanations of different assessment methods. They can utilize the curriculum planner software also to construct rubrics (an assessment checklist that provides descriptors of student work at different degrees of quality).
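To make the idea of a rubric concrete, here is a minimal sketch of how a four-level rubric might be represented and used to generate feedback. The category names echo the Ontario achievement chart, but the descriptors, data structure, and function are invented for illustration and do not reflect the planner software itself.

```python
# Illustrative sketch only: one way to model a four-level rubric.
# The descriptor wording below is a placeholder, not Ministry text.

RUBRIC = {
    "Knowledge and Understanding": {
        1: "demonstrates limited knowledge of content",
        2: "demonstrates some knowledge of content",
        3: "demonstrates considerable knowledge of content",
        4: "demonstrates thorough knowledge of content",
    },
    "Communication": {
        1: "expresses ideas with limited clarity",
        2: "expresses ideas with some clarity",
        3: "expresses ideas with considerable clarity",
        4: "expresses ideas with a high degree of clarity",
    },
}

def feedback(scores: dict[str, int]) -> list[str]:
    """Turn per-category level scores into descriptor-based feedback lines."""
    return [f"{category} (Level {level}): {RUBRIC[category][level]}"
            for category, level in scores.items()]

# Example: a student assessed at Level 3 and Level 2 in two categories.
for line in feedback({"Knowledge and Understanding": 3, "Communication": 2}):
    print(line)
```

The point of the structure is that each level carries a descriptor of quality rather than a bare number, so the feedback a student receives names what the work demonstrated, in line with the assessment-for-learning emphasis on focused feedback.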

The second major assessment implementation resource for Ontario, initiated at the time of the new report cards, is the project to provide samples or exemplars of student work. Working with teams of teachers across the province, the Ministry collected samples of student work and organized these samples into booklets that present models of student achievement across different levels of achievement in the various subjects and grades. These examples of student work are available both in print format and on the government website, so that students, parents, and teachers are able to view demonstrated student performance at different grades and levels. While there have been numerous other Ministry of Education initiatives designed to support the revised assessment methods, the initiatives described in this paper give an indication that the change was given some support. Whether or not these instruments provided sufficient support for this degree of change over a short period of time is a topic that is worthy of educational research.


This paper has attempted to outline some current trends in assessment theory, and to describe one government’s approach to changing curriculum and assessment policy in order to change teacher practice. To date, there has been insufficient research on the change in assessment practices in Ontario to indicate whether teachers have gained in assessment literacy, or whether the overall quality of student performance is increasing.

Significant research is needed to determine the level of implementation of the current curriculum and assessment policies and to identify important barriers. Leithwood, Fullan and Watson (2003) caution that, while evidence shows that pressure (such as the recent focus on accountability and student learning outcomes) helps direct attention to priority areas of student learning, pressure alone is not likely to “lead to substantial positive change, especially in the face of scarce resources and hasty implementation” (p. 6). They find that Ontario’s implementation has been “highly problematic, reducing potential benefits that might have accrued.”

One of the most controversial changes has been the Ministry of Education’s decision to separate learning skills from the reporting of student achievement on the report card. It is challenging for teachers to evaluate achievement alone without including effort, behaviour and homework completion in the final grade. The Elementary Teachers’ Federation of Ontario (2001) has recommended that a section for reporting effort be included in the next revision of the report cards. Marzano (2000), in a review of the factors included in assessment across school districts, finds that while student achievement is generally considered the most important factor in reporting grades, the factoring in of student effort has “relatively broad acceptance”. He also finds “significant support” for the inclusion of behaviour (p. 29). These findings indicate that this is just one of the topics in Ontario’s assessment policy that is ripe for future investigation.

If the key criteria for quality assessment are reliability, validity and fairness, then the changes in Ontario education have created an interesting background for research in assessment policy implementation. Ontario has redefined validity and reliability, moving from external examinations toward an increase in the range and number of assessment tasks administered by the classroom teacher. The Ministry has attempted to build quality and consistency in assessment through policy documents and numerous supports for implementation. Yet important questions remain. In the new era of educational outcomes and consistent provincial standards, what are the results of these changes? Leithwood and colleagues (2003) have given the changes a mixed review, finding that there have been some negative consequences. They find that the changes in the assessment landscape have created a “harsh environment for less advantaged and diverse student populations” (p. 7). They caution that teachers feel demoralized by the change process and see “few benefits” in most of the changes. Leithwood and colleagues argue that there has been a “lack of sustained opportunities for teachers and principals to develop the necessary understanding and expertise” (p. 7). These cautions need to be addressed through research-informed implementation strategies that help teachers become more assessment literate and develop a stronger sense of efficacy for curriculum and assessment change.

More research is needed to identify the impact of the changes in assessment policy on students, especially those who find learning challenging. Finally, studies need to be conducted in the institutions that receive the graduates of Ontario education. What is the perception of universities, colleges and workplaces regarding the knowledge and skills of Ontario graduates? The answers to these questions of accountability and quality assurance are a rich source for educational research, and are definitely worth knowing.


Black, P. & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, October 1998, p. 141.

Bloom B. S. (1956). Taxonomy of educational objectives, Handbook I: The cognitive domain. New York: David McKay Co Inc.

Elementary Teachers’ Federation of Ontario. (2001). Adjusting the optics: Assessment, evaluation and reporting. Toronto, Ontario: ETFO.

Guskey, T. (1996). Communicating student learning. ASCD Yearbook. Alexandria, VA: ASCD.

Hargreaves, A. (2001). Beyond subjects and standards: A critical view of educational reform. Toronto, ON: Ontario Association for Supervision and Curriculum Development.

Leithwood, K., Fullan, M. & Watson, N. (2003). The schools we need: A new blueprint for Ontario. Toronto, Ontario: Ontario Institute for Studies in Education of the University of Toronto.

Marzano, R. (2000). Transforming classroom grading. Alexandria, VA: ASCD.

McMillan, J. (2004). Classroom assessment: Principles and practice for effective instruction. New York: Pearson Education.

Ministry of Education, Ontario. (December 1994). For the love of learning: A report of the Royal Commission on Learning. Queen’s Park Printer for Ontario.

Ministry of Education, Ontario. (November 1995). News release. Queen’s Park Printer for Ontario.

Ministry of Education, Ontario. (2000). The Ontario Curriculum, Grades 9 to 12: Program planning and assessment. Queen’s Park Printer for Ontario.

Ministry of Education, Ontario. (2002). The Ontario Curriculum Unit Planner.

Ministry of Education, Ontario. (2002). The Assessment Companion. Queen’s Park Printer for Ontario.

Ministry of Education, Ontario. (2004). The Ontario Curriculum Achievement Charts, Grades 1–12 (Draft). Queen’s Park Printer for Ontario.

Principles for fair student assessment practices for education in Canada. (1993). Edmonton, Alberta: Joint Advisory Committee.

Semple, B. (1992). Performance assessment: An international experiment (Report No. 22-CAEP-06). The Scottish Office Education Department: IAEP Educational Testing Service; U.S. Department of Education and the National Science Foundation.

Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14.

Stiggins, R. (1994). Student-centered classroom assessment. New York: Macmillan.

Stiggins, R. (2002). Assessment crisis: The absence of assessment FOR learning. Phi Delta Kappan, 83(10), 758–765.

Sutton, R. (1995). Assessment for learning. Salford, UK: RS Publications.

Wiggins, G. & McTighe, J. (1998). Understanding by design. Alexandria, VA: ASCD.

Wiggins, G. (1998). Educative assessment. San Francisco: Jossey-Bass.

Source: East Meet West KL 2005, An International Colloquium for International Assessment, APEC Paper 1
Tags: pendidikan, school based assessment
