Prepared by CCCC Committee on Assessment, November 2006 (revised March 2009, reaffirmed November 2014)
Writing assessment can be used for a variety of appropriate purposes, both inside the classroom and outside: providing assistance to students, awarding a grade, placing students in appropriate courses, allowing them to exit a course or sequence of courses, certifying proficiency, and evaluating programs, to name some of the more obvious. Given the high-stakes nature of many of these assessment purposes, it is crucial that assessment practices be guided by sound principles to ensure that they are valid, fair, and appropriate to the context and purposes for which they are designed. This position statement aims to provide that guidance.
In spite of the diverse uses to which writing assessment is put, the general principles undergirding it are similar:
Assessments of written literacy should be designed and evaluated by well-informed current or future teachers of the students being assessed, for purposes clearly understood by all the participants; should elicit from student writers a variety of pieces, preferably over a substantial period of time; should encourage and reinforce good teaching practices; and should be solidly grounded in the latest research on language learning as well as accepted best assessment practices.
Guiding Principles for Assessment
1. Writing assessment is useful primarily as a means of improving teaching and learning. The primary purpose of any assessment should govern its design, its implementation, and the generation and dissemination of its results.
As a result…
A. Best assessment practice is informed by pedagogical and curricular goals, which are in turn formatively affected by the assessment. Teachers or administrators designing assessments should ground the assessment in the classroom, program, or departmental context. The goals or outcomes assessed should yield assessment data that are fed back to those involved in the activities assessed, so that the results can be used to make changes in practice.
B. Best assessment practice is undertaken in response to local goals, not external pressures. Even when external forces require assessment, the local community must assert control of the assessment process, including selection of the assessment instrument and criteria.
C. Best assessment practice provides regular professional development opportunities. Colleges, universities, and secondary schools should make use of assessments as opportunities for professional development and for the exchange of information about student abilities and institutional expectations.
2. Writing is by definition social. Learning to write entails learning to accomplish a range of purposes for a range of audiences in a range of settings.
As a result…
A. Best assessment practice engages students in contextualized, meaningful writing. The assessment of writing must strive to set up writing tasks and situations that identify purposes appropriate to and appealing to the particular students being tested. Additionally, assessment must be contextualized in terms of why, where, and for what purpose it is being undertaken; this context must also be clear to the students being assessed and to all stakeholders.
B. Best assessment practice supports and harmonizes with what practice and research have demonstrated to be effective ways of teaching writing. What is easiest to measure—often by means of a multiple-choice test—may correspond least to good writing; choosing a correct response from a set of possible answers is not composing. Just as important, merely asking students to write does not make the assessment instrument a good one. Essay tests that ask students to form and articulate opinions about some important issue, for instance, without time to reflect, talk to others, read on the subject, revise, or address a human audience promote distorted notions of what writing is. They also encourage poor teaching and little learning. Even teachers who recognize and employ the methods used by real writers in working with students can find their best efforts undercut by assessments such as these.
C. Best assessment practice is direct assessment by human readers. Assessment that isolates students and forbids discussion and feedback from others conflicts with what we know about language use and the benefits of social interaction during the writing process; it also is out of step with much classroom practice. Direct assessment in the classroom should provide response that serves formative purposes, helping writers develop and shape ideas, as well as organize, craft sentences, and edit. As stated by the CCCC Position Statement on Teaching, Learning, and Assessing Writing in Digital Environments, “we oppose the use of machine-scored writing in the assessment of writing.” Automated assessment programs do not respond as human readers. While they may promise consistency, they distort the very nature of writing as a complex and context-rich interaction between people. They simplify writing in ways that can mislead writers to focus more on structure and grammar than on what they are saying by using a given structure and style.
3. Any individual's writing ability is a sum of a variety of skills employed in a diversity of contexts, and individual ability fluctuates unevenly among these varieties.
As a result…
A. Best assessment practice uses multiple measures. One piece of writing—even if it is generated under the most desirable conditions—can never serve as an indicator of overall writing ability, particularly for high-stakes decisions. Ideally, writing ability must be assessed by more than one piece of writing, in more than one genre, written on different occasions, for different audiences, and responded to and evaluated by multiple readers as part of a substantial and sustained writing process.
B. Best assessment practice respects language variety and diversity and assesses writing on the basis of effectiveness for readers, acknowledging that as purposes vary, criteria will as well. Standardized tests that rely more on identifying grammatical and stylistic errors than authentic rhetorical choices disadvantage students whose home dialect is not the dominant dialect. Assessing authentic acts of writing simultaneously raises performance standards and provides multiple avenues to success. Thus students are not arbitrarily punished for linguistic differences that in some contexts make them more, not less, effective communicators. Furthermore, assessments that are keyed closely to an American cultural context may disadvantage second language writers. The CCCC Statement on Second Language Writing and Writers calls on us "to recognize the regular presence of second-language writers in writing classes, to understand their characteristics, and to develop instructional and administrative practices that are sensitive to their linguistic and cultural needs." Best assessment practice responds to this call by creating assessments that are sensitive to the language varieties in use among the local population and sensitive to the context-specific outcomes being assessed.
C. Best assessment practice includes assessment by peers, instructors, and the student writer himself or herself. Valid assessment requires combining multiple perspectives on a performance and generating an overall assessment out of the combined descriptions of those multiple perspectives. As a result, assessments should include formative and summative assessments from all these kinds of readers. Reflection by the writer on her or his own writing processes and performances holds particular promise as a way of generating knowledge about writing and increasing the ability to write successfully.
4. Perceptions of writing are shaped by the methods and criteria used to assess writing.
As a result…
A. The methods and criteria that readers use to assess writing should be locally developed, deriving from the particular context and purposes for the writing being assessed. The individual writing program, institution, or consortium should be recognized as a community of interpreters whose knowledge of context and purpose is integral to the assessment. There is no test that can be used in all environments for all purposes, and the best assessment for any group of students must be locally determined and may well be locally designed.
B. Best assessment practice clearly communicates what is valued and expected, and does not distort the nature of writing or writing practices. If ability to compose for various audiences is valued, then an assessment will assess this capability. For other contexts and purposes, other writing abilities might be valued, for instance, to develop a position on the basis of reading multiple sources or to compose a multi-media piece, using text and images. Values and purposes should drive assessment, not the reverse. A corollary to this statement is that assessment practices and criteria should change as conceptions of texts and values change.
C. Best assessment practice enables students to demonstrate what they do well in writing. Standardized tests tend to focus on readily accessed features of the language (grammatical correctness, stylistic choices) and on error rather than on the appropriateness of the rhetorical choices that have been made. Consequently, the outcome of such assessments is negative: students are said to demonstrate what they do wrong with language rather than what they do well. Quality assessments will provide the opportunity for students to demonstrate the ways they can write, displaying the strategies or skills taught in the relevant environment.
5. Assessment programs should be solidly grounded in the latest research on learning, writing, and assessment.
As a result…
A. Best assessment practice results from careful consideration of the costs and benefits of the range of available approaches. It may be tempting to choose an inexpensive, quick assessment, but decision-makers should consider the impact of assessment methods on students, faculty, and programs. The return on investment from the direct assessment of writing by instructor-evaluators includes student learning, professional development of faculty, and program development. These benefits far outweigh the presumed benefits of cost, speed, and simplicity that machine scoring might seem to promise.
B. Best assessment practice is continually under review and subject to change by well-informed faculty, administrators, and legislators. Anyone charged with the responsibility of designing an assessment program must be cognizant of the relevant research and must stay abreast of developments in the field. The theory and practice of writing assessment is continually informed by significant publications in professional journals and by presentations at regional and national conferences. The easy availability of this research to practitioners makes ignorance of its content reprehensible.
Applications to Assessment Settings
The guiding principles apply to assessment conducted in any setting. In addition, we offer the following guidelines for situations that may be encountered in specific settings.
Assessment in the Classroom
In a course context, writing assessment should be part of the highly social activity within the community of faculty and students in the class. This social activity includes:
- a period of ungraded work (prior to the completion of graded work) that receives response from multiple readers, including peer reviewers,
- assessment of texts—from initial through to final drafts—by human readers, and
- more than one opportunity to demonstrate outcomes.
Self-assessment should also be encouraged. Assessment practices and criteria should match the particular kind of text being created and its purpose. These criteria should be clearly communicated to students in advance so that the students can be guided by the criteria while writing.
Assessment for Placement
Placement criteria in the most responsible programs will be clearly connected to any differences in the available courses. Experienced instructor-evaluators can most effectively make a judgment regarding which course would best serve each student’s needs and assign each student to the appropriate course. If scoring systems are used, scores should derive from criteria that grow out of the work of the courses into which students are being placed.
Decision-makers should carefully weigh the educational costs and benefits of timed tests, portfolios, directed self-placement, etc. In the minds of those assessed, each of these methods implicitly establishes its own values, so the first impact is likely to be on what students come to believe about writing. For example, timed writing may suggest to students that real writing is always produced under time pressure and that writing is always a test. Machine-scored tests may focus students on error-correction rather than on effective communication. In contrast, the value of portfolio assessment is that it honors the processes by which writers develop their ideas and re-negotiate how their communications are heard within a language community.
Students should have the right to weigh in on their assessment. Self-placement without direction may become merely a right to fail, whereas directed self-placement, either alone or in combination with other methods, provides not only useful information but also involves and invests the student in making effective life decisions.
If for financial or even programmatic reasons the initial method of placement is somewhat reductive, instructors of record should create an opportunity early in the semester to review and change students' placement assignments, and uniform procedures should be established to facilitate the easy re-placement of improperly placed students. Even when the placement process entails direct assessment of writing, the system should accommodate the possibility of improper placement. If assessment employs machine scoring, whether of actual writing or of items designed to elicit error, it is essential that every effort be made, including statistical verification, to see that students, individually and collectively, are placed in courses that can appropriately address their skills and abilities.
Placement processes should be continually assessed and revised in accord with course content and overall program goals. This is especially important when machine-scored assessments are used. Using methods that are employed uniformly, teachers of record should verify that students are appropriately placed. If students are placed according to scores on such tests, the ranges of placement must be revisited regularly to accommodate changes in curricula and shifts in the abilities of the student population.
Assessment of Proficiency
Proficiency or exit assessment involves high stakes for students. In this context, assessments that make use of substantial and sustained writing processes are especially important.
Judgments of proficiency must also be made on the basis of performances in multiple and varied writing situations (for example, a variety of topics, audiences, purposes, genres).
The assessment criteria should be clearly connected to desired outcomes. When proficiency is being determined, the assessment should be informed by such things as the core abilities adopted by the institution, the course outcomes established for a program, and/or the stated outcomes of a single course or class. Assessments that do not address such outcomes lack validity in determining proficiency.
The higher the stakes, the more important it is that assessment be direct rather than indirect, based on actual writing rather than on answers on multiple-choice tests, and evaluated by people involved in the instruction of the student rather than via machine scoring. To evaluate the proficiency of a writer on criteria other than multiple writing tasks and situations is essentially disrespectful of the writer.
Assessment of Programs
Program assessment refers to evaluations of performance in a large group, such as students in a multi-section course or majors graduating from a department. Because assessment offers information about student performance and the factors which affect that performance, it is an important way for programs or departments to monitor and develop their practice.
Programs and departments should see themselves as communities of professionals whose assessment activities reveal common values, provide opportunities for inquiry and debate about unsettled issues, and communicate measures of effectiveness to those inside and outside the program. Members of the community are in the best position to guide decisions about what assessments will best inform that community. It is important to bear in mind that random sampling of students can often provide large-scale information and that regular assessment should affect practice.
Assessment for School Admission
Admissions tests are high stakes not only for students but also for educational institutions, which must determine whether they and a student are an appropriate match. Consequently, where students' writing ability is a factor in the admissions decision, the writing assessments should consist of direct measures of actual writing. Moreover, the assessment should consist of multiple writing tasks and should allow sufficient time for a student to engage in all stages of the writing process.
Assessments should be appropriate to educational institutions’ distinctive missions and student populations, although similar institutions may collaborate to create assessments. Assessment should be developed in consultation with high school writing teachers.
This position statement may be printed, copied, and disseminated without permission from NCTE.