Tuesday, January 13, 2015

Call for Papers: Special Issue of JWA on the Common Core State Standards Assessments

Call for Papers
Special Issue of Journal of Writing Assessment
The Common Core State Standards Assessments
The Journal of Writing Assessment is interested in scholars’ and teachers’ responses to the writing assessments connected with the implementation of the Common Core State Standards. The two main consortia, the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have released various types of information about the writing assessments, including approach, use of technology, and sample items.
The assessments were piloted in 2013-14, and are being implemented in most participating states during the 2014-15 academic year. Both SBAC and PARCC are approving and releasing achievement levels based on student performance on the pilot assessments. The SBAC (http://www.smarterbalanced.org/) and PARCC (http://www.parcconline.org/) assessment instruments are reshaping the assessment of—and potentially the teaching and learning of—writing in elementary and secondary education in many states. These assessments are defining and measuring the writing skills students need for “college-and-career readiness.” This enterprise is one of the largest-scale writing assessment projects ever undertaken in the United States. Researchers need to evaluate not only the validity and reliability of these assessment instruments, but also their impacts on teaching and learning.
The Journal of Writing Assessment seeks articles that examine:
  • Theoretical stances behind the Common Core State Standards assessments,
  • Development processes for the CCSS assessment instruments,
  • Implementation of the assessments, and
  • Impacts of these assessments on writing curricula and instruction at the classroom, district, and/or state levels.

We are interested in manuscripts that explore the CCSS assessments from a variety of viewpoints, including but not limited to empirical, historical, theoretical, qualitative, experiential, and quantitative perspectives.
For inclusion in JWA 8.1, proposals (200-400 words) are due by Feb. 27, 2015 to the JWA Submission page. Full drafts of articles are due by May 30, 2015. Queries may be addressed to the JWA editors, Diane Kelly-Riley and Carl Whithaus, at journalofwritingassessment@gmail.com.
The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and non-educational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.
For more information, visit JWA online http://www.journalofwritingassessment.org/.

Monday, December 8, 2014

Review of _Digital Writing Assessment & Evaluation_ by Heidi A. McKee and Danielle Nicole DeVoss, Editors

Review of McKee, H. A., & DeVoss, D. N. (Eds.). (2013). Digital writing assessment & evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from http://ccdigitalpress.org/dwae.

ISBN: 978-0-87421-949-4

By Leslie Valley, Eastern Kentucky University

Heidi McKee and Danielle DeVoss’s 2013 digital book, Digital Writing Assessment and Evaluation (DWAE), offers theoretical and practical approaches to understanding the assessment challenges posed by digital writing. An edited collection, DWAE features a foreword by Andrea Lunsford, a preface by the editors, fourteen chapters by thirty-eight authors, and an afterword by Edward White. While the book focuses primarily on digital writing assessment in post-secondary composition education, the attention to ethics, class structure, multimodal texts, and programmatic concerns highlights key discussions in digital writing that are helpful for K-12 teachers and Writing Across the Curriculum administrators as well.

McKee and DeVoss have organized the chapters of DWAE in a practical way, first addressing the issues of fairness and privacy before moving on to discussions of classroom and programmatic implementation. In the first section, “Equity and Assessment,” Mya Poe and Angela Crow assert the importance of ethical decision-making when gathering and storing data and implementing change based on that data. Having established ethical considerations as the foundation, DWAE then delves into the more specific concerns of grading rubrics, student engagement and responsibility, e-portfolios, and program assessment.

Those looking to understand the connection between digital writing and course learning outcomes also have much to gain from DWAE. In the second and third sections, “Classroom Evaluation and Assessment” and “Multimodal Evaluation and Assessment,” the authors provide specific examples of assignments, students’ digital texts, and approaches to assessment. While they offer different frameworks for assessment, each author emphasizes the connection between assessment and assignment design, the importance of language and early discussions with students, and the necessity of contextualizing assessment. In Chapter 4, for example, Colleen Reilly and Anthony Atkins demonstrate that assessment language can be designed in such a way that it is not only understandable to students but also stimulates their motivation and engagement in the production of digital compositions. Reilly and Atkins point to a primary trait scoring approach rather than a holistic approach as a way to account for both process and product in the classroom.

In addition to classroom and assignment-specific frameworks, DWAE also offers methodologies for program assessment. In the final section, “Program Revisioning and Program Assessment,” the four chapters discuss pedagogical, institutional, and financial motivations for revising program assessment. Again, the authors connect assessment and pedagogy, demonstrating how digital platforms can provide immediate programmatic feedback on assignments, instruction, and grading rubrics and thereby prompt timely revision. They explore the potential of these digital platforms for rethinking program design and professional development for instructors. Specifically, Beth Brunk-Chavez and Judith Fourzan-Rice illustrate their experience with MinerWriter, a digital distribution system that has allowed the University of Texas at El Paso to standardize assessment. This approach, they contend, has allowed them to bridge the disconnect between assessment and instruction by identifying students’ struggles and responding with assignment revision and professional development at the programmatic level.

McKee, DeVoss, and the authors take advantage of the digital format, linking to additional information and resources, embedding videos and screenshots, and creating non-linear chapters (see, specifically, Chapter 6 by Susan Delagrange, Ben McCorkle, and Catherine Braun). These digital affordances allow DWAE to demonstrate the full rhetorical context in which these assessment models exist, providing readers with a fuller understanding of the connections between assessment, pedagogy, and digital technologies. The advantages of the digital format are especially evident in Meredith Zoeteway, Michelle Simmons, and Jeffrey Grabill’s chapter on assessment design and civic engagement. By including hyperlinks, screenshots, videos, and diagrams, they provide a complete overview of the values, goals, materials, assignments, discussions, and assessments included in a digital writing course focused on civic engagement.

In their preface, McKee and DeVoss acknowledge what DWAE does not address: digital writing and students with disabilities, and automated essay scoring (AES) (although Edward White’s afterword does foreground the need for more research on AES). Despite these absences, DWAE is a comprehensive look at digital writing assessment in a variety of contexts. Rather than offering one overarching theory of assessment, the text establishes the importance of assessment in context. The variety of contexts and proposed methodologies prompt both teachers and WPAs to consider digital writing assessment in light of their own ideological and pedagogical values and institutional settings.

Tuesday, September 23, 2014

Exciting news from the _Journal of Writing Assessment_

As you know, the Journal of Writing Assessment was founded in 2003 by Kathleen Blake Yancey and Brian Huot as an independent journal that publishes a wide range of writing assessment scholarship from a broad community of scholars and teachers. JWA was originally a print, subscription-based journal published by Hampton Press. In 2011, Peggy O’Neill and Diane Kelly-Riley became editors of JWA and moved the journal to a free, open-access online publication. Hampton Press generously donated all of the print-based issues of JWA, and they are available for free on the site at http://journalofwritingassessment.org.

Since our move online, JWA has had a great deal of traffic. In the last year, the JWA site has recorded more than 25,000 visits and more than 251,000 hits. Additionally, in the past year, scholarship published by JWA has received significant attention in the Chronicle of Higher Education and Inside Higher Education. We are indexed in ERIC, MLA, and CompPile.org.

So we'd like to update you about exciting news at the Journal of Writing Assessment:

Beginning January 2015, Carl Whithaus of the University of California, Davis will replace Peggy O’Neill as co-editor of JWA. Carl has an extensive and impressive record as a scholar and practitioner of writing assessment.

Carl’s appointment as co-editor will continue to position JWA as a journal that makes peer-reviewed scholarship about writing assessment accessible to a wide audience. His expertise in automated scoring of writing and his connections with the National Writing Project will greatly benefit JWA as the move to mandated assessments continues—both in the K-12 setting and in higher education. We’re committed to publishing a wide range of scholarship that can inform the quickly changing landscape of writing assessment in educational settings.

Additionally, our associate editor, Jessica Nastal-Dema will continue in her role with JWA as she transitions to a faculty position at Georgia Southern University. 

Likewise, we continue to engage graduate students who are up-and-coming scholars of writing assessment in our work. Tialitha Macklin, PhD candidate at Washington State University, continues in her Assistant Editor role, and David Bedsole and Bruce Bowles, PhD students at Florida State University, will co-edit the JWA Reading List.

We are pleased to announce the redesign of the Journal of Writing Assessment site. We refreshed the look and added a search function so that the entire site (including PDFs) is searchable. This redesign makes the excellent scholarship published by JWA much more accessible to a wider audience. JWA is hosted and designed by Twenty Six Design.

Finally, we want to acknowledge the financial support of the University of Idaho's College of Letters, Arts and Sciences and Department of English.  Their generous support enables JWA to remain an independent journal.

Diane Kelly-Riley, University of Idaho, and Peggy O'Neill, Loyola University Maryland, Editors

Monday, September 1, 2014

Part II: Review of Handbook of Automated Essay Evaluation: Current Applications and New Directions. Eds. Mark D. Shermis and Jill Burstein

Shermis, M. D., & Burstein, J. (Eds.). (2013). Handbook of automated essay evaluation: Current applications and new directions. New York, NY: Routledge.

By Lori Beth De Hertogh, Washington State University

This is the second installment of a two-part review of the Handbook of Automated Essay Evaluation: Current Applications and New Directions edited by Mark D. Shermis, University of Akron, and Jill Burstein, Educational Testing Service. Part I explains the workflow of several scoring systems and provides an overview of platform options. Part II discusses how various chapters deal with automated essay evaluation (AEE) in classroom contexts as well as advances in machine scoring.

Individuals interested in learning how to apply automated essay evaluation to classroom assessment contexts will appreciate Norbert Elliot and Andrew Klobucar’s chapter, “Automated Essay Evaluation and the Teaching of Writing.” Elliot and Klobucar, professors of English at the New Jersey Institute of Technology, argue that AEE can enhance students’ learning experiences when used judiciously. They also highlight how they have “identified evidence to support the use of AEE in first-year writing” so long as “special care” is taken to observe its impact on certain student populations and to investigate its overall influence on first-year writing programs (p. 27).

Another chapter of interest to classroom educators is Changhua Rich (CTB), Christina Schneider (CTB/McGraw-Hill), and Juan D’Brot’s (West Virginia Department of Education) “Applications of Automated Essay Evaluation in West Virginia.” This chapter outlines how West Virginia Writes™, a customizable online scoring engine, was implemented in K-12 classrooms across the state. The authors explain that this program reduced the time teachers spent scoring essays and provided “students with valuable practice to build writing skills and confidence” (p. 102). They argue that improvements in machine scoring (i.e., increased ability to accurately identify and score traits such as organization, sentence structure, and development) make programs like West Virginia Writes better equipped to help students improve their writing abilities. While such a claim is debatable, individuals working in educational measurement and writing assessment might see this research as a starting point for investigating how customizable scoring tools can be used in writing classrooms.

Sara Weigle, professor of applied linguistics at Georgia State University, argues in Chapter Three that AEE is a useful tool for generating error-analysis feedback in second-language learning environments, a process which “holds a promise of reducing teachers’ burdens and helping students become more autonomous” (p. 47). She also suggests that AEE’s ability to provide instant, computer-based feedback on grammatical errors allows “students to save face in a way that submitting their writing to teachers does not” (p. 47). As scholarship on error and English language learning indicates,[1] teachers tend to respond more harshly to errors made by multilingual writers than native English speakers. Automated systems designed to provide feedback on grammatical errors may prove useful in helping to reduce teacher bias.

A long-standing complaint about AEE, particularly within the writing community, is that machine scoring does not produce a valid measurement of a student’s writing ability. Several chapters in the collection address this issue by suggesting that rather than focusing on the validity of machine scoring, educators should consider alternative ways AEE can assist teachers in classroom settings. In Chapter Fifteen, for example, authors Michael Gamon (Microsoft), Martin Chodorow (Hunter College and the Graduate Center of the City University of New York), Claudia Leacock (CTB/McGraw-Hill), and Joel Tetreault (Educational Testing Service) advocate for the use of automated essay evaluation as a tool for providing formative feedback on grammatical and sentence-level errors in second-language learning environments. Like Weigle, Gamon and his colleagues suggest that error-analysis feedback may “improve the quality of the user’s writing by highlighting errors, describing the types of mistakes the writer has made, and suggesting corrections” (p. 263). Rather than using automated essay evaluation as a means to determine students’ writing abilities, these authors view AEE as a tool—or even as an unbiased tutor—students can use to improve specific aspects of their writing.

Individuals working as writing program administrators or measurement technologists will be interested in several of the AEE advances highlighted in this collection. Chapter Fourteen, “Using Automated Scoring to Monitor Reader Performance and Detect Reader Drift in Essay Scoring,” focuses on the ability of automated scoring systems to “monitor hand-scoring accuracy” (p. 234). Drift occurs when human raters assign scores that are inconsistent or that fall outside an accepted variable range, thereby compromising “the validity of student scores” (p. 234). A scoring engine detects rater drift by comparing human raters’ scores to a model that emulates human scoring behavior; results can indicate whether a particular cohort of raters demonstrates drift in their scoring samples. Unlike traditional monitoring techniques, which require a read-behind or second read (often by putting a testing sample back into a pool of raters), an automated system can efficiently assess a large number of scores without burdening raters with rereads.
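To illustrate the basic comparison involved, here is a minimal, hypothetical sketch of such a drift check in Python; the score records, tolerance value, and rater labels are invented for illustration, and the statistical models real systems use are far more sophisticated than a mean absolute difference.

```python
# Hypothetical drift check: compare each human rater's scores with an
# automated model's scores on the same essays and flag raters whose mean
# absolute discrepancy exceeds a tolerance. All data here is invented.
from statistics import mean

# Each record: (rater, essay_id, human_score, model_score)
records = [
    ("rater_A", 1, 4, 4), ("rater_A", 2, 3, 3), ("rater_A", 3, 5, 4),
    ("rater_B", 1, 2, 4), ("rater_B", 2, 5, 3), ("rater_B", 3, 2, 4),
]

TOLERANCE = 1.0  # maximum acceptable mean absolute difference


def detect_drift(score_records, tolerance=TOLERANCE):
    """Return raters whose average disagreement with the model exceeds tolerance."""
    by_rater = {}
    for rater, _, human, model in score_records:
        by_rater.setdefault(rater, []).append(abs(human - model))
    return {r: mean(d) for r, d in by_rater.items() if mean(d) > tolerance}


print(detect_drift(records))  # -> {'rater_B': 2.0}
```

In this toy run, rater_B consistently disagrees with the model by two points, so that rater would be flagged for a closer look, while rater_A is not.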

Other advances in AEE of interest to those working in educational measurement, cognitive psychology, and computational linguistics include those discussed in Chapter Seventeen, which highlights original research on AEE and sentiment analysis, the automated detection of a writer’s use of personal opinion statements (i.e., “I believe that…”). Authors Jill Burstein, Beata Beigman-Klebanov (ETS), Nitin Madnani (ETS), and Adam Faulkner (City University of New York) argue that the ability of automated scoring systems to recognize sentiment can help in identifying “the quality of argumentation in student and test-taker essay writing” (p. 282). A scoring engine, for instance, can use natural language processing to detect whether a student has stated his or her opinion in a writing sample that requires a personal response; the absence of a personal opinion may indicate that the writer is not on task or does not understand the prompt.
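As a rough illustration of that kind of on-task check, the following sketch flags responses that contain no first-person opinion marker. The marker list and flagging rule are illustrative assumptions of mine and are much simpler than the natural language processing the authors describe.

```python
# Hypothetical on-task check: scan a response for first-person opinion
# markers as a crude stand-in for real sentiment analysis. The marker list
# and flagging rule are illustrative, not any vendor's actual method.
import re

OPINION_MARKERS = [
    r"\bI (believe|think|feel|argue)\b",
    r"\bin my (opinion|view)\b",
    r"\bfrom my perspective\b",
]


def states_opinion(response: str) -> bool:
    """Return True if the response contains at least one opinion marker."""
    return any(re.search(p, response, re.IGNORECASE) for p in OPINION_MARKERS)


prompt_requires_opinion = True
response = "Many factors contribute to the outcome, according to the article."

if prompt_requires_opinion and not states_opinion(response):
    print("Flag: no personal opinion detected; the writer may be off task.")
```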

In reviewing the Handbook of Automated Essay Evaluation: Current Applications and New Directions, I have come to two conclusions. The first is that by educating ourselves about the capabilities of automated essay evaluation, those of us involved in writing assessment can make more informed choices about the uses and implications of machine scoring. Second, while I am not a supporter of AEE, this collection makes me wonder whether automated scoring systems can be fruitful if used judiciously, particularly in English language learning contexts or as tools for identifying rater drift. The truth is that automated essay evaluation is, in some form or another, here to stay. This means we must continue to critically engage with these tools and their proponents.

[1] See Peggy Lindsey and Deborah Crusan’s article, “How Faculty Attitudes and Expectations Toward Student Nationality Affect Writing Assessment” and Lyndall Nairn’s work, “Faculty Response to Grammar Errors in the Writing of ESL Students.”

Sunday, August 24, 2014

Part I: Review of Handbook of Automated Essay Evaluation: Current Applications and New Directions. Eds. Mark D. Shermis and Jill Burstein

Shermis, M. D., & Burstein, J. (Eds.). (2013). Handbook of automated essay evaluation: Current applications and new directions. New York, NY: Routledge.

By Lori Beth De Hertogh, Washington State University

The Handbook of Automated Essay Evaluation: Current Applications and New Directions, edited by Mark D. Shermis, University of Akron, and Jill Burstein, Educational Testing Service, features twenty chapters, each of which deals with a different aspect of automated essay evaluation (AEE). The overall purpose of the collection is to help professionals (i.e., educators, program administrators, researchers, testing specialists) working in a range of assessment contexts in K-12 and higher education better understand the capabilities of AEE. It also strives to demystify machine scoring and to highlight advances in several scoring platforms.

The collection is loosely organized into three parts. Authors of the first three chapters discuss automated essay evaluation in classroom contexts. The next section examines the workflow of various scoring engines. In the final section, authors highlight advances in automated essay evaluation. My two-part review generally follows this organizational scheme, except that I begin by examining the workflow of several scoring systems as well as platform options. I then review how several chapters describe potential uses of AEE in classroom contexts and recent developments in machine scoring.

The Handbook of Automated Essay Evaluation devotes considerable energy to explaining how scoring engines work. Matthew Schultz, director of psychometric services for Vantage Learning, describes in Chapter Six how the IntelliMetric™ engine analyzes and scores a text:

The IntelliMetric system must be ‘trained’ with a set of previously scored responses drawn from expert raters or scorers. These papers are used as a basis for the system to ‘learn’ the rubric and infer the pooled judgments of the human scorers. The IntelliMetric system internalizes the characteristics or features of the responses associated with each score point and applies this intelligence to score essays with unknown scores. (p. 89)

While the methods that platforms like IntelliMetric use to determine a score differ slightly, they all employ a multistage process consisting of four basic steps (a minimal sketch follows the list):
  • receiving the text,
  • using natural language processing to parse text components such as structure, content, and style,
  • analyzing the text against a database of previously human- and machine-scored texts, and
  • producing a score based on how similar or dissimilar the text is to previously rated texts.
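To make those four steps concrete, here is a minimal, hypothetical Python sketch of such a pipeline. The feature set, similarity measure, and tiny "training" corpus are invented for illustration; engines such as IntelliMetric or e-rater rely on far richer NLP features and proprietary statistical models.

```python
# Hypothetical four-step scoring pipeline: receive text, parse features,
# compare against previously scored essays, and produce a score.
import re
from statistics import mean


def extract_features(text: str) -> dict:
    """Step 2: parse crude proxies for structure, content, and style."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "unique_word_ratio": len(set(w.lower() for w in words)) / max(len(words), 1),
    }


def similarity(a: dict, b: dict) -> float:
    """Crude inverse-distance similarity between two feature dictionaries."""
    diffs = [abs(a[k] - b[k]) / (abs(b[k]) + 1e-9) for k in a]
    return 1 / (1 + mean(diffs))


def score_essay(text: str, scored_corpus: list) -> int:
    """Steps 1-4: receive the text, extract features, compare it to
    previously scored essays, and return the score of the closest match."""
    features = extract_features(text)                      # steps 1-2
    _, best_score = max(                                   # step 3
        scored_corpus, key=lambda pair: similarity(features, pair[0])
    )
    return best_score                                      # step 4


# Tiny illustrative corpus of (features, human score) pairs.
corpus = [
    (extract_features("Short and choppy. Few ideas. The end."), 2),
    (extract_features(
        "The essay develops a clear argument across several well-formed "
        "sentences, supporting each claim with relevant and varied evidence."
    ), 5),
]

print(score_essay("A brief answer with little development.", corpus))  # -> 2
```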
In Chapter Eight, Elijah Mayfield and Carolyn Penstein Rosé, language and technology specialists at Carnegie Mellon University, demonstrate how this four-step process works by describing the workflow of LightSIDE, an open source machine scoring engine and learning tool. In doing so, they illustrate how the program is able to match or exceed “human performance nearly universally” due to its ability to track and develop large-scale aggregate data based on text data. Mayfield and Rosé argue that this feature allows LightSIDE to tackle “the technical challenges of data collection” in diverse assessment contexts (p. 130). They also emphasize that this capability can help users curate large-scale data based on error-analysis. Writing specialists can then use this information to identify areas (i.e. grammar, sentence structure, organization) where students need instructional and institutional support.
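The following is a minimal, hypothetical sketch of the kind of aggregation Mayfield and Rosé describe: rolling up per-essay error annotations into program-level counts so writing specialists can see where students most need support. The error categories, records, and output format are invented and do not reflect LightSIDE's actual interface.

```python
# Hypothetical aggregation of per-essay error annotations into counts that
# could guide instructional and institutional support decisions.
from collections import Counter

# Each record: (student_id, error_category) produced by some analysis step.
error_records = [
    (101, "comma splice"), (101, "organization"),
    (102, "subject-verb agreement"), (103, "comma splice"),
    (103, "comma splice"), (104, "organization"),
]

totals = Counter(category for _, category in error_records)
for category, count in totals.most_common():
    print(f"{category}: {count}")  # e.g., "comma splice: 3"
```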

Chapter Four, “The e-rater® Automated Essay Scoring System,” provides a “description of e-rater’s features and their relevance to the writing construct” (p. 55). Authors Jill Burstein, Joel Tetreault, and Nitin Madnani, research scientists at Educational Testing Service, stress that the workflow capabilities of scoring systems like e-rater or Criterion (a platform developed by ETS) make them useful tools for providing students with immediate, relevant feedback on the grammatical and structural aspects of their writing, as well as useful in administrative settings where access to aggregate data is critical (pp. 64-65). The authors argue that e-rater’s ability to generate a range of data makes it an asset in responding to both local and national assessment requirements (p. 65).

In Chapter Nineteen, “Contrasting State-of-the-Art Automated Scoring of Essays,” authors Mark D. Shermis and Ben Hamner (Kaggle) compare nine scoring engines’ responses to a variety of prompts in order to assess the workflow and performance levels of each system; the engines include Intelligent Essay Assessor, LightSIDE, e-rater, and Project Essay Grade. This chapter may be particularly useful to individuals tasked with determining which type of automated evaluation system to adopt or replace. In addition, it provides a brief guide to understanding how a variety of systems operate and an overview of “vendor variability in performance” (p. 337).

The Handbook of Automated Essay Evaluation: Current Applications and New Directions provides assessment scholars, practitioners, and writing teachers with relevant information about the workflow of various scoring engines and about how these systems’ capabilities can be applied to a range of educational settings. By understanding how these systems work and what their potential applications are, individuals tasked with writing assessment can make more informed choices about the potential benefits and consequences of adopting automated essay evaluation.

Tuesday, March 18, 2014

JWA at RNF and CCCC in Indianapolis!

JWA will be at the Research Network Forum and CCCC in Indianapolis, March 19-22, 2014.

JWA will be at the Editors' Roundtable discussion on Wednesday, March 19, 2014 from 1:15-2:30 pm.

 If you would like to talk to someone from JWA about a potential project, you can reach Peggy O'Neill at poneill1 [at] loyola [dot] edu or you can contact Jessica Nastal-Dema at jlnastal [at] uwm [dot] edu.

See you there!

Wednesday, February 5, 2014

Review of _Building Writing Center Assessments that Matter_ by Ellen Schendel and William Macauley

Review of Building Writing Center Assessments that Matter by Ellen Schendel and William Macauley (2012). Utah State University Press.

ISBN 978-0-87421-816-9, paper $28.95; ISBN 978-0-87421-834-3 e-book $22.95

By Marc Scott, Shawnee State University

     Ellen Schendel and William Macauley’s 2012 book, Building Writing Center Assessments that Matter (Building), is a co-authored text featuring an introduction and coda by both authors, three chapters authored by Macauley, three by Schendel, a brief interchapter by Neal Lerner, and an afterword by Brian Huot and Nicole Caswell. Much of Building explores how important writing assessment scholarship can apply to writing center program assessment, often drawing on specific examples from the authors’ experiences directing writing centers. Schendel and Macauley’s goal in writing Building is to provide Writing Center Directors (WCDs) new to program assessment with a text that speaks specifically to the unique needs and opportunities of writing center work. While the text is geared toward helping WCDs navigate program assessment, Building also provides assessment scholars and practitioners with important ideas and concepts for program assessment, including how to frame assessment and how to think through methodological options.
     Those wishing to develop a culture of assessment at their institution can learn much from Schendel and Macauley’s text. Throughout Building, the authors use tutoring and writing processes as metaphors for assessment work. Just as writers gain invaluable insights by sharing their work with other writers, sharing assessment projects and data with peers only benefits writing assessment. Furthermore, in Writing Center scholarship and practice, tutors strive to help a writer establish a healthy writing process rather than just proofread or edit a text. When applied to writing assessment, a similar emphasis on process over product might help instructors and students engage in assessment as a reciprocal and recursive form of inquiry that improves the writer holistically, rather than a linear process with one correct approach for each context (p. xix). In addition, the assessment process—much like the writing process—benefits from careful attention to exigency, context, purpose, and audience. Using the recursion of writing processes and the context-sensitive nature of tutoring as metaphors for assessment may provide an accessible concept for colleagues reluctant to embrace assessment.
     Writing assessment practitioners can also benefit from Building’s discussion of assessment methodologies. Schendel describes how Writing Center Directors should work to connect a program assessment’s methodology with each specific project’s purpose, audience, and available data. In fact, Schendel provides a useful chart that describes different forms of data a WCD might collect and explains how the data might be collected and who might collaborate in such efforts (pp. 127-131). The design of a writing assessment—be it a placement exam, a portfolio program, or a classroom assessment technique—should take the assessment’s context and purpose into account at each stage of the process, not just when analyzing results; the assessment should remain sensitive to the context of the student and the classroom throughout. Neal Lerner’s brief interchapter helps WCDs understand how qualitative and quantitative assessment methodologies might impact assessment projects in writing centers, and his thoughts can also help persuade those reluctant to assess. He argues against “maintaining the status quo” and operating on only a “felt sense” of the work done in Writing Centers (p. 113). Classroom teachers and WPAs might also feel like they “know” their classrooms, but unless they can provide evidence through assessment for what they know, their claims will fail to persuade important stakeholders.

     Building, while effectively tailored to the needs of WCDs, provides assessment scholars and practitioners with useful metaphors for discussing assessment and a thoughtful discussion of assessment methodologies. The bulk of the text provides important information for those interested in programmatic assessment, but it does so by thoughtfully weaving together assessment scholarship in a way relevant to writing centers.