Wednesday, July 13, 2016

JWA at CWPA 2016 in Raleigh, North Carolina

Journal of Writing Assessment editorial team members will attend the annual meeting of the Council of Writing Program Administrators in Raleigh, North Carolina, July 14-17, 2016. Diane Kelly-Riley, Carl Whithaus, Jessica Nastal-Dema, and Ti Macklin will be at CWPA--please contact one of them if you'd like to discuss a potential project for JWA!

Also, authors from the 2015 JWA Special Issue on the Impact of the Common Core State Standards will present on Friday, July 15, 2016, from 1:55-3:10 PM in Hannover I at the CWPA Conference. Please join us for "Impacts and Implications of the Common Core State Standards Assessments for WPAs, Writing Faculty, and Postsecondary Writing Instruction." The panel comprises Diane Kelly-Riley, University of Idaho; Brad Jacobson, University of Arizona; Sherry Rankins-Robertson, University of Arkansas at Little Rock; and Duane Roen, Arizona State University; with Session Chair Tialitha Macklin, formerly of Washington State University and now California State University, Sacramento; and Respondent Carl Whithaus, University of California-Davis.

Friday, April 8, 2016

Special Issue on the Theory of Ethics for Writing Assessment

The Journal of Writing Assessment is pleased to announce the publication of the 2016 Special Issue on the Theory of Ethics for Writing Assessment authored by David Slomp, Norbert Elliot, Mya Poe, John Cogan, Jr., Bob Broad, and Ellen Cushman.

We're pleased to share this excellent and important research with you. Access this special issue at http://journalofwritingassessment.org.


JWA at the Independent Rhetoric and Composition Journals Table TODAY

Are you at 4C16? Stop by and meet Bruce Bowles, Jr., of Florida State University, co-editor of the JWA Reading List. He'll be at the Independent Rhetoric and Composition Journals Table today in the Exhibition Hall on the Fourth Floor of the Hilton of the Americas.

He'd love to talk with you about potential reviews for the JWA Reading List or give you more information about publication opportunities through the Journal of Writing Assessment.

Friday, April 1, 2016

JWA at 4Cs in Houston, Texas!

Journal of Writing Assessment editorial team members will attend the upcoming Conference on College Composition and Communication in Houston, Texas, April 5-10, 2016.

We'd love to talk with you about potential projects for the journal or for the JWA Reading List!

Diane Kelly-Riley will be at the RNF Editors' Roundtable on Wednesday, April 6, 2016, and Bruce Bowles will be at the Rhetoric and Composition Journal Editors' Table on Friday, April 8, 2016 in the Exhibition Hall.  Carl Whithaus will also be in attendance at the conference.

Stop by and see us, send us an email, or tweet us--we'd love to talk with you about ways to contribute to JWA!

Tuesday, March 8, 2016

Part IV: Review of Norbert Elliot's and Les Perelman's (Eds.) _Writing Assessment in the 21st Century: Essays in Honor of Edward M. White_

Jessica Nastal-Dema, Prairie State College

Elliot, N., & Perelman, L. (Eds.) (2012).  Writing assessment in the 21st century:  Essays in honor of Edward M. White.  New York, NY:  Hampton Press. 


Note: This is the final installment in a series of reviews; see Sections I, II, and III.



I imagine readers of Writing Assessment in the 21st Century: Essays in Honor of Edward M. White will each take away something different from their interactions with the text. It could be used as a primer on writing assessment for graduate students, experienced instructors, and WPAs who seek to learn more about the field. It can serve as an introduction to educational measurement for those of us more comfortable on the Rhetoric and Composition/Writing Studies side of things. It’s a collection of important research by some of the field’s most prominent scholars. It is a significant resource, one I have turned to several times since its publication.

As I began writing this final, delayed installment of my review, I went back to the words that first struck me when I opened Writing Assessment in the 21st Century upon its publication in 2012: “Where would we be without each other? The effort to design, develop, and complete this collection reveals the strength of community” (xi).

This book ushered me into writing assessment in a number of ways. As a PhD student, I tried to get my hands on anything I could by the authors whose writing spoke to me. I mined the bibliographies in many of the twenty-seven chapters to prepare for my qualifying exams and later, my dissertation. I felt encouraged to make connections between historical approaches to admissions and placement practices and what I was observing in twenty-first century urban settings with diverse student bodies. I was energized to continue operating under the assumption that assessment can interact with curricula and classroom practice in generative ways.

I realized that my work, regardless of how different it was from my peers’, fit into a body of scholarship. I realized that I fit into a community of scholars, a community that welcomed me.

Norbert Elliot and Les Perelman helped me translate that realization from the page to my life. Norbert willingly struck up a correspondence with me in 2012, which led to his serving as a reader on my dissertation committee, and now, to his becoming a trusted mentor and friend. Les welcomed me as a guest on the CCCC Assessment Committee, and has been generous in acknowledging my work. While I still know Edward M. White mainly through his corpus, many of us meet him at sessions at the CWPA and CCCC, observe him mentoring graduate students, and read his frequent contributions to the WPA-Listserv. Together, these leaders guide emerging scholars. They show us the incredible range of possible inquiries into writing assessment. They demonstrate the power of collaboration. They, and this collection, embody the importance—the strength—of community.  


Section IV, "Toward a valid future: The uses and misuses of writing assessment," is the last section in Norbert Elliot and Les Perelman's edited collection, Writing Assessment in the 21st Century. Each of its chapters confronts the tension between outdated methods of writing assessment and the view held by instructors and WPAs of "writing as a complex cognitive act" (p. 410).

Writing Assessment in the 21st Century brings "together the worlds of writing teachers and of writing assessment" (p. 499) as it makes clear that the educational measurement and academic communities are not always at odds and have always had at least some shared concerns. The writers in this section continue to complicate those shared concerns in productive ways. This is the only section in which every chapter is written by a member of the Rhetoric and Composition/Writing Studies community; importantly, as Elliot and Perelman make clear in their introduction (as does Elliot's On a Scale: A Social History of Writing Assessment in America, 2007), the issues these writers confront have persisted throughout the entire history of the teaching and testing of writing. In 1937, the creator of the SAT, Carl Campbell Brigham, believed testing specialists and teachers could work together, but "his vision of a new testing organization was one that favored the trained teacher over the educational measurement specialist" – an idea many teachers and assessment practitioners favor today (p. 408). And Paul Diederich proposed using multiple samples of student writing "for valid and reliable writing assessment" in 1974 (p. 408), a principle the academic side of assessment supports each time we require students to submit a portfolio of their work.

Les Perelman leads the pack here with his powerfully written and convincing "Mass-Market Writing Assessments as Bullshit" (Chapter 24). Perelman's chapter is incendiary. His argument – that "[e]ducation should be the enemy of bullshit" – seems neutral enough (p. 427). If, however, we view the educational landscape while considering the opening line of Harry G. Frankfurt's On Bullshit, as Perelman invites us to do, we can see how controversial his position becomes: "One of the most salient features of our culture is that there is so much bullshit" (Frankfurt, qtd. in Perelman, p. 426). Writing assessment has the potential to improve our teaching, our writing programs, and the student learning that takes place within them. Perelman claims, however, as White did in his infamous "My Five-Paragraph-Theme Theme," that it is overrun by bullshit: in the reports mass-market testing organizations distribute to drum up support for their cheap and fast—and effective!—methods, in the writing it encourages from students who are "not penalize[d]…for presenting incorrect information," and in the scoring sessions more concerned with standardization than with carefully reading student writing (p. 427). Ultimately, mass-market testing organizations are driven by "an obese bottom line on the balance sheet," not by "having students display and use knowledge, modes of analysis, or both" (p. 435; p. 429). After reading this chapter, I imagine readers will also want to explore the NCTE Position Statement on Machine Scoring and the petition "Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment," in addition to Farley (2009) and Lemann (1999).

The remaining chapters in this section provide some suggestions on how WPAs might challenge the "bullshit effect." As rhetoricians, we know that language can represent and reinforce social power structures, a theme that Cindy Moore (Chapter 26), Peggy O'Neill (Chapter 25), and Richard Haswell (Chapter 23) take up. Each chapter in this section highlights the absolute necessity of collaborating with our colleagues and of communicating with people outside our communities.

Haswell discusses how WPAs are better positioned to design, report on, and control assessments that focus on students' needs by embracing, not fearing or rejecting, numbers. While there are many reasons teacher-scholars have traditionally resisted numbers about writing, the most prominent is that quantitative data provide a limited perspective on students' writing abilities; for many, numbers can only offer an abstraction of the complexities of writing. Haswell argues, however, that the more WPAs use numbers and data within their programs, the better able they will be to "stave off outside assessment" (p. 414). Numbers can be powerfully convincing; as such, Haswell claims we should "fight numbers with numbers" and come prepared with quantitative data that support our concerns and values about writing (p. 414). I agree with Haswell, and with White, that the more we can do assessment, the more we can do with assessment.

Cindy Moore insightfully explains the precarious position of WPAs and writing faculty, and how using ambiguous, field-centric terms may, in fact, reduce our efficacy. While scholars like Patricia Lynne (2004) argue against using the term "validity" because of its association with the positivist tradition, Moore claims it is precisely because of this tradition that the term holds such weight in our cultural, interdisciplinary, and institutional conversations. If WPAs were to use a different term, like Lynne's "meaningfulness," we would lose credibility with the very people with whom we need to establish it.

O'Neill continues the work of Reframing Writing Assessment (2010) and examines WAC/WID programs at two universities to demonstrate how a frame of writing assessment influences "how others understand writing and writing assessment as well as the role of composition and rhetoric in the academy" (p. 450).
As such, it is crucial for WPAs and those of us in Rhetoric and Composition/Writing Studies to use writing assessment not to further the bullshit Perelman sees, but instead, to shape the conversations about what it means to teach writing.  

Finally, Kathleen Blake Yancey continues the work of her foundational 1999 article, “Historicizing Writing Assessment,” as she discusses the current rhetorical situation of writing assessment (Chapter 27). While the third wave of writing assessment allowed for changes at the local level due to individual programs developing their own outcomes and assessments, Yancey sees the fourth wave characterized by collaborative practices that transcend a specific context. Rather than responding only to local issues, the collaborative models allow participants to align some practices and invent others, which can be critical in this era of increased participation in assessment by the federal government and institutional bodies.

I find Yancey's position intriguing. Locally controlled assessments are not a panacea; however, they certainly have much more face validity than mass-market exams, and they may offer us more opportunities to examine carefully how our practices affect our diverse bodies of students. I also see the benefit of frameworks like the WPA Outcomes Statement and the Standards for Educational and Psychological Testing (2014) in guiding and shaping the field, serving as a foundation and touch-point for the wide range of writing instructors. In my own work, for instance, the WPA Outcomes Statement has been of great use in discussing with different departments on campus what writing is, does, and can be. Perelman and Elliot explain, "[t]his new model, independent of any specific local need, is located within multiple, diverse communities" (p. 411). I believe it is by understanding these multiple, diverse communities that we can improve our writing assessments and classroom practices.

References

Adler-Kassner, L., & O'Neill, P. (2010). Reframing writing assessment to improve teaching and learning. Logan, UT: Utah State University Press.

AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association. 


Council of Writing Program Administrators. (2014). WPA outcomes statement for first-year composition (revisions adopted 17 July 2014). WPA: Writing Program Administration, 38, 142-146.


Elliot, N. (2007). On a scale: A social history of writing assessment in America. New York, NY: Peter Lang.

Farley, T. (2009). Making the grade: My misadventures in the standardized testing industry. Sausalito, CA: PoliPoint Press.

Frankfurt, H. G. (2004). On bullshit. Princeton, NJ: Princeton University Press.


Lemann, N. (1999). The big test: The secret history of the American meritocracy. New York, NY: Farrar, Straus, and Giroux. 


Lynne, P. (2004). Coming to terms: A theory of writing assessment. Logan, UT: Utah State University Press.

O'Neill, P., Moore, C., & Huot, B. (2009). A guide to college writing assessment. Logan, UT: Utah State University Press.

White, E. M. (2008). My five-paragraph-theme theme. College Composition and Communication, 59(3), 524-525.

Yancey, K. B. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50(3), 483-503.








Tuesday, January 12, 2016

Working Against Racism: a Review of Antiracist Writing Assessment Ecologies


Inoue, A. B. (2015). Antiracist writing assessment ecologies: Teaching and assessing writing for a socially just future. Fort Collins, CO: WAC Clearinghouse/Parlor Press.

By Katrina Love Miller, University of Nevada, Reno

Before I wrote this review, I did a key term search in the JWA archive for race and racism.  I got one hit: Diane Kelly-Riley (2011).  I did this search because one of Asao Inoue’s premises is that we have not yet adequately addressed race and racism in our theories of writing assessment.  According to this admittedly cursory check, Inoue is right. 

As practitioners and scholars of writing assessment, we have an ethical obligation to consider how writing assessments can unfairly impact our increasingly diverse student populations based on their race, ethnicity, gender, multilinguality, sexuality, and other social markers. Fortunately, there has been renewed scholarly interest in writing assessment as a vehicle for social justice (Inoue & Poe, 2012; Poe, Elliot, Cogan, & Nurudeen, 2014). In other words, we are not only looking for potential problems to be solved but also reflecting on our practices as potential sites of inequality. In Antiracist Writing Assessment Ecologies, Inoue challenges readers not only to take the stance that our classroom assessments of student writing should do no harm to minority students, but to go a step further by using writing assessment as a vehicle to promote social justice.

Inoue constructs an antiracist ecological theory he contends is capable of informing the design and implementation of fairer and more just classroom writing assessments. Chapters one and two are theory-building chapters wherein Inoue lays the foundation of his concept of antiracist classroom writing assessment ecologies. In chapter one, he offers robust definitions of key terms like race, racial formation, and racism but emphasizes his term racial habitus as the most useful way to think about how racism is manifested in writing classrooms, consciously or unconsciously. For example, he argues that race is a factor in most if not all classroom writing assessments because such judgments of student writing are typically measured in terms of students' ability to approximate the dominant discourse, which he says is always already closely associated with white racial formations in the U.S. (p. 31). He continues that "as judges of English in college writing classrooms, we cannot avoid this racializing of language when we judge writing, nor can we avoid the influence of race in how we read and value the words and ideas of others" (p. 33). Inoue ends chapter one with a clear articulation of racial habitus, which builds upon Pierre Bourdieu's term habitus to consider race as socially constructed in three dimensions: the discursive/linguistic, the material/bodily, and the performative. Racial habitus, Inoue explains, is a conglomerate of "structuring structures" or principles that construct racial designations and identities (p. 43). Such structures may be seen in each of these three dimensions because "some [are] marked on the body, some in language practices, some in the ways we interact or work, write, and read, some in the way we behave or dress, some in the processes and differential opportunities we have to live where we do (or get to live where we can), or where we hang out, work, go to school, etc." (p. 43).

Chapter two focuses on defining antiracist ecologies as productive, as a system, and as a Marxian historic bloc. Pulling together diverse theoretical traditions, this chapter constructs antiracist writing assessment ecologies as places that provide sustainable and fair assessments by engaging in inquiry about the nature of judgment against the backdrop of the normative "white racial habitus" found in most writing classrooms (p. 10). The ecological perspective, which seems to build upon Wardle and Roozen's (2012) ecological model of writing assessment (although Inoue does not directly discuss their model), builds a definition of this space by blending together Freirean critical pedagogy, Buddhist theories of interconnectedness, and Marxian political theory. Such a theoretical blend, he contends, provides "a structural and political understanding of ecology that doesn't abandon the inherent interconnectedness of all people and things, and maintains the importance of an antiracist agenda for writing assessments" (p. 77). In turn, this understanding can enable practitioners to reconceive of classroom writing assessment as "cultivating and nurturing complex systems that are centrally about sustaining fairness and diverse complexity" (p. 12).

The next three chapters build from the framework in chapters one and two to explain and analyze classroom writing assessment ecologies.  Chapter three breaks down such ecologies into seven related elements—power, parts, purposes, people, processes, products, and places—and chapter four describes the assessment ecology in Inoue’s upper-division writing course at Fresno State and analyzes students’ writing (the author has since moved to the University of Washington, Tacoma).  In the final chapter, Inoue condenses the theoretical concepts from earlier chapters into an accessible and generative heuristic for antiracist writing assessment ecologies in the form of a list of questions that could help writing teachers construct their own antiracist classroom writing assessment ecologies. 

Inoue adeptly weaves in micro-narratives about his students as well as his own experiences as a person of color. The rhythm of these reflections helps the reader periodically surface from the theoretical discussion to pause and consider the racial consequences of assessment. For instance, Inoue reflects: "I will be the first to admit that I lost my ghetto English a long time ago (but not the swearing) for the wrong reasons, for racist reasons. I cannot help that. I was young and didn't understand racism or language. I just felt and experienced racism, and some of it was due to how I talked and wrote in school" (p. 23).

Though classroom assessments remain Inoue's main focus, he does take occasional detours to discuss the racial consequences of larger-scale assessments readers are likely familiar with, including IQ tests, the SAT, and CSU's English Placement Test (EPT). His discussion of the EPT's consequences, which he says paint "a stunning racial picture," is particularly insightful. The test, purportedly a test of language competency and scored in blind readings by CSU faculty readers, consistently produces racially uneven results. Inoue uses this as evidence that "language is connected to the racialized body" (p. 35).

JWA readers may recall Richard Haswell's 2013 response to Inoue and Poe's collection Race and Writing Assessment. Haswell (2013) argues that "any writing assessment shaped by anti-racism will still be racism or, if that term affronts, will be stuck in racial contradictions" (para. 6). Inoue's new book might face a similar criticism, but he addresses such criticisms bluntly. Take, for example, his agreement with Haswell (2013) that we are all implicated in racism, even as we take up an antiracist agenda. Indeed, as Inoue explains, "racism is still here with us in our classrooms" (p. 9) and "You don't have to actively try to be racist for your writing assessments to be racist" (p. 9). Inoue argues that such contradictions become less problematic if we consider that the point of such work is to eradicate racism, not race itself. However, "We cannot eradicate racism in our writing classrooms until we actually address it first in our writing assessments, and our theories about what makes up our writing assessments" (p. 9). Inoue similarly acknowledges in the introduction that his argument may rub some readers the wrong way because teachers, as practitioners of classroom writing assessment, might be uncomfortable with his assertion that our judgments are racially informed. But, Inoue reiterates, "Any denial of racism in our writing assessments is a white illusion" (p. 24).

Inoue's book is important for its productive and respectful critiques of other writing assessment scholars for their limited treatment or avoidance of racism, including Brian Huot, Patricia Lynne, Peggy O'Neill, Bob Broad, Bill Condon, Kathleen Yancey, and Ed White. He devotes most of this critique to Huot's emphasis on individualism, which Inoue finds problematic in its avoidance of racism in writing assessment. "By referencing individualism, by referring to all students as individuals" (p. 21), Inoue argues, Huot's theory and model fail to capture "broader patterns by any number of social dimensions" (p. 21). Inoue sees his antiracist agenda as something that can help bring issues of racism to the forefront of future theoretical discussions of writing assessment.

Inoue also confronts the fact that antiracist theory and practice often encounter resistance in the form of rationalizations that offer rival causes for white students' better performance rates (e.g., some students simply do not write well, and those students are sometimes students of color or multilingual). While he agrees that students should not be judged on a different scale because they happen to be from minority groups, he emphasizes that it is the judgment itself that should be under examination: those judgments might be biased in their orientation toward a discourse that privileges Standardized Edited American English and other discourses of whiteness, or might exist within broader ecologies of writing assessment that are themselves racist (pp. 6-7).

For conscientious writing teachers and WPAs who want to cultivate attention to the racial politics of writing assessment, and perhaps to foster the kind of antiracist ecology Inoue proposes, this book provides a theoretically informed and accessible vocabulary for thinking about how to enhance assessment in their own writing classrooms, a useful heuristic for designing antiracist classroom writing assessments, and a sampling of Inoue's classroom materials (a grading contract for an upper-division writing course, a reflection letter prompt, and weekly writing assessment tasks).

Inoue's argument is familiar in its implication that writing assessment will always cast a shadow on pedagogy because, as a writing teacher, what you assess "trumps what you say or what you attempt to do with your students" (p. 9). However, Inoue invites us to reexamine our pedagogy and assessment practices to ask whether our writing assessments are not only productively connected to programmatic objectives like course outcomes but also informed by a sense of ethics and fairness. In this way, Antiracist Writing Assessment Ecologies complements recent work on writing assessment as social justice, including the joint NCTE-JWA webinar "No Test is Neutral: Writing Assessments, Equity, Ethics, and Social Justice" and a forthcoming special issue of JWA on ethics and writing assessment. Inoue admirably deploys concepts that are cutting-edge in writing assessment theory, which makes his book an exciting and timely addition to the canon of critical studies of writing assessment.

References

Haswell, R. (2013). Writing assessment and race studies sub specie aeternitatis: A response to race and writing assessment. Journal of Writing Assessment Reading List. Retrieved from http://jwareadinglist.blogspot.com/2013/01/writing-assessment-and-race-studies-sub_4.html

Inoue, A. B., & Poe, M. (Eds.). (2012). Race and writing assessment. New York, NY: Peter Lang.

Kelly-Riley, D. (2011). Validity inquiry of race and shared evaluation practices in a large-scale, university-wide writing portfolio assessment. Journal of Writing Assessment, 4(1). Retrieved from http://journalofwritingassessment.org/article.php?article=53

Poe, M., Elliot, N., Cogan, J. A., & Nurudeen, T. G. (2014). The legal and the local: Using disparate impact analysis to understand the consequences of writing assessment. College Composition and Communication, 65(4), 588-611.





Sunday, November 29, 2015

A Review of White, Elliot, and Peckham's Very Like a Whale: The Assessment of Writing Programs


White, E. M., Elliot, N., & Peckham, I. (2015). Very like a whale: The assessment of writing programs. Logan, UT: Utah State University Press.

By Peggy O'Neill, Loyola University Maryland

This volume offers readers a model for writing program assessment grounded in an overview of relevant theory and practice as well as case studies of two writing programs—Louisiana State University's, where Peckham was the WPA, and New Jersey Institute of Technology's, where Elliot served for many years. The text is organized into five main chapters—Trends, Lessons, Foundations, Measurement, and Design. It opens with an introduction and ends with a glossary of terms, references, and an index. The text also includes 17 tables and 13 figures, one of which presents the model for the genre of writing program assessment that the authors put forth (see Figs. 1.1 and 5.1).

The introduction, which is available on the publisher's website, summarizes each of the chapters and explains the authors' approach and the framework of the text. While it provides standard features such as a summary of each chapter, it also explains the title this way: "With AARP cards embedded firmly in their wallets, the three seniors, formally educated in literary studies, selected a passage from Hamlet for the title" (p. 2). This opening threw me off as a reader (although it did answer my question about the title) because of the way it positioned the authors; it left me wondering why they situated themselves this way. A few paragraphs later, when articulating the audience for the book, they ask readers to "Imagine running into the three authors . . . at the annual meeting of the Conference on College Composition and Communication" (p. 4). They then present a dialog—"Let's imagine just such a conversation" (p. 4)—to illustrate "the tone for [their] book" (p. 4), which they describe as "chatting with colleagues and students" (p. 4). At this point, I was not sure where this book was going or what it was doing, and I felt a bit exasperated at the tone of the opening. However, the introduction then proceeds into a more straightforward overview of their approach and the chapter summaries.

The chatty tone that opened the book pops up now and again throughout the text. As a reader I found myself rushing through passages that address the reader directly (e.g., “Because the LSU case study is the first of four complex studies, you may want to review it briefly now and then review it again after completing the book” [p. 39]) or give background information that seems unnecessary (e.g., the brief tangent about the philosopher who distinguished between nomothetic and idiographic knowledge and the reference to Henry Fielding’s comment about The History of Tom Jones to make a point about history [p. 73]). For the most part, however, the book is more focused, which I think is the authors’ goal.

No doubt, readers charged with conducting program review, which the authors define as "the process of documenting and reflecting on the impact of the program's coordinated efforts" (p. 3), will benefit from the explanation of theory, methods, and practice that the authors offer. They seek, in their words, "to make clear and available recent and important concepts associated with assessment to those in the profession of rhetoric and composition/writing studies" (p. 3).

In keeping with this goal, the authors provide a range of strategies, examples, and best practices for conducting a program assessment, grounded in the scholarship of writing studies as well as educational measurement. The strategies and approaches aren't necessarily presented step by step, so readers looking for a guide will need to read through the text and pull out what they want.

Although the case studies can help readers understand different questions and documentation methods, the level of detail sometimes seemed too much. While I realize case studies require detail, I felt some details were unimportant or distracting, such as a brief history of WAC (p. 50) or, in another example, the references to tagmemics (p. 103) and Toulmin (p. 104) in discussing how eportfolios would be evaluated, which seemed beyond the needs of most readers. Yet I found myself wanting more explanation at other times. In discussing the assessment of eportfolios for a Writing about Science, Technology and Society course, for instance, the explanation of the interreader reliability rates (pp. 56-57) and the conclusions drawn from that information seemed to need more elaboration, especially for readers less experienced with assessment. It also wasn't clear how the data presented on interrater reliability demonstrated that students are improving over time (p. 57). Although the authors explain their reasoning about student improvement, there seems to be a missing piece here. Yes, scores improved over the five years, but does that mean student writing improved? I am assuming different students were tested and other variables were in play (although admission test scores were consistent, they note). In other words, if the authors are assuming that many readers need basic information on WAC and WID, then I would expect that those readers would need a more complete and nuanced explanation of the technical data and analyses.

Lists of questions, such as the one found in Chapter 3, Lessons (p. 67), or the scoring sheet for a technical communication eportfolio in the same chapter (p. 56), will interest readers looking for help in designing their own program assessments. Sharing examples of how eportfolios have been used is valuable for those of us trying to convince administrators to invest in the technology and faculty development time needed to implement them, yet I think this is a somewhat limited view of the potential of eportfolios.

In addition to some practical examples, readers will get a sense of educational and writing theories that inform the authors’ approach to writing program assessment. However, the authors want to focus on more than practice—that is, how to conduct a program assessment. They want to contribute to the theoretical concept of writing program assessment: the “main purpose of this book,” they explain, is “to advance the concept of writing program assessment as a unique genre in which constructs are modeled for students within unique institutional ecologies” (p. 7).  

The book seems to achieve its first goal—providing readers with practical approaches and strategies—which is, I imagine, what most readers will be interested in. The second goal, proposing a genre of writing program assessment, is more ambitious. While the model is unveiled in the first chapter, it is explained most fully in the last, where each of its nine components is discussed in detail. Before delving into the components, the authors review fourteen key concepts used throughout the book. These concepts address, in general terms, the field of rhetoric and composition/writing studies (e.g., “Epistemologically, advancement of our field is best made by both disciplinary and multidisciplinary inquiry” [p. 151]); measurement (e.g., “In matters of measurement, analyses are most useful if they adhere to important reporting standards, including construct definitions” [p. 152]); and writing program assessment (e.g., “Imagining a predictable future for the assessment of writing programs reveals a need for attending to . . . .” [p. 152]).

From here, the authors expound on their model, reminding readers that “acceptance of the model” (p. 153) is predicated on validity as Messick defined it in 1989: validity lies at the core of assessment and involves making a theoretical and empirical argument about the “adequacy and appropriateness of the inferences and actions based on test scores or other modes of assessment” (Messick, qtd. in White, Elliot, and Peckham, p. 154).

Their proposed model for assessing writing programs, presented as a flow chart that loops around with results feeding back into the writing program, is then explained. Although some of the terminology and concepts in the framework are unfamiliar in the writing program assessment literature (e.g., standpoint), most will seem very familiar to those involved in assessment theory and practice (e.g., construct or documentation) or in writing program administration (e.g., communication). All in all, I didn’t find the actual processes, strategies, and approaches for program assessment presented in this monograph to be all that new or different; instead, the book provides an overview of work in writing assessment and writing program assessment over the last several decades, pulling it together and presenting it in an attempt to link it to the broader fields of writing studies and educational measurement.