
National Policy Report

A network of NCTE, CCCC, and TYCA volunteers tracks state policy developments affecting English language arts, English studies, literacy, and the humanities. These volunteers, one covering P-12 education and one covering higher education in each state, provide other members with analysis of state policies. With this knowledge, educators can better participate in the policymaking process that affects them, their students, their institutions, and their communities. Some policies and reports are national in scope; we wanted to give our readers access to those reports as well.

Please read our newest report: Education-thought-leaders-forecast-2015-trends.pdf

 

ACSFA Summer Hearing Report

September 12, 2014

Trinity Washington University, Washington, D.C.

 

Advisory Committee Members:

Chair, Dr. Maria Harper-Marinick, Maricopa Community College District

Dr. Andrew Gillen, American Institute for Research

Dr. Fredrick Hurst, Northern Arizona University

Ms. Robert Johnson, Iowa State University

Ms. Patricia McGuire, Trinity Washington University

Dr. Michael Poliakoff, American Council of Trustees and Alumni

Ms. Deborah Stanley, Howard Community College

Ms. Tiffany Taylor, Student Member

Ms. Sharon Wurm, Nevada System of Higher Education

 

The purpose of this hearing was to gather testimony and written reports to inform the committee’s recommendations to the President on strategies for designing the College Scorecard in ways that minimize unintended negative consequences.

 

The first speaker, Jamienne Studley, Deputy Undersecretary of Education, offered a brief overview of and rationale for the Postsecondary Institutional Rating System (PIRS). The goals of the College Scorecard include addressing rising college costs and improving educational outcomes to create the most competitive workforce and to advance civic engagement. Studley asserted that students want to know which colleges provide the best value (based on comments received from 100 forums nationwide with over 4,000 participants from various sectors) and that it is the government's responsibility to hold colleges accountable for ensuring that the federal funds they receive lead to meaningful student outcomes. She noted that rating is only one aspect of the scorecard, that colleges are already constantly rated, that the scorecard will not be a ranking system, and that many states are already comfortable with performance-based ratings in higher education. The College Scorecard, still very much a work in progress, capitalizes on systems already in place and will use data fairly in ways that minimize risks to colleges, particularly those that serve underrepresented populations.

 

Following the opening remarks, four sets of panelists (five speakers per panel) representing a wide range of interests and perspectives offered testimony, both cautions and recommendations, for the College Scorecard. (For summaries of each speaker’s testimony, see Appendix.)

 

All panelists agreed that the consumer information measures and the accountability measures should be separate. Some participants felt that neither function would be adequately served through the College Scorecard; others valued both functions but recognized that different measures would be needed (and similar metrics would need to be applied distinctly) to rate colleges for each purpose. Most, however, seemed to prefer one purpose over the other.

 

Among those who advocated using the College Scorecard as a consumer tool, most favored disaggregated results and specific numerical information (or contextualized percentages) rather than averages. There was much debate about whether input adjustments should be made to account for institutional differences; among those who preferred the scorecard for informational purposes, most resisted such adjustments as potentially confusing to the public. Several speakers noted that for a consumer information system to be useful, it must be clear, easy to understand, and easy to navigate, and it must provide data that consumers say they want or need. Additionally, while recognizing that the government is a trusted source of financial information, some participants questioned the utility of directing resources toward gathering more consumer information, as many students and their families do not use the information already available, whether because they have no real choice about which college to attend or because they are already overloaded with information. One panelist noted that a static website is a poor tool for disseminating information, and a few wondered whether the College Navigator wasn't already sufficient as a consumer choice tool.

 

Among those who focused on using PIRS for institutional accountability, a few core concerns were raised, chief among them the quality of the available data (especially for two-year colleges, whose IPEDS data often reflect only one-third of their student populations). Those representing two-year college organizations in particular spoke out against the current data and metrics, advocating the use of state data until better federal data are collected and the use of input adjustments and metrics that reflect today's students (e.g., 300% time to degree). Most panelists recognized that the scorecard is likely irrelevant to students at two-year and open-admissions colleges as a consumer information tool, and that the metrics and data collected are largely incomplete and inaccurate for accountability purposes. One speaker went so far as to argue that two-year colleges should be excluded from the scorecard; another suggested that two-year colleges should have a separate system.

 

There was also much concern about the rating system. Some advocated "information points" or multiple measures as opposed to overall ratings or "peer" institution comparisons. Most who offered recommendations suggested that the rating system focus on minimum standards of acceptability, with the goal of targeting those colleges that exploit the federal financial aid system and leave students worse off than before they started (high debt, no degree). "Top tier" ratings, some argued, could not be applied without consideration of learning outcomes (and would require additional measures, perhaps on a voluntary, opt-in basis, to assess). Several speakers suggested that earnings metrics be tied to federal poverty rates: earnings need not be competitive, but they should be sufficient to keep students out of poverty and enable them to repay their student loans. One panelist argued for using the rating system not solely to penalize but also to trigger helpful actions, for instance, to identify gaps in resources so colleges can better serve students. Nearly all warned of the potential for metrics to create a disincentive for enrolling "at risk" students, a consequence directly opposed to President Obama's goal of improving educational equity.

 

Submitted by Carolyn Calhoon-Dillahunt, CCCC Policy Fellow

 
