Responding to Accountability Mandates. ERIC Digest.

by Charles Outcalt and Joel Rabin

In recent years, state governments have advocated greater accountability on the part of community colleges, often coupling their calls with explicit guidelines on how educational outcomes are to be measured. Community college associations and individuals have joined this discussion by reviewing existing assessment efforts and by devising new evaluation procedures. Through this process, many community colleges across the nation have implemented innovative accountability-driven assessment programs. This Digest briefly reviews accountability mandates and evaluation guidelines provided by state legislatures, leading scholars and administrators, and the American Association of Community Colleges (AACC). This discussion is followed by three case studies of innovative assessment programs that use evaluation as a tool to increase institutional accountability.

STATE-MANDATED ACCOUNTABILITY

In 1989, the California State Assembly passed legislation requiring the state's community colleges to devise a system-wide accountability program addressing educational and fiscal performance. Items to be evaluated included student access, transfer programs and rates, student goal satisfaction, occupational preparation relative to state and local workforce needs, and the fiscal condition of the college districts (MacDougall & Friedlander, 1990). Based on pilot study results, the Chancellor's Office eventually devised a statewide accountability program featuring an annual report on performance indicators, in-depth accountability studies, statewide surveys, enhanced data collection and distribution, and a resource guide of outstanding examples of accountability programs (Walters & Fetler, 1992).

Precedents for California's mandate existed in several states, including Florida, New Jersey, and Virginia. Initially, all Florida public postsecondary institutions were required to respond to nineteen measures, among them the percentage of degree-seeking students who were awarded degrees and progress toward the goals of the state plan for equal access and equal opportunity for students (MacDougall & Friedlander, 1990). Since then, new performance indicators have been added, and institutions have been afforded some flexibility in developing campus-specific measures (Pensacola Junior College, 1996).

This approach is similar to New Jersey's state-imposed accountability system, which gives individual institutions a voice in determining some of the standards to be measured; these are evaluated in tandem with areas mandated by the state (MacDougall & Friedlander, 1990). Finally, Virginia uses an approach in which the state establishes accountability categories while the institutions determine the means for measuring the outcomes (MacDougall & Friedlander, 1990).

RESPONDING TO PRESSURES

In response to these various types of legislative mandates, community college researchers, administrators, and associations have engaged in thoughtful and practical discussions of evaluation procedures and practices. One key issue has been the criteria by which measurements are made. For example, Hogan (1992) asks whether assessments are designed to evaluate the characteristics of the institution (e.g., library resources and faculty salaries) or to measure the institution's effectiveness (e.g., graduation rates and test scores). His commentary encourages assessors to reflect on the purposes of evaluation and accountability, as well as the values underlying the process.

Satterlee's (1992) discussion of the key components of successful assessments concurs. Among the elements Satterlee cites are clarity of purpose, process, and evaluative criteria, as well as communication of how results will be used after the assessment. Furthermore, he cautions that disregard for evaluation results makes future change, as well as future assessment, much less likely to succeed.

Even in states where accountability measures are not mandated, community colleges acknowledge that documenting the educational and fiscal status of their institutions is critical to maintaining public trust as well as public dollars. The AACC's "Community Colleges: Core Indicators of Effectiveness" (1994) is especially useful for colleges interested in analyzing or using effectiveness indicators in accountability efforts. The thirteen indicators described in the report encompass categories such as student persistence and transfer, the development of specific academic skills, employment rates, and the institution's relationship to the community it serves. In addition to defining each of the proposed indicators, the report suggests possible data sources for measuring success on each indicator, as well as related criteria that can be used in conjunction with or in place of the indicators outlined.

Hudgins (1995) notes that numerous barriers often hinder the establishment of accountability programs: many faculty are not fully supportive of these efforts; data often are not well understood or used; and the relationship between assessment and budget appropriations is frequently unclear or nonexistent. Despite these challenges, Hudgins offers community colleges several practical recommendations: forming partnerships for assessment, developing closer relationships with government, involving faculty as partners, and beginning an assessment program based on a shared vision of outcomes, no matter what obstacles or challenges might be foreseen.

INNOVATIVE RESPONSES

The following three case studies describe institutions that have fulfilled accountability and assessment requirements in exemplary ways.

California's Los Rios Community College District undertook institutional assessment long before the California legislature mandated accountability efforts for community colleges (Jones & Brazil, 1996). To achieve its goal of creating a program that would combine research, planning, and decision making, the district developed the Student Flow Research Model (SFRM) in 1983. The SFRM brings together data from four areas: the district's service population, enrolled students, student experiences, and student outcomes. By placing the college within a dynamic public environment as both a consumer and a producer, the model enables the district's colleges to maintain an emphasis on accountability and effectiveness in meeting the needs of the surrounding community. Since its inception, the SFRM has been modified to become the Collegiate Yearly Accountability (CYA) model. Using the CYA, colleges can cross-reference census and enrollment data with student demographic information, course enrollments, grading information, and survey responses of graduates to produce more accurate enrollment projections. Because of the information management practices embodied in the CYA, the Los Rios Community College District has greatly increased its ability to meet accountability and effectiveness standards.
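
The mechanics of this kind of cross-referencing can be illustrated with a short sketch. The following Python fragment is purely hypothetical: the tables, field names, and projection rule are invented for illustration and are not drawn from the Los Rios data model. It simply shows how enrollment records might be joined to demographic data on a shared student identifier and rolled up into a naive year-over-year enrollment projection.

    import pandas as pd

    # Hypothetical tables; the column names are invented for illustration
    # and do not reflect the actual Los Rios CYA data model.
    demographics = pd.DataFrame({
        "student_id": [1, 2, 3, 4],
        "zip_code": ["95823", "95628", "95823", "95610"],
    })
    enrollments = pd.DataFrame({
        "student_id": [1, 1, 2, 3, 4, 4],
        "term": ["F95", "F96", "F95", "F96", "F95", "F96"],
        "course": ["ENG1", "ENG2", "MATH1", "ENG1", "HIST1", "HIST2"],
    })

    # Cross-reference enrollment records with demographic data on a shared key.
    linked = enrollments.merge(demographics, on="student_id")

    # Unduplicated head count per term, broken out by service-area zip code.
    headcount = (
        linked.groupby(["term", "zip_code"])["student_id"]
        .nunique()
        .unstack(fill_value=0)
    )

    # A naive projection: extend each zip code's year-over-year growth one
    # term forward. A real model would also fold in census figures, grading
    # information, and graduate survey responses, as the CYA description notes.
    growth = headcount.loc["F96"] / headcount.loc["F95"].replace(0, 1)
    projection = (headcount.loc["F96"] * growth).round().astype(int)
    print(projection)

Even at this toy scale, the joined table is what makes demographic breakdowns of enrollment trends possible; the same linkage, applied to far richer data, underlies the kind of projections the district describes.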

A recent assessment conducted by New Jersey's Hudson County Community College (HCCC) is noteworthy for its comprehensiveness and close articulation with HCCC's institutional mission. Oromaner (1995) discusses how HCCC used this assessment to determine how well the college was fulfilling its reformulated mission of meeting the educational needs of a linguistically and ethnically/racially diverse community. Incorporating survey responses from a broad cross-section of the campus and its surrounding community, the assessment investigated HCCC's effectiveness across a wide range of indicators, including student satisfaction and goal attainment; faculty professional development, workload, and service; college finances; the college's success in meeting regional and state educational needs; HCCC's program and degree offerings; and community perceptions of the college. To relate these findings to statewide educational policy as well as to institutional accountability issues, the report concludes by discussing ways in which HCCC contributes to the fulfillment of New Jersey's Master Plan for education.

A 1996 evaluation conducted by Pensacola Junior College (PJC) in Florida, the fourth phase of an evaluation program begun in 1990, demonstrates the value of a long-term, evolving assessment and accountability process (PJC, 1996). Over the course of PJC's assessment program, evaluation procedures have been changed to meet the institution's needs and have incorporated changes suggested during prior phases. The current effort emphasizes comprehensiveness while offering respondents the flexibility to determine the indicators on which they are assessed. Focusing on outcome indicators, the assessment provided a thorough examination of institutional mission fulfillment and effectiveness in meeting 51 institutional goals in 16 functional areas. In keeping with the institution's goal of maintaining a responsive, flexible evaluation process, the next phase of PJC's assessment program will incorporate refinements suggested during the fourth phase, including a stronger focus on outcomes rather than on processes.

CONCLUSION

As community colleges come under increasing pressure to demonstrate institutional effectiveness, several innovative responses and guidelines have been developed. The examples outlined above contribute significantly to the continuing development of assessment, each providing a unique perspective and set of issues for those interested in the improvement of community colleges. 

REFERENCES

Community College Roundtable. (1994). Community Colleges: Core Indicators of Effectiveness. Washington, DC: American Association of Community Colleges. (ED 367 411) 

Hogan, T. P. (1992). "Methods for Outcomes Assessment Related to Institutional Accreditation." In Accreditation, Assessment, and Institutional Effectiveness: Resource Papers for the COPA Task Force on Institutional Effectiveness. Washington, DC: Council on Postsecondary Accreditation. (ED 343 513)

Hudgins, J. L. (1995, October). "Using Indicators of Effectiveness to Demonstrate Accountability of Community Colleges." Paper presented at a meeting of the Texas Association of Community College Trustees and Administrators, Austin, TX. (ED 394 602)

Jones, J. C., and Brazil, B. (1996). From Accountability to Effectiveness: The Student Flow Model Ten Years Later. Sacramento, CA: Cosumnes River College, Office of Research. (ED 411 021) 

MacDougall, P. R., and Friedlander, J. (1990). A Proposed Accountability Model for California's Community Colleges: A Paper for Discussion. Santa Barbara, CA: Santa Barbara City College. (ED 314 123)

Oromaner, M. (1995). Excellence and Accountability Report, September 1, 1995. Jersey City, NJ: Hudson County Community College. (ED 402 993) 

Pensacola Junior College. (1996). Pensacola Junior College Institutional Effectiveness Progress Report, 1996. Year 4, 1995-1996. Academic Year Progress-to-Date. Pensacola, FL: Pensacola Junior College, Office of Institutional Research and Effectiveness. (ED 409 035) 

Satterlee, B. (1992, December). Program Review and Evaluation: A Survey of Contemporary Literature. (ED 356 261)

Walters, J. E., and Fetler, M. E. (1992). Accountability: Commitment to Quality. A Report. Sacramento, CA: California Community Colleges Board of Governors. (ED 351 073)
