COUNSELING CENTER
UNIVERSITY OF MARYLAND
COLLEGE PARK, MARYLAND
FACILITATORS OF PROGRAM EVALUATION IN STUDENT AFFAIRS
Research Report #3‑89
Kuh further noted that informal assessment has some advantages (e.g., it saves time and requires minimal expenditure of resources), but relying entirely on these less rigorous approaches could result in subsequent evaluations being conducted by persons external to the unit.
Patrick and Niles (1988) described four roadblocks to accountability and evaluation in student affairs offices:
(1) The evaluation process is value-loaded (Barnes, Morton, & Austin, 1983) and is seen as a struggle to compete with other academic units for scarce resources.
(2) As student affairs administrators may have only limited control over budgetary matters (Lombana, 1985), they can feel threatened by the potential for negative evaluation findings to be used as justification for reduced funding.
(3) Many student affairs programs are developed haphazardly (Celotta, 1979) and therefore have ill-defined objectives, making evaluation less likely to reflect the true nature of their effectiveness.
(4) Many student affairs professionals do not believe that their efforts are measurable (Lombana, 1985), resulting in a reluctance to conduct evaluations. Here, accountability and evaluation are being confused with research.
Over the past several years, the writers have planned and conducted evaluations of student affairs units. As a result of this experience, a number of facilitators of program evaluation in student affairs have emerged. These evaluation guidelines may be especially useful for units which offer several programs or services simultaneously, such as student unions or counseling centers. An evaluation which incorporates these facilitators can become more manageable and meaningful for all involved:
SELF-EVALUATION
Allows staff to assume responsibility for the process of evaluation and capitalizes on the specialized skills and knowledge they possess. When staff evaluate their own programs or services, they are more likely to accept and act upon evaluation findings.
OUTSIDE CONSULTATION
An experienced consultant can coordinate the project and provide training in evaluation methodology to staff whose jobs are more concerned with direct service in student affairs. Also, the consultant should advise administrators on responsible implementation of evaluation findings to benefit the entire unit.
A BALANCE BETWEEN CENTRALIZED AND DECENTRALIZED EVALUATION
Even in a decentralized self-evaluation, some of the process should be centralized to promote accuracy and efficiency. Tasks such as collecting and analyzing user satisfaction data, summarizing evaluation findings, and compiling final unit evaluation reports may be best done by the evaluation consultant.
INTERESTED AND INVOLVED ADMINISTRATORS
Administrators possess crucial knowledge of their unit's structure and are in a position to motivate staff and make needed accommodations (e.g., scheduling changes) for the evaluation.
TEAMWORK
Evaluation has the potential to be a community-building activity, with staff members motivating each other to do a good job. Resources can be shared, and the outcome is more likely to reflect a shared overall mission in student affairs.
THE USE OF REGULARLY COMPILED INFORMATION
The use of regularly compiled data (e.g., user counts) is expedient because the information is easily accessible and provides good documentation of daily activities and operations in the unit.
A REALISTIC DEADLINE
Because program evaluation is not the manifest purpose of most student affairs units, it is in danger of being subsumed by more pressing tasks and functions. A realistic timeline detailing each stage of evaluation may help ensure that the plan actually materializes.
It is hoped that these evaluation facilitators are both general and specific: general enough to be compatible with many different models of program evaluation in a variety of student affairs settings, and specific enough to be utilized rather than just pondered. In an editorial lamenting the dearth of evaluation method articles, Sedlacek (1987) observed, "Evaluation is more fun to talk about than to actually do. We are all a little afraid of evaluation, lest it be turned on us, so we keep it abstract" (p. 2). Student affairs practitioners
advise students to take calculated risks to further their personal and professional development. This advice applies to professionals as well. Program evaluation is a developmental process with far-reaching benefits. In initiating this process, it is helpful to recognize how facilitative elements can be used to the best advantage.
References
Barnes, S. F., Morton, W. E., & Austin, A. D. (1983). The call for accountability, the struggle for program definition in student affairs. NASPA Journal, 20(4), 11-20.
Brown, R. D. (1979). Key issues in evaluating student affairs programs. In G. D. Kuh (Ed.), Evaluation in student affairs (pp. 13-31). Cincinnati: American College Personnel Association.
Celotta, B. (1979). The systems approach: A technique for establishing counseling and guidance programs. Personnel and Guidance Journal, 57, 412-414.
Kuh, G. D. (Ed.). (1979). Evaluation in student affairs. Cincinnati: American College Personnel Association.
Lombana, J. H. (1985). Guidance accountability: A new look at an old problem. The School Counselor, 32(5), 340-346.
Patrick, J., & Niles, S. G. (1988). Establishing accountability and evaluation procedures in student affairs offices. NASPA Journal, 21(6), 291-296.
Sedlacek, W. (1987). Editorial. Measurement and Evaluation in Counseling and Development, 20(1), 2.
Stufflebeam, D. L., Foley, W. J., Gephart, W. J., Guba, E. G., Hammond, R., Merriman, H. O., & Provus, M. M. (1971). Educational evaluation and decision making. Itasca, IL: Peacock.