As the principal constituencies of a volunteer program, their thoughts matter, for they are
the ones who must be pleased, committed, and performing well. In other words, they should
provide input for the evaluation as well as receive it as output.
Evaluation is necessary for all these people to justify the program and to improve it.
That means that once results are available, they must be used: considered, discussed,
decided upon, and acted on as promptly as possible.
This makes evaluation far more than a ritual. It is a necessity to guard against
delusion and the substitution of good intentions for evidence in our volunteer world.
Program evaluation is, however, an extremely "luxurious necessity." We need
it, but we often can't afford it, either time-wise or money-wise. For example, the Center
presently has files on approximately 200 volunteer program evaluation or research reports.
Where budgets for them are available or can be estimated, they show an average of
$5,000-$10,000 each, with wide variation, of course, from far less costly to over $300,000!
Many volunteer programs have barely $10,000 as their total budget.
Technical evaluation also consumes inordinate quantities of time. We know of
coordinators who have spent up to 30-40% of their time in program evaluation. Yet they
should be primarily doers, not watchers; it is doubtful that spending more than 5-10% of
their time can be justified for this purpose.
One result of this large expenditure in terms of time and money is that volunteer
program evaluations are often talked about and only occasionally done--an unsatisfactory
situation. The need is to react rapidly to small problems before they fester, to act while
a band-aid suffices rather than waiting for gangrene to set in. This can only be done if
evaluation is a regular and continuous part of your program.
Faced with this necessary impossibility of program evaluation, program people must
frequently choose to do nothing--except state their faith. In an assemblage of directors
of volunteer services, the question, "Have you evaluated your volunteer
program?" rarely draws more than one hand in ten.
It is a hard choice today, then, between research technology on the one hand and raw
witness and testimony on the other. The former is too expensive in time and money; the
latter is equally costly in credibility and in the buildup of untreated program toxins.
The Basic Feedback System (BFS) is an attempt to provide a third alternative between these
two. It is not a technical evaluation, but it provides at least some feedback on how the
program is doing. It does so at a practical cost in time and money, continuously (not just
once every two years), and in terms of national norms, so any individual program can
compare itself with others in its state or region, or nationally.
As for cost, reproduction of all the forms needed might cost a program $50-75 a year.
Time-wise, each form requires about ten minutes to complete, and anywhere from two to ten
minutes to score and norm. Much of this work can itself be taken over by volunteers.
Conscientious and continuous operation of BFS would take no more than 2-5% of a director's
or coordinator's time.
BFS provides estimated ongoing measures of (1) performance; (2) satisfaction; and (3)
commitment for directors, line staff, top administration, volunteers, and clients of a
volunteer program. Other forms for other settings could be developed to taste; some have
been, and one of the new experimental forms is reproduced later in this report. Beyond the
scoreable aspects of each of the forms to be described presently, there is an equally
important yield of free-running commentary, which should be studied for value beyond that
of the numbers.
Two principal restrictions on the usefulness of the forms, at present, are local
adaptability and validity. The forms are necessarily general and in various degrees may
fail to reflect some unique local program conditions and needs. You should adapt wording
as necessary.
As for validity, these are self-report forms and may thus lack objectivity in the sense
of what an outside observer would say, or in any other "absolute" sense. One
form, though, BFS-1, has been checked twice against independent outside observation and
proves to correlate roughly with it. Further study is needed on all of the forms.
Meanwhile, the forms basically have only "face validity." They represent only
what the various volunteer program constituencies actually say about the program. What
they say is in itself important, whether right or wrong, accurate or dissembling.
Moreover, if you do have outside observers, they can base their judgments on the BFS
forms and indices for direct comparison to self-report "inside" evaluations on
the same forms.
Except for one of the forms (BFS-1), the entire system is still under development. We
are releasing the system at this point simply as a guide to volunteer program directors,
not as an authority but as a supplement to be incorporated with more sophisticated
evaluative methods.
Users have full permission to reproduce the forms at will, although acknowledgment of
NICOV as source would be appreciated. This, of course, includes permission to adapt to
local needs and conditions. Indeed, users are encouraged to do so; however, comparability
in terms of national norms is impaired when forms are adapted.
To perfect this system, your feedback is urgently needed: scores on the forms,
your experiences in administering and applying them, and, above all, suggestions for
improvement. Please send copies of completed forms, comments, and suggestions to the
authors.
A reminder: the Basic Feedback System is designed only as a guide to users, to be
incorporated with other impressions, evidence, and data on hand. Be flexible. Use it as
feedback, a guide to discussion or even training. Remember, it is only a general way of
taking the temperature of your program. Whenever possible, use BFS as an adjunct or
stimulus to more sophisticated and extensive evaluation.
Administration
Where possible, administration of the BFS forms should be in a face-to-face setting,
either to individuals or to groups, rather than by mail. Structured interviews, in which
the interviewer records the responses, are also a possibility. Ordinarily, it helps to
offer anonymity and
confidentiality. Interviewees need to feel protected. A relaxed atmosphere is also
beneficial.
Delete the scoring instructions before the BFS forms are passed out. Interviewees
should be told to comply with the written instructions. Questions concerning the various
forms are usually quite numerous; thus, the administrator should be well prepared to
answer questions about the written instructions, about individual items, and so on.
Everyone does his/her own work on the forms, of course. No copying.
Use of Norms
Norms for the BFS forms have been established over the past nine months. They show the
distribution of all scores the Center has received. Percentiles (the estimated percentage
of all scores falling at or below a given score) are used to standardize the scores. If a
raw score falls at the fiftieth percentile, it is fair to call it an average score, with
approximately half of all raw scores above it and half below.
We suggest that you refrain from showing the norms to people filling out BFS forms
until the forms are completed. Once completed, the person or program can see where he/she
stands in comparison to everyone else. It should be repeated that there's much in these
forms that is valuable, though not scoreable and entered with norms. Note: norms are at
best only approximations; i.e., a percentile of, say, 64 is to be understood only as the
approximate center of a range, extending at least 5 percentile points on each side of 64.
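To make the percentile idea concrete, here is a minimal sketch, in Python, of how a raw
score might be located against a norm distribution. The norm scores below are hypothetical
illustrations, not actual BFS norms, and the reported range simply reflects the
approximation note above.

```python
from bisect import bisect_right

# Hypothetical norm distribution: all raw scores previously collected for
# one BFS form (illustrative values only, not actual BFS norms).
norm_scores = sorted([42, 48, 51, 55, 57, 60, 62, 64, 66, 69,
                      71, 73, 74, 76, 78, 80, 82, 85, 88, 93])

def percentile(raw_score, distribution):
    """Estimated percentile: percentage of collected scores at or below raw_score."""
    rank = bisect_right(distribution, raw_score)
    return round(100 * rank / len(distribution))

p = percentile(74, norm_scores)
# Report the percentile as the center of an approximate range, extending
# at least 5 points on each side, per the note above.
print(f"Raw score 74: about the {p}th percentile "
      f"(roughly the {max(p - 5, 0)}th to {min(p + 5, 100)}th).")
```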
Forms
There are currently five forms in the system and a sixth under development:

BFS-1   Scorecard: Volunteer Program (for directors of volunteer programs)
BFS-2E  Staff Reactions to Volunteer Programs (for line staff)
BFS-3E  Volunteer Feedback Form
BFS-4E  Top Management Self-Checklist in Regard to Volunteer Programs
BFS-6E  You Have a Volunteer: What Do You Think? (for clients)
BFS-9E  Voluntary Action Center BFS Form
Each BFS form will be briefly discussed separately.
Note: The missing numbers in the series refer to forms still under development and
under consideration for development in the BFS series. E = experimental in the sense that
feedback from the field may still substantially modify the content of the form and its
norming.
We suggest that you reproduce these BFS forms on legal-size paper in order to have
sufficient space for interviewee responses.
Other BFS Forms
Other BFS forms are being developed; for example, for funding sponsors and for
statewide directors of volunteer programs. It's relatively easy to construct your
own, though it's time-consuming. We'd be glad to suggest how you might go about
it; any competent tests-and-measurement person can do so.
Putting It All Together
Basic feedback should be fed back as soon as possible, discussed, and applied.
Evaluation is the beginning of the reaction, which is the beginning of action; it is in no
way a purely contemplative enterprise.
If BFS administration, scoring, and norming sometimes border on "cookbook"
procedures, BFS application does not. Here are some general guidelines, though, simply as
rules of thumb (a minimal scoring-and-comparison sketch follows the list):
1. Administer all five forms (BFS-1, BFS-2E, 3E, 4E, and 6E) about every three to five
months, at about the same time for all.
2. Have a volunteer administrative assistant or the director score and norm all forms.
3. Also pick up any major themes in free-running responses which are not scoreable.
4. Get average indices and themes for each of the five constituencies.
5. If BFS forms have been given before, compare results with previous applications of BFS.
6. Have the volunteer evaluator or director summarize and chart all of the above.
7. Call representatives of all five constituent groups together and discuss the results.
(Results can also be discussed with a larger sample within each of the five groups.)
8. Agree on major recommendations for improvement in the program. Set up a plan and time
frame for implementation.
9. Agree on major strengths of the program. Set up a plan to be sure they're appreciated
properly within and outside the program.
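As a concrete illustration of Points 2 through 6, here is a minimal bookkeeping sketch in
Python. The constituency labels match the BFS forms, but all index values are hypothetical,
and the sketch assumes each score has already been converted to a percentile against the
norms as described above.

```python
from statistics import mean

# Hypothetical normed indices (percentiles) from one administration cycle,
# keyed by constituency; the values are illustrative, not real BFS data.
current = {
    "director (BFS-1)":        [62, 70, 55],
    "line staff (BFS-2E)":     [48, 52, 45, 50],
    "volunteers (BFS-3E)":     [40, 38, 44, 42, 41],
    "top management (BFS-4E)": [66, 60],
    "clients (BFS-6E)":        [58, 61, 57],
}
# Same structure from the prior three-to-five-month cycle (also hypothetical).
previous = {
    "director (BFS-1)":        [60, 65, 52],
    "line staff (BFS-2E)":     [50, 49, 47, 51],
    "volunteers (BFS-3E)":     [52, 50, 55, 49, 51],
    "top management (BFS-4E)": [64, 62],
    "clients (BFS-6E)":        [57, 59, 56],
}

# Point 4: average index per constituency; Point 5: compare with the last cycle.
for group, scores in current.items():
    now, before = mean(scores), mean(previous[group])
    print(f"{group:26s} avg {now:5.1f}  change {now - before:+5.1f} since last cycle")
```

However the tally is kept (by hand, on a chart, or as here), the point is the same:
averages per constituency, and movement since the previous administration.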
A lot more could be said about Point 4 above. In particular, there are obvious
interactions or interfaces between the five forms, and these can be explored, as well as
scores or themes within each form.
A kind of common-sense insight should suffice here for the put-it-together person.
Thus, for example, both the director of volunteers and volunteers are rating such things
as volunteer training, support, recognition, and significance of work responsibility. Do
they agree or not? If not, it needs discussing. Indeed, what does it mean when the
Motivation-Incentive subsection on the Scorecard is above average, while the Volunteer
Satisfaction index is below average?
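One mechanical way to surface such disagreements is to compare the two constituencies'
percentile indices topic by topic and flag large gaps for discussion. In this minimal
sketch, the topic names, values, and the 15-point threshold are all hypothetical choices,
not part of BFS itself.

```python
# Hypothetical percentile indices on topics rated both by the director
# (on the Scorecard) and by the volunteers (on the Volunteer Feedback Form).
director   = {"training": 72, "support": 65, "recognition": 70, "work significance": 68}
volunteers = {"training": 50, "support": 60, "recognition": 41, "work significance": 66}

GAP = 15  # an arbitrary "worth discussing" threshold, in percentile points

for topic in director:
    diff = director[topic] - volunteers[topic]
    if abs(diff) >= GAP:
        print(f"Discuss '{topic}': director at the {director[topic]}th percentile, "
              f"volunteers at the {volunteers[topic]}th (gap {diff:+d})")
```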
There are similar interfaces regarding suggested work roles for volunteers between
Scorecard and Staff Reaction forms, and between both of them and Client and Volunteer
feedback forms. Here you can do something very much like the Center's Need Overlap
Analysis in Helping Process (NOAH), simply using BFS forms for line staff, volunteers, and
clients.
These are only a few examples of interfaces. There are many more, and there is much more
to be done in the development of the Basic Feedback System. We look forward to working with
you on it. Your completed forms, your suggested formats, and your formal and informal input
toward a system we all can use will be welcomed and will add much to the development of BFS.