Evaluation
The group discussed what evaluation tools were already being used, and what they hoped to measure in future.
- Word of mouth is proving a good advocacy tool. Trainees who have completed the course are invariably good advocates for it.
- Where Generic Learning Outcomes (GLOs) are used, the anecdotal evidence from Frontline is admissible and very useful.
- Most authorities use a pre- and post-course assessment form to evaluate the impact on the trainee. An example can be found on www.branching-out.net. Self-assessment by trainees is also considered an important way of measuring this. Whatever measures are adopted to discover the value of the course to trainees, it is vital to measure trainees’ attitudes and skills before they take the course as well as after it.
- In some authorities, trainees are assessed as part of regular staff evaluation processes.
- The feeling was that senior management would want to measure the success of Frontline through user satisfaction, although it was hard to suggest an accurate tool or measure for this.
- The group wondered if it was possible to say that a library will be different, once a certain percentage of staff has gone through the course. Is there a percentage tipping point? Could impact also be measured as part of regular library inspections?
Practical suggestions for evaluation
Exploit what is there already
- Some co-ordinators collect all the Trainee interviews done in Module 1 and collate the quotes under age group headings. This gives them direct user feedback from separate age groups, which they use in their regular reports to management.
- It was suggested that trainees use the observation method from Module 7 to record how readers used their final promotion, and that co-ordinators collect these results as a measure of both impact and user satisfaction. The target groups that trainees nominated could be used, recording how that group benefited and how other readers engaged with the work as well.
- The Library walkthrough in Module 6 was thought to be a major marker for change in libraries; some co-ordinators could tell from this marker alone which libraries had trainees. What measures could be included in regular library check-ups to capture this perception?