Finding out why we are doing it, why we did it and who cares
Yesterday we ran an evaluation and impact synthesis workshop, and I must say it helped me see that, far from being dry topics, evaluation and impact are practices we can all benefit from. To achieve good results it is essential to plan ahead and to stand back from what you are doing to establish some measures of success; after all, success is something we all want. The first half of the day was taken up with exploring approaches to formative and summative evaluation, with the emphasis on thinking long term about how one might measure impact. It was very interesting to see how people are thinking about the evaluation processes they might deploy; suggestions ranged from purely numeric analysis through to full-on focus groups. Embedding evaluation activities within a project is clearly the most effective approach, but not every project can spare the time to implement it, so it is really about evaluating the key elements of a project and then developing a portfolio approach to impact.
The second half of the day was given over to three presentations from projects which have now moved on to become business as usual (to use some jargon). The first, from Sarah Whatley, focused on the impact of D-TRACES, a JISC-funded project run by Coventry University which explored the development of an online community around dance practice. The thrust of the presentation was user engagement and, in particular, getting students to use the resources provided on Siobhan Davies Replay. Dance students were encouraged to create their own resources (Personal Development Plans) by developing blogs, and also to give feedback on their experiences in a continuous loop. Ultimately it is about reflective practice, but also professional development. What I found most interesting was that no formal evaluation was necessary: the results of the project are evidenced by the number of blogs the students are using to develop their PDPs, something they may well continue doing for the rest of their dancing days. The blogs provide all the evidence needed.
We then heard from Bruce Tate of the Institute of Historical Research about the techniques deployed to evaluate the impact of British History Online. This was a more technical presentation, focusing on using online tools to help evaluate impact. I was most interested to hear that Google Analytics, though great for specific monitoring of web resources, tells one nothing about the use of metadata; if you want to explore how people are actually using your site, you need to become adept at weblog analysis. Again it was horses for courses: deploy the right tools for the job in hand!
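To give a flavour of what weblog analysis involves at its simplest, here is a minimal Python sketch that tallies successful requests per URL path from an Apache combined-format access log. The log format and the sample lines are my own illustrative assumptions, not details from the presentation:

```python
import re
from collections import Counter

# Apache "combined" log format, e.g.:
# 127.0.0.1 - - [10/Oct/2011:13:55:36 +0000] "GET /page HTTP/1.1" 200 2326 "-" "Mozilla/5.0"
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

def count_paths(lines):
    """Tally successful (2xx) requests per URL path."""
    counts = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("status").startswith("2"):
            counts[m.group("path")] += 1
    return counts

# Hypothetical sample log lines for illustration only.
sample = [
    '127.0.0.1 - - [10/Oct/2011:13:55:36 +0000] "GET /replay/ HTTP/1.1" 200 2326 "-" "Mozilla/5.0"',
    '127.0.0.1 - - [10/Oct/2011:13:56:01 +0000] "GET /replay/ HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
    '10.0.0.2 - - [10/Oct/2011:13:57:12 +0000] "GET /missing HTTP/1.1" 404 162 "-" "curl/7.21"',
]

for path, n in count_paths(sample).most_common():
    print(path, n)
```

Real analysis would of course go much further (referrers, user agents, sessions), but even a tally like this surfaces usage that page-view dashboards can miss.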
You can find more information about evaluation work for these projects and a number of other projects here.
The final presentation, by Liz Masterman of Oxford University Computing Services, looked at the use and re-use of digital resources, in particular Open Educational Resources (OERs). It explored an in-depth and scholarly way of evaluating impact, focusing on how lecturers use OER materials. The researchers deployed a mixture of qualitative methods, ranging from surveys through to focus groups. The project was explicitly about evaluating impact, and it provided a good overview of qualitative methods which could be deployed in the current Content Programme projects.
All in all, the day left one with a sense that these rather dry topics become quite interesting once they are seen not as an adjunct but as something to consider when planning our projects, embedding evaluation and impact activities wherever possible. We all want to demonstrate why we are putting up digital resources, to find out who might be using them, to identify how they are being used, and to learn what this tells us about doing a better job in future.