Friday 27 December 2013

The 'Carry On' factor


Following on from my thoughts about quantitative research, I'm looking at some of the dependent variables that will come into play, and thinking about how I might go about analysing them.

Intention to continue examining & job satisfaction


This is an extremely important factor for exam boards like ours, which depend on a large network of examiners to make our examinations possible. Meadows (2004) identified four factors that affect examiners' attitudes towards their jobs:
  1. The pressure and stress of examining
  2. Insight gained from examining
  3. Support from awarding body and senior examiners
  4. Pay
However, Meadows found that only the pressure and stress of examining, and the level of support received, predicted intention to continue examining; pay, meanwhile, did affect examiners' job satisfaction. One of the key sources of stress came from balancing examining duties with regular work, with the report recommending that resources be diverted to lobbying for examiners to be given more time away from teaching to examine, in order to improve retention. Improving the level of support was also a recommendation, although the report notes that this would be less cost effective, since most examiners were already relatively satisfied with the support they received. Increasing pay would improve job satisfaction, but the report states that this would not improve retention.

The introduction of software tools


Tremain (2011) followed up this work to consider how the situation had changed after the introduction of electronic marking and online standardisation. The study looked at the factors that influence the satisfaction that examiners express about their work, and highlighted three factors underpinning examiners' intention to continue:

  1. The relationship between examining work and work outside examining
  2. The pressures of examining and support received
  3. The incentives for examining
The study states that although there is no imminent threat to examiner retention, future threats include the increasing use of online tools, which can contribute to examiners feeling unsupported or undervalued. Job satisfaction is considered to be more important than reward for retention in the majority of jobs, with social interaction and appropriate challenge being considered particularly valuable. The adoption of online tools has contributed to a sense of isolation amongst examiners and made the work more routine - although the reliability of marking has actually increased as a result.

A further study (Tremain, 2012) also set out to evaluate how specific factors involved in online marking & standardisation contributed to examiner satisfaction. This concluded that there was no significant difference in intention to continue marking between examiners who were standardised using face-to-face or online methods. Examiners who had marked using a mixture of paper and online methods showed a very slight increase in intention to continue examining. However, it was noted that the results were confounded by the different subjects and levels of experience amongst the participants.


Variables that we may be able to influence, and how:

  • Support received. By mapping the different levels of support that are currently offered against the contextual model for learning (Shepherd, 2011) and identifying possible gaps, we may be able to improve the support offering for examiners in a rational way. I have already laid out some initial thoughts for this approach.
  • Insight gained from examining. Making key insights from senior examiners available in a digital form which can be shared more easily online, for instance through learning management systems or webinars, could help to ensure efficient dissemination of relevant information.
  • Social interaction. This is a long-term goal that our organisation may want to consider for retaining examiners. Although we are increasingly unable to provide opportunities for examiners to meet in a face-to-face setting, there are possibilities for facilitating some more informal interaction around scheduled events. One of my colleagues is keen to run webinars where examiners can gain insight from senior examiners, and careful use of online chat could help to provide a better sense of community.
Any or all of these methods could be attempted, measuring the effect on intention to continue, and also on examiner performance, to determine their effectiveness. One concern I have is that an apparent failure to make a difference at first might result in a loss of enthusiasm for innovation, so trust would need to be established with stakeholders to allow for future improvements. Undertaking action research alongside quantitative measurements to demonstrate a rational approach would be key to establishing that trust.
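To make the idea of 'measuring the effect' a little more concrete, here is a minimal sketch of the kind of comparison I have in mind, assuming we could collect a simple intention-to-continue rating (say on a 1-5 scale) from examiners who did and did not receive an improved support offering. The column names and figures are entirely hypothetical, and are just there to illustrate the shape of the analysis.

```python
# Hypothetical sketch: comparing intention-to-continue ratings between
# examiners who received an improved support offering and those who did not.
# All data and column names here are illustrative, not real results.
import pandas as pd
from scipy import stats

# Each row is one examiner: which group they were in, and a 1-5 rating of
# their intention to continue examining.
df = pd.DataFrame({
    "group": ["improved"] * 4 + ["standard"] * 4,
    "intention": [4, 5, 3, 4, 3, 3, 4, 2],
})

improved = df.loc[df["group"] == "improved", "intention"]
standard = df.loc[df["group"] == "standard", "intention"]

# Welch's t-test: does mean intention to continue differ between the groups?
t_stat, p_value = stats.ttest_ind(improved, standard, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

In practice the sample would need to be far larger than this toy example, and the confounds noted by Tremain (2012) - subject and level of experience - would also need to be taken into account.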

References:

Wednesday 18 December 2013

Architecture


Going off on a little bit of a tangent, it's time to take a look at what support is being offered to our examiners through the e-learning provision, and how it could be improved. Shepherd (2011) provides a contextual model for learning, based on four contexts: experiential, on-demand, non-formal, and formal; and two perspectives: top-down and bottom-up.

  • Experiential learning is learning from as opposed to learning to. We have to be actively engaged with our task and - one hopes - reflecting on our successes and failures. There is a great wealth of lessons that can be learned while examining - Meadows (2004) and Tremain (2011) report that examiners consistently cite the insight gained from examining as one of the key benefits. Considering how to support or encourage such reflections might help to improve examiner performance.
  • On-demand learning is learning to perform a particular task and acquire the necessary knowledge at the point of need, or 'just-in-time'. Depending on how far in advance our examiners access the learning materials, the materials could be regarded in this way, although they probably fit better in the following category.
  • Non-formal learning is also learning to, but with a more relaxed time frame, where employers take steps for employees to be prepared in the medium to long term; it is sometimes labelled 'just-in-case' learning to set it apart from on-demand. It is distinguished from formal learning (below) by not being packaged as a formal 'course', as will be the case for this intervention, although there may be something to be said for carefully considering how the materials are to be presented to examiners.
  • Formal learning is defined by clearly set learning objectives, a means of assessment, and usually some kind of qualification. We definitely don't offer a qualification for learning to examine (perhaps some would say we should?!), assessment would be somewhat laborious unless it were done covertly through completion, and the learning objectives are difficult to define. It is far easier here to think in terms of business objectives - 'There are no learning metrics, only business metrics' (Cross). That being said, it will be worth considering what examiners will expect to see and making sure that objectives are clearly stated.
  • Top-down learning is aligned with employers' objectives; it is intended to ensure that performance is not left to chance and that the requisite skills and knowledge can be acquired. Our organisation is responsible for ensuring that results are delivered on time and accurately, with severe penalties possible for failing to do so.
  • Bottom-up learning occurs because of employees' motivation to perform effectively. In addition to the motives around improved insight, examiners are drawn to the extra pay for examining, and improved promotion prospects in their teaching roles (Meadows, 2004).
Moving into specifics from Shepherd's model, there are several components that are already present in our business model, some that are being introduced explicitly through the e-learning provision, and some that perhaps should be there. The existing provisions focus exclusively on the top-down perspective:
  • There are already performance appraisals built into our way of working (experiential);
  • A help desk is provided through our Contact Centre (on-demand);
  • Examiners typically receive a certain amount of on-job training through contact with their supervisors (non-formal);
  • For the live pilot, examiners will be receiving classroom courses (formal);
  • The majority of users will have access to the rapid e-learning materials (non-formal). Note that I avoid referring to our e-learning as self-study e-learning (a formal intervention), which by definition should provide 'instant and individualised feedback', something that is far beyond the scope of our planning.
If we open ourselves up to the full range of possibilities from Shepherd's model, I would be tempted to add the following methods:
  • Webinars could be used to convey a lot of the material and briefing that might take place in a face-to-face context, without examiners having to travel to a central location, and allow for some questions and answers.
  • Online video could be a powerful tool for engaging examiners with the task at hand, especially if delivered by senior examiners involved in the pilot. The message would have to be particularly clear, relevant and to-the-point, requiring serious consideration before asking for this intervention.
  • Performance support materials could be leveraged for contact centre staff and senior examiners, drawing on key lessons from the live pilot.
  • Forums could be used amongst contact centre staff to post common questions from examiners; there could also be forums available to examiners to make common or emergent solutions available.
  • Of course I would love it if someone other than myself found a reason to keep a blog ...
That's all for now I think, I'll look at specific applications in future posts....

References:

Saturday 14 December 2013

Time for a numbers game?


Following on from my last blog post, I'm re-treading the sequence of reading from our Research Methods module to get my bearings again, and I'm coming back to the question of qualitative vs quantitative research. While I strongly identified with the Action Research methodology on my last project, it's worth deliberately opening up my mind to new possibilities, especially as there will be strong interest in some kind of numerical data from colleagues and external auditors if we are questioned on our approach.

So before I start to choose which quantitative disciplines I might wish to draw on, I'll look at the key aspects of quantitative research, consider those that appeal to me, and those I wish to avoid.



Concern with theory

Relating my findings to theory will be helpful to ensure some kind of tethers to related work, but there's a danger of getting obsessed with reproducibility and control here. When you're moving into the realm of on-demand learning, you can't guarantee learning outcomes, nor indeed that learners will even access the materials or activities that you produce for them. Newby (2010, p.96) acknowledges the limitations for educational researchers trying to identify patterns and control influences, as they are only able to view a small part of the overall education system. I would prefer to think in terms of Praxis (Wheeler, 2013), which requires practitioners to consider how closely their practice overlaps with the theories they identify with.

Concern with proof


Here lies one of the real problems for educational research - although I understand that establishing proof would give greater peace of mind, the complexity and ambiguity of the situation make this extremely difficult:
  • The situation I face will not be the same as the one another practitioner faces, even if our verbal descriptions of them seem similar to the untrained eye
  • The next situation that I (and the learners) face will not be the same as this one, even if it's 'just another e-marking system'
  • The time needed to establish proof would be completely at odds with the time pressures for the project, where the learning is 'on-demand'.
The best that I can hope for is to show that using theories to guide my design leads to a dependable business outcome, and that particular methods or techniques are better suited to my situation.

Identification of variables

This is one of the key aspects of quantitative theory that I see as helpful. Although my control over most independent variables involved will be limited to say the least, it will definitely be helpful to at least make some systematic efforts to identify variables in the design of materials that may be having an effect, and to measure any dependent variables which are of interest. Our particular concerns would be the performance of examiners, and intention to continue based on their experiences. Attempting to correlate these with participation in the different aspects of the support might yield useful insights into which components have succeeded, but this would have to be linked to effective practice in design.
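As a rough illustration of what that correlation might look like in practice, here is a minimal sketch assuming we could log a simple participation measure (such as the number of support resources accessed) alongside a marking accuracy figure and a self-reported intention-to-continue rating. The column names and values are hypothetical, chosen only to show the shape of the analysis.

```python
# Hypothetical sketch: correlating participation in the support provision
# with the two dependent variables of interest (examiner performance and
# intention to continue). Column names and values are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "resources_accessed": [0, 2, 5, 3, 8, 1, 6, 4],    # participation measure
    "marking_accuracy":   [0.82, 0.85, 0.91, 0.88, 0.93, 0.80, 0.90, 0.87],
    "intention_rating":   [2, 3, 4, 3, 5, 2, 4, 4],     # 1-5 scale
})

# Pearson correlations between participation and each dependent variable.
print(df.corr(method="pearson")["resources_accessed"])
```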

Simply saying that an approach should be abandoned because it doesn't seem to have an effect in this situation would be potentially misleading without some understanding as to why. Creswell (2009, p.49) refers to confounding variables (e.g. discriminatory attitudes) that can come into play, which I have had some experience of when trying to introduce online learning methods in the past. Participants who are negative about the use of the tools go to great lengths to discredit them when given the opportunity to do so, whilst the majority of participants actually acknowledge a positive effect.
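One way to allow for that kind of confounding variable, assuming we could also capture a simple attitude-towards-online-tools rating, would be to include it alongside participation in a regression model. Again, the variable names and data below are hypothetical; this is a sketch of the approach rather than a planned analysis.

```python
# Hypothetical sketch: including a possible confounding variable (attitude
# towards online tools) in a regression, so that any apparent effect of
# participation on intention to continue is not simply explained by
# pre-existing attitudes. All values are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "intention":     [2, 3, 4, 3, 5, 2, 4, 4],   # 1-5 intention to continue
    "participation": [0, 2, 5, 3, 8, 1, 6, 4],   # support resources accessed
    "attitude":      [1, 2, 4, 3, 5, 2, 4, 3],   # 1-5 attitude to online tools
})

model = smf.ols("intention ~ participation + attitude", data=df).fit()
print(model.summary())
```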

Conclusion

This project will benefit from the use of some quantitative approaches to analysing data about examiner performance and intention to continue, but these will need to be paired effectively with qualitative methods to understand how the choice and design of approaches might be influencing those outcomes. My next blog post will focus on the type(s) of quantitative research methods that might be useful, followed by a look at rational design approaches for the learning provision.

References:

  • Creswell, J. W. (2009). Research Design: qualitative, quantitative and mixed methods approaches. (3rd edition) Sage.
  • Newby, P. (2010). Research Methods for Education. Pearson Education Limited.
  • Wheeler, S. (2013). Praxis makes perfect. Learning with e's [blog] 31 October. Available at: <http://steve-wheeler.blogspot.co.uk/2013/10/praxis-makes-perfect.html>

Sunday 8 December 2013

Back to where it all began


After a year or so of working on staff development programmes and doing a lot of creative and novel work, I'm back to where my career in online learning really began: making tutorial videos and briefings for examiners. As always, the time pressure is intense and I'm largely working solo - while some might see this as stressful, I'm actually looking forward to it, because it's a welcome chance to really reflect on the circumstances where I first honed my skill set.

'Experience: that most brutal of teachers. But you learn, my God do you learn.'
C. S. Lewis

The tools changed several times in the first few years. We started with a recording tool that we chose mainly because we had already invested in related software - I won't specify which here, because it may have moved on since. You could record your screen with live narration and publish the material easily via a hyperlink, and the results could be reasonable, but it wasn't without its flaws. In particular, editing could be a real nightmare if you made any mistakes in your recording, and quality seemed to degrade with each edit.

The second time around we started using TechSmith Camtasia Studio, which was a massive revelation. Screen recording was a great deal sharper, the editing process was vastly improved, and I quickly found the value of highlighting, captions, and zooming and panning the view to draw attention to relevant areas. We now had the freedom to publish to good-quality video formats, and with backup from the web team we could publish videos to an orphan page for examiners to view.


Finally we moved on to Adobe Captivate, which we have stuck with ever since for software demonstrations. It's a lot more technical than other software, which may put some people off, but it has allowed me to move forward with creating more interactive material (particularly simulations). When we eventually moved over to our own LMS, Captivate had what we needed to publish SCORM packages with the necessary e-learning information.


I've learned to stay mindful of the advice from Henderson (2012) to avoid being trapped by the tools, and that of Toth (2012) to choose the right tool for the job, so I always look for opportunities to use different e-learning tools. However, I am finding it harder to take on new tools as my time gets increasingly bound up in development, so perhaps now isn't the time to take on something new for the recording. Instead I'll be looking to draw on the advice from Shepherd (2011) about carefully considering the context of your learners and what support they might need. In subsequent posts I'll be drawing up outlines for the additional approaches that might be used, and the opportunities to draw in different tools.


I'm automatically thinking of using Action Research methodology, since it was successful for my last project, but this talk of not being trapped by your tools has made me pause. Perhaps it's worth re-treading some of the exercises from the Research Methods course to make sure that Action Research, and indeed qualitative research, is the correct approach.


References:

  • Henderson, A., 2012. Don't get trapped by your e-learning tools. In: Allen, M.W., 2012. Michael Allen’s E-learning annual 2012, San Francisco, Calif.: Pfeiffer.
  • Toth, T.A., 2012. The right e-learning tool for the job. In: Allen, M.W., 2012. Michael Allen’s E-learning annual 2012, San Francisco, Calif.: Pfeiffer.
  • Shepherd, C., 2011. The new learning architect, Chesterfield, U.K.: Onlignment.