Showing posts with label Action Research. Show all posts

Wednesday, 12 February 2014

Re-affirming the project outline

My project outline has remained a little vague, so I thought I would try to add more details to it...

We currently provide secondary qualifications for 14-19 year olds. This work depends on a network of some 35,000 teachers and other experts to help set and mark examinations, with technology playing an increasingly pivotal role in ensuring fast and scalable transfer of marks. We are continually exploring new technologies for mark capture and transfer to ensure the best possible service for candidates.

Trialling and adopting a new marking technology requires training provision for large numbers of examiners. Previous technology adoptions have depended on government funding for their initial success, but this funding is no longer available. Training provision has increasingly moved towards the creation of online software demonstration videos and interactive simulations, now hosted on a secure Learning Management System (LMS).

We are currently piloting a new marking technology with a very small number of examiners who have received face-to-face training, and are now looking to move to exclusively online training as soon as possible. With previous technology adoptions the online provision has been developed largely through internal discussion after face-to-face training and released without a live test for examiners. For this project, the online learning materials are being developed alongside the first live pilot of the technology, with an opportunity for early testing and feedback.

We will be using action research methods to inform improvements to the online learning materials and identify additional support methods prior to general release. This research will be expanded for the live use of the software during the summer examination series, with a view to providing both evaluation of success and action research for practitioner development.

Saturday, 14 December 2013

Time for a numbers game?


Following on from my last blog post, I'm re-treading the sequence of reading from our Research Methods module to get my bearings again, and I'm coming back to the question of qualitative vs quantitative research. While I strongly identified with the Action Research methodology on my last project, it's worth deliberately opening up my mind to new possibilities, especially as there will be strong interest in some kind of numerical data from colleagues and external auditors if we are questioned on our approach.

So before I start to choose which quantitative disciplines I might wish to draw on, I'll look at the key aspects of quantitative research, consider those that appeal to me, and those I wish to avoid.



Concern with theory

Relating my findings to theory will be helpful to ensure some kind of tether to related work, but there's a danger of getting obsessed with reproducibility and control here. When you're moving into the realm of on-demand learning, you can't guarantee learning outcomes, nor indeed that learners will even access the materials or activities that you produce for them. Newby (2010, p.96) acknowledges the limitations for educational researchers trying to identify patterns and control influences, as they are only able to view a small part of the overall education system. I would prefer to think in terms of praxis (Wheeler, 2013), which requires practitioners to consider how closely their practice overlaps with the theories they identify with.

Concern with proof


Here lies one of the real problems for educational research - although I understand that establishing proof would give greater peace of mind, the complexity and ambiguity of the situation makes this extremely difficult:
  • The situation I face will not be the same as the one another practitioner faces, even if our verbal descriptions of them seem similar to the untrained eye
  • The next situation that I (and the learners) face will not be the same as this one, even if it's 'just another e-marking system'
  • The time needed to establish proof would be completely at odds with the time pressures for the project, where the learning is 'on-demand'.
The best that I can hope for is to show that using theories to guide my design leads to a dependable business outcome, and that particular methods or techniques are better suited to my situation.

Identification of variables

This is one of the key aspects of quantitative theory that I see as helpful. Although my control over most of the independent variables involved will be limited, to say the least, it will definitely be helpful to make some systematic effort to identify variables in the design of materials that may be having an effect, and to measure any dependent variables of interest. Our particular concerns would be the performance of examiners, and their intention to continue based on their experiences. Attempting to correlate these with participation in the different aspects of the support might yield useful insights into which components have succeeded, but this would have to be linked to effective practice in design.

Simply saying that an approach should be abandoned because it doesn't seem to have an effect in this situation would be potentially misleading without some understanding as to why. Creswell (2009, p.49) refers to confounding variables (e.g. discriminatory attitudes) that can come into play, which I have had some experience of when trying to introduce online learning methods in the past. Participants who are negative about the use of the tools go to great lengths to discredit them when given the opportunity to do so, whilst the majority of participants actually acknowledge a positive effect.
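The kind of correlation I have in mind could be sketched like this. Everything below is hypothetical: the column names, the participation measures, and the scores are invented for illustration, not drawn from the real project data. A rank correlation (Spearman's) seems a reasonable default because it assumes only a monotonic relationship, not a linear one.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-examiner data: participation in online support
# components against a marking-accuracy score (illustrative values only).
data = pd.DataFrame({
    "videos_watched": [0, 2, 3, 5, 6, 8, 9, 10],
    "forum_posts":    [0, 1, 0, 2, 4, 3, 6, 7],
    "accuracy_score": [71, 74, 73, 80, 82, 85, 88, 90],
})

# Spearman's rank correlation between one participation measure and the
# dependent variable of interest.
rho, p_value = stats.spearmanr(data["videos_watched"], data["accuracy_score"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

Even a strong correlation here would only flag a component as worth investigating; as noted above, it says nothing about *why* an effect occurs, and confounding variables could still be at work.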

Conclusion

This project will benefit from the use of some quantitative approaches to analysing data about examiner performance and intention to continue, but these will need to be paired effectively with qualitative methods to understand what variables relating to the choice and design of approaches might be influencing the outcomes. My next blog post will focus on which type(s) of quantitative research method might be useful, followed by a look at rational design approaches for the learning provision.

References:

  • Creswell, J. W. (2009). Research Design: qualitative, quantitative and mixed methods approaches. (3rd edition) Sage.
  • Newby, P. (2010). Research Methods for Education. Pearson Education Limited.
  • Wheeler, S. (2013). Praxis makes perfect. Learning with e's [blog] 31 October. Available at: <http://steve-wheeler.blogspot.co.uk/2013/10/praxis-makes-perfect.html>

Sunday, 8 December 2013

Back to where it all began


After a year or so of working on staff development programmes and doing a lot of creative and novel work, I'm back to where my career in online learning really began: making tutorial videos and briefings for examiners. As always, the time pressure is intense and I'm largely working solo - while some might see this as stressful I'm actually looking forward to it, because it's a welcome chance to really reflect on the circumstances where I first honed my skill set.

'Experience: that most brutal of teachers. But you learn, my God do you learn.'
C. S. Lewis

The tools changed several times in the first few years. We adopted our first recording software because we had already invested in related software - I won't specify which software here, because it may have moved on since. You could record your screen with live narration and publish the material easily via a hyperlink. The results could be reasonable, but it wasn't without its flaws: in particular, editing could be a real nightmare if you made any mistakes in your recording, and quality seemed to degrade with each edit.

The second time around we started using Techsmith Camtasia Studio, which was a massive revelation. Screen recording was a great deal sharper, the editing process was vastly improved and I quickly found the value of highlighting, captions, and zooming & panning the view to draw attention to relevant areas. We now had the freedom to publish to good quality video formats, and with backup from the web team we could publish videos to an orphan page for examiners to view.


Finally we moved on to Adobe Captivate, which we have stuck with ever since for software demonstrations. It's a lot more technical than other software, which may put some people off, but it's allowed me to move forward with creating more interactive material (particularly simulations). When we finally moved over to our own LMS, Captivate could publish SCORM packages with all the e-learning information we needed.


I've learned to stay mindful of the advice from Henderson (2012) to avoid being trapped by the tools, and that of Toth (2012) to choose the right tool for the job, so I always look for opportunities to use different e-learning tools. However, I am finding it harder to take on new tools as my time gets increasingly bound up in development, so perhaps now isn't the time to take on something new for the recording. Instead I'll be looking to draw on the advice from Shepherd (2011) about carefully considering the context of your learners and what support they might need. In subsequent posts I'll be drawing up outlines for the additional approaches that might be used, and the opportunities to draw in different tools.


I'm automatically thinking of using Action Research methodology, since it was successful for my last project, but this talk of not being trapped by your tools has made me pause. Perhaps it's worth re-treading some of the exercises from the Research Methods course to make sure that Action Research, and indeed qualitative research, is the correct approach.


References:

  • Henderson, A., 2012. Don't get trapped by your e-learning tools. In: Allen, M.W., 2012. Michael Allen’s E-learning annual 2012, San Francisco, Calif.: Pfeiffer.
  • Toth, T.A., 2012. The right e-learning tool for the job. In: Allen, M.W., 2012. Michael Allen’s E-learning annual 2012, San Francisco, Calif.: Pfeiffer.
  • Shepherd, C., 2011. The new learning architect, Chesterfield, U.K.: Onlignment.

Tuesday, 14 May 2013

Selection of research method

Draft section of my reflective assignment, based on the blog post ‘A little more action (research) please!’

I have been forced to challenge my own predisposition towards quantitative research methods, which was influenced by my physical sciences background. I identified this bias at an early stage in my writing, and found the distinction with qualitative research better defined in my mind by reading the comparisons made by Creswell (2009, Ch.7) and Newby (2010, Ch.3). I also realised that my previous experiences lacked any real involvement in the formal planning of research; my previous projects had always been funded without my having to submit research proposals myself. I decided that qualitative approaches seemed better suited to my context, but it took some time to fully challenge my unconscious habits. I was able to identify possible sources of bias towards theoretical models that I had used (Salmon, 2004).

However, it took some additional reading (and re-reading) to fully isolate my unconscious assumptions. After extensive reading about how to create both quantitative and qualitative research proposals, I believed that I had created a set of reasonable questions for qualitative research. My initial research questions were phrased as ‘What effect does...’ and ‘How does...affect...’ Only by revisiting some of the initial reading did I notice that, despite my initial conclusion that qualitative research would be the best approach, I had automatically designed my questions in a directive way that would lead to bias towards theory rather than interpreting participants' responses from a neutral standpoint.

After some more reading, I was able to present a much more complete and reasoned overview of my research proposal, showing a great deal more thought. This revised research plan fitted much more closely with the principles of action research (Creswell, 2009; Newby, 2010 pp.623-4; O’Brien, 1998), and represents a successful change in my thinking about research.

‘Re-learning means to abolish some toxic assumptions’ (Leonhard, 2013)

References:
  • Creswell, J. W. (2009). Research Design: qualitative, quantitative and mixed methods approaches. (3rd edition) Sage.
  • Leonhard, G. (2013). Beyond the obvious: re-defining the meaning of learning in a networked society. (video online) Available at: https://www.annotag.tv/learningtechnologies/play/18320
  • Newby, P. (2010). Research Methods for Education. Pearson Education Limited.
  • O'Brien, R. (1998). An Overview of the Methodological Approach of Action Research. (online) Available at: http://www.web.ca/~robrien/papers/xx%20ar%20final.htm (Accessed March 2013)
  • Salmon, G. (2004). e-Moderating: The Key to Teaching and Learning Online. 2nd ed. London: Routledge-Falmer.

Saturday, 11 May 2013

Validity

There is a danger of intertwining the concepts of reliability and validity, so I'm attempting to address them in separate posts.  Cohen, Manion & Morrison (2013, pp. 177-99) devote a great deal of attention to the concept of validity, drawing on a variety of sources to lend weight and richness to the discussion.  Several of the points (p.180) raised are applicable to my research data:
  • The natural setting is the principal source of data
  • Context-boundedness and 'thick description'
  • Data are socially situated
  • The researcher is part of the researched world
  • The researcher - rather than the research tool - is the key instrument of research
  • The data are descriptive
  • There is a concern for processes rather than simply with outcomes (inherent in my choice of action research)
  • Data are analysed inductively rather than using a priori categories
  • Data are presented in terms of the respondents rather than researchers
There are some points to beware of that might undermine the validity of my research - I will also list how these can be addressed:
  • Reactivity (internal validity) - I am aiming to improve the processes involved in the learning programmes; this should not affect the data for previous cohorts, although the current cohort could potentially be affected by knowing that they are being observed.  This also touches on the issue of researcher bias mentioned by the authors
  • Consensual validity (external) - since I am undertaking this research as a learning experience, will 'competent others' dismiss my findings due to my inexperience, or because they think I'm simply forcing the data to fit so that I pass my assessment?
Triangulation is dealt with as a means of ensuring validity.  Newby (2010, pp. 121-3) also mentions this technique, although in somewhat less detail.  So to what extent do my research methods lend themselves to this?
  • Time triangulation - I have used the same method for a number of cohorts, although this effect may be diminished because I did not carry out the survey for each group immediately following their participation, so earlier groups may not recall their experiences as accurately.
  • Theoretical triangulation - since I have avoided basing my questions on one particular theory, there is the opportunity to compare the results from the point of view of competing theories for social and online learning.
  • Investigator triangulation - the data are recorded electronically, so potentially other researchers could give their own interpretations.
  • Methodological triangulation - the same method has been used on different groups, so I can easily compare the results of each group to consider how well the results support conclusions for each group.
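That last kind of cross-cohort comparison could be given a simple numerical check. The sketch below is entirely hypothetical: the cohorts and their questionnaire ratings are invented, and the Kruskal-Wallis test is just one reasonable non-parametric choice for comparing more than two groups of ordinal survey data.

```python
from scipy import stats

# Hypothetical satisfaction ratings (1-5) from three cohorts that all
# completed the same online questionnaire (illustrative values only).
cohort_a = [4, 5, 3, 4, 4, 5]
cohort_b = [3, 4, 4, 3, 5, 4]
cohort_c = [4, 4, 5, 3, 4, 5]

# Kruskal-Wallis asks whether the cohorts could plausibly come from the
# same distribution; a large p-value is consistent with the groups
# corroborating one another's results.
h_stat, p_value = stats.kruskal(cohort_a, cohort_b, cohort_c)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```

A test like this would not establish validity by itself, but it would offer a quick sanity check that the different groups are telling a broadly consistent story.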
General points that have contributed to validity on this project:
  • Choosing an appropriate methodology for answering the research question - action research.  This has ensured that the focus is on processes - not outcomes, which I might be interested in unfairly interpreting!
  • Selecting appropriate instrumentation - using online questionnaires allowed the data to be gathered according to the time needs of respondents, and allowing them access to reminders (their forum postings) whilst responding to questions.
Limitations here include:
  • Sampling - by not requiring participation in the survey, I sacrificed control over sample sizes, which could potentially limit the validity (and reliability) of results.  However I considered that sensitivity to participants' wishes was of greater importance in this case, since some people are uncomfortable with the use of online forums in the first place.
Overall I believe I have sufficiently addressed issues of validity, but there are clearly many others that I have not encountered yet, including the points concerning data interpretation.

References
  • Cohen, L., Manion, L. & Morrison, K. (2013). Research Methods in Education. Routledge.
  • Newby, P. (2010). Research Methods for Education. Pearson Education Limited.

Sunday, 28 April 2013

A little more action (research) please!

I've been thinking about some of my learning experiences during the course, and I've picked out one of the most critical for getting my qualitative research project on the right tracks.

When I was devising my research questions, the original versions came out like this:

Primary research question:
"What effect does online interaction have on participants' approach to learning?"

Secondary research questions:

1. "How does online socialisation before a face-to-face training event affect interactions within that event?"
2. "How does online interaction before a face-to-face training event affect participants' preparation for that event?"
3. "How does the opportunity for online interaction after a training event affect the application of that training in the workplace?"
4. "What would be the effect of allowing participants to contribute anonymous comments to online discourse?"

Fortunately I was reading through the advice given by Creswell (2009, Ch.7) on devising research questions for qualitative research, and I realised that my choice of words was completely inappropriate. Using the word 'affect' (or 'effect') naturally leads towards a more quantitative result because it is inherently directional for responses, as opposed to the exploratory nature of asking people to describe their experiences.

A secondary learning experience that is occurring even as I write this is revision of some of the points I had originally put into my wiki pages. I described my change in questions as being due to advice that Creswell gives specifically about action research, but I've actually misread something again - the advice was simply about qualitative versus quantitative. But I digress...

With a much clearer mind, I can re-define my overall goal for this research project as being to draw out participants' experiences of using online discussion alongside a face-to-face training event, without seeking to establish whether the effects are positive or negative through the questions themselves. So my revised set of research questions comes out as:

Primary research question:
"How would participants describe their experiences of using online interactions to support a face-to-face training event?"

Secondary research questions:
1. "Describe your experience of socialising with other participants who you interacted with online before the training event"
2. "Describe your experience of preparing for a face-to-face event where online interaction was required, relative to an event with no prior interaction"
3. "Describe your experience of participating in online interactions around course related content after the face-to-face event"


I've deliberately left out the question of anonymity - not sure I want to open that can of worms right now!

Reference:
Creswell, J. W. (2009). Research Design: qualitative, quantitative and mixed methods approaches. (3rd edition) Sage.