Sunday 28 April 2013

A little more action (research) please!

I've been thinking about some of my learning experiences during the course, and I've picked out one of the most critical for getting my qualitative research project on the right track.

When I was devising my research questions, the original versions came out like this:

Primary research question:
"What effect does online interaction have on participants' approach to learning?"

Secondary research questions:

1. "How does online socialisation before a face-to-face training event affect interactions within that event?"
2. "How does online interaction before a face-to-face training event affect participants preparation for that event?"
3. "How does the opportunity for online interaction after a training event affect the application of that training in the workplace?"
4. "What would be the effect of allowing participants to contribute anonymous comments to online discourse?"

Fortunately I was reading through the advice given by Creswell (2009, Ch. 7) on devising research questions for qualitative research, and I realised that my choice of words was completely inappropriate. Using the word 'affect' (or 'effect') naturally leads towards a more quantitative result, because it steers responses towards a directional, cause-and-effect framing, as opposed to the exploratory nature of asking people to describe their experiences.

A secondary learning experience, occurring even as I write this, is revising some of the points I had originally put into my wiki pages. I had described my change in questions as being due to advice that Creswell gives specifically about action research, but I had actually misread something again - the advice was simply about qualitative versus quantitative research. But I digress...

With a much clearer mind, I can re-define my overall goal for this research project as being to draw out participants' experiences of using online discussion alongside a face-to-face training event, without seeking to establish whether the effects are positive or negative through the questions themselves. So my revised set of research questions comes out as:

Primary research question:
"How would participants describe their experiences of using online interactions to support a face-to-face training event?"

Secondary research questions:
1. "Describe your experience of socialising with other participants who you interacted with online before the training event"
2. "Describe your experience of preparing for a face-to-face event where online interaction was required, relative to an event with no prior interaction"
3. "Describe your experience of participating in online interactions around course related content after the face-to-face event"


I've deliberately left out the question of anonymity - not sure I want to open that can of worms right now!

Reference:
Creswell, J. W. (2009). Research Design: Qualitative, Quantitative and Mixed Methods Approaches (3rd ed.). Sage.

Evaluating online communities

Thoughts on Ke & Hoadley (2009)
 
Important point to consider: do we expect online learning communities (OLCs) to appear spontaneously or through design? The review inherently seems to favour studies of communities that are well-defined in terms of support and structure, whether carefully designed or naturally evolved.

Taxonomy of online learning community evaluations

The authors recognise that this is a divergent research area and attempt to categorise studies in terms of four key components:

1. Evaluation purpose
Notable distinction between proving and improving purposes – convincing organisations that the community has a value at all, versus looking for ways of systematically enhancing the interactions within it. In my case the interactions don't yet amount to a community, so the emphasis is on making interactions sustainable and of benefit to participants.

2. Evaluation approach
Approaches were sometimes summative (usually for proving) and sometimes formative (for improving), occasionally with elements of both. There are also participatory and responsive approaches – a choice of whether or not to include participant evaluation. Oliver (2000) and Patton (1997) are cited as the primary references here. My approach will be based entirely on participant responses, with a view to formative evaluation of the interactions.

3. Measures for evaluation
Outcome vs process measures. The outcome view looks at the community as a static system, evaluating the raw technical set-up of the environment and the learning outcomes. Process evaluation takes an in-depth look at the factors that facilitate or impede learning within the system. My study will need to focus on the process, with a possibility for pairing this up with outcome evaluations from colleagues.

4. Evaluation techniques
The authors seem to use the term 'objective' in place of 'quantitative'; they also refer to qualitative and mixed-method approaches. An important distinction between the two main forms is made – objective approaches deliberately remove context from the data, focusing on what can be directly compared across studies, while qualitative studies allow more direct insight into the learning processes within a community. I will be focusing on qualitative approaches, although there is potential for identifying the most useful factors for building up an objective measure of communities in future studies.

Conclusions
This gives a very good critical analysis of the factors at play in evaluating OLCs, and can serve as a guide point relating to the higher-level discussions of Newby (2010), Creswell (2009) and Colquhoon (2006). The authors also point out a good number of shortcomings in current research. Partly these are due to researchers performing their studies for their own purposes, rather than designing them to fit conveniently into the wider body of research. They also point out that offline interactions between participants play a large role in the actual learning process, yet these are very difficult to find any record of. There are also no studies that show how a community has evolved over time.

The researchers' long-term goal is to establish a framework for understanding OLCs, possibly leading towards a central theory. They make no reference to the five-stage model identified by Salmon (2003), although they do refer to phases of community development (Palloff and Pratt, 1999). It will be useful to pursue this systematic approach to inform my own understanding, and to note whether any similarities or contradictions emerge with Salmon's model, which has previously been central to my understanding of online learning interactions.

References:
  • Colquhoon, D. (2006). Research Methods in Education Contexts. University of Hull.
  • Creswell, J. W. (2009). Research Design: Qualitative, Quantitative and Mixed Methods Approaches (3rd ed.). Sage.
  • Ke, F. and Hoadley, C. (2009). Evaluating Online Learning Communities. Educational Technology Research and Development, 57(4), pp. 487-510.
  • Newby, P. (2010). Research Methods for Education. Pearson Education Limited.
  • Salmon, G. (2003). E-Moderating: The Key to Teaching and Learning Online (2nd ed.). London: Routledge-Falmer.

Thursday 18 April 2013

Thoughts on Conrad

I've been going into more details on my literature review, and considering how other authors structure their writing, particularly from the perspective of what I find useful, and how it might influence my own writing. The first article is an interpretive study by Conrad (2002).
 
Conrad makes the point that while quantitative studies can give a useful overview of the area, understanding the experiences of users is a priority for the development of communities. Although my context is somewhat different, her point about learners creating their own lines of defence sounds quite telling in the light of some initial comments observed in my study. Without understanding what barriers people put up, we can't expect to engage meaningfully with them! Likewise, her point about our research agendas being shaped by our worldview fits with my own research, which is guided by an exploration of how network effects are re-shaping society, and the subsequent effects on education systems, both technological and organisational.

Writing a section about who, what, where, etc. is a useful step for grounding the paper and setting the boundaries of the study, particularly the limitations of what might be achieved in the first place. She also breaks down the literature review itself to define different terms, building up how she wants the reader to understand her use of the term 'online community'. She then examines each of the research questions in turn, looking at general patterns in responses, followed by particularly insightful comments from individuals. This approach will probably work very well for me, as I need to spot general patterns as well as bring out individual experiences.

Reference:
Conrad, D. (2002). Deep in the Hearts of Learners: Insights into the Nature of Online Community. Journal of Distance Education, 17(1), pp. 1-19.