Vivien writes, following my Maturing of the MOOC report, which, unusually for a formal literature review, made wide use of internet content that was not peer reviewed:
As you have highlighted they are rich sources of opinion, and often contain research data that people haven't got round to publishing.
A good observation by Vivien: the emergent forms of knowledge production that are replacing publishing and peer-reviewed articles rely heavily on web products and tools for building knowledge - and this calls for us to be critically aware of our practice.
So, in the interests of openness, here's a description, for Vivien and anyone else who is interested, of the method we used when compiling the UK Government report The Maturing of the MOOC.
The approach to working with blogature on MOOCs was composite and improvised.
The starting point was to identify the most widely followed and cited bloggers in the field of ed-tech. This included those who blogged under their own moniker, in umbrella blogs (like e-Literate), and as guest bloggers in specialist titles. My key criterion was to tap the blogs of people with knowledge and influence; I felt it was not adequate merely to review the blogs that were big on the web. For a couple of issues which I felt were important, but where I knew the field was slow and dozy (FE, the developing world), I pushed a bit harder into more obscure and less well followed blogs in order to reach relevant content. But by and large, this was an ad hoc blogger establishment on the topic.
Once I'd got a list of key writers in blog format, I -
- Analysed their blog rolls, citations, and comments to check for others I might have missed
- Used the search term MOOC within the blog pages of these authors to identify relevant articles
- Also looked through their Twitter feeds, using the #MOOC tag, to check what else they had read and thought worthy of passing on.
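The two search steps above could be sketched as simple query-building helpers. This is only an illustration of the idea, not the tooling actually used for the report; the blog domain and Twitter handle are hypothetical placeholders.

```python
# Illustrative sketch: build the author-centred searches described above.
# Domains and handles are hypothetical, not the actual list used.

def blog_search_queries(blog_urls, term="MOOC"):
    """Build one site-restricted web-search query per author's blog."""
    return [f"site:{url} {term}" for url in blog_urls]

def twitter_tag_query(handle, tag="#MOOC"):
    """Build a query for a hashtag within one author's Twitter feed."""
    return f"from:{handle} {tag}"

# Hypothetical examples of the queries this produces:
print(blog_search_queries(["edtech-blog.example", "mooc-watch.example"]))
print(twitter_tag_query("example_author"))
```

Each query string can then be fed to an ordinary search engine, keeping the search anchored to the chosen writers rather than the open web.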
So there was a method up to this point.
Inevitably, my choice for review and critique in the publication, after I had made this selection of articles by these writers, was partial and not methodical. The output was too large to read in full, let alone report on. So the tried and trusted researcher's method of a lot of fast skimming, and a little focused analysis, came to my aid.
One could perhaps also have applied at this stage some kind of methodological criteria such as recency, number of comments/trackbacks, etc., but I didn't. I merely applied judgement and knowledge of the field in choosing the elements of the blogature that made it into the report. The only safety checks to ensure this did not introduce distortion were having two experienced senior reviewers on the writing team, each with a particular institutional focus, and giving the draft final text to four knowledgeable experts for a glance. My other personal rule was that if a blog had generated an exchange or critical response among my core of key writers, it always deserved a proper reading.
Search engines were very much a back-up at the very end, just to be sure I hadn't missed stuff that was popular but not yet reported in the professional discourse. Web searches didn't actually throw up much analytical writing on the topic that I didn't already know. But there could have been circularity in my method, so it was important to get the outside perspective of search as a control.
For Maturing of the MOOC, we did an exercise of testing the Search Results Page (SRP) for the term MOOC over several weeks, at weekly intervals, using both the Bing and Google engines. This allowed us to index the type and sentiment of web referrals to blogs, and to evaluate the volume of blogging on the topic as opposed to other content such as ads, journals, course listings etc. This was not an exercise in locating blog content, but rather in recording how the web-search snapshot of the issue evolved over time. It threw up some intriguing suggestions. I'd like to think more about how this kind of web tracking exercise could be improved and automated by researchers. Search engines aren't "clean" - they are contaminated by one's own search history, the algorithms' interpolation of information about you and your interests, and the evolving nature of the algorithms themselves. However, they do tell us something.
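On the question of automating this kind of weekly SRP tracking: one minimal sketch would be to classify each result on a snapshot into a rough content type and tally the types per week. The category keywords below are illustrative heuristics of my own, not the scheme used for the report, and the sample results are invented.

```python
# Hedged sketch: tally content types in one weekly SERP snapshot.
# Category rules are illustrative keyword heuristics, nothing more.

from collections import Counter

CATEGORY_RULES = [
    ("course listing", ("coursera", "edx", "udacity", "enrol")),
    ("journal",        ("journal", "doi.org", "jstor")),
    ("ad",             ("sponsored",)),
    ("blog",           ("blog", "wordpress", "blogspot")),
]

def classify(result):
    """Assign one rough content type to a (title, url) pair."""
    text = " ".join(result).lower()
    for category, keywords in CATEGORY_RULES:
        if any(k in text for k in keywords):
            return category
    return "other"

def snapshot_counts(results):
    """Tally content types for one weekly SERP snapshot."""
    return Counter(classify(r) for r in results)

# Invented sample snapshot, for illustration only:
week1 = [
    ("What is a MOOC?", "https://someblog.wordpress.example/mooc"),
    ("MOOC courses online", "https://www.coursera.org/moocs"),
    ("MOOCs in higher education", "https://journal.example/mooc-study"),
]
print(snapshot_counts(week1))
```

Comparing these tallies week by week gives a crude time series of how the search engines' picture of the topic shifts, though the contamination problems noted above (personalisation, algorithm drift) would still apply.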
What worried me, when I was conducting all this research in the way described above, was that any method built around blogging privileges individuals of authority, existing networks, and the ongoing debates of those with a vested interest in the phenomenon. It foregrounds a discourse that is Western, articulate, English-speaking, post-holding, institutional, continuous and engaged. Other voices that were important, reporting experiences that were significant, might be overlooked if they did not conform to this social and discourse framework. I responded to this risk in two ways.
Content about failure and rejection is not as frequently found as the number of drop-outs from MOOCs suggests it should be, so where I found it, I gave it more editorial weight than it deserved on the basis of frequency alone. Additionally, I gave the research a slight twist (this is my personal political trait, and I don't claim it's a reputable method): I looked in a number of fringe, controversial, politically/geographically slanted and subaltern fora. These included a Francophone and Marxian perspective from Le Monde Diplomatique, some Chinese, Indian and African educational research networks I know, and teacher and student networks (often amplified through unions) expressing a rejection of MOOCs. Another technique I use to locate such voices is to search strings like "rubbish MOOC" or combinations like "terrible", "MOOC", "harm" to see if there is an undercurrent of dissenters. These did throw up a broader range of content and I was glad to have found it. As it happened, the brief for this publication (education policy in a developed economy) did not require them to be cited or analysed. But I was able to point to their existence and advise a watching brief on them.
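The dissent-probing searches just described can be generated systematically. This is a small sketch of the idea; the term list below is illustrative (only "rubbish", "terrible" and "harm" appear in my account above, the rest are my own additions for the example).

```python
# Illustrative sketch: pair the topic with negative terms to surface
# critical voices. Term list is partly invented for the example.

NEGATIVE_TERMS = ["rubbish", "terrible", "harm", "failure", "waste"]

def dissent_queries(topic="MOOC", terms=NEGATIVE_TERMS):
    """Combine the topic with each negative term as quoted search phrases."""
    return [f'"{term}" "{topic}"' for term in terms]

print(dissent_queries())
```

Running each query and eyeballing the first page or two of results is usually enough to tell whether an undercurrent of dissent exists, even if none of it ends up being cited.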