Wednesday, September 10, 2014

Good MOOC bad MOOC: report 2 #flansoton #futurelearn

I’m at the one-day FutureLearn Academic Network (FLAN) meeting in Southampton, and this is my second report. By the way, tweets are at https://twitter.com/search?q=%23flansoton&src=typd.
First up after the coffee break was Naomi Colhoun (Sheffield University) with “Learning from Learning Analytics: Can data analysis of a FutureLearn MOOC usefully inform design for learning?”. I supervised her Master's dissertation on this topic, and therefore know a lot about it, so I will do a separate blog post on it!
Rebecca Ferguson (OU) then talked about “Evaluating Educational Impact and Learner Support within OU [Open University] MOOCs”. As an aside, she started by showing an interactive map of MOOC retention created by one of their PhD students: http://www.katyjordan.com/MOOCproject.html. Ferguson identified the range of people and units in the OU who are interested in MOOC evaluation, and the variety of reasons why evaluation is needed: for example, for internal and external comparison, and to support the journey from informal to formal learning. In terms of "what learning" is evaluated, this included learning about the subject, about the online learning tools and about the people. Learning is also undertaken not just by learners but by the course team, the facilitators and the university itself.
In terms of what FutureLearn provides on its MOOC dashboard, there are basic statistics on joiners, leavers, who has posted a comment and so on. MOOC providers can also set up surveys in individual MOOCs (e.g. asking learners whether they think they have learnt anything). However, one issue is that end-of-course surveys are inevitably filled in by course completers, who are the minority. You can also look at how many people have participated in an activity. The OU (which is entirely a distance-learning university) has a standard learning design tool, which it also uses for MOOCs.
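As an aside from me: to make the participation counting concrete, here is a rough Python sketch of how one might count distinct learners per step. The file name and column names ("learner_id", "step") are my own invention for illustration, not FutureLearn's actual export format:

import csv
from collections import Counter

def participants_per_step(path):
    # Count distinct learners who touched each step at least once
    seen = set()
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            key = (row["learner_id"], row["step"])
            if key not in seen:
                seen.add(key)
                counts[row["step"]] += 1
    return counts

# e.g. participants_per_step("step_activity.csv") might return
# something like Counter({"1.3": 950, "1.1": 800, "1.2": 760})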
One example the speaker picked out: in a couple of weeks of the course, more people were completing step three than steps one and two, yet the learner support was in steps one and two. They can now investigate why this is (e.g. do steps one and two have boring names?). The speaker talked about text mining (e.g. for phrases like "I learnt") as evidence of learning. She also talked through an example of a high-frequency word that initially did not seem relevant to the MOOC, but had actually been a focus for community building and learning.
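A crude illustration, from me, of that kind of phrase search in Python; the phrase list and the assumption that comments sit in a CSV with a "text" column are mine, not details from the talk:

import csv
import re

# Phrases taken as rough evidence of self-reported learning (illustrative)
LEARNING_PHRASES = [r"\bi learnt\b", r"\bi learned\b", r"\bi now understand\b"]

def learning_evidence(path):
    # Return the comments containing any of the learning-evidence phrases
    matches = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if any(re.search(p, row["text"], re.IGNORECASE) for p in LEARNING_PHRASES):
                matches.append(row["text"])
    return matches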
The OU MOOCs have facilitators. Turning to support, the speaker identified an explicit support structure to help with motivating, socialising, information exchange, knowledge construction and learning development. The MOOC team also identified potential risks to this support structure (e.g. learners concluding that the university's learner support is poor), and these risks can be used in evaluating the support. Examples of things they looked at were the facilitator introductions and facilitator activity (e.g. posts, word count, likes of their comments); this included whether facilitators were modelling good behaviour (e.g. were they following people, liking things and encouraging learners to like things?). The evaluating team could search for comments mentioning the facilitators and for the kinds of things the facilitators were saying. One of the things that emerged was that facilitators needed more training in the specific features of the FutureLearn platform. This carries a lesson for learners too: experienced online learners may come with certain expectations of the platform, and need educating specifically in FutureLearn's features.
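Again purely as a sketch of the facilitator-activity side (posts, words written, likes received), assuming a hypothetical comments CSV with "author_id", "text" and "likes" columns and a known set of facilitator IDs; the real data and metrics used by the OU team may well differ:

import csv
from collections import defaultdict

FACILITATOR_IDS = {"fac_01", "fac_02"}  # placeholder IDs, purely illustrative

def facilitator_activity(path):
    # Tally posts made, words written and likes received per facilitator
    stats = defaultdict(lambda: {"posts": 0, "words": 0, "likes": 0})
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            author = row["author_id"]
            if author in FACILITATOR_IDS:
                stats[author]["posts"] += 1
                stats[author]["words"] += len(row["text"].split())
                stats[author]["likes"] += int(row["likes"])
    return dict(stats)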
Rebecca Ferguson's blog is at http://r3beccaf.wordpress.com/
I should mention that there is a Facebook group for FLAN (where the presentations will be posted), but I'm afraid it is closed to FLAN members only.
Photo by Sheila Webber: mallow by the Cherwell, September 2014
