Posts Tagged ‘learning analytics’
In late October I attended Elliott Masie’s Learning Conference. I’ve blogged extensively about each individual session, but want to use this post to lift out the larger themes I saw at the event and to ask corporate learning departments a few challenging questions related to these themes.
A few years back Wayne Hodgins and Erik Duval started talking about the Snowflake Effect. They gave examples of media channels providing personalized offerings (think Last.fm) and could see this coming for learning too. Every learner is different (just like a snowflake) and has individual needs. Richard Culatta did a talk on personalized learning that resonated with his audience. He had a simple definition of what it means to personalize: you adjust the pace, you adjust the learning approach and you leverage the learner’s experiences and interests.
I would like to pose the following challenge to the corporate learning department: For every learning experience that you design, do you ask yourself: How would I design this if I had an audience of one?
Mobile and Video
The two hottest technologies at the conference were clearly mobile and video. Mobile learning technology is still in its early stages. There was a lot of debunking and few excellent or even interesting examples. I guess you could say that mobile learning is in the “trough of disillusionment” from the perspective of Gartner’s Hype Cycle.
Video seemed to be further along the curve as there were many more concrete examples of video being used for learning (my personal favorite was how Masie kept connecting “over video” to people who were standing in the room next door). I was disappointed to see that most debates were very practical (e.g. about what equipment to use and how to create good quality audio) and often did not discuss how best to use video in learning. The practical debates occasionally lacked a bit of depth too. I didn’t hear anybody talk about searching, annotating and indexing video for example.
A few challenging questions for the corporate learning department: Have you invested in a platform to deliver video? Can this platform deliver to mobile devices? How do the videos get (socially) contextualized? Is there a way to Bring Your Own Device (BYOD) into the company, are you connected with the team that works on this?
Do-It-Yourself or Self-Directed Learning
Two trends are pushing this forward:
- Many companies are turning into information companies with knowledge workers doing complex tasks. These knowledge workers are the only people who can understand their job (barely!). This makes programmatic (i.e. curriculum based) learning offerings designed by others largely ineffective.
- The world is incredibly connected and the tools for collaboration can, for all practical purposes, be considered to be free. People can organize their own learning groups.
My challenge to the learning department is the following: Which of the five DIY imperatives (devolve responsibility, be open, create experiences rather than content, provide scaffolding and stimulate reflection) are you practicing?
IT Development Methodologies for Learning Content Development
I attended two sessions that explicitly talked about IT development methodologies applied to learning content development: one about using hackathons, the other about Agile. There is a lot of inspiration to be found in how people write software that can be applied to how people develop learning (yes, I do understand the irony of this if you compare it to the previous point: but I still think designed experiences are useful on many occasions). If you look closely at the principles behind the Agile manifesto, you see how easily they can be translated to learning:
- learner satisfaction through rapid delivery of useful learning experiences
- welcome changing requirements, even late in development
- learning experiences are delivered frequently (weeks rather than months)
- sustainable development: able to maintain a constant pace
- close and daily co-operation between business people and developers
- face-to-face conversation is the best form of communication (co-location)
- projects are built around motivated individuals (who should be trusted)
- continuous attention to technical excellence and good design
- simplicity (the art of maximizing the amount of work not done) is essential
- self-organizing teams
- regular adaptation to changing circumstances
So here is my challenge for the learning department: Do you know and understand the cutting edge IT development methodologies like Agile, Scrum, Extreme programming? Have you thought about how these could be applied to your learning development process?
Massive Open Online Courses (MOOCs)
At the beginning of the year barely anybody had heard about Massive Open Online Courses (MOOCs). Today this seems to be the hottest topic in the educational technology field. Any Masie attendee who hadn’t heard about MOOCs before they came to the conference certainly had heard about them by the time they left. I attended an interesting session by Curtis Bonk. Audrey Watters has probably done the best write-up so far on how they work and what they mean (don’t miss all her other posts on the Ed-Tech Trends of 2012). I also enjoyed this podcast with Arnold Kling which discusses some of the issues with how MOOCs in their institutionalized form work.
I want to create two different challenges for the learning department around MOOCs. The first one is based on the approach of the big universities (xMOOCs): Have you thought about how the principles behind MOOCs around scaling the normal educational process can be applied to your company? Could this be an efficient way to scale a 20 person classroom to a 2000 or 20000 person “classroom”? The second challenge comes from the original MOOCs (cMOOCs): Can you create a corporate course which is divergent, distributed, virtual, exploratory and scales at the same time? What would that course be about?
Brain Science
Most learning professionals don’t spend enough time looking at how our brains work and how that could be used in designing learning experiences. A few years ago John Medina wrote a very readable book translating the current state of brain research into actionable insights.
This year’s Masie conference had two keynote speakers that have created popular science books riding on top of the advances in neurology: Susan Cain on introversion and Charles Duhigg on forming habits. After reading my posts on these, Bert De Coutere connected me to Tiny Habits, a brain science inspired approach to changing behaviour.
Another challenge for the learning department: How many of your design heuristics are based on opinion, mimesis or history rather than on brain science? How do you keep up to date on the latest developments in brain science?
Focus on Cultural (and Organizational) Change
Even though I can’t pinpoint a session that I attended on this topic, I could feel how a shift towards organizational dynamics rather than personal dynamics was underlying many of the discussions. Learning in corporations often is about changing the behaviour or attitudes of large groups of people (I propose to rename the learning department to “the indoctrination department”). Making the organization rather than the learner the unit of change would change many things.
Even though it is early days for this, I would like to put out the following challenge: Imagine that your job is not to make an individual competent, but to change the culture inside an organization (e.g. to become more innovative, or to go from a “service provider” to a “consultative” mindset). What will you do differently?
Data as a Mystery
Learning analytics is all the rage, at the Masie Learning Conference too. Nigel Paine, for example, said the following:
Data is important. You should have the data from your organization and try and get some insights from it. Most people never take the trouble to go through the data.
I have serious issues with the current approaches to learning analytics:
- Learning analytics is nearly always seen as a top-down initiative that can be used to steer and manage. I believe it should be used as an empowerment tool to speed up and enrich the feedback cycle for learners (also see my post on a talk by Erik Duval).
- Everybody seems to be focused on capturing as much data as possible and using fancy (preferably iPad enabled) graphing and dynamic visualization technologies. Nobody seems to be asking interesting questions that can be answered by analyzing data.
My challenge to the learning department is related to that second point: What interesting (and difficult) learning related questions can you get an answer to, now that data capturing and visualization tools have become ubiquitous?
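To make that challenge concrete, here is a minimal sketch of what answering one such question (“where do learners drop out of a course?”) could look like. Everything in it, from the event format to the names and numbers, is invented for illustration and not taken from any particular LMS:

```python
from collections import Counter

# Hypothetical event log: (learner_id, furthest_module) pairs, where
# furthest_module is the last module a learner completed in a course.
events = [
    ("anna", 4), ("ben", 1), ("carla", 4), ("dmitri", 2),
    ("eve", 1), ("femke", 3), ("george", 1), ("hannah", 4),
]

def drop_off_per_module(events, n_modules):
    """Return, per module, how many learners stopped there."""
    stopped_at = Counter(furthest for _, furthest in events)
    return {module: stopped_at.get(module, 0)
            for module in range(1, n_modules + 1)}

print(drop_off_per_module(events, 4))
# {1: 3, 2: 1, 3: 1, 4: 3}
```

The point is not the ten lines of code but the question: “three out of eight learners never got past module one” is an answer you can act on, which no amount of dashboard eye candy gives you by itself.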
Patents and Licensing
I was shocked to hear Elliott Masie talk about a patent troll in the learning technology space. An article by Steven Levy in this month’s Wired gave me some more ridiculous examples. The law is important and if you don’t think about patents, copyright and trademarks then they might come and haunt you later on.
Very few corporations think about the license that they use for their learning content. Often the copyright of any work will just be with the company and all rights will be reserved. This might not be the best or smartest thing to do. Creative Commons licenses are one of the enablers of Open Educational Resources. Creating OERs could lead to much more flexibility around corporate content and might even create synergies in industries that can transcend individual corporations. This is a dynamic space with interesting debates (see the discussion on the non-commercial clause, for example, via Downes).
This is probably the most “advanced” challenge in this post: Have you thought about turning your learning content and courses into open educational resources (OER)? What could be the business case for OER in a corporation?
I would love to hear from you which challenges you’ve decided to pick up. Will you please share them in the comments?
Richard Culatta is with the US Department of Education at the Office of Educational Technology. He is an exceptional speaker and a “smart cookie”; I dig his self-deprecating style.
He kicked off by showing examples of what he calls “pencil sharpening” technology. Pencil sharpening is a metaphor for just making the same thing a little bit better without changing the paradigm. From the traditional blackboard to the digital whiteboard, from the traditional textbook to the e-book, from lectures to webinars. We should not be sharpening our pencils but instead let technology help us with the three challenges that prevent us from breaking with the one-size-fits-all methodology:
Challenge 1: We treat learners the same despite different needs and challenges. The least equitable thing we can do is treat everybody the same.
Challenge 2: We hold the schedule constant and allow the learning to vary.
Challenge 3: Student performance data comes too late to be useful.
The US government’s National Ed Tech Plan tries to address these challenges.
Culatta has made the following formula for personalized learning:
Adjusting the pace + Adjusting the learning approach + Leveraging student interests/experiences = Personalized learning
If you don’t have each of these three elements, then it isn’t personalized. This needs thinking on each of the following: learning, teaching and infrastructure.
Next he shared a set of examples. The one that stuck out for me is School of One, a concept school that creates personal curricula for its students using technology.
Taking a cue from this, Richard suggests we should capture far more formative assessment data from our learners.
He finished by asking us all to download and read the Enhancing Teaching and Learning Through Educational Data Mining and Learning Analytics report. Allegedly a very readable introduction to the topic.
Today I keynoted the Dutch Moodlemoot (mootnl12). I talked about how current times force us to let go of curricula, why it is more important than anything else to teach students how to learn, what it means to work in a knowledge society (work becomes synonymous with learning) and what this might mean for a virtual learning environment like Moodle. Unfortunately this talk was in Dutch and so will be the accompanying blogpost.
Below, roughly in the order of the presentation, are links to background information:
The Open Schoolgemeenschap Bijlmer is a school where a number of the educational ideals of the 1970s are still held in high regard.
The Peter Drucker Institute is a good starting point for learning more about the great business thinker. Also try his Wikipedia page. All quotes in the presentation come from the book Management.
The Wikipedia page about the Cynefin framework explains well what it is. Harold Jarche has written an excellent blog post in which he applies that framework to learning and draws far-reaching conclusions for organizations from it. Also read his three principles for “net work”.
More information about the pedagogy of Moodle can be found in the Moodle Docs.
The article about Massive Open Online Courses (MOOCs) is a good introduction. I actively participated in the Learning Analytics MOOC myself. The Moodle discussion about corporate use cases for analytics can be found here.
Two examples of my own learning experiments are the grassroots reading group about the Learning in 3D book and the workshop at Online Educa about Learning Scenarios. Both of these sites were made with WordPress.
Scott Jenson ran his own design consultancy for years and now works as Lead UI Designer for Mobile at Google, so he knows what he is talking about. His book The Simplicity Shift is available online in full as a PDF.
Erik Duval is a professor at the Catholic University in Leuven. His team works on Human Computer Interaction. In the last few years, he has done a lot of work around Learning Analytics, which he defines as being about collecting traces that learners leave behind and using those traces to improve learning.
His students at the university do everything (and he means everything) using blogs and Twitter. He stopped giving lectures and instead works with students in a single place a few times a week. This makes it very hard for him to follow what is going on: the number of posts generated in his courses is too large for him to read them all. If you are facilitating a Massive Open Online Course (MOOC) this gets worse. This is why we do learning analytics. The field has a lot of attention now, with a conference and a Society for Learning Analytics Research.
Next he mentioned the quantified self movement: self-knowledge through self-tracking. If a tool gives you a good mirror of your behaviour, then this might make it easier to actually change that behaviour. He showed many examples from the consumer market (e.g. the Nike+ FuelBand or the Fitbit). He is trying to see if you could develop similar applications for learning. Imagine setting a goal for how many words you want to learn every day and a device that shows you how many you’ve learned so far that day. He wants to create awareness in students, so that they can “drive” themselves better. This is different from current efforts in learning analytics, which are mostly used to give more information to the institution (Duval doesn’t like that). He showed us an example of the dashboard that he uses to see student activity on the blogs and on Twitter. The students have access to this information too and can see the data for their peers: openness and full transparency. This measuring leads to externalities that aren’t necessarily good (think of students writing tweetbots to get a good score). Duval depends on the self-regulating abilities of the group of students.
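The word-learning mirror Duval imagines is simple at its core. A toy sketch (the numbers, dates and daily goal are all made up; a real tool would of course pull these from actual tracking data):

```python
from datetime import date

# Hypothetical self-tracking data: words a learner marked as "learned" per day.
daily_words = {
    date(2012, 1, 9): 12,
    date(2012, 1, 10): 5,
    date(2012, 1, 11): 9,
}

GOAL = 10  # words per day, a target the learner set for themselves

def progress_report(daily_words, goal):
    """Mirror the learner's own behaviour back at them, day by day."""
    return {day: ("on track" if count >= goal else f"{goal - count} short")
            for day, count in sorted(daily_words.items())}

for day, status in progress_report(daily_words, GOAL).items():
    print(day, status)
```

The interesting design decision is not in the code but in who sees the output: Duval’s answer is the learner first, and (transparently) their peers, rather than the institution.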
At the beginning of each course he tells his students that everything in the course will be open. He might have a debate about this, but he never gives in. He doesn’t think you can become an engineer without having the ability to engage openly with society. If a student has very conscientious objections around privacy, then he sometimes allows them to publish under an alias.
If you collect a lot of data about people, then you can make technology enhanced learning more of an exact (i.e. hard) science. He wrote a paper titled: Dataset-driven Research for Improving Recommender Systems for Learning.
This whole field has a couple of issues:
- What can we measure? Time spent, artefacts produced, social interactions, location. Many other things might be important.
- Privacy might become an issue: we will know so many things about everybody. One solution might be Attention Trust which defines four consumer rights for your (attention) data: property, mobility, economy and transparency. Our idea about privacy is changing, he referred to Public Parts by Jeff Jarvis.
- When does support become enslaving? (see this blog post)
His solution for the problems (once again): openness.
Duval’s talk had a lot of similarities with the talk I will be delivering tomorrow. Luckily we come from slightly different angles and don’t share all our examples. If you attended his talk and didn’t enjoy it, then you can skip mine! If you loved it, come and get more tomorrow morning.
A couple of weeks ago I attended the Lift France 2011 conference. For me this was different than my usual conference experience. I have written before how Anglo-Saxon my perspective is, so to be at a conference where the majority of the audience is French was refreshing.
Although there was a track about learning, most of the conference approached the effects of digital technology on society from angles that were relatively new to me. In a pure learning conference, I am usually able to contextualize what I see immediately and do some real time reflecting. This time I had to stick to reporting on what I saw (all my #lift11 posts are listed here) and was forced to take a few days and reflect on what I had seen.
Below, in random order, an overview of what I would consider to be the big themes of the conference. Occasionally I will try to speculate on what these themes might mean for learning and for innovation.
Utilization of excess capacity empowered by collaborative platforms
Robin Chase gave the clearest explanation of this theme that many speakers kept referring back to:
This world has large amounts of excess capacity that isn’t used. In the past, the transaction costs of sharing (or renting out) this capacity were too high to make it worthwhile. The Internet has facilitated the creation of collaborative platforms that lower these transaction costs and make trust explicit. Chase’s simplest example is the couch surfing idea, and her Zipcar and Buzzcar businesses are examples of this too.
Entangled with the idea of sharing capacity is the idea of access being more important than ownership. This will likely come with a change in the models for consumption: from owning a product to consuming a service. The importance of access shows why it is important to pay attention to the (legal) battles being fought on patents, copyrights, trademarks and licenses.
I had some good discussions with colleagues about this topic. Many facilities, like desks in offices, are underused and it would be good to try and find ways of getting the percentage of utilization up. One problem we saw is how to deal with peak demand. Rick Marriner made the valid suggestion that transparency about the demand (e.g. knowing how many cars are booked in the near future) will actually feed back into the demand and thus flatten the peaks.
A quick question that any (part of an) organization should ask itself is which assets and resources have excess capacity because in the past transaction costs for sharing them across the organization were too high. Would it now be possible to create platforms that allow the use of this extra capacity?
Another question to which I currently do not have an answer is whether we can translate this story to cognitive capacity. Do we have excess cognitive capacity and would there be a way of sharing this? Shirky’s Cognitive Surplus and the Wikipedia project seem to suggest we do. Can organizations capture this value?
The idea of the Internet getting rid of intermediaries is very much related to the point above. Intermediaries were a big part of the transaction costs and they are disappearing everywhere. Travel agents are the canonical example, but at the conference, Paul Wicks talked about PatientsLikeMe, a site that partially tries to disintermediate doctors out of the patient-medicine relationship.
What candidates for disintermediation exist in learning? Is the Learning Management System the intermediary or the disintermediator? I think the former. What about the learning function itself? In the last few years I have seen the learning function shift away from designing learning programs towards becoming a curator of content and service providers and a manager of logistics. These are exactly the type of activities that are no longer needed in a networked world. Is this why the learning profession is in crisis? I certainly think so.
The primacy (and urgency) of design
Maybe it was the fact that the conference was full of French designeurs (with the characteristic Philippe Starck-ish eccentricities that I enjoy so much), but it really did put the urgency of design to the forefront once again for me. I would argue that design means you think about the effects that you would like to have in this world. As a creator it is your responsibility to think deeply and holistically. I will not say that you can always know the results of your design (product, service, building, city, organization, etc.); there will be externalities. But it is important that you leave nothing to chance (accident) or to convenience (laziness).
There is a wealth of productivity to be gained here. I am bombarded by bad (non-)design every single day. Large corporations are the worst offenders. The only design parameter that seems to matter for processes is whether they reduce risk enough, not whether they are usable for somebody trying to get something done. Most templates focus on completeness and not on aesthetics or ease of use. When did you last receive a PowerPoint deck that wasn’t full of superfluous elements that the author couldn’t be bothered to remove?
We can’t afford not to design. The company I work for is full of brilliant engineers. Where are the brilliant designers?
Distributed, federated and networked systems
Robin Chase used the image below and explicitly said that we now finally realize that distributed networks are the right model to overcome the problems of centralized and decentralized systems.
I have to admit that the distinction between decentralized and distributed eludes me for now (I guess I should read Baran’s paper), but I did notice at Fosdem earlier this year that the open source world is urgently trying to create alternatives to big centralized services like Twitter and Facebook. Moglen talked about the Freedombox as a small local computer that would do all the tasks that the cloud would normally do, there is StatusNet, unhosted and even talk of distributed redundant file systems and wireless mesh networking.
Can large organizations learn from this? I always see a tension between the need for central governance, standardization and uniformity on the one hand and the local and specific requirements on the other hand. More and more systems are now designed to allow for central governance and the advantages of interoperability and integration, while at the same time providing configurability away from the center. Call it organized customization or maybe even federation. I truly believe you should think deeply about this whenever you are implementing (or designing!) large scale information systems.
Blurring the distinction between the real and the virtual worlds
Lift also had an exhibitors section titled “the lift experience”, mostly a place for multimedia art (imagine a goldfish in a bowl sat atop an electric wheelchair; a camera captured the direction the fish swam in and the wheelchair would then move in the same direction). There were quite a few projects using the Arduino and even more that used “hacked” Kinects to enable new types of interaction languages.
Most projects tried, in some way, to negotiate a new way of working between the virtual and the real (or should I call it the visceral). As soon as those boundaries disappear designers will have an increased ability to shape reality. One of the projects that I engaged with the most was the UrbanMusicalGame: a set of gyroscopes and accelerometers hidden in soft balls. By playing with these balls you could make beautiful music while using an iPhone app to change the settings (unfortunately the algorithms were not yet optimized for my juggling). This type of project is the vanguard of what we will see in the near term.
Discomfort with the dehumanizing aspects of technology
A surprising theme for me was the well articulated discomfort with the dehumanizing aspects of some of the emerging digital technologies. As Benkler says: technology creates feasibility spaces for social practice and not all practices that are becoming feasible now have positive societal impact.
One artist, Emmanuel Germond, seemed to be very much in touch with these feelings. His project, Exposition au Danger Psychologique, made fun of people’s inability to deal with all this information and provided some coy solutions. Alex Peng talked about contemplative computing, Chris de Decker showed examples of low-tech solutions from the past that can help solve our current problems, and projects in the Lift Experience showed things like analog wooden interfaces for manipulating digital music.
This leads me to believe that both physical reality and being disconnected will come at a premium in the near future. People will be willing to pay for having real experiences versus the ubiquitous virtual experiences. Not being connected to the virtual world will become more expensive as it becomes more difficult. Imagine a retreat which markets itself as having no wifi and giving you a free physical newspaper in the morning (places like this are starting to pop up; see this unplugged conference or this reporter’s unconnected weekend).
There will be consequences for Learning and HR at large. For the last couple of years we have been moving more and more of our learning interventions into the virtual space. Companies have set up virtual universities with virtual classrooms, thousands and thousands of hours of e-learning are produced every year and the virtual worlds that are used in serious games are getting more like reality every month.
Thinking about the premium of reality it is then only logical that allowing your staff to connect with each other in the real world and collaborate in face to face meetings will be a differentiator for acquiring and retaining talent.
Big data for innovation
I’ve done a lot of thinking about big data this year (see for example these learning analytics posts) and it was a tangential topic at the conference. The clearest example came from a carpool site which can use its data about future reservations to predict how busy traffic will be on a particular day. PatientsLikeMe is of course another example of a company that uses data as a valuable asset.
Supercrunchers is full of examples of data-driven solutions to business problems. The ease of capturing data, combined with the increase in computing power and data storage has made doing randomized trials and regression analysis feasible where before it was impossible.
This means that the following question is now relevant for any business: How can we use the data that we capture to make our products, services and processes better? Any answers?
The need to overcome the open/closed dichotomy
In my circles, I usually only encounter people who believe that most things should be open. Geoff Mulgan spoke of ways to synthesize the open/closed dichotomy. I am not completely sure how he foresees doing this, but I do know that both sides have a lot to learn from each other.
Disruptive software innovations currently don’t seem to happen in the open source world, but open source does manage to innovate when it comes to its own processes. Open source projects scale to thousands of participants, have figured out ways of pragmatically dealing with issues of intellectual property (in a way that doesn’t inhibit development) and have created their own tool sets that make them successful at working in dispersed teams (Git being my favorite example).
When we want to change the way we do innovation in a networked world, then we shouldn’t look at the open source world for the content of innovation or the thought leadership, instead we should look at their process.
A lot of the above is still very immature and incoherent thinking. I would therefore love to have a dialog with anybody who could help me deepen my thoughts on these topics.
Finally, to give a quick flavour of all my other posts about Lift 11, the following word cloud based on those posts:
Every week I will try to write down some reflections on the Open Online Course: Learning and Knowledge Analytics. These will be written for myself as much as for anybody else, so I have to apologise in advance for the fact that there will be nearly no narrative, and a mix between thoughts on the contents of the course and on the process of the course.
So what do I have to write about this week?
My tooling for the course
There is a lot of stuff happening in these distributed courses and keeping up with the course required some setup and preparation on my side (I like to call that my “tooling”). So what tools do I use?
A lot of new materials to read are created every day: tweets with the #lak11 hashtag, posts in all the different Moodle forums, Google Groups and Learninganalytics.net messages from George Siemens, and Diigo/Delicious bookmarks. Thankfully all of these information resources create RSS feeds and I have been able to add them all to a specially made Lak11 folder in my Google Reader (RSS feed). That folder sorts its messages by time (oldest first), giving me some understanding of the temporal aspects of the course and making sure I read a reply after the original message. A couple of times a day I use the excellent MobileRSS reader on my iPad to read through all the messages.
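As an aside, the oldest-first merging that Google Reader does for me here is simple enough to sketch. The feed fragments below are hand-written stand-ins for the real course feeds:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Two tiny hand-written RSS fragments standing in for the real course feeds.
feeds = ["""
<rss><channel>
  <item><title>Reply: analytics ethics</title>
        <pubDate>Tue, 11 Jan 2011 09:30:00 +0000</pubDate></item>
</channel></rss>""", """
<rss><channel>
  <item><title>Original: analytics ethics</title>
        <pubDate>Mon, 10 Jan 2011 14:00:00 +0000</pubDate></item>
</channel></rss>"""]

def merged_oldest_first(feeds):
    """Merge items from several RSS feeds, sorted by publication date."""
    items = []
    for xml in feeds:
        for item in ET.fromstring(xml).iter("item"):
            items.append((parsedate_to_datetime(item.findtext("pubDate")),
                          item.findtext("title")))
    return [title for _, title in sorted(items)]  # oldest first

print(merged_oldest_first(feeds))
# ['Original: analytics ethics', 'Reply: analytics ethics']
```

Oldest-first is exactly what makes a threaded conversation readable: the original post always comes before the replies.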
There is quite a lot of reading to do. At the beginning of the week I read through the syllabus and make sure that I download all the PDF files to GoodReader on the iPad. All web articles are stored for later reading using the Instapaper service. I have given both GoodReader and Instapaper Lak11 folders. I do most of the reading of these articles on the train. GoodReader allows me to highlight passages and store bookmarks in the PDF file itself. With Instapaper this is a bit more difficult: when I read a very interesting paragraph I have to highlight it and email it to myself for later processing.
Each and every resource that I touch for the course gets its own bookmark on Diigo. Next to the relevant tags for the resource I also tag them with lak11 and weekx (where x is the number of the week) and share them to the Learning Analytics group on Diigo. These will provide me with a history of the interesting things I have seen during the course and should help me in writing a weekly reflective post.
So far for the “consumer” side of things. As a “producer” I participate in the Moodle forums. I can easily find all my own posts through my Moodle profile and I hope to use some form of screen scraper at the end of the course to pull a copy of everything that I have written. I use this WordPress.com hosted blog to write and reflect on the course materials and tag my course-related posts with “lak11” so that they show up on their own page (and have their own feed in case you are interested). On Twitter I occasionally tweet with #lak11, mostly to refer to a Moodle or blog post that I have written or to ask the group a direct question.
What is missing? The one thing that I don't use yet is a mind mapping or concept mapping tool. The syllabus recommends VUE and CMAP, and one of the assignments each week is to keep updating a map for the course. These tools don't seem to have an iPad equivalent. There are some good mind mapping tools for the iPad (my favourite is probably iThoughtsHD; watch this space for a mind mapping comparison of iPad apps), but I can't seem to fit one into my workflow for the course. Maybe I should just try a little harder.
My inability to “skim and dive”
This week I reconfirmed my inability to "skim and dive". For these things I seem to be an all-or-nothing guy. There are magazines that I read completely, from the first page to the last (e.g. Wired), and this course seems to be one of those things too: I read every single thing. It is a bit much currently, but I expect the volume of Moodle and Twitter messages to go down quite significantly as the course progresses. So if I can just about manage now, it should become relatively easy later on.
The readings of this week
There were quite a few academic papers in this week's readings. Most of them provided an overview of educational data mining or academic/learning analytics. Many of the discussions in these papers seemed quite general to me. They are probably good references to keep, with a wealth of bibliographical material that I could dig into at some point in the future, but for now they were pretty basic and offered no real new insights for me.
Unfortunately I wasn’t able to attend any of the Elluminate sessions and I haven’t listened to them yet either. I hope to catch up this week with the recordings and maybe even attend the guest speaker live tomorrow evening.
It has been a while since I last actively participated in a Moodle-facilitated course. Moodle has again proven to be a very effective host for forum-based discussions. One interesting Moodle add-on that I had not seen before is Marginalia, a way to annotate forum posts within Moodle itself, either privately or publicly. Watch the following YouTube video to see it in action.
I wonder if I will use it extensively in the next few weeks.
One thing we were asked to try out as an activity was Hunch. For me it was interesting to see all the different interpretations people in the course had of how to pick up this task and what the question (What are the educational uses of a Hunch-like tool for learning?) actually meant. A distributed course like this creates a lot of redundancy in the answers. I also noted that people kept repeating a falsehood (that you need to use Twitter/Facebook to log in). My explanation of how Hunch could be used by the wary was not really picked up. It is good to be reminded at times that most people in the world do not share my perspective on computers and my literacy with the medium. Thinking otherwise is a hard-to-escape consequence of living in a techno-bubble with the other "digerati".
I wrote the following on the topic (in the Moodle forum for week 1):
Indeed, the complete US-centredness of the service was the first thing that I noticed. I believe it asked me at some point which continent I live on. How come it still asks me questions to which I could never have an answer? Are these questions crowdsourced too? Do we get them randomly, or do we get certain questions based on our answers? It feels like the former to me.
The recommendations it gave me seemed pretty random too: the occasional hit and then a lot of misses. I had the ambition to try out the top 5 music albums it recommended to me, but couldn't bear the thought of listening to all that rock. This did sneak a little thought into my head: could it be that I am very special? Am I so eclectic that I can defeat all data mining efforts? Am I the Napoleon Dynamite of people? Of course I am not, but the question remains: does this work better for some people than for others?
One other thing I noticed was how the site seemed to use some of the tricks of an astrologer: who wouldn't like "Insalata Caprese"? That seems like a safe recommendation to me.
In the learning domain I could see an application as an Electronic Performance Support System. It would know what I need in my work and could recommend the right website for ordering business cards (when it sees I am going to a conference) or an interesting resource related to the work I am doing. Kind of like a new version of Clippy, but one that works.
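At its very simplest, such a support system would just be a lookup from an observed work context to a suggested resource. A toy sketch of that idea, where every rule and resource is an invented example (a real system would mine this mapping from usage data rather than hard-code it):

```python
# A deliberately naive sketch of the EPSS idea: map an observed work
# activity to a suggested resource. All rules here are invented examples.
RULES = {
    "registered for a conference": "a site for ordering business cards",
    "opened the quarterly budget": "a short pivot-table tutorial",
}

def recommend(observed_activity):
    """Return a suggestion for the activity, or None if we have no rule."""
    return RULES.get(observed_activity)

suggestion = recommend("registered for a conference")
```

The interesting (and hard) part is everything this sketch leaves out: observing the activity in the first place and learning the rules instead of writing them by hand.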
By the way, in an earlier blog post I wrote about how recommendation systems could turn us all into mussels (although I don't really believe that).
Thanks to a very good intervention by George Siemens, the main facilitator of the course, we are now starting to have a good discussion about analytics in corporate settings here. The corporate world treats learning as a secondary process (very much a means to an end), and that creates a slightly different viewpoint. I assume the corporate people will form their own subgroup of some kind in this course. Before the end of next week I will attempt to flesh out some more use cases following Bert De Coutere's examples here.
Bersin/KnowledgeAdvisors Lunch and Learn
At the end of January I will be attending a free Bersin/KnowledgeAdvisors lunch and learn titled Innovation in Learning Measurement – High Impact Measurement Framework in London (this is one day before the Learning Technologies 2011 exhibit/conference). I would love to meet other Lak11 participants there. Will that happen?
My participation in numbers
Every week I will try to give a numerical update on my course participation. This week I bookmarked 33 items on Diigo, wrote 10 Lak11-related tweets, 25 Moodle forum posts and 2 blog posts.