Lak11 Week 3 and 4 (and 5): Semantic Web, Tools and Corporate Use of Analytics

Two weeks ago I visited Learning Technologies 2011 in London (blog post forthcoming). This meant I had less time to write down some thoughts on Lak11. I did manage to read most of the reading materials from the syllabus and did some experimenting with the different tools that are out there. Here are my reflections on week 3 and 4 (and a little bit of 5) of the course.

The Semantic Web and Linked Data

This was the main topic of week three of the course. The semantic web has a couple of defining characteristics: it separates the presentation of the data from the data itself, and it structures the data in a way that allows all of it to be linked up. Technically this is done through so-called RDF triples: a subject, a predicate and an object.
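The triple idea is simpler than it sounds. Below is a minimal sketch of it in plain Python (the facts and predicate names are invented for illustration; real RDF would use URIs and a library such as rdflib):

```python
# Each fact is a (subject, predicate, object) triple; names are illustrative.
triples = [
    ("lak11", "isA", "course"),
    ("lak11", "coveredTopic", "semantic web"),
    ("lak11", "coveredTopic", "linked data"),
    ("semantic web", "usesFormat", "RDF"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "What topics did lak11 cover?"
print(query("lak11", "coveredTopic", None))
# → [('lak11', 'coveredTopic', 'semantic web'), ('lak11', 'coveredTopic', 'linked data')]
```

Because every fact has the same shape, triples from different sources can be thrown into one pool and queried together, which is exactly what makes linking data across sites possible.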

Although he is a better writer than speaker, I still enjoyed this video of Tim Berners-Lee (the inventor of the web) explaining the concept of linked data. His point that we cannot predict what we are going to make with this technology is well taken: “If we end up only building the things I can imagine, we would have failed”.

[youtube=http://www.youtube.com/watch?v=OM6XIICm_qo]

The benefits of this are easy to see. In the forums there was a lot of discussion around whether the semantic web is feasible and whether it is actually worth the effort. People seemed to think that putting in a lot of human effort to make something easier to read for machines is turning the world upside down. I actually don’t think that is strictly true. I don’t believe we need strict ontologies, but I do think we could define simpler machine-readable formats and create great interfaces for entering data into these formats.

Use cases for analytics in corporate learning

A few weeks ago Bert De Coutere started creating a set of use cases for analytics in corporate learning. I had been wanting to add some ideas of my own, but wasn’t able to carve out enough “thinking time” earlier. This week I finally managed to take part in the discussion. Thinking about the problem I noticed that I often found it difficult to make a distinction between learning and improving performance. In the end I decided not to worry about it. I also did not stick to the format: it should be pretty obvious what kind of analytics could deliver these use cases. These are the ideas I added:

  • Portfolio management through monitoring search terms
    You are responsible for the project management learning portfolio. In the past you mostly worried about “closing skill gaps” by making sure there were enough courses on the topic. In recent years you have switched to making sure the community is healthy and have moved from developing “just in case” learning interventions towards “just in time” learning interventions. One thing that really helps you in doing your work is the weekly trending questions/topics/problems list you get in your mailbox. It is an ever-changing list of things that have been discussed and searched for recently in the project management space. It wasn’t until you saw this dashboard that you noticed a sharp increase in demand for information about privacy laws in China. Because of it you were able to create a document with some relevant links that you now show as a recommended result when people search for privacy and China.
  • Social Contextualization of Content
    Whenever you look at any piece of content in your company (e.g. a video on the internal YouTube, an office document from a SharePoint site or a news article on the intranet), you will not only see the content itself, but also which other people in the company have seen that content, what tags they gave it, which passages they highlighted or annotated and what rating they gave the piece of content. There are easy ways for you to manage which “social context” you want to see. You can limit it to the people in your direct team, in your personal network or to the experts (either as defined by you or by an algorithm). You love the “aggregated highlights view” where you can see a heat map overlay of the important passages of a document. Another great feature is how you can play back chronologically who looked at each URL (seeing how it spread through the organization).
  • Data enabled meetings
    Just before you go into a meeting you open the invite. Below the title of the meeting and the location you see the list of participants. Next to each participant you see which other people in your network they have met with before, which people in your network they have emailed with and how recent those engagements have been. This gives you more context for the meeting. You don’t have to ask the vendor anymore whether your company is already using their product in some other part of the business. The list also jogs your memory: often you vaguely remember speaking to somebody but cannot seem to remember when you spoke and what you spoke about. This tool also gives you easy access to notes on and recordings of past conversations.
  • Automatic “getting-to-know-yous”
    About once a week you get an invite created by “The Connector”. It invites you to get to know a person that you haven’t met before and always picks a convenient time to do it. Each time you and the other invitee accept one of these invites you are both surprised that you have never met before, as you operate with similar stakeholders, work on similar topics or have similar challenges. In your settings you have given your preference for face-to-face meetings, so “The Connector” does not bother you with those video-conferencing sessions that other people seem to like so much.
  • “Train me now!”
    You are in the lobby of the head office waiting for your appointment to arrive. She has just texted you that she will be 10 minutes late because she has been delayed by traffic. You open the “Train me now!” app and tell it you have 8 minutes to spare. The app looks at the required training that is coming up for you, at the expiration dates of your certificates and at your current projects and interests. It also looks at the most popular pieces of learning content in the company and checks whether any of your peers have recommended something to you (actually it also sees whether they have recommended it to somebody else, because the algorithm has learned that this is a useful signal too). It eliminates anything that is longer than 8 minutes, anything that you have looked at before (and haven’t marked as something that could be shown to you again) and anything from a content provider that is on your blacklist. All of this happens in a fraction of a second, after which it presents you with a shortlist of videos to watch. The fact that you chose the second pick instead of the first is of course fed back into the system to make an even better recommendation next time.
  • Using micro formats for CVs
    A simple structured data format is used to capture all CVs in the central HR management system. In combination with the API that was put on top of it, this has enabled a wealth of applications for the structured data.
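To make the last use case a bit more concrete, here is a hedged sketch of what such a machine-readable CV store and one application on top of it might look like. All field names and records are made up for illustration; a real system would likely use an established microformat like hCard or hResume:

```python
# Hypothetical structured CV records, as they might sit behind an HR API.
cvs = [
    {"name": "A. Jansen",
     "skills": ["project management", "prince2"],
     "languages": ["Dutch", "English"]},
    {"name": "B. Smit",
     "skills": ["data analysis", "R"],
     "languages": ["Dutch", "German"]},
]

def find_by_skill(skill):
    """One of many possible applications: query the CV store by skill."""
    return [cv["name"] for cv in cvs if skill in cv["skills"]]

print(find_by_skill("R"))  # → ['B. Smit']
```

The point is not this particular query, but that once the data is structured, any number of applications (skill searches, succession planning, team composition) become trivial to build.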

There are three more titles that I wanted to write up, but have not had the chance to do yet.

  • Using external information inside the company
  • Suggested learning groups to self-organize
  • Linking performance data to learning excellence

Book: Head First Data Analysis

I have always been intrigued by O’Reilly’s Head First series of books. I don’t know any other publisher who is that explicit about how their books try to implement research-based good practices like an informal style, repetition and the use of visuals. So when I encountered Data Analysis in the series I decided to give it a go. I wrote the following review on Goodreads:

The “Head First” series has a refreshing ambition: to create books that help people learn. They try to do this by following a set of evidence-based learning principles. Things like repetition, visual information and practice are all incorporated into the book. It is a good introduction to data analysis, but in the end it only scratches the surface and was a bit too simplistic for my taste. I liked the refreshers around hypothesis testing, solver optimisation in Excel, simple linear regression, cleaning up data and visualisation. The best thing about the book is how it introduced me to the open source multi-platform statistical package “R”.

Learning impact measurement and KnowledgeAdvisors

The day before Learning Technologies, Bersin and KnowledgeAdvisors organized a seminar about measuring the impact of learning. David Mallon, analyst at Bersin, presented their High-Impact Measurement framework.

Bersin High-Impact Measurement Framework

What I found interesting was how the maturity of your measurement strategy is basically a function of how far your learning organization has moved towards performance consulting. How can you measure business impact if your planning and gap analysis isn’t close to the business?

Jeffrey Berk from KnowledgeAdvisors then tried to show how their Metrics that Matter product allows measurement and dashboarding around all the parts of the Bersin framework. They basically do this by asking participants to fill in surveys after they have attended any kind of learning event. Their name for these surveys is “smart sheets” (a much improved iteration of the familiar “happy sheets”). KnowledgeAdvisors has a complete software-as-a-service infrastructure for sending out these digital surveys and collating the results. Because they have all this data they can benchmark your scores against yourself or against their other customers (in aggregate of course). They have done all the sensible statistics for you, so you don’t have to correct for self-reporting bias or think about cultural differences in the way people respond to these surveys. Another thing you can do is pull in real business data (think of things like sales volumes). By doing some fancy regression analysis it is then possible to see what part of the improvement can be attributed, with some level of confidence, to the learning intervention, allowing you to calculate a return on investment (ROI) for the learning programs.
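The core of that attribution idea can be sketched in a few lines: regress a business metric on a learning metric and express the estimated benefit relative to cost. The numbers below are invented, the regression is a bare single-variable least squares, and a real analysis would of course need to control for many other factors:

```python
# Invented data: training hours per employee and sales uplift (in k€).
training_hours = [0, 5, 10, 15, 20, 25]
sales_uplift   = [0.2, 1.1, 2.3, 2.9, 4.1, 5.0]

# Ordinary least-squares slope: estimated uplift per training hour.
n = len(training_hours)
mean_x = sum(training_hours) / n
mean_y = sum(sales_uplift) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(training_hours, sales_uplift))
         / sum((x - mean_x) ** 2 for x in training_hours))

# Assumed cost of one training hour (k€); ROI = net benefit per euro spent.
cost_per_hour = 0.05
roi = (slope - cost_per_hour) / cost_per_hour

print(f"uplift per training hour: {slope:.3f}k€, ROI: {roi:.2f}")
```

This is only the mechanical part; the hard work in products like Metrics that Matter lies in getting clean data and defensible confidence levels, not in the arithmetic.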

All in all I was quite impressed with the toolset that they can provide and I do think they will probably serve a genuine need for many businesses.

The best question of the day came from Charles Jennings, who pointed out to David Mallon that his talk had referred to the increasing importance of learning on the job and informal learning, but that the learning measurement framework only addresses measurement strategies for top-down and formal learning. Why was that the case? Unfortunately I cannot remember Mallon’s answer (which probably says something about its quality or relevance!)

Experimenting with Needlebase, R, Google charts, Gephi and ManyEyes

The first tool that I tried out this week was Needlebase. This tool allows you to create a data model by defining the nodes in the model and their relations. Then you can train it on a web page of your choice to teach it how to scrape the information from the page. Once you have done that Needlebase will go out to collect all the information and will display it in a way that allows you to sort and graph the information. Watch this video to get a better idea of how this works:

[youtube=http://www.youtube.com/watch?v=58Gzlq4zSDk]

I decided to see if I could use Needlebase to get some insights into resources on Delicious that are tagged with the “lak11” tag. Once you understand how it works, it only takes about 10 minutes to create the model and start scraping the page.
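Needlebase is a visual point-and-click tool, but the underlying scrape-then-aggregate idea can be sketched with Python’s standard library. The HTML below stands in for a bookmarking page like the Delicious tag page (its structure and class names are invented for illustration):

```python
from collections import Counter
from html.parser import HTMLParser

# Stand-in for a scraped bookmark page; structure is invented.
PAGE = """
<div class="bookmark"><a href="http://example.org/a">A</a><span class="user">ann</span></div>
<div class="bookmark"><a href="http://example.org/b">B</a><span class="user">ann</span></div>
<div class="bookmark"><a href="http://example.org/c">C</a><span class="user">bob</span></div>
"""

class BookmarkScraper(HTMLParser):
    """Count how many bookmarks each user added."""
    def __init__(self):
        super().__init__()
        self.users = Counter()
        self._in_user = False

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "user") in attrs:
            self._in_user = True

    def handle_data(self, data):
        if self._in_user:
            self.users[data.strip()] += 1
            self._in_user = False

scraper = BookmarkScraper()
scraper.feed(PAGE)
print(scraper.users.most_common())  # which users added the most links
```

A tool like Needlebase generalises exactly this pattern: you point it at example pages, it learns the extraction rules, and then sorting and graphing come for free.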

I wanted to get answers to the following questions:

  • Which five users have added the most links and what is the distribution of links over users?
  • Which twenty links were added the most with a “lak11” tag?
  • Which twenty links with a “lak11” tag are the most popular on Delicious?
  • Can the tags be put into a tag cloud based on the frequency of their use?
  • In which week were the Delicious users the most active when it came to bookmarking “lak11” resources?
  • Imagine that the answers to the questions above were all somebody could see about this Learning and Knowledge Analytics course. Would they get a relatively balanced idea of the key topics, resources and people related to the course? What are some of the key things they would miss?

Unfortunately after I had done all the machine learning (and had written the above) I learned that Delicious explicitly blocks Needlebase from accessing the site. I therefore had to switch plans.

The Twapperkeeper service keeps a copy of all the tweets with a particular tag (Twitter itself only gives access to the last two weeks of messages through its search interface). I managed to train Needlebase to scrape all the tweets: the username, the URL to the user picture and the user ID of the person adding the tweet, who the tweet was a reply to, the unique ID of the tweet, the longitude and latitude, the client that was used and the date of the tweet.

I had to change my questions too.

Another great resource that I re-encountered in these weeks of the course was Hans Rosling’s Gapminder project:

[youtube=http://www.youtube.com/watch?v=BPt8ElTQMIg]

Google has acquired part of that technology and now allows a similar kind of visualization on top of their spreadsheet data. What makes the chart smart is the way it shows three variables (x-axis, y-axis and size of the bubble) and how they change over time. I thought hard about how I could use the Twitter data in this way, but couldn’t find anything sensible. I still wanted to play with the visualization, so from the World Bank’s Open Data Initiative I downloaded data about population size, investment in education and unemployment figures for a set of countries per year (they have a nice iPhone app too). When I loaded that data I got the following result:

Click to be able to play the motion graph

The last tool I installed and took a look at was Gephi. I first used SNAPP on the forums of week 1 and exported that data into an XML-based format. I then loaded it into Gephi and could play around a bit:

Week 1 forum relations in Gephi

My participation in numbers

I will have to add up my participation for the two (to three) weeks: in week 3 and week 4 of the course I wrote 6 Moodle posts, tweeted 3 times about Lak11, wrote 1 blog post and saved 49 bookmarks to Diigo.

The hours that I played with all the different tools mentioned above are not included in my self-measurement. I did, however, really enjoy playing with these tools and learned a lot of new things.

Summary of and Reflections on “Learning in 3D”, Chapter 1

The first chapter of Learning in 3D titled “Here Comes the Immersive Internet” consists of three parts. The first part gives an overview of the three “Webvolution Waves”, the second part focuses on four convergence points that all lead to a next-generation Immersive Internet architecture and the chapter closes with a short analysis of what this might mean for the enterprise.

Three Webvolution Waves
The web browser arrived in 1993 and was used to connect “to” the information that was available on the web. The web grew fast, and businesses that helped people get on the web (Internet Service Providers like AOL) or find information on the web (e.g. Yahoo and Google) were the clear winners of the first wave.

In the early noughties companies like Google and Amazon truly started to leverage “the aggregated behaviour of many users to differentiate their [..] offerings”. This insight, combined with the increased ability of people to participate in the web by uploading their own content, became the core of “Web 2.0”, characterised by the authors as connecting “through”.

Allegedly the next phase of the web will be about connecting “within”, and immersive 3D experiences will be a fundamental part of that. Kapp and O’Driscoll give a couple of examples, mainly from MMORPGs. In games like World of Warcraft people come together in (semi-) three-dimensional worlds and collaborate as teams to battle other teams. There is real economic value in these games, as the practice of gold farming clearly shows.

The description of this third phase obviously has much less clarity than that of the first two: we are in the middle of this “webvolution” and are not yet sure which aspects will prove the most salient. I don’t think “immersiveness” is the only candidate to be at the heart of the next generation of web technology. It could still be that the semantic web will have more impact on social practice. Or it could be the social graph that becomes the all-pervasive aspect of the new web. In that latter case Facebook seems to be in prime position to be the next Google with their recently announced Graph API. I am sure these trends reinforce each other, but I am not sure that three-dimensionality will be as important as this book seems to think it will be.

Four Convergence Points
The authors think there are four current technologies that are integrating with each other, creating four convergence points in the process. All these points converge on the Immersive Internet. I don’t want to steal their diagram (you can find it on page 18 of the book), so I’ll describe it here.

  • 2D synchronous learning and knowledge sharing spaces are combining to create immediate networked virtual spaces.
  • Knowledge sharing spaces and web 2.0 technologies are integrating into intuitive dynamic knowledge discovery.
  • Web 2.0 technologies and virtual world technologies are coming together in interactive 3D social networking.
  • Virtual world technologies and 2D synchronous learning together can create immersive 3D learning experiences.

I really like this model as it provides four clear spaces in which to look at technology. The problem for me is that in my job I do indeed see immediate networked virtual spaces and am starting to see intuitive dynamic knowledge discovery, but I do not see the two 3D convergence points yet. This could be down to my lack of knowledge and experience of what is out there, in which case I would gladly see some examples and demonstrations!

What does this mean for business?
The web has had a profound impact on the way we do business and organise ourselves. I want to address the points that I thought most interesting by quoting three passages from the book. The first quote is about information abundance and the subversion of hierarchy by networks:

As the Internet continues to pervade society, the scarcity paradigm that undergirds most modern economic theory is being challenged. Unlike currency, information is non-appropriable, which essentially means that it can be shared without being given away. Today, information no longer moves in one direction, from the top to the bottom or from teacher to student. Instead, it has a social life all its own.

The second quote is about how the web allows people to come together without needing formal organisations to do it:

As communication costs have decreased and the quality of web-based interactivity has increased, communities of co-creators no longer need to rely on a formal organization to become organized. Rather than employing an enterprise infrastructure to plan ahead of time, they leverage the pervasive and immersive affordances of the web to coordinate their activities in real time.

The above is one of the most important points (and actually the subtitle) of Clay Shirky’s wonderful Here Comes Everybody and I think this reading group is an example of how this can work.

And finally a quote about how companies have to innovate faster and how this affects the role of the learning function in the enterprise:

For change to occur it is a precondition that learning take place. [..] In the case of the centralize hierarchies, [organizations] must unlearn all that brought it success in the pre-webvolution era and quickly learn how to leverage the Immersive Internet to reconfigure its resources and capabilities to achieve sustainable competitive advantage in a world gone web. […] The perennial challenge of the learning function within the enterprise is to ensure that human capital investment yields a workforce capable of innovating faster than the competition and work processes that allow the organization to adapt to changes with minimal disruption. This suggests that the learning function should become increasingly strategic to the enterprise.

The last sentence is the step-up to the rest of the book. I am looking forward to it!

Questions for discussion
Please participate in these two polls:

[polldaddy poll=3107820]

[polldaddy poll=3107841]

In the teleconference I would like to discuss the following questions:

  • In what way has your company or organisation changed because of the webvolution? How has this affected the learning function?
  • What are your thoughts about the convergence to an immersive web? Do you have examples of how 2D synchronous learning and web 2.0 combine with 3D virtual worlds?
  • What will change when we make the shift from a scarcity paradigm to an abundance paradigm for information?

We will discuss these questions in our weekly teleconference on Monday April 26th at 15:30 CET. Please contact me if you want to call in and don’t have the dial in details.