Lak11 Week 3 and 4 (and 5): Semantic Web, Tools and Corporate Use of Analytics

Two weeks ago I visited Learning Technologies 2011 in London (blog post forthcoming). This meant I had less time to write down some thoughts on Lak11. I did manage to read most of the reading materials from the syllabus and did some experimenting with the different tools that are out there. Here are my reflections on weeks 3 and 4 (and a little bit of 5) of the course.

The Semantic Web and Linked Data

This was the main topic of week three of the course. The semantic web has a couple of defining characteristics: it separates the presentation of the data from the data itself, and it structures the data in a way that allows all of it to be linked up. Technically this is done through so-called RDF triples: a subject, a predicate and an object.
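A minimal sketch of what such a triple looks like in practice, using the Python rdflib library (the URIs and the vocabulary are made up for illustration):

```python
from rdflib import Graph, Literal, Namespace, URIRef

# A hypothetical vocabulary, purely for illustration
EX = Namespace("http://example.org/terms/")

g = Graph()
# One RDF triple: subject, predicate, object
g.add((
    URIRef("http://example.org/people/tim"),   # subject
    EX.invented,                               # predicate
    URIRef("http://example.org/things/web"),   # object
))
# Objects can also be literal values
g.add((URIRef("http://example.org/people/tim"), EX.name, Literal("Tim Berners-Lee")))

# The data can be serialised independently of any presentation
print(g.serialize(format="turtle"))
```

Because every subject and object is a URI, triples from different sources that talk about the same resource link up automatically, which is the whole point of linked data.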

Although he is a better writer than speaker, I still enjoyed this video of Tim Berners-Lee (the inventor of the web) explaining the concept of linked data. His point about the fact that we cannot predict what we are going to make with this technology is well taken: “If we end up only building the things I can imagine, we would have failed“.

[youtube=http://www.youtube.com/watch?v=OM6XIICm_qo]

The benefits of this are easy to see. In the forums there was a lot of discussion around whether the semantic web is feasible and whether it is actually necessary to put effort into it. People seemed to think that putting in a lot of human effort to make something easier to read for machines is turning the world upside down. I don’t think that is strictly true: I don’t believe we need strict ontologies, but I do think we could define simpler machine-readable formats and create great interfaces for entering data in those formats.

Use cases for analytics in corporate learning

Weeks ago Bert De Coutere started creating a set of use cases for analytics in corporate learning. I have been wanting to add some of my own ideas, but wasn’t able to create enough “thinking time” earlier. This week I finally managed to take part in the discussion. Thinking about the problem I noticed that I often found it difficult to make a distinction between learning and improving performance. In the end I decided not to worry about it. I also did not stick to the format: it should be pretty obvious what kind of analytics could deliver these use cases. These are the ideas that I added:

  • Portfolio management through monitoring search terms
    You are responsible for the project management learning portfolio. In the past you mostly worried about “closing skill gaps” by making sure there were enough courses on the topic. In recent years you have switched to making sure the community is healthy, and from developing “just in case” learning interventions towards “just in time” learning interventions. One thing that really helps you in doing your work is the weekly trending questions/topics/problems list you get in your mailbox. It is an ever-changing list of things that have been discussed and searched for recently in the project management space. It wasn’t until you saw this dashboard that you noticed a sharp increase in demand for information about privacy laws in China. Because of it you were able to create a document with some relevant links that you now show as a recommended result when people search for privacy and China.
  • Social Contextualization of Content
    Whenever you look at any piece of content in your company (e.g. a video on the internal YouTube, an office document from a SharePoint site or a news article on the intranet), you will not only see the content itself, but also which other people in the company have seen it, what tags they gave it, which passages they highlighted or annotated and what rating they gave it. There are easy ways for you to manage which “social context” you want to see. You can limit it to the people in your direct team, in your personal network or to the experts (either as defined by you or by an algorithm). You love the “aggregated highlights view” where you can see a heat map overlay of the important passages of a document. Another great feature is how you can play back chronologically who looked at each URL (seeing how it spread through the organization).
  • Data enabled meetings
    Just before you go into a meeting you open the invite. Below the title of the meeting and the location you see the list of participants. Next to each participant you see which other people in your network they have met with before, which people in your network they have emailed with, and how recent those engagements were. This gives you more context for the meeting. You don’t have to ask the vendor anymore whether your company is already using their product in some other part of the business. The list also jogs your memory: often you vaguely remember speaking to somebody but cannot seem to remember when you spoke and what you spoke about. This tool also gives you easy access to notes on and recordings of past conversations.
  • Automatic “getting-to-know-yous”
    About once a week you get an invite created by “The Connector”. It invites you to get to know a person that you haven’t met before and always picks a convenient time to do it. Each time you and the other invitee accept one of these invites, you are both surprised that you have never met before, as you deal with similar stakeholders, work on similar topics or face similar challenges. In your settings you have given your preference for face-to-face meetings, so “The Connector” does not bother you with those video-conferencing sessions that other people seem to like so much.
  • “Train me now!”
    You are in the lobby of the head office waiting for your appointment to arrive. She has just texted you that she will be 10 minutes late because of traffic. You open the “Train me now!” app and tell it you have 8 minutes to spare. The app looks at the required training that is coming up for you, at the expiration dates of your certificates and at your current projects and interests. It also looks at the most popular pieces of learning content in the company and checks whether any of your peers have recommended something to you (it actually also checks whether they have recommended it to somebody else, because the algorithm has learned that this is a useful signal too). It eliminates anything that is longer than 8 minutes, anything that you have looked at before (and haven’t marked as something that could be shown to you again) and anything from a content provider that is on your blacklist. This all happens in a fraction of a second, after which it presents you with a shortlist of videos to watch. The fact that you chose the second pick instead of the first is of course fed back into the system to make an even better recommendation next time.
  • Using micro formats for CVs
    A simple structured data format is used to capture all CVs in the central HR management system. Combined with the API that was put on top of it, this has enabled a wealth of applications built on the structured data (a sketch of the idea follows below this list).
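To make the idea concrete, here is a hedged sketch of what such a simple machine-readable CV format and one tiny application on top of it could look like (the field names are invented for illustration, not an actual HR schema):

```python
# Hypothetical structured CV records, standing in for a central HR store
cvs = [
    {
        "employee_id": "0001",
        "skills": ["project management", "negotiation"],
        "positions": [{"title": "Project Lead", "from": 2008, "to": 2011}],
        "languages": ["en", "nl"],
    },
    {
        "employee_id": "0002",
        "skills": ["data analysis", "project management"],
        "positions": [{"title": "Analyst", "from": 2009, "to": 2011}],
        "languages": ["en"],
    },
]

def find_by_skill(records, skill):
    """One of the many small applications an API could offer: skill search."""
    return [r["employee_id"] for r in records if skill in r["skills"]]

print(find_by_skill(cvs, "project management"))  # -> ['0001', '0002']
```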

There are three more titles that I wanted to work out, but have not had the chance to do yet.

  • Using external information inside the company
  • Suggested learning groups to self-organize
  • Linking performance data to learning excellence

Book: Head First Data Analysis

I have always been intrigued by O’Reilly’s Head First series of books. I don’t know any other publisher who is that explicit about how their books try to implement research-based good practices like an informal style, repetition and the use of visuals. So when I encountered Data Analysis in the series I decided to give it a go. I wrote the following review on Goodreads:

The “Head First” series has a refreshing ambition: to create books that help people learn. They try to do this by following a set of evidence-based learning principles; things like repetition, visual information and practice are all incorporated into the book. This is a good introduction to data analysis, but in the end it only scratches the surface and was a bit too simplistic for my taste. I liked the refreshers around hypothesis testing, solver optimisation in Excel, simple linear regression, cleaning up data and visualisation. The best thing about the book is how it introduced me to the open source multi-platform statistical package “R”.

Learning impact measurement and Knowledge Advisers

The day before Learning Technologies, Bersin and KnowledgeAdvisors organized a seminar about measuring the impact of learning. David Mallon, analyst at Bersin, presented their High-Impact Measurement framework.

Bersin High-Impact Measurement Framework

The thing that I thought was interesting was how the maturity of your measurement strategy is basically a function of how much your learning organization has moved towards performance consulting. How can you measure business impact if your planning and gap analysis isn’t close to the business?

Jeffrey Berk from KnowledgeAdvisors then tried to show how their Metrics that Matter product allows measurement, and dashboarding, around all the parts of the Bersin framework. They basically do this by asking participants to fill in surveys after they have attended any kind of learning event. Their name for these surveys is “smart sheets” (a much improved iteration of the familiar “happy sheets”). KnowledgeAdvisors has a complete software-as-a-service infrastructure for sending out these digital surveys and collating the results. Because they have all this data they can benchmark your scores against yourself or against their other customers (in aggregate of course). They have done all the sensible statistics for you, so you don’t have to filter out the bias in self-reporting or think about cultural differences in the way people respond to these surveys. Another thing you can do is pull in real business data (think of things like sales volumes). By doing some fancy regression analysis it is then possible to see what part of the improvement can be attributed, with some level of confidence, to the learning intervention, allowing you to calculate a return on investment (ROI) for the learning programs.
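As a rough illustration of the kind of analysis being described (this is not KnowledgeAdvisors’ actual model; the data and numbers are made up), regressing a business metric on a training indicator while controlling for another driver could look like this:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200

# Made-up data: monthly sales per employee, with a dummy for training attendance
trained = rng.integers(0, 2, n)        # 1 = attended the learning program
experience = rng.normal(5, 2, n)       # a control variable (years of experience)
sales = 100 + 8 * trained + 4 * experience + rng.normal(0, 10, n)

X = sm.add_constant(np.column_stack([trained, experience]))
model = sm.OLS(sales, X).fit()

# The coefficient on `trained` estimates the uplift attributable to training;
# multiply by headcount and margin, divide by program cost, and you have a
# (heavily assumption-laden) ROI figure.
print(f"Estimated sales uplift per trained employee: {model.params[1]:.1f}")
```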

All in all I was quite impressed with the toolset that they can provide and I do think they will probably serve a genuine need for many businesses.

The best question of the day came from Charles Jennings who pointed out to David Mallon that his talk had referred to the increasing importance of learning on the job and informal learning, but that the learning measurement framework only addresses measurement strategies for top-down and formal learning. Why was that the case? Unfortunately I cannot remember Mallon’s answer (which probably does say something about the quality or relevance of it!)

Experimenting with Needlebase, R, Google charts, Gephi and ManyEyes

The first tool that I tried out this week was Needlebase. This tool allows you to create a data model by defining the nodes in the model and their relations. Then you can train it on a web page of your choice to teach it how to scrape the information from the page. Once you have done that Needlebase will go out to collect all the information and will display it in a way that allows you to sort and graph the information. Watch this video to get a better idea of how this works:

[youtube=http://www.youtube.com/watch?v=58Gzlq4zSDk]

I decided to see if I could use Needlebase to get some insights into resources on Delicious that are tagged with the “lak11” tag. Once you understand how it works, it only takes about 10 minutes to create the model and start scraping the page.
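Needlebase itself is point-and-click, but the underlying scrape-and-structure idea is easy to sketch in Python with requests and BeautifulSoup. The selectors below are hypothetical (Delicious’ actual markup differed); this just illustrates turning a tag page into structured records:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical tag page; the real Delicious markup was different
url = "https://example.org/tag/lak11"
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

bookmarks = []
for item in soup.select("div.bookmark"):   # one node type in the data model
    link = item.select_one("a.title")
    bookmarks.append({
        "title": link.get_text(strip=True),
        "url": link["href"],
        "user": item.select_one("span.user").get_text(strip=True),
        "tags": [t.get_text(strip=True) for t in item.select("span.tag")],
    })

print(f"Scraped {len(bookmarks)} bookmarks")
```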

I wanted to get answers to the following questions:

  • Which five users have added the most links and what is the distribution of links over users?
  • Which twenty links were added the most with a “lak11” tag?
  • Which twenty links with a “lak11” tag are the most popular on Delicious?
  • Can the tags be put into a tag cloud based on the frequency of their use?
  • In which week were the Delicious users the most active when it came to bookmarking “lak11” resources?
  • Imagine that the answers to the questions above were all somebody could see about this Learning and Knowledge Analytics course. Would they get a relatively balanced idea of the key topics, resources and people related to the course? What are some of the key things that they would miss?

Unfortunately, after I had done all the machine learning (and had written the above), I learned that Delicious explicitly blocks Needlebase from accessing the site. I therefore had to switch plans.

The Twapperkeeper service keeps a copy of all the tweets with a particular tag (Twitter itself only gives access to the last two weeks of messages through its search interface). I managed to train Needlebase to scrape all the tweets: the tweet text, the username, profile picture URL and user ID of the person posting it, who the tweet was a reply to, the unique ID of the tweet, the longitude and latitude, the client that was used and the date of the tweet.
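With the scraped tweets exported to, say, a CSV file, the first questions translate into a few lines of pandas (the file name and column names are whatever you defined in the data model; these are assumptions):

```python
import pandas as pd

# Assumed export from the Needlebase model
tweets = pd.read_csv("lak11_tweets.csv", parse_dates=["date"])

# Which users tweeted the most, and how are tweets distributed over users?
per_user = tweets["username"].value_counts()
print(per_user.head(5))

# In which week was the hashtag most active?
per_week = tweets.set_index("date").resample("W").size()
print(per_week.idxmax(), per_week.max())
```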

I had to change my questions accordingly.

Another great resource that I re-encountered in these weeks of the course was Hans Rosling’s Gapminder project:

[youtube=http://www.youtube.com/watch?v=BPt8ElTQMIg]

Google has acquired part of that technology and now allows a similar kind of visualization with their spreadsheet data. What makes the visualization so powerful is the way that it shows three variables (x-axis, y-axis and size of the bubble) and how they change over time. I thought hard about how I could use the Twitter data in this way, but couldn’t find anything sensible. I still wanted to play with the visualization, so from the World Bank’s Open Data Initiative I downloaded data about population size, investment in education and unemployment figures for a set of countries per year (they have a nice iPhone app too). When I loaded that data I got the following result:

Click to be able to play the motion graph
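Google’s Motion Chart gadget does the animating for you, but the core of the visualization, three variables per country changing over time, can be sketched in a few lines of matplotlib (the file layout and column names are assumptions about the World Bank download):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed layout: one row per country per year
df = pd.read_csv("worldbank.csv")  # country, year, edu_spend, unemployment, population

fig, ax = plt.subplots()
for year in sorted(df["year"].unique()):
    frame = df[df["year"] == year]
    ax.clear()
    ax.scatter(
        frame["edu_spend"],           # x-axis: investment in education
        frame["unemployment"],        # y-axis: unemployment
        s=frame["population"] / 1e6,  # bubble size: population (scaled down)
        alpha=0.6,
    )
    ax.set_xlabel("Education spending (% of GDP)")
    ax.set_ylabel("Unemployment (%)")
    ax.set_title(f"Year {year}")
    plt.pause(0.3)                    # step through the years, Gapminder-style
plt.show()
```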

The last tool I installed and took a look at was Gephi. I first used SNAPP on the forums of week 1 and exported that data into an XML-based format. I then loaded that in Gephi and could play around a bit:

Week 1 forum relations in Gephi
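If you would rather script the analysis than click through Gephi, the same SNAPP export can also be read with networkx; a small sketch, assuming the export is GraphML (an XML-based graph format):

```python
import networkx as nx

# Load the forum interaction network exported from SNAPP
g = nx.read_graphml("week1_forums.graphml")

print(g.number_of_nodes(), "participants,", g.number_of_edges(), "reply relations")

# Who sits at the centre of the discussion?
centrality = nx.degree_centrality(g)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(node, round(score, 3))
```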

My participation in numbers

I will have to add up my participation for the two (to three) weeks, so in week 3 and week 4 of the course I did 6 Moodle posts, tweeted 3 times about Lak11, wrote 1 blogpost and saved 49 bookmarks to Diigo.

The hours that I have spent playing with all the different tools mentioned above are not included in my self-measurement. I did, however, really enjoy playing with these tools and learned a lot of new things.

The Future State of Capability Building in Organizations: Inspirations

CC-licenced photo by Flickr user kevindooley

I have been involved in organizing a workshop on capability building in organizations hosted on my employer‘s premises (to be held on October 20th). We have tried to get together an interesting group of professionals who will think about the future state of capability building and how to get there. All participants have done a little bit of pre-work by using a single page to answer the following question:

What/who inspires you in your vision/ideas for the future state of capability building in organizations?

Unfortunately I cannot publish the one-pagers (I haven’t asked their permission yet), but I have disaggregated all their input into a list of Delicious links, a YouTube playlist and a GoodReads list (for which your votes are welcome). My input was as follows:

Humanistic design
We don’t understand ourselves well enough. If we did, the world would not be populated with bad design (and everything might look like Disney World). The principles that we use for designing our learning interventions are not derived from a deep understanding of the human mind and its behavioural tendencies; instead they are often based on simplistic and unscientific methodologies. How can we change this? First, everybody should read Christopher Alexander’s A Pattern Language. Next, we can look at Hans Monderman (accessible through the book Traffic) to understand the influence of our surroundings on our behaviour. Then we have to try and understand ourselves better by reading Medina’s Brain Rules (or check out the excellent site) and books on evolutionary psychology (maybe start with Pinker’s How the Mind Works). Finally we must never underestimate what we are capable of. Mitra’s Hole in the Wall experiment is a great reminder of this fact.

Learning theory
The mental model that 99% of the people in this world have for how people learn is still informed by an implied behaviourist learning theory. I like contrasting this with George Siemens’ connectivism and Papert’s constructionism (I love this definition). These theories are actually put into practice (the proof of the pudding is in the eating): Siemens and Stephen Downes (prime sense-maker and a must-read in the educational technology world) have been running multiple massive online distributed courses with fascinating results, whereas Papert’s thinking has inspired the work on Sugarlabs (a spinoff of the One Laptop per Child project).

Open and transparent
Through my work for Moodle I have come to deeply appreciate the free software philosophy. Richard Stallman‘s four freedoms are still relevant in this world of tethered appliances. Closely aligned to this thinking is the hacker mentality currently defended by organizations like the Free Software Foundation, the EFF, Xs4all and Bits of Freedom. Some of the open source work is truly inspirational. My favourite example is the Linux based operating system Ubuntu, which was started by Mark Shuttleworth and built on top of the giant Debian project. “Open” thinking is now spilling over into other domains (e.g. open content and open access). One of the core values in this thinking is transparency. I actually see huge potential for this concept as a business strategy.

Working smarter
Jay Cross knows how to adapt his personal business models on the basis of what technology can deliver. I love his concept of the unbook and think the way that the Internet Time Alliance is set up should enable him to have a sustainable portfolio lifestyle (see The Age of Unreason by the visionary Charles Handy). The people in the Internet Time Alliance keep amplifying each other and keep on tightening their thinking on Informal Learning, now mainly through their work on The Working Smarter Fieldbook.

Games for learning
We are starting to use games to change our lives. “Game mechanics” are showing up in Silicon Valley startups and will enter the mainstream soon too. World Without Oil made me understand that playing a game can truly be a transformational experience, and Metal Gear Solid showed me that you can be more engaged with a game than with any other medium. If you want to know more, I would start by reading Jesse Schell’s wonderful The Art of Game Design, keep following Nintendo to be amazed by their creative take on the world, and follow the work that Jane McGonigal is doing.

The web as a driver of change
Yes, I am a believer. I see that the web is fundamentally changing the way that people work and live together. Clay Shirky‘s Here Comes Everybody is the best introduction to this new world that I have found so far. Benkler says that “technology creates feasibility spaces for social practice“. Projects like Wikipedia and Kiva would not be feasible without the current technology. Wired magazine is a great way to keep up with these developments and Kevin Kelly (incidentally one of Wired’s co-founders) is my go-to technology philosopher: Out of Control was an amazingly prescient book and I can’t wait for What Technology Wants to appear in my mailbox.

I would of course be interested in the things that I (we?) have missed. Your thoughts?

Serendipity 2.0

Arjen Vrielink and I write a monthly series titled: Parallax. We both agree on a title for the post and on some other arbitrary restrictions to induce our creative process. This time we decided to try and find out whether it is possible to engineer serendipity on the web. The post should start with a short (max. 200 words) reflection on what the Internet has meant for serendipity followed by three serendipitous discoveries including a description of how they were discovered. You can read Arjen’s post with the same title here.

There is an ongoing online argument over whether our increasing use of the Internet for information gathering and consumption has decreased our propensity for serendipitous discoveries (see for example here, here or here). I have worried about this myself: my news consumption has become very focused on (educational) technology and has therefore become very siloed. No magazine has this level of specificity, so when I read a magazine I read more things I wasn’t really looking for than when I read my RSS feeds in Google Reader. But this is a bit of a red herring. Yes, the web creates incredibly focused channels, and if all you are interested in is the history of the Second World War, then you can make sure you only encounter information about that war; but at the same time the hyperlinked nature of the web as a network actually turns it into a serendipity machine. Who hasn’t stumbled upon wonderful new concepts, knowledge communities or silly memes while just surfing around? In the end it probably is just a matter of personal attitude: an open mind. In that spirit I would like to try and engineer serendipity (without addressing the obvious paradoxical nature of doing that).

Serendipity algorithm 1: Wikipedia
One way of finding serendipity in Wikipedia is by looking at the categories of a particular article. Because of the many-to-many relationship between categories and articles these can often be very surprising (try it!). I have decided to take advantage of the many hyperlinks in Wikipedia and do the following (a rough scripted version of this walk is sketched after the list):

  • Start with the “Educational Technology” article
  • Click on the first two links to other articles
  • In these articles find two links that look interesting and promising to you
  • In each of these four articles pick a link to a concept that you haven’t heard about yet or don’t understand very well
  • Read these links and see what you learn
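Here is that scripted version, using Wikipedia’s public API via the requests library (the link choice here is random rather than “interesting-looking”, which is of course where the human part of the serendipity comes in):

```python
import random
import requests

API = "https://en.wikipedia.org/w/api.php"

def links_of(title):
    """Return the titles of the articles that a given article links to."""
    params = {"action": "query", "titles": title, "prop": "links",
              "pllimit": "max", "plnamespace": 0, "format": "json"}
    data = requests.get(API, params=params, timeout=10).json()
    page = next(iter(data["query"]["pages"].values()))
    return [link["title"] for link in page.get("links", [])]

trail = ["Educational technology"]
for _ in range(3):                 # three hops, as in the recipe above
    options = links_of(trail[-1])
    if not options:                # a dead end on the serendipity trail
        break
    trail.append(random.choice(options))

print(" -> ".join(trail))
```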

Instructional theory was the first link. From there I went to Bloom’s Taxonomy and to Paulo Freire. Bloom’s Taxonomy took me to DIKW, a great article on the “Knowledge Pyramid” explaining the data-to-information-to-knowledge-to-wisdom transformation. I loved the following Frank Zappa quote:

Information is not knowledge,
Knowledge is not wisdom,
Wisdom is not truth,
Truth is not beauty,
Beauty is not love,
Love is not music,
and Music is the BEST.

Paulo Freire took me to Liberation theology, a movement in Christian theology which interprets the teachings of Jesus Christ in terms of a liberation from unjust economic, political or social conditions. It began as a movement in the Roman Catholic church in Latin America in the 1950s-1960s. The paradigmatic expression of liberation theology came from Gustavo Gutiérrez, who in his book A Theology of Liberation coined the phrase “preferential option for the poor”, meaning that God is revealed to have a preference for those people who are “insignificant”, “unimportant” and “marginalized”.

The second link was Learning theory (education). That led to Discovery learning and Philosophical anthropology. Discovery learning prompted me to read about The Grauer School. This link didn’t really work out: the Discovery learning article had alluded to the “Learn by Discovery” motto with which the school was founded, but the article about the school has no further information. A dead end on the serendipity trail! Philosophical anthropology brought me to Hylomorphism, a concept I hadn’t heard of before (or had forgotten about: I used to study this stuff). It is a philosophical theory developed by Aristotle analyzing substance into matter and form: “Just as a wax object consists of wax with a certain shape, so a living organism consists of a body with the property of life, which is its soul.”

Conclusion: Wikipedia is excellent for serendipitous discovery.

Serendipity algorithm 2: the Accidental News Explorer (ANE)

The Accidental News Explorer

The tagline of this iPhone application is “Look for something, find something else” and its information page has a quote by Lawrence Block: “One aspect of serendipity to bear in mind is that you have to be looking for something in order to find something else.” I have decided to do the following:

  • Search for “Educational Technology”
  • Choose an article that looks interesting
  • Click on the “Related Topics” button
  • Choose the most interesting looking topic
  • Choose an article that looks interesting
  • Click on the “Related Topics” button
  • Choose the most interesting looking topic
  • Read the most appealing article

The article that looked interesting was one on Kurzweil Educational Systems. The only related topic was “Dallas, Texas”. This brought me to an article on Dirk Nowitzki, from where I chose “Joakim Noah” as a related topic. The most appealing article in that topic was titled: Who’s better: Al Horford or Joakim Noah?

Conclusion: An app like this could work, but it needs to get a little bit better in its algorithms and sources for finding related news. One thing I noticed about this particular news explorer is its complete US focus: you always seem to end up at cities and then at sports or politics.

Serendipity algorithm 3: Twitter
Wikipedia allows you to make fortunate content discoveries; Twitter should allow the same, but in a social dimension. Let’s try and use Twitter to find interesting people. I have decided to do the following (a scripted version is sketched after the list):

  • Search for the hashtag “#edtech”
  • Look at the first three people who have used the hashtag and look at their first three @mentions
  • Choose which of the nine people/organizations is the most interesting to follow
  • Follow this person and share/favourite a couple of tweets of this person
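Today a recipe like this could also be scripted, for example with the tweepy library against Twitter’s search endpoint. A hedged sketch (you need your own API credentials, the search only covers recent tweets, and the ranking heuristic is my own):

```python
import re
from collections import Counter

import tweepy

# Assumes you have a valid bearer token for the Twitter API
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

# Step 1: fetch recent tweets containing the hashtag
result = client.search_recent_tweets(query="#edtech", max_results=10)

# Step 2: collect the @mentions from those tweets
mentions = Counter()
for tweet in result.data or []:
    mentions.update(re.findall(r"@(\w+)", tweet.text))

# Step 3: candidate people to get to know, ranked by mention frequency
for handle, count in mentions.most_common(9):
    print(f"@{handle} ({count} mentions)")
```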

So the search brought me to @hakan_sentrk, @ShellTerrell and @briankotts. These three mentioned the following nine Twitter users/organizations:

  1. @mike08, ESP teacher; ICT consultant; e-tutor
  2. @MsBarkerED, Education Major, Michigan State University, Senior, Aspiring Urban Educator, enrolled in the course CEP 416
  3. @jdthomas7, educational tech/math coach, former math, computer teacher. former director of technology at a local private school. specializing in tech/ed integration
  4. @ozge, Teacher/trainer, preschool team leader, coordinator of an EFL DVD project, e-moderator, content & educational coordinator of Minigon reader series, edtech addict!
  5. @ktenkely, Mac Evangelist, Apple Fanatic, Technology Teacher, classroom tech integration specialist, Den Star, instructional coach
  6. @Parentella, Ever ask your child: What happened at school today? If so, join us.
  7. @Chronicle, The leading news source for higher education.
  8. @BusinessInsider, Business news and analysis in real time.
  9. @techcrunch, Breaking Technology News And Opinions From TechCrunch

I decided to follow @ozge who seems to be a very active Twitter user posting mostly links that are relevant to education.

Conclusion: the way I set up this algorithm did not help in getting outside of my standard community of people. I was already following @ShellTerrell, for example. I probably should have designed a slightly different experiment, maybe involving lists in some way (and choosing an atypical list somebody is on). That might have allowed me to really jump communities, which I didn’t do in this case.

There are many other web services that could be used in a similar fashion to the above for serendipitous discovery. Why don’t you try doing it with Delicious, with Facebook, with LinkedIn or with YouTube?

Notes and Reflections on Day 1 of I-KNOW 2010

I-KNOW 2010

From September 1-3, 2010, I will attend the 10th International Conference on Knowledge Management and Knowledge Technologies (I-KNOW 2010) in beautiful Graz, Austria. I will use my blog to do a daily report on my captured notes and ideas.

And now for something completely different
In the last few years I have put a lot of effort into becoming a participating member of the global learning technology community. This means that when I visit a “learning” conference I know a lot of the people who are there. At this conference I know absolutely nobody. Not a single person in my online professional network seems to know about, let alone attend, this conference.

One of my favourite competencies in the leadership competency framework of Shell is the ability to value differences. People who master this competency actively seek out the opinions of people who think differently than they do. There are good reasons for this (see for example Page’s The Difference), and it is one of the things that I would like to work on myself: I am naturally inclined to seek out people who think very much like me, and this conference should help me in overcoming that preference.

After the first day I already realise that the world I live and work in is very “corporate” and very Anglo-Saxon. In a sense this conference feels like I have entered a world that is normally hidden from me. I would also like to compliment the organizers of the conference: everything is flawless. There even is an iPhone app, soon to be standard for all conferences I think (I loved how FOSDEM did this: publishing the program in a structured format and then letting developers make the apps for multiple mobile platforms).

Future Trends in Search User Interfaces
Marti Hearst has just finished writing her book Search User Interfaces, which is available online for free here, and she was therefore asked to keynote about the future of these interfaces.

Current search engines are primarily text based, have a fast response time, are tailored to keyword queries (supporting a search paradigm where there is iteration based on these keywords), sometimes have faceted metadata that delivers navigation/organization support, support related queries and in some cases are starting to show context-sensitive results.

Hearst sees a couple of things happening in technology and in how society interacts with that technology that could help us imagine what the search interface will look like in the future. Examples are the wide adoption of touch-activated devices with excellent UI design, the wide adoption of social media and user-generated content, the wide adoption of mobile devices with data service, improvements in Natural Language Processing (NLP), a preference for audio and video and the increasing availability of rich, integrated data sources.

All of these trends point to more natural interfaces. She thinks this means the following for search user interfaces:

  • Longer more natural queries. Queries are getting longer all the time. Naive computer users use longer queries, only shortening them when they learn that they don’t get good results that way. Search engines are getting better at handling longer queries. Sites like Yahoo Answers and Stack Overflow (a project by one of my heroes Joel Spolsky) are only possible because we now have much more user-generated content.
  • “Sloppy commands” are now slowly starting to be supported by certain interfaces. These allow flexibility in expression and are sometimes combined with visual feedback. See the video below for a nice example.

[vimeo http://vimeo.com/13992710]

  • Search is becoming as social as possible. This is a difficult problem because you are not one person, you are different people at different times. There are explicit social search tools like Digg, StumbleUpon and Delicious and there are implicit social search tools and methods like “People who bought x, also bought…” and Yahoo’s My Web (now defunct). Two good examples (not given by Hearst) of how important the social aspects of search are becoming are this Mashable article on a related Facebook patent and this Techcrunch article on a personalized search engine for the cloud.
  • There will be a deep integration of audio and video into search. This seemed to be a controversial part of her talk. Hearst is predicting the decline of text (though not among academics and lawyers). There are enough examples around: the culture of video responses on YouTube apparently arose spontaneously and newspaper websites are starting to look more and more like TV. It is very easy to create videos, but the way that we can edit them still needs improvement.
  • A final prediction is that the search interface will be more like a dialogue, or conversational. This reality is a bit further away, but we are starting to see what it might look like with apps like Siri.

Enterprise 2.0 and the Social Web

Murinsel Bridge in Graz, photo by Flickr user theowl84, CC-licensed

This track consisted of three presentations. The first one was titled “A Corporate Tagging Framework as Integration Service for Knowledge Workers”. Walter Christian Kammergruber, a PhD student from Munich, told us that there are two problems with tagging: one is how to orchestrate the tags in such a way that they work for the complete application landscape, the other is the semantic challenge of getting rid of ambiguity, multiple spellings, etc. His tagging framework (called STAG) attempts to solve these problems. It is a piece of middleware that sits on the Siemens network and provides tagging functionality through web services to Siemens’ blogging platform, wiki, discussion forums and SharePoint sites. These tags can then be displayed using simple widgets. The semantic problem is solved by having a thesaurus editor, allowing people to define synonyms for tags and make relationships between related tags.
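This is obviously not STAG itself, but the semantic half of the problem, folding spelling variants and synonyms onto one canonical tag, can be illustrated in a few lines (the thesaurus entries are invented):

```python
# A toy thesaurus of the kind an editor would maintain in such a framework
SYNONYMS = {
    "e-learning": "elearning",
    "e_learning": "elearning",
    "knowledgemanagement": "knowledge management",
    "km": "knowledge management",
}

def normalise(tag):
    """Map a raw user tag onto its canonical form."""
    t = tag.strip().lower()
    return SYNONYMS.get(t, t)

raw_tags = ["E-Learning", "km", "elearning", "Knowledge Management"]
print({normalise(t) for t in raw_tags})  # -> {'elearning', 'knowledge management'}
```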

I strongly believe that any large corporation would be very much helped with a centralised tagging facility which can be utilised by decentralised applications. This kind of methodology should actually not only be used for tagging but could also be used for something like user profiles. How come I don’t have a profile widget that I can include on our corporate intranet pages?

The second talk, by Dada Lin, was titled “A Knowledge Management Scheme for Enterprise 2.0”. He presented a framework that should be able to bridge the gap between Knowledge Management and Enterprise 2.0. It is called the IDEA framework in which knowledge is seen as a process, not as an object. The framework consists of the following elements (also called “moments”):

  • Interaction
  • Documentation
  • Evolution
  • Adoption

He then puts these moments into three dimensions: Human, Technology and Organisation. Finally he presented some research around a Confluence installation at T-Systems. None of this was really enlightening to me; I was, however, intrigued to notice that the audience focused more on the research methodology than on the outcomes of the research.

The final talk, “Enterprise Microblogging at Siemens Building Technologies Division: A Descriptive Case Study” by Johannes Müller, a senior Knowledge Management manager at Siemens, was quite entertaining. He talked about References@BT, a community at Siemens that consists of many discussion forums, a knowledge reference and, since March 2009, a microblogging tool. It has 7000 members in 73 countries.

He built the microblogging platform himself, so it has exactly the features it needs to have. One of the features he mentioned was that it shows a picture of every user in every view of the microblog posts. This is now a standard feature in lots of tools (e.g. Twitter or Facebook) and it made me realise that Moodle was actually one of the first applications I know of that did this consistently: another example of how forward-thinking Martin Dougiamas really was!

Müller’s microblogging platform does allow posts of more than 140 characters, but does not allow any formatting (no line-breaks or bullet points for example). This seems to be an effective way of keeping the posts short.

He shared a couple of strategies that he uses to get people to adopt the new service. Two things that were important were the provision of widgets that can be included in more traditional pages on the intranet and the ability to import postings from other microblogging sites like Twitter using a special hashtag. He has also sent out personalised emails to users with follow suggestions. These were hugely effective in bootstrapping the network.

Finally he told us about the research he has done to get some quantitative and qualitative data about the usefulness of microblogging. His respondents thought it was an easy way of sharing information, an additional channel for promoting events, a new means of networking with others, a suitable tool to improve writing skills and a tool that allowed for the possibility to follow experts.

Know-Center Graz
During lunch (and during the Bacardi-sponsored welcome reception) I had the pleasant opportunity to sit with Michael Granitzer, Stefanie Lindstaedt and Wolfgang Kienreich from the Know-Center, Austria’s Competence Center for Knowledge Management.

They have done some work for Shell in the past around semantic similarity checking and have delivered a working proof of concept in our Mediawiki installation. They demonstrated some of their new projects and we had a good discussion about corporate search and how to do technological innovation in large corporations.

The first project that they showed me is called the Advanced Process-Oriented Self-Directed Learning Environment (APOSDLE). It is a research project that aims to develop tools that help people learn at work. To rephrase it in learning terms: it is a very smart way of doing performance support. The video below gives you a good impression of what it can do:

[youtube=http://www.youtube.com/watch?v=4ToXuOTKfAU?rel=0]

After APOSDLE they showed me some outcomes from the Mature IP project. From the project abstract:

Failures of organisation-driven approaches to technology-enhanced learning and the success of community-driven approaches in the spirit of Web 2.0 have shown that for that agility we need to leverage the intrinsic motivation of employees to engage in collaborative learning activities, and combine it with a new form of organisational guidance. For that purpose, MATURE conceives individual learning processes to be interlinked (the output of a learning process is input to others) in a knowledge-maturing process in which knowledge changes in nature. This knowledge can take the form of classical content in varying degrees of maturity, but also involves tasks & processes or semantic structures. The goal of MATURE is to understand this maturing process better, based on empirical studies, and to build tools and services to reduce maturing barriers.

Mature

I was shown a widget-based approach that allowed people to tag resources, put them in collections and share these resources and collections with others (more information here). One thing really struck me about the demo I got: they used a simple browser plugin as a first point of contact for users with the system. I suddenly realised that this would be the fastest way to add a semantic layer over our complete intranet (it would work for the extranet too). With our desktop architecture it is relatively trivial to roll out a plugin to all users. This plugin would allow users to annotate webpages on the net creating a network of meta-information about resources. This is becoming increasingly viable as more and more of the resources in a company are accessed from a browser and are URL addressable. I would love to explore this pragmatic direction further.

Knowledge Sharing
Martin J. Eppler from the University of St. Gallen seems to be a leading researcher in the field of knowledge management: when he speaks, people listen. He presented a talk titled “Challenges and Solutions for Knowledge Sharing in Inter-Organizational Teams: First Experimental Results on the Positive Impact of Visualization”. He is interested in the question of how visualization (mapping text spatially) changes the way that people share knowledge. In this particular research project he focused on inter-organizational teams. He tries to make his experiments as realistic as possible, so he used senior managers and real-life scenarios, put them in three experimental groups and set them a particular task. One group was supported with special computer-based visualization software, another group used posters with templates and a final (control) group used plain flipcharts. After analysing his results he was able to conclude that visual support leads to significantly greater productivity.

This talk highlights one of the problems I have with science applied in this way. What do we now know? The results are very narrow and specific. What happens if you change the software? Is this the case for all kinds of tasks? The problem is: I don’t know how scientists could do a better job. I guess we have to wait till our knowledge-working lives can really be measured consistently and in real time, and then for smart algorithms to find out what really works for increased productivity.

The next talk in this track was from Felix Mödritscher, who works at the Vienna University of Economics and Business. His potentially fascinating topic, “Using Pattern Repositories for Capturing and Sharing PLE Practices in Networked Communities”, was hampered by the difficulty of explaining the complexities of the project he is working on.

He used the following definition for Personal Learning Environments (PLEs): a set of tools, services, and artefacts gathered from various contexts and to be used by learners. Mödritscher has created a methodology that allows people to share good practices in PLEs. First you record PLE interactions, then you allow people to depersonalise these interactions and share them as an “activity pattern” (distilled and archetypical), after which others can pick these up and repersonalise them. He has created a pattern repository with a pattern store. It has a client-side component implemented as a Firefox extension: PAcMan (Personal Activity Manager). It is still early days, but these patterns appear to be really valuable: they not only help with professional competency development, but also with what he calls transcompetences.

I love the idea of using design patterns (see here), but thought it was a pity that Mödritscher did not show any very concrete examples of shared PLE patterns.

My last talk of the day was on “Clarity in Knowledge Communication” by Nicole Bischof, one of Eppler’s PhD students in the University of St. Gallen. She used a fantastic quote by Wittgenstein early in her presentation:

Everything that can be said, can be said clearly

According to her, clarity can help with knowledge creation, knowledge sharing, knowledge retention and knowledge application. She used the Hamburger Verständlichkeitskonzept as a basis to distill five distinct aspects of clarity: Concise content, Logical structure, Explicit content, Ambiguity low and Ready to use (the first letters conveniently spell “CLEAR”). She then did an empirical study about the clarity of PowerPoint presentations. Her presentation turned tricky at that point, as she was presenting in PowerPoint herself. The conclusion was a bit obvious: knowledge communication can be designed to be more user-centred and thus more effective; clarity helps in translating the innovation and potential of knowledge, and can help with a clear presentation of complex knowledge content.

Bischof did an extensive literature review and found that clarity is an under-researched topic. Having just read Tufte’s anti-PowerPoint manifesto, I am convinced that there is a world to gain for businesses like Shell. So much of our decision making is based on PowerPoint slide packs that it becomes incredibly urgent to get these right.

Never walk alone
I am at this conference all by myself and have come to realise that this is not the optimal situation. I want to be able to discuss the things that I have just seen and collaboratively translate them to my personal work situation. It would have been great to have a sparring partner here who shares a large part of my context. Maybe next time?!

Learning in 3D: Please Join My Reading Group

Learning in 3D

My company is piloting serious gaming in the learning domain using an immersive 3D environment based on the Unreal engine. We are on the cusp of developing a game around hazard recognition scenarios that are based on real-life experiences. Because of this I am reading up on serious gaming and game design in general. After finishing the brilliant The Art of Game Design by Jesse Schell (more about that book in a later post), I now want to tackle Learning in 3D: Adding a New Dimension to Enterprise Learning and Collaboration by Kapp and O’Driscoll.

I have decided to start a reading group which will read the ten chapters of the book in ten weeks (there is a preview of the chapters here). We will use blogs, Twitter, Delicious and a weekly teleconference to communicate around the book.

So how will this work?

Goal
The book provides principles for architecting 3D learning experiences (including a maturity model for immersive technologies) and has lessons on and examples of implementations in enterprise situations. The goal of the reading group is to actively internalise these lessons and see how they can be applied in our own organisation(s).

Participants
As I want this reading group to impact the learning function in my own organisation, I intend for about 50% of the participants to work for Shell and for the rest to come from my network outside of Shell. The minimum number of participants is 5 (doing two chapters each) and the maximum is 40 (four people per chapter and, incidentally, the limit of our teleconferencing solution). Everybody will have to acquire their own copy of the book. (I used the Book Depository to buy this book, as they have free shipping; note that I will earn a small referral fee if you click this link and then buy the book.)

Process
The reading group will have a weekly rhythm with a particular chapter of the book as the focus of attention. The following activities will happen every week:

  • One or more people will be assigned to write a summary of the chapter on their blog (if they don’t have a blog, they email me the summary and I will publish it on this blog). The summary ends with at least one multiple choice poll and a discussion question/proposition, both used as input for the teleconference.
  • All reading group participants will be tweeting questions and comments about the book (using a designated hashtag, see below).
  • Each participant will try to add at least one interesting link to Delicious (again with a hashtag) that relates to the chapter of that week.
  • At the end of the week (actually on a Monday), there is a teleconference where the summarisers for that week lead a discussion about the chapter, using the poll and the discussion question/proposition as input.

Hashtag and aggregation
All Delicious URLs, blogposts and Tweets should be tagged with the #Lin3DRG hash tag (stands for: Learning in 3D Reading Group). This will allow me to try some smart ways of aggregating and displaying the data using things like Yahoo Pipes or Downes’ gRSShopper. I promise to write another post on my aggregation strategies.
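One simple aggregation strategy, sketched here with the Python feedparser library: poll each participant’s RSS feed and keep only the items tagged with the hashtag (the feed URLs are placeholders):

```python
import feedparser

# Placeholder feeds; in practice one per participant, plus Delicious and Twitter
FEEDS = [
    "https://example.org/participant1/feed",
    "https://example.org/participant2/feed",
]

HASHTAG = "lin3drg"

items = []
for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        tags = {t.term.lower() for t in entry.get("tags", [])}
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if HASHTAG in tags or f"#{HASHTAG}" in text:
            items.append((entry.get("published", ""), entry.title, entry.link))

# Newest first: one combined stream of reading group activity
for published, title, link in sorted(items, reverse=True):
    print(published, title, link)
```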

When and where?
It is going to be a virtual affair, co-creating on the web. We will start reading on April 19th, will have our first weekly 30-minute teleconference on Monday April 26th at 15:30 Amsterdam time and will close out on June 28th (so we will have 10 telcons on ten consecutive Mondays at the same time; it is not a problem if you miss one, as we will record them).

Do you want to join the reading group? Then please fill out a comment with your name, email address, blog URL (not required) and any comments or questions you might have at the bottom of this post. I will get back to you with your assigned chapter(s), some more information on the process and the call-in details for the teleconference. You can put your name down until Monday April 19th.

I am really looking forward to it!