What Does an Innovation Manager for Learning Technologies Do?

Magazine interview

A couple of weeks back I was interviewed by Amir Elion who was a guest editor for an Israeli magazine on Human Resources. The interview has now been published (if you can read Hebrew it is available online for free). The publisher has kindly given me permission to publish the English version of the interview on my blog. It might give people a better idea of what I do every day.

Please tell us about yourself, what you do, and how you got there?
Basically I studied to be a philosopher and a physical education teacher. I taught high school children in a difficult neighborhood in Amsterdam. I got very interested in educational technology that I could use personally to support my teaching. This was for project based work, where children had to do real assignments for an external party. I needed educational technology to support the process. That’s when I got to know Moodle, an online course management system. I was one of the first people to use Moodle in the world, and the first one to use it in the Netherlands, for sure. I translated it into Dutch, and started consulting around Moodle. I got picked up by the Dutch Moodle Partner to work as a consultant. Shell was one of our clients, and I made a switch to Shell, to become a blended learning advisor. Blended learning is one of the core strategies of Shell HR. I did a lot of “evangelising” about the strategy, trying to give people a nuanced view of what blended learning really means, and that it’s not just a move from the classroom to an online system but slightly more complex than that. Following this, I moved to a new job as Innovation Manager in Shell’s Learning Technologies team.

And what does that mean?
I am actually not a part of the HR function, but of the global IT function. In the IT function that is responsible for HR applications, I work for what we call a Business Systems Manager (BSM). We have one for Learning, one for Talent, and one for Payments and Remuneration. And each of these BSMs has an Application Portfolio Manager, somebody who is aware of all the applications in their landscape and tries to manage these sensibly – usually by reducing the number of applications, to rationalize. And Learning is the only BSM that has an innovation role. Actually that makes sense, because in learning the tools that you use actually affect the process. My example is always Payments – the fact that we use a particular tool to process our payroll doesn’t affect when and how much salary I receive, or how I get it. So what the systems do is protect the integrity and correctness, and make sure things are automated and that costs are relatively low. But with learning – the applications that you have in your portfolio change the outcome of the learning. So they actually change the learning process, the learning events. The more the tools are on the delivery side of things, the more this is the case (less so for administrative things). That is why I have to keep abreast of new learning technologies, because they are so entwined with what the learning function can do and how you deliver your learning.

What is it I really do? – I manage an innovation funnel. We have an innovation process which takes ideas and develops them. This crosses each part of the business and includes innovation around drilling, refining and retailing, for example. It’s a kind of classic funnel idea, where at one end of the funnel you have a lot of ideas, handed to you by other people, that you investigate minimally. Then there are certain stage gates, with documentation and more research that you need to do for each opportunity to progress through the stages. It’s quite a structured way of doing things. A lot of stages in the funnel have to do with doing small Proofs of Concept, pilots, in kind of “micro-ecologies” of the real situations. So you would always try something out before you do a global rollout. By doing it this way, when you are ready to implement in a larger fashion, you should have already answered the most difficult questions around implementation.

So you take this global innovation structure, with its stage gates, and apply it to learning technologies?
Correct. And my direct partner in doing this is the Learning Strategy and Innovation Manager. In the HR Learning function there is somebody who tries to do innovation from a learning perspective, more than from a technology perspective – and we share our funnel together. Of course, most learning innovations have a technology component, but there are innovations in there that are purely process innovations around learning that are also managed with the same process.

So looking at the structured Innovation processes at Shell, how do you encourage employees to support these efforts – to submit ideas, to participate in the pilots or gate reviews, to be positive towards change?
First of all, I have noticed personally that “innovation” seems to be the current buzzword in the business world. It’s all about innovation now – how to do it, how not to do it, or whatever.

I read a book a little while back, I think it was Innovation to the Core, that argued that with innovation we should reach a point similar to the one we reached with quality management about 20-25 years ago. It started with the car manufacturers in Japan, who made quality management an integrated part of the company – not something that you have a Quality Manager do, but everybody’s responsibility. I agree with the sentiment that companies should be innovative and should keep innovating – just as we as people should keep learning, companies should do the same. In that sense Learning and Innovating could become synonyms. It should be a lens that you put on everything. The fact that I have “Innovation Manager” in my title is actually not a good sign of how mature we are at innovating. If we were more mature it wouldn’t be the case. It is a good sign in that it is better than what we had – we are moving in the right direction.

If you look at what is happening in our company – first of all our CEO is driving a couple of behaviors very strongly, expecting everybody to take those behaviors on board – all his senior managers, etc. And one of those is innovation. Whenever we have a speech by the CEO, there are three things that he will always talk about. The first, for Shell, has always been safety (especially now, after the BP oil spill – though that didn’t really change things, because safety was already a big part of the culture), and he nearly always talks about having an external focus and about innovation.

So that would be a top-down approach to innovation?
Yes – that is a top-down approach to the innovation effort. At the same time, there have been some processes that have existed for a long time in Shell. One of them is called “Game Changer”. Anybody who has an idea which they think can change our business can hand it in through Game Changer. There is a committee that looks into these ideas and writes back to people. The committee has the authority to hand out an initial sum of money to good ideas. These ideas can be developed further with that money, and perhaps make the next step. So that can be a grass-roots approach to innovation.

And what does the person who suggested the idea get from it?
Usually that person is very involved in the implementation. Often that person would be freed from their regular work to be able to work as a project lead or in developing these ideas. I don’t think there is any real monetary reward – it’s more that you can go in a direction that you’d like to go.

That’s interesting. And does it really happen?
Oh, it really happens. Because there is quite a significant budget, and some really good ideas have come from it in the past. A lot of them are on the technical side of our business – around engineering, sustainable energies, etc. It gives people the chance to experiment in a sanctioned way.

The other drivers are people like me – there are a lot more people like me dispersed throughout the business in different locations. I try to do two things – I try to first look outside – so I see what technologies are out there, in the learning technology world. I try to translate those into what would work for Shell. I noticed that there are lots of Web technologies out there that aren’t even developed with learning in mind, but if you “translate” them in a smart way, they can be very beneficial for learning. For example – the way that new companies do their customer support, or what universities are doing around their teaching – and try to adapt that into the business world. That’s the external focus of what I try to do. When I find something that I think has potential then my job is to find some stakeholder inside Shell who’s willing to experiment with it. I can’t experiment by myself because I don’t have the customers and I don’t have the resources. I need to find a partner somewhere in the business who’s willing to work with me.

A recent example is something relatively simple – the idea of capturing lectures. We still have a lot of face-to-face education, and we will continue to have it. But none of it is captured (by video). So we’ve brought in a system, widely used by universities, which captures both the speaker (with a camera) and their presentation – this allows you to play the two back in sync and skip to different slides. With a thing like that, you find a customer inside Shell – in one learning center – who has a need for this kind of stuff. The other way around happens as well, where there are people inside the company that have technology-related problems that they cannot solve with the current offerings of our learning landscape, and you try to find a solution that could help them. It goes both ways. You are in a kind of broker role, and at the same time you really look ahead. There is a bit of tension because I am supposed to look 3-5 years ahead of time, but my appraisal is annual.

So you have to find the mix of short term and long term things?
Yes. You have to be creative in how you deliver. You have to be creative in where you find budgets – in that sense it is quite challenging.

So this would be the other channel – innovation brokers throughout the organization, each in their field (as you are in Learning Technologies), who drive innovation – connected of course with business needs, and with an outlook that reaches a few years ahead?
Yes.

Do you have links with other innovation people in other fields in Shell?
I work with the general Information Technology innovation people as well. We have a new business. We used to have a downstream business, an upstream business, and a Gas business. What we’ve done is put the Gas business into Upstream and create a new business called “Projects and Technology” that does our big projects. This allows our upstream and downstream businesses to leverage each other’s strong points. The head of Innovation of Shell is in this new business, and they have an innovation team. They have a larger budget, so I am in touch with them to make sure we are aligned, and occasionally there are things that have a learning aspect but are bigger than just learning. One example would be Serious Gaming: if we create 3D models of our plants, we can create learning scenarios in them using a gaming engine. But you can also imagine that a lot of other processes in the business could benefit from having this kind of information. For example – we’ve done some research into decommissioning plants using 3D models to see how to do it quickly and efficiently. That’s when we find them and ask them to co-sponsor some ideas, or maybe even drive them a bit more than we drive them ourselves.

This is a general HR magazine. Your examples are related to Learning due to your role, but perhaps you can share some other HR stories of innovation – i.e. in recruiting, performance management, etc.
The one thing in which Shell is different perhaps from other global companies is in how global the rollout of our performance management is. I don’t know to what level that’s the case at Motorola, but in Shell literally everybody has the same goals and performance appraisal. The goals for your year are in a central system. From the top down you can see a complete overview. And it really flows down from the top. The first person every year to write their goals for the year is the CEO. Then the next level of management writes theirs. Then their people write something that reflects those. It gets more and more specified rolling down from the top. I think that’s quite an interesting way of doing it. I haven’t seen it globalized to that extent anywhere else (but maybe I haven’t looked a lot).

Another thing I like is that everybody has an individual development plan. This is really part of the manager’s responsibility that people fulfill their individual development plan. It goes even so far that you can use it like a leverage point with your manager to do the things that you want to do as an employee.

I think we also have some innovative things in the way that we set up our HR practices and services, but I am not sure how that is done.

Of course we have global competence profiles, and jobs match these competence profiles. In fact all learning should be geared towards filling competence gaps. Our learning framework is really based on a competence framework.

In my understanding, what you describe could be called an application of best practices in performance management, in learning management, in employee development, which is of course very difficult to achieve. It’s good to let others know that it does work if you really do it. Can you think of stories of going “beyond the best practices”? I mean – you have the organizational structures and the way you do things – but do you “shake” them a bit and make things work not only according to paradigms – when it is called for and it’s worthwhile?
If I am very honest – I think that’s a difficult point for a company like Shell. Because exactly as you are saying – we completely focus on best practice. So that means that in everything we do we look at external benchmarks, and we try to be top quartile – the best 25% in those categories. What that means is that it’s relatively hard to do something “out of the box” at a global level. Because whenever you want to do anything, the first questions will be – “What other companies have already done it?”, “Can we prove that it’s effective?”. In the learning technologies and knowledge management fields there have been certain practices that were innovative when they started but are now best practices. Shell was one of the first companies to have a truly global wiki implementation. We have an internal Wikipedia, using the same software as Wikipedia – it has about 70,000 users and 30,000 articles – a really big and incredibly useful wiki with a lot of business value in it.

The one thing that I think Shell is innovative about is its complete focus on the alignment of learning and work. We focus more and more on on-the-job training, on learning events that are completely relevant to somebody’s work. The way that learning events are designed – they always have work-related assignments attached to them, and most of them require supervisor involvement. You need to agree with your supervisor on what you need to do. Learning is often integrated with knowledge management – through the wiki, for example.

It’s not just a course. You have to think about application and about follow-up…
Yes. And it’s part of the standard knowledge management process. Not a course that stands in itself. A part of it is alive with the business.

That sounds very good. Could you compare how things were 3 years ago to something that’s happening today, and how you see it in 3 years’ time? (Possibly in learning or learning technologies, because that’s where your focus is.)
If you look at learning 3 years ago – and this is still the case a bit, though less so – there are different ways of delivering learning. It could be face-to-face, fully virtual learning, asynchronous, synchronous. We have email-based courses where you automatically get a weekly email with an assignment and some reading to do, etc.

There’s really a broad spectrum in the delivery of learning, but everything is still delivered from a course paradigm and from the idea of competence profiles. What you are starting to see is that the course paradigm is starting to crumble a bit, into what is called informal learning or on-the-job learning. What I think you will see (and we are starting to see it here in the way we are architecting the next steps in our learning landscape) is smaller, modular content pieces; a different perspective on what we consider to be a learning event, and what things can be seen as a learning event. More of a “pull” idea – learning when you need it – than a “push” approach. Specific learning interventions around very current, direct business problems – instead of through competencies. Because competency-based learning for me has two abstractions. First you have to make sure that your competence framework is a very good reflection of your business, the skills it requires, and their translation into competencies. You hope you’ve done that correctly. Then there’s another abstraction when you create learning that has to match these skills and competencies, and you hope that the learning you create can produce these competencies. If one of those steps goes wrong, the learning is pointless. What you are seeing more and more are very direct, shorter interventions. They will also have a shorter lifespan. This has to translate into your development methodology. So I am really starting to see an increase in how fast the learning function is expected to deliver, for less cost – it has to be cheaper and faster! Those requirements are starting to drive change in the way things are done.

If we are moving away from the competency based model, can you predict or imagine what model we are heading towards?
Yes. I think we will go to a simplified competence model. Because a competence model for the number of functions that Shell has is an incredibly complicated structure that offers room to be simplified. What we are really working on in the next years is a high-quality learning typology: the different business-related situations where learning interventions could be a good solution. And they are very different, because at one end there is a big need for “certified knowledge” (for instance – airplane pilots are assessed yearly, and have to prove that they have the right skills and knowledge). That kind of certification will be seen more and more in our business. I am sure there will be legislation after the BP oil disaster that will push us even more towards having certified geo-engineers. That’s an aspect learning will have to cover, and it requires a very different solution from helping people implement change in the business as quickly as possible. I think a lot of learning will be related to projects – and that’s a different part of the learning typology. If the learning function can help get a factory online in 4 years instead of 5 – that’s a massive cost saving and business-results improvement for a company like us. I think there will be a lot of focus on that.

Thank you very much for this interview. It was really interesting to hear these things.

How Disaggregation Will Affect Our Jobs

Arjen Vrielink and I write a monthly series titled: Parallax. We both agree on a title for the post and on some other arbitrary restrictions to induce our creative process. This time we decided to write about how disaggregation will affect our (your) jobs. This post is a remix of existing content on the web. We were not allowed to write any original content but had to compose our post from at least 5 different sources on the web. Any web content could be used. You can read Arjen’s post with the same title here.

Photo by Flickr user Lee J Haywood, CC-licensed

Content sources are disaggregating. Courses, albums, newspapers, and even TV programs (i.e. the 5 min YouTube video) are fragmenting into smaller pieces. Which, of course, increases options for re-creating/remixing (smaller the size, greater the opportunities for repurposing). (source)

Librarians and publishers are familiar with the term “the least publishable unit,” which referred to e-journal articles at the time they came into vogue. Now, “microcontent” is generally used to describe even smaller units of content that come from some larger whole. [..] E-learning disaggregates the learning process from the institution as students avail themselves of the “least unit”: a course can possibly be independent of place and time—tied to the parent institution in name only. Banking online reduces “the bank” to a series of activities, and the ordered presentation of a library’s physical collection of content and its highly structured services can be irrelevant and even inhibitory in a digital world. A nine-year-old’s Web page about spiders coexists with a presentation at a conference by the world’s expert on spiders and may be deemed more useful to a nine-year-old searcher than the expert’s paper. It’s about more than just content. It’s about context. (source)

[slideshare id=1572111&doc=reformation-2-090612001717-phpapp02]

Reuse, remix, mashup open learning:

[youtube=http://www.youtube.com/watch?v=4csN-cxyS-s&rel=0]

Today while more data is reaching us faster, our capacity to absorb and process this information has limitations. Rather than reading long passages of information, users merely “scan” for information, prompting web writers to group chunk information into smaller, consumable portions. According to Wikipedia, chunking is a method of presenting information which splits concepts into small pieces or “chunks” of information to make reading and understanding faster and easier. Chunked content usually contains bullet lists and shortened paragraphs with increased usage of subheads and scannable text, and bold key phrases. [..] Today our expectations for information equals our need immediate gratification. We want to-the-minute updated information and content, in an easily consumable form. It is therefore not surprising to see new forms of information consumption (and disbursal) have evolved to keep pace. [..] Examples of this type of microlearning include reading a paragraph of text, listening to a podcast or educational video-clip, viewing a flashcard, memorizing a word, vocabulary, definition or formula, selecting an answer to a question, answering questions in quizzes etc. By delivering learning content in small, consumable portions, mobile learning enables educators to supplement mainstream education through a method of quick review and research. (source)

As an instructional technology, microlearning focuses on the design of micro learning activities through micro steps in digital media environments, which already is a daily reality for today’s knowledge workers. These activities can be incorporated in learner’s daily routines and tasks. Unlike “traditional” elearning approaches, microlearning often tends towards push technology through push media, which reduces the cognitive load on the learners. Therefore, the selection of micro learning objects and also pace and timing of micro learning activities are of importance for didactical designs. (source)

New “tagging” technologies and practices, creating “soft” metadata, show a possible way to new kinds of collaborative knowledge environments. It will be shown that is not only microcontent itself, but also its contextualization through learner-centered approaches, discussion through trackbacks or commentaries, and “soft” object metadata which contribute to an understanding of microlearning and provide insights for implementing personal publishing systems in (educational) institutions. Until now, most of these conceptions are emergent on the web, so future research would have to identify possible uses and integration into learning environments and didactical applications. (source)

Notes and Reflections on Day 2 and 3 of I-KNOW 2010

I-KNOW 2010

These are my notes and reflections for the second and third days of the 10th International Conference on Knowledge Management and Knowledge Technologies (I-KNOW 2010).

Another appstore!
Rafael Sidi from Elsevier kicked off the second day with a talk titled “Bring in ‘da Developers, Bring in ‘da Apps – Developing Search and Discovery Solutions Using Scientific Content APIs” (the slightly ludicrous title was fashioned after this).

He opened his talk with this Steve Ballmer video which, if I was the CIO of any company, would seriously make me reconsider my customer relationship with Microsoft:

[youtube=http://www.youtube.com/watch?v=8To-6VIJZRE&rel=0]

(If you enjoyed that video, make sure you watch this one too, first watch it with the sound turned off and only then with the sound on).

Sidi is responsible for Elsevier’s SciVerse platform. He has seen that data platforms are increasingly important, that there is an explosion of applications, and that people work in communities of innovation. He used Data.gov as an example: it went from 47 sources to 220,000+ sources within a year’s time and has led to initiatives like Apps for America. We need to have an “Apps for science” too. Our current scientific platforms make us spend too much time gathering instead of analysing information, and none of them really understand the user’s intent.

The key trends that he sees on the web are:

  • Openness and interoperability (“give me your data, my way”). Access to APIs helps to create an ecosystem.
  • Personalization (“know what I want and deliver results on my interest”). Well-known examples are Amazon, Netflix and Last.fm.
  • Collaboration & trusted views (“the right contacts at the right time”). Filtering content through people you trust. “Show me the articles I’ve read and show me what my friends have rated differently from me”. This is not done a lot. Sidi didn’t mention this, but I think things like Facebook’s open API are starting to deliver this.

So Elsevier has decided to turn SciVerse, the portal to their content, into a platform by creating an API with which developers can create applications. Very similar to Apple’s App Store, this will include a revenue-sharing model. They will also nurture a developer community (bootstrapping it with a couple of challenges).

He then demonstrated how applications would be able to augment SciVerse search results, either by doing smart things with the data in a sidebar (based on aggregated information about the search results) or by modifying a single search result itself. I thought it looked quite impressive and that it was a very smart move: scientific publishers seem to be under a lot of pressure from things like Open Access and have been struggling to demonstrate their added value in this Internet world. This could be one way to add value. The reaction from the audience was quite tough (something Sidi had already preempted by showing an “I hate Elsevier” tweet in his slides). One audience member: “Elsevier already knows how to exploit the labour of scientists and now wants to exploit the labour of developers too”. I am no big fan of large publishing houses, but thought this was a bit harsh.

Knowledge Visualization
Wolfgang Kienreich demoed some of the knowledge visualization products that the Know-Center has developed over the years. The 3D knowledge space is not available through the web (it is licensed to a German encyclopedia publisher), but it showed what is possible if you think hard about how a user should be able to navigate through large knowledge collections. Their work for the Austrian Press Agency is available online in a “labs” environment. It demonstrates a way of using faceted search in combination with simple but insightful visualizations. The following example is a screenshot showing which Austrian politicians have said something about pensions.

APA labs faceted visual search
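The core mechanic behind a faceted search interface like this can be sketched in a few lines: filter a document collection on chosen facet values, then re-count the remaining options for the other facets so the interface can show how many hits each further refinement would yield. The data below is invented for illustration; it is not the APA corpus or its API.

```python
# Toy sketch of faceted search (hypothetical data, not the APA labs system).
from collections import Counter

documents = [
    {"politician": "A", "topic": "pensions", "year": 2009},
    {"politician": "B", "topic": "pensions", "year": 2010},
    {"politician": "A", "topic": "energy",   "year": 2010},
]

def facet_search(docs, filters):
    """Return documents matching all facet filters, plus value counts
    for every facet that has not been filtered on yet."""
    hits = [d for d in docs if all(d[k] == v for k, v in filters.items())]
    counts = {
        facet: Counter(d[facet] for d in hits)
        for facet in docs[0]
        if facet not in filters
    }
    return hits, counts

hits, counts = facet_search(documents, {"topic": "pensions"})
# hits holds the two pension documents; counts["politician"] shows
# that politicians A and B each said something about pensions once.
```

Real implementations (e.g. the facet features of search engines such as Solr or Elasticsearch) compute these counts inside the index rather than in application code, but the user-facing idea is the same: each filter click narrows the result set and updates the counts on the remaining facets, which is exactly what the visualization above puts on screen.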

I have only learned through writing this blog post that Wolfgang is interested in the Prisoner’s Dilemma. I would have loved to have talked to him about Goffman’s Expression games and what they could mean for the ways decisions get made in large corporations. I will keep that for a next meeting.

Knowledge Work
This track was supposed to have four talks, but one speaker did not make it to the conference, so there were three talks left.

The first talk was provocatively titled “Does knowledge worker productivity really matter?” by Rainer Erne. It was Drucker who said that it used to be the job of management to increase the productivity of manual labour and that it is now the job of management to make knowledge workers more productive. In one sense Drucker was definitely right: the demand for knowledge work is increasing all the time, whereas the demand for routine activities is always going down.

Erne’s study focuses on one particular kind of knowledge worker: those doing expert work, which is judgement-oriented, highly reliant on individual expertise and experience, and dependent on star performance. He looked at five business segments (hardware development, software development, consulting, medical work and university work) and consistently found the same five key performance indicators:

  • business development
  • skill development
  • quality of interaction
  • organisation of work
  • quality of results

This leads Erne to believe that we need to redefine productivity for knowledge workers. There shouldn’t just be a focus on the quantity of the output, but more on the quality of the output. So what can managers do, knowing this? They can help their experts by being a filter, or by concentrating their work for them.

This talk left me with some questions. I am not sure whether it is possible to make this distinction between quantitative and qualitative output, especially not in commercial settings. The talk also did not address what I consider to be the main challenge for management in this information age: the fact that a very good manual worker can only be 2 or maybe 3 times as productive as an average manual worker, whereas a good knowledge worker can be hundreds if not thousands of times more productive than the average worker.

Robert Woitsch’s talk was titled “Industrialisation of Knowledge Work, Business and Knowledge Alignment” and I have to admit that I found it very hard to contextualize what he was saying into something that had any meaning to me. I did think it was interesting that he went in quite another direction compared to Erne, as Woitsch does consider knowledge work to be a production process: people have to do things in efficient ways. I guess it is important to better define what it is we actually mean when we talk about knowledge work. His sites are here: http://promote.boc-eu.com and http://www.openmodels.at.

Finally, Olaf Grebner from SAP Research talked about “Optimization of Knowledge Work in the Public Sector by Means of Digital Metaphors”. SAP has a case management system that is used by organisations as a replacement for their paper-based systems. The main difference between current iterations of digital systems and traditional paper-based systems is that the latter allow links between the formal case and the informal aspects around the case (e.g. a post-it note on a case file). Digital case management systems don’t allow informal information to be stored.

So Grebner set out to design an add-on to the digital system that would link informal with formal information, and would do this by using digital metaphors. He implemented digital post-it notes, cabinets and ways of searching, and his initial results are quite positive.

Personally I am a bit sceptical about this approach. Digital metaphors have served us well in the past, but they are also the reason I have to store my files in folders, with each file living in only one folder. Don’t you lose the ability to truly re-invent what a digital case-management system can do for a company if you focus on translating the paper world into digital form? People didn’t like the new digital system (that is why Grebner was commissioned to make his prototype, I imagine). I believe that is because it didn’t offer the same affordances as the paper-based world. Why not focus on that first?

Graz Kunsthaus, photo by Marion Schneider & Christoph Aistleitner, CC-licensed

Knowledge Management and Learning
This track had three learning related sessions.

Martin Wolpers from the Fraunhofer Institute for Applied Information Technology (FIT) talked about the “Early Experiences with Responsive Open Learning Environments”. He first defined each of the terms in Responsive Open Learning Environments:
  • Responsive: responsiveness to learners’ activities in respect to learning goals
  • Open: openness for new configurations, new contents and new users
  • Learning Environment: the conglomerate of tools that bring together people and content artifacts in learning activities to support them in constructing and processing information and knowledge

The current generation of Virtual Learning Environments and Learning Management Systems have a couple of problems:

  • Lack of information about the user across learning systems and learning contexts (i.e. what happens to the learning history of a person when they switch to a different company?)
  • Learners cannot choose their own learning services
  • Lack of support for open and flexible personalized contextualized learning approach

Fraunhofer is making an intelligent infrastructure that incorporates widgets and existing VLE/LMS functionality to truly personalize learning. They want to bridge what people use at home with what they use in the corporate environment by “intelligent user driven aggregation”. This includes a technology infrastructure, but also requires a big change in understanding how people actually learn.

They used Shindig as the widget engine and Opensocial as the widget technology. They used this to create an environment with the following characteristics:

  • A widget based environment to enable students to create their own learning environment
  • Development of new widgets should be independent from specific learning platforms
  • Real-time communication between learners, remote inter-widget communication, interoperable data exchange, event broadcasting, etc.

He used a student population in China as the first people to try the system. It didn’t have the uptake that he expected. They soon realised that this was because the students had come to the conclusion that use or non-use of the system did not directly affect their grades. The students also lacked an understanding of the (Western?) concept of a Personal Learning Environment. After this first trial he came to a couple of conclusions. Some were obvious, like that you should respect the cultural background of your students, or that responsive open learning environments create challenges on both the technological and the psycho-pedagogical side. Others were less obvious, like that an organic development process allowed for flexibility and for openly addressing emerging needs and requirements, and that it makes sense to push your own development to become the standard.

For me this talk highlighted the still significant gap that seems to exist between computer scientists on the one side and social scientists on the other side. Trying out Personal Learning Environments in China is like sending CliniClowns to Africa: not a good idea. Somebody could have told them this in advance, right?

Next up was a talk titled “Utilizing Semantic Web Tools and Technologies for Competency Management” by Valentina Janev from the Serbian Mihajlo Pupin Institute. She does research to help improve the transferability and comparability of competences, skills and qualifications and to make it easier to express core competencies and talents in a standardized, machine-accessible way. This was another talk that was hard for me to follow, because it was completely focused on what needs to happen on the (semantic) technical side without first giving a clear idea of what kind of processes these technological solutions will eventually improve. A couple of snippets that I picked up: they are replacing data warehouse technologies with semantic web technologies, they use OntoWiki, a semantic wiki application, RDF is the key word for people in this field, and there is a thing called DOAC which has the ambition to make job profiles (and the matching CVs) machine readable.

The final talk in this track was from Joachim Griesbaum who works at the Institute of Information Science and Language Technology. The title of his talk must have been the longest in the conference: “Facilitating collaborative knowledge management and self-directed learning in higher education with the help of social software, Concept and implementation of CollabUni – a social information and communication infrastructure”, but as he said: at least it gives you an idea what it is about (slides of this talk are available here, Griesbaum was one of the few presenters that made it clear where I could find the slides afterwards).

A lot of social software in higher education is used in formal learning. Griesbaum wants to focus on a Knowledge Management approach that primarily supports informal learning. To that end he and his students designed a low cost (there was no budget) system from the bottom up. It is called CollabUni and based on the open source e-portfolio solution (and smart little sister of Moodle) Mahara.

They did a first evaluation of the system in late 2009. There was little self-initiated knowledge activity by the 79 first year students. Roughly one-third of the students see an added value in CollabUni and declare themselves ready for active participation. Even though the knowledge processes that they aimed for don’t seem to be self-initiating and self-supporting, CollabUni still shows and stands for a possible low-cost and bottom-up approach towards developing social software. During the next steps of their roll out they will pay attention to the following:

  • Social design is decisively important
  • Administrative and organizational support components and incentive schemes are needed
  • Appealing content (for example an initial repository of term papers or theses)
  • Identify attractive use cases and applications

Call me a cynic, but if you have to try this hard: why bother? To me this really had the feeling of a technology trying to find a problem, rather than a technology being the solution to a problem. I wonder what the uptake of Facebook is among his students? I did ask him, and he said that there has not been a lot of research into the use of Facebook in education. I guess that is true, but I am quite convinced there is a lot of use of Facebook in education. I believe that if he had really wanted to leverage social software for the informal part of learning, he should have started with what his students are actually using and tried to leverage that by designing technology in that context, instead of building yet another separate system.

Collaborative Innovation Networks (COINs)
The closing keynote of the conference was by Peter A. Gloor who currently works for the MIT Center for Collective Intelligence. Gloor has written a couple of books on how innovation happens in this networked world. Though his story was certainly entertaining I also found it a bit messy: he had an endless list of fascinating examples that in the end supported a message that he could have given in a single slide.

His main point is that large groups of people behave apparently randomly, but that there are patterns that can be analysed at the collective level. These patterns can give you insight into the direction people are moving. One way of reading the collective mind is by doing social network analysis. By combining the wisdom of the crowd with the wisdom of groups of experts (swarms) it is possible to make accurate predictions. One example he gave was how they had used reviews on the Internet Movie Database (the crowd) and on Rotten Tomatoes (the swarm) to predict, on the day before a movie opens in the theatres, how much the movie will bring in in total.
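Gloor did not share the details of his model, but the crowd-plus-swarm idea can be sketched as a simple weighted combination. The function name, the weights and the ratings below are all illustrative assumptions, not his actual method:

```python
def predict_box_office(crowd_scores, swarm_scores,
                       crowd_weight=0.6, swarm_weight=0.4):
    """Combine a crowd signal (e.g. many IMDb user ratings) with a
    swarm signal (e.g. a few expert critic ratings) into one score.
    The weights are invented for illustration."""
    crowd_avg = sum(crowd_scores) / len(crowd_scores)
    swarm_avg = sum(swarm_scores) / len(swarm_scores)
    return crowd_weight * crowd_avg + swarm_weight * swarm_avg

# Hypothetical pre-release ratings on a 0-10 scale
crowd = [7.5, 8.0, 6.5, 7.0]   # many casual reviewers
swarm = [6.0, 6.5]             # a few expert critics
score = predict_box_office(crowd, swarm)
print(score)  # 0.6 * 7.25 + 0.4 * 6.25 = 6.85
```

In a real setting the weights would of course have to be fitted against historical box-office data rather than guessed.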

The process to do these kinds of predictions is as follows:

COIN cycle

This kind of analysis can be done at a global level (like the movie example), but also, for example, within organizations by analysing email archives or by equipping people with so-called social badges (which I first read about in Honest Signals), which measure whom people have contact with and what kind of interaction they are having.

He then went on to talk about what he calls “Collaborative Innovation Networks” (COINs) which you can find around most innovative ideas. People who lead innovation (think Thomas Edison or Tim Berners-Lee) have the following characteristics:

  • They are well connected (they have many “friends”)
  • They have a high degree of interactivity (very responsive)
  • They share to a very high degree

All of these characteristics are easy to measure electronically and thus automatically, so to find COINs you look for the people who score high on these points. According to Gloor high-performing organizations work as collaborative innovation networks. Ideas progress from Collaborative Innovation Network (COIN) to Collaborative Learning Network (CLN) to Collaborative Interest Network (CIN).
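Since the three characteristics are measurable from communication data, a naive scoring over a message archive might look like the sketch below. The traits come from the talk; the tuple format and the scoring itself are my own simplification, not Gloor’s actual method:

```python
from collections import defaultdict

def coin_candidates(messages, top_n=2):
    """Score people on the three COIN traits that a message archive
    exposes: connectedness (distinct contacts), interactivity
    (messages sent) and sharing (messages containing a link).

    `messages` is a list of (sender, receiver, has_link) tuples.
    Summing the three raw counts is a deliberately naive heuristic."""
    contacts = defaultdict(set)
    sent = defaultdict(int)
    shared = defaultdict(int)
    for sender, receiver, has_link in messages:
        contacts[sender].add(receiver)
        sent[sender] += 1
        shared[sender] += int(has_link)
    scores = {p: len(contacts[p]) + sent[p] + shared[p] for p in sent}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Invented message log: ada talks to everyone and shares links
log = [("ada", "bob", True), ("ada", "carl", True), ("ada", "dee", False),
       ("bob", "ada", False), ("carl", "ada", False)]
top = coin_candidates(log)
print(top)  # ada scores highest on all three traits
```

A real analysis would normalise the traits and weigh them against each other, but the point stands: these are counts a machine can produce without reading a single message.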

Twitter is proving to be a very useful tool for this kind of analysis. Doing predictions for movies is relatively easy because people are honest in their feedback. It is much harder for things like stocks, because people game the system with their analyses. Twitter can be used here (e.g. by searching for “hope”, “fear” and “worry” as indicators of sentiment) because people are honest in their feedback there.
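The keyword-based sentiment signal Gloor described can be sketched very simply: count the fraction of tweets that contain one of the indicator words. The tweets and the ratio metric below are invented for illustration:

```python
def sentiment_signal(tweets, indicator_words=("hope", "fear", "worry")):
    """Return the fraction of tweets containing any indicator word.
    The word list follows the talk; everything else is a toy example."""
    hits = sum(
        1 for t in tweets
        if any(w in t.lower() for w in indicator_words)
    )
    return hits / len(tweets)

tweets = [
    "I worry about where the market is heading",
    "Great earnings report today",
    "Still hope the deal goes through",
    "Nothing new to report",
]
signal = sentiment_signal(tweets)
print(signal)  # 2 of the 4 tweets match, so 0.5
```

The appeal of this approach is its crudeness: it needs no language model at all, just a stream of honest, unguarded messages.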

Finally he made a reference in his talk to the Allen curve (the high correlation between physical distance and communication, with a critical distance of 50 meters for technical communication). I am sure this curve is used by many office planners, but Gloor also found an Allen curve for technical companies around his university: it was about 3 miles.

Interesting Encounters
Outside of the sessions I spoke to many interesting people at the conference. Here are a couple (for my own future reference).

It had been a couple of years since I had last seen Peter Sereinigg from act2win. He has stopped being a Moodle partner and now focuses on projects in which he helps global virtual teams in how they communicate with each other. There was one thing that he and I could fully agree on: you first have to build some rapport before you can effectively work together. It seems like such an obvious thing, but for some reason it still doesn’t happen on many occasions.

Twitter allowed me to get in touch with Aldo de Moor. He had read my blog post about day 1 of this conference and suggested one of his articles for further reading about pattern languages (the article refers to a book on a pattern language for communication which looks absolutely fascinating). Aldo is an independent research consultant in the field of Community Informatics. That was interesting to me for two reasons:

  • He is still actively publishing in peer reviewed journals and speaking at conferences, without being affiliated with a highly acclaimed research institute. He has written an interesting blog post about the pros and cons of working this way.
  • I had never heard of this young field of community informatics and it is something I would like to explore further.

I also spent some time with Barend Jan de Jong who works at Wolters Noordhoff. We had some broad-ranging discussions, mainly about the publishing field: the book production process and the information technology required to support it, what value a publisher can still add, e-books compared to normal books (he said a bookcase says something about somebody’s identity; I agreed, but said that a digital book-related profile is way more accessible than the bookcase in my living room, note to self: start creating parody GoodReads accounts for Dutch politicians), the unclear if not unsustainable business model of the wonderful Guardian news empire, and how we both think that O’Reilly is a publisher that seems to have its stuff fully in order.

Puzzling stuff
There were also some things at I-KNOW 2010 that were really from a different world. The keynote on the morning of the 3rd day was perplexing to me. Márta Nagy-Rothengass titled the talk “European ICT Research and Development Supporting the Expansion of Semantic Technologies and Shared Knowledge Management” and opened with a video message of Neelie Kroes talking in very general terms about Europe’s digital agenda. After that Nagy-Rothengass told us that the European Commission will be nearly doubling its investment into ICT to 11 billion Euros, after which she started talking about the “Call 5” of “FP7” (apparently that stands for the Seventh Framework Programme), the dates before which people should put their proposals in, the number of proposals received, etc., etc., etc. I am pro-EU, but I am starting to understand why people can make a living advising other people how best to apply for EU grants.

Another puzzling thing was the fact that people like me (with a corporate background) thought that the conference was quite theoretical and academic, whereas the researchers thought everything was very applied (maybe not enough research even!). I guess this shows that there is quite a schism between universities furthering the knowledge in this field and corporations who could benefit from picking the fruits of this knowledge. I hope my attendance at this great conference did its tiny part in bridging this gap.

Notes and Reflections on Day 1 of I-KNOW 2010

I-KNOW 2010

From September 1-3, 2010, I will attend the 10th International Conference on Knowledge Management and Knowledge Technologies (I-KNOW 2010) in beautiful Graz, Austria. I will use my blog to do a daily report on my captured notes and ideas.

And now for something completely different
In the last few years I have put a lot of effort into becoming a participating member in the global learning technology community. This means that when I visit a “learning” conference I know a lot of the people who are there. At this conference I know absolutely nobody. Not a single person in my online professional network seems to know about, let alone attend, this conference.

One of my favourite competencies in the leadership competency framework of Shell is the ability to value differences. People who master this competency actively seek out the opinion of people who have a different opinion than theirs. There are good reasons for this (see for example Page’s The Difference), and it is one of the things that I would like to work on myself: I am naturally inclined to seek out people who think very much like me and this conference should help me in overcoming that preference.

After the first day I already realise that the world I live and work in is very “corporate” and very Anglo-Saxon. In a sense this conference feels like I have entered a world that is normally hidden from me. I would also like to compliment the organizers of the conference: everything is flawless (there even is an iPhone app: soon to be standard for all conferences I think; I loved how FOSDEM did this: publishing the program in a structured format and then letting developers make the apps for multiple mobile platforms).

Future Trends in Search User Interfaces
Marti Hearst has just finished writing her book Search User Interfaces, which is available online for free here, and she was therefore asked to keynote about the future of these interfaces.

Current search engines are primarily text based, have fast response times, are tailored to keyword queries (supporting a search paradigm of iteration on those keywords), sometimes offer faceted metadata for navigation and organization support, support related queries, and in some cases are starting to show context-sensitive results.

Hearst sees a couple of things happening in technology and in how society interacts with that technology that could help us imagine what the search interface will look like in the future. Examples are the wide adoption of touch-activated devices with excellent UI design, the wide adoption of social media and user-generated content, the wide adoption of mobile devices with data service, improvements in Natural Language Processing (NLP), a preference for audio and video and the increasing availability of rich, integrated data sources.

All of these trends point to more natural interfaces. She thinks this means the following for search user interfaces:

  • Longer more natural queries. Queries are getting longer all the time. Naive computer users use longer queries, only shortening them when they learn that they don’t get good results that way. Search engines are getting better at handling longer queries. Sites like Yahoo Answers and Stack Overflow (a project by one of my heroes Joel Spolsky) are only possible because we now have much more user-generated content.
  • “Sloppy commands” are now slowly starting to be supported by certain interfaces. These allow flexibility in expression and are sometimes combined with visual feedback. See the video below for a nice example.

[vimeo http://vimeo.com/13992710]

  • Search is becoming as social as possible. This is a difficult problem because you are not one person, you are different people at different times. There are explicit social search tools like Digg, StumbleUpon and Delicious and there are implicit social search tools and methods like “People who bought x, also bought…” and Yahoo’s My Web (now defunct). Two good examples (not given by Hearst) of how important the social aspects of search are becoming are this Mashable article on a related Facebook patent and this Techcrunch article on a personalized search engine for the cloud.
  • There will be a deep integration of audio and video into search. This seemed to be a controversial part of her talk. Hearst is predicting the decline of text (though not among academics and lawyers). There are enough examples around: the culture of video responses on YouTube apparently arose spontaneously, and newspaper websites are starting to look more and more like TV. It is very easy to create videos, but the way we can edit videos still needs improvement.
  • A final prediction is that the search interface will be more like a dialogue, or conversational. This reality is a bit further away, but we are starting to see what it might look like with apps like Siri.

Enterprise 2.0 and the Social Web

Murinsel Bridge in Graz, photo by Flickr user theowl84, CC-licensed

This track consisted of three presentations. The first one was titled “A Corporate Tagging Framework as Integration Service for Knowledge Workers”. Walter Christian Kammergruber, a PhD student from Munich, told us that there are two problems with tagging: one is how to orchestrate the tags so that they work across the complete application landscape; the other is the semantic challenge of getting rid of ambiguity, multiple spellings, etc. His tagging framework (called STAG) attempts to solve both. It is a piece of middleware that sits on the Siemens network and provides tagging functionality through web services to Siemens’ blogging platform, wiki, discussion forums and Sharepoint sites. These tags can then be displayed using simple widgets. The semantic problem is addressed by a thesaurus editor that allows people to define synonyms for tags and create relationships between related tags.

I strongly believe that any large corporation would be very much helped by a centralised tagging facility that can be utilised by decentralised applications. This kind of methodology should not only be used for tagging but could also be used for something like user profiles. How come I don’t have a profile widget that I can include on our corporate intranet pages?
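The synonym-resolution idea behind such a thesaurus editor can be sketched in a few lines. The class and the example mappings below are hypothetical, not STAG’s actual API:

```python
class TagThesaurus:
    """Minimal sketch of centralised synonym resolution: map spelling
    variants and synonyms onto one canonical tag, so that every
    application in the landscape tags consistently. All mappings here
    are invented examples."""

    def __init__(self):
        self.synonyms = {}

    def add_synonym(self, variant, canonical):
        # Store variants case-insensitively
        self.synonyms[variant.lower()] = canonical

    def canonicalize(self, tag):
        # Unknown tags pass through unchanged (just normalised)
        tag = tag.strip().lower()
        return self.synonyms.get(tag, tag)

thesaurus = TagThesaurus()
thesaurus.add_synonym("e-learning", "elearning")
thesaurus.add_synonym("online learning", "elearning")

tags = ["E-Learning", "online learning", "wiki"]
canonical = [thesaurus.canonicalize(t) for t in tags]
print(canonical)  # ['elearning', 'elearning', 'wiki']
```

Exposed as a web service, a lookup like this is all a blog, wiki or Sharepoint site would need to call before storing a tag.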

The second talk, by Dada Lin, was titled “A Knowledge Management Scheme for Enterprise 2.0”. He presented a framework that should be able to bridge the gap between Knowledge Management and Enterprise 2.0. It is called the IDEA framework in which knowledge is seen as a process, not as an object. The framework consists of the following elements (also called “moments”):

  • Interaction
  • Documentation
  • Evolution
  • Adoption

He then placed these moments into three dimensions: Human, Technology and Organisation. Finally he presented some research around a Confluence installation at T-Systems. None of this was really enlightening to me; I was however intrigued to notice that the audience focused more on the research methodologies than on the outcomes of the research.

The final talk, “Enterprise Microblogging at Siemens Building Technologies Division: A Descriptive Case Study” by Johannes Müller, a senior Knowledge Management manager at Siemens, was quite entertaining. He talked about References@BT, a community at Siemens that consists of many discussion forums, a knowledge reference and, since March 2009, a microblogging tool. It has 7000 members in 73 countries.

The microblogging platform was built by Müller himself and thus has exactly the features it needed to have. One of the features he mentioned was that it shows a picture of every user in every view of the microblog posts. This is now a standard feature in lots of tools (e.g. Twitter or Facebook), and it made me realise that Moodle was actually one of the first applications I know of that did this consistently: another example of how forward-thinking Martin Dougiamas really was!

Müller’s microblogging platform does allow posts of more than 140 characters, but does not allow any formatting (no line-breaks or bullet points for example). This seems to be an effective way of keeping the posts short.

He shared a couple of strategies that he uses to get people to adopt the new service. Two important ones were the provision of widgets that can be included in more traditional pages on the intranet, and the ability to import postings from other microblogging sites like Twitter using a special hash tag. He has also sent out personalised emails to users with follow suggestions. These were hugely effective in bootstrapping the network.

Finally he told us about the research he has done to get some quantitative and qualitative data about the usefulness of microblogging. His respondents thought it was an easy way of sharing information, an additional channel for promoting events, a new means of networking with others, a suitable tool to improve writing skills and a tool that allowed for the possibility to follow experts.

Know-Center Graz
During lunch (and during the Bacardi sponsored welcome reception) I had the pleasant opportunity to sit with Michael Granitzer, Stefanie Lindstaedt and Wolfgang Kienreich from the Know-Center, Austria’s Competence Center for Knowledge Management.

They have done some work for Shell in the past around semantic similarity checking and have delivered a working proof of concept in our Mediawiki installation. They demonstrated some of their new projects and we had a good discussion about corporate search and how to do technological innovation in large corporations.

The first project that they showed me is called the Advanced Process-Oriented Self-Directed Learning Environment (APOSDLE). It is a research project that aims to develop tools that help people learn at work. To rephrase it in learning terms: it is a very smart way of doing performance support. The video below gives you a good impression of what it can do:

[youtube=http://www.youtube.com/watch?v=4ToXuOTKfAU?rel=0]

After APOSDLE they showed me some outcomes from the Mature IP project. From the project abstract:

Failures of organisation-driven approaches to technology-enhanced learning and the success of community-driven approaches in the spirit of Web 2.0 have shown that for that agility we need to leverage the intrinsic motivation of employees to engage in collaborative learning activities, and combine it with a new form of organisational guidance. For that purpose, MATURE conceives individual learning processes to be interlinked (the output of a learning process is input to others) in a knowledge-maturing process in which knowledge changes in nature. This knowledge can take the form of classical content in varying degrees of maturity, but also involves tasks & processes or semantic structures. The goal of MATURE is to understand this maturing process better, based on empirical studies, and to build tools and services to reduce maturing barriers.

Mature

I was shown a widget-based approach that allowed people to tag resources, put them in collections and share these resources and collections with others (more information here). One thing really struck me about the demo I got: they used a simple browser plugin as the first point of contact between users and the system. I suddenly realised that this would be the fastest way to add a semantic layer over our complete intranet (it would work for the extranet too). With our desktop architecture it is relatively trivial to roll out a plugin to all users. This plugin would allow users to annotate webpages on the net, creating a network of meta-information about resources. This is becoming increasingly viable as more and more of the resources in a company are accessed from a browser and are URL addressable. I would love to explore this pragmatic direction further.
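A minimal sketch of what the server side of such an annotation layer could look like, assuming annotations are simply keyed by URL (the class, the URLs and the tags below are all invented for illustration):

```python
from collections import defaultdict

class AnnotationStore:
    """Toy backend for a browser-plugin annotation layer: any page
    that is URL addressable can carry tags and comments, and the
    store can be queried across pages. Purely hypothetical design."""

    def __init__(self):
        self.notes = defaultdict(list)

    def annotate(self, url, user, tags, comment=""):
        self.notes[url].append(
            {"user": user, "tags": set(tags), "comment": comment}
        )

    def annotations_for(self, url):
        return self.notes[url]

    def pages_tagged(self, tag):
        # The cross-page query is what turns isolated notes into a
        # network of meta-information over the intranet
        return [url for url, anns in self.notes.items()
                if any(tag in a["tags"] for a in anns)]

store = AnnotationStore()
store.annotate("http://intranet.example/policies/travel", "hans",
               ["policy", "travel"])
store.annotate("http://intranet.example/wiki/expenses", "anna",
               ["travel"], "links to the travel policy")
tagged = store.pages_tagged("travel")
print(tagged)
```

The browser plugin would then be nothing more than a thin client posting to and reading from this store for whatever URL the user is looking at.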

Knowledge Sharing
Martin J. Eppler from the University of St. Gallen seems to be a leading researcher in the field of knowledge management: when he speaks, people listen. He presented a talk titled “Challenges and Solutions for Knowledge Sharing in Inter-Organizational Teams: First Experimental Results on the Positive Impact of Visualization”. He is interested in the question of how visualization (mapping text spatially) changes the way that people share knowledge. In this particular research project he focused on inter-organizational teams. He tries to make his experiments as realistic as possible, so he used senior managers and real-life scenarios, put them in three experimental groups and set them a particular task. One group was supported with special computer-based visualization software, another group used posters with templates, and a final (control) group used plain flipcharts. After analysing his results he was able to conclude that visual support leads to significantly greater productivity.

This talk highlights one of the problems I have with science applied in this way. What do we now know? The results are very narrow and specific. What happens if you change the software? Is this the case for all kinds of tasks? The problem is: I don’t know how scientists could do a better job. I guess we have to wait until our knowledge-working lives can really be measured consistently and in real time, and then for smart algorithms to find out what really works for increased productivity.

The next talk in this track was from Felix Mödritscher who works at the Vienna University of Economics and Business. His potentially fascinating topic “Using Pattern Repositories for Capturing and Sharing PLE Practices in Networked Communities” was hampered by the difficulty of explaining the complexities of the project he is working on.

He used the following definition for Personal Learning Environments (PLEs): a set of tools, services, and artefacts gathered from various contexts and to be used by learners. Mödritscher has created a methodology that allows people to share good practices in PLEs. First you record PLE interactions, then you allow people to depersonalise these interactions and share them as an “activity pattern” (distilled and archetypical), after which others can pick these up and repersonalise them. He has created a pattern repository with a pattern store. It has a client-side component implemented as a Firefox extension: PAcMan (Personal Activity Manager). It is still early days, but these patterns appear to be really valuable: they not only help with professional competency development, but also with what he calls transcompetences.

I love the idea of using design patterns (see here), but thought it was a pity that Mödritscher did not show any concrete examples of shared PLE patterns.

My last talk of the day was on “Clarity in Knowledge Communication” by Nicole Bischof, one of Eppler’s PhD students at the University of St. Gallen. She used a fantastic quote by Wittgenstein early in her presentation:

Everything that can be said, can be said clearly

According to her, clarity can help with knowledge creation, knowledge sharing, knowledge retention and knowledge application. She used the Hamburger Verständlichkeitskonzept as a basis to distill five distinct aspects of clarity: Concise content, Logical structure, Explicit content, Ambiguity low and Ready to use (the first letters conveniently spell “CLEAR”). She then did an empirical study about the clarity of Powerpoint presentations. Her presentation turned tricky at that point, as she was presenting in Powerpoint herself. The conclusion was a bit obvious: knowledge communication can be designed to be more user-centred and thus more effective, and clarity helps in translating the innovation and potential of knowledge and in presenting complex knowledge content clearly.

Bischof did an extensive literature review: clarity is an under-researched topic. After just having read Tufte’s anti-Powerpoint manifesto, I am convinced that there is a world to gain for businesses like Shell. So much of our decision making is based on Powerpoint slide packs that it becomes incredibly urgent to make these as good as they can be.

Never walk alone
I am at this conference all by myself and have come to realise that this is not the optimal situation. I want to be able to discuss the things that I have just seen and collaboratively translate them to my personal work situation. It would have been great to have a sparring partner here who shares a large part of my context. Maybe next time?!