Notes and Reflections on Day 2 and 3 of I-KNOW 2010

I-KNOW 2010

These are my notes and reflections for the second and third days of the 10th International Conference on Knowledge Management and Knowledge Technologies (I-KNOW 2010).

Another appstore!
Rafael Sidi from Elsevier kicked off the second day with a talk titled “Bring in ‘da Developers, Bring in ‘da Apps – Developing Search and Discovery Solutions Using Scientific Content APIs” (the slightly ludicrous title was fashioned after this).

He opened his talk with this Steve Ballmer video which, if I were the CIO of any company, would seriously make me reconsider my customer relationship with Microsoft:

[youtube=http://www.youtube.com/watch?v=8To-6VIJZRE&rel=0]

(If you enjoyed that video, make sure you watch this one too: first watch it with the sound turned off and only then with the sound on.)

Sidi is responsible for Elsevier’s SciVerse platform. He has seen that data platforms are increasingly important, that there is an explosion of applications and that people work in communities of innovation. He used Data.gov as an example: it went from 47 sources to 220,000+ sources within a year and has led to initiatives like Apps for America. We need an “Apps for science” too. Our current scientific platforms make us spend too much time gathering information instead of analysing it, and none of them really understand the user’s intent.

The key trends that he sees on the web are:

  • Openness and interoperability (“give me your data, my way”). Access to APIs helps to create an ecosystem.
  • Personalization (“know what I want and deliver results based on my interests”). Well-known examples are Amazon, Netflix and Last.fm.
  • Collaboration & trusted views (“the right contacts at the right time”). Filtering content through people you trust. “Show me the articles I’ve read and show me what my friends have rated differently from me”. This is not done a lot yet. Sidi didn’t mention this, but I think things like Facebook’s open API are starting to deliver it.

So Elsevier has decided to turn SciVerse, the portal to their content, into a platform by creating an API with which developers can create applications. Much like Apple’s App Store, this will include a revenue-sharing model. They will also nurture a developer community (bootstrapping it with a couple of challenges).

He then demonstrated how applications would be able to augment SciVerse search results, either by doing smart things with the data in a sidebar (based on aggregated information about the search results) or by modifying a single search result itself. It looked quite impressive to me, and it struck me as a very smart move: scientific publishers seem to be under a lot of pressure from things like Open Access and have been struggling to demonstrate their added value in this Internet world. This could be one way to add value. The reaction from the audience was quite tough (something Sidi had already pre-empted by showing an “I hate Elsevier” tweet in his slides). One audience member: “Elsevier already knows how to exploit the labour of scientists and now wants to exploit the labour of developers too”. I am no big fan of large publishing houses, but thought this was a bit harsh.
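To make the “content platform” idea a bit more concrete, here is a minimal sketch of what a third-party app might do against such an API. The endpoint, parameters and response shape are hypothetical, not Elsevier’s actual SciVerse API:

```python
import requests

# Hypothetical endpoint and parameters -- purely an illustration of the
# "content platform + developer API" idea, not a real publisher API.
API_BASE = "https://api.example-publisher.com/content/search"
API_KEY = "your-api-key"

def search_articles(query, count=10):
    """Query a scientific content API and return a list of result dicts."""
    response = requests.get(
        API_BASE,
        params={"query": query, "count": count, "apiKey": API_KEY},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("results", [])

# An app could augment these results in a sidebar, e.g. by aggregating authors.
if __name__ == "__main__":
    for article in search_articles("knowledge management"):
        print(article.get("title"), "-", article.get("doi"))
```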

Knowledge Visualization
Wolfgang Kienreich demoed some of the knowledge visualization products that the Know-Center has developed over the years. The 3D knowledge space is not available through the web (it is licensed to a German encyclopedia publisher), but it showed what is possible if you think hard about how a user should be able to navigate through large knowledge collections. Their work for the Austrian Press Agency is available online in a “labs” environment. It demonstrates a way of using faceted search in combination with simple but insightful visualizations. The following example is a screenshot showing which Austrian politicians have said something about pensions.

APA labs faceted visual search
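As a rough illustration of what faceted search does underneath a view like this, here is a toy sketch; the data model and field names are made up, not APA’s:

```python
from collections import Counter

# Toy corpus standing in for press items; the real APA labs data model is unknown to me.
items = [
    {"title": "Pension reform debate", "politician": "A", "topic": "pensions"},
    {"title": "Budget speech",         "politician": "B", "topic": "budget"},
    {"title": "Pension age proposal",  "politician": "B", "topic": "pensions"},
]

def facet_counts(items, facet):
    """Count how many items fall under each value of a facet (e.g. 'politician')."""
    return Counter(item[facet] for item in items)

def filter_by(items, **facets):
    """Keep only the items matching every given facet value."""
    return [i for i in items if all(i.get(k) == v for k, v in facets.items())]

# "Which politicians have said something about pensions?"
pension_items = filter_by(items, topic="pensions")
print(facet_counts(pension_items, "politician"))   # Counter({'A': 1, 'B': 1})
```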

I have only learned through writing this blog post that Wolfgang is interested in the Prisoner’s Dilemma. I would have loved to have talked to him about Goffman’s Expression games and what they could mean for the ways decisions get made in large corporations. I will keep that for a next meeting.

Knowledge Work
This track was supposed to have four talks, but one speaker did not make it to the conference, so there were three talks left.

The first one was provocatively titled “Does knowledge worker productivity really matter?” by Rainer Erne. It was Drucker who said that it used to be the job of management to increase the productivity of manual labour and that it is now the job of management to make knowledge workers more productive. In one sense Drucker was definitely right: the demand for knowledge work is increasing all the time, whereas the demand for routine activities keeps going down.

Erne’s study focuses on one particular type of knowledge work: expert work, which is judgement-oriented, highly reliant on individual expertise and experience, and dependent on star performance. He looked at five business segments (hardware development, software development, consulting, medical work and university work) and consistently found the same five key performance indicators:

  • business development
  • skill development
  • quality of interaction
  • organisation of work
  • quality of results

This leads Erne to believe that we need to redefine productivity for knowledge workers. The focus shouldn’t just be on the quantity of the output, but more on its quality. So what can managers do knowing this? They can help their experts by being a filter, or by concentrating their work for them.

This talk left me with some questions. I am not sure whether it is possible to make this distinction between quantitative and qualitative output, especially not in commercial settings. The talk also did not address what I consider to be the main challenge for management in this information age: the fact that a very good manual worker can only be two or maybe three times as productive as an average manual worker, whereas a good knowledge worker can be hundreds if not thousands of times more productive than the average worker.

Robert Woitsch’s talk was titled “Industrialisation of Knowledge Work, Business and Knowledge Alignment” and I have to admit that I found it very hard to contextualize what he was saying into something that had any meaning to me. I did think it was interesting that he went in quite another direction compared to Erne, as Woitsch does consider knowledge work to be a production process: people have to do things in efficient ways. I guess it is important to better define what we actually mean when we talk about knowledge work. His sites are here: http://promote.boc-eu.com and http://www.openmodels.at.

Finally Olaf Grebner from SAP Research talked about “Optimization of Knowledge Work in the Public Sector by Means of Digital Metaphors”. SAP has a case management system that is used by organisations as a replacement for their paper-based systems. The main difference between current iterations of digital systems and traditional paper-based systems is that the latter allow links between the formal case and the informal aspects around the case (e.g. a post-it note on a case file). Digital case management systems don’t allow informal information to be stored.

So Grebner set out to design an add-on to the digital system that would link informal with formal information by using digital metaphors. He implemented digital post-it notes, cabinets and ways of searching, and his initial results are quite positive.

Personally I am a bit sceptical about this approach. Digital metaphors have served us well in the past, but they are also the reason that I have to store my files in folders and that each file can only be stored in one folder. Don’t you lose the ability to truly re-invent what a digital case management system can do for a company if you focus on translating the paper world into digital form? People didn’t like the new digital system (that is why Grebner was commissioned to make his prototype, I imagine). I believe that is because it didn’t offer the same affordances as the paper-based world. Why not focus on that first?

Graz Kunsthaus, photo by Marion Schneider & Christoph Aistleitner, CC-licensed

Knowledge Management and Learning
This track had three learning-related sessions.

Martin Wolpers from the Fraunhofer Institute for Applied Information Technology (FIT) talked about the “Early Experiences with Responsive Open Learning Environments”. He first defined each of the terms in Responsive Open Learning Environments:

  • Responsive: responsiveness to learners’ activities in respect to learning goals
  • Open: openness for new configurations, new contents and new users
  • Learning Environment: the conglomerate of tools that bring together people and content artifacts in learning activities to support them in constructing and processing information and knowledge

The current generation of Virtual Learning Environments and Learning Management Systems have a couple of problems:

  • Lack of information about the user across learning systems and learning contexts (i.e. what happens to the learning history of a person when they switch to a different company?)
  • Learners cannot choose their own learning services
  • Lack of support for an open, flexible, personalized and contextualized learning approach

Fraunhofer is making an intelligent infrastructure that incorporates widgets and existing VLE/LMS functionality to truly personalize learning. They want to bridge what people use at home with what they use in the corporate environment by “intelligent user driven aggregation”. This includes a technology infrastructure, but also requires a big change in understanding how people actually learn.

They used Shindig as the widget engine and OpenSocial as the widget technology, and used these to create an environment with the following characteristics:

  • A widget based environment to enable students to create their own learning environment
  • Development of new widgets should be independent from specific learning platforms
  • Real-time communication between learners, remote inter-widget communication, interoperable data exchange, event broadcasting, etc. (a conceptual sketch of the inter-widget idea follows this list)
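As a conceptual sketch of the inter-widget communication idea: this is not the actual OpenSocial/Shindig mechanism (which uses a JavaScript gadget API), it just illustrates the underlying publish/subscribe pattern that lets independent widgets react to each other:

```python
from collections import defaultdict

class WidgetBus:
    """Minimal publish/subscribe bus, mimicking how widgets might exchange events."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subscribers[topic]:
            callback(payload)

bus = WidgetBus()
# A "chat" widget listens for resources selected in a "search" widget.
bus.subscribe("resource.selected", lambda r: print("Chat widget shows:", r["title"]))
bus.publish("resource.selected", {"title": "Intro to PLEs", "url": "http://example.org/ple"})
```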

He used a student population in China as the first group to try the system. It didn’t have the uptake that he expected. They soon realised that this was because the students had concluded that use or non-use of the system did not directly affect their grades. The students also lacked an understanding of the (Western?) concept of a Personal Learning Environment. After this first trial he came to a couple of conclusions. Some were obvious, like the fact that you should respect the cultural background of your students, or that responsive open learning environments create challenges on both the technological and the psycho-pedagogical side. Others were less obvious, like the fact that an organic development process allows for flexibility and for openly addressing emerging needs and requirements, and that it makes sense to push your own development to become the standard.

For me this talk highlighted the still significant gap that seems to exist between computer scientists on the one side and social scientists on the other side. Trying out Personal Learning Environments in China is like sending CliniClowns to Africa: not a good idea. Somebody could have told them this in advance, right?

Next up was a talk titled “Utilizing Semantic Web Tools and Technologies for Competency Management” by Valentina Janev from the Serbian Mihajlo Pupin Institute. She does research to help improve the transferability and comparability of competences, skills and qualifications and to make it easier to express core competencies and talents in a standardized, machine-accessible way. This was another talk that was hard for me to follow, because it focused completely on what needs to happen on the (semantic) technical side without first giving a clear idea of what kind of processes these technological solutions will eventually improve. A couple of snippets that I picked up: they are replacing data warehouse technologies with semantic web technologies, they use OntoWiki, a semantic wiki application, RDF is the keyword for people in this field, and there is a thing called DOAC which has the ambition to make job profiles (and the matching CVs) machine readable.
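To give an idea of what “machine accessible” competency data looks like in practice, here is a small RDF sketch using rdflib and FOAF. The DOAC namespace URI and property name are assumptions on my part, so check the vocabulary before relying on them:

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import FOAF, RDF

# The DOAC namespace URI is an assumption here -- verify against the vocabulary spec.
DOAC = Namespace("http://ramonantonio.net/doac/0.1/#")

g = Graph()
g.bind("foaf", FOAF)
g.bind("doac", DOAC)

person = URIRef("http://example.org/people/jane")
skill = URIRef("http://example.org/skills/semantic-web")

g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal("Jane Doe")))
g.add((person, DOAC.skill, skill))          # hypothetical property name
g.add((skill, FOAF.name, Literal("Semantic Web technologies")))

print(g.serialize(format="turtle"))
```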

The final talk in this track was from Joachim Griesbaum, who works at the Institute of Information Science and Language Technology. The title of his talk must have been the longest of the conference: “Facilitating collaborative knowledge management and self-directed learning in higher education with the help of social software, Concept and implementation of CollabUni – a social information and communication infrastructure”, but as he said: at least it gives you an idea of what it is about (slides of this talk are available here; Griesbaum was one of the few presenters who made it clear where I could find the slides afterwards).

A lot of social software in higher education is used for formal learning. Griesbaum wants to focus on a knowledge management approach that primarily supports informal learning. To that end he and his students designed a low-cost (there was no budget) system from the bottom up. It is called CollabUni and is based on Mahara, the open source e-portfolio solution (and smart little sister of Moodle).

They did a first evaluation of the system in late 2009. There was little self-initiated knowledge activity among the 79 first-year students. Roughly one-third of the students see an added value in CollabUni and declare themselves ready for active participation. Even though the knowledge processes that they aimed for don’t seem to be self-initiating and self-supporting, CollabUni still demonstrates a possible low-cost, bottom-up approach towards developing social software. During the next steps of their roll-out they will pay attention to the following:

  • Social design is decisively important
  • Administrative and organizational support components and incentive schemes are needed
  • Appealing content (for example an initial repository of term papers or theses)
  • Identify attractive use cases and applications

Call me a cynic, but if you have to try this hard: why bother? To me this really had the feeling of a technology trying to find a problem, rather than a technology being the solution to a problem. I wonder what the uptake of Facebook is among his students. I did ask him that question and he said that there has not been a lot of research into the use of Facebook in education. I guess that is true, but I am quite convinced there is a lot of use of Facebook in education. I believe that if he had really wanted to leverage social software for the informal part of learning, he should have started with what his students are actually using and tried to leverage that by designing technology in that context, instead of building yet another separate system.

Collaborative Innovation Networks (COINs)
The closing keynote of the conference was by Peter A. Gloor, who currently works for the MIT Center for Collective Intelligence. Gloor has written a couple of books on how innovation happens in this networked world. Though his story was certainly entertaining, I also found it a bit messy: he had an endless list of fascinating examples that in the end supported a message he could have given in a single slide.

His main point is that large groups of people behave apparently randomly, but that there are patterns that can be analysed at the collective level. These patterns can give you insight into the direction people are moving. One way of reading the collective mind is by doing social network analysis. By combining the wisdom of the crowd with the wisdom of groups of experts (swarms) it is possible to make accurate predictions. One example he gave was how they had used reviews on the Internet Movie Database (the crowd) and on Rotten Tomatoes (the swarm) to predict, on the day before a movie opens in theatres, how much the movie will bring in in total.

The process to do these kinds of predictions is as follows:

COIN cycle

This kind of analysis can be done at a global level (like the movie example), but also within organizations, for example by analysing email archives or by equipping people with so-called social badges (which I first read about in Honest Signals) that measure whom people have contact with and what kind of interaction they are having.

He then went on to talk about what he calls “Collaborative Innovation Networks” (COINs) which you can find around most innovative ideas. People who lead innovation (think Thomas Edison or Tim Berners-Lee) have the following characteristics:

  • They are well connected (they have many “friends”)
  • They have a high degree of interactivity (very responsive)
  • They share to a very high degree

All of these characteristics are easy to measure electronically and thus automatically, so to find COINs you look for the people who score high on these points. According to Gloor, high-performing organizations work as collaborative innovation networks. Ideas progress from Collaborative Innovation Network (COIN) to Collaborative Learning Network (CLN) to Collaborative Interest Network (CIN).
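As an illustration of how such measurements could work, here is a small sketch using networkx on a toy interaction log; these are generic network metrics used as proxies, not Gloor’s actual method:

```python
import networkx as nx

# Toy interaction log (sender, receiver), e.g. extracted from an email archive.
interactions = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "alice"),
    ("carol", "alice"), ("dave", "alice"), ("alice", "dave"),
]

G = nx.DiGraph()
for sender, receiver in interactions:
    if G.has_edge(sender, receiver):
        G[sender][receiver]["weight"] += 1
    else:
        G.add_edge(sender, receiver, weight=1)

# Proxy measures for the three characteristics:
connectedness = nx.degree_centrality(G)                            # "many friends"
responsiveness = {n: G.out_degree(n, weight="weight") for n in G}  # messages sent
betweenness = nx.betweenness_centrality(G)                         # brokering position

candidates = sorted(G.nodes, key=lambda n: (connectedness[n], betweenness[n]), reverse=True)
print("Likely COIN members:", candidates[:3])
```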

Twitter is proving to be a very useful tool for this kind of analysis. Making predictions for movies is relatively easy because people are honest in their feedback. It is much harder for things like stocks, because people game the system with their analyses. Twitter can be used (e.g. by searching for “hope”, “fear” and “worry” as indicators of sentiment) because people are honest in their feedback there.
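A crude illustration of that kind of keyword-based sentiment proxy, purely illustrative and far simpler than what Gloor actually does:

```python
# Count worry-related keywords in a batch of tweet texts as a rough sentiment signal.
KEYWORDS = ("hope", "fear", "worry")

def keyword_counts(tweets):
    counts = {word: 0 for word in KEYWORDS}
    for text in tweets:
        lowered = text.lower()
        for word in KEYWORDS:
            counts[word] += lowered.count(word)
    return counts

tweets = [
    "I hope the markets recover soon",
    "Real fear about pensions today",
    "No worry here, all good",
]
print(keyword_counts(tweets))   # {'hope': 1, 'fear': 1, 'worry': 1}
```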

Finally he made a reference in his talk to the Allen curve (the high correlation between physical distance and communication, with a critical distance of 50 meters for technical communication). I am sure this curve is used by many office planners, but Gloor also found an Allen curve for technical companies around his university: it was about 3 miles.

Interesting Encounters
Outside of the sessions I spoke to many interesting people at the conference. Here are a couple (for my own future reference).

It had been a couple of years since I had last seen Peter Sereinigg from act2win. He has stopped being a Moodle partner and now focuses on projects in which he helps global virtual teams communicate with each other. There was one thing he and I could fully agree on: you first have to build some rapport before you can effectively work together. It seems like such an obvious thing, but for some reason it still doesn’t happen on many occasions.

Twitter allowed me to get in touch with Aldo de Moor. He had read my blog post about day 1 of this conference and suggested one of his articles for further reading about pattern languages (the article refers to a book on a pattern language for communication which looks absolutely fascinating). Aldo is an independent research consultant in the field of Community Informatics. That was interesting to me for two reasons:

  • He is still actively publishing in peer reviewed journals and speaking at conferences, without being affiliated with a highly acclaimed research institute. He has written an interesting blog post about the pros and cons of working this way.
  • I had never heard of this young field of community informatics and it is something I would like to explore further.

I also spent some time with Barend Jan de Jong, who works at Wolters Noordhoff. We had some broad-ranging discussions, mainly about the publishing field: the book production process and the information technology required to support it; what value a publisher can still add; e-books compared to normal books (he said a bookcase says something about somebody’s identity; I agreed, but said that a digital book-related profile is way more accessible than the bookcase in my living room; note to self: start creating parody GoodReads accounts for Dutch politicians); the unclear if not unsustainable business model of the wonderful Guardian news empire; and how we both think that O’Reilly is a publisher that seems to have its affairs fully in order.

Puzzling stuff
There were also some things at I-KNOW 2010 that were really from a different world. The keynote on the morning of the third day was perplexing to me. Márta Nagy-Rothengass titled her talk “European ICT Research and Development Supporting the Expansion of Semantic Technologies and Shared Knowledge Management” and opened with a video message of Neelie Kroes talking in very general terms about Europe’s digital agenda. After that, Nagy-Rothengass told us that the European Commission will be nearly doubling its investment in ICT to 11 billion euros, after which she started talking about “Call 5” of FP7 (apparently that stands for the Seventh Framework Programme), the dates before which people should put in their proposals, the number of proposals received, etc., etc., etc. I am pro-EU, but I am starting to understand why people can make a living advising other people on how best to apply for EU grants.

Another puzzling thing was the fact that people like me (with a corporate background) thought that the conference was quite theoretical and academic, whereas the researchers thought everything was very applied (maybe not even enough research!). I guess this shows that there is quite a schism between the universities furthering the knowledge in this field and the corporations that could benefit from picking the fruits of that knowledge. I hope my attendance at this great conference did its tiny part in bridging that gap.

Notes and Reflections on Day 1 of I-KNOW 2010

I-KNOW 2010

From September 1-3, 2010, I will attend the 10th International Conference on Knowledge Management and Knowledge Technologies (I-KNOW 2010) in beautiful Graz, Austria. I will use my blog to do a daily report on my captured notes and ideas.

And now for something completely different
In the last few years I have put a lot of effort into becoming a participating member of the global learning technology community. This means that when I visit a “learning” conference I know a lot of the people who are there. At this conference I know absolutely nobody. Not a single person in my online professional network seems to know about, let alone go to, this conference.

One of my favourite competencies in the leadership competency framework of Shell is the ability to value differences. People who master this competency actively seek out the opinions of people who think differently from them. There are good reasons for this (see for example Page’s The Difference), and it is one of the things that I would like to work on myself: I am naturally inclined to seek out people who think very much like me, and this conference should help me in overcoming that preference.

After the first day I already realise that the world I live and work in is very “corporate” and very Anglo-Saxon. In a sense this conference feels like I have entered a world that is normally hidden from me. I would also like to compliment the organizers of the conference: everything is flawless (there is even an iPhone app, soon to be standard for all conferences I think; I loved how FOSDEM did this: publishing the program in a structured format and then letting developers make the apps for multiple mobile platforms).

Future Trends in Search User Interfaces
Marti Hearst has just finished writing her book Search User Interfaces, which is available online for free here, and she was therefore asked to keynote about the future of these interfaces.

Current search engines are primarily text-based, have a fast response time, are tailored to keyword queries (supporting a search paradigm of iteration on these keywords), sometimes offer faceted metadata for navigation and organization support, support related queries, and in some cases are starting to show context-sensitive results.

Hearst sees a couple of things happening in technology and in how society interacts with that technology that could help us imagine what the search interface will look like in the future. Examples are the wide adoption of touch-activated devices with excellent UI design, the wide adoption of social media and user-generated content, the wide adoption of mobile devices with data service, improvements in Natural Language Processing (NLP), a preference for audio and video and the increasing availability of rich, integrated data sources.

All of these trends point to more natural interfaces. She thinks this means the following for search user interfaces:

  • Longer, more natural queries. Queries are getting longer all the time. Naive computer users use longer queries, only shortening them when they learn that they don’t get good results that way. Search engines are getting better at handling longer queries. Sites like Yahoo Answers and Stack Overflow (a project by one of my heroes, Joel Spolsky) are only possible because we now have much more user-generated content.
  • “Sloppy commands” are now slowly starting to be supported by certain interfaces. These allow flexibility in expression and are sometimes combined with visual feedback. See the video below for a nice example.

[vimeo http://vimeo.com/13992710]

  • Search is becoming as social as possible. This is a difficult problem because you are not one person; you are different people at different times. There are explicit social search tools like Digg, StumbleUpon and Delicious, and there are implicit social search tools and methods like “People who bought x, also bought…” (a toy co-occurrence sketch follows this list) and Yahoo’s My Web (now defunct). Two good examples (not given by Hearst) of how important the social aspects of search are becoming are this Mashable article on a related Facebook patent and this Techcrunch article on a personalized search engine for the cloud.
  • There will be a deep integration of audio and video into search. This seemed to be a controversial part of her talk. Hearst is predicting the decline of text (though not among academics and lawyers). There are enough examples around: the culture of video responses on YouTube apparently arose spontaneously, and newspaper websites are starting to look more and more like TV. It is very easy to create videos, but the way we edit videos still needs improvement.
  • A final prediction is that the search interface will be more like a dialogue, or conversational. This reality is a bit further away, but we are starting to see what it might look like with apps like Siri.
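For the “people who bought x, also bought y” idea mentioned above, here is a toy co-occurrence counter; real recommender systems are of course far more involved:

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase baskets; each set is one customer's items.
baskets = [
    {"book-a", "book-b"},
    {"book-a", "book-b", "book-c"},
    {"book-b", "book-c"},
]

# Count how often pairs of items appear in the same basket.
co_occurrence = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for x, y in combinations(sorted(basket), 2):
        co_occurrence[x][y] += 1
        co_occurrence[y][x] += 1

def also_bought(item, top_n=2):
    """Return the items most often bought together with the given item."""
    ranked = sorted(co_occurrence[item].items(), key=lambda kv: kv[1], reverse=True)
    return [other for other, _ in ranked[:top_n]]

print(also_bought("book-a"))   # ['book-b', 'book-c']
```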

Enterprise 2.0 and the Social Web

Murinsel Bridge in Graz, photo by Flickr user theowl84, CC-licensed

This track consisted of three presentations. The first one was titled “A Corporate Tagging Framework as Integration Service for Knowledge Workers”. Walter Christian Kammergruber, a PhD student from Munich, told us that there are two problems with tagging: one is how to orchestrate the tags in such a way that they work across the complete application landscape, the other is the semantic challenge of getting rid of ambiguity, multiple spellings, etc. His tagging framework (called STAG) attempts to solve these problems. It is a piece of middleware that sits on the Siemens network and provides tagging functionality through web services to Siemens’ blogging platform, wiki, discussion forums and SharePoint sites. The tags can then be displayed using simple widgets. The semantic problem is addressed by a thesaurus editor that allows people to define synonyms for tags and to make relationships between related tags.
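The thesaurus idea is easy to illustrate. The sketch below assumes a simple synonym table and related-tags table; STAG’s actual data model will of course be richer:

```python
# Map spelling variants and synonyms onto one canonical tag before storing it.
THESAURUS = {
    "km": "knowledge management",
    "knowledgemanagement": "knowledge management",
    "wikis": "wiki",
    "wiki's": "wiki",
}

RELATED = {
    "knowledge management": {"enterprise 2.0", "collaboration"},
}

def normalise_tag(raw_tag):
    """Return the canonical form of a user-supplied tag."""
    cleaned = raw_tag.strip().lower()
    return THESAURUS.get(cleaned, cleaned)

def related_tags(tag):
    """Suggest related tags defined in the thesaurus."""
    return RELATED.get(normalise_tag(tag), set())

print(normalise_tag("KM"))                   # knowledge management
print(related_tags("knowledgemanagement"))   # a set with 'enterprise 2.0' and 'collaboration'
```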

I strongly believe that any large corporation would be very much helped by a centralised tagging facility which can be utilised by decentralised applications. This kind of methodology should not only be used for tagging but could also be used for something like user profiles. How come I don’t have a profile widget that I can include on our corporate intranet pages?

The second talk, by Dada Lin, was titled “A Knowledge Management Scheme for Enterprise 2.0”. He presented a framework that should be able to bridge the gap between Knowledge Management and Enterprise 2.0. It is called the IDEA framework in which knowledge is seen as a process, not as an object. The framework consists of the following elements (also called “moments”):

  • Interaction
  • Documentation
  • Evolution
  • Adoption

He then placed these moments into three dimensions: Human, Technology and Organisation. Finally he presented some research around a Confluence installation at T-Systems. None of this was really enlightening to me; I was, however, intrigued to notice that the audience focused more on the research methodologies than on the outcomes of the research.

The final talk, “Enterprise Microblogging at Siemens Building Technologies Division: A Descriptive Case Study” by Johannes Müller, a senior Knowledge Management manager at Siemens, was quite entertaining. He talked about References@BT, a community at Siemens that consists of many discussion forums, a knowledge reference and, since March 2009, a microblogging tool. It has 7,000 members in 73 countries.

The microblogging platform was built by Müller himself and thus has exactly the features it needed to have. One of the features he mentioned was that it shows a picture of every user in every view of the microblog posts. This is now a standard feature in lots of tools (e.g. Twitter or Facebook), and it made me realise that Moodle was actually one of the first applications I know of that did this consistently: another example of how forward-thinking Martin Dougiamas really was!

Müller’s microblogging platform does allow posts of more than 140 characters, but does not allow any formatting (no line-breaks or bullet points for example). This seems to be an effective way of keeping the posts short.

He shared a couple of strategies that he uses to get people to adopt the new service. Two things that were important were the provision of widgets that can be included in more traditional pages on the intranet, and the ability to import postings from other microblogging sites like Twitter using a special hashtag. He has also sent out personalised emails to users with follow suggestions. These were hugely effective in bootstrapping the network.
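The hashtag-based import could be as simple as the sketch below; the hashtag and field names are hypothetical, and a real implementation would talk to Twitter’s API on one side and the internal platform on the other:

```python
# Mirror external microblog posts carrying a designated hashtag into the internal platform.
IMPORT_HASHTAG = "#refbt"   # hypothetical marker hashtag

def select_posts_for_import(external_posts, hashtag=IMPORT_HASHTAG):
    """Keep only posts explicitly tagged for import, stripping the marker hashtag."""
    selected = []
    for post in external_posts:
        if hashtag.lower() in post["text"].lower():
            text = post["text"].replace(hashtag, "").strip()
            selected.append({"author": post["author"], "text": text})
    return selected

external_posts = [
    {"author": "jmueller", "text": "New reference case published #refbt"},
    {"author": "someone",  "text": "Lunch was great"},
]
print(select_posts_for_import(external_posts))
```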

Finally he told us about the research he has done to get some quantitative and qualitative data about the usefulness of microblogging. His respondents thought it was an easy way of sharing information, an additional channel for promoting events, a new means of networking with others, a suitable tool to improve writing skills and a tool that allowed for the possibility to follow experts.

Know-Center Graz
During lunch (and during the Bacardi sponsored welcome reception) I had the pleasant opportunity to sit with Michael Granitzer, Stefanie Lindstaedt and Wolfgang Kienreich from the Know-Center, Austria’s Competence Center for Knowledge Management.

They have done some work for Shell in the past around semantic similarity checking and have delivered a working proof of concept in our Mediawiki installation. They demonstrated some of their new projects and we had a good discussion about corporate search and how to do technological innovation in large corporations.

The first project that they showed me is called the Advanced Process-Oriented Self-Directed Learning Environment (APOSDLE). It is a research project that aims to develop tools that help people learn at work. To rephrase it in learning terms: it is a very smart way of doing performance support. The video below gives you a good impression of what it can do:

[youtube=http://www.youtube.com/watch?v=4ToXuOTKfAU?rel=0]

After APOSDLE they showed me some outcomes from the Mature IP project. From the project abstract:

Failures of organisation-driven approaches to technology-enhanced learning and the success of community-driven approaches in the spirit of Web 2.0 have shown that for that agility we need to leverage the intrinsic motivation of employees to engage in collaborative learning activities, and combine it with a new form of organisational guidance. For that purpose, MATURE conceives individual learning processes to be interlinked (the output of a learning process is input to others) in a knowledge-maturing process in which knowledge changes in nature. This knowledge can take the form of classical content in varying degrees of maturity, but also involves tasks & processes or semantic structures. The goal of MATURE is to understand this maturing process better, based on empirical studies, and to build tools and services to reduce maturing barriers.

Mature

I was shown a widget-based approach that allowed people to tag resources, put them in collections and share these resources and collections with others (more information here). One thing really struck me about the demo I got: they used a simple browser plugin as the first point of contact between users and the system. I suddenly realised that this would be the fastest way to add a semantic layer over our complete intranet (it would work for the extranet too). With our desktop architecture it is relatively trivial to roll out a plugin to all users. This plugin would allow users to annotate webpages, creating a network of meta-information about resources. This is becoming increasingly viable as more and more of the resources in a company are accessed from a browser and are URL-addressable. I would love to explore this pragmatic direction further.
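To show what such an annotation layer could look like, here is a hypothetical sketch of the record a browser plugin might send to a central annotation service; the URL, user and field names are invented for illustration:

```python
import json
from datetime import datetime, timezone

# Hypothetical payload a browser plugin could send to a central annotation service,
# building a layer of meta-information over URL-addressable intranet resources.
def build_annotation(url, user, tags, note=None):
    return {
        "url": url,
        "user": user,
        "tags": sorted(set(t.strip().lower() for t in tags)),
        "note": note,
        "created": datetime.now(timezone.utc).isoformat(),
    }

annotation = build_annotation(
    "http://intranet.example.com/projects/project-x/overview",
    "jan.doe",
    ["drilling", "Project X", "lessons-learned"],
    note="Good summary of the 2009 review.",
)
print(json.dumps(annotation, indent=2))
# In a real setup the plugin would POST this JSON to the annotation service.
```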

Knowledge Sharing
Martin J. Eppler from the University of St. Gallen seems to be a leading researcher in the field of knowledge management: when he speaks, people listen. He presented a talk titled “Challenges and Solutions for Knowledge Sharing in Inter-Organizational Teams: First Experimental Results on the Positive Impact of Visualization”. He is interested in the question of how visualization (mapping text spatially) changes the way people share knowledge. In this particular research project he focused on inter-organizational teams. He tries to make his experiments as realistic as possible, so he used senior managers and real-life scenarios, put them in three experimental groups and set them a particular task. One group was supported with special computer-based visualization software, another group used posters with templates and a final (control) group used plain flipcharts. After analysing his results he was able to conclude that visual support leads to significantly greater productivity.

This talk highlights one of the problems I have with science applied in this way. What do we now know? The results are very narrow and specific. What happens if you change the software? Is this the case for all kinds of tasks? The problem is: I don’t know how scientists could do a better job. I guess we have to wait until our knowledge-working lives can really be measured consistently and in real time, and then for smart algorithms to find out what really works for increased productivity.

The next talk in this track was from Felix Mödritscher, who works at the Vienna University of Economics and Business. His potentially fascinating topic, “Using Pattern Repositories for Capturing and Sharing PLE Practices in Networked Communities”, was hampered by the difficulty of explaining the complexities of the project he is working on.

He used the following definition for Personal Learning Environments (PLEs): a set of tools, services, and artefacts gathered from various contexts and to be used by learners. Mödritscher has created a methodology that allows people to share good practices in PLEs. First you record PLE interactions, then you allow people to depersonalise these interactions and share them as an “activity pattern” (distilled and archetypical), after which others can pick these up and repersonalise them. He has created a pattern repository with a pattern store. It has a client-side component implemented as a Firefox extension: PAcMan (Personal Activity Manager). It is still early days, but these patterns appear to be really valuable: they not only help with professional competency development, but also with what he calls transcompetences.
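The depersonalisation step could, in its simplest form, look something like the sketch below; the field names are assumptions of mine, not Mödritscher’s actual format:

```python
# Turn a recorded, personal PLE interaction log into a depersonalised "activity pattern"
# by dropping identities and concrete artefacts while keeping the tool/action sequence.
def depersonalise(interactions):
    pattern = []
    for step in interactions:
        pattern.append({
            "tool": step["tool"],
            "action": step["action"],
            "artefact_type": step.get("artefact_type", "unknown"),
        })
    return pattern

recorded = [
    {"user": "felix", "tool": "feed reader", "action": "subscribe",
     "artefact_type": "blog", "artefact": "http://example.org/research-blog"},
    {"user": "felix", "tool": "social bookmarking", "action": "tag",
     "artefact_type": "paper", "artefact": "http://example.org/paper.pdf"},
]
print(depersonalise(recorded))
```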

I love the idea of using design patterns (see here), but thought it was a pity that Mödritscher did not show any very concrete examples of shared PLE patterns.

My last talk of the day was on “Clarity in Knowledge Communication” by Nicole Bischof, one of Eppler’s PhD students at the University of St. Gallen. She used a fantastic quote by Wittgenstein early in her presentation:

Everything that can be said, can be said clearly

According to her, clarity can help with knowledge creation, knowledge sharing, knowledge retention and knowledge application. She used the Hamburger Verständlichkeitskonzept as a basis to distill five distinct aspects of clarity: Concise content, Logical structure, Explicit content, Ambiguity low and Ready to use (the first letters conveniently spell “CLEAR”). She then did an empirical study of the clarity of PowerPoint presentations. Her presentation turned tricky at that point, as she was presenting in PowerPoint herself. The conclusion was a bit obvious: knowledge communication can be designed to be more user-centred and thus more effective, and clarity helps in translating the innovation and potential of knowledge and can help with a clear presentation of complex knowledge content.

Bischof did an extensive literature review and found that clarity is an under-researched topic. Having just read Tufte’s anti-PowerPoint manifesto, I am convinced that there is a world to gain for businesses like Shell. So much of our decision making is based on PowerPoint slide packs that it becomes incredibly urgent to make these as good as they can be.

Never walk alone
I am at this conference all by myself and have come to realise that this is not the optimal situation. I want to be able to discuss the things that I have just seen and collaboratively translate them to my personal work situation. It would have been great to have a sparring partner here who shares a large part of my context. Maybe next time?!

Online Educa Berlin 2008: Day 1

Norbert Bolz

I am at the Online Educa with Stoas for a commercial purpose: we have a stand with four European Moodle partners and are trying to talk to as many people as possible about Moodle.

This means that I have not had the opportunity to really go to any of the sessions. I did manage to go to the keynotes of the first day though, so I would like to write down some of the things that I have noticed there.

Just like Wilfred Rubens I had really looked forward to hearing Michael Wesch speak. I should have known that I would be disappointed. This had nothing to do with Wesch, who is an insightful and entertaining speaker, but with the fact that I already know what he does. He focused on the lowest common denominator in the audience, and that wasn’t me.

I guess you could say that he suffered from the exact problem that he is trying to solve in his educational practice: how do you stay significant when you stand in front of an audience in a design built for non-participation. The title of his talk “The Crisis of Significance and the Future of Education” is highly relevant. I thought it was unfortunate that he only focused on the first part of his title and did not talk about recent educational projects like his World Simulation Project.

One slight disappointment was followed by a very pleasant surprise. The Berlin based media scholar Norbert Bolz gave a slide-less talk titled “From Knowledge Management to Identity Management”. This talk was highly conceptual and sociological (if not philosophical).

He talked about five Internet related phenomena and what kind of effects these are having on society:

  • Serious play or the “paradise of work”. Bolz thinks there will be less of a difference between work and private time. Successful people will be absorbed in their work. The software tools that we buy are also toys. We should learn how to play with these tools (just like with toys) to use them effectively. Younger people are naturally the avant garde of this development.
  • Self design, also known as branding yourself. Personal brands are humans who have learned how to catch people’s attention. He described a progression from broadcasting to narrowcasting to echocasting and considers YouTube to be a prime example.
  • Identity management has to do with social wealth. He thinks we are living in the age of reputation and recommendation.
  • Attention management is about the interrelation between ignorance and trust. To know more is also to know less. All our options are disproportionate to our available time resources. Attention should be considered a naturally scarce resource, and there is a huge battle to grab it.
  • Linking value is the most important source of surplus value in this century. This is because of the logic of networks. Bolz referred to Granovetter’s “ground-breaking essay” The Strength of Weak Ties. Old social networks have strong ties, whereas current social networks have weak ties (e.g. a Facebook user with 2,600 “friends”). Networks with weak ties are more information-rich, while the information flow between strong ties is very small (he gave the example of how lovers communicate).

All of these are topics which invite more exploration. I am looking forward to doing that over the next couple of weeks and will start with Granovetter’s essay.

Tomorrow is another day. I am hoping to see another keynote session and go to the Battle of the Bloggers (with Jay Cross, Wilfred Rubens and Stephen Warburton; looking forward to the strong language!).