Technology, Innovation, Education

"technology creates feasibility spaces for social practice"


Reflecting on South by Southwest (SxSW) 2012

SxSW: The Place to Be (photo CC-licensed by Debbs)


It has been a few months since I attended SxSW in Austin. Time to do a bit of reflection and see which things have stuck with me as major takeaways and trends to remember.

Let me start by saying that going there has changed the way I think about learning and technology in many tacit ways that are hard to describe. That must have something to do with the techno-optimism, the incredible scale/breadth and the inclusive atmosphere. I will definitely make it a priority to go there again. The following things made me think:

Teaching at scale

One thing that we are now slowly starting to understand is how to do things at scale. Virtualized technology allows us to cooperate and collaborate in groups that are orders of magnitude larger than groups coming together in a physical space. The ways of working inside these massive groups are different too.

Wikipedia was probably one of the first sites that showed the power of doing things at this new scale (or was it Craigslist?). Now we have semi-commercial platforms like WordPress.com or hyper-commercial platforms like Facebook that are leveraging the same type of affordances.

The teaching profession is now catching on too. From non-commercial efforts like MOOCs and the Peer 2 Peer University to initiatives springing from major universities (Stanford’s AI course, Udacity, Coursera, MITx) to the now heavily endowed Khan Academy: all have found ways to scale a pedagogical process from a classroom full of students to audiences of tens of thousands, if not hundreds of thousands. They have even become mainstream news, with Thomas Friedman writing about them in the New York Times (conveniently forgetting to mention the truly free alternatives).

I don’t see any of this in Corporate Learning Functions yet. The only way we currently help thousands of staff learn is through non-facilitated e-learning modules. That paradigm is now 15 to 20 years old and has not taken on board any of the lessons that the net has taught us. Soon we will all agree that this type of e-learning is mostly ineffectual and thus ultimately also inefficient. The imperative for change is there. Events like the Jams that IBM organizes are just the beginning of new ways of learning at the scale of the web.

Small companies creating new/innovative practices

The future of how we will soon all work is already on view in many small companies around the world. Automattic blew my mind with their fully distributed global workforce of slightly over a hundred people. This allows them to hire only the best people for the job (rather than the people who happen to live conveniently close to an office location). All these people need to be productive is a laptop with an Internet connection.

Automattic has also found a way to make sure that people feel connected to the company and stay productive: they ask people to share as much as possible about what they are doing (they call it “oversharing”; I would call it narrating your work). There are some great lessons there for small global virtual teams in large companies.

The smallest company possible is a company of one. A few sessions at SxSW focused on “free radicals”: people who work in ever-shifting small project groups and often aren’t bound to a particular location. These people live what Charles Handy, in The Elephant and the Flea, called a portfolio lifestyle. They are obviously not on a career track with promotions; instead they get their feedback, discipline and refinement from the meritocratic communities and co-working spaces they work in.

Personally I am wondering whether it is possible to become a free radical in a large multinational. Would that be the first step towards a flatter, less hierarchical and more expertise-based organization? I for one wouldn’t mind stepping outside of my line (and out of my silo) and finding my own work on the basis of where I can add the most value for the company. I know this is already possible in smaller companies (see the Valve handbook for an example). It will be hard for big enterprises to start doing this, but I am quite sure we will all end up there eventually.

Hyperspecialization

One trend that is very recognizable for me is hyperspecialization. When I made my first website around 2000, I was able to quickly learn everything there was to know about building websites. There were a few technologies and their scope was limited. Now the level of specialization in the creation of websites is incredible. There is absolutely no way anybody can be an expert in a substantial part of the total field. The modern-day renaissance man just can’t exist.

Transaction costs are going down everywhere. This means that integrated solutions and companies/people who can deliver things end-to-end are losing their competitive edge. As a client I prefer to buy each element of what I need from a niche specialist, rather than get it in one go from somebody who does an average job. Topcoder has made this a core part of their business model: each project that they get is split up into as many pieces as possible and individuals (free radicals again) bid on the work.

Let’s assume that this trend towards specialization will continue. What would that mean for the Learning Function? One thing that would become critical is the ability to quickly assess expertise. How do you know that somebody who calls themselves an expert really is one? What does this mean for competency management? How will this affect the way you build up teams for projects?

Evolution of the interface

Everybody at SxSW was completely focused on mobile technology. I couldn’t keep track of the number of new apps I saw presented. Smartphones and tablets have created a completely new paradigm for interacting with our computers. We have all become enamoured with touch interfaces and have bought into the idea that a mobile operating system contains apps and an app store (with what I like to call the matching “update hell”).

Some visionaries were already talking about what lies beyond the touch-based interface and apps (e.g. Scott Jenson and Amber Case). More than one person talked about how location and other context-creating attributes of the world will allow our computers to be much smarter in what they present to us. Rather than us starting an app to get something done, it will be the world that pushes its apps onto us. You don’t have to start the app with the public transport schedule anymore; instead you will be shown the schedule as soon as you arrive at the bus stop. You don’t start Shazam to capture a piece of music; your phone will just notify you of what music is playing around you (and probably what you could be listening to if you were willing to switch channels). Social cues will become even stronger, and this means that cities become the places for what someone called “coindensity” (a place with more serendipity than other places).

This is likely to have profound consequences for the way we deliver learning. Physical objects and locations will have learning attached to them, and this will get pushed to people’s devices (especially when the system knows that your certification has expired or that you haven’t dealt with this object before). You can see vendors of Electronic Performance Support Systems slowly moving in this direction. They are waiting for the mobile infrastructure to be there. The one thing we can start doing from today is to make sure we geotag absolutely everything.
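To make that idea concrete: a minimal sketch, in Python, of how a device could decide which learning content to push based on proximity to geotagged objects and an expired certification. The data model here (field names like `module`, the 50-metre radius) is entirely my own invention for illustration, not any vendor's API.

```python
import math
from datetime import date

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two geotagged points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def modules_to_push(user_pos, certification_expiry, objects, radius_m=50):
    """Return learning modules attached to nearby geotagged objects for
    which the user's certification has lapsed (hypothetical data model)."""
    today = date.today()
    return [
        obj["module"]
        for obj in objects
        if haversine_m(*user_pos, obj["lat"], obj["lon"]) <= radius_m
        and certification_expiry.get(obj["module"], date.min) < today
    ]
```

A user standing next to a geotagged pump whose certification record is missing or expired would get the pump's module pushed; objects outside the radius stay silent.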

One step further are brain-computer interfaces (commanding computers with pure thought). Many prototypes already exist and the first real products are now coming to market. There are many open questions, but it is fascinating to start playing with the conceptual design of how these tools would work.

Storytelling

Every time I go to any learning-related conference I come back with the same thought: I should really focus more on storytelling. At SxSW there was a psychologist making this point again. She talked about our tripartite brain and how the only way to engage with the “older” (I guess she meant Limbic) parts of our brain is through stories. Her memorable quote for me was: “You design for people. So the psychology matters.”

Just before SxSW I had the opportunity to spend two days at the amazing Applied Minds. They solve tough engineering problems, bringing ideas from concept to working prototype (focusing on the really tough things that other companies are not capable of doing). What surprised me is that about half of their staff has an artistic background. They realise the value of story. I’m convinced there is a lot to be gained if large engineering companies took their diversity statements seriously and started hiring writers, architects, sculptors and filmmakers.

Open wins again

Call it confirmation bias (my regular readers know I always prefer “open”), but I kept seeing examples at SxSW where open technology beats closed solutions. My favourite example was OpenStreetMap: companies have been relying on Google Maps for their mapping needs, but many of them are now starting to realise how limiting Google’s functionality is and what kind of dependence it creates for them. Many companies are switching to OpenStreetMap; examples include Yahoo (Flickr), Apple and Foursquare.
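Part of what makes switching feasible is that OpenStreetMap's tile scheme is an open, documented convention: any client can compute which map tile covers a coordinate. A quick Python sketch of the standard "slippy map" tile calculation (the Austin coordinates are just an illustration I picked):

```python
import math

def deg2num(lat_deg, lon_deg, zoom):
    """Convert a latitude/longitude to OpenStreetMap 'slippy map' tile
    coordinates at the given zoom level (standard Web Mercator scheme)."""
    lat = math.radians(lat_deg)
    n = 2 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return xtile, ytile

# Tile covering downtown Austin at zoom 12; any OSM-compatible tile
# server could then serve the image at /12/{x}/{y}.png
x, y = deg2num(30.2672, -97.7431, 12)
```

Because the scheme is public, a company can point its client at any tile server, including its own, which is exactly the kind of independence the Google Maps switchers are after.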

Maybe it is because Google is straddling the line between creating more value than it captures and capturing more than it creates: I heartily agree with Tim O’Reilly’s and Doc Searls’ statements at SxSW that free customers will always create more value than captured ones.

There is one place where open doesn’t seem to be winning currently, and that is the enterprise SaaS market. I’ve been quite amazed by the mafia-like way in which Yammer has managed to acquire its customers: it gives away free accounts and puts people in a single network with other people in their domain. Yammer maximizes virality and tells people they will get more value out of Yammer if they invite their colleagues. Once a few thousand users are in the network, large companies have three options:

  1. Don’t engage with Yammer and let people just keep using it without paying for it. This creates unacceptable information risks and liability. Not an option.
  2. Tell people that they are not allowed to use Yammer. This is possible in theory, but would most likely enrage users, plus any network blocks would need to be very advanced (blocking Yammer emails so that people can’t use their own technology to access Yammer). Not a feasible option.
  3. Bite the bullet and pay for the network. Companies are doing this in droves. Yammer is acquiring customers straight into a locked-in position.

SaaS-based solutions are outperforming traditional IT solutions. Rather than four releases a year (if you are lucky), these SaaS-based offerings release multiple times a day. They keep adding new functionality based on their customers’ demands. I have an example where a SaaS-based solution was a factor of 2,000 faster in implementation (2 hours instead of 6 months) and a factor of 5,000 cheaper ($100 instead of $500,000) than the enterprise IT way of doing things. The solution was likely better too. Companies like Salesforce are trying very hard to make the traditional IT department obsolete. I am not sure how companies could leverage SaaS without falling into another lock-in trap though.

Resource constraints as an innovation catalyst

One lesson that I learned during my trip through the US is that affluence is not a good situation to innovate from. Creativity comes from constraints (this is why Arjen Vrielink and I kept constraining ourselves in different ways for our Parallax series). The African Maker “Safari” at SxSW showed what becomes possible when you combine severe resource constraints with regulatory whitespace. Make sure to subscribe to Makeshift Magazine if you are interested in seeing more of these types of inventions and innovations.

I believe that many large corporations give their teams too much budget to be really innovative. What would it mean if you didn’t cut the budget by 10% every year, but cut it by 90% instead? Wouldn’t you save a lot of money and force people to be more creative? In a world of abundance we will need to limit ourselves artificially to be able to deliver to our best potential.

Education ≠ Content

There are precious few people in the world who have a deep understanding of education. My encounter with Venture Capitalists at SxSW talking about how to fix education did not end well. George Siemens was much more eloquent in the way he described his unease with the VCs. Reflecting back, I see one thing that is most probably at the root of the problem: most people still equate education and learning with content. I see this fallacy all around me: it is the layperson’s view on learning. It is what drives people to buy Learning Content Management Systems that can deliver to mobile. It is why we think that different Virtual Learning Environments are interchangeable. And it is why we think that creating a full curriculum of great teachers explaining things on video will solve our educational woes. Wrong!

My recommendation would be to stop focusing on content altogether (as an exercise in constraining yourself). Who will create the first contentless course? Maybe Dean Kamen is already doing this. He wanted more children with engineering mindsets. Rather than creating lesson plans for teachers, he decided to organise a sport- and entertainment-based competition (I don’t know how successful he is in creating more engineers with this method, by the way).

That’s all

So much for my reflections. A blow-by-blow description of all the sessions I attended at SxSW is available here.

Quick Lessons From Losing an iPad

A couple of weeks ago I forgot my iPad on the train.

After getting over the initial overwhelming feelings of idiocy on my part, I started thinking a bit deeper about the consequences and whether I had taken sensible precautions to mitigate those consequences.

The Problems

A couple of problems dawned on me:

  1. I had lost something that is quite valuable (one colleague told me with some measure of sincerity: “Nice gift for somebody else”). I don’t spend €700 casually and was distressed about losing something worth that much.
  2. More important than the device is the data that is on it. There are two potential problems here. The first is that you might have lost access to data that is important to you. The second is that somebody else suddenly might have gained access to your data. Both of these made me feel very uncomfortable.
  3. Finally, losing the device made it clear to me that all iPads look alike, especially in their locked state, and that there is no way for an honest finder to know who the rightful owner of the device is.

The Solutions

So here is my advice on how to minimize these problems. I recommend for you to apply these immediately if you haven’t done so already.

  • Fully insure your device (I had actually done this). Even though this is quite expensive, and even though you really shouldn’t insure devices you can afford to replace yourself (those insurance companies have to live off something), I still think it is a good idea, as there are so many things that can go wrong just through bad luck. I take the cost of the insurance into account when buying the tablet and amortize it over two to three years.
  • Ask yourself this question: could I throw my current device in the water, walk over to any random computer with a browser and an Internet connection, and access all the data that matters to me from there? And if you then got a new device, would you be able to easily get that data back onto it? If your answer to either of these questions is no, you should change your strategy. Some people might think I ask for too much, as they are happy to back up to iTunes. I prefer to be as independent from iTunes as possible (I only use it for updates) and think most people would still lose a couple of days of data if all they had was an iTunes backup. Even before I lost my iPad, I was ok in this area. Here are some of the things that I have done: I like to have all my data in apps that keep a local copy (for when I am offline) and transparently sync to the cloud. For email, contacts and my calendar that is easy: I use Google Apps for my domain and set it up to sync (you have your own domain, right?). My tasks are managed with ToodleDo. My news reader of choice is Google Reader. All my notes are done with Momo. I have copies of my most important documents synced in a Dropbox folder. Dropbox also provides the syncing architecture for my iThoughts mindmaps and for the large collection of PDFs I have sitting in the GoodReader app. I buy my ebooks DRM-free and read them with GoodReader, or I get books as a service through the Amazon Kindle bookstore. Apple now allows easy redownload of the apps you have purchased in the past.
  • Make sure you set a passcode on your iPad (this I had done too). I’ve set it up so that it only comes on after a couple of minutes in standby mode. This way I keep some of the instant on-and-off convenience, but also know that if somebody steals it from my bag they won’t just be able to access my data. One thing I am still not sure about is how secure the passcode lock is. What happens when people connect a stolen iPad to their own copy of iTunes? Do they get access to the data?
  • Find my iPad


    Apple provides a free Find my iPad service. I had never bothered to set it up, but have since found out that it literally only takes two minutes to do. Once you have it installed you will be able to see where your iPad is, send a message to the iPad and even wipe its contents remotely. All of this can only work once your iPad has an Internet connection though.

  • Finally, I have downloaded a free iPad wallpaper and have used GIMP to add my contact information on top of the wallpaper file (making sure not to put the info underneath the dialog that asks for the passcode). This way, when somebody with good intentions finds the iPad, they will have an easy way to find out who the rightful owner is.

To finish the story: a couple of days after I lost my iPad I called the railway company to see if they had some news for me (I had asked them to try and locate it as soon as I realized it was missing). They told me a fellow traveler had brought in my iPad to the service desk and that I could pick it up. Unfortunately, I have no way of thanking this honest person, other than by writing this post.

Written by Hans de Zwart

27-07-2011 at 05:43

Notes and Reflections on Day 2 and 3 of I-KNOW 2010

I-KNOW 2010


These are my notes and reflections for the second and third days of the 10th International Conference on Knowledge Management and Knowledge Technologies (I-KNOW 2010).

Another appstore!
Rafael Sidi from Elsevier kicked off the second day with a talk titled “Bring in ‘da Developers, Bring in ‘da Apps – Developing Search and Discovery Solutions Using Scientific Content APIs” (the slightly ludicrous title was fashioned after this).

He opened his talk with this Steve Ballmer video which, if I was the CIO of any company, would seriously make me reconsider my customer relationship with Microsoft:

(If you enjoyed that video, make sure you watch this one too, first watch it with the sound turned off and only then with the sound on).

Sidi is responsible for Elsevier’s SciVerse platform. He has seen that data platforms are increasingly important, that there is an explosion of applications and that people work in communities of innovation. He used Data.gov as an example: it went from 47 sources to 220,000+ sources within a year’s time and has led to initiatives like Apps for America. We need to have an “Apps for science” too. Our current scientific platforms make us spend too much time gathering instead of analysing information, and none of them really understand the user’s intent.

The key trends that he sees on the web are:

  • Openness and interoperability (“give me your data, my way”). Access to APIs helps to create an ecosystem.
  • Personalization (“know what I want and deliver results on my interest”). Well known examples are: Amazon, Netflix and Last.fm
  • Collaboration & trusted views (“the right contacts at the right time”). Filtering content through people you trust: “Show me the articles I’ve read and show me what my friends have rated differently from me.” This is not done a lot. Sidi didn’t mention it, but I think things like Facebook’s open API are starting to deliver this.

So Elsevier has decided to turn SciVerse, the portal to their content, into a platform by creating an API with which developers can create applications. Much like Apple’s app store, this will include a revenue-sharing model. They will also nurture a developer community (bootstrapping it with a couple of challenges).

He then demonstrated how applications would be able to augment SciVerse search results, either by doing smart things with the data in a sidebar (based on aggregated information about the search results) or by modifying a single search result itself. I thought it looked quite impressive and was a very smart move: scientific publishers seem to be under a lot of pressure from things like Open Access and have been struggling to demonstrate their added value in this Internet world. This could be one way to add value. The reaction from the audience was quite tough (something Sidi had already preempted by showing an “I hate Elsevier” tweet in his slides). One audience member: “Elsevier already knows how to exploit the labour of scientists and now wants to exploit the labour of developers too”. I am no big fan of large publishing houses, but thought this was a bit harsh.

Knowledge Visualization
Wolfgang Kienreich demoed some of the knowledge visualization products that the Know-Center has developed over the years. The 3D knowledge space is not available through the web (it is licensed to a German encyclopedia publisher), but it showed what is possible if you think hard about how a user should be able to navigate through large knowledge collections. Their work for the Austrian Press Agency is available online in a “labs” environment. It demonstrates faceted search in combination with simple but insightful visualizations. The following example is a screenshot showing which Austrian politicians have said something about pensions.

APA labs faceted visual search


I have only learned through writing this blog post that Wolfgang is interested in the Prisoner’s Dilemma. I would have loved to have talked to him about Goffman’s Expression games and what they could mean for the ways decisions get made in large corporations. I will keep that for a next meeting.

Knowledge Work
This track was supposed to have four talks, but one speaker did not make it to the conference, so there were three talks left.

The first one was provocatively titled “Does knowledge worker productivity really matter?” by Rainer Erne. It was Drucker who said that it used to be the job of management to increase the productivity of manual labour and that it is now the job of management to make knowledge workers more productive. In one sense Drucker was definitely right: the demand for knowledge work is increasing all the time, whereas the demand for routine activities keeps going down.

Erne’s study focuses on one particular kind of knowledge worker: experts whose work is judgement-oriented, highly reliant on individual expertise and experience, and dependent on star performance. He looked at five business segments (hardware development, software development, consulting, medical work and university work) and consistently found the same five key performance indicators:

  • business development
  • skill development
  • quality of interaction
  • organisation of work
  • quality of results

This leads Erne to believe that we need to redefine productivity for knowledge workers: the focus shouldn’t just be on the quantity of the output, but more on its quality. So what can managers do, knowing this? They can help their experts by acting as a filter, or by concentrating their work for them.

This talk left me with some questions. I am not sure whether it is possible to make this distinction between quantitative and qualitative output, especially not in commercial settings. The talk also did not address what I consider to be the main challenge for management in this information age: the fact that a very good manual worker can only be two or maybe three times as productive as an average manual worker, whereas a good knowledge worker can be hundreds if not thousands of times more productive than the average worker.

Robert Woitsch’s talk was titled “Industrialisation of Knowledge Work, Business and Knowledge Alignment” and I have to admit that I found it very hard to contextualize what he was saying into something that had any meaning to me. I did think it was interesting that he went in quite another direction than Erne, as Woitsch does consider knowledge work to be a production process: people have to do things in efficient ways. I guess it is important to better define what we actually mean when we talk about knowledge work. His sites are here: http://promote.boc-eu.com and http://www.openmodels.at.

Finally Olaf Grebner from SAP Research talked about “Optimization of Knowledge Work in the Public Sector by Means of Digital Metaphors”. SAP has a case management system that organisations use as a replacement for their paper-based systems. The main difference is that a traditional paper-based system allows links between the formal case and the informal aspects around it (e.g. a post-it note on a case file), whereas current digital case management systems don’t allow informal information to be stored.

So Grebner set out to design an add-on to the digital system that would link informal with formal information and would do this by using digital metaphors. He implemented digital post-it notes, cabinets and ways of search and his initial results are quite positive.

Personally I am a bit sceptical about this approach. Digital metaphors have served us well in the past, but they are also the reason I have to store my files in folders and that each file can only be stored in one folder. Don’t you lose the ability to truly re-invent what a digital case management system can do for a company if you focus on translating the paper world into digital form? People didn’t like the new digital system (that is why Grebner was commissioned to make his prototype, I imagine). I believe that is because it didn’t offer the same affordances as the paper-based world. Why not focus on that first?

Graz Kunsthaus, photo by Marion Schneider & Christoph Aistleitner, CC-licensed


Knowledge Management and Learning
This track had three learning related sessions.

Martin Wolpers from the Fraunhofer Institute for Applied Information Technology (FIT) talked about the “Early Experiences with Responsive Open Learning Environments”. He first defined each of the terms in Responsive Open Learning Environments:
Responsive: responsiveness to learners’ activities in respect to learning goals
Open: openness for new configurations, new contents and new users
Learning Environment: the conglomerate of tools that bring together people and content artifacts in learning activities to support them in constructing and processing information and knowledge.

The current generation of Virtual Learning Environments and Learning Management Systems have a couple of problems:

  • Lack of information about the user across learning systems and learning contexts (i.e. what happens to the learning history of a person when they switch to a different company?)
  • Learners cannot choose their own learning services
  • Lack of support for open, flexible, personalized and contextualized learning approaches

Fraunhofer is making an intelligent infrastructure that incorporates widgets and existing VLE/LMS functionality to truly personalize learning. They want to bridge what people use at home with what they use in the corporate environment by “intelligent user driven aggregation”. This includes a technology infrastructure, but also requires a big change in understanding how people actually learn.

They used Shindig as the widget engine and OpenSocial as the widget technology, creating an environment with the following characteristics:

  • A widget based environment to enable students to create their own learning environment
  • Development of new widgets should be independent from specific learning platforms
  • Real-time communication between learners, remote inter-widget communication, interoperable data exchange, event broadcasting, etc.
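The "remote inter-widget communication" and "event broadcasting" in that last point boil down to a publish/subscribe pattern: widgets publish named events, and any widget subscribed to that topic receives the payload. A toy sketch in plain Python, purely as a conceptual illustration (this is not actual Shindig or OpenSocial code):

```python
from collections import defaultdict

class WidgetEventBus:
    """Minimal publish/subscribe broker: widgets register callbacks per
    topic, and published events are broadcast to all subscribers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Broadcast to every widget listening on this topic.
        for callback in self._subscribers[topic]:
            callback(payload)

# A chat widget broadcasts a message; a notification widget reacts.
bus = WidgetEventBus()
received = []
bus.subscribe("chat.message", received.append)
bus.publish("chat.message", {"from": "learner1", "text": "hi"})
```

The decoupling is the point: widgets only need to agree on topic names and payload shapes, so new widgets can be added without touching the learning platform itself.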

He used a student population in China as the first group to try the system. It didn’t have the uptake that he expected. They soon realised that this was because the students had concluded that use or non-use of the system did not directly affect their grades. The students also lacked an understanding of the (Western?) concept of a Personal Learning Environment. After this first trial he came to a couple of conclusions. Some were obvious, like that you should respect the cultural background of your students, or that responsive open learning environments create challenges on both the technological and the psycho-pedagogical side. Others were less obvious, like that an organic development process allowed for flexibility and for openly addressing emerging needs and requirements, and that it makes sense to push for your own development to become the standard.

For me this talk highlighted the still significant gap that seems to exist between computer scientists on the one side and social scientists on the other side. Trying out Personal Learning Environments in China is like sending CliniClowns to Africa: not a good idea. Somebody could have told them this in advance, right?

Next up was a talk titled “Utilizing Semantic Web Tools and Technologies for Competency Management” by Valentina Janev from the Serbian Mihajlo Pupin Institute. She does research to help improve the transferability and comparability of competences, skills and qualifications, and to make it easier to express core competencies and talents in a standardized, machine-accessible way. This was another talk that was hard for me to follow, because it was completely focused on what needs to happen on the (semantic) technical side without first giving a clear idea of the processes these technological solutions will eventually improve. A couple of snippets that I picked up: they are replacing data warehouse technologies with semantic web technologies, they use OntoWiki, a semantic wiki application, RDF is the key word for people in this field, and there is a thing called DOAC which has the ambition to make job profiles (and the matching CVs) machine-readable.

The final talk in this track was from Joachim Griesbaum who works at the Institute of Information Science and Language Technology. The title of his talk must have been the longest in the conference: “Facilitating collaborative knowledge management and self-directed learning in higher education with the help of social software, Concept and implementation of CollabUni – a social information and communication infrastructure”, but as he said: at least it gives you an idea what it is about (slides of this talk are available here, Griesbaum was one of the few presenters that made it clear where I could find the slides afterwards).

A lot of social software in higher education is used in formal learning. Griesbaum wants to focus on a Knowledge Management approach that primarily supports informal learning. To that end he and his students designed a low-cost (there was no budget) system from the bottom up. It is called CollabUni and is based on Mahara, the open source e-portfolio solution (and smart little sister of Moodle).

They did a first evaluation of the system in late 2009. There was little self-initiated knowledge activity among the 79 first-year students. Roughly one-third of the students see an added value in CollabUni and declare themselves ready for active participation. Even though the knowledge processes that they aimed for don’t seem to be self-initiating and self-supporting, CollabUni still shows and stands for a possible low-cost and bottom-up approach towards developing social software. During the next steps of their roll-out they will pay attention to the following:

  • Social design is decisively important
  • Administrative and organizational support components and incentive schemes are needed
  • Appealing content is needed (for example an initial repository of term papers or theses)
  • Attractive use cases and applications have to be identified

Call me a cynic, but if you have to try this hard: why bother? To me this really had the feeling of a technology trying to find a problem, rather than a technology being the solution to a problem. I wonder what the uptake of Facebook is among his students? I did ask him the question and he said that there has not been a lot of research into the use of Facebook in education. I guess that is true, but I am quite convinced there is a lot of use of Facebook in education. I believe that if he had really wanted to leverage social software for the informal part of learning, he should have started with what his students are actually using and tried to leverage that by designing technology in that context, instead of building yet another separate system.

Collaborative Innovation Networks (COINs)
The closing keynote of the conference was by Peter A. Gloor, who currently works for the MIT Center for Collective Intelligence. Gloor has written a couple of books on how innovation happens in this networked world. Though his story was certainly entertaining, I also found it a bit messy: he had an endless list of fascinating examples that in the end supported a message he could have given in a single slide.

His main point is that large groups of people behave apparently randomly, but that there are patterns that can be analysed at the collective level. These patterns can give you insight into the direction people are moving. One way of reading the collective mind is by doing social network analysis. By combining the wisdom of the crowd with the wisdom of groups of experts (swarms) it is possible to make accurate predictions. One example he gave was how they had used reviews on the Internet Movie Database (the crowd) and on Rotten Tomatoes (the swarm) to predict, on the day before a movie opens in the theatres, how much the movie will bring in in total.

The process to do these kinds of predictions is as follows:

COIN cycle

This kind of analysis can be done at a global level (like the movie example), but also in, for example, organizations, by analysing email archives or equipping people with so-called social badges (which I first read about in Honest Signals) that measure who people have contact with and what kind of interaction they are having.

He then went on to talk about what he calls “Collaborative Innovation Networks” (COINs) which you can find around most innovative ideas. People who lead innovation (think Thomas Edison or Tim Berners-Lee) have the following characteristics:

  • They are well connected (they have many “friends”)
  • They have a high degree of interactivity (very responsive)
  • They share to a very high degree

All of these characteristics are easy to measure electronically and thus automatically, so to find COINs you look for the people who score high on these points. According to Gloor, high-performing organizations work as collaborative innovation networks. Ideas progress from Collaborative Innovation Network (COIN) to Collaborative Learning Network (CLN) to Collaborative Interest Network (CIN).
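
Out of curiosity I tried to imagine how you would score those three characteristics automatically from something like an email archive. Below is a toy sketch; the message log and the composite scoring rule are entirely invented, and real COIN detection uses proper social network analysis rather than these crude counts.

```python
from collections import defaultdict

# Hypothetical message log: (sender, recipient, hours_until_their_reply).
# Data and scoring rules are invented for illustration only.
messages = [
    ("ada", "bob", 1), ("ada", "eve", 2), ("ada", "dan", 1),
    ("bob", "ada", 20), ("eve", "ada", 3), ("dan", "ada", 2),
]

contacts = defaultdict(set)       # connectedness: distinct people talked to
reply_times = defaultdict(list)   # responsiveness: how fast they answer
sent = defaultdict(int)           # sharing: how much they send out

for sender, recipient, hours in messages:
    contacts[sender].add(recipient)
    reply_times[sender].append(hours)
    sent[sender] += 1

def coin_score(person):
    """Crude composite: many contacts, many messages sent, fast replies."""
    avg_reply = sum(reply_times[person]) / len(reply_times[person])
    return len(contacts[person]) + sent[person] - avg_reply / 10

ranked = sorted(contacts, key=coin_score, reverse=True)
print(ranked[0])  # ada: most contacts, most messages, fastest replies
```

Even this toy version makes Gloor's point plausible: once interactions are logged, the "Edison types" fall out of a simple sort.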

Twitter is proving to be a very useful tool for this kind of analysis. Doing predictions for movies is relatively easy because people are honest in their feedback. It is much harder for things like stocks, because there people game the system with their analyses. Twitter can still be used here (e.g. by searching for “hope”, “fear” and “worry” as indicators for sentiment), as people tend to be honest in what they tweet.

Finally he made a reference in his talk to the Allen curve (the high correlation between physical distance and communication, with a critical distance of 50 meters for technical communication). I am sure this curve is used by many office planners, but Gloor also found an Allen curve for technical companies around his university: it was about 3 miles.

Interesting Encounters
Outside of the sessions I spoke to many interesting people at the conference. Here are a couple (for my own future reference).

It had been a couple of years since I had last seen Peter Sereinigg from act2win. He has stopped being a Moodle partner and now focuses on projects in which he helps global virtual teams in how they communicate with each other. There was one thing that he and I could fully agree on: you first have to build some rapport before you can effectively work together. It seems like such an obvious thing, but for some reason it still doesn’t happen on many occasions.

Twitter allowed me to get in touch with Aldo de Moor. He had read my blog post about day 1 of this conference and suggested one of his articles for further reading about pattern languages (the article refers to a book on a pattern language for communication which looks absolutely fascinating). Aldo is an independent research consultant in the field of Community Informatics. That was interesting to me for two reasons:

  • He is still actively publishing in peer reviewed journals and speaking at conferences, without being affiliated with a highly acclaimed research institute. He has written an interesting blog post about the pros and cons of working this way.
  • I had never heard of this young field of community informatics and it is something I would like to explore further.

I also spent some time with Barend Jan de Jong, who works at Wolters Noordhoff. We had some broad-ranging discussions, mainly about the publishing field: the book production process and the information technology required to support it, what value a publisher can still add, and e-books compared to normal books (he said a bookcase says something about somebody’s identity; I agreed, but noted that a digital book-related profile is far more accessible than the bookcase in my living room, note to self: start creating parody GoodReads accounts for Dutch politicians). We also discussed the unclear if not unsustainable business model of the wonderful Guardian news empire, and how we both think that O’Reilly is a publisher that seems to have its stuff fully in order.

Puzzling stuff
There were also some things at I-KNOW 2010 that were really from a different world. The keynote on the morning of the 3rd day was perplexing to me. Márta Nagy-Rothengass titled her talk “European ICT Research and Development Supporting the Expansion of Semantic Technologies and Shared Knowledge Management” and opened with a video message of Neelie Kroes talking in very general terms about Europe’s digital agenda. After that Nagy-Rothengass told us that the European Commission will be nearly doubling its investment in ICT to 11 billion Euros, after which she started talking about “Call 5” of “FP7” (apparently that stands for the Seventh Framework Programme), the dates before which people should put their proposals in, the number of proposals received, etc., etc., etc. I am pro-EU, but I am starting to understand why people can make a living advising other people on how best to apply for EU grants.

Another puzzling thing was the fact that people like me (with a corporate background) thought that the conference was quite theoretical and academic, whereas the researchers thought everything was very applied (maybe not enough research even!). I guess this shows that there is quite a schism between universities furthering the knowledge in this field and corporations who could benefit from picking the fruits of this knowledge. I hope my attendance at this great conference did its tiny part in bridging this gap.

Kaizen versus Good Enough

Arjen Vrielink and I write a monthly series titled: Parallax. We both agree on a title for the post and on some other arbitrary restrictions to induce our creative process. For this post we agreed to write about how Kaizen (the philosophy of continuous improvement) relates to the rise of the Good Enough paradigm. The post also has to include a non-digital example of Kaizen versus Good Enough. You can read Arjen’s post with the same title here.

The world is full of badly designed things. I find this infuriating. A little bit of thought by the designer could make many things so much easier to use. My favourite book on this topic is The Design of Everyday Things by Donald Norman. It has been years since I read the book, but I can still remember Norman agitating against all kinds of design flaws: why would an object as simple as a door need a manual (“push”)? I have therefore decided to start a new Twitter account titled unusablestuff in which I post pictures of things that fail to be usable.

Through Alper I recently learnt about the Japanese concept of Kaizen. This is a philosophy of continuous improvement that aims to eliminate waste (wasted time, wasted costs, wasted opportunities, etc.). Kaizen as described on Wikipedia is very much a particular process that you can go through with a group of people:

Kaizen is a daily activity, the purpose of which goes beyond simple productivity improvement. It is also a process that, when done correctly, humanizes the workplace, eliminates overly hard work [..], and teaches people how to perform experiments on their work using the scientific method and how to learn to spot and eliminate waste in business processes.

I’d also like to see it as being a mindset.

Another thing I recently read was a Wired article titled: The Good Enough Revolution: When Cheap and Simple is just Fine.

Cheap, fast, simple tools are suddenly everywhere. We get our breaking news from blogs, we make spotty long-distance calls on Skype, we watch video on small computer screens rather than TVs, and more and more of us are carrying around dinky, low-power netbook computers that are just good enough to meet our surfing and emailing needs. The low end has never been riding higher. [...]
what consumers want from the products and services they buy is fundamentally changing. We now favor flexibility over high fidelity, convenience over features, quick and dirty over slow and polished. Having it here and now is more important than having it perfect. These changes run so deep and wide, they’re actually altering what we mean when we describe a product as “high-quality.”

The article is full of examples where cheap, convenient and fast wins out over high quality. Think netbooks, MP3 files and the Flip videocamera.

Both ideas have their appeal to me, but at a superficial level they might seem to contradict each other. Why would you spend a lot of time trying to continually improve something, when good enough is just good enough? This contradiction isn’t truly there. Good enough operates at a higher level than Kaizen. Good enough means you design for a specific task, context, audience or zeitgeist and don’t add things that aren’t necessary. It is about simplicity and lowering the costs, but not about lowering the design effort. Kaizen is about the details: once you have decided to build a netbook (smaller screen, less processing power, but good enough for basic browsing on the net), you should still make sure to design it in such a way that people can use it with as little waste as possible.

Oscar in the classic bin

Let’s look at garbage bins as an example. A garbage bin is a relatively simple product. It is a bin with a lid that can hold a bag in which you put the garbage. Oscar lives in one of the classic bins. In essence this is good enough. You don’t need auto-incinerators, sensors that tell you when the bag is full, odour protection, etc. The simple bin-lid-bag concept does have a couple of issues and problems that can be solved with good design.

The Brabantia 30 liter Retro Bin is a bin that has done exactly this. What problems are solved with the design of this bin and how?

Problem: Sometimes you need two hands to get your garbage in the bin. If you have to scrape some leftover peels from a cutting board for example. In that case you have no hands free to lift the lid of the bin.
Solution: You create a bin with a foot-pedal. A foot-pedal also keeps you hands clean as you don’t have to touch the lid of the bin which is often dirty.

Problem: When the bin is empty, pressing the pedal might make the bin move.
Solution: A rubber ring at the bottom prevents the bin from moving on any flooring.

Brabantia Retro Bin

Problem: It can be irritating to constantly have to press the pedal if you want to throw away multiple things and have to walk back and forth to get the garbage to throw in the bin.
Solution: Hinge the lid in such a way that if it opens all the way it stays open. Allow this to be done by a persistent movement of the foot on the pedal.

Problem: If the bag gets really full (by pressing down the garbage) it might press against the mechanism that is used to open the bin, making it hard to open.
Solution: Make sure that the mechanism for opening the lid on the basis of the pedal movement lies completely outside of the bin and is unaffected by the pressure.

Problem: When you put in a new bag it often happens that there is air trapped between the bag and the bin. This makes it hard to throw away things as the full space of the bag is not used.
Solution: Put little holes in the top of the bags. This allows the air to escape when putting in a new bag.

Problem: There is often a vacuum between the bag and the bin when you try to lift a full bag out. This gives you the feeling that the bag is stuck.
Solution: Have little holes in the bottom of the sides of the bin. This way air can come in, preventing the vacuum. Brabantia rightly thought that holes at the side of a bin look a bit weird, so they have created an inner bin and an outer bin. This also solves an aesthetic (if not design) problem: the top edge of the bag being shown. This top edge now hides between the inner and the outer bin.

Problem: A lot of garbage has some liquid components. These liquids sometimes drip from the bottom of the bag.
Solution: Create an extra strong bottom for the bag of an extra impenetrable plastic.

Problem: When a bag is full it can be hard to tie it up.
Solution: First make sure that the bag is slightly bigger than the bin. Once the bag is out of the bin, the garbage has more space to spread and the top of the bag will have more space to tie up. Next, have a built-in string that can be used to tie up the bag (also highly useful for lifting out the bag). Make sure that this string is long enough to make for an easy knot.

I have had all these problems with garbage bins at some point, the Brabantia bin solves them all.

Many people will probably consider me a whiner (there are bigger problems in the world, can’t you get over these minor garbage issues?) or a weirdo (garbage bins, honestly?) and both are probably true, but that doesn’t negate my point. Getting a product on the market requires that it is designed. Now think about the extra design effort to create a bin that solves common bin problems. How many more man months for the Brabantia design than for the classic “Oscar bin”? Now imagine the small problems that a user of a classic garbage bin encounters and multiply them by all the garbage bin users in this world. Any idea how many times an hour something is spilled in this world because there is no pedal on the bin? People like to blame themselves (“I am so terribly clumsy”); I like to blame the designer. Why not just spend some extra design effort and get it right?

I want to draw an analogy with the design of software. I think the belief in Kaizen is what makes Apple products stand out. The example I love to show people is the difference between the calculator on Symbian S60 3rd edition (I used it on the Nokia E71, my previous phone) and the one on the iPhone (my current phone).

A calculator is a simple thing. Most people only need addition, subtraction, multiplication and division capabilities. Both default calculators deliver exactly this functionality. Nokia’s effort looks like this:

Nokia's default calculator

You need to use the keyboard (there are designated keys for the numbers) and the D-pad to make a calculation. The D-pad is necessary to navigate from one operator to the next. To do a simple calculation like 6 / 2 = 3 requires you to press eleven buttons!

The iPhone calculator looks like this:

iPhone's default calculator

You just use your finger to tap the right numbers and operators. 6 / 2 = 3 only requires four finger taps.

It is not just the touch interface that makes it possible to have a great working calculator. I managed to download another calculator for the Nokia phone, Calcium. It looks like this:

Calcium calculator

This calculator makes clever use of natural mapping to create a calculator that is as easy to use as Apple’s, if not easier. 6 / 2 = 3 indeed takes four button presses. Nokia could have made this. The fact that Nokia was willing to ship a phone with the default calculator as it was is one of the reasons why I have a hard time believing they have a bright future in the smartphone space.

In a next post I might rant about how many designers think the whole world is right-handed. Do you have any thoughts on design?

Written by Hans de Zwart

03-03-2010 at 10:00

Mozilla and the Open Internet

The Mozilla Foundation

For some reason I have recently equated the Mozilla foundation to Firefox. Sitting in the Mozilla room at Fosdem for a couple of hours has cured me of that.

Mitchell Baker, chairperson of the Mozilla foundation, talked about the right to self-determination on the Internet. She explained that having a completely open (meaning free as in freedom) stack to access the Internet does not necessarily mean that you have ownership over your digital self. There is a tendency for web services on the net to be free as in free beer (think Facebook), without giving users true ownership of their data. Mozilla has started a couple of projects to try and extend openness from the Internet-accessing device to the net itself, trying to make sure that at least one slice of the net is open. Mozilla Weave is an example project that aligns with this goal. I really like the fact that Weave does client-side encryption of all data and that it is offered as a service by Mozilla but can also be installed locally.

Tristan Nitot then talked about “hackability”. He actually doesn’t like to use that word because it has negative connotations for the media. What he means by it is “generativity” (see The Future of the Internet and How to Stop It), but that word is even harder to understand. His argument was relatively simple though: vendors aren’t always creative in imagining what their products can be used for. The telephone, for example, was thought to be used mainly for listening to opera music. It is important that people are allowed to play with technology, because that is where innovation comes from. Tristan finished his talk with a slide with the following text: “Hackability is getting the future we want, not the one they are selling us.”

Paul Rouget then demoed a couple of very interesting hacks using Firefox with Stylish, Greasemonkey and some HTML5 functionality. A lot of his work can be found on the Mozilla Hacks site. An example is this HTML5 image uploader:

Finally we had Robert Nyman introduce HTML5 to us. I thought it was interesting to see that it was Mozilla, Apple and Opera that started the WHATWG and got the work on creating the HTML5 spec started. Their work will be very important (for example, it might mean the end of Flash) and should make a lot of web designers’ lives less miserable. Robert’s presentation is on Slideshare:

Some things will be much easier in HTML5: what caught my eye were some new elements (allowing more semantic richness, e.g. elements like <header> or <aside>), the new input types which can include client-side validation and the new <video> and <canvas> elements.

Finally I would like to point you towards the Mozilla Manifesto. This is the introduction to the document which is available in many languages:

The Mozilla project is a global community of people who believe that openness, innovation, and opportunity are key to the continued health of the Internet. We have worked together since 1998 to ensure that the Internet is developed in a way that benefits everyone. As a result of the community’s efforts, we have distilled a set of principles that we believe are critical for the Internet to continue to benefit the public good. These principles are contained in the Mozilla Manifesto.

Mozilla has endeared itself to me again. Cool people, great projects, an important cause.

Written by Hans de Zwart

07-02-2010 at 15:26

Why Chromium is Now My Primary Browser

Arjen Vrielink and I write a monthly series titled: Parallax. We both agree on a title for the post and on some other arbitrary restrictions to induce our creative process. For this post we agreed to include our personal browser histories in the post. You can read Arjen’s post with the same title here.

Chromium Logo

If you are not interested in Browsers and/or usability, I would suggest you don’t bother to read this post.

I cannot exactly remember the first time I used the Internet. It probably was in 1996 in the library at the Universiteit Utrecht. I wasn’t particularly aware of the browser I was using, but I am quite sure that it was Netscape Navigator with which I did my Altavista searches. I used Netscape throughout my education, only to switch to Internet Explorer 5 when I got my own computer with Windows 98 and a dial-up Internet connection. I then used nothing but IE until I read about Mozilla Firefox in a magazine in 2004. Through Moodle I had started appreciating open source software and I liked working with Firefox and its tabs. I stuck with Firefox for a year or so, feeling quite the rebel whenever a site would only load in IE. At some point I noticed how much faster IE was than Firefox. That is when I switched to Avant Browser, a freeware skin around the IE browser engine which included tabs and some other advanced features. A little while later (somewhere in late 2005 or early 2006) I learnt about Opera. Opera had a lot of appeal to me. I liked how their developers pushed so many innovations in the browser space: tabbed browsing, advanced security features and mouse gestures were all inventions of Opera. I loved how fast it was and how many features they managed to cram into so few megabytes. Its cross-platform nature allowed me to stay with Opera when I permanently switched to Ubuntu in the summer of 2006. I switched back to Firefox in early 2007 because of my slightly more hardcore open source attitude and because of its wonderful extensions. The latter allowed me to keep all the functionality that I loved about Opera and more.

About two weeks ago I switched to Chromium. This is Google’s relatively new open source offering in the browser market. I am able to automatically download new builds every day through the PPA for Ubuntu Chromium Daily Builds. Even though it is still alpha software, it is highly usable.

So why did I switch? I think there are three reasons:

1. Performance
For the past couple of months my private computing has been done on a Samsung NC10. This Intel Atom based netbook is slightly underpowered. You really notice this when you are doing things like recoding a video or doing some CPU-intensive image editing. I also noticed it terribly in Firefox. Things like Google Reader, DabbleDB (watch that 8 minute demo!) and the WordPress admin interface were nearly unusable. A cold start of Firefox (the 2.x version that comes with Ubuntu 9.04) takes nearly a minute. Chromium on the other hand starts up in a couple of seconds and is very spiffy with Javascript-heavy web apps.

I tried to quantify my unmistakable feelings with some benchmarking. I used Peacekeeper, but Firefox could not finish the benchmark and would crash! I then used the SunSpider Javascript benchmark and got a total score of 3488.8ms for Chromium and a total score of 18809.6ms for Firefox. This means that in certain cases Chromium will load something in less than one fifth of the time that Firefox 2.x needs.
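
The "one fifth" claim follows directly from the two SunSpider totals (a quick sanity check, nothing more):

```python
# SunSpider totals from the runs above (milliseconds; lower is better).
chromium_ms = 3488.8
firefox2_ms = 18809.6

ratio = chromium_ms / firefox2_ms
print(f"Chromium needs {ratio:.0%} of Firefox 2.x's time")  # about 19%
assert ratio < 1 / 5  # i.e. "less than one fifth of the time"
```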

While writing this post I decided to try installing Firefox 3.5 (without add-ons) and see how that would perform. After a sudo apt-get install firefox-3.5 I could start Firefox by selecting “Shiretoko Web Browser” in the “Internet” menu. The total score was 5781.2ms, a major improvement, but still more than one and a half times slower than Chromium. Its interface is also still less responsive than I would like it to be.

Another nice aspect of Chromium’s performance is that each tab is its own process. This so-called multi-process architecture isolates problem webpages, so that one crashing Flash page does not affect the other browser tabs, something that happened very often to me with Firefox.

2. Screen Real Estate
Another thing that a netbook lacks is pixels. My screen is 1024 pixels wide and 600 pixels high. Especially the lack of height is sometimes taxing. I have done a lot of things in Ubuntu to mitigate this problem (if you are interested I could write a post about that) and I had to do the same with Firefox.

In Firefox I used Tiny Menu, chose small icons, used no bookmarks and combined many toolbars into one to make sure that I have more content and less browser. To my surprise I had to do nothing with Chromium and still got a bigger canvas with a bigger font in the address bar! Compare the screenshots below to see the differences: 

 

Screenshot Firefox (click to enlarge)

Screenshot Chromium (click to enlarge)

 

Chromium shows more of the page and accomplishes this by doing a couple of smart things:

  • There is no status bar. I could have turned the status bar off in Firefox, but I need to see where a link is pointing to before I click on it. Chromium shows this information dynamically as soon as you hover over a link. When you don’t hover it shows nothing.
  • The tabs are moved into the title bar. It looks a bit weird for a while, but it frees up some very valuable space.
  • Some things only appear when you need them. The bookmark bar, for example, only shows up when you open a new tab.

3. It is a fresh look at what a browser should/could be
Most of my time behind my computer is spent using a browser. More and more of the applications I use daily have moved into the cloud (e.g. mail and RSS reading). It is thus important to have a browser that is made to do exactly those functions.

The developers of Chromium have looked at all aspects of a traditional browser and have rethought how they work. A couple of examples:

  • The address bar is actually a tool with four functions. It contains your web history, typing some terms will execute a search in your default search engine (saving me two characters compared to how I search in Firefox), you can type a normal web address and you can use keywords to search. If I type w chromium in the address bar it will search for chromium in Wikipedia. The keyword search also works in Firefox, but Chromium has a prettier and clearer implementation.
  • When you open a new tab, you see a Dashboard of sites you use often (a variant of another Opera invention). That page also conveniently displays recently closed tabs with a link to your browsing history. The history page has excellent search (it is Google after all!) and has that simple Google look.
  • The downloads work in a particular way. They automatically save in a default location unless you tick a box confirming that you always like to open that type of file from now on. This takes a little getting used to (I like saving my downloads in different folders), but once again the download history is searchable and looks clean.
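
That keyword-search idea is simple enough to sketch: map a prefix to a search-URL template and fall back to the default search engine otherwise. The keyword table and URL templates below are my own illustrative guesses, not Chromium's actual configuration.

```python
from urllib.parse import quote_plus

# Hypothetical keyword table; "w" stands in for the Wikipedia example above.
KEYWORDS = {
    "w": "https://en.wikipedia.org/w/index.php?search={}",
    "g": "https://www.google.com/search?q={}",
}

def omnibox(text, default="https://www.google.com/search?q={}"):
    """Resolve address-bar input: keyword search if it matches, else default search."""
    head, _, rest = text.partition(" ")
    if head in KEYWORDS and rest:
        return KEYWORDS[head].format(quote_plus(rest))
    return default.format(quote_plus(text))

print(omnibox("w chromium"))   # a Wikipedia search for "chromium"
print(omnibox("open source"))  # plain terms fall through to the default engine
```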

In conclusion: Chromium is a browser in which some hard choices were made. No compromises. That means that I, as a user, have to worry about less choices and settings and can focus on being more productive. Making tough interface design choices can be a very successful strategy: witness Apple’s iPod.

For now I will be using Chromium as my primary browser and will use Firefox when I need certain functionalities that only Firefox add-ons can provide.

I am looking forward to what the browser future holds!

Online Educa’s Platinum Sponsor Fronter is a Closed Source Proprietary Product

The most Deceptive Sign in LA

Warning, this is a bit of a rant…

I hate false advertising. That is why I was delighted to read that Apple had to pull an iPhone ad recently (see: What the banned iPhone ad should really look like).

I am currently at the Online Educa in Berlin where Fronter is the Platinum sponsor. I found their brochure in the conference bag and was appalled by what I read.

Fronter has decided to adopt the discourse of open source software without actually delivering an open source product. Recently, this has been a strategy for many companies who produce proprietary software and are losing market share to open source products. This is the first time that I have seen it done in such a blatant way though.

Some quotes from their brochure:

The essence of Fronter’s Open Philosophy is to give learning institutions the benefit of an open source and open standard learning platform – while at the same time issuing guarantees for security, reliability and scalability, all included in a predictable fixed cost of ownership package.

And:

Fronter’s Open Platform philosophy combines the best of two worlds; innovation based on open source, with guarantees and fixed cost of ownership issued by a corporation.

Finally:

Open source: The Fronter source code is available to all licensed customers.
Open guarantee: In contrast to traditional open source products, Fronter offers tight service level agreements, quality control and a zero-bug regime.

I am sure the Advertising Standards Authority (ASA) would not appreciate these untruths. So let us do some debunking.

The term open source actually has a definition. The Open Source Definition starts with the following statement: “Open source doesn’t just mean access to the source code.” It then continues by listing the ten conditions that need to be met before a software license can call itself open source. Many of these conditions are not met by Fronter (e.g. free distribution, allowing distribution of the source code or allowing derived works).

These conditions exist for a reason. Together they facilitate the community based software development model which has proven itself to be so effective (read: The Cathedral and the Bazaar if you want to know more). Just giving your licensees access to the source code, does not leverage this “many eyeballs” potential.

I really dislike how they pretend that open source products cannot have proper service level agreements or quality control. SLAs and QA are exactly what European Moodle partners like eLeDia, CV&A Consulting, MediaTouch 2000 srl and my employer Stoas (all present at this Educa) have been delivering over the last couple of years.

What is a “zero-bug regime” anyway? Does it mean that your customers are not allowed to know about any of the bugs in your software? Or is Fronter the only commercially available software product in the world that has no bugs? I much prefer the completely transparent way in which Moodle deals with its bugs.

Fronter people, please come and meet me at the Moodle Solutions stand (E147 and E148). I would love to hear you tell me how wrong I am.

Written by Hans de Zwart

04-12-2008 at 02:35

The Chumby: sexy open hardware

The Chumby

I have a problem with locked-down hardware. It is not that I don’t like Apple’s products (the iPod Touch is a wonderful piece of hardware), I just don’t like the way Apple’s products treat their customers. I once had to help somebody whose Windows laptop had died. She bought a new Apple laptop and wanted to move her music from her iPod to her new laptop: impossible! It took Linux as an intermediary to get it done.

That is why I love the concept of open hardware. I personally own a Neuros OSD (great when you are on a holiday and want to watch your own videos on the hotel TV) and, for the past couple of months, a Chumby.

The Chumby is a leather-clad computer the size of a coffee mug. It has a touch screen, an accelerometer, a microphone, stereo speakers, two USB ports, a Wi-Fi connection and a nice soft button on top.

So what can it do? I see it as having a couple of distinct functions. It is:

  • An excellent alarm clock with an easy interface. You can set multiple alarms and decide whether you want to wake up with music or a tone. You can even set the length of your snooze.
  • A relatively decent speaker set for your iPod.
  • An Internet radio player. It is full of Shoutcast and other streams.
  • A digital picture frame for photos that live on the Internet (e.g. Flickr, Facebook, Picasa). It can display photos from a particular user, but also from a particular tag.
  • An RSS reader.
  • And finally, an Internet enabled device for any kind of content.

The last point is the important one. You can load your Chumby with widgets, and there are hundreds of them available. You use a web-based interface to group these widgets into channels, and then set your Chumby to watch a particular channel.

I have created this virtual Chumby (please click the link, it opens in a new window!) to give you an idea of what these widgets look like. This Chumby shows a channel I created especially for this blog post, containing a couple of example widgets, each shown for about 20-45 seconds. It starts with some random Flickr images showing my favourite tag: decay. You can interact with the screen to move to the next or the previous tag. Next up is Twistori, which displays recent tweets containing the word “believe”; if you prefer “love”, “hate”, “think”, “feel” or “wish”, you can click on those words to switch. The Chumby then displays recent top news stories from Google News, followed by this blog in Chumby’s RSS reader (you might see this very entry). It finishes off with the weather in Amsterdam (including a forecast), a webcam looking at Abbey Road (do you see people trying to imitate the famous Beatles cover?), some videos from the excellent VideoJug and the classic blue ball machine animation.

As you can see, the Chumby mostly pulls content in. My colleague Job Bilsen had the interesting idea of using it as a device for pushing content to people. He envisions companies putting Chumbies on the desks of their employees and sending them important updates about things like compliance, RSI and internal news. I can already see a plug-in for a VLE like Moodle. Imagine doing your homework on your laptop while your Chumby on your desk displays updates from your courses and plays your favourite Last.fm channel (they are working on a Last.fm widget)!

The best thing about the Chumby: the specifications are completely open. I had to get a European adapter for it, and the precise information about the power supply is listed on their website. You are even encouraged to hack it! Use it as a web server or log in over ssh? No problem.

Where do you get one? Currently the Chumby is only available in the US. They are in the process of complying with all the European rules and regulations, so it shouldn’t take much longer before you can buy one over here as well. Want one now? eBay is your friend!

Written by Hans de Zwart

01-10-2008 at 11:35
