Most conferences and workshops report on success stories only. This is somewhat perplexing:
* Isn’t research supposed to deal with challenging approaches?
* Aren’t we supposed to take risks?
* Doesn’t that imply that things should occasionally not work out as hoped?
* If nothing ever goes wrong, then maybe we are not taking enough risks?
* If things do occasionally go wrong, then why do we never hear about those failures at conferences and workshops?
More importantly: what can we learn from failures in E-Learning projects? How do we utilize failures to improve our work?
The funny thing is: I’ve rarely gotten more positive reactions to anything I have helped organize. It seems like everybody agrees that this is an important issue. Yet the number of submissions was surprisingly low.
Here is what I would like to ask you to consider:
* Any suggestions on how to improve the way we report on what we learn from things that did not work out as planned?
* Any lessons you’ve learned the hard way? You can report on them at the workshop or leave a comment here…
To “lead by example”, here is one of our lessons in this category: I personally tried for many years to convince people to contribute extensive amounts of metadata to describe their resources. I was actually sometimes kind of successful: people did enter the metadata in the electronic forms we provided. But the problem was that this success was always short-lived: after a few weeks, they would revert to not entering anything beyond the very basic title and maybe the author information. After a long time and much resistance, I concluded that “electronic forms must die” and that we need to find other ways to collect the metadata. That was a very tedious process, but it did lead to our quite successful work on automatic metadata generation.
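To make the idea of automatic metadata generation a bit more concrete, here is a minimal sketch in Python. The function name, the heuristics (title from the first non-empty line, keywords from word frequency), and the stopword list are all my assumptions for illustration, not the actual system described above:

```python
import re
from collections import Counter

# A tiny stopword list -- a real system would use a proper one (assumption).
STOPWORDS = {"the", "and", "for", "with", "this", "that", "are", "was", "not"}

def generate_metadata(text: str, num_keywords: int = 5) -> dict:
    """Derive basic metadata from a document's text automatically,
    instead of asking users to type it into a form.

    Heuristics used in this sketch:
    - title: the first non-empty line of the document
    - keywords: the most frequent non-stopword terms
    """
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    title = lines[0] if lines else ""
    # Lowercase words of three or more letters.
    words = re.findall(r"[a-z]{3,}", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    keywords = [w for w, _ in counts.most_common(num_keywords)]
    return {"title": title, "keywords": keywords}
```

The point is not the specific heuristics, but the workflow: metadata is extracted from what authors already produce, and at most reviewed or corrected by a human, rather than entered from scratch into a form.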
Again, you may want to share some of your lessons learned… You already have my attention!