LazyWeb Challenge

Much can, and will, be said about the lazy web. One thing I have learned is that the people talking about LazyWeb are probably the least “lazy” people by any normal standard. It took Ben Hammersley only 4 hours to track down my comment about his offer (and I didn’t even trackback ping him), and he now commands me (YES! You must! Now! Now! Now!) to come up with something. Guess I’d better do so then (does it work, the trackback?).

Hmmm. What is a ‘lagom’ lazyweb idea, Ben? Lagom? I know you live in Sweden, and I guess you have learned that word by now? No? Lagom denotes the space between too much and too little, and is a very Swedish word. Almost like the Danish ‘hygge’ (1, 2, 3), though it means something else. It was one of the words that took me a while to understand, but once you live in Sweden, you get it quickly, I guess (I lived in Sweden 1997-2001).

The challenge I am thinking of is perhaps too ambitious, but I have an idea that it is possible to make something really interesting with RSS, trackback pinging, web services and stuff like controlled vocabularies (“A controlled vocabulary is a way to insert an interpretive layer of semantics between the term entered by the user and the underlying database to better represent the original intention of the terms of the user.”). It would also be interesting with some more plain/manual taxonomy and semantics, but that may be too scary for lazy people, so let’s not go too much into it. What I have in mind here is something more or less automated. I am not sure how this would work, or what would be needed to make it happen, but I can see lots of possibilities here (well, maybe not right there, but in the idea).
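To make the controlled-vocabulary bit concrete: a minimal Python sketch of that interpretive layer, assuming nothing more than a hand-maintained term map (all the names and terms here are my own made-up examples, not anything that exists):

```python
# A toy "interpretive layer": map free-form user terms to controlled terms.
# The mapping is a made-up stand-in; a real system would load a maintained
# thesaurus, or "learn" pairs from other blogs it meets tracking around.
CONTROLLED_VOCABULARY = {
    "apple": "fruit",
    "granny smith": "fruit",
    "oak": "tree",
}

def normalize_term(user_term):
    """Return the controlled term for a user-entered term, falling back
    to the term itself when the vocabulary has nothing to say."""
    return CONTROLLED_VOCABULARY.get(user_term.strip().lower(), user_term)
```

So a link tagged “apple” would end up filed under “fruit”, while unknown terms pass through unchanged.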

Did I explain the idea in a way that makes any kind of sense? Probably not. Perhaps a quick scenario would help? It would go something like this:

Mid-morning, Lazy Peter gets a mail from his boss saying “working home, check this, write memo now, skip lunch if need be”, containing a link. Peter visits the link. He finds it remotely interesting, but is lazy and decides to add it to his lazyblog and then go for lunch. Adding the link is done in two clicks – one on the bookmarklet “Lazy, Later!”, and one to confirm the addition. The bookmarklet fetches basic link information automatically, and he almost never needs to edit anything in the form, so he is a bit annoyed with the confirmation click, but since the confirmation itself serves a purpose, he accepts this “waste” of time.

After getting some coffee and a cigarette, Peter decides to take lunch before getting back to his boss. After lunch, Peter goes to his lazyblog. He clicks the “Less Lazy” bookmarklet, which opens a window with his blog entry in a special context (LazyEdit mode). Unlike many others, who don’t like to use LazyEdit as the standard view setting, Peter likes the one-click availability of all the lazy services (he even runs it in Full Mode).

His favorite on a really lazy day like this is the “Respond to boss” service. This presents findings from the “yzaliser”, the reverse of laziness, his energetic trackaround bot, which has analysed the link he submitted earlier and now continuously tracks relevant information, news, other links and, of course, blogs that talk about the same issue, and offers automatically generated stuff like a response to his boss, illustrated below with imaginary code:

Dear boss,
if LinkAuthorFOAF==BossName
[Kiss-ass factor: Relatively high] Thanks for pointing to the link Title (URL), which as you know is an interesting article about getCoreAttribute, that we should all read. It is a good point that getKeyword[0] is the central theme in getCoreAttribute, but it is also important to beware of how getKeyword[1] plays a role, a fact the article explains well. The author has clearly read and understood the basic, important works on this issue, namely:
get5GoogleSiblings.Keyword[1]
As I read Title, it occurred to me that getMostPopularAttribute.allSiblings.Title is something we should monitor more closely. I skipped lunch to create a new subsite on our public website with relevant resources, see the result here: open.window(get25GoogleSiblings.Keyword[1+2+3]).CreatePage.
else
Boring. get2GoogleSiblings.Keyword[1] are much better.
endif
I closely monitor news about getCoreAttribute, and also want to draw your attention to this important news story:
get1GoogleNews.Keyword[1].
ifDaypopRanking.getRelatedLinks.FollowAll
print “More:”+5RelatedLinks
CheckRDFforTB2LazyWeb
ifYes.MarkToSendTBPing
forAll.createRSS4LazyVocabulary

Yours,
Peter
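Stripping away the joke, the letter is really just a fill-in-the-blanks template with one conditional branch. A minimal Python sketch of that skeleton, with all the imaginary services (the FOAF author check, getKeyword, the Google lookups) reduced to a plain dict of my own invention:

```python
def draft_response(link, boss_name):
    """Fill in a reply template for a submitted link.

    `link` is a made-up stand-in for the imaginary lookup services:
    a dict with 'title', 'url', 'author' and 'keywords' keys.
    """
    if link["author"] == boss_name:  # the LinkAuthorFOAF == BossName branch
        body = (
            f"Thanks for pointing to {link['title']} ({link['url']}), "
            f"which as you know is an interesting article about {link['keywords'][0]}."
        )
    else:  # the dismissive branch
        body = f"Boring. There are much better pieces on {link['keywords'][0]}."
    return f"Dear boss,\n{body}\nYours,\nPeter"
```

Everything interesting, of course, hides in where the dict’s values would come from.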

Get it? Now it’s probably even more unclear what I am thinking of, because I got a bit carried away and mixed several things together. At least these:

* What I called getCoreAttribute, by which I mean some vocabulary magic, is basically the idea of a kind of More Like This from Others combined with category sniffing, a bit of pattern thinking and some “law and order” (as in controlled, standardised). Huh? You bet. In plain English: if I submit a link about apples, the system should find a way to know that I am talking about fruits. It will be told that apples are fruits, either by its own vocabulary or by “learning” from others it “meets” tracking around. I have no idea how this would work, so there’s the challenge!
* The bookmarklets (at least the first; the second is imaginary, but would be easy to make) are of course based on my recent coding adventures. The code I produced actually does the work described here, more or less – and will do more of it as more people start using metatagging.
* The Google stuff could be composed using web services. The Google API will do some of the work, but it still doesn’t cover the news service, does it?
* Maybe, just maybe, WholeSystem would be useful in some kind of implementation of this (in MT) … OK, maybe not, but it looks pretty cool anyway.
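One ingredient that would need no inventing is the trackback ping itself (the MarkToSendTBPing step in the imaginary code): the TrackBack specification defines it as a plain HTTP POST of form-encoded title/url/excerpt/blog_name fields. A sketch that builds such a ping without sending it (the function name and example URLs are mine, not from any spec):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_trackback_ping(ping_url, entry_url, title, excerpt, blog_name):
    """Build (but do not send) a TrackBack ping: an HTTP POST of
    form-encoded fields, as described in the TrackBack specification."""
    body = urlencode({
        "title": title,
        "url": entry_url,        # the permalink of the entry doing the pinging
        "excerpt": excerpt,
        "blog_name": blog_name,
    }).encode("utf-8")
    # Sending it would be urllib.request.urlopen(request); the pinged server
    # answers with a small XML document whose <error> element is 0 on success.
    return Request(
        ping_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded; charset=utf-8"},
    )
```

A `Request` with a `data` argument defaults to POST, which is exactly what the spec asks for.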

Want more challenges? I have already mentioned that I sure could use some help with my bookmarklet, if anyone out there is interested. On the more academic side of it, I must start looking at intellectual property rights, and would love some help here too. If I had written the code from scratch, I’d use it to learn more about GPL and CC and whatnot. But whose code is it? I’ve taken bits and pieces from a number of sources and put them together in a new context. I have mentioned that I got the idea via Jon Udell, and I will credit him at all times. I also borrowed snippets from others. How do I licence the bookmarklet, and do so in a way that is lawyer-readable, human-readable and computer-readable?

I’m also still struggling with more like this from others, and could use some assistance there – I would appreciate it if any kind soul out there would give me a hand.
