Google’s latest mail API

As Brad spotted, my previous post strong-armed Google into introducing a new mail migration API. Well, there was correlation, even if I'm not so sure about the causation. Looking through Google's latest offering, it's clearly aimed at one-way migration from other systems to Google Apps, rather than being a two-way interoperability standard that would allow a mix of Exchange and Gmail use within the same system.

To quote from the announcement, they introduced it because "some customers are reluctant to step into the future without bringing along the email from their past". I'd imagine there are also some customers who are 'reluctant to step into the future' if it's a one-way trip for all their email data, locking them into Google's OS going forward. Email, calendars and contacts are crying out for a nice open integration layer. The information you need is comparatively well-defined and bounded, and there are already supported standards for the components of the problem, like IMAP, vCard and iCalendar.
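
To make that concrete, here's a minimal sketch of how far the existing standards already go (the server name and credentials are placeholders, not anything a particular provider exposes): plain IMAP from Python's standard library is enough to pull message headers out of any cooperating mail system.

```python
# A minimal sketch, not tied to any particular provider: the server name and
# credentials below are placeholders. Plain IMAP (standard library only) is
# already enough to read message headers out of a cooperating mail system.
import email
import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com")          # hypothetical server
conn.login("user@example.com", "app-password")        # placeholder credentials
conn.select("INBOX", readonly=True)

_, data = conn.search(None, "ALL")
for num in data[0].split()[:10]:                      # just the first ten messages
    _, msg_data = conn.fetch(num, "(RFC822.HEADER)")
    headers = email.message_from_bytes(msg_data[0][1])
    print(headers["Date"], headers["From"], headers["Subject"])

conn.logout()
```

Calendars and contacts have equally well-trodden formats in iCalendar and vCard, so the missing piece is the will to expose the data, not the technology.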

Microsoft has always made strong developer support a strategic priority. This is great for third-party vendors, but arguably it was a factor in a lot of their security and usability issues. Google doesn't feel the same need to look after external developers, as shown by the removal of their search API. They'd much rather simplify the engineering and user experience by avoiding the clutter of hosting third-party code within their apps.

Even though it's ugly and COM-tastic, it's possible with enough effort to dig deep into Exchange's data stores and build deeply integrated tools. Moving to Google Apps (or most other SaaS apps I've seen), you lose that level of access. My hunch is that in a few years' time we'll see the same customer pressure that drove MS to open their enterprise tools to customization pushing SaaS companies to either offer APIs or lose business.

Colorado trip

Liz joined me in Denver on the last day of Defrag, and we spent the rest of the week exploring Colorado together. We started off that evening with a visit to Pints Pub. When I first came to the US, I felt very odd visiting simulacra of British tea-shops and pubs; they always felt like such an exaggerated, "Mary Poppins" version of the old country. I've been over here long enough now that I'm very happy to find even a half-decent bangers and mash, and the other details are no longer jarring.

Pints was actually a great place: they had an amazing selection of scotch, some really impressive draught beers, and good food. They'd got the right atmosphere too; there were the obligatory pictures of Churchill and policemen, but the furnishings, fittings and lighting were all very pub-like.

We then spent three days up in the mountains, doing some early-season snowboarding at Loveland and Keystone, staying at the Inn at Keystone. It was so early in the season that there were only a few runs open, and they were very icy, so it was a pretty challenging experience. Liz ended up getting pretty bruised and battered from falls, but we both found the martinis on offer at the Inn’s bar very soothing.

We spent the last two nights in Boulder. The hotels were packed, so we ended up at the slightly tattered Golden Buff Lodge. It wasn't a bad location, with a nice Indian restaurant nearby, but the heater sounded like a helicopter taking off when it started, there was no cable for the internet, and there was no sound insulation in the ceiling, so just the upstairs neighbors walking around was enough to wake us up. We still had a great time in Boulder though; we hiked from the Rangers Cottage up the Chautauqua trail on the first afternoon, and then did a big loop from NCAR up to Bear Peak and back on the Saturday.

The Bear Peak loop was tough, with about 2,300 feet of elevation gain over about four miles and starting at over 6,000 feet. The gain was concentrated in the climb up the peak itself, and the final half mile was a really steep uphill scramble. Here’s Liz coming down from the peak:
[Photo: Liz coming down from Bear Peak]

We took the Fern Valley trail back to NCAR. It was shorter than the Bear Canyon route we'd taken up, but involved about a mile and a half of extremely steep and slippery downhill that would have been a lot easier if we'd brought our hiking poles. The view from the top of Bear Peak was incredible though; both looking out towards the mountains and back towards Denver, it was beautiful.

It was fun people-watching on the trail too; everybody looked like they could plausibly be part of the university faculty or student body, and there was at least a dog for every person we saw, which made us think again about getting one ourselves. Even better, Boulder has a scheme where you can have your dog off-leash on the trails if you have a special tag that proves it's under your sight and voice control. I feel sorry for the dogs here in California; they never get to have any fun out in the mountains like that.

Google, Yahoo and MSN Mail APIs

Whilst there's no official Gmail API, there is an unsupported but widely used interface: the functions used by mobile phones to access Google mail. Luckily for me, a lot of work has already been done to figure out the format and protocol; probably the best documentation is the source of the libgmailer PHP project. The downside of it being unofficial is that it keeps getting broken by Google's changes, but there's an active community using it who seem to patch it up again very quickly.

Yahoo actually has an official mail API, but it suffers from a couple of serious flaws. First there’s this language at the start of the documentation: "You may not use the Yahoo! Mail Web Service API to mine or scrape user data from the user’s Yahoo! account." Umm, so I can access the data but can’t ‘mine’ or ‘scrape’ it, whatever that means? Does that include creating a social graph from their mailbox? It certainly sounds like it.

Secondly, some basic functions like GetMessage to grab information about an individual email are only available to premium accounts. I’d imagine that would instantly cut down the potential audience by an order of magnitude.

MSN/Hotmail used to have a nice undocumented API through WebDAV/HttpMail. Unfortunately they shut down access for non-premium customers, apparently in response to spammers. There are reports (bottom of article) that it's still possible to use it to download messages, just not to send them, but I haven't tested that. It looks like the only alternative is screen-scraping.

This is a great example of the 'separate data silos with unusable content' problem that Doc Searls discussed in his Defrag talk. The user could gain a lot from allowing other services access to their mail, for example decent external mail integration with Facebook, but it's not in the interest of any of the companies that physically hold their data to allow that.

Lots of interesting mail/social graph buzz

As Brad says, it is pretty obvious once you connect the dots, but I was still interested to see the NY Times article about the big players looking at their email services and figuring out that they're not far from having their own social networks.

It was good to learn about a new site covering this area, Email Dashboard, thanks to the comments section of Feld Thoughts. I'll also need to write up what I learnt about Trampoline Systems and ClearContext at Defrag, but that's for another post.

Defrag: Visualizing social media: principles and practice

Matthew Hurst, from Microsoft, gave the second Defrag talk, on the topic of visualizing social media. He described JC Herz's first talk as complementary to his, covering some of the same problems but from a different angle. He started by laying out his basic thesis: visualization is so useful because it's a powerful way to present context for individual data points. It ties into the theme of the conference because web 1.0 was a very linear experience, flicking through pages in some order, while 2.0 is far more non-linear, and visualizations can help people understand the data they now have to deal with by placing it in a rich context.

He then ran through a series of examples, starting with the same blog map that he'd created and JC had used as a negative example in her talk. He explained the context and significance of the images, as well as the fact that they were stills from a dynamic system, but did agree that in general these network visualizations have too much data. He introduced a small 'Homer' icon that he added to any example that produced an 'mmmm, shiny, pretty pictures' reaction in most people without necessarily communicating any useful information.

The next example was a graph of blogosphere traffic on the Gonzales story, generated by BuzzMetrics. This was a good demonstration of how useful time can be in a visualization. After that came an impressive interlocked graph, which, after giving the audience a few seconds to ooh and aah over it, he revealed to be a piece of '70s string art! A pure Homer-pleaser, with no information content.

The next picture was a visualization of the changes in Wikipedia's evolution article over time. This was a really useful image, because you could see structures and patterns emerge in the editing that would be tough to spot any other way. There'd been an edit war over the definition of evolution, and the picture made it clear exactly how the battle had been waged.

TwitterVision got a lot of attention, but isn't much use for anything. It gives you information in a fun and compelling way, but unfortunately it's not information that will lead you to take any action. To sum up the point of showing these visualizations, he wanted to get across that there are a lot of techniques beyond network graphs.

He moved on to answering the question "What is visualization?" His reply was that the goal of visualization is insight, not graphics. Visualizations should answer questions we didn't know we had. He returned to the blogosphere map example to defend it in more detail. He explained how, once you knew the context, the placement of and linkages between the technology and political parts of the blogosphere were revealed as very important and influential, and how the density of the political blogosphere revealed the passion and importance of blogs in politics.

(Incidentally, this discussion about whether a visualization makes sense at first glance reminds me of the parallel endless arguments about whether a user interface is intuitive. A designer quote I’ve had beaten into me is ‘All interfaces are learnt, even the nipple’. The same goes for visualization, there always has to be some labelling, explanation, familiarity with the metaphors used and understanding of the real-world situation it represents to make sense of a picture. Maps are a visualization we all take for granted as immediately obvious, but basing maps on absolute measurements rather than travel time or symbolic and relative importance isn’t something most cultures in history would immediately understand.)

He also talked about some of Tufte's principles, such as "Above all else, show the data". He laid out his own definition of the term visualization: it's the projection of data for some purpose and some audience. There was a quick demonstration of some of the 'hardware' that people possess for image processing, which visualizations can take advantage of. A quick display of two slides, each containing a scattering of identical squares but one with a single small circle in place of a square, showed how quickly our brains can spot some differences using pre-attentive visual processing.
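
Here's my own reconstruction of that pop-out demo, assuming matplotlib is available (this isn't Hurst's actual slide): a field of identical squares with one circle swapped in, and the odd shape jumps out long before you could scan the squares one by one.

```python
# My own reconstruction of the pre-attentive "pop-out" demo, assuming matplotlib
# is installed: sixty identical squares plus one circle of the same colour and
# size. The odd shape is spotted almost before conscious search begins.
import random
import matplotlib.pyplot as plt

random.seed(42)
points = [(random.random(), random.random()) for _ in range(60)]
xs, ys = zip(*points)

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(xs[1:], ys[1:], marker="s", s=80, color="steelblue")  # identical squares
ax.scatter(xs[0], ys[0], marker="o", s=80, color="steelblue")    # the lone circle
ax.set_xticks([])
ax.set_yticks([])
plt.show()
```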

A good question to ask before embarking on a visualization is whether a plain text list will accomplish the same job, since that can be both a lot simpler to create and easier to understand if you just need to order your data along a single dimension. As a demonstration, he showed a table listing the ordering of the 9/11 terrorists in their social network based on four different ranking measures, such as closeness, and then presented a graph that made things a lot clearer.
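
As a toy version of that comparison (the people and links below are invented and networkx is assumed to be installed; it's not the dataset from the talk), you can rank the same small network by several centrality measures and see how much harder the plain table is to absorb than a drawing of the graph:

```python
# A toy version of the ranking-table comparison: the names and edges here are
# invented, and networkx is assumed to be installed. Each centrality measure
# produces a different ordering, which is hard to absorb as text alone.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("Ali", "Ben"), ("Ali", "Cal"), ("Ben", "Cal"),
    ("Cal", "Dee"), ("Dee", "Eva"), ("Eva", "Fay"), ("Dee", "Fay"),
])

measures = {
    "degree": nx.degree_centrality(g),
    "closeness": nx.closeness_centrality(g),
    "betweenness": nx.betweenness_centrality(g),
}

for name, scores in measures.items():
    ranked = sorted(scores, key=scores.get, reverse=True)
    print(f"{name:12s} " + " > ".join(ranked))
```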

He has prepared a formal model for the visualization process, with the following stages:

  • Phenomenon. Something that's happening in the real world, which for our purposes includes activity out on the internet.
  • Acquisition. The use of some sensor to capture data about that activity.
  • Model/Storage. Placing that data in some accessible structure.
  • Preparation. Selection and organization of the data into some form.
  • Rendering. Taking that data, and displaying it in a visual way.
  • Interaction. The adjustment and exploration of different render settings, and any other changes that can be made to view the data differently.

There's actually a cycle between the last three stages, where you refine and explore the possible visualizations by going back to preparation to draw out more information from the data after a round of rendering has helped you understand it better. You're iteratively asking questions of the data, hoping to get interesting answers, and the iteration's goal is finding the right questions to ask.
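
A rough sketch of that model as code (the function names and toy data are mine, not Hurst's) makes the cycle concrete: acquisition fills the stored model once, and then preparation, rendering and interaction form the loop you iterate while refining your questions.

```python
# A rough sketch of the stages as code; the function names and toy data are my
# own, not Hurst's. Acquisition fills the stored model, then preparation,
# rendering and interaction form the loop you iterate while refining questions.

def acquire(sensor):
    """Acquisition: capture raw observations of some real-world phenomenon."""
    return sensor()

def store(raw):
    """Model/storage: put the captured data into an accessible structure."""
    return {"records": list(raw)}

def prepare(model, question):
    """Preparation: select and organize the data around the current question."""
    return [r for r in model["records"] if question.lower() in r.lower()]

def render(prepared):
    """Rendering: display the prepared data (a text dump here; swap in a chart)."""
    for row in prepared:
        print(" -", row)

def interact(model, questions):
    """Interaction: each rendering suggests the next question to prepare for."""
    for q in questions:
        print(f"Question: {q}")
        render(prepare(model, q))

if __name__ == "__main__":
    posts = ["Gonzales hearing reaction", "70s string art", "Wikipedia evolution edit war"]
    model = store(acquire(lambda: posts))
    interact(model, ["wikipedia", "gonzales"])
```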

Web 2.0 makes visualizations a lot easier, since it's a lot more dynamic than the static HTML that typified 1.0, but why is it so important? Swivel's preview is a great example of what can be done once you've got data and visualizations out in front of a lot of eyes, as a social experience. The key separation that's starting to happen is the distinction between algorithmic inference, where the underlying systems make decisions about the importance and relationships of data to boil it down into a simple form, and visual inference, where more information is exposed to the user and they do more of the mental processing themselves.

(This reminded me of one of the themes I think is crucial in search: the separation of the underlying index data from its presentation through the UI. I wish we could see more innovative search UIs than the DOS-style text list of results in PageRank order, but I think Google is doing a good job of fetching the underlying data. What's blocking innovation at the moment is that in order to try a new UI, you also have to try to catch up with Google's massive head-start in indexing. That's why I tried to reuse Google's indexing with a different UI through Google Hot Keys.)

One question that came up was why search is so linear. Matt believes this can be laid squarely at the door of advertising: there's a very strong incentive for search engines to keep people looking through the ads.

Defrag: Web 2.0 goes to work

Rod Smith, the IBM VP for emerging technology, had a lot to squeeze into a short time. I had trouble keeping my notes up with his pace, and I wish I'd had more time to look at his slides, which often seemed to have more in-depth information on the subjects he described. I will contact him and see if they're available online anywhere. (Edit: Rod sent them on, thanks! Download defrag_keynote.pdf; they are well worth looking through.)

He started off by outlining his mission for this presentation. He wanted to talk about the nuts-and-bolts issues of the technology behind 2.0, and why so many businesses are interested in it. The first question was why 2.0 apps are produced so much more quickly than traditional enterprise tools.

Part of the reason is that they tend to be a lot simpler, and more focused on solving a specialized problem for a small number of people, rather than tackling a general need for a wide audience. Being built on the network, they are naturally more collaborative, and support richer interactions between people. They also tend to be built around either open or de facto standards. Because they are comparatively light-weight, they can be altered to respond to change a lot more easily too.

DIY or shadow IT, technology developed outside of the IT department, has always been around. Business unit people have been writing applications as Excel macros for a long time. (On a personal note, Liz is an actuary with a large health insurer, and she’s been creating complex VBA and SAS applications for many years as part of her job.) What 2.0 brings to the table is a lot of interesting ways to link these isolated projects together, for example by outputting to an RSS feed, which can then be routed around the company. People in business units are now a lot more tech savvy than they used to be, which also really helps the adoption of these tools.
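
As a minimal sketch of that 'output to a feed' idea (the report name, URL and items below are hypothetical, and only Python's standard library is used), any shadow-IT script that can print a string can publish something the rest of the company can subscribe to:

```python
# A minimal sketch of the "output to a feed" idea; the report name, URL and
# items are hypothetical, and only the standard library is used. Any script
# that can print a string can publish a bare-bones RSS 2.0 feed like this.
from email.utils import formatdate
from xml.sax.saxutils import escape

def to_rss(title, link, items):
    """Wrap a list of {'title', 'description'} dicts in a minimal RSS 2.0 feed."""
    entries = "".join(
        "<item><title>{t}</title><description>{d}</description>"
        "<pubDate>{p}</pubDate></item>".format(
            t=escape(item["title"]),
            d=escape(item["description"]),
            p=formatdate(),                      # RFC 822 date, as RSS expects
        )
        for item in items
    )
    return (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        f"<title>{escape(title)}</title><link>{escape(link)}</link>"
        f"{entries}</channel></rss>"
    )

print(to_rss(
    "Claims backlog report",                     # hypothetical internal report
    "http://intranet.example.com/claims",        # placeholder URL
    [{"title": "Backlog: 412 claims", "description": "Down 3% from last week"}],
))
```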

He moved on to talk about the practicalities of creating "five minute applications" or mashups. The biggest hurdle always seems to be getting easy access to the data: "I have all this data from years of doing business, how do I unlock it?"

As an example, he looked at how StrikeIron had created a location-based mashup of Dun and Bradstreet’s business information service, for establishing the legitimacy of a company you’re dealing with, or finding likely sales prospects. (I saw a screenshot of an actual map display, rather than a text summary, but I can’t locate that.)

Old companies have accumulated a lot of potentially very useful and valuable data, but there’s not much use being made of most of it. The question, as above, is how to make that data mashable. The term often used for this part of the process is ‘widget composition’, which covers a lot of different technologies, from Google gadgets to TypePad widgets.

There are of course some dangers with the brave new world of Web 2.0 in business. One of the strengths of traditional IT is that there's accountability and responsibility for ensuring service availability and data accuracy. If a service created by a business unit member becomes widely popular, should they be the ones to maintain and update it, or is there a process to transfer that to IT? There's little visibility at the CIO and IT manager level into what's going on with these shadow IT projects. It's like the early days of internal web servers being installed across companies in an ad hoc way; we're only just sorting out the tangle that resulted from that.

There are also some unique issues with digital rights management and copyright once you're sending data around through feeds. It's not so much like music DRM, where the problem is malicious actors trying to steal, as simply allowing people to keep track of the right attribution and correct uses of the data.

Copyright.com has done some interesting work in this area, creating meta-tags to attach to data that allow automatic handling based on rules for different attributions.

Defrag: How taxonomies meet folksonomies; or the role of semantics on the web

Karen Schneider gave a talk drawing on the centuries of experience that the library community has in classifying and organizing information, and on the relationship between those formal taxonomies and tagging approaches like del.icio.us. She's made her slides available here.

She started by laying out the relevance of libraries, with a look at community college students' library usage. They're checking out roughly the same number of books as a decade ago, but they now check out many ebooks too, as well as accessing databases, so overall usage has actually increased. Community college students are 49% of the total undergraduate population in the US, and they're poorer and work longer hours outside of college than average. They're a very demanding audience, so their heavy usage demonstrates that libraries are providing an efficient and useful service.

She then tackled a few librarian stereotypes. I've been a library-blog lurker for years, so I didn't need any convincing, but she got some laughs with Donna Reed's spinster librarian from the alternate world in It's a Wonderful Life. I was disappointed that Giles was missed out, but you can't have everything.

The next point was a quick demonstration of some typical library software, and how awful it was. The presentation was essentially the same as a card catalog: very static and uninvolving. Doc Searls had talked the day before about data in general being trapped in disconnected silos, in hard-to-use formats, and library systems suffer from exactly the same problems.

WorldCat is a universal library catalog that lets you find books and other items in libraries near you. It's still based on the old card-index model of marked (edit: MARC, that makes more sense, thanks Karen!) data, but it's a big step forward because it links together a lot of different libraries' data sources.

There are some issues that traditional taxonomies have been wrestling with for a long time that are also problems for the newer technologies. Authority control is the process of sorting out terms which could be ambiguous, for example by adding a date or other suffix to a name to make it clear which person is referred to. Misspelling is another area where librarians have spent a lot of time developing methods to cope. Stemming is problematic enough in English, but she discussed Eastern European languages that have even tougher word constructions. Synonyms are another obstacle to finding the results you need: she showed a del.icio.us example where the tags covering the same wireless networking technology included "wifi", "wi-fi", "802.11" and "802.11b". Phrase searching is something that library data services have been handling for a lot longer than search engines. And finally, libraries have been around for long enough that anachronisms have become an issue, something that tagging systems have not yet had to cope with. Until the 90s, the Library of Congress resisted changing any of its authoritative terms, such as Afro-American or water closet, even though they'd become seriously out-dated.
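
As a tiny sketch of how authority control transfers to tags (the synonym table below is my own example, not from the talk or from any real library system), a curated mapping can fold variant spellings onto one preferred term before you count or search:

```python
# A tiny illustration of authority control applied to folksonomy tags; the
# synonym table is my own example, not from the talk or a real library system.
from collections import Counter

AUTHORITY = {          # variant spelling -> preferred term
    "wifi": "wi-fi",
    "802.11": "wi-fi",
    "802.11b": "wi-fi",
}

def normalize(tags):
    """Fold each tag onto its preferred term, leaving unknown tags unchanged."""
    cleaned = (t.strip().lower() for t in tags)
    return [AUTHORITY.get(t, t) for t in cleaned]

print(Counter(normalize(["WiFi", "wi-fi", "802.11b", "folksonomy"])))
# Counter({'wi-fi': 3, 'folksonomy': 1})
```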

Disambiguation, or authority control, is something that taxonomies are very good at: the creators of the system spot clashes and figure out a resolution to them. WorldCat Identities is a good example of the power of this approach. Interestingly, Wikipedia is very good at this too, as the disambiguation page for 'apple' shows. She believes this is the result of a very strict, well-patrolled community where naming is held to be extremely important, and that the value of this naming work is under-appreciated.

Another strong point of traditional cataloging approaches is the definitions. Wikipedia seems to have informally developed a convention where the first paragraph of any entry is actually a definition too.

Having somebody with in-depth expertise and authority on a subject do a centralized classification can be extremely efficient. She gave the example of a law library in California that has an excellent del.icio.us tagging scheme, but I couldn't find the reference unfortunately. (Edit: here it is, from the Witkin State Law Library.)

Library catalogs have an excellent topic scheme: they have a good hierarchy for organizing their classifications, which is still something that folksonomies are trying to catch up with, using ideas like facets.

These are all areas where folksonomies can learn from taxonomies, but there are plenty of ideas that should flow the other way too. One of the strengths of tagging is that it's really easy to understand how to create and search with tags. The same can't be said for the Dewey decimal system. Cataloging in a library involves following a very intimidating series of restrictions. Tagging doesn't frighten off your workforce like that.

In the short term, tagging is satisficing, and trumps the 'perfection' of a traditional taxonomy. In 2006, the Library of Congress was proud to report they'd cataloged 350,000 items with 400 catalogers. That works out to about 875 records per cataloger for the year, or only about 3.5 per working day!

Tagging is also about more than just description. It's a method for discovery and rediscovery, use and reuse, with your own and other people's bookmarks. Folksonomies produce good meta-data; some people seem concerned that 90% of Flickr photos fall within six facets, but this is actually a good reflection of the real world.

It seems like library conferences are a lot more advanced than Defrag in handling tags, since there's a formal declaration of a tag for each event in advance, and then that's used by everybody involved. There wasn't anything this well-publicized for Defrag, and the tag that was chosen, 'defragcon', caused a sad shake of the head, since it's not future-proof for next year's conference, and we'll end up having to change our tags to add a year suffix.

She also brought up the point that basic cataloging and classification techniques seem to be instinctive, and not restricted to a highly-trained elite of catalog librarians. We all tend to pick around four terms to classify items.

There is a common but useless critique of folksonomies: that personal tags pollute them. This is a useless criticism because it's easy for systems to filter them out. A more real problem is the proliferation of tags over time, which ends up cluttering any results. There's also the tricky balance between splitters and lumpers, where too finely divided categories give 'onesie' results in which every item is unique, and overly broad classes mean the signal of the results you want is overwhelmed by the noise of irrelevant items.

There are some examples of 'uber-folksonomies', which take the raw power of distributed classification and apply a layer of hierarchy on top. Wikipedia is the best-known example, and its greatest strength is how well-patrolled the system is. LibraryThing is a system that lets you enter and tag all the books in your personal library. The Danbury library actually uses the information people have entered in LibraryThing to recommend books to their patrons who search online, as well as using pre-vetted tags to indicate the categories each book belongs to. The Librarians' Internet Index is another well-patrolled classification system for websites (though it looked surprisingly sparse when I checked it out). The Assumption College for Sisters has been using del.icio.us to classify its library; Karen pointed out that it's hard to imagine anyone more trustworthy than a nun librarian! Thunder Bay Public Library has also been busy on del.icio.us.

A deep lesson from the success of folksonomies is that great things can be achieved if people want to get involved. We need to incentivize that activity, and she used the phrase 'handprints and mirrors'. She didn't expand on the mirrors part, but I took it to mean the enjoyment people take from seeing a reflection of themselves in their work. We all want to feel like we've left some kind of handprint on society, so any folk-based system should reflect that desire too.

She only took one question, asking how libraries are doing. She replied that they're an example of the only great non-commercial third space, and she also gave examples of how people like to be in that space when they're dealing with information, even if they're not there for the books.