Yesterday I attended the W3C's Social Web Incubator Bar Camp, focused on issues in web support for social media. It was a small but interesting group, including folks keen on technical underpinnings (discussions of FOAF, RDF, etc.) and folks interested in more applied topics like enterprise, health, and journalism.
The issue on the table was what sorts of standards might be necessary or desirable to support social networking on the web in interoperable ways. One statement that resonated was a comparison of the social web versus social networks as open space versus silos. As a general rule, if someone can lock you into their proprietary approach, you are subject to their whims. If, instead, there are open standards, you're free to approach things in different ways. For example, email took off once one email standard took hold and allowed different systems to interoperate. On the other hand, proprietary approaches may provide the capital and motivation necessary to invest in developing advanced features (e.g. Linden Lab's ability to keep expanding Second Life's capabilities as it grabbed market share).
The internet as an open standard (e.g. TCP/IP) has allowed other standards to be developed on top of it. If not for the HTTP standard, we wouldn't have the World Wide Web. However, continued development is needed to meet new needs. For example, the Salmon project was represented; it is trying to create a mechanism whereby any comment on a piece of web content, regardless of location or tool (e.g. blogging about someone's Flickr picture), can be aggregated back to the original content to maintain the discussion.
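To make the Salmon idea concrete, here is a minimal sketch of the upstream push: a comment made on one site is POSTed back to the original content's server as an Atom entry. The endpoint URL here is hypothetical, and the Magic Signature step the real protocol requires is omitted, so treat this as an illustration of the flow rather than a conforming implementation.

```python
# Sketch of the Salmon flow: a comment made elsewhere (e.g. a blog post about a
# Flickr picture) is pushed "upstream" to the original content's server as an
# Atom entry. The endpoint URL is invented, and the signing step the real
# protocol requires is left out for brevity.
import urllib.request

SALMON_ENDPOINT = "https://photos.example.com/salmon/photo-123"  # hypothetical

comment_entry = """<?xml version="1.0" encoding="UTF-8"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <author><name>commenter@blog.example.org</name></author>
  <content>Love the colours in this shot!</content>
  <thr:in-reply-to xmlns:thr="http://purl.org/syndication/thread/1.0"
      ref="https://photos.example.com/photo-123"/>
</entry>"""

request = urllib.request.Request(
    SALMON_ENDPOINT,
    data=comment_entry.encode("utf-8"),
    headers={"Content-Type": "application/atom+xml"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print("Upstream server replied:", response.status)
```

The point is simply that the comment travels to where the original content lives, so the discussion can stay in one place no matter where the replies were written.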
This can be real propeller-head stuff; for example, it was admitted that RDF's uptake has been hampered by a difficult syntax. Even Sir Tim Berners-Lee, responsible for the HTTP protocol, admits that the // in web addresses isn't necessary, and regrets it. I can no longer get down in the weeds, but fortunately I understand it well enough conceptually to talk intelligently about the requirements and see the opportunities.
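To give a feel for the syntax complaint, here is a small sketch using the rdflib Python library (my choice of tool, not something from the camp): the same single fact expressed in the verbose RDF/XML serialization, then re-serialized in the terser Turtle form.

```python
# The same fact ("Alice knows Bob") in RDF/XML, then re-serialized as Turtle.
# Requires the rdflib library (pip install rdflib); the URIs are illustrative.
from rdflib import Graph

rdf_xml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <rdf:Description rdf:about="http://example.org/people/alice">
    <foaf:knows rdf:resource="http://example.org/people/bob"/>
  </rdf:Description>
</rdf:RDF>"""

graph = Graph()
graph.parse(data=rdf_xml, format="xml")

# The Turtle output boils down to a single readable line, roughly:
#   <http://example.org/people/alice> foaf:knows <http://example.org/people/bob> .
print(graph.serialize(format="turtle"))
```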
And opportunities there are. The next generation, I believe, the so-called Web 3.0, is when we move to system-generated content. There were discussions on pulling together useful information for the benefit of organizations, like adding valuable information in response to your searches and discussions. Rules operating on data by description have powerful capabilities, e.g. the way Amazon provides mass customization. There are entailments, of course: taxonomies and ontologies need governance, just as other content activities do.
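As a hedged sketch of what "rules operating on data by description" can mean in practice: once content is described with a shared vocabulary, a generic query can assemble recommendations without any per-item logic. The vocabulary, URIs, and data below are invented purely for illustration, again using rdflib.

```python
# A toy "rule over described data": recommend any item that covers a topic the
# learner is described as interested in. All names and URIs are made up.
from rdflib import Graph

catalogue = """
@prefix ex: <http://example.org/vocab/> .

ex:learner1  ex:interestedIn  ex:ontologies .
ex:course42  ex:covers        ex:ontologies ;
             ex:title         "Ontology Governance Basics" .
ex:course43  ex:covers        ex:taxonomies ;
             ex:title         "Taxonomy Design" .
"""

graph = Graph()
graph.parse(data=catalogue, format="turtle")

# One generic SPARQL query does the matching; no course-specific code needed.
query = """
PREFIX ex: <http://example.org/vocab/>
SELECT ?title WHERE {
  ex:learner1 ex:interestedIn ?topic .
  ?item ex:covers ?topic ;
        ex:title ?title .
}
"""
for row in graph.query(query):
    print("Recommend:", row.title)
```

The same query keeps working as new items are described, which is where the mass-customization flavor comes from.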
Naturally, some of it was more approachable than the geek speak, such as the point that social engineering is as important as semantic engineering, for example that clever interface design can ease the problem of getting users to tag content. Similarly, problems that arise from bad behavior may be better solved as cultural issues rather than technical ones.
The folks there were fabulously knowledgeable; for example, a post-meeting request for ontologies around project management and pharmaceuticals was richly answered. While much of this stuff is still in development, the opportunities are coming, and having the necessary understanding on hand to capitalize on them is important. Note that these people are working to make this stuff work for all of us. Truly valuable and much to be appreciated.
My recommendation is to be aware of the possibilities and requirements. While you are likely not quite ready to take advantage of all this (and there are already opportunities, seriously), you don't want to do anything that would subsequently make those opportunities harder to capitalize on. So look into your content data engineering from a semantic point of view as well, and prepare for some truly awesome capabilities.