Thursday, January 10, 2008

So How Do We Identify a Chunk?


The trick to any machine-to-machine communication is identification of the product and clarity about what is to be done with it. The ISBN has been a great identifier, has served the industry well, and now fits even more comfortably within the EAN structure. The adoption of the ONIX/XML message was another significant step forward and has positioned the industry well for electronic communication.
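To illustrate just how neatly the ISBN now sits inside the EAN structure, here is a minimal sketch (Python, purely for illustration; the function name and the sample number are ours) of the EAN-13 check-digit rule that an ISBN-13 shares with every other EAN:

    def ean13_check_digit(first12: str) -> int:
        """Compute the EAN-13 check digit, which ISBN-13 (978/979-prefixed EANs) also uses.

        Digits in odd positions are weighted 1, digits in even positions are
        weighted 3; the check digit tops the weighted sum up to a multiple of 10.
        """
        total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
        return (10 - total % 10) % 10

    # Illustrative only: the first twelve digits of an ISBN-13.
    print(ean13_check_digit("978030640615"))  # -> 7, giving 9780306406157

The point is that the same routine validates a book's ISBN-13 and any other retail barcode, which is exactly why the ISBN travels so comfortably through general supply-chain systems.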


We read today of the edict that the industry should provide a separate ISBN for each digital rendition of a title. This makes logical sense and is what has to happen. After all, who would dream of giving the paperback, hardback, and large-print editions of the same title the same ISBN? The long-overdue ISTC, which is intended to identify and effectively group all manifestations of the same title, also makes good sense. It's just a pity it has taken so long to come through the standards process to the market.


The challenge, however, lies with fragments, or what are commonly referred to as chunks. The DOI is a great mechanism for referential linking and provides persistent resolution, but is it the right means of identifying chunks? We think it may be a sledgehammer to crack a nut, and its structure is not one the trade could easily adopt for this purpose. It works well in an online world of search and discovery, but the trade environment is significantly different from the academic or educational ones. The question is what we need to identify chunks for and what we want to do with them once they are identified. Even if we want to sell chunks, do we have the pricing structures or the permissions and rights processes to do so and to track the money through to royalty? Today the easiest solution for fragments is to issue them ISBNs and to link them to the parent title via its ISBN, an ISTC, or a combination of the two.
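As a rough sketch of that simplest route (Python purely for illustration; the ChunkRecord structure, its field names, and every identifier shown are our own invention, not an industry standard), a chunk would carry its own ISBN while pointing back at the edition and work it came from:

    from dataclasses import dataclass

    @dataclass
    class ChunkRecord:
        """Hypothetical record for a saleable fragment ("chunk") of a title."""
        chunk_isbn: str    # ISBN-13 issued to the chunk itself
        parent_isbn: str   # ISBN-13 of the edition it was cut from
        parent_istc: str   # ISTC grouping all manifestations of the work
        chunk_title: str

    # Illustrative example only; all identifiers below are made up.
    chunk = ChunkRecord(
        chunk_isbn="9780000000002",
        parent_isbn="9780000000019",
        parent_istc="A02-2008-000000001-X",
        chunk_title="Chapter 3",
    )
    print(f"{chunk.chunk_title}: edition {chunk.parent_isbn}, work {chunk.parent_istc}")

The attraction of this shape is that it reuses the identifiers and supply-chain plumbing the trade already runs on, rather than asking it to adopt a new resolution infrastructure just to trade fragments.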


This issue will only grow as content starts to fragment and break away from the jacket that once held it so neatly, and it is a challenge to us all.