Tuesday 8 November 2011

DITA - Understanding Blog No. 5 & 6 - Web 2.0, Web Services and APIs

I have decided to consolidate my learning on the first two topics of the second half of the DITA module, as they interlink quite nicely. My main focus will be to explain how Web 2.0 technologies work, then describe the methods by which they provide information as a service to users via the internet, and finally to look at the interfaces (APIs) created to mask the internal complexities of the systems underneath and make the information personalised to the user.

It's only human nature to be nosy. We spend our waking hours actively locating and investigating information about the world, other people and sometimes ourselves! The most accessible portal for doing this is now undeniably the internet, a powerhouse of interconnected data networks, sprinkled with services and applications that essentially hand us the information we seek, provided we know where or how to find it.

Web 2.0 was the term coined in 2005 to describe the emergence of ICTs being used to provide online services that allow users to collaborate with other users across the same network. Traditionally the internet has been characterised by the client-server model, in which requests for data are made, received and answered, with a definite start (the client request made to the server) and a definite end (the server response received by the client). Web 2.0 effectively turns this on its head: clients, no longer content to wait passively for another's generated answers, are empowered by technology to pro-actively create and send their own data, rapidly and at will. The clients become the servers; the internet becomes the system platform; the server computers become the access point rather than the facilitator.
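That traditional request/response cycle can be sketched as a toy simulation (everything here, including the page paths, is invented for illustration, not a real HTTP library):

```python
# Toy simulation of the traditional client-server model:
# a definite start (the request) and a definite end (the response).

def server(request: str) -> str:
    """The server holds all the content; clients can only ask for it."""
    pages = {"/home": "Welcome!", "/about": "About this site."}
    return pages.get(request, "404 Not Found")

# The client makes a request, then waits for the server's answer.
response = server("/about")
print(response)  # About this site.
```

In Web 2.0 the interesting shift is that users also *write* to the server's data store (posting, tweeting, editing), rather than only reading from it.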

The online world truly becomes a global social network.

Web 2.0 applications have consumed our daily lives: they are addictive, often gluttonous in terms of data access, and rapidly evolving and updating according to our needs and whims. We now have more social places to 'hang out' online, whether killing our own time (YouTube, Wikipedia) or using our time to informally engage with others (Twitter, Facebook). Proximity and time are never an issue when we can access these places on the go using mobile devices.

All these Web 2.0 applications feel inclusive because they give us the choice as to whether we engage or spectate: create our own new data to cast off, or swim in the sea of other people's data. The choice is ours because it is now in our hands. Technologies have become cheaper and quicker to produce and maintain, which enables us to post updates, share photos and write our own websites without any technical knowledge or skill required on our part. It creates a rich user experience without any of the stresses involved in understanding how to make it work. It is open and available to all, although the choice as to whether we involve ourselves remains subjective, determined by our own ethical, moral and political sensibilities.

The Web 2.0 applications (such as the very blog I am typing) are all examples of web services. They are in essence computer software applications that have been installed 'on the internet', as opposed to the local hard drive in your laptop or PC. In a similar vein, the data created through a new blog entry, a tweet or a Facebook status update isn't saved or stored on your PC; it floats in limbo somewhere on the internet until we ask to access it, and we can reach it from any location with internet connectivity. Cloud computing appears to be the next big thing, with Google (Google Docs) and Apple (iCloud) offering cloud services to their users.

In his lecture notes, Richard Butterworth sets out a concise definition of web services by distinguishing them from web pages:


A web page is a way of transferring information over the internet which is primarily aimed to be read by humans
A web service, in contrast, is a way of transferring information over the internet which is primarily aimed to be read by machines.

So in essence, a web service uses web technology to pass information around the internet in a form that is readable by machines. It is a 'language' that computers read and process in accordance with the metatags assigned to the data therein. The information pushed around is content only: no structure or presentation instructions are included. Computers do not know or understand the meaning behind text: they cannot distinguish between different parts of the data unless there is some explicit instruction in the code they receive that 'labels' the text as having some different meaning. Computers don't know the difference between the titles and authors of a work: we as humans do, though!
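That labelling idea can be sketched in a few lines (the book record here is made up for illustration): to a machine, "Hamlet Shakespeare" is just text, but once each piece is wrapped in an explicit tag, a program can pick out exactly which part is the title and which is the author.

```python
import xml.etree.ElementTree as ET

# A machine-readable record: the tags are the 'labels' that tell
# the computer which piece of text means what.
record = """
<book>
    <title>Hamlet</title>
    <author>Shakespeare</author>
</book>
"""

root = ET.fromstring(record)
print(root.find("title").text)   # Hamlet
print(root.find("author").text)  # Shakespeare
```

Note that the record says nothing about fonts, colours or layout: it is pure content plus labels, exactly as described above.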

Web services are not intended to be the user end point. They are the means by which machine-readable data is sent to client machines, which then reprocess it and present it in a form more appropriate and accessible to the user.

The mark-up language used for web services is XML (eXtensible Mark-Up Language). It provides, as a set of machine-readable instructions, the core data marked up with metadata (via metatags) that clearly assigns each value a meaning ("name", "price", "location" etc.), so that the data can be interpreted by a number of other machine systems, which then display it in the correct context, albeit within different parameters.

A good example of this is Facebook. The positioning and level of information visible to the user when logged in through a computer terminal will be different (fuller, due to the optimisation of space and function provided by internet browsers and plugins) than for the same page accessed through a different machine (a tablet or smartphone, for example).

XML allows us to manipulate data and describe it in the form of our choice. Facebook understands it can't replicate exactly the same layout on a web browser and on, say, an iPhone, so it creates a new interface (an app) for each platform it wishes to deliver its service to, enabling the same data in the XML code to be reproduced in the most efficient way on that platform.
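A minimal sketch of that "same data, different presentation" idea (the status record and render functions are hypothetical, not how Facebook actually works): one marked-up record, rendered one way for a roomy desktop browser and another way for a small phone screen.

```python
import xml.etree.ElementTree as ET

# One machine-readable record, received by two different clients.
status = ET.fromstring(
    "<status><user>alice</user><text>Hello from DITA class!</text></status>"
)

def render_desktop(s):
    # A browser has room for a fuller layout.
    return f"{s.find('user').text} posted: \"{s.find('text').text}\""

def render_mobile(s):
    # A phone app trims the same data to fit a smaller screen.
    return f"@{s.find('user').text}: {s.find('text').text[:20]}"

print(render_desktop(status))
print(render_mobile(status))
```

The content travels once; only the presentation logic differs per platform.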

This is an example of an API (Application Programming Interface). Think of the analogy of a car: you don't need to know what's under the bonnet of your car in order to drive it!
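The car analogy can be sketched in code (a made-up example, not any real API): the caller only ever touches the simple public method, while the machinery stays hidden behind it.

```python
class Car:
    """The public drive() method is the 'API'; the underscored
    methods are the engine under the bonnet."""

    def _inject_fuel(self):
        return "fuel injected"

    def _fire_spark_plugs(self):
        return "plugs fired"

    def drive(self):
        # The driver calls one simple method; the complicated
        # internal workings are hidden behind it.
        self._inject_fuel()
        self._fire_spark_plugs()
        return "car is moving"

print(Car().drive())  # car is moving
```

If the engine's internals change, drive() can stay the same, which is exactly why programmers can build on top of a service without understanding its insides.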

An API allows programmers to build an external shell (such as a mobile phone application), compatible with the XML code, without being concerned with how the complicated internal workings of the system underneath actually work. Programmers build upon the functionality of existing web services by creating add-ons that slot into the DNA of the service and allow users to interact in innovative or progressive ways. Examples are widgets that you can write into HTML code, effectively placing a portal to another part of the internet or another service: a Twitter feed box that updates with your tweets as you send them, a button under a news story allowing you to 'like' that story and publish it on your Facebook profile, or a Google Maps box which reproduces a section of map and marks your business/office location so a website visitor can find you. These combinations of web services with APIs allow for interesting mash-ups to be created in the online community. Programming languages such as JavaScript in your web browser allow for this level of web service manipulation. As part of the practical lab exercise, I set up a page and included some APIs in the HTML code. Click here to see some of the examples explained above!
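A mash-up of that kind can be sketched as combining the output of two (mocked) services into one new page element; the service names, data and coordinates here are all invented for illustration:

```python
# Two mock web services, each returning machine-readable data.
def map_service(place):
    return {"place": place, "coords": (51.5, -0.1)}

def twitter_service(user):
    return {"user": user, "latest": "Loving Web 2.0!"}

def mashup(place, user):
    # Combine both services' data into one human-readable
    # snippet, as a widget on a web page might.
    loc = map_service(place)
    tweet = twitter_service(user)
    return (f"{loc['place']} is at {loc['coords']}; "
            f"@{tweet['user']} says: {tweet['latest']}")

print(mashup("London", "dita_student"))
```

Neither service knows about the other; the mash-up only consumes their public interfaces.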

The same old dangers lurk under the surface, however: the amount of information going online needs moderation and control, and the permanence and integrity of data are compromised. How data is stored, accessed and retrieved, and the reasons behind these activities, are highly contentious, controversial and potentially damaging. How we classify, order and regulate the information we create, through metadata such as tag clouds and folksonomies, is loose and imprecise where there are no existing guidelines to follow, and leads to misinterpretations and cyber squabbles over use in context if we don't agree on it. Web 2.0 threatens to engulf our lives and identities if we allow such technologies to define us as a society.

Final thought: the real danger appears to be that we don't know the extent of how much of our personal data is held on the internet. We may never get to see it all ... we only see whatever they want us to see!
