XQuery/Wikipedia Page scraping

Page scraping allows any web page to serve as a source of raw data for transformation. This example takes the data on the Wikipedia current events page for 24 September 2007 and transforms it into a simple HTML page.

The key components of an XQuery page scraper are:


 * 1) The fn:doc function, which accepts a URL and retrieves the page as XML. Many web pages are not well-formed XML, but Wikipedia pages are.
 * 2) Declaring a namespace if the page has a default namespace. This page has a default namespace of "http://www.w3.org/1999/xhtml", so that namespace must be declared and its prefix used in the path expressions that access the page's XML.
 * 3) Identifying a path to the selected content. In this case, the content is located in a td element with a class of 'description'.
 * 4) Re-basing any relative URLs. Here the links to Wikipedia articles have relative URLs. To re-base these, the XML is serialized to a string with util:serialize, the relative URLs are edited with replace, and the string is converted back to XML with util:parse.
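The four components above can be sketched in a single query. This is a minimal illustration, not the full example: the exact Wikipedia URL and the href pattern are assumptions, and util:serialize and util:parse are eXist-db extension functions, so other XQuery processors will need their own serialize/parse equivalents.

```xquery
xquery version "1.0";
declare namespace xhtml = "http://www.w3.org/1999/xhtml";
declare namespace util = "http://exist-db.org/xquery/util";

(: 1) retrieve the page as XML; the URL shown here is illustrative :)
let $url := "http://en.wikipedia.org/wiki/Portal:Current_events/2007_September_24"
let $page := doc($url)

(: 2) and 3) use the declared xhtml prefix in the path to the content,
   a td element with class 'description' :)
let $content := $page//xhtml:td[@class = 'description']

(: 4) re-base relative URLs by round-tripping through a string :)
let $serialized := util:serialize($content, "method=xml")
let $rebased := replace($serialized, 'href="/wiki/',
                        'href="http://en.wikipedia.org/wiki/')
return
    util:parse($rebased)
```

Note that step 4 works on the serialized string rather than the node tree; an alternative is a recursive typeswitch that rewrites each href attribute, but the string round-trip keeps the example short.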

Sample XQuery to Extract Data from Wikipedia Current Events Page
In this example, some date re-formatting is needed, since the date format in the page's URL is not the XML-formatted (xs:date) form. Links to the previous and next days are included, making use of XQuery date arithmetic.
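The date handling can be sketched as follows. This is a hedged sketch, not the page's actual code: the URL-style format ("2007_September_24") is reconstructed here with an assumed month-name sequence, and the prev/next dates come from subtracting and adding a one-day xs:dayTimeDuration.

```xquery
xquery version "1.0";

(: month names used to map between xs:date and the URL-style date :)
let $months := ("January", "February", "March", "April", "May", "June",
                "July", "August", "September", "October", "November", "December")

(: the XML-formatted date for the page being scraped :)
let $date := xs:date("2007-09-24")

(: XQuery date arithmetic: step one day back and one day forward :)
let $previous := $date - xs:dayTimeDuration("P1D")
let $next     := $date + xs:dayTimeDuration("P1D")

(: rebuild the URL-style form, e.g. "2007_September_25" :)
let $next-url := concat(year-from-date($next), "_",
                        $months[month-from-date($next)], "_",
                        day-from-date($next))
return $next-url
```

The same concat pattern serves for the previous-day link; only the date variable changes.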