
4.5.3. workflow

From a given corpus or record set, the basic workflow for the REKn Crawler is as follows:

  1. Extract keywords from every document in a given corpus. For the prototype, we used a large MARC file from Iter as our record set, processed with PHP-MARC, an open-source PHP library for parsing, manipulating, and extracting data from MARC records.
  2. Build search strings from the keywords extracted earlier. The following combinations were used in our experimentation: author; author and title; title; author and subject; subject.
  3. Query the web using each constructed search string. Search engines that implement the OpenSearch standard can be queried programmatically from the back end of a software application, and they provide access to a wide variety of materials; the REKn Crawler employs this technique. Up to fifty web-page results per search are collected and stored in a site list.
  4. Send a crawler into the web to harvest web pages from the site list generated in step 3. We are currently exploring implementation strategies for this stage of the project. Nutch is currently the best candidate because it is an open source web-search software package that builds on Lucene Java.
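The search-string construction in step 2 can be sketched in a few lines. The following is a minimal Python illustration (not the prototype's actual PHP-MARC code), assuming the author, title, and subject headings have already been pulled from the MARC record; it produces the five combinations named above:

```python
def build_search_strings(author, title, subjects):
    """Build the prototype's search-string combinations:
    author; author and title; author and subject; title; subject."""
    strings = []
    if author:
        strings.append(author)                      # author
        if title:
            strings.append(f"{author} {title}")     # author and title
        for subject in subjects:
            strings.append(f"{author} {subject}")   # author and subject
    if title:
        strings.append(title)                       # title
    strings.extend(subjects)                        # subject
    return strings
```

Each returned string is then submitted as a separate query in step 3, so a record with one author, one title, and two subject headings yields seven searches.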

Consider the following example. A user views a document in PReE; for instance, Edelgard E. DuBruck, “Changes of Taste and Audience Expectation in Fifteenth-Century Religious Drama” (1983). Viewing this document triggers the Crawler, which begins crawling from the document’s Iter MARC record (record number, keywords, author, title, subject headings). Search strings are then generated from the Iter MARC record data (in this particular instance the search strings will include: DuBruck, Edelgard E.; DuBruck, Edelgard E. Changes of Taste and Audience Expectation in Fifteenth-Century Religious Drama; DuBruck, Edelgard E. Religious drama, French; DuBruck, Edelgard E. Religious drama, French, History and criticism; Changes of Taste and Audience Expectation in Fifteenth-Century Religious Drama; Religious drama, French; Religious drama, French, History and criticism). The Crawler conducts searches with these strings and stores the results for the later process of weeding out erroneous returns.
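The querying stage (step 3) returns results as Atom feeds when the search engine follows the OpenSearch standard. A minimal Python sketch of collecting a site list from such a response might look as follows (the function name and the fifty-result cap mirror the description above; the sample feed is illustrative, not from Iter):

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def extract_site_list(atom_xml, limit=50):
    """Collect up to `limit` result URLs from an OpenSearch Atom response."""
    root = ET.fromstring(atom_xml)
    links = []
    for entry in root.iter(f"{ATOM_NS}entry"):
        link = entry.find(f"{ATOM_NS}link")
        if link is not None and link.get("href"):
            links.append(link.get("href"))
        if len(links) >= limit:
            break
    return links

# Illustrative two-entry Atom response:
sample_feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Result 1</title><link href="http://example.org/a"/></entry>
  <entry><title>Result 2</title><link href="http://example.org/b"/></entry>
</feed>"""

site_list = extract_site_list(sample_feed)
```

The resulting site list is what seeds the harvesting crawl in step 4.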

In the example given above, which took under an hour, the Crawler generated 291 unique results to add to the knowledgebase relating to the article and its subject matter. In our current development environment, the Crawler is able to harvest approximately 35,000 unique web pages per day. We are currently experimenting with a larger seed set of 10,000 MARC records, which nonetheless amounts to only a 1% subset of Iter’s bibliographical data.

4.5.4. application

The use of the REKn Crawler in conjunction with both REKn and PReE suggests some interesting applications, such as: increasing the scope and size of the knowledgebase; analyzing the results of the Crawler’s harvesting to discover document metadata and document ontology; and harvesting blogs and wikis for community knowledge on any given topic, among others.





Source:  OpenStax, Online humanities scholarship: the shape of things to come. OpenStax CNX. May 08, 2010 Download for free at http://cnx.org/content/col11199/1.1