The University of Alberta added WorldCat Local to its web offerings a while ago, but for my own library use I’ve gone on using our old OPAC by default, for no other reasons than familiarity and inertia. Now, though, I’ve found a solid advantage that WorldCat Local offers to my personal workflow: fixed record-level URLs.
I was trying to solve the age-old problem of capturing a call number so that it will be easy to consult when I get to the stacks to pick up the book. The OPAC record is on the screen of my workstation, my iPhone is on my belt: how to bridge the gap? Emailing it to myself is tedious, copying and pasting into a note and then syncing even more so. Taking a picture with the iPhone’s camera may get the call number but it’s hard to include enough of the citation to show which item this is, if I’m fetching more than one.
The solution I want, inspired by a tweet from Lorcan Dempsey, is a QR code that gives me the URL of the record. That way, when I need it, I’ll have the full citation, the call number, everything. QR codes don’t appear in WorldCat Local or in our OPAC (unlike Huddersfield’s), but there’s Greasemonkey for that, specifically the “QR Code for Everything!” script (and probably others; I didn’t explore). I can pop up a QR code for any page I visit in Firefox on my workstation, snap it with the free i-nigma app (or one of the other QR-reading apps) on the iPhone, and then easily consult the full record in the stacks.
The only problem with my default behavior is that the OPAC uses a session URL, which is meaningless once the session expires or when accessed from another device, so capturing it does me no good. WorldCat Local gives me a URL that doesn’t depend on the current session. That’s what I need: a cool URI that doesn’t change, at least across two devices and within the time-frame of my interest in a given book. I suppose I could customize the Greasemonkey script to use the OPAC’s permalink service before it generates the QR code, or we could enhance our unAPI service to provide a QR code as one of the options, but hell, WorldCat Local just works for this.
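The customization I’m imagining would be a few lines at most. Here’s a sketch of the idea, as plain JavaScript: swap the session URL for a permalink before handing it to a QR generator. The permalink pattern and the record-key regex are hypothetical stand-ins (a real script would call the OPAC’s actual permalink service), and the QR image comes from Google’s Chart API, which can render a QR code for any string.

```javascript
// Hypothetical sketch: derive a stable permalink from a session URL
// before generating the QR code. The URL patterns are placeholders.
function permalinkFor(sessionUrl) {
  // suppose the session URL carries the record key as a query parameter
  var match = /[?&]record=([0-9]+)/.exec(sessionUrl);
  if (!match) return sessionUrl; // fall back to the URL we were given
  return "http://catalogue.example.edu/permalink/" + match[1];
}

function qrImageFor(url) {
  // Google's Chart API will render a QR code for an arbitrary string
  return "https://chart.apis.google.com/chart?cht=qr&chs=200x200&chl=" +
         encodeURIComponent(url);
}
```

Dropping those two calls into the Greasemonkey script ahead of its image-generation step would make the captured QR code point at something that still works on the iPhone an hour later.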
Access always leaves me with at least one major shift in perception that I need to chew on for a while. This year it came, not unexpectedly, from Dan Chudnov: we need to merge our metasearch engine and our OpenURL resolver.
This fits with some recent incidents in which the resolver I manage failed to resolve because of incomplete metadata from more peripheral sources of OpenURLs: citation managers like RefWorks and Zotero. Citation management weirds the metadata, as far as OpenURL is concerned. Dates intended for human consumption don’t get parsed properly, and our resolver often needs a date and not just a volume number to determine availability. This is a serious problem, because your service looks very bad if a user exports a record from an e-resource to their citation manager, clicks the OpenURL button, and doesn’t get sent back to the place they know has the text. You don’t want to be told it’s unavailable when you just saw it a couple of clicks ago.
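The fix on the resolver side amounts to a normalization pass. A minimal sketch, assuming the usual run of citation-manager date strings (the input formats shown are illustrative, not a survey): pull the first plausible four-digit year out of whatever the citation manager sent, so the availability check has something to work with.

```javascript
// Sketch of the cleanup a resolver could apply to citation-manager
// dates before checking availability. Input formats are illustrative.
function normalizeDate(raw) {
  // "Jan. 2009", "2009-01-15", "Winter 2008/2009", "c2007" ...
  var match = /(1[5-9]|20)\d\d/.exec(raw); // first plausible 4-digit year
  return match ? match[0] : null;
}
```

It’s crude, but a resolver that extracts "2009" from "Jan. 2009" beats one that reports the article unavailable because the date field didn’t parse.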
When the metadata is inadequate, what should have been resolution becomes search. Annette Bailey’s LibX demonstration showed some imaginative use of search to supplement resolution: sending the user on an article title search in Google Scholar, and then generating an OpenURL from the citation found there. Resolving and searching are the poles of a continuum, and where you find yourself depends on the quality and granularity of the metadata in your OpenURL. Give me good detailed metadata and my resolver should nail it for you first time; give me vague or faulty metadata and my search service should kick in to help you find your path. Ross Singer’s Ümlaut would enhance your metadata from other sources, the metasearch service would provide you with links to attempt different searches (like article title), and so on. Crucially, you shouldn’t have to choose in advance which service you’re going to use for a given full-text quest. Hence the merger.
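The "generating an OpenURL from the citation found there" step is mechanical once you have the citation fields. A sketch, assuming a scraped citation object and a placeholder resolver address, serializing a journal-article ContextObject in the standard KEV (key-encoded-value) form:

```javascript
// Sketch of the LibX-style move: serialize a scraped citation as an
// OpenURL 1.0 KEV ContextObject. The resolver URL is a placeholder.
function openUrlFor(citation, resolverBase) {
  var pairs = [
    ["url_ver", "Z39.88-2004"],
    ["rft_val_fmt", "info:ofi/fmt:kev:mtx:journal"]
  ];
  // map only the citation fields we actually have
  var fields = [["atitle", "rft.atitle"], ["jtitle", "rft.jtitle"],
                ["volume", "rft.volume"], ["date", "rft.date"]];
  fields.forEach(function (f) {
    if (citation[f[0]]) pairs.push([f[1], citation[f[0]]]);
  });
  return resolverBase + "?" + pairs.map(function (p) {
    return p[0] + "=" + encodeURIComponent(p[1]);
  }).join("&");
}
```

A citation with only an article title still yields a well-formed OpenURL; it just lands the user nearer the "search" pole of the continuum than the "resolve" pole.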
This will make the resolution process more reliable, but also more complex, at least in the hard cases. This brings us to Roy Tennant’s complaint that the resolver menu is an irritating waste of a click for the user who just wants to get to the full text. In the simple cases, he’s right; in the hard cases, the menu is our only chance to give the user the expanded range of options that may be required to find the full text. The menu becomes an application rather than a list of options, and the user is engaged in an operation rather than passively following a link. Until our resolution services become much more reliable than they are now for the hard cases, I don’t think we can do without the menu.
In a merged system we run the risk of snaring the user in more loops: when we give more options, and try to do smart things to tweak the metadata before we send the user out to explore the trails we’ve blazed, they may end up clicking more OpenURL buttons and circling back with slightly different metadata each time, leading to slightly different options. The circles might be vicious or virtuous, who knows. I think the menu helps with this: when users know that clicking an OpenURL button always brings up the familiar menu, they’ll be less confused than they would be if it sometimes sends them directly off to something that is usually, but not always, the full text of the article they wanted. It will feel like repeating a Google search with different terms until you get what you want.
UnAPI is set to become a very useful addition to the web toolkit of sites that purvey metadata, such as library catalogues. Within its domain, it renders the human-readable web machine-readable at a fine level of granularity. It does this by providing a simple mechanism by which a client application can fetch a full metadata record for the item being viewed. All that is needed is a couple of lines of HTML in the source page (which are invisible to non-unAPI applications and to human readers), and a back-end service that can provide the metadata on demand.
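Those couple of lines look like this, per the unAPI spec; the server URL and record id here are placeholders:

```html
<!-- advertise the unAPI endpoint once per page -->
<link rel="unapi-server" type="application/xml" title="unAPI"
      href="http://www.example.edu/unapi" />
...
<!-- tag each displayed item with its record id -->
<abbr class="unapi-id" title="record-id-12345"></abbr>
```

A human reader sees nothing; a unAPI-aware client sees exactly which records are on the page and where to fetch them.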
The unAPI server can easily be layered on top of an existing service such as an OAI-PMH server or an SRU server. This project provides an SRU-to-unAPI service. If you have a catalogue with a Z39.50 server, it is easy to provide an SRU service using Index Data’s YAZ Proxy. And if your Z39.50 server can search by record id, it is easy to use that SRU service to provide unAPI service. This project shows how to do this using Apache Cocoon as the unAPI server, but it could just as easily be done with other XML-aware web scripting environments.
This service is currently running experimentally in the University of Alberta catalogue. It can be seen in action by pointing a unAPI client at any full-record page.
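A unAPI client works through at most three request shapes against the service (these are the spec-defined patterns; the host and record id below are placeholders):

```
http://www.example.edu/unapi                       -> list of all formats offered
http://www.example.edu/unapi?id=12345              -> formats available for that record
http://www.example.edu/unapi?id=12345&format=mods  -> the record itself
```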
- Install YAZ Proxy.
- Configure YAZ Proxy to allow searching by record id, using the rec.id field. This may require opening use attribute 12. Make sure that pqf.properties contains the line:
  index.rec.id = 1=12
  In the configuration file for your server, make sure that use attribute 12 is available by adding it to the default attribute list:
  <attribute type="1" value="1-11,13-1010,1013-1023,1025-1030"/>
  (Change the first part of the value to 1-1010 so that attribute 12 is included.) Test it to be sure you can retrieve records using the record id.
- Install Apache Cocoon.
- Unzip unAPI-SRU.zip and add it to Cocoon’s mount table with a line something like this:
<mount src="/Docume~1/Peter/MyDocu~1/projects/unAPI-SRU/" uri-prefix="sru"/>
- Edit the unAPI-SRU sitemap.xmap to set the address of your SRU server. Test the Cocoon interface to be sure it can retrieve records from SRU.
- Modify your OPAC full record display to include the required unAPI elements.
- Test with Xiaoming Liu’s Greasemonkey unAPI script or Ed Summers’ unAPI Validator.
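The YAZ Proxy configuration in the steps above can be sanity-checked with a plain SRU searchRetrieve request before Cocoon is involved; the host, port, and record id here are placeholders:

```
http://sru.example.edu:9000/?version=1.1&operation=searchRetrieve
    &query=rec.id%3D12345&maximumRecords=1&recordSchema=marcxml
```

If that returns a single marcxml record, the rec.id index is working and the unAPI layer has what it needs.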
If you would like to use some field other than rec.id to retrieve the record from the SRU server, you can modify the template for the SRU search URL.
You can test the unAPI component without an SRU server by commenting out the SRU generator in sitemap.xmap and uncommenting the one that retrieves a local record. Of course, you’re then limited to the sample records provided in the samples directory. In this case the only available record id is “
The Cocoon component makes use of flowscript to manage the response. Have a look at flowscript/unapi.js to see how this works: it looks at the request parameters and routes the request to the appropriate pipeline. If the request is invalid, it sends the appropriate status code. If it has tried to retrieve a record from the SRU server, it examines the response to determine whether a record was in fact returned, indicating that the record id is valid.
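That routing amounts to something like the following sketch, written as plain JavaScript rather than the project’s actual flowscript; the pipeline names are illustrative, and the status codes follow my reading of the unAPI spec (300 for an id without a format, 406 for a format we don’t offer):

```javascript
// Illustrative sketch of the request routing; pipeline names are
// made up, status codes follow the unAPI spec.
function route(id, format, knownFormats) {
  if (!id)     return { pipeline: "formats", status: 200 };        // list every format offered
  if (!format) return { pipeline: "formats-for-id", status: 300 }; // formats for this record
  if (knownFormats.indexOf(format) === -1)
    return { pipeline: "error", status: 406 };                     // format we can't supply
  // otherwise hand off to the SRU pipeline; if no record comes back,
  // the id was invalid and a 404 goes out instead
  return { pipeline: "record", status: 200 };
}
```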
Cocoon depends on the SRU server to generate the record in the appropriate format. By default YAZ Proxy can generate marcxml, mods, and dc records, using XSLT stylesheets to produce the latter two. These formats are hard-coded in unapi.js and in xml/formats.xml; if you want to offer other formats, you’ll have to modify these files (as well as providing appropriate stylesheets).