Tuesday, August 19, 2008

Law Aleph Users Group Meeting Notes

Law Aleph Users Group
July 15, 2008 Meeting
AALL Conference in Portland, Oregon

Mila Rush (Minnesota), chairing.

The agenda is transcribed from Mila's handout; notes taken by Ellen McGrath during the meeting follow each item.

Housekeeping:
*All expressed gratitude to Ex Libris for supplying us with delicious box lunches, which were also provided for the Law Voyager users meeting next door.
*Mila explained that we would switch rooms with the Voyager users after 45 minutes so we could hear/view the presentation from the Ex Libris staff member in attendance.

Participants’ self-introductions.

Scope of Group’s attention: some shift to other products (both from Ex Libris and other ILS vendors) that directly/intimately affect Aleph.
*There was general agreement that discussions should include other products too.

Caitlin Robinson (Iowa): Aleph and the reorganization at Iowa. To what extent and how did it influence the decision. Planning, process, experience, lessons learned, satisfaction.
*Unfortunately, Caitlin was unable to travel to Portland at the last minute. All were still interested in the topic, and Mila said she would follow up with Caitlin to see if there was some way it could be shared with the group.

On demand discussions. Possible action plans? Spontaneous contributions from the floor.
*Buffalo is upgrading to Aleph version 19 about three weeks after the AALL conference. We would like to hear whether anyone else is on version 19, or plans to be, and what changes have been experienced on that version. -- Minnesota will be upgrading to version 19 fairly soon, but Mila was not sure of the exact date. Nobody else is on version 19 yet.
*Any libraries using another discovery tool on top of their Aleph OPAC, whether Ex Libris' own product (Primo), an open source tool, or a third-party product such as Endeca? -- Minnesota is bringing up Primo as its primary OPAC interface very soon. Florida State has not had a good experience with Endeca and even tried to create its own system (Mango), but without much luck. It was suggested that questions about Endeca be directed to Jon Lutz, who was not present at this meeting.
*Fantasy serials module -- We did not get to this agenda item.
*Changes in the counting of electronic resources requested by the ABA. How can we generate these figures from our Aleph data? Or from Verde? -- We did not really discuss this, though Baltimore had to cancel its Verde contract because the product was delayed for so long.
*Other items from the floor -- There was a discussion of handling budgets and interacting with separate business systems, including how many budgets are optimal and how they are set up. One suggestion was to transfer acquisitions data to a spreadsheet so it can be re-sorted in many different ways. (I confess that much of this was over my head, as I am a cataloger.) Minnesota is looking into use of the booking module, but nobody else uses it. Buffalo also looked into it, but their AV department did not like the system and decided to stick with their own scheduling system.

Jenny Forbes (Ex Libris): Development report on, and plans for, Aleph and related products.
*The group swapped rooms with the Voyager users so Jenny could keep her projector setup in one place.
*Aleph version 19 was released in January 2008 and included improvements to course reserves, batch job management, and staff privileges. If there are specific questions about these changes, perhaps Jenny would be willing to share her slides, which included many screenshots. There are also new acquisitions options for updating Aleph with data from the university accounting system, as well as enhancements to the generic vendor records loader.
*Ex Libris is always working to implement evolving standards. Specifically mentioned were SRU/SRW, the web-service protocols that build on Z39.50, and MARCXML as a record output option.
*Aleph version 20 is currently in development. Ex Libris is also working on a next-generation product, of which there will be only one, rather than separate Aleph and Voyager products. It was mentioned that at present, even with a discovery tool layered over the catalog, it is still necessary to have a federated searching tool in place. Jenny mentioned that there is a message from the President of Ex Libris online about the next-gen system, but I was unable to find it quickly on their website.

Miscellaneous (we did not get to discuss any of these items before adjourning)
*New leadership.
*Subscribe to ELUNA-LAW-IG-L@LISTSERV.ND.EDU
*Updates to list of Aleph law libraries: http://www.bc.edu/schools/law/library/aleph.html
*New project: Current state of installations, including near-term plans. Cf. June/July 2004 list.
*Any other business.

Notes taken by Ellen McGrath

Thursday, August 07, 2008

Demystifying Batch-Load Analysis: What You Need to Know About Vendor-Supplied Bibliographic Records

When: Sunday, July 13, 2008, 4:15-5:15 PM

*Coordinator: Ellen McGrath, University at Buffalo
*Moderator: Kevin Butterfield, College of William and Mary
*Speaker: Yael Mandelstam, Fordham University

This program was standing room only -- well, actually a number of people were sitting on the floor, but you get the idea: it was popular!

There are a number of vendor-supplied record sets of interest to law libraries, including: Making of Modern Law (MOML), LLMC-Digital, BNA, CALI, HeinOnline Legal Classics, HeinOnline World Trials, and LexisNexis/Westlaw Cassidy collections.

Yael Mandelstam got right down to the nitty-gritty and showed us how she analyzes batches of vendor-supplied bibliographic records before she loads them into Fordham’s catalog. The importance of the “before” part became evident when Yael described the situation with the original batch of MOML records. Many law libraries loaded them, only to discover that the bibliographic records for the electronic versions overlaid the records for the microfiche versions by mistake. Oops … there were a number of nodding heads in the room, which I took to mean some of those present had been burned in that manner. But never again, as Yael gave us valuable advice about how to keep that from happening.

Before getting down to specifics, Yael cautioned that “this technique is not meant to replace proper authority control, use of URL checkers, etc.” She uses two readily available tools in her analysis: MarcEdit (a free editing utility available for download at http://oregonstate.edu/~reeset/marcedit/html/) and Microsoft Excel (spreadsheet software). She emphasized repeatedly how essential it is to save a copy of your original file of records before you start rearranging it, and to save each iteration of the file.

The PowerPoint handout Yael prepared is excellent, so I am not going to spend time here on details you can more easily see there. It is available at: http://tsvbr.pbwiki.com/Batchload+Analysis

The approach to record set analysis was presented in three steps:
* Step 1: Examine several individual records
* Step 2: Count fields in the file
* Step 3: View isolated fields

The first step is important and should almost go without saying. Step 2 is a quick way to verify the number of occurrences of certain fields. For example, if you have 100 records in your batch, there must be 100 each of the required fields, such as the 245 (title) and 856 (URL). If there are fewer, that is a big red flag! The “What’s wrong with this picture?” examples on the slides are very revealing.
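For anyone who would rather script that count than run it in MarcEdit, here is a minimal sketch in Python using the pymarc library. This is purely my own illustration (the filename vendor_batch.mrc is made up, and Yael's actual workflow uses MarcEdit and Excel):

```python
from collections import Counter

from pymarc import MARCReader  # pymarc reads binary MARC files record by record

counts = Counter()
total = 0
with open('vendor_batch.mrc', 'rb') as fh:  # MARC files must be opened in binary mode
    for record in MARCReader(fh):
        if record is None:  # pymarc yields None for records it cannot parse
            continue
        total += 1
        for field in record.get_fields():  # no arguments = every field in the record
            counts[field.tag] += 1

print(f'{total} records in batch')
for tag in sorted(counts):
    print(f'{tag}: {counts[tag]}')

# The red-flag check: required fields such as 245 and 856 should
# appear exactly once per record.
for tag in ('245', '856'):
    if counts[tag] != total:
        print(f'WARNING: {counts[tag]} occurrences of {tag} in {total} records')
```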

I especially like the subtitle on the slides for step 3: The power of eyeballing. The value of isolating fields for analysis became clear immediately when each individual field was extracted from its record and grouped with its counterparts. When all the same fields are sorted together, the errors and inconsistencies really do jump out at you. Amazing!
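The same eyeballing trick can be scripted too, again just as an illustration under the same assumptions (pymarc, a made-up filename, and the 856 as the example tag): pull one field out of every record and sort the values so the inconsistencies land next to each other.

```python
from pymarc import MARCReader

tag = '856'  # example: isolate every URL field in the batch
values = []
with open('vendor_batch.mrc', 'rb') as fh:
    for record in MARCReader(fh):
        if record is None:
            continue
        for field in record.get_fields(tag):
            values.append(field.value())  # subfield values joined into one string

# Sorted output groups identical and near-identical values together,
# so the odd one out jumps off the screen.
for value in sorted(values):
    print(value)
```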

Yael shared helpful tips on how to clean up those errors and inconsistencies using the global update capabilities of MarcEdit. Unfortunately, it is not possible to preview the changes in MarcEdit before you apply them, so she recommended reviewing them in your ILS instead. She concluded with a general overview of the work of the TS-SIS Task Group on Vendor-Supplied Bibliographic Records (http://www.aallnet.org/sis/tssis/committees/cataloging/vendorbibrecords/), which has set up a wiki (http://tsvbr.pbwiki.com/) to share the results of such batch-load analysis.
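One way around the no-preview limitation, and this is my own workaround rather than anything presented in the session, is a dry-run script that prints what a global update would do before you apply it anywhere. The URL values below are invented for the example:

```python
from pymarc import MARCReader

# Hypothetical substitution: both URL prefixes are made up for illustration.
OLD = 'http://www.vendor-example.com/'
NEW = 'http://proxy.example.edu/login?url=http://www.vendor-example.com/'

with open('vendor_batch.mrc', 'rb') as fh:
    for record in MARCReader(fh):
        if record is None:
            continue
        for field in record.get_fields('856'):
            for url in field.get_subfields('u'):  # subfield $u holds the URL
                if OLD in url:
                    print(f'{url}  ->  {url.replace(OLD, NEW)}')  # show, do not change
```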

There wasn’t much time for questions. One that was asked: should a batch be analyzed every time you are ready to load it? Yes. There were also a few comments, one of which was that MarcEdit cannot be used with some ILSs unless the whole database is extracted. The session closed with the observation that these batches create many duplicate records for the same content in our catalogs. The aggregator-neutral record approach for e-resources (both serials and monographs) was mentioned, but naturally that raises other complexities for which there is no easy solution at present. Many thanks to OBS and TS for sponsoring this excellent program!