12/17/07 – Resurgens

Last Thursday, Johns Hopkins University Libraries went live with the Ümlaut (Ü2). This comes slightly less than four weeks after Georgia Tech took theirs down (although they were running the much more duct-tape-and-baling-wire version 1), and it’s nice to see a library back in the land of Röck Döts.

Ü2 shares little more than a superficial resemblance to the original Ümlaut, and I owe Jonathan Rochkind a lot for getting it to this level. The dynamic between us is an interesting one (as anybody who has spent a minute in #code4lib in the last eight months knows), but it seems to work pretty well. It would be nice to expand the community beyond just the two of us, though. It’s pretty likely that the Ümlaut will work its way into Talis’ product suite in some form or another, which would probably draw some people in, but it would be nice to see more SFX (or other link resolver) customers join the party.

This isn’t to say that JHÜmlaut doesn’t need some work. In fact, there’s something really wrong with it: it’s taking way too long to resolve (Georgia Tech’s was about twice as fast, although probably under a lighter load). If I had to guess, I’d say the SFX API is the culprit; when GT’s was performing similarly, there was a bug in the OCLC Resolver Registry lookup that caused two SFX requests per Ümlaut request (it didn’t recognize that it was duplicating work). That isn’t the case at JHU (not only did Jonathan remove the OCLC Registry feature, it wouldn’t affect me, sitting at home in Atlanta, anyway).
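
If we want to confirm that hunch, a little instrumentation around the SFX call would settle it quickly. Here’s a minimal sketch in Ruby (the method and parameter names are hypothetical, not Umlaut’s actual internals) that times a single SFX API round trip:

    # A hypothetical timing wrapper (not Umlaut's actual internals):
    # time one SFX API round trip to see whether SFX dominates the
    # total resolve time.
    require 'benchmark'
    require 'net/http'
    require 'uri'

    def timed_sfx_request(sfx_base_url, context_query)
      response = nil
      elapsed = Benchmark.realtime do
        # One GET against the SFX API carrying the OpenURL context.
        response = Net::HTTP.get_response(URI.parse("#{sfx_base_url}?#{context_query}"))
      end
      $stderr.puts "SFX API responded in #{'%.2f' % elapsed}s"
      response
    end

If the logged SFX time accounts for nearly all of the total response time, the fix belongs on the SFX side rather than in Umlaut.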

Performance was one of the reasons GT’s relationship with the Ümlaut soured (although I think the biggie was an unfortunate bout of downtime after I left), so I hope we can iron this out before JHU starts getting disillusioned. Thankfully, they didn’t hit the huge EBSCO bug that afflicted GT at launch.

For reasons known only in Ipswich, MA, EBSCO appends <<SomeIdentifier to their OpenURLs. Since this gets injected into the location header via JavaScript (EBSCO sends their OpenURLs via a JavaScript popup), Internet Explorer and Safari don’t escape the URL, which causes Mongrel to explode (these are illegal characters in a URL, after all). Since the entire state of Georgia gets about half of its electronic journal content from EBSCO, this was a really huge problem (which was fixed by dumping Mongrel in favor of lighttpd and FastCGI). These are the sorts of scenarios that caused the reference librarians to lose confidence.
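
For what it’s worth, a pre-filter could also have neutralized the bad characters in a few lines of Ruby. This is just a sketch of the idea (sanitize_openurl is a hypothetical helper, not the fix we actually shipped, which was the server swap): percent-encode the characters RFC 3986 forbids before the request reaches the app server.

    # Hypothetical pre-filter, not the fix actually deployed at GT:
    # percent-encode the '<' and '>' that are illegal in a URL, which
    # neutralizes EBSCO's trailing "<<SomeIdentifier".
    def sanitize_openurl(raw_query)
      raw_query.gsub('<', '%3C').gsub('>', '%3E')
    end

    sanitize_openurl('genre=article&issn=0028-0836<<SomeIdentifier')
    # => "genre=article&issn=0028-0836%3C%3CSomeIdentifier"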

JHU has the advantage of GT’s learning curve, so hopefully we can circumvent these sorts of problems. It’s still got to get faster, though.

Still, I’m happy. It’s good and refreshing to see the Ümlaut back in action.

2 comments
  1. [FYI, since I’m still getting referer hits to my blog from here, people are still reading this, so I’ll mention that we have, more or less, solved the performance problems. Umlaut is performing better than it ever has, and I believe its response time is now reasonable.

    At the moment, Umlaut typically adds about 0.8 seconds on top of however long the SFX API takes to respond, for a total response time of 1-4 seconds. Don’t get me wrong, I’d like it to be a lot faster, but SFX itself is definitely the main bottleneck at this point. When I have time, I can probably whittle that 0.8 seconds down further, but I’ve already picked the low-hanging fruit.

    Our Umlaut uptime is also pretty darn good. The service pretty much only goes down when SFX crashes; Umlaut itself has crashed, I think, once in over two years of use. SFX has crashed half a dozen times or more, alas. ]

  2. Make that a total response time of 2-5 seconds; with Umlaut’s ~0.8 second overhead, a 1-second total would require SFX to answer in 0.2 seconds, and I only wish SFX could deliver an API response that fast!
