Practical Relevancy Testing

  • Naomi Dushay, Stanford University Libraries, ndushay@stanford.edu

Code4Lib 2011, Thursday 10 February, 11:00 - 11:20

Evaluating search result relevancy is difficult for any sizable amount of data, since human-vetted ideal search results are essentially non-existent. This is true even for library collections, despite dedicated librarians and their familiarity with the collections. So how can we evaluate whether search engine configuration changes (e.g., boosting, field analysis, search analysis settings) are an improvement? How can we ensure the results for query A don’t degrade while we try to improve the results for query B? Why yes, Virginia, automatable tests are the answer. This talk will show you how you can easily write these tests from your hidden goldmine of human-vetted relevancy rankings.
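As a rough illustration of the idea (not the talk’s own code), a relevancy regression test simply asserts that records a human has already judged to be the best matches for a query stay at, or near, the top of the results. The Python sketch below assumes a Solr index reachable over HTTP; the URL, queries, field names, and record ids are hypothetical placeholders.

```python
# Minimal sketch of automated relevancy tests against a Solr index.
# SOLR_URL, the queries, and the expected record ids are placeholders.
import requests

SOLR_URL = "http://localhost:8983/solr/collection1/select"

def top_result_ids(query, rows=10):
    """Return the ids of the first `rows` documents Solr returns for `query`."""
    params = {"q": query, "rows": rows, "fl": "id", "wt": "json"}
    resp = requests.get(SOLR_URL, params=params)
    resp.raise_for_status()
    return [doc["id"] for doc in resp.json()["response"]["docs"]]

def test_known_title_query_ranks_vetted_record_first():
    # A librarian has vetted that this record is the best match for the query,
    # so any configuration change that pushes it out of first place is a regression.
    assert top_result_ids("cooking vegetables")[0] == "expected_record_id"

def test_author_query_keeps_vetted_records_in_top_three():
    # Looser expectation: the vetted records may appear in any order,
    # but must remain within the first three results.
    top3 = top_result_ids("dushay naomi", rows=3)
    assert {"record_a", "record_b"} <= set(top3)
```

Run with a test runner such as pytest before and after each configuration change: a failing test flags exactly which vetted query/result pair regressed.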