Alternatives to Bpref
Tetsuya Sakai
Fast Generation of Result Snippets in Web Search
Andrew Turpin, Yohannes Tsegay, David Hawking
HITS hits TREC: Exploring IR evaluation results with network analysis
Stefano Mizzaro, Stephen Robertson
Query Performance Prediction in Web Search Environments
Yun Zhou, Bruce Croft
Reliable Information Retrieval Evaluation with Incomplete and Biased Judgements
Stefan Buettcher, Charles Clarke, Peter Yeung, Ian Soboroff
FRank: A Ranking Method with Fidelity Loss
Ming-Feng Tsai, Tie-Yan Liu, Tao Qin, Hsin-Hsi Chen, Wei-Ying Ma
How well does result relevance predict session satisfaction?
Scott Huffman, Michael Hochster
Strategic System Comparisons via Targeted Relevance Judgments
Alistair Moffat, William Webber, Justin Zobel
Feature Selection for Ranking
Xiubo Geng, Tie-Yan Liu, Tao Qin
Building Simulated Queries for Known-Item Topics: An Analysis using Six European Languages
Leif Azzopardi, Maarten de Rijke, Krisztian Balog
Deconstructing Nuggets: The Stability and Reliability of Complex Question Answering Evaluation
Jimmy Lin, Pengyi Zhang
Investigating the Querying and Browsing Behavior of Advanced Search Engine Users
Ryen White, Dan Morris
Towards Task-based PIM Evaluations
David Elsweiler, Ian Ruthven
Test Theory for Assessing IR Test Collections
David Bodoff, Pu Li
Supporting Multiple Information Seeking Strategies in a Single System Framework
Xiaojun Yuan, Nicholas Belkin
Robust Evaluation of Information Retrieval Systems
Ben Carterette
On the Robustness of Relevance Measures with Incomplete Judgments
Tanuja Bompada, Chi-Chao Chang, John Chen, Ravi Kumar, Rajesh Shenoy
Studying the Use of Popular Destinations to Enhance Web Search Interaction
Ryen White, Mikhail Bilenko, Silviu Cucerzan
Estimation and Use of Uncertainty in Pseudo-relevance Feedback
Kevyn Collins-Thompson, Jamie Callan
Enhancing Relevance Scoring With Chronological Term Rank
Adam Troy, Guo-Qiang Zhang
Learn from Web Search Logs to Organize Search Results
Xuanhui Wang, ChengXiang Zhai
Using Query Contexts in Information Retrieval
Jing Bai, Jian-Yun Nie, Hugues Bouchard, Guihong Cao
User-Oriented Text Segmentation Evaluation Measure
Martin Franz, J. Scott McCarley, Jian-Ming Xu
Recommending Citations for Academic Papers
Trevor Strohman, Bruce Croft, David Jensen
Understanding the Relationship of Information Need Specificity to Search Query Length
Peter Bailey, Nina Phan, Ross Wilkinson
Intra-assessor consistency in question answering
Ian Ruthven, Leif Azzopardi, Mark Baillie, Ralf Bierig, Emma Nicol, Simon Sweeney, Murat Yakici
Automatic Classification of Web Pages into Bookmark Categories
Chris Staff, Ian Bugeja
Validity and Power of t-Test for Comparing MAP and GMAP
Gordon Cormack, Thomas Lynam
The Relationship between IR Effectiveness Measures and Users' Satisfaction
Azzah Al-Maskari, Mark Sanderson, Paul Clough
A Comparison of Pooled and Sampled Relevance Judgments
Ian Soboroff
Clustering Short Texts using Wikipedia
Somnath Banerjee, Krishnan Ramanathan, Ajay Gupta
Enhancing Patent Retrieval by Citation Analysis
Atsushi Fujii
Finding Similar Experts
Krisztian Balog, Maarten de Rijke
Generative modeling of persons and documents for expert search
Pavel Serdyukov, Maarten Fokkinga, Peter Apers
Comparing Query Logs and Pseudo-Relevance Feedback for Web-Search Query Refinement
Ryen White, Charles Clarke, Silviu Cucerzan
Problems with Kendall's Tau
Mark Sanderson, Ian Soboroff
Opinion Holder Extraction from Author and Authority Viewpoints
Yohei Seki
Heads and Tails: Studies of Web Search with Common and Rare Queries
Doug Downey, Susan Dumais, Eric Horvitz
Viewing Online Searching as a Learning Paradigm
Bernard Jansen, Brian Smith, Danielle Booth
Effects of highly agreed documents in relevancy prediction
Hideo Joho, Andres Masegosa, Joemon Jose
ISKODOR: Unified User Modeling for Integrated Searching
Melanie Gnasa, Armin B. Cremers, Douglas Oard
DiscoverInfo: A Tool for Discovering Information with Relevance and Novelty
Chirag Shah, Gary Marchionini
A "Do-It-Yourself" Evaluation Service for Music Information Retrieval Systems
M. Cameron Jones, Mert Bay, J. Stephen Downie
Relevance to SIGIR      : 4 5 6 4
Originality of work     : 4 3 3 2
Impact of results       : 4 4 2 2
Quality of arguments    : 4 4 3 2
Quality of presentation : 3 4 3 2
Confidence in review    : 4 3 3 4
Overall recommendation  : 4 4 3 2