ISBN-13: 9783031011443 / English / Paperback / 2010 / 77 pp.
Many information retrieval (IR) systems suffer from wide variance in performance when responding to users' queries. Even for systems that succeed very well on average, the quality of the results returned for some queries is poor. It is therefore desirable for IR systems to be able to identify "difficult" queries so they can be handled properly. Understanding why some queries are inherently more difficult than others is essential for IR, and a good answer to this important question would help search engines reduce the variance in their performance and thus better serve their users' needs. Query difficulty estimation is an attempt to quantify the quality of the search results retrieved for a query from a given collection of documents. This book discusses the reasons search engines fail on some queries and then reviews recent approaches to estimating query difficulty in the IR field. It then describes a common methodology for evaluating the prediction quality of these estimators, and reports experiments with some of the predictors applied by various IR methods over several TREC benchmarks. Finally, it discusses potential applications that can utilize query difficulty estimators by handling each query individually and selectively, based upon its estimated difficulty.

Table of Contents: Introduction - The Robustness Problem of Information Retrieval / Basic Concepts / Query Performance Prediction Methods / Pre-Retrieval Prediction Methods / Post-Retrieval Prediction Methods / Combining Predictors / A General Model for Query Difficulty / Applications of Query Difficulty Estimation / Summary and Conclusions
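
As a concrete illustration of the pre-retrieval family of predictors surveyed in the book, the sketch below computes the average inverse document frequency (IDF) of the query terms over a toy collection: queries made up of rare, discriminative terms tend to score higher, while queries of very common terms score near zero and are more likely to perform poorly. This is a minimal, assumed example for illustration only; the toy corpus, function names, and the choice of average IDF as the predictor are not taken from the book itself.

```python
import math
from collections import Counter

# Toy document collection; in practice this would be a TREC corpus.
DOCS = [
    "information retrieval systems rank documents for a query",
    "query difficulty estimation predicts retrieval effectiveness",
    "search engines return documents matching the user query",
]


def build_df(docs):
    """Count, for each term, the number of documents containing it."""
    df = Counter()
    for doc in docs:
        df.update(set(doc.split()))
    return df


def avg_idf(query, df, n_docs):
    """Average IDF of the query terms: a simple pre-retrieval predictor.

    Higher values suggest a more discriminative query; values near zero
    suggest the query terms are too common to separate relevant documents
    from the rest of the collection.
    """
    terms = query.split()
    if not terms:
        return 0.0
    idfs = [math.log(n_docs / (1 + df[t])) for t in terms]
    return sum(idfs) / len(idfs)


if __name__ == "__main__":
    df = build_df(DOCS)
    for q in ["difficulty estimation", "the query"]:
        print(f"{q!r} -> predicted difficulty score {avg_idf(q, df, len(DOCS)):.3f}")
```

Evaluating such a predictor, as the book's methodology chapter describes, typically means correlating its scores with an actual effectiveness measure (e.g., average precision) over a set of benchmark queries.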