Editor: kieth 2019-07-04

For search engine researchers and developers, evaluation helps them compare systems and algorithms, find the bottlenecks, and improve the system's quality. Search engines are traditionally evaluated in the Cranfield paradigm. However, search engines have changed drastically since the Cranfield paradigm was proposed, so this methodology cannot do the job without change. First, modern search engines handle a large volume of queries and documents, so it is almost impossible to acquire a complete (information need, document) judgement set. Second, modern search engines do not just return users a list of result documents; they generally provide more features (e.g., document snippet generation, query suggestion), and the quality of these modules varies a lot across different search engines. In this paper, we investigate the problem of accurately and efficiently evaluating search engines with clickthrough data from users. We propose two kinds of techniques to evaluate search engines by users' clickthrough data. In the first method, we present the users a merged result list from two search engines, and use the clickthrough on this merged result to compare the performance of the two engines. In the second method, we present the result (of one or two search engines), estimate the document/snippet features from the clickthrough data, and then inte…
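The first method reads like an interleaving-style comparison: show the user one list merged from the two engines and credit each click to the engine that contributed the clicked document. The abstract does not say how the merging is done, so the Python sketch below uses team-draft interleaving purely as an illustration; the function names, the "A"/"B" labels, and the simple click-credit rule are assumptions, not the paper's actual procedure.

import random

def team_draft_interleave(ranking_a, ranking_b, rng=None):
    # Merge two ranked lists (engines "A" and "B") into one list shown to
    # the user, remembering which engine contributed each document so a
    # click on it can later be credited to that engine.
    rng = rng or random.Random()
    merged = []                 # interleaved result list
    team_of = {}                # doc -> "A" or "B"
    count = {"A": 0, "B": 0}    # documents contributed so far per engine

    def next_unseen(ranking):
        # Highest-ranked document of this engine not yet in the merged list.
        for doc in ranking:
            if doc not in team_of:
                return doc
        return None

    while True:
        cand_a, cand_b = next_unseen(ranking_a), next_unseen(ranking_b)
        if cand_a is None and cand_b is None:
            break
        # The engine that has contributed fewer documents picks next; a coin
        # flip breaks ties, which keeps the comparison unbiased in expectation.
        if cand_b is None or (cand_a is not None and
                              (count["A"] < count["B"] or
                               (count["A"] == count["B"] and rng.random() < 0.5))):
            merged.append(cand_a)
            team_of[cand_a] = "A"
            count["A"] += 1
        else:
            merged.append(cand_b)
            team_of[cand_b] = "B"
            count["B"] += 1
    return merged, team_of

def credit_clicks(team_of, clicked_docs):
    # Credit each click to the engine that contributed the clicked document.
    credit = {"A": 0, "B": 0}
    for doc in clicked_docs:
        if doc in team_of:
            credit[team_of[doc]] += 1
    return credit

# Toy usage: interleave two hypothetical rankings, then credit the clicks
# observed on the merged list back to the contributing engines.
merged, team_of = team_draft_interleave(["d1", "d2", "d3", "d4"],
                                        ["d3", "d5", "d1", "d6"],
                                        random.Random(0))
print(merged)
print(credit_clicks(team_of, ["d1", "d3", "d5"]))

In an interleaving comparison of this kind, the engine whose documents attract more credited clicks over many impressions is judged the better one.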
