Information Retrieval Evaluation

Author: Donna K. Harman
Publisher: Morgan & Claypool Publishers
Total Pages: 122
Release: 2011
Genre: Computers
ISBN: 9781598299717

Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture explains where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment of the search engine world today. It opens with a discussion of the early evaluation of information retrieval systems, starting with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed. The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), and INEX (emphasis on semi-structured data). Here again the focus is on the how and why, and in particular on how the older evaluation methodologies evolved to handle new information access techniques, including how the test collection techniques were modified and how the metrics were changed to better reflect operational environments. The final chapters look at evaluation issues in user studies -- the interactive part of information retrieval -- including a look at the search log studies done mainly by the commercial search engines. Here the goal is to show, via case studies, how the high-level issues of experimental design affect the final evaluations. Table of Contents: Introduction and Early History / "Batch" Evaluation Since 1992 / Interactive Evaluation / Conclusion

Introduction to Information Retrieval

Author: Christopher D. Manning, Prabhakar Raghavan, Hinrich Schütze
Publisher: Cambridge University Press
Total Pages: 135
Release: 2008-07-07
Genre: Computers
ISBN: 9781139472104

Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Slides and additional exercises (with solutions for lecturers) are also available through the book's supporting website to help course instructors prepare their lectures.

Evaluating Information Retrieval and Access Tasks

Author: Tetsuya Sakai, Douglas W. Oard, Noriko Kando
Publisher: Springer Nature
Total Pages: 225
Release: 2020
Genre: Electronic books
ISBN: 9789811555541

This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew to be the search engines that provide access to content on the World Wide Web, today's smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life. Key to the success of the NTCIR endeavor was the early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book; the chapters show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students--anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.

Methods for Evaluating Interactive Information Retrieval Systems with Users

Author: Diane Kelly
Publisher: Now Publishers Inc
Total Pages: 246
Release: 2009
Genre: Database management
ISBN: 9781601982247

Provides an overview of, and instruction in, the evaluation of interactive information retrieval systems with users.

Experiment and Evaluation in Information Retrieval Models

Author: K. Latha
Publisher: CRC Press
Total Pages: 518
Release: 2017-07-28
Genre: Computers
ISBN: 9781315392608

Experiment and Evaluation in Information Retrieval Models explores different algorithms for the application of evolutionary computation to the field of information retrieval (IR). As well as examining existing approaches to resolving some of the problems in this field, it critically evaluates results obtained by researchers in order to give readers a clear view of the topic. In addition, the book covers algorithmic solutions to problems in advanced IR concepts, including feature selection for document ranking, web page classification and recommendation, facet generation for document retrieval, duplication detection, and seeker satisfaction in question answering community portals. Written with students and researchers in the field of information retrieval in mind, this book is also a useful tool for researchers in the natural and social sciences interested in the latest developments in this fast-moving subject area. Key features: focusing on recent topics in information retrieval research, Experiment and Evaluation in Information Retrieval Models explores the following topics in detail: searching in social media; using semantic annotations; ranking documents based on facets; evaluating IR systems offline and online; the role of evolutionary computation in IR; document and term clustering; image retrieval; design of user profiles for IR; web page classification and recommendation; and relevance feedback approaches for document and image retrieval.

Online Evaluation for Information Retrieval

Author: Katja Hofmann, Lihong Li, Filip Radlinski
Publisher: Now Publishers Inc
Total Pages: 134
Release: 2016-06-07
Genre: Computers
ISBN: 1680831631

Provides a comprehensive overview of online evaluation for information retrieval. It shows how online evaluation is used for controlled experiments, segmenting them into experiment designs that allow absolute or relative quality assessments. It also includes an extensive discussion of recent work on data re-use and experiment estimation based on historical data.
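
The distinction between absolute and relative assessment is the key organizing idea here: an A/B test measures each system's quality on its own traffic, while an interleaved comparison merges two rankings into a single result list and infers a preference from which system's contributions get clicked. As a rough illustration only -- this is not code from the book, and the rankings, click positions, and function names are all invented for the example -- a minimal Python sketch of team-draft interleaving might look like this:

    import random

    def team_draft_interleave(ranking_a, ranking_b, length):
        """Merge two rankings by letting teams A and B take turns drafting documents."""
        interleaved, teams, seen = [], [], set()
        while len(interleaved) < length:
            made_progress = False
            # A coin flip per round decides which ranker drafts first,
            # removing the advantage of always picking first.
            for team, ranking in random.sample([("A", ranking_a), ("B", ranking_b)], 2):
                # The team contributes its highest-ranked document not yet placed.
                doc = next((d for d in ranking if d not in seen), None)
                if doc is not None and len(interleaved) < length:
                    interleaved.append(doc)
                    teams.append(team)
                    seen.add(doc)
                    made_progress = True
            if not made_progress:
                break  # both rankings are exhausted
        return interleaved, teams

    def credit_clicks(teams, clicked_slots):
        """Credit each click to the team that contributed the clicked slot."""
        wins = {"A": 0, "B": 0}
        for slot in clicked_slots:
            wins[teams[slot]] += 1
        return wins

    # Toy impression: the rankers disagree on ordering; the user clicks slots 0 and 2.
    docs, teams = team_draft_interleave(["d1", "d2", "d3", "d4"],
                                        ["d3", "d1", "d5", "d2"], length=4)
    print(docs, teams, credit_clicks(teams, clicked_slots=[0, 2]))

Aggregated over many impressions, the team receiving more clicks is inferred to be better. This yields a relative assessment of the two rankers, in contrast to the absolute per-system metrics an A/B test produces.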

Test Collection Based Evaluation of Information Retrieval Systems

Author: Mark Sanderson
Publisher: Now Publishers Inc
Total Pages: 143
Release: 2010-06-03
Genre: Computers
ISBN: 9781601983602

Use of test collections and evaluation measures to assess the effectiveness of information retrieval systems has its origins in work dating back to the early 1950s. In the nearly 60 years since that work started, the use of test collections has become the de facto standard of evaluation. This monograph surveys the research conducted and explains the methods and measures devised for the evaluation of retrieval systems, including a detailed look at the use of statistical significance testing in retrieval experimentation. It also reviews more recent examinations of the validity of the test collection approach and of the evaluation measures, and outlines trends in current research exploiting query logs and live labs. At its core, the modern-day test collection is little different from the structures that the pioneering researchers of the 1950s and 1960s conceived of. This tutorial and review shows that, despite its age, this long-standing evaluation method is still a highly valued tool for retrieval research.
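
The mechanics are easy to see on a toy example. Purely as an illustration -- this is not code from the monograph, and all query and document identifiers are invented -- the Python sketch below scores two hypothetical runs against a tiny set of relevance judgments using average precision, then compares them with a paired randomization test, one of the significance tests discussed in this literature.

    import random

    def average_precision(ranked_docs, relevant):
        """Mean of the precision values at the rank of each retrieved relevant document."""
        hits, precisions = 0, []
        for k, doc in enumerate(ranked_docs, start=1):
            if doc in relevant:
                hits += 1
                precisions.append(hits / k)
        return sum(precisions) / len(relevant) if relevant else 0.0

    def randomization_test(scores_a, scores_b, trials=10000, seed=0):
        """Two-sided paired randomization test on per-topic score differences.

        Under the null hypothesis the two systems are interchangeable, so each
        topic's pair of scores may be swapped at random.
        """
        rng = random.Random(seed)
        diffs = [a - b for a, b in zip(scores_a, scores_b)]
        observed = abs(sum(diffs) / len(diffs))
        extreme = 0
        for _ in range(trials):
            permuted = [d if rng.random() < 0.5 else -d for d in diffs]
            if abs(sum(permuted) / len(permuted)) >= observed:
                extreme += 1
        return extreme / trials

    # Toy test collection: relevance judgments (qrels) and two systems' rankings.
    qrels = {"q1": {"d1", "d4"}, "q2": {"d2"}}
    run_a = {"q1": ["d1", "d2", "d4"], "q2": ["d2", "d3", "d1"]}
    run_b = {"q1": ["d2", "d1", "d4"], "q2": ["d3", "d2", "d1"]}

    ap_a = [average_precision(run_a[q], qrels[q]) for q in qrels]
    ap_b = [average_precision(run_b[q], qrels[q]) for q in qrels]
    print("MAP A:", sum(ap_a) / len(ap_a))
    print("MAP B:", sum(ap_b) / len(ap_b))
    print("p-value:", randomization_test(ap_a, ap_b))

With only two topics the p-value is of course meaningless; real experiments average over dozens of topics, which is exactly why test collections ship with large topic sets.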

Information Retrieval Evaluation

Author: Donna Harman
Publisher: Springer Nature
Total Pages: 107
Release: 2022-05-31
Genre: Computers
ISBN: 9783031022760

Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture explains where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment of the search engine world today. It opens with a discussion of the early evaluation of information retrieval systems, starting with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed. The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), and INEX (emphasis on semi-structured data). Here again the focus is on the how and why, and in particular on how the older evaluation methodologies evolved to handle new information access techniques, including how the test collection techniques were modified and how the metrics were changed to better reflect operational environments. The final chapters look at evaluation issues in user studies -- the interactive part of information retrieval -- including a look at the search log studies done mainly by the commercial search engines. Here the goal is to show, via case studies, how the high-level issues of experimental design affect the final evaluations. Table of Contents: Introduction and Early History / "Batch" Evaluation Since 1992 / Interactive Evaluation / Conclusion