Methods for Evaluating Interactive Information Retrieval Systems with Users

Author: Diane Kelly
Publisher: Now Publishers Inc
Total Pages: 246
Release: 2009
Genre: Database management
ISBN: 9781601982247


Provides an overview of, and instruction in, the evaluation of interactive information retrieval systems with users.

Information Retrieval Evaluation

Author: Donna K. Harman
Publisher: Morgan & Claypool Publishers
Total Pages: 122
Release: 2011
Genre: Computers
ISBN: 9781598299717


Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture explains where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment of the search engine world today.

The lecture starts with a discussion of the early evaluation of information retrieval systems: the Cranfield testing in the early 1960s, the Lancaster "user" study for MEDLARS, and the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed. The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages) and INEX (emphasis on semi-structured data). Here again the focus is on the how and why, and in particular on the evolution of the older evaluation methodologies to handle new information access techniques, including how the test collection techniques were modified and how the metrics were changed to better reflect operational environments. The final chapters look at evaluation issues in user studies -- the interactive part of information retrieval -- including a look at the search log studies done mainly by the commercial search engines. Here the goal is to show, via case studies, how the high-level issues of experimental design affect the final evaluations.
Table of Contents: Introduction and Early History / "Batch" Evaluation Since 1992 / Interactive Evaluation / Conclusion

Interactive IR User Study Design, Evaluation, and Reporting

Author: Jiqun Liu, Chirag Shah
Publisher: Springer Nature
Total Pages: 75
Release: 2022-05-31
Genre: Computers
ISBN: 9783031023194


Since user study design has been widely applied in evaluation studies of search interactions and information retrieval (IR) systems, a deep reflection on and meta-evaluation of interactive IR (IIR) user studies is critical for sharpening the instruments of IIR research and improving the reliability and validity of the conclusions drawn from IIR user studies. To this end, we developed a faceted framework for supporting user study design, reporting, and evaluation, based on a systematic review of state-of-the-art IIR research papers recently published in several top IR venues (n=462). Within the framework, we identify three major types of research focus, extract and summarize facet values from specific cases, and highlight the under-reported user study components that may significantly affect the results of research. We then employ the faceted framework to evaluate a series of IIR user studies against their respective research questions, and explain the roles and impacts of the underlying connections and "collaborations" among different facet values. By bridging diverse combinations of facet values with the study design decisions made to address research problems, the faceted framework can shed light on IIR user study design, reporting, and evaluation practices, and help students and young researchers design and assess their own studies.

Interactive Information Seeking, Behaviour and Retrieval

Author: Ian Ruthven, Diane Kelly
Publisher: Facet Publishing
Total Pages: 337
Release: 2011
Genre: Computers
ISBN: 9781856047074


Information retrieval (IR) is a complex human activity supported by sophisticated systems. Information science has contributed much to the design and evaluation of previous generations of IR systems and to our general understanding of how such systems should be designed. Yet, owing to the increasing success and diversity of IR systems, many recent textbooks concentrate on the IR systems themselves and ignore the human side of searching for information. This book is the first text to provide an information science perspective on IR. Unique in its scope, the book covers the whole spectrum of information retrieval, including:

- history and background
- information behaviour and seeking
- task-based information searching and retrieval
- approaches to investigating information interaction and behaviour
- information representation
- access models
- evaluation
- interfaces for IR
- interactive techniques
- web retrieval, ranking and personalization
- recommendation, collaboration and social search
- multimedia: interfaces and access

Readership: Senior undergraduates and masters' level students of all information and library studies courses, and practising LIS professionals who need to better appreciate how IR systems are designed, implemented and evaluated.

Interactive Information Retrieval in Digital Environments

Author: Iris Xie
Publisher: IGI Global
Total Pages: 376
Release: 2008-04-30
Genre: Computers
ISBN: 9781599042428


"This book includes the integration of existing frameworks on user-oriented information retrieval systems across multiple disciplines; the comprehensive review of empirical studies of interactive information retrieval systems for different types of users, tasks, and subtasks; and the discussion of how to evaluate interactive information retrieval systems. "--Provided by publisher.

Information Retrieval Evaluation

Author: Donna Harman
Publisher: Springer Nature
Total Pages: 107
Release: 2022-05-31
Genre: Computers
ISBN: 9783031022760


Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture explains where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment of the search engine world today.

The lecture starts with a discussion of the early evaluation of information retrieval systems: the Cranfield testing in the early 1960s, the Lancaster "user" study for MEDLARS, and the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed. The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages) and INEX (emphasis on semi-structured data). Here again the focus is on the how and why, and in particular on the evolution of the older evaluation methodologies to handle new information access techniques, including how the test collection techniques were modified and how the metrics were changed to better reflect operational environments. The final chapters look at evaluation issues in user studies -- the interactive part of information retrieval -- including a look at the search log studies done mainly by the commercial search engines. Here the goal is to show, via case studies, how the high-level issues of experimental design affect the final evaluations.
Table of Contents: Introduction and Early History / "Batch" Evaluation Since 1992 / Interactive Evaluation / Conclusion

Special Issue: Evaluation of Interactive Information Retrieval Systems

Author: Pia Borlund, Ian Ruthven
Publisher: Unknown
Total Pages: 432
Release: 2007
Genre: Electronic Book
ISBN: OCLC:878688285


Test Collection Based Evaluation of Information Retrieval Systems

Author: Mark Sanderson
Publisher: Now Publishers Inc
Total Pages: 143
Release: 2010-06-03
Genre: Computers
ISBN: 9781601983602


Use of test collections and evaluation measures to assess the effectiveness of information retrieval systems has its origins in work dating back to the early 1950s. In the nearly 60 years since that work started, the use of test collections has become the de facto standard of evaluation. This monograph surveys the research conducted and explains the methods and measures devised for the evaluation of retrieval systems, including a detailed look at the use of statistical significance testing in retrieval experimentation. It also reviews more recent examinations of the validity of the test collection approach and of evaluation measures, and outlines trends in current research exploiting query logs and live labs. At its core, the modern-day test collection is little different from the structures that the pioneering researchers of the 1950s and 1960s conceived. This tutorial and review shows that, despite its age, this long-standing evaluation method is still a highly valued tool for retrieval research.