User Perspectives on Relevance Criteria: A Comparison among Relevant, Partially Relevant, and Not-Relevant Judgments
dc.contributor.author | Maglaughlin, Kelly L. | |
dc.contributor.author | Sonnenwald, Diane H. | |
dc.date.accessioned | 2006-08-18T00:00:01Z | |
dc.date.available | 2010-06-18T23:19:03Z | |
dc.date.issued | 2002-03 | en_US |
dc.date.submitted | 2006-08-18 | en_US |
dc.identifier.citation | User Perspectives on Relevance Criteria: A Comparison among Relevant, Partially Relevant, and Not-Relevant Judgments. Journal of the American Society for Information Science and Technology, 2002-03, 53(5):327-342 | en_US |
dc.identifier.uri | http://hdl.handle.net/10150/105087 | |
dc.description.abstract | This study investigates the use of criteria to assess relevant, partially relevant, and not-relevant documents. Each study participant identified passages within 20 document representations that they used in making relevance judgments, judged each document representation as a whole to be relevant, partially relevant, or not relevant to their information need, and explained their decisions in an interview. Analysis revealed 29 criteria, discussed both positively and negatively, that participants used when selecting passages that contributed to or detracted from a document's relevance. These criteria can be grouped into six categories: author, abstract, content, full text, journal or publisher, and personal. Results indicate that multiple criteria are used when making relevant, partially relevant, and not-relevant judgments, and that most criteria can contribute either positively or negatively to a document's relevance. The criterion participants mentioned most frequently was content, followed by criteria concerning the full-text document. These findings may have implications for relevance feedback in information retrieval systems, suggesting that users be able to give relevance feedback using multiple criteria and to indicate whether each criterion contributes positively or negatively. System designers may want to focus on supporting content criteria, followed by full-text criteria, as this may provide the greatest cost benefit. | |
dc.format.mimetype | application/pdf | en_US |
dc.language.iso | en | en_US |
dc.publisher | Wiley Periodicals, Inc. | en_US |
dc.subject | Information Retrieval | en_US |
dc.subject | Information Seeking Behaviors | en_US |
dc.subject.other | Searching | en_US |
dc.subject.other | Search term selection | en_US |
dc.subject.other | Professional librarian | en_US |
dc.title | User Perspectives on Relevance Criteria: A Comparison among Relevant, Partially Relevant, and Not-Relevant Judgments | en_US |
dc.type | Journal Article (Paginated) | en_US |
dc.identifier.journal | Journal of the American Society for Information Science and Technology | en_US |
refterms.dateFOA | 2018-08-21T10:07:46Z | |