Addressing structural hurdles for metadata extraction from environmental impact statements
Author
Laparra, Egoitz
Binford‐Walsh, Alex
Emerson, Kirk
Miller, Marc L.
López‐Hoffman, Laura
Currim, Faiz
Bethard, Steven
Affiliation
School of Information, University of Arizona
School of Government and Public Policy, University of Arizona
James E. Rogers College of Law, University of Arizona
Department of Management Information Systems, University of Arizona
School of Information, University of Arizona
School of Natural Resources and the Environment, Udall Center for Studies in Public Policy, University of Arizona
Issue Date
2023-06-14
Keywords
Library and Information Sciences
Information Systems and Management
Computer Networks and Communications
Information systems
Metadata
Citation
Laparra, E., Binford‐Walsh, A., Emerson, K., Miller, M. L., López‐Hoffman, L., Currim, F., & Bethard, S. (2023). Addressing structural hurdles for metadata extraction from environmental impact statements. Journal of the Association for Information Science and Technology, 74(9), 1124-1139.
Publisher
Wiley
Abstract
Natural language processing techniques can be used to analyze the linguistic content of a document to extract missing pieces of metadata. However, accurate metadata extraction may depend not only on linguistic content but also on structural problems such as extremely large documents, unordered multi-file documents, and inconsistency in manually labeled metadata. In this work, we start from two standard machine learning solutions to extract pieces of metadata from Environmental Impact Statements, environmental policy documents that are regularly produced under the US National Environmental Policy Act of 1969. We present a series of experiments evaluating how these standard approaches are affected by issues that arise in real-world data. We find that metadata extraction can be strongly influenced by nonlinguistic factors such as document length and volume ordering, and that the standard machine learning solutions often do not scale well to long documents. We demonstrate how such solutions can be better adapted to these scenarios, and conclude with suggestions for other NLP practitioners cataloging large document collections.
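For readers unfamiliar with the scaling problem the abstract refers to, the sketch below illustrates one common workaround when a document exceeds a classifier's input limit: split the text into chunks, classify each chunk, and aggregate the chunk-level predictions. This is a minimal illustrative sketch only, not the approach evaluated in the paper; the chunk size, stride, majority-vote aggregation, and the `classify_chunk` function are all assumptions introduced here for illustration.

```python
from collections import Counter
from typing import Callable, List


def chunk_text(text: str, max_tokens: int = 512, stride: int = 256) -> List[str]:
    """Split a long document into overlapping chunks of whitespace tokens."""
    tokens = text.split()
    chunks = []
    for start in range(0, max(len(tokens), 1), stride):
        chunk = tokens[start:start + max_tokens]
        if chunk:
            chunks.append(" ".join(chunk))
    return chunks


def predict_document_label(text: str, classify_chunk: Callable[[str], str]) -> str:
    """Classify each chunk independently and take a majority vote.

    `classify_chunk` is a hypothetical stand-in for any chunk-level
    classifier (e.g., a fine-tuned text classifier); it is not part of
    the paper's method or released code.
    """
    votes = Counter(classify_chunk(chunk) for chunk in chunk_text(text))
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    # Toy usage with a dummy keyword-based classifier.
    dummy = lambda chunk: "forest" if "forest" in chunk else "other"
    print(predict_document_label("forest management plan " * 300, dummy))
```

The design choice illustrated here (independent chunk predictions plus aggregation) is one generic way to apply a fixed-input-length model to arbitrarily long documents; the paper itself examines how such standard solutions behave on long, multi-volume Environmental Impact Statements.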
Type
Article
Language
en
ISSN
2330-1635
EISSN
2330-1643
Sponsors
National Science Foundation
DOI
10.1002/asi.24809