Show simple item record

dc.contributor.author: Harootonian, S.K.
dc.contributor.author: Ekstrom, A.D.
dc.contributor.author: Wilson, R.C.
dc.date.accessioned: 2022-03-18T00:04:06Z
dc.date.available: 2022-03-18T00:04:06Z
dc.date.issued: 2022
dc.identifier.citation: Harootonian, S. K., Ekstrom, A. D., & Wilson, R. C. (2022). Combination and competition between path integration and landmark navigation in the estimation of heading direction. PLoS Computational Biology.
dc.identifier.issn: 1553-734X
dc.identifier.pmid: 35143474
dc.identifier.doi: 10.1371/journal.pcbi.1009222
dc.identifier.uri: http://hdl.handle.net/10150/663674
dc.description.abstract: Successful navigation requires the ability to compute one's location and heading from incoming multisensory information. Previous work has shown that this multisensory input comes in two forms: body-based idiothetic cues, from one's own rotations and translations, and visual allothetic cues, from the environment (usually visual landmarks). However, exactly how these two streams of information are integrated is unclear, with some models suggesting the body-based idiothetic and visual allothetic cues are combined, while others suggest they compete. In this paper we investigated the integration of body-based idiothetic and visual allothetic cues in the computation of heading using virtual reality. In our experiment, participants performed a series of body turns of up to 360 degrees in the dark with only a brief flash (300ms) of visual feedback en route. Because the environment was virtual, we had full control over the visual feedback and were able to vary the offset between this feedback and the true heading angle. By measuring the effect of the feedback offset on the angle participants turned, we were able to determine the extent to which they incorporated visual feedback as a function of the offset error. By further modeling this behavior we were able to quantify the computations people used. While there were considerable individual differences in performance on our task, with some participants mostly ignoring the visual feedback and others relying on it almost entirely, our modeling results suggest that almost all participants used the same strategy in which idiothetic and allothetic cues are combined when the mismatch between them is small, but compete when the mismatch is large. These findings suggest that participants update their estimate of heading using a hybrid strategy that mixes the combination and competition of cues. Copyright: © 2022 Harootonian et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
dc.language.iso: en
dc.publisher: Public Library of Science
dc.rights: Copyright © 2022 Harootonian et al. This is an open access article distributed under the terms of the Creative Commons Attribution License.
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Combination and competition between path integration and landmark navigation in the estimation of heading direction
dc.type: Article
dc.type: text
dc.contributor.department: McKnight Brain Institute, University of Arizona
dc.contributor.department: Cognitive Science Program, University of Arizona
dc.identifier.journal: PLoS Computational Biology
dc.description.note: Open access journal
dc.description.collectioninformation: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
dc.eprint.version: Final published version
dc.source.journaltitle: PLoS Computational Biology
refterms.dateFOA: 2022-03-18T00:04:06Z
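
The abstract above describes a hybrid cue-integration strategy in which body-based (idiothetic) and visual (allothetic) heading estimates are combined when the mismatch between them is small but compete when it is large. The Python sketch below illustrates that idea only; the visual weight, the mismatch threshold, and the function name are illustrative assumptions and are not the authors' fitted model.

```python
def hybrid_heading_estimate(idiothetic_deg, allothetic_deg,
                            w_visual=0.5, mismatch_threshold_deg=45.0):
    """Illustrative hybrid estimator (not the paper's actual model).

    Averages the two heading cues when their mismatch is small, and
    falls back to the self-motion (idiothetic) estimate alone when the
    mismatch is large.
    """
    # Signed angular mismatch between the visual feedback and the
    # path-integration estimate, wrapped to [-180, 180) degrees.
    mismatch = (allothetic_deg - idiothetic_deg + 180.0) % 360.0 - 180.0

    if abs(mismatch) <= mismatch_threshold_deg:
        # Combination regime: shift the idiothetic estimate toward the
        # visual feedback in proportion to the visual weight.
        return idiothetic_deg + w_visual * mismatch
    # Competition regime: the cues are treated as conflicting and one
    # of them (here, the idiothetic estimate) wins outright.
    return idiothetic_deg

# Small mismatch: cues are averaged.  Large mismatch: feedback is ignored.
print(hybrid_heading_estimate(90.0, 110.0))   # 100.0
print(hybrid_heading_estimate(90.0, 200.0))   # 90.0
```

In the sketch, the individual differences noted in the abstract (some participants mostly ignoring the visual feedback, others relying on it almost entirely) would correspond to varying w_visual between 0 and 1.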


Files in this item

Name: journal.pcbi.1009222.pdf
Size: 3.278 MB
Format: PDF
Description: Final Published Version

This item appears in the following Collection(s): UA Faculty Publications
