Publisher: The University of Arizona.
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract: Artificial Intelligence is being integrated into the recruiting and hiring systems of firms across the globe. These algorithms are designed to vet the resumes and applications of job candidates against what is considered the ideal candidate for the role. However, what happens when these systems work ineffectively or begin displaying human bias? How does our perception of Artificial Intelligence change based on its effectiveness? In this paper, we draw on previous literature to develop a predictive model of the behavioral constructs of trust, privacy, and self-efficacy. We then conduct an empirical study in which college students (future job applicants) react to a music-recommending AI. By exposing participants to an AI that is either ineffective or effective, we compare how the students' experience with the AI influences their perceived trust propensity, privacy concerns, and self-efficacy. The participants' perceptions of AI are primed in the context of hiring systems. We conclude our research with several regression analyses that examine the relationships between the behavioral constructs within our model. We then speculate on potential explanations for the findings and establish a basis for future behavioral research on perceptions of AI.
Degree Program: Management Information Systems