As a vital tool for human-computer interaction, artificial intelligence (AI) voice assistants have become an integral part of everyday life. However, their current use still raises a range of problems stemming from privacy violations. This study explores users' risk perceptions and the innovation resistance that arises from privacy concerns when using AI voice assistants. Descriptive statistics and correlation analysis were conducted in SPSS 21.0 to examine each variable, and the mediating and moderating effects were tested with the corresponding models in the PROCESS macro. The findings indicate that risk perception mediates the relationship between privacy violations and innovation resistance, elucidating the indirect pathway through which privacy concerns drive opposition to new technologies. Furthermore, anthropomorphism and informativeness moderate this relationship: both can mitigate the risks users perceive in AI voice assistants and thereby reduce innovation resistance. By focusing on user psychology, this study offers valuable insights for the development and improvement of AI voice assistants, underscoring the importance of addressing users' privacy and risk concerns.
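The mediation test described above (privacy violation → risk perception → innovation resistance) can be sketched in Python. This is a minimal illustration of a simple-mediation analysis with a percentile-bootstrap confidence interval for the indirect effect, in the spirit of PROCESS Model 4; the simulated data, variable names, and effect sizes are hypothetical and stand in for the study's survey scores, not its actual results.

```python
import numpy as np

def ols_coefs(predictors, outcome):
    """OLS coefficients with an intercept prepended: returns [b0, b1, ...]."""
    X = np.column_stack([np.ones(len(outcome))] + [np.asarray(p) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta

def indirect_effect(x, m, y):
    """a*b indirect effect: a from regressing M on X, b from regressing Y on M and X."""
    a = ols_coefs([x], m)[1]      # a-path: X -> M
    b = ols_coefs([m, x], y)[1]   # b-path: M -> Y, controlling for X
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, seed=1):
    """Percentile-bootstrap 95% CI for a*b (the approach PROCESS popularized)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        stats[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(stats, [2.5, 97.5])

# Hypothetical simulated data standing in for standardized survey scores.
rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)                        # perceived privacy violation
m = 0.5 * x + rng.normal(size=n)              # risk perception (mediator)
y = 0.6 * m + 0.2 * x + rng.normal(size=n)    # innovation resistance

ab = indirect_effect(x, m, y)
lo, hi = bootstrap_ci(x, m, y)
print(f"indirect effect a*b = {ab:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Mediation is supported when the bootstrap confidence interval for a*b excludes zero; a moderation test (e.g., for anthropomorphism or informativeness) would instead add a centered product term X*W to the regression.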