A Theory-Driven Deep Learning Method for Voice Chat–Based Customer Response Prediction

Abstract
As artificial intelligence and digitalization technologies give rise to real-time, online interaction–based commercial modes, exploiting the purchase intention that customers reveal during online interactions may unlock huge business opportunities. In this study, we target the task of voice chat–based customer response prediction in an emerging online interaction–based commercial mode, the invite-online-and-experience-in-store mode. Prior research shows that satisfaction, which can be revealed by the discrepancy between prior expectation and actual experience, is a key factor for disentangling customers’ purchase intention, whereas black-box deep learning methods have empirically demonstrated strong capabilities for handling complex voice data, for example, the text and audio information contained in voice chats. To this end, we propose a theory-driven deep learning method that enables us to (1) learn customers’ personalized product preferences and dynamic satisfaction in the absence of their profile information, (2) model customers’ actual experiences based on multiview voice chat information in an interlaced way, and (3) enhance the customer response prediction performance of a black-box deep learning model with theory-driven dynamic satisfaction. Empirical evaluation results demonstrate the advantageous prediction performance of our proposed method over state-of-the-art deep learning alternatives. An investigation of cumulative satisfaction reveals the collaborative predictive roles of theory-driven dynamic satisfaction and deep representation features in customer response prediction. An explanatory analysis further yields insights into customers’ personalized preferences and dynamic satisfaction for key product attributes.

History: Yong Tan, Senior Editor; Jingjing Zhang, Associate Editor.

Funding: This work was supported in part by the National Natural Science Foundation of China [Grants 71971067 and 72271059] and the China Postdoctoral Science Foundation [Grant 2022M722394].

Supplemental Material: The online appendix is available at https://doi.org/10.1287/isre.2022.1196.
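
To make the abstract's core idea concrete, the sketch below is a minimal, hedged illustration rather than the paper's actual architecture: it shows how a theory-driven satisfaction signal, computed as the discrepancy between a learned expectation and the experience encoded from the text and audio views of a voice chat, might be fused with deep representation features for response prediction. All names (TheoryDrivenResponsePredictor, the GRU encoders) and dimensions are illustrative assumptions not taken from the paper.

# Minimal PyTorch sketch (illustrative assumptions only, not the paper's model):
# dynamic satisfaction = discrepancy between a learned expectation and the
# experience encoded from text and audio views, fused with deep features.
import torch
import torch.nn as nn


class TheoryDrivenResponsePredictor(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, hidden_dim=64):
        super().__init__()
        # Encode the two voice-chat views (text and audio) turn by turn.
        self.text_encoder = nn.GRU(text_dim, hidden_dim, batch_first=True)
        self.audio_encoder = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        # A learnable expectation vector stands in for the customer's prior
        # expectation, since no profile information is assumed to be available.
        self.expectation = nn.Parameter(torch.zeros(hidden_dim))
        # The classifier consumes deep features plus the satisfaction signal.
        self.classifier = nn.Linear(2 * hidden_dim + 1, 1)

    def forward(self, text_seq, audio_seq):
        # text_seq: (batch, turns, text_dim); audio_seq: (batch, turns, audio_dim)
        _, text_h = self.text_encoder(text_seq)
        _, audio_h = self.audio_encoder(audio_seq)
        experience = torch.cat([text_h[-1], audio_h[-1]], dim=-1)
        # Theory-driven satisfaction: negative discrepancy between the prior
        # expectation and the experience inferred from the chat so far.
        discrepancy = torch.norm(
            experience[:, : self.expectation.shape[0]] - self.expectation,
            dim=-1,
            keepdim=True,
        )
        satisfaction = -discrepancy
        logits = self.classifier(torch.cat([experience, satisfaction], dim=-1))
        return torch.sigmoid(logits)  # predicted probability of a positive response


if __name__ == "__main__":
    model = TheoryDrivenResponsePredictor()
    text = torch.randn(4, 10, 768)   # 4 chats, 10 turns, text embeddings per turn
    audio = torch.randn(4, 10, 128)  # matching audio embeddings per turn
    print(model(text, audio).shape)  # torch.Size([4, 1])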