Introduction
Patient experience is a core pillar of healthcare quality, reflecting how individuals perceive and respond to the care they receive. In China, policymakers have promoted the digital transformation of health services, including the deployment of artificial intelligence (AI)-driven conversational agents in outpatient settings. These chatbots can collect patient information (such as disease history, medications, and allergy data) before a consultation and present physicians with a structured summary to speed up history taking and improve diagnostic efficiency.
Previous work suggests that AI-assisted conversational agents can save time and improve satisfaction by streamlining information gathering and enhancing communication. Yet quantitative evidence linking such technology to validated measures of patient experience, especially the dimensions related to physicians, has been limited. This cross-sectional study aims to fill that gap by comparing physician-related outpatient experience between users and nonusers of AI-assisted conversational agents at tertiary public hospitals in economically developed regions of China.
Methods
Design and participants: The study surveyed adult residents who had visited the outpatient department of a tertiary public hospital within the previous two weeks. Using the Chinese Outpatient Experience Questionnaire, the researchers focused on four physician-related dimensions: physician-patient communication, health information, short-term outcome, and general satisfaction (19 items in total). The questionnaire also recorded whether participants had used an AI-assisted conversational agent during their visit.
Data collection: An electronic questionnaire was disseminated via a national data platform to a broad adult population. In total, 394 eligible responses were analyzed after quality screening. Respondents were classified as conversational agent users or nonusers based on self-report. The study captured demographic details and visit information to adjust for potential confounders.
Measures and analysis: Each item used a 5-point Likert scale, and the total physician-related patient experience score was obtained by aggregating the 19 items. Descriptive statistics summarized participant characteristics, and t-tests compared users with nonusers. A multiple linear regression model, adjusted for demographic and visit-related covariates and with Benjamini-Hochberg correction for multiple testing, assessed the independent association between conversational agent use and patient experience scores. The instrument's reliability (Cronbach's alpha) and validity had been established in previous research and were confirmed in the current study.
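As an illustration only, a minimal sketch of this analysis pipeline in Python (pandas, SciPy, statsmodels) is shown below. The data file, the item columns (item_1 to item_19), the agent_use indicator, and the covariate names are hypothetical placeholders rather than the study's actual variables, and the total score is shown here as a simple sum of the 19 items.

```python
# Minimal sketch of the analysis pipeline; the data file, column names, and
# covariate coding are hypothetical, not taken from the original study.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("outpatient_survey.csv")  # hypothetical survey export

# Total physician-related experience score: here, the sum of the 19 items,
# each rated on a 5-point Likert scale.
item_cols = [f"item_{i}" for i in range(1, 20)]
df["total_score"] = df[item_cols].sum(axis=1)

# Internal consistency (Cronbach's alpha) from item and total-score variances.
k = len(item_cols)
cronbach_alpha = k / (k - 1) * (
    1 - df[item_cols].var(ddof=1).sum() / df["total_score"].var(ddof=1)
)

# Unadjusted comparison of users vs nonusers (Welch t-test).
users = df.loc[df["agent_use"] == 1, "total_score"]
nonusers = df.loc[df["agent_use"] == 0, "total_score"]
t_stat, p_unadjusted = stats.ttest_ind(users, nonusers, equal_var=False)

# Adjusted association: multiple linear regression with demographic and
# visit-related covariates.
model = smf.ols(
    "total_score ~ C(agent_use) + age + C(sex) + C(education) + C(income)"
    " + C(self_rated_health) + C(physician_title)",
    data=df,
).fit()

# Benjamini-Hochberg (FDR) correction across the tested coefficients.
reject, p_adjusted, _, _ = multipletests(model.pvalues, method="fdr_bh")

print(f"Cronbach's alpha: {cronbach_alpha:.3f}")
print(f"t = {t_stat:.2f}, unadjusted P = {p_unadjusted:.4f}")
print(model.summary())
```

A Welch t-test is used for the unadjusted comparison because it does not assume equal variances between groups; the summary above does not state which t-test variant was actually applied, so this is an assumption of the sketch.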
Results
Participant characteristics showed that about half (53.0%) of respondents reported using an AI-assisted conversational agent during their outpatient visit. Users differed from nonusers in sex, education level, income, self-rated health, and physician title. Across all four physician-related dimensions and all 19 individual items, conversational agent users reported significantly higher scores than nonusers. Total physician-related patient experience scores were higher among users (P<.001), as were scores for physician-patient communication, health information, short-term outcome, and general satisfaction (P values ranging from <.001 to .006).
Adjusted analyses indicated that use of AI-assisted conversational agents was independently associated with better patient experience related to physicians (B=0.298, P=.013), corresponding to an approximately 7.5% higher overall score after controlling for other factors. Self-rated health status was also associated with experience, with individuals who rated their health as better reporting higher scores. The model explained about 25% of the variance in physician-related patient experience (R2=0.2554). No problematic collinearity was detected among predictors (VIF 1.15–2.51).
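The collinearity check can be illustrated with variance inflation factors, as in the hedged sketch below; it continues the hypothetical data frame df from the earlier sketch and assumes the categorical covariates have already been numerically (dummy) coded.

```python
# Collinearity check via variance inflation factors (VIF); relies on the
# hypothetical data frame `df` from the earlier sketch with numerically
# coded covariates.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

predictors = ["agent_use", "age", "sex", "education",
              "income", "self_rated_health", "physician_title"]
X = sm.add_constant(df[predictors].astype(float))

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
# Values roughly between 1 and 2.5 would mirror the range reported above.
print(vif.drop("const"))
```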
Discussion
The findings support the idea that AI-assisted conversational agents can meaningfully improve patient experience related to physicians during outpatient visits. The mechanisms may include enhanced pre-consultation information capture, allowing physicians to understand patient conditions more quickly and tailor inquiries, thereby improving communication efficiency and information accessibility. This, in turn, can positively influence short-term outcomes and overall satisfaction.
Compared with prior mobile health studies, the observed difference (about 7.5%) is notably larger, potentially reflecting the dual benefit of richer pre-visit data and more efficient in-visit dialogue. The study also highlights the potential for AI-assisted agents to strengthen the physician-patient relationship, particularly when consultation time is limited in crowded outpatient settings.
Policy and implementation implications include expanding AI chatbot deployment to more hospitals, integrating conversational agents with existing mobile health apps, and ensuring sustained funding, especially in less-developed regions. Regular training and user-friendly interfaces can help address digital literacy barriers and maximize impact on patient experience. Hospitals should also consider targeted outreach to patients with poorer self-rated health to reduce disparities in experience.
Limitations
The study relies on self-reported data covering a two-week recall window, which may introduce recall bias. The cross-sectional design cannot establish causality. Participant self-selection into conversational agent use may reflect unmeasured factors such as digital literacy and health attitudes. Despite statistical adjustment, residual confounding cannot be ruled out.
Conclusions
Evidence from this cross-sectional analysis suggests that the use of AI-assisted conversational agents during outpatient visits is associated with better patient experience related to physicians in China. By enhancing physician-patient communication, providing targeted health information, and improving perceived short-term outcomes and satisfaction, these tools hold promise for broader health system gains. Public hospitals are encouraged to expand AI chatbot use in outpatient departments, and governments should consider broader funding to support scalable adoption nationwide.