Overview
Heart disease remains the leading cause of death for women in the United States. Despite public campaigns, awareness has declined in recent years, underscoring the need for scalable, effective education strategies. In response, researchers explored two distinct approaches to teach women about heart attack recognition and response: a human-delivered SMS education intervention and a fully automated AI chatbot called HeartBot. This article summarizes the quasi-experimental analysis comparing these two formats, their outcomes, and implications for future health communication.
Study Design and Context
The project comprised two phases conducted between 2022 and 2024. Phase 1 used a human interventionist (a master's-prepared cardiovascular nurse) to deliver two online SMS conversations over two days, focusing on heart attack symptoms and response. Phase 2 introduced HeartBot, a rule-guided SMS chatbot designed to deliver the same educational goals in a single session. Both phases recruited U.S. women aged 25 and older without a history of heart disease, with data collected via online surveys before and after the interventions.
Content development drew on clinical guidelines and American Heart Association materials. A Wizard of Oz setup helped refine HeartBot before full automation. Participants were incentivized with e-gift cards upon completing each study. The primary outcome assessed knowledge and awareness of heart attack symptoms and appropriate actions, with secondary measures evaluating user experience and conversational quality.
Key Outcomes: Knowledge and Awareness
Both delivery formats significantly increased participants’ knowledge and confidence in recognizing heart attack signs, distinguishing symptoms from other problems, calling emergency services, and reaching an emergency department promptly. In the human-delivered phase, odds of correctly answering each knowledge item were substantially higher than pre-intervention, indicating strong learning gains with more extended engagement. HeartBot also produced meaningful improvements, though the magnitudes were smaller, reflecting differences in conversation length, depth, and interactivity.
Specific findings showed the human-delivered intervention achieving the largest gains across all four knowledge questions, while HeartBot demonstrated significant improvements, particularly for recognizing signs and symptoms. An interaction analysis suggested that the human format generally outperformed HeartBot on most questions, with only a borderline difference for the item on calling an ambulance, where the gap narrowed (P = .09). Nonetheless, HeartBot's ability to improve knowledge supports the viability of AI-driven education as a scalable alternative when human resources are constrained.
User Experience and Conversational Quality
Beyond knowledge gains, researchers examined message effectiveness, perceived humanness, naturalness, and coherence. Across both studies, participants rated the human-delivered conversations higher on conversational quality measures. Interestingly, many participants could not reliably discern whether they were talking to a human or a chatbot, underscoring the growing sophistication of AI text-based health education tools. Yet, perceptions of humanness and naturalness favored human interactions, suggesting current AI limitations in fully replicating relational nuance and affect in health communication.
Implications for Design and Policy
These findings highlight a pragmatic takeaway: AI chatbots like HeartBot can meaningfully increase heart attack knowledge among women and offer scalable, around-the-clock access. However, human-delivered sessions, especially when spread across multiple encounters, may yield deeper learning and stronger perceived engagement. To maximize impact, future programs could combine multi-session AI coaching with occasional human-guided check-ins, or gradually increase HeartBot's conversational depth through adaptive learning and richer content selection.
Limitations and Future Research
The study is not a randomized controlled trial, which limits causal inference. Differences in session length, topic coverage, and incentives complicate attribution of effects to the delivery mode alone. The authors advocate future randomized trials to establish causal efficacy and to test longer-term retention and behavior change. Expanding recruitment beyond social media and ensuring representation of diverse populations will also improve generalizability.
Conclusions
Both the human-delivered intervention and the HeartBot chatbot can improve U.S. women's knowledge and awareness of heart attack symptoms and appropriate responses. HeartBot shows promise as a cost-effective, scalable educational tool, with room to grow through longer interactions, adaptive personalization, and iterative design improvements. Rigorous randomized testing will be essential to confirm efficacy and guide deployment in public health campaigns.