Introduction
Digital health research increasingly relies on smartphone-based surveys to monitor health trajectories in real time. While this approach offers timely data with reduced recall bias, maintaining engagement over months or years remains a core challenge, especially among older adults with chronic conditions. Previous studies have highlighted survey fatigue and higher dropout rates in mobile health research, underscoring the need for deployment strategies that preserve data quality and participant motivation.
Purpose and Trial Design
This randomized controlled trial (RCT), embedded within the electronic Framingham Heart Study (eFHS), evaluated whether distributing smartphone-administered surveys into smaller, more frequent batches would improve longitudinal response rates among older adults. Participants were randomized to receive half of the surveys every 2 weeks (experimental group) or all surveys every 4 weeks (control group) across four time periods over 32 weeks.
The study followed rigorous ethical standards, with approval from the Boston University Medical Campus Institutional Review Board and centralized randomization to balance characteristics such as age and phone type. All outcomes were analyzed on an intention-to-treat basis using mixed-effects models to account for repeated measures and intraclass correlations.
Methods in Brief
The MyDataHelps Designer platform delivered surveys at enrollment and at scheduled intervals. Participants answered a range of questions covering mood, sleep, pain, physical function, cognitive tasks, and lifestyle factors, with some task-based assessments (e.g., Trail Making Test) included for iPhone users. The primary outcome was the proportion of surveys returned per participant within each period; the secondary outcome was the proportion of questions or tasks completed within returned surveys.
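As an illustration of how the primary outcome could be computed (this is a sketch, not the study's actual analysis code), the per-participant, per-period return rate is simply surveys returned divided by surveys deployed, aggregated over that participant's records in the period. The record structure and field names below are hypothetical:

```python
def response_rate(records):
    """Proportion of deployed surveys returned, per (participant, period).

    records: list of dicts with hypothetical keys
    'pid', 'period', 'deployed', 'returned'.
    """
    totals = {}
    for r in records:
        key = (r["pid"], r["period"])
        deployed, returned = totals.get(key, (0, 0))
        totals[key] = (deployed + r["deployed"], returned + r["returned"])
    # Divide returned by deployed, skipping keys with no deployed surveys.
    return {k: ret / dep for k, (dep, ret) in totals.items() if dep > 0}

demo = [
    {"pid": 1, "period": 1, "deployed": 4, "returned": 3},
    {"pid": 1, "period": 1, "deployed": 4, "returned": 4},
]
print(response_rate(demo))  # {(1, 1): 0.875}
```

In the trial, these per-participant proportions were then modeled with mixed effects rather than averaged naively, so repeated measures from the same participant share a random effect.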
Sample size calculations targeted 240 participants per group, providing 83% power to detect meaningful differences in response slope over time. The analysis incorporated fixed effects for group, time, age, and phone type, plus a group-by-time interaction to assess deployment-pattern effects across the study duration.
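To give a feel for the power arithmetic, the sketch below implements a standard two-proportion power calculation under the normal approximation. Note this is a simplification for illustration only: the trial's own calculation targeted differences in response *slope* over repeated periods, which involves the mixed-model correlation structure, and the input rates here are hypothetical:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n_per_group, z_alpha=1.959963984540054):
    """Approximate power of a two-sided two-sample test of proportions
    (pooled SE under H0, equal group sizes). Inputs are illustrative."""
    p_bar = (p1 + p2) / 2.0
    se0 = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_group)  # SE under H0
    se1 = math.sqrt(p1 * (1.0 - p1) / n_per_group
                    + p2 * (1.0 - p2) / n_per_group)            # SE under H1
    delta = abs(p1 - p2)
    # Probability the test statistic exceeds the critical value in either tail.
    return (norm_cdf((delta - z_alpha * se0) / se1)
            + norm_cdf((-delta - z_alpha * se0) / se1))

# Hypothetical rates, 240 per group as in the trial's target.
print(round(power_two_proportions(0.76, 0.68, 240), 3))
```

A slope-based calculation would generally need fewer or more participants than this single-timepoint approximation suggests, depending on the within-participant correlation.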
Key Findings
The trial enrolled 492 participants (mean age 74, SD 6.3; 58% women). Across the four time periods, the smaller, more frequent batches yielded higher response rates than the larger, less frequent batches. From baseline to week 8, both groups returned roughly 75–76% of surveys; thereafter, the experimental group maintained higher completion, with between-group differences of 3%, 5%, and 8% across the subsequent periods. The secondary outcome mirrored this pattern, given the high correlation between response rate and item completion.
Dropout accumulated more in the control group by weeks 24–32 (38% vs. 28% in the experimental group). Sensitivity analyses, coding surveys as returned/not returned, supported the primary finding: the experimental deployment pattern slowed the decline in response over time (odds ratios increasing across periods). Subgroup analyses by age and sex did not show statistically significant three-way interactions, though the study acknowledged limited power for these exploratory tests.
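To make the dropout comparison concrete, an odds ratio with a Wald confidence interval can be computed from a 2x2 table. The counts below are reconstructed assumptions (roughly 246 per arm with the reported 38% vs. 28% dropout), not figures taken from the trial report:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and ~95% Wald CI for a 2x2 table:
    a = events in group 1,   b = non-events in group 1,
    c = events in group 2,   d = non-events in group 2."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: ~246 per arm, 38% (control) vs. 28% (experimental)
# dropout, as reported in the summary above.
print(odds_ratio_ci(93, 153, 69, 177))
```

This is descriptive arithmetic only; the trial's sensitivity analyses estimated period-specific odds ratios from a repeated-measures model, not from a single pooled table.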
Interpretation and Implications
The results suggest that in long-term digital health studies involving older adults, delivering surveys in smaller, more frequent batches can sustain participation better than batching all surveys together. Possible mechanisms include reduced cognitive burden per session, more consistent engagement through regular touchpoints, and improved alignment with participants’ daily routines. Importantly, the total survey burden remained constant between arms, indicating that deployment pattern, not workload, drove the differences observed.
These findings contribute to a nuanced understanding of engagement strategies in digital health research. While older adults may face barriers to technology adoption, well-designed survey deployment can enhance data continuity and study validity in this population.
Strengths and Limitations
Key strengths include the embedded, long-running eFHS framework, high participant familiarity with the study, and a 26–32 week follow-up window that provides insight into extended engagement. Limitations include potential generalizability constraints because the cohort is predominantly older, non-Hispanic White, English-speaking, and well-educated in a specific U.S. region. Additionally, some cognitive and physical assessments were platform-dependent (iOS), which may affect cross-device comparability.
Conclusions
In smartphone-based longitudinal health research among older adults, deploying surveys in smaller, more frequent batches sustains response rates more effectively over time than larger, less frequent batches. This deployment pattern offers a practical strategy to improve data quality and retention in digital health trials focused on aging populations.