Categories: Digital health and mobile surveys

Deploying Smartphone Surveys in Older Adults: A Randomized Trial of Batch Frequency and Longitudinal Response

Background and Rationale

Smartphone-based surveys offer a promising approach for real-time health data collection, especially in longitudinal studies. They enable frequent self-reports on mood, cognitive function, physical activity, and clinical experiences while reducing recall bias. Yet, maintaining engagement over months poses a major challenge, particularly for older adults who may face barriers to digital health technologies. A key question is how deployment patterns—the timing and batch size of survey requests—affect ongoing participation and data quality.

What the Trial Tested

This randomized controlled trial was embedded in the long-running electronic Framingham Heart Study (eFHS) cohort. Participants were randomly assigned to one of two deployment patterns for smartphone-administered surveys over multiple 8-week periods: (1) a more frequent, smaller-batch approach (half of the surveys every 2 weeks) or (2) a less frequent, larger-batch approach (all surveys every 4 weeks). Total survey burden was identical in both groups, isolating the effect of deployment pattern on response behavior.
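The equal-burden property of the two patterns can be illustrated with a toy scheduler. This is a sketch, not the trial's actual delivery logic; the survey names, arm labels, and exact week offsets are hypothetical assumptions.

```python
def schedule(arm, surveys, weeks=8):
    """Map week -> list of surveys due over one 8-week period.

    'biweekly': half of the survey list every 2 weeks (hypothetical arm label);
    'monthly' : the full list every 4 weeks.
    Both arms deliver the same total number of surveys per period.
    """
    half = len(surveys) // 2
    plan = {}
    if arm == "biweekly":
        for w in range(0, weeks, 2):
            # Alternate halves so each survey appears equally often.
            batch = surveys[:half] if (w // 2) % 2 == 0 else surveys[half:]
            plan[w] = list(batch)
    elif arm == "monthly":
        for w in range(0, weeks, 4):
            plan[w] = list(surveys)
    else:
        raise ValueError(f"unknown arm: {arm}")
    return plan
```

Counting deliveries in both plans over one period shows the total workload is the same; only the spacing and batch size differ.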

Methods in Brief

Participants were English-speaking adults with smartphone access, enrolled from the eFHS. Randomization was stratified by age and device type, with a simple couple-based approach to ensure spouses were assigned together. Outcomes focused on longitudinal response rates: (a) the proportion of surveys returned per participant in each time period and (b) the proportion of individual questions or tasks completed within returned surveys. A mixed-effects regression model assessed group differences over four defined time periods, adjusting for age, device type, and spouse clustering.
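The couple-based stratified randomization described above can be sketched as follows. This is a minimal illustration of the general technique, not the trial's actual procedure; the field names (`id`, `age_group`, `device`, `spouse_id`) and arm labels are hypothetical.

```python
import random

def randomize(participants, seed=2024):
    """Couple-aware stratified randomization (illustrative sketch).

    `participants`: list of dicts with hypothetical keys 'id',
    'age_group', 'device', and optionally 'spouse_id'. Spouse pairs
    are randomized as a single unit so both members land in the same
    arm; units are balanced within each (age_group, device) stratum
    by alternating arms over a shuffled order.
    """
    rng = random.Random(seed)
    # Merge spouse pairs into one unit, keyed by the smaller id.
    units = {}
    for p in participants:
        key = min(p["id"], p["spouse_id"]) if p.get("spouse_id") else p["id"]
        units.setdefault(key, []).append(p)
    # Bucket units by stratum (using the first member's covariates).
    strata = {}
    for members in units.values():
        stratum = (members[0]["age_group"], members[0]["device"])
        strata.setdefault(stratum, []).append(members)
    # Shuffle within each stratum, then alternate arms for balance.
    assignment = {}
    for unit_list in strata.values():
        rng.shuffle(unit_list)
        for i, members in enumerate(unit_list):
            arm = "biweekly" if i % 2 == 0 else "monthly"
            for p in members:
                assignment[p["id"]] = arm
    return assignment
```

Assigning pairs as single units keeps spouses in the same arm, which avoids within-household contamination between deployment schedules.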

Key Findings

The primary outcome favored the more frequent, smaller-batch deployment: over time, the experimental group maintained higher response rates than the control group. Response rates were similar in the initial period (approximately 75%), but the gap widened in later periods, reaching an 8% higher completion rate for the experimental group in the final period. The secondary outcome, the completeness of responses within returned surveys, correlated almost closely with the primary outcome, indicating that smaller, more frequent batches not only increased participation but also preserved data quality by reducing missing items.

Dropout also rose over time in both arms, but cumulative dropout was consistently lower in the experimental group (about 28%) versus the control group (about 38%) by the study’s end. Sensitivity analyses treating each survey as simply returned or not returned corroborated these patterns. Subgroup analyses by age (<75 vs. ≥75) and sex did not reveal significant three-way interactions, though the study had limited power for such tests. Overall, the results suggest that deploying surveys in smaller, more frequent batches can sustain longitudinal engagement among older adults without increasing total workload.
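The cumulative dropout comparison above can be made concrete with one common operationalization: a participant counts as dropped out in every period after their last returned survey. This is a hedged sketch of that definition, not the study's exact rule; the period indexing is an assumption.

```python
def cumulative_dropout(last_return_period, n_periods=4):
    """Cumulative dropout fraction by period (one operationalization).

    `last_return_period` maps participant id -> 1-based index of the
    last period in which they returned any survey; 0 means they never
    returned one. A participant counts as dropped out in every period
    after their last return.
    """
    n = len(last_return_period)
    return [
        sum(1 for last in last_return_period.values() if last < t) / n
        for t in range(1, n_periods + 1)
    ]
```

Applied per arm, the resulting curves can be compared period by period, mirroring the roughly 28% vs. 38% end-of-study figures reported above.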

Interpretation and Implications

Several mechanisms may explain why the small-batch, biweekly approach improved retention. Presenting fewer surveys per session reduces cognitive burden and fatigue, while more frequent touchpoints may reinforce a sense of ongoing participation and connection to the research team. Regular reminders likely help participants maintain a routine around survey completion, which is particularly important in aging populations who may experience fluctuating health status or competing priorities.

The findings add to the broader digital health literature by showing that deployment patterns can matter alongside better-studied factors such as survey length and incentives. For older adults in digital health research, strategies that balance contact frequency against per-session burden can improve adherence and data completeness, ultimately strengthening study validity.

Strengths and Limitations

Strengths include an embedded RCT design within a well-characterized, long-running cohort and robust longitudinal analysis under intention-to-treat principles. Limitations include a relatively homogeneous sample (older, predominantly non-Hispanic White, English-speaking adults), which may limit generalizability. The trial also focused on smartphone-based data collection and may not reflect experiences with other devices or with populations with lower digital literacy.

Future Directions

Future research should test deployment patterns across more diverse populations and health contexts, and explore additional engagement metrics such as time-to-survey completion and time spent per survey. Broadly, refining smartphone-based data collection strategies will help digital health studies capture higher-quality longitudinal data from older adults and other underrepresented groups.