Categories: Digital health / Mobile health

Smartphone Survey Deployment Patterns and Response Rates in Older Adults: An RCT

Introduction

Smartphone-enabled data collection is transforming health research by enabling frequent, real-time assessments. In digital health studies, however, maintaining engagement over time is a persistent challenge, especially among older adults, who may face usability and cognitive barriers. A growing body of evidence suggests that survey deployment patterns (how often surveys are delivered and how many are bundled into each batch) can influence longitudinal response rates. This article summarizes a randomized controlled trial (RCT) embedded in the electronic Framingham Heart Study (eFHS) that tested whether distributing smartphone-administered surveys into smaller, more frequent batches would sustain higher response rates in an older population.

Trial Design and Setting

The study (ClinicalTrials.gov NCT04752657) was conducted within the eFHS, a digital extension of the Framingham Heart Study. Participants' decades-long relationship with this long-term cohort offered a stable environment in which to assess engagement with smartphone-based surveys. Participants were randomized to one of two deployment patterns across four time periods (baseline to week 8, weeks 8–16, 16–24, and 24–32):

  • Experimental group: half of the surveys every 2 weeks (more frequent, smaller batches).
  • Control group: all surveys every 4 weeks (less frequent, larger batches).

Both groups received the same total number of surveys within each period, isolating the effect of deployment pattern on response behavior. The trial enrolled 492 participants (mean age 74 years; 58% women), most of whom were White and well-educated. All participants owned a smartphone and consented to app-based data collection.
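The two delivery cadences within one 8-week period can be sketched concretely. In this illustrative Python sketch, the survey names and the four-survey batch size are assumptions, not the actual eFHS instrument set:

```python
# Hypothetical sketch of the two deployment cadences within one 8-week
# period. Survey names and batch size are illustrative assumptions.

def schedule(surveys, pattern):
    """Map week offset -> surveys delivered, for one 8-week period.

    "split": half of the survey set every 2 weeks (experimental arm).
    "batch": the full survey set every 4 weeks (control arm).
    """
    if pattern == "split":
        half = len(surveys) // 2
        return {0: surveys[:half], 2: surveys[half:],
                4: surveys[:half], 6: surveys[half:]}
    if pattern == "batch":
        return {0: list(surveys), 4: list(surveys)}
    raise ValueError(f"unknown pattern: {pattern}")

surveys = ["mood", "sleep", "pain", "fatigue"]  # illustrative names only
split = schedule(surveys, "split")
batch = schedule(surveys, "batch")

# Both arms receive the same total number of surveys per period;
# only the batching differs.
assert sum(len(v) for v in split.values()) == sum(len(v) for v in batch.values())
```

The assertion makes the design point explicit: total burden is identical across arms, so any difference in response behavior is attributable to batching alone.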

Methods and Outcomes

Survey content included short-form patient-reported outcomes on mood, fatigue, sleep, physical function, pain, and cognitive measures, along with task-based assessments like the Trail Making Test and gait tasks. The primary outcome was the proportion of surveys returned per participant within each period (including partial completions when eligible). Secondary outcomes captured the proportion of questions answered within returned surveys, offering a more granular view of engagement and survey completeness.
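In code, both outcomes reduce to simple proportions. A minimal sketch, assuming a set-based record of deployed and returned surveys (the data structures and names here are hypothetical, not the trial's actual pipeline):

```python
# Minimal sketch of the trial's two outcome definitions.
# Data structures and names are assumptions for illustration.

def survey_return_rate(deployed, returned):
    """Primary outcome: proportion of deployed surveys returned in a period."""
    return len(returned & deployed) / len(deployed) if deployed else 0.0

def question_completeness(answered, asked):
    """Secondary outcome: proportion of questions answered within returned surveys."""
    return answered / asked if asked else 0.0

deployed = {"mood_w0", "sleep_w0", "pain_w4", "fatigue_w4"}
returned = {"mood_w0", "pain_w4", "fatigue_w4"}

primary = survey_return_rate(deployed, returned)        # 3 of 4 -> 0.75
secondary = question_completeness(answered=22, asked=24)
```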

Analyses used mixed-effects regression to compare outcomes between groups over time, adjusting for age, phone type (iPhone vs Android), and spousal pairing. Intention-to-treat principles guided all analyses, with sensitivity analyses using a binary return variable corroborating the primary findings.
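A between-group contrast can be illustrated descriptively from per-participant rates. This stdlib sketch computes a simple difference in group means using toy values (not trial data); it does not reproduce the published mixed-effects regression or its covariate adjustment:

```python
# Descriptive group comparison within one period (toy data, not trial data).
# The trial's actual analysis used mixed-effects regression, which this
# stdlib sketch does not replicate.
from statistics import mean

# Per-participant survey return rates within one period (illustrative).
experimental = [0.60, 0.50, 0.64]
control = [0.50, 0.45, 0.55]

diff = mean(experimental) - mean(control)
print(f"difference in mean return rate: {diff:+.1%}")
# prints "difference in mean return rate: +8.0%"
```

A mixed-effects model would additionally account for repeated measures within participants across the four periods, which a raw difference in means ignores.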

Key Findings

The trial found that the experimental group exhibited higher and more durable response rates over time. In early weeks, both groups performed similarly (roughly 75% of surveys returned). By weeks 8–16, the difference widened to about 3 percentage points (70% vs 67%), and by weeks 24–32 the experimental group held an absolute advantage of about 8 percentage points (58% vs 50%). The pattern was mirrored in the secondary outcome (questions completed per participant), which tracked closely with survey return rates, underscoring the link between deployment pattern and overall engagement.

Notably, dropout rose over time in both groups but remained consistently lower in the experimental arm (28% by week 34 vs 38% in the control group). Post-hoc subgroup analyses by age and sex did not reveal statistically significant interactions, though power for subgroup testing was limited. Across age strata and in both women and men, the more frequent, smaller-batch approach maintained higher response rates in later periods.

Implications for Digital Health Research

For researchers conducting longitudinal digital health studies with older adults, deploying surveys in smaller, more frequent batches can buffer against rapid declines in engagement. The trial suggests that regular, manageable bursts of surveying reduce fatigue and keep participants connected to the study routine. Importantly, total survey burden was held constant between strategies, indicating that deployment pattern alone can meaningfully influence data completeness without increasing participant workload.

These findings have practical implications for decentralized trials and real-world health monitoring through smartphones. They point toward a design principle: align survey cadence with human factors—habit formation, attention, and routine—especially when participants are older or managing chronic conditions.

Strengths, Limitations, and Future Directions

The study leverages a well-characterized, trusted cohort with decades of engagement, which strengthens external validity for similar populations. Limitations include a predominantly White, educated sample, which may limit generalizability to more diverse groups. Future work should test deployment patterns across varied demographics and health conditions, and examine additional engagement metrics such as time-to-complete and user experience, to refine best practices for digital health data collection in older adults.

Conclusion

In this randomized trial within the eFHS framework, distributing surveys into smaller batches every two weeks markedly improved and sustained response rates among older adults compared with delivering all surveys every four weeks. These results support adopting deployment patterns that minimize batch size and maintain regular touchpoints to optimize longitudinal data quality in smartphone-based health research.