Overview
The rapid rise of digital and new media has transformed how people seek, share, and evaluate cancer information. This systematic review and meta-analysis examines empirical studies published between 2014 and 2023 that assess the quality of cancer-related information on social media platforms and AI chatbots. The goal is to understand how information quality has evolved, what factors influence assessments, and what patterns emerge across platforms, cancer types, and time.
Why new media matters for cancer information
New media, spanning text-based networks such as Twitter (X), Facebook, Reddit, and Instagram, video platforms such as YouTube and TikTok, and, most recently, AI chatbots, has become a major source of health information for patients, survivors, and caregivers. While these channels can improve health literacy and support decision-making, they can also propagate misinformation, encourage unproven treatments, and undermine trust and complicate care decisions. The review highlights the need to balance engagement with rigorous quality assessment as platforms evolve and new AI tools enter public discourse.
What the review covers
Inclusion criteria focused on peer-reviewed studies with empirical findings (qualitative, quantitative, or computational) addressing cancer-related information quality on new media, published in English between 2014 and 2023. Reviewers extracted study characteristics, the platforms analyzed, the cancer types covered, and the quality assessment tools used (e.g., DISCERN, PEMAT, GQS, and the JAMA benchmark criteria, JAMA-BC). Studies were synthesized to answer three primary questions: (1) how study characteristics and platforms have evolved, (2) what factors influence quality assessments, and (3) what patterns emerge in information quality across media and cancer types.
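To make this kind of extraction and synthesis concrete, the sketch below tabulates hypothetical extracted records (study, platform, cancer type, tool, normalized score) and summarizes mean quality scores by platform and by tool. All records, field names, and scores are illustrative assumptions, not data from the review.

```python
# Minimal sketch of tabulating extracted study records and summarizing
# quality scores by platform and assessment tool. All records, field names,
# and scores below are hypothetical placeholders, not data from the review.
from collections import defaultdict
from statistics import mean

# Hypothetical records: (study_id, platform, cancer_type, tool, score normalized to 0-1)
records = [
    ("S01", "YouTube", "breast",     "DISCERN", 0.52),
    ("S02", "YouTube", "prostate",   "GQS",     0.48),
    ("S03", "TikTok",  "skin",       "DISCERN", 0.41),
    ("S04", "Twitter", "colorectal", "JAMA-BC", 0.60),
    ("S05", "Chatbot", "breast",     "PEMAT",   0.67),
]

def summarize(rows, key_index):
    """Group normalized scores by the chosen field (e.g., platform or tool) and report the mean."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key_index]].append(row[-1])
    return {key: round(mean(scores), 2) for key, scores in groups.items()}

print("Mean score by platform:", summarize(records, 1))
print("Mean score by tool:", summarize(records, 3))
```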
Key findings on platforms and cancer types
The literature shows a shift from text- and image-based platforms to video-centric content, with YouTube and, more recently, TikTok receiving substantial attention. Across 75 studies, video-based content often achieved high engagement but tended to score lower on overall quality measures than text-based media. Generative AI content entered the discourse in 2023, with several studies reporting moderate- to high-quality AI outputs, though concerns about understandability and hallucinations persist.
Common cancers such as breast, prostate, skin, and colorectal cancers dominated the research landscape, while rarer cancers (e.g., gastric, thyroid, spinal) were studied less frequently and tended to yield lower quality assessments. Notably, content produced by medical professionals and institutions generally scored higher on quality tools like DISCERN, GQS, and JAMA benchmarks than content from nonprofessional sources.
Quality dimensions and what drives them
Across the reviewed studies, several quality dimensions repeatedly emerged as important: accuracy and misinformation, transparency and attribution, completeness of topics, understandability, actionability, and potential harm. Overall quality tended to be modest to moderate, with substantial heterogeneity across studies. Common weaknesses included insufficient sourcing, limited discussion of uncertainties, and limited coverage of adverse outcomes or risks. Actionable guidance and user-friendly presentation were frequently lacking, particularly in AI chatbot responses and short-form video content.
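To make the notion of heterogeneity concrete, the sketch below shows a standard DerSimonian-Laird random-effects pooling of study-level mean quality scores, with Cochran's Q and I² as heterogeneity measures. The study means, standard errors, and number of studies are hypothetical placeholders; the review's actual synthesis may use different inputs or models.

```python
# Minimal DerSimonian-Laird random-effects pooling sketch for mean quality
# scores. Study means, standard errors, and the number of studies are
# hypothetical placeholders, not results from the review.

# Hypothetical per-study mean DISCERN scores (0-5 scale) and their standard errors.
means = [2.8, 3.1, 2.4, 3.5, 2.9, 3.2]
ses   = [0.20, 0.25, 0.30, 0.15, 0.22, 0.18]

k = len(means)
variances = [se ** 2 for se in ses]

# Fixed-effect weights and pooled estimate.
w = [1.0 / v for v in variances]
pooled_fe = sum(wi * yi for wi, yi in zip(w, means)) / sum(w)

# Cochran's Q, between-study variance (tau^2), and I^2 heterogeneity.
Q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, means))
df = k - 1
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Random-effects weights and pooled mean.
w_re = [1.0 / (v + tau2) for v in variances]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, means)) / sum(w_re)

print(f"Pooled mean (random effects): {pooled_re:.2f}")
print(f"Cochran's Q: {Q:.2f}, tau^2: {tau2:.3f}, I^2: {I2:.1f}%")
```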
Implications and recommendations
For researchers: adopt consistent, validated tools (e.g., DISCERN, PEMAT, GQS, JAMA-BC) and pursue standardized reporting to enable robust meta-analyses. For content creators and health professionals: collaborate with medical experts to deliver accurate, accessible, and actionable information, especially on video-first platforms where reach is highest. For AI developers: address hallucinations, improve source citation, and enhance readability without sacrificing scientific accuracy. For clinicians: proactively inquire about patients’ use of new media to support informed decision-making and align discussions with patients’ information needs.
Limitations and future directions
The review is constrained by language (English-only studies), platform heterogeneity, and evolving platform policies that affect data collection. Heterogeneity across measurement tools and cancer types limits generalizability. Future work should expand to non-English literature, investigate Facebook and Instagram more deeply, and study a broader array of AI chatbots and health information tools to map information quality trajectories over time.
Conclusions
Cancer information on new media can support informed decision-making, but its overall quality remains moderate, with notable gaps in transparency, understandability, actionability, and risk communication. The medium matters: text-based sources often yield higher quality assessments, while video and AI-driven content require stronger quality controls. The findings underscore a public health imperative to improve information quality across platforms as digital health ecosystems continue to evolve.