The recent window is materially active
636 eligible papers appear in the current 30-day evidence window, compared with 153 in the prior 30 days. The busiest visible day is 2026-04-21 with 69 eligible papers.
Weekly trend brief
AI cancer-detection papers are concentrated in mammography, pathology slides, and deployment barriers. The current 30-day evidence window contains 636 eligible papers, 4.2x the prior 30-day window, with 633 abstract-backed papers available for a closer scan. Representative papers point to mammographic self-supervision, pathology foundation models, whole-slide metastasis detection, cytology classification, interpretable histopathology models, pathologist-AI interaction, and clinical implementation barriers.
633 of the 636 recent papers include abstracts, about 99.5% of the eligible set. That gives the brief enough signal for topic-specific commentary while keeping claims limited to paper metadata and representative titles.
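The window-over-window multiple and abstract coverage quoted in this brief follow directly from the reported counts; a minimal Python sketch of the arithmetic (counts taken from the text above):

```python
# Counts reported in the brief.
eligible_current = 636   # eligible papers, current 30-day window
eligible_prior = 153     # eligible papers, prior 30-day window
with_abstract = 633      # abstract-backed papers in the current window

# Window-over-window volume multiple and abstract coverage.
volume_ratio = eligible_current / eligible_prior
abstract_coverage = with_abstract / eligible_current

print(f"{volume_ratio:.1f}x prior-window volume")    # -> 4.2x prior-window volume
print(f"{abstract_coverage:.1%} abstract coverage")  # -> 99.5% abstract coverage
```

Rounding 636/153 = 4.157… to one decimal place gives the 4.2x figure; 633/636 is 99.5%, not quite 100%.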
The selected papers point toward mammographic self-supervision, pathology foundation models, whole-slide metastasis detection, cytology classification, interpretable histopathology models, pathologist-AI interaction, and clinical implementation barriers. That gives the brief a visible research direction rather than only a ranked list of recent papers.
8 representative papers span 6 sources, including 2 preprints that should be treated as preliminary.
Mammography and breast-cancer survival papers show the current evidence set is not just generic imaging AI.
Whole-slide, cytology, histopathology, and foundation-model papers make digital pathology a central thread.
Pathologist-AI interaction and implementation-barrier papers keep clinical workflow questions next to model-performance work.
Selected because it anchors a mammography, pathology, explainability, pathologist-workflow, or clinical-deployment question; this paper appears in arXiv (Cornell University) (2026) and is matched to AI in cancer detection.
Selected because it anchors a mammography, pathology, explainability, pathologist-workflow, or clinical-deployment question; this paper appears in arXiv (Cornell University) (2026) and is matched to AI in cancer detection. Treat as preliminary because it is marked as a preprint.
Selected because it anchors a mammography, pathology, explainability, pathologist-workflow, or clinical-deployment question; this paper appears in arXiv (Cornell University) (2026) and is matched to AI in cancer detection. Treat as preliminary because it is marked as a preprint.
Selected because it anchors a mammography, pathology, explainability, pathologist-workflow, or clinical-deployment question; this paper appears in Scientific Reports (2026) and is matched to AI in cancer detection.
Selected because it anchors a mammography, pathology, explainability, pathologist-workflow, or clinical-deployment question; this paper appears in Journal of Imaging (2026) and is matched to AI in cancer detection.
Selected because it anchors a mammography, pathology, explainability, pathologist-workflow, or clinical-deployment question; this paper appears in Archives of Pathology & Laboratory Medicine (2026) and is matched to AI in cancer detection.
Selected because it anchors a mammography, pathology, explainability, pathologist-workflow, or clinical-deployment question; this paper appears in Advances in Anatomic Pathology (2026) and is matched to AI in cancer detection.
Selected because it anchors a mammography, pathology, explainability, pathologist-workflow, or clinical-deployment question; this paper appears in PeerJ Computer Science (2026) and is matched to AI in cancer detection.