
Published in Vol 28 (2026)

Our AI-Powered Discoveries Are Trapped in a Predigital System

Authors of this article:

Boon-How Chew, JMIR Correspondent

Key Takeaways

  • The academic publishing system’s foundational issues—speed, cost, and reproducibility—are being met with a chaotic array of fragmented artificial intelligence (AI) tools that only patch symptoms rather than solve the core problem.
  • A fundamental shift to an integrated, data-native ecosystem is required to ensure the trustworthy and rapid translation of digital health discoveries into practice.

Dr Boon-How Chew is an academic physician and professor of family medicine at Universiti Putra Malaysia, specializing in the psychosocial and technological aspects of chronic disease management. His research increasingly explores the intersection of artificial intelligence (AI) innovation in medicine and the reform of scientific writing workflows within legacy academic infrastructures. He is actively developing models for AI-verified publishing to address systemic delays and challenges to scientific integrity and reproducibility in biomedical research.

The world of digital health is electric with the promise of AI. From AI-driven diagnostic tools that detect invasive breast cancers earlier [1] and democratize retinal disease screening in high-risk populations [2] to large language models accelerating drug discovery [3,4], we are at the dawn of an unprecedented era of innovation. We are generating health data at a staggering rate and developing algorithms that can turn those data into potentially lifesaving insights faster than ever before.

But this incredible engine of discovery is hitting a 17th-century bottleneck: the academic publishing system [5]. It is like we are pouring rocket fuel into a horse-drawn carriage. As a clinical academic, I see a perilous and growing chasm between the speed at which we can generate evidence and the glacial pace at which we can formally validate, share, and trust it. This great delay is no longer just an academic frustration; it is becoming a direct threat to patient care and the entire promise of a data-driven biomedical ecosystem.

While the rise of preprint servers has commendably solved some problems of immediate dissemination, the formal validation and publication process remains a significant bottleneck [6]. The subsequent journey from a preprint to a peer-reviewed, recognized publication still involves an agonizing delay, often extending from 12 to 18 months [5]. For digital health, where technology can become obsolete in a single year, this means the evidence base is consistently lagging behind the innovation curve.

This obsolescent system is governed by an economic model that creates profound access and equity issues [7]. Top-tier research universities report annual subscription expenditures of US $10 million to $15 million or more [8], while author-facing article processing charges at prestigious journals range from US $5000 to over US $11,000 per article [9], creating an unsustainable dual financial burden.

Beyond speed and cost, the very foundation of our scientific evidence faces new and ongoing threats. The scholarly record is plagued by growing research integrity problems, with an exponential rise in cases ranging from gift authorship and peer-review rings to outright data fabrication by organized paper mills [10]. This is compounded by the well-documented reproducibility crisis: estimates of the share of published findings that cannot be reproduced range widely, from 50% to 90%, across disciplines and measures [11,12].

In my view, a core constraint underlying these issues is the system’s primary output: the static, text-based article. This opaque narrative summary functionally decouples an author’s claims from the underlying data and analytical methods, making verification challenging. For digital health, the stakes are particularly high. The black box of a clinical AI model cannot be built on the black box of a nonreproducible study.
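To make this decoupling concrete, the sketch below (plain Python, with hypothetical file names and a hypothetical claim; no journal currently works this way) shows how a stated finding could be cryptographically bound to the exact dataset and analysis script behind it:

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file; any later edit to the file changes the digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical files standing in for a study's dataset and analysis code.
claim_record = {
    "claim": "Model sensitivity exceeds standard double reading",
    "dataset_sha256": fingerprint("trial_data.csv"),
    "analysis_sha256": fingerprint("analysis.py"),
    "registered_at": datetime.now(timezone.utc).isoformat(),
}

# Anyone holding the same files can recompute the digests and verify
# that the published claim refers to exactly this data and this code.
print(json.dumps(claim_record, indent=2))
```

A static PDF provides no such binding: once the narrative is typeset, the reader has no machine-checkable way to confirm that the claim, the data, and the code have not drifted apart.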

In response to this crisis, a chaotic ecosystem of AI tools has emerged. Although many general-purpose large language models already offer deep research functionality, the result is a patchwork of point solutions that adds complexity without addressing the systemic failures.

For Researchers

There is a proliferation of AI “super-assistants” and a wave of platforms aiming to centralize the fragmented authoring process. Tools like Paperpal, WriteSonic, SciSpace, Prism by OpenAI, and SciTeX describe themselves as comprehensive AI-powered platforms designed to streamline the entire research workflow, from literature review to drafting and paraphrasing [13]. Similarly, tools like TERA (The Evidence Review Accelerator), EPPI-Reviewer, Covidence, RobotReviewer, Perplexity, and Elicit support systematic review of the literature [14], and semantic mapping platforms like ResearchRabbit, Litmaps, Semantic Scholar, and Iris.ai help pinpoint underexplored niches [14]. The data analysis phase is increasingly empowered by natural language assistants like Julius AI, AskVi.ai, Vizly, and Tableau GPT for intuitive statistical interpretation and visualization of complex datasets [14].

While these platforms may boost individual productivity, they are nonetheless designed to optimize the creation of the traditional static manuscript. They help authors write papers faster, but they do not change the fact that the final output is noninteractive, decoupled from its data, and subject to the same slow, opaque peer-review process on the publishers’ side.

For Publishers

Major publishers are embedding AI into their legacy workflows as a defensive measure. Elsevier’s Reviewer Recommender and Springer Nature’s SNAPP system use AI to speed up administrative tasks like prescreening content and finding reviewers [15]. Frontiers uses AIRA (Artificial Intelligence Review Assistant) for automated quality and image checks, while Taylor & Francis integrates Reviewer Locator, Imagetwin, and Paperpal for matching and editing. IEEE uses Publication Recommenders and misconduct prescreening, and MDPI uses AI for reviewer selection and generative AI safeguards. While these tools accelerate the traditional assembly line, they do not address its fundamental flaws.

Future Tools

Besides Sakana.ai, Google’s AI coscientist, and Microsoft Discovery, ambitious new projects are emerging from startups and universities. Keiji AI’s TrialMind and Biorce’s Aika aim to create AI agents that assist with clinical research design, and Stanford’s Agentic Reviewer offers AI-powered feedback on a submitted PDF. These are important steps, particularly for relieving the troubled and prolonged peer-review process [16], but they still operate within the old paradigm as fragmented services that either prepare or analyze a static document that remains largely unverifiable [17].

This fragmented response, while well-intentioned, adds more tools to an obsolescent workflow, increasing the burden on researchers to learn and manage a complex tech stack while leaving the core issues untouched: data opacity, missing analytical audit trails, gift and ghost authorship [18], and flawed validation processes.

The solution is not to incrementally speed up the old system or add more disconnected tools. We need a fundamental reimagining of how we create, validate, and share scientific knowledge. We need a new operating system for science: one that is dynamic, transparent, data driven, and prespecified in research protocols; unified coherently on an open platform; and powered by AI technologies endorsed by humans, who remain responsible for its outputs.

This new model must move beyond the static paper as its primary output. The future unit of publication must be a living, verifiable, and interactive record: an enriched dynamic research object in which the data, methods, analysis log, fair author contributions, and transparent peer validation are all structurally and permanently linked, captured, and time-stamped.
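As a minimal sketch of what such a record’s skeleton might look like, assuming only the elements named in the paragraph above (every field name here is an illustrative assumption, not an existing standard):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DynamicResearchObject:
    """Sketch of a living, verifiable unit of publication (hypothetical schema)."""
    title: str
    dataset_uri: str                      # persistent identifier for the raw data
    protocol_uri: str                     # prespecified methods and analysis plan
    analysis_log: list[str]               # time-stamped record of every analysis run
    author_contributions: dict[str, str]  # author identifier -> declared role
    peer_validations: list[str]           # signed, public review events
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

Because each element is a first-class, machine-readable field rather than buried prose, reviewers, regulators, and readers could query and verify the object directly instead of reverse engineering a narrative summary.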

This single integrated ecosystem, where AI is not a collection of fragmented patches but the core engine of one workflow that ensures rigorous reporting and transparency by design, is imperative for the integrity of science and the fidelity of academic communication. The digital health community is at the forefront of innovation. We have a unique responsibility and opportunity to lead the charge in building an AI-powered publishing model and scientific evidence ecosystem that is aligned with open science principles [19] and worthy of both the present and the future [20]. The technology is (almost) here [21,22]. What is required now is the collective will to build, adopt, and apply it.

Conflicts of Interest

None declared.

  1. Gommers J, Hernström V, Josefsson V, et al. Interval cancer, sensitivity, and specificity comparing AI-supported mammography screening with standard double reading without AI in the MASAI study: a randomised, controlled, non-inferiority, single-blinded, population-based, screening-accuracy trial. Lancet. Jan 31, 2026;407(10527):505-514. [CrossRef] [Medline]
  2. Wu Y, Qian B, Li T, et al. An eyecare foundation model for clinical assistance: a randomized controlled trial. Nat Med. Oct 2025;31(10):3404-3413. [CrossRef] [Medline]
  3. Lu J, Choi K, Eremeev M, et al. Large language models and their applications in drug discovery and development: a primer. Clin Transl Sci. Apr 2025;18(4):e70205. [CrossRef] [Medline]
  4. Kang H, Li J, Hou L, Xu X, Zheng S, Li Q. Large language model-enhanced drug repositioning knowledge extraction via long chain-of-thought: development and evaluation study. JMIR Med Inform. Oct 7, 2025;13:e77837. [CrossRef] [Medline]
  5. Björk BC, Solomon D. The publishing delay in scholarly peer-reviewed journals. J Informetr. Oct 2013;7(4):914-923. [CrossRef]
  6. Smyth AR, Rawlinson C, Jenkins G. Preprint servers: a “rush to publish” or “just in time delivery” for science? Thorax. Jul 2020;75(7):532-533. [CrossRef] [Medline]
  7. Pooley J. Collective funding to reclaim scholarly publishing. Commonplace. Aug 16, 2021;1(1). [CrossRef]
  8. Andrews G. New data show universities are increasing R&D activity. Association of American Universities. Dec 6, 2024. URL: https://www.aau.edu/newsroom/leading-research-universities-report/new-data-show-universities-are-increasing-rd-activity [accessed 2026-03-12]
  9. Morgan TJH, Smaldino PE. Author-paid publication fees corrupt science and should be abandoned. Sci Public Policy. Oct 15, 2025;52(5):805-809. [CrossRef]
  10. Xie Y, Wang K, Kong Y. Prevalence of research misconduct and questionable research practices: a systematic review and meta-analysis. Sci Eng Ethics. Jun 29, 2021;27(4):41. [CrossRef] [Medline]
  11. Ioannidis JPA. Why most published research findings are false. PLoS Med. Aug 2005;2(8):e124. [CrossRef] [Medline]
  12. Baker M. 1,500 scientists lift the lid on reproducibility. Nature. May 26, 2016;533(7604):452-454. [CrossRef] [Medline]
  13. Watanabe Y. SciTeX Writer. GitHub. 2026. URL: https://github.com/ywatanabe1989/scitex-writer?tab=readme-ov-file [accessed 2026-03-31]
  14. Chew BH. Enhancing research proposal preparation with artificial intelligence: an emerging guide for novel researchers in clinical and health sciences. Zenodo. Preprint posted online on Nov 14, 2025. [CrossRef]
  15. Musa Z. How big academic publishers use AI. PublishingState.com. Jul 25, 2025. URL: https://publishingstate.com/how-big-academic-publishers-use-ai/2025/ [accessed 2026-03-12]
  16. Perlis RH, Christakis DA, Bressler NM, et al. Artificial intelligence in peer review. JAMA. Nov 4, 2025;334(17). [CrossRef] [Medline]
  17. Lawrence R. The need for verification markers on published content. Inf Services Use. May 11, 2025;45(1-2):78-82. [CrossRef]
  18. Ruben A. Why scientific journal authorship practices make no sense et al. Science. Oct 28, 2021. URL: https://www.science.org/content/article/why-scientific-journal-authorship-makes-absolutely-no-sense-et-al [accessed 2026-03-31]
  19. Canadian National Commission for UNESCO. An introduction to the UNESCO recommendation on open science. UNESCO; Nov 2022. [CrossRef]
  20. Klebel T, Traag V, Grypari I, Stoy L, Ross-Hellauer T. The academic impact of open science: a scoping review. R Soc Open Sci. Mar 2025;12(3):241248. [CrossRef] [Medline]
  21. Burgelman JC, Pascu C, Szkuta K, et al. Open science, open data, and open scholarship: European policies to make science fit for the twenty-first century. Front Big Data. 2019;2:43. [CrossRef] [Medline]
  22. Zhang P, Hu X, Huang G, et al. A next-generation open access ecosystem for scientific discovery generated by AI scientists. arXiv. Preprint posted online on Aug 20, 2025. [CrossRef]


© JMIR Publications. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 31.Mar.2026.