Published in Vol 28 (2026)

Immortal AI, Mortal Life: Long-Term Perspectives on AI and Human Knowledge

Authors of this article:

Hyunjin Shim, JMIR Correspondent

Key Takeaways

  • The evolution of artificial intelligence is not subject to the same limitations as the evolution of human intelligence, with significant implications for knowledge and education.
  • Researchers should be cognizant of the potential for knowledge monoculture and the diversion of resources and focus away from core issues.
  • Educators should consider focusing on cultivating uniquely human capacities that complement or supplement artificial intelligence.

Hyunjin Shim, PhD, is an assistant professor at California State University, Fresno, where her lab researches and develops next-generation infection diagnostics and therapeutics, with the long-term aim of combating antimicrobial resistance. Her background is in biotechnology, bioengineering, and innovative genomics. In this piece, she explores some of the higher-level risks and potential implications of artificial intelligence for our knowledge systems through her lenses as a biologist, researcher, and educator.

As a biologist, I find the rate at which artificial intelligence (AI) is evolving remarkable. After its conception in the 1950s, AI research was revived and accelerated in the early 21st century by the development of deep learning, thanks to advances in hardware and access to big data [1]. Since then, AI and AI-related technologies have been penetrating almost every aspect of our intellectual lives.

This evolution rate is unprecedented in the realm of any scientific and technological field, particularly biology. Human evolution, as for any organic life form, is driven by powerful evolutionary forces but ultimately limited by several factors, from mutation rates and generation times to environmental stability and the speed of natural selection [2]. While the evolution of AI is also subject to limits—for example, physical, computational, and data limits—it enables knowledge storage and generation that can be more rapid and persistent than human understanding. Human knowledge, by contrast, depends on ongoing education, with individual comprehension in a sense “resetting” generationally.

This asymmetry underscores the need to take a longer-term perspective in examining the implications of AI’s role in, and impact on, human knowledge generation and education.

As a researcher, I have observed the focus of human intelligence shift recently from volume to depth, given the superiority of AI in breadth of knowledge access and processing speed. Since the generative AI boom in 2019, a positive feedback loop has developed between the accumulation of AI and human intelligence: the training datasets of deep learning methods are annotated from human activities, and AI, in turn, enhances the advance of human knowledge through cognitive extension [3].

The key purported benefits of AI are accelerated research and discovery. In biology, advances in deep learning began to influence various research areas as early as 2015, including sequencing technologies [4-6], protein engineering [7,8], pathogen evolution [9,10], and biomedical imaging [11,12]. AI has also shown unprecedented capacity for integrating information and predicting outcomes in some scientific domains, such as predicting protein structures with the AI system AlphaFold [13,14]. However, AI advances have not solved all critical problems in scientific research, and they have created new challenges.

For some critical issues, AI integration diverts attention and investment from solving the fundamental problem to chasing shiny tools. For example, the spread of antimicrobial resistance—an urgent public health issue—is making common infections untreatable and impeding modern medicine [15]. While the antibiotic development pipeline suffers from the lack of a viable market for newly developed antibiotics [16], a more serious issue is that bacteria rapidly develop resistance to small-molecule antibiotics [17]. Yet, fueled partially by AI hype, most research studies on antibiotics focus on high-throughput screening or de novo design of small molecules [18,19]. The attention of the scientific community and industry has thus been diverted from investing resources in higher-risk—and potentially higher-reward—research on entirely new strategies to tackle antimicrobial resistance [16].

Perhaps as concerning is the potential stifling of creativity and innovation. Generative AI output is fundamentally based on averaging and predicting patterns from big training data [20]; when used in research, it creates monocultures of knowing—a reduction in the diversity of thought and ideas—that can weaken the production of scientific knowledge [21]. Another concern related to utilizing AI in research is shifting the focus from solving real-world problems to maximizing output by exploiting AI’s speed and breadth of knowledge [22]. The current AI bubble exists in academic settings as much as in industry settings, driven by opportunity costs and output inflation in areas of research that require intuition and strategic innovation.

The incorporation of AI into higher education is inevitable and has implications for the future of knowledge generation. Younger generations are rapidly adapting to new technologies, and attempts to prohibit AI tools in classrooms and coursework are unlikely to succeed. Unlike previous technologies such as calculators, AI intersects with nearly all aspects of learning, including writing, critical thinking, and creative production. Furthermore, the use of AI tools raises significant concerns about academic honesty and integrity, as it is more difficult to detect and regulate than other types of academic misconduct, such as plagiarism [23].

The widely accepted benefits of AI in education include personalized learning support, real-time feedback, and help in engaging with complex information. Ironically, this integration of AI also shifts the focus of educational practice. Some educators are reverting to analog or low-tech assessments, such as oral exams or handwritten essays, to ensure the authenticity of student work, reflecting ongoing uncertainty about how to assess student learning in AI-augmented contexts [24].

Historically, higher education has been central to innovation by preparing specialists through extended periods of study. For instance, the education of a doctor often spans more than two decades. In contrast, AI systems can absorb and synthesize vast amounts of information in a fraction of that time, which prompts questions about traditional educational timelines and the skills most relevant to acquire.

Indeed, much of the content traditionally taught in the current education system can be distilled into core principles that AI systems already master efficiently. There may be extensive periods in education where traditional knowledge transfer is inefficient, hindering students from acquiring skills and competencies that complement rather than compete with AI. Education should make knowledge transfer more effective by focusing on cultivating uniquely human capacities where AI currently has limitations or requires oversight—for example, thinking outside the box, identifying core problems, and developing interpersonal skills.

AI can be a powerful tool for supporting human knowledge, but dependency on that tool comes with potential risks for the quality, diversity, sustainability, and appropriate deployment of that knowledge—risks that are amplified by the rapid rate at which AI is evolving.

Ultimately, human-centered pathways for education and knowledge generation need to be carefully preserved alongside, and outside of, AI systems, to diversify knowledge sources and to ensure that human decisions reflect and prioritize human needs and values.

Higher education has a responsibility to ensure that human intelligence remains distinct from AI and that both serve the long-term good of human well-being.

Conflicts of Interest

None declared.

  1. Goodfellow I, Bengio Y, Courville A. Deep Learning. MIT Press; 2016.
  2. Ao P. Laws in Darwinian evolutionary theory. Phys Life Rev. Jun 2005;2(2):117-156. [CrossRef]
  3. Colther C, Doussoulin JP. Artificial intelligence: driving force in the evolution of human knowledge. Journal of Innovation & Knowledge. Oct 2024;9(4):100625. [CrossRef]
  4. Angermueller C, Lee HJ, Reik W, Stegle O. DeepCpG: accurate prediction of single-cell DNA methylation states using deep learning. Genome Biol. Apr 11, 2017;18(1):67. [CrossRef] [Medline]
  5. Arango-Argoty G, Garner E, Pruden A, Heath LS, Vikesland P, Zhang L. DeepARG: a deep learning approach for predicting antibiotic resistance genes from metagenomic data. Microbiome. Feb 1, 2018;6(1):23. [CrossRef] [Medline]
  6. Shim H. Futuristic methods in virus genome evolution using the third-generation DNA sequencing and artificial neural networks. In: Shapshak P, Balaji S, Kangueane P, Chiappelli F, Somboonwit C, Menezes LJ, et al, editors. Global Virology III: Virology in the 21st Century. Springer International Publishing; 2019:485-513. [CrossRef]
  7. Alipanahi B, Delong A, Weirauch MT, Frey BJ. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nat Biotechnol. Aug 2015;33(8):831-838. [CrossRef] [Medline]
  8. Alley EC, Khimulya G, Biswas S, AlQuraishi M, Church GM. Unified rational protein engineering with sequence-based deep representation learning. Nat Methods. Dec 2019;16(12):1315-1322. [CrossRef] [Medline]
  9. Bartoszewicz JM, Genske U, Renard BY. Deep learning-based real-time detection of novel pathogens during sequencing. Brief Bioinform. Nov 5, 2021;22(6):bbab269. [CrossRef] [Medline]
  10. Shim H. Feature learning of virus genome evolution with the nucleotide skip-gram neural network. Evol Bioinform Online. 2019;15:1176934318821072. [CrossRef] [Medline]
  11. Ahmad A, Hettiarachchi R, Khezri A, Singh Ahluwalia B, Wadduwage DN, Ahmad R. Highly sensitive quantitative phase microscopy and deep learning aided with whole genome sequencing for rapid detection of infection and antimicrobial resistance. Front Microbiol. 2023;14:1154620. [CrossRef] [Medline]
  12. Spahn C, Gómez-de-Mariscal E, Laine RF, et al. DeepBacs for multi-task bacterial image analysis using open-source deep learning approaches. Commun Biol. Jul 9, 2022;5(1):688. [CrossRef] [Medline]
  13. Jumper J, Evans R, Pritzel A, et al. Highly accurate protein structure prediction with AlphaFold. Nature. Aug 2021;596(7873):583-589. [CrossRef] [Medline]
  14. Abramson J, Adler J, Dunger J, et al. Accurate structure prediction of biomolecular interactions with AlphaFold 3. Nature. Jun 2024;630(8016):493-500. [CrossRef] [Medline]
  15. Murray CJL, Ikuta KS, Sharara F. Global burden of bacterial antimicrobial resistance in 2019: a systematic analysis. Lancet. Feb 12, 2022;399(10325):629-655. [CrossRef] [Medline]
  16. Shim H. Three innovations of next-generation antibiotics: evolvability, specificity, and non-immunogenicity. Antibiotics (Basel). Jan 18, 2023;12(2):204. [CrossRef] [Medline]
  17. Butler MS, Paterson DL. Antibiotics in the clinical pipeline in October 2019. J Antibiot. Jun 2020;73(6):329-364. [CrossRef]
  18. Krishnan A, Anahtar MN, Valeri JA, et al. A generative deep learning approach to de novo antibiotic design. Cell. Oct 16, 2025;188(21):5962-5979. [CrossRef] [Medline]
  19. Wong F, Zheng EJ, Valeri JA, et al. Discovery of a structural class of antibiotics with explainable deep learning. Nature. Feb 2024;626(7997):177-185. [CrossRef] [Medline]
  20. Shumailov I, Shumaylov Z, Zhao Y, Papernot N, Anderson R, Gal Y. AI models collapse when trained on recursively generated data. Nature. Jul 2024;631(8022):755-759. [CrossRef] [Medline]
  21. Messeri L, Crockett MJ. Artificial intelligence and illusions of understanding in scientific research. Nature. Mar 2024;627(8002):49-58. [CrossRef] [Medline]
  22. Berisha V, Krantsevich C, Hahn PR, et al. Digital medicine and the curse of dimensionality. NPJ Digit Med. Oct 28, 2021;4(1):153. [CrossRef] [Medline]
  23. Schmidt DA, Alboloushi B, Thomas A, Magalhaes R. Integrating artificial intelligence in higher education: perceptions, challenges, and strategies for academic innovation. Computers and Education Open. Dec 2025;9:100274. [CrossRef]
  24. Balalle H, Pannilage S. Reassessing academic integrity in the age of AI: a systematic literature review on AI and academic integrity. Social Sciences & Humanities Open. 2025;11:101299. [CrossRef]

© JMIR Publications. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 29.Apr.2026.