The Silent Revolution: How APIs Are Transforming Science
The digital transformation of contemporary scientific practice is quietly reshaping the epistemological foundations upon which modern science was built. Every time a researcher looks for references, reads a paper, or writes an article, they must connect to a system that filters, ranks, and even suggests what they should read. This reality isn’t a glimpse of the future—it’s already the everyday norm in laboratories and universities across the globe.
This transformation goes far beyond technology; it is altering the very foundations of how science is practiced. Today’s researcher has become what we might call a “plug-in scientist”: a knowledge producer who operates through technical interfaces that mediate every step of the research process. From searching papers on Semantic Scholar to using ChatGPT to summarize literature, the connection to algorithmic systems is now permanent and ubiquitous.
The daily routines of any researcher illustrate this new paradigm. To consume science, academics rely on APIs from Crossref, Scopus, and arXiv to find papers; they use tools like Elicit and ChatGPT for summaries and analysis; they manage bibliographies with Zotero and Mendeley; they generate ideas with GPT to formulate hypotheses and arguments; and they write code with Copilot and Hugging Face. To produce and disseminate science, researchers follow a fully integrated, digital workflow. They upload preprints to repositories like arXiv and SSRN and publish through automated editorial platforms. They track impact using tools such as Altmetric and Scopus Analytics. Their work is shared on academic networks like ResearchGate and Academia.edu and increasingly monetized through payment APIs like Stripe.
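The algorithmic mediation described above is concrete rather than metaphorical: the literature-search step, for instance, reduces to a handful of HTTP calls. Below is a minimal sketch against Crossref's public REST API (`api.crossref.org/works`); the helper names, the query terms, and the choice of fields to keep are illustrative assumptions, not a prescribed workflow.

```python
# Minimal sketch of an API-mediated literature search via Crossref's
# public REST API. Helper names and parameters are illustrative.
import json
import urllib.parse
import urllib.request

CROSSREF_WORKS = "https://api.crossref.org/works"

def build_crossref_query(query: str, rows: int = 5) -> str:
    """Build the request URL for a keyword search against /works."""
    return CROSSREF_WORKS + "?" + urllib.parse.urlencode(
        {"query": query, "rows": rows}
    )

def search_crossref(query: str, rows: int = 5) -> list[dict]:
    """Fetch the top matches and keep only each item's title and DOI."""
    url = build_crossref_query(query, rows)
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    # Crossref wraps results as {"message": {"items": [...]}}.
    return [
        {"title": (item.get("title") or [""])[0], "doi": item.get("DOI", "")}
        for item in data["message"]["items"]
    ]
```

The point of the sketch is how little of the epistemic work is visible to the caller: the ranking of the returned items is decided server-side, which is precisely the filtering and ordering the essay describes.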
The connection is seamless. The system “packages and unpacks” the entire scientific process, creating what could be described as a total algorithmic mediation of academic knowledge.
Not all researchers relate to this transformation in the same way. Millennial academics—born between 1980 and 1995—were trained without artificial intelligence and adopted it later as a tool. Generation Z, or “Plug-in I” (born between 1996 and 2010), studied with AI and uses it actively to write papers. Generation Alpha, or “Plug-in II” (born between 2011 and 2025), will grow up with AI as a natural standard. If current trends continue, graduate programs will be populated by “native plug-in scientists” within five years, and within a decade, these researchers will lead projects, sit on evaluation committees, and shape the future of science. The transition has already begun, and its consolidation will be evident in just over ten years.
To grasp the magnitude of this shift, we must examine how six foundational pillars of modern science are being transformed: truth, error, epistemic risk, the epistemic subject, scientific temporality, and institutional structures.
Truth, once the guiding star of scientific inquiry, now competes with other validation criteria. Algorithmic relevance—measured by visibility and citation counts—has begun to replace traditional epistemic standards. Optimized formatting means that form increasingly outweighs substance, and usability now determines whether knowledge is valuable based on its potential for monetization or integration into productive systems. Verisimilitude has replaced veracity: something can circulate as true simply because it appears convincing—even if it isn’t.
Preprints circulate without peer review, AIs cite fabricated but plausible papers, and some AI-generated articles pass superficial reviews. The central question is no longer simply “Is it true?” but rather “Does it work?” This marks a profound reconfiguration of how scientific knowledge is validated.
Error, long regarded as a scientific virtue that fosters learning, delimitation, and progress, is now, within the Plug-in Model, considered a glitch that must be automatically corrected. New forms of problematic errors are emerging: invisible errors, where AI produces false but persuasive content; tolerable errors, where a certain error rate is accepted because the model is “good enough”; and outsourced errors, where responsibility is shifted to the algorithm or the dataset. Error is no longer embraced as part of the learning process—it is systematically erased.
Epistemic risk, once defined as the capacity to challenge consensus and venture into the unknown, is being replaced by what we might call “controlled innovation.” Current systems optimize for the probable rather than exploring the uncertain; they reward thematic convergence, replicate successful formulas, and enable papers to simulate novelty without real disruption. Funding agencies demand predictable results; AIs forecast what should be researched; and algorithmic reviewers penalize outliers. Epistemic risk is no longer considered a virtue but rather a “system failure” to be minimized.
The epistemic subject, traditionally imagined as an autonomous and reflective scientist, is becoming a “prompt operator.” We are witnessing a shift from the rational agent to the interface operator, from the responsible author to the AI co-author, from the deliberative member to the always-connected user, and from the knowledge producer to the assembler of outputs. The epistemic subject hasn’t vanished but has been transformed into an “input/output node” in a larger network, losing much of its traditional agency.
Scientific temporality has undergone a particularly dramatic shift. The slow, cumulative time of modern science has been replaced by a prompt-response loop. Science used to unfold at a reflective, deliberative pace—its timeline measured in ideas and debates preserved in disciplinary memory. Now it is instantaneous, reactive, and accelerated—measured in papers per year, dictated by rankings and algorithms. Scientists struggle to publish negative results, knowledge aligns with specific grant calls, and the system’s timing dictates what gets researched—and when.
Traditional scientific institutions are losing ground to new actors and mechanisms. Universities, journals, and public laboratories—once the backbone of an autonomous space for knowledge—are now competing with recommendation algorithms that curate knowledge, automated metrics that validate it, tutorials and generative models that train researchers, and private platforms backed by venture capital that fund research projects.
These transformations pose varying levels of threat to traditional science. The most critical areas are epistemic temporality and scientific institutions, both of which are being replaced in ways that may be irreversible. The management of error and epistemic risk is in a state of alert: they still exist, but under constant pressure from the efficiency-driven logic of the Plug-in Model. The least threatened domains, for now, are truth and the epistemic subject, which, although reconfigured, have not been completely displaced.
This gradation suggests where efforts should be focused by those who wish to preserve the distinctive features of modern science. The most urgent interventions should aim to protect autonomous epistemic temporalities—those that resist the logic of perpetual deadlines—and to create institutions capable of developing technological mediation that enhances, rather than subordinates, human knowledge production. It is essential to maintain spaces for research that exceed the predictive frameworks imposed by today’s algorithmic systems.
The core question raised by this analysis is this: what forms of knowledge and subjectivity are still possible when scientific practice is increasingly governed by algorithmic protocols that prioritize productive efficiency over critical reflection? Universities are being silently occupied by this new anthropotechnics, demanding a reassessment of the conceptions of science that dominated the 20th century.
The rise of the Plug-in Model is not an inevitable fate but a trend, and altering it demands deliberate action. For those who value the traditional dimensions of scientific practice, the task is to imagine forms of resistance or adaptation that preserve the functions most at risk of transformation. This shift is not a poetic metaphor; it is an emerging structural condition that warrants immediate critical attention before its consolidation becomes irreversible.