Last week, I asked whether the academic paper was losing its monopoly on legitimacy in knowledge production. I explored how AI is nudging us toward alternative forms of scholarship—interactive, oral, and embodied knowledge—raising the question: Are we ready to move beyond the written page?
Let’s push this further. What if the real shift isn’t just about how we communicate research but how we think about knowledge itself?
We assume that thinking happens through writing. Academic papers are not just a format—they are a cognitive process, a way of structuring arguments, clarifying ideas, and making knowledge legible. But what happens when AI starts intervening before the writing process even begins? What if text isn’t just being supplemented by AI but shaped by it from the ground up?
AI as an Epistemic Agent (or Just a Really Fast Research Assistant?)
We’ve all seen the headlines: AI writes research papers, AI generates new scientific hypotheses, AI is revolutionising academic work. But the interesting question isn’t whether AI can write—it’s whether AI can think in an epistemically meaningful way.
Right now, we see AI taking on three broad roles in knowledge production:
AI as an accelerator – speeding up literature reviews, synthesising sources, and structuring arguments.
AI as a pattern-finder – surfacing unexpected connections, clustering concepts, and generating hypotheses (a toy sketch of this role follows the list).
AI as a co-producer of knowledge – suggesting original insights and even designing new experiments.
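To make the pattern-finder role concrete, here is a minimal sketch of concept clustering: a handful of short text snippets are embedded as TF-IDF vectors and grouped with k-means so that thematically related ideas surface together. It assumes scikit-learn is installed, and the snippets are invented placeholders rather than real abstracts; actual discovery engines use far richer representations and corpora.

```python
# Toy illustration of the "pattern-finder" role: clustering short text
# snippets so that thematically related ideas end up in the same group.
# Assumes scikit-learn is installed; the snippets are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

snippets = [
    "protein structure prediction with deep learning",
    "neural networks for folding amino acid chains",
    "peer review practices in the humanities",
    "open access publishing and tenure incentives",
    "graph-based models of citation networks",
    "mapping disciplines as networks of concepts",
]

# Represent each snippet as a TF-IDF vector, then partition into 3 clusters.
vectors = TfidfVectorizer().fit_transform(snippets)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Print snippets grouped by cluster label.
for label, snippet in sorted(zip(labels, snippets)):
    print(label, snippet)
```

Real systems operate over millions of papers with learned embeddings, but the division of labour is the same: the machine proposes groupings, and the human decides which of them are meaningful.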
Take DeepMind’s AlphaFold, which cracked protein structure prediction, a challenge that had eluded biologists for fifty years. Was that an “act of knowledge creation”? The AI didn’t just analyse data; it generated a model that restructured how we understand biology.
Or consider AI-driven discovery engines that suggest novel research questions based on massive textual corpora. IBM’s Watson, for instance, was used to propose candidate cancer treatments by identifying overlooked biomedical relationships. That’s not just retrieval—it’s conceptual recombination.
This raises an uncomfortable tension:
If AI can generate testable hypotheses, does it become a research co-author?
Does AI count as an epistemic agent if it can propose new theories?
If AI can connect dots faster than humans, are we still the primary thinkers in scholarship?
For centuries, we’ve treated writing as the primary evidence of thinking. But what if AI forces us to rethink that assumption?

What Happens When We Don’t Need to Write?
Let’s take this to its logical extreme. Imagine a future where:
AI research assistants conduct exploratory literature reviews, mapping out all possible angles of an argument before you start writing.
Interactive knowledge maps replace traditional citation chains, allowing researchers to explore entire disciplines spatially rather than sequentially.
Generative models create interactive, multimodal research experiences, where findings are navigated rather than read.
If knowledge becomes a networked, evolving process, rather than a fixed textual output, then the question isn’t just how do we publish?—it’s how do we think?
Is Academia Ready for a Post-Textual Future?
The academic ecosystem is built almost entirely on text:
Promotion and tenure depend on written papers.
Grants require written proposals.
Peer review is designed for static documents.
Even as AI changes how research is produced, academia still clings to the assumption that "serious knowledge" must be written down. But let’s be honest—this assumption is already eroding.
As I mentioned last week, academics are increasingly:
Using podcasts, blogs, and YouTube as legitimate scholarly outputs.
Engaging in AI-mediated co-writing and idea generation.
Experimenting with AI-assisted real-time peer review.
But institutions haven’t fully caught up. And here’s the real challenge: What happens when AI doesn’t just change how we write, but whether we need to write at all?
Where Do We Go From Here?
We have three broad futures:
The Traditionalist Future – Academia resists AI’s deeper role in knowledge production, keeping text as the primary legitimate output while AI remains a behind-the-scenes research assistant.
The Hybrid Future – AI integrates into research workflows, but human scholars remain the primary authors and interpreters. Academic institutions expand their definition of scholarship to include AI-generated and post-textual work.
The AI-Native Future – AI is accepted as an epistemic agent. Knowledge production becomes an evolving, interactive, non-linear process, and traditional papers are replaced by dynamic research objects.
The second future feels the most likely—at least for now. But the third future? It’s not as far-fetched as it sounds.
The real question is: Are we ready to move beyond writing as the foundation of knowledge?
If AI becomes an epistemic agent—capable of hypothesis generation, pattern recognition, and even conceptual recombination—then the next question is: how do we work with it effectively? Next week, I’ll explore how AI thrives in the divergent-thinking phase of research—connecting ideas in unexpected, nonlinear ways—and what it takes for institutions to sift through the noise, manage hallucinations, and extract real epistemic value from machine-generated insights. The challenge is no longer whether AI can contribute to knowledge but whether we’re ready to harness its creative chaos. Let’s see where this takes us.
I’d love to hear what you think:
Will academia ever accept AI as a full epistemic participant?
Could we see a world where research is explored, not written?
Let’s keep the conversation going!
Presumably, academia would accept AI as a full epistemic participant if it were proven to be actually thinking, which, as of now, it is not.
For the moment, it is a useful tool in the hands of experts, one that poses serious threats to the development of new expertise.