The Curatorial Turn: From Knowledge Producers to Knowledge Curators
The academic as curator: continuing to rethink scholarship in the age of generative artificial intelligence.
I’m guessing that many people reading this post have been in the situation I’m about to describe: I experimented with the “reasoning” large language models from different providers (OpenAI, Anthropic, Google, Perplexity, etc.) by feeding them a dense research paper from my field. The result was startling: in minutes, the AI generated a coherent summary and even drafted a respectable literature review connecting ideas I hadn’t considered. It felt as if I’d been momentarily outsourced as the author. But rather than threatening my role as an academic, this experience pointed to a more profound shift. The value of my expertise wasn’t in churning out another written summary; the AI could handle that. My value was in deciding what mattered in that flood of content: which insights to highlight, which sources were credible, and what context was missing. In short, I was becoming less of a producer of academic knowledge and more of a curator of it.
Academics as Curators in an AI Writing Era
For centuries, being an academic meant writing – producing papers, books, and reports as the primary output of scholarship. Now, with generative AI and its “reasoning” and deep research models automating written production, that paradigm is being upended (as I discussed in the first article of this Substack series). When a language model can produce a passable essay, literature review, or even a research article draft on demand, the act of putting words to page is no longer the bottleneck. Instead, the new bottleneck is quality control and guidance: figuring out the right questions to ask, providing the correct data or references, and critically reviewing AI-generated text for accuracy and insight. In other words, academic expertise is tilting toward a curatorial role. That is: selecting, shaping, and validating knowledge rather than writing it from scratch.
This isn’t an abstract theory; it’s happening in real time. I know colleagues who use AI tools to generate first drafts of papers, then spend their regained time curating the output: editing for clarity, checking sources, and injecting domain expertise where the model glossed over nuance. The written volume one can produce with AI is practically unlimited; the real skill, therefore, is determining what’s worth including and what isn’t. In a world where an “infinite first draft” is always available, expertise shows in how we prune, augment, and contextualise that draft.
Consider the task of a literature review. An AI can fetch and summarise 100 articles on a topic in seconds. Impressive for sure. But dumping 100 summaries on a reader isn’t the same as a coherent review. The academic’s job becomes one of synthesis: which concepts connect these works? Which findings are most robust or relevant? What biases or gaps exist in the collective literature? This is curation. We’re moving from being the author of a single narrative to the curator of an entire knowledge landscape.
There’s also a social dimension to this. Academic credit and authority have historically come from being the originator, the person who wrote the paper or discovered the finding. But when AI is doing more of the drafting, human expertise might be recognised more for guidance and judgment. The scholar becomes the one who orchestrates knowledge: the person who knows where to look, what to trust, and how to weave disparate insights together into understanding. In a sense, the professor becomes more like a professor in the literal sense – a “professor” as in one who professes knowledge, contextualises it, and helps others navigate it, rather than one who simply produces text.
This is a profound shift, and it’s only the beginning.
Beyond the Paper: Curating New Forms of Knowledge
In my previous posts on The Reflexive Machine, I asked whether the written academic paper might be losing its monopoly on scholarly communication. We considered the “return of oral knowledge”, the rise of academic podcasts, recorded lectures, and even interactive AI-driven dialogues as legitimate forms of sharing research. If text is no longer king, and we move toward multimodal scholarship (including audio, video, interactive media, and even virtual reality), the curatorial role of academics only grows more essential.
Why? Because these new forms are abundant and scalable in ways that traditional papers are not. A single discussion can be live-streamed to thousands; a recorded lecture can be accessed worldwide; an AI-driven tutoring system can disseminate an idea interactively to anyone who asks. But volume isn’t value. Without careful curation, we risk information overload or losing the thread of what’s important. Academics, as subject experts, will be needed to curate these new knowledge forms: to design meaningful podcast series that curate voices and topics in a field, to build interactive models or exhibitions that curate key concepts for the public, or even to act as behind-the-scenes guides for AI interfaces that answer questions.
In many ways, this is a return to an older mode of scholarship. As I have discussed previously, before the printed journal article, knowledge was often shared through oral disputation, curated lecture series, or carefully compiled encyclopedic works. We’re coming full circle, but now with AI-augmented tools at our disposal. A modern scholar might curate a conversation between a human expert and an AI persona (imagine an automated Socratic dialogue on a research topic), or curate a dataset that an AI system uses to demonstrate a phenomenon in real-time. If the written word becomes just one medium among many, academics must ensure that these other media carry rigour and insight. That means curating the content and context of knowledge, regardless of format.
I’ve started to see hints of this. Some academics host “research podcasts” where they, in essence, curate a narrative, inviting certain guests, editing the discussion to highlight insights, weaving in references to literature. The expertise lies in the curation of content and people, not in writing a script word-for-word. Others experiment with AI-generated simulations: for example, feeding historical data into a model to create a playable simulation of an economic phenomenon. The academic’s role there is to validate and contextualise the simulation’s output, curating the experience so that a learner draws correct conclusions.
Even teaching, traditionally centred on lectures and textbooks, is shifting this way. With generative AI, a professor could auto-generate endless practice problems or even custom explanatory articles for students. But dumping a thousand practice questions on a student is useless; the professor’s skill is selecting the right questions, the ones that illuminate misconceptions and build mastery. Once again, the human adds value by choosing and framing, not by sheer volume of production.
Building Bridges, Not Piling Papers
One hopeful aspect of this shift is that it could elevate the societal value of academic expertise. If we academics are freed from the pressure to produce endless papers just to meet a quota (the notorious “publish or perish” volume game), perhaps we can focus on what we really bring to society: the ability to build conceptual bridges, provide deep context, and see the bigger picture. And to educate our students in those same skills, so they can engage with the complexity of the real world before they begin to intervene and practise in it.
Generative AI will flood us with content, some of it insightful, much of it mediocre or flat-out incorrect. The world doesn’t need humans to add more random papers to the pile; it needs humans who can separate signal from noise. The curator-scholar can and should do this. They can take a sprawling field of AI-generated outputs and human contributions and map it, like an intellectual cartographer, showing others: “Here’s the lay of the land. Here’s where these ideas connect. Here’s where there’s consensus, and here’s where the controversies or unknowns lie.”
Crucially, this bridging role can enhance interdisciplinary and transdisciplinary understanding. An AI might master a narrow domain and produce many papers within it, but a human scholar can draw connections across domains. Something our institutions sorely need. Imagine an environmental scientist who uses AI to generate reports on climate data, but their unique value lies in linking those insights to economics, public policy, and ethics. They become a bridge between AI-generated technical analysis and human-centric decision-making. That is a curatorial superpower: connecting dots that weren’t connected before.
I like to describe this as moving from divergence to convergence. As I also mentioned in a previous article, AI is fantastic at divergent thinking – generating a plethora of ideas or possibilities (Large language models can even outperform humans on some tests of creative divergent thinking). But making sense of those ideas, that is, converging them into coherent knowledge, often requires human judgment. In design terms, if AI is the ultimate brainstorming partner generating wild possibilities, the human is the editor who can eventually say, “Out of these 50 ideas, these two are truly novel and useful.” It’s in that convergence that academic expertise shines. We choose the narrative that makes sense, the finding that matters, the theory that holds water.
This shift also means critical context becomes a key output of scholarship. Rather than just publishing a result (“X causes Y under conditions Z”), academics will be called upon to also communicate why it matters, how it fits into broader understanding, and what caveats to keep in mind. Contextualisation has sometimes been undervalued, considered the garnish around the “meat” of new results. But as AI generates more results (some real, some spurious), context is precisely what society will crave.
We’ve already seen what happens in a world of information overload. As educator John Spencer notes, we live in an “instant information ecosystem” where “content is cheap. Cheap to make. Cheap to share. Cheap to consume. And even cheaper now with AI.” In such a world, “the best stuff doesn’t always rise to the top” and it’s easy to “mistake the speed of consumption for the depth of knowledge” (spencereducation.com).
The flood of content can drown out depth and quality. That’s why Spencer argues that critical consumption is now as important as creation: it’s about “identifying quality, recognising originality, and filtering through a flood of content to find what actually matters” (spencereducation.com).
Spencer is writing with students in mind; extend his point to everyone, and it captures the task before us. Academics, armed with subject mastery and research training, are well-positioned to be these critical filters and sense-makers for society.
Elevating curation doesn’t mean devaluing creation. It means recognising that creation is now a distributed collaboration: some of the raw material is produced by AI, some by other humans, some by automated experiments (more on that in a moment). The academic becomes the hub in this network, synthesising and guiding it toward insight. In doing so, our epistemic authority might increase in the public eye, not because we write more papers than everyone else, but because we help people navigate knowledge in a trustworthy way. The scholar of the future might be valued more like a great museum curator or a great librarian: not for putting more items on the shelf, but for knowing which items to display and how to explain their significance in context.

Tools of the Trade: NotebookLM and the AI-Curator’s Workbench
If all this sounds a bit theoretical, let’s talk about tools. The “curatorial turn” in academia is being accelerated by the very real AI tools that scholars are starting to use. One example I keep finding fascinating (particularly for its constant growth and the steady incorporation of new features) is Google’s NotebookLM, an AI-powered research assistant that feels explicitly designed for this curator role. Unlike a traditional search engine or a static summariser, NotebookLM lets me upload my own source documents, and now even find its own sources across the web, then helps me explore, cross-reference, and synthesise them. It’s like a smart librarian that I, the researcher, get to direct.
When I first tried NotebookLM, I loaded a PDF of one of my old research papers and my PhD thesis to see what it would do. Sure enough, it generated a concise summary of each section. Useful, but not mind-blowing – lots of tools can summarise. However, more recently (after its April 2025 updates) I asked a probing question that wasn’t answered in the paper: what connections exist between my findings and related theories in urban planning? The AI went beyond the document, essentially saying, “Let’s see what else is out there.” With its new features, it scanned the web and suggested a list of outside sources I might want to read, each with an explanation of why it might be relevant. It was as if I had an intern quickly gathering threads from the vast academic web, but an intern who could work in seconds and summarise each finding in real time.
Even more striking, NotebookLM has made the audio overviews (the ones that made the tool famous) multilingual, available in 50+ languages. I actually had the surreal experience of listening to an AI “discussion” of my own paper in Spanish (my mother tongue), then in French and Portuguese, just to see how well it captured the nuances of these languages I’m privileged to be fluent in. It was pretty good. The AI hosts emphasised points that I would consider key, and did so in an engaging, conversational way. It felt like my research had been re-curated for a different audience and format (and culture, given the language differences) almost without my direct involvement.
Tools like NotebookLM exemplify the shift from production to curation. They don’t create new knowledge out of thin air; they help us navigate existing knowledge, remix it, and present it in new forms. But they still need a guiding hand. They need an academic to say, “Yes, include this source, but not that one, it’s a dubious study,” or “Focus the overview on these themes, because they’re more important.” In a sense, the AI is an actant in the scholarly network, an agent participating in knowledge work, but it’s the human scholar who grants it that role and steers it. (A treat for Actor–Network Theory fans, and I certainly am one! This is precisely the idea of AI as an actant in our socio-technical network of knowledge: an entity that can “act” in the system, alongside humans, by doing things like summarising, searching, or even hypothesising.)
We see other tools emerging that similarly position the researcher as an orchestrator. For instance, there are AI platforms that will autonomously run experiments or simulations given a hypothesis, but they rely on the scientist to set the hypothesis and interpret the results. DeepMind’s famous AlphaFold system solved a 50-year-old biological puzzle by predicting protein structures from sequences (deepmind.google). That achievement produced knowledge (protein structures we hadn’t known before) – something that used to require human experimentation. Now that AI has given us this treasure trove of protein data, it falls to human experts (biologists, biochemists) to curate and leverage it: figuring out which predicted structures matter for, say, developing a new drug, and ensuring the predictions are used carefully. In other words, AI may generate discoveries, but humans turn those discoveries into usable knowledge.
Or take IBM’s Watson in the medical domain. Watson was able to sift through medical literature and suggest treatment options for hundreds of cancer patients that doctors had overlooked (bigthink.com). But it didn’t replace the doctors – it augmented them. Oncologists had to validate those AI-suggested treatments and decide which ones to actually pursue with patients. Again, the pattern is the same: the AI generates a wealth of options, and the human curator or expert filters and applies them.
NotebookLM, AlphaFold, Watson… these are early examples of a new ecosystem of academic tools. I sometimes imagine my future “research lab” as including a few human collaborators and several AI assistants: one AI combs the literature, another proposes some hypotheses or connections, another runs simulations or analyses data, and maybe another helps translate our findings into different formats (reports, a slide deck, a podcast, an interactive visualisation). My ideal job in this lab is to coordinate all of these agents (human and AI), keep the process on track, and ensure that the end results make sense and serve a purpose. It’s less like being a lone genius authoring a paper, and more like being a project manager or creative director, ensuring that a meaningful piece of knowledge comes to life.
Rethinking Academic Identity and Institutions
Such a fundamental change raises significant questions for universities and the broader educational system. If academics become curators of knowledge more than lone producers, how do our institutions recognise and reward that? Today, a scholar’s reputation might rest on how many papers they publish or how many citations they get. But if much of the writing and initial drafting can be offloaded to AI, those metrics lose meaning. We might instead value curated contributions: high-quality review articles that synthesise a field, creation of knowledge databases or annotated bibliographies, development of open educational resources, or cross-disciplinary white papers that map out new problem spaces. These are already valued to some extent, but they might move from the periphery to the centre of academic achievement.
It also calls for a reconfiguration of learning. Students will need to be trained not just in doing things from scratch, but in intelligently using AI outputs as a starting point. “Don’t use AI” is not a sustainable stance. Instead, we should teach the critical use of AI (a cliché by now, I know), which is itself a curatorial skill.
And what about the future of the university? Universities have long been seen as knowledge producers (research) and distributors (teaching). What if they also embraced a third role as knowledge curation hubs? One could imagine universities hosting curated repositories on various topics, maintained by experts, as public resources. Or professors acting as community-facing curators: like public intellectuals who don’t just publish in journals, but maintain blogs, podcasts, or interactive tools that help society make sense of complex issues (with help from AI assistants). Some of this happens already, but it could be more formally recognised. The lines between research, teaching, and public engagement might blur, as all three become about orchestrating knowledge rather than just adding more pages to the archive.
There’s also an angle of governance here. As AI becomes an epistemic agent in its own right, questions arise: Who curates the curators? In other words, how do we govern the use of AI in knowledge creation to ensure quality and ethics? Academic communities may need to establish norms for AI-assisted work, much like we have for peer review and research ethics. We’ll need a reflexive approach (true to the spirit of The Reflexive Machine): continuously reflecting on how these tools are shaping our knowledge and adjusting our practices accordingly.
One thing I find encouraging is that this curatorial view of expertise is inherently collaborative and dynamic. It frames knowledge not as a static commodity (the paper that sits on a shelf) but as a living network that we navigate and tend to. It acknowledges the contributions of many actants – humans, AIs, instruments, datasets – in the network of discovery. Embracing that doesn’t diminish human scholars; it challenges us to rise to a new level of meta-intelligence, where our job is orchestration, sense-making, and stewardship. That feels like a deeply human (and rewarding) role to play!
Continuing the Conversation
I think this curatorial turn in academia is still in its early days, and it raises as many questions as it answers. I’ll conclude by inviting you into the dialogue:
Will academia adapt its rewards and training to encourage curation, or will we cling to the romantic ideal of the solitary author?
How can we ensure that curatorial expertise is valued by society, funding bodies, and tenure committees as much as traditional research outputs are?
Do you see yourself (or academics in your field) already taking on a curator role with AI tools, and what does that look like in practice?
As always, let’s keep the conversation going. I believe the shift from producer to curator is a chance to reimagine what expertise means in the 21st century. Rather than being displaced by our machines, we might become the bridge builders – connecting people, ideas, and innovations in ways that make knowledge more accessible and meaningful than ever before. I, for one, am excited to see where this new path leads us.