The Ghost in the Machine: From Dumas to AI Agents and the Future of Writing
Ghostwriting has existed for centuries. AI has been writing for a few years. The questions they raise are surprisingly the same, and the stakes have never been higher.
The Evolution of Authorship
When James Patterson releases another thriller or when a celebrity publishes a tell-all memoir, few readers pause to consider who actually put those words on the page. The reality is that many of the books lining our shelves were not written solely by the person whose name appears on the cover. This practice, known as ghostwriting, has been part of literature for centuries and raises fundamental questions about authorship, creativity, and what it means to write. Now, with the emergence of large language models capable of generating human-like text, these questions have taken on new urgency and complexity.
The history of ghostwriting is rich with surprising names and stories. Alexandre Dumas, the celebrated French author of The Three Musketeers and The Count of Monte Cristo, operated what critics called a "fiction factory" in nineteenth-century Paris. His most important collaborator was Auguste Maquet, who contributed significantly to the plots and drafts of some of Dumas's most famous works. Though Dumas maintained tight editorial control and added his distinctive flair, Maquet's role was substantial enough that he eventually sued for recognition and royalties. The courts sided with Maquet in part, acknowledging his contributions while still crediting Dumas as the primary author.
In the modern publishing world, James Patterson has become perhaps the most prolific and transparent example of author collaboration. He works with dozens of co-authors, typically creating detailed outlines and then reviewing and revising the drafts his collaborators produce. Patterson is remarkably open about this process, viewing it more as a production system than a secret to be kept. His name alone can sell millions of copies, and his co-authors benefit from the exposure and experience. The arrangement works because Patterson brings the vision, the brand, and the editorial oversight, while his collaborators bring the daily work of transforming outlines into prose.
Tom Clancy followed a similar path later in his career, particularly with his techno-thrillers. Books bearing his name but written with co-authors became commonplace, and since his death in 2013 the Clancy brand has continued with other writers producing novels in his established universe. The same pattern appears with V.C. Andrews, the gothic novelist whose estate has published far more books under her name after her death in 1986 than she wrote while alive. Ghostwriter Andrew Neiderman has been the actual author behind the V.C. Andrews name for decades, maintaining her distinctive style and themes while creating new stories for her devoted readership.
Celebrity memoirs represent another domain where ghostwriting is standard practice, though often hidden behind the phrase “written with” or “as told to” on the cover. Donald Trump’s The Art of the Deal was largely written by journalist Tony Schwartz, who spent eighteen months shadowing Trump and then crafted the narrative. Schwartz has since spoken publicly about his role, explaining how he created the voice and structure that made the book a bestseller. Similar arrangements exist for countless political memoirs, business books, and celebrity autobiographies, where the famous person provides the experiences and approval while a professional writer shapes the material into readable prose.
The Algorithmic Pen
The arrival of large language models has introduced a new player into these questions of authorship. Unlike human ghostwriters who bring their own creativity and judgment to the work, these AI systems generate text based on patterns learned from vast amounts of existing writing. When someone uses a tool like Claude or Gemini to help draft an article or develop a story, whether as a simple text generator or as an autonomous agent that uses tools and takes actions, they are engaging with something fundamentally different from hiring a human collaborator, yet also different from using a simple spell-checker or grammar tool.
The differences between human and AI writing styles become apparent when examined closely. Human writers bring lived experience, cultural context, and genuine emotional understanding to their work. A human ghostwriter can interview a subject for hours, absorb their manner of speaking, understand their values and contradictions, and then produce prose that captures something essential about that person. The ghostwriter makes countless small decisions about tone, emphasis, structure, and detail that reflect not just technical skill but human judgment about what matters and why.
Large language models, by contrast, operate through pattern recognition and statistical prediction regardless of whether they function as simple text generators or as sophisticated agents. They excel at producing grammatically correct, contextually appropriate text that follows conventional structures and uses familiar phrases. What they lack is any genuine understanding of meaning or any authentic experience to draw upon. An LLM can describe heartbreak in technically proficient language, but it has never felt the weight of loss. It can explain scientific concepts clearly because it has processed millions of explanations, but it cannot have the sudden insight that leads to a new way of understanding a problem. This fundamental limitation persists even when AI operates as an agent, iterating and refining its work, researching information, and adapting to specific requirements.
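The mechanism of statistical prediction the paragraph above describes can be made concrete with a deliberately tiny sketch. This is not how a real large language model works internally (real models use neural networks over subword tokens, not word counts), but a toy bigram model illustrates the core idea: the system "writes" by emitting whatever most often followed the current word in its training text, with no grasp of meaning at all. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data": the model will only ever know these word sequences.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat saw the cat ."
).split()

# Count which word followed which: the entirety of this model's "knowledge".
following = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    following[prev][word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": the most frequent successor of "the"
print(predict_next("sat"))  # "on": the only word ever seen after "sat"
```

Scaled up by many orders of magnitude and replaced with learned neural representations rather than raw counts, this is the family of technique behind fluent AI text, which is why the output gravitates toward the familiar patterns of its training data.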
AI agents can reduce some of the generic quality issues through iterative refinement and tool use. When an AI can research, draft, revise, check facts, and adapt based on specific requirements, the output becomes more customized and less obvious than simple one-shot text generation. However, this represents a more sophisticated application of the same underlying capabilities rather than a fundamental change in nature. The agent is still assembling text based on learned patterns, just through a more complex process. Even when an AI agent produces something that appears creative or insightful, it is fundamentally synthesizing and recombining patterns from its training data. It can spot patterns humans might miss by processing vast amounts of information, but it does not understand those patterns in any meaningful sense.
This distinction manifests in subtle but important ways. Human writing tends to have irregularity, personal quirks, unexpected word choices, and structural variations that reflect individual thinking patterns. AI-generated text, especially when produced quickly without extensive prompting or revision, often exhibits a kind of smooth competence that can feel generic. The language flows well and makes sense, but it rarely surprises or challenges the reader in the way that distinctive human voices do. The metaphors tend toward the familiar, the sentence structures favor the conventional, and the overall effect can be one of capable blandness.
The Standardization Question
For casual users who turn to AI writing assistance without extensive customization or careful prompting, the risk of standardization becomes particularly acute. When thousands of people use the same tool with similar prompts to write business emails, blog posts, or social media content, a certain homogenization of style becomes almost inevitable. The AI has been trained on common patterns and will naturally reproduce those patterns unless specifically directed otherwise. A business email drafted by an AI will likely have a professional but generic tone, hitting familiar beats and using standard phrases that millions of other AI-drafted emails also employ.
This standardization extends beyond just style to affect structure and even thinking. If an AI is asked to write an article about climate change, it will likely organize the information in predictable ways because those are the patterns it has learned from existing climate articles: introduction establishing the problem, explanation of causes, discussion of impacts, consideration of solutions, conclusion with a call for action. There is nothing wrong with this structure, but when it becomes the default for vast amounts of content, we risk losing the diversity of approaches that comes from different human minds tackling the same topic.
The issue becomes more serious when we consider how people learn to write by reading. If the next generation of writers grows up reading content that is increasingly AI-generated and therefore increasingly standardized in style and structure, what models will they internalize? Will they learn that writing means conforming to these smooth, competent but ultimately generic patterns? The feedback loop is genuinely concerning: AI systems trained on human writing produce standardized output, which humans then read and potentially emulate, which then becomes part of the training data for the next generation of AI systems.
Yet this concern must be balanced against a more optimistic perspective. Writing has always involved learning and applying conventions. Students study the five-paragraph essay not because that structure is sacred but because it teaches them how to organize thoughts coherently. Business writing follows templates because consistency and clarity serve important purposes in professional communication. The presence of standards and common patterns is not inherently bad. The question is whether we maintain enough diversity and creativity alongside those standards.
The Standardized Future
Imagining a world where knowledge expression becomes more standardized while ideas continue to flow freely requires us to distinguish between the packaging of ideas and the ideas themselves. Scientific papers already follow highly standardized formats. Researchers must conform to strict conventions about structure, citations, and terminology, yet this standardization has not prevented the flourishing of novel ideas and discoveries. Indeed, the standardization serves a purpose. A biologist can quickly scan a paper’s methods section because it follows a predictable format, spending their cognitive energy on evaluating the actual research rather than decoding an idiosyncratic presentation.
The advantages of such standardization are real and should not be dismissed. When business reports follow similar structures, executives can find key information quickly. When news articles conform to the inverted pyramid style, readers can grasp the essential facts in the opening paragraphs. Standardization reduces cognitive load and improves efficiency in contexts where communication serves primarily instrumental purposes.
For many writers, AI assistance could free them from the mechanical aspects of writing that they find burdensome, allowing them to focus on what they really want to say. Someone with brilliant insights but poor grammar skills might use AI to polish their prose. A non-native English speaker might use it to express complex ideas that they struggle to articulate in their second language. In these cases, standardization in the service of clear communication seems more like a feature than a bug.
The dangers, however, cannot be ignored. If standardization becomes too pervasive, we risk losing the distinctive voices and unconventional approaches that often lead to breakthrough thinking. The most innovative ideas frequently come packaged in surprising ways precisely because the thinker approached the problem from an unusual angle. James Joyce’s stream-of-consciousness technique in Ulysses was not just stylistic flourish but integral to his exploration of human consciousness. Virginia Woolf’s experimental structures reflected and enabled her insights about time and subjectivity. These achievements required writers willing to break conventions, not conform to them.
There is also the question of cultural homogenization. Language reflects culture, and different cultures have different rhetorical traditions, different ways of structuring arguments, different relationships between writer and reader. If AI systems trained predominantly on English-language internet content begin to shape how people around the world write, we could see a flattening of these cultural differences. A Japanese business email might start to sound more like an American one not because the Japanese writer chose that style but because the AI suggested it as the most natural way to write. The loss would be subtle but real: a gradual erosion of the diversity of human expression.
Perhaps the most profound concern is what happens to human creativity when AI becomes the default tool for putting thoughts into words. Writing is not just a way to record pre-existing ideas but a way to develop and refine those ideas. The act of struggling to find the right words, of revising and rethinking, is central to how many people think. If AI handles too much of this process, we might lose something essential about how human minds develop and express original thought.
Finding Balance
The comparison to ghostwriting offers some guidance for navigating these challenges. Ghostwriting has existed for centuries without destroying literature or eroding the value of authorship. It persists because it serves genuine needs: helping busy executives share their knowledge, enabling celebrities to tell their stories, allowing prolific authors to produce more books. The key is disclosure and appropriate use.
AI writing assistance might follow similar norms. For routine communications, reports, and other functional writing, AI assistance could become as unremarkable as using spell-check. For creative works, academic papers, and other contexts where original thinking and distinctive voice matter, the use of AI might require more careful consideration and possibly disclosure. The key is to preserve the connection between human intention and written expression, ensuring that AI serves as a tool for human communication rather than a replacement for human thought.
Education will play a crucial role in this future. If students learn to use AI as a genuine writing assistant rather than a shortcut around the difficult work of learning to write, the technology could be enormously valuable. Teaching people to craft effective prompts, to critically evaluate AI output, to revise and personalize AI-generated drafts, and to recognize when they should write from scratch rather than relying on AI tools would help ensure that the technology enhances rather than diminishes human capability.
The issue is not the technology itself but how we choose to use it. A world where AI assists with the mechanics of writing while humans focus on original thinking and authentic expression could be enriching. A world where AI standardizes thought as well as style (where the hard work of finding one’s own voice is abandoned for the ease of algorithmic generation) would be impoverished. The choice between these futures is not yet determined, and the decisions we make now about how to integrate these powerful tools into our writing practices will shape which world we create.
The Ghost in the Machine: From Dumas to AI Agents and the Future of Writing was originally published in Mind In The Loop on Medium.