The Ghost in the Gig Economy: How AI is Being Used to Cheat the Authors Who Can Least Afford It

If you've ever hired a cover designer, editorial reviewer, or beta reader through a freelance marketplace — read this before you do it again.

ADS Publishing

5/7/2026 · 7 min read


There is a thriving cottage industry built on the desperation of indie authors.

It has always existed. The vanity press that charged thousands for publishing deals worth nothing. The "literary agent" who collected reading fees and disappeared. The formatter who delivered a Word document with the margins changed.

But something has shifted in the past two years. The scam has been industrialised.

AI tools that can generate a developmental editorial letter, a beta reader report, a cover design brief, or a line-edited manuscript in under three minutes are now freely available to anyone with a browser. And a growing number of people selling their services on freelance marketplaces are using exactly these tools, charging professional rates and delivering AI output.

This isn't speculation. It's a pattern that authors are reporting with increasing frequency, and one that is becoming harder to detect as the technology improves.

What's Actually Being Sold

Let's be specific about what this looks like in practice.

Editorial services are the most lucrative target. A developmental edit of an 80,000-word novel from a qualified human editor costs between £800 and £2,500. A freelancer using an AI tool can generate something that looks like a developmental editorial letter, complete with structural observations, chapter-by-chapter notes, and character arc commentary, in a matter of minutes. The output will be fluent, confident, and almost entirely generic.

The problem is not that it's wrong. The problem is that it's not really about your book. It applies observations that would be broadly true of almost any manuscript in the genre. It identifies issues that statistical analysis of fiction would predict exist (slow midpoints, underdeveloped secondary characters, thematic inconsistency in act three) without actually reading your work to know whether any of those things are true of yours.

You pay for a mirror. You receive a template.

Beta reading services are the easiest to fake and the hardest to detect. A thoughtful beta reader brings their emotional response to your story: what confused them, where they stopped caring, which character they loved without expecting to. These reactions cannot be fabricated. They can, however, be convincingly simulated. AI can generate reader response reports that hit the expected beats of what a beta read should contain. They will be articulate. They will feel credible. They will tell you almost nothing useful.

Cover design is more visible but no less problematic. The issue here is not always deliberate fraud; it is the widespread practice of passing off AI-generated imagery without disclosure. An author pays for a cover that reflects their characters and world. They receive an image generated in thirty seconds from a prompt, with no licensing, no creative input, and potentially identical visual elements to covers already on the market.

The Red Flags

Some warning signs remain consistent regardless of how sophisticated the tools become.

Turnaround time that doesn't add up. A developmental edit of a novel-length manuscript takes an experienced editor between two and four weeks of focused work. If you submit on Monday and receive a full editorial report by Thursday, something is wrong. Speed is the giveaway that human attention was not applied.

Generic structural observations. Read your editorial letter. Does it describe specific scenes? Does it name secondary characters correctly and comment on their function? Does it reference actual passages of your prose? Or does it speak in broad terms about "the midpoint" and "the protagonist's arc" without anchoring those observations to anything you actually wrote? The latter is a template. The former is a read.

Suspiciously consistent formatting. AI-generated editorial feedback has a characteristic structure: lists, frequently in groups of three; numbered points; evenly weighted sections; a balanced mix of praise and critique that never tips too far in either direction. Human editorial feedback is messier. It gets excited. It returns repeatedly to the same concern. It occasionally loses the thread and doubles back. Perfectly structured feedback is not a sign of quality; it is a sign of automation.

No questions before delivery. A human beta reader or editor who has engaged with your work will almost always have questions. About your intentions. About a scene that confused them. About whether a particular choice was intentional. If your feedback arrives without a single question having been asked, the person delivering it did not read enough to have any.

Portfolio work that all looks the same. For cover designers and illustrators, examine their portfolio. AI-generated imagery has visual tells: in the rendering of hands, in background details that don't quite cohere, in a smoothness of texture. More tellingly, AI-generated portfolios tend toward the same visual style regardless of brief, because they are all produced by the same underlying models.

Reviews that praise speed above quality. When you read testimonials for a freelance service and the recurring praise is "fast turnaround" and "very professional" rather than specific descriptions of how the work helped, be cautious. These are the reviews of people who received something that looked plausible, not something that changed their book.

Why This Is Getting Harder to Catch

The short answer is that detection is already difficult, and it will continue to become more so.

Early AI-generated text had clear markers. Certain phrases appeared with unusual regularity. The prose had a characteristic flatness. Structural patterns were predictable. Detection tools emerged to identify these markers, and for a period they were reasonably effective.

That period is ending.

Current AI models produce output that passes most automated detection with ease, particularly when lightly edited by a human who knows what they are doing. The freelancer who runs a manuscript through an AI tool and then spends twenty minutes personalising the output (adding your character names, inserting one or two genuine observations) is now nearly impossible to identify through the text alone.

This is not a future problem. It is a present one. And it disproportionately harms the authors who can least afford it: the indie writer spending £300 they saved over three months on an editorial assessment that will shape their next revision, the debut novelist whose first cover needs to stand out in a crowded marketplace, the author who genuinely needs a reader's honest reaction before they decide whether to publish.

Who Gets Hurt

The experienced, well-networked author is largely insulated from this. They have relationships. They work with editors they've known for years, beta readers who are also writers in their community, designers whose human portfolio they've watched develop over time. They have enough experience to recognise when feedback doesn't ring true.

The beginner does not have these things. They don't yet know what editorial feedback looks like, so they can't identify its absence. They don't have a community that can vouch for a designer's work. They don't know that a three-day turnaround on a full manuscript is a red flag, because they don't know how long a real read should take.

This is who these services prey on, often through nothing more than meeting a low bar that the buyer hasn't yet learned to set higher.

How to Protect Yourself

None of what follows will make you immune. But it will make you significantly harder to deceive.

Build relationships before you need them. The time to find a trustworthy editor is not when your manuscript is finished. Join writing communities, Discord servers, genre-specific forums, in-person groups if any exist near you. Ask who other authors in your genre have worked with. Personal referrals from writers whose judgement you trust are the single most reliable filter available.

Ask for a sample before committing. Most editors will offer a sample edit of your first chapter or a defined page count. This serves two purposes: it shows you their style and whether you're a good fit, and it gives you something concrete to evaluate. If a freelancer declines to provide any sample work and asks you to judge from reviews, that is a reason for caution.

Ask specific questions. Before hiring, email the person with a question that requires them to have read your submission materials. Not "are you available?" but something specific to your project. How they respond will tell you whether there is a human being on the other end of the conversation who has actually looked at what you sent.

Read feedback against your own manuscript. When you receive an editorial letter, open your manuscript alongside it. Find the scenes being referenced. Verify that the observations are specific to these pages. If you can't locate the moments being described, the feedback may not have been written with your manuscript in front of the writer.

Pay attention to what's missing. Human feedback includes surprise: the character the reader didn't expect to love, the scene that hit harder than anticipated, the subplot that worked better than the main plot. It includes confusion: the moment where they didn't understand what was happening and had to reread. AI feedback tends toward confident assessment. If your feedback contains nothing that surprised you, nothing that felt like a personal reaction, consider what that absence means.

Use contracts and stage payments. Never pay in full upfront. Agree a fee structure, a delivery schedule, and a revision round. Freelancers expect this. Anyone unwilling to operate on a staged payment basis for a significant commission is a risk.

A Final Thought

ADS Publishing exists, in part, because its founder experienced exactly what this article describes: money spent on services that were not what they claimed to be, feedback that didn't help, work that had to be redone. It is such a common experience among indie authors that it shaped the decision to build something different.

AI is not going away. It will continue to improve, and it will continue to find its way into creative services whether the industry welcomes it or not. In this it is no different from every transformative technology that preceded it: the computer did not destroy writing, but it did make it far easier to produce. Spell-check did not replace the copy-editor. Grammar tools did not replace the editorial eye. They shifted the baseline.

This is the same shift, at greater speed and higher stakes.

The answer is not to reject the technology. It is to insist, always, that a human being is involved in work that matters, not as a rubber stamp on an AI output, but as the intelligence in the room. Technology is a tool. The question to ask of any service you commission is not whether AI was used, but whether a skilled, attentive human being took responsibility for what was delivered.

If the answer is no, or if you can't tell, that's your answer.

ADS Publishing is a genre fiction publisher and creative technology company based in the UK. We publish horror, fantasy, science fiction, and romance, and we build the tools our authors use, because we believe technology should serve writers, not replace them.