Since last November, we’ve seen a steadily increasing influx of submissions written with ChatGPT and other LLMs. In February, it increased so sharply that we were left with no alternative but to temporarily close submissions, regroup, and start investing in changes to our submission system to help deter or identify these works.
All of this nonsense has cost us time, money, and mental health. (Far more than the $1,000 in harm that Microsoft Chief Economist and Corporate Vice President Michael Schwarz says should occur before any regulation of generative AI is implemented. But let’s not go down that path. It’s not like I’d trust regulation advice from the guy with his hand in the cookie jar.)
When we reopened submissions in March, I acknowledged that any solution would need to evolve to meet the countermeasures it encounters. Much like dealing with spam, credit card fraud, or malware, there are people constantly looking for ways to get around whatever blocks you can throw in their path. That pattern held.
Here’s a look at how things have been since November:
This graph represents the number of “authors” we’ve had to ban. With very rare exceptions, those are people sending us machine-generated works in violation of our guidelines. All of them are aware of our policy and the consequences should they be caught. It’s right there on the submission form and they check a box acknowledging it.
Our normal workload is about 1100 legitimate submissions each month. The above numbers are in addition to that. Before anyone does the “but the quality” song and dance number, none of those works had any chance at publication, even if they weren’t in violation of our guidelines.
As you can see, our prevention efforts bore some fruit in March and April before being thwarted in May. So then, why aren’t we closing this time?
Honestly, I’m not ruling out the possibility of future temporary closures. We were genuinely surprised by just how effective some basic elimination efforts were at reducing volume in March and April. That approach was a shot in the dark; it’s still blocking some, but it isn’t a viable option against the latest wave. We’re only keeping our head above water this time because we have some new tools at our disposal. From the start, our primary focus has been to find a way to identify “suspicious” submissions and deprioritize them in our evaluation process. (Much like how your spam filter deals with potentially unwanted emails.) That’s working well enough to help.
I’m not going to explain what makes a submission suspicious, but I will say that it includes many indicators that go beyond the story itself. This month alone, I’ve added three more to the equation. The one thing that is presently missing from the equation is integration with any of the existing AI detection tools. Despite their grand claims, we’ve found them to be stunningly unreliable, primitive, significantly overpriced, and easily outwitted by even the most basic of approaches. There are three or four that we use for manual spot-checks, but they often contradict one another in extreme ways.
If your submission is flagged as suspicious, it isn’t the end of the world. We still manually check each of those submissions to be certain it’s been properly classified. If you’re innocent, the worst that happens is that it takes us longer to get to your story. Here are the possible outcomes:
- Yep, you deserved it. Banned.
- Hmm… not entirely sure. Reject as we would have normally, but future submissions are more likely to receive this level of scrutiny.
- Nope, innocent mistake. Reset suspicion indicator and process as a regular submission.
If spam doesn’t get caught by the filter, the process changes to:
- Make note of any potentially identifying features that can be worked into future detection measures.
- Banned.
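The triage process above can be sketched in code. To be clear, this is not Clarkesworld’s actual system — the indicators, weights, and threshold here are invented for illustration, since the real signals are deliberately undisclosed. It only shows the general shape of a deprioritization queue: suspicious items aren’t rejected automatically, they just wait for manual review.

```python
# Hypothetical sketch of a suspicion-based triage queue, in the spirit of
# the process described above. All signal names and numbers are made up.

from dataclasses import dataclass, field


@dataclass
class Submission:
    author: str
    # Map of indicator name -> weight; the real indicators are secret.
    indicators: dict = field(default_factory=dict)

    @property
    def suspicion(self) -> float:
        return sum(self.indicators.values())


THRESHOLD = 1.0  # hypothetical cutoff


def triage(queue):
    """Split incoming submissions into a normal queue and a flagged queue.

    Flagged items are deprioritized, not auto-rejected: each is manually
    checked and either banned, rejected with added scrutiny, or reset to
    a regular submission.
    """
    normal = [s for s in queue if s.suspicion < THRESHOLD]
    flagged = [s for s in queue if s.suspicion >= THRESHOLD]
    return normal, flagged
```

The key design point, as the post says, is that a flag only changes *when* a story is read, never whether an innocent author gets published.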
As I’ve said from the beginning, this is very much a volume problem. Since reopening, we’ve experienced an increasing number of double-workload days. Based on those trends and what we’ve learned from source-tracing submissions, it’s likely that we will experience triple or quadruple volume days within a year. That’s not sustainable, but each enhancement to the present model (or even applying temporary submission closures) buys us some more time to come up with something else and we’re not out of ideas yet.
EDIT 6/1/2023:
Updated Graph for full-month data (below).
Nick Nolan
Suggestion.
Implement a deposit system of $X for any submitter (author). The sum can be small, like $5 or so.
Author deposits $ to get a submission in. Author loses the deposit money if the content is deemed to be generated spam. An honest author needs only one deposit. An author can use his/her own deposit to submit others’ work.
Andrew Hickey
He’s already explained numerous times that that wouldn’t be an option, because it would cut off a large number of genuine but marginalised submitters.
Neil Clarke
1. It would create a barrier to entry for financially disadvantaged authors, those without credit cards or PayPal accounts, or authors from other parts of the world where international transactions are more restrictive. We’ve published authors that fit those profiles and are not willing to shut out others in similar circumstances.
2. Submission fees are culturally unacceptable in science fiction and fantasy. Money is supposed to flow towards the author (Yog’s Law) and many authors would (rightly) stop submitting work even if we waived the fee for them.
3. The logistics of such a system would actually create a lot of work managing payments and refunds. There would likely be a high percentage of refunds, which would make the payment processor unhappy. In turn, they would probably cancel our account.
4. We would likely be hit with a lot of credit card fraud by scammers, which would be yet another nightmare.
kaibutsu
Metafilter has long had a ‘$5 or send us a postcard’ policy for registration, which cuts out almost all spammers. The two differences here are that it’s a small payment for registration rather than per submission, and there’s basically a human-effort alternative to paying the registration fee.
Ashley Carter
The deposit would also open the magazine up to legal action if they keep the $5. The submitter could sue and make grandiose claims about defamation, etc. They wouldn’t win, but Clarke doesn’t have the time or lawyer fees to respond to all the possible lawsuits.
William Lexner
In light of today’s Washington Post article on a professor falsely accusing his students of using ChatGPT, I am curious as to how you are able to differentiate genuine text from artificial, Neil. I’m not trying to open Pandora’s box, I’m genuinely curious.
Neil Clarke
Above, I said:
“I’m not going to explain what makes a submission suspicious, but I will say that it includes many indicators that go beyond the story itself.”
Quite simply, I’m not putting information out in the world that would help someone get around our detection efforts.
William Lexner
Gotcha, and that makes a lot of sense. I’ll remain curious.
Eric B
Keep us posted on how this is going for you. If editors and other gatekeepers can’t keep the machines out, there’s little hope for honest writers.
Matt Mikalatos
I just want to say, Neil, that I really respect you and your team for the way you’re handling all this, and the way you’re keeping (legitimate) authors centered as people worth protecting in the process. Truly appreciate it, and I know that comes at a price for all of you. Thank you.
Joker B
Curious that the question of quality hasn’t been mentioned. Are the GPT-enabled stories of significantly different quality? That is, are they really “spam” in the way spam has been perceived for past decades?
There are more ways to provide credibility-of-humanity than pay-to-play. Might I suggest keeping a list of known “respectable” humans, who got that way either through past good writing, through service to the community of writers (as editors, reviewers, etc), or via recommendation — someone with a lot of brownie points might spend a few to recommend a promising outsider author. Or cash, but cash needn’t be the only way to open a private door.
Neil Clarke
It’s right there in the post: “none of those works had any chance at publication, even if they weren’t in violation of our guidelines.”
We’ve worked with a lot of authors that would have been excluded by a process that requires having to know someone.
Coagulopath
> Are the GPT-enabled stories of significantly different quality?
I can answer, sight-unseen: yes.
AI-written fiction is incredibly poor. Unless a human prompter steers it at every point (thus defeating the purpose of using AI), it just dives straight into a trough of blandness and cliche. You’ll get a story about Bob, who journeys out on a quest to find a magic sword and save a kingdom, with treacly sentimentality and purple prose piled on at every turn.
“In conclusion, the hero of this tale learned a wonderful lesson about the importance of friendship, and of being true to yourself.” <– it's not quite THAT bad, but it's close.
Cady
Is there any merit to threatening legal action towards the spammers? Like, they’re clearly in violation of the site’s TOS. I’m no expert on any of this stuff and don’t really understand the submission process (I just read the amazing stories), but maybe AI spammers would be less willing to click through that box if said box said, “Lying about this means we can sue you”. If the motivations are primarily financial, this might make a few of them stop and think.
Neil Clarke
Words aren’t a deterrent with these people. We only put the checkbox statement there to make it clear there would be consequences. More for our protection. Even if they believed us, they’d know that it would cost us more than we’d get… if anything. Better to stick to what we will actually do than make hollow threats.
Tiffany S
Wishing you the best of luck. What a nightmare. Would going back to paper submissions be a barrier, I wonder?
Neil Clarke
That would be detrimental and not much different from charging a submission fee (one that would cost international authors even more). The industry’s move to digital submissions resulted in much greater international participation, and going back would likely reverse all those gains. The price is too high.
Lancelot Schaubert
This. Prairie Fire returned to this. They’re Canadian. By the time I was done with the postage and everything, it was about $4–$5.
Not to mention the time.
Though… nostalgically I do miss my stack full of paper rejections at the old house. Kind of kicking myself for getting rid of it because it had names of many editors who are retired or have passed beyond the veil.
Thanks for leading the charge, Neil.
Coagulopath
Am I correct in suspecting that the driver is a grifter network on YouTube/Discord sharing ways to make money with ChatGPT? Do you see it burning out, after it becomes clear there’s no ROI in doing this?
Neil Clarke
It’s definitely still in play and will continue to contribute to the problem for the foreseeable future. Keep in mind that the people doing this know it will not work. It’s not about this grift working (which is really just the shiny bait), it’s about the other one: getting more views on their channel (ad revenue) or in some cases offering paid courses to those who can’t make it “work” like it did for them. They have the secret to success and for only $199 it can be yours too!
We’ve seen hundreds of their followers hit our site. That’s a drop in the bucket in YouTube/TikTok grifter land. It will be a while before they run out of people willing to try.