If it wasn’t readily apparent from my profile and collection of stories, I’m a huge STEM nerd. When ChatGPT was announced, I rejoiced about the potential for innovation and efficiency. Where many saw [absolutely valid] concerns about jobs being stolen and livelihoods being jeopardized, I couldn’t help but wax optimistic: suddenly, every author had a new, free tool for peer-like review, SEO optimization, and, yes, even text generation.
AI very quickly became my new favorite writing tutor. I could paste in rough paragraphs and get validation, suggestions for improvement, and relevant SEO-optimized title ideas in seconds. With the tool, I could spend more time doing what I actually love - writing - and worry less about the editing and marketing sides of this career. On top of that, having the validation of a bot behind me boosted my confidence as a writer: I could post knowing that an algorithm trained on good writing patterns thought what I was creating was at least technically sound.
Then, one day, after submitting a story, a shocking email arrived in my inbox: Your Story Has NOT Been Approved Due to AI and/or Spam Content. I was hit with a sense of dread. Did having an AI writing tutor make me start writing so much like an AI that I looked like a robot? Was I harming my craft? Did using Grammarly cause some unforeseen consequences?
I turned to the internet to find out what the issue was. I pasted my story into ChatGPT and asked, “Does this appear to be written by AI?” The bot responded, “Nope, that’s human, definitely not AI.” I did the same with Copyleaks and Writer.com and passed both tests with flying colors, not a single sentence flagged for plagiarism or AI generation. In a panic, I went through my entire article and re-edited it, guessing at which sentence structures might sound too robotic, and resubmitted. A few hours later, a new email arrived. Once again: Your Story Has NOT Been Approved Due to AI and/or Spam Content.
Stressed and frustrated, I turned to the Vocal Social Society for support and quickly discovered that I wasn’t the only person who had received one of these warning emails. In fact, many people had been struggling with the exact same issue, or even worse, since this update.
One author I spoke with, Colleen Flanagan, joined Vocal just a couple of months ago to publish self-care articles and almost immediately ran into problems. She says she has already noticed multiple issues with Vocal’s AI detection. Out of her 13 stories, “Two already [falsely] flagged as spam or AI, including one about adult acne care I spent 6.5 hours writing, the other about my true experiences as an online psychic. When I contacted support, my request went to ZenDesk, another fiasco, and poor customer service. I wish I'd NEVER joined Vocal+ and as of today, I plan to delete my account and articles in July 2024 when my yearly subscription expires.”
Another author, Ashley Lima, was shocked to find that an essay she wrote on The Handmaid’s Tale had been flagged, an issue she was thankfully able to resolve. Even though she had published more than 100 stories in two years, won a challenge, and been highlighted by the Vocal Community, the new Vocal algorithm was concerned she was a bot.
Some users have even experienced a pseudo-shadow-ban because of false flagging. According to author Dhawy Febrianti, “I still have my author profile but it doesn’t show my profile anymore because of this issue [...] Now my Vocal author profile has gone because of this issue. I kept contacting them, [proving] that they are not AI articles but it ended up that way, apparently.”
Even more concerning, other authors, like Cendrine Marrouat, were surprised to hear that Vocal had any AI detection in place at all: “They have an AI-detection system? More seriously, I don't know what they use, but it is obviously not working. They let so many AI-generated [stories] through and get [Top Stories] that at this point, I am thinking of leaving. I think one was actually able to place in a challenge.”
Harris AJ noticed the same concerning trend on his feed: “Genuine articles get flagged; AI fills up the pages. Weird.”
The people being penalized by Vocal’s update are real writers - folks who are engaged in the community, winning contests, making top stories, and writing their hearts out. And yet, after all this frustration from individual Vocal writers, AI is still getting through and bots are still clogging up even the Top Story section.
Here’s the problem: not even AI companies are confident about being able to tell if written content is AI-generated or not. OpenAI itself recently shut down its AI-detection tool because of its disappointingly low accuracy. And we need to face the facts: AI is going to get more sophisticated and more difficult to detect. Whether we like it or not, we are engaging in an arms race we’ve already lost.
The ultimate goal here, as a community of writers, is to keep humanity in writing. To do so sustainably, we need to adapt the platform along with the undeniable shifts in technology we’re witnessing. Using AI to detect AI isn’t the solution we need - nor is it a solution that’s working well for us.
If Vocal’s concern is the frequency of posting, as author Mike Singleton suggested in his article, Art Should Be By Artists Not "Artificial Intelligence", Vocal would be better off setting post limits for writers, especially on new accounts. Or, at the very least, users could verify their accounts with personal information or complete captchas on submission so that bot accounts can’t mass-submit articles.
A truly sustainable solution will require more nuance than what we’re currently working with. More nuance, too, than I’m qualified to offer. But it is apparent that the detection system isn’t working for me, and it isn’t working for many other Vocal authors. If Vocal is going to persist, it needs a better solution than the one it’s offering us.
“I feel like they are using AI to detect AI and the whole thing is going to blow up,” author Carianna Zeisel pointed out on the same Vocal Social Society post. “I understand wanting to be aware of people using AI, but there [are so] many stories of false flagging already, they’re going to lose members over it.”
Carianna is definitely on to something here. If stories like these keep piling up, of real writers growing frustrated over the site’s false flagging, of users heartbroken at losing their accounts when they’ve done nothing wrong, writers are going to lose faith in Vocal.
After days of radio silence from support, I gave up and posted my article about fast fashion to Medium instead. As I run my final edit of this blog, I’m struck by a newfound sense of dread and nervousness: will this fully human blog be flagged? Will my account be suspended? Did I write too much like a robot again? I’ll press submit and hope for the best, but it doesn’t bring the same joy it used to.
Vocal, I hope one day I’ll feel that joy on your platform again.