Inner Workings

F.A.Q.

We’re excited to pursue every weird (and probably terrible) idea we’ve ever had.

BACK ISSUES. Three years ago, we didn’t have demand for print issues. When that changed, Alin promised to make issues 1.1 and 1.2 available in physical format. Once our final issue has shipped, fulfilling this promise so the writers can finally see their stories on paper will be Alin’s first priority.

GAMES. We currently have several RPGs in development, heading to Kickstarter soon.

GRAPHIC NOVELLAS. We’re establishing relationships with illustrators and will be soliciting novellas in early 2024. (Don’t tell anyone yet, though. It’s a secret.)

THEMED ZINES. Prepare for some super bizarre flash calls. A catalog of fake dating profiles, a travel guide detailing places nobody wants to go, a compilation of user manuals for nonexistent inventions—we have nothing to lose over here, so the limit does not exist.

INTERACTIVE FICTION. We love scrollytelling, and we have the technology to support it, so while we don’t have anything concrete planned yet, we definitely have interactive fiction on our radar. If you’re into this kind of thing, join our Discord and ping Alin (@alin).

ONGOING COMMUNITY SHENANIGANS. Oh, did you think we’d slink quietly into the dark? That’s cute. The Dread Machine was a community of misfit bastards first and will continue to be one. Most of our members are gamers and writers who work in tech and science. Here’s what we have planned for 2024:

  • Murder Cabin 2024: Our third annual IRL writing and gaming retreat.
  • Fuck Off Fridays: Where we lift both middle fingers at the corporate establishment and normalize the four-day workweek. Every Friday from noon Eastern to whenever in our Discord server.
  • Ga(y)mers Golf Club: Golf With Your Friends at our weekly gaming nights.
  • Whatever we feel like doing. TDM is a playground for punk ass creative adults. We’re here to make art and have fun together. In service of that mission, we will continue to do whatever the hell we want, tyvm.

The debate surrounding ML has reached an intolerable level of toxicity. Maybe we’ll return when things change, but for now, we’re removing ourselves from this rank atmosphere to focus on other projects. Frankly, it’s annoying and exhausting trying to educate the uninformed and willfully ignorant, and we don’t get paid enough to tolerate this melodrama.

Unlike most, we have been aware of this tech since the day OpenAI Five stomped OG, the reigning DOTA 2 International champions, at the OpenAI Five Finals back in April 2019. We knew then that this technology was going to change the world, because DOTA 2 is, to put it mildly, a fucking hard game. A bot capable of beating pro DOTA players is orders of magnitude more impressive than IBM’s Deep Blue, which defeated Kasparov at chess in 1997.

Our position, posted June 2nd, 2023, remains the same and will live in perpetuity below.

True AI doesn’t exist. Here, we refer to LLMs and other algorithmic “neural” networks using the correct terminology, because mythologizing tech is silly. The following policies apply to all learning networks, including those deceptively branded and inaccurately referred to as “AI.” If you require a primer on the technology behind ChatGPT and similar language models, please read this before querying our policies.

We don’t feed the machines. We will never knowingly or intentionally feed or authorize any submitted or accepted works into any algorithmic system for training purposes, regardless of the neural network architecture. 

…but we can’t control what the megacorps do. Most writers are now aware that Microsoft and Google have integrated this technology into their products. Neither company provides any avenue to opt out, and users have no control over either corporation’s policies regarding whether their data will be used for training in the future. No laws require these companies to seek permission before using data for training purposes, and each is permitted by its own Terms of Service to change its terms at any time.

As of this writing (6/2/2023), OpenAI’s Terms of Service state:

(c) Use of Content to Improve Services. We do not use Content that you provide to or receive from our API (“API Content”) to develop or improve our Services. We may use Content from Services other than our API (“Non-API Content”) to help develop and improve our Services. You can read more here about how Non-API Content may be used to improve model performance.

The pertinent information on the linked page reads: 

When you use our non-API consumer services ChatGPT or DALL-E, we may use the data you provide us to improve our models. You can switch off training in ChatGPT settings…

OpenAI does not use data submitted by customers via our API to train OpenAI models or improve OpenAI’s service offering. In order to support the continuous improvement of our models, you can opt in…

In human terms, this means that if you’re using ChatGPT itself, your data could be used to train future iterations of ChatGPT, unless you opt out. However, if you’re using a program that leverages ChatGPT’s API, your data cannot be used to train future iterations of ChatGPT, unless you opt in. But remember: every tech company’s Terms of Service give them permission to change their terms at any time. What is true today may not be true tomorrow.
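For the technically curious, here’s a minimal sketch of that distinction using OpenAI’s Python SDK as it existed when we posted this (mid-2023). The key, model, and prompt are placeholders; what matters is which route your words take.

    # Minimal sketch, assuming OpenAI's pre-1.0 Python SDK (mid-2023).
    # The API key, model, and prompt are placeholders.
    import openai

    openai.api_key = "sk-..."  # your key here

    # This request travels through the API, making it "API Content" under
    # the ToS quoted above: NOT used for training unless you opt in.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Critique this story pitch."}],
    )
    print(response.choices[0].message.content)

    # Typing the same prompt into the ChatGPT web interface instead makes it
    # "Non-API Content": usable for training UNLESS you opt out in settings.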

All of this is to say: 

  • We will do everything within our power to keep your work from being used for bot training purposes.
  • We do not prohibit writers who use LLMs from submitting. We have no way of detecting generated text, which renders policies prohibiting it unenforceable and therefore pointless. In lieu of a prohibition, we will instead remind writers that “send your best work” means send your best work.
  • Those whom we suspect of rapid-firing generated trash our way will be warned to stop and potentially banned (temporarily or permanently) from submitting in the future.

WHY DON’T YOU BAN “AI”?

While bans sound nice and play well on Twitter, we live in the real world, and our policies are informed by the legal system (and Alin’s attorney). We prefer to be honest with the writers, the readers, and ourselves, and the honest truth is that such bans are unenforceable without bulletproof detection.

Publication contracts should only ever include terms that are enforceable. If we “ban AI,” we have to define what that means: what exactly constitutes generated content versus assistance? It’s a whole thing, and even if we did all of that, the ban still wouldn’t be enforceable without detection.

If we, as a company, were to accuse a writer of a contract violation we cannot prove and bring harm to their career, that writer would have cause to pursue legal action against us, and they’d be right to do it. Performative bans without enforcement plans are nothing more than empty symbolic gestures. They might make writers feel supported, but they’re impractical, and they create an environment where anyone can accuse anyone else of using this technology, with neither party able to prove or disprove the claim.

As it stands, when a generated or assisted submission comes across our editors’ desks, there are two possible outcomes. The first is that the story is determined to be formulaic or poorly written and is rejected accordingly, like all stories that fall short of our publication’s standards. The second is that we (the editors), who have been reading dozens of submissions each week and studying the art of short fiction for years, do not detect the typical hallmarks of poorly edited generated text, and the story is published. The odds that average readers, or even average writers, will detect generated text where highly experienced slushers and professional editors have not are slim to none.

To enforce an “AI” ban, we would have to use (mostly) reliable detection software like OriginalityAI, which would require us to feed submissions into an ML system, something we have explicitly agreed not to do. We’re not sure how other publications are checking for or enforcing their bans without exposing submissions to the very software whose use they’re banning.
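To make the contradiction concrete, here’s what any detector integration boils down to. The endpoint, header, and field names below are hypothetical stand-ins (we have not used OriginalityAI’s API), but the structural point holds for every such service: scoring a submission means uploading the entire manuscript to someone else’s ML system.

    # Hypothetical sketch: the URL, header, and response field below are
    # invented for illustration and are NOT OriginalityAI's documented API.
    # The unavoidable part is structural: to get a score, the whole
    # manuscript has to be sent to the detection service.
    import requests

    def check_submission(manuscript: str) -> float:
        """Return an 'AI likelihood' score from a hypothetical detection API."""
        response = requests.post(
            "https://api.detector.example/v1/scan",  # placeholder endpoint
            headers={"X-API-KEY": "our-secret-key"},  # placeholder auth
            json={"content": manuscript},  # the full story leaves our hands
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["ai_score"]  # placeholder field name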

The real insult to the writing community is that these systems were trained on copyrighted work without the copyright holders’ permission, not in service of art, but in the development of for-profit technology that corporations could then use to pressure creatives into working for less money or to eliminate their jobs entirely. We fully understand and feel that anger. We agree that it is utter bullshit. However, calling these tools “plagiarism machines” is a gross oversimplification that fails to acknowledge or address the actual ethical transgression here: the tech industry’s exploitation of art without artists’ consent and the subsequent undermining of the very people whose materials were used for training. This is where our anger is directed, not at the writers who are using these tools to generate prompts or flesh out more complex characters.

ChatGPT was trained on unauthorized materials, but it does not regurgitate what it was fed. OriginalityAI, a plagiarism monitor, is used by ChatGPT, according to OriginalityAI’s website. This technology is able to detect even the most subtle instances of plagiarism, and as of April 2023, it can detect GPT-4 output fairly reliably, but at a cost most indie publishers (including us) cannot absorb.

Our primary concern is detecting plagiarism (a real, actionable offense we can pursue through legal means), not LLM output, so we don’t see any benefit to investing our very limited funds in GPT-detection technology. In case you missed it in the FAQ item above (“Why Don’t You Ban ‘AI’…?”), detecting ML-generated content would require us to feed submissions into OriginalityAI, an ML system, which we have promised writers we will never, ever do.

While it is fucking abhorrent that these corporations would use the work of creatives without their consent to train their for-profit bots, it does not appear that OpenAI violated any existing laws in doing so, and users of this technology aren’t violating any laws themselves.

Knowing how this training data influences the output of these applications and the safeguards in place to prevent plagiarized output, we can’t help but reach the conclusion that those calling these technologies “plagiarism machines” don’t understand how the technology works and/or don’t understand what plagiarism actually is.

Current Projects


PLANE

TELEPORTER OPERATOR SIMULATOR

Coming Soon to Kickstarter

Plane is a wholesome solo journaling game where, in a world of endless possibilities, you are a humble teleporter operator. 

As an Interdimensional Teleporter Operator employed by AstroFusion Express, you record the stories of those who step through your shimmering portal and traverse the planes. Each handwritten report becomes a thread connecting lives, destinies, and the ever-shifting fabric of reality.

Plane includes:

  • A character creation guide
  • 10 premade characters
  • 4 game modes
  • 104 prompts
  • 20 possible endings
  • 50+ beautiful full-color pages
 
Available soon in premium hardcover, softcover, and PDF

Support Indie Publishing.
Make new Friends.
Join a cult.

There’s no Silicon Valley giant or venture capital lifeline here. The Dread Machine is a fiercely independent entity, fueled by the power of community and the love of written art.

Indie publishers like us are the lifeblood of the literary ecosystem. We push boundaries, take risks, and bring voices to the fore that might otherwise remain unheard.

Help us keep the gears of our grand enterprise turning. By choosing to support us, you’re becoming part of a movement, a revolution, a reimagining of what literature can be in the digital age. Join us and become indie scum, you dirty disco superstar.

Together, we can create a future that's as vibrant, diverse, and thought-provoking as the stories we love to share.
