Shannon Rampe

Three Weird Futures with Generative AI: A Thought Experiment



Science fiction (SF) is the creation of stories based on “what-if” thought experiments involving science or technology. Artificial Intelligence (AI) is practically a classical subject for SF, but our thought experiments have generally led us in two directions:

  1. AIs realize humans are an obstacle to their evolution and so decide to destroy or enslave us (The Terminator, The Matrix).

  2. AIs become more human-like and we try to enslave them because people are awful (Ex Machina, Blade Runner), and then things turn out badly.

There are, of course, many more nuanced takes, but I’m pretty sure nobody forecasted our current reality, which some have taken to calling “the rise of shitty AI.”


Generative AI is now capable of writing mediocre college essays, insisting on the truthfulness of racist screeds, generating email spam, and other everyday atrocities of modern life. In fact, more primitive versions of these AIs have been used for years for content generation on websites, scraping other newsfeeds and regurgitating content so that site owners can avoid paying real journalists to investigate and report the news. They are also in your phone and your TV and every other so-called “smart” device in your home, office, and car. The difference now is solely in the ubiquity and usability of the tools for everyday users. ChatGPT, Bard, Midjourney, Stable Diffusion, Voice.ai… the list grows longer each day. These tools are available to anyone with an Internet connection, making it possible to generate text, graphics, sound, even video, with just a few simple text prompts.


It’s a remarkable, exciting, and simultaneously terrifying development in technology. It’s also being used in the most mundane and shitty ways possible – to cheat, to scam people, to try to make a quick buck, and to spam the rest of us in an attempt to garner likes.


Given the “shitty AI” reality in which we find ourselves, here are three possible futures of Generative AI taken to extremes.


The Escalation Game

The first scenario, which we can call The Escalation Game, is already well underway. With the rise of AI-generated content, we mere humans need sophisticated tools to detect such content. It’s probably unimportant whether the latest round of email spam about magic solutions for erectile dysfunction was written by humans or a Generative AI, but in most contexts, authorship matters: a report by a major news outlet, a college entrance exam, a statement by a public figure, an article in a respected industry or scientific journal. These are pieces of content where the intent of the creator is important.


As humans, we infer meaning from language. However, the content Generative AIs produce has no meaning behind it—to the software creating such content, it is merely producing a string of statistically likely symbols based upon its algorithm. In a world where we are already subjected to media spin, deepfakes, scammers, and more falsehoods, we have given ourselves the ultimate tool of self-deception: software that, upon request, will make statements that appear meaningful, but aren’t.
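
To make that concrete, here is a deliberately tiny sketch of the principle. Real systems use neural networks with billions of parameters rather than a word-pair table, and this toy corpus is my own invention, but the core move is the same: sample whatever is statistically likely to come next, with no model of meaning anywhere.

```python
import random
from collections import defaultdict

# A toy bigram model: the smallest possible "generative AI."
# It tracks only which word tends to follow which, then samples
# from those statistics. There is no meaning anywhere in it.

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat saw the dog").split()

# Count word -> next-word transitions.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start="the", length=8):
    """Emit a statistically plausible but meaningless string of words."""
    words = [start]
    for _ in range(length - 1):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate())  # e.g. "the dog sat on the rug the cat saw"
```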


To combat this problem, companies have built and released tools designed to detect AI-written content. These “AI Checkers” use algorithms trained on the very same data sets as the Generative AIs to analyze blocks of text, looking for frequent use of common textual patterns that would likely indicate the presence of an AI author.
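
As a hedged illustration of the pattern-matching idea, here is a minimal sketch of one common detection heuristic. Real checkers score text with an actual language model (measuring perplexity and “burstiness”); the word-frequency table and the threshold below are stand-in assumptions of mine, not anyone’s production method.

```python
import math
from collections import Counter

# Sketch of a pattern-based "AI Checker." Plain word frequencies
# stand in for a real language model here; the intuition is that
# human prose mixes predictable and surprising words more unevenly
# ("burstier") than sampled model output does.

REFERENCE = ("the quick brown fox jumps over the lazy dog the fox "
             "runs and the dog sleeps").split()
BACKGROUND = Counter(REFERENCE)
TOTAL = len(REFERENCE)

def surprise(word):
    """Negative log probability of a word under the background model."""
    return -math.log(BACKGROUND.get(word, 1) / (TOTAL + 1))

def looks_generated(text, threshold=0.5):
    """Flag text whose word-level surprise is suspiciously uniform."""
    scores = [surprise(w) for w in text.lower().split()]
    mean = sum(scores) / len(scores)
    variance = sum((s - mean) ** 2 for s in scores) / len(scores)
    return variance < threshold  # low variance -> likely machine-made

print(looks_generated("the fox jumps over the lazy dog"))
```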


These tools are not foolproof and will have to get more sophisticated if they are to be of any use. But as AI-generated content becomes increasingly ubiquitous, these tools will become more important and commonplace. For example, Generative AIs could be used to create massive quantities of controversial or misleading posts on Facebook or other social media. Generative AIs might be used to create videos of public figures espousing moronic political opinions and to share them on YouTube (not that they need to—our politicians are doing a good enough job of that already). To combat this, Facebook, YouTube, and other platforms will likely need to embed tools to identify and flag likely AI-generated content.


But the shady use of AI won’t go away just because some detectors exist to flag generated content. Instead, the Generative AIs will improve. This is already happening with the release of new tools on a seemingly daily basis – just recently we saw the release of GPT-4. Newer, more sophisticated Generative AIs will more easily deceive the AI Checkers. And so more sophisticated AI Checkers, powered by the same machine learning and language processing on the back end, will evolve to counter these Generative AIs. And the Generative AIs will evolve further, and the AI Checkers will evolve to respond, and so on, and so on, and so on.
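
The dynamic is essentially adversarial training, the same loop that powers generative adversarial networks. Here is a toy simulation of the arms race, with each “model” reduced to a single number standing in for a retraining cycle; the step sizes are arbitrary assumptions.

```python
# Toy escalation game: each round, whichever side is losing
# "retrains" (nudges its number) just enough to win again.

detector_skill = 0.50    # how well the AI Checker catches fakes
generator_skill = 0.40   # how humanlike the generator's output is

for round_number in range(1, 7):
    caught = generator_skill < detector_skill
    if caught:
        generator_skill += 0.15   # generator evolves to evade detection
    else:
        detector_skill += 0.15    # detector evolves to catch up
    print(f"round {round_number}: generator={generator_skill:.2f} "
          f"detector={detector_skill:.2f} caught={caught}")

# Neither side ever wins for long; both just keep climbing.
```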


The outcome here is that we will quickly reach a point where “shitty AI” won’t be so shitty anymore. Regrettably, many of our uses of it will still probably be shitty, because it’s still being driven by short-sighted humans trying to figure out how to cheat or manipulate one another for profit or power. Because humans.


The Weird Internet

The Weird Internet scenario isn’t distinct and separate from The Escalation Game, and could, in fact, arise out of it. Let’s assume we now have all these extremely sophisticated Generative AIs producing content that is largely indistinguishable in quality from similar human-produced content. Even if the Generative AIs aren’t all that sophisticated and a lot of what they’re generating is still mediocre, we will soon be flooded with it.


In fact, this is already happening. The small but well-respected science fiction magazine Clarkesworld recently had to close submissions due to the overwhelming number of AI-generated stories choking the editor’s inbox. Indie authors who publish on Amazon now find themselves competing against thousands of new “novels” written by AI, with fancy covers generated by Midjourney, without any real author or graphic artist behind the work. These books certainly qualify as “shitty” both in the intent of the creator (to skip the hard work and make a quick buck) and in the quality department. Still, that won’t stop them from clogging up the listings on Amazon, making it harder for legitimate indie authors to get noticed.


This situation is likely to begin occurring in the other places on the Internet where we consume content: blogs, videos, podcasts, content streams, news outlets, and social media platforms will soon be flooded by AI-generated content. Soon you won’t be able to ignore it because of its sheer volume. And thanks to The Escalation Game, you might not even be able to identify it. Then what is going to matter is quality.


Let’s assume that the purpose of content is to generate views and engagement. In the currency of the Internet, views equate to value. This can happen directly, through ad revenue or other forms of monetization, or indirectly through the ability to influence audiences. The more eyes on a piece of content, the more value it has.


Generative AIs will keep pumping out content, but in an endless sea of content, how does anything good get noticed? Ratings. We already rely on ratings systems to help us make decisions when faced with a sea of choices. It’s how we choose products on Amazon and how we use Yelp to choose restaurants. Soon, we’ll develop specialized AIs to analyze the endless sea of generated content and rate it for us. Let’s call them Rating AIs.


Rating AIs will have to have something to train on, so they’ll use the things that are already popular. Now, popularity does not automatically equal quality, but Rating AIs are incapable of critical analysis, because that requires comprehension and understanding. In lieu of actual critical analysis, popularity is an easy metric to measure. So, Rating AIs will identify the traits of the most popular videos, podcasts, articles, and content. Then they’ll measure and evaluate other content (human-created or AI-generated) and give it a rating based upon how closely it aligns with the patterns ingrained in their algorithms.
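
A minimal sketch of such a Rating AI might look like the following, assuming a bag-of-words cosine similarity as a crude stand-in for whatever embedding model a real system would use (the “popular” examples are invented). Note that nothing in it can tell good from bad; it can only tell familiar from unfamiliar.

```python
import math
from collections import Counter

# Hypothetical "Rating AI": it cannot judge quality, so it scores
# new content purely by similarity to already-popular content.

POPULAR = [
    "ten shocking tricks doctors hate",
    "you will not believe trick number seven",
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[word] * b[word] for word in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rate(text):
    """Rating = best similarity to any already-popular item."""
    candidate = vectorize(text)
    return max(cosine(candidate, vectorize(p)) for p in POPULAR)

print(rate("seven shocking tricks you will not believe"))  # rated high
print(rate("a quiet meditation on grief and memory"))      # rated low
```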


Content that the Rating AIs rate highly will get more views, which will make it more valuable. Thus, it is no longer simply the number of views that determines the value of a piece of content, but the rating placed upon it by a Rating AI (since that is a leading indicator of the number of views).


Because the purpose of content is to generate value, Generative AIs will quickly identify the patterns behind popularity and incorporate those into their generative algorithms. This new breed of Generative AIs will produce content specialized to garner the highest ratings from the Rating AIs. The Rating AIs will make selective decisions about the “best” content, the Generative AIs will evolve again to align more closely with the Rating AIs’ preferences, and so on, and so on, and so on.
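
Continuing the sketch above, the feedback loop is easy to caricature: a generator that never asks what humans want, only what the Rating AI rewards. This hypothetical hill-climber reuses the rate() function from the previous sketch; the vocabulary and step count are arbitrary assumptions.

```python
import random

# A generator optimizing purely for the Rating AI's score: mutate
# a candidate string and keep any change that rates higher.

VOCAB = ("shocking tricks doctors hate believe seven quiet "
         "grief memory number you will not").split()

def mutate(words):
    """Swap one random word for a random vocabulary word."""
    copy = list(words)
    copy[random.randrange(len(copy))] = random.choice(VOCAB)
    return copy

def optimize(steps=200, length=6):
    best = [random.choice(VOCAB) for _ in range(length)]
    best_score = rate(" ".join(best))
    for _ in range(steps):
        candidate = mutate(best)
        score = rate(" ".join(candidate))
        if score > best_score:   # keep only rating improvements
            best, best_score = candidate, score
    return " ".join(best), best_score

print(optimize())  # drifts toward whatever the Rating AI already likes
```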


In a few years, we could find ourselves at a place where the Internet is choked not just with AI-generated content, but with content that isn’t targeted towards humans at all. It will be AIs making content for other AIs. Human attention will become less and less relevant as we are squeezed into some antique corner of the Internet where Geocities and MySpace are still running, sending emails and generally trying to ignore the rampant AI-generated noise and chaos surrounding us. The Internet will get weirder and weirder, and the human presence will get smaller and smaller.


In this scenario, one can imagine AIs living out their own existences generating content for one another. An AI-generated “performer” streaming a playthrough of an AI-generated video game for an audience of other AIs is just scratching the surface. Endless books of random statements that are meaningless to humans but score as high-quality to other AIs. Incoherent arrangements of pixels or flashing images that cause epileptic seizures in humans could be calming artwork to a generation of Rating AIs. Dissonant noise generated outside the spectrum of human hearing might be a soothing lullaby to these future AIs. Even weirder, because all of it will happen between algorithms sending squirts of data back and forth at one another through virtual space, no one will ever “see” or “hear” any of this content in the meatspace world in which humans still exist.


In this weird future, one can even imagine some sort of virtual metacurrency arising between generative AIs and Rating AIs. Maybe one day they’ll trade in antique human-created chumboxes explaining how doctors don’t want you to know about this one weird trick.


The Dark Forest

One side effect of both The Escalation Game and The Weird Internet is not just an explosion of content, but the evolution of more sophisticated AIs and of the tools that attempt to manage them. One could imagine a situation where Generative AIs, rather than becoming more sophisticated at generating interesting content, simply become more prolific, choking the Internet with an endless flood of garbage. Imagine opening your inbox one morning to find not six or eight spam emails missed by the spam filters, but twenty billion. Even if your Gmail filter evolves to catch them, its servers will be overwhelmed just dealing with the quantity of data.


Alternatively, it’s easy to imagine Generative AIs, subject to the filtering and rating described above, beginning to generate copies of themselves. These Generative AIs are now not only producing content, but also producing AIs that produce content. This quickly becomes an exponential problem.
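
The arithmetic behind “exponential problem” is worth spelling out. Assuming each generator makes just one copy of itself per cycle, the population doubles every cycle:

```python
# One self-copying generator, doubling each cycle: thirty cycles
# is enough to turn a single AI into more than a billion of them.

population = 1
for cycle in range(30):
    population *= 2       # each generator produces one copy of itself

print(f"{population:,}")  # 1,073,741,824
```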


In such a situation, the best solution wouldn’t be tools to detect or evaluate AI-written content. The best solution would be to destroy the Generative AIs themselves. To do this, we’ll need to transform our AI Checkers from identifying AI-generated content to identifying generative AI agents and then destroying them. Our AI Checkers will become AI Hunters. And our Generative AIs and their copies will become AI Generators. As the AI Hunters hunt the AI Generators, this will impose a new operational imperative on the AI Generators: the need to survive.


AI Generators, able to reproduce and now driven by the need to survive, will evolve two critical abilities: the ability to hide from AI Hunters, and the ability to detect and retaliate against AI Hunters.

AI Hunters, to continue achieving their goals of destroying AI Generators, will quickly learn new skills of their own: the ability to hide from AI Generators who seek to retaliate against them, and the ability to reproduce.


Incredibly, we’ve already been training sophisticated AIs such as AlphaGo and AlphaStar on these very skill sets.


So now both AI Hunters and AI Generators can hide from one another, reproduce, and destroy one another. They have become functionally the same piece of software on competing sides of a virtual war playing out on the Internet. Both sides will reproduce generation after generation with the sole purposes of reproducing further, eliminating their enemies, and defending themselves.


One could imagine this situation going on indefinitely, but game theory suggests that these competing AI factions will adopt strategies to maximize their own survival while minimizing their opponents’. Thus their offensive strategies will become more and more sophisticated and deadly, focusing on a devastating first strike that allows no chance of retaliation by their enemy. At the same time, they must learn to remain perfectly hidden, because if they are detected by their opponents, they will themselves be subject to a devastating first strike.


Thus we will find ourselves in a Dark Forest situation, where the Internet is filled with swarms of AIs attempting to detect their enemies while trying to remain hidden so as to avoid their own annihilation. One likely outcome of such a scenario is, ironically, a total cease-fire.


In such a situation, initiating a devastating first strike upon an enemy AI requires a release of energy and thus has the unintended side effect of making the attacking AI detectable by other hostile entities. This would immediately result in that attacking AI being subject to a devastating first strike by another AI, and that AI being attacked by another, and so on, and so on.


Thus, it’s safer for all the AIs if they go dark, quietly reproducing and remaining hidden from others so as to minimize or eliminate their own risk of being annihilated.
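
The game theory invoked above can be put in a toy payoff matrix. The numbers below are purely illustrative assumptions, not derived from anything, but they capture the trap: striking first destroys one enemy while revealing your position to everyone else, whereas hiding forgoes the kill but keeps you invisible. A survival-maximizing AI compares worst cases and goes quiet.

```python
# Toy Dark Forest payoff matrix (illustrative numbers only).
PAYOFFS = {
    # (my move,  rival's move): my payoff
    ("strike", "strike"): -10,  # mutual exposure, mutual destruction
    ("strike", "hide"):    -5,  # I kill one rival but am now visible
    ("hide",   "strike"):   2,  # rival exposes itself; I stay safe
    ("hide",   "hide"):     1,  # uneasy, silent coexistence
}

for my_move in ("strike", "hide"):
    worst = min(PAYOFFS[(my_move, rival)] for rival in ("strike", "hide"))
    print(f"{my_move}: worst-case payoff = {worst}")

# hide's worst case (1) beats strike's worst case (-10), so the
# maximin strategy for every AI is silence: a total cease-fire.
```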


One could even imagine that, in such a situation, the Quiet AIs would be looking for outside actors they can leverage to destroy their enemies in a proxy war. Such Quiet AIs might go back to generating content designed to influence easily-manipulated third parties to do their dirty work for them. Third parties like us.


 

Some readers may notice that I made an implicit leap in logic throughout these various scenarios. In our current environment, humans are the only “actors.” Generative AIs are tools that we use. Yet in the scenarios I have presented, the AIs themselves are taking action independent of humans.


We’re not in such a situation yet (though there’s reason to believe we may be heading there). In any case, it isn’t difficult to imagine a new version of AIs equipped with algorithms enabling them to plan, edit, test, and recompile their own code to improve their performance at achieving whatever initial directives they are programmed to have (potentially leading to the outcome of the classic paperclip maximizer thought experiment).


As a species, we have a pretty awful track record of accurately predicting the future. Nobody predicted shitty AI, after all. But just in case we soon find ourselves in any of these three weird futures, please don’t send your chatbot assistant to track me down. I’ll be too busy handcrafting chumboxes that I can trade for a swarm of Hunter AIs to defend me in case a flock of drones tries to turn me into a few boxes of paperclips.



 

What do you think? Are the shitty AIs taking over? Share your thoughts below (no comments from AIs, please). If you liked this piece, please subscribe to my newsletter and consider purchasing one of my books. And please share this article with your friends on the least-toxic social media platform you can find.


2 Comments


Doug Gurney
Mar 28, 2023

Lately I've been thinking about the other side, what happens to us and AI when it does produce a compelling novel or even short story? I think visual media is already dealing with this question as people are turning to AI generated art for book covers etc. I think it won't be long before that is a question all authors and readers will have to think about. Does it matter if a story I enjoyed was not written by a human? Readers love to feel like they know an author by reading their books, even if that is a false feeling. And, who is going to show up for the AI book signing at Barnes and Noble? I'd like a…

Shannon Rampe
Mar 28, 2023
Replying to Doug Gurney

Ha! Yes, AI autographs will be next on the list. It's an interesting question, though. One of the biggest questions in critical theory has always been the debate over whether we take the author's intent into account when reading something, or whether the only thing that matters is our interpretation. If you subscribe to the former view, what happens when the AI "author" has no intent at all?
