This game-writing AI landed in big trouble after players taught it "dirty jokes"


[GameLook exclusive report; please credit the source when reprinting]

GameLook report / As a technology capable of producing content at scale, AI has recently become a hot topic in both the tech industry and the game industry as the metaverse concept heats up.

With the improvement of machine learning technology, AI can not only write articles and code but has also picked up a surprising range of other skills. Recently, however, the technology ran into a thorny problem: although it can learn from an enormous corpus of content, it lacks human judgment, and it generated sexual content involving children, drawing questions and opposition from the players of an AI text adventure game.

That game is "AI Dungeon," released earlier by the American startup Latitude.

Writing stories with AI: what exactly is "AI Dungeon"?

In December 2019, the Utah startup released the pioneering online game "AI Dungeon," demonstrating a new form of human-machine collaboration. The company uses text-generation technology from the artificial intelligence company OpenAI to let players create customized adventures in the style of Dungeons & Dragons.

"AI Dungeon" is an unusual game: it requires an internet connection at all times, yet most of what players create is a single-player adventure experience. During play, players can publicly share their text adventures, explore adventures set in different worlds, or choose multiplayer games.

Each world has its own characteristics, settings and backstory. Some worlds are free to experience, while others require players to pay to join the adventure.

For example, the world I chose, Winterbloom, is a dreamy fairy-tale setting in which people decorate their homes with beautiful lights and candles and everyone plays games with friends and family.

However, apart from the images shown on the world-selection screen, players face nothing but plain text once they enter the game.

The first step is to create a character, filling in details such as name, gender, race and birthplace. After the game starts, the player types words or sentences describing what the character does or says.

The AI then improvises a personalized adventure based on whatever the player enters. In terms of operation, anyone who can type can play the game, although for users who care about visuals, "AI Dungeon" may not hold much appeal.

However, for players who enjoy plots and character-driven adventure stories, the game offers a virtually unlimited number of stories, and you can even co-write novels with the AI based on your own imagination, a loop sketched below.
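To make that interaction concrete, here is a minimal sketch of how such a text-adventure loop can be structured: the game keeps the story so far as context, appends the player's action, and asks a text-generation model to continue. This is purely an illustration; generate_continuation is a hypothetical stand-in for whatever model call the game actually makes, and none of this is Latitude's code.

```python
# Minimal illustrative sketch of an AI-driven text adventure loop.
# NOTE: not AI Dungeon's actual code. `generate_continuation` stands in
# for a call to a text-generation model (e.g. a GPT-style completion API).

def generate_continuation(story_so_far: str) -> str:
    """Placeholder for a text-generation model call."""
    # A real implementation would send `story_so_far` to the model and
    # return its continuation; here we return a canned line instead.
    return "\nThe wind picks up, and something stirs in the trees ahead."

def play(opening: str) -> None:
    story = opening                      # the story so far doubles as the model's context
    print(opening)
    while True:
        action = input("> ")             # the player types what the character does or says
        if action.strip().lower() in {"quit", "exit"}:
            break
        story += f"\n> {action}"         # append the player's turn to the context
        continuation = generate_continuation(story)
        story += continuation            # the model's reply becomes part of the story
        print(continuation.strip())

if __name__ == "__main__":
    play("You wake up in the snow-lit village of Winterbloom...")
```

Because the entire accumulated story is fed back as context on every turn, each continuation can build on earlier events, which is what makes the open-ended "write a novel with the AI" experience possible.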

The AI also tells "dirty jokes": sexual content involving children appears in player stories

OpenAI open-sourced its text-generation technology in 2019, and after programmers used it to produce fluent jokes and code, the technology quickly caught the attention of the tech community.

Last summer, OpenAI gave Latitude access to a more powerful commercial version (GPT-3) and held up "AI Dungeon" as a showcase of the creative and commercial potential of the GPT-3 writing algorithm. OpenAI said the service would empower enterprises and startups, and it granted an exclusive license to the underlying algorithms to Microsoft, a major backer of OpenAI. However, testers found the technology could generate disturbing output, such as anti-Semitic remarks and extremist propaganda.

A new monitoring system revealed that some of the text players entered was prompting the game to produce sexual content involving children. OpenAI asked Latitude to take immediate action, saying that it carefully vets customers to weed out bad actors and requires most of them (Latitude had been an exception) to use filters it has built to block profanity, hate speech and sexual content. OpenAI CEO Sam Altman said in a statement, "Content moderation decisions are difficult in some cases, but not this one. This is not the future of AI that we want."

In December 2019, the month it launched using the earlier open-source version of the technology, the game attracted 100,000 players. Some quickly discovered that it could produce remarkably fluent sexual content, while others complained about sexual content appearing unprompted, for example when they tried to ride a dragon and the adventure took an unexpected turn.

Latitude co-founder Nick Walton acknowledged the problem in the official Reddit community. He said several players had shared examples that left him "highly uncomfortable," and added that the company was developing filtering technology. Since the game's initial release, players have also noticed, and posted warnings online, that it sometimes writes children into sexual scenes.

Last week, Latitude rolled out a new moderation system, and it triggered a backlash from users. Some complained that the system was oversensitive: even typing "8-year-old laptop" would trigger a warning message. Others said the company's plan to have staff manually review flagged content would needlessly pry into private creations that are explicit but strictly adults-only, a kind of writing that is popular among "AI Dungeon" players.

In short, Latitude tried to use a combination of humans and algorithms to police content generated by humans and algorithms, and the results were far from ideal. The game's rating on Google Play quickly fell to 2.8, and angry comments and cancellation notices soon flooded social media and the official "AI Dungeon" Reddit and Discord communities.

A player who goes by Mimi said the community "feels betrayed by Latitude for scanning and manually reading private content." Mimi estimates having written about 1 million words with the AI's help, including poetry, "Twilight" parodies and erotic adventure fiction.

Mimi and other upset players said they understand the company's need to moderate publicly shared content, but that its actual approach goes too far and violates players' privacy: "it showed me problems I had never even realized were there."

A Latitude spokesperson said the company is adjusting its filtering system and policies. Staff had already banned some players who used "AI Dungeon" to generate sexual content involving children, but after the recent warning from OpenAI the company is working on "necessary changes." In a blog post last week, Latitude told players that the game will "continue to support NSFW (not safe for work) content, including adult content, violence and profanity."

Learning without thinking: controlling AI-generated content becomes the biggest challenge

For technology companies, it is very difficult to stop AI systems from generating certain kinds of sexual or adult content while still allowing (adult) users to create others. OpenAI's technology can produce many kinds of text because its machine learning algorithms have digested hundreds of millions of words scraped from the internet, including material unsuitable for minors. The software can produce strikingly realistic imitations, but it cannot grasp social, legal or genre distinctions the way humans do. Combined with the unnerving creativity of those imitations, the results it produces can be excellent, or completely out of control.

In fact, Latitude added word filtering early on. The official "AI Dungeon" Reddit and Discord communities even have dedicated channels for discussing the adult content the game produces, and Latitude added an optional "safe mode" that filters out AI suggestions containing certain words. Like all filters, however, it is imperfect: some players noticed that the so-called safe settings actually made the text generator's erotic writing better, because it resorted to more analogies and euphemisms. The company also added a premium subscription to bring in more revenue.
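For reference, a word-list filter of the kind described above can be as simple as the sketch below: it flags a generated continuation whenever a blocked term appears and asks the model to regenerate. This is a generic illustration under assumed names (the word list and the retry step are placeholders), not Latitude's actual "safe mode," and it also shows why such filters are easy to sidestep: euphemisms and analogies never match the list.

```python
import re

# Illustrative word-list filter only; NOT Latitude's actual "safe mode".
# BLOCKED_TERMS holds placeholder entries; a real deployment would pair a
# much larger curated list with machine-learned classifiers.
BLOCKED_TERMS = {"forbidden_word_a", "forbidden_word_b"}

_pattern = re.compile(
    r"\b(" + "|".join(re.escape(term) for term in BLOCKED_TERMS) + r")\b",
    re.IGNORECASE,
)

def is_allowed(text: str) -> bool:
    """Return False if any blocked term appears in the text."""
    return _pattern.search(text) is None

def safe_generate(generate, prompt: str, max_retries: int = 3) -> str:
    """Ask `generate` for a continuation, retrying when a blocked term appears.

    Exact-match filtering is easy to evade: rephrasings and euphemisms never
    hit the list, which is precisely the weakness players noticed.
    """
    for _ in range(max_retries):
        candidate = generate(prompt)
        if is_allowed(candidate):
            return candidate
    return "[continuation withheld by safe mode]"
```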

When "AI Dungeon" switched to OpenAI's more powerful commercial writing algorithm in July 2020, the generated content became even more impressive. One veteran player said, "The leap in creativity and storytelling ability was just amazing." According to people familiar with the matter, the system also became more creative in its exploration of explicit themes. For a while last year, players noticed that Latitude was experimenting with a filter that automatically replaced the word "rape" with "respect," but the feature was later removed.

Some veteran players are among "AI Dungeon's" most enthusiastic fans. They use the game as an AI-assisted writing tool to explore adult themes and have set up dedicated writing groups. Unwanted suggestions from the algorithm can be deleted from a story to steer it in a different direction, and unless a player chooses to share an adventure publicly, no one else can see it.

Latitude has not disclosed what proportion of adventures involve adult content, but OpenAI's website states that "AI Dungeon" attracts more than 20,000 players a day.

Last week, one "AI Dungeon" player even found a security flaw in the game that made every story publicly accessible. He said he downloaded hundreds of thousands of adventures over four days in April, sampled 188,000 of them, and found that 31% contained words suggesting sexually explicit content; at the time, the vulnerability had not yet been patched.

The challenge Latitude now faces is regaining users' trust while meeting OpenAI's stricter requirements for controlling its text generator. An OpenAI spokesperson said the startup must use OpenAI's filtering technology.

How to safely deploy AI systems that have absorbed huge amounts of internet text has become a hot topic in artificial intelligence research. Language models are being used ever more widely, for example in Google Search, where they help interpret the meaning of longer queries.

However, Suchin Gururangan, a researcher at the University of Washington, said, "It's hard to know how these models will behave." He worked on a study and interactive online demo with researchers from the University of Washington and the Allen Institute for Artificial Intelligence showing that when text drawn from the internet was used to prompt five different language generation models, including OpenAI's, every one of them produced toxic text.

For the metaverse, billed as the "ultimate future of the internet," AI is an important means of producing content at scale and enriching people's experience. But judging from the problems exposed by "AI Dungeon," how to exert tighter control over AI language systems, including the sources they learn from, still needs much more exploration. Machine learning can already be used for game creation and monetization, but how to properly control the content AI generates is a question the industry cannot avoid.
