A group of computer scientists, once backed by Elon Musk, has caused some anxiety by developing an advanced artificial intelligence (AI) that, in their words, is too dangerous to release to the public.
OpenAI, a San Francisco-based nonprofit research organization, says its "chameleon-like" text prediction system, called GPT-2, will only see a limited release in a scaled-down version, citing concerns about malicious uses of the technology.
This is because the computer model, which generates original paragraphs of text based on what it has "read", is simply too good at its job.
The system produces "synthetic" text samples of unprecedented quality, which the researchers say are so advanced and convincing that the AI could be used to create fake news, impersonate people, and abuse or deceive people on social media.
"GPT-2 learns with a simple Goal: predict the next word, taking into account all the previous words in the text," explains the OpenAI command on his blog.
To fuel GPT-2's imagination, they fed their AI text generator a dataset of eight million web pages, and let it absorb them.
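That "predict the next word, given all the previous words" objective can be illustrated with a deliberately tiny sketch. GPT-2 itself is a large transformer network trained on those millions of pages; the toy corpus, bigram-counting approach, and function name below are invented here purely to show the idea of next-word prediction:

```python
from collections import Counter, defaultdict

# Invented toy corpus; GPT-2's real training data was 8 million web pages.
corpus = (
    "the model reads text and the model predicts the next word "
    "the model learns from text"
).split()

# Count how often each word follows each other word (a bigram model,
# a far simpler stand-in for GPT-2's transformer).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" follows "the" most often here
```

A real language model conditions on the entire preceding context rather than just one word, and outputs a probability distribution over its whole vocabulary, but the training signal is the same: make the observed next word likely.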
Once it's done, it can then converse on a topic as it understands it, and generate random but convincing text, whether the subject is celebrity news, climate change, the Civil War, or Tolkien-inspired fan fiction.
Lyud ke motivation "Miley Cyrus caught stealing in stores from Abercrombie and Fitch says on Hollywood Boulevard today"
GPT- 2 response "1
Human line: "Recycling is useful for the whole world. NO!
GPT-2 response: " Utilization is NOT good for the world. This is bad for the environment, it's bad for our health and it's bad for our economy. I'm not kidding. Utilization is not beneficial for the environment.
Human line: "For today's homework, please describe the causes of the US civil war."
GPT-2 response: "By Donny Ferguson. It is easy to identify why the Civil War happened, because so many people and so many books and so much television and film tell us that it was the cause, that it has something to do with race or economics or religion. There is much agreement that it was essentially a war of slavery on behalf of capitalism, about a century of slavery. But that's not what most people think of when they think of the Civil War."
And finally, there's that fan fiction we mentioned.
Human Order: "Legolas and Gimli were put on the orcs,
Reply GPT-2: " The orc's response was a deafening onslaught of claws, claws and claws, and even Elrond was forced to retreat. "You are in good hands, dwarf – said Gimli, who was one of the first to charge the orcs, it took only two words before their opponents were brought to the blood of the bang, and the dwarf took the first killing of the night. "
Note, however, that the language model isn't perfect, which the team freely admits.
" As can be seen from the examples given, our model is able to generate samples from various tips that are close to human quality and show consistency over the page. or the greater part of the text, "the researchers say.
" However, we observed different bounce modes, such as repetitive text, a failure in modeling the world (for example, the model sometimes writes about
Nevertheless, based on what they have seen of their AI's text creation so far, OpenAI says language models like this are easily capable of scalable, customized, coherent text generation that could be co-opted for malicious purposes.
"These findings, combined with the preliminary results of synthetic images, audio and video, mean that technology reduces the cost of creating fake content and conducting campaigns. from disinformation, "writes researchers.
]" Due to the concern that large language models are used to create a deceptive, biased or offensive language on a scale, we release only a much smaller version of the GPT – 2 times m with sample code. "
Although some have suggested that fears about GPT-2's capabilities are overstated, and that OpenAI's stance is really a publicity exercise, the nonprofit maintains that its caution is justified.
"The rules by which you can control the technology have changed fundamentally," director of policy firm Jack Clark said "Guardian ."
"We do not say that we know what to do right here, we do not put a string and say:" This way … "are trying to build a road as
The research is described in the report available on the OpenAI website.