OpenAI is aware of concerns around fake news, says Jack Clark, the group's policy director.
OpenAI, an artificial intelligence research group co-founded by billionaire Elon Musk, has demonstrated software that can produce authentic-looking fake news articles after being given just a few pieces of information.
In an example published yesterday by OpenAI, the system was fed sample text stating that a train car containing controlled nuclear material had been stolen in Cincinnati and that its whereabouts were unknown. From this, the software generated a convincing seven-paragraph news story, including quotes from government officials, with the only caveat being that it was entirely untrue.
"The texts it can create from prompts are quite stunning," said Sam Bowman, a computer scientist at New York University who specializes in natural-language processing and was not involved in the OpenAI project, but was familiar with it. "It is capable of doing things that are qualitatively far more sophisticated than anything we've seen before."
OpenAI is aware of the concerns about fake news, said Jack Clark, the organization's policy director. "One of the not-so-good uses would be disinformation, because it can produce things that sound coherent but are not accurate," he said.
As a precaution, OpenAI has decided not to publish or release the most sophisticated versions of its software. However, it has created a tool that lets policymakers, journalists, writers and artists experiment with the algorithm to see what kind of text it can generate and what other tasks it can perform.
The potential for software to almost instantly create fake news articles comes amid global concern over technology's role in spreading misinformation. European regulators have threatened to take action if tech firms do not do more to prevent their products from being used to sway voters, and Facebook has been working to contain misinformation on its network since the 2016 US election.
Clark and Bowman both said that, at present, the system's capabilities are not consistent enough to pose a direct threat. "Today it's not a technology that's ready to deploy, and that's a good thing," Clark said.
Detailed in a paper and blog post published Thursday, OpenAI's creation is trained for a task known as language modeling, which involves predicting the next word of a piece of text based on knowledge of all the words that came before it, similar to how autocomplete suggests words when you type an email on a mobile phone. The system can also be used for translation and for answering open-ended questions.
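The idea of predicting the next word from the words before it can be illustrated with a minimal sketch. This is not OpenAI's method (which uses a large neural network) but a toy bigram model that simply counts which word tends to follow which in a training text; the corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count how often each word follows each other word in the corpus."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

Real language models replace these raw counts with learned probability distributions over entire vocabularies, conditioned on long stretches of preceding text rather than a single word, but the prediction task itself is the same.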
One possible use is to help creative writers generate ideas or dialogue, said Jeff Wu, a researcher at OpenAI who worked on the project. Other applications include checking for grammatical errors in text or hunting for bugs in software code.
Over the past year, researchers have made a series of rapid leaps in language processing. In November, Alphabet Inc.'s Google unveiled a similarly multi-talented algorithm called BERT that can understand and answer questions. Earlier, the Allen Institute for Artificial Intelligence, a Seattle research laboratory, achieved significant results in natural-language processing with an algorithm called ELMo. Bowman said BERT and ELMo have been "the most influential developments" in the field over the past year. By contrast, he said the new OpenAI algorithm was "significant" but not as revolutionary as BERT.
Although Musk was a co-founder of OpenAI, he stepped back from the organization last year. He helped launch the non-profit research group in 2016 along with Sam Altman and Jessica Livingston, the Silicon Valley entrepreneurs behind the startup incubator Y Combinator. Other early OpenAI backers include Peter Thiel and Reid Hoffman.
(This story was not edited by the staff of NDTV and is automatically generated from the syndicated channel.)