OpenAI, a non-profit artificial intelligence research group in San Francisco, has unveiled a machine learning algorithm that can generate coherent text, including fake news articles, after being given just a small sample to build on. The algorithm can be tuned to imitate the writing style of the sample text.
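The article does not describe OpenAI's model internals, but the core idea of continuing text from a short human-written sample can be illustrated with a much simpler stand-in: a word-level bigram language model in pure Python. This is a minimal sketch for intuition only, not OpenAI's neural approach; all names and the toy training text below are invented for the example.

```python
import random

def train_bigram(text):
    """Count which word follows which in the training text."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, seed_word, length, rng):
    """Continue from seed_word by repeatedly sampling a plausible next word."""
    out = [seed_word]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: the last word never appeared mid-text
        out.append(rng.choice(choices))
    return " ".join(out)

# Toy training sample (hypothetical); a real system trains on vast corpora.
sample = ("the train was stolen today and the train line runs "
          "from the station to the city and the city is safe")
model = train_bigram(sample)
print(generate(model, "the", 8, random.Random(0)))
```

A system like the one described works on the same principle, scaled up enormously: instead of bigram counts it uses a large neural network to predict each next token, which is why its continuations stay coherent over whole paragraphs rather than a few words.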
Here’s an example OpenAI provided of the system at work. The system was given only the first two sentences, which were written by a human: “A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.”
And this is the rest of the news article that the AI wrote, with no human guidance:
The incident occurred on the downtown train line, which runs from Covington and Ashland stations.
In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.
“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”
The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.
The Nuclear Regulatory Commission did not immediately release any information.
According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.
“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.
Because of its potential to be misused to create disinformation, OpenAI said it had chosen not to make the most powerful versions of its algorithm available for public use, nor was it publishing the source code and training data used to create the software.