Summary: A new computer system that can generate entire songs, stories, and essays on its own has taken many people by surprise. The software, known as GPT-3, was developed by a team of researchers and engineers who work at OpenAI in San Francisco. However, it’s proving controversial as it sometimes produces offensive content.
Original author and publication date: Sam Shead – July 23, 2020
Futurizonte Editor’s Note: GPT-3 (Generative Pre-training) can generate content and narratives. Yet, it is not human. That, in itself, is a philosophical problem in need of close attention: are we humanizing AI, or are we dehumanizing ourselves? As the article says, it is “too dangerous.”
From the article:
Social media is awash with people talking about a new piece of software called GPT-3, which has been developed by OpenAI, an Elon Musk-backed artificial intelligence lab in San Francisco.
GPT-3 (Generative Pre-training) is a language-generation tool capable of producing human-like text on demand.
The software learned how to produce text by analyzing vast quantities of text from the internet and observing which letters and words tend to follow one another.
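The idea of learning which words tend to follow one another can be illustrated with a toy sketch. This is only a bigram word-count model, not OpenAI's actual method (GPT-3 is a large transformer neural network), but it captures the same statistical intuition: count what follows what, then extend a prompt with likely continuations.

```python
from collections import defaultdict, Counter

def train_bigram(text):
    """Count how often each word follows each other word in the corpus."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def complete(follows, prompt, n_words=5):
    """Greedily append the most frequent follower of the last word."""
    out = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # last word never seen; stop generating
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# Tiny illustrative corpus; a real system trains on vast web-scale text.
model = train_bigram("the cat sat on the mat")
print(complete(model, "the cat", n_words=3))
```

A prompt like `"the cat"` gets extended one word at a time using the observed statistics, which is the same prompt-then-continue behavior the testers below describe, just at a vastly smaller scale.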
Last week, OpenAI began releasing it to a select few people who had requested access to a private early version, and many of them have been blown away.
“It’s far more coherent than any AI language system I’ve ever tried,” wrote entrepreneur Arram Sabeti in a blog post after testing it.
“All you have to do is write a prompt and it’ll add text it thinks would plausibly follow. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s hilarious and frightening. I feel like I’ve seen the future.”
The company wants developers to play with GPT-3 and see what they can achieve with it, before rolling out a commercial version later this year. It’s unclear how much it will cost or how the system might be able to benefit businesses, but it could potentially be used to improve chatbots, design websites, and prescribe medicine.
OpenAI first described GPT-3 in a research paper published in May. It follows GPT-2, which OpenAI chose not to release widely last year because it thought it was too dangerous. It was concerned that people would use it in malicious ways, including generating fake news and spam in vast quantities.
GPT-3 is more than 100 times larger than GPT-2. It is said to be far more capable than its predecessor largely because of the number of parameters it contains: 175 billion for GPT-3 versus 1.5 billion for GPT-2.
While it may be brilliant in many ways in its current form, it certainly has its flaws: developers have noticed that GPT-3 is prone to spouting racist and sexist language, even when the prompt is something harmless. It also occasionally churns out complete nonsense that is hard to imagine any person saying.