Tech Tent: Have we seen our AI future?


A robot hand typing on a keyboard

It could be evidence that artificial intelligence has made a huge leap forward and that human writers and software developers will soon be redundant.

Or maybe it is just the latest example of hype getting ahead of reality? On this week’s Tech Tent we find out what all the fuss is about something called GPT-3.

OpenAI is a California company that started in 2015 with a high-minded mission: to ensure that artificial general intelligence systems that could outpace humans in most jobs would benefit all of humanity.

It was founded as a nonprofit organization with generous donations from Elon Musk, among others, but quickly transformed into a for-profit business, with Microsoft investing $1bn.

  • Listen to the latest Tech Tent podcast on BBC Sounds

  • Listen live every Friday at 3pm GMT on the BBC World Service

Now it has released GPT-3, a product that has had social media, or at least the part of it obsessed with new technology, buzzing with excitement in recent days.

It is an AI, or to be more precise, a machine-learning tool that appears to have astonishing capabilities. It is essentially a text generator, but users are finding it can do everything from writing an essay about Twitter in the style of Jerome K Jerome to answering medical questions or even coding software programs.

Until now, it’s only been available to a few people who have applied to join the private beta, including Michael Tefula. He works for a London-based venture capital fund, but he describes himself as a tech enthusiast rather than a developer or computer scientist.

He explains that what makes GPT-3 so powerful is the large volume of data it has ingested from the web compared to a previous version of the program. “This thing is a beast in terms of how much better it compares to GPT-2.”

How would the system cope with making sense of legal documents?

So what can you do with it?

“It really is about how creative you are with the tool. You can basically give it a prompt of what you would like it to do, and it will generate results based on that prompt.”

Michael decided to see how well it would work by taking complex legal documents and translating them into something understandable.

“I gave it two or three paragraphs that came from a legal document.

“And I also gave it two or three examples of what a simplified version of those paragraphs would look like.”

Having been primed with those examples, GPT-3 was able to provide simplified versions of other documents.
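The approach described here, pairing a few original passages with their simplified rewrites and then appending a new passage for the model to complete, is what is commonly called few-shot prompting. The sketch below is a minimal illustration of how such a prompt might be assembled, not Tefula's actual prompt: the `Legal text:`/`Plain English:` labels and the function name are assumptions for the example.

```python
def build_few_shot_prompt(examples, new_passage):
    """Assemble a few-shot prompt for text simplification.

    examples: list of (original, simplified) pairs shown to the model.
    new_passage: the passage we want the model to simplify; the prompt
    ends with an unanswered "Plain English:" for the model to complete.
    """
    parts = []
    for original, simplified in examples:
        parts.append(f"Legal text: {original}\nPlain English: {simplified}")
    # Leave the final answer blank so the model continues the pattern.
    parts.append(f"Legal text: {new_passage}\nPlain English:")
    return "\n\n".join(parts)
```

The assembled string would then be sent as the prompt to a text-completion service such as OpenAI's API, which continues the pattern by generating the missing simplified version.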

He went on to see whether it could learn his writing style and generate emails that sounded like him, and again the results were impressive.

Which brings us to one of the problems with this technology. Last year, OpenAI, apparently recalling its mission to protect humanity, said it would not release a full version of GPT-2 because that would pose security concerns.

In an era of fakery, an algorithm that could generate articles appearing to have been written by a prominent politician could prove dangerous.

Why then, some critics asked, was the more powerful GPT-3 different? Among them was Facebook’s head of artificial intelligence, Jerome Pesenti, who tweeted: “I don’t understand how we went from gpt2 being too big a threat to humanity to be openly released to gpt3 being ready to tweet, support customers or run shell commands.”

He also raised the issue of the algorithm generating toxic language that reflects biases in the data it has been fed, noting that when given words such as “Jew” or “woman” it generated anti-Semitic or misogynistic tweets.

OpenAI co-founder Sam Altman seemed eager to allay those fears, tweeting: “We share your concern about bias and safety in language models, and it’s a big part of why we’re starting with a beta and have a safety review before apps can go live.”

AI still has a long way to go to achieve human intelligence

But the other question is whether, far from being a threat to humanity, GPT-3 is as smart as it seems. The computer scientist who heads the artificial intelligence research at Oxford University, Michael Wooldridge, is skeptical.

He told me that while the technical achievement was impressive, it was clear GPT-3 did not understand what it was doing, so talk of it rivalling human intelligence was fanciful: “It is an interesting technical advance, and it will be used to do some very interesting things, but it does not represent an advance towards general AI. Human intelligence is much, much more than a pile of data and a big neural network.”

That may be so, but I am still eager to try it out. Watch out over the coming weeks for evidence of blogs or radio scripts written by a robot.