In the last few weeks, Twitter has been abuzz with details of GPT-3, the latest iteration of a language model touted to have moved the needle on AGI (Artificial General Intelligence): the ability of a machine to learn and perform any intellectual task that a human can.
Here are 8 facts about GPT-3 that will decode it for you!
What is GPT-3?
GPT-3 (Generative Pre-trained Transformer 3) is a third-generation language model that learned about the world by 'reading' 45 terabytes of text data. Wikipedia, books, movie scripts – it has read them all.
Who is the creator?
OpenAI – an artificial intelligence research lab (founded by Elon Musk, Sam Altman, and others).
Why the Buzz?
Compared to its closest competitor, Microsoft's Turing-NLG with 17 billion parameters, GPT-3 has roughly 10x as many: 175 billion. This makes GPT-3 the largest neural-network-powered language model to date, and it can perform many tasks with little or no task-specific training.
What can GPT-3 do?
The list of potential uses is endless. GPT-3 can be a programmer (get it to write SQL queries), an author (give it a brief storyline and get it to write a book), a poet (get it to write poems in Keats' style), or even a website designer.
What are some practical applications?
For machine learning professionals, the key benefit is the reduced training effort. You no longer have to gather thousands of labeled examples to teach tasks such as translation, or train chatbots on specific user journeys; a handful of examples supplied directly in the prompt is often enough.
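This prompt-based approach is known as few-shot learning: the "training examples" live inside the prompt itself. A minimal sketch of how such a prompt might be assembled (the task, example pairs, and formatting here are our own illustration, not an official OpenAI format):

```python
def build_few_shot_prompt(examples, query):
    """Turn (input, output) example pairs plus a new input into one prompt.

    The model is expected to continue the pattern and complete the
    final 'French:' line with its own translation.
    """
    lines = ["Translate English to French:", ""]
    for english, french in examples:
        lines.append(f"English: {english}")
        lines.append(f"French: {french}")
        lines.append("")  # blank line separates examples
    lines.append(f"English: {query}")
    lines.append("French:")
    return "\n".join(lines)

# Two examples stand in for what would previously have been a
# fine-tuning dataset of thousands of sentence pairs.
examples = [
    ("Good morning", "Bonjour"),
    ("Thank you very much", "Merci beaucoup"),
]
prompt = build_few_shot_prompt(examples, "See you tomorrow")
print(prompt)
```

The whole prompt, examples included, is sent to the model in a single request; no weights are updated.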
What can it mean for us in the future?
GPT-3 will help lower the barrier to creating future-oriented AI-powered applications and products. At eClerx, we see it as an opportunity to upgrade the NLP models we have developed for unassisted support channels, sentiment analysis, and chatbots.
How can you get on the GPT-3 Bandwagon?
Join the waiting list to get access to the GPT-3 API while it is in its beta phase. OpenAI does plan to turn GPT-3 into a commercial product later this year.
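Once you have beta access, the API is a plain HTTPS endpoint. A hedged sketch of building a completion request (the endpoint URL, engine name, and parameters below reflect the beta Completions API as publicly documented and may change; the prompt is illustrative):

```python
import json
import os

# Beta Completions endpoint for the 175B "davinci" engine (assumption:
# this reflects the API shape at the time of writing).
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def make_request(prompt, max_tokens=64, temperature=0.7):
    """Build the headers and JSON body for a completion request."""
    headers = {
        # The key comes from the beta waitlist; read it from the environment.
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return headers, json.dumps(body)

headers, body = make_request(
    "Write a SQL query that counts the rows in a table named orders:"
)
# To actually send it (requires a beta API key):
#   import requests
#   resp = requests.post(API_URL, headers=headers, data=body)
#   print(resp.json()["choices"][0]["text"])
```

Keeping request construction separate from sending makes the sketch runnable without a key and easy to test.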
Are there any downsides of GPT-3?
Yes. Because the 45 TB of training data includes controversial content, the model can reproduce it, which opens the door to negative use cases. OpenAI has acknowledged this risk and the possibility of developing a 'toxicity' filter.