
GPT-3: The Big Leap Forward in Language Generation Models

OpenAI's GPT-3 is here to give stiff competition to other language modeling systems with its wide range of capabilities.

Artificial intelligence has swept across industries with its wide range of use cases. From basic chatbots to eCommerce, AI has been leveraged in several domains. One such disruptive application is in developing language modeling systems.

A third-generation language prediction model, the Generative Pre-trained Transformer 3 or GPT-3, has taken the internet by storm. From poetry to fully-functional code, GPT-3 seems to have mastered it all. Let’s dive deep into the capabilities and features of this tool to understand what makes it the “most powerful language model” to date. 

GPT-3 And Its Features

The Generative Pre-trained Transformer 3 is a powerful language generation model that uses natural language processing and deep learning to generate meaningful text. It is the third-generation language model in the GPT-n series developed by California-based OpenAI.


GPT-3 was released for private beta testing on 11th June 2020. Since then, several developers and writers have shared a glimpse of its capabilities with the world. A San Francisco-based developer tweeted: “Playing with GPT-3 feels like seeing the future. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s shockingly good.”

GPT-3 has a whopping 175 billion parameters, compared to its predecessor GPT-2's 1.5 billion. These parameters are the values the model optimizes during training. To put that in perspective, GPT-2 was already generating convincing human-like text with its 1.5 billion parameters (which is still huge), so the potential of GPT-3, more than a hundred times larger, is hard to even imagine.
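The scale difference above can be made concrete with a back-of-the-envelope calculation. The parameter counts come from this article; the storage estimate assumes 2 bytes per parameter (half-precision floats), which is a common convention rather than an official OpenAI figure:

```python
# Rough size comparison of GPT-2 and GPT-3.
# Parameter counts are from the article; the fp16 (2 bytes/parameter)
# storage estimate is an assumption, not an official figure.
gpt2_params = 1.5e9   # 1.5 billion
gpt3_params = 175e9   # 175 billion

scale_factor = gpt3_params / gpt2_params          # ~117x
fp16_gigabytes = gpt3_params * 2 / 1e9            # 2 bytes per parameter

print(f"GPT-3 is roughly {scale_factor:.0f}x larger than GPT-2")
print(f"Its weights alone would occupy about {fp16_gigabytes:.0f} GB at fp16")
```

In other words, the model's weights alone would fill hundreds of gigabytes, far beyond what a single consumer GPU can hold.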

The GPT-3 interface has the ability to recognize a particular theme or structure and then stick to it. For example, if we start with a Q&A structure, GPT-3 continues to generate text in that format. We can also start writing a story in a particular language, and after about one line, the model generates an entire story in the same language. Mind you, these stories do not consist of randomly connected sentences; in most cases, they make perfect sense. Similarly, it sticks to other structures coherently.
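The pattern-following behavior described above is often called "few-shot" prompting: the user lays out a couple of examples of a structure, and the model continues it. The sketch below shows how such a Q&A prompt might be assembled; the layout and example questions are illustrative assumptions, not an official OpenAI format:

```python
# A minimal sketch of few-shot Q&A prompting: show the model a pattern,
# then leave the final answer blank for it to complete.
# The prompt layout here is illustrative, not an official format.
examples = [
    ("What is the capital of France?", "Paris."),
    ("Who wrote Hamlet?", "William Shakespeare."),
]

def build_prompt(example_pairs, new_question):
    """Lay out prior Q&A pairs, then end with an unanswered question."""
    lines = [f"Q: {q}\nA: {a}" for q, a in example_pairs]
    lines.append(f"Q: {new_question}\nA:")
    return "\n\n".join(lines)

prompt = build_prompt(examples, "What is the tallest mountain on Earth?")
print(prompt)
```

Because the prompt ends mid-pattern (a question with an empty answer slot), a model trained to continue text naturally fills in the answer in the same format.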

GPT-3’s chatbot settings can be adjusted along a range from helpful, creative, and friendly to brutal, stupid, and very unfriendly.

[Image: GPT-3 chatbot when friendly]

Writing with GPT-3

Another very interesting feature of the model is its ability to imitate the writing style of certain writers. Given the prompt ‘Here is a poem by Shakespeare’, the model generated an entire poem in line with Shakespeare's style. Although the results were mind-boggling, GPT-3 was found to reuse existing text that it had been trained on.

The model supports several languages, including German, Japanese, and Russian. 

Recently, MIT Technology Review reported that a college student managed to land the No. 1 spot on Hacker News with a blog post created by GPT-3. The student set out to demonstrate that content generated by the model is capable of tricking people into believing it was written by a human. He stated that the process was “super easy, actually, which was the scary part.” The post went viral within a couple of hours and garnered 26,000 views. He further stated that only one person reached out to him to ask whether the article was AI-generated, while a few others commented that GPT-3 could be the author.

All in all, the model has produced rather impressive results when it comes to writing style. However, it will need further fine-tuning to match intended meaning more closely and to produce consistently sensible content. Additionally, considerable adjustment will be required to filter out sexist and racist output.

Generating Code with GPT-3

The most fascinating bit is GPT-3’s capability to generate fully functional code from nothing more than a description.

The Q&A preset can also be used to get code snippets tailored to specific requirements. For example, a researcher asked GPT-3 to create ‘a CSS class to make an element a little see through’, to which GPT-3 answered ‘.transparent {opacity: 0.5}’.
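The exchange above can be reconstructed as a Q&A transcript. The framing below is an assumption about how such a session might be laid out; the CSS snippet itself is the one quoted in the article:

```python
# Illustrative reconstruction of the Q&A code-generation exchange.
# The Q:/A: framing is an assumed layout; the CSS answer is the
# snippet reported in the article.
question = "Create a CSS class to make an element a little see through."
answer = ".transparent {opacity: 0.5}"

transcript = f"Q: {question}\nA: {answer}"
print(transcript)
```

The returned rule is valid CSS: `opacity: 0.5` renders the element at half opacity, which matches the "a little see through" request.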

One researcher was also able to use GPT-3 to design an Instagram-like app in a very short time.

The capabilities of GPT-3 look very promising, though it is important to note that the model is still in the beta testing phase. The experiments have shown that the model has a lot of potential and can be very helpful to humans, as long as we use it with the right intentions. At the end of the day, it is still a tool created by humans, and like any other technology, it can be misused.


Muskan Bagrecha
Muskan is an undergraduate student pursuing a Bachelor's in Technology. With a zealous spirit for writing, she finds herself open to the vast realm of learning. She is an avid programmer with a keen interest in technology and science.

