AI language models have become very good at specific tasks

In the past few years, artificial intelligence language models have become very good at specific tasks. In particular, they are good at predicting the next word in a string of text; this technology helps text apps and search engines suggest the next word you are going to type. The most recent predictive language models also seem to learn something about the underlying meaning of language. These models do not just predict the word that comes next; they can also perform tasks that seem to require some degree of genuine understanding.
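To make the idea concrete, here is a minimal sketch of next-word prediction. It assumes the open-source Hugging Face transformers library and the publicly available GPT-2 checkpoint (a smaller relative of the models discussed here); it is an illustration, not the researchers' code.

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# "transformers" library and the public GPT-2 checkpoint (GPT-3 itself
# is not openly downloadable).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The quick brown fox jumps over the"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a score (logit) to every token in its vocabulary
# for the position after the prompt; the highest-scoring token is its
# guess for the next word.
logits = model(**inputs).logits
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))  # typically " lazy"
```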

Such models were designed to optimize performance for the specific function of predicting text, without any attempt to mimic how the human brain performs this task or understands language. Yet according to a new study from MIT neuroscientists, the underlying function of these models resembles the function of language-processing centres in the human brain.

Computer models that perform well on other types of language tasks do not show this similarity to the human brain, which offers evidence that the human brain may use next-word prediction to drive language processing. The more accurately a model predicts the next word, the more closely it fits the human brain. One of the authors of the new study said, “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.” The new high-performing next-word prediction models belong to a class of models called deep neural networks. These networks contain computational nodes that form connections of varying strength, arranged in layers that pass information between each other in prescribed ways.
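As a toy illustration of that structure, the sketch below builds a two-layer network in plain NumPy: the weight matrices are the “connections of varying strength”, and each layer passes its output to the next. Real language models follow the same pattern at vastly larger scale.

```python
# A toy two-layer neural network in plain NumPy: weight matrices hold
# the connection strengths between layers of nodes, and information
# flows forward layer by layer. (Illustration only; real language
# models have millions to billions of such connections.)
import numpy as np

rng = np.random.default_rng(0)

w1 = rng.normal(size=(4, 3))   # connections: 4 input nodes -> 3 hidden nodes
w2 = rng.normal(size=(3, 2))   # connections: 3 hidden nodes -> 2 output nodes

x = rng.normal(size=4)         # activity of the input nodes
hidden = np.tanh(x @ w1)       # each hidden node sums its weighted inputs
output = np.tanh(hidden @ w2)  # ...and passes the result to the next layer
print(output)
```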

The researchers compared 43 different language models, many of which were optimized for next-word prediction. These include GPT-3, which, when given a prompt, can generate text similar to what a human would produce. Other models were designed to perform different language tasks. As each model was given a string of words, the researchers measured the activity of the nodes that make up its network. They then compared these patterns to activity in the human brain (a sketch of this comparison step follows the list below), measured in subjects performing three language tasks:

  • Reading sentences revealed one word at a time.
  • Reading full sentences one at a time.
  • Listening to stories.
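The sketch below illustrates the general shape of such a comparison. It uses the Hugging Face transformers library with GPT-2 as a stand-in model, and random numbers as a stand-in for brain recordings; it is a hedged illustration of the approach, not the study's actual analysis pipeline.

```python
# A hedged sketch of the comparison step: record a model's internal node
# activity for a string of words, then correlate it with a brain signal.
# GPT-2 stands in for the models in the study, and the "brain" data here
# is random placeholder values, not real measurements.
import numpy as np
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

sentence = "The mouse ran into the room where the cat was sleeping"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # output_hidden_states=True exposes one activation vector per token
    # per layer -- the "activity of the nodes" for this input.
    hidden_states = model(**inputs, output_hidden_states=True).hidden_states

layer_activity = hidden_states[6][0].numpy()  # one middle layer: (tokens, 768)
model_summary = layer_activity.mean(axis=1)   # crude per-token summary

# Placeholder for a per-token brain recording (e.g. an fMRI or ECoG signal)
brain_signal = np.random.default_rng(1).normal(size=model_summary.shape[0])

r = np.corrcoef(model_summary, brain_signal)[0, 1]
print(f"model-brain correlation (placeholder data): {r:.3f}")
```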

The researchers found that the activity patterns of the best-performing next-word prediction models closely resemble those seen in the human brain. Activity in those same models was also highly correlated with measures of human behaviour, such as reading times: the models that predicted neural responses well also tended to best predict these behavioural responses. The major takeaway from the research is that language processing is a highly constrained problem, and the best solutions to it are similar to the solutions found by the evolutionary process that created the human brain.
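One common way to relate a model's predictions to reading times is through surprisal: words the model assigns low probability tend to be read more slowly. The numbers below are invented purely for illustration, not taken from the study.

```python
# An invented illustration of the behavioural link: words a model finds
# surprising (low predicted probability, i.e. high surprisal) tend to be
# read more slowly. These numbers are made up, not taken from the study.
import numpy as np

surprisal = np.array([2.1, 7.8, 3.0, 9.5, 1.2, 6.4])   # -log p per word
reading_ms = np.array([240, 390, 260, 430, 215, 350])  # reading time per word

r = np.corrcoef(surprisal, reading_ms)[0, 1]
print(f"surprisal vs. reading time: r = {r:.2f}")
```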

One of the essential features of the best predictive models is a forward one-way (unidirectional) predictive transformer. This kind of transformer makes predictions about what is going to come next based on previous sequences, and it can draw on a very long prior context, not just the last few words.
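The “one-way” property can be illustrated with a causal attention mask, which is how unidirectional transformers are typically implemented: each position may attend only to earlier positions. A minimal PyTorch sketch:

```python
# A minimal sketch of the "forward one-way" idea: a causal attention mask
# lets each position see only earlier positions, so the transformer can
# use arbitrarily long prior context but never peeks ahead.
import torch

seq_len = 5
causal_mask = torch.tril(torch.ones(seq_len, seq_len))  # 1 = visible, 0 = blocked
print(causal_mask)
# tensor([[1., 0., 0., 0., 0.],
#         [1., 1., 0., 0., 0.],
#         [1., 1., 1., 0., 0.],
#         [1., 1., 1., 1., 0.],
#         [1., 1., 1., 1., 1.]])
```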


About the author

Joseph Wood

A news media professional with strong experience in online journalism, content management, and social media. Joseph’s strengths include a sound knowledge of online media, detecting potentially trend-worthy subjects, discovering news, and proficiency in packaging content for web and mobile. [email protected]
