
Google Is Teaching AI to Explain Your Jokes to You

Often, “artificial intelligence” is a dubious buzzword for selling all kinds of technological snake oil. But one area where the idea of a sentient machine at least feels startlingly real is natural language processing, or NLP: machine learning systems that learn to analyze and respond to human language.

Large language models (LLMs) such as GPT-3 have allowed chat programs to produce uncannily accurate imitations of human communication, so much so that it is often difficult to tell what was generated by a machine and what a human actually typed. Now, in a recently published paper, a Google research team claims to have trained a language model called PaLM that is capable not only of generating realistic text, but also of interpreting and explaining jokes told by humans.

In the examples accompanying the research paper, the Google AI team demonstrates the model’s ability to perform logical reasoning and other complex, highly context-dependent language tasks. For example, using a technique called chain-of-thought prompting, the model walks through multi-step logic problems in a way that mimics a human’s step-by-step reasoning, which dramatically improves its performance.
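
To give a rough sense of what chain-of-thought prompting involves, here is a minimal sketch; the questions and worked reasoning below are invented for illustration and are not taken from Google’s paper. The idea is simply to show the model a fully reasoned example before the question it is actually asked to solve:

# Hedged sketch of a chain-of-thought prompt. All question and reasoning text
# here is hypothetical, written only to illustrate the technique.

worked_example = (
    "Q: There are 3 cars in the parking lot and 2 more arrive. How many cars are in the lot?\n"
    "A: There were 3 cars to start. 2 more arrive. 3 + 2 = 5. The answer is 5.\n\n"
)

new_question = (
    "Q: Roger has 5 tennis balls and buys 2 cans of 3 balls each. How many does he have now?\n"
    "A:"
)

# The model is expected to continue the final "A:" with its own step-by-step
# reasoning before stating an answer, rather than guessing a number outright.
prompt = worked_example + new_question
print(prompt)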

Examples of jokes that the Google language model was able to explain.

But perhaps the most surprising examples show how the model can recognize and interpret jokes — even those specifically designed to throw off the listener.

Input: What’s the difference between a zebra and an umbrella? One is a striped animal related to horses, the other is a device you use to stop the rain from falling on you.

Model output: This joke is an anti-joke. The joke is that the answer is obvious, and the joke is that you were expecting a funny answer.

Behind PaLM’s ability to parse these prompts is one of the largest language models ever built, with 540 billion parameters. Parameters are the elements of the model that are adjusted during training each time the system is fed example data. (For comparison, OpenAI’s GPT-3 has 175 billion parameters.)

That growing parameter count has enabled researchers to produce a wide range of high-quality results without having to spend time training the model on each individual scenario. In other words, the performance of a language model often scales with the number of parameters it contains, with the largest models capable of what is known as “few-shot learning”: the ability of a system to learn a variety of complex tasks from relatively few training examples.
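
To make the “few-shot” idea concrete, here is a minimal sketch, with invented reviews and labels rather than anything from Google or the paper. The “training” amounts to a handful of labeled examples placed directly in the prompt, with no update to the model’s parameters:

# Hedged sketch of few-shot prompting for sentiment labeling. All text here is
# invented for illustration; no model weights are changed by this process.

examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
]

query = "The plot dragged, but the acting saved it."

# Build a prompt from the labeled examples, then ask the model to label the new review.
prompt = "".join(f"Review: {text}\nSentiment: {label}\n\n" for text, label in examples)
prompt += f"Review: {query}\nSentiment:"
print(prompt)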

Many researchers and tech ethicists have criticized Google and other companies for their use of large language models, including Dr. Timnit Gebru, who is perhaps best known for being fired from Google’s AI ethics team in 2020 after co-authoring a research paper on the topic. In that paper, Gebru and her co-authors describe these large models as “inherently risky” and harmful to marginalized people, who are often not represented in the design process. Despite being considered state-of-the-art, GPT-3 in particular has a history of returning bigoted and racist responses, from occasionally adopting racist slurs to associating Muslims with violence.

“Most language technologies are actually built first and foremost to serve the needs of those who are already most privileged in society,” Gebru’s paper says. “While documentation allows for potential accountability, similar to how we can hold authors accountable for their produced text, undocumented training data perpetuates harm without recourse. If the training data is considered too large to be documented, one cannot try to understand its characteristics in order to mitigate some of these documented issues, or even unknown ones.”
