Language Models Running Wild

Asian Scientist Magazine (2 Nov 2022) — When you think about the world of artificial intelligence, it may seem that we are far away from real machines that think and reason. In reality, however, AI is already all around us, with applications in almost every conceivable industry and sector. And while modern artificial intelligence systems are still some way from reaching the high level of intelligence displayed by humans, the rate of progress in recent years has been staggering, to say the least.

I did not write this introductory paragraph. Not even a word. I just went online and searched for sites that used AI-based language prediction models. On one such website, I put “Large Language Model and Its Future” as the headline, and voilà – I got the opening paragraph in seconds.

Large language models (LLMs) are AI tools trained on vast amounts of freely available text from sources such as digitized books, Wikipedia, newspapers and articles. The models can read, summarize and translate texts and predict the next words in a sentence, allowing them to generate sentences much as humans would speak or write. As tech giants like Google, Meta, Microsoft, Alibaba and Baidu race to develop their own language models, it’s hard to predict how they might affect consumers. Little, if any, effort has been made by governments, academic institutions and companies in Asia and other parts of the world to set and implement policies and ethical boundaries around the use of LLMs.

Intelligent guesswork

Researchers trace the origins of language models to the 1950s, when the English mathematician and philosopher Alan Turing proposed that a machine should be considered intelligent if a human could not tell whether another human or a computer was answering their questions. In recent years, technological advances gave rise to natural language processing, or NLP, which allowed computers to learn what makes language a language by identifying patterns in texts.

An LLM is a much more advanced and sophisticated application of NLP. For example, a popular artificial intelligence language model called GPT-3, the same program I used to write this article’s introduction, can consume up to 570 GB of textual information to make statistical correlations between hundreds of billions of words as well as generate sentences, paragraphs, and even articles based on language prediction. In fact, researchers have even used the language model to write a scientific research paper and submit it for publication in a peer-reviewed journal.

Nancy Chen, an AI researcher at Singapore’s Agency for Science, Technology and Research (A*STAR), told Asian Scientist Magazine that the basis of such language models is simple. “The model basically predicts the subsequent words [once it has] the first several,” she said. It works much the same way a human can guess the missing words in a conversation.
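
To see that guesswork in action, here is a minimal sketch of next-word prediction in Python. It uses the publicly downloadable GPT-2 model through the Hugging Face transformers library as an illustrative stand-in (an assumption for this example; GPT-3 itself is available only through a commercial API), but the principle Chen describes is the same: given the first several words, the model scores every candidate for the next one.

# A minimal sketch of next-word prediction: given the first several
# words, the model assigns a probability to every candidate next word.
# GPT-2 via Hugging Face transformers is an illustrative stand-in here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Large language models can read, summarize and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model outputs a score (logit) for every word in its vocabulary
    # at every position; we only need the scores after the last word.
    logits = model(**inputs).logits[0, -1]

# Convert the scores to probabilities and show the five likeliest next words.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.1%}")

Feeding the model’s top pick back onto the end of the prompt and repeating the step, word by word, is how such models string together whole sentences and paragraphs.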

Resource demanding

These LLMs can be immensely useful to both governments and private industries. For example, service-oriented companies can develop better chatbots to respond to unique customer queries, while governments can use the models to summarize public opinion or comments on a policy issue and adjust policies accordingly. LLMs can also be used to simplify technical research papers and reports for a general audience. However, developing an LLM is so resource-intensive that, for now, the race is mostly limited to large technology companies.

“Big companies are all doing it because they assume there’s a very large lucrative market out there,” Shobita Parthasarathy, a policy researcher at the University of Michigan’s Ford School of Public Policy, told Asian Scientist Magazine.

Researchers like Parthasarathy, who study these models and their potential applications, say the models need to be scrutinized, especially because LLMs are trained on historical data sets.

“History is often full of racism, sexism, colonialism and various forms of injustice. So technology can actually amplify and maybe even exacerbate these problems,” Parthasarathy said.

Parthasarathy and her team recently published a 134-page report pointing out how LLMs can have a huge socio-environmental impact. As LLMs become widespread, they will require huge data centers, potentially displacing marginalized communities. Those living near data centers will experience resource scarcity, higher utility prices and pollution from backup diesel generators, the report said. The operation of such data centers would require significant human and natural resources such as water, electricity and rare earth metals. This would ultimately exacerbate environmental injustice, particularly for low-income communities, the report concluded.

No rules

Because this is such a new and fast-growing phenomenon, there are no clear standards or well-defined rules and regulations governing what these language models should be allowed, or restricted, to do.

As of now, “they’re all privately run and privately tested, and companies have to decide what they think a good big language model is,” Parthasarathy said.

Additionally, like any technology, LLMs can be abused.

“But we shouldn’t stop their development,” Pascale Fung, a lead AI researcher at the Hong Kong University of Science and Technology, told Asian Scientist Magazine. “The most critical aspect is putting the principles of responsible AI into the technology [by assessing] any bias or toxicity in these models and [making] the necessary changes.”

Researchers studying LLMs believe there should be more comprehensive data protection and security laws. That could be achieved by making companies transparent about their input data sets and algorithms and forming a complaint system where people can register problems or potential problems, Parthasarathy said.

“We really need broader public scrutiny for the regulation of large language models because they are likely to have a huge societal impact.”

This article was first published in the print version of Asian Scientist Magazine, July 2022.

Copyright: Asian Scientist Magazine. Illustration: Shelly Liew/Asian Scientist Magazine
