The Turbulent Past and Uncertain Future of AI

A look back at the decades since that meeting shows how often AI researchers’ hopes have been shattered, and how little those setbacks have deterred them. Today, even as AI revolutionizes industries and threatens to upend the global labor market, many experts wonder whether today’s AI is reaching its limits. As Charles Choi delineates in “Seven Revealing Ways AIs Fail,” the weaknesses of today’s deep-learning systems are becoming more and more apparent. Yet there is little sense of doom among researchers. Yes, it’s possible that another AI winter awaits us in the not-so-distant future. But this may just be the time when inspired engineers finally usher us into an eternal summer of the machine mind.

Researchers pursuing symbolic AI set out to explicitly teach computers about the world. Their founding principle held that knowledge can be represented by a set of rules, and that computer programs can use logic to manipulate that knowledge. Leading symbolists Allen Newell and Herbert Simon argued that if a symbolic system had enough structured facts and premises, the aggregation would eventually produce broad intelligence.
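
To make the symbolists’ premise concrete, here is a minimal sketch, in Python, of knowledge encoded as explicit facts and rules with a program applying logic over them; the facts, the rules, and the forward_chain helper are all invented for illustration and are not drawn from any particular system.

```python
# Minimal sketch of the symbolists' premise: knowledge as explicit facts and
# rules, with a program applying logic (here, naive forward chaining).
# The facts and rules below are invented purely for illustration.
facts = {("greek", "socrates")}

# Each rule maps a premise predicate to a conclusion predicate.
rules = [
    (("greek", "x"), ("human", "x")),   # all Greeks are human (toy rule)
    (("human", "x"), ("mortal", "x")),  # all humans are mortal
]

def forward_chain(facts, rules):
    """Repeatedly apply the rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (premise, _), (conclusion, _) in rules:
            for predicate, subject in list(derived):
                new_fact = (conclusion, subject)
                if predicate == premise and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(forward_chain(facts, rules))
# Derives ("human", "socrates") and then ("mortal", "socrates") by rule application alone.
```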

The connectionists, on the other hand, inspired by biology, worked on “artificial neural networks” that would take in information and make sense of it themselves. The pioneering example was the Perceptron, an experimental machine built by Cornell psychologist Frank Rosenblatt with funding from the U.S. Navy. It had 400 light sensors that together acted as a retina, feeding information to about 1,000 “neurons” that did the processing and produced a single output. In 1958, a New York Times article quoted Rosenblatt as saying that “the machine would be the first device to think like the human brain.”
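
For readers who want to see the idea in code, the sketch below implements the classic perceptron learning rule in Python; the 400 inputs echo the Mark I’s photocell “retina,” but the random training data, the single output unit, and all parameter choices are illustrative assumptions, not a reconstruction of Rosenblatt’s machine.

```python
import numpy as np

# Minimal sketch of the perceptron learning rule that Rosenblatt's machine
# realized in hardware. The 400 inputs echo the Mark I's photocell "retina",
# but the random data and single output unit here are purely illustrative.
rng = np.random.default_rng(0)

n_inputs = 400
X = rng.integers(0, 2, size=(200, n_inputs))       # 200 binary "images"
true_w = rng.normal(size=n_inputs)                 # an arbitrary hidden concept
y = (X @ true_w > 0).astype(int)                   # labels the machine should learn

w, b = np.zeros(n_inputs), 0.0
for epoch in range(20):
    for xi, yi in zip(X, y):
        prediction = int(xi @ w + b > 0)           # single thresholded output
        if prediction != yi:                       # classic error-driven update
            w += (yi - prediction) * xi
            b += (yi - prediction)

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
print(f"training accuracy: {accuracy:.2f}")
```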

Frank Rosenblatt invented the perceptron, the first artificial neural network. Cornell University Division of Rare and Manuscript Collections

Unbridled optimism encouraged government agencies in the United States and the United Kingdom to pour money into speculative research. In 1967, MIT professor Marvin Minsky wrote: “Within a generation … the problem of creating ‘artificial intelligence’ will be substantially solved.” But soon afterward, government funding began to dry up, driven by a sense that AI research was not living up to its own hype. The 1970s brought the first AI winter.

True believers soldiered on, however. And by the early 1980s, renewed enthusiasm brought a heyday for researchers in symbolic AI, who won acclaim and funding for “expert systems” that encoded the knowledge of a particular discipline, such as law or medicine. Investors hoped these systems would quickly find commercial applications. The most famous symbolic AI venture began in 1984, when the researcher Douglas Lenat began work on a project he called Cyc, which aimed to encode common sense in a machine. To this day, Lenat and his team continue to add terms (facts and concepts) to Cyc’s ontology and to explain the relationships between them via rules. By 2017, the team had 1.5 million terms and 24.5 million rules. Yet Cyc is still nowhere near achieving general intelligence.

In the late 1980s, the cold winds of commerce brought on the second AI winter. The market for expert systems crashed because they required specialized hardware and could not compete with the cheaper desktop computers that were becoming commonplace. By the 1990s, it was no longer academically fashionable to work on either symbolic AI or neural networks, because both strategies seemed to have flopped.

But the cheap computers that supplanted expert systems proved to be a blessing to the connectionists, who suddenly had access to enough computing power to run neural networks with many layers of artificial neurons. Such systems became known as deep neural networks, and the approach they enabled was called deep learning. Geoffrey Hinton, of the University of Toronto, used a principle called back-propagation to get neural networks to learn from their mistakes (see “How Deep Learning Works”).
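
As a rough illustration of what back-propagation does, the toy Python example below trains a two-layer network on the XOR function by passing the output error backward through the layers; the architecture, learning rate, and iteration count are arbitrary choices made for the sketch.

```python
import numpy as np

# Toy illustration of back-propagation: a tiny two-layer network learns XOR by
# propagating its output error backward to adjust both layers of weights.
# The architecture, learning rate, and step count are arbitrary choices.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: the chain rule carries the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0] after training
```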

One of Hinton’s postdocs, Yann LeCun, moved on to AT&T Bell Laboratories in 1988, where he and a postdoc named Yoshua Bengio used neural networks for optical character recognition; U.S. banks soon adopted the technique for processing checks. Hinton, LeCun, and Bengio eventually won the 2019 Turing Award and are sometimes called the godfathers of deep learning.

But proponents of neural networks still had a big problem: They had a theoretical framework and growing computing power, but there wasn’t enough digital data in the world to train their systems, at least not for most applications. Spring had not yet arrived.

Over the last two decades, everything has changed. In particular, the World Wide Web flourished, and suddenly there was data everywhere. Digital cameras and then smartphones filled the Internet with images, websites such as Wikipedia and Reddit were full of freely available digital text, and YouTube had plenty of videos. Finally, there was enough data to train neural networks for a wide variety of applications.

The second major development came courtesy of the gaming industry. Companies such as Nvidia had developed chips called graphics processing units (GPUs) for the heavy processing required to render images in video games. Game developers used GPUs to do sophisticated kinds of shading and geometric transformations. Computer scientists who needed serious compute power realized that they could essentially trick a GPU into performing other tasks, such as training neural networks. Nvidia noticed the trend and created CUDA, a platform that enabled researchers to use GPUs for general-purpose processing. Among these researchers was a Ph.D. student in Hinton’s lab named Alex Krizhevsky, who used CUDA to write the code for a neural network that blew everyone away in 2012.
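
The following minimal sketch shows the general-purpose GPU idea in practice, assuming a Python environment with PyTorch (a framework built on top of CUDA) and, optionally, a CUDA-capable graphics card; the matrix sizes are arbitrary.

```python
import torch

# Sketch of the general-purpose GPU idea: the same matrix arithmetic that
# renders game graphics also dominates neural-network training, so frameworks
# built on CUDA simply place the tensors on the GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # one large matrix multiplication, the core workload of deep learning
print(c.device, c.shape)
```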

MIT professor Marvin Minsky predicted in 1967 that true artificial intelligence would be created within a generation. MIT Museum

He wrote it for the ImageNet competition, which challenged AI researchers to build computer-vision systems that could sort more than 1 million images into 1,000 categories of objects. While Krizhevsky’s AlexNet was not the first neural network used for image recognition, its performance in the 2012 competition caught the world’s attention. AlexNet’s error rate was 15 percent, compared with a 26 percent error rate for the second-best entry. The neural network owed its runaway victory to GPU power and a “deep” multilayered structure containing 650,000 neurons in all. In the next year’s ImageNet competition, almost every entrant used a neural network. By 2017, many of the contestants’ error rates had fallen to 5 percent, and the organizers ended the competition.

Deep learning took off. With the computing power of GPUs and plenty of digital data to train deep-learning systems, self-driving cars could navigate roads, voice assistants could recognize users’ speech, and web browsers could translate between dozens of languages. AIs also beat human champions at several games that were previously thought to be unwinnable by machines, including the ancient board game Go and the video game StarCraft II. The current boom in AI has touched every industry, offering new ways to recognize patterns and make complex decisions.

A look back over the decades shows how often AI researchers’ hopes have been shattered – and how little these setbacks have deterred them.

But the growing list of triumphs in deep learning has depended on increasing the number of layers in neural networks and increasing the GPU time dedicated to training them. An analysis by the AI research firm OpenAI showed that the amount of computing power required to train the largest AI systems doubled every two years until 2012, and after that it doubled every 3.4 months. As Neil C. Thompson and his colleagues write in “Deep Learning’s Diminishing Returns,” many researchers are concerned that AI’s computational needs are on an unsustainable trajectory. To avoid ruining the planet’s energy budget, researchers need to break out of the established ways of constructing these systems.
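
A quick back-of-the-envelope calculation, sketched in Python below, shows why the shift from a two-year to a 3.4-month doubling time alarms researchers: it turns roughly 1.4-fold annual growth in compute into roughly 11- to 12-fold annual growth.

```python
# Back-of-the-envelope growth implied by the OpenAI analysis cited above:
# compute doubling every 24 months (pre-2012) versus every 3.4 months (after).
def growth_per_year(doubling_months):
    return 2 ** (12 / doubling_months)

print(f"pre-2012 era:  ~{growth_per_year(24):.1f}x more compute per year")
print(f"post-2012 era: ~{growth_per_year(3.4):.1f}x more compute per year")
# Roughly 1.4x per year before 2012 versus roughly 11.6x per year afterward.
```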

Although it may seem as though the neural-net camp has definitively trounced the symbolists, the outcome of the struggle is in truth not so simple. Take, for example, the robotic hand from OpenAI that made headlines for manipulating and solving a Rubik’s cube. The robot used both neural networks and symbolic AI. It is one of many new neurosymbolic systems that use neural networks for perception and symbolic AI for reasoning, a hybrid approach that can offer gains in both efficiency and explainability.
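
The toy Python sketch below illustrates the neurosymbolic division of labor described here, with a stand-in “perception” function producing symbols and a small rule table reasoning over them; every function, symbol, and rule in it is a hypothetical placeholder, not part of OpenAI’s Rubik’s Cube system.

```python
# Toy sketch of the neurosymbolic division of labor: a stand-in "neural"
# perception module emits a symbol with a confidence, and a small rule base
# reasons over that symbol. All names and rules are invented placeholders.
def neural_perception(image):
    """Placeholder for a trained network; returns (symbol, confidence)."""
    return ("red_face_up", 0.93)

RULES = {
    "red_face_up": "rotate_top_clockwise",
    "blue_face_up": "rotate_bottom_counterclockwise",
}

def symbolic_reasoner(symbol, confidence, threshold=0.8):
    """Applies explicit, inspectable rules to the perceived symbol."""
    if confidence < threshold:
        return "ask_for_another_look"
    return RULES.get(symbol, "no_rule_matched")

symbol, confidence = neural_perception(image=None)
print(symbolic_reasoner(symbol, confidence))  # the chosen action is fully legible
```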

Although deep-learning systems tend to be black boxes that make inferences in opaque and mystifying ways, neurosymbolic systems let users look under the hood and understand how the AI reached its conclusions. The U.S. Army is particularly wary of relying on black-box systems, as Evan Ackerman describes in “How the U.S. Army Turns Robots Into Team Players,” so Army researchers are investigating a variety of hybrid approaches to drive their robots and autonomous vehicles.

Imagine if you could take one of the U.S. Army’s clearing robots and ask it to make you a cup of coffee. That’s a ridiculous proposition today, because deep-learning systems are built for narrow purposes and can’t generalize their abilities from one task to another. What’s more, learning a new task usually requires an AI to wipe out everything it knows about how it solved its previous task, a conundrum called catastrophic forgetting. At DeepMind, Google’s London-based AI lab, the renowned roboticist Raia Hadsell is tackling this problem with a variety of sophisticated techniques. In “How DeepMind Is Reinventing the Robot,” Tom Chivers explains why this issue is so important for robots acting in the unpredictable real world. Other researchers are exploring new types of meta-learning in the hope of creating AI systems that learn how to learn and then apply that skill to any domain or task.
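
To give a flavor of how researchers attack catastrophic forgetting, the sketch below outlines one published idea, elastic weight consolidation, in which training on a new task is penalized for moving weights that were important for the old one; this is a simplified illustration of the general technique (assuming PyTorch is available), not the specific methods described in the DeepMind article.

```python
import torch

# Simplified sketch of elastic weight consolidation (EWC), one published idea
# for reducing catastrophic forgetting: when training on task B, penalize
# changes to parameters that were important for task A. Illustrative only.
def ewc_penalty(model, old_params, importance, strength=1000.0):
    """Quadratic penalty weighted by each parameter's estimated importance."""
    loss = torch.tensor(0.0)
    for name, param in model.named_parameters():
        loss = loss + (importance[name] * (param - old_params[name]) ** 2).sum()
    return strength * loss

# Toy usage: a small model right after "task A" incurs zero penalty.
model = torch.nn.Linear(4, 2)
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
importance = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # toy importance weights
print(ewc_penalty(model, old_params, importance))
# During task-B training, the total loss would be task_b_loss + ewc_penalty(...).
```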

All of these strategies may help researchers in their attempt to achieve their loftiest goal: building AI with the kind of fluid intelligence that we watch our children develop. Toddlers don’t need a massive amount of data to draw conclusions. They simply observe the world, create a mental model of how it works, take action, and use the results of their action to adjust that mental model. They iterate until they understand. This process is tremendously efficient and effective, and it is well beyond the capabilities of even the most advanced AI today.
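
The observe-model-act-adjust loop described above can be written out abstractly, as in the Python sketch below; every name in it (the world object, build_model, choose_action, update_model) is a hypothetical placeholder rather than any existing system’s API.

```python
# Abstract sketch of the learning loop described above. All callables and the
# `world` object are hypothetical placeholders supplied by the caller.
def learn_by_interaction(world, build_model, choose_action, update_model, steps=100):
    model = build_model()                      # start with a rough mental model
    for _ in range(steps):
        observation = world.observe()          # watch the world
        action = choose_action(model, observation)
        outcome = world.act(action)            # try something
        model = update_model(model, observation, action, outcome)  # adjust beliefs
    return model
```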

Although the current level of enthusiasm has earned AI its own Gartner hype cycle, and although the funding of AI has reached an all-time high, there are few signs of a coming fizzle. Companies around the world are adopting AI systems because they see immediate improvements to their bottom lines, and they will never go back. It remains to be seen whether researchers will find ways to adapt deep learning to make it more flexible and robust, or devise new approaches that haven’t yet been dreamed of in the 65-year-old quest to make machines more like us.

This article appears in the October 2021 issue as “The Turbulent Past and Uncertain Future of AI.”
