Accelerating AI computing to the speed of light

Summary: A research team led by associate professor Mo Li at the University of Washington Department of Electrical & Computer Engineering (UW ECE), in collaboration with researchers at the University of Maryland, has come up with a system that could contribute toward speeding up AI while reducing associated energy and environmental costs.

Original author and publication date: Wayne Gillam – January 6, 2021

Futurizonte Editor’s Note: If AI can “think” at the speed of light, what will AI be able to accomplish? We will soon find out.

From the article:

Artificial intelligence (AI) and machine learning are already an integral part of our everyday lives online, although many people may not yet realize that fact. For example, search engines such as Google are facilitated by intelligent ranking algorithms, video streaming services such as Netflix use machine learning to personalize movie recommendations, and cloud computing data centers use AI and machine learning to facilitate a wide array of services. The demands for AI are many, varied and complex. As those demands continue to grow, so does the need to speed up AI performance and find ways to reduce its energy consumption. On a large scale, energy costs associated with AI and machine learning can be staggering. For example, cloud computing data centers currently use an estimated 200 terawatt-hours of electricity per year, more than some small countries consume, and that consumption is forecast to grow exponentially in coming years, with serious environmental consequences.

Now, a research team led by associate professor Mo Li at the University of Washington Department of Electrical & Computer Engineering (UW ECE), in collaboration with researchers at the University of Maryland, has come up with a system that could contribute toward speeding up AI while reducing associated energy and environmental costs. In a paper published January 4, 2021, in Nature Communications, the team described an optical computing core prototype that uses phase-change material (a substance similar to what CD-ROMs and DVDs use to record information). Their system is fast, energy efficient and capable of accelerating neural networks used in AI and machine learning. The technology is also scalable and directly applicable to cloud computing, which uses AI and machine learning to drive popular software applications people use every day, such as search engines, streaming video, and a multitude of apps for phones, desktop computers and other devices.

“The hardware we developed is optimized to run algorithms of an artificial neural network, which is really a backbone algorithm for AI and machine learning,” Li said. “This research advance will make AI centers and cloud computing more energy efficient and run much faster.”

The team is among the first in the world to use phase-change material in optical computing to enable image recognition by an artificial neural network. Recognizing an image in a photo is easy for humans, but it is computationally demanding for AI, which is why image recognition is considered a benchmark test of a neural network's computing speed and precision. The team demonstrated that their optical computing core, running an artificial neural network, could easily pass this test.

“Optical computing first appeared as a concept in the 1980s, but then it faded in the shadow of microelectronics,” said lead author Changming Wu, who is an electrical and computer engineering graduate student working in Li’s lab. “Now, because of the end of Moore’s law [the observation that the number of transistors in a dense, integrated circuit doubles about every two years], advances in integrated photonics, and the demands of AI computing, it has been revamped. That’s very exciting.”

READ the complete story here