
The secret to achieving energy efficiency lies in the silicon neurons’ ability to learn to communicate and form networks, as shown by new research from the lab of Shantanu Chakrabartty, the Clifford W. Murphy Professor at the McKelvey School of Engineering at Washington University in St. Louis.

For several years, his research group has studied dynamical systems approaches to address the neuron-to-network performance gap and to provide a blueprint for AI systems as energy efficient as biological ones.

It’s as if the neurons were all embedded in a rubber sheet formed by energy constraints; a single ripple, caused by a spike, would create a wave that affects them all.
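To make the analogy concrete, here is a minimal toy sketch (an illustration for this article, not the authors’ model): a handful of simulated neurons coupled only through a single shared constraint variable, the “rubber sheet.” Every spike perturbs that shared variable, and the perturbation decays while feeding back into every neuron’s dynamics. All names and parameter values are made up.

```python
import numpy as np

# Toy sketch of global coupling through a shared energy constraint.
# This illustrates the rubber-sheet analogy only; it is NOT the model
# from the paper. All parameter values are arbitrary.
rng = np.random.default_rng(0)
N, steps, dt = 8, 500, 0.1
v = rng.uniform(0.0, 0.5, N)      # membrane potentials
sheet = 0.0                       # shared "rubber sheet" variable
threshold, leak, coupling = 1.0, 0.05, 0.2
spike_count = 0

for _ in range(steps):
    drive = rng.uniform(0.05, 0.15, N)               # external input
    v += dt * (-leak * v + drive - coupling * sheet)
    spiked = v >= threshold
    v[spiked] = 0.0                                  # reset after spiking
    sheet += spiked.sum()                            # each spike is a ripple...
    sheet *= 0.9                                     # ...that decays over time
    spike_count += int(spiked.sum())

print(f"{spike_count} spikes; final sheet value {sheet:.3f}")
```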

In the latest research, Chakrabartty and doctoral student Ahana Gangopadhyay showed how the neurons learn to pick the most energy-efficient perturbations and wave patterns in the rubber sheet.

They showed that when the learning is guided by sparsity (using less energy), it is as if each neuron adjusts the electrical stiffness of the rubber sheet so that the entire network vibrates in the most energy-efficient way.
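In the abstract, sparsity-guided optimization means minimizing a task error plus a penalty on activity, which drives most activations to exactly zero. The sketch below uses ISTA, a textbook algorithm for exactly this kind of objective; it stands in for the general idea only and is not the learning rule reported in the paper.

```python
import numpy as np

# Hedged illustration of sparsity-guided optimization: minimizing
# 0.5*||x - D a||^2 + lam*||a||_1 drives most activations a_i to zero,
# loosely analogous to stiffening the sheet until only the most
# energy-efficient activity pattern survives. Standard ISTA, not the
# paper's learning rule.
def ista(D, x, lam=0.1, steps=200):
    """Iterative shrinkage-thresholding for a sparse-coding objective."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(20, 50))              # overcomplete dictionary (made up)
x = 2.0 * D[:, 3] - 1.5 * D[:, 17]         # signal built from two atoms
a = ista(D, x)
print("nonzero activations:", np.flatnonzero(np.abs(a) > 1e-6))
```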

The key to building an efficient system that can learn new things is the use of energy and structural constraints as a medium for computing and communication, or, as Chakrabartty said, “optimization using sparsity.”

In essence, a silicon neuron can attempt all communication routes at once, finding the most efficient way to connect in order to complete the assigned task.
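Put abstractly, trying every route in parallel and keeping whichever connects at the lowest energy cost amounts to a shortest-path search over energy weights. The toy below (an analogy with made-up costs between hypothetical neurons A through D, not the paper’s mechanism) shows the cheapest route winning.

```python
import heapq

# Toy analogy: exploring every route and keeping the one with the
# lowest total energy cost is, in the abstract, Dijkstra's algorithm.
def cheapest_route(graph, src, dst):
    """Return (energy cost, path) of the cheapest route from src to dst."""
    frontier = [(0.0, src, [src])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, hop in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(frontier, (cost + hop, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical per-connection energy costs between neurons A..D.
graph = {"A": [("B", 1.0), ("C", 4.0)],
         "B": [("C", 1.5), ("D", 5.0)],
         "C": [("D", 1.0)]}
print(cheapest_route(graph, "A", "D"))  # -> (3.5, ['A', 'B', 'C', 'D'])
```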
