Corlett Novis talks with UCL Engineering postgraduate student Dan Mannion, who thinks it’s time for computer science to get a hardware update
Neural networks, deep learning and machine learning are hot topics in tech. The age of Artificial Intelligence (AI) is here, and the world of technology is being flooded with new software. From neural networks capable of learning video games to DeepMind Technologies’ AI that can help treat cancer, every day it becomes easier to imagine a world in which sentient AI is far more than just a blockbuster fantasy trope.
At the same time, computing is coming up against hard physical boundaries. Today, the latest silicon circuits have features just 7 nm wide, roughly a thousand times smaller than a red blood cell, and the smallest experimental transistors measure just a single atom across. Even theoretical circuitry can’t get smaller than a single atom: modern components are simply running out of room to shrink.
As for AI, our computational systems are dramatically inefficient. Google’s AlphaGo required 1,378 processors and 1 megawatt (a million watts) of power to beat a single human brain running on roughly 20 watts (about the power consumption of a lightbulb). That’s a level of inefficiency no amount of software optimisation is going to fix. At some point, somehow, we’ll need to rethink our hardware.
UCL postgraduate student Dan Mannion is working on a solution: artificial neurons. Dan’s work falls under a branch of electronic engineering called “neuromorphic engineering”, first theorised in the 1980s as a way to make computer systems more like the human brain by using synthetic neurons as building blocks for computation. Today, these systems are quickly becoming a reality.
How do they work?
Just two months ago, researchers at the National Institute of Standards and Technology in Colorado used neuromorphic technology to create a superconducting computer chip both faster and more efficient than the human brain, an achievement that won them a place in the journal Science Advances. The reason these chips require so little energy comes down to how their components differ from those of conventional computers. A normal computer represents information as ‘bits’, each of which is either a 1 or a 0. Neuromorphic circuitry, however, can accumulate signals from several sources until they build up a high enough voltage to pass a threshold, much like an organic neuron, at which point it passes a current, or ‘fires’. This is what makes it so energy efficient.
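That accumulate-until-threshold behaviour can be sketched with a textbook “leaky integrate-and-fire” model. This is a standard abstraction, not the NIST chip’s actual circuitry, and all the numbers below are arbitrary illustrative values:

```python
# Minimal leaky integrate-and-fire neuron: accumulates incoming
# signals until a threshold is crossed, then "fires" and resets.
# (Illustrative parameters only; not modelled on any real hardware.)

def simulate(inputs, threshold=1.0, leak=0.9):
    """Accumulate input signals; emit a spike (True) whenever the
    potential crosses the threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x   # leaky accumulation
        if potential >= threshold:
            spikes.append(True)            # threshold crossed: fire
            potential = 0.0                # reset after firing
        else:
            spikes.append(False)
    return spikes

# Several sub-threshold inputs build up until the neuron fires.
print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))
# → [False, False, False, True, False, False, True]
```

Note that no single input here is large enough to trigger a spike on its own; it is the accumulated signal that fires the neuron, just as the article describes.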
To accomplish this, neuromorphic circuits use a component called a ‘memristor’. These devices can ‘remember’ how much current has flowed through them after the current stops, retaining their resistance in the interim and acting as a sort of dynamic resistor. The memristors used by Dan’s research team, led by Professor Tony Kenyon, are incredibly simple, made up of just three layers: gold, a silicon substrate and silica (silicon dioxide).
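The essential memristor idea can be captured in a toy model: resistance drifts with the total charge that has passed through the device, and that state persists once the current stops. This is a deliberately simplified sketch with made-up parameters, not a model of the UCL team’s gold/silica devices:

```python
# Toy memristor: resistance depends on the accumulated charge that
# has flowed through it, and the state persists when current stops.
# (Hypothetical parameters, for illustration only.)

class Memristor:
    def __init__(self, r_min=100.0, r_max=16000.0, k=2000.0):
        self.r_min, self.r_max = r_min, r_max
        self.k = k               # sensitivity of resistance to charge
        self.resistance = r_max  # start in the high-resistance state

    def apply_current(self, current, dt):
        """Pass `current` amps for `dt` seconds; resistance drifts
        in proportion to the delivered charge, clamped to its bounds."""
        charge = current * dt            # q = i * t
        self.resistance -= self.k * charge
        self.resistance = max(self.r_min, min(self.r_max, self.resistance))
        return self.resistance

m = Memristor()
m.apply_current(0.001, 1.0)   # 1 mA for 1 s nudges the resistance down
print(m.resistance)           # → 15998.0, retained after the current stops
```

The key property for neuromorphic use is exactly this persistence: the device’s resistance encodes a history of past activity, much as a synapse’s strength encodes past signalling.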
Dan’s research involves replicating spike-timing-dependent plasticity (or STDP for short), the process that adjusts the strength of connections between neurons and is thought to underlie learning and memory. This is a difficult task, and yet it represents only a single part of what neurons do, meaning that even successful models are only partially accurate.
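The standard textbook form of STDP is a simple timing rule: if a neuron fires just before its neighbour, the connection between them is strengthened; if it fires just after, the connection is weakened. A minimal sketch of that pair-based rule (with arbitrary constants, and no claim to reflect Dan’s memristor implementation):

```python
# Pair-based STDP rule: the weight change depends on the relative
# timing of the pre- and postsynaptic spikes. Constants are arbitrary
# illustrative values; times are in milliseconds.

import math

def stdp_delta(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: strengthen (potentiation)
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # pre fired after post: weaken (depression)
        return -a_minus * math.exp(dt / tau)
    return 0.0

print(stdp_delta(10.0, 15.0))   # pre leads post → positive change
print(stdp_delta(15.0, 10.0))   # pre lags post → negative change
```

The exponential decay means that spike pairs close together in time change the connection far more than pairs far apart, which is what lets the rule pick out genuinely correlated activity.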
Despite the complexity of what he does, Dan insists “I am not a neuroscientist.” The work done by Dan and his team requires an oversimplification of the operation of the neuron. But simplification is a common technique used in machine learning. “[Doing] this allows me to quickly design and test ideas that are based on somewhat undisputed theories from the neuroscience field”, notes Dan. “I can ignore the details and treat everything like a black box. This may not be ideal, but it does produce results.”
Neuromorphic versus Quantum
But neuromorphic systems aren’t the only proposed answer to the limits of ever-shrinking circuitry. In fact, it’s quantum computing that receives the widest press coverage. Yet, record-breakingly small and revolutionary as they are, quantum computers lack one essential quality.
“Quantum computers are highly sensitive to their surroundings and so are generally operated in very controlled environments”, says Dan. “This isn’t ideal for applications in machine learning because we want to use this technology in phones and portable devices”. Artificial neurons, it would seem, have the potential to be far more commercially marketable due to their cheaper manufacturing and upkeep costs.
Although it may seem like breakthrough research, it’s always difficult to tell which pieces of research done today are going to make headlines tomorrow. This is just as true for Dan as it is for any researcher. “I want to further machine learning”, says Dan, “but if it turns out memristor-based STDP is the best way to go, then I should probably start playing roulette.”
Featured Image Credit: Pixabay