Are the robots really taking over?
By robots, I mean artificially intelligent machine-learning algorithms that mimic the workings of the human brain.
Except… they don’t really.
With all the buzz about artificial intelligence (AI) and machine learning right now, you would think that the human brain – the way we think, act, make decisions, and operate in the world – is on the way out.
In fact, the way the current technology operates – using neural networks – is somewhat incongruent with the workings of the human brain. At least according to the “father” of the current AI boom, Geoffrey Hinton:
“I don’t think it’s how the brain works. We clearly don’t need all the labeled data.”
To understand what I’m talking about, and why it matters, let’s take a step back…
The cornerstone of AI: Neural networks and deep learning
At the core of AI research are things called neural networks. A neural network is a framework of simple, interconnected processing units that lets a machine learning algorithm take in complex data and perform tasks based on that information. Neural networks “learn” from examples rather than from a human spelling out the tasks or steps required in a process.
“Deep” neural networks – networks stacked many layers deep – power the process referred to as “deep learning”, which is widely used in fields such as image recognition, natural language processing, medical image analysis, social network filtering, and other applications.
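To make that concrete, here is a toy sketch – purely for illustration, not any production system – of a tiny neural network learning from a handful of labeled examples, the kind of human-supplied labels Hinton refers to in the quote above:

```python
# A toy two-layer neural network trained on a handful of labeled examples.
# Illustrative only -- real deep learning systems use millions of samples
# and far larger networks.
import numpy as np

rng = np.random.default_rng(0)

# Labeled training data: 4 inputs, each tagged with the "right answer" (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights, initialised randomly.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: each layer is a weighted sum followed by a nonlinearity.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Error between the network's answer and the human-provided label.
    error = output - y

    # Backward pass: feed the error back to nudge every weight (backpropagation).
    grad_output = error * output * (1 - output)
    grad_W2 = hidden.T @ grad_output
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hidden

    W2 -= learning_rate * grad_W2
    W1 -= learning_rate * grad_W1

print(np.round(output, 2))  # after training, typically close to the labels [0, 1, 1, 0]
```

Every “learning” step here is just the gap between the network’s guess and a human-provided label being fed back to adjust the weights. Scale that up to millions of samples and many more layers and you have deep learning.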
The problem with neural networks and deep learning
More and more people are recognizing that neural networks and deep learning have limitations. Deep learning systems, contrary to their name, don’t really learn. They are trained on millions of samples and perform only the task for which they were trained. They are not intelligent: once trained, these systems can only respond – they can’t keep learning or adapt.
There are also major concerns with this form of artificial intelligence and machine learning because of the amount of computing power and training it requires. For the most part, the grunt work has to be passed off to remote servers in the cloud, which significantly increases the time it takes the technology to respond.
Apple’s voice assistant, Siri, is an example of cloud processing. Your words are sent over the internet as digital data to a remote computer, where the actual speech processing and search are performed, and the answer is relayed back the same way. That’s fine for something like Siri, but not for autonomous vehicles and other time-critical situations.
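Here’s a rough sketch of what that round trip looks like in code. The endpoint and payload format below are made up for illustration – Apple’s real pipeline is far more involved – but the shape of the problem is the same: your data has to travel to a server and back before you get an answer.

```python
# A rough sketch of the cloud round trip described above. The endpoint URL
# and response format are hypothetical stand-ins, not any real assistant API.
import time
import requests

def ask_cloud_assistant(audio_bytes: bytes) -> str:
    start = time.perf_counter()

    # The heavy lifting (speech recognition + search) happens on the server.
    response = requests.post(
        "https://assistant.example.com/v1/query",  # hypothetical endpoint
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=5,
    )
    answer = response.json()["answer"]  # hypothetical response schema

    # Total latency includes network transfer both ways, not just compute,
    # which is why this pattern struggles in time-critical settings.
    print(f"round trip took {time.perf_counter() - start:.2f}s")
    return answer
```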
For AI and machine learning to progress to the next level, a faster and more efficient way to process these computations needed to be discovered…
Enter “Edge Processing” and BrainChip
Edge processing is – in contrast to processing in “the cloud” – performed right on the device itself.
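For contrast with the cloud sketch above, here’s a minimal illustration of the same kind of task handled at the edge. The model here is a generic stand-in with weights that live on the device – not BrainChip’s actual SDK – but it shows where the time goes: local compute only, no network hop.

```python
# For contrast: the same kind of task handled entirely on the device.
# The "model" is a stand-in (random weights posing as a pre-trained network
# shipped with the device), not BrainChip's Akida SDK.
import time
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(128, 64))   # pretend these weights were trained offline
W2 = rng.normal(size=(64, 10))    # and installed on the device at the factory

def classify_on_device(features: np.ndarray) -> int:
    start = time.perf_counter()

    # All computation happens locally: two matrix multiplies and a nonlinearity.
    hidden = np.maximum(0.0, features @ W1)   # ReLU layer
    scores = hidden @ W2
    label = int(np.argmax(scores))

    # Latency is bounded by on-device compute alone -- no network round trip.
    print(f"on-device inference took {(time.perf_counter() - start) * 1000:.2f} ms")
    return label

print(classify_on_device(rng.normal(size=128)))
```

The trade-off, of course, is that the device needs hardware capable of running the model quickly and efficiently on its own – which is exactly the gap BrainChip is targeting.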
The leader in this space is a company by the name of BrainChip. BrainChip has developed a revolutionary neural network chip called Akida that is small enough, fast enough, and efficient enough to perform these machine learning tasks on the device itself by learning autonomously, rather than passing the work off to remote servers – which may not be available everywhere. In critical devices, where human life can depend on AI, you don’t want to rely on a remote server that may or may not be reachable.
By handling these complex computations on the device itself, Akida significantly accelerates the process. For example, BrainChip’s object classification technology can process 1,400 images per second at 92% accuracy, compared with current technologies that only manage up to 15 images per second at 82% accuracy! See a representation of this powerful technology in the video below:
A key success factor for implementing BrainChip’s technology at scale is that it builds on the technology already available – whereas other proposed self-learning networks throw the baby out with the bathwater.
Here’s a quote from BrainChip’s founder, Peter van der Made:
“Akida embodies a method of bridging the gap from deep learning to autonomous learning systems that goes beyond the current capabilities of AI. There is no need to throw it all away and start over. Akida can help to preserve a company’s current investment in AI and progress towards autonomous learning intelligent systems from there.”
It’s predicted that future Akida chips will incorporate even more characteristics of the brain, such as episodic memory – that is, remembering and anticipating things that happen in sequence – and prediction, which leads to understanding.
It truly is the technology that will take AI to new and mysterious places, with its first release expected in the second half of 2019.
“Real world” applications of Akida
Of course, all this talk about robots, artificial intelligence, and human-like learning machines is interesting… but how will this technology be used in the “real world”?
BrainChip designed and tested the first version of its technology in 2015. It contained a relatively small network that was able to learn, without help and within seconds, to drive a simulated car around a track without running into the sides. The simulated car behaved like a mouse in a maze, “feeling” the boundaries of the track to learn to navigate it.
In parallel with the BrainChip-powered car, another test car using common, currently available machine learning methods went around the track too. This car bumped into the sides and learned the boundary, then returned to the beginning and tried again, bumped again and learned some more. This is similar to the learning method used in today’s deep learning networks, where errors are repeatedly fed back to the network to change its behavior – rather than learning autonomously and changing in real time, as Akida does.
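Here’s a loose caricature, in code, of that bump-and-retry style of learning – not the code behind either demo car, just the shape of the feedback loop: run a lap, measure the error, adjust the controller, and go around again.

```python
# A toy "bump and retry" loop: each lap produces an error count (wall bumps),
# and that error is fed back to adjust the controller before the next attempt.
# Purely illustrative -- not the code behind either demo car.

def run_lap(steering_gain: float) -> int:
    """Simulate one lap; return how many times the car bumped the walls.
    The 'ideal' gain for this toy track is assumed to be 1.0."""
    miss = abs(steering_gain - 1.0)
    return int(miss * 20)

steering_gain = 0.2   # start with a deliberately bad controller
step = 0.1

for lap in range(1, 31):
    bumps = run_lap(steering_gain)
    print(f"lap {lap:2d}: gain={steering_gain:.2f}, bumps={bumps}")
    if bumps == 0:
        break   # the car finally gets around the track cleanly

    # Error feedback: try a nudged controller; keep the change only if it
    # reduces the bumps, otherwise nudge in the other direction next time.
    if run_lap(steering_gain + step) <= bumps:
        steering_gain += step
    else:
        step = -step
```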
See this race car demonstration in the video below:
BrainChip Race Car Demonstration (Milestone 1) from Aziana Limited on Vimeo.
Real-world applications of this technology include cybersecurity, where an unbelievable 98% of threats can be detected in microseconds with just minutes of training. In contrast, a deep learning network would require many hours or days of training on a machine to detect only 97% of threats.
Akida can also be used for image and video processing to detect objects, hand gestures, and other moving elements. This has already started to revolutionize security camera monitoring, especially in casinos, where cheating is a complex and costly problem. The technology can learn, detect, and alert casino staff to problems in real-time video footage.
Conclusion
The Akida technology created by BrainChip is at the forefront of technological change and the future of AI. It is an entirely new way of processing data, based on the way the brain works, and it is suited to both deep learning networks and self-learning, autonomous networks – something we haven’t seen in mainstream society yet.
The cybersecurity, object classification and hand-gesture demonstrations developed so far are only scratching the surface of what is possible with this technology. It is the next generation of AI.