Numenta Brings Brain Theory to Machine Learning, Compares HTM to Other Machine Learning Techniques in New Paper

HTM Adapts Quickly To Changes. Numenta used HTM and other algorithms to predict taxi passenger counts in New York City. Left: Overall prediction error for various algorithms. Right: After a new pattern is introduced (black dashed line), HTM quickly learns the new pattern and gives better prediction accuracy than LSTM thanks to its capacity for continuous learning. (Graphic by Numenta, Inc. Published under a Creative Commons Attribution 3.0 Unported (CC BY 3.0) license.)

REDWOOD CITY, Calif.--Numerous proposals have been offered for how intelligent machines might learn sequences of patterns, which is believed to be an essential component of any intelligent system. Researchers at Numenta Inc. have published a new study, “Continuous Online Sequence Learning with an Unsupervised Neural Network Model,” which compares their biologically derived HTM sequence memory to traditional machine learning algorithms.

The paper has been published in the MIT Press journal Neural Computation 28, 2474–2504 (2016). You can read and download the paper here.

Authored by Numenta researchers Yuwei Cui, Subutai Ahmad, and Jeff Hawkins, the new paper serves as a companion piece to Numenta’s breakthrough research offered in “Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex,” which appeared in Frontiers in Neural Circuits, in March 2016.

The earlier paper described a biological theory of how networks of neurons in the neocortex learn sequences. In this paper, the authors demonstrate how this theory, HTM sequence memory, can be applied to sequence learning and prediction of streaming data.

“Our primary goal at Numenta is to understand, in detail, how the neocortex works. We believe the principles we learn from the brain will be essential for creating intelligent machines, so a second part of our mission is to bridge the two worlds of neuroscience and AI. This new work demonstrates progress towards that goal,” Hawkins commented.

In the new paper, HTM sequence memory is compared with four popular statistical and machine learning techniques: ARIMA, a statistical method for time-series forecasting (Durbin & Koopman, 2012); extreme learning machine (ELM), a feedforward network with sequential online learning (Huang, Zhu, & Siew, 2006); and two recurrent networks, long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) and echo state networks (ESN) (Jaeger & Haas, 2004).

The results in this paper show that HTM sequence memory achieves prediction accuracy comparable to these other techniques. However, the HTM model also exhibits several properties that are critical for streaming data applications, including:

  • Continuous online learning
  • Ability to make multiple simultaneous predictions
  • Robustness to sensor noise and fault tolerance
  • Good performance without task-specific tuning
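The first of these properties, continuous online learning, means the model updates with every new data point rather than being trained once on a fixed batch. The following toy Python sketch illustrates the idea with a simple decaying transition-count predictor; it is a hypothetical illustration of online learning in general, not an implementation of HTM:

```python
from collections import defaultdict


class OnlineSequencePredictor:
    """Toy first-order online predictor (illustrative only, not HTM).

    Transition counts are updated after every observation, so there is
    no separate training phase, and an exponential decay lets the model
    adapt when the underlying pattern changes.
    """

    def __init__(self, decay=0.9):
        self.decay = decay
        # counts[a][b] = decayed count of transitions a -> b
        self.counts = defaultdict(lambda: defaultdict(float))
        self.prev = None

    def observe(self, symbol):
        """Incorporate one new symbol from the stream."""
        if self.prev is not None:
            # Decay old evidence so recent transitions dominate.
            for k in self.counts[self.prev]:
                self.counts[self.prev][k] *= self.decay
            self.counts[self.prev][symbol] += 1.0
        self.prev = symbol

    def predict(self):
        """Predict the most likely next symbol, or None if unknown."""
        if self.prev is None or not self.counts[self.prev]:
            return None
        nxt = self.counts[self.prev]
        return max(nxt, key=nxt.get)


p = OnlineSequencePredictor()
for s in "ABABABAB":       # learn the alternating pattern on the fly
    p.observe(s)
print(p.predict())          # after B, the model expects A

for s in "CACACACA":       # the pattern changes mid-stream
    p.observe(s)
print(p.predict())          # the model has adapted: after A it now expects C
```

Because updates happen per observation, the predictor recovers from the pattern switch without retraining, which is the behavior highlighted in the taxi-demand comparison above (HTM achieves this with a far richer, biologically derived mechanism).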

“Many existing machine learning techniques demonstrate some of these properties,” Cui noted, “but a truly powerful system for streaming analytics should have all of them.”

The HTM sequence memory algorithm is something that machine learning experts can test and incorporate into a broad range of applications. In keeping with Numenta’s open research philosophy, the source code for replicating the graphs in the paper can be found here. Numenta also welcomes questions and discussion about the paper on the HTM Forum or by contacting the authors directly.

* Yuwei Cui, Subutai Ahmad, Jeff Hawkins (2016) Continuous Online Sequence Learning with an Unsupervised Neural Network Model. Neural Computation 28(11), 2474–2504. doi:10.1162/NECO_a_00893

* Hawkins, J., and Ahmad, S. (2016). Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex. Front. Neural Circuits 10. doi:10.3389/fncir.2016.00023

About Neural Computation

Neural Computation disseminates important, multidisciplinary research results in a field that attracts psychologists, physicists, computer scientists, neuroscientists, and artificial intelligence investigators, among others. For researchers looking at the scientific and engineering challenges of understanding the brain and building computers, Neural Computation highlights common problems and techniques in modeling the brain, and in the design and construction of neurally-inspired information processing systems.

About Numenta

Founded in 2005, Numenta develops theory, software technology, and applications all based on reverse engineering the neocortex. Laying the groundwork for the new era of machine intelligence, this technology is ideal for analysis of continuously streaming data and excels at modeling and predicting patterns in data. Numenta has also developed a suite of products and demonstration applications that utilize its flexible and generalizable Hierarchical Temporal Memory (HTM) learning algorithms to provide solutions that encompass the fields of machine generated data, human behavioral modeling, geo-location processing, semantic understanding and sensory-motor control. In addition, Numenta has created NuPIC (Numenta Platform for Intelligent Computing) as an open source project. Numenta is based in Redwood City, California. Connect with Numenta on Twitter, Facebook, Google+ and LinkedIn.


Krause Taylor Associates
Betty Taylor, 408-981-7551

Release Summary

Researchers at Numenta, Inc. have published a study in the new edition of Neural Computation that compares their biologically-derived HTM sequence memory to traditional machine learning algorithms.
