With the Loihi 2 neuromorphic chip, machines can perform application processing, problem-solving, adaptation, and learning much faster than before. Four years after Intel first introduced Loihi, its first neuromorphic chip, the company has released its second-generation processor, which Intel says provides faster processing, greater resource density, and improved power efficiency.

CPUs are often called the brains of the computer, but they aren't, really: they process only a handful of tasks at once, in a serial manner, nothing like what the brain does automatically to keep you alive. Neuromorphic computing attempts to replicate the functions of the brain by performing numerous tasks simultaneously, with an emphasis on perception and decision-making.

Neuromorphic chips mimic neurological functions through computational "neurons" that communicate with one another. The first generation of Loihi chips had around 128,000 of those digital neurons; Loihi 2 has more than a million. Intel states that in early tests, Loihi 2 required more than 60 times fewer ops per inference when running deep neural networks than Loihi 1 did, without a loss in accuracy. This can mean real-time application processing, problem-solving, adaptation, and learning. It has even learned how to smell.

Loihi 2 also features faster I/O interfaces to support Ethernet connections, vision-based sensors, and larger meshed networks. This will help the chip better integrate with the robotics and sensors that were commonly used with Loihi 1 in the past.

Loihi isn't sold like regular Intel chips. It is sold through complete systems to select members of the Intel Neuromorphic Research Community (INRC). Those systems are Oheo Gulch, which uses a single Loihi 2 chip and is intended for early evaluation, and Kapoho Point, which offers eight Loihi 2 chips and will be available soon.
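To make the "digital neuron" idea above concrete, here is a minimal leaky integrate-and-fire neuron in Python. It is a conceptual sketch of the general class of spiking-neuron models that neuromorphic chips implement in hardware, not Intel's actual Loihi 2 neuron design; the class name and parameter values are invented for illustration.

```python
# Conceptual leaky integrate-and-fire (LIF) neuron -- an illustration of
# the spiking "digital neurons" that neuromorphic chips implement.
# This is NOT Intel's Loihi 2 neuron model; names and parameters are
# invented for this sketch.

class LIFNeuron:
    def __init__(self, decay=0.9, threshold=1.0):
        self.v = 0.0               # membrane potential
        self.decay = decay         # leak factor applied each timestep
        self.threshold = threshold # potential at which the neuron fires

    def step(self, input_current):
        """Advance one timestep; return True if the neuron spikes."""
        self.v = self.v * self.decay + input_current
        if self.v >= self.threshold:
            self.v = 0.0           # reset after firing
            return True
        return False

# Drive the neuron with a constant input current: it integrates, leaks,
# and fires periodically. Downstream work happens only when spikes occur,
# which is where neuromorphic hardware saves power.
neuron = LIFNeuron()
spikes = [t for t in range(20) if neuron.step(0.3)]
# spikes == [3, 7, 11, 15, 19]
```

The event-driven nature of this loop (compute only on spikes) is what distinguishes the neuromorphic approach from a CPU's always-on serial execution described above.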
Intel Releases Lava Framework

To support development of neuromorphic applications, Intel has also introduced an open, modular, and extensible software framework known as Lava, which the company says provides the neuromorphic computing community with a common development framework. A component of Lava is Magma, an interface for mapping and executing neural-network models and other processes on neuromorphic hardware. Lava also includes offline training, integration with third-party frameworks, Python interfaces, and more. The Lava framework is available now on GitHub.
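Lava is built around processes that exchange events over channels, with Magma mapping those processes onto hardware. The following standard-library Python sketch mimics that event-driven, message-passing style; it is a conceptual analogy only, not Lava's actual API, and every name in it is invented for illustration.

```python
# Conceptual sketch of event-driven processes communicating over a
# channel, in the spirit of Lava's programming model. This is NOT the
# Lava API (see the lava-nc project on GitHub for the real thing).
from queue import Queue

def spike_source(out_port, pattern):
    """Emit a spike event (its timestep) for every 1 in the pattern."""
    for t, fire in enumerate(pattern):
        if fire:
            out_port.put(t)
    out_port.put(None)  # sentinel marking the end of the run

def spike_sink(in_port):
    """Consume spike events until the sentinel; return arrival times."""
    times = []
    while (event := in_port.get()) is not None:
        times.append(event)
    return times

channel = Queue()                       # the "port" linking two processes
spike_source(channel, [0, 1, 1, 0, 1])  # fires at timesteps 1, 2, 4
arrivals = spike_sink(channel)          # [1, 2, 4]
```

The design point this illustrates is that processes never share state; they only pass timestamped events, which is what lets a runtime like Magma place them on CPU simulation or neuromorphic hardware without changing the model code.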