
Neuromorphic computing paradigms enhance robustness through spiking neural networks


Mimicking the complex biological design of the human brain and leveraging the characteristics of spiking neural networks to build more reliable and robust AI models 



In today’s world, people use and trust popular artificial intelligence models for a large variety of day-to-day tasks and queries. This trust between the creators of AI models and their users has developed largely because such models have achieved human-like speed, quality and relevance in daily tasks. However, these models have a rather alarming issue: they confidently make incorrect predictions when their input data is altered in even the most insignificant ways - it’s just a matter of a few misplaced pixels or characters. Through these "adversarial attacks”, scientists Jianhao Ding, Zhaofei Yu, Jian K. Liu & Tiejun Huang have shown that even high-level artificial neural networks (ANNs) can be made to completely misidentify an object.

The inspiration behind this research was the finding that the human brain is easily able to correctly classify objects, even if they possess imperfections. For example, we can recognize a stop sign even if it is covered in graffiti or viewed through fog or rain. Jianhao Ding et al. aim to find out whether the unique characteristics of Spiking Neural Networks (SNNs), more specifically their use of temporal encoding, can be leveraged to create AI models that are far more robust and protected from such noise and imperfections in data.



Jianhao Ding et al. departed from the standard frame-based processing of traditional AI and instead employed the unique features of Spiking Neural Networks. More specifically, their methodology consists of three identifiable concepts:

  1. Temporal Encoding: As discussed earlier, unlike conventional ANNs which process data as static values, SNNs convert input information (like pixels, characters, etc.) into a series of “spikes”. Spikes are discrete pulses - a simple state of on or off. Insights are obtained from the number of spikes and their timing over a certain time period. The researchers utilised Task-Critical Encoding, in which the most critical features of the input data are converted into spikes that enter the input stream earliest.

  2. Fusion Encoding Strategy: To prevent the model from becoming too specialized and therefore prone to adversarial attacks, the team combined different techniques for generating spikes (rate-based encoding & Poisson encoding). Consequently, the model learns a more generalised representation of the input data, making it harder for adversarial attacks to succeed.

  3. Early Exit Decoding: In neuromorphic systems, neurons accumulate a membrane voltage until it hits a threshold. The researchers implemented a system where the model provides its answer as soon as this “confidence” threshold is met. As a result, by “exiting” the process early, the system ignores the latter end of the input spikes, where malicious adversarial noise is usually hidden. This also makes the model much more energy efficient, since it doesn’t waste time processing extra data. A minimal code sketch of these three ideas follows this list.
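
The sketch below is a toy Python/NumPy illustration of latency-style temporal encoding, Poisson rate encoding, a naive fusion of the two, and a threshold-based early exit - not the authors' actual implementation. The function names, random weights and confidence threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def latency_encode(x, T=20):
    """Temporal (latency) encoding: stronger features spike earlier,
    so the most task-critical information enters the stream first."""
    spikes = np.zeros((T, x.size))
    times = np.clip(((1.0 - x) * (T - 1)).astype(int), 0, T - 1)
    spikes[times, np.arange(x.size)] = 1.0
    return spikes

def poisson_encode(x, T=20):
    """Rate (Poisson) encoding: pixel intensity sets the per-step firing probability."""
    return (rng.random((T, x.size)) < x).astype(float)

def fusion_encode(x, T=20):
    """Naive 'fusion': interleave the two encodings so the network never
    over-specializes on a single spike statistic (illustrative only)."""
    mask = (np.arange(T) % 2 == 0)[:, None]
    return np.where(mask, latency_encode(x, T), poisson_encode(x, T))

def early_exit_decode(spike_train, weights, threshold=5.0):
    """Integrate weighted input spikes over time and answer as soon as one
    output neuron's accumulated potential crosses the confidence threshold,
    ignoring any later (possibly adversarial) spikes."""
    potential = np.zeros(weights.shape[1])
    for t, s in enumerate(spike_train):
        potential += s @ weights          # accumulate evidence step by step
        if potential.max() >= threshold:  # confident enough: exit early
            return potential.argmax(), t
    return potential.argmax(), len(spike_train) - 1

x = rng.random(64)              # a toy 8x8 "image", flattened, values in [0, 1]
W = rng.random((64, 10)) * 0.1  # random input-to-class weights for 10 classes
label, exit_step = early_exit_decode(fusion_encode(x), W)
print(f"predicted class {label} after {exit_step + 1} of 20 timesteps")
```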


Dataset availability: The researchers primarily made use of two datasets popular in AI research. The first was CIFAR-10, consisting of 60,000 color images, each of size 32 × 32 pixels, divided into 10 classes. Each class contains 6,000 images, with 50,000 images used for training and 10,000 for testing. The second was the Fashion-MNIST dataset, a benchmark of 70,000 images of clothing items, released by Zalando in 2017 as a drop-in replacement for the classic MNIST handwritten digits. The samples are 28 × 28 grayscale images categorized into 10 classes. Both datasets are gold standards in the AI community, and by using them, Ding et al. can easily compare their SNN’s performance with thousands of traditional models on the same set of data.
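
For readers who want to experiment, both datasets ship with torchvision, so a baseline setup takes only a few lines. This is a sketch assuming a standard PyTorch environment; the data path and transform are arbitrary choices, not taken from the paper.

```python
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # scales pixel values to [0, 1], convenient for spike encoding

# CIFAR-10: 60,000 colour 32x32 images in 10 classes (50k train / 10k test)
cifar_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
cifar_test = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)

# Fashion-MNIST: 70,000 grayscale 28x28 images of clothing items in 10 classes
fmnist_train = torchvision.datasets.FashionMNIST(
    root="./data", train=True, download=True, transform=transform)
fmnist_test = torchvision.datasets.FashionMNIST(
    root="./data", train=False, download=True, transform=transform)

print(len(cifar_train), len(cifar_test), len(fmnist_train), len(fmnist_test))
# 50000 10000 60000 10000
```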


Additionally, Ding et al.’s choice of adversarial attacks was also drawn from popular methods used in previous studies. First, they primarily evaluated their system against white-box gradient-based attacks, a benchmark for AI security since the attacker has complete access to the model's internals. The most prominent attack used was Projected Gradient Descent (PGD), often described as the strongest first-order adversary. PGD finds the direction in input space that most confuses the model and nudges the input pixels in that direction, step by step, while remaining within a certain “noise budget” so the change stays imperceptible to humans. The team also tested against the Fast Gradient Sign Method (FGSM), a quicker, single-step attack used to measure baseline vulnerability. Finally, they accounted for the unique nature of SNNs by using Surrogate Gradient (SG) attacks, which are specifically designed to bypass the "non-differentiable" nature of spikes.
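
For illustration, here is a minimal PyTorch sketch of FGSM and an L∞-bounded PGD in their standard textbook form - not necessarily the exact attack configuration used in the paper. `model`, `loss_fn`, the budget `eps` and the step settings are placeholder assumptions.

```python
import torch

def fgsm(model, loss_fn, x, y, eps=8/255):
    """Fast Gradient Sign Method: one step along the sign of the input
    gradient, scaled by the noise budget eps."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected Gradient Descent: repeated small FGSM-style steps, each time
    projecting the image back into the eps-ball around the original so the
    total perturbation stays within the noise budget."""
    x_orig = x.clone().detach()
    x_adv = x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()           # nudge the pixels
        x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project to eps-ball
        x_adv = x_adv.clamp(0, 1)                           # keep valid pixel range
    return x_adv.detach()
```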


By surviving such varied & powerful attacks, Ding et al. proved that the robustness of their neuromorphic paradigm was not accidental but a result of the model's fundamental temporal design.


Ding et al. managed to yield three major findings from their research:

  1. Gain in Robustness: When tested against standard adversarial attacks, the team’s SNN framework recorded twice the accuracy of traditional ANNs. Even as the noise added to the input data was increased, the SNN maintained its performance for longer than standard AI models.

  2. Retaining Accuracy: Usually, making a model tougher sacrifices accuracy on clean, normal data. However, the team’s Fusion Encoding Strategy allowed their SNN to remain sharp while simultaneously building a shield against malicious data.

  3. Efficiency & Speed: Since an Early Exit Decoding method was used, the SNN became faster and used significantly less energy than traditional models that have to process every single pixel of every frame.



Historically, neuromorphic computing has been about energy-efficient, high-performance computing for simple tasks such as pattern recognition and sensory processing. The main goal of researchers was to integrate energy-efficient AI systems into everyday products, and robustness usually meant physical reliability (i.e., the hardware wouldn't break if a wire snapped).

Jianhao Ding et al. shift that narrative. Rather than focusing on energy efficiency, the team focused on making these models function correctly in the first place - without being fooled by manipulated inputs. Sure enough, Ding et al. developed SNN frameworks that were far better protected from adversarial attacks than existing models. The core differences between prior work and this research are as follows:

  1. Generally, when models are toughened against malicious input data, they are shown millions of actual attack images that they must learn to recognize in future encounters. This is slow & expensive. Ding et al.’s research exhibits that the fundamental structure of their model makes it naturally “tough” from day one, without any extra training cost.

  2. Most prior work tried to add "filters" on top of standard AI to block noise. This research shows that Spiking Neural Networks (SNNs) have a filter built into their DNA. Because Ding et al.’s model only "fires" when it is sure it has enough data, it naturally ignores the weak, "fuzzy" signals used in such attacks.



Although the paper’s findings are a major breakthrough, neuromorphic systems cannot replace conventional computers just yet.

  1. Issues with SNNs: Standard AI is easy to train thanks to predictable, smooth mathematics. SNNs, however, use spikes that are essentially “on” or “off”, which makes them much harder to train with today’s techniques. Workarounds such as surrogate gradients, which are still being perfected, must be used (see the sketch after this list).

  2. Unproven Dataset: This paper employs relatively small datasets (such as CIFAR-10). The open question is whether the SNN can retain its accuracy and its advantages on far larger, more complex datasets.

  3. Accessibility: To utilise neuromorphic computing, special chips must be produced and distributed all over the world. Most people still use standard GPUs. Until these "brain-like chips" become as common as the ones in our phones, these defense strategies will be hard to use in everyday life.
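
To illustrate the workaround mentioned in point 1, here is a minimal PyTorch sketch of a surrogate gradient: the forward pass keeps the hard, non-differentiable spike, while the backward pass substitutes a smooth approximation so ordinary backpropagation can flow through the network. The threshold and the shape of the surrogate function are illustrative assumptions, not the paper's specific choices.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Forward: a hard threshold (the spike is 0 or 1, non-differentiable).
    Backward: pretend the threshold was smooth and pass back a 'surrogate'
    gradient so standard backpropagation can still train the SNN."""

    THRESHOLD = 1.0  # illustrative firing threshold

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= SurrogateSpike.THRESHOLD).float()  # discrete spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # A fast-sigmoid-style bump centred on the threshold stands in for
        # the true (zero-almost-everywhere) derivative of the spike.
        surrogate = 1.0 / (1.0 + 10.0 * (v - SurrogateSpike.THRESHOLD).abs()) ** 2
        return grad_output * surrogate

spike = SurrogateSpike.apply
v = torch.randn(5, requires_grad=True)   # toy membrane potentials
spike(v).sum().backward()
print(v.grad)  # nonzero gradients despite the hard threshold in the forward pass
```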



Ding et al.’s research is important because it shifts the question regarding AI from “how smart can it be?” to “how safe can it be?” This is crucial because, in today’s world, AI models are being assigned increasingly large responsibilities - in autonomous cars, surgical robots & security systems. If these models are easily thrown off by a slight flaw or imperfection, they could function incorrectly and put human lives at risk.


By showing that “thinking in pulses” (spikes) makes AI more like a human brain, this paper gives us a blueprint for Trustworthy AI. It emphasises that we don’t just need faster processors; we need more reliable ones that aren’t easily confused at the sight of graffiti on a stop sign, or by the messy and sometimes malicious reality of the world we live in. Essentially, it argues that if we want AI we can trust with our lives, we need it to "think" more like we do.


Glossary

Neuromorphic Computing: Computer hardware designed to mimic the structure and processes of the human brain.

Spiking Neural Networks (SNNs): An evolution of ANNs; models that function using timed pulses (spikes) fired at specific points in time instead of the usual continuous numerical values.

Von Neumann Architecture: Computer design where data & instructions are stored in a single memory unit, which are processed sequentially by a CPU (consisting of an ALU & CU).

Adversarial Attacks: Techniques that deliberately manipulate input data with imperceptible changes, causing incorrect predictions & classifications.

Robustness: A model’s ability to retain accuracy when faced with noise, errors, or attacks.

Temporal Encoding: Encoding data in the timing of spikes, i.e., the point in time at which each spike arrives.

Early Exit: When an AI model is confident in its answer, it “exits” the process stage and provides an output.

Fusion Encoding: Jianhao Ding et al.’s method of synthesising different spike-generation techniques to increase the robustness of an AI model.

Perturbation: Small amounts of noise added to a signal that can disrupt predictions.

Leaky Integrate-and-Fire: A neuron model in which the neuron “charges up” with incoming input, slowly leaks that charge over time, and fires a pulse only when enough input is received.

Surrogate Gradients: A mathematical technique for training SNNs, needed because standard backpropagation cannot differentiate through discrete spikes.



Bibliography

Ding, Jianhao, et al. “Neuromorphic Computing Paradigms Enhance Robustness through Spiking Neural Networks.” Nature Communications, vol. 16, no. 1, Springer Science and Business Media LLC, Nov. 2025, https://doi.org/10.1038/s41467-025-65197-x. Accessed 1 Dec. 2025.

Caballar, Rina, and Cole Stryker. “What Is Neuromorphic Computing?” IBM, 27 June 2024, www.ibm.com/think/topics/neuromorphic-computing.

SEO, Arthur. “Frames in Artificial Intelligence?” Globy, 24 Sept. 2025, gogloby.com/ai-glossary/frames-in-artificial-intelligence/. Accessed 25 Jan. 2026.


Shaunak Anand Wasker | Writer | The STEM Review


