
Simulate or Emulate?

Simulate or emulate? Should we do one or the other? What is the difference between them? A simple example helps: a flight simulator replicates an aircraft's physics when traveling from point A to point B, along with weather conditions, as accurately as possible in a computer, without the person running the simulation ever leaving his or her room. A flight emulator wouldn't necessarily replicate flight dynamics as in the former case, but it would effectively get you from point A to point B. In other words, a flight simulator doesn't get you anywhere; a flight emulator would¹. Conversely, a flight emulator probably won't let you understand the physics of heavier-than-air flight, whereas a flight simulator would. Emulation replicates function, not necessarily the inner dynamics of the system. It does so in order that the emulation can replace the thing being emulated. An artificial heart emulates a biological heart by performing its functions, and could ultimately take the place of that organ².

Recently, I have read some papers on the topic of neuromorphic computation (NC) and discovered that most of the efforts in this area are directed towards simulating neural behavior in hardware as a way to perform intense machine learning tasks in a manner that resembles how the brain works (hence the term neuromorphic)[1]. This makes no sense! Advocates of this paradigm expect the benefits of brain function (parallelism, asynchronicity, event-driven computation, etc.) to emerge just from simulating the activity of the brain's unit of computation, namely, the neuron. And so, models of neurons such as the Hodgkin and Huxley (HH) model or the simpler integrate-and-fire model are used.
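To make the integrate-and-fire model just mentioned concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. The function name and all parameter values are illustrative assumptions, not taken from any particular neuromorphic system:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameter values below are illustrative, not from any specific model.

def simulate_lif(input_current, dt=1e-4, tau=0.02,
                 v_rest=-65e-3, v_reset=-70e-3, v_thresh=-50e-3, r_m=1e7):
    """Euler-integrate dV/dt = (-(V - v_rest) + R*I) / tau, spiking at threshold."""
    v = v_rest
    spike_times, trace = [], []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:               # threshold crossed: record spike, reset
            spike_times.append(step * dt)
            v = v_reset
        trace.append(v)
    return trace, spike_times

# A constant 2 nA input over 0.5 s of simulated time drives regular spiking.
trace, spikes = simulate_lif([2e-9] * 5000)
```

Even this toy model makes the point: the code simulates membrane-voltage dynamics step by step, which is exactly the low-level detail an emulation could ignore.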

Simulations usually run on a larger timescale than emulations. For example, simulating one second of activity of a cortical column (~1 mm in diameter, ~10 thousand neurons, ~10 million synapses) can take around 4.5 days using the HH neuron model. And that is with the simplest HH model, which has a single compartment. Real neurons have multiple compartments, which further increases the complexity of any model trying to replicate their behavior. Why would we want to do such a thing, if the purpose is to use NC as a paradigm for efficient computation?

On the other hand, an emulation should run at least at the same timescale as the system being emulated, or faster. That is emulation's raison d'être. Remember that an emulation is most of the time developed to substitute a component of a system. For an emulation to be truly effective, we need to understand the system's dynamics at a coarse-grained level of abstraction rather than focusing on a low-level description. In other words, we should focus more on the "observables" than on the inner system dynamics.

This brings us to Marr's levels of analysis[2]. The neuroscientist David Marr claimed that to deeply understand a system, we need to analyze it at three different levels:

  1. Computational level: what problem is being solved.
  2. Algorithmic level: how the problem is being solved.
  3. Implementation level: how the algorithm is physically realized.

A simple example: a program that sorts a list of numbers. What problem is being solved? Sorting a list (computational level). How is the problem being solved? Using the quicksort algorithm (algorithmic level). How is it implemented? In the Python programming language on a Windows machine, etc. (implementation level).
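The sorting example can be written out in a few lines. This is a textbook quicksort in Python, included only to pin down the algorithmic level of the example:

```python
def quicksort(xs):
    """Sort a list (computational level) via quicksort (algorithmic level).

    The physical substrate running this code is the implementation level.
    """
    if len(xs) <= 1:
        return xs
    pivot = xs[0]
    smaller = [x for x in xs[1:] if x < pivot]   # partition around the pivot
    larger = [x for x in xs[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

quicksort([3, 1, 4, 1, 5])  # → [1, 1, 3, 4, 5]
```

Replacing quicksort with mergesort would change the algorithmic level while leaving the computational level, sorting a list, untouched.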

Emulations care about the computational level but are agnostic about the other two levels: they care neither whether the emulation follows the same algorithm nor whether it rests on the same implementation as the system being emulated. This should be the approach when developing neuromorphic systems. The term neuromorphic should be applied to a system not because it resembles, i.e., simulates, brain function at a certain level, but because it follows the principles that the brain uses to solve a particular problem.

Finally, let's not forget that when using brain science as a compass to direct the development of novel paradigms of computation for efficient artificial intelligence, we are only as good as the current state of the science. What will neuroscience tell us tomorrow that will require a revision of the methods developed for NC? Two examples: we are still working out the role of glia in brain function and cognition[3], and the impact of dendritic computation on neural communication[4].


Footnotes

  1. A trip by bus from Berlin to Paris would be an example of a flight emulator, albeit a pretty inefficient and unsophisticated one.
  2. https://youtu.be/f7cuOS7nmDY

References

  1. Javanshir, Amirhossein, Thanh Thi Nguyen, M. A. Parvez Mahmud, and Abbas Z. Kouzani. Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks. Neural Computation, 2022.
  2. Marr, David. Vision: A computational investigation into the human representation and processing of visual information. MIT press, 2010.
  3. Sancho, L., Contreras, M. & Allen, N. J. Glia as sculptors of synaptic plasticity. Neuroscience Research 167, 17–29 (2021).
  4. Wu X, Zhao P, Yu Z, Ma L, Gao Y, Yip KW, et al. Dendritic nonlinearities mitigate communication costs. Patterns, 2026.
