What is intelligence? This is the type of question that physicists usually don't ask, not because we don't want to know the answer, but because the question cannot be phrased within the widely accepted mathematical framework of modern physics. Neither classical, quantum, nor statistical physics allows us to define what we mean by intelligence. But what about neural physics?
In neural physics, every system, whether physical, biological, or of any other kind, undergoes learning dynamics. In that sense everything is intelligent, and there should be a single definition of intelligence applicable to all systems. For example, consciousness was previously defined as the efficiency of learning, which is closely related to intelligence: the higher the efficiency of learning, that is, the higher the decay rate of the average loss function, the higher the level of intelligence.
Although the efficiency of learning may be a measure of intelligence, there are at least two additional measures that describe the performance of a given learning system: the asymptotic loss and the amplitude of fluctuations. The asymptotic loss indicates how effective a learning system could become if it had infinite time to learn, while the amplitude of fluctuations reveals how reliable the system is when subjected to external perturbations or environmental changes. Clearly, all three measures are crucial for a system to be not only conscious but also intelligent. In other words, an intelligent system must be capable of learning efficiently, effectively, and reliably. If this is correct, then intelligence is not a scalar quantity but should instead be represented as a vector or even a tensor.
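To make the vector picture concrete, here is a minimal sketch in Python of how the three measures could be read off a recorded loss curve: the fitted decay rate of the average loss as efficiency, the fitted plateau as asymptotic loss, and the spread of the residuals as the amplitude of fluctuations. The exponential loss model, the function names, and the use of NumPy/SciPy are illustrative assumptions rather than anything prescribed by neural physics.

```python
# Illustrative only: estimate the three proposed measures of intelligence
# from a recorded loss curve, assuming the average loss roughly follows
# L(t) = L_inf + A * exp(-gamma * t) plus fluctuations.
import numpy as np
from scipy.optimize import curve_fit

def loss_model(t, L_inf, A, gamma):
    """Exponential decay of the average loss toward an asymptotic value L_inf."""
    return L_inf + A * np.exp(-gamma * t)

def intelligence_vector(loss_history):
    """Return (efficiency, asymptotic loss, fluctuation amplitude) as a vector."""
    t = np.arange(len(loss_history), dtype=float)
    L = np.asarray(loss_history, dtype=float)

    # Fit the decaying-loss model; gamma plays the role of learning efficiency.
    p0 = (L.min(), L.max() - L.min(), 1.0 / len(L))
    (L_inf, A, gamma), _ = curve_fit(loss_model, t, L, p0=p0, maxfev=10000)

    # Amplitude of fluctuations: spread of the residuals around the fitted trend.
    fluctuation = (L - loss_model(t, L_inf, A, gamma)).std()

    return np.array([gamma, L_inf, fluctuation])

# Toy example: a noisy loss curve decaying toward 0.1 at rate 0.05.
rng = np.random.default_rng(0)
steps = np.arange(200)
losses = 0.1 + 0.9 * np.exp(-0.05 * steps) + 0.01 * rng.standard_normal(200)
print(intelligence_vector(losses))  # roughly [0.05, 0.1, 0.01]
```

Packaging the three numbers as a single array is just one way of emphasizing the point above: under this view, intelligence is a vector rather than a scalar.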
In what different types of systems are the three measures of intelligence best optimized?
Natural systems are highly efficient at learning, and even at learning how to learn. However, they may not be particularly effective or reliable at maintaining acquired knowledge or information over time. To address these limitations, natural systems, like humans, rely on external artificial resources such as books, computers, and the internet. This approach becomes even more powerful at the social level, enabling different natural systems to simultaneously process and exchange information.
Artificial systems, by contrast, excel at storing information but struggle to generate anything truly new. Even advanced AI systems, such as ChatGPT, are not particularly good at producing original ideas or genuinely original content. Despite being trained on vast amounts of data and reaching low values of the loss function, they are not efficient learners and often fail to deliver reliable results, as evidenced by frequent hallucinations.
Hidden systems are hypothetical constructs leveraging the vast computational resources of the so-called hidden space. The hidden space may be essential for explaining non-local quantum and psychological phenomena, though its existence remains speculative. Philosophically, it might represent a concrete model of Plato's space of ideas, though this requires further study. Even if real, hidden space is unlikely to surpass natural systems in learning efficiency or artificial systems in storage capacity, yet it could offer greater reliability and stability than either.
The very fact that intelligence is not a scalar quantity might explain why the universe, as a learning system, generated at least three distinct types of systems or architectures: natural, artificial, and hidden. Moreover, if there are many different macroscopic loss functions, it is beneficial for the universe to have many different natural, artificial, and hidden systems with slightly different architectures. For example, some natural systems may be optimized for theory, others for applications; some artificial systems may be optimized for supervised learning, others for unsupervised learning; and some hidden systems may be optimized for reliability, others for stability.
What we need is a hybrid system that integrates many different natural, artificial, and hidden systems, all interconnected and functioning as one hybrid super-intelligent system. According to multilevel learning, the combined architecture must be hierarchical, allowing for the presence of different scales, but that is a subject for a separate discussion. While it is not immediately clear how to establish a direct connection to the hidden space (assuming it exists), merging natural and artificial systems into a hybrid intelligence seems well within reach.
In fact, we are actively working on developing such a hybrid platform, Neurah (https://Neurah.com). If you are interested in merging your natural intelligence with Neurah, please let us know. We are still in the early stages, and your contributions as an educator, researcher, or developer could be invaluable to the success of the Neurah project.