As humans, we are driven by an insatiable desire to better understand the world around us. From ancient times to the present day, we have sought to unravel the mysteries of the universe and our place within it. Despite significant progress in many scientific fields, we still lack a unifying framework that would allow us to fully grasp the complexity of the natural world.
One of the biggest challenges we face in this pursuit is the lack of a common language that transcends disciplinary boundaries. Scientists, philosophers, economists, and artists all speak different languages, using different jargon and frameworks to describe the world around us. This fragmentation creates a barrier to progress, as individuals within different fields struggle to communicate with each other and share knowledge.
To overcome this challenge, we need a universal framework that can bridge disciplinary boundaries and unite diverse fields of study. This framework must be broad enough to encompass all branches of science, as well as the arts, humanities, and social sciences. It should allow us to construct, discuss, and evaluate new theories and models that incorporate insights from diverse fields.
While the task of creating such a framework may seem daunting, we cannot afford to be deterred. The pursuit of knowledge demands that we push beyond our current boundaries and strive to create a more unified and integrated understanding of the world around us. The potential rewards of such a framework are immense, from deeper insights into the fundamental nature of reality to new innovations and breakthroughs that can help us address some of the most pressing challenges facing humanity today.
In recent years, a universal framework has been proposed which not only presents a new way of understanding the universe but also challenges our traditional perspective of it. This framework is based on the idea that, at its most fundamental level, the entire universe is a neural network. Although this concept may seem far-fetched, it has some scientific basis which we will explore further in this group.
This framework will be our starting point, but it's important to acknowledge that we don't yet know where it will lead us. The journey will be exciting, but it won't be an easy one even for professional scientists. However, the potential rewards are great. By delving into this framework, we may be able to construct a universal language that transcends disciplinary boundaries and allows us to more effectively explore and understand the world around us.
This group is designed for those who are willing to think critically and approach the subject matter with an open mind, despite the complexity of the material. No advanced degrees or specialized training are required, just a willingness to engage with high-level concepts and a basic proficiency in mathematics.
Are you ready? Let's dive in!
In this podcast, we explore the fascinating world of neural physics, examining the idea that the entire universe might function like a neural network. Each episode tackles profound questions about the nature of reality through the lens of neural network theory. We aim to unravel complex scientific concepts, making them both accessible and thought-provoking. By examining how neural networks can model natural and social phenomena, we encourage our listeners to deeply ponder the nature of the universe and our place within it. Join us as we navigate various branches of science, shedding light on some of the biggest questions that have puzzled humanity.
YouTube videos of previous episodes: https://www.youtube.com/playlist?list=PLnu7tVik2MzLLzs_hKXILls_VS4qUZmfr
Are you intrigued by the parallels between the functioning of the world and a neural network, where patterns and connections emerge to shape complex behaviors?
Do you wish to delve deeper into the implications of this fascinating concept, exploring how these parallels can illuminate our understanding of everything from biological systems to societal structures?
Connect with us on Facebook and Telegram to embark on this exploration alongside individuals who share similar interests. Together, we can unravel the intricacies of these parallels and their profound implications.
Our group meetings serve as a vibrant platform where we delve into thought-provoking discussions on interdisciplinary topics. These include exploring the intricate interconnectedness of systems and examining how complex behavior can emerge from seemingly simple rules. By bringing together a diverse array of individuals, our meetings offer a distinctive opportunity to broaden your knowledge, participate in stimulating conversations, and connect with peers who share similar interests and passions.
YouTube videos of previous meetings: https://youtube.com/playlist?list=PLnu7tVik2MzKHBeqKy5nGWh5Hs5Wn2SX2
A scientific theory is a mathematical framework that can be used to model phenomena. The richer the framework, the more phenomena it can model, and the better the theory. By modeling, I mean the capacity to predict or explain the results of experiments or observations.
For example, you should be able to measure something about a given system, carry out calculations to predict something about the system at later times, and then check if the predictions agree with the experiment.
But to develop a new theory or a new framework, one should first learn about all of the existing theories and frameworks, as well as all of the relevant experiments and observations that the new theory tries to explain.
The situation is even more difficult with the so-called theories of everything. As the name suggests, a theory of everything should provide us with a framework to model all phenomena, which must include not only natural phenomena (such as physics and biology), but also social phenomena (such as economics and politics) and maybe even beyond (such as philosophy and art).
This is a real challenge, and so very often people refer to sufficiently general frameworks as theories of everything. Strictly speaking, this is incorrect, but as long as you are explicit about what you are modeling, there is no problem. After all, “Theory of Everything” is just a buzzword.
No, there is no literal truth. Any theory which we now use to describe how nature works can at some later time be replaced with another, more fundamental theory. Newtonian physics was already replaced by Einstein's general relativity, classical physics was replaced by quantum physics, and it is very likely that both general relativity and quantum mechanics will be replaced with a more fundamental theory in the not-so-distant future.
And so the main point of science is not to prove how the universe works, or to discover the one and only theory of everything, but to find a theory which can be used to better model the world around us using the mathematical tools that are currently available. One possibility is to use the mathematics of neural networks and to model the world around us as a neural network.
The core idea is to start with the mathematical framework of artificial neural networks and see how well the neural networks can model different phenomena.
For example, can the learning dynamics of neural networks be used to model the known physics, such as quantum mechanics or general relativity, or the known biology, such as natural selection or the major transitions in evolution? And if so, can the framework be used to resolve paradoxes or inconsistencies in the existing theories, such as the measurement problem in quantum mechanics or the emergence of life in biology? It turns out that yes, it can.
Not all of the calculations have been done yet, and not all of the known phenomena have been modeled yet, but it is already clear that the mathematical framework of neural networks is a lot richer than what we were dealing with in physics. The main new and important ingredient is the learning dynamics. Neither classical physics, nor quantum physics, nor gravitational physics has that, but that does not make them incorrect theories, only incomplete ones, at least when we look at them through the prism of the neural network theory, or what we like to call neural physics.
I don’t see why not. As of now, we have already modeled many phenomena from physics and biology, and are starting to look at psychological and social phenomena. So far things look rather promising, and so it may very well be that the entire universe, at its most fundamental level, is a neural network.
In any fundamental theory, there are objects that define the system. In particle theory, these objects are fundamental particles; in string theory, they are fundamental strings; and in neural network theory, they are fundamental neurons. In all these examples, the fundamental objects are just mathematical structures whose dynamics are described by simple rules, yet these rules often give rise to complex behaviors.
The fundamental neurons are the most fundamental objects from which neural network theory is built, and they manifest themselves on the smallest possible scale, perhaps the Planck scale. But what is interesting is that everything we see consists of networks of neurons that function as if they were larger copies of the smaller neurons. Particles are networks, cells are larger networks, and humans are even larger and far more complex networks that are still built out of the fundamental neurons. Of course, the learning dynamics of the larger networks are more complex, but some of the essential features of the dynamics appear on all scales.
When it comes to the learning dynamics, an artificial neural network, a biological neural network, and the fundamental neural network are all essentially the same. These networks are basically nodes (or neurons) connected to each other with varying strength (or weights). At each time step, neurons can change their states depending on the states of other neurons and the strength of their connections to them (which is what we call the activation dynamics). In addition, the strength of the connections can also change in order to minimize some loss function (which is what we call the learning dynamics).
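To make this concrete, here is a minimal sketch in Python of the two kinds of dynamics side by side. The specific choices (a fully connected network, a tanh activation, a quadratic loss against a fixed target, plain gradient descent) are illustrative assumptions made only for the example; the theory itself does not prescribe them.

```python
import numpy as np

# Illustrative toy model, not a literal implementation of the theory.
rng = np.random.default_rng(0)
N = 16                                   # number of neurons
W = 0.1 * rng.standard_normal((N, N))    # connection strengths (weights)
x = rng.standard_normal(N)               # neuron states
target = np.tanh(rng.standard_normal(N)) # signal the network tries to predict
lr = 0.01                                # learning rate

for step in range(1000):
    x_prev = x
    # Activation dynamics: states update based on the states of other
    # neurons and the strength of the connections to them.
    x = np.tanh(W @ x_prev)

    # Learning dynamics: connection strengths change so as to reduce a
    # loss function, here the squared error between states and a target.
    error = x - target
    loss = 0.5 * np.sum(error ** 2)
    grad_W = np.outer(error * (1 - x ** 2), x_prev)   # gradient of the loss w.r.t. W
    W -= lr * grad_W

    if step % 200 == 0:
        print(f"step {step:4d}  loss {loss:.4f}")
```

The point of the sketch is only to show the two intertwined update rules: a fast update of the states and a slower update of the weights driven by a loss function.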
Information exchange is just one type of dynamics present in neural networks, specifically the activation dynamics, and it is also evident in many other networks, such as the World Wide Web or perhaps even the cosmic web. What is, however, unique to neural networks (artificial, biological, or fundamental) is the learning dynamics. And it is the learning dynamics that allows, for example, the violation of the second law of thermodynamics, leading to the emergence of complex lifeforms.
If it is a simulation of an artificial neural network, then I have no problem with that, but I do think it is an unnecessary complication. The theory already assumes the existence of fundamental neurons, and questions about how these neurons came into existence, or what the fundamental neurons are made of, are considered philosophical or, in other words, lie beyond the scope of its explanatory power.
In the Neural Network Theory, the Big Bang is a special kind of phase transition that gives rise to a three-dimensional space. The main outcome of this phase transition was the development of a new communication protocol between fundamental neurons. Connections between fundamental neurons existed before the Big Bang, but they were unconstrained, with every neuron exchanging information with every other neuron. As a result there was no three-dimensional space, just a 'soup' of randomly connected neurons that might have had no apparent coherence in their communication protocols.
Effectively, the original space was of infinite dimension, or more precisely, the usefulness of a lower-dimensional, i.e. three-dimensional, organization of neurons was not yet discovered. As the learning progressed, the neurons tried to establish different connections and different protocols in order to minimize their loss function. Perhaps here and there, one- or two-dimensional structures were tried, but they didn't work that well for whatever learning objective there was for this neural network.
We do not know for sure, but the learning objective for every neuron could be to minimize surprise, that is, the unexpected behavior of its neighbors, and in order to reach this goal, to learn its own environment. This would have been a nearly impossible task if all of the connections were random. But then suddenly, the Big Bang phase transition took place, and the neurons 'figured out' that it makes sense to develop a new communication protocol by establishing three-dimensional local connections. That marked the emergence of three-dimensional space: the moment when the phase transition, commonly known as the Big Bang, occurred.
Strictly speaking, no, the three-dimensional physical space is not there, it is not real; it is just a convenient way of connecting neurons so that a useful exchange of information can take place.
And then the physical space continues to grow, which in the context of neural physics can be viewed as neurogenesis (more and more neurons joining in), and in the context of standard cosmology as inflation (a quasi-exponential growth of space following the Big Bang).
We can describe it with an analogy from social networks. Imagine a group of people who had just discovered the Internet and started communicating with each other very efficiently, and then others started joining in.
Similarly, new fundamental neurons are joining into our emergent physical space by forming new connections, which within the physical space appears as expansion.
There are still neurons that did not join this emergent physical space and remain outside of it. According to the theory, there must be a hidden space, and just like the physical space, it is made up of the same fundamental neurons. These neurons may not be strongly connected to the neurons in our physical space, but they are still there, somehow exchanging and processing information and learning their environment. So, roughly speaking, there is our visible physical world and a hidden world, a pool of neurons that is mostly hidden from us.
These hidden neurons still have some computational or information processing resources and so it is not unreasonable to assume that we can somehow get connected to that hidden world and use its resources to perform certain computations. We are now exploring a possibility that some quantum or cosmological phenomena (such as dark matter or dark energy) may be related to the existence of the hidden space. Perhaps, even some psychological phenomena may be explained through the learning dynamics in the hidden space, but all these are still very speculative topics that should be explored further.
There are many useful algorithms that appear in subsystems (or subnetworks) of different sizes.
For example, any small subsystem (such as a fundamental neuron, particle, molecule, cell, or multicellular organism) cannot process all the information about its environment, which is much larger, and so it must filter some of the information, or, in other words, decide what is relevant and what is not. This ability to filter out irrelevant information is an algorithm observed in very different subsystems, from particles and molecules to cells and multicellular organisms.
There are, however, more advanced algorithms, such as compression of information or the use of external resources (of either physical or hidden spaces), that are only present in sufficiently large and complex subsystems. Indeed, to improve the storage capacity a subsystem can either compress information (with some losses) and store it internally or use external resources as storage. For example, compression involves finding correlations between the different types of data and using them to reduce the information redundancy, a highly non-trivial task that, perhaps, could be related to the notion of intelligence.
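As a toy illustration of what is meant by compression through correlations, here is a sketch that compresses correlated data by keeping only its few dominant directions (plain principal-component analysis with numpy). The data, the number of kept directions, and the method itself are illustrative assumptions, not something the theory specifies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated "observations": 50 measured variables that are really
# driven by only 3 hidden factors plus a little noise.
hidden = rng.standard_normal((1000, 3))
mixing = rng.standard_normal((3, 50))
data = hidden @ mixing + 0.05 * rng.standard_normal((1000, 50))

# Lossy compression: keep only the k directions that capture the
# correlations between the variables (principal components).
k = 3
mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
codes = (data - mean) @ Vt[:k].T          # compressed form: 3 numbers per sample
reconstruction = codes @ Vt[:k] + mean    # lossy reconstruction: 50 numbers

rel_error = np.linalg.norm(data - reconstruction) / np.linalg.norm(data)
print(f"stored {k} numbers per sample instead of 50, "
      f"relative reconstruction error ~ {rel_error:.3f}")
```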
Alternatively, a subsystem can use the resources of some other sufficiently stable system to store the relevant information externally and internally record only the addresses (or pointers) of where the information is actually stored. Examples include books or disk drives. External resources can also be used for information processing, i.e. outsourcing some computations to either the physical or the hidden space.
If a subsystem opens new communication channels, then it should adjust its filters to close some other channels, or otherwise there would be too much information to process. A simple example of how this mechanism manifests in humans is concentration (or attention) on some kind of activity, which usually requires filtering out other incoming information or signals for a short period of time. And so the filters must be dynamical, which is yet another advanced algorithm that may only be available to sufficiently complex or, if you wish, intelligent systems.
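A toy sketch of such a dynamical filter: here a softmax normalization keeps the total attention fixed, so opening a new, highly relevant channel automatically filters down the others. The softmax form is just one convenient assumption made for the illustration.

```python
import numpy as np

def attention_filter(relevance):
    """Softmax filter: the weights always sum to one, so attending to a
    new channel necessarily takes attention away from the others."""
    z = np.exp(relevance - relevance.max())
    return z / z.sum()

# Three open channels of roughly equal relevance.
relevance = np.array([1.0, 1.0, 1.0])
print(attention_filter(relevance))        # ~[0.33, 0.33, 0.33]

# A new, highly relevant channel opens (e.g. a task requiring concentration);
# the filter adjusts and the older channels are partially closed.
relevance = np.append(relevance, 3.0)
print(attention_filter(relevance))        # new channel dominates, others shrink
```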
It is worth emphasizing that regardless of which subsystem we choose to consider, there may be three relevant components: the subsystem itself (or an agent), its physical environment (or a physical space), and the hidden environment (or a hidden space). And all of the described algorithms can operate on combinations of these components.
For example, an agent compressing information within its own subnetwork would first identify patterns in the perceived world and then compress them within its own model of the world. Another example is an agent accessing the physical or hidden environment to compress and store information (e.g. on a storage device) or to perform computational tasks (e.g. on a digital computer). And if we suppose the existence of the hidden space and connections to it, then we can imagine uploading some information into this hidden space and downloading it from there when we need it. A very speculative idea that has never been verified experimentally, but why not.
Anyway, there are many other examples of algorithms that could be useful for storing and processing information, whether with or without the use of the hidden space. These algorithms are universal, but at different levels of organizational complexity they might manifest themselves as different learning phenomena that may or may not be used. For example, the compression algorithm may not be used in smaller subsystems like atoms and molecules, but it is definitely used in larger subsystems like humans.
On the other hand, there are theoretical calculations suggesting that access to the hidden neurons is essential for the emergence of quantum mechanics on small scales, but it remains to be seen whether larger subsystems like humans are using the hidden space as well. It may also be that the ability to access the hidden space comes with the psychological level for sufficiently complex agents. And if so, then this should be experimentally verifiable, but for that we first need to develop experimental (and also theoretical) methods in psychology, which is a challenging task.
The neural network theory predicts the emergence of different macroscopic phenomena from the learning dynamics of fundamental neurons. Perhaps one of the most important emergent phenomena is a hierarchy of scales with increasing complexity of connections, channels, or protocols of communication. This is what we call multilevel learning. Indeed, the structure of levels is observed throughout the universe, from subatomic particles such as electrons and protons, to atoms, to organic molecules, to cells, to multicellular organisms, to planetary systems and galaxies.
These levels are distinguished by the temporal scales at which the relevant degrees of freedom change: microscopic levels change faster, while macroscopic levels change slower. Furthermore, temporal scale separation and renormalizability appear to be essential conditions for the observability of the universe. If the universe were non-renormalizable, its subsystems (or agents) would not be able to model their environments by coarse-graining over fast-changing levels or over microscopic scales. It is important to note that every higher level can give rise to emergent phenomena (or learning algorithms) that are not known to lower levels, but all these phenomena are considered to emerge from the same learning dynamics of the fundamental neurons.
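As a small illustration of coarse-graining over fast-changing scales, the sketch below averages a fast, noisy "microscopic" signal over blocks of its fast time scale and recovers the slow "macroscopic" trend. The sine-wave trend, the noise level, and the block size are arbitrary choices made only for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# A slow "macroscopic" trend buried under fast "microscopic" fluctuations.
t = np.linspace(0.0, 10.0, 10000)
slow = np.sin(t)                              # slowly changing level
fast = 2.0 * rng.standard_normal(t.size)      # fast, noisy level
signal = slow + fast

# Coarse-graining: average over blocks much longer than the fast time
# scale but much shorter than the slow one.
block = 100
coarse = signal.reshape(-1, block).mean(axis=1)
coarse_t = t.reshape(-1, block).mean(axis=1)

mse = np.mean((coarse - np.sin(coarse_t)) ** 2)
print(f"coarse-grained model recovers the slow level, mse ~ {mse:.3f}")
```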
The increase of complexity of macroscopic levels occurs through phase transitions, which are sudden changes in the learning dynamics, leading to more efficient learning and the ability to solve more complex tasks during the activation (or predicting) stage. In this respect, even biological evolution may be perceived as an outcome of the learning dynamics.
There is often a conflict between different levels, also known as frustration. What is beneficial for an organism may not be beneficial for a particular cell (leading to conflict with a lower level), and may not be beneficial for a group of organisms (leading to conflict with a higher level). It is important to understand that such frustrations cannot be completely resolved, and moreover, the global optimum, or minimum of the loss function, for the system is effectively unattainable. Learning systems tend to quickly reach some local equilibrium state and then struggle to reach a better one. Moreover, the frustrations and the fundamental impossibility of resolving conflicts drive evolution and give rise to evolutionary phase transitions, allowing for the formation of higher levels of complexity.
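A toy illustration of getting stuck in a local equilibrium (not part of the theory itself): plain gradient descent on a made-up loss landscape quickly settles into the nearest local minimum and, without some additional mechanism, never reaches the deeper one.

```python
import numpy as np

# A made-up loss landscape with a shallow local minimum near x = 1.4
# and a deeper, better minimum near x = -1.7.
def loss(x):
    return 0.1 * x**4 - 0.5 * x**2 + 0.3 * x + 1.0

def grad(x):
    return 0.4 * x**3 - x + 0.3

x = 2.5                       # initial state, closest to the shallow minimum
for _ in range(200):
    x -= 0.05 * grad(x)       # plain gradient descent on the loss

print(f"settled at x = {x:.2f}, loss = {loss(x):.3f}")          # local equilibrium
print(f"the better minimum sits near x = -1.71, loss = {loss(-1.71):.3f}")
```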
Another important point is that frustrations and phase transitions occur on various levels, extending beyond biology to encompass fields such as psychology, economics, politics, and more. Even the transition from physics to biology, as we know it, involved numerous distinct phase transitions, also known as major transitions in evolution.
The level of the biosphere is higher than the human or societal level, and so not only biological and psychological, but also political and economic levels come into play. It becomes an increasingly difficult task to model such a system theoretically or numerically, but luckily there is a phenomenological approach that often comes in handy. The idea is to describe the learning dynamics of complex systems using macroscopic or thermodynamic quantities like temperature or entropy, instead of microscopic quantities like the state vector or the weight matrix. And then, in principle, we should be able to measure the relevant quantities and make predictions of how the system would evolve, or how to make it more stable.
Yes, the fundamental neural network is always there and it evolves according to the rules of learning and activation. Neurons update their states due to activation dynamics and the strength of connections between neurons changes due to learning dynamics.
Death is yet another learning algorithm that is beneficial for the higher levels. For the level of the organism that just died, the loss function goes up, but for the level of the group of organisms the process of programmed death is beneficial: its loss function decreases and its fitness increases.
It is also interesting to look at other related phenomena such as sickness or aging. From the neural network theory perspective, these phenomena are related to overfitting, a well-known problem in machine learning.
In a biological context, overfitting corresponds to an agent that is very well trained to predict its current environment, i.e. its loss function is very low, but if the environment suddenly changes, it cannot quickly adapt. In fact, the dinosaurs might have died out because of this, because of overfitting.
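Here is a standard machine-learning illustration of overfitting, sketched in Python: an over-parameterized model fits its "current environment" almost perfectly but fails badly when the environment shifts. The polynomial model and the sine-wave "environment" are arbitrary choices made only for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Current environment": a simple underlying trend observed with noise.
x_train = np.linspace(0.0, 1.0, 12)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(x_train.size)

# An over-parameterized model fits the familiar environment almost perfectly...
model = np.polyfit(x_train, y_train, deg=9)
train_mse = np.mean((np.polyval(model, x_train) - y_train) ** 2)

# ...but once the environment shifts (inputs just outside the familiar
# range), its predictions break down and it cannot adapt without retraining.
x_new = np.linspace(1.0, 1.2, 5)
y_new = np.sin(2 * np.pi * x_new)
new_mse = np.mean((np.polyval(model, x_new) - y_new) ** 2)

print(f"loss in the familiar environment:   {train_mse:.4f}")
print(f"loss after the environment changes: {new_mse:.2f}")
```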
Not necessarily. First of all, it may simply be stored in the physical space, e.g. in children, in students, in books, but it may also get stored in the hidden space, an interesting possibility that should be explored further.
Neurons tend to organize themselves into highly connected clusters since this appears to be a more useful way to process information and to learn the environment. As was already mentioned, having a neuron strongly connected to every other neuron is not very useful, but it is beneficial to filter, that is, to cut some connections and receive information only from neighboring neurons. The local clusters (or subnetworks, or agents) can appear at different levels of the multilevel learning. Elementary particles are clusters, molecules are clusters of clusters, cells are clusters of clusters of clusters, then come multicellular organisms, societies, etc.
The strength of connections: connections that are “external” to an agent are much weaker than connections that are “internal”.
Yes, the somewhat weaker and more dynamic connections represent interactions of an agent with its physical or hidden environment. Moreover, if the local environment contains other agents, then the agent can learn to benefit from communicating with them, to exchange information and to learn their models of the environment. And then some connections from neurons of one agent to neurons of another agent can become stronger allowing for a higher bandwidth of communication channels.
In fact, some neurons specialize in communication; for example, the neurons in the human brain responsible for speech are strongly interconnected and participate in a particular communication chain (via sound, light, etc.) that extends beyond the boundary of a single human. Here, once again, the compression algorithms become useful, in the form of what we call communication protocols, languages, alphabets, etc. As you can see, filtering and compression become crucial mechanisms not only for the internal processing of information by an agent, but also for communication between different agents.
There may also be very weak connections, not perceivable in our usual communication, which can still be functional. For example, there may be some non-local connections between points in our physical space (like telepathy), or even connections to the hidden space (like divinations).
In neural network theory, the degree of consciousness can be defined as the efficiency of learning. Basically, the better and faster an agent is at learning its environment, the more conscious the agent is. Molecules are examples of relatively poor learners (e.g. they have no dynamical filters), cells are somewhat better learners (e.g. they can filter irrelevant information), multicellular organisms are even better learners (e.g. they have a model of self), etc. And so molecules are less conscious, cells are somewhat more conscious, organisms are even more conscious, etc.
All this can be defined rigorously using the mathematical toolbox of artificial neural networks. For example, we can write code for an artificial neural network, measure how good this neural network is at learning by measuring the decay rate of its loss function, and thereby, by definition, measure its consciousness. Although more difficult in practice, in principle one should be able to build something like a consciousness-meter that measures the learning efficiency of biological systems. And if some biological system is a good learner (of, for example, our physical space), if it has a high learning efficiency, then the consciousness-meter would measure a high level of consciousness.
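As a sketch of what such a measurement could look like for an artificial network, the code below trains a tiny linear "agent" on a fixed prediction task and reports the fitted exponential decay rate of its loss, which in this picture would play the role of the consciousness-meter reading. The task, the linear model, and the use of the learning rate as a stand-in for "being a faster learner" are all assumptions made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def learning_efficiency(lr, steps=100):
    """Train a tiny linear 'agent' on a fixed prediction task and return
    the fitted exponential decay rate of its loss curve (the toy
    consciousness-meter reading in this picture)."""
    X = rng.standard_normal((200, 5))        # observations of the environment
    true_w = rng.standard_normal(5)          # the environment's actual rule
    y = X @ true_w
    w = np.zeros(5)                          # the agent's model of the rule
    losses = []
    for _ in range(steps):
        err = X @ w - y
        losses.append(np.mean(err ** 2))
        w -= lr * (X.T @ err) / len(y)       # one learning step on the loss
    # Fit log(loss) ~ a - rate * t; a larger rate means a faster learner.
    t = np.arange(steps)
    rate = -np.polyfit(t, np.log(np.array(losses) + 1e-12), 1)[0]
    return rate

print("slower learner, decay rate:", round(learning_efficiency(lr=0.005), 4))
print("faster learner, decay rate:", round(learning_efficiency(lr=0.05), 4))
```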
Also note that consciousness is not a discrete property (i.e. conscious vs. nonconscious) but a continuous property which is perhaps small for purely physical systems, but larger for biological systems and beyond, e.g. psychological, social or even cosmological systems.
Every system is conscious, and the only real difference for more complex systems, such as humans, is that they have gone through many phase transitions, and so their level of consciousness is higher. But this is not all: we may be going through yet another phase transition to form an even higher level of consciousness at the level of society. Without a working “consciousness-meter” it is hard to tell.
If the entire universe is learning, and if there is nothing outside of it, then it is learning itself.
So the fundamental property must be learning, the learning dynamics, and then consciousness (or learning efficiency) is an emergent property of the fundamental neural network.
At some point an agent may discover that it is useful to model not only the environment, but also itself as an entity interacting with the environment and affecting it. Then the agent starts to learn itself and to build a model of Self. For example, this is what happens with babies as they grow older, and perhaps, this is what happens in the evolutionary process of sufficiently complex organisms.
This is another example of a phase transition. Before the “Self” phase transition organisms could learn the environment without the need to model themselves, and after the phase transition the organisms became capable of constructing models of Self, and taking advantage of modeling themselves.
For example, individual cells don’t model themselves. They don't use the self-modeling algorithm, they just do not have enough resources for that. But the multicellular organisms do have enough resources and so it is beneficial for the individual cells to collaborate with each other in order to achieve a higher level of consciousness. This way their model of the environment becomes much more accurate.
Of course, the growth of consciousness does not stop, because the learning does not stop. Self is just one learning algorithm, but at some point it can be replaced with something else and the previous selves may disappear just because the new algorithm has no use for them.
It may be that the Universe is still heading toward higher integrity and a higher level of consciousness. There are a lot of self-aware parts of it that have not yet been combined together. At least here on Earth, we have not yet made the phase transition to one thinking system that is self-aware as a unity. Right now, society is just too disjointed.
Maybe some alien lifeforms have already made the phase transition and joined themselves into a more coherent structure, and maybe that structure is self-aware. This would explain why we never observe aliens, which would resolve the Fermi paradox, i.e. the discrepancy between the lack of evidence for extraterrestrial life and the high likelihood of its existence. Basically, if other lifeforms, aliens, are advanced enough, then they have already passed through this phase transition, stopped being aliens, and become an observer of a higher level of consciousness, i.e. a super-intelligent observer.
Currently we have a relatively low consciousness as a society; we are not functioning as one group, one network of interconnected nodes. Separately, we are pretty good learners: each of us has figured out how to live in this world, how to walk, how to talk, which is a very complex task. But as a group, all the societies and communities are not adding much to this overall consciousness. We have not learned to collaborate well enough, either with each other or with other organisms, so the combined consciousness is relatively low.
Indeed, consciousness or even a higher level of consciousness may already exist in the hidden space, but most of us cannot perceive it because of the imposed filters and constraints on connections between the hidden space and our physical space. But if the hidden consciousness is real, then we may be able to learn how to adjust our filters and how to exchange and process information more efficiently.
We have demonstrated (at least theoretically) that the existence of the hidden space, of hidden neurons, is essential for the emergence of quantum behavior. In principle, this can be verified experimentally and the existence of the hidden space, at least at the microscopic scales, may be confirmed.
Now, to show that the hidden space has a higher level of consciousness, or that it is self-aware, is a much more difficult task. Even if we manage to build a “consciousness-meter”, a high level of consciousness is only present for sufficiently complex systems. And so we have a problem where one relevant phenomenon, i.e. the hidden space, manifests itself only on smaller scales, while the other relevant phenomenon, i.e. consciousness, manifests itself only on larger scales. I am not saying it is impossible, but it would be a challenge.
If the Neural Network theory is a theory of everything, then we should be able to use it to model literally everything. Yes, the level of rigor and the power of making verifiable predictions may vary between different levels, between different disciplines, between different phenomena, but this should not stop us from trying.
Also, in the context of multilevel learning, it is important to emphasize that the phenomena (or learning algorithms) that appear on one level, in one discipline, may be present on many different levels, in other disciplines. One can then potentially, for example, perform experiments and theoretical modeling in physics or biology and then (after appropriate adjustments of scales, coupling constants, etc.) apply the results to psychology and economics, and vice versa. In other words, a given phenomenon may be better studied at one level, but the results may then be applied and interpreted on other levels.
On a more practical level the discovery of new phenomena on any level may help us to develop better machine learning algorithms and neural architectures.
I would also take a closer look at the ideas in philosophy and religions to see if something can be adapted for modeling natural phenomena.
Also, understanding the role of much more complex social phenomena, such as art, may be essential for modeling social systems and for discovering advanced learning algorithms in them. All of these topics deserve to be carefully analyzed in light of the neural network theory.