Foundations of an Architecture for True Artificial Intelligence
When you look at the AI literature, you quickly notice that it is described in highly mathematical terms, and this mathematics is built on functions. A function is a fixed construct that computes an output value from an input value: given the same input, it always produces the same output.
An AI that operates strictly according to formulas will therefore always generate the same output for a given input. LLMs (“Large Language Models”), however, use stochastic sampling so that their responses do not appear monotonous; even a small change in the input then leads to a different result. Under the hood, though, an LLM is deterministic in principle. Trained on enormous amounts of data, such models provide good answers on many topics, but the intelligence does not arise from the LLM itself; it comes from the training data. Put simply, they are highly capable search engines that deliver contextually appropriate linguistic results. A minimal sketch of the sampling step follows the list below.
- They master grammar, syntax, and style because they are trained on vast amounts of correctly formulated language.
- They ensure coherence and context because they learn patterns across long text sequences.
- They can also recognize logical patterns because they are trained on formal, logical structures as well.
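To make the sampling point concrete, here is a minimal sketch of temperature sampling over a toy logit vector; the function name and all values are illustrative assumptions, not taken from any particular LLM:

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 1.0, rng=None) -> int:
    """Sample a token index from raw logits using temperature scaling."""
    rng = rng or np.random.default_rng()
    scaled = logits / max(temperature, 1e-8)   # low temperature -> near-greedy
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.5, 0.3, -1.0])
# At temperature > 0 the same input can yield different tokens per call ...
print([sample_token(logits, temperature=0.8) for _ in range(5)])
# ... while temperature near 0 collapses to the deterministic argmax (index 0).
print(sample_token(logits, temperature=1e-8))
```

This is exactly the point made above: the stochastic step sits on top of an otherwise deterministic function of the input.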
What does an AI need in order to achieve real intelligence and consciousness?
An AI can only develop properties that go beyond pure simulation if its system architecture provides sufficient degrees of freedom for autonomous dynamics and non‑deterministic processes.
This means it requires neurogenesis — the continuous formation of new neural connections — as well as a “race” in the processing of a response.
Technically, this implies asynchrony and dynamic changes during operation:
- Neurogenesis
- Adjustments of weights
- Temporal competition between different possible responses
This approach makes the decision-making path unpredictable. However, it opens the door to creativity and autonomous optimization, potentially leading to new and different results.
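As a minimal sketch of such temporal competition, assuming the competing candidate responses can be modeled as concurrent tasks with nondeterministic latency (all names and delays here are illustrative):

```python
import asyncio
import random

async def candidate(name: str) -> str:
    """One possible response pathway; its latency varies nondeterministically."""
    await asyncio.sleep(random.uniform(0.01, 0.1))
    return name

async def race() -> str:
    # Launch several candidate answers concurrently; the first to finish wins.
    tasks = [asyncio.create_task(candidate(f"answer-{i}")) for i in range(3)]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:          # the losing pathways are discarded
        task.cancel()
    return done.pop().result()

# Repeated runs can pick different winners even for identical input.
print(asyncio.run(race()))
```

The decision path is unpredictable precisely because the winner depends on timing, not on a fixed formula.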
At the same time, it carries risks: an AI that is granted freedom and does not merely react like an automaton can easily go astray. For example, an “AI bomb” (in the sense of an autonomous unit) might conclude that its sensors are not providing valid values and decide to explode because it logically infers that it is God and must bring light into the darkness, inspired by the idea from the film Dark Star.
In human terms, we would call this psychosis.
What is consciousness, anyway?
The mirror test is often mentioned in this context.
For example, a colored mark is placed on an elephant’s forehead where it cannot see it.
If the elephant recognizes the mark in the mirror and attempts to remove it, we say that it recognizes itself — that it has self-awareness.
How would such a mirror test work for an AI?
One could train the AI to identify a computer as “me.” Then you show it an image of the computer, and the AI responds: “That’s me.” Next, you paint a colored spot on the computer’s casing and upload the altered image. The AI detects the change in the pattern and says: “That’s me, and someone painted a colored spot on my casing.”
Judged as pattern recognition, which the AI is good at, this is not consciousness: the AI did not arrive at the conclusion through its own free reasoning; it remains a simulation of the test.
How can true artificial intelligence be achieved?
The essential points are:
- Maximum asynchrony
Asynchrony always creates a “race” in processing. How processing will unfold is unpredictable, and it can branch at any moment, which is comparable to spontaneous behavior.
- Self-regulation
Self-regulation arises inevitably because there is no overarching instance dictating what the result should be. It guarantees freedom of choice.
- No separation from the learning environment
A system must not only build and retain knowledge but also discard it. “Dementia” is an important factor for self-regulation: the system must not suffocate under meaningless data. This, however, means that reactivation must be possible when needed. Knowledge arises and persists only through repeated retrieval of previously learned content; its importance is prioritized by the environment. A minimal sketch of such decay with reactivation follows.
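In this sketch, the decay rate, dormancy threshold, and reinforcement step are illustrative assumptions, not values from the text:

```python
import math
import time

class MemoryTrace:
    """A toy knowledge item whose strength decays unless it is retrieved."""

    def __init__(self, content: str, decay_rate: float = 0.1, threshold: float = 0.2):
        self.content = content
        self.strength = 1.0            # freshly learned
        self.decay_rate = decay_rate   # forgetting speed (assumed)
        self.threshold = threshold     # below this the trace is dormant, not deleted
        self.last_access = time.monotonic()

    def _decay(self) -> None:
        now = time.monotonic()
        self.strength *= math.exp(-self.decay_rate * (now - self.last_access))
        self.last_access = now

    def is_dormant(self) -> bool:
        self._decay()
        return self.strength < self.threshold

    def retrieve(self) -> str:
        """Retrieval reinforces the trace; even a dormant trace is reactivated."""
        self._decay()
        self.strength = min(1.0, max(self.strength, self.threshold) + 0.3)
        return self.content
```

Nothing is ever hard-deleted: a dormant trace simply stops influencing processing until the environment triggers its retrieval again.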
- No overarching meta‑instance
Biological neural networks have no such instance, and one is not plausible: a meta-instance would itself already have to possess intelligence. One might argue that ethical behavior is such an instance; in biological systems, however, it is a rule set learned in relation to the environment. This principle must also apply to artificial intelligences.
- All elements possess only their own information
This follows directly from point 4. As soon as elements are merged, a meta-intelligence would be required. Here, Hebb’s learning rule becomes relevant: “Neurons that fire together wire together.” The rule is observable, but how is “together” recognized if each element has only its own isolated information? It becomes possible through temporally correlated feedback and simultaneous firing: a synapse receives feedback as soon as it passes data on, and that feedback is the indicator for weight adjustments. A sketch of this follows.
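A minimal sketch of such feedback-driven plasticity, assuming each synapse knows only its own last transmission time; the 50 ms correlation window and the step sizes are illustrative assumptions:

```python
import time

class PlasticSynapse:
    """Hebbian-style weight change driven only by the synapse's own events."""

    def __init__(self, weight: float = 0.5, window: float = 0.05):
        self.weight = weight
        self.window = window           # correlation window in seconds (assumed)
        self.last_transmit = 0.0

    def transmit(self, potential: float) -> float:
        self.last_transmit = time.monotonic()
        return potential * self.weight

    def on_feedback(self) -> None:
        """Downstream activity reported back shortly after a transmission
        counts as 'fired together' and strengthens the connection."""
        if time.monotonic() - self.last_transmit <= self.window:
            self.weight = min(1.0, self.weight + 0.05)   # correlated: strengthen
        else:
            self.weight = max(0.0, self.weight - 0.01)   # uncorrelated: weaken
```

No element ever sees another element's state; “together” is inferred purely from the timing of local events.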
- Attractors as signaling agents for neurogenesis
This is a technical description of neurogenesis. How can a neuron form new connections to another neuron? Again, the feedback system plays a role: a deficit or surplus is broadcast into the surrounding “space.” In a sense, this contradicts point 5, because it establishes a relation between two separate neuron elements. However, the neuron elements are not completely isolated; they can receive external signals and respond to them. A sketch of such an attractor signal follows.
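A minimal sketch of an attractor acting as a growth signal; the exponential falloff and the growth threshold are assumptions chosen only for illustration:

```python
import math

def field_strength(surplus: float, distance: float, falloff: float = 1.0) -> float:
    """Attractor signal decaying with distance (falloff constant is assumed)."""
    return surplus * math.exp(-falloff * distance)

class GrowingNeuron:
    """Senses the local field and sprouts a connection toward a strong enough
    attractor, without knowing anything about the emitter's internals."""

    def __init__(self, position: tuple[float, float], grow_threshold: float = 0.3):
        self.position = position
        self.grow_threshold = grow_threshold
        self.connections: list[tuple[float, float]] = []

    def sense(self, source_pos: tuple[float, float], surplus: float) -> None:
        d = math.dist(self.position, source_pos)
        if field_strength(surplus, d) >= self.grow_threshold:
            self.connections.append(source_pos)  # new connection: neurogenesis

n = GrowingNeuron(position=(0.0, 0.0))
n.sense(source_pos=(1.0, 0.0), surplus=1.0)   # strong, nearby attractor -> grows
n.sense(source_pos=(5.0, 0.0), surplus=1.0)   # too far away -> no new connection
print(n.connections)
```

The emitter only broadcasts its deficit or surplus into the space; whether a connection forms is decided locally by the receiving element.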
- Sensors provide directed feedback (too much, too little, too late, etc.)
Technically, I assign sensors to the environment. In biology, for example, the skin sends signals (hard/soft) to the brain, always as a delta value. The network is thus already given a direction; classical evaluative feedback does not exist. Since a neural network cannot compute such a delta on its own, the delta must come from another system. What it looks like depends on the environment; therefore the environment must determine the delta, just as the environment supplies the feedback signal for an agent in reinforcement learning (RL).
For example, a temperature controller reports to the network: “The room is 5 degrees too warm.” A minimal sketch of such a directed delta follows.
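Here, the sensor name and setpoint are hypothetical illustrations; the only essential point is that the environment, not the network, computes the signed delta:

```python
class TemperatureSensor:
    """Environment-side sensor: reports a signed, directed delta, not a reward."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def delta(self, measured: float) -> float:
        """Positive: too warm by that many degrees; negative: too cold."""
        return measured - self.setpoint

sensor = TemperatureSensor(setpoint=21.0)
d = sensor.delta(measured=26.0)
print(f"The room is {d:+.1f} degrees relative to the setpoint")  # +5.0 -> too warm
```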
- Loose coupling of network elements
In a highly asynchronous system with neurogenesis, “dementia,” and massive structures, loose coupling is the only way to enable a functional application. Relational databases are unsuitable because structures can change frequently and asynchronously, so consistent transaction units can hardly be guaranteed. A message-passing sketch follows.
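A minimal sketch of loose coupling via message passing, assuming each element owns a mailbox and no cross-element transactions exist:

```python
import queue
import threading

class Element:
    """Elements interact only via messages, never via shared state
    or cross-element transactions."""

    def __init__(self, name: str):
        self.name = name
        self.mailbox: "queue.Queue[float]" = queue.Queue()

    def send(self, other: "Element", potential: float) -> None:
        other.mailbox.put(potential)    # fire-and-forget, no global lock

    def run(self, steps: int) -> None:
        for _ in range(steps):
            potential = self.mailbox.get()   # process whenever a message arrives
            print(f"{self.name} received {potential}")

a, b = Element("A"), Element("B")
t = threading.Thread(target=b.run, args=(1,))
t.start()
a.send(b, 0.8)
t.join()
```

Because each element only ever touches its own mailbox, structure can change at any time without breaking a global transaction unit.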
- Learning content must be provided through the environment
  - Goals: Define what the system should achieve and optimize for.
  - Temporal Behavior: Ensure the system reacts correctly over time, including timing, stability, and responsiveness.
  - Rules / Ethics / Safety: Set boundaries that prevent harmful actions and enforce safe, responsible behavior.
  - Perception: Enable the system to interpret inputs from its environment, such as sensors or user interactions.
  - Decision Logic: Guide how the system selects actions based on goals, constraints, and current conditions.
  - Learning / Adaptation: Allow the system to improve performance through experience and environmental feedback.
A word on data flow in the neural network
For technical reasons, I assigned the weights to the dendrites rather than to the synapses. Since the synapses connect to the dendrites, this makes practically no difference.
The potential of each dendrite is transmitted to the soma with its assigned weight. There, the weighted potentials are summed, and the sole purpose of this summation is to determine whether a threshold value has been reached; beyond that, the dendritic potential has no further influence. It therefore serves only to activate the soma.
Within the soma, there is Short-Term Plasticity (STP) for short-term processing and Long-Term Plasticity (LTP) for long-term processing.
These two mechanisms contain the actual memory information.
STP differs from LTP in the time required to return to a resting value.
The potentials regenerate over time, influenced by the frequency of activation.
In my system, STP and LTP jointly determine the strength of the action potential, which is then propagated along the axon.
The action potential is subsequently passed on to the synapses.
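A minimal sketch of this data flow; all time constants and increments are illustrative assumptions, and the traces stand in for the STP and LTP mechanisms described above:

```python
import math
import time

class Neuron:
    """Dendrite potentials are weighted and summed in the soma; if the
    threshold is reached, STP and LTP jointly scale the action potential."""

    def __init__(self, dendrite_weights: list[float], threshold: float = 1.0):
        self.weights = dendrite_weights    # weights live on the dendrites
        self.threshold = threshold
        self.stp = 0.0                     # short-term trace, decays quickly
        self.ltp = 0.0                     # long-term trace, decays slowly
        self.last_fire = time.monotonic()

    def _decay(self) -> None:
        elapsed = time.monotonic() - self.last_fire
        self.stp *= math.exp(-5.0 * elapsed)    # fast return to resting value
        self.ltp *= math.exp(-0.05 * elapsed)   # slow return to resting value

    def stimulate(self, dendrite_potentials: list[float]) -> float | None:
        """Returns the action potential, or None if the threshold is not met."""
        summed = sum(w * p for w, p in zip(self.weights, dendrite_potentials))
        if summed < self.threshold:
            return None                    # dendritic input only gates the soma
        self._decay()
        self.stp = min(1.0, self.stp + 0.5)   # frequent firing builds both traces
        self.ltp = min(1.0, self.ltp + 0.1)
        self.last_fire = time.monotonic()
        return 1.0 + self.stp + self.ltp      # STP and LTP set the strength

n = Neuron(dendrite_weights=[0.6, 0.4])
print(n.stimulate([1.0, 1.0]))   # reaches threshold -> fires
print(n.stimulate([0.2, 0.1]))   # below threshold -> None
```

Note how the dendritic sum decides only whether the neuron fires; the strength of the resulting action potential comes entirely from the two plasticity traces.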
A synapse can form a connection to exactly one neural element — that is, to an effector, dendrite, soma, axon, or even another synapse.
Each synapse has either an inhibitory or an excitatory property.
This property strengthens or weakens the potential of the connected neural element.
A synapse connected to an effector is always excitatory, since only positive activation values are processed there.
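A minimal sketch of this synapse model; for simplicity, the inhibitory/excitatory property is modeled as a sign applied inside the synapse, and the effector check mirrors the rule that only positive activations are processed there:

```python
class Effector:
    """Effectors process only positive activation values."""
    def receive(self, potential: float) -> None:
        if potential > 0:
            print(f"Effector activated with {potential:.2f}")

class Synapse:
    """Connects to exactly one neural element (effector, dendrite, soma,
    axon, or another synapse) and is either excitatory or inhibitory."""

    def __init__(self, target, excitatory: bool = True):
        self.target = target                       # exactly one connection
        self.sign = 1.0 if excitatory else -1.0    # excitatory or inhibitory

    def relay(self, action_potential: float) -> None:
        self.target.receive(self.sign * action_potential)

# A synapse driving an effector must be excitatory:
Synapse(Effector(), excitatory=True).relay(0.7)
```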