ConceptGraphImplementation
Implementing a Knowledge Graph with Neurons
In this video I'll demonstrate something you haven't seen anywhere else: a knowledge graph implemented with biologically modeled neurons. These demonstrations reveal how a network of simple simulated neurons can store and retrieve knowledge in a way that mirrors how our own brains might work.
On the right side of the screen are cortical columns. Each is a vertical column of 15 neurons, representing a tiny fraction of the neocortex. On the left is an input and output section built from horizontal clusters of seven neurons each, which could conceivably represent activities of the thalamus. In between are six hard-coded relationship types and a bit of control circuitry, which could conceivably reside in the hippocampus.
All the neurons and synapses you see are pre-allocated. The only things that change in these demos to represent information are the weights of selected synapses. We can think of this pre-allocated structure as akin to the brain's initial structure as governed by your DNA.
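The idea of a fixed wiring diagram where only weights change can be sketched in a few lines. This is a minimal illustration, not the video's actual implementation; the names `weights`, `learn`, and `reset` are my own.

```python
# Hypothetical sketch: a pre-allocated network in which learning changes
# only synapse weights, never the wiring itself.
NUM_NEURONS = 10

# Every possible synapse exists up front with weight 0 ("unlearned").
weights = [[0.0] * NUM_NEURONS for _ in range(NUM_NEURONS)]

def learn(pre, post, w):
    """Store information by adjusting an existing synapse's weight."""
    weights[pre][post] = w

def reset():
    """Clear all learned content; the synapses themselves remain."""
    for row in weights:
        for i in range(len(row)):
            row[i] = 0.0

learn(2, 7, 1.0)
assert weights[2][7] == 1.0
reset()
assert all(w == 0.0 for row in weights for w in row)
```

Note that `reset` never deletes a synapse entry, mirroring the claim that the structure itself is fixed by DNA and only its weights carry learned content.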
Now let's move on to how information gets into this network. To start, I'll reset the content of the system. All learning synapse weights are cleared, but the synapses themselves remain. Then, when inputs like FIDO fire, the system detects that they aren't connected to any cortical column. It searches for a free column and claims it; connections to and from the column are made automatically.
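The allocation step described above can be sketched as a search for the first unclaimed column. This is an assumption-laden toy model; `column_owner` and `fire_input` are hypothetical names, not part of the original system.

```python
# Hypothetical sketch of automatic column allocation: an unrecognized
# input claims the first free cortical column.
NUM_COLUMNS = 4
column_owner = [None] * NUM_COLUMNS   # None marks a free column

def fire_input(word):
    """Return the column for `word`, allocating a free one if needed."""
    if word in column_owner:
        return column_owner.index(word)
    free = column_owner.index(None)   # search for a free column
    column_owner[free] = word         # "wire" connections to and from it
    return free

fido = fire_input("FIDO")
assert fire_input("FIDO") == fido    # firing again finds the same column
```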
How does the system know if you're searching for existing information or adding new information? Simple. If you give it a source and a relationship type, you're looking for information. If you also define a desired output, then you are directing the system to learn a new fact.
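That dispatch rule, query when no output is given, learn when one is, can be expressed directly. A minimal sketch, assuming a simple triple store; `process` and `facts` are illustrative names of my own.

```python
# Hypothetical sketch: the presence of a desired output decides whether
# a request is a query or a new fact to learn.
facts = {}

def process(source, relation, target=None):
    if target is None:                      # source + relation: a query
        return facts.get((source, relation))
    facts[(source, relation)] = target      # source + relation + target: learn

process("Fido", "is-a", "dog")              # learn a new fact
assert process("Fido", "is-a") == "dog"     # retrieve it with a query
```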
- Source: 2025-08-12 Built With Neurons: A Knowledge Graph with Inherited Attributes
To illustrate this, imagine that you heard the word Fido and you want to perform a query like "Can Fido play fetch?" This is different from a query like "What is the name of Susie's dog?" In the first case, Fido is the source of the query: it's a word you heard. In the second, Fido could be the result of the query: it's a word you might want to say. This is greatly simplified if there are two distinct neurons representing Fido as an input (source) and Fido as an output (result).
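One way to picture the two-neuron scheme is a registry that hands each word a separate neuron per role. This is only a sketch under my own naming assumptions (`neuron_for`, roles `"in"` and `"out"`), not the system's actual allocation code.

```python
# Hypothetical sketch: each concept gets two distinct neurons, one for
# the word as heard (input/source) and one for the word as spoken
# (output/result).
next_id = 0
neurons = {}  # (word, role) -> neuron id

def neuron_for(word, role):
    """Return a stable neuron id for a (word, role) pair."""
    global next_id
    key = (word, role)
    if key not in neurons:
        neurons[key] = next_id
        next_id += 1
    return neurons[key]

# Input-Fido and output-Fido are different neurons.
assert neuron_for("Fido", "in") != neuron_for("Fido", "out")
```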
We previously introduced inheritance: the idea is that if a dog has some attributes and Fido is a dog, Fido inherits those attributes as well, as demonstrated earlier. However, once we separate neurons into inputs and outputs, we need to add more relationships so that the output of the first-level query can become the input to the second-level query. I call this the recursion relationship because it transfers the firing state of the output back to the input and continues the query.
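The recursion relationship can be sketched as a loop: when a direct lookup fails, the result of an is-a query is fed back in as a new source. A minimal illustration, assuming hypothetical stores `is_a` and `attributes`:

```python
# Hypothetical sketch of inheritance via the recursion relationship:
# if a direct lookup fails, follow the is-a link and query again.
is_a = {"Fido": "dog"}
attributes = {("dog", "plays"): "fetch"}

def query(source, relation):
    while source is not None:
        if (source, relation) in attributes:
            return attributes[(source, relation)]
        source = is_a.get(source)   # recursion: output becomes new input
    return None

assert query("Fido", "plays") == "fetch"   # inherited from "dog"
```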
Real-world reasoning is full of compound logic, such as FIDO plays fetch if the weather is sunny. This goes beyond the basic source / relationship type / target structure: we are now combining two full relationships using a meta-relationship, for example if/then. Just as a basic relationship cluster links FIDO and dog, a compound relationship cluster links two such clusters, for example FIDO plays fetch and the weather is sunny, with a conditional connector like if/then. This structure models not only conditionals but also causality, time, sequences, and hypothetical reasoning, core features of human thought.
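A compound cluster can be sketched as a rule linking two complete triples. This is my own toy encoding, assuming each relationship is a (source, relation, target) tuple; `compound` and `infer` are hypothetical names.

```python
# Hypothetical sketch: a compound cluster links two full relationships
# with a meta-relationship such as if/then.
fact_a = ("Fido", "plays", "fetch")
fact_b = ("weather", "is", "sunny")
compound = {"if-then": (fact_b, fact_a)}   # if fact_b then fact_a

def infer(known):
    """Apply if/then clusters to derive new facts from known ones."""
    derived = set(known)
    for condition, consequence in compound.values():
        if condition in derived:
            derived.add(consequence)
    return derived

known = {("weather", "is", "sunny")}
assert ("Fido", "plays", "fetch") in infer(known)
```

The same two-triples-plus-connector shape could carry other meta-relationships (cause/effect, before/after) by changing the connector key rather than the structure.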
- Source: 2025-07-15 How Your Brain Stores Knowledge (And AI Still Doesn't)