Neuron Dynamics — Force Graph
Click nodes/edges to track activation history • Drag to rearrange • Scroll to zoom
Live Frame Analysis
Real-time breakdown of neural activations at the current inference step
Sparse Brain Analytics
Activation density across layers — x (recall activations) vs y (Hebbian gate). BDH achieves ~3–8% sparsity per layer.
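The per-layer sparsity figure can be reproduced directly from activation tensors. A minimal sketch (the array shapes and the ~5% toy data are illustrative, not taken from the running model):

```python
import numpy as np

def layer_sparsity(activations, eps=1e-6):
    """Fraction of neurons with non-negligible activation in each layer.

    activations: array of shape (layers, neurons).
    Returns one density value per layer.
    """
    active = np.abs(activations) > eps
    return active.mean(axis=1)

# Toy data: 4 layers x 1000 neurons with ~5% of entries active,
# in the same ballpark as the ~3-8% sparsity quoted above.
rng = np.random.default_rng(0)
mask = rng.random((4, 1000)) < 0.05
acts = np.where(mask, rng.random((4, 1000)), 0.0)
print(layer_sparsity(acts))  # each entry near 0.05
```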
Graph Topology
Emergent connectivity structure — community detection reveals functional neuron clusters. Color = community. Size = degree.
Attention Atlas
Per-cell incoming attention weight across layers. Brighter cells receive stronger attention signals. Watch pathfinding emerge as attention concentrates on the route.
Concept Probe
Monosemantic neuron analysis — identifies which board concepts (wall, path, open, start, end) each neuron has specialized for. High purity = monosemantic.
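"Purity" here can be computed as the share of a neuron's firings attributable to its dominant concept. A small sketch, assuming each firing event is tagged with one of the board-concept labels listed above:

```python
from collections import Counter

def neuron_purity(firing_concepts):
    """Dominant concept and purity score for one neuron.

    firing_concepts: list of concept labels ('wall', 'path', 'open',
    'start', 'end'), one per firing event of this neuron.
    Purity = fraction of firings tied to the most frequent concept;
    values near 1.0 indicate a monosemantic neuron.
    """
    counts = Counter(firing_concepts)
    top_concept, top_count = counts.most_common(1)[0]
    return top_concept, top_count / len(firing_concepts)

print(neuron_purity(['wall'] * 9 + ['path']))  # ('wall', 0.9): monosemantic
```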
Hebbian Learning — "Fire Together, Wire Together"
3D synapse reinforcement across 30 boards — watch memory pathways emerge as the network processes more board configurations
3D Neural Walkthrough
Memory Formation Charts
Cumulative Hebbian synapse strength Σ|y| and co-activation rate per layer. Shows how memory deepens with each processing layer.
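Both chart series follow from the per-layer gate values. A sketch of the two quantities named above, assuming `y` is available as a (layers, neurons) array:

```python
import numpy as np

def hebbian_memory_curves(y_per_layer):
    """Chart data: cumulative synapse strength and co-activation rate.

    y_per_layer: array of shape (layers, neurons) holding Hebbian
    gate values y. Returns:
      - running total of sum(|y|) across layers (memory deepening)
      - fraction of positively gated neurons per layer (co-activation)
    """
    strength = np.abs(y_per_layer).sum(axis=1)
    cumulative = np.cumsum(strength)
    coactivation = (y_per_layer > 0).mean(axis=1)
    return cumulative, coactivation
```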
Hebbian Memory Formation
The gating mechanism and synapse updates that create persistent memory.
The Neural Pathfinding Engine treats the grid as a graph where functional connectivity emerges through training. The memory component y forms a trace of active transitions — encoding the "path of least resistance."
As the signal propagates from Start → End, Hebbian updates strengthen connections along the shortest path. The memory trace updates as:
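One plausible form of this update is an outer-product Hebbian rule with decay; the rates η and λ below are illustrative placeholders, not values taken from the engine:

```latex
% Sketch of a "fire together, wire together" trace update:
% co-active pre/post activations x_i, x_j reinforce the memory entry,
% while \lambda < 1 lets unused connections fade.
y_{ij} \leftarrow \lambda\, y_{ij} + \eta\, x_i\, x_j
```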
The dual-stream design enables O(T) inference: x carries the current state, y carries accumulated memory — no KV-cache required.
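The O(T) claim follows from the state being fixed-size: each token updates (x, y) once, and nothing grows with context length. A minimal sketch with a hypothetical `step` function (the toy dynamics are illustrative, not the engine's actual update):

```python
import numpy as np

def recurrent_infer(tokens, step, x0, y0):
    """O(T) inference: a fixed-size state (x, y) updated once per token.

    step: (token, x, y) -> (x, y). No cache grows with sequence
    length, unlike a Transformer's KV-cache.
    """
    x, y = x0, y0
    for t in tokens:
        x, y = step(t, x, y)
    return x, y

def toy_step(t, x, y):
    # x: current-state stream; y: decayed Hebbian-style memory trace.
    x = np.tanh(t + 0.5 * x)
    y = 0.9 * y + x * x
    return x, y
```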
Scaling Lab
BDH achieves O(T) inference while Transformer attention is O(T²). Drag the slider to see the compute gap grow with context length.
BDH Architecture — Exhaustive Visual Explainer
End-to-end training pipeline with exact hyperparameters — click the Attention Core block for a recurrent-inference animation