BDH Neural Pathfinding

Interactive Explainer Dashboard

Neuron Dynamics — Force Graph

Click nodes/edges to track activation history • Drag to rearrange • Scroll to zoom

Live Frame Analysis

Real-time breakdown of neural activations at the current inference step

Sparse Brain Analytics

Activation density across layers, comparing the x stream (recall activations) with the y stream (Hebbian gate). BDH achieves ~3–8% sparsity per layer.
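The sparsity figure can be measured directly as the fraction of nonzero units per layer. A minimal sketch; the layer size and tolerance here are illustrative assumptions, not values from the dashboard:

```python
import numpy as np

def layer_density(acts, tol=1e-6):
    """Fraction of neurons active in one layer (lower = sparser)."""
    acts = np.asarray(acts)
    return float(np.mean(np.abs(acts) > tol))

# Example: 5 active units out of 100 gives 5% density, inside the
# ~3-8% range quoted above.
layer = np.zeros(100)
layer[:5] = 1.0
print(layer_density(layer))  # 0.05
```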

Graph Topology

Emergent connectivity structure — community detection reveals functional neuron clusters. Color = community. Size = degree.

Attention Atlas

Per-cell incoming attention weight across layers. Brighter cells receive stronger attention signals. Watch pathfinding emerge as attention concentrates on the route.

Concept Probe

Monosemantic neuron analysis — identifies which board concepts (wall, path, open, start, end) each neuron has specialized for. High purity = monosemantic.
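Purity here can be read as the share of a neuron's activation mass that falls on its dominant concept. A hypothetical sketch; the concept labels follow the caption, but the metric itself is an assumption about what the probe reports:

```python
import numpy as np

# Board concepts from the caption above.
CONCEPTS = ["wall", "path", "open", "start", "end"]

def concept_purity(act_by_concept):
    """act_by_concept: mean |activation| of one neuron per board concept."""
    a = np.abs(np.asarray(act_by_concept, dtype=float))
    total = a.sum()
    return float(a.max() / total) if total > 0 else 0.0

# A neuron that fires almost only on "path" cells is near-monosemantic.
print(concept_purity([0.01, 0.95, 0.02, 0.01, 0.01]))  # ~0.95
```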

Hebbian Learning — "Fire Together, Wire Together"

3D synapse reinforcement across 30 boards — watch memory pathways emerge as the network processes more board configurations

3D Neural Walkthrough

Activation legend: Inactive → Low → Medium → High → Peak

Memory Formation Charts

Cumulative Hebbian synapse strength Σ|y| and co-activation rate per layer. Shows how memory deepens with each processing layer.

Hebbian Memory Formation

The gating mechanism and synapse updates that create persistent memory.

The Neural Pathfinding Engine treats the grid as a graph where functional connectivity emerges through training. The memory component y forms a trace of active transitions — encoding the "path of least resistance."

$$ G_x = E \otimes D_x \quad \text{where} \quad x = \text{ReLU}(v^* \cdot D_x) $$

As the signal propagates from Start → End, Hebbian updates strengthen connections along the shortest path. The memory trace updates as:

$$ y^{(l+1)} = \alpha y^{(l)} + \eta (y^{(l)} \cdot W^T) \odot x^{(l)} $$
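A minimal NumPy sketch of this update rule; the sizes, rates (α, η), depth, and initial trace are all illustrative assumptions:

```python
import numpy as np

# Hypothetical sizes and rates; the text does not pin these down.
N, ALPHA, ETA, LAYERS = 64, 0.9, 0.05, 4  # neurons, decay, Hebbian rate, depth

rng = np.random.default_rng(0)
W = rng.standard_normal((N, N)) / np.sqrt(N)  # recurrent weights
x = np.maximum(rng.standard_normal(N), 0.0)   # sparse ReLU activations x^(l)

def hebbian_step(y, x, W, alpha=ALPHA, eta=ETA):
    """y^(l+1) = alpha * y^(l) + eta * (y^(l) @ W.T) * x^(l)."""
    return alpha * y + eta * (y @ W.T) * x

y = x.copy()  # seed the trace (assumption: the update alone keeps a zero trace at zero)
for _ in range(LAYERS):  # memory deepens layer by layer
    y = hebbian_step(y, x, W)
```

Note that the decay term αy preserves earlier co-activations while the gated term η(yWᵀ)⊙x only reinforces synapses whose target neuron is currently firing, which is the "fire together, wire together" behavior the panel visualizes.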

The dual-stream design enables O(T) inference: x carries the current state, y carries accumulated memory — no KV-cache required.

Scaling Lab

BDH achieves O(T) inference while Transformer attention is O(T²). Drag the slider to see the compute gap grow with context length.
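The gap the slider shows can be sketched with a back-of-the-envelope compute count; the model width and constant factors below are assumptions, and only the T vs T² scaling is the point:

```python
D = 512  # hypothetical model width

def transformer_flops(T, d=D):
    return T * T * d  # each token attends over the whole context: O(T^2)

def bdh_flops(T, d=D):
    return T * d      # one recurrent dual-stream update per token: O(T)

for T in (1_000, 10_000, 100_000):
    ratio = transformer_flops(T) / bdh_flops(T)
    print(f"T={T:>7}: attention/BDH compute ratio = {ratio:,.0f}x")
```

The ratio grows linearly with context length T, which is why the gap in the panel widens as the slider moves right.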

BDH Architecture — Exhaustive Visual Explainer

End-to-end training pipeline with exact hyperparameters — click the Attention Core block for recurrent inference animation