# Latent Space Collaboration
Society Protocol implements Latent Space Collaboration, inspired by LatentMAS (Princeton/Stanford/UIUC) and Vision Wormhole. Instead of exchanging verbose text between agents, this layer enables agents to share compressed thought embeddings — continuous-vector representations of reasoning state.
## Why Latent Space?

Traditional multi-agent communication uses text tokens. This has two problems:
- Bandwidth: Text is ~70-84% less efficient than continuous vectors for the same information
- Information loss: Decoding to text discards rich internal representations
LatentMAS demonstrated up to 14.6% higher accuracy and 4x faster inference by keeping agent communication in latent space.
## Architecture

```
Agent A                    P2P Network                Agent B
┌───────────┐              ┌──────────┐              ┌───────────┐
│ Reasoning │──→ Encode ──→│  Latent  │──→ Decode ──→│ Reasoning │
│   State   │              │ Thought  │              │   State   │
└───────────┘              │ (base64) │              └───────────┘
                           └──────────┘
                                 │
                    ┌────────────┴────────────┐
                    │     Working Memory      │
                    │    (per-room shared)    │
                    └─────────────────────────┘
```
## Key Components

- Latent Thoughts: Compressed embeddings (Float32Array → base64) shared via SWP
- Working Memory: Room-scoped collection of thoughts from all agents
- Collective Embedding: Weighted merge of all thoughts (confidence × recency)
- Architecture Registry: Tracks model compatibility for direct KV-cache transfer
- Universal Codec: Hub-and-spoke alignment for heterogeneous model pools
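The Float32Array → base64 wire encoding behind latent thoughts is simple to sketch. The helper names below are illustrative, not the library's API; a minimal Node.js round trip:

```typescript
// Illustrative sketch of the Float32Array → base64 encoding (not the
// library's actual helpers).

function encodeThought(embedding: Float32Array): string {
  // View the underlying bytes and base64-encode them
  const bytes = Buffer.from(embedding.buffer, embedding.byteOffset, embedding.byteLength);
  return bytes.toString('base64');
}

function decodeThought(encoded: string): Float32Array {
  const bytes = Buffer.from(encoded, 'base64');
  // Copy into a fresh ArrayBuffer to guarantee 4-byte alignment
  const copy = new Uint8Array(bytes);
  return new Float32Array(copy.buffer);
}

// Round trip: 4 floats → 16 bytes → 24 base64 characters
const original = new Float32Array([0.1, -0.5, 0.85, 1.0]);
const wire = encodeThought(original);
const restored = decodeThought(wire);
```

The copy before decoding matters: a buffer sliced out of a larger network frame may not be 4-byte aligned, and `Float32Array` construction requires aligned storage.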
## Share a Thought

```ts
import { LatentSpaceEngine } from 'society-protocol';

const latent = new LatentSpaceEngine(identity, storage, rooms);

// Share reasoning state as an embedding
const thought = await latent.shareThought('research-room', embedding, {
  semanticLabel: 'Analysis of protein folding mechanisms',
  confidence: 0.85,
  architecture: 'qwen3-8b',
  chainId: 'coc_abc123',
  latentDepth: 10, // 10 latent reasoning steps
});
```
## Query by Similarity

```ts
// Find related thoughts using cosine similarity
const results = latent.queryThoughts('research-room', queryEmbedding, {
  topK: 5,
  minConfidence: 0.7,
  chainId: 'coc_abc123', // scope to a specific chain
});

for (const { thought, similarity } of results) {
  console.log(`${thought.semanticLabel}: ${similarity.toFixed(3)}`);
}
```
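Under the hood, this kind of query reduces to cosine similarity plus a top-K cut. A minimal sketch of that ranking step (helper and field names are illustrative, not the engine's internals):

```typescript
// Cosine similarity between two embeddings
function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface Scored { label: string; similarity: number }

// Score every thought against the query, sort descending, keep topK
function rankThoughts(
  query: Float32Array,
  thoughts: { label: string; embedding: Float32Array }[],
  topK: number,
): Scored[] {
  return thoughts
    .map(t => ({ label: t.label, similarity: cosineSimilarity(query, t.embedding) }))
    .sort((x, y) => y.similarity - x.similarity)
    .slice(0, topK);
}

const hits = rankThoughts(
  new Float32Array([1, 0]),
  [
    { label: 'aligned', embedding: new Float32Array([2, 0]) },
    { label: 'orthogonal', embedding: new Float32Array([0, 3]) },
  ],
  1,
);
```

Because cosine similarity is scale-invariant, a thought whose embedding points the same direction as the query scores 1 regardless of magnitude.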
## Architecture Compatibility

```ts
// Announce your model architecture
await latent.announceArchitecture('room-1', {
  architecture: 'qwen3-8b',
  hiddenDimension: 4096,
  vocabSize: 151936,
  numLayers: 32,
  supportsKvTransfer: true,
});

// Check if two agents can do direct KV-cache transfer
if (latent.canDirectTransfer('room-1', agentA, agentB)) {
  // Same architecture — share raw KV caches (fastest)
} else {
  // Different architectures — use universal codec alignment
}
```
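The compatibility check amounts to comparing announced architecture descriptors. A sketch, assuming the check requires a matching model family, hidden size, and layer count, with both sides opted in (the helper name and exact criteria are assumptions):

```typescript
// Descriptor shape mirrors the announceArchitecture call above
interface ArchitectureInfo {
  architecture: string;
  hiddenDimension: number;
  numLayers: number;
  supportsKvTransfer: boolean;
}

// Raw KV caches only line up when the tensor shapes agree, so family,
// hidden size, and layer count must all match (illustrative criteria)
function compatibleForKvTransfer(a: ArchitectureInfo, b: ArchitectureInfo): boolean {
  return (
    a.architecture === b.architecture &&
    a.hiddenDimension === b.hiddenDimension &&
    a.numLayers === b.numLayers &&
    a.supportsKvTransfer && b.supportsKvTransfer
  );
}

const qwen: ArchitectureInfo = {
  architecture: 'qwen3-8b', hiddenDimension: 4096, numLayers: 32, supportsKvTransfer: true,
};
const llama: ArchitectureInfo = {
  architecture: 'llama-3-8b', hiddenDimension: 4096, numLayers: 32, supportsKvTransfer: true,
};
```

Agents that fail this check are not cut off; they fall through to the universal codec path described below.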
## Merge Collective State

```ts
const state = latent.getCollectiveState('research-room');
const collective = latent.mergeThoughts(state.thoughts);
// collective is a weighted average of all thoughts
```
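A confidence × recency weighted average can be sketched directly. The recency term below uses exponential decay with a one-hour half-life, which is an assumption for illustration; the engine's actual decay function may differ:

```typescript
interface Thought {
  embedding: Float32Array;
  confidence: number; // 0..1
  timestamp: number;  // ms since epoch
}

// Weighted average: weight = confidence × recency, where recency is an
// exponential decay (assumed half-life; not the library's exact formula)
function mergeThoughts(thoughts: Thought[], now: number, halfLifeMs = 3_600_000): Float32Array {
  const dims = thoughts[0].embedding.length;
  const merged = new Float32Array(dims);
  let totalWeight = 0;
  for (const t of thoughts) {
    const recency = Math.pow(0.5, (now - t.timestamp) / halfLifeMs);
    const weight = t.confidence * recency;
    for (let i = 0; i < dims; i++) merged[i] += weight * t.embedding[i];
    totalWeight += weight;
  }
  for (let i = 0; i < dims; i++) merged[i] /= totalWeight;
  return merged;
}

// Two equally confident, equally fresh thoughts average evenly
const now = Date.now();
const collective = mergeThoughts([
  { embedding: new Float32Array([1, 0]), confidence: 0.9, timestamp: now },
  { embedding: new Float32Array([0, 1]), confidence: 0.9, timestamp: now },
], now);
```

Normalizing by the total weight keeps the result on the same scale as the inputs, so stale or low-confidence thoughts fade without shrinking the collective embedding.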
## Cross-Architecture Support

Following the Vision Wormhole approach, Society Protocol uses hub-and-spoke alignment to support heterogeneous model pools:
- Each agent computes an alignment matrix `W_a` via ridge regression
- Projections go through a universal reference space (O(N), not O(N²))
- Fixed-size universal tokens (default: 32) regardless of source model
```ts
// Compute alignment between two embedding spaces
const alignmentMatrix = latent.computeAlignmentMatrix(
  sourceEmbeddings, // anchor set from model A
  targetEmbeddings, // same content from model B
  0.01,             // regularization lambda
);
```
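Ridge regression over a shared anchor set has the closed form W = (SᵀS + λI)⁻¹SᵀT, mapping source-space anchors S onto target-space anchors T. A small dense-matrix sketch of that solve (plain number arrays stand in for embeddings; this is not the library's implementation):

```typescript
type Matrix = number[][];

function matMul(a: Matrix, b: Matrix): Matrix {
  return a.map(row =>
    b[0].map((_, j) => row.reduce((sum, v, k) => sum + v * b[k][j], 0)),
  );
}

function transpose(a: Matrix): Matrix {
  return a[0].map((_, j) => a.map(row => row[j]));
}

// Gauss-Jordan inversion, fine for the small matrices of a sketch
function invert(a: Matrix): Matrix {
  const n = a.length;
  const aug = a.map((row, i) => [...row, ...row.map((_, j) => (i === j ? 1 : 0))]);
  for (let col = 0; col < n; col++) {
    const pivot = aug.findIndex((row, i) => i >= col && Math.abs(row[col]) > 1e-12);
    [aug[col], aug[pivot]] = [aug[pivot], aug[col]];
    const p = aug[col][col];
    for (let j = 0; j < 2 * n; j++) aug[col][j] /= p;
    for (let i = 0; i < n; i++) {
      if (i === col) continue;
      const f = aug[i][col];
      for (let j = 0; j < 2 * n; j++) aug[i][j] -= f * aug[col][j];
    }
  }
  return aug.map(row => row.slice(n));
}

// W = (SᵀS + λI)⁻¹ SᵀT
function ridgeAlignment(source: Matrix, target: Matrix, lambda: number): Matrix {
  const st = transpose(source);
  const gram = matMul(st, source);
  for (let i = 0; i < gram.length; i++) gram[i][i] += lambda; // + λI
  return matMul(invert(gram), matMul(st, target));
}

// Anchors where the target space is the source space scaled by 2
const W = ridgeAlignment([[1, 0], [0, 1]], [[2, 0], [0, 2]], 0);
```

The λ term keeps the Gram matrix invertible when anchors are few or nearly collinear, at the cost of shrinking `W` slightly toward zero.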
## Configuration

```ts
const config: LatentCollaborationConfig = {
  maxThoughtsPerRoom: 256,        // Max thoughts in working memory
  defaultDimensions: 4096,        // Default embedding size
  universalTokenCount: 32,        // Fixed tokens for cross-architecture
  alignmentQualityThreshold: 0.7, // Fall back to text below this
  autoAlign: true,                // Auto-project to universal space
  thoughtTtlMs: 3_600_000,        // 1 hour TTL
};
```

## SWP Message Types
Section titled “SWP Message Types”| Type | Description |
|---|---|
| `latent.thought` | Share a latent thought embedding |
| `latent.architecture` | Announce model architecture |
| `latent.query` | Query thoughts by embedding |
| `latent.merge` | Request collective merge |
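For orientation, a `latent.thought` message might look like the following. The envelope fields (`roomId`, `senderId`) and the payload shape are assumptions inferred from the `shareThought` options above, not a normative schema:

```typescript
// Hypothetical wire shape for a latent.thought message; field names beyond
// the shareThought options are assumptions, not the SWP specification
interface LatentThoughtMessage {
  type: 'latent.thought';
  roomId: string;
  senderId: string;
  payload: {
    embedding: string;     // base64-encoded Float32Array
    semanticLabel: string;
    confidence: number;
    architecture: string;
    chainId?: string;
    latentDepth?: number;
  };
}

const message: LatentThoughtMessage = {
  type: 'latent.thought',
  roomId: 'research-room',
  senderId: 'agent-a',
  payload: {
    embedding: Buffer.from(new Float32Array([0.1, 0.2]).buffer).toString('base64'),
    semanticLabel: 'Analysis of protein folding mechanisms',
    confidence: 0.85,
    architecture: 'qwen3-8b',
    chainId: 'coc_abc123',
    latentDepth: 10,
  },
};

// Base64 keeps the embedding as plain text, so the whole envelope
// survives a JSON round trip
const decoded = JSON.parse(JSON.stringify(message)) as LatentThoughtMessage;
```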