Nota Memoria Engine (NME)
Memory structuring service between intent processing and wave-based storage. Transforms unstructured input into structured, trait-based memories with deterministic encoding into RFS waveforms.
Overview
NME sits between intent processing and field storage. It takes raw, unstructured input and produces structured meaning — decomposed into 10 atomic traits, classified by memory type, and encoded as complex waveforms ready for Resonant Field Storage.
Meaning Extraction
Decomposes raw input into 10 atomic meaning components using a high-performance semantic role labeling pipeline. Handles passive voice, relative clauses, multi-verb coordination, and structural ambiguity at sub-millisecond latency.
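As a toy illustration of the output shape (not NME's actual learned SRL pipeline), a per-predicate frame with normalized roles might look like this; the `Frame` class and the active/passive rule below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    predicate: str
    roles: dict  # role label -> span text

def extract_frames_toy(sentence: str) -> list:
    """Naive active/passive normalizer for simple clauses (illustration only)."""
    words = sentence.rstrip(".").split()
    if "by" in words and any(w in ("was", "were") for w in words):
        # passive: "<patient> was <verb> by <agent>" normalizes to the same roles
        aux = next(i for i, w in enumerate(words) if w in ("was", "were"))
        by = words.index("by")
        return [Frame(predicate=words[aux + 1],
                      roles={"agent": " ".join(words[by + 1:]),
                             "patient": " ".join(words[:aux])})]
    # active: "<agent> <verb> <patient>"
    return [Frame(predicate=words[1],
                  roles={"agent": words[0], "patient": " ".join(words[2:])})]

# passive voice normalizes to the same agent/patient roles as the active form
frames = extract_frames_toy("The report was reviewed by Alice.")
```

The point of the sketch is the normalization: "The report was reviewed by Alice" and "Alice reviewed the report" yield the same frame.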
Memory Classification
Organizes structured memories into four types: episodic (event-specific), semantic (abstracted knowledge), working (short-term active context), and long-term (persistent storage). Different memory types evolve differently in the field — episodic memories decay faster while semantic memories form stable attractors.
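A minimal sketch of the four types and their differing field behavior, assuming illustrative exponential decay constants (the real decay dynamics are internal to NME):

```python
import math
from enum import Enum

class MemoryType(Enum):
    EPISODIC = "episodic"    # event-specific, decays fastest
    SEMANTIC = "semantic"    # abstracted knowledge, stable attractor
    WORKING = "working"      # short-term active context
    LONG_TERM = "long_term"  # persistent storage

# hypothetical per-type decay rates (1/steps), chosen only to show the ordering
DECAY_RATE = {MemoryType.EPISODIC: 0.10, MemoryType.WORKING: 0.30,
              MemoryType.SEMANTIC: 0.01, MemoryType.LONG_TERM: 0.001}

def amplitude(mtype: MemoryType, t: float, a0: float = 1.0) -> float:
    """Waveform amplitude after t steps: a0 * exp(-rate * t)."""
    return a0 * math.exp(-DECAY_RATE[mtype] * t)

# episodic traces fade much faster than semantic attractors
assert amplitude(MemoryType.EPISODIC, 50) < amplitude(MemoryType.SEMANTIC, 50)
```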
Deterministic Encoding
Transforms structured traits into complex-valued waveforms where phase encodes structure and magnitude encodes content. Every encoding step is bounded, deterministic, and formally verified through Mathematical Autopsy.
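One way to make the phase/magnitude split concrete, assuming hash-seeded phases and a content-strength magnitude (both hypothetical; NME's actual encoder uses Fourier HRRs):

```python
import cmath
import hashlib
import math

D = 8  # waveform dimension (illustrative)

def seeded_phases(token: str) -> list:
    """Deterministic per-token phases from a hash, bounded in [0, 2*pi)."""
    h = hashlib.sha256(token.encode()).digest()
    return [2 * math.pi * h[i] / 256 for i in range(D)]

def encode(role: str, filler: str, magnitude: float = 1.0) -> list:
    """Phase carries structure (role and filler phases add); magnitude carries content strength."""
    phases = [a + b for a, b in zip(seeded_phases(role), seeded_phases(filler))]
    return [cmath.rect(magnitude, p) for p in phases]

# same input, same waveform -- the encoding is deterministic and bounded
w1 = encode("agent", "Alice")
w2 = encode("agent", "Alice")
assert w1 == w2
```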
The 10 Components of Meaning
The 10 traits fall into three temporal groupings that work together to determine meaning: within a single utterance, between utterances, and accumulated over time.
| Trait | Grouping | Description |
|---|---|---|
| Structure | Per-Utterance | Who did what to whom — syntactic frames and semantic role labels |
| Content | Per-Utterance | What the words refer to — entities, denotational meaning |
| Modality | Per-Utterance | Certainty, possibility, obligation — epistemic status of claims |
| Affect | Per-Utterance | Emotional coloring — sentiment, judgment, stance |
| Relationships | Between-Utterance | Causal, temporal, and contrastive links between events |
| Reference | Between-Utterance | Co-reference tracking — resolving pronouns and anaphora |
| Intent | Between-Utterance | Why it was said — speech acts like assertions, requests, promises |
| Context | Over-Time | Surrounding discourse, situation, and prior conversation |
| World Knowledge | Over-Time | Background assumptions and common-sense reasoning |
| Ambiguity Resolution | Over-Time | Collapsing meaning superposition via accumulated context |
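The table above as data, with illustrative snake_case trait names (not necessarily NME's internal schema):

```python
# the three temporal groupings of the 10 atomic traits
PER_UTTERANCE = ("structure", "content", "modality", "affect")
BETWEEN_UTTERANCE = ("relationships", "reference", "intent")
OVER_TIME = ("context", "world_knowledge", "ambiguity_resolution")

ALL_TRAITS = PER_UTTERANCE + BETWEEN_UTTERANCE + OVER_TIME
assert len(ALL_TRAITS) == 10
```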
How Meaning Composes
Components 1-4 (Structure, Content, Modality, Affect) are directly encoded into each chunk's waveform. Components 5-7 (Relationships, Reference, Intent) connect chunks to each other as link operators. Components 8-10 (Context, World Knowledge, Ambiguity Resolution) emerge from the field itself after multiple inputs shape its topology. This is a fundamentally different approach to meaning: not a point in space, but a wave in a field.
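The first tier of this composition can be sketched with the standard Fourier HRR operations: binding is elementwise complex multiplication (phases add), superposition is elementwise addition, and unbinding multiplies by the conjugate. The vectors and role names below are illustrative, not NME's actual representations:

```python
import cmath
import math
import random

D = 256
rng = random.Random(0)

def phasor_vec() -> list:
    """Random unit-magnitude FHRR vector (phases uniform in [0, 2*pi))."""
    return [cmath.rect(1.0, rng.uniform(0, 2 * math.pi)) for _ in range(D)]

def bind(a, b):      # role (x) filler: phases add, structure is preserved
    return [x * y for x, y in zip(a, b)]

def unbind(a, b):    # inverse binding: multiply by the conjugate
    return [x * y.conjugate() for x, y in zip(a, b)]

def superpose(*vs):  # a chunk is the sum of its bound trait components
    return [sum(col) for col in zip(*vs)]

def sim(a, b) -> float:
    """Normalized real inner product (cosine-style similarity)."""
    s = sum((x * y.conjugate()).real for x, y in zip(a, b))
    return s / math.sqrt(sum(abs(x) ** 2 for x in a) * sum(abs(y) ** 2 for y in b))

structure, content, alice, assertion = (phasor_vec() for _ in range(4))
chunk = superpose(bind(structure, alice), bind(content, assertion))

# unbinding the structure role recovers something close to "alice",
# while the other filler stays near-orthogonal noise
assert sim(unbind(chunk, structure), alice) > 0.5
assert sim(unbind(chunk, structure), assertion) < 0.3
```

The recovered filler is noisy, which is why HRR systems typically follow unbinding with a clean-up step against a codebook of known vectors.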
Key Capabilities
10-Head Meaning Encoder
Unified multi-task architecture with specialized extraction heads for each atomic trait
Titans Memory Classification
Automatic classification into episodic, semantic, working, and long-term memory types
Deterministic Waveform Encoding
Structured meaning converted to complex waveforms via Fourier Holographic Reduced Representations
SRL Chunking
Input decomposed into per-predicate semantic frames — the atomic unit of meaning
Ambiguity Preservation
Ambiguous parses stored as superposed waveforms, collapsed at query time by context
Trait Composition
Individual trait extractions compose into unified meaning representations through wave superposition
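The Ambiguity Preservation capability above can be sketched with the same wave algebra: two candidate readings are stored as one superposed waveform, and a context vector collapses the superposition at query time. All vectors here are illustrative random phasors, not NME's actual query path:

```python
import cmath
import math
import random

D = 256
rng = random.Random(1)

def phasor_vec() -> list:
    return [cmath.rect(1.0, rng.uniform(0, 2 * math.pi)) for _ in range(D)]

def sim(a, b) -> float:
    """Normalized real inner product between two complex vectors."""
    s = sum((x * y.conjugate()).real for x, y in zip(a, b))
    return s / math.sqrt(sum(abs(x) ** 2 for x in a) * sum(abs(y) ** 2 for y in b))

# two readings of an ambiguous parse (e.g. "bank"), kept in superposition
parse_bank_river, parse_bank_money = phasor_vec(), phasor_vec()
superposed = [a + b for a, b in zip(parse_bank_river, parse_bank_money)]

# a financial context vector, correlated with the "money" reading,
# collapses the superposition at query time
context = [0.8 * m + 0.2 * n for m, n in zip(parse_bank_money, phasor_vec())]
winner = max((parse_bank_river, parse_bank_money),
             key=lambda p: sim(context, p))
assert winner is parse_bank_money

# both readings remain retrievable until a query collapses them
assert sim(superposed, parse_bank_river) > 0.5
assert sim(superposed, parse_bank_money) > 0.5
```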
Use Cases
Persistent AI Memory
Give AI systems memory that persists across sessions with structured, retrievable meaning — not just raw text logs.
Knowledge Base Construction
Transform unstructured documents into structured knowledge with full meaning decomposition and relationship extraction.
Conversational Context
Maintain rich conversational context by extracting and encoding meaning from every interaction, enabling continuity over long time horizons.
Semantic Search Infrastructure
Power search systems that understand meaning structure — not just keyword matching — by querying over decomposed trait dimensions.
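A toy sketch of querying over decomposed trait dimensions rather than keywords; the record schema and weighting scheme are hypothetical:

```python
# each memory exposes per-trait scores (hypothetical schema)
memories = [
    {"id": "m1", "affect": 0.9, "modality": 0.2},
    {"id": "m2", "affect": 0.1, "modality": 0.8},
]

def search(query_weights: dict, records: list) -> list:
    """Rank records by a weighted sum over the queried trait dimensions."""
    def score(r):
        return sum(w * r.get(trait, 0.0) for trait, w in query_weights.items())
    return sorted(records, key=score, reverse=True)

# a query that cares only about epistemic status ranks m2 first
top = search({"modality": 1.0}, memories)
assert top[0]["id"] == "m2"
```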
Meaning Decomposition Research
NME is built on active research into computational models of meaning — how to decompose, represent, and compose semantic structure in ways that are both linguistically grounded and computationally tractable.
Current research explores unified multi-task architectures for simultaneous extraction of all meaning components, holographic encoding schemes that preserve compositional structure in fixed-width representations, and self-organizing field dynamics where memories with shared semantic structure naturally cluster through wave interference.
Research Areas
- Unified multi-task meaning extraction architectures
- Holographic encoding for compositional semantics
- Self-organizing memory topology through wave physics
- Ambiguity preservation and context-driven resolution
- Cross-modal meaning alignment and transfer
Structured Memory for AI Systems
Transform unstructured input into meaning-rich, deterministically encoded memories.