AI Service Modules
Standalone capability services that power TAI and integrate across the SMARTHAUS stack. Each module is a separate service with its own API; they compose via CAIO and share RFS where memory is required.
These modules are documented in the TAI archetype. They are not embedded in a single codebase; they communicate over HTTP/gRPC and can be hot-swapped or replaced by marketplace alternatives. All are governed by the Mathematical Autopsy process.
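To make the composition model concrete, here is a minimal sketch of one module calling another over HTTP through a configurable base URL, so an implementation can be swapped for a marketplace alternative by changing configuration rather than code. The endpoint path, payload shape, and environment variable names are assumptions for illustration, not part of any published module API.

```python
# Minimal sketch of HTTP composition between modules (all names hypothetical).
# Each module is addressed by a configurable base URL, so an implementation
# can be hot-swapped or replaced via configuration alone.
import os
import requests


class ModuleClient:
    """Thin HTTP client for a single service module."""

    def __init__(self, env_var: str, default_url: str):
        self.base_url = os.environ.get(env_var, default_url)

    def post(self, path: str, payload: dict) -> dict:
        response = requests.post(f"{self.base_url}{path}", json=payload, timeout=5)
        response.raise_for_status()
        return response.json()


if __name__ == "__main__":
    # Hypothetical env var and route; real routes live in each module's docs.
    vee = ModuleClient("VEE_BASE_URL", "http://localhost:8081")
    print(vee.post("/v1/intent", {"utterance": "dim the kitchen lights"}))
```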
NME — Nota Memoria Engine
Structures memory and extracts persona traits (preferences, personality, communication style) before data is stored in RFS. Ensures what enters the field is consistent, typed, and queryable.
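To make "consistent, typed, and queryable" concrete, the sketch below shows what a typed persona-trait record might look like before it is written to RFS. The field names, trait categories, and confidence scale are illustrative assumptions, not NME's actual schema.

```python
# Illustrative persona-trait record of the kind NME might emit before storage
# in RFS. The schema below is an assumption for this sketch, not NME's
# published data model.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class PersonaTrait:
    subject_id: str   # whose trait this is
    category: str     # e.g. "preference", "personality", "communication_style"
    name: str         # e.g. "tone"
    value: str        # e.g. "concise"
    confidence: float # 0.0 to 1.0
    observed_at: str  # ISO 8601 timestamp


trait = PersonaTrait(
    subject_id="user-123",
    category="communication_style",
    name="tone",
    value="concise",
    confidence=0.82,
    observed_at=datetime.now(timezone.utc).isoformat(),
)

# A typed, serialisable record like this is what keeps the field queryable.
print(asdict(trait))
```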
Learn more in docs →
VFE — Verbum Field Engine
GPU-first LLM inference engine with an expandable model registry. TAI and other systems use VFE for inference; models can be swapped or added without rewriting the application.
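The sketch below shows the call pattern an expandable model registry enables: the model is named in the request, so swapping or adding models is a data change rather than a code change. The route, request fields, and response field are assumptions, not VFE's documented API.

```python
# Hypothetical VFE inference call: the model is selected by name from the
# registry, so adding or swapping models does not require rewriting callers.
import os
import requests

VFE_BASE_URL = os.environ.get("VFE_BASE_URL", "http://localhost:8082")  # assumed


def generate(prompt: str, model: str = "default") -> str:
    response = requests.post(
        f"{VFE_BASE_URL}/v1/generate",                 # assumed route
        json={"model": model, "prompt": prompt, "max_tokens": 256},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]                     # assumed response field


# Switching models is a parameter change, not an application rewrite.
print(generate("Summarise today's schedule.", model="registry/small-fast"))
```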
Learn more in docs →
VEE — Voluntas Engine
Intent classification built on quantum-inspired mathematics. Classifies user intent and routes requests to the right services in coordination with MAIA and CAIO.
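As a sketch of that flow, the code below classifies an utterance and uses the returned intent label to pick a downstream module. The endpoint, intent labels, response field, and routing table are assumptions for illustration only.

```python
# Hypothetical VEE intent classification followed by a routing decision.
import os
import requests

VEE_BASE_URL = os.environ.get("VEE_BASE_URL", "http://localhost:8081")  # assumed

# Illustrative mapping from intent label to the service that should handle it.
ROUTES = {
    "recall_memory": "NME",
    "generate_text": "VFE",
    "device_command": "CAIO",
}


def route(utterance: str) -> str:
    response = requests.post(
        f"{VEE_BASE_URL}/v1/intent",       # assumed route
        json={"utterance": utterance},
        timeout=5,
    )
    response.raise_for_status()
    intent = response.json()["intent"]     # assumed response field
    return ROUTES.get(intent, "CAIO")      # fall back to the router


print(route("What did I say about my coffee order last week?"))
```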
Learn more in docs →
MAIA — Attention and intent processing
Applies attention mechanisms to intent processing, working with VEE and CAIO to interpret user input and coordinate service calls.
Learn more in docs →
CAIO — Service routing and access control
Routes service calls and enforces access control. TAI ↔ AIVA integration is CAIO-mediated; external and cross-system calls go through CAIO for discovery, routing, and compliance.
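As a sketch of CAIO mediation, the caller below never addresses another module directly: it asks CAIO to resolve and route the call, presenting a credential that CAIO can check before forwarding. The discovery route, headers, capability name, and payload are hypothetical.

```python
# Hypothetical CAIO-mediated call: the caller names a target capability and
# CAIO handles discovery, routing, and access checks.
import os
import requests

CAIO_BASE_URL = os.environ.get("CAIO_BASE_URL", "http://localhost:8080")  # assumed
SERVICE_TOKEN = os.environ.get("SERVICE_TOKEN", "dev-token")              # assumed


def call_via_caio(capability: str, payload: dict) -> dict:
    """Ask CAIO to route a request to whichever service provides `capability`."""
    response = requests.post(
        f"{CAIO_BASE_URL}/v1/route/{capability}",      # assumed route
        json=payload,
        headers={"Authorization": f"Bearer {SERVICE_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()                        # access denials surface here
    return response.json()


# Example: a TAI-side component reaching AIVA through CAIO rather than directly.
print(call_via_caio("aiva.notify", {"message": "Door left open"}))
```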
Learn more in docs →