Documentation Index

Fetch the complete documentation index at: https://docs.synheart.ai/llms.txt

Use this file to discover all available pages before exploring further.

Syni is Synheart’s on-device LLM stack. It runs adaptive, persona-conditioned language model inference locally on the user’s device, with an optional cloud fallback for higher-capability requests. Syni is composed of four artifacts, each released independently and pinned by the consumer at install time:
| Piece | What it is | Released as |
| --- | --- | --- |
| Syni Flutter SDK | Dart/Flutter consumer SDK: agent, install lifecycle, persona binding, streaming chat | package:syni on pub.dev |
| Syni Runtime | Rust-based inference engine with a stable C ABI (FFI). The native artifact every SDK calls. | dist.synheart.ai/syni-runtime/ |
| Syni Spec | Canonical persona / safety / schema / grammar contracts | dist.synheart.ai/syni-spec/ |
| Syni Cloud Gateway | HTTP + SSE chat endpoint for cloud-fallback inference, persona-aware | api.synheart.ai |
The same persona id resolves to the same behavior whether served by the local runtime or the cloud gateway. That’s the entire point of the spec — it pins behavior independently of model choice.

Install

The Synheart CLI handles the runtime + spec on disk:
synheart install syni             # both runtime + spec (entitled subset)
synheart install runtime syni     # just the LLM engine
synheart install spec             # just the persona contracts
These commands land artifacts under synheart/vendor/syni-runtime/ and synheart/vendor/syni-spec/. The verb shape is product + component: a product noun (syni, core) installs every component you're entitled to; a component noun (runtime, spec) fans out across products; the two-arg form pins to one canonical package. The Flutter SDK is added separately via pub:
flutter pub add syni
It uses the runtime artifacts that the CLI dropped into your app’s synheart/vendor/syni-runtime/ tree.

Why Syni is split this way

  • Runtime ≠ SDK. The Rust runtime + C ABI is the slow-changing artifact. SDKs in three languages (Dart, Swift, Kotlin) layer over the same ABI.
  • Spec ≠ code. Persona behavior, safety rails, output schemas, and grammars are versioned contracts that can ship independently of any inference change. Same spec id → same behavior on local + cloud.
  • Cloud is opt-in. Every persona declares an execution_policy (local / cloud / hybrid). Apps can run fully offline, or fall back to cloud per call.
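
The per-call fallback described above can be sketched as a small routing decision. This is a hypothetical illustration, not the real Syni SDK API; the Persona type and route function are assumptions, while the execution_policy values (local / cloud / hybrid) come from the spec described here:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    id: str
    execution_policy: str  # "local" | "cloud" | "hybrid", per the persona contract

def route(persona: Persona, prefer_cloud: bool = False) -> str:
    """Pick an execution target for a single request."""
    if persona.execution_policy == "local":
        return "local"   # never leaves the device
    if persona.execution_policy == "cloud":
        return "cloud"   # always served by the gateway
    # hybrid: the app decides per call, e.g. for higher-capability requests
    return "cloud" if prefer_cloud else "local"

print(route(Persona("focus.coach.v1", "hybrid")))                     # local
print(route(Persona("focus.coach.v1", "hybrid"), prefer_cloud=True))  # cloud
```

The key property is that a "local" policy is absolute: no flag the app passes can push the request off-device.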

Persona model

Personas are JSON contracts in syni-spec. Each one declares:
  • a stable id (e.g. focus.coach.v1)
  • output_schema_id — the JSON Schema the response must satisfy
  • safety_rule_ids — global policies that constrain output
  • execution_policy — local / cloud / hybrid
  • privacy — what data may leave the device
  • budget — token / request limits
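
Putting those fields together, a persona contract might look like the following. The field names are the ones listed above; every value here is illustrative and not taken from the shipped spec:

```json
{
  "id": "focus.coach.v1",
  "output_schema_id": "focus.reply.v1",
  "safety_rule_ids": ["no_medical_advice.v1", "no_self_harm.v1"],
  "execution_policy": "hybrid",
  "privacy": { "may_leave_device": ["prompt_text"] },
  "budget": { "max_tokens_per_request": 512, "max_requests_per_day": 200 }
}
```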
Five production-tier personas ship in v0.0.1:
  • focus.coach.v1
  • stress.coach.v1
  • cognitive.companion.v1
  • performance.coach.v1
  • wellness.guide.v1
Apps load personas by id; an id that is not in the installed spec fails closed rather than degrading to default behavior.
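
Fail-closed loading means an unknown id is rejected outright, never silently mapped to a default persona. A minimal sketch, assuming a registry built from the installed spec (the function and exception names are hypothetical, not the real SDK API):

```python
# Persona ids shipped in v0.0.1, as listed above.
KNOWN_PERSONAS = {
    "focus.coach.v1",
    "stress.coach.v1",
    "cognitive.companion.v1",
    "performance.coach.v1",
    "wellness.guide.v1",
}

class UnknownPersonaError(Exception):
    """Raised for any id not present in the installed syni-spec."""

def load_persona(persona_id: str) -> str:
    if persona_id not in KNOWN_PERSONAS:
        # fail closed: no inference runs for an out-of-spec id
        raise UnknownPersonaError(persona_id)
    return persona_id

load_persona("focus.coach.v1")     # ok
# load_persona("focus.coach.v2")   # raises UnknownPersonaError
```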