// APPLIANCES

The SandLogic stack —
in a box you own.

Three tiered appliances. One coherent product family. From a single workstation up to a full datacenter rack — pre-loaded with SandLogic models, runtime, and observability. On your hardware. Inside your perimeter. With a clean path from a starter token appliance to a full agentic AI stack.

Appliance tiers: 3
Form factors: 3
Silicon platforms: 5
Deployment: On-prem

Some workloads can't leave.

Cloud is the right answer for many AI workloads. It is not the right answer for all of them. The appliance line is built for the cases where the data, the regulator, or the architecture says the stack has to live inside your perimeter.

REASON / 01

Data residency

Your data never crosses your perimeter. Inference, training, and observability all run on hardware you own and operate — eliminating the residency questions that block enterprise AI in regulated jurisdictions.

REASON / 02

Regulated industries

BFSI, healthcare, telecom, defense, public sector. The compliance posture you already maintain for your core systems extends to the AI workload — without a new vendor relationship, a new audit boundary, or a new data flow to defend.

REASON / 03

Sovereign deployment

Public-sector and sovereign customers can run the full stack inside national infrastructure, with no external dependencies on cloud APIs that may be unavailable, restricted, or geopolitically constrained.

REASON / 04

Predictable latency

Co-locating inference with the application removes the wide-area network from the critical path. For voice agents, agentic workflows, and low-latency interaction, on-prem is not a fallback — it is the architectural choice.

Three tiers. One stack.

Each tier is a clean superset of the previous — same runtime, same observability, same model layer. Start at the scope that matches today's workload. Move up without re-platforming when the workload grows.

TIER / 01

Token Appliance

Entry

SandLogic models, on your hardware.

The starting point. SandLogic's models pre-loaded onto an appliance you operate, with the runtime that makes them fast and the observability that makes them trustworthy. Built for enterprises that aren't ready for full agentic infrastructure but want our models on-prem, with control over the cost per token they generate.

What's in it
  • Shakti LLMs, Lexicons, and Nexons — pre-loaded
  • EdgeMatrix token efficiency platform
  • Baseline observability and traceability
  • Model fine-tuning and training engine
Who it's for

Enterprises that want SandLogic models running on their own hardware, with token efficiency and a minimum viable observability layer built in.

Talk to us about this tier
TIER / 02

Voice Appliance

Mid

Voice agents in a box — telephony to model.

Everything in Token, plus an end-to-end voice agent stack. SIP, WebSockets, Webhooks, ASR, TTS, and LLMs — orchestrated through the same runtime, observable end-to-end. Removes the integration burden of stitching half a dozen vendors together to make voice work at enterprise scale.

What's in it
  • Everything in Token Appliance
  • End-to-end voice stack: SIP, WebSockets, Webhooks, ASR, TTS, LLMs
  • Model gateway
  • Voice agent authoring framework — Aira + Lingo bundled
  • Voice-tuned observability: per-call quality, performance, analytics
Who it's for

Enterprises deploying voice agents at scale that want a turnkey, telephony-to-model appliance with voice analytics built in.

Talk to us about this tier
TIER / 03

Agentic AI Appliance

Advanced

The full SandLogic stack, on-prem.

Everything in Voice, plus the complete agentic AI platform. LingoForge orchestration, Lingo end-to-end, the full model zoo, the full observability stack, complete training and fine-tuning. The top of the appliance line — for enterprises that want the entire SandLogic platform deployed inside their perimeter.

What's in it
  • Everything in Voice Appliance
  • End-to-end LingoForge — orchestration, RAG, MCP, multi-agent chains
  • End-to-end Lingo — full speech analytics platform
  • Full observability and traceability stack
  • Full model zoo — every released SandLogic model
  • Complete training and fine-tuning capabilities
Who it's for

Enterprises that want the entire SandLogic agentic AI platform deployed on-prem — full stack, in a box.

Talk to us about this tier
// FORM FACTORS

From a single workstation to a full rack.

Every tier ships across three hardware classes. The form factor sets the throughput and concurrency you can serve — the capability set is the same end to end.

And every form factor is silicon-agnostic by design — the appliance runs on the silicon you already buy: NVIDIA, AMD, Intel, ARM, or Qualcomm. See the silicon partners →

FORM / 01

PC class

Edge sites · branch · small team

A heavy workstation-class appliance for departmental pilots, edge sites, and small-footprint deployments where a full server rack is overkill.

FORM / 02

Server class

Departmental · regional

An individual server appliance for departmental-scale and regional deployments — the most common starting point for enterprise voice and agent workloads.

FORM / 03

Datacenter rack

Enterprise · sovereign

A full-rack appliance for enterprise-scale and sovereign deployments. High concurrency, high throughput, and the headroom to run the full Agentic AI tier with the entire model zoo loaded.

Start where you are. Grow without re-platforming.

The three tiers share one runtime, one observability surface, and one model layer. The model you fine-tune on a Token Appliance is the model that runs in a Voice or Agentic AI Appliance.

No data migration. No re-integration. No second vendor to onboard. Adding a tier adds capability — it does not replace the foundation.

That makes the buying decision easier. Pick the tier that matches the workload today. Move up when the workload — or the regulator, or the architecture — calls for it.

Building, co-branding, or reselling? Let's talk.

Several partners are already building appliances on the SandLogic stack. We work with system integrators, OEMs, and regional resellers across the three tiers — co-development, white-label, and partner-led deployment models are all on the table.

// BRING THE STACK INSIDE

Your AI. Your hardware. Your perimeter.