A continuum of intelligence.
Open knowledge, refined bridges,
original innovation.
Three model lines that work together. Lexicons — curated open-source models, quantized for enterprise. Nexons — open foundations enhanced with our own datasets for sharper performance, trust, and relevance. Shakti — a fully in-house family of small and mid language models, built ground-up for edge and enterprise AI.
From open knowledge to original innovation.
Most enterprises don't need a single model — they need the right model for the workload, with a path to upgrade as their data matures. Lexicons, Nexons, and Shakti are the three rungs of that ladder: from accessible open-source baselines, through enhanced fine-tunes, to fully in-house frontier work.
Lexicons
Open knowledge.
Curated open-source models, quantized and made accessible for enterprises and developers. Permissive licenses. HuggingFace-hosted. Drop-in via the EdgeMatrix runtime.
Nexons
Refined bridges.
Enhanced models — strong open foundations fine-tuned with our datasets to bring sharper performance, trust, and relevance. First Nexon releasing soon.
Shakti
Original innovation.
A fully in-house family of small and mid language models, built ground-up for edge and enterprise AI. Six released (100M – 4B). Two in flight (8B, 30B). Three arXiv papers.
"From open knowledge, to refined bridges, to original innovation. The vision: make AI accessible, reliable, and meaningful — whether it runs on the cloud, on-premise, or at the edge."
— Kamalakar Devaki, Founder · CEO, SandLogic Technologies
Open-source models, made enterprise-ready.
Lexicons is our growing zoo of curated open-source foundation models — quantized, packaged, and benchmarked for enterprise deployment. Permissive licenses on HuggingFace and GitHub. Quick customization, minimal retraining, full transparency.
Curated
We benchmark every release before it ships. Models that don't hold up don't make the catalog.
Quantized
Q4_K_M, Q5_K_M, and Q8_0 variants where they meaningfully reduce footprint. Same quality bar, smaller binary.
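The footprint reduction is easy to estimate from bits per weight. A back-of-envelope sketch, assuming typical llama.cpp GGUF averages (roughly 4.8 bits for Q4_K_M, 5.7 for Q5_K_M, 8.5 for Q8_0, against 16 for FP16; exact values vary with the tensor mix, so treat these as approximations):

```python
# Rough GGUF file-size estimates for a given parameter count.
# Bits-per-weight figures are approximate averages, not exact per-model values.
BITS_PER_WEIGHT = {
    "F16": 16.0,
    "Q8_0": 8.5,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.8,
}

def approx_size_gb(n_params: float, quant: str) -> float:
    """Estimated on-disk size in GB for n_params weights at a quant level."""
    bits = BITS_PER_WEIGHT[quant]
    return n_params * bits / 8 / 1e9

# Example: a 2.5B-parameter model at each shipped variant.
for quant in BITS_PER_WEIGHT:
    print(f"2.5B @ {quant:7s} ~ {approx_size_gb(2.5e9, quant):.1f} GB")
```

At 2.5B parameters this works out to roughly 5.0 GB in FP16 versus about 1.5 GB at Q4_K_M, which is why the quantized variants fit comfortably on edge hardware.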
Permissive
Apache 2.0 / MIT / OpenRAIL where the upstream license allows. No surprise restrictions.
Runtime-ready
Every Lexicon ships with a manifest the EdgeMatrix runtime understands — load and serve in one line.
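The EdgeMatrix manifest schema isn't published on this page, so the shape below is purely illustrative: a hypothetical sketch of the kind of metadata such a manifest would carry. Every field name here is an assumption, not the actual format.

```yaml
# HYPOTHETICAL Lexicon manifest sketch. Field names are illustrative
# assumptions, not the real EdgeMatrix schema.
model: lexicon-example-2b        # assumed catalog identifier
source: huggingface              # where the weights are pulled from
quantization: Q4_K_M             # shipped variant
license: apache-2.0
runtime:
  context_length: 4096           # assumed serving parameter
  mode: chat                     # assumed serving mode
```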
Open foundations, sharpened with our data.
Nexons take strong open foundations and fine-tune them with SandLogic's proprietary datasets to bring sharper performance, trust, and relevance for enterprise workloads. The bridge between the open ecosystem and the in-house Shakti family — built for teams that want better-than-baseline accuracy without committing to a fully proprietary stack.
Sharper performance
Targeted fine-tunes on enterprise corpora — telephony, contracts, claims, code-switched calls. Higher accuracy on the workloads our customers actually run.
Trust by construction
Trained alongside HaluMon evaluation. Hallucination rates measured before release. Confidence calibration tuned for regulated deployment.
Domain relevance
Indic languages, BFSI compliance vocabulary, healthcare clinical terminology. The vocabulary your customers expect, not what generic instruction-tuning produces.
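Confidence calibration, mentioned above, is commonly implemented with temperature scaling: divide the model's logits by a fitted temperature T before the softmax so that stated confidences track observed accuracy. A minimal, generic sketch of the technique (not SandLogic's actual pipeline; HaluMon's internals aren't described here):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax over logits after dividing by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def nll(logits_batch, labels, temperature):
    """Average negative log-likelihood at a given temperature."""
    total = 0.0
    for logits, y in zip(logits_batch, labels):
        total -= math.log(softmax(logits, temperature)[y])
    return total / len(labels)

def fit_temperature(logits_batch, labels):
    """Grid-search the temperature that minimises validation NLL."""
    grid = [0.5 + 0.1 * i for i in range(31)]  # 0.5 .. 3.5
    return min(grid, key=lambda t: nll(logits_batch, labels, t))

# Toy overconfident model: ~98% stated confidence but only 75% accuracy,
# so the fitted temperature comes out well above 1 (softening the probs).
logits = [[4.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 0.0]]
labels = [0, 1, 1, 0]
T = fit_temperature(logits, labels)
```

The fitted T then simply divides the logits at inference time; a T above 1 flattens overconfident distributions, which matters when downstream systems act on the reported confidence.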
First Nexon releases shortly.
The first Nexon is in late-stage evaluation now. Sign up to be notified when it ships — or talk to us if you want a Nexon trained on your domain corpus before the public release.
One family. Every parameter range.
Shakti is our fully in-house family of small and mid language models — built ground-up for edge and enterprise AI. Six released (100M to 4B parameters), two in flight (8B and 30B). Pick the smallest model that meets your accuracy bar — Shakti models are tuned to outperform peers 2–3× their size, so the right deployment is almost always smaller than you think.
From wearables to frontier — log-scaled.
3× smaller.
Match for match.
Shakti-2.5B (Q4_KM) benchmarked against Llama 3 8B and Phi-3.5-Mini. Bold = Shakti leads.
MMLU · SocialQA · TruthfulQA — head to head.
Source: Shakti-2.5B technical report — arXiv 2410.11331
Vision-language at a fraction of the size.
Shakti-VLM-4B uses QK-normalization and hybrid normalization (Pre-LayerNorm in early layers, Post-LayerNorm with RMSNorm in later ones). Despite using significantly fewer training tokens, it beats Qwen2VL-7B and MiniCPM-V-2.6-8B on document and chart understanding.
Document understanding — at 4B parameters.
QK-Normalization
Improved stability and convergence behavior.
Hybrid normalization
Pre-LayerNorm early, Post-LayerNorm with RMSNorm later — optimal stability/efficiency balance.
Three-stage training
Pre-train, alignment, fine-tune. Lower token budget. Better task generalization.
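QK-normalization, called out above, RMS-normalizes the query and key vectors before the attention dot product, which bounds the attention logits no matter how large the raw projections grow during training. A minimal single-pair sketch in plain Python, as an illustration of the general technique rather than Shakti's exact implementation (the learned per-channel scale is folded into a single gain here, which is an assumption):

```python
import math

def rms_norm(x, gain=1.0, eps=1e-6):
    """RMS-normalize a vector: gain * x / sqrt(mean(x^2) + eps)."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [gain * v / rms for v in x]

def qk_norm_logit(q, k):
    """Scaled attention logit for one query/key pair with QK-norm applied.
    With unit gain, |logit| <= sqrt(d) regardless of the raw magnitudes,
    which is what keeps attention softmaxes from saturating."""
    d = len(q)
    qn, kn = rms_norm(q), rms_norm(k)
    return sum(a * b for a, b in zip(qn, kn)) / math.sqrt(d)

# Even with a huge raw query, the normalized logit stays bounded.
q_raw = [100.0, -50.0, 25.0, 10.0]
k_raw = [0.3, 0.2, -0.1, 0.4]
bounded = qk_norm_logit(q_raw, k_raw)
```

Without the normalization, the same raw vectors would produce a logit in the tens, saturating the softmax; the bounded version is what improves the stability and convergence behavior noted above.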
Source: Shakti-VLM technical report — arXiv 2502.17092