We are passionate engineers
and crazy entrepreneurs.
SandLogic was founded in 2018 with a single, stubborn vision — build one of the world's most vertically integrated AI stacks, from silicon to language models. Eight years later, we are an 80-person team in Bengaluru, with customers in production across India, Southeast Asia, and Japan.
To make AI live on the chip
— not the cloud.
At SandLogic, our mission is to make AI live on the chip — not the cloud. We are engineering a world where a TV, a car, or a phone can run large-scale reasoning and speech models natively, without waiting for the network.
Four principles. Everything else follows.
Co-design over over-build
Every layer is engineered with awareness of the layers above and below. The compiler knows the chip. The runtime knows the model. The model knows the use case.
Sovereignty by default
On-prem, air-gapped, edge — these are not deployment options bolted on later. They are the design starting point for every product we ship.
Compounding knowledge
We've kept the core team together for 5+ years. Every project teaches the next one. Eight years of compound learning is our moat against single-layer giants.
Cost-per-token is the only KPI
The market does not care about FLOPs or parameter counts. It cares about predictable inference economics. Every layer is optimized for fewer cycles, lower watts, smaller footprints.
Eight years. Compounding.
From a single dashcam deployment in 2018 to a vertically integrated AI stack in 2026.
The driver-behavior project
First model deployed on Qualcomm 603/605 dashcams. It yielded the founding insight: edge AI is the future, and you need full-stack control to deliver it.
Microprocessor Challenge — #1 (ExSLerate V1)
Selected into the Final 30 and ranked #1 in the MeitY-backed national microprocessor challenge with ExSLerate V1. Foundational silicon recognition.
CORE conceptualized
Build once, run any model anywhere. The compiler-runtime philosophy that became EdgeMatrix.
Lingo enters production
Speech analytics platform with 22 Indic languages. First commercial wins.
Government validation — DPIIT + MeitY C2S
National Startup Award 2023 from DPIIT, selection into the MeitY C2S Program (June 2023) as one of 13 companies, and the Aegis Graham Bell Award 2023 for the chip program.
Lingo platform launched
Aug 2023 GA. 21+ enterprises onboarded within 18 months.
Shakti family released
Six small language models from 100M to 4B parameters published on HuggingFace.
Qualcomm QSMP — industry partner validation
Selected into the Qualcomm Startup Mentorship Program 2024 as one of only two companies, and finished first runners-up in the Maruti Suzuki Incubation Cohort 2024.
IP mining begins
A structured IP-mining exercise identified 47 patentable innovations; 15 have been filed (10 at PCT phase).
Shakti-VLM published
Vision-language family beats Qwen2VL-7B and MiniCPM on document benchmarks. arXiv preprint released.
Shakti runs on Qualcomm
Live demo at Indian Mobile Congress 2025 — Shakti running over QDC stack on-device.
Brandworks co-development partnership
Strategic partnership with Brandworks Technologies (Dec 2025) to co-develop full-stack edge-AI hardware — designed and built in India. First wave planned for 2026.
Krsna SoC tape-out
First silicon expected. ExSLerateV2 IP family with five variants from Lite to Apex.
Mainstream press features
Deep-reporting feature in The Ken (April) and long-form profile on AIM Front Page (May) — the narrative arc lands in independent journalism.
Founder-led. Engineer-built.
Kamalakar founded SandLogic in 2018 after recognizing that edge AI deployment required full-stack control — a recognition that came from the very first project: porting a driver-behavior model onto Qualcomm dashcam silicon, only to discover that quantization and operator support weren't there.
Eight years later, that insight has become a vertically integrated stack: AI chip IP, hardware-agnostic runtime, sovereign LLMs, and production-grade applications. SandLogic ships across BFSI, healthcare, telecom, automotive, and public sector — all on infrastructure customers own.
The company runs on a co-design culture: hardware teams know what the compiler needs; the compiler team knows what the runtime needs; the runtime team knows what the model needs. Most of the core team has been together for over five years — a continuity that quietly compounds into deep institutional knowledge most AI startups never accumulate.
"We are not another AI company. We are not another chip company. We are the company that decided both layers had to be co-designed — because that's the only way edge AI economics actually work."
— Kamalakar Devaki, Founder & CEO
The team that's been here from day one.
Eight years together. Hardware, compiler, runtime, models, applications — co-designed because the people are co-located, in the same room, on the same problem.

Jesudas Fernandes
Co-founder, Head of Silicon
Leads the silicon program — ExSLerate IP family, Krsna SoC architecture, and the co-design contract that ties the chip to the compiler and runtime.
LinkedIn ↗
Ravi Kumar Rayana
Co-founder & Head of Engineering
Owns the engineering org across EdgeMatrix, LingoForge, and the application layer — from compiler internals to production deployments at enterprise customers.
LinkedIn ↗
Radhika Kanigiri
Co-founder
Founding member since 2018. Anchors the operational backbone that has kept the core team together for over five years — the continuity that compounds into the company’s deepest moat.
LinkedIn ↗
Plus an ~80-person team across silicon, compiler, runtime, models, and applications — most of whom have been with the company for over five years.
