In 2024, Sanjay was studying for the GMAT when his feed started serving proxy-ring ads — paid services that would sit the exam for you. He pulled the thread with Akshay, who had spent a decade on the offensive side of security and had already noticed the same tooling leaking into hiring pipelines: invisible overlays, local LLM helpers, remote shells disguised as productivity apps.
The assessment industry had optimized for the wrong threat model. Webcams and browser lockdowns were designed for a world where the cheating tool was a friend in the next room or a Google tab. The new tool was a 13-billion-parameter model running locally on the candidate's device, invisible to the camera, invisible to the browser, invisible to the operating system's install registry.
Divya — an AI safety and red-team engineer — had spent years reverse-engineering the overlay architectures now being productized as cheating tools. Sudhakar had nine years inside enterprise security organizations, watching how networking primitives could be repurposed for defense. We kept arriving at the same conclusion: the only place you can reliably interrupt the AI-assistance loop is the network layer. Below the browser. Above the OS. Ephemeral. Deployable in under a minute on any candidate laptop, with no driver install and no admin permission.
We incorporated as a Delaware C Corp and shipped the first version. Twenty-five attack vectors neutralized in the MVP. A pilot at an Ivy League research university. More than a hundred interviews with assessment and talent leaders to validate the buyer problem. The company exists because the next generation of exams can't be secured with the previous generation's tools — and nobody else was building what the math required.