
On-Device LLM Cheating

What it is
On-device LLM cheating is the use of a language model running entirely on the candidate's own machine to generate answers during an assessment, producing no outbound request that a network-monitoring tool could inspect.
Why it matters
Network-traffic signatures are the primary detection layer for most proctoring stacks. Once the model runs locally, there is no DNS lookup, no TLS handshake, and no API call to flag.
How Aiseptor addresses it
Aiseptor enforces device posture at the system level: locally hosted inference runtimes, suspicious model files, and the associated process footprint are treated as hard exam-integrity signals, regardless of whether the model ever touches the network.
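The posture signals described above can be illustrated with a minimal sketch. This is not Aiseptor's actual detection logic; the runtime process names and model-file extensions below are assumed example signatures chosen for this illustration.

```python
# Illustrative sketch only: matches process names and file extensions
# against assumed signatures of local inference runtimes. The signature
# sets here are examples, not a real product's detection rules.

KNOWN_RUNTIME_PROCESSES = {"ollama", "lm-studio", "llama-server", "koboldcpp"}
MODEL_FILE_EXTENSIONS = {".gguf", ".ggml", ".safetensors"}

def posture_signals(process_names, file_paths):
    """Return (signal_type, detail) pairs for suspicious processes and files."""
    signals = []
    for name in process_names:
        if name.lower() in KNOWN_RUNTIME_PROCESSES:
            signals.append(("runtime_process", name))
    for path in file_paths:
        # Treat the final extension as the model-format indicator.
        ext = "." + path.rsplit(".", 1)[-1].lower() if "." in path else ""
        if ext in MODEL_FILE_EXTENSIONS:
            signals.append(("model_file", path))
    return signals
```

Note that none of these checks involve the network: the inputs come from the device's own process table and filesystem, which is what makes them usable even when the model never produces outbound traffic.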

Canonical definition

On-device LLM cheating describes the use of a language model hosted locally on a candidate's laptop, via runtimes such as LM Studio, Ollama, or custom inference binaries, to generate answers during an online assessment. Because the model runs in local memory with no outbound requests, traditional network-signature proctoring cannot see it, and because it can be paired with a voice pipeline or overlay, it can deliver real-time assistance without ever touching the internet. Consumer-grade hardware can now host capable 7B–70B models, and loadable model files are distributed freely on the major model-sharing platforms. The emergence of capable on-device inference turns the cheating surface from a network-visible problem into a device-integrity problem, and makes device-layer enforcement the only durable defense.
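Because these runtimes serve inference over loopback only, one heuristic consequence of the definition above is that they can sometimes be noticed locally even though no egress exists. The sketch below probes common default loopback ports; 11434 is Ollama's documented default, and 1234 is assumed here as a typical LM Studio local-server port. Both can be rebound, so this is a heuristic, not a guarantee.

```python
import socket

# Assumed default loopback ports for common local inference runtimes.
# Real deployments can rebind these, so a miss proves nothing.
LOCAL_RUNTIME_PORTS = {11434: "Ollama", 1234: "LM Studio"}

def probe_local_runtimes(timeout=0.2):
    """Return the names of runtimes answering on their default loopback ports."""
    found = []
    for port, name in LOCAL_RUNTIME_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising.
            if s.connect_ex(("127.0.0.1", port)) == 0:
                found.append(name)
    return found
```

The key point the sketch makes concrete: every packet involved stays on 127.0.0.1, so no upstream network monitor, DNS log, or TLS inspection point ever sees it.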

Akshay Aggarwal · Founder, Aiseptor

Aiseptor is the security layer for high-stakes assessments.