Technical · 6 min read
DNS Filtering in Exam Proctoring: How It Works and Why It Matters
By Akshay Aggarwal · May 13, 2026
When a candidate opens ChatGPT, Claude, or any hosted AI assistant during an exam, the first thing that happens is a DNS lookup. Not the TLS handshake. Not the HTTP request. Not the prompt payload. A name resolution request goes out — api.openai.com, api.anthropic.com, generativelanguage.googleapis.com — asking the DNS resolver to return an IP address.
This lookup happens before any connection is established, before any data is sent, before the exam violation is complete. It is the earliest observable event in any AI API call.
That makes DNS the most efficient place to enforce exam policy. You can stop the violation at its first network event rather than inspecting encrypted payloads downstream.
How DNS Filtering Works in an Exam Context
In a network-layer exam security architecture, the candidate's device is connected to a policy enforcement gateway for the duration of the session. The gateway controls name resolution: all DNS queries from the device pass through the gateway's resolver before being answered.
The resolver applies an approved-domain list. Domains on the list — the exam platform itself, any resource explicitly whitelisted for the exam — resolve normally. Everything else gets dropped. The resolution fails. The connection never starts.
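The resolver's decision step can be sketched in a few lines. This is illustrative logic, not any specific product's implementation, and the domain names are hypothetical placeholders:

```python
# Sketch of the gateway resolver's allowlist check. A query resolves only
# if its name, or a parent domain of its name, is on the approved list.
APPROVED = {"exam-platform.example.com", "static.exam-platform.example.com"}  # hypothetical

def is_approved(qname: str, approved: set[str] = APPROVED) -> bool:
    """Return True if qname or any parent suffix is on the approved list."""
    qname = qname.rstrip(".").lower()
    labels = qname.split(".")
    # Check the name and every parent suffix, e.g. a.b.example.com ->
    # a.b.example.com, b.example.com, example.com, com
    for i in range(len(labels)):
        if ".".join(labels[i:]) in approved:
            return True
    return False
```

Suffix matching matters in practice: approving `exam-platform.example.com` should also cover subdomains the platform loads resources from, while leaving everything else unresolvable.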
The speed advantage here is meaningful. Blocking at DNS means the TLS handshake never begins. No connection is established — without an IP address to connect to, not even a TCP SYN packet leaves the device. From the AI provider's perspective, the request never existed. From the candidate's device, the connection attempt fails at resolution — a few milliseconds after it started.
Compare this to a downstream approach that waits for a connection to establish before inspecting it: the handshake has already completed, metadata has already been exchanged, and you are now racing to close a connection that is already open. DNS filtering wins by not letting the connection open at all.
Why This Beats Browser-Level Controls
Browser extensions are the conventional answer to in-exam AI use. Block the ChatGPT domain at the extension level, and the candidate cannot reach it from the browser.
The problem is scope. Browser extensions only intercept traffic initiated within the browser context. A desktop application making a direct socket connection — an AI overlay tool, a locally-installed helper, a native app — bypasses the browser extension entirely. The extension never sees the request.
DNS filtering operates at the OS level, below any specific application. When the candidate's device makes a DNS query, it goes through the system resolver. The system resolver is configured to point to the policy gateway. That configuration applies to every process on the device, not just the browser.
This matters because the most capable AI cheating tools do not run inside a browser. They run as separate processes — overlays that render on top of the exam window, screenshot-and-upload tools, locally running inference servers. A browser extension is invisible to all of them.
DNS filtering is not. Every process that needs to resolve a domain name — regardless of the application, regardless of what it calls itself — goes through the same resolver. If that resolver is controlled by the policy gateway, every process is subject to the same approved-domain policy.
This also addresses a specific evasion pattern: process renaming. A candidate might rename a Cluely binary to something innocuous. The process name changes. The DNS behavior does not. It still resolves api.openai.com. That resolution still goes to the gateway resolver. The gateway resolver still doesn't find it on the whitelist. The lookup still fails.
The DoH/DoT Evasion Problem — and Why It Is Solvable
DNS filtering has an obvious attack surface: encrypted DNS. DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT) let a client bypass a local resolver entirely by sending encrypted queries to a remote server — DoH over HTTPS to a public resolver such as 8.8.8.8 on port 443, DoT over its dedicated TLS port, such as 1.1.1.1:853. If a candidate's device is configured to use an encrypted DNS provider, the local policy resolver is skipped.
Two observations on why this is a solved problem in a properly implemented architecture:
First, the network layer can block the encrypted DNS providers themselves. DoH traffic goes to a small set of well-known IP addresses — Google, Cloudflare, Quad9. A gateway that drops outbound connections to these IPs prevents DoH from establishing. The client's encrypted DNS query never reaches its upstream resolver. Most clients then fall back to the system resolver — the gateway resolver — by default; a client pinned to strict encrypted-DNS mode simply fails to resolve, which serves the policy equally well.
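The drop rule itself is simple set membership. The resolver addresses below are real public endpoints, but the rule structure is an illustrative sketch — a production gateway would block port 853 wholesale and maintain a fuller, regularly updated endpoint feed:

```python
# Sketch of the gateway's drop rule for known encrypted-DNS endpoints.
DOH_DOT_ENDPOINTS = {
    ("8.8.8.8", 443), ("8.8.4.4", 443),   # Google DoH
    ("1.1.1.1", 443), ("1.0.0.1", 443),   # Cloudflare DoH
    ("9.9.9.9", 443),                     # Quad9 DoH
    ("8.8.8.8", 853), ("1.1.1.1", 853), ("9.9.9.9", 853),  # DoT (port 853)
}

def should_drop(dst_ip: str, dst_port: int) -> bool:
    """Drop outbound connections to known DoH/DoT resolver endpoints."""
    return (dst_ip, dst_port) in DOH_DOT_ENDPOINTS
```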
Second, resolver configuration changes are detectable. A network-layer security agent monitors the system's DNS configuration. If something modifies the resolver to point away from the policy gateway mid-session, that change is an observable event. The agent detects it and flags the session.
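A minimal version of that watchdog takes a baseline fingerprint of the resolver configuration at session start and flags any later divergence. The fingerprinting approach here is an assumption for illustration; a real agent would subscribe to OS change notifications rather than polling and comparing hashes:

```python
# Sketch of a mid-session resolver-configuration watchdog: baseline the
# DNS config, then flag any divergence as an observable event.
import hashlib

def config_fingerprint(config_text: str) -> str:
    """Stable fingerprint of the resolver configuration."""
    return hashlib.sha256(config_text.encode()).hexdigest()

def detect_change(baseline: str, current_text: str) -> bool:
    """True if the resolver configuration no longer matches the baseline."""
    return config_fingerprint(current_text) != baseline
```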
Firefox's Trusted Recursive Resolver (TRR) deserves specific mention because it is the most common DoH bypass candidates encounter without trying. When Firefox is set to use its own DoH resolver — which it enables by default in some regions — it routes DNS queries around the system resolver entirely. This is addressable at the OS policy level: a managed profile or enterprise policy that disables Firefox's DoH override prevents it. It is a one-time configuration, not a continuous monitoring task.
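One documented way to apply that policy is Firefox's enterprise policy file (`policies.json`), which can disable DoH and lock the setting so the user cannot re-enable it:

```json
{
  "policies": {
    "DNSOverHTTPS": {
      "Enabled": false,
      "Locked": true
    }
  }
}
```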
The general principle: encrypted DNS is an attack surface, but it is a bounded one with known countermeasures at the network layer.
What DNS Filtering Cannot Catch Alone
DNS filtering is one layer. It does a significant amount of the work — hosted AI APIs are the dominant vector for AI cheating, and every hosted AI API requires DNS resolution. But it does not operate alone in a complete architecture.
Hardcoded IP connections. A sophisticated attacker can bypass DNS entirely by connecting to a hardcoded IP address. No resolution required. DNS filtering has no visibility into these connections. A second enforcement layer — IP allowlisting at the gateway — is needed to cover this vector.
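One way that second layer can work — sketched here as illustrative structure, not a specific product's design — is to tie the IP allowlist to the resolver itself: the gateway permits outbound connections only to addresses the policy resolver actually handed out, so a hardcoded IP never appears in the allowed set:

```python
# Sketch: an IP allowlist populated by the policy resolver at resolution
# time. A connection to a hardcoded IP was never resolved through the
# gateway, so it is not in the set and gets dropped.
class IpAllowlist:
    def __init__(self) -> None:
        self._allowed: set[str] = set()

    def record_resolution(self, domain: str, ip: str) -> None:
        """Called by the resolver whenever an approved domain resolves."""
        self._allowed.add(ip)

    def permit(self, dst_ip: str) -> bool:
        """Gateway check on every outbound connection attempt."""
        return dst_ip in self._allowed
```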
On-device local LLMs. A candidate running Ollama, LM Studio, or any local inference server makes zero outbound connections. DNS filtering sees nothing. Detecting local LLM use requires OS-level monitoring: process detection, port scanning, filesystem scanning for model weight files, and GPU memory monitoring for inference-characteristic VRAM usage patterns. These are orthogonal layers.
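As a taste of what those OS-level layers look like, here is a sketch of the simplest one: probing localhost for the well-known default ports of local inference servers (Ollama's 11434, LM Studio's 1234). The port list and probe timeout are assumptions, and a real agent would combine this with the process, filesystem, and GPU checks mentioned above:

```python
# Sketch: probe localhost ports commonly used by local inference servers.
import socket

DEFAULT_LLM_PORTS = {11434: "Ollama", 1234: "LM Studio"}  # well-known defaults

def port_open(port: int, host: str = "127.0.0.1", timeout: float = 0.25) -> bool:
    """True if something is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def scan_local_llm_ports() -> list[str]:
    """Names of local inference servers whose default ports are open."""
    return [name for port, name in DEFAULT_LLM_PORTS.items() if port_open(port)]
```

A port probe alone is weak evidence — servers can run on non-default ports — which is exactly why it is one signal among several, not a verdict.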
Compromised device configurations. DNS filtering depends on the candidate's device routing DNS through the gateway. A sufficiently motivated candidate with enough OS access could, in theory, configure the device to route around the gateway entirely. This is why DNS filtering is paired with tunnel-based enforcement — all traffic exits through the tunnel, not just DNS.
Each of these gaps has a corresponding layer. DNS filtering addresses the highest-volume, lowest-sophistication attack vector — the candidate who opens a ChatGPT tab or uses a browser-based overlay — at the earliest possible point in the connection lifecycle.
Why DNS Filtering Is Efficient
The volume argument matters here. The majority of AI cheating attempts use hosted AI APIs, and every one of those attempts begins with a DNS lookup. DNS filtering stops all of them at the same point — before a connection opens — with low overhead and minimal false-positive risk, since everything the exam legitimately needs is already on the approved list.
The remaining vectors — hardcoded IPs, local models, device-level bypasses — are real but require significantly more effort from the candidate. They are also detectable through other layers.
DNS filtering is not glamorous. It is a resolver configuration and a domain list. But it stops the most common attack at the earliest observable moment, before any connection completes, at a fraction of the computational cost of downstream inspection.
The elegant solution is often the simple one applied at the right point in the stack.
Network-layer exam security enforces access policy at the DNS and network layers — stopping AI API connections before they complete, regardless of which process makes the request. See how the architecture works →