Intel TDX vs AMD SEV-SNP: A Systems Engineer's Field Guide

11 Mar 2026

A layer-by-layer comparison for engineers who know one side and want to understand the other.


Introduction

Modern cloud VMs run on hypervisors their tenants don’t control. A traditional hypervisor can read guest memory, inspect registers, and inject arbitrary interrupts; it is implicitly part of every guest’s trusted computing base.

Confidential computing hardware removes the hypervisor from that boundary. Intel TDX and AMD SEV-SNP are the two production implementations. They share the same threat model — an actively malicious hypervisor — but make different architectural choices at every layer. This post maps those layers side by side.


The Trust Architecture

The core question: who enforces isolation, and where do they live?

|                        | Intel TDX                                | AMD SEV-SNP                                          |
|------------------------|------------------------------------------|------------------------------------------------------|
| Security enforcer      | TDX Module (SEAM mode, VMX root)         | AMD Secure Processor (PSP), an ARM Cortex-A5 on-die  |
| Firmware TCB           | ACMs + SEAM loader (x86 microcode path)  | PSP firmware (signed by AMD)                         |
| Guest name             | Trust Domain (TD)                        | SNP VM                                               |
| Host control structure | VMCS (hardware-cached, SEAM-protected)   | VMCB (memory-resident, 4KB page)                     |

Intel’s enforcer runs on the main CPU in a new processor mode (SEAM). AMD’s enforcer is a separate ARM processor (the PSP) embedded on the same die. Different attack surfaces: the TDX Module shares cache/memory bus with the host; the PSP is isolated silicon with its own firmware TCB.


Memory Encryption

Both architectures use hardware AES engines in the memory controller. The divergence is in how they tag which key to use.

|                         | Intel TDX                               | AMD SEV-SNP                          |
|-------------------------|-----------------------------------------|--------------------------------------|
| Per-VM key selector     | HKID, encoded in physical address bits  | ASID, tagged TLB/cache entries       |
| Shared memory indicator | Bit 51 of GPA = 1 (shared → HKID 0)     | C-bit in guest PTE = 0 (unencrypted) |

Intel puts the private/shared split in the address: setting the shared bit of a GPA (bit 51, or bit 47 on configurations with a narrower guest-physical address width) routes the access through HKID 0, with no encryption. AMD puts it in the page table entry: the C-bit in each PTE selects encrypted or plaintext. AMD’s vTOM feature simplifies this with a single watermark GPA: all addresses below it are private, which is closer to Intel’s model.
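The three encodings above reduce to bit and range checks. The sketch below is an illustrative model, not kernel code: the bit positions are assumptions (a real TDX guest derives the shared bit from GPAW, and a real SEV guest reads the C-bit position from CPUID Fn8000_001F), and `vtom_private` models only the vTOM watermark comparison.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative bit positions only. Real guests discover these at boot:
 * the TDX shared bit depends on GPAW (bit 47 or 51), and the SEV C-bit
 * position is reported by CPUID Fn8000_001F. */
#define TDX_SHARED_BIT 51
#define SEV_C_BIT      51

/* Intel TDX: the private/shared split lives in the address itself. */
static inline uint64_t tdx_shared_gpa(uint64_t gpa)  { return gpa |  (1ULL << TDX_SHARED_BIT); }
static inline uint64_t tdx_private_gpa(uint64_t gpa) { return gpa & ~(1ULL << TDX_SHARED_BIT); }

/* AMD SEV-SNP: the split lives in each page-table entry (C-bit). */
static inline uint64_t sev_encrypted_pte(uint64_t pte) { return pte |  (1ULL << SEV_C_BIT); }
static inline uint64_t sev_shared_pte(uint64_t pte)    { return pte & ~(1ULL << SEV_C_BIT); }

/* AMD vTOM: one watermark GPA; everything below it is private. */
static inline bool vtom_private(uint64_t gpa, uint64_t vtom) { return gpa < vtom; }
```

Note how vTOM collapses a per-PTE decision into a single comparison, which is what lets unenlightened page-table code run unmodified.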


Page Ownership and Integrity

Encryption alone isn’t enough. A malicious hypervisor can remap, replay, or alias pages. Both architectures solve this with a hardware-enforced ownership table checked on every memory access.

|                         | Intel TDX                              | AMD SEV-SNP             |
|-------------------------|----------------------------------------|-------------------------|
| Ownership table         | PAMT (Physical Address Metadata Table) | RMP (Reverse Map Table) |
| Hypervisor assigns page | TDH.MEM.PAGE.ADD (SEAMCALL)            | RMPUPDATE instruction   |
| Guest validates page    | TDG.MEM.PAGE.ACCEPT (TDCALL)           | PVALIDATE instruction   |

The guest validation step is the key anti-replay mechanism: the hypervisor assigns a page with Validated=0; the guest must validate it before use. If the hypervisor swaps in a different physical page at the same GPA, the replacement’s Validated bit is 0 and the guest gets a fault instead of silently using compromised memory. The RMP also enforces a reverse-map check (each physical page can be mapped to only one GPA), preventing aliasing attacks.
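The assign/validate handshake can be modeled as a toy ownership table. Everything below is a simulation of the logic just described, not real hardware state: the struct fields and function names are invented for illustration, and AMD’s actual RMP entry format differs.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy ownership table: one entry per physical page frame. */
#define NPAGES 8

struct rmp_entry {
    bool     assigned;   /* page belongs to a guest (hypervisor's RMPUPDATE) */
    bool     validated;  /* guest has accepted it (guest's PVALIDATE)        */
    uint64_t gpa;        /* the one GPA this physical page may back          */
};

static struct rmp_entry rmp[NPAGES];

/* Hypervisor side: assign a physical page to a guest GPA. Hardware
 * clears Validated on every assignment; that is the anti-replay hook. */
void hv_rmpupdate(size_t pfn, uint64_t gpa) {
    rmp[pfn] = (struct rmp_entry){ .assigned = true, .validated = false, .gpa = gpa };
}

/* Guest side: accept the page. Validating twice fails, which is how a
 * guest notices the hypervisor silently re-assigning a GPA underneath it. */
bool guest_pvalidate(size_t pfn) {
    if (!rmp[pfn].assigned || rmp[pfn].validated) return false;
    rmp[pfn].validated = true;
    return true;
}

/* Hardware check on every guest access: an unvalidated page, or a page
 * mapped at the wrong GPA (the reverse-map check), faults instead of
 * silently returning attacker-chosen memory. */
bool guest_access_ok(size_t pfn, uint64_t gpa) {
    return rmp[pfn].assigned && rmp[pfn].validated && rmp[pfn].gpa == gpa;
}
```

Walking the remap attack through this model: the hypervisor swaps a fresh physical page in at a GPA the guest already validated; the new entry has Validated=0, so the guest’s next access fails rather than reading the replacement.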

AMD’s own framing of the guarantee: “If a VM is able to read a private page of memory, it must always read the value it last wrote.”


Guest–Hypervisor Communication

Even with a malicious hypervisor, guests need device emulation and configuration. Both architectures define a structured interface and protect guest register state in transit.

|                              | Intel TDX                          | AMD SEV-ES/SNP                         |
|------------------------------|------------------------------------|----------------------------------------|
| Register state protection    | TDVPS (encrypted, TDX Module only) | VMSA (encrypted with guest’s ASID key) |
| Communication interface      | TDG.VP.VMCALL / shared GPA         | VMGEXIT / GHCB page                    |
| Hardware-triggered exception | #VE (vector 20)                    | #VC (vector 29)                        |

When hardware intercepts a guest event the VMM must handle, both architectures turn it into a guest-handled exception rather than an invisible VM exit, keeping register state inside the guest. The guest’s exception handler then explicitly communicates outward: AMD’s #VC handler writes to the GHCB (a shared 4KB page) and calls VMGEXIT; Intel’s #VE handler calls TDG.VP.VMCALL. Structurally, #VC → VMGEXIT on AMD corresponds to #VE → TDG.VP.VMCALL on Intel.
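A minimal sketch of that round trip for an intercepted CPUID, AMD-side. The `ghcb` struct here is a stand-in (the real layout, including its valid-bitmap, is defined by the GHCB specification), and the hypervisor is simulated by a local function at the point where real code would execute VMGEXIT.

```c
#include <stdint.h>
#include <string.h>

/* Stand-in GHCB: field names are illustrative, not the real layout. */
struct ghcb {
    uint64_t exit_code;           /* what the guest is asking the VMM to do */
    uint64_t rax, rbx, rcx, rdx;  /* only the registers the guest chooses to expose */
};

#define SVM_EXIT_CPUID 0x72       /* the SVM exit code for CPUID */

static struct ghcb shared_ghcb;   /* lives in shared (C-bit = 0) memory */

/* Simulated untrusted hypervisor: services the request via the GHCB. */
static void hv_handle_vmgexit(struct ghcb *g) {
    if (g->exit_code == SVM_EXIT_CPUID) {
        g->rax = 0xdead; g->rbx = 0xbeef; g->rcx = 0; g->rdx = 0;
    }
}

/* What a #VC handler does for CPUID: copy only the needed registers
 * into the GHCB, exit to the hypervisor, copy the results back. The
 * guest's full register file never leaves encrypted memory. */
void vc_handle_cpuid(uint64_t leaf, uint64_t *out_eax, uint64_t *out_ebx) {
    memset(&shared_ghcb, 0, sizeof(shared_ghcb));
    shared_ghcb.exit_code = SVM_EXIT_CPUID;
    shared_ghcb.rax = leaf;
    hv_handle_vmgexit(&shared_ghcb);   /* real code executes VMGEXIT here */
    *out_eax = shared_ghcb.rax;
    *out_ebx = shared_ghcb.rbx;
}
```

The key property is in the copy steps: the guest decides which registers cross the boundary, instead of hardware dumping all of them into a VMM-readable control block.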


Privilege Levels Within the Guest

Both architectures support privileged services inside the confidential VM (vTPM, attestation proxy, unenlightened guest support) without pushing them to the untrusted hypervisor.

|                          | Intel TDX                               | AMD SEV-SNP                                     |
|--------------------------|-----------------------------------------|-------------------------------------------------|
| Mechanism                | TD Partitioning: L1 VMM + nested L2 TDs | VMPLs 0–3: per-page R/W/X permissions per level |
| Privileged service layer | L1 VMM inside TD                        | SVSM (Secure VM Service Module) at VMPL 0       |
| Open-source reference    | Linux KVM inside a TD                   | Coconut-SVSM (Rust)                             |

AMD’s VMPL model is compact: it adds only per-page permission bits to existing RMP entries. VMPL 0 (the SVSM) gets full permissions on each page and grants subsets to VMPL 1–3 via RMPADJUST. When the guest OS, running at a less-privileged VMPL, triggers a #VC, the ReflectVC feature reflects it to VMPL 0 for handling, keeping all hypervisor interaction mediated by the SVSM.
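RMPADJUST’s grant-a-subset rule can be sketched as follows. The permission encoding and function shape are invented for illustration, but the two enforced invariants match the model above: a level can only adjust less-privileged levels, and can only grant permissions it itself holds.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative per-page permission bits, one mask per VMPL.
 * This is not the real RMPADJUST operand encoding. */
enum { PERM_R = 1, PERM_W = 2, PERM_X = 4 };

struct page_perms {
    uint8_t vmpl[4];   /* index = VMPL; VMPL 0 is the most privileged */
};

/* Model of RMPADJUST: caller_vmpl sets the permissions that target_vmpl
 * holds on this page. Hardware rejects attempts to adjust a peer or a
 * more-privileged level, and attempts to grant rights the caller lacks. */
bool rmpadjust(struct page_perms *p, int caller_vmpl, int target_vmpl, uint8_t perms) {
    if (target_vmpl <= caller_vmpl)
        return false;                             /* only downward grants */
    if ((perms & p->vmpl[caller_vmpl]) != perms)
        return false;                             /* can't grant what you lack */
    p->vmpl[target_vmpl] = perms;
    return true;
}
```

The subset rule is what makes the SVSM useful as a vTPM host: it can hand the OS read/write pages while withholding, say, execute rights on pages holding its own code.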


Closing

Both architectures converge on the same high-level answers: hardware memory encryption, a page ownership table to block remapping attacks, a protected guest–hypervisor communication channel, and intra-guest privilege separation. The differences are in philosophy — mediation vs delegation, on-CPU firmware vs off-CPU co-processor, address-based shared bit vs PTE-based C-bit.

The naming maps cleanly once you know it: PAMT → RMP, HKID → ASID, VAPIC → AVIC, TDVMCALL → GHCB/VMGEXIT, #VE → #VC. The challenge is in the details — different field layouts, different instruction names, and different assumptions about what the guest must actively participate in vs what the hardware handles transparently.