M-mat-C1: GGX Distribution (D)
Taught: 2026-05-02
Milestone: Phase 1 M-mat — Disney BRDF
Result: PASS on retry, after re-teach via histogram/voting-density angle (first attempt inverted both the math limits and the physical meaning)
Why this concept matters here
M-mat replaces M2's flat Lambert shader with a Disney microfacet BRDF. The very first term of Cook-Torrance — before Fresnel, before geometry — is the normal distribution function D(h). Everything else in the BRDF is a correction or attenuation; D is the actual statistical statement about the surface microstructure. Get this clean and the rest of the BRDF math has a foundation.
The walkthrough
Step 1 — What D actually represents (the statistical statement)
Imagine zooming into a square millimeter of the sphere surface. It looks smooth in screen space, but at the microfacet scale it's a sea of tiny mirror facets, each with its own normal vector. The macro-normal N you sample from scene_normal() is just the area-weighted average of all those microfacet normals.
When light comes from direction L and you view from V, only one microfacet orientation can possibly reflect L toward V: the one whose normal is the half-vector H = normalize(L + V). So the question "how bright is the specular highlight at this pixel?" reduces to "what fraction of microfacet area, in this footprint, has its normal aligned with H?"
That fraction — as a probability density over the unit hemisphere — is exactly D(h). It is in units of inverse steradians. It is not a probability between 0 and 1; it can and does exceed 1 for sharp roughness because it's a density, not a mass.
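To make the "density, not mass" point concrete, here is the same D formula the WGSL in Step 3 implements, transcribed to Python as a quick sanity check (the Python harness is illustrative only, not renderer code):

```python
import math

# Python transcription of the WGSL D_GGX from Step 3, for checking values.
def d_ggx(n_dot_h: float, roughness: float) -> float:
    alpha2 = (roughness * roughness) ** 2   # Disney: alpha = roughness^2
    denom = n_dot_h * n_dot_h * (alpha2 - 1.0) + 1.0
    return alpha2 / (math.pi * denom * denom)

# Sharp surface, half-vector aligned with N: the density is far above 1,
# because it is per steradian, not a probability mass.
print(d_ggx(1.0, 0.1))   # ~3183 sr^-1
```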
Step 2 — Why GGX specifically (not Beckmann, not Phong)
Three NDF candidates historically:
- Phong / Blinn-Phong: cos^n(theta_h) — fast, but the tails fall off too quickly. Real measured BRDFs (the MERL database) show much more energy at grazing half-angles than Phong predicts.
- Beckmann: Gaussian in slope space — physically motivated, but again the tails are too thin compared to measurements.
- GGX (Trowbridge-Reitz): rational function with a 1/x² tail. Walter et al. 2007 showed it matches MERL measurements far better, especially for the "halo" around bright highlights on roughened metals.
The phrase "heavy tails" is the operative one. When you light a roughened steel ball, the highlight isn't a sharp dot fading abruptly to black — there's a soft glow that extends well beyond the geometric core. GGX captures that glow; Beckmann clips it.
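The tail difference is easy to see numerically. A small Python sketch, using the standard slope-space-Gaussian form of Beckmann and feeding the same alpha to both (an assumption made purely for the comparison), evaluates both NDFs at increasing half-angles:

```python
import math

def d_ggx(cos_h: float, alpha: float) -> float:
    a2 = alpha * alpha
    denom = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def d_beckmann(cos_h: float, alpha: float) -> float:
    # Standard Beckmann NDF: exp(-tan^2(theta) / a^2) / (pi a^2 cos^4(theta))
    a2 = alpha * alpha
    c2 = cos_h * cos_h
    tan2 = (1.0 - c2) / c2
    return math.exp(-tan2 / a2) / (math.pi * a2 * c2 * c2)

alpha = 0.2
for deg in (10, 30, 60):
    c = math.cos(math.radians(deg))
    # Near the peak the two are comparable; far out, Beckmann has
    # collapsed to ~0 while GGX still carries visible energy (the glow).
    print(deg, d_ggx(c, alpha), d_beckmann(c, alpha))
```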
Step 3 — The formula and the alpha convention
Here is the form we will use in WGSL:
```wgsl
fn D_GGX(NdotH: f32, roughness: f32) -> f32 {
    let alpha = roughness * roughness;
    let alpha2 = alpha * alpha;
    let denom_inner = NdotH * NdotH * (alpha2 - 1.0) + 1.0;
    return alpha2 / (PI * denom_inner * denom_inner);
}
```

The alpha = roughness * roughness step is the one place artists get burned. In Disney's parameterization, the slider value roughness in [0, 1] is perceptually linear: the difference between 0.1 and 0.2 should look about the same to a human as the difference between 0.7 and 0.8. To make that perceptual linearity hold, the math wants roughness². So everything downstream (the D function, the Smith G, the visible-normal sampling) uses alpha, never raw roughness.
If you forget the square, your sphere at roughness = 0.5 will look way rougher than the artist expected, and the falloff curve will be wrong.
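A hypothetical before/after of that bug, sketched in Python for illustration (this d_ggx takes alpha directly so the missing square can be simulated):

```python
import math

# Takes alpha directly, so we can feed it the correct and the buggy value.
def d_ggx(n_dot_h: float, alpha: float) -> float:
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

roughness = 0.5
correct = d_ggx(1.0, roughness * roughness)  # alpha = roughness^2 (Disney)
buggy   = d_ggx(1.0, roughness)              # forgot the square

print(correct)  # ~5.09: tight, bright peak
print(buggy)    # ~1.27: much lower and broader peak, reads as far rougher
```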
Step 4 — Worked example, concrete values
Take a fragment near the top of the sphere where the half-vector is almost aligned with the surface normal: NdotH = 0.95.
Sphere case — roughness = 0.4:
- alpha = 0.4² = 0.16
- alpha² = 0.0256
- NdotH² = 0.9025
- denom_inner = 0.9025 · (0.0256 − 1) + 1 = −0.8794 + 1 = 0.1206
- denom_inner² = 0.01454
- D = 0.0256 / (π · 0.01454) ≈ 0.560
Pedestal case — roughness = 0.7, same NdotH = 0.95:
- alpha = 0.49, alpha² = 0.2401
- denom_inner = 0.9025 · (0.2401 − 1) + 1 = 0.3142
- denom_inner² = 0.0987
- D = 0.2401 / (π · 0.0987) ≈ 0.774
Counterintuitive: at NdotH = 0.95, the rougher pedestal gives higher D than the smoother sphere. It's correct, and it's the crux of GGX. At NdotH = 1.0 (perfect alignment), the smoother surface peaks much higher (microfacets all crowd around N), but it also drops off faster. NdotH = 0.95 is already past the peak's narrow core for roughness = 0.4. For roughness = 0.7 the peak is broader and lower, so at NdotH = 0.95 we're still near the top.
At the exact peak NdotH = 1.0:
- Sphere: D ≈ 12.43
- Pedestal: D ≈ 1.33
At the exact peak, the sphere is almost 10× brighter than the pedestal. That order-of-magnitude difference is the highlight tightness you see on a polished metal vs. a rough plaster.
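All four numbers above can be reproduced with a short Python transcription of the formula (an illustrative check, not renderer code):

```python
import math

# Python transcription of the WGSL D_GGX, taking the artist-facing roughness.
def d_ggx(n_dot_h: float, roughness: float) -> float:
    alpha2 = (roughness * roughness) ** 2
    denom = n_dot_h * n_dot_h * (alpha2 - 1.0) + 1.0
    return alpha2 / (math.pi * denom * denom)

print(d_ggx(0.95, 0.4))  # ~0.560  off-peak: the rougher pedestal wins...
print(d_ggx(0.95, 0.7))  # ~0.774
print(d_ggx(1.0, 0.4))   # ~12.43  ...but at the peak the smooth sphere wins ~10x
print(d_ggx(1.0, 0.7))   # ~1.33
```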
Step 5 — What the limits mean physically
roughness → 0 (perfect mirror): alpha² → 0, denom_inner = NdotH² · (−1) + 1 = 1 − NdotH². So for any NdotH < 1, denominator stays nonzero but numerator → 0, giving D → 0. At exactly NdotH = 1, you get 0/0 — formally a Dirac delta. Physically: a true mirror reflects only when H is exactly N, no spread.
roughness → 1 (fully rough): alpha = 1, alpha² = 1, denom_inner = NdotH² · 0 + 1 = 1, so D = 1/π for all NdotH. Constant! The microfacet normals are uniformly distributed over the hemisphere — there is no preferred direction. This is the diffuse limit of the specular term.
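Both limits can be checked numerically with the same Python transcription; roughness = 0.001 stands in for the mirror limit, since exactly 0 hits the 0/0 at the peak:

```python
import math

def d_ggx(n_dot_h: float, roughness: float) -> float:
    alpha2 = (roughness * roughness) ** 2
    denom = n_dot_h * n_dot_h * (alpha2 - 1.0) + 1.0
    return alpha2 / (math.pi * denom * denom)

# roughness -> 1: flat at 1/pi for every NdotH (diffuse limit).
for ndoth in (0.2, 0.6, 1.0):
    print(d_ggx(ndoth, 1.0))   # 1/pi ~ 0.3183 each time

# roughness -> 0: vanishes off-peak, blows up at the peak (Dirac-like spike).
print(d_ggx(0.95, 0.001))      # effectively 0
print(d_ggx(1.0, 0.001))       # enormous
```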
The mental model in one sentence
D(h) is the per-steradian density of microfacets oriented to bounce light from L to V, and GGX shapes that density with a fat tail because real measured materials have fat tails — sharp at the peak for low roughness, wide and flat for high roughness, with a smooth interpolation governed by alpha = roughness².
Explain-back question
If you're shading a fragment where the half-vector is exactly aligned with the surface normal (NdotH = 1), what does GGX D return for roughness = 0 (mirror) vs roughness = 1 (fully rough)? Walk through the formula — what does the difference mean physically?
First attempt — FAIL (inverted the limits)
So, for half-vector =1, which is exactly aligned to the surface normal. For a pure mirror, D will be 0, meaning it will be scattered fully. For full rought, D will 1, meaning everything will be reflected back?
Wrong on both axes:
- Math: mirror at NdotH=1 is 0/0 (indeterminate, Dirac delta), not 0. Rough is 1/π, not 1.
- Physics: mirror = concentrated at one direction (NOT scattered); rough = uniformly spread (NOT "all reflected back"). The user inverted the entire physical picture.
Re-teach via histogram-of-voting-microfacets angle
Reframed D as a histogram of microfacet "votes":
- 1000 microfacets, each casts one vote for its tilt direction.
- Polished mirror: all 1000 vote for the same direction (N) → tall narrow spike → Dirac delta.
- Fully rough: 1000 votes spread uniformly → flat histogram at 1/π everywhere.
Drove home that scattering means SPREAD OUT, not "reflected back" — the misconception in the first attempt.
The cosine-weighted integral of D over the hemisphere, ∫ D(h) (N·H) dω, is always 1; what roughness changes is how the same total microfacet area is distributed across directions.
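That normalization can be verified numerically. The sketch below midpoint-integrates D(θ)·cosθ over the hemisphere, assuming the Disney alpha = roughness² convention from Step 3:

```python
import math

def d_ggx(cos_t: float, roughness: float) -> float:
    alpha2 = (roughness * roughness) ** 2
    denom = cos_t * cos_t * (alpha2 - 1.0) + 1.0
    return alpha2 / (math.pi * denom * denom)

def projected_integral(roughness: float, n: int = 100_000) -> float:
    # 2*pi * integral_0^{pi/2} D(theta) * cos(theta) * sin(theta) dtheta,
    # via the midpoint rule; sin(theta) is the solid-angle Jacobian.
    dt = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += d_ggx(math.cos(t), roughness) * math.cos(t) * math.sin(t) * dt
    return 2.0 * math.pi * total

print(projected_integral(0.3))  # ~1.0 regardless of roughness
print(projected_integral(0.8))  # ~1.0
```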
Re-test prompt
A polished steel ball is lit by a single bright bulb directly overhead. You take two photos: one of a polished version (roughness = 0.05) and one of a sandblasted version (roughness = 0.95). Describe what each photo looks like — specifically, where the highlight is, how big it is, and how bright. Tie your answer to what D's histogram shape is doing differently in each case.
User's answer (PASS)
The ball with polished version has more specular highlight, because the spread is concentrated at exactly the half-angle. For a sanblaseted version the D is uniform since it is uniformaly scattered. For the first one D = NOT DETERMINED. for second D = 1/pi!
Judgment
PASS. Got:
- Polished → concentrated highlight at the half-angle (correct: Dirac spike).
- Sandblasted → uniform scatter (correct: flat histogram at 1/π).
- Math limits now correct: 0/0 indeterminate vs 1/π.
The "concentrated at exactly the half-angle" phrasing shows the histogram framing landed. He didn't elaborate on what "indeterminate" means physically (Dirac delta with infinite peak height), but the key insight stuck: it's not zero, it's a singularity. Good enough — Dirac depth would be over-teaching for the WGSL implementation, where the alpha-clamp prevents the singularity from ever materializing.