
PBR Part 3: The Radiance Integral

Took a few more days than I expected, but now let's jump right into it. This is a direct continuation of the last post about Image-Based Lighting.

So, we need a PMREM (Prefiltered Mipmapped Radiance Environment Map) for the indirect specular lighting. But the CubeMapGen tool that we've been using doesn't support the Cook-Torrance microfacet model we're using, so the math has to be done manually.

What we need to do is solve an equation called the radiance integral, shown below:
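Written out the way it appears in the Karis notes linked below, it's the specular half of the rendering equation, where L_i is the incoming radiance from direction l, f is the specular BRDF, v is the view direction, and theta_l is the angle between l and the surface normal:

\[ \int_{\Omega} L_i(\mathbf{l}) \, f(\mathbf{l}, \mathbf{v}) \, \cos\theta_l \; d\mathbf{l} \]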

This can be solved with a technique called Importance Sampling. If you've ever taken Statistics, that term might sound familiar. Very simply, instead of picking sample directions uniformly, we draw them from a distribution that concentrates samples where the integrand is largest (here, around the GGX specular lobe). In the code, we use this to get a manageable, low-variance estimate of the integral.
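In our case the estimator (again, straight from the Karis notes rather than anything novel on my part) weights each sample by the probability density p(l_k, v) it was drawn with:

\[ \int_{\Omega} L_i(\mathbf{l}) \, f(\mathbf{l}, \mathbf{v}) \, \cos\theta_l \; d\mathbf{l} \;\approx\; \frac{1}{N} \sum_{k=1}^{N} \frac{L_i(\mathbf{l}_k) \, f(\mathbf{l}_k, \mathbf{v}) \, \cos\theta_{l_k}}{p(\mathbf{l}_k, \mathbf{v})} \]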

Luckily, the code for this technique is already provided for us! Brian Karis from Epic Games has documented the process in his SIGGRAPH course notes, "Real Shading in Unreal Engine 4". The pages relevant to my project are pages 4-8.

So, here is the code for the Importance Sampling, which is stored in a file I call "CommonVars.hlsl" (in this file, the BRDF equations are also stored, as well as some constants):

float3 ImportanceSampleGGX(float2 Xi, float Roughness, float3 N)
{
    float a = Roughness * Roughness; // DISNEY'S ROUGHNESS [see Burley'12 siggraph]

    float Phi = 2 * Pi * Xi.x;
    float CosTheta = sqrt((1 - Xi.y) / (1 + (a * a - 1) * Xi.y));
    float SinTheta = sqrt(1 - CosTheta * CosTheta);

    float3 H;
    H.x = SinTheta * cos(Phi);
    H.y = SinTheta * sin(Phi);
    H.z = CosTheta;

    float3 UpVector = abs(N.z) < 0.999 ? float3(0, 0, 1) : float3(1, 0, 0);
    float3 TangentX = normalize(cross(UpVector, N));
    float3 TangentY = cross(N, TangentX);

    // Tangent to world space
    return TangentX * H.x + TangentY * H.y + N * H.z;
}

Note: It is important to keep the roughness squared (a = Roughness * Roughness) to be consistent with our earlier math.
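For reference, this is the GGX (Trowbridge-Reitz) distribution that ImportanceSampleGGX is drawing samples from, which is why a has to stay squared:

\[ D(\mathbf{h}) = \frac{\alpha^2}{\pi \left( (\mathbf{n} \cdot \mathbf{h})^2 (\alpha^2 - 1) + 1 \right)^2}, \qquad \alpha = \text{Roughness}^2 \]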

Note 2: The below method, which does the actual sampling, would be in a Pixel Shader.

float3 SpecularIBL(float3 SpecularColor, float Roughness, float3 N, float3 V)
{
    float3 SpecularLighting = 0;
    const uint NumSamples = 1024; // a generally good number to aim for, and a multiple of 16

    for (uint i = 0; i < NumSamples; i++)
    {
        float2 Xi = Hammersley(i, NumSamples); // Hammersley is defined in CommonVars
        float3 H = ImportanceSampleGGX(Xi, Roughness, N);
        float3 L = 2 * dot(V, H) * H - V;

        // N dot V, N dot L, etc.
        float NoV = saturate(dot(N, V));
        float NoL = saturate(dot(N, L));
        float NoH = saturate(dot(N, H));
        float VoH = saturate(dot(V, H));

        if (NoL > 0)
        {
            // EnvMap is a TextureCube object
            // EnvMapSampler is a predefined Sampler State
            float3 SampleColor = EnvMap.SampleLevel(EnvMapSampler, L, 0).rgb;

            float G = G_Smith(Roughness, NoV, NoL); // Smith from the BRDF...it all comes together
            float Fc = pow(1 - VoH, 5);
            float3 F = (1 - Fc) * SpecularColor + Fc;

            // Incident Light = SampleColor * NoL
            // Microfacet Specular = D*G*F/(4*NoL*NoV)
            // pdf = D * NoH / (4 * VoH)
            SpecularLighting += SampleColor * F * G * VoH / (NoH * NoV);
        }
    }

    return SpecularLighting / NumSamples;
}
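One more piece the snippet above relies on: Hammersley(), which generates the quasi-random sample points, also lives in CommonVars.hlsl. I won't swear this is character-for-character what's in my repo, but the standard radical-inverse implementation looks like this:

// Van der Corput radical inverse via bit reversal, paired with i/N
float2 Hammersley(uint i, uint N)
{
    uint bits = i;
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    float radicalInverse = float(bits) * 2.3283064365386963e-10f; // 1 / 2^32
    return float2(float(i) / float(N), radicalInverse);
}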

This isn't the end of the journey, however. Even with importance sampling, many samples still need to be taken to achieve the result we're looking for. The sample count can be reduced by using mip maps, but it still needs to be 16 or greater (preferably greater) for acceptable quality. Epic blends between many environment maps per pixel for local reflections, so they can only afford a single sample per map.

Not only does Epic do this, but so do many other companies in the industry; thus, for most graphics programmers, more than one sample per map is infeasible. HOWEVER, if the environment map doesn't change, parts of the radiance integral can be pre-calculated.

So, to solve our problem, enter: The Split Sum Approximation!

The integral is split into two sums, and each one is pre-calculated separately (as such, the above "SpecularIBL()" method does not appear in the final code).
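Written out (this is the form given in the Karis notes), the approximation factors the single Monte Carlo sum into a product of two sums that can each be precomputed:

\[ \frac{1}{N} \sum_{k=1}^{N} \frac{L_i(\mathbf{l}_k) \, f(\mathbf{l}_k, \mathbf{v}) \, \cos\theta_{l_k}}{p(\mathbf{l}_k, \mathbf{v})} \;\approx\; \left( \frac{1}{N} \sum_{k=1}^{N} L_i(\mathbf{l}_k) \right) \left( \frac{1}{N} \sum_{k=1}^{N} \frac{f(\mathbf{l}_k, \mathbf{v}) \, \cos\theta_{l_k}}{p(\mathbf{l}_k, \mathbf{v})} \right) \]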

The first sum is what generates the PMREM. We'll pre-filter the environment map for several roughness values and store the results in the mip levels of a cubemap, one roughness value per mip level (thus, PMREM).

Now, I've made separate Pixel and Vertex Shaders to handle these maps (RadMapVS and RadMapPS). The following is from RadMapPS, and is adapted from the Epic notes mentioned earlier in this post:

float3 PrefilterEnvMap(float Roughness, float3 R, TextureCube EnvMap)
{
    // weighting is not present in the above image, the sum was left in simpler form
    float TotalWeight = 0.0000001f;

    // assume viewing angle to the surface is 0
    // with Microfacet, distribution changes based on viewing angle to the surface
    float3 N = R;
    float3 V = R;

    float3 PrefilteredColor = 0;
    const uint NumSamples = 1024;

    for (uint i = 0; i < NumSamples; i++)
    {
        // does this math look familiar? :)
        float2 Xi = Hammersley(i, NumSamples);
        float3 H = ImportanceSampleGGX(Xi, Roughness, N);
        float3 L = 2 * dot(V, H) * H - V;
        float NoL = saturate(dot(N, L));

        if (NoL > 0)
        {
            // mip level of 0 for desired blurriness, so biggest map is used
            PrefilteredColor += EnvMap.SampleLevel(SamplerAnisotropic, L, 0).rgb * NoL;
            TotalWeight += NoL;
        }
    }

    return PrefilteredColor / TotalWeight;
}
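To show how that function actually gets driven, here's a minimal sketch of what a RadMapPS main() can look like: each cube face of each mip is rendered in its own pass, with that mip's roughness fed in through a constant buffer. The constant buffer, register slots, and interpolant names here are placeholders for illustration, not necessarily what's in my repo:

// Sketch only: resource names and register slots are illustrative.
// (PrefilterEnvMap is the function shown above, defined earlier in this file.)
cbuffer PrefilterParams : register(b0)
{
    float roughness;   // set per mip level on the CPU side, e.g. mip / (numMips - 1)
};

TextureCube  EnvMap             : register(t0);
SamplerState SamplerAnisotropic : register(s0);

struct VertexToPixel
{
    float4 position : SV_POSITION;
    float3 worldDir : TEXCOORD0;   // direction through this texel of the cube face
};

float4 main(VertexToPixel input) : SV_TARGET
{
    // When pre-filtering, the per-texel direction serves as both N and R
    float3 R = normalize(input.worldDir);
    return float4(PrefilterEnvMap(roughness, R, EnvMap), 1.0f);
}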

The second sum integrates our earlier BRDF work with the map. To do this, we make a 2D Look-Up Texture (2D LUT). It takes roughness and cos(θ)v (i.e., n·v) as inputs, and outputs a scale and a bias to apply to the Fresnel F0 (the specular color); those two outputs are exactly the values stored in the texture. Both are within the range [0, 1]. Here is a visual of what we can expect to get:

According to Brian Karis in the Epic article: "...we discovered both existing and concurrent research that led to almost identical solutions to ours". So, the integration method below, that will solve for the second sum, is (currently) relatively standard in the field:
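Concretely, the pair that IntegrateBRDF() below produces is the (scale, bias) in this factorization from the Karis notes, where F0 is the specular color:

\[ \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \, \cos\theta_l \; d\mathbf{l} \;\approx\; F_0 \cdot A(\text{Roughness}, \, \mathbf{n}\cdot\mathbf{v}) \; + \; B(\text{Roughness}, \, \mathbf{n}\cdot\mathbf{v}) \]

A is the scale and B is the bias, and both depend only on roughness and n·v, which is exactly why a 2D LUT works.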

Note: I made a VS and a PS for the BRDF; the following excerpt is from "BrdfPS.hlsl". However, since it's the only method other than main, you could put this in any PS you please.

float2 IntegrateBRDF(float Roughness, float NoV)
{
    float3 V;
    V.x = sqrt(1.0f - NoV * NoV); // Sine
    V.y = 0;
    V.z = NoV;                    // Cosine

    float A = 0;
    float B = 0;

    float3 N = float3(0.0f, 0.0f, 1.0f);
    const uint NumSamples = 1024;

    for (uint i = 0; i < NumSamples; i++)
    {
        float2 Xi = Hammersley(i, NumSamples);
        float3 H = ImportanceSampleGGX(Xi, Roughness, N);
        float3 L = 2.0f * dot(V, H) * H - V;

        float NoL = saturate(L.z);
        float NoH = saturate(H.z);
        float VoH = saturate(dot(V, H));

        if (NoL > 0)
        {
            float G = G_Smith(Roughness, NoV, NoL);
            float G_Vis = G * VoH / (NoH * NoV);
            float Fc = pow(1 - VoH, 5);

            A += (1 - Fc) * G_Vis;
            B += Fc * G_Vis;
        }
    }

    return float2(A, B) / NumSamples;
}
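For completeness, here's a minimal sketch of the main() that bakes the LUT. The interpolant struct and the UV-to-input mapping are my assumptions for illustration (I picked them to line up with the float2(roughness, nDotV) lookup used later), not necessarily line-for-line what BrdfPS.hlsl does:

// Sketch only: a full-screen pass writing the 2D LUT.
struct VertexToPixel
{
    float4 position : SV_POSITION;
    float2 uv       : TEXCOORD0;
};

float4 main(VertexToPixel input) : SV_TARGET
{
    // u maps to roughness and v maps to n.v, matching the
    // float2(roughness, nDotV) lookup in the main pixel shader
    float Roughness = input.uv.x;
    float NoV       = input.uv.y;

    float2 scaleBias = IntegrateBRDF(Roughness, NoV);

    // Scale in red, bias in green
    return float4(scaleBias, 0.0f, 1.0f);
}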

Now that we finally have both pre-calculated sums, it's time to multiply them! Back in our main Pixel Shader:

float3 ApproximateSpecularIBL(float3 specularAlbedo, float3 reflectDir, float nDotV)
{
    // Mip level is in [0, 6] range and roughness is [0, 1].
    float mipIndex = roughness * 6;

    float3 prefilteredColor = radianceMap.SampleLevel(SamplerAnisotropic, reflectDir, mipIndex).rgb;
    float3 environmentBRDF = integrationMap.Sample(SamplerAnisotropic, float2(roughness, nDotV)).rgb;

    return prefilteredColor * (specularAlbedo * environmentBRDF.x + environmentBRDF.y);
}
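To tie it together, the call site in the lighting code ends up looking something like this (the variable names here are placeholders, not necessarily the ones in my shaders):

// Illustrative call site; worldNormal, dirToCamera, specularAlbedo are placeholders.
float3 N = normalize(worldNormal);
float3 V = normalize(dirToCamera);   // surface-to-camera direction
float3 R = reflect(-V, N);           // reflection vector for the radiance map lookup
float nDotV = saturate(dot(N, V));

float3 indirectSpecular = ApproximateSpecularIBL(specularAlbedo, R, nDotV);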

So, in essence, I now have all the necessary math for PBR!

Here is the github link that coincides with the blogposts. The relevant files to peruse would be the .HLSL shader files: https://github.com/Milhau5/DX11Starter-NI

Major credit goes to this tutorial by Dirkie Kunz. It touches only briefly on each topic involved in PBR, so I still had a lot of other research to do. These posts were me expanding on my findings, as well as making the process more digestible for those interested. Hope you enjoyed the read.
