February 2, 2019

2019-02-02 03:00

I’m sorry I haven’t given any updates for a while, but really, not much has happened. I gave up on the laptop idea, then on the pre-built desktop idea, and decided to build my own computer. I did tons of research and have landed on this list:
  • Microsoft Windows 10 Home
  • Intel i9-9900K
  • EVGA Z390 FTW
  • Noctua NH-D15
  • Samsung 970 Pro 512GB
  • 4 x Ballistix Sport AT 16GB DDR4 2666MHz
  • Zotac RTX 2080 Ti Blower
  • EVGA 1000W G3
  • Verbatim Illuminated Keyboard
  • Logitech G203 Prodigy
  • BenQ PD2700U
I think that’s everything. Right now I’m trying to decide on a case and am comparing different options. The only item I have gotten so far is the BenQ PD2700U monitor. It is still at the store awaiting pick-up, and I won’t be able to use it until I build the computer anyway.

The reason I so seldom post here is that it is so much easier and quicker to drop a line on Twitter. That is where I post most of my updates. You can find me @MMKMatichuk. Follow me if you want. I have a bunch of followers already and I don’t even know why. All I talk about is Blender and building computers. Well, almost. I do sometimes affirm my support for Christian values and President Trump. Sometimes, I almost forget about Trudeau. That’s the way Trump helps Canadians. True Canadians. Canadians that aren’t just leftist zombies.

But that is beside the point. The point is, I’m ready to build this computer already! C’mon!

December 20, 2018

2018-12-20 20:30

Well, I didn’t get the equipment last month as I had hoped. The laptops I was interested in have now all been removed from staples.ca, so it may be a fair while before I get my hands on one.

However, I don’t think it will be long before I own the BenQ EW3270U monitor. I plan to purchase it before the end of the year, as it is now on sale again.

I also plan to buy the Wacom Intuos Pro Medium graphics tablet, but that might be early next year.

The laptop I want is still the HP Omen 17-an188nr, but as I cannot purchase it in Canada, I will have to buy it through relatives in the United States. That may be an advantage anyway, as it will cost much less (even after dollar conversion).

What do you think? If you would like to recommend equipment or discuss my plans with me, I welcome you to do so. I really have no experience, and I’m trying to build a content creation studio! So if you have experience and can give me advice, I would appreciate it. I’ll tell you when I get my monitor! (though I won’t be able to use it until I also get a computer).

November 22, 2018

PBR and the “Uber” Shader

A while back I promised to post my research essay when I finished it; but, being a Master Procrastinator, I did not. Please forgive me, and allow me to update you on some news. First, last Monday evening I got an email granting me Goodreads Librarian status. I have already made some improvements regarding Dr. James Hogg Hunter and his books since I began reading one of them, The Hammer of God, with my father. It was published in 1965 and belonged to my grandmother. The other piece of news is that I think I will very soon be making a couple of big purchases. I’m planning to buy an HP 17-an188nr laptop computer and a BenQ EW3270U monitor. We’ll have to wait and see what happens. By the way, my aunt just purchased an HP 17-an188nr and I’m waiting for her to tell me what she thinks of it. But she’s currently on holiday and it might be a few days before she tries it out. Anyway, here is that long-awaited essay. I hope you learn something from it.

PBR and the “Uber” Shader

Micah Morris Kent Matichuk


If you’ve been keeping up with the computer graphics world over the past few years, you’ve probably heard of PBR. But what is physically-based rendering? And what should you, as an artist, be doing about it?

As you may know, the ideal of photorealism has long been a prime driver of computer graphics development (Sillion 1; Burda 17): engineers and artists have always been trying to use computers to create believable visual imagery that would be too difficult to create in any other way. But beyond this common goal, engineers and artists have often found themselves at odds, because “physically based shading models fail to provide intuitive artist controls” (Sadeghi et al. 1) and “most . . . shading models are parameterized using the physical properties of the material, rather than parameters that directly relate to its visual appearance. Such models tend to be unintuitive to control for artists, making it difficult to quickly obtain a desired look” (Chiang et al. 1). Engineers have closely observed the interaction of light with real-world materials, and proposed scientific theories and mathematical models based on those observations. Artists, on the other hand, have wanted a simple, logical way to create any type of material they need to compose their artwork. While engineers could provide shaders with inputs such as index of refraction and absorption coefficient, giving too much direct control over intrinsic physical properties only caused artists confusion. “In the physically based world, the appearance of materials is being determined by intrinsic properties. . . . These physically based properties have complex and unintuitive effects on the final . . . appearance. . . . This makes it very hard even for trained artists, to guess the shader parameter values in order to get a desirable appearance. . . .” (Sadeghi et al. 2).

In the 1960s and 1970s, computer graphics shading models had to be simplified to allow them to run efficiently on the primitive computers available at the time. But in later years—the 1990s and early 2000s—these models were developed more fully to the point where physically-based shading was a reality (Russell). But artistic control remained an issue, and animation studios relied on ad-hoc shading methods, simplifying the real-world properties to case-specific controls that artists could use to create a limited set of materials.

The main issue with these ad-hoc methods was the behavior of a given material under varying lighting setups (Burda 17). An artist could tweak a material’s properties until it looked plausible in one lighting scenario (e.g., indoors), but when the material was used in a different scene (e.g., outdoors), it had to be adjusted all over again. This was because the parameters the artist was adjusting had more to do with appearance than with the actual physical properties of the material. The artist would be adjusting, for example, the amount of reflection visible in a surface, not the properties that cause the reflection. This was an especially serious problem in gaming and interactive applications, where characters and objects move around a virtual space in realtime. Giving objects different materials for different environments was, understandably, a nightmare.

The beginning of the end for this dilemma came thanks to Walt Disney Animation Studios. In 2010, during the production of Tangled, a film requiring intensive hair rendering, engineers determined that the old ad-hoc shading models designed to give artists intuitive and simple control of properties “lack[ed] the richness seen in real hair” (Sadeghi et al. 1). Developers from Walt Disney Animation Studios and researchers from the University of California, San Diego, worked on a new model that would give artists intuitive control of intrinsic physical properties: in other words, they set out to design a completely new method of shader control. They were successful in their quest and were able to implement the new shader in the film’s production pipeline.

But they soon went even further. Brent Burley’s 2012 report begins,
Following our success with physically-based hair shading on Tangled, we began considering physically-based shading models for a broader range of materials. With the physically-based hair model, we were able to achieve a great degree of visual richness while maintaining artistic control. However, it proved challenging to integrate the lighting of the hair with the rest of the scene which had still used traditional ‘ad-hoc’ shading models and punctual lights. For subsequent films we wanted to increase the richness of all of our materials while making lighting responses more consistent between materials and environments and also wanted to improve artist productivity through the use of simplified controls. (1)
But there were difficulties. “When we began our investigation it wasn’t obvious which models to use or even how physically-based we wanted to be. Should we be perfectly energy conserving? Should we favor physical parameters like index-of-refraction?” (1). They started their research by scientifically analyzing a freely available data set from Mitsubishi Electric Research Laboratories, “a set of 100 isotropic BRDF material samples . . . [capturing] a wide range of materials including paints, woods, metals, fabric, stone, rubber, plastic, and other synthetic materials” (3). To compare measured data and shading models, they developed a special tool, which also helped artists to gain a better understanding of how the materials worked (3-4). They analyzed the MERL image slices for multiple different properties, took note of anomalies, and contrasted previous models with the measured data (5-12). After their research, they began to design a new bidirectional reflectance distribution function, or BRDF; but artists were anxious to caution them that the shader should be “art-directable and not necessarily physically correct” (12). As a compromise, they determined upon a set of five principles the shader should follow:

1. Parameters should be intuitive, not physical.

2. The number of parameters should be kept to a minimum.

3. Parameters should be normalized (zero to one) over their most useful values.

4. It should be possible to push parameters beyond their normalized range when required.

5. Every combination of parameter values should be as believable and robust as possible.

They then “thoroughly debated the addition of each parameter” (12), and ended up with one color and ten scalar parameters:

baseColor: The surface color.

subsurface: Mixes a subsurface scattering approximation.

metallic: Blends between two different BRDF models: dielectric and metal.

specular: Controls the base reflectivity of the dielectric model; a remapped IOR value.

specularTint: Adds baseColor to the base reflectivity; not physically accurate, added for artistic control.

roughness: Controls the microsurface component of both dielectric and metallic models.

anisotropic: Controls distance of reflection stretching.

sheen: Adds additional grazing reflection.

sheenTint: Adds baseColor to the sheen component.

clearcoat: Adds an additional layer of reflection to the surface.

clearcoatGloss: A reverse roughness parameter exclusively for the clearcoat reflection layer.
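The parameter set above can be sketched as a simple container. This is only an illustrative Python sketch, not Disney’s implementation; the default values here are my own assumptions, and the clamping helper just encodes principles 3 and 4 from the list above.

```python
from dataclasses import dataclass

@dataclass
class PrincipledBRDFParams:
    """Field names follow Burley's 2012 parameter list; defaults are
    illustrative assumptions, not values from the paper."""
    base_color: tuple = (0.8, 0.8, 0.8)  # the one color parameter
    subsurface: float = 0.0       # blend toward a subsurface approximation
    metallic: float = 0.0         # blend between dielectric and metal models
    specular: float = 0.5         # remapped IOR controlling base reflectivity
    specular_tint: float = 0.0    # tint base reflectivity toward base_color
    roughness: float = 0.5        # microsurface roughness for both models
    anisotropic: float = 0.0      # stretching of the specular reflection
    sheen: float = 0.0            # extra grazing-angle reflection
    sheen_tint: float = 0.5       # tint sheen toward base_color
    clearcoat: float = 0.0        # additional reflection layer on top
    clearcoat_gloss: float = 1.0  # inverse roughness of the clearcoat layer

    def clamped(self):
        """Normalize scalars to [0, 1] (principle 3); note that principle 4
        permits pushing past this range when artists need to."""
        def c(x):
            return min(max(x, 0.0), 1.0)
        return PrincipledBRDFParams(
            self.base_color,
            *(c(getattr(self, f)) for f in (
                "subsurface", "metallic", "specular", "specular_tint",
                "roughness", "anisotropic", "sheen", "sheen_tint",
                "clearcoat", "clearcoat_gloss")))
```

Note how the design principles show up directly: few parameters, all normalized scalars, and no raw physical quantities like index of refraction exposed to the artist.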

This new shader, called “Principled Layers,” was used to create almost every material in Wreck-It Ralph, except for hair, which still used the Tangled shader (17). To make it work, though, the artists had to switch from using punctual lights (light sources with no physical size) to using sampled area lights and image-based lighting, simulating real-world light sources (17). Thankfully, artists welcomed this change, as well as many other adaptations of their workflow (18-19).

At the end of his paper, Burley acknowledges one major issue with the shader: the lack of an intuitive subsurface model. He describes the model used in the shader this way:
Our subsurface parameter blends between the base diffuse shape and one inspired by the Hanrahan-Krueger subsurface BRDF. This is useful for giving a subsurface appearance on distant objects and on objects where the average scattering path length is small; it’s not however a substitute for doing full subsurface transport as it won’t bleed light into the shadows or through the surface. (14)
This also meant the shader could not create transparent, or “transmissive,” materials like glass or water. This shortcoming caused problems later on, as we read in Burley’s 2015 report:
For our next film, Frozen, we continued to use this BRDF unmodified, but effects like refraction and subsurface scattering were computed separately from the BRDF, and indirect illumination was approximated using point clouds. All of these effects were combined through ad hoc shading in an additive way. (1)
So the team was again called upon to conceive a solution.
Starting with Big Hero 6 in 2014, we switched from ad hoc lighting and shading to path-traced global illumination where refraction, subsurface scattering, and indirect illumination were all integrated in a single physically based framework. With path-traced global illumination, energy conservation becomes critical as non-energy conserving materials may amplify light and prevent the image from converging. The additive nature of ad hoc shading is generally not energy conserving given that the various components are redundant representations of the refracted energy. In order to ensure energy conservation, we extended our BRDF to a unified BSDF model where all such effects are accounted for in a consistent, energy-conserving way. (1)
Whereas the first principled shader had only an approximate internal scattering model, the new principled shader received full support for transparency, translucency, and path-traced subsurface scattering, allowing it to be used for glass, fluids, and ice, as well as for much more realistic snow, wax, marble, and skin, among many other materials. For the new subsurface model, the developers initially devised a remarkably accurate approximation (7); but though it worked well in many instances, it could not cope with highly detailed surfaces full of cracks and crevices, due to its basic assumption that the surface is infinitely flat (10). At this point the developers found that path-traced subsurface scattering was practical and eliminated the artifacts. For refraction they found it necessary to implement the full Fresnel equation rather than the commonly-used Fresnel-Schlick approximation, to avoid unreasonably bright specular reflections due to error (13).

As part of specular transmission through solids, they included volumetric absorption controlled by two artistically intuitive parameters, transmittance color and atDistance (5-6). Transmittance color sets the tint of light passing through the solid, and atDistance controls the distance light must travel through the solid before that tint is fully reached. The specular BSDF (specular transmission, or the refraction and transparency factor) is blended with the dielectric/subsurface model by the specTrans parameter (3).

For practical reasons, the developers split the shader into two separate models: one for solids (shapes enclosing a volume), and another for thin surfaces (a mesh with no thickness) (3, 12). Both provide a specTrans and an ior parameter (Presentation Slides 52), but they behave a little differently. Specular transmission describes light traveling through a solid, but it only affects light as it passes through the surface; for solids it therefore acts twice, as the light enters and as it exits, while for thin surfaces it can act only once, at a single point (12). The ior parameter applies to solids by controlling the amount of light bending as the light enters and exits the interior; on thin surfaces, however, the bending is assumed to approximately cancel (12), so ior causes no bending at all. Instead, it affects a transmission-only roughness value that scatters the light, resulting in a blurry image. Additional parameters for solids include scatrDistR/G/B, which control the path-traced subsurface scattering separately for each color channel: red, green, and blue (7; Presentation Slides 52). For thin surfaces, the additional parameters are diffTrans, which transfers diffuse light directly to the back side of the surface, imitating translucency; and flatness, which integrates the old subsurface approximation from the first principled BRDF as an alternative (12).
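The transmittance color and atDistance pairing has a natural reading via the Beer-Lambert law: assume the transmittance color is the fraction of light that survives a path of exactly atDistance through the medium, and solve for the absorption coefficient. This is a sketch of that mapping, not a transcription of Disney’s renderer, which may remap these controls differently.

```python
import math

def absorption_coefficient(transmittance, at_distance):
    """Per-channel absorption coefficient via Beer's law:
    T = exp(-sigma * d)  =>  sigma = -ln(T) / d.
    Assumes 'transmittance' is the surviving fraction at at_distance."""
    return tuple(-math.log(max(t, 1e-6)) / at_distance
                 for t in transmittance)

def transmit(transmittance, at_distance, path_length):
    """Fraction of light (per channel) surviving a path of path_length
    through the absorbing medium."""
    sigma = absorption_coefficient(transmittance, at_distance)
    return tuple(math.exp(-s * path_length) for s in sigma)
```

With this mapping, a path of exactly atDistance reproduces the transmittance color, shorter paths are tinted less, and longer paths are tinted more, which is what makes the two parameters intuitive for artists.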

The resulting shader was more powerful and versatile than ever before; however, even then Burley acknowledged that there was still more to be done.
. . . we would like to emphasize that we believe the value of our model comes more from our unified approach—having a single model with a intuitive, minimal set of parameters that can be used for nearly all materials—and less from the specific constituent parts. We will continue to improve the robustness or plausibility of the model and extend it as our needs dictate, but we remain committed to the generality and simplicity of our approach. (17)
It is because of this approach that the shader came to be known informally as the “uber” shader; in other words, “the shader to end all shaders.”

These developments took the realtime interactive rendering world by storm and were first introduced to mainstream gaming in 2013 through games using Unreal Engine 3 and CryEngine (Burda 21). For the first time, designers were able to create realistic materials that would respond correctly in all environments (Karis), and light their scenes with environment maps. That truly was a game changer!

For the production of Zootopia, released in 2016, Burley and his crew replaced the old intuitive PBR hair shader from Tangled. They address the change in their 2015 report this way:
Sadeghi et al. presented an artist friendly hair shading model explicitly based on the single-scattering model of Marschner et al. and the multiple scattering approximation of Dual Scattering. They consider the same problem of artist controllability as us, however, their method concerns little about the physical constraint of the underlying models and explicitly relies on Dual Scattering to obtain intuitive controls in the presence of multiple scattering in hair. In contrast, our approach of intuitive parameterization is independent of the multiple-scattering method used and retains the physically based properties of the single-scattering shading model. (2-3)
As time goes on, shaders for meshes and strands continue to develop independently. Given the differing nature of meshes and strands, the two shaders will most likely never be merged.

Now that I have explained what PBR means and chronicled the development of the “uber” shader, I will finish the paper with an overview of the basic theory behind physically-based shading methods.

First, it is assumed that every material is either metal or dielectric (non-metal). When a light ray hits any object, it either penetrates into the object (diffusion) or bounces off the surface in a single direction (specular reflection). The proportion of light that is specularly reflected varies with the angle of incidence. When the angle of incidence is 0° (the surface of the object directly faces the viewer), the amount of light reflected is called the base reflectance. This is usually low for dielectrics and high for metals. As the angle of incidence approaches 90° (the surface is seen edge-on, at a grazing angle), the specular reflectance increases according to the Fresnel principle until it reaches total reflectance. This is true for all materials. Dielectrics reflect all wavelengths of light by the same amount, so their reflections are never tinted. Metals, however, may reflect different wavelengths by different amounts, so their reflections are tinted, as can be seen in copper and gold, for instance.
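This angle dependence is commonly approximated in realtime renderers with Schlick’s formula. A minimal sketch follows; the 0.04 base reflectance used in the example is a typical dielectric value, not a universal constant.

```python
def fresnel_schlick(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance.
    cos_theta: cosine of the angle of incidence
               (1.0 = surface faces the viewer, 0.0 = grazing).
    f0: base reflectance at normal incidence."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# A typical dielectric reflects about 4% head-on, but reflectance
# climbs toward total reflectance as the angle approaches grazing.
head_on = fresnel_schlick(1.0, 0.04)
grazing = fresnel_schlick(0.0, 0.04)
```

Evaluating at the two extremes reproduces the text: at normal incidence the result is exactly the base reflectance, and at grazing incidence it rises to total reflectance for any material.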

One of the most important elements of PBR, energy conservation, demands that any light not reflected must be diffused. Diffusion differs between metal and dielectric materials. In dielectrics, the light that penetrates strikes particles within the object, which may absorb it as heat or scatter it. Eventually the light may come back out of the object, at which point it is called diffuse light. If the particles within the object absorb all wavelengths, no light bounces out again and the object appears black. If they absorb no light, the object appears white. But if they absorb only select wavelengths, the remaining wavelengths bounce out and give the object its apparent color. Metals, however, absorb all penetrating light as heat and reflect no diffuse light at all.
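The energy-conservation bookkeeping can be sketched in one line: whatever fraction is not specularly reflected is available for diffusion, and a metallic material diffuses nothing. This is an illustrative simplification of the split, not any particular renderer’s code.

```python
def diffuse_weight(fresnel_reflectance, metallic):
    """Fraction of incoming energy available for diffusion.
    fresnel_reflectance: fraction specularly reflected (0..1).
    metallic: 0.0 for a dielectric, 1.0 for a metal, which absorbs
    all penetrating light as heat and diffuses none of it."""
    return (1.0 - fresnel_reflectance) * (1.0 - metallic)

# A typical dielectric reflecting ~4% head-on leaves ~96% of the
# energy for diffusion; a pure metal leaves none.
```

The key point is that the specular and diffuse fractions sum to at most the incoming energy, which is exactly what prevents non-converging, light-amplifying materials in a path tracer.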

If a dielectric allows diffuse light to scatter freely enough, as in skin or wax, the scattering can actually change the look of the object and requires special attention. These materials are said to exhibit subsurface scattering because the diffuse light travels a significant distance. If the object is thin enough and the scattering distance great enough, the light may scatter all the way to the other side of the object; when this happens, the object is translucent. Sometimes when a light ray hits the surface of an object, it may bend (refraction, according to Snell’s law) as it passes into the new medium but otherwise maintain its direction all the way through, until it passes out the other side, bending again. This is called specular transmission, and it can be seen in glass, clear plastic, or any other transparent material. If certain wavelengths are absorbed as the light passes through the solid, the result is colored glass or tinted plastic.
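The bending at each surface crossing follows Snell’s law, n1·sin(θ1) = n2·sin(θ2). A small sketch (the IOR value 1.5 in the example is a typical figure for glass, used only for illustration):

```python
import math

def refract_angle(theta_in_deg, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Returns the refraction angle in degrees, or None when
    total internal reflection occurs (no transmitted ray)."""
    s = (n1 / n2) * math.sin(math.radians(theta_in_deg))
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Entering a denser medium bends the ray toward the normal;
# leaving a denser medium at a steep angle can trap the light entirely.
into_glass = refract_angle(45.0, 1.0, 1.5)
out_of_glass = refract_angle(60.0, 1.5, 1.0)
```

Note that on entry and exit the two bends are in opposite senses, which is why a thin surface can reasonably assume they approximately cancel, as described in the discussion of the thin-surface model above.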

But the most fundamental of all PBR shading elements is microfacet theory. According to this model, on the microscopic level the surface of an object is made up of countless minuscule reflective facets. The amount of light these microfacets reflect is dictated by three factors: Fresnel, normal distribution, and geometry. Fresnel works as I described it above, but it must be thought of on a much smaller scale. If an object is very smooth at the microsurface level, Fresnel can be observed on the object’s surface: the part of the object facing the viewer has the least reflection, while the part perpendicular to the viewer is completely reflective. But now imagine that the surface microfacets are very rugged and irregular. At any given point on the surface, an equal proportion of microfacets can be found facing any given direction relative to the viewer. Thus, on the object as a whole, the Fresnel effect appears to have vanished: the entire object appears to reflect the same dull amount of light; it appears flat. The Fresnel effect applies as much as ever, but on a scale much too small for the human eye to observe.

The normal distribution factor is closely related: it is concerned with the proportion of microfacets facing in any particular direction. If the scene’s light source is a small, sharp point and the surface of an object is smooth, the object will reflect the light as a small, sharp point at maximum intensity. If the microsurface is rough, however, a portion of the microfacets at the point where the light was exclusively reflected before will now face other directions, and a portion of the microfacets in the surrounding area will now be angled so as to reflect the light source. This causes the small, sharp light source to reflect as a large, blurry glow. Most importantly, the intensity of light reflected in the original area will be much less than before, because the proportion of microfacets not reflecting the light there is greater. In short, the reflection is spread over a larger area but is less intense, because the same total amount of light is reflected.

Finally, the geometry factor describes the amount of microsurface shadowing that occurs at varying degrees of roughness. When the microsurface is rougher, incoming light has a greater chance of bouncing around within a crack or crevice, or bouncing in a direction away from the viewer. The result is that, at greater roughness values, the total amount of light reflected toward the viewer actually decreases. However, light bouncing around in the cracks and crevices has a greater chance of penetrating the surface, so more light is diffused.
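The three factors above combine in the standard Cook-Torrance microfacet specular term, D·F·G / (4 (n·l)(n·v)). The sketch below uses the widely-used GGX distribution for D and a Smith-style geometry term for G; these specific choices are common in realtime PBR but are illustrative here, not a transcription of any one production shader, and the roughness remappings are just common conventions.

```python
import math

def d_ggx(n_dot_h, roughness):
    """GGX / Trowbridge-Reitz normal distribution: concentration of
    microfacets aligned with the half vector h."""
    a2 = roughness ** 4  # common remap: alpha = roughness^2
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def g_smith(n_dot_v, n_dot_l, roughness):
    """Smith geometry term: microfacet shadowing and masking."""
    k = (roughness + 1.0) ** 2 / 8.0  # one common remap for direct light
    def g1(n_dot_x):
        return n_dot_x / (n_dot_x * (1.0 - k) + k)
    return g1(n_dot_v) * g1(n_dot_l)

def fresnel_schlick(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h,
                           roughness, f0):
    """Microfacet specular = D * F * G / (4 (n.l)(n.v))."""
    d = d_ggx(n_dot_h, roughness)
    f = fresnel_schlick(v_dot_h, f0)
    g = g_smith(n_dot_v, n_dot_l, roughness)
    return d * f * g / max(4.0 * n_dot_l * n_dot_v, 1e-6)
```

Evaluating this at a fixed geometry while raising roughness shows exactly the behavior described above: D spreads the reflection over more directions, lowering the peak, while G removes additional energy to shadowing and masking.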

This description should help to provide a basic understanding of the way physically-based shading is evaluated and give context to the advances in methods described above.

The most important thing to take away from this study is that technological advances allow engineers to create more realistic, effective, and unified shaders. The greatest challenge is to provide artists with controls that are simple and intuitive, while at the same time utilizing the best available methods for mimicking the way light works in the real world. The closer engineers and developers get to this point, the easier it will be for artists to quickly create convincing virtual worlds. While it is essential to allow artists some degree of creative control, it is also desirable to obtain plausible and believable results from any combination of parameters. This careful balance is finally being achieved and is making a difference in the way art is made. Knowing the possibilities of technology, we are all the more responsible to use that technology for quality content creation. We are able; but are we worthy?


[1] Burda, Rudolf. “PBR Workflow Implementation for Game Environments.” Brno: Masaryk University Faculty of Informatics, Spring 2017. https://is.muni.cz/th/btbti/PBR_Workflow_Implementation_for_Game_Environments.pdf.

[2] Burley, Brent. “Extending the Disney BRDF to a BSDF with Integrated Subsurface Scattering.” SIGGRAPH 2015 Course Notes. https://blog.selfshadow.com/publications/s2015-shading-course/burley/s2015_pbs_disney_bsdf_notes.pdf.

[3] Burley, Brent. “Extending the Disney BRDF to a BSDF with Integrated Subsurface Scattering.” ACM SIGGRAPH 2015 Presentation Slides. https://blog.selfshadow.com/publications/s2015-shading-course/burley/s2015_pbs_disney_bsdf_slides.pdf.

[4] Burley, Brent. “Physically-Based Shading at Disney.” ACM SIGGRAPH 2012 Course Notes. https://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf.

[5] Chiang, Matt Jen-Yuan, Benedikt Bitterli, Chuck Tappan, and Brent Burley. “A Practical and Controllable Hair and Fur Model for Production Path Tracing.” ACM SIGGRAPH 2015 Course Notes. https://disney-animation.s3.amazonaws.com/uploads/production/publication_asset/152/asset/eurographics2016Fur_Smaller.pdf.

[6] Karis, Brian. “Physically Based Rendering.” Online posting. Unreal Engine Documentation. https://docs.unrealengine.com/en-us/Engine/Rendering/Materials/PhysicallyBased.

[7] Russell, Jeff. “Basic Theory of Physically-based Rendering.” 1 Nov. 2015. Online posting. Marmoset Toolbag Tutorials. https://marmoset.co/posts/basic-theory-of-physically-based-rendering/.

[8] Sadeghi, Iman, Henrik Wann Jensen, Heather Pritchett, and Rasmus Tamstorf. “An Artist Friendly Hair Shading System.” ACM SIGGRAPH 2010 Course Notes. https://s3-us-west-1.amazonaws.com/disneyresearch/wp-content/uploads/20150419222515/An-Artist-Friendly-Hair-Shading-System-Paper.pdf.

[9] Sillion, François. “The State of the Art in Physically-based Rendering and its Impact on Future Applications.” Photorealistic Rendering in Computer Graphics. Focus on Computer Graphics (Tutorials and Perspectives in Computer Graphics). Ed. Pere Brunet and Frederik W. Jansen. Springer: Berlin, Heidelberg, 1994. https://link.springer.com/chapter/10.1007/978-3-642-57963-9_1.

[10] de Vries, Joey. “PBR Theory.” Online posting. LearnOpenGL. https://learnopengl.com/PBR/Theory.


All resources are current at time of writing. Page references in the section about Burley’s 2012 report point to resource 4. Most of the page references in the section about Burley’s 2015 report point to resource 2, except those marked “Presentation Slides,” which point to resource 3. The final section about PBR theory drew upon information from virtually all resources, but primarily resources 7 and 10. My understanding of the subject was aided by many other resources that would be difficult to cite. A special thanks goes to everyone who has worked to explain computer graphics to novices. I only hope to have been one of them.

October 13, 2018

2018-10-13 10:30

Hello again! It’s me, Micah. I suppose you knew that. Anyway, do you see that book up there? That’s volume six of Alberta in the 20th Century. My father introduced me to those books several years ago, and he’s already collected two complete sets of them for pennies on the dollar. Why? Because everybody else is discarding them! About two or three years ago Father and I read the first two volumes together. There are no words to describe how good they are: the history they present is more intimate and colorful than anything else you will ever read. And they don’t shy away from the hard issues either: rather, they give enough facts to show who was actually responsible, something that few people bother with now. And yet, for all that, one of the sets we have now was dumped upon us by the Bonnyville Library! The other one came from the nursing home.

Their representation on Goodreads, though, is a mess. That’s why, yesterday morning, I applied to become a Goodreads librarian! That would mean I could clean them up and present them as the respectable books they are. Let’s hope they grant me the status!

Thank you all for your support!

P.S. My aunt and uncle live in Defuniak Springs, Florida. Their house held out fine, and they were back at work the next day, but their church in Freeport lost its roof and one of its walls. That’s one benefit of living where we do: no hurricanes, no tornadoes, and almost no thunderstorms! But I can’t say no floods after last year. Haha.