Wavelet Transparency
Abstract
Order-independent transparency schemes rely on low-order approximations of transmittance as a function of depth. We introduce a new wavelet representation of this function and an algorithm for building and evaluating it efficiently on a GPU. We then extend the order-independent Phenomenological Transparency algorithm to our representation and introduce a new phenomenological approximation of chromatic aberration under refraction. This yields image quality comparable to reference A-buffering in challenging cases such as smoke coverage, more realistic refraction, and performance and bandwidth comparable to or better than the state-of-the-art Moment Transparency, with a simpler implementation. We provide GLSL source code.
1 Introduction and Related Work
“Transparency” in computer graphics broadly covers all situations in which multiple surface layers or a continuous medium contribute to the value of a pixel. This includes otherwise-opaque surfaces partly covering a pixel (a.k.a., edge antialiasing, alpha masking), the gradual obscuring of objects due to smoke volumes and particles (a.k.a., partial coverage/alpha), the refraction and modulation of a background by glass, and the diffusion of light passing through fog and ground glass, as well as shadows cast in such scenes.
Real-time rendering largely relies on provisioning parallel resources proportional to the number of pixels. This works well for opaque triangles that are larger than the pixels. Real-time transparency is challenging because it is a case where screen size is not a bound on the amount of processing required, since many primitives may affect each pixel. Fortunately, human observers appear to be extremely tolerant of error in transparency in general situations, and especially in entertainment applications. This creates significant room for approximation, which has been exploited by decades of clever real-time transparency solutions and centuries of fine art.
1.1 Ray Tracing
The ultimate solution for transparency is path tracing, as has been demonstrated by the quality of offline rendering and relative elegance of its implementation [PJH16]. We expect that algorithmic and hardware improvements some day will enable real-time path tracing for complicated transparent objects. Today’s fastest GPUs can trace and shade 1-10 million rays per millisecond, which enables ray-traced reflections and shadows in currently available video games. However, path tracing and even Whitted ray tracing currently are too expensive for complex transparent materials, which require many more rays per pixel than opaque materials.
For example, consider a simple bottle of wine in an otherwise empty room. At least five ray intersections per pixel are required to render this scene: air to glass, glass to wine, wine to glass, glass to air, and then air to the wall behind the bottle. If each of those intersections spawns recursive shadow and reflection rays, then at least 15 rays per pixel are required and the ray cast and shade cost quickly consumes the entire frame budget at 1080p 60 Hz. Furthermore, this is the simplest case of path tracing for perfect specular reflection and refraction. For the more difficult case of rough, diffusing surfaces such as ground glass and participating media such as fog, path tracing requires many stochastic samples to converge. For opaque surfaces, it is common to take fewer samples, which leave noise, and then apply spatio-temporal denoising methods (e.g., [MMBJ17, SKW∗17]). However, those denoisers do not work well for transparent materials because the noise is at varying depths and temporal reprojection is impractical.
1.2 The Visibility Function
The transparency visibility function maps distance along a view ray to net transmittance from that distance back to the ray origin. For example, in a scene with a single, 25% opaque plane 1 m from the camera, the visibility function is $V(d) = 1$ for $d < 1$ m and $V(d) = 0.75$ for $d \ge 1$ m (Figure 2b). Opaque rendering is the subcase in which the visibility function is binary. This compositing notion of transparency grew out of Deep Shadow Maps [LV00], in which visibility is relative to a light source, but is now applied to all visibility problems. The visibility function also can be computed separately for discrete light frequencies to model colored transmission and absorption, and the “ray” can be bent into a path or extended into a cone to model specular and diffuse refraction.
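As a concrete illustration (a CPU-side Python sketch under our own naming, not code from the paper), the visibility function for a set of discrete surfaces is the product of the transmittances of all surfaces nearer than the query depth:

```python
def visibility(surfaces, z):
    """Net transmittance at depth z for discrete surfaces.

    surfaces: list of (depth, alpha) pairs; alpha is opacity in [0, 1].
    """
    v = 1.0
    for depth, alpha in surfaces:
        if depth < z:
            v *= 1.0 - alpha  # each crossed surface attenuates the ray
    return v

# Single 25% opaque plane 1 m from the camera (the example above):
scene = [(1.0, 0.25)]
# visibility(scene, 0.5) == 1.0 and visibility(scene, 2.0) == 0.75
```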
1.3 OIT Methods
In contrast to earlier approaches (see the survey in [AMHH∗18]), the order-independent transparency (OIT) methods preferred for real-time transparency today do not require storing or sorting all surfaces at a pixel. This enables constant space per pixel and linear time in the number of pixels covered collectively by the primitives. Grouped roughly by their representation of the visibility function, the main OIT methods include stochastic transparency [ESSL10a], weighted blended OIT (WBOIT) [MB13], multi-layer alpha blending (MLAB) [SV14], Fourier opacity maps [JB10], and moment transparency [Sha18].
These all can be extended to perform not just partial coverage but also correct modulation of color during transmission by processing coverage independently per color channel; efficient solutions have been demonstrated for stochastic [ME11] and WBOIT [MB13].
Although developed for rasterization systems, these OIT methods even allow limited ray tracing of partial coverage transparency along a straight line. For example, Battlefield V uses WBOIT to accumulate shading along particles using an “any hit” ray cast, rather than performing a true “closest hit” cast or stochastic path tracing which, as discussed in the introduction, would be prohibitively expensive today [SUD19].
1.4 Refraction
There are several methods for approximating refraction in real-time rendering, which the OIT methods from the previous section do not address [Wym05, Sou05, dRBS∗12, GD15].
Chromatic aberration occurs due to the index of refraction varying with the frequency of light. This produces the phenomena of colored fringes at edges in strong refractions (photographed in Figure 1), rainbows, and the spectral spread observed with a prism. It is trivial to implement in a ray tracer that samples light frequencies separately, and very approximate chromatic aberration due to the camera lens is a common post-processed effect in video games [GKK15].
1.5 Other Phenomena
There are many targeted solutions for transparency in specific contexts, for example, fog [NSTN93, Nis98, NN03, NRN04, PAT∗04, Hil15], caustics [WD06, SKP07, Wym08, WD08, YK09], skin [JWSG10], and hair [SA09, YT10]. These outperform general transparency methods for their specific case.
Phenomenological Transparency [MM17] attempts to present a unified solution for real-time transparency by combining weighted, blended OIT and colored stochastic shadow maps [ME11] with new order-independent approximations for refraction, diffusion, ambient occlusion, shadowing, and caustics. It gains efficiency from approximating specific phenomena important to perception of transparency rather than performing physically-accurate simulation of all light rays.
1.6 New Contributions
We extend Phenomenological Transparency with (1) an improved visibility representation and (2) a minor phenomenological approximation of chromatic aberration at transparent surfaces.
The new visibility representation uses wavelets [Mal08], a hierarchical basis transformation combining the compressibility of frequency methods with the locality of spatial methods. This suits the visibility function, which exhibits sudden local changes for surfaces such as glass as well as continuous variation for materials such as fog. As with Fourier Opacity Maps, Deep Shadow Maps, and Moment Transparency and unlike other OIT methods, our representation requires separate passes over geometry to build the visibility approximation and then perform shading using it. We suggest some ways to eliminate this limitation as future work.
The new chromatic aberration term adjusts the gather samples for each color channel based on the strength of refraction and heuristics for identifying the “front” of objects. This is similar to previous ad hoc methods observed in video games for specific objects. It is the first addition of chromatic aberration into a general real-time OIT method that we are aware of in the literature.
Our results section shows that these new ideas combine with the previous strengths of Phenomenological and Moment Transparency to render more robustly than alternative OIT algorithms while also creating improved perception of transparency phenomena in many cases. The performance of this new Wavelet OIT depends on a design that caters to human perception rather than radiometric error, so it is not appropriate for predictive rendering and we make no claim of a direct relationship or objective comparison to physically-based simulation. Instead, we evaluate the method based on objective space and time efficiency and present visual comparison to real-time alternatives for the reader to consider for their application.
2 Wavelet Algorithm
2.1 Overview
The wavelet model of compositing has three steps, similar to MLAB and Moment Transparency. The first step is two unordered passes over all transparent geometry or their bounding boxes to build the visibility approximation, which is stored as wavelet coefficients. The second step is another unordered pass over all transparent geometry to shade it and weigh it by the visibility function. The third step composites the accumulated transparent results over the opaque parts of the scene, which is where the other effects from Phenomenological Transparency are applied.
The algorithm is:

1. Compute tight per-pixel depth bounds by drawing transparents (or bounds) into a depth buffer.
2. Construct the wavelet coefficient buffer for transparents.
3. Accumulate transparents by drawing modulated by the visibility function, as with previous methods.
4. Composite over opaques and apply refraction, diffusion, etc.
For wavelets of rank $k$ we store $2^k$ coefficients per channel, packed into 32 bits with a shared exponent in the form E5B9G9R9. See our code supplement for the bitpacking implementation. For example, rank-4 wavelets encode 48 coefficients (16 per channel) in 64 bytes per pixel total.
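The E5B9G9R9 layout is a shared-exponent format in the spirit of OpenGL's RGB9E5 (EXT_texture_shared_exponent). The following Python sketch illustrates the idea using the standard RGB9E5 encode/decode rules; the paper's exact bit layout is in their code supplement, so treat this as an assumption-laden illustration rather than their implementation:

```python
import math

N, B = 9, 15                      # mantissa bits and exponent bias
E_MAX = 31                        # largest 5-bit exponent
SHARED_MAX = (2 ** N - 1) / 2 ** N * 2.0 ** (E_MAX - B)

def pack_e5b9g9r9(r, g, b):
    """Pack three non-negative floats into one 32-bit shared-exponent word."""
    r, g, b = (min(max(c, 0.0), SHARED_MAX) for c in (r, g, b))
    maxc = max(r, g, b)
    # Preliminary shared exponent; bump it if the largest mantissa overflows.
    exp = (max(-B - 1, math.floor(math.log2(maxc))) if maxc > 0 else -B - 1) + 1 + B
    scale = 2.0 ** (exp - B - N)
    if int(maxc / scale + 0.5) == 1 << N:
        exp += 1
        scale = 2.0 ** (exp - B - N)
    rm, gm, bm = (int(c / scale + 0.5) for c in (r, g, b))
    return (exp << 27) | (bm << 18) | (gm << 9) | rm

def unpack_e5b9g9r9(word):
    """Recover the (r, g, b) triple from a packed word."""
    scale = 2.0 ** ((word >> 27) - B - N)
    return tuple(((word >> s) & 0x1FF) * scale for s in (0, 9, 18))
```

The shared exponent trades per-channel precision for range, which suits same-magnitude wavelet coefficients of the three color channels.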
2.2 Derivation
The radiance compositing equation is
$$L = \sum_i V(z_i)\,\alpha_i\,L_i,$$
where $L_i$ is the incoming radiance at the $i$th interface, $\alpha_i$ is the reflectance, and $V(z)$ is the net transmittance visibility function at depth $z$. Because transmittance decays exponentially with successive surfaces, for numerical stability we convert to the net absorbance function:
$$A(z) = -\ln V(z).$$
For discrete interfaces such as glass, the absorbance function is a sum of steps,
$$A(z) = \sum_i -\ln(1 - \alpha_i)\,\theta(z - z_i),$$
where $\theta$ is the Heaviside theta function; for continuous media such as fog it is smooth. We observe that Haar wavelets are a good representation of $A$.
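In the same Python-sketch spirit as above (our naming, not the paper's code), converting surfaces to absorbance replaces the visibility product with a sum of Heaviside steps, which is better conditioned numerically and is the function the wavelets approximate:

```python
import math

def absorbance(surfaces, z):
    """A(z): sum of per-surface absorbances for surfaces in front of depth z."""
    return sum(-math.log(1.0 - a) for zi, a in surfaces if zi < z)

def visibility_from_absorbance(surfaces, z):
    """Recover net transmittance: V(z) = exp(-A(z))."""
    return math.exp(-absorbance(surfaces, z))
```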
2.3 Wavelet approximation
The purpose of this section is to approximate $A(z)$ by linear interpolation of a Haar wavelet expansion. We start with a direct approximation of $A$ in the Haar basis. The approach rests on two algorithms: one for efficient evaluation of the coefficients, and one for efficient evaluation of the approximation at a point.
2.3.1 Coefficients
The coefficients are evaluated directly with integration by parts. For the wavelet coefficients,
$$c_{j,k} = \int_0^1 A(z)\,\psi_{j,k}(z)\,dz = -\sum_i w_i\,\Psi_{j,k}(z_i), \qquad \Psi_{j,k}(z) = \int_0^z \psi_{j,k}(t)\,dt,$$
and for the scaling-function coefficient,
$$c_\phi = \int_0^1 A(z)\,\phi(z)\,dz = \sum_i w_i\,(1 - z_i),$$
where $w_i = -\ln(1 - \alpha_i) \ge 0$ is the absorbance of the $i$th interface. The functions $\Psi_{j,k}$ are non-negative on the interval and localized like their corresponding wavelets. Consequently all wavelet coefficients are non-positive, while the scaling-function coefficient is non-negative.
So if the approximation uses wavelets plus a single scaling function, only a logarithmic number of coefficients needs to be updated per incoming interface. The same applies when evaluating the function.
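To make the update and evaluation concrete, here is a CPU-side Python sketch (not the paper's GLSL; a dict stands in for the fixed-size coefficient buffer, and all names are ours). Each incoming interface at depth z with opacity alpha touches the scaling coefficient plus one wavelet per level, and evaluation likewise reads only a logarithmic number of coefficients:

```python
import math

def psi_integral(j, k, z):
    """Integral of the L2-normalized Haar wavelet psi_{j,k} from 0 to z (a tent)."""
    s = 2.0 ** j
    t = min(max(z * s - k, 0.0), 1.0)      # position inside the wavelet's support
    return 2.0 ** (j / 2.0) / s * min(t, 1.0 - t)

def add_interface(coeffs, J, z, alpha):
    """Splat one surface into the coefficients; touches only J + 1 of them."""
    w = -math.log(1.0 - alpha)             # per-surface absorbance, w >= 0
    coeffs[(-1, 0)] = coeffs.get((-1, 0), 0.0) + w * (1.0 - z)  # scaling coeff
    for j in range(J):
        k = min(int(z * 2 ** j), 2 ** j - 1)   # the one translate containing z
        coeffs[(j, k)] = coeffs.get((j, k), 0.0) - w * psi_integral(j, k, z)

def eval_absorbance(coeffs, J, z):
    """Reconstruct A(z); only J + 1 basis functions are nonzero at z."""
    a = coeffs.get((-1, 0), 0.0)           # scaling function phi = 1 on [0, 1)
    for j in range(J):
        k = min(int(z * 2 ** j), 2 ** j - 1)
        t = z * 2 ** j - k
        a += coeffs.get((j, k), 0.0) * 2.0 ** (j / 2.0) * (1.0 if t < 0.5 else -1.0)
    return a
```

With J = 8 levels and a single 50%-opaque surface at z = 0.3, `eval_absorbance` returns -ln(0.5) behind the surface and 0 in front of it, matching the L2 projection of the step onto the finest Haar resolution.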
The parameterization of depth directly affects the wavelet basis that should be used. For instance, linear depth calls for a uniform wavelet scale, while screen-space depth works better with resolution that increases toward where the information is compressed. In this work we use linear depth.
2.3.2 Interpolation
Direct approximation with Haar wavelets produces staircasing whose steps do not coincide with those of the original function. To alleviate this while keeping a relatively smooth and predictable approximation, the reconstructed function can be linearly interpolated between the points of the subdivision.
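One way to sketch this interpolation (Python illustration under our own naming, with `bins` holding the per-bin staircase values; not the paper's implementation) is to blend the values of adjacent bins based on the query depth's position between their centers:

```python
import math

def lerp_staircase(bins, z):
    """Linearly interpolate a staircase on [0, 1); bins[i] is the constant
    value on [i/n, (i+1)/n), and we lerp between adjacent bin centers."""
    n = len(bins)
    x = z * n - 0.5                        # measure z in units of bin centers
    i0 = int(min(max(math.floor(x), 0), n - 1))
    i1 = min(i0 + 1, n - 1)
    f = min(max(x - i0, 0.0), 1.0)         # clamp handles the half-bin edges
    return bins[i0] * (1.0 - f) + bins[i1] * f
```

At bin centers this reproduces the staircase exactly, and between them it removes the hard steps that would otherwise shift under the wavelet subdivision.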
2.3.3 Coefficient update
The simplest implementation that preserves coherency is based on rasterizer-ordered views (ROVs), but atomic operations can be used as well. If the hardware supports floating-point atomic operations, the implementation is trivial. If not, a compare-and-swap (CAS) loop can be run per coefficient.
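The CAS loop can be modeled in single-threaded Python; `compare_swap` below mimics GLSL's `atomicCompSwap`, and the bit reinterpretation mirrors `floatBitsToUint`/`uintBitsToFloat`. This is a sketch of the pattern only, with hypothetical names: on a GPU the retry matters because another invocation may update the word between the read and the swap:

```python
import struct

def float_to_bits(f):
    """Reinterpret a float as its 32-bit pattern (floatBitsToUint)."""
    return struct.unpack("<I", struct.pack("<f", f))[0]

def bits_to_float(u):
    """Reinterpret a 32-bit pattern as a float (uintBitsToFloat)."""
    return struct.unpack("<f", struct.pack("<I", u))[0]

class UintBuffer:
    """Stand-in for a uint coefficient image with an atomicCompSwap-like op."""
    def __init__(self, n):
        self.data = [0] * n  # bit pattern 0 is float 0.0
    def compare_swap(self, i, expected, new):
        old = self.data[i]
        if old == expected:
            self.data[i] = new
        return old

def atomic_add_float(buf, i, value):
    """CAS loop: retry until no other thread changed the word underneath us."""
    while True:
        old_bits = buf.data[i]
        new_bits = float_to_bits(bits_to_float(old_bits) + value)
        if buf.compare_swap(i, old_bits, new_bits) == old_bits:
            return
```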
3 Chromatic Aberration Algorithm
Phenomenological Transparency additively accumulates a screen-space refraction offset vector at each pixel during step 2 (shading) of transparency rendering. This vector can be derived from a ray cast against a fixed background plane, against a background plane at the opaque depth of the current pixel (used in this paper’s results), or via true geometric or screen-space ray casting against the full depth buffer. During step 3 (compositing), Phenomenological Transparency uses this offset vector for the texture sample of the background image of opaque surfaces. Our new chromatic aberration approximation affects both steps.
During the shading step, it cubes the transmission coefficient when the refractive index is higher than 1. This is a heuristic for crudely approximating an extinction coefficient within the medium in an order-independent way. It causes the back surface of a transparent object surrounded by a lower-index medium (e.g., a glass in air or water) to be darker, as if there were absorption while the ray passed through the object. In particular, this causes reflections on the inner surface of an object to be slightly darker than on the outer surface, as observed in real scenes. The inverted reflection of the lamp on the lower part of the real ball in Figure 1 is an example of this phenomenon; it is dimmer than the right-side-up specular highlight on the top part of that ball.
During the compositing step, the algorithm replaces the single bilinear sample of the background image with $k$ bilinear samples along a short line segment, aligned with the refraction vector and with proportional magnitude. For each tap index $i$, we weight the color channels by:
```glsl
vec3 spectralWeight(int i) {
    float t = 0.5 + float(2 * i) / float(k - 1);
    vec3 w;
    w.r = smoothstep(0.5, 1./3, t);
    w.b = smoothstep(0.5, 2./3., t);
    w.g = 1.0 - w.r - w.b;
    return w;
}
```



4 Results
4.1 Visualizing Visibility
To understand the nature of the approximations, we graph the visibility function along the central ray for some everyday scenes with transparency in Figure 2. The figures in this section display the scenes as cartoon schematics and the graphs as computed from the equivalent real 3D scene.
In the figure, ground truth visibility is a dashed black line, Moment Transparency is yellow, and Wavelet Transparency is a thick green line. We compare at parameters that yield approximately equal performance for them.
Figure 2a shows the view along a ray through the windows of a car that is surrounded by fog. The ground truth visibility function decreases in a shallow exponential within fog, is constant in air, and suddenly drops at the surface of the glass windows. Although both approximations slightly overestimate visibility within fog near the camera, Wavelet is a strictly better approximation than Moment across the entire scene. Figure 2b shows visibility in the wine bottle scene described in Section 1.1, modeled without extinction within the wine. Here, both approximations oversmooth and Moment gives a tighter fit near the first interface. Figure 2c shows separate smoke and fire particle systems. Wavelet is a better fit overall and captures the sharp transitions slightly better.
Figure 3: Quality-comparison grid. Columns: MLAB4 [SV14], WBOIT [MB13], Moment [Sha18], Wavelet (New), A-Buffer [Car84]; rows 1–9 correspond to the scenes described in Section 4.2. (Image grid not reproduced.)
4.2 Quality Comparison
The grid in Figure 3 compares three recent OIT strategies to the new Wavelet Transparency and to a non-realtime A-buffer as a reference. We implemented each of these methods as a replacement for the visibility strategy for Phenomenological Transparency but retained the full suite of other effects that algorithm produces. In the first two rows, Moment8 and Wave4 are used because Moment6 underflowed on this scene. All other rows use Moment6 and Wave3. We use single-precision Moment instead of quantized to maximize its image quality (for performance comparisons we used 16-bit quantization). MLAB uses four layers in all rows.
Row 1 shows an opaque sphere in a particle system against an orange background. Row 2 repeats this scene with a transparent sphere. In each case MLAB4 underperforms on coverage and WBOIT fails badly, revealing too much of the sphere. Moment and wavelet transparency each give similar results, which are close to the ideal A-buffer result.
Rows 3-5 are different views of the Bistro scene with many layers of drinking glasses. Where the number of layers is small, all algorithms are similar. When the number of layers becomes high, it exceeds MLAB’s buffer, as seen at the reflection on the first glass just left of center in row 5. In these cases WBOIT fails more gradually but presents an ambiguous result, in which the depth ordering of the glasses becomes less obvious. Moment is much better, but oversmooths the visibility function and tends to excessively dim the highlights from glasses beyond the first layer. Wavelets are indistinguishable from the A-buffer, as the step functions due to glass are the ideal case for the Haar basis.
Rows 6-7 show alpha-masked leaves from two viewpoints. Examining the coverage transition at the edges of leaves, especially where those edges overlap in depth, we observe that MLAB underestimates coverage, WBOIT and moment have some noise and discontinuity, and Wavelet most closely (but not perfectly) matches the reference A-buffer.
We conclude that across all results, Wavelet is the most robust because it is consistently among the best approximations, even though in specific cases others may also perform well.
4.3 Chromatic Aberration
Figure 5 shows a simplified scene to highlight the new chromatic aberration approximation (all other figures in this paper used it with refraction as well). On the left is the scene without refraction simulation, in the center is the method of McGuire and Mara [MM17], and on the right is the new chromatic aberration, which is most noticeable at high-contrast edges. Compare this to real chromatic aberration photographed in Figure 1. Despite a tenuous relationship to the physics of the real effect, we suggest that the simple simulation in (c) yields a more convincing image of the refraction phenomenon than without it in (b).
Figure 6 is inspired by a famous path traced figure by Walter et al. [WMLT07]. At low cost in a rasterization pixel shader pass, our method demonstrates the phenomena of refraction, diffusion, and chromatic aberration of a frosted glass globe. The refraction pattern differs because they modeled a solid ball and ours is a hollow sphere with thick walls.
4.4 Performance
Table 1 shows the time in milliseconds for steps 2 and 3 (building the visibility approximation, then evaluating it and shading) when rendering the view of drinking glasses on the Bistro bar in row 5 of Figure 3 on a GeForce RTX 3090. The composite time for step 4 is the same 0.1 ms for all algorithms. Moment Transparency is measured here in its 16-bit quantized precision mode.
| MLAB4 | WBOIT | Moment6 | Moment8 | Wavelet3 | Wavelet4 |
|---|---|---|---|---|---|
| 1.8 | 0.46 | 2.04 | 2.1 | 1.8 | 2.05 |
5 Discussion
Previous methods such as Fourier opacity maps read and write all coefficients of the visibility approximation at a pixel at every step. Due to their hierarchical compact support, wavelets require only a logarithmic number of basis evaluations in the extent of the represented function, which gives them a superior asymptotic bandwidth bound. Because constant factors differ across bases, we chose the minimal 1st-order Daubechies, a.k.a. Haar, basis. For the asymptotically better higher-order Daubechies bases [Dau92], the constant factor is impractically large for real-time visibility-function approximation today.
Furthermore, the wavelet coefficients are accumulated from same-signed, non-diminishing values. As a result, the coefficients themselves remain non-positive for all wavelet functions. Overall the algorithm exhibits superior numerical robustness compared to other methods.
The decimated wavelet transform we employed lacks shift invariance. Therefore, the visibility approximation of a fixed scene can change under camera motion along the view vector. We propose future work exploring the use of complex wavelets [SBK05], which are nearly shift invariant, although more difficult to optimize. Stationary wavelets are also shift invariant, but require too many coefficients to be viable for this application.
Fourier Opacity Maps, Moment Transparency, and Wavelet OIT share the limitation that they must make two passes over transparent geometry per frame. To avoid this for Wavelets we propose computing the coefficients in the same pass as shading. This of course means that the current coefficients are not available to the shading. So, when reading the coefficients to reconstruct the visibility function, the shading routine uses the transformations from the previous frame to read at the projected positions from the previous wavelet buffer. A guard band can handle clipping at the edge of the screen, but efficiently handling occlusions with opaque objects on screen is one open problem for this optimization.
References
- [AMHH∗18] Akenine-Möller T., Haines E., Hoffman N., Pesce A., Iwanicki M., Hillaire S.: Real-Time Rendering 4th Edition. A K Peters/CRC Press, Boca Raton, FL, USA, 2018.
- [BCL∗07] Bavoil L., Callahan S. P., Lefohn A., Comba J. L. D., Silva C. T.: Multi-fragment effects on the GPU using the k-buffer. In Proc. of I3D (2007), ACM, pp. 97–104. doi:http://doi.acm.org/10.1145/1230100.1230117.
- [BM08] Bavoil L., Myers K.: Order independent transparency with dual depth peeling. Tech. rep., NVIDIA, 2008.
- [Car84] Carpenter L.: The A-buffer, an antialiased hidden surface method. In Proc. of SIGGRAPH (1984), ACM, pp. 103–108. URL: http://doi.acm.org/10.1145/964965.808585, doi:10.1145/964965.808585.
- [CMFL15] Crassin C., McGuire M., Fatahalian K., Lefohn A.: Aggregate g-buffer anti-aliasing. In Proceedings of the 19th Symposium on Interactive 3D Graphics and Games (New York, NY, USA, 2015), i3D ’15, ACM, pp. 109–119. URL: http://doi.acm.org/10.1145/2699276.2699285, doi:10.1145/2699276.2699285.
- [CWML18] Crassin C., Wyman C., McGuire M., Lefohn A.: Correlation-aware semi-analytic visibility for antialiased rendering. In Proceedings of the Conference on High-Performance Graphics (New York, NY, USA, 2018), HPG ’18, ACM, pp. 2:1–2:4. URL: http://doi.acm.org/10.1145/3231578.3231584, doi:10.1145/3231578.3231584.
- [Dau92] Daubechies I.: Ten Lectures on Wavelets. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1992.
- [DL06] Donnelly W., Lauritzen A.: Variance shadow maps. In Proc. of I3D (2006), ACM, pp. 161–165. URL: http://doi.acm.org/10.1145/1111411.1111440, doi:10.1145/1111411.1111440.
- [dRBS∗12] de Rousiers C., Bousseau A., Subr K., Holzschuch N., Ramamoorthi R.: Real-time rendering of rough refraction. IEEE Trans. on Vis. and Comp. Graph. (Feb 2012). URL: http://graphics.berkeley.edu/papers/Rousiers-RTR-2012-02/.
- [ESSL10a] Enderton E., Sintorn E., Shirley P., Luebke D.: Stochastic transparency. In Proc. of I3D (2010), ACM, pp. 157–164.
- [ESSL10b] Enderton E., Sintorn E., Shirley P., Luebke D.: Stochastic transparency. In Proc. of I3D (2010), ACM, pp. 157–164. URL: http://doi.acm.org/10.1145/1730804.1730830, doi:10.1145/1730804.1730830.
- [Eve01] Everitt C.: Interactive Order-Independent Transparency. Tech. rep., NVIDIA, 2001.
- [GD15] Ganestam P., Doggett M.: Real-time multiply recursive reflections and refractions using hybrid rendering. Vis. Comput. 31, 10 (Oct. 2015), 1395–1403. URL: http://dx.doi.org/10.1007/s00371-014-1021-7, doi:10.1007/s00371-014-1021-7.
- [GKK15] Gotanda Y., Kawase M., Kakimoto M.: Real-time rendering of physically based optical effects in theory and practice. In ACM SIGGRAPH 2015 Courses (New York, NY, USA, 2015), SIGGRAPH ’15, ACM, pp. 23:1–23:14. URL: http://doi.acm.org/10.1145/2776880.2792715, doi:10.1145/2776880.2792715.
- [Hil15] Hillaire S.: Towards unified and physically-based volumetric lighting in frostbite, 2015. in SIGGRAPH Advances in Real-Time Rendering Course.
- [JB10] Jansen J., Bavoil L.: Fourier opacity mapping. In Proc. of I3D (2010), ACM, pp. 165–172. URL: http://doi.acm.org/10.1145/1730804.1730831, doi:10.1145/1730804.1730831.
- [JC99] Jouppi N. P., Chang C.-F.: Z3: an economical hardware technique for high-quality antialiasing and transparency. In Proc. of Graphics Hardware (1999), ACM, pp. 85–93. URL: http://doi.acm.org/10.1145/311534.311582, doi:10.1145/311534.311582.
- [JWSG10] Jimenez J., Whelan D., Sundstedt V., Gutierrez D.: Real-time realistic skin translucency. CG&A 30, 4 (2010), 32–41.
- [KN01] Kim T.-Y., Neumann U.: Opacity shadow maps. In Proceedings of the 12th Eurographics Workshop on Rendering Techniques (London, UK, UK, 2001), Springer-Verlag, pp. 177–182. URL: http://dl.acm.org/citation.cfm?id=647653.732282.
- [LV00] Lokovic T., Veach E.: Deep shadow maps. In Proc. of SIGGRAPH (2000), ACM, pp. 385–392. URL: http://dx.doi.org/10.1145/344779.344958, doi:10.1145/344779.344958.
- [Mal08] Mallat S.: A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way, 3rd ed. Academic Press, Inc., Orlando, FL, USA, 2008.
- [MB13] McGuire M., Bavoil L.: Weighted blended order-independent transparency. JCGT 2, 2 (December 2013), 122–141. URL: http://jcgt.org/published/0002/02/09/.
- [MCTB13] Maule M., Comba J., Torchelsen R., Bastos R.: Hybrid transparency. In Proc. of I3D (2013), ACM, pp. 103–118. URL: http://doi.acm.org/10.1145/2448196.2448212, doi:10.1145/2448196.2448212.
- [ME11] McGuire M., Enderton E.: Colored stochastic shadow maps. In Proc. of I3D (February 2011), ACM, pp. 89–96. URL: http://research.nvidia.com/publication/colored-stochastic-shadow-maps.
- [Mes07] Meshkin H.: Sort-independent alpha blending, March 2007. Perpetual Entertainment, GDC Session. URL: http://twvideo01.ubm-us.net/o1/vault/gdc07/slides/S3721i1.pdf.
- [MM17] McGuire M., Mara M.: Phenomenological transparency. IEEE Transactions on Visualization and Computer Graphics 23, 5 (May 2017), 1465–1478. URL: http://casual-effects.com/research/McGuire2017Transparency/index.html.
- [MMBJ17] Mara M., McGuire M., Bitterli B., Jarosz W.: An efficient denoising algorithm for global illumination. In Proceedings of High Performance Graphics (New York, NY, USA, 2017), HPG ’17, ACM, pp. 3:1–3:7. URL: http://doi.acm.org/10.1145/3105762.3105774, doi:10.1145/3105762.3105774.
- [MMM17] McGuire M., Mara M., Majercik Z.: The G3D innovation engine, 01 2017. https://casual-effects.com/g3d. URL: https://casual-effects.com/g3d.
- [Nis98] Nishita T.: Light scattering models for the realistic rendering of natural scenes. In Proc. of E.G. (1998), Eurographics, pp. 1–10.
- [NN03] Narasimhan S. G., Nayar S. K.: Shedding light on the weather. In Proc. of CVPR (2003), IEEE, pp. 665–672. URL: http://dl.acm.org/citation.cfm?id=1965841.1965928.
- [NRN04] Narasimhan S. G., Ramamoorthi R., Nayar S. K.: Analytic rendering of multiple scattering in participating media. Tech. rep., Columbia University, 2004.
- [NSTN93] Nishita T., Sirai T., Tadamura K., Nakamae E.: Display of the earth taking into account atmospheric scattering. In Proc. of SIGGRAPH (1993), ACM, pp. 175–182. URL: http://doi.acm.org/10.1145/166117.166140, doi:10.1145/166117.166140.
- [PAT∗04] Premože S., Ashikhmin M., Tessendorf J., Ramamoorthi R., Nayar S.: Practical rendering of multiple scattering effects in participating media. In Proc. of EGSR (2004), Eurographics, pp. 363–374. URL: http://dx.doi.org/10.2312/EGWR/EGSR04/363-374.
- [PJH16] Pharr M., Jakob W., Humphreys G.: Physically Based Rendering: From Theory to Implementation, 3rd ed. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2016.
- [PK15] Peters C., Klein R.: Moment shadow mapping. In Proceedings of the 19th Symposium on Interactive 3D Graphics and Games (New York, NY, USA, 2015), i3D ’15, ACM, pp. 7–14. URL: http://doi.acm.org/10.1145/2699276.2699277, doi:10.1145/2699276.2699277.
- [PMWK17] Peters C., Münstermann C., Wetzstein N., Klein R.: Improved moment shadow maps for translucent occluders, soft shadows and single scattering. Journal of Computer Graphics Techniques (JCGT) 6, 1 (March 2017), 17–67. URL: http://jcgt.org/published/0006/01/03/.
- [SA09] Sintorn E., Assarsson U.: Hair self shadowing and transparency depth ordering using occupancy maps. In Proc. of I3D (2009), ACM, pp. 67–74. URL: http://doi.acm.org/10.1145/1507149.1507160, doi:10.1145/1507149.1507160.
- [SBK05] Selesnick I. W., Baraniuk R. G., Kingsbury N. C.: The dual-tree complex wavelet transform. IEEE Signal Processing Magazine 22, 6 (Nov 2005), 123–151. doi:10.1109/MSP.2005.1550194.
- [Sha18] Sharpe B.: Moment transparency. In Proceedings of the Conference on High-Performance Graphics (New York, NY, USA, 2018), HPG ’18, ACM, pp. 8:1–8:4. URL: http://doi.acm.org/10.1145/3231578.3231585, doi:10.1145/3231578.3231585.
- [SKP07] Shah M. A., Kontinnen J., Pattanaik S. N.: Caustics mapping: An image-space technique for real-time caustics. IEEE Trans. Vis. Comput. Graph. 13, 2 (2007), 272–280.
- [SKW∗17] Schied C., Kaplanyan A., Wyman C., Patney A., Chaitanya C. R. A., Burgess J., Liu S., Dachsbacher C., Lefohn A., Salvi M.: Spatiotemporal variance-guided filtering: Real-time reconstruction for path-traced global illumination. In Proceedings of High Performance Graphics (New York, NY, USA, 2017), HPG ’17, ACM, pp. 2:1–2:12. URL: http://doi.acm.org/10.1145/3105762.3105770, doi:10.1145/3105762.3105770.
- [SML11] Salvi M., Montgomery J., Lefohn A.: Adaptive transparency. In Proc. of HPG (2011), ACM, pp. 119–126. URL: http://doi.acm.org/10.1145/2018323.2018342, doi:10.1145/2018323.2018342.
- [Sou05] Sousa T.: Generic Refraction Simulation. Addison-Wesley Professional, 2005, pp. 295–305. URL: https://developer.nvidia.com/gpugems/GPUGems2/gpugems2_chapter19.html.
- [SUD19] Schmid J., Uludag Y., Deligiannis J.: It just works: Ray-traced reflections in Battlefield V, March 2019. Presentation at the GPU Technology Conference.
- [SV14] Salvi M., Vaidyanathan K.: Multi-layer alpha blending. In Proc. of I3D (2014), ACM, pp. 151–158. URL: http://doi.acm.org/10.1145/2556700.2556705, doi:10.1145/2556700.2556705.
- [VF14] Vasilakis A. A., Fudos I.: K+-buffer: Fragment synchronized k-buffer. In Proc. of I3D (2014), ACM, pp. 143–150.
- [WD06] Wyman C., Davis S.: Interactive image-space techniques for approximating caustics. In Proc. of I3D (2006), ACM, pp. 153–160. URL: http://doi.acm.org/10.1145/1111411.1111439, doi:10.1145/1111411.1111439.
- [WD08] Wyman C., Dachsbacher C.: Reducing noise in image-space caustics with variable-sized splatting. J. Graphics Tools 13, 1 (2008), 1–17.
- [WMLT07] Walter B., Marschner S. R., Li H., Torrance K. E.: Microfacet models for refraction through rough surfaces. In Proceedings of the 18th Eurographics Conference on Rendering Techniques (Aire-la-Ville, Switzerland, Switzerland, 2007), EGSR’07, Eurographics Association, pp. 195–206. doi:10.2312/EGWR/EGSR07/195-206.
- [Wym05] Wyman C.: An approximate image-space approach for interactive refraction. ACM Trans. Graph. 24, 3 (2005), 1050–1053. URL: http://doi.acm.org/10.1145/1073204.1073310, doi:10.1145/1073204.1073310.
- [Wym08] Wyman C.: Hierarchical caustic maps. In Proc. of I3D (2008), ACM, pp. 163–171. URL: http://doi.acm.org/10.1145/1342250.1342276, doi:10.1145/1342250.1342276.
- [YK09] Yuksel C., Keyser J.: Fast real-time caustics from height fields. Vis. Comput. 25, 5-7 (Apr. 2009), 559–564. URL: http://dx.doi.org/10.1007/s00371-009-0350-4, doi:10.1007/s00371-009-0350-4.
- [YT10] Yuksel C., Tariq S.: Advanced techniques in real-time hair rendering and simulation. In SIGGRAPH Courses (2010), ACM, pp. 1–168. URL: http://doi.acm.org/10.1145/1837101.1837102, doi:10.1145/1837101.1837102.