Shadows and Image-based Rendering

INB382/INN382 Real-Time Rendering Techniques, Lecture 9: Shadows and Image-based Rendering
Ross Brown
CRICOS No. 000213J
Queensland University of Technology
Lecture Contents
• Planar Shadows
• Shadow Volumes
• Shadow Maps
• Soft Shadows
• Image-based Rendering - Billboards
• Image Processing - Edge Detection and Blur
a university for the real world
Shadow Dancing…
• "There are three kinds of men: those who are preceded by their shadow, those who are pursued by it, and those who have never seen the sun." - Gerd de Ley
Lecture Context
• Today we add shadows to round out the 3D visual effects and move further into global illumination
• And some image-based rendering, to bring in the use of images and related processing algorithms to augment geometric rendering approaches
Shadows - Principles
• We will cover two geometric methods: Planar Shadows and Shadow Volumes
• Shadows are important elements for realistic rendering
• They give a sense of the spatial relationships between objects – especially whether an object is above another object
• Image to the right is an illusion, due to the position of the wet patch "shadow"
Shadows – Terminology
• Occluders – objects
that cast shadows
onto receivers
• Hard Shadows – are
generated by point
light sources (why?)
• Soft Shadows – are
generated by area
light sources with
umbra and penumbra
Shadows - Principles
• Any shadow is better than no shadow at all
• Simple methods, like rendering the geometry with its y values dropped – a drop shadow – can give a sense of position for the shadowing object
• Scale(1.0, 0.0, 1.0)
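The Scale(1.0, 0.0, 1.0) trick can be sketched numerically; a minimal Python illustration (the vertex data is made up):

```python
# Hypothetical sketch: a drop shadow flattens each vertex onto the ground
# plane by scaling y to zero -- the matrix Scale(1.0, 0.0, 1.0).
def drop_shadow(vertex):
    x, y, z = vertex
    return (1.0 * x, 0.0 * y, 1.0 * z)

tree_vertices = [(1.0, 2.0, 3.0), (-2.0, 0.5, 1.0)]
shadow = [drop_shadow(v) for v in tree_vertices]
print(shadow)  # [(1.0, 0.0, 3.0), (-2.0, 0.0, 1.0)] -- squashed flat onto y = 0
```

Rendering the flattened copy in a dark colour under the original mesh gives the drop shadow.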
Planar Shadows – Arbitrary Plane
• The ray emanating from the light position l, which goes through vertex v, is intersected with the plane π
General Planar Shadows
• This yields the point p
• Which can be converted into a projection matrix
M so that Mv = p
• Apply M to objects and render in a dark colour
on shadow plane
• What problem is there to avoid? (Hint: the shadow polygons are coplanar with the receiver, so z-fighting)
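A sketch of building M and checking that Mv lands on the plane, using the standard planar shadow matrix formulation (as given in Real-Time Rendering); the function names and test values are illustrative:

```python
# Sketch (not from the slides) of the planar shadow matrix M such that
# M*v projects vertex v from point light l onto the plane n.x + d = 0.
def shadow_matrix(n, d, l):
    ndotl = sum(ni * li for ni, li in zip(n, l))
    m = [[0.0] * 4 for _ in range(4)]
    for i in range(3):
        for j in range(3):
            m[i][j] = (ndotl + d if i == j else 0.0) - l[i] * n[j]
        m[i][3] = -l[i] * d
    for j in range(3):
        m[3][j] = -n[j]
    m[3][3] = ndotl
    return m

def project(m, v):
    v4 = list(v) + [1.0]
    p = [sum(m[i][j] * v4[j] for j in range(4)) for i in range(4)]
    return [p[i] / p[3] for i in range(3)]  # homogeneous divide

# Ground plane y = 0 (n = (0,1,0), d = 0), light at (0,5,0), vertex at (1,2,1):
p = project(shadow_matrix((0.0, 1.0, 0.0), 0.0, (0.0, 5.0, 0.0)), (1.0, 2.0, 1.0))
print(p)  # y component is 0: the shadow point lies on the plane
```

Geometrically, the ray from (0,5,0) through (1,2,1) hits y = 0 at (5/3, 0, 5/3), which is what the matrix produces.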
Shadow Volumes - Theory
• Can cast shadows
onto arbitrary objects
by clever use of the
stencil buffer
• Imagine a point and a
triangle
• Extend lines from a
point through vertices
to form a pyramid
Shadow Volumes
• If the point is a light source
• Then all objects within the volume are in shadow
• Viewing a scene, we pass a ray through a pixel until the ray hits an object in the scene
• Increment a counter for every front face of a
shadow volume we encounter
• Decrement the counter for every back face
passed
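The counting rule can be sketched as follows, with a hypothetical list of shadow-volume face crossings along one eye ray:

```python
# Minimal sketch of the z-pass counting rule: walk along the eye ray,
# +1 for each shadow-volume front face crossed, -1 for each back face,
# then test the counter when the ray reaches the visible surface.
def in_shadow(faces_crossed_before_hit):
    counter = 0
    for face in faces_crossed_before_hit:
        counter += 1 if face == "front" else -1
    return counter > 0

print(in_shadow(["front"]))                   # True: still inside one volume
print(in_shadow(["front", "back"]))           # False: entered and left a volume
print(in_shadow(["front", "front", "back"]))  # True: inside one of two nested volumes
```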
Shadow Volumes
• Upon hitting an object
• If the counter is greater
than zero, then the object
is in shadow, else it is not
• The figure shows the zpass approach; the zfail variant can be used when the viewpoint is inside a shadow volume
• Hard to do geometrically – ray tracing has a simpler approach
• Use the stencil buffer!
Stencil Buffer
• Restricts drawing to
certain regions of the
screen – just like
craftwork!
• With shadows we keep
track of the Ray Hits by
incrementing and
decrementing the
stencil buffer – we have
8 bits of precision
Shadow Volumes – Screen Dumps
Shadowed View
Volume Front Face
Stencil Buffer (False Colours)
Volume Back Face
Shadow Volumes – Stencil Algorithm
1. Clear stencil buffer
2. Draw scene into frame buffer with ambient and emission values, with Z buffer on
3. Turn Z buffer updates off and writing to colour buffer off
4. Render front-facing polygons of shadow volumes. Stencil is set to increment where a polygon is drawn
5. Render back faces of shadow volumes, this time decrementing the stencil buffer. NB: incrementing and decrementing are done when shadow volume faces are visible (not real geometry)
6. Finally the whole scene is rendered again, this time with only the diffuse and specular components of the materials active, and displayed only where the value in the stencil buffer is 0. NB: zero represents regions that are not in shadow
Shadow Volumes - Issues
• One stencil bit is required for each shadow volume
• Stencil buffer is typically 8 bits
• How to compute shadow volume?
– Extrude every edge into a quad?
– May need to be recomputed every frame
– Can use Geometry Shaders – see demo
• Shadow volumes are large with respect to the
objects themselves
• Shadow volumes are rendering hardware intensive
Unity Asset Store – Shadow Volumes Toolkit
https://youtu.be/6u_56AH5FNo
Projected Textures
• Before we enter into the
theory on Shadow Mapping
we need to understand
projected textures
• So called, as it allows a
program to project a texture
onto arbitrary geometry
• This is useful for projection
effects, such as slide
projectors and light
mapping, where a light
source is represented as a
texture
Projected Textures
• The key is to generate texture coordinates over
the surface of the receiving geometry
• The texture is then applied, giving the
appearance that it has been projected onto the
surface of the object
• These are called projective texture coordinates
Projected Textures
• Need to create a view matrix VL
• And a projection matrix VP for the projector
• They define a frustum relative to world space
• That is, the projector projects through a frustum in the world
• Use the matrices and the homogeneous divide to project the geometry onto the projection plane of the projector
Projected Textures
• Vertices thus end up within the bounds of:
– -1 ≤ x ≤ 1
– -1 ≤ y ≤ 1
– 0 ≤ z ≤ 1
• We then turn these coordinates into the texture coordinate system of [0..1] via a simple viewport transform
– u = ½x + ½
– v = -½y + ½
• where u,v ∈ [0..1] provided x,y ∈ [-1..1]
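The viewport transform, as a minimal sketch (the function name is illustrative):

```python
# Sketch of the viewport transform from projection-plane coordinates
# x, y in [-1, 1] to texture coordinates u, v in [0, 1] (v flipped,
# because v runs opposite to y).
def ndc_to_uv(x, y):
    u = 0.5 * x + 0.5
    v = -0.5 * y + 0.5
    return u, v

print(ndc_to_uv(-1.0, 1.0))  # (0.0, 0.0): top-left of the texture
print(ndc_to_uv(1.0, -1.0))  # (1.0, 1.0): bottom-right
print(ndc_to_uv(0.0, 0.0))   # (0.5, 0.5): centre
```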
Projected Textures
• Note that we invert y due to its opposite direction
to the v axis
• The texture coordinates generated in this
fashion correctly identify the part of the texture
that should be projected onto each triangle
Backwards Projection
• NB: the spatial relationships are not taken into
account, so the projection can go backwards
behind the light
• Therefore to overcome this, treat the projection
coordinates as a spotlight, and only project
within a cone of illumination
Cg Code
struct VSOutput {
    float4 pos     : SV_POSITION;
    float4 col     : COLOR0;
    float4 projTex : TEXCOORD0;
    float4 normal  : TEXCOORD1;
};
Vertex Shader Projection
• Project texture coordinates, like vertices, onto the projection plane for later viewport mapping
– Output.projTex = mul(float4(a_Input.pos, 1.0f), g_matLightWVP);
Pixel Shader Viewport Transform
a_Input.projTex.xy /= a_Input.projTex.w;
a_Input.projTex.x = 0.5f * a_Input.projTex.x + 0.5f;
a_Input.projTex.y = -0.5f * a_Input.projTex.y + 0.5f;
Shadow Mapping - Theory
• Use Z buffer to render shadows on arbitrary
objects
• First render the scene Z depth from the position
of the light source
• Each pixel now contains the distance to the
object closest to the light source
• Now render the scene with respect to the viewer
Shadow Mapping – Theory
• As each primitive is being drawn, its location is compared to the shadow map
• If a rendered point is farther away from the light source than the value in the shadow map, then that point is in shadow; otherwise it is not
• Implemented using eye space texture coordinate projections onto the surface of the shadowed objects – as shown before
Shadow Mapping - Theory
• A shadow testing step is performed within the
shader
• Compares the z-value in the Z-buffer with the z
value in the shadow map
• Z shadow map value is transformed from the
coordinate system of the light source into the
coordinate system of the viewer
• That is, the depth value (z buffer) as a texture is
projected onto the surface of the object to be
rendered
Shadow Mapping - Theory
• If depthmap & zbuffer
values approximately
equal
– pixel color taken from
normal rendering pass
• Else
– pixel color taken from
ambient rendering pass
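The comparison can be sketched as a single depth test with a small bias against self-shadowing; the names and the bias value are illustrative:

```python
# Sketch of the shadow-map comparison for one pixel: the point is lit when
# its depth from the light is (approximately) equal to the stored nearest
# depth; the epsilon bias guards against self-shadowing from precision error.
def shadow_test(point_depth_from_light, shadow_map_depth, bias=1e-3):
    return point_depth_from_light <= shadow_map_depth + bias  # True = lit

print(shadow_test(0.50, 0.50))  # True: this point is the closest to the light, so lit
print(shadow_test(0.75, 0.50))  # False: something nearer blocks it, so in shadow
```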
YouTube Shadow Mapping Demo
https://www.youtube.com/watch?v=3AdLu0PHOnE
Two Rendering Passes
1. To the Depth Map from light viewpoint with z
value only – use Unity Camera as light source
2. To the normal Framebuffer, from the eye point,
passing in previous depth map rendering as a
texture for depth comparisons
Rendering to Shadow Map (Light Camera)
• You need to set the camera up to render a depth map using a script and its own renderer (see right)
• Problem is that the depth will be encoded: it is not a simple RGB texture, as the depth is packed across the RGBA channels as a 32-bit float
• It then needs to be linearised in the shaders (later); precision is concentrated at distances closest to the camera
• Hints at right, please watch video for details
Unity Cg Shadowmap Shaders for Main Camera (Viewpoint)
// Compute pixel depth for shadowing.
float depth = a_Input.projTex.z / a_Input.projTex.w;

// Now linearise using a formula by Humus, drawn from the near and far clipping planes of the camera.
float sceneDepth = _NearClip * (depth + 1.0) / (_FarClip + _NearClip - depth * (_FarClip - _NearClip));

// Transform to texel space.
float2 texelpos = _TexSize * a_Input.projTex.xy;

// Determine the lerp amounts.
float2 lerps = frac(texelpos);

// Sample shadow map.
float dx = 1.0f / _TexSize;
float s0 = (DecodeFloatRGBA(tex2D(_ShadowMap, a_Input.projTex.xy)) + _Bias < sceneDepth) ? 0.0f : 1.0f;
float s1 = (DecodeFloatRGBA(tex2D(_ShadowMap, a_Input.projTex.xy + float2(dx, 0.0f))) + _Bias < sceneDepth) ? 0.0f : 1.0f;
float s2 = (DecodeFloatRGBA(tex2D(_ShadowMap, a_Input.projTex.xy + float2(0.0f, dx))) + _Bias < sceneDepth) ? 0.0f : 1.0f;
float s3 = (DecodeFloatRGBA(tex2D(_ShadowMap, a_Input.projTex.xy + float2(dx, dx))) + _Bias < sceneDepth) ? 0.0f : 1.0f;
CG Shadowmap Shaders
float shadowCoeff = lerp(lerp(s0, s1, lerps.x), lerp(s2, s3, lerps.x), lerps.y);

// output colour multiplied by shadow value
return float4(shadowCoeff * a_Input.col.rgb, g_vecMaterialDiffuse.a);
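The bilinear combination of the four shadow tests can be sketched outside the shader; a hypothetical pcf_coefficient mirroring the lerp structure of the Cg code:

```python
# Sketch of the percentage-closer filtering step: four binary shadow-test
# results (s0..s3, each 0.0 = shadowed or 1.0 = lit) are bilinearly
# interpolated by the sub-texel position to give a fractional coefficient.
def lerp(a, b, t):
    return a + (b - a) * t

def pcf_coefficient(s0, s1, s2, s3, lerp_x, lerp_y):
    return lerp(lerp(s0, s1, lerp_x), lerp(s2, s3, lerp_x), lerp_y)

# Halfway between a lit texel pair and a shadowed texel pair:
print(pcf_coefficient(1.0, 1.0, 0.0, 0.0, 0.3, 0.5))  # 0.5 -> half shadowed
```

This is what softens the otherwise hard, texel-sized shadow boundary.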
Shadow Mapping – Analysis
• Can use general purpose hardware - no need for pixel shaders - though you can use them, with pbuffers as textures
• Cost is linear in the number of objects and access time is constant
• Disadvantage is that quality depends on the resolution of the shadow map and the precision of the z-buffer – chunky shadows and self-shadowing artefacts
Soft Shadows
• The problem with the previous techniques is that they generate hard edged shadows
• Looking around you, you will see that these forms of shadows are rare
• Soft shadows are the norm, and thus have to be modelled in order to make a scene look convincing
General Approach
• Soft shadows appear when a light source has an area – this is why radiosity is able to generate good representations of soft shadows
• One way to simulate this is to generate a number of shadow solutions with point light sources on the surface of the area light source
• For each of these light sources the results are rendered to an accumulation buffer
• The average of these images is an image with soft shadows
Heckbert and Herf
• First the receiver is rendered into a texture
• Occluding objects are rendered into the texture in black and accumulated
• Points on the area light source are used for each iteration
• Accumulation creates blurred shadows
• But looks like a lot of accumulated point light source shadows
Moller and Haines [2]
Gooch
• In this technique, the
receiver is moved along a
normal and averaged to
create the nested shadows
• Can also remove the need
for re-projection of the
shadows by simply
projecting once, and then
either
– blurring the texture using
image processing
– jittering the texture
coordinates, accumulating
and then averaging
Moller and Haines [2]
Shadow Creeping
• These techniques can have
a phenomenon known as
shadow creep
• This is due to the linear
interpolation of the
accumulated shadow
• Even if the object is very
close to the floor, or even
sitting on it
• Shadows may creep out
from underneath the object
due to the accumulation
and linear interpolation of
the values
Shadow Creeping
• Often due to the resolution of the shadow being
low compared to the geometry in the scene
• Happens with radiosity solutions, due to the lack
of precise meshing and generation of fine
patches, so that the linear interpolation of the
radiosity values creeps along
• Think of it as a form of inaccuracy, due to the
interpolation, as with Gouraud shading vs Phong
shading
Shadow Creeping
• Other problems with these methods are the finite number of shades generated by the accumulation
• For n shadow passes only n+1 distinct shades can be generated
• Leaving aliasing as a problem – again!
Percentage-Closer Soft Shadows [1]
• Soft shadows provide valuable cues about the
relationships between objects, becoming
sharper as objects contact each other and more
blurry (softer) the further they are apart
• Generates perceptually accurate soft shadows
• Uses a single light source sample (one shadow
map)
• Requires no pre-processing, post-processing, or
additional geometry
Percentage-Closer Soft Shadows
• Seamlessly replaces a traditional shadow map
query
• Thus same advantages as traditional shadow
mapping independent of scene complexity
• Works with alpha testing, displacement
mapping, and so on
• Only required change is to replace the typical
shadow mapping pixel shader with a PCSS
shader
Percentage-Closer Soft Shadows
• When shading each pixel in the eye view, PCSS
returns a floating point value that indicates the
amount of shadowing at each shaded point
• Replacing the traditional depth comparison of
ordinary shadow mapping
• PCSS is based on the observation that as the size
of the PCF kernel increases, the resulting shadows
become softer
• Challenge is to vary the filter size intelligently to
achieve the correct degree of softness
Percentage-Closer Soft Shadows
• Step 1: Blocker search. We search the shadow map
and average the depths that are closer to the light
source than to the point being shaded (“receiver”). The
size of the search region depends on the light size and
the receiver’s distance from the light source
• Step 2: Penumbra estimation. Using a parallel planes
approximation, we estimate the penumbra width based
on the light size and blocker/receiver distances from the
light:
– wPenumbra = (dReceiver – dBlocker) · wLight / dBlocker
• Step 3: Filtering. Now perform a typical PCF step on the
shadow map using a kernel size proportional to the
penumbra estimate from Step 2
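Step 2's estimate, as a minimal sketch (parameter names are illustrative; all distances are measured from the light):

```python
# Sketch of the PCSS penumbra-width estimate from Step 2, using the
# parallel-planes approximation:
#   wPenumbra = (dReceiver - dBlocker) * wLight / dBlocker
def penumbra_width(d_receiver, d_blocker, w_light):
    return (d_receiver - d_blocker) * w_light / d_blocker

# The further the receiver is behind the blocker, the softer the shadow:
print(penumbra_width(d_receiver=2.1, d_blocker=2.0, w_light=1.0))   # small -> sharp contact shadow
print(penumbra_width(d_receiver=10.0, d_blocker=2.0, w_light=1.0))  # large -> soft, blurry shadow
```

The PCF kernel size in Step 3 is then chosen proportional to this width.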
Blockers
• During the blocker search
step, the PCSS algorithm
searches a region of the
shadow map (shown in
red)
• During the search, it
averages depth values
that are closer to the light
than the receiving point
• This average depth value
is used in the subsequent
penumbra size estimation
step
Penumbra Calculations
PCSS Demonstration Video
https://youtu.be/QW6Tm_mfOmw
Image-Based Rendering
• Instead of using geometry, Image-Based Rendering
(IBR) uses an image as a model of the object – an
obvious extension of texturing
• Is more efficient – rendering is proportional to pixels in
image not geometry
• Can be used to display objects almost impossible to
model geometrically – fur, clouds etc.
• Downside is that geometry can be rendered adequately
from any view, not so with images
Image-Based Rendering
• Rendering methods can cover the gamut shown in the diagram
(Lengyel 1998)
• Covers a spectrum from physically based methods to appearance
based
• We will cover Billboarding
[Figure: a spectrum of rendering methods from appearance based to physically based – images, sprites, lumigraph and light field, layers, billboards, triangles, geometric models, global illumination (Moller and Haines 2003)]
Billboarding
• Many special effects are in fact simple polygons with an image placed over them – e.g. lens flare, trees, clouds, smoke, fog etc.
• In the first place the billboard must be oriented towards a particular direction
Rock Music Videos!
• Presets – http://au.youtube.com/watch?v=M1ufW2INWmM
• MGMT – http://www.youtube.com/watch?v=A_OUqukBHT0
Billboard Orientation
• Often the surface normal n and the up vector u are not perpendicular – we want n to point to the viewer
• Create r by taking the cross product of n and u
• Then create u' by a cross product of n and r
• Thus forming a rotation matrix M = (r, u', n)
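A sketch of the two cross products; the operand order shown gives a right-handed frame, and the example vectors are illustrative:

```python
# Sketch of building the billboard rotation basis: given a normal n that
# points at the viewer and an approximate up vector u, two cross products
# give an orthogonal frame (r, u', n) for the rotation matrix M.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

n = (0.0, 0.0, 1.0)    # normal, towards the viewer
u = (0.0, 1.0, 0.0)    # world up (not necessarily perpendicular to n)
r = cross(u, n)        # right vector
u_prime = cross(n, r)  # corrected up, perpendicular to both n and r

print(r, u_prime)  # an orthogonal frame: (1,0,0) and (0,1,0) in this case
```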
Tree Example
• The tree in this example is oriented automatically towards the user along an up axis
• In order to orientate the object, a rotation is performed around the up vector
• By the angle between the lookAt vector and the vector from the object to the camera
• The vector from the object to the camera can be defined as
– objToCam = CamPos – ObjPos
Tree Example
• The vector objToCamProj is
the projection of objToCam in
the XZ plane
• Therefore its Y component
should be set to zero
Tree Example
• If objToCamProj is normalized then the inner product between lookAt and objToCamProj gives the cosine of the angle
1. normalize(objToCamProj)
2. angleCosine = dotProduct(lookAt, objToCamProj)
3. upAux = crossProduct(lookAt, objToCamProj)
4. rotate(acos(angleCosine) * 180 / π, upAux[x], upAux[y], upAux[z]);
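The steps above can be sketched as follows; billboard_angle_deg is a hypothetical helper, with lookAt assumed to be the +z axis:

```python
import math

# Sketch of the cylindrical-billboard angle: project the object-to-camera
# vector into the XZ plane, normalise it, then recover the rotation angle
# about the up axis from acos of its dot product with lookAt.
def billboard_angle_deg(cam_pos, obj_pos, look_at=(0.0, 0.0, 1.0)):
    obj_to_cam_proj = (cam_pos[0] - obj_pos[0], 0.0, cam_pos[2] - obj_pos[2])
    length = math.hypot(obj_to_cam_proj[0], obj_to_cam_proj[2])
    obj_to_cam_proj = (obj_to_cam_proj[0] / length, 0.0, obj_to_cam_proj[2] / length)
    angle_cosine = sum(a * b for a, b in zip(look_at, obj_to_cam_proj))
    # acos returns radians; rotation APIs typically take degrees.
    return math.degrees(math.acos(max(-1.0, min(1.0, angle_cosine))))

print(billboard_angle_deg(cam_pos=(0.0, 1.0, 5.0), obj_pos=(0.0, 0.0, 0.0)))  # 0.0: camera straight ahead
print(billboard_angle_deg(cam_pos=(5.0, 1.0, 0.0), obj_pos=(0.0, 0.0, 0.0)))  # 90.0: camera to the side
```

Note the y component is zeroed before normalising, so the tree only rotates about its up axis.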
Tree Example
• Then need to use the alpha channel on the textures to blend the values into the framebuffer
• In an RGBA texel, the A represents the alpha value – often translated to visibility, but it can be anything
• The following code then makes the alpha values control blending into the framebuffer
• Can use the clip command in HLSL to implement alpha for billboarding
• Discards the pixel if the parameter is negative
• E.g.
– clip(colour.b + colour.r + colour.g - 0.01f);
• If the colour is black, turn off rendering of the pixel to the framebuffer
• Or can use normal alpha channel approaches
Billboarding Pixel Shader Demo
float4 PS_Texture(VSOutput_PosTex a_Input) : COLOR
{
    float4 colour = tex2D(TexSamplerWrap, a_Input.tex);

    // Apply the alpha test with a threshold
    clip(colour.b + colour.r + colour.g - 0.01f);

    return colour;
}
Tree Example – Billboard
[Figure: Base Texture (RGB), Alpha Values (A), and the resulting Blended Billboard]
No Billboard Demonstration as in Assignment 2
Image Processing on GPUs
• GPUs can be utilised for things other than Image
Synthesis
• Image processing algorithms can be implemented on
modern programmable systems
• Image processing is the opposite to image synthesis
– Image processing is applying algorithms to captured images
– Image synthesis is the process of generating synthetic images
Image Processing on GPUs
• A lot of image processing uses convolution filters to perform such operations as blurring and edge detection
• Convolution filters are the application of a kernel to a pixel, which utilises the surrounding pixels to update the central pixel of the kernel
• Essentially the integral of the product of one function with another over a limited window that is translated across the main function
http://www.infolizer.com/?title=Convolution+kernel
Image Processing on GPUs
• OpenGL has had convolution operations for
some time
• New programmable shaders open up further
possibilities
• Can now perform pixel neighbourhood lookups
• Useful for full screen visual effects
• We will show edge detection and glow
Image Processing on GPUs
• Texture coordinates can
be generated within a
fragment shader or
duplicated in vertex
shaders
• A neighbourhood lookup
can be performed to do
convolutions
• (right) for location x,y
sample one texel either
side to gain eight other
samples
[Figure: a 3×3 pixel region of the input image is multiplied element-wise by a vertical convolution filter and summed to produce the filter response]
CRICOS No. 000213J
Sobel Edge Detection Shader
• Sobel edge detector
is a 3x3 region filter
responsive to edges
in images
• Problem: texture
coordinates are floats
[0, n] – for the
moment...
Sobel Edge Detection Shader
• Have to use a floating
offset
• Offset = n / img-size
• where n is max s,t coord
• Turn off any texture
supersampling options
(why?)
• Sobel filters shown right
– vertical top, horizontal
bottom
[-1  0  1]
[-2  0  2]
[-1  0  1]

[ 1  2  1]
[ 0  0  0]
[-1 -2 -1]
Sobel Edge Detection Shader
• Multiply by the filters to get the A and B responses, then:
• Pix = sqrt(A² + B²)
• Demo – EdgeDemo
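The Sobel response at a single pixel can be sketched as follows (the region values are made up):

```python
import math

# Sketch of Sobel edge detection at one pixel: convolve a 3x3 neighbourhood
# with the two kernels, then combine the responses as sqrt(A^2 + B^2).
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
GY = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]   # responds to horizontal edges

def sobel_magnitude(region):  # region: 3x3 list of pixel intensities
    a = sum(GX[i][j] * region[i][j] for i in range(3) for j in range(3))
    b = sum(GY[i][j] * region[i][j] for i in range(3) for j in range(3))
    return math.sqrt(a * a + b * b)

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]   # uniform region: no edge
step = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]   # vertical intensity step
print(sobel_magnitude(flat))  # 0.0
print(sobel_magnitude(step))  # 4.0: strong response across the edge
```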
Blur Filters
• Another useful image
processing technique is
blurring
• We have touched on this
in texture processing
• Using a convolution filter
of arbitrary size
• An image can be blurred
by weighting the
contributions of each
kernel value
• Following is similar to a
Gaussian kernel
(Beer goggles image by A of DooM, CC 2.0)
Blur Filters
• For our purposes we can use a
five tap or nine tap filter to
perform the blurring
• Pixel value produced is an
average of the surrounding
pixels via weighting on
surrounding samples
• Thus it ends up being a low
pass filter – blurring the
output
• It also spreads the image out
in a dilation process
• Which is handy for…
[ 0    0    0.4  0    0  ]
[ 0    0    0.8  0    0  ]
[ 0.4  0.8  1.0  0.8  0.4]
[ 0    0    0.8  0    0  ]
[ 0    0    0.4  0    0  ]
Glowing
• The trick is to render the blurred image to a texture
• And to then combine the blurred image with the original sharp image
• The averaging produces a bleeding effect, as the sharp values are averaged over a larger area
• So when blended, the object appears to glow
• To single out objects:
– render the others in black,
– single object on a black background
– use shader parameter
Cg Pixel Shader
// Horizontal taps
OutCol += tex2D(_MainTex, float2(a_Input.tex.x, a_Input.tex.y)) * (WT5_0 / WT5_NORMALIZE);
OutCol += tex2D(_MainTex, float2(a_Input.tex.x + TexelIncrement, a_Input.tex.y)) * (WT5_1 / WT5_NORMALIZE);
OutCol += tex2D(_MainTex, float2(a_Input.tex.x - TexelIncrement, a_Input.tex.y)) * (WT5_1 / WT5_NORMALIZE);
OutCol += tex2D(_MainTex, float2(a_Input.tex.x + TexelIncrement * 2, a_Input.tex.y)) * (WT5_2 / WT5_NORMALIZE);
OutCol += tex2D(_MainTex, float2(a_Input.tex.x - TexelIncrement * 2, a_Input.tex.y)) * (WT5_2 / WT5_NORMALIZE);

// Vertical taps
OutCol += tex2D(_MainTex, float2(a_Input.tex.x, a_Input.tex.y)) * (WT5_0 / WT5_NORMALIZE);
OutCol += tex2D(_MainTex, float2(a_Input.tex.x, a_Input.tex.y + TexelIncrement)) * (WT5_1 / WT5_NORMALIZE);
OutCol += tex2D(_MainTex, float2(a_Input.tex.x, a_Input.tex.y - TexelIncrement)) * (WT5_1 / WT5_NORMALIZE);
OutCol += tex2D(_MainTex, float2(a_Input.tex.x, a_Input.tex.y + TexelIncrement * 2)) * (WT5_2 / WT5_NORMALIZE);
OutCol += tex2D(_MainTex, float2(a_Input.tex.x, a_Input.tex.y - TexelIncrement * 2)) * (WT5_2 / WT5_NORMALIZE);
Unity Glow Effect – glowing version on
right
References
1. Randima Fernando, "Percentage-Closer Soft Shadows", developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf, accessed 04/05/2007
2. Akenine-Moller, T. and Haines, E., Real-Time Rendering, second edition, AK Peters, Natick, USA, 2002