Add in first draft of the final delivery

This commit is contained in:
Peder Bergebakken Sundt 2019-04-07 00:18:43 +02:00
parent 95d6981461
commit d73da5b2a1
7 changed files with 354 additions and 3 deletions


@@ -12,16 +12,29 @@ echo Building log_combined.md...
cat log_part7_daylight.md; echo;
) | sed -e "s/ i / I /g" | sed -e "s/ i'm / I'm /g" > log_combined.md
echo Building delivery_combined.md...
(
cat delivery_part1.md; echo;
cat delivery_part2.md; echo;
cat delivery_part3.md; echo;
cat delivery_part4.md; echo;
cat delivery_part5.md; echo;
) | sed -e "s/ i / I /g" | sed -e "s/ i'm / I'm /g" > delivery_combined.md
#ENGINE=pdflatex
#ENGINE=lualatex
ENGINE=xelatex
VARIABLES="$VARIABLES --filter pandoc-codeblock-include"
VARIABLES="$VARIABLES --filter pandoc-imagine" VARIABLES="$VARIABLES --filter pandoc-imagine"
VARIABLES="$VARIABLES --filter pandoc-crossref"
#VARIABLES="$VARIABLES --variable classoption=twocolumn" #VARIABLES="$VARIABLES --variable classoption=twocolumn"
VARIABLES="$VARIABLES --variable papersize=a4paper"
VARIABLES="$VARIABLES --table-of-contents" VARIABLES="$VARIABLES --table-of-contents"
VARIABLES="$VARIABLES --number-sections" VARIABLES="$VARIABLES --number-sections"
#VARIABLES="$VARIABLES --number-offset=0,0" #VARIABLES="$VARIABLES --number-offset=0,0"
VARIABLES="$VARIABLES --variable papersize=a4paper"
VARIABLES="$VARIABLES --variable geometry:margin=2cm"
VARIABLES="$VARIABLES --variable links-as-notes=true" VARIABLES="$VARIABLES --variable links-as-notes=true"
VARIABLES="$VARIABLES --highlight-style=pygments" # the default VARIABLES="$VARIABLES --highlight-style=pygments" # the default
@@ -33,6 +46,7 @@ VARIABLES="$VARIABLES --highlight-style=pygments" # the default
#VARIABLES="$VARIABLES --highlight-style=monochrome" #VARIABLES="$VARIABLES --highlight-style=monochrome"
#VARIABLES="$VARIABLES --highlight-style=breezedark" #VARIABLES="$VARIABLES --highlight-style=breezedark"
ls -1 *.md | grep -v "part" |
( while read source; do
(
@@ -42,8 +56,11 @@ ls -1 *.md | grep -v "part" |
cd "$(dirname $source)" cd "$(dirname $source)"
if [ "$(md5sum "$source")" != "$(cat ${source}5_hash 2>/dev/null)" ]; then
md5sum "$source" > "${source}5_hash"
echo "Converting $source into $(dirname $source)/${dest}.pdf ..." echo "Converting $source into $(dirname $source)/${dest}.pdf ..."
pandoc "$base" --pdf-engine="$ENGINE" $VARIABLES -o "$dest.pdf" pandoc "$base" --pdf-engine="$ENGINE" $VARIABLES -o "$dest.pdf"
fi
) &
done

report/delivery_part1.md

@@ -0,0 +1,34 @@
% TDT4230 Final assignment report
% Peder Bergebakken Sundt
% insert date here
\small
```{.shebang im_out="stdout"}
#!/usr/bin/env bash
printf "time for some intricate graphics surgery!\n" | cowsay -f surgery | head -n -4 | sed -e "s/^/ /"
```
\normalsize
\newpage
# The project
For this project, we're supposed to investigate a more advanced or complex visualisation method in detail by implementing it ourselves using C++ and OpenGL 4.3+. I'll be doing it by myself.
I want to look more into effects one can apply to a scene of different materials. In detail, I plan to implement:

- Phong lighting,
- texturing,
- normal mapping,
- displacement mapping,
- importing model meshes with transformations and materials from external files,
- reflections,
- fog and
- rim backlights.
I also want to implement some post-processing effects:

- chromatic aberration,
- depth of field,
- vignette and
- noise / grain.
The idea I have in mind for the scene I want to create is a field of grass with trees spread about in it, where a car is driving along the ups and downs of the hills. I then plan to throw every effect I can at it to make it look good.

report/delivery_part2.md

@@ -0,0 +1,57 @@
# How does the implementation achieve its goal?
The final implementation has four types defined:
`SceneNode`, `Mesh`, `PNGImage`, and `Material`. The `Material` struct references several `PNGImage`s and stores colors and rules for how it should be applied to a `SceneNode`. Each `SceneNode` references a `Mesh`, stores all material properties applied to the node and which shader it should be rendered with, and holds a list of child `SceneNode`s.
Each mesh can be UV mapped. Each vertex has a UV coordinate assigned to it, which is passed along with the vertex position into the shaders. Texturing meshes is done by looking up the pixel color from a diffuse texture, using the interpolated UV coordinates. This diffuse color is used as the 'basecolor' in further calculations.
## Normal mapping
Normals are defined in two places: one normal vector per vertex in the mesh, and an optional tangent-space normal map texture. The per-vertex normal vector is combined with its tangent and bitangent vectors (the tangents in the U and V directions respectively) into a TBN transformation matrix, which the tangent-space normal vector fetched from the normal map can be transformed with. This allows us to define the normal vector along the surfaces of the mesh.
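A minimal fragment-shader sketch of this lookup, assuming interpolated `normal`, `tangent` and `bitangent` inputs and a hypothetical sampler name `normalTex`:
```{.cpp}
mat3 TBN = mat3(normalize(tangent), normalize(bitangent), normalize(normal));
vec3 t = texture(normalTex, UV).rgb * 2.0 - 1.0; // unpack from [0,1] to [-1,1]
vec3 n = normalize(TBN * t);                     // tangent space -> eye space
```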
## Displacement mapping
Displacement mapping is done in the vertex shader. A displacement texture is sampled using the UV coordinates; it describes how much to offset the vertex along its normal vector. This is further controlled with a displacement coefficient uniform passed to the vertex shader. See @fig:img-fine-plane and @fig:img-displacement-normals.
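A minimal vertex-shader sketch of this, reusing the `displaceTex` sampler name from @lst:new-tbn and the `displacementCoefficient` uniform from @lst:uniform-upload:
```{.cpp}
float height = texture(displaceTex, UV).r * 2.0 - 1.0; // remap [0,1] -> [-1,1]
vec3 displaced = position + normal * height * displacementCoefficient;
gl_Position = MVP * vec4(displaced, 1.0);
```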
## Phong lighting
The Phong lighting model is implemented in the fragment shader. The model describes four light components: the diffuse component, the emissive component, the specular component and the ambient component. Each of these components has a color/intensity assigned to it, which is stored in the `SceneNode`/`Material`.
The colors are computed using the normal vector derived as described above. The basecolor is multiplied with the sum of the diffuse and the emissive colors, and the specular color is added on top. I chose to combine the ambient and emissive into one single component, since I don't need the distinction in my case. I did however make the small change of multiplying the emissive color with the color of the first light in the scene. This allows me to 'tint' the emissive components.
I have two types of light nodes in the scene: point lights and spot lights. Each light has a color associated with it, as well as a position and three attenuation factors. The final attenuation is computed from these three factors as $\frac{1}{x + y\cdot |L| + z\cdot |L|^2}$, where $|L|$ is the distance from the fragment to the light.
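As a sketch, the attenuation and the lit color for a single light could be computed along these lines (uniform and variable names are illustrative; positions are in eye space):
```{.cpp}
vec3  L    = lightPos - fragPos;
float dist = length(L);                       // |L|
float att  = 1.0 / (attenuation.x + attenuation.y*dist + attenuation.z*dist*dist);
vec3  Ldir = L / dist;
float diff = max(dot(normal, Ldir), 0.0);     // diffuse term
float spec = pow(max(dot(reflect(-Ldir, normal), normalize(-fragPos)), 0.0), shininess);
color += att * light_color * (basecolor * diff + specular_color * spec);
```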
## Loading models
Importing of models is done using the library called `assimp`. It is a huge and bulky library which takes decades to compile, but it gets the job done. Each model file is actually a whole 'scene'. I first traverse the materials defined in this scene and store them in my own material structs. I then traverse the textures in the scene and load them into `PNGImage` structs, and traverse all the meshes stored in the scene and store those as well. At last I traverse the nodes in the scene, creating my own nodes and applying the transformations, materials, textures and meshes referenced. Finally I transform the root node to account for me using a coordinate system where z points skyward.
## Reflections
Reflections are implemented in the fragment shader, using the vector pointing from the camera to the fragment (F) and the normal vector. I reflect the F vector about the normal vector and normalize the result. Computing the dot product between the normalized reflection and any other unit vector gives me the cosine of the angle between the two. Computing this cosine both northward and skyward allows me to map the reflection onto a sphere and retrieve the UV coordinates used to fetch a reflection color value from a reflection map texture (see @fig:img-reflection and @fig:img-reflection-map).
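One plausible sketch of this mapping (`north`, `skyward` and `reflectionTex` are assumed names, and the exact cosine-to-UV remap is an assumption):
```{.cpp}
vec3 R = normalize(reflect(F, normal)); // F points from the camera to the fragment
float u = dot(R, north)   * 0.5 + 0.5;  // cosine northward, remapped to [0, 1]
float v = dot(R, skyward) * 0.5 + 0.5;  // cosine skyward, remapped to [0, 1]
vec3 reflection = texture(reflectionTex, vec2(u, v)).rgb;
```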
## Fog
TODO
## Rim backlights
To make objects pop a bit more, one can apply a rim backlight color. The effect tries to create an edge/rim/silhouette light around an object: the more the surface normal points away from the camera, the more it lights up, with maximum brightness at 90 degrees away from the camera, decreasing the more it faces the camera. I compute the dot product between the normalized vector from the camera to the fragment and the normal vector, which gives me the cosine of the angle between the two: a value of 1 when pointing away from the camera, 0 when at 90 degrees, and -1 when facing the camera. Adding a "strength" value to this will skew it more towards the camera. Dividing it by the same strength value and clamping it will yield the rim light strength (see @fig:img-rim-lights).
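A minimal sketch of this computation (`strength` and `backlight_color` are hypothetical material parameters):
```{.cpp}
float cosv = dot(normalize(fragPos), normal); // 0 at 90 degrees, -1 facing the camera
float rim  = clamp((cosv + strength) / strength, 0.0, 1.0);
color += backlight_color * rim;
```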
## Post processing
Post processing is achieved by rendering the whole scene, not to the window, but to an internal framebuffer instead. This framebuffer is then used as a texture covering a single quad, which in turn is rendered to the window. This in-between step allows me to apply different kinds of effects in the fragment shader which rely on being able to access neighboring pixels' depth and color values.
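A minimal sketch of such an off-screen render target using standard GL calls (names are hypothetical; error checking and the remaining texture parameters are omitted):
```{.cpp}
GLuint fbo, colorTex, depthTex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glGenTextures(1, &depthTex); // depth as a texture, so the post shaders can read it
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
// per frame: bind fbo and render the scene, then bind framebuffer 0 and
// render a fullscreen quad sampling colorTex and depthTex
```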
### Depth of Field / Blur
Using this post processing shader, I can apply blur to the scene. Depth of field is a selective blur, keeping a certain distance in focus. I first transform the depth buffer (see @fig:img-depth-map) to be 0 around the point of focus, tending towards 1 otherwise. I then use this focus value as the range of my blur. The blur is simply the average of a selection of neighboring pixels. See @fig:img-depth-of-field for results.
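A sketch of the blur, where `focus` is the remapped depth value described above and the sampler/uniform names are assumptions:
```{.cpp}
vec3 sum = vec3(0.0);
for (int x = -2; x <= 2; x++)
    for (int y = -2; y <= 2; y++) // 5x5 box blur, radius scaled by the focus value
        sum += texture(framebufferTex, UV + vec2(x, y) * pixelSize * focus).rgb;
color = sum / 25.0;
```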
### Chromatic aberration
Light refracts differently depending on wavelength (see @fig:img-what-is). By scaling the three color components by different amounts, I can recreate this effect. This scaling is further multiplied by the focus value, to avoid aberration near the vertical line in @fig:img-what-is.
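A sketch of the per-channel scaling (the coefficients here are illustrative, not the report's actual values):
```{.cpp}
vec2 d = UV - vec2(0.5); // offset from the image center
float r = texture(framebufferTex, vec2(0.5) + d * (1.0 - 0.009 * focus)).r;
float g = texture(framebufferTex, vec2(0.5) + d * (1.0 - 0.006 * focus)).g;
float b = texture(framebufferTex, vec2(0.5) + d * (1.0 - 0.003 * focus)).b;
color = vec3(r, g, b);
```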
### Vignette
The vignette effect is simply a darkening of the image the further away from the center one is. One can simply use the Euclidean distance from the center to compute this. See @fig:img-vingette.
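As a sketch (`vignetteStrength` is a hypothetical uniform):
```{.cpp}
float d = distance(UV, vec2(0.5));       // euclidean distance from the center
color *= 1.0 - vignetteStrength * d * d; // darken towards the edges
```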
### Noise / Grain
GLSL doesn't have a random number generator built in, but I found one online. I modified it to use the UV vector and a time uniform as its seed. This generator is used to add noise to the image. The noise is multiplied with the focus value for a dramatic effect.
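A sketch of how such a generator is seeded and applied, assuming the widely circulated GLSL one-liner (`grainStrength` is a hypothetical uniform):
```{.cpp}
float rand(vec2 co) { // the canonical 'magic numbers' PRNG found all over the web
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}
// in main():
float n = rand(UV + vec2(time));
color += (n - 0.5) * grainStrength * focus;
```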

report/delivery_part3.md

@@ -0,0 +1,41 @@
# Notable problems encountered on the way, and how I solved them
## General difficulties
A lot of time was spent cleaning up and modifying the gloom base project. A lot of time was also spent working with `assimp` and getting the internal framebuffer to render correctly. `assimp` and `OpenGL` aren't the most verbose companions out there.
I learned that the handedness of face culling and normal maps isn't the same everywhere. Luckily `assimp` supports flipping faces. When reading the grass texture, I had to flip the R and G color components of the normal map to make it look right. See @fig:img-wrong-handedness and @fig:img-flipped-handedness.
## The slope of the displacement map
The scrolling field of grass is actually just a static plane mesh of 100x100 vertices with a Perlin noise displacement map applied to it (I use a UV offset uniform to make the field scroll). You can however see in @fig:img-fine-plane that the old normals don't mesh with the now displaced geometry. I therefore had to recalculate the normals using the slope of the displacement, rotating the TBN matrix and normal vectors in the shader to make them behave nicely with the lighting. Luckily I have both the tangent and the bitangent vectors pointing in the U and V directions: calculating the slope of the displacement in both of these directions allows me to add the normal vector times the slope to the tangent and the bitangent. After normalizing the tangents, I can compute the new normal vector as the cross product of the two. From these I construct the TBN matrix. See @lst:new-tbn for the code.
This did however give me a pretty coarse image, so I moved the computation of the TBN matrix from the vertex shader to the fragment shader. This comes with a slight performance penalty, but I can undo the change in a simplified shader should I need the performance boost. See @fig:img-displacement-normals for results.
## Transparent objects {#sec:trans}
When rendering transparent objects with depth testing enabled, we run into issues, as seen in @fig:img-tree-alpha. The depth test is simply a comparison against the depth buffer, which determines whether a fragment should be rendered or not. When a fragment is rendered, the depth buffer is updated with the depth of the rendered fragment. Only fragments which would appear behind already rendered fragments are skipped. But non-opaque objects should allow objects behind them to still be visible.
As a first step towards fixing this issue, I split the rendering of the scene into two stages: opaque nodes and transparent nodes. The first stage traverses the scene graph and stores all transparent nodes in a list. Afterwards the list is sorted by distance from the camera, then rendered back to front. This ensures that the transparent meshes furthest away are rendered before the ones in front of them, which won't trip up the depth test. The results of this can be viewed in @fig:img-tree-sorted.
We still have issues here, however. Faces within the same mesh aren't sorted and can be rendered in the wrong order. This is visible near the top of the tree in @fig:img-tree-sorted. To fix this one could sort all the faces, but that isn't feasible in real-time rendering applications. I then had the idea of disabling the depth test. This looks *better* in this case, but it would mean that opaque objects would always be beneath transparent ones, since the transparent ones are rendered in a second pass afterwards.
I then arrived at the solution of setting `glDepthMask(GL_FALSE);`, which makes the depth buffer read-only. All writes to the depth buffer are ignored. Using this, the depth buffer created by the opaque objects can be used while rendering the transparent ones, and since the transparent ones are rendered in sorted order, they *kinda* work out as well. See @fig:img-tree-depth-readonly for the result. The new rendering pipeline is visualized in @fig:render-pipeline and sketched below.
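A minimal C++ sketch of the two-stage pipeline (function and member names are assumptions based on the labels in @fig:render-pipeline):
```{.cpp}
std::vector<SceneNode*> transparent_nodes;
renderNodes(rootNode, /*only_opaque=*/ true); // pass 1, collects the skipped transparent nodes
std::sort(transparent_nodes.begin(), transparent_nodes.end(),
          [](SceneNode* a, SceneNode* b) {
              return a->distanceToCamera > b->distanceToCamera; // back to front
          });
glDepthMask(GL_FALSE);                        // make the depth buffer read-only
for (SceneNode* node : transparent_nodes)
    renderNodes(node, /*only_opaque=*/ false, /*no_recursion=*/ true);
glDepthMask(GL_TRUE);
```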
## Need for optimizations
At some point, when I had over 5000 meshes in the scene, I noticed a performance drop and started to look into optimizations. Resizing the window didn't affect the FPS, so I shouldn't be fragment bound. I assume I'm not vertex bound either, so I had to be bandwidth bound, which makes sense given my single-channel RAM and integrated graphics. Reducing the amount of data sent between the CPU and the GPU was my goal.
After some searching through the code I came across the part where I upload the uniforms to GL for each draw call (see @lst:uniform-upload).
I first optimized the `s->location()` calls:
Each is a lookup from a uniform name string to a location `GLint`. Asking GL directly every time is costly, due to the limited bandwidth and the compiler being unable to inline dynamically linked functions. Therefore I cache the results per shader. See @lst:location-cache for the fix.
Uploading all these uniforms per GL draw call is inefficient as well.
Most of the uniforms don't change between draw calls. I therefore added caching of the uniforms using a bunch of nasty preprocessor tricks and static variables. See @lst:uniform-cache.
The next step for optimization would be to combine meshes sharing the same material into a single mesh. Most of the grass could be optimized this way: each bundle of grass consists of 64 nodes with the same materials and textures applied. Concatenating the meshes would decrease the scene traversal and other rendering overhead significantly.
I could also specialize the shader for each material: I thought of replacing many of the uniforms with defines and compiling a separate shader for each material, but time is a limited resource.

report/delivery_part4.md

@@ -0,0 +1,9 @@
# What I learned about the methods in terms of advantages, limitations, and how to use them effectively
Post-processing is a great tool, but it adds complexity to the rendering pipeline. Debugging issues with the framebuffer isn't easy. It does have the advantage of allowing me to skew the window along a sine curve, should I want to.
Post-processing is also a cost-saving measure in terms of performance. It allows me to compute some values only once per pixel instead of once per fragment, since fragments may cover one another. The grain and vignette effects are both possible to implement in the scene shader, but doing them in the post-processing step spares computation.
The method I used to render transparent objects works *okay*, as described in @sec:trans, but it does have consequences for the post-processing step later in the pipeline. I now have an incomplete depth buffer to work with, as seen in @fig:img-depth-map. This makes adding a fog effect in post produce many artifacts. Fog can however be done in the fragment shader for the scene anyway, with only a slight performance penalty due to overlapping fragments.
One other weakness with the way I render transparent objects is that transparent meshes which cut into each other will be rendered incorrectly. The whole mesh is sorted and rendered, not each face. If I had two transparent ice cubes intersecting one another *(kinda like a Venn diagram)*, one cube would be rendered on top of the other. This doesn't matter for grass, but more complex and central objects in the scene may suffer from this.

report/delivery_part5.md

@@ -0,0 +1,176 @@
# Appendix
![
The segmented plane with the cobble texture and normal map
](images/0-base.png){#fig:img-base}
![
The plane from @fig:img-base with a Perlin noise displacement map applied to it
](images/1-perlin-displacement.png){#fig:img-perlin-displacement}
![
First rendering of the downloaded grass texture and normal map
](images/2-wrong-handedness.png){#fig:img-wrong-handedness}
![
Rendering of the downloaded grass texture with flipped normal map handedness
](images/3-flipped-handedness.png){#fig:img-flipped-handedness}
![
The field with grass texture, normal map and displacement map
](images/4-fine-plane.png){#fig:img-fine-plane}
![
How a mirrored-on-repeat texture behaves
](images/5-gl-mirror.jpg){#fig:img-gl-mirror}
```{.cpp #lst:new-tbn caption="Modified TBN matrix to account for the slope of the displacement"}
if (isDisplacementMapped) {
    float o = texture(displaceTex, UV).r * 2 - 1;
    float u = (texture(displaceTex, UV + vec2(0.0001, 0)).r*2-1 - o) / 0.0004; // magic numbers!
    float v = (texture(displaceTex, UV + vec2(0, 0.0001)).r*2-1 - o) / 0.0004; // magic numbers!
    TBN = mat3(
        normalize(tangent + normal*u),
        normalize(bitangent + normal*v),
        normalize(cross(tangent + normal*u, bitangent + normal*v))
    );
}
```
![
The displaced field with the TBN matrix rotated along the slope of the displacement.
](images/6-displacement-normals.png){#fig:img-displacement-normals}
![
Car mesh loaded from car model, without transformations
](images/7-car-meshes.png){#fig:img-car-meshes}
![
Car mesh loaded from car model with transformations applied.
](images/8-car-transformations.png){#fig:img-car-transformations}
![
Car mesh loaded from car model with transformations applied, rotated to make z point skyward.
](images/9-car-coordinate-system.png){#fig:img-car-coordinate-system}
![
Diffuse colors loaded from car model
](images/10-car-materials.png){#fig:img-car-materials}
![
Diffuse, emissive and specular colors loaded from car model
](images/11-material-colors.png){#fig:img-material-colors}
![
Car model with all colors, with reflection mapping applied.
](images/12-reflection.png){#fig:img-reflection}
![
The reflection map texture applied to the car
](../res/textures/reflection_field.png){#fig:img-reflection-map}
![
Tree model loaded from model file, no texture support yet.
](images/13-tree.png){#fig:img-tree}
![
Tree model loaded from model file, textures applied.
](images/14-tree-alpha.png){#fig:img-tree-alpha}
![
Tree model with textures, transparent meshes rendered last in sorted order.
](images/15-tree-sorted.png){#fig:img-tree-sorted}
![
Tree model with textures, transparent meshes rendered last in sorted order, with depthbuffer in read-only mode.
](images/16-tree-depth-readonly.png){#fig:img-tree-depth-readonly}
```{.dot include="images/rendering-pipeline.dot" caption="The scene rendering pipeline, handling transparent nodes" #fig:render-pipeline}
```
![
Grass model loaded, cloned and placed randomly throughout the scene
](images/17-low-fps.png){#fig:img-low-fps}
```{.cpp caption="The node uniforms being uploaded to the shader" #lst:uniform-upload}
glUniformMatrix4fv(s->location("MVP") , 1, GL_FALSE, glm::value_ptr(node->MVP));
glUniformMatrix4fv(s->location("MV") , 1, GL_FALSE, glm::value_ptr(node->MV));
glUniformMatrix4fv(s->location("MVnormal"), 1, GL_FALSE, glm::value_ptr(node->MVnormal));
glUniform2fv(s->location("uvOffset") , 1, glm::value_ptr(node->uvOffset));
glUniform3fv(s->location("diffuse_color") , 1, glm::value_ptr(node->diffuse_color));
glUniform3fv(s->location("emissive_color"), 1, glm::value_ptr(node->emissive_color));
glUniform3fv(s->location("specular_color"), 1, glm::value_ptr(node->specular_color));
glUniform1f( s->location("opacity"), node->opacity);
glUniform1f( s->location("shininess"), node->shininess);
glUniform1f( s->location("reflexiveness"), node->reflexiveness);
glUniform1f( s->location("displacementCoefficient"), node->displacementCoefficient);
glUniform1ui(s->location("isTextured"), node->isTextured);
glUniform1ui(s->location("isVertexColored"), node->isVertexColored);
glUniform1ui(s->location("isNormalMapped"), node->isNormalMapped);
glUniform1ui(s->location("isDisplacementMapped"), node->isDisplacementMapped);
glUniform1ui(s->location("isReflectionMapped"), node->isReflectionMapped);
glUniform1ui(s->location("isIlluminated"), node->isIlluminated);
glUniform1ui(s->location("isInverted"), node->isInverted);
```
```{.cpp caption="Function for caching the uniform locations per shader. The commented line is the old implementation." #lst:location-cache}
GLint inline Shader::location(std::string const& name) {
    //return glGetUniformLocation(mProgram, name.c_str());
    auto it = this->cache.find(name);
    if (it == this->cache.end())
        return this->cache[name] = glGetUniformLocation(mProgram, name.c_str());
    return it->second;
}
```
```{.cpp caption="The uniform caching defines" #lst:uniform-cache}
bool shader_changed = s != prev_s;
#define cache(x) static decltype(node->x) cached_ ## x; \
if (shader_changed || cached_ ## x != node->x) { cached_ ## x = node->x;
#define um4fv(x) cache(x) glUniformMatrix4fv(s->location(#x), 1, GL_FALSE, glm::value_ptr(node->x)); }
#define u2fv(x) cache(x) glUniform2fv( s->location(#x), 1, glm::value_ptr(node->x)); }
#define u3fv(x) cache(x) glUniform3fv( s->location(#x), 1, glm::value_ptr(node->x)); }
#define u1f(x) cache(x) glUniform1f( s->location(#x), node->x); }
#define u1ui(x) cache(x) glUniform1ui( s->location(#x), node->x); }
```
![
Car, trees and grass combined into a night driving scene. Two yellow spot lights attached to the head lights, two yellow point lights attached to the head lights, two red point lights attached to the rear lights.
](images/18-night-scene-lights.png){#fig:img-night-scene-lights}
![
A pink rim backlight applied to the car with strength of 0.3.
](images/19-rim-lights.png){#fig:img-rim-lights}
![
Visualisation of the transformed depth buffer, transformed into a 'point of focus' buffer. z=0 at the depth of the car, tends toward 1 otherwise.
](images/20-depth-map.png){#fig:img-depth-map}
![
Depth of field applied to the scene
](images/21-depth-of-field.png){#fig:img-depth-of-field}
![
The vignette effect, applied to a white frame buffer
](images/22-vingette.png){#fig:img-vingette}
![
The chromatic aberration effect. F is the point of focus. The transformed depthbuffer is centered around the vertical line crossing F.
](images/23.5-what-is.jpg){#fig:img-what-is}
![
Chromatic aberration applied to the scene, where the aberration coefficients have been multiplied by 3.
](images/23-chromatic-aberration.png){#fig:img-chromatic-aberration}
![
Noise/grain applied to the frame buffer.
](images/24-noise.png){#fig:img-noise}
![
Noise/grain, multiplied by the depthbuffer/point of focus, applied to the frame buffer.
](images/25-all-effects.png){#fig:img-all-effects}
![
The same scene, during the day. Spotlights have been turned off.
](images/26-day.png){#fig:img-day}

report/images/rendering-pipeline.dot

@@ -0,0 +1,17 @@
digraph asd {
//rankdir=LR;
dpi=600;
ratio=0.55;
node [fontname=arial, shape=rectangle, style=filled, fillcolor="#dddddd"]
null [ label="updateNodes(rootNode);" ]
0 [ label="renderNodes(rootNode, only_opaque=true);" ]
1 [ label="std::sort(transparent_nodes);" ]
2 [ label="glDepthMask(GL_FALSE);" ]
3 [ label="for (Node* n : transparent_nodes)\l renderNodes(n, no_recursion=true);\l" ]
4 [ label="glDepthMask(GL_TRUE);" ]
5 [ label="renderNodes(hudNode);" ]
null->0
0->1 [label="create vector of the\lskipped transparent nodes"]
1->2->3->4->5
}