Refactoring

mklefrancois 2021-09-07 09:42:21 +02:00
parent 3e399adf0a
commit d90ce79135
222 changed files with 9045 additions and 5734 deletions


@@ -1,8 +1,142 @@
# NVIDIA Vulkan Ray Tracing Tutorial
This sample is a simple OBJ viewer in Vulkan, without any ray tracing functionality.
It is the starting point of the [ray tracing tutorial](https://nvpro-samples.github.io/vk_raytracing_tutorial_KHR/),
the source of the application where ray tracing will be added.
![](images/vk_ray_tracing__before.png)
## [**Start Ray Tracing Tutorial**](https://nvpro-samples.github.io/vk_raytracing_tutorial_KHR/)
Before starting the tutorial and adding what is necessary to enable ray tracing, here is a brief description of how this basic example was created.
## Structure
![resultRasterCube](../docs/Images/resultRasterCube.png)
* main.cpp: creates the Vulkan context, loads the OBJ files, and runs the drawing loop
* hello_vulkan.[cpp|h]: example class that loads and stores OBJ models in Vulkan buffers, loads textures, creates the pipelines, and renders the scene
* [nvvk](https://github.com/nvpro-samples/nvpro_core/tree/master/nvvk): library of independent Vulkan helpers.
## Overview
The following describes the function calls in `main()`. The helpers are not described here, but following the links gives more details on how they work.
### Window and Vulkan Setup
* Creates a window using [GLFW](https://www.glfw.org/).
* Creates a Vulkan context using the [`nvvk::Context`](https://github.com/nvpro-samples/nvpro_core/tree/master/nvvk#context_vkhpp)
helper. The result is a `VkInstance`, `VkDevice`, `VkPhysicalDevice`, the queues, and the initialized extensions.
* Creating the window does not create the surface to draw into; the surface is created from the `VkInstance` and the `GLFWwindow`. The resulting `VkSurfaceKHR` is later used to find a queue that can present to it (see the sketch below).
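Here is that sketch, assuming the `nvvk::Context` API from nvpro_core and standard GLFW calls (window size and extension choices are illustrative; error handling omitted):

```cpp
// Sketch: window creation, Vulkan context, and presentation surface.
glfwInit();
glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);  // Vulkan: no OpenGL context
GLFWwindow* window = glfwCreateWindow(1280, 720, "vk_tutorial", nullptr, nullptr);

nvvk::ContextCreateInfo contextInfo;
contextInfo.setVersion(1, 2);
// Instance extensions required by GLFW to create the surface.
uint32_t count{0};
const char** glfwExt = glfwGetRequiredInstanceExtensions(&count);
for(uint32_t i = 0; i < count; i++)
  contextInfo.addInstanceExtension(glfwExt[i]);
contextInfo.addDeviceExtension(VK_KHR_SWAPCHAIN_EXTENSION_NAME);

nvvk::Context vkctx{};
vkctx.initInstance(contextInfo);                      // VkInstance
auto compatibleDevices = vkctx.getCompatibleDevices(contextInfo);
vkctx.initDevice(compatibleDevices[0], contextInfo);  // VkPhysicalDevice, VkDevice, queues

// The surface for this window, later used to pick a queue that can present.
VkSurfaceKHR surface{};
glfwCreateWindowSurface(vkctx.m_instance, window, nullptr, &surface);
```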
### Sample Setup
* `HelloVulkan` setup:
* [base] Keep a copy of `VkInstance`, `VkDevice`, `VkPhysicalDevice` and the graphics queue index.
* [base] Create a command pool
* Initialize the [Vulkan resource allocator](https://github.com/nvpro-samples/nvpro_core/tree/master/nvvk#resourceallocator_vkhpp) (`nvvk::ResourceAllocator`).
* Create Swapchain [base]:
* Using the base class method and the help of [`nvvk::Swapchain`](https://github.com/nvpro-samples/nvpro_core/tree/master/nvvk#swapchain_vkhpp),
initialize the swapchain for rendering onto the surface.
* Find the surface color and depth formats.
* Create *n* `VkFence` objects for synchronization.
* Create depth buffer [base]: a depth image using the format queried during swapchain creation
* Create render pass [base]:
* Simple render pass, with two attachments: color and depth
* Using the swapchain color and depth formats.
* Create frame buffers [base]:
* The swapchain returns *n* images
* Create as many framebuffers, attaching one image to each
* Initialize GUI [base]: we are using [Dear ImGui](https://github.com/ocornut/imgui); this
initializes its Vulkan backend.
* Loading [Wavefront .obj file](https://en.wikipedia.org/wiki/Wavefront_.obj_file):
* This is a very simple format, but it allows us to edit the file by hand, to load several .obj files, and to combine them into a more complex scene. We can also easily create several instances of the loaded objects.
* Loading the .obj is done through the [ObjLoader](https://github.com/nvpro-samples/vk_raytracing_tutorial_KHR/blob/3e399adf0a3e991795fbdf91e0e6c23b9f492bb8/common/obj_loader.cpp#L26) which is using [tinyobjloader](https://github.com/tinyobjloader/tinyobjloader).
* Each vertex consists of: position, normal, color, texture coordinates
* Every 3 vertex indices form a triangle.
* Each triangle stores the index of the material it uses.
* There are M materials and T textures.
* We allocate buffers and stage the transfer to the GPU using our resource allocator.
* We create an instance of the loaded model.
* :warning: We keep the device addresses of the created buffers and append them to a vector of `ObjDesc`, which gives shaders easy access to the model information (see the sketch after this list).
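That sketch, condensed from the `loadModel` implementation shown in the diff further down (`m_alloc` is the resource allocator; `flag` is assumed to include `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT`):

```cpp
// Sketch: upload one OBJ model and record its buffer device addresses.
ObjLoader loader;
loader.loadModel(filename);  // fills m_vertices, m_indices, m_materials, m_matIndx

ObjModel model;
model.vertexBuffer   = m_alloc.createBuffer(cmdBuf, loader.m_vertices,  VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | flag);
model.indexBuffer    = m_alloc.createBuffer(cmdBuf, loader.m_indices,   VK_BUFFER_USAGE_INDEX_BUFFER_BIT | flag);
model.matColorBuffer = m_alloc.createBuffer(cmdBuf, loader.m_materials, VK_BUFFER_USAGE_STORAGE_BUFFER_BIT | flag);
model.matIndexBuffer = m_alloc.createBuffer(cmdBuf, loader.m_matIndx,   VK_BUFFER_USAGE_STORAGE_BUFFER_BIT | flag);

// ObjDesc: device addresses, so shaders can reach the data directly.
ObjDesc desc;
desc.vertexAddress        = nvvk::getBufferDeviceAddress(m_device, model.vertexBuffer.buffer);
desc.indexAddress         = nvvk::getBufferDeviceAddress(m_device, model.indexBuffer.buffer);
desc.materialAddress      = nvvk::getBufferDeviceAddress(m_device, model.matColorBuffer.buffer);
desc.materialIndexAddress = nvvk::getBufferDeviceAddress(m_device, model.matIndexBuffer.buffer);
m_objDesc.emplace_back(desc);  // one entry per loaded model
```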
### Offscreen Image & Graphics Pipeline
We have a swapchain and window framebuffers, but we will not render the scene directly to them. Instead, we will render to an RGBA32F image and then draw that image onto a full-screen triangle through a tone mapper. We could have rendered directly to one of the swapchain framebuffers and applied the tone mapper there, but this separation will be very useful for rendering G-buffers and for doing something similar with ray tracing.
* Create offscreen render:
* Create a color and a depth image; the color format is `VK_FORMAT_R32G32B32A32_SFLOAT` and the depth format is the best available for the physical device, found with [`nvvk::findDepthFormat`](https://github.com/nvpro-samples/nvpro_core/tree/master/nvvk#renderpasses_vkhpp).
* Create a render pass targeting those images with [`nvvk::createRenderPass`](https://github.com/nvpro-samples/nvpro_core/tree/master/nvvk#renderpasses_vkhpp).
* Create a framebuffer to use as the target when rendering offscreen.
* Create descriptor set layout:
* Use [`nvvk::DescriptorSetBindings`](https://github.com/nvpro-samples/nvpro_core/tree/master/nvvk#class-nvvkdescriptorsetbindings) to describe the resources that will be used in the vertex and fragment shaders.
* Create graphics pipeline:
* Using [`nvvk::GraphicsPipelineGeneratorCombined`](https://github.com/nvpro-samples/nvpro_core/tree/master/nvvk#class-nvvkgraphicspipelinegeneratorcombined)
* Load both the vertex and fragment SPIR-V shaders
* Describe the vertex attributes: pos, nrm, color, uv
* Create uniform buffer:
* The uniform buffer holds global scene information and is updated each frame.
* In this sample it consists of the camera matrices and will be
updated in the drawing loop using `updateUniformBuffer()`.
* Create Obj description buffer:
* When loading each .obj, we stored the device addresses of its buffers.
* The array has one entry per loaded model.
* Each instance stores the index of its OBJ model, so the model information is easily retrieved in the shader.
* This function creates the buffer holding the vector of `ObjDesc`.
* Update descriptor set:
* The descriptor set has already been created; here we write the actual resources into it, meaning all the buffers and textures we have created.
At this point, the OBJs are loaded and their data is uploaded to the GPU. The graphics pipeline and the descriptions of the resources it uses are also created.
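As a sketch of the offscreen target creation described above, assuming it uses the `nvvk` helpers linked in the list (member names follow the sample's conventions and are illustrative):

```cpp
// Sketch: RGBA32F color target, best-fit depth format, and a render pass.
m_offscreenColorFormat = VK_FORMAT_R32G32B32A32_SFLOAT;
m_offscreenDepthFormat = nvvk::findDepthFormat(m_physicalDevice);

// Color image: rendered into here, then sampled by the post pass.
VkImageCreateInfo colorInfo = nvvk::makeImage2DCreateInfo(
    m_size, m_offscreenColorFormat,
    VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_SAMPLED_BIT);
nvvk::Image colorImage = m_alloc.createImage(colorInfo);

// One subpass with the color and depth attachments, cleared on load.
m_offscreenRenderPass = nvvk::createRenderPass(
    m_device, {m_offscreenColorFormat}, m_offscreenDepthFormat,
    1 /*subpassCount*/, true /*clearColor*/, true /*clearDepth*/);
```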
### Post Pipeline
We need to create a "post pipeline". This is specific to the raster of this sample. As mentioned before, we render in a full screen triangle, the result of the raster and apply a tonemapper. This step will be rendered directly into the framebuffer of the swapchain.
![](images/base_pipeline.png)
* Create post descriptor:
* Adding the image input (result of rendering)
* Create post pipeline:
* Passthrough vertex shader (full-screen triangle)
* Fragment tone-mapper shader
* Update post descriptor:
* Write the descriptor of the rendered (RGBA32F) image
At this point we are ready to start the rendering loop.
* Set up GLFW callbacks:
* Our base class has many functions reacting to GLFW events, such as window resizes and mouse and keyboard input.
* All callback functions are virtual, which allows overriding the base functionality, but for most of them there is nothing to do.
* The exception is `onResize`: since we render to an intermediate (RGBA32F) image, that image must be re-created at the new size and the post pipeline needs the descriptor of the new image. Therefore, this function is overridden, as sketched below.
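A sketch of that override, assuming the helper names used elsewhere in this sample:

```cpp
// Sketch: onResize override. The offscreen image is size-dependent, so it is
// re-created, and the post descriptor must point at the new image.
void HelloVulkan::onResize(int /*w*/, int /*h*/)
{
  createOffscreenRender();    // re-create the RGBA32F image at the new size
  updatePostDescriptorSet();  // write the new image into the post descriptor
}
```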
### Rendering Loop
* Render indefinitely until GLFW receives a close event
* GLFW polls events
* ImGui starts a new frame and builds its draw data (not drawn yet)
* Prepare Frame
* This is part of the swapchain handling: with a double or triple buffer,
this call waits (on a fence) until one swapchain framebuffer
is available.
* Get current frame
* With a double buffer it returns 0 or 1; with a triple buffer, 0, 1, or 2.
* The base class holds that many of each per-frame resource: color images, framebuffers, command buffers, fences
* Get the command buffer for the current frame (re-using command buffers is cheaper than creating new ones)
* Update the uniform buffer:
* Push the current camera matrices
#### Offscreen
* Start offscreen rendering pass
* We rasterize into the RGBA32F image using the offscreen render pass and framebuffer.
* Rasterize
* This rasterizes the entire scene: all instances of each object.
#### Final
Start another render pass, this time in the swapchain framebuffer
* Use the base class render pass
* Use the swapchain framebuffer
* Render a full-screen triangle with the raster image, tone-mapping it
* Actually render ImGui in Vulkan
* Submit the frame for execution and display
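Put together, a condensed sketch of one loop iteration (helper names follow the base class and `main.cpp`; `offscreenPassBegin` and `postPassBegin` are hypothetical `VkRenderPassBeginInfo` variables assumed to be set up as described above):

```cpp
// Sketch of the rendering loop: one frame per iteration.
while(!glfwWindowShouldClose(window))
{
  glfwPollEvents();                     // GLFW event polling
  ImGui_ImplGlfw_NewFrame();
  ImGui::NewFrame();
  renderUI(helloVk);                    // build UI draw data (not drawn yet)
  ImGui::Render();

  helloVk.prepareFrame();               // wait until a swapchain image is free
  uint32_t curFrame = helloVk.getCurFrame();
  VkCommandBuffer cmdBuf = helloVk.getCommandBuffers()[curFrame];

  VkCommandBufferBeginInfo beginInfo{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
  beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
  vkBeginCommandBuffer(cmdBuf, &beginInfo);
  helloVk.updateUniformBuffer(cmdBuf);  // push the current camera matrices

  // Offscreen: rasterize the scene into the RGBA32F image.
  vkCmdBeginRenderPass(cmdBuf, &offscreenPassBegin, VK_SUBPASS_CONTENTS_INLINE);
  helloVk.rasterize(cmdBuf);
  vkCmdEndRenderPass(cmdBuf);

  // Final: tone-map onto a full-screen triangle, then draw the UI.
  vkCmdBeginRenderPass(cmdBuf, &postPassBegin, VK_SUBPASS_CONTENTS_INLINE);
  helloVk.drawPost(cmdBuf);
  ImGui_ImplVulkan_RenderDrawData(ImGui::GetDrawData(), cmdBuf);
  vkCmdEndRenderPass(cmdBuf);

  vkEndCommandBuffer(cmdBuf);
  helloVk.submitFrame();                // execute and present
}
```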
### Clean up
The rest cleans up all allocations.


@@ -40,14 +40,6 @@
extern std::vector<std::string> defaultSearchPaths;
// Holding the camera matrices
struct CameraMatrices
{
nvmath::mat4f view;
nvmath::mat4f proj;
nvmath::mat4f viewInverse;
};
//--------------------------------------------------------------------------------------------------
// Keep the handle on the device
// Initialize the tool to do all our allocations: buffers, images
@@ -67,14 +59,17 @@ void HelloVulkan::updateUniformBuffer(const VkCommandBuffer& cmdBuf)
{
// Prepare new UBO contents on host.
const float aspectRatio = m_size.width / static_cast<float>(m_size.height);
CameraMatrices hostUBO = {};
hostUBO.view = CameraManip.getMatrix();
hostUBO.proj = nvmath::perspectiveVK(CameraManip.getFov(), aspectRatio, 0.1f, 1000.0f);
// hostUBO.proj[1][1] *= -1; // Inverting Y for Vulkan (not needed with perspectiveVK).
hostUBO.viewInverse = nvmath::invert(hostUBO.view);
GlobalUniforms hostUBO = {};
const auto& view = CameraManip.getMatrix();
const auto& proj = nvmath::perspectiveVK(CameraManip.getFov(), aspectRatio, 0.1f, 1000.0f);
// proj[1][1] *= -1; // Inverting Y for Vulkan (not needed with perspectiveVK).
hostUBO.viewProj = proj * view;
hostUBO.viewInverse = nvmath::invert(view);
hostUBO.projInverse = nvmath::invert(proj);
// UBO on the device, and what stages access it.
VkBuffer deviceUBO = m_cameraMat.buffer;
VkBuffer deviceUBO = m_bGlobals.buffer;
auto uboUsageStages = VK_PIPELINE_STAGE_VERTEX_SHADER_BIT;
// Ensure that the modified UBO is not visible to previous frames.
@@ -90,7 +85,7 @@ void HelloVulkan::updateUniformBuffer(const VkCommandBuffer& cmdBuf)
// Schedule the host-to-device upload. (hostUBO is copied into the cmd
// buffer so it is okay to deallocate when the function returns).
vkCmdUpdateBuffer(cmdBuf, m_cameraMat.buffer, 0, sizeof(CameraMatrices), &hostUBO);
vkCmdUpdateBuffer(cmdBuf, m_bGlobals.buffer, 0, sizeof(GlobalUniforms), &hostUBO);
// Making sure the updated UBO will be visible.
VkBufferMemoryBarrier afterBarrier{VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER};
@@ -110,12 +105,15 @@ void HelloVulkan::createDescriptorSetLayout()
{
auto nbTxt = static_cast<uint32_t>(m_textures.size());
// Camera matrices (binding = 0)
m_descSetLayoutBind.addBinding(0, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 1, VK_SHADER_STAGE_VERTEX_BIT);
// Scene description (binding = 1)
m_descSetLayoutBind.addBinding(1, VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, 1, VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT);
// Textures (binding = 2)
m_descSetLayoutBind.addBinding(2, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, nbTxt, VK_SHADER_STAGE_FRAGMENT_BIT);
// Camera matrices
m_descSetLayoutBind.addBinding(SceneBindings::eGlobals, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, 1,
VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_RAYGEN_BIT_KHR);
// Obj descriptions
m_descSetLayoutBind.addBinding(SceneBindings::eObjDescs, VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, 1,
VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT | VK_SHADER_STAGE_CLOSEST_HIT_BIT_KHR);
// Textures
m_descSetLayoutBind.addBinding(SceneBindings::eTextures, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, nbTxt,
VK_SHADER_STAGE_FRAGMENT_BIT | VK_SHADER_STAGE_CLOSEST_HIT_BIT_KHR);
m_descSetLayout = m_descSetLayoutBind.createLayout(m_device);
@@ -131,11 +129,11 @@ void HelloVulkan::updateDescriptorSet()
std::vector<VkWriteDescriptorSet> writes;
// Camera matrices and scene description
VkDescriptorBufferInfo dbiUnif{m_cameraMat.buffer, 0, VK_WHOLE_SIZE};
writes.emplace_back(m_descSetLayoutBind.makeWrite(m_descSet, 0, &dbiUnif));
VkDescriptorBufferInfo dbiUnif{m_bGlobals.buffer, 0, VK_WHOLE_SIZE};
writes.emplace_back(m_descSetLayoutBind.makeWrite(m_descSet, SceneBindings::eGlobals, &dbiUnif));
VkDescriptorBufferInfo dbiSceneDesc{m_sceneDesc.buffer, 0, VK_WHOLE_SIZE};
writes.emplace_back(m_descSetLayoutBind.makeWrite(m_descSet, 1, &dbiSceneDesc));
VkDescriptorBufferInfo dbiSceneDesc{m_bObjDesc.buffer, 0, VK_WHOLE_SIZE};
writes.emplace_back(m_descSetLayoutBind.makeWrite(m_descSet, SceneBindings::eObjDescs, &dbiSceneDesc));
// All texture samplers
std::vector<VkDescriptorImageInfo> diit;
@@ -143,7 +141,7 @@ void HelloVulkan::updateDescriptorSet()
{
diit.emplace_back(texture.descriptor);
}
writes.emplace_back(m_descSetLayoutBind.makeWriteArray(m_descSet, 2, diit.data()));
writes.emplace_back(m_descSetLayoutBind.makeWriteArray(m_descSet, SceneBindings::eTextures, diit.data()));
// Writing the information
vkUpdateDescriptorSets(m_device, static_cast<uint32_t>(writes.size()), writes.data(), 0, nullptr);
@@ -155,7 +153,7 @@ void HelloVulkan::updateDescriptorSet()
//
void HelloVulkan::createGraphicsPipeline()
{
VkPushConstantRange pushConstantRanges = {VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT, 0, sizeof(ObjPushConstant)};
VkPushConstantRange pushConstantRanges = {VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT, 0, sizeof(PushConstantRaster)};
// Creating the Pipeline Layout
VkPipelineLayoutCreateInfo createInfo{VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO};
@@ -213,30 +211,35 @@ void HelloVulkan::loadModel(const std::string& filename, nvmath::mat4f transform
model.indexBuffer = m_alloc.createBuffer(cmdBuf, loader.m_indices, VK_BUFFER_USAGE_INDEX_BUFFER_BIT | flag);
model.matColorBuffer = m_alloc.createBuffer(cmdBuf, loader.m_materials, VK_BUFFER_USAGE_STORAGE_BUFFER_BIT | flag);
model.matIndexBuffer = m_alloc.createBuffer(cmdBuf, loader.m_matIndx, VK_BUFFER_USAGE_STORAGE_BUFFER_BIT | flag);
// Creates all textures found
uint32_t txtOffset = static_cast<uint32_t>(m_textures.size());
// Creates all textures found and find the offset for this model
auto txtOffset = static_cast<uint32_t>(m_textures.size());
createTextureImages(cmdBuf, loader.m_textures);
cmdBufGet.submitAndWait(cmdBuf);
m_alloc.finalizeAndReleaseStaging();
std::string objNb = std::to_string(m_objModel.size());
m_debug.setObjectName(model.vertexBuffer.buffer, (std::string("vertex_" + objNb).c_str()));
m_debug.setObjectName(model.indexBuffer.buffer, (std::string("index_" + objNb).c_str()));
m_debug.setObjectName(model.matColorBuffer.buffer, (std::string("mat_" + objNb).c_str()));
m_debug.setObjectName(model.matIndexBuffer.buffer, (std::string("matIdx_" + objNb).c_str()));
m_debug.setObjectName(model.vertexBuffer.buffer, (std::string("vertex_" + objNb)));
m_debug.setObjectName(model.indexBuffer.buffer, (std::string("index_" + objNb)));
m_debug.setObjectName(model.matColorBuffer.buffer, (std::string("mat_" + objNb)));
m_debug.setObjectName(model.matIndexBuffer.buffer, (std::string("matIdx_" + objNb)));
// Keeping transformation matrix of the instance
ObjInstance instance;
instance.objIndex = static_cast<uint32_t>(m_objModel.size());
instance.transform = transform;
instance.transformIT = nvmath::transpose(nvmath::invert(transform));
instance.txtOffset = txtOffset;
instance.vertices = nvvk::getBufferDeviceAddress(m_device, model.vertexBuffer.buffer);
instance.indices = nvvk::getBufferDeviceAddress(m_device, model.indexBuffer.buffer);
instance.materials = nvvk::getBufferDeviceAddress(m_device, model.matColorBuffer.buffer);
instance.materialIndices = nvvk::getBufferDeviceAddress(m_device, model.matIndexBuffer.buffer);
instance.transform = transform;
instance.objIndex = static_cast<uint32_t>(m_objModel.size());
m_instances.push_back(instance);
// Creating information for device access
ObjDesc desc;
desc.txtOffset = txtOffset;
desc.vertexAddress = nvvk::getBufferDeviceAddress(m_device, model.vertexBuffer.buffer);
desc.indexAddress = nvvk::getBufferDeviceAddress(m_device, model.indexBuffer.buffer);
desc.materialAddress = nvvk::getBufferDeviceAddress(m_device, model.matColorBuffer.buffer);
desc.materialIndexAddress = nvvk::getBufferDeviceAddress(m_device, model.matIndexBuffer.buffer);
// Keeping the obj host model and device description
m_objModel.emplace_back(model);
m_objInstance.emplace_back(instance);
m_objDesc.emplace_back(desc);
}
@@ -246,9 +249,9 @@ void HelloVulkan::loadModel(const std::string& filename, nvmath::mat4f transform
//
void HelloVulkan::createUniformBuffer()
{
m_cameraMat = m_alloc.createBuffer(sizeof(CameraMatrices), VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT,
VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
m_debug.setObjectName(m_cameraMat.buffer, "cameraMat");
m_bGlobals = m_alloc.createBuffer(sizeof(GlobalUniforms), VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT,
VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
m_debug.setObjectName(m_bGlobals.buffer, "Globals");
}
//--------------------------------------------------------------------------------------------------
@@ -257,15 +260,15 @@ void HelloVulkan::createUniformBuffer()
// - Transformation
// - Offset for texture
//
void HelloVulkan::createSceneDescriptionBuffer()
void HelloVulkan::createObjDescriptionBuffer()
{
nvvk::CommandPool cmdGen(m_device, m_graphicsQueueIndex);
auto cmdBuf = cmdGen.createCommandBuffer();
m_sceneDesc = m_alloc.createBuffer(cmdBuf, m_objInstance, VK_BUFFER_USAGE_STORAGE_BUFFER_BIT);
m_bObjDesc = m_alloc.createBuffer(cmdBuf, m_objDesc, VK_BUFFER_USAGE_STORAGE_BUFFER_BIT);
cmdGen.submitAndWait(cmdBuf);
m_alloc.finalizeAndReleaseStaging();
m_debug.setObjectName(m_sceneDesc.buffer, "sceneDesc");
m_debug.setObjectName(m_bObjDesc.buffer, "ObjDescs");
}
//--------------------------------------------------------------------------------------------------
@@ -351,8 +354,8 @@ void HelloVulkan::destroyResources()
vkDestroyDescriptorPool(m_device, m_descPool, nullptr);
vkDestroyDescriptorSetLayout(m_device, m_descSetLayout, nullptr);
m_alloc.destroy(m_cameraMat);
m_alloc.destroy(m_sceneDesc);
m_alloc.destroy(m_bGlobals);
m_alloc.destroy(m_bObjDesc);
for(auto& m : m_objModel)
{
@@ -397,14 +400,14 @@ void HelloVulkan::rasterize(const VkCommandBuffer& cmdBuf)
vkCmdBindDescriptorSets(cmdBuf, VK_PIPELINE_BIND_POINT_GRAPHICS, m_pipelineLayout, 0, 1, &m_descSet, 0, nullptr);
for(int i = 0; i < m_objInstance.size(); ++i)
for(const HelloVulkan::ObjInstance& inst : m_instances)
{
auto& inst = m_objInstance[i];
auto& model = m_objModel[inst.objIndex];
m_pushConstant.instanceId = i; // Telling which instance is drawn
auto& model = m_objModel[inst.objIndex];
m_pcRaster.objIndex = inst.objIndex; // Telling which object is drawn
m_pcRaster.modelMatrix = inst.transform;
vkCmdPushConstants(cmdBuf, m_pipelineLayout, VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT, 0,
sizeof(ObjPushConstant), &m_pushConstant);
sizeof(PushConstantRaster), &m_pcRaster);
vkCmdBindVertexBuffers(cmdBuf, 0, 1, &model.vertexBuffer.buffer, &offset);
vkCmdBindIndexBuffer(cmdBuf, model.indexBuffer.buffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(cmdBuf, model.nbIndices, 1, 0, 0, 0);


@@ -24,6 +24,7 @@
#include "nvvk/descriptorsets_vk.hpp"
#include "nvvk/memallocator_dma_vk.hpp"
#include "nvvk/resourceallocator_vk.hpp"
#include "shaders/host_device.h"
//--------------------------------------------------------------------------------------------------
// Simple rasterizer of OBJ objects
@@ -41,7 +42,7 @@ public:
void loadModel(const std::string& filename, nvmath::mat4f transform = nvmath::mat4f(1));
void updateDescriptorSet();
void createUniformBuffer();
void createSceneDescriptionBuffer();
void createObjDescriptionBuffer();
void createTextureImages(const VkCommandBuffer& cmdBuf, const std::vector<std::string>& textures);
void updateUniformBuffer(const VkCommandBuffer& cmdBuf);
void onResize(int /*w*/, int /*h*/) override;
@@ -59,32 +60,27 @@ public:
nvvk::Buffer matIndexBuffer; // Device buffer of array of 'Wavefront material'
};
// Instance of the OBJ
struct ObjInstance
{
nvmath::mat4f transform{1}; // Position of the instance
nvmath::mat4f transformIT{1}; // Inverse transpose
uint32_t objIndex{0}; // Reference to the `m_objModel`
uint32_t txtOffset{0}; // Offset in `m_textures`
VkDeviceAddress vertices;
VkDeviceAddress indices;
VkDeviceAddress materials;
VkDeviceAddress materialIndices;
nvmath::mat4f transform; // Matrix of the instance
uint32_t objIndex{0}; // Model index reference
};
// Information pushed at each draw call
struct ObjPushConstant
{
nvmath::vec3f lightPosition{10.f, 15.f, 8.f};
int instanceId{0}; // To retrieve the transformation matrix
float lightIntensity{100.f};
int lightType{0}; // 0: point, 1: infinite
PushConstantRaster m_pcRaster{
{1}, // Identity matrix
{10.f, 15.f, 8.f}, // light position
0, // instance Id
100.f, // light intensity
0 // light type
};
ObjPushConstant m_pushConstant;
// Array of objects and instances in the scene
std::vector<ObjModel> m_objModel;
std::vector<ObjInstance> m_objInstance;
std::vector<ObjModel> m_objModel; // Model on host
std::vector<ObjDesc> m_objDesc; // Model description for device access
std::vector<ObjInstance> m_instances; // Scene model instances
// Graphic pipeline
VkPipelineLayout m_pipelineLayout;
@@ -94,8 +90,8 @@ public:
VkDescriptorSetLayout m_descSetLayout;
VkDescriptorSet m_descSet;
nvvk::Buffer m_cameraMat; // Device-Host of the camera matrices
nvvk::Buffer m_sceneDesc; // Device buffer of the OBJ instances
nvvk::Buffer m_bGlobals; // Device-Host of the camera matrices
nvvk::Buffer m_bObjDesc; // Device buffer of the OBJ descriptions
std::vector<nvvk::Texture> m_textures; // vector of all textures of the scene
@@ -104,7 +100,7 @@ public:
nvvk::DebugUtil m_debug; // Utility to name objects
// #Post
// #Post - Draw the rendered image on a quad using a tonemapper
void createOffscreenRender();
void createPostPipeline();
void createPostDescriptor();

*(Two new image files added, 16 KiB and 28 KiB; binary content not shown.)*


@@ -56,12 +56,12 @@ void renderUI(HelloVulkan& helloVk)
ImGuiH::CameraWidget();
if(ImGui::CollapsingHeader("Light"))
{
ImGui::RadioButton("Point", &helloVk.m_pushConstant.lightType, 0);
ImGui::RadioButton("Point", &helloVk.m_pcRaster.lightType, 0);
ImGui::SameLine();
ImGui::RadioButton("Infinite", &helloVk.m_pushConstant.lightType, 1);
ImGui::RadioButton("Infinite", &helloVk.m_pcRaster.lightType, 1);
ImGui::SliderFloat3("Position", &helloVk.m_pushConstant.lightPosition.x, -20.f, 20.f);
ImGui::SliderFloat("Intensity", &helloVk.m_pushConstant.lightIntensity, 0.f, 150.f);
ImGui::SliderFloat3("Position", &helloVk.m_pcRaster.lightPosition.x, -20.f, 20.f);
ImGui::SliderFloat("Intensity", &helloVk.m_pcRaster.lightIntensity, 0.f, 150.f);
}
}
@@ -156,7 +156,7 @@ int main(int argc, char** argv)
helloVk.createDescriptorSetLayout();
helloVk.createGraphicsPipeline();
helloVk.createUniformBuffer();
helloVk.createSceneDescriptionBuffer();
helloVk.createObjDescriptionBuffer();
helloVk.updateDescriptorSet();
helloVk.createPostDescriptor();


@@ -29,62 +29,55 @@
#include "wavefront.glsl"
layout(push_constant) uniform shaderInformation
layout(push_constant) uniform _PushConstantRaster
{
vec3 lightPosition;
uint instanceId;
float lightIntensity;
int lightType;
}
pushC;
PushConstantRaster pcRaster;
};
// clang-format off
// Incoming
layout(location = 1) in vec2 fragTexCoord;
layout(location = 2) in vec3 fragNormal;
layout(location = 3) in vec3 viewDir;
layout(location = 4) in vec3 worldPos;
layout(location = 1) in vec3 i_worldPos;
layout(location = 2) in vec3 i_worldNrm;
layout(location = 3) in vec3 i_viewDir;
layout(location = 4) in vec2 i_texCoord;
// Outgoing
layout(location = 0) out vec4 outColor;
layout(location = 0) out vec4 o_color;
layout(buffer_reference, scalar) buffer Vertices {Vertex v[]; }; // Positions of an object
layout(buffer_reference, scalar) buffer Indices {uint i[]; }; // Triangle indices
layout(buffer_reference, scalar) buffer Materials {WaveFrontMaterial m[]; }; // Array of all materials on an object
layout(buffer_reference, scalar) buffer MatIndices {int i[]; }; // Material ID for each triangle
layout(binding = 1, scalar) buffer SceneDesc_ { SceneDesc i[]; } sceneDesc;
layout(binding = 2) uniform sampler2D[] textureSamplers;
layout(binding = eObjDescs, scalar) buffer ObjDesc_ { ObjDesc i[]; } objDesc;
layout(binding = eTextures) uniform sampler2D[] textureSamplers;
// clang-format on
void main()
{
// Object of this instance
int objId = sceneDesc.i[pushC.instanceId].objId;
// Material of the object
SceneDesc objResource = sceneDesc.i[pushC.instanceId];
ObjDesc objResource = objDesc.i[pcRaster.objIndex];
MatIndices matIndices = MatIndices(objResource.materialIndexAddress);
Materials materials = Materials(objResource.materialAddress);
int matIndex = matIndices.i[gl_PrimitiveID];
WaveFrontMaterial mat = materials.m[matIndex];
vec3 N = normalize(fragNormal);
vec3 N = normalize(i_worldNrm);
// Vector toward light
vec3 L;
float lightIntensity = pushC.lightIntensity;
if(pushC.lightType == 0)
float lightIntensity = pcRaster.lightIntensity;
if(pcRaster.lightType == 0)
{
vec3 lDir = pushC.lightPosition - worldPos;
vec3 lDir = pcRaster.lightPosition - i_worldPos;
float d = length(lDir);
lightIntensity = pushC.lightIntensity / (d * d);
lightIntensity = pcRaster.lightIntensity / (d * d);
L = normalize(lDir);
}
else
{
L = normalize(pushC.lightPosition - vec3(0));
L = normalize(pcRaster.lightPosition);
}
@@ -92,15 +85,15 @@ void main()
vec3 diffuse = computeDiffuse(mat, L, N);
if(mat.textureId >= 0)
{
int txtOffset = sceneDesc.i[pushC.instanceId].txtOffset;
int txtOffset = objDesc.i[pcRaster.objIndex].txtOffset;
uint txtId = txtOffset + mat.textureId;
vec3 diffuseTxt = texture(textureSamplers[nonuniformEXT(txtId)], fragTexCoord).xyz;
vec3 diffuseTxt = texture(textureSamplers[nonuniformEXT(txtId)], i_texCoord).xyz;
diffuse *= diffuseTxt;
}
// Specular
vec3 specular = computeSpecular(mat, viewDir, L, N);
vec3 specular = computeSpecular(mat, i_viewDir, L, N);
// Result
outColor = vec4(lightIntensity * (diffuse + specular), 1);
o_color = vec4(lightIntensity * (diffuse + specular), 1);
}


@@ -0,0 +1,117 @@
/*
* Copyright (c) 2019-2021, NVIDIA CORPORATION. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef COMMON_HOST_DEVICE
#define COMMON_HOST_DEVICE
#ifdef __cplusplus
#include "nvmath/nvmath.h"
// GLSL Type
using vec2 = nvmath::vec2f;
using vec3 = nvmath::vec3f;
using vec4 = nvmath::vec4f;
using mat4 = nvmath::mat4f;
using uint = unsigned int;
#endif
// clang-format off
#ifdef __cplusplus // Descriptor binding helper for C++ and GLSL
#define START_BINDING(a) enum a {
#define END_BINDING() }
#else
#define START_BINDING(a) const uint
#define END_BINDING()
#endif
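// Example: in C++ the block below expands to 'enum SceneBindings { eGlobals = 0, ... };'
// while in GLSL it expands to 'const uint eGlobals = 0, eObjDescs = 1, eTextures = 2;',
// so both languages share the same binding numbers.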
START_BINDING(SceneBindings)
eGlobals = 0, // Global uniform containing camera matrices
eObjDescs = 1, // Access to the object descriptions
eTextures = 2 // Access to textures
END_BINDING();
START_BINDING(RtxBindings)
eTlas = 0, // Top-level acceleration structure
eOutImage = 1 // Ray tracer output image
END_BINDING();
// clang-format on
// Information of an OBJ model when referenced in a shader
struct ObjDesc
{
int txtOffset; // Texture index offset in the array of textures
uint64_t vertexAddress; // Address of the Vertex buffer
uint64_t indexAddress; // Address of the index buffer
uint64_t materialAddress; // Address of the material buffer
uint64_t materialIndexAddress; // Address of the triangle material index buffer
};
// Uniform buffer set at each frame
struct GlobalUniforms
{
mat4 viewProj; // Camera view * projection
mat4 viewInverse; // Camera inverse view matrix
mat4 projInverse; // Camera inverse projection matrix
};
// Push constant structure for the raster
struct PushConstantRaster
{
mat4 modelMatrix; // matrix of the instance
vec3 lightPosition;
uint objIndex;
float lightIntensity;
int lightType;
};
// Push constant structure for the ray tracer
struct PushConstantRay
{
vec4 clearColor;
vec3 lightPosition;
float lightIntensity;
int lightType;
};
struct Vertex // See ObjLoader, copy of VertexObj, could be compressed for device
{
vec3 pos;
vec3 nrm;
vec3 color;
vec2 texCoord;
};
struct WaveFrontMaterial // See ObjLoader, copy of MaterialObj, could be compressed for device
{
vec3 ambient;
vec3 diffuse;
vec3 specular;
vec3 transmittance;
vec3 emission;
float shininess;
float ior; // index of refraction
float dissolve; // 1 == opaque; 0 == fully transparent
int illum; // illumination model (see http://www.fileformat.info/format/material/)
int textureId;
};
#endif


@@ -26,38 +26,26 @@
#include "wavefront.glsl"
// clang-format off
layout(binding = 1, scalar) buffer SceneDesc_ { SceneDesc i[]; } sceneDesc;
// clang-format on
layout(binding = 0) uniform UniformBufferObject
layout(binding = 0) uniform _GlobalUniforms
{
mat4 view;
mat4 proj;
mat4 viewI;
}
ubo;
GlobalUniforms uni;
};
layout(push_constant) uniform shaderInformation
layout(push_constant) uniform _PushConstantRaster
{
vec3 lightPosition;
uint instanceId;
float lightIntensity;
int lightType;
}
pushC;
PushConstantRaster pcRaster;
};
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec3 inColor;
layout(location = 3) in vec2 inTexCoord;
layout(location = 0) in vec3 i_position;
layout(location = 1) in vec3 i_normal;
layout(location = 2) in vec3 i_color;
layout(location = 3) in vec2 i_texCoord;
//layout(location = 0) flat out int matIndex;
layout(location = 1) out vec2 fragTexCoord;
layout(location = 2) out vec3 fragNormal;
layout(location = 3) out vec3 viewDir;
layout(location = 4) out vec3 worldPos;
layout(location = 1) out vec3 o_worldPos;
layout(location = 2) out vec3 o_worldNrm;
layout(location = 3) out vec3 o_viewDir;
layout(location = 4) out vec2 o_texCoord;
out gl_PerVertex
{
@@ -67,16 +55,12 @@ out gl_PerVertex
void main()
{
mat4 objMatrix = sceneDesc.i[pushC.instanceId].transfo;
mat4 objMatrixIT = sceneDesc.i[pushC.instanceId].transfoIT;
vec3 origin = vec3(uni.viewInverse * vec4(0, 0, 0, 1));
vec3 origin = vec3(ubo.viewI * vec4(0, 0, 0, 1));
o_worldPos = vec3(pcRaster.modelMatrix * vec4(i_position, 1.0));
o_viewDir = vec3(o_worldPos - origin);
o_texCoord = i_texCoord;
o_worldNrm = mat3(pcRaster.modelMatrix) * i_normal;
worldPos = vec3(objMatrix * vec4(inPosition, 1.0));
viewDir = vec3(worldPos - origin);
fragTexCoord = inTexCoord;
fragNormal = vec3(objMatrixIT * vec4(inNormal, 0.0));
// matIndex = inMatID;
gl_Position = ubo.proj * ubo.view * vec4(worldPos, 1.0);
gl_Position = uni.viewProj * vec4(o_worldPos, 1.0);
}


@@ -17,41 +17,7 @@
* SPDX-License-Identifier: Apache-2.0
*/
struct Vertex
{
vec3 pos;
vec3 nrm;
vec3 color;
vec2 texCoord;
};
struct WaveFrontMaterial
{
vec3 ambient;
vec3 diffuse;
vec3 specular;
vec3 transmittance;
vec3 emission;
float shininess;
float ior; // index of refraction
float dissolve; // 1 == opaque; 0 == fully transparent
int illum; // illumination model (see http://www.fileformat.info/format/material/)
int textureId;
};
struct SceneDesc
{
mat4 transfo;
mat4 transfoIT;
int objId;
int txtOffset;
uint64_t vertexAddress;
uint64_t indexAddress;
uint64_t materialAddress;
uint64_t materialIndexAddress;
};
#include "host_device.h"
vec3 computeDiffuse(WaveFrontMaterial mat, vec3 lightDir, vec3 normal)
{