HoloLens Terrain Generation Demo Part 8 – Depth Only Rendering of Surface Meshes and Moving Some Code Around

In today’s post, I’m going to look at enabling depth only rendering of the surface meshes. Before I get to that, however, I’m going to look at how I’ve restructured the Surface Observer code in our Main class.

Moving the Surface Observer code

Last post, I presented how Microsoft’s Spatial Mapping sample code handles finding surfaces. Their code is pretty much entirely contained within the Update() method of the Main class.

// Only create a surface observer when you need to - do not create a new one each frame.
if (!m_surfaceObserver)	{
	// Initialize the Surface Observer using a valid coordinate system.
	if (!m_spatialPerceptionAccessRequested) {
		// The spatial mapping API reads information about the user's environment. The user must
		// grant permission to the app to use this capability of the Windows Holographic device.
		auto initSurfaceObserverTask = create_task(SpatialSurfaceObserver::RequestAccessAsync());
		initSurfaceObserverTask.then([this, currentCoordinateSystem](Windows::Perception::Spatial::SpatialPerceptionAccessStatus status) {
			if (status == SpatialPerceptionAccessStatus::Allowed) {
				m_surfaceAccessAllowed = true;
			}
		});

		m_spatialPerceptionAccessRequested = true;
	}
}

if (m_surfaceAccessAllowed)	{
	SpatialBoundingBox aabb =	{
		{ 0.f,  0.f, 0.f },
		{ 20.f, 20.f, 5.f },
	};
	SpatialBoundingVolume^ bounds = SpatialBoundingVolume::FromBox(currentCoordinateSystem, aabb);

	// If status is Allowed, we can create the surface observer.
	if (!m_surfaceObserver)	{
		// First, we'll set up the surface observer to use our preferred data formats.
		// In this example, a "preferred" format is chosen that is compatible with our precompiled shader pipeline.
		m_surfaceMeshOptions = ref new SpatialSurfaceMeshOptions();
		IVectorView<DirectXPixelFormat>^ supportedVertexPositionFormats = m_surfaceMeshOptions->SupportedVertexPositionFormats;
		unsigned int formatIndex = 0;
		if (supportedVertexPositionFormats->IndexOf(DirectXPixelFormat::R16G16B16A16IntNormalized, &formatIndex)) {
			m_surfaceMeshOptions->VertexPositionFormat = DirectXPixelFormat::R16G16B16A16IntNormalized;
		}
		IVectorView<DirectXPixelFormat>^ supportedVertexNormalFormats = m_surfaceMeshOptions->SupportedVertexNormalFormats;
		if (supportedVertexNormalFormats->IndexOf(DirectXPixelFormat::R8G8B8A8IntNormalized, &formatIndex))	{
			m_surfaceMeshOptions->VertexNormalFormat = DirectXPixelFormat::R8G8B8A8IntNormalized;
		}

		// Create the observer.
		m_surfaceObserver = ref new SpatialSurfaceObserver();
		if (m_surfaceObserver) {
			m_surfaceObserver->SetBoundingVolume(bounds);

			// If the surface observer was successfully created, we can initialize our
			// collection by pulling the current data set.
			auto mapContainingSurfaceCollection = m_surfaceObserver->GetObservedSurfaces();
			for (auto const& pair : mapContainingSurfaceCollection)	{
				auto const& id = pair->Key;
				auto const& surfaceInfo = pair->Value;
				m_meshRenderer->AddSurface(id, surfaceInfo);
			}

			// We then subscribe to an event to receive up-to-date data.
			m_surfacesChangedToken = m_surfaceObserver->ObservedSurfacesChanged +=
				ref new TypedEventHandler<SpatialSurfaceObserver^, Platform::Object^>(
						bind(&HoloLensTerrainGenDemoMain::OnSurfacesChanged, this, _1, _2)
					);
		}
	}

	// Keep the surface observer positioned at the device's location.
	if (m_surfaceObserver) {
		m_surfaceObserver->SetBoundingVolume(bounds);
	}
}

If you look at what this is doing, the body of the first IF statement only ever executes once. Once we’ve requested access to the Spatial Mapping system, we will never ask again. Further, assuming the user grants access, we only ever need to initialize the Surface Observer once. It seems odd to me to put this run-once code in our Update() method; doing so forces us to hide it behind IF statements that should be unnecessary.
If we move the code that only needs to be run once to a different location, this bit of our Update() method is drastically simplified.

if (m_surfaceObserver)	{
	SpatialBoundingBox aabb =	{
		{ 0.f,  0.f, 0.f },
		{ 20.f, 20.f, 5.f },
	};
	SpatialBoundingVolume^ bounds = SpatialBoundingVolume::FromBox(currentCoordinateSystem, aabb);

	// Keep the surface observer positioned at the device's location.
	m_surfaceObserver->SetBoundingVolume(bounds);
}

Now, all we do is check that we have initialized our Surface Observer. If we have, then we just update the bounding volume to remain centered on our location and heading.
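For context, here is a minimal sketch of where currentCoordinateSystem might come from, roughly following the Spatial Mapping sample; m_attachedReferenceFrame is a hypothetical member name, and this project may obtain its coordinate system differently.

// In SetHolographicSpace(): create a frame of reference attached to the device,
// so coordinate systems retrieved from it follow the user's location and heading.
m_attachedReferenceFrame = m_locator->CreateAttachedFrameOfReferenceAtCurrentHeading();

// In Update(): get a coordinate system for the current frame's prediction timestamp.
HolographicFramePrediction^ prediction = holographicFrame->CurrentPrediction;
SpatialCoordinateSystem^ currentCoordinateSystem =
	m_attachedReferenceFrame->GetStationaryCoordinateSystemAtTimestamp(prediction->Timestamp);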

Most of the remaining code has been moved to our SetHolographicSpace() method, which handles the bulk of system initialization.

// The spatial mapping API reads information about the user's environment. The user must
// grant permission to the app to use this capability of the Windows Holographic device.
auto requestPerceptionAccessTask = create_task(SpatialSurfaceObserver::RequestAccessAsync());
requestPerceptionAccessTask.then([this](Windows::Perception::Spatial::SpatialPerceptionAccessStatus status) {
	if (status == SpatialPerceptionAccessStatus::Allowed) {
		// Create an initial stationary reference frame to get a coordinate system from to calculate our initial
		// bounding volume for the surface observer.
		SpatialStationaryFrameOfReference^ baseReference = m_locator->CreateStationaryFrameOfReferenceAtCurrentLocation();
				
		SpatialBoundingBox aabb = {
			{ 0.f,  0.f, 0.f },
			{ 20.f, 20.f, 5.f },
		};
		SpatialBoundingVolume^ bounds = SpatialBoundingVolume::FromBox(baseReference->CoordinateSystem, aabb);

		// Create the observer.
		m_surfaceObserver = ref new SpatialSurfaceObserver();
		if (m_surfaceObserver) {
			m_surfaceObserver->SetBoundingVolume(bounds);

			// If the surface observer was successfully created, we can initialize our
			// collection by pulling the current data set.
			auto mapContainingSurfaceCollection = m_surfaceObserver->GetObservedSurfaces();
			for (auto const& pair : mapContainingSurfaceCollection) {
				auto const& id = pair->Key;
				auto const& surfaceInfo = pair->Value;
				m_meshRenderer->AddSurface(id, surfaceInfo);
			}

			// We then subscribe to an event to receive up-to-date data.
			m_surfacesChangedToken = m_surfaceObserver->ObservedSurfacesChanged +=
				ref new TypedEventHandler<SpatialSurfaceObserver^, Platform::Object^>(
						bind(&HoloLensTerrainGenDemoMain::OnSurfacesChanged, this, _1, _2)
					);
		}
	}
});

It is now in this method that we create the task requesting access to the Spatial Mapping system. Once that task returns, and if access was granted, we can initialize our Surface Observer and load the initial list of surfaces.
For the purpose of initialization, we create a temporary stationary frame of reference whose coordinate system we use to calculate our initial bounding volume.
At the end, we attach an event handler that will update our list of surface meshes should the surface data change. In my experience with the emulator, this handler fires pretty much every time I start the application and whenever I load a different room.
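For reference, here is a minimal sketch of what that handler might look like, roughly following the Spatial Mapping sample; helpers like HasSurface(), GetLastUpdateTime(), UpdateSurface(), and HideInactiveMeshes() follow the sample’s conventions and may differ in this project.

void HoloLensTerrainGenDemoMain::OnSurfacesChanged(SpatialSurfaceObserver^ sender, Platform::Object^ args) {
	// Pull the latest set of observed surfaces.
	auto surfaceCollection = sender->GetObservedSurfaces();
	for (auto const& pair : surfaceCollection) {
		auto id = pair->Key;
		auto surfaceInfo = pair->Value;

		if (m_meshRenderer->HasSurface(id)) {
			// Only refresh surfaces whose data is newer than what we already have.
			if (m_meshRenderer->GetLastUpdateTime(id).UniversalTime < surfaceInfo->UpdateTime.UniversalTime) {
				m_meshRenderer->UpdateSurface(id, surfaceInfo);
			}
		} else {
			// A surface we haven't seen before.
			m_meshRenderer->AddSurface(id, surfaceInfo);
		}
	}

	// Surfaces that no longer appear in the collection can be hidden or removed.
	m_meshRenderer->HideInactiveMeshes(surfaceCollection);
}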

I should note that there appears to be a bug where occasionally no surfaces will be found by the emulator. The OnSurfacesChanged() method still fires when the application starts, but the screen remains blank. Since the terrain is only initialized once we have at least one surface mesh, we can be certain that none were found. My guess is that the emulator does not actually simulate the constant updating of the surface meshes that the real device performs. If I load a new room file, OnSurfacesChanged() fires again, and the surfaces of the new room load and render fine.
This bug occurred before I moved the code around, but I never observed it in the original sample code. I’m not sure if I missed something, or if I simply haven’t run the sample code enough to have caught it happening there.

Moving on, you may have noticed that I removed the code referencing our Surface Mesh Options. As I mentioned last post, that code was doing absolutely nothing in our Main class and needed to be moved to the RealtimeSurfaceMeshRenderer class where the options are actually used. You’ll find it in the AddOrUpdateSurfaceAsync() method.
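To give a sense of how those options end up being used there, here is a rough sketch in the spirit of the Spatial Mapping sample; the triangle density value and the m_meshCollection member are assumptions, not this project’s exact implementation.

void RealtimeSurfaceMeshRenderer::AddOrUpdateSurfaceAsync(Guid id, SpatialSurfaceInfo^ newSurface) {
	// Configure the mesh options with the vertex formats our shader pipeline expects.
	auto options = ref new SpatialSurfaceMeshOptions();
	options->IncludeVertexNormals = true;
	options->VertexPositionFormat = DirectXPixelFormat::R16G16B16A16IntNormalized;
	options->VertexNormalFormat = DirectXPixelFormat::R8G8B8A8IntNormalized;

	// Hypothetical density value; higher numbers request denser meshes.
	double maxTrianglesPerCubicMeter = 1000.0;

	// Ask the system to bake the latest mesh for this surface, then hand the
	// resulting vertex and index buffers off for upload to the GPU.
	auto computeMeshTask = create_task(newSurface->TryComputeLatestMeshAsync(maxTrianglesPerCubicMeter, options));
	computeMeshTask.then([this, id](SpatialSurfaceMesh^ mesh) {
		if (mesh != nullptr) {
			// Hypothetical: store or update the SurfaceMesh entry for this id with the new data.
			m_meshCollection[id].UpdateSurface(mesh);
		}
	});
}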

Depth Only Rendering of the Surface Meshes

You’d probably think there was something complicated about this, seeing as I gave it its own section. It’s actually super easy.
As you can see from the featured image at the top, the code already has depth testing enabled. Our Main Render() method renders the surface meshes first, and then the terrain.

if (cameraActive) {
	m_meshRenderer->Render(pCameraResources->IsRenderingStereoscopic(), m_renderWireframe, !m_renderSurfaces);

	// Draw the sample hologram.
	if (m_terrain) {
		m_terrain->Render();
	}
}

If depth testing were disabled, the terrain would always be rendered over the surfaces.
Obviously, we can’t simply skip rendering the surface meshes when we don’t want them displayed. That would mean their data is never written to the Depth Stencil Buffer. What we need is to send the meshes through the pipeline so that the Depth Stencil Buffer is populated, without drawing them to the screen.
As it happens, all we need to do for this is disable the Pixel Shader. To do that, we simply set it to nullptr.

if (depthOnly) {
	// Use the default rasterizer state as this will enable depth culling.
	m_deviceResources->GetD3DDeviceContext()->RSSetState(m_defaultRasterizerState.Get());

	// Attach no pixel shader to the pipeline.
	context->PSSetShader(nullptr, nullptr, 0);
} else {
	if (useWireframe) {
		// Use a wireframe rasterizer state.
		m_deviceResources->GetD3DDeviceContext()->RSSetState(m_wireframeRasterizerState.Get());

		// Attach a pixel shader to render a solid color wireframe.
		context->PSSetShader(
			m_colorPixelShader.Get(),
			nullptr,
			0
		);
	} else {
		// Use the default rasterizer state.
		m_deviceResources->GetD3DDeviceContext()->RSSetState(m_defaultRasterizerState.Get());

		// Attach a pixel shader that can do lighting.
		context->PSSetShader(
			m_lightingPixelShader.Get(),
			nullptr,
			0
		);
	}
}

As you can see from the above code, we also need to set the Rasterizer State. We do this just to ensure we’re in the correct state, rather than leaving things to chance.
The end result is that we can now turn off rendering of the surfaces and still occlude the terrain.
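For completeness, here is a rough sketch of how the two rasterizer states might be created when the renderer’s device-dependent resources are built; the exact descriptor settings are assumptions.

// Default state: solid fill with back-face culling.
CD3D11_RASTERIZER_DESC rasterizerDesc(D3D11_DEFAULT);
DX::ThrowIfFailed(
	m_deviceResources->GetD3DDevice()->CreateRasterizerState(&rasterizerDesc, m_defaultRasterizerState.GetAddressOf())
);

// Wireframe state: draw edges only, and disable culling so both sides of each triangle are visible.
rasterizerDesc.FillMode = D3D11_FILL_WIREFRAME;
rasterizerDesc.CullMode = D3D11_CULL_NONE;
DX::ThrowIfFailed(
	m_deviceResources->GetD3DDevice()->CreateRasterizerState(&rasterizerDesc, m_wireframeRasterizerState.GetAddressOf())
);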

In the above image, you can see the blank region overlaying the terrain. In the emulator, this is all you can see. On a real HoloLens, in a real room, this would be some object blocking your view of the hologram.

I’ve also enabled switching back and forth between flat mesh and wireframe for the surfaces. This created a bit of an interface problem: up until now, I only had one possible input, and now I have three things I want to do. In my next post, I’ll discuss the basics of Gesture Recognition as it currently applies to this project.

For the latest version, go to GitHub.

Traagen